[0001] The present invention relates to the use of Artificial Intelligence (AI) or autonomous
systems. It has numerous applications in everyday life and industry. Contexts where
different AI systems are involved and take autonomous decisions are increasingly
frequent in our society. These raise new challenges and problems, and therefore the need
for new solutions to safeguard and promote the welfare of people and society in general.
[0002] The growing use and adoption of AI systems has prompted society to consider the ethical
implications of AI. Currently, different governments, industries and academic institutions
have produced their own principles, guidelines and frameworks for ethics in AI.
[0003] Moreover, some technical aspects of addressing ethics in AI systems have been investigated
and published: for example, mechanisms to ensure responsibility and accountability
for AI systems and their outcomes before and after their development, deployment and
use. This includes auditability, minimization and reporting of negative impacts, trade-offs,
and redress, which are closely related to the EU "Ethics guidelines for trustworthy
AI" that includes four main principles: respect for human autonomy; prevention of
harm; fairness; and explicability.
[0004] Closely linked to the principle of fairness is the requirement of accountability;
it may be seen as the set of actions that allows developers to: prepare for anything
unexpected (actions are taken to prevent or control an unexpected situation); prepare
for error scenarios (actions prevent or control error scenarios); handle errors (to
deal with errors in software); and ensure data security (to ensure cyber security
of systems and secure handling of data).
[0005] It is desirable to mitigate the potential harm that can arise from the increasing use
of multiple AI systems. Also, the identification of conflicts in information coming
from the AI systems is not limited to preventing something which 'harms' but
also includes discrepancies, confusion, or errors in sets of data (information) sent
from disparate AI systems. Errors, confusions and discrepancies can be very wide-ranging,
from errors in information to errors in design or manufacturing, but are not limited to
these examples.
[0006] According to an embodiment of one aspect of the invention there is provided a computer-implemented
method of reconciling values of a feature, each value being provided by a different
artificial intelligence, AI, system, the method comprising: collecting logs from the
different AI systems, each log including a value of the feature; identifying any discrepancy
between the values, and when there is any discrepancy: creating global information
from the values, the global information taking into account some or all of the values,
and when the global information differs from the value of one of the AI systems: sending
the global information to that AI system.
[0007] Putting principles and ethics requirements into practice is a challenging task. Invention
embodiments provide a first step to allow accountability of AI systems. In particular,
embodiments focus on preventing harm caused by errors/failures in an AI system (or systems)
that are detected using the collective knowledge of other AI systems that are present
in the same context (physical setting). For this, traceability of the systems and reconciliation
of conflicting information is essential.
[0008] It is known in the prior art to focus on accountability and to detect errors to
avoid negative impacts of AI systems. Also, AI or deep learning techniques have been
applied to create defect detection systems and to calibrate sensors.
[0009] To avoid negative impact, other prior art prevents an AI-behaviour body from becoming
out of control, to reduce harm. This solution is based on the analysis of information
from an AI system to detect when the AI is out of control, and so avoid negative outputs
and harm to humans.
[0010] Unlike invention embodiments, the prior art cannot access information of other AI
systems to augment the knowledge from other perspectives and improve the detection
of conflicting information.
[0011] Embodiments tackle the analysis and reconciliation of contexts (situations or environments)
where there is more than one AI system having different perceptions/information of
one characteristic or feature (or a set of them) in the context, and these perceptions
could provoke a negative impact.
[0012] Values herein may include numerical values or classifications or any other form of
value. One example of a context where the embodiments may be applied is a road (or
a scenario defined by a set of coordinates) where an accident between two or more
vehicles or other objects could happen because one of them perceives wrong positions
or classifications of objects (such as one of the vehicles, or a person, or an inanimate
object). Another example is a smart factory or other industrial space where different
robots perform autonomous decisions to generate products, but defective items could
be produced because a robot incorrectly classifies materials.
[0013] To solve the problems in the prior art, invention embodiments provide a resolution
system that resolves conflicting contexts based on the traceable contextual information
of AI systems present in the context, identifies discrepancies in the information
received and calculates new global information (GI) that may be used to resolve discrepancies
and avoid a negative event in the context.
[0014] In particular, embodiments address identification of discrepancies in terms of conflicting
contextual information perceived by the AI systems (i.e., position, distance, classification,
etc.); calculation of a new value in the form of global information (GI) for the features
that have discrepancies among AI systems; and the use of this new global information
for resolving the discrepancies in real time and thus potentially avoiding negative
impacts in the context through technical action. Examples of a technical action
may include deactivating AI components. In the context of autonomous vehicles, other
technical actions could change the movement of a vehicle which the AI is controlling
and/or alert an operator/manager, such as "brake", "stop in 100 metres" or "move forwards
by 100 metres" alerts or actions. These would all have the effect of ignoring or
disregarding the information/device.
[0015] The global information can be presented to a manager of the AI system with the discrepancy,
used to adjust the value provided by that system, or used in any other way. In some
embodiments, the global information is to replace the value provided by that AI system
with a value from the global information that is a more trusted value. In other embodiments,
a value from the global information may be used to adjust the "local" value provided
by that system.
[0016] To give a more advanced calculation, each value used to create the global information
may be modified by a confidence score, for example for the AI system which provided
the value. Thus, in cases with no clear single discrepancy, i.e. with many AI systems
and different types of conflicting information received, the method would utilise
the confidence scores to provide a clearer resolution of the discrepancy. This means that
if systems 1 and 2 perceive different values, but system 1 has more confidence, system 2
should use information closer to that of system 1.
[0017] The confidence score may be formed of a plurality of different partial confidence
scores added together, preferably wherein the partial confidence scores are individually
weighted.
[0018] The confidence score may be based on AI ethics guidelines. For example, if an AI
system is known not to follow such guidelines, it may receive a lower score. Other
factors may be taken into account, such as past discrepancies of the AI system. Additionally
or alternatively, certain values may be given more weight. For example, a value indicating
a potentially dangerous situation (such as a high speed or an unusual value) or a fragile
object classification (person rather than vehicle) may be weighted with a coefficient
of more than 1, or may be weighted more than other values.
[0019] The method may further comprise deactivating the part of the AI system providing
the value which is replaced by the global information.
[0020] The logs may provide a stream of data values, which may (generally or in some cases)
be time-separated values for the same feature. The method may further comprise a first
step of checking for a change in values in the logs before identifying any discrepancies
between the values.
[0021] The method may further comprise generating a report based on the logs and the global
information and feeding the report to an auditor.
[0022] In one embodiment, global information for a feature F is calculated as:

$$GI_F = \frac{\sum_{k=1}^{m} cs_{k,F} \cdot \text{value}_{k,F}}{m - \sum_{k=1}^{m} (1 - cs_{k,F})}$$

Where m is the number of AI systems that have information about the feature F,
cs_{k,F} is the stored confidence score of the AI system with ID k for the feature F, and
value_{k,F} is the value of the AI system k for the feature F.
[0023] As mentioned previously, the method can be used with any AI systems that each provide
a value for the same feature, whether that is financial, related to other non-physical
data, or physical. In one embodiment, the AI systems determine a position of an object
or identify an object, and the feature is a determined position of an object, or an
identification of an object.
[0024] The AI system receiving the global information may control the movement and/or position
of an entity, and receipt of the global information may lead to a change of a movement
and/or position of the entity. For example, the AI system may be part of a vehicle,
and the global information may be translated into action such as a lane change or
braking of the vehicle.
[0025] In this context, and as mentioned above, the feature may be a position of an object.
The simplest reconciling may be addition of any numerical properties, and division
by the number of AI systems. For example, X, Y (and Z) co-ordinates may be added separately.
In an alternative embodiment, the reconciling between values may use all the triangles
that can be formed between locations of any three AI systems and which include the
object within the triangle, part global information for each triangle being calculated
from the three AI systems in that triangle, and then overall global information being
calculated by combining the part global information for each triangle.
[0026] According to an embodiment of a further aspect of the invention, there is provided
a system to reconcile values of a feature, each value being provided by a different
artificial intelligence, AI, system, the system comprising memory and a processor
arranged to provide: an information collector to collect logs from the different AI
systems, each log including a value of the feature; a conflict auditor to identify
any discrepancy between the values, and when there is any discrepancy to create global
information from the values, the global information taking into account the values;
and a global information activator to send the global information to an AI system
when the global information differs from the value of that AI system, to replace the
value provided by that AI system.
[0027] A (trusted) AI network may comprise a group of autonomous AI systems linked to a
resolution system as defined above.
[0028] According to an embodiment of a further aspect of the invention, there is provided
a computer program comprising instructions which, when the program is executed by
a computer, cause the computer to carry out the method detailed above.
[0029] An apparatus or computer program according to preferred embodiments of the present
invention may comprise any combination of the method aspects. Methods or computer
programs according to further embodiments may be described as computer-implemented
in that they require processing and memory capability.
[0030] The apparatus according to preferred embodiments is described as configured or arranged
to, or simply "to" carry out certain functions. This configuration or arrangement
could be by use of hardware or middleware or any other suitable system. In preferred
embodiments, the configuration or arrangement is by software.
[0031] Thus according to one aspect there is provided a program including instructions which,
when loaded onto at least one computer configure the computer to become the apparatus
according to any of the preceding apparatus definitions or any combination thereof.
[0032] According to a further aspect there is provided a program including instructions
which when loaded onto the at least one computer configure the at least one computer
to carry out the method steps according to any of the preceding method definitions
or any combination thereof.
[0033] In general the computer may comprise the elements listed as being configured or arranged
to provide the functions defined. For example this computer may include memory, processing,
and a network interface.
[0034] The invention may be implemented in digital electronic circuitry, or in computer
hardware, firmware, software, or in combinations of them. The invention may be implemented
as a computer program or computer program product, i.e., a computer program tangibly
embodied in a non-transitory information carrier, e.g., in a machine-readable storage
device, or in a propagated signal, for execution by, or to control the operation of,
one or more hardware modules.
[0035] A computer program may be in the form of a stand-alone program, a computer program
portion or more than one computer program and may be written in any form of programming
language, including compiled or interpreted languages, and it may be deployed in any
form, including as a stand-alone program or as a module, component, subroutine, or
other unit suitable for use in a data processing environment. A computer program may
be deployed to be executed on one module or on multiple modules at one site or distributed
across multiple sites and interconnected by a communication network.
[0036] Method steps of the invention may be performed by one or more programmable processors
executing a computer program to perform functions of the invention by operating on
input data and generating output. Apparatus of the invention may be implemented as
programmed hardware or as special purpose logic circuitry, including e.g., an FPGA
(field programmable gate array) or an ASIC (application-specific integrated circuit).
[0037] Processors suitable for the execution of a computer program include, by way of example,
both general and special purpose microprocessors, and any one or more processors of
any kind of digital computer. Generally, a processor will receive instructions and
data from a read-only memory or a random access memory or both. The essential elements
of a computer are a processor for executing instructions coupled to one or more memory
devices for storing instructions and data.
[0038] The invention is described in terms of particular embodiments. Other embodiments
are within the scope of the following claims. For example, the steps of the invention
may be performed in a different order and still achieve desirable results. Multiple
test script versions may be edited and invoked as a unit without using object-oriented
programming technology; for example, the elements of a script object may be organized
in a structured database or a file system, and the operations described as being performed
by the script object may be performed by a test control program.
[0039] Elements of the invention have been described using the terms "information collector",
"conflict auditor" etc. The skilled person will appreciate that such functional terms
and their equivalents may refer to parts of the system that are spatially separate
but combine to serve the function defined. Equally, the same physical parts of the
system may provide two or more of the functions defined.
[0040] For example, separately defined means may be implemented using the same memory and/or
processor as appropriate.
[0041] Preferred features of the present invention will now be described, purely by way
of example, with reference to the accompanying drawings, in which:-
Figure 1 is a flow chart of a method in a general embodiment;
Figure 2 is a block diagram of main system components in a general embodiment of the
invention;
Figure 3 is a block diagram of a specific invention embodiment;
Figure 4 is an overview sequence diagram of a resolution system;
Figure 5 is a flowchart showing a workflow of a conflict auditor;
Figure 6 is a table of information collected;
Figure 7 is a table showing an example output;
Figure 8 is a flowchart showing a workflow of a global information (GI) activator;
Figure 9 is a flowchart showing a workflow of the confidence score generator;
Figure 10 is an overhead view of a use case scenario;
Figure 11 is a graph of a use case scenario;
Figure 12 is a JSON example of information sent by each object;
Figure 13 is a table of information received by a conflict auditor;
Figure 14 is a graph showing Barycentric coordinates of the use case scenario; and
Figure 15 is a diagram of suitable hardware for implementation of invention embodiments.
[0042] AI systems can take autonomous decisions in different situations and these decisions
can provoke a negative impact (i.e. on people or society, such as an accident, discrimination,
etc.). The AI systems are considered black boxes or opaque software tools that are
difficult to interpret and explain, and whose behaviour is difficult to modify.
[0043] Analysing and redressing AI system behaviour in isolation is a huge technological
challenge that needs resolution to provide safe, beneficial and fair use of AI.
[0044] The inventors have appreciated that collective knowledge that can be obtained from
different systems can help to determine why a specific action was taken, identify
possible errors or unexpected results and avoid negative impacts of an AI in a particular
situation or context.
[0045] Therefore, one improvement provided by the invention embodiments is a way to analyse
a set of AI systems that are in a given context where a negative impact could happen.
This analysis is based on using data from different systems to identify discrepancies
between the systems/data, and may detect which systems have errors or are responsible
for these discrepancies, and even redress these errors to avoid negative impact.
Moreover, possible penalization and redress of AI systems will allow the minimization
of risk in future contexts with similar information. In this sense, redress may be
the deactivation of a faulty component, and the use of the GI sent by the system.
Penalization is more related to the confidence for the faulty AI component. If an
AI component is deactivated, this may decrease the confidence score.
[0046] In embodiments, accountability is focused on analysing a particular context (situation
or perception), where more than one AI system is present, to understand and look
for who or what is responsible for a potential event (generally, with negative impact)
or the occurrence of an incident in this context, and preferably to apply a mechanism
that prevents the harm, reducing the negative impact and minimizing the risk of repeating
an event. Hence, for example, an incorrect position or other feature determined by
an AI system may be rectified.
[0047] Put another way, embodiments disclose a way to analyse and rectify the behaviour
of different AI systems that are in a given context where a negative impact could
happen. They are based on tracing data from different systems to identify discrepancies
amongst them, detecting a source (responsibility) for these discrepancies and redressing
the situation to resolve such discrepancies for the purposes of self-regulation or
auditability.
[0048] The redress mechanism is a technical action that allows rectification of the responsible
AI component/system and thus may eliminate or minimize the risk of negative impacts
in the context.
[0049] One main benefit is to be able to trace back incorrect or inconsistent inputs that
could generate a negative impact in a context by considering AI systems' outputs.
Thus, by tracing the collective knowledge from different AI systems, it can be clarified
why an AI system took a specific action, possible errors can be identified and redressed,
and future unexpected outputs can be eliminated or minimized for AI system auditability purposes.
[0050] Embodiments focus on the analysis and reconciliation of conflicting information arising
from a context or situation (for example a physical situation or setting) where there
are two or more AI systems and a negative impact could happen given that one of the
systems may interpret a feature of the context incorrectly.
[0051] Figure 1 depicts a general embodiment in which values from different AI systems are
reconciled. In step S10, logs are collected from the AI systems. Here, each AI system
may send logs repeatedly. Each log is a set of values for the same set of features
(such as physical coordinates or other data points). The resolution system may align
these logs temporally and by feature.
[0052] In S20, the resolution system identifies any discrepancies between values of the
same feature (and at the same time) from these different logs. For efficiency, the
resolution system may check first that there has been any change in the value, and
if not, the method may be discontinued.
[0053] In S30, the system creates global information (GI) from the values (if there is a
discrepancy). For example, if 4 systems give one value and a 5th system gives a different
value for the same feature, then GI is created. The GI may be a kind of average, or
weighted average, of the values, or it can be calculated in any other way from some
or all of the values. For example, each AI system may be given a confidence score,
the confidence scores providing the weights (for example between 0 and 1).
[0054] In S40, the GI is sent to any AI system with a discrepancy. In fact, the GI may be
sent to all the AI systems, and each AI system may include a detector to identify
the difference between its perception and the GI received. These detectors may simply
give priority to the GI (over the perception of the AI system) if there is any difference.
Hence the GI is adopted in the AI system.
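To make the workflow of Figure 1 concrete, the following is a minimal sketch in Python of steps S10 to S40. The log structure, the function names and the transport stub are assumptions made for illustration only; they are not part of the claimed embodiments.

    # Sketch of the reconciliation loop of Figure 1 (S10-S40).
    from typing import Dict, Optional

    def create_global_information(values: Dict[str, float],
                                  confidence: Dict[str, float]) -> float:
        # Confidence-weighted average of the values (see paragraph [0022]).
        m = len(values)
        numerator = sum(confidence[k] * v for k, v in values.items())
        denominator = m - sum(1 - confidence[k] for k in values)
        return numerator / denominator

    def send_global_information(system_id: str, gi: float) -> None:
        # Stand-in for the real transport back to the AI system (assumed).
        print(f"GI {gi} sent to AI system {system_id}")

    def reconcile(logs: Dict[str, float],
                  confidence: Dict[str, float]) -> Optional[float]:
        # S20: identify any discrepancy between the collected values.
        if len(set(logs.values())) <= 1:
            return None
        gi = create_global_information(logs, confidence)      # S30
        for system_id, value in logs.items():
            if value != gi:                                   # S40
                send_global_information(system_id, gi)
        return gi

For instance, reconcile({"car_1": 600, "car_2": 600, "car_3": 400}, {"car_1": 1.0, "car_2": 1.0, "car_3": 0.5}) returns 560.0, a GI pulled towards the two high-confidence values rather than the plain mean of 533.3.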
[0055] Figure 3 depicts an overview of a resolution system for use with AI systems and its
interactions with external elements. On the left, different AI systems in a context
are depicted, together with an auditor. On the right, there is the resolution system itself.
[0056] In more detail, the elements of the figure are:
- System to resolve conflicting contexts arising from AI systems: This may be an independent computer system to which AI systems send information
in real time, or it may form part of an existing server that already has communication
with the AI systems.
In essence, the resolution system receives information from a set of two or more AI
systems (logs), detects discrepancies among the information received and calculates
global information (GI) that may be used to activate or deactivate specific components
of the AI systems to avoid a negative impact in the context (such as the physical
setting in which the AI systems operate).
Additionally, the system can produce a dynamic report with a summary of the exchanged
information and the technical orders that have been sent to the AI systems to avoid
negative events in the context. This report could be analysed to allow an auditor
or external entity to take actions in the AI systems (for example, redesign an AI
component, replace a technical device such as a sensor, etc.).
- AI systems: The AI systems are autonomous systems that carry out autonomous decisions in a defined
context. One AI system could be a smart object: an object equipped with different
types of sensors that enable a virtual perception of the surroundings, and embedded with
a processor and connectivity that enhance its interaction with other smart objects
and external servers.
These AI systems send different logs (information about a set of features) to the
resolution system to resolve conflicting perceptions of the context arising from the
AI systems, and will receive in return orders that may be translated into technical actions
to avoid negative impact in the context.
Examples of AI systems that could be considered are (or form part of): robots, self-driving
vehicles (whether on land, on water or in the air), drones, industrial bots, virtual
assistants, home automation components, etc.
- Auditor: An optional external entity may analyse (manually or automatically) the report
or information produced by the resolution system to resolve conflicting perceptions
of context arising from AI systems, in order to rectify or prevent specific behaviours
of the AI systems in future versions.
Detailed Description of an Invention Embodiment
[0057] The three major components within the resolution system 10 are described in more
detail hereinafter and comprise:
- Conflict auditor 40: This component is supplied with information by an information collector collecting
logs from the AI systems 20, and also receives historical data. It has three modules:
to identify discrepancies, to estimate perceptions, and to calculate global information
(GI) using the information perceived by the AI systems.
- Confidence Score Generator 60: This component may use heuristics, majority vote and closest neighbour modules, along
with historical data, to calculate a confidence score for each AI system using a multi-method
approach, and to perform confidence computation based on stored standards or guidelines,
such as ethically-aligned confidence computations based on published AI ethics guidelines.
These may be confidence computations based on other standards, e.g. for health and safety
or critical systems where standards/guidelines apply.
For example, the entire system may be ethically compliant (not only the confidence
score computation) by following appropriate guidelines and by incorporating such guidelines
into the original functionality/mechanisms of the system. For example, traceability
(i.e. tracking where the data originates and logging the trail) may be one of
the key requirements. This also leads to accountability of the system in terms of
faulty data (sources) and the ability to redress. The system may include a knowledge
database of ethical guidelines that includes the definition of principles to be followed.
- GI Activator 50: This component will send the technical actions or orders to specific AI systems to
try to resolve the conflict in the context.
[0058] Moreover, the resolution system includes other elements for whole-system support.
Their detailed description is outside the scope of this description:
- Information collector 30: receives the inputs (logs) from the different AI systems and may carry out different
processes to normalize and standardize the information received, such as formatting
and rounding.
- Report Generator 70: allows external systems or users to check information received, produced and sent
by the resolution system.
- Historical data 80: this local or remote storage component stores and traces all the information received,
produced and sent by the different components of the system.
[0059] The main sequence diagram of a resolution system according to one embodiment is depicted
in Figure 4.
[0060] The sequence diagram starts at step S100 when a set of AI systems send their logs
in a context. In step S110, the information collector processes the logs and sends
the new information to the conflict auditor. In step S120 the conflict auditor requests
the historical confidence scores of each AI system based on a specific feature (for
example taking into account previous deactivation) and with this information calculates
the global information (GI). In step S130 the GI will be calculated for use and for
storage as historical data. Then, the conflict auditor activates the processes of
the GI activator (S140) and the confidence score generator (S150).
[0061] The GI activator sends orders as necessary to the AI systems to deactivate AI components
of the AI systems that have wrong information (different from the GI) in step S160.
[0062] The confidence score generator will request heuristic data (S170) and update the
confidence scores of the system, potentially making sure that they are ethically aligned
using a database of standard/published AI ethics guidelines and based on a multi-method
approach (S180). This information is then stored into the historical database.
[0063] Next, the processes and methods of each component are detailed.
Information Collector 30
[0064] This component may be needed to carry out any of the following functions:
- receive information from the different AI systems,
- normalize and standardize it,
- store the raw and processed information, and
- resend the processed information of a specific event to the conflict auditor. An event
may be defined as something that occurs (or is perceived) at a given time by an AI
system; that is, a set of perceptions that an AI system observes in a context.
Historical Data 80
[0065] This component may be a database storing all information and data that is received,
sent and processed by the system. For example, the historical data could include:
- the logs or raw data sent from the AI systems,
- the GI calculated by the system to resolve conflicting contexts arising
from AI systems,
- the ethically-aligned confidence scores of each AI system for each feature,
- etc.
Conflict Auditor 40
[0066] The conflict auditor is programmed to detect discrepancies among the information
received and to calculate global information (GI) for each feature that has discrepancies.
The GI will be used to avoid negative impacts in the context.
[0067] The workflow of the conflict auditor is illustrated in Figure 5. At the top, there
is a table with an event defined by n features and perceived by m systems.
[0068] The process starts when the conflict auditor receives the AI systems' information,
potentially as processed by the information collector.
[0069] The information received will be a matrix of size m × n, where m is the number of
AI systems in the context and n is the number of features detected by each AI system.
For example, the matrix may have the form shown in Figure 6, in which the first column
is the ID of the AI system and value A_B is the Bth feature of the Ath AI system.
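As a sketch only, assuming the processed logs arrive as a mapping from AI-system ID to feature values (an assumed structure, not defined by the embodiments), the m × n arrangement of Figure 6 could be built as follows:

    # Sketch: arranging processed logs into the m x n matrix of Figure 6.
    from typing import Any, Dict, List

    def build_matrix(logs: Dict[str, Dict[str, Any]]) -> List[list]:
        # logs maps AI-system ID -> {feature name: perceived value}.
        features = sorted({f for log in logs.values() for f in log})
        matrix = []
        for system_id in sorted(logs):
            # First column is the system ID; entry value_A_B is the
            # Bth feature perceived by the Ath AI system.
            row = [system_id] + [logs[system_id].get(f) for f in features]
            matrix.append(row)
        return matrix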
[0070] Next, the steps of the workflow compare the logs received with the previous
ones stored by the historical data component to identify changes (S200). If there
are changes (S210), the conflict auditor will carry out the following tasks for each feature
to review (S220) in the information received. First the conflict auditor
compares values of the feature in all the received logs (S230). If different values
are detected (S240), global information will be calculated (S250) using confidence
scores from the historical data. This information will be stored and retained in memory
to be sent to the confidence score generator and the GI activator (S260). Once all
features are checked (no feature to review at S220), the process will end. Two outputs
could be generated: an empty list, when no changes are detected from the previous logs
received or there are no discrepancies in the perceptions; or a list of features that
have discrepancies and the new global information calculated by the component.
[0071] An example of an output when feature1 has discrepancies among the values of this
feature detected by the AI systems is shown in Figure 7.
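Purely as an illustration of steps S200 to S260 (the data structures and names are assumptions for this sketch, and the create_global_information helper is the one sketched after Figure 1 above):

    # Sketch of the conflict auditor workflow of Figure 5 (S200-S260).
    from typing import Dict, List, Tuple

    def audit(new_logs: Dict[str, Dict[str, float]],
              previous_logs: Dict[str, Dict[str, float]],
              scores: Dict[str, Dict[str, float]]) -> List[Tuple[str, float]]:
        # S200/S210: compare with the previous logs; stop if nothing changed.
        if new_logs == previous_logs:
            return []
        output = []
        features = {f for log in new_logs.values() for f in log}
        for feature in features:                       # S220
            # S230: gather the value of this feature from every log.
            values = {k: log[feature]
                      for k, log in new_logs.items() if feature in log}
            if len(set(values.values())) > 1:          # S240: discrepancy
                cs = {k: scores[k][feature] for k in values}
                gi = create_global_information(values, cs)   # S250
                output.append((feature, gi))           # S260
        return output

An empty list corresponds to the first output described in [0070]; a non-empty list corresponds to the Figure 7 style of output.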
[0072] The global information (GI_1 for feature1) may be calculated as:

$$GI_1 = \frac{\sum_{k=1}^{m} cs_{k,1} \cdot \text{value}_{k,1}}{m - \sum_{k=1}^{m} (1 - cs_{k,1})}$$

Where:
m is the number of AI systems that have information about feature1
cs_{k,1} is the stored confidence score of the AI system with ID k for feature1
value_{k,1} is the value of the AI system k for feature1
[0073] Therefore, in this example, AI systems with more confidence (high ethically-aligned
confidence score and/or fewer deactivations) will have more weight than AI systems
with less confidence for a specific feature. If cs_{k,1} = 1 for all the AI systems
then the denominator is m. Otherwise, for each AI system where cs_{k,1} ≠ 1,
cs_{k,1} is subtracted from 1 and the results are added together and then subtracted from
m to make the denominator.
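As a purely illustrative numeric instance of this formula (the values and scores are invented): with m = 3, confidence scores 1, 1 and 0.5, and perceived values 10, 10 and 20,

$$GI_1 = \frac{1 \cdot 10 + 1 \cdot 10 + 0.5 \cdot 20}{3 - (1 - 0.5)} = \frac{30}{2.5} = 12$$

so the low-confidence outlier pulls the result only slightly away from the majority value of 10.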
GI Activator
[0074] The GI activator receives the global information (GI) of each feature calculated
by the conflict auditor and starts the workflow to send orders to the different AI
systems and thus to avoid negative impact in the context (as shown in Figure 8).
[0075] Once the GI activator receives the data from the conflict auditor, the component
checks in step S300 if there are items in the list received. An item in the list indicates
that there is a discrepancy for a specific feature. If a new GI was calculated, this
indicates that discrepancies were detected.
[0076] If there are discrepancies, then for each feature and its associated GI, the GI activator
compares the GI with the information perceived by each AI system (for this, historical
data is checked) in S310.
[0077] If differences between the GI and the perceived information are found in step
S320, the GI activator will send an order to the specific AI system in S330 to deactivate
its AI component that perceives the feature differently from the global information.
For example:
[0078] There are three AI systems: A, B and C. A and B detect an object as a car. C detects
the same object as a pedestrian. The global information calculated by the conflict
auditor determines that the object is a car, given the confidence of each AI system.
The GI activator will send to the AI system C the order to stop its classification
component and take into account the calculated GI as the feature of classification:
car.
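For a categorical feature such as this classification, a confidence-weighted average is not directly applicable; one possibility, sketched here purely as an assumption (the embodiments do not prescribe a particular method), is a confidence-weighted vote:

    # Sketch: one possible reconciliation of a categorical feature such as
    # an object classification (an illustrative assumption only).
    from collections import defaultdict
    from typing import Dict

    def categorical_gi(labels: Dict[str, str],
                       confidence: Dict[str, float]) -> str:
        # labels maps AI-system ID -> class label, e.g. "car" / "pedestrian".
        tally: Dict[str, float] = defaultdict(float)
        for system_id, label in labels.items():
            tally[label] += confidence[system_id]
        # The label with the largest total confidence wins.
        return max(tally, key=lambda label: tally[label])

With A and B reporting "car" and C reporting "pedestrian", "car" is selected whenever the combined confidence of A and B exceeds that of C, matching the example above.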
[0079] The order to stop the classification component (or any other component producing
a faulty value in an AI system) may have any suitable form. For example, the resolution
system could automatically create technical orders that the AI system has to interpret.
A technical order could have the format "subject + order + arguments" (an illustrative
serialization is sketched after this list):
- Subject: the component to which the order applies (a component of the system).
- Order: the technical result (stop, update, continue, etc.) for the faulty AI component.
- Arguments: values to apply; for example, the GI to apply (or a correction based on a GI).
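A minimal sketch of how such an order might be serialized, here assuming a JSON encoding with invented field names (only the "subject + order + arguments" format itself is described in this embodiment):

    # Sketch of a "subject + order + arguments" technical order (assumed
    # JSON encoding; the field names are illustrative).
    import json

    order = {
        "subject": "classification_component",  # component to apply the order to
        "order": "stop",                        # technical result for the component
        "arguments": {"classification": "car"}, # the GI (or correction) to apply
    }
    message = json.dumps(order)  # sent to the AI system, e.g. AI system C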
[0080] The GI may be applied in technical orders that may be translated into actions in each
AI system: for example, change lane, brake the car, etc. These actions could avoid
a negative impact in a context (i.e., an accident).
[0081] The faulty AI component may include a mechanism to update its functionality based
on the GI, in the form of recalibration or "auto-learning".
Confidence Score Generator
[0082] The confidence score generator is to provide an ethically-aligned confidence score
for each AI system for each feature, based on a multi-method approach and by using
the database of standards/published AI ethics guidelines.
[0083] The workflow of the confidence score generator starts when the conflict auditor ends
its workflow and sends the global information and AI systems' information as processed
(processed logs received, with details perceived by each AI system). With this information,
the confidence score generator calculates a set of partial confidence scores, using
different techniques.
[0084] Three parallel processes may be carried out here to calculate three partial confidence
scores, but other techniques could be added or only one or two techniques could be
used.
- Heuristic method: takes into account the historical data to calculate a partial confidence score for
each feature of each AI system. Different techniques could be used to implement this
method. For example, a machine learning or deep learning model could be created using
historical data to predict the confidence of an AI system, or a set of AI models could
be used to calculate the confidence of each feature.
- Majority vote method: takes into account what other AI systems detect and perceive about a specific feature.
Thus, if the majority of AI systems detect a specific value for a feature, the partial
score related to the majority vote will be increased; otherwise, it will be decreased
(a sketch follows this list).
- Closest neighbour method: will be applied to features that have a scale or a countably infinite collection
of norms. The scale could be applied to numeric features (e.g., distance) and to categorical
features that have a scale of relevance (e.g., in a classification of objects, a
pedestrian will be more important than trees or animals).
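As an illustration of the majority vote method only (the step size and the adjustment scheme are assumptions made for this sketch):

    # Sketch of the majority vote method: systems agreeing with the
    # majority value gain partial score; the others lose it.
    from collections import Counter
    from typing import Dict

    def majority_vote_scores(values: Dict[str, float],
                             step: float = 0.1) -> Dict[str, float]:
        # values maps AI-system ID -> perceived value of one feature.
        majority_value, _ = Counter(values.values()).most_common(1)[0]
        return {system_id: (step if value == majority_value else -step)
                for system_id, value in values.items()}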
[0085] After the calculation of the three partial confidence scores, the confidence score
generator will calculate the final confidence score of each AI system for each feature.
The equation for the system k could be:

$$cs_k = a \cdot cs_{k,\text{heuristic}} + b \cdot cs_{k,\text{majority}} + c \cdot cs_{k,\text{neighbour}}$$

Where:
cs_k is the final score of the AI system k for a specific feature
cs_{k,heuristic} is the partial confidence score using the heuristic method
cs_{k,majority} is the partial confidence score using the majority vote method
cs_{k,neighbour} is the partial confidence score using the closest neighbour method
a, b and c are weights giving more or less importance to the respective partial scores.
[0086] In summary, the workflow to calculate the confidence score is illustrated in Figure
9.
[0087] The heuristic method S400, majority vote method S410 and closest neighbour method
S420 take place in parallel, using the GI and systems information. The heuristic method
also uses historical data. The results of the methods are combined in S430 to give
a final confidence score, which is stored in the historical data.
Worked Example
[0088] The example relates to a use case in transport. Specifically, it considers a road
where there are different autonomous vehicles (AI systems with a set of AI components
to detect objects, positions and classifications of objects, etc.).
[0089] Some usual AI systems to detect position are based on radar, GPS, LIDAR and inertial
measurement units. Examples can be found at:
https://en.wikipedia.org/wiki/Vehicle_tracking_system.
[0090] An accident could happen because one or more of the vehicles perceives a wrong position
for an object (e.g., a pedestrian) or because an AI system classifies a pedestrian
incorrectly. An example scenario is depicted in Figure 10. Assume north is shown as
up in the figure. Car 1 is travelling north out of a side road onto an east-west main
road. Car 2 is travelling east on the main road towards the side road. Car 3 is travelling
west on the main road towards the side road. Car 4 was travelling west on the main
road and is turning south down the side road from the main road. Person 1 is on the
main road just to the east of the side road and of car 4.
[0091] For the worked example, consider the scenario graph as shown in Figure 11.
[0092] In Figure 11 the four cars and person are objects which are shown as points. Solid
lines with arrows represent the direction of the object. For example, car_1 moves
to the north, and car_2 to the east, as previously explained.
[0093] Dashed lines represent a car's perception (vision) of person_1. For example,
car_1 and car_2 see person_1 at the correct position (the dashed line ends at
the correct place), but car_3 and car_4 have a wrong perception of person_1 and
believe that the person is at the end of the dashed line.
[0094] Taking into account the previous information, the scenario is as follows:
- car_1 and car_2 detect the person_1 in the right position
- car_3 and car_4 detect the person_1 in the wrong position
[0095] First, in order to resolve the conflicting information arising from the AI systems,
the vehicles (or AI systems) send the information that they perceive to
the information collector. The information collector component will normalize and
standardize the logs received to the same format, and will check that the logs received
are for the same context and time. For example, car_1 sends the information shown
in Figure 12 in JSON format (only the relevant information is shown in Figure 12).
The information indicates that car_1, at position (428,400), views or detects
person_1 at position (600,200).
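Figure 12 itself is not reproduced in this text; a log consistent with that description might look like the following (the field names are assumptions for illustration):

    {
      "id": "car_1",
      "position": { "x": 428, "y": 400 },
      "perceptions": [
        {
          "id": "person_1",
          "classification": "person",
          "position": { "x": 600, "y": 200 }
        }
      ]
    }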
[0096] The conflict auditor receives the information from all the cars. For each feature,
it will detect whether there are discrepancies. The component will detect discrepancies
in the position for person_1, because car_3 detects the person at position (400, 450)
and car_4 at position (600, 400).
[0097] X_direction is the direction of a car (where it is moving) relative to the x axis.
For example, in the table, car_1 is not moving along the x axis; it is going north (this
would be included in a y_direction feature, which is not shown).
The information received by the conflict auditor is shown in Figure 13.
[0098] To calculate the global information for the person_1 position, the conflict auditor
takes into account the information of each vehicle (car, in this example) and its previous
confidence scores.
[0099] The worked example is based on a method that is inspired by the barycentric coordinates
of the person with respect to the different triangles that the vehicles can define. The
confidence and accuracy of this method is higher than when simply combining all inputs.
Creating the triangles is a way to check the position perceived by each vertex.
[0100] First, the conflict auditor detects what triangles defined by cars include the person_1
inside or close to them (considering a margin such as 40% of the areas). In the example,
four triangles will be detected, as shown in Figure 14:
- car_1car_3car_4 → triangle 1
- car_1car_3car_2 → triangle 2
- car_1car_4car_2 → triangle 3
- car_3car_4car_2 → triangle 4
[0101] For each triangle, part-global information (GI) is calculated.
[0102] The part-global information for the features x and y and a triangle t may be calculated,
consistently with the formula of paragraph [0072] applied to the three AI systems of the
triangle, as:

$$GI_{t,x} = \frac{\sum_{k \in t} cs_{k,x} \cdot x_k}{3 - \sum_{k \in t} (1 - cs_{k,x})} \qquad GI_{t,y} = \frac{\sum_{k \in t} cs_{k,y} \cdot y_k}{3 - \sum_{k \in t} (1 - cs_{k,y})}$$

Where:
t is a specific detected triangle
k is the ID of the AI system (only the three AI systems forming the triangle are considered)
x_k and y_k are the values of the features x and y (the position of person_1) perceived by the AI system k
cs_{k,x} is the confidence score of the AI system with ID k for the feature x
cs_{k,y} is the confidence score of the AI system with ID k for the feature y
[0103] Finally, the global information for the position of the person is the median over all
the triangles:

$$GI_x = \operatorname{median}_t(GI_{t,x}) \qquad GI_y = \operatorname{median}_t(GI_{t,y})$$
[0104] Other methods (not only the median) are also possible. For example, the triangles
in which car_3 (the car with the lower confidence score) participates could be disregarded,
or be given less relevance, in obtaining the final global information. In the worked
example the result is:
GI_x = 600
GI_y = 450
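The following Python sketch ties the triangle steps together. The positions, the single confidence score per car (used here for both coordinates), and the relaxed inclusion test are assumptions made for illustration; in particular, the margin test is only one way to implement "inside or close to" the triangle in [0100].

    # Sketch of the triangle-based calculation of paragraphs [0100]-[0103].
    from itertools import combinations
    from statistics import median
    from typing import Dict, Tuple

    Point = Tuple[float, float]

    def barycentric(p: Point, a: Point, b: Point, c: Point):
        # Barycentric coordinates of point p in triangle (a, b, c).
        (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
        det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
        u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
        v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
        return u, v, 1.0 - u - v

    def part_gi(perceived: Dict[str, Point], cs: Dict[str, float]) -> Point:
        # Confidence-weighted average position, as in paragraph [0102].
        den = len(perceived) - sum(1 - cs[k] for k in perceived)
        x = sum(cs[k] * perceived[k][0] for k in perceived) / den
        y = sum(cs[k] * perceived[k][1] for k in perceived) / den
        return x, y

    def global_position(car_pos: Dict[str, Point],
                        perceived: Dict[str, Point],
                        cs: Dict[str, float],
                        margin: float = 0.4) -> Point:
        # [0100]: keep triangles of cars that contain (or nearly contain)
        # the person; "nearly" is modelled by a relaxed barycentric test.
        parts = []
        for ids in combinations(sorted(car_pos), 3):
            a, b, c = (car_pos[k] for k in ids)
            u, v, w = barycentric(perceived[ids[0]], a, b, c)
            if min(u, v, w) >= -margin:
                parts.append(part_gi({k: perceived[k] for k in ids}, cs))
        # [0103]: the final GI is the median over the part-GIs
        # (assumes at least one qualifying triangle).
        xs, ys = zip(*parts)
        return median(xs), median(ys)

With the four cars of Figure 11, this would yield the four listed triangles and take the median over their part-GIs.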
[0105] The GI activator receives the calculated GI and compares it with the information received
from the AI systems (the cars). Given that the GI is different from the received
logs of the AI systems, the GI activator may deactivate the AI components of all the
cars to avoid an accident. If the last calculation is used, only car_3 will deactivate
its AI components for features x and y.
[0106] Figure 15 is a block diagram of a computing device, such as a data storage server,
which embodies the present invention, and which may be used to implement a method
of resolving conflicts in values between AI systems. The computing device comprises
a processor 993 and memory 994. Optionally, the computing device also includes a
network interface 997 for communication with other computing devices, for example
with other computing devices of invention embodiments.
[0107] For example, an embodiment may be composed of a network of such computing devices.
Optionally, the computing device also includes one or more input mechanisms such as
keyboard and mouse 996, and a display unit such as one or more monitors 995. The components
are connectable to one another via a bus 992.
[0108] The memory 994 may include a computer readable medium, which term may refer to a
single medium or multiple media (e.g., a centralized or distributed database and/or
associated caches and servers) configured to carry computer-executable instructions
or have data structures stored thereon. Computer-executable instructions may include,
for example, instructions and data accessible by and causing a general purpose computer,
special purpose computer, or special purpose processing device (e.g., one or more
processors) to perform one or more functions or operations. Thus, the term "computer-readable
storage medium" may also include any medium that is capable of storing, encoding or
carrying a set of instructions for execution by the machine and that cause the machine
to perform any one or more of the methods of the present disclosure. The term "computer-readable
storage medium" may accordingly be taken to include, but not be limited to, solid-state
memories, optical media and magnetic media. By way of example, and not limitation,
such computer-readable media may include non-transitory computer-readable storage
media, including Random Access Memory (RAM), Read-Only Memory (ROM), Electrically
Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM)
or other optical disk storage, magnetic disk storage or other magnetic storage devices,
flash memory devices (e.g., solid state memory devices).
[0109] The processor 993 is configured to control the computing device and execute processing
operations, for example executing code stored in the memory to implement the various
different functions of modules, such as the information collector, conflict auditor
and global information activator described here and in the claims. The memory 994
stores data being read and written by the processor 993. As referred to herein, a
processor may include one or more general-purpose processing devices such as a microprocessor,
central processing unit, or the like. The processor may include a complex instruction
set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor,
very long instruction word (VLIW) microprocessor, or a processor implementing other
instruction sets or processors implementing a combination of instruction sets. The
processor may also include one or more special-purpose processing devices such as
an application specific integrated circuit (ASIC), a field programmable gate array
(FPGA), a digital signal processor (DSP), network processor, or the like. In one or
more embodiments, a processor is configured to execute instructions for performing
the operations and steps discussed herein.
[0110] The display unit 995 may display a representation of data (such as the values from
the AI system) stored by the computing device and may also display a cursor and dialog
boxes and screens enabling interaction between a user and the programs and data stored
on the computing device. The input mechanisms 996 may enable a user to input data
and instructions to the computing device.
[0111] The network interface (network I/F) 997 may be connected to a network, such as the
Internet, and is connectable to other such computing devices (such as the AI systems
themselves) via the network. The network I/F 997 may control data input/output from/to
other apparatus via the network. Other peripheral devices such as a microphone, speakers,
printer, power supply unit, fan, case, scanner, trackball etc. may be included in
the computing device.
[0112] The information collector 30 may comprise processing instructions stored on a portion
of the memory 994, the processor 993 to execute the processing instructions, and a
portion of the memory 994 to store values during the execution of the processing instructions.
The output of the information collector may be stored on the memory 994 and/or on
a connected storage unit, and may be transmitted/transferred/communicated to the historical
data store/report generator.
[0113] The conflict auditor 40 may comprise processing instructions stored on a portion
of the memory 994, the processor 993 to execute the processing instructions, and a
portion of the memory 994 to store data relevant to identified discrepancies during
the execution of the processing instructions. The output of the conflict auditor may
be stored on the memory 994 and/or on a connected storage unit, and may be transmitted/transferred/communicated
to the historical data store/report generator.
[0114] The global information activator 50 may comprise processing instructions stored on
a portion of the memory 994, the processor 993 to execute the processing instructions,
and a portion of the memory 994 to store results of calculations and GI during the
execution of the processing instructions. The output of the global information activator
may be stored on the memory 994 and/or on a connected storage unit, and may be transmitted/transferred/communicated
to the historical data store/report generator.
[0115] Methods embodying the present invention may be carried out on a computing device
such as that illustrated in Figure 15. Such a computing device need not have every
component illustrated in Figure 15, and may be composed of a subset of those components.
A method embodying the present invention may be carried out by a single computing
device in communication with one or more data storage servers via a network. The computing
device may itself be a data storage server storing the values and GI and any intermediate
results.
[0116] A method embodying the present invention may be carried out by a plurality of computing
devices operating in cooperation with one another. One or more of the plurality of
computing devices may be a data storage server storing at least a portion of the AI
system/resolution system data.
Summary
[0117] Embodiments propose a system and methods to integrate accountability into existing
AI systems. The previous description is independent of any field, so this could be
applied in different business areas where two or more AI systems participate: finance,
transport, robotics, etc.
[0118] AI systems are considered black boxes even to their developers. Understanding and
looking for errors or wrong information in them is a huge technical challenge, and
this is necessary to avoid negative impact. New mechanisms to achieve this have to be
defined.
[0119] Embodiments disclose: a mechanism to trace back incorrect or inconsistent input
by considering the AI system's output; a system and method to detect and redress errors
in AI systems based on the knowledge of other systems; and a mechanism that prevents
harm, reducing the negative impact and minimizing the risk of repeating a harmful
event.
Amended claims in accordance with Rule 137(2) EPC.
1. A computer-implemented method of reconciling values of a feature, each value being
provided by a different artificial intelligence, AI, system, the method comprising:
collecting logs from the different AI systems, each log including a value of the feature;
identifying any discrepancy between the values, and when there is any discrepancy:
creating global information from the values provided by the different AI systems,
the global information taking into account some or all of the values, and when the
global information differs from the value of one of the AI systems:
sending the global information to that AI system,
wherein the AI systems determine a position of an object or identify an object, and
the feature is a determined position of an object, or an identification of an object.
2. A method according to claim 1, wherein the global information is to replace the value
provided by that AI system with a value from the global information.
3. A method according to claim 1 or 2, wherein each value used to create the global information
is modified by a confidence score for the AI system which provided the value.
4. A method according to claim 3, wherein the confidence score is formed of a plurality
of different partial confidence scores added together, preferably wherein the partial
confidence scores are individually weighted.
5. A method according to claim 3 or 4, wherein the confidence score is based on AI ethics
guidelines.
6. A method according to any of the preceding claims, further comprising deactivating
the part of the AI system providing the value which is replaced by the global information.
7. A method according to any of the preceding claims, further comprising a first step
of checking for a change in values in the logs before identifying any discrepancies
between the values.
8. A method according to any of the preceding claims, further comprising generating a
report based on the logs and the global information and feeding the report to an auditor.
9. A method according to any of the preceding claims, wherein global information for
a feature F is calculated as:

$$GI_F = \frac{\sum_{k=1}^{m} cs_{k,F} \cdot \text{value}_{k,F}}{m - \sum_{k=1}^{m} (1 - cs_{k,F})}$$

Where:
m is the number of AI systems that have information about the feature F
cs_{k,F} is the stored confidence score of the AI system with ID k for the feature F
value_{k,F} is the value of the AI system k for the feature F.
10. A method according to any of the preceding claims, wherein the AI system receiving
the global information controls the movement and/or position of an entity, and wherein
receipt of the global information leads to a change of a movement and/or position
of the entity.
11. A method according to any of the preceding claims, wherein the feature is a position
of an object and the reconciling between values uses all the triangles that can be
formed between locations of any three AI systems and which include the object within
the triangle, part global information for each triangle being calculated from the
three AI systems in that triangle, and then overall global information being calculated
by combining the part global information for each triangle.
12. A system (10) to reconcile values of a feature, each value being provided by a different
artificial intelligence, AI, system, the system (10) comprising memory (994) and a
processor (993) arranged to provide:
an information collector (30) to collect logs from the different AI systems, each
log including a value of the feature;
a conflict auditor (40) to identify any discrepancy between the values, and when there
is any discrepancy to create global information from the values provided by the different
AI systems, the global information taking into account the values, and
a global information activator (50) to send the global information to an AI system
when the global information differs from the value of that AI system,
wherein the AI systems are configured to determine a position of an object or identify
an object, and the feature is a determined position of an object, or an identification
of an object.
13. A trusted AI network comprising a group of autonomous AI systems linked to a system
according to claim 12.
14. A computer program comprising instructions which, when the program is executed
by a computer, cause the computer to carry out the method of any of the preceding
claims.