TECHNICAL FIELD
[0001] Various exemplary embodiments disclosed herein relate generally to a method for symbolic
execution on constrained devices.
BACKGROUND
[0002] During a trust provisioning process, a manufacturer establishes secrets between a
manufactured device and a customer before delivery. This process usually depends on
long-term secret inputs from a customer and the manufacturer of the device that are
used to derive the secrets to be put on the device,
i.e., there is a component that handles these long-term secret inputs and provides
the device secrets as output. This component is usually a hardware security module
(HSM).
[0003] A major threat in this scenario is that the HSM leaks information about long-term
secrets through its output. This may happen in various ways including accidentally
(programming error) or maliciously (maliciously crafted program). Thus, in many cases
a thorough examination (with final certification) of the HSM code must be done before
it can be used by the manufacturer for trust provisioning. With certification, the
programming of the HSM is fixed. However, the required programming is usually different
for every customer which leads to a large evaluation and certification effort.
[0006] US 2013/0205134 A1 describes methods and apparatuses for access credential provisioning. A method may
include causing a trusted device identity for a mobile apparatus to be provided to
an intermediary apparatus. The intermediary apparatus may serve as an intermediary
between the mobile apparatus and a provisioning apparatus for a network. The method
may further include receiving, from the intermediary apparatus, network access credential
information for the network. The network access credential information may be provisioned
to the mobile apparatus by the provisioning apparatus based at least in part on the
trusted device identity.
SUMMARY
[0007] A brief summary of various exemplary embodiments is presented below. Some simplifications
and omissions may be made in the following summary, which is intended to highlight
and introduce some aspects of the various exemplary embodiments, but not to limit
the scope of the invention. Detailed descriptions of an exemplary embodiment adequate
to allow those of ordinary skill in the art to make and use the inventive concepts
will follow in later sections.
[0008] The present invention is defined by the appended independent claims. Dependent claims
constitute embodiments of the invention. The embodiments of the following description
which are not covered by the appended claims are considered as not being part of the
present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] In order to better understand various exemplary embodiments, reference is made to
the accompanying drawings, wherein:
FIG. 1 illustrates a hardware security module (HSM), its inputs and outputs, and functions;
FIG. 2 illustrates a process for defining instructions and constraints and using a
formal model to prove that the instructions along with the constraints meet desired
security requirements; and
FIG. 3 provides an example of a structure of the symbolic execution of a key definition
function.
[0010] To facilitate understanding, identical reference numerals have been used to designate
elements having substantially the same or similar structure and/or substantially the
same or similar function.
DETAILED DESCRIPTION
[0011] The description and drawings illustrate the principles of the invention. It will
thus be appreciated that those skilled in the art will be able to devise various arrangements
that, although not explicitly described or shown herein, embody the principles of
the invention and are included within its scope. Furthermore, all examples recited
herein are principally intended expressly to be for pedagogical purposes to aid the
reader in understanding the principles of the invention and the concepts contributed
by the inventor(s) to furthering the art, and are to be construed as being without
limitation to such specifically recited examples and conditions. Additionally, the
term, "or," as used herein, refers to a non-exclusive or (
i.e., and/or), unless otherwise indicated (
e.g., "or else" or "or in the alternative"). Also, the various embodiments described
herein are not necessarily mutually exclusive, as some embodiments can be combined
with one or more other embodiments to form new embodiments.
[0012] Embodiments described below include a method incorporating the following three elements:
1) a semi-formal description of an instruction set for the trust provisioning process
including constraints for the use of each instruction; 2) a formal model that can
be verified with an automated theorem prover,
i.e., the formal model shows that the constraints of the instructions achieve a certain
goal (
e.g., keeping secret data confidential); and 3) a constraint checker that performs a symbolic
execution for a given set of instructions and verifies whether the instructions meet
the constraints.
[0013] One way to solve the problem stated above is to make the trust provisioning generic,
that is, the programming of the HSM is independent of the customer. Instead, the HSM
gets a script including a list of instructions along with the long-term secrets that
describes how the long-term secrets are to be used to derive the device keys. This
script can then be provided for each customer and may even be provided by the customer.
[0014] The problem that arises with this approach is that the behavior of the HSM is now
much more flexible and harder to assess. The embodiments described herein solve this
problem: they describe a way to symbolically execute the script on the HSM to check
whether it is "benign". Only once the script is verified will the HSM execute it
on the provided long-term secrets.
[0015] A formal specification of the allowed instructions with constraints and a formal
model are provided to the evaluator during a certification. The model shows that the
constraints are sound,
i.e., that every script obeying them will keep the long-term secrets secret. This allows
the evaluator to assess the correct and secure functioning of the HSM.
[0016] FIG. 1 illustrates an HSM, its inputs and outputs, and functions. The HSM 110 receives
a list of instructions 105. The list of instructions 105 may be, for example, a script
describing instructions to be performed by the HSM that are user specific. The list
of instructions 105 may be provided by the customer or developed by the trust provider.
A constraint checker 115 in the HSM 110 receives the list of instructions 105, and
checks the list of instructions 105 against predefined constraints. The constraint
checker 115 performs a symbolic execution of the received list of instructions 105
to verify that the list of instructions 105 meets the constraints. The predefined
constraints are intended to prevent security violations, such as leaking of the customer
or manufacturer secure information. If the list of instructions 105 fails to meet
the predefined constraints, then the check fails and the list of instructions is rejected
130. If the list of instructions 105 passes the constraint check 115, then the list
of instructions 105 is executed by an instruction execution processor 120. The instruction
execution processor 120 is allowed to receive and process the confidential inputs
135. The confidential inputs 135 may be stored in a secure memory that may be, for
example, tamper resistant. The instruction execution processor 120 then outputs HSM outputs
125. The HSM outputs are the data and other information needed to accomplish the trust
provisioning, and may then be installed in customer devices.
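The control flow of FIG. 1 may be illustrated by the following minimal Python sketch. It is not the HSM implementation itself; all names (run_hsm, check_constraints, execute, and the toy checker and executor) are hypothetical and stand in for the constraint checker 115 and the instruction execution processor 120.

```python
# Illustrative sketch of the HSM control flow of FIG. 1; all names here
# are hypothetical placeholders, not an actual HSM interface.

def run_hsm(instructions, confidential_inputs, check_constraints, execute):
    """Execute the script only if the symbolic constraint check passes;
    otherwise the list of instructions is rejected (130 in FIG. 1)."""
    if not check_constraints(instructions):  # constraint checker 115: no secrets touched
        return None                          # rejected
    # instruction execution processor 120: only a verified script ever
    # sees the confidential inputs 135
    return execute(instructions, confidential_inputs)

# Toy checker and executor, for illustration only.
def toy_check(instrs):
    return all(i != "output_secret_in_plain" for i in instrs)

def toy_execute(instrs, secrets):
    return [f"{i}({secrets})" for i in instrs]
```

A script that violates a constraint is rejected before any confidential input is processed, mirroring the reject path 130 of FIG. 1.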
[0017] The HSM may be any secure device or a combination of secure devices. The secure device
may include memory and a processor. The memory may include various types of memory
such as, for example L1, L2, or L3 cache or system memory. As such, the memory may
include static random access memory (SRAM), dynamic RAM (DRAM), flash memory, read
only memory (ROM), or other similar memory devices. The memory may also be secure
memory that resists tampering or access by an attacker. The HSM may also interact
with an external memory that may be of any type and may be secure so as to resist
tampering.
[0018] The processor may be any type of processor used to implement a cryptographic or secure
function. A single processor may be used in the HSM, or two processors may be used
in the HSM: one to implement the constraint checker 115; and the other to implement
the instruction execution processor 120. The processor may be any hardware device
capable of executing instructions, including the list of instructions 105, stored
in memory or other storage (not shown) or otherwise processing data. As such, the
processor may include a microprocessor, field programmable gate array (FPGA), application-specific
integrated circuit (ASIC), or other similar devices. The processor may also be a secure
processor that resists tampering or access by an attacker. Further, the processor
may include a specific hardware implementation of a cryptographic or secure function.
Further, the HSM may be a compute-constrained device, meaning that the computing capability
of the HSM is limited compared to more robust and general-purpose processors. This
may be the case partly because the processor is manufactured to be secure and tamper
proof, and such manufacturing becomes more expensive as the size and capability of the
processor increase. Such constraints may include limited processing power and a limited
amount of persistent memory.
[0019] FIG. 2 illustrates a process for defining instructions and constraints and using
a formal model to prove that the instructions along with the constraints meet desired
security requirements. The process of FIG. 2 begins by developing a semi-formal description
of an instruction set 205 for the trust provisioning process along with constraints
for the use of each instruction. Next, a formal model 210 is provided that may be
verified with an automated theorem prover. The formal model 210 may be applied to
the instructions and constraints to show that the constraints of the instructions achieve
certain security goals,
e.g., keeping secret data confidential.
[0020] The formal model 210 may include two parts. The first part of the formal model 210
is the input for the theorem prover, which may be a text file. The input for the automatic
theorem prover is the translation of the instructions and their constraints to a formal
language. As the proof is conducted in the formal model 210, there is a natural gap
between the formalized instructions and constraints and the instructions and constraints
as executed on the HSM. To bridge this gap, the second part of the formal model 210
must provide a list of assumptions that have been made while creating the formal model
and justify them.
[0021] A security verifier 215 receives the formal model and proof 210 together with the
definition of instructions and constraints 205. The security verifier 215 checks whether
the formal model 210 correctly captures the instructions and constraints 205. The
proof then serves as evidence that the instructions and constraints 205 meet the security
requirements 220. Once this process is complete, the instructions and constraints
may be used by the constraint checker 115 to check lists of instructions 105 received
by the HSM.
[0022] Now a more specific example will be given to illustrate the above-described processes.
[0023] As discussed above, trust provisioning (TP) may securely generate chip-individual
data for insertion in chips during the manufacturing process. This dynamic TP data
is generated in HSMs to ensure confidentiality of customer data.
[0024] A key definition file (KDF) defines how dynamic TP data is generated by the trust
provisioning HSMs. The KDF contains a set of instructions corresponding to functions
to be executed in the HSM.
[0025] The KDF instructions handle secrets generated during the execution of the KDF as
well as long-term secrets provided by the device manufacturer and the customer as
inputs to the HSM. The HSM needs to guarantee that the output of the KDF does not
violate the confidentiality of these secrets. This violation could happen accidentally,
e.g., by forgetting to encrypt a secret in the KDF output, or even maliciously,
e.g., by intentionally creating a KDF that leaks one of the customer's or manufacturer's
long-term secret keys. Examples of requirements and rules restricting the set of all
possible KDFs to a subset of "secure" KDFs that do protect all secrets according to
their security level are provided below. To verify whether a given KDF is secure the
HSM symbolically executes the KDF once and checks all requirements before it then
actually executes it on real input data.
[0026] The KDF is an example of a list of instructions 105 as described in FIG. 1. The KDF
is the interface to the HSM. It is an executable record, containing both the instructions
as well as all input data needed to generate the die-individual chip data.
The KDF may include several sections defining input parameters (
e.g., constants, intermediate results,
etc.)
, a section specifying the sequence of instructions to be executed by the HSM, and
output sections defining which data leaves the HSM. To guarantee that a KDF does not
leak any secrets, every data field may be tagged with a security level. Before a KDF
is executed by an HSM, the HSM determines whether the KDF complies with the given
security levels through a preceding symbolic execution.
[0027] In this example, there may be four different kinds of KDF fields: constants; secrets;
inputs; and data fields.
[0028] Constants may be fields which are initialized when the KDF is created. These fields
are read only and may be classified as "no secret".
[0029] Secrets may be fields which are initialized when the KDF is created. Depending on
the type of secret, symmetric keys or an asymmetric key pair may be used to protect
the imported secrets. The plain values are only available inside the HSM. Each secret
field is classified according to the signing key/encryption key of the HSM-Secret
that is applied. Secrets are read-only fields.
[0030] Inputs may be fields which reference a part of the UID input generated for each individual
chip during KDF execution. During the KDF creation process only the length and the
general type
(e.g., production year, position on wafer,
etc.) of this generated data are known. These fields are read-only and are classified
as "no secret".
[0031] Data fields are fields which are designed to store intermediate or final results
when the KDF is executed. Because no dynamic memory allocation is possible when the
KDF is executed, the size of the fields needs to be allocated beforehand (
i.e., when the KDF is created). Data fields are the only fields which can be written.
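The four KDF field kinds described in paragraphs [0027] to [0031] may be sketched as follows. The encoding is hypothetical; it only captures the stated properties that data fields alone are writable, that constants and inputs are classified "no secret", and that the classification of a secret field depends on the protecting HSM key (modeled here as None).

```python
# Hypothetical encoding of the four KDF field kinds and their stated
# properties. This is an illustrative model, not the KDF file format.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FieldKind:
    name: str
    writable: bool
    default_level: Optional[str]  # None: classification assigned elsewhere

CONSTANT = FieldKind("constant", writable=False, default_level="no secret")
SECRET = FieldKind("secret", writable=False, default_level=None)
INPUT = FieldKind("input", writable=False, default_level="no secret")
DATA = FieldKind("data", writable=True, default_level=None)
```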
[0032] Each instruction in the KDF operates on fields to read input data and to write intermediate
output data. Fields may be passed to instructions by means of references, denoting
links to specific field entries in the KDF. A reference may address the whole field,
or a sub-range of the field, defined with a start index and length within the referenced
field. Some instructions may prevent the use of sub-range references.
[0033] The output section can reference either a constant, input, or data field which is
then written to the output. Output fields are collected at the end of the KDF execution,
after all instructions have been executed. For that reason, only the last value written
to a data field will actually be part of the output.
[0034] Before the HSM accepts a KDF for execution, the HSM performs a symbolic execution
to find possible security violations. The HSM may perform this symbolic execution
only when it receives a new KDF. The symbolic execution may distinguish various security
levels (
e.g., S0, S1, S2, S3, and S4). Every input field of the KDF must be tagged with a security
level. The security levels define the security requirements for this input during
KDF execution. In this example, there may be one security level for public data (S0),
three different security levels for long-term secrets (S1, S3, S4), and one security
level for generated secrets (S2). Note that other numbers and types of security levels
may be used as well. A long-term secret is a secret generated outside of the KDF and
provided as input to the KDF. In contrast, generated secrets are freshly generated
during each KDF run. The five security levels S0 to S4 used in this example may be
defined as follows.
[0035] S0 is defined as no secret. If a field is tagged as no secret, its contents are considered
to be public.
[0036] S1 is defined as a known secret. Known secrets are long-term secrets with lower security
requirements than customer secrets and system secrets (lower requirements in particular
with regard to their protection outside of the HSM). Because they are long-term secrets,
no information must leak about the contents of these fields, as any leakage could
add up over multiple KDF executions. Due to the lower security requirements, keys
that are classified as known secrets may not be used to protect keys classified as
S2, S3 or S4.
[0037] S2 is defined as a generated secret. A field is tagged as generated secret, if it
contains a secret key generated during the execution of the KDF. Some information
may leak about generated secrets,
e.g., a cyclic redundancy check (CRC) or a hash of the secret, but their confidentiality
level must not drop below a defined threshold bit security level. Keys that are classified
as generated secrets may not be used to protect keys classified as S3 or S4.
[0038] S3 is defined as a customer secret. Fields tagged as customer secret contain long-term
secrets provided by the customer. No information may leak about the contents of these
fields. Keys that are classified as customer secrets may not be used to protect keys
classified as S4.
[0039] S4 is defined as a system secret. Fields tagged as system secret contain long-term
secrets provided by the manufacturer. No information may leak about the contents of
these fields. System secrets are usually keys protecting data during the trust provisioning
process. Hence these keys must only be used as keys for encryption/decryption and
never be (part of) the plaintext.
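The key-usage rules stated for levels S1 to S4 may be collected into a single check, sketched below. The total ordering S0 < S1 < S2 < S3 < S4 and the behavior for S0 keys are assumptions; the source only states which levels S1, S2, and S3 keys must not protect, and that S4 secrets must never appear as plaintext.

```python
# Hypothetical encoding of the key-usage rules for security levels S0-S4.
# ASSUMPTION: levels are totally ordered S0 < S1 < S2 < S3 < S4; the
# treatment of S0-level keys is not specified in the source.

RANK = {"S0": 0, "S1": 1, "S2": 2, "S3": 3, "S4": 4}

def may_protect(key_level: str, data_level: str) -> bool:
    """A key may protect data only of its own or a lower level: S1 keys
    must not protect S2/S3/S4, S2 keys not S3/S4, S3 keys not S4."""
    return RANK[key_level] >= RANK[data_level]

def may_be_plaintext(level: str) -> bool:
    """System secrets (S4) must only ever be used as keys, never as
    (part of) the plaintext."""
    return level != "S4"
```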
[0040] Before the HSM may use a KDF it needs to check its security properties through a
symbolic execution. A symbolic execution is a mechanism to check high-level properties
(
e.g., on data flow) of a program without actually executing it. Given that a KDF is a
linear (
i.e., nonbranching) program and the set of instructions is relatively small, a complete
symbolic execution is feasible on an HSM.
[0041] FIG. 3 provides an example of a structure of the symbolic execution of a KDF. The
KDF has n instructions Instruction_1 to Instruction_n 305 and m inputs Input_1 to
Input_m 310. The inputs may be the constant, secret, and input fields of the KDF described
above. Each instruction may take any of the already existing fields as a parameter.
Furthermore, each instruction may introduce at least one new field, which is the output.
The use of the fields during the symbolic execution will be tracked and recorded so
that the state of execution can be traced. Hence a copy instruction, for example,
will take one field as a parameter and introduce one new field as output. The properties
of the output field - such as the data type and the confidentiality level - depend
on the instruction and are hence defined in its description.
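The structure of FIG. 3 may be sketched as a simple loop: every instruction may take any already existing field as a parameter, introduces at least one new output field, and the usage of fields is recorded so the state of execution can be traced. The concrete representation (tuples, property dictionaries) is an assumption made for illustration.

```python
# Minimal sketch of the symbolic-execution structure of FIG. 3. The data
# representation here is hypothetical; no real KDF format is implied.

def symbolic_execute(inputs, instructions):
    """`inputs`: dict mapping field names (Input_1 .. Input_m) to symbolic
    properties; `instructions`: list of (output_name, param_names, derive),
    one entry per Instruction_1 .. Instruction_n."""
    fields = dict(inputs)
    trace = []  # records which fields each instruction used
    for out_name, param_names, derive in instructions:
        missing = [p for p in param_names if p not in fields]
        if missing:
            raise ValueError(f"instruction references unknown fields: {missing}")
        trace.append((out_name, tuple(param_names)))
        # the properties of the output field (data type, confidentiality
        # level) depend on the instruction's description
        fields[out_name] = derive([fields[p] for p in param_names])
    return fields, trace
```

A copy instruction, for example, takes one field as a parameter and introduces one new field whose properties mirror its input.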
[0042] To assure that the confidentiality of secrets is maintained, the symbolic execution
tracks the confidentiality level (CL) of every field. This can be an integer between
zero and some maximum confidentiality value or the value NULL meaning that the CL
has not been assigned yet. How the value NULL is treated for computations must be
stated specifically in every case except when there is a defined default behavior.
[0043] If the confidentiality level of a field is not NULL, it provides a lower bound on
the security, measured in bits, with regard to the confidentiality of that field.
That is the expected number of operations required to learn the value of this field.
For example, if a field has a confidentiality level of 100 bits, the expected effort
of the adversary is 2^100 operations to learn its value. Often, assumptions regarding
the security of cryptographic algorithms (i.e., the security of AES, TDEA, RSA, ...)
are important for assessing the confidentiality level
of a field. Which level of security can be expected from the different operations
may be specified.
[0044] The confidentiality of fields is traced back to the input fields after the symbolic
execution. Each output field of the KDF will get a confidentiality level of 0. For
each field the confidentiality update section of the instruction that produced the
output in that field dictates how the confidentiality level is propagated to the inputs,
i.e., how the confidentiality level of the inputs changes based on a change of the confidentiality
level of one of the outputs.
[0045] The confidentiality level of a field is actually the minimum of two confidentiality
levels: The confidentiality level considering direct leakage and the confidentiality
level considering threshold leakage. These two concepts are explained as follows.
[0046] The confidentiality level ConfDL(F) considering direct leakage of a field F is the
confidentiality level of F considering only published information (
i.e., with a confidentiality level of zero) that lowers the confidentiality level of F
by a certain amount L, the leakage. Hence it may be denoted as Leak (F, L) in the
symbolic execution. The value L must be independent of the confidentiality level of F. This
kind of leakage is dubbed "direct leakage" because it directly helps the adversary.
For example, publishing the 32-bit checksum computed over F directly leaks 32 bits
of information about F's value. The confidentiality level of F has a lower bound of
0 (
i.e., once it reaches 0 all further leakage is ignored). In contrast to the direct leakage,
there is also threshold leakage as explained below.
[0047] The confidentiality level considering threshold leakage of a field F (denoted as
ConfTH(F)) is the confidentiality level of F considering only published information
(
i.e., with a confidentiality level of zero) that adds a lower bound B for the confidentiality
level of F. Hence it is denoted as AddConfLB (F, B) in the symbolic execution. This
kind of leakage is called threshold leakage because it directly helps the adversary
only after performing an expected amount of 2^ConfTH(F) operations; however, the
assumption is that the adversary learns the complete value of F once he invests
2^ConfTH(F) operations (or more). For example, publishing the output of a symmetric encryption
of F using a key K that provides a security level of 80 bits sets the threshold leakage
of F to 80. ConfTH(F) has a natural lower bound of 0. In contrast to the threshold
leakage, there is also direct leakage as explained above.
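The two bookkeeping operations may be sketched as follows. Here Leak (F, L) lowers the direct-leakage level with a floor of 0, and AddConfLB (F, B) is read as capping the threshold-leakage level at B, so that the weakest protecting key wins, consistent with the 80-bit encryption example above; this reading, and the representation as a small class, are assumptions. The overall level is the minimum of the two, as stated in paragraph [0045].

```python
# Hedged sketch of the confidentiality bookkeeping for a single field F.
# ASSUMPTION: AddConfLB is modeled as min(current, B), matching the
# example where encryption under an 80-bit key sets ConfTH(F) to 80.

class Field:
    def __init__(self, conf_dl, conf_th):
        self.conf_dl = conf_dl  # level considering direct leakage, ConfDL(F)
        self.conf_th = conf_th  # level considering threshold leakage, ConfTH(F)

    def leak(self, amount):
        # direct leakage: e.g. a published 32-bit checksum leaks 32 bits;
        # lower bound of 0, further leakage beyond that is ignored
        self.conf_dl = max(self.conf_dl - amount, 0)

    def add_conf_lb(self, bound):
        # threshold leakage: the weakest protecting key caps ConfTH(F)
        self.conf_th = min(self.conf_th, bound)

    def confidentiality_level(self):
        # overall level is the minimum of the two (paragraph [0045])
        return min(self.conf_dl, self.conf_th)
```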
[0048] Every instruction specifies a ConfDL(F) and ConfTH(F) operation for each of its output
fields F. A field is an output field of an instruction if the instruction writes to
it. The ConfDL(F) and ConfTH(F) operations provide the current confidentiality level
(considering direct leakage or considering threshold leakage) of the corresponding
output field. This value may depend on the confidentiality level of other fields,
e.g., inputs of the instruction (which, in turn, may be output fields of other instructions;
to learn their current confidentiality level, the corresponding function is called
for them).
[0049] Similarly, every instruction may define a Leak (F, L) and an AddConfLB (F, B) operation
for each of its output fields. These operations describe how direct leakage / threshold
leakage are propagated to the inputs of the instruction.
[0050] The symbolic execution assigns a data type to every field in the KDF. This allows
it to check the type requirements of instructions. If a parameter of an instruction
requires a specific data type, the symbolic execution accepts every subtype of that
data type as well. Some data types have additional parameters. The symbolic execution
sets specific values for these parameters once it assigns such a data type to a field.
Parameters allow for a more concise description of requirements. The alternative would
be to have additional subtypes corresponding to all possible parameter values.
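The subtype rule for parameters may be sketched as a walk up a type hierarchy: a parameter requiring a data type also accepts every subtype of that type. The concrete hierarchy below is entirely hypothetical and serves only to illustrate the check.

```python
# Illustrative subtype check for the data types assigned during symbolic
# execution. The hierarchy (key -> data, etc.) is a made-up example.

SUPERTYPE = {"key": "data", "aes_key": "key", "byte_string": "data"}

def is_subtype(t, required):
    """Every type is a subtype of itself; otherwise walk up the
    (hypothetical) hierarchy until `required` is found or the root is
    reached."""
    while t is not None:
        if t == required:
            return True
        t = SUPERTYPE.get(t)
    return False
```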
[0051] Various security considerations should be taken into account while defining the requirements
for instructions and the symbolic execution. Often, the definitions are a balancing
act between simple requirements and use cases that need to be supported. Many of the
considerations do not directly point to a specific attack that would be possible if
they are ignored. The design principle here - as in general for the use of cryptography
- is safe use,
i.e., sticking to constructions that are known to be secure and avoiding those which are
not - even if no concrete attack is publicly known. The purpose behind the specific
definition of data types is to prevent misuse of data with an impact on security.
[0052] A method according to the embodiments of the invention may be implemented on a computer
as a computer implemented method. Executable code for a method according to the invention
may be stored on a computer program medium. Examples of computer program media include
memory devices, optical storage devices, integrated circuits, servers, online software,
etc. Accordingly, key delivery systems described herein may include a computer implementing
a computer program. Such a system may also include other hardware elements, including
storage and a network interface for the transmission of data with external systems
as well as among elements of the key delivery system.
[0053] In an embodiment of the invention, the computer program may include computer program
code adapted to perform all the steps of a method according to the invention when
the computer program is run on a computer. Preferably, the computer program is embodied
on a non-transitory computer readable medium.
[0054] Any combination of specific software running on a processor to implement the embodiments
of the invention constitutes a specific dedicated machine.
[0055] As used herein, the term "non-transitory machine-readable storage medium" will be
understood to exclude a transitory propagation signal but to include all forms of
volatile and nonvolatile memory. Further, as used herein, the term "processor" will
be understood to encompass a variety of devices such as microprocessors, field-programmable
gate arrays (FPGAs), application-specific integrated circuits (ASICs), and other similar
processing devices. When software is implemented on the processor, the combination
becomes a single specific machine.
[0056] It should be appreciated by those skilled in the art that any block diagrams herein
represent conceptual views of illustrative circuitry embodying the principles of the
invention. Although the various exemplary embodiments have been described in detail
with particular reference to certain exemplary aspects thereof, it should be understood
that the invention is capable of other embodiments and its details are capable of
modifications in various obvious respects. As is readily apparent to those skilled
in the art, variations and modifications can be effected while remaining within the
scope of the invention. Accordingly, the foregoing disclosure, description, and figures
are for illustrative purposes only and do not in any way limit the invention, which
is defined only by the claims.
1. A method of performing a generic trust provisioning of a device, comprising:
- receiving, by a hardware security module, HSM (110), a customer-independent list
of instructions (105) configured to produce trust provisioning information (125);
- performing, by the HSM (110), a constraint check on the list of instructions (105),
including performing a symbolic execution of the list of instructions (105);
- receiving, by the HSM (110), confidential inputs (135);
- executing, by the HSM (110), the list of instructions (105) on the confidential
inputs (135) when the list of instructions (105) passes the constraint check;
- providing, by the HSM (110), the output of the executing step as the trust provisioning
information (125) to be installed on the device,
wherein the symbolic execution of the list of instructions tracks a confidentiality
level of any parameter used by the list of instructions, to verify whether the list
of instructions meets a set of predefined constraints for preventing a security violation
for the confidential inputs on which the list of instructions will be executed.
2. The method of claim 1, further comprising receiving instruction definitions and constraints
(205) used by the constraint check on the list of instructions (105).
3. The method of claim 2, wherein the received instruction definitions and constraints
(205) have been verified using a formal model (210) to verify that the received instruction
definitions and constraints (205) meet a specified security requirement (220), wherein
the formal model (210) is verified by an automated theorem prover.
4. The method of any preceding claim, wherein the confidential inputs (135) include confidential
information of a manufacturer of the device to be trust provisioned and confidential
information of a customer receiving the device to be trust provisioned.
5. The method of any preceding claim, wherein the HSM (110) is a compute constrained
device.
6. The method of any preceding claim, wherein a confidentiality level of an output of
an instruction corresponds to a confidentiality level of the input to the instruction
and a definition of the instruction.
7. The method of any preceding claim, wherein the symbolic execution of the list of instructions
(105) includes assigning a data type to each parameter for the list of instructions.
8. The method of claim 7, wherein a parameter has a specific set of values.
9. The method of any preceding claim, wherein the symbolic execution of the list of instructions
(105) includes for each instruction determining that the confidentiality level of
input to the instruction, the confidentiality level of the output, and the definition
of the instruction meet a specified security requirement.
10. A non-transitory machine-readable storage medium encoded with executable instructions
which, when executed, carry out the method of any preceding claim.
codiert ist, die bei Ausführung das Verfahren nach einem vorhergehenden Anspruch durchführen.
1. Procédé pour effectuer un provisionnement de confiance générique d'un dispositif,
comprenant :
- la réception, par un module de sécurité matérielle, HSM (110), d'une liste d'instructions
indépendantes du client (105), configurée pour produire des informations de provisionnement
de confiance (125) ;
- l'exécution, par le HSM (110), d'un contrôle des contraintes sur la liste d'instructions
(105), cela comprenant le fait d'effectuer une exécution symbolique de la liste d'instructions
(105) ;
- la réception, par le HSM (110), d'entrées confidentielles (135) ;
- l'exécution, par le HSM (110), de la liste d'instructions (105) sur les entrées
confidentielles (135) lorsque la liste d'instructions (105) satisfait au contrôle
des contraintes ;
- la fourniture, par le HSM (110), de la sortie de l'étape d'exécution en tant qu'informations
de provisionnement de confiance (125) devant être installées sur le dispositif,
dans lequel l'exécution symbolique de la liste d'instructions poursuit un niveau de
confidentialité de l'un quelconque des paramètres utilisés par la liste d'instructions,
afin de vérifier si la liste d'instructions répond à un ensemble de contraintes prédéfinies
pour empêcher une violation de sécurité pour les entrées confidentielles sur lesquelles
la liste d'instructions sera exécutée.
2. Procédé selon la revendication 1, comprenant en outre la réception de définitions
d'instructions et de contraintes (205) utilisées par le contrôle des contraintes sur
la liste d'instructions (105).
3. Procédé selon la revendication 2, dans lequel les définitions d'instructions et les
contraintes (205) reçues ont été vérifiées à l'aide d'un modèle formel (210) pour
vérifier que les définitions d'instructions et les contraintes (205) reçues répondent
à une exigence de sécurité spécifiée (220), dans lequel le modèle formel (210) est
vérifié par un démonstrateur de théorème automatisé.
4. Procédé selon l'une quelconque des revendications précédentes, dans lequel les entrées
confidentielles (135) comprennent des informations confidentielles d'un fabricant
du dispositif à provisionner en confiance et des informations confidentielles d'un
client recevant le dispositif à provisionner en confiance.
5. Procédé selon l'une quelconque des revendications précédentes, dans lequel le HSM
(110) est un dispositif à calcul contraint.
6. Procédé selon l'une quelconque des revendications précédentes, dans lequel le niveau
de confidentialité d'une sortie d'une instruction correspond au niveau de confidentialité
de l'entrée de l'instruction et à une définition de l'instruction.
7. Procédé selon l'une quelconque des revendications précédentes, dans lequel l'exécution
symbolique de la liste d'instructions (105) comprend l'attribution d'un type de données
à chaque paramètre pour la liste d'instructions.
8. Procédé selon la revendication 7, dans lequel un paramètre possède un ensemble spécifique
de valeurs.
9. Procédé selon l'une quelconque des revendications précédentes, dans lequel l'exécution
symbolique de la liste d'instructions (105) comprend, pour chaque instruction, la
détermination du fait que le niveau de confidentialité de l'entrée de l'instruction,
le niveau de confidentialité de la sortie et la définition de l'instruction répondent
à une exigence de sécurité spécifiée.
10. Support de stockage non transitoire lisible par machine, sur lequel sont codées des
instructions exécutables qui, lorsqu'elles sont exécutées, mettent en œuvre le procédé
selon l'une quelconque des revendications précédentes.