(19)
(11)EP 3 080 702 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
14.08.2019 Bulletin 2019/33

(21)Application number: 13899151.8

(22)Date of filing:  12.12.2013
(51)International Patent Classification (IPC): 
G06F 11/36(2006.01)
G06F 11/34(2006.01)
G06F 11/07(2006.01)
(86)International application number:
PCT/US2013/074661
(87)International publication number:
WO 2015/088534 (18.06.2015 Gazette  2015/24)

(54)

TECHNIQUES FOR DETECTING RACE CONDITIONS

VERFAHREN ZUR ERKENNUNG VON RACE-BEDINGUNGEN

TECHNIQUES DE DÉTECTION DE CONDITIONS DE COURSE


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(43)Date of publication of application:
19.10.2016 Bulletin 2016/42

(73)Proprietor: Intel Corporation
Santa Clara, CA 95054 (US)

(72)Inventors:
  • HU, Shiliang
    Los Altos, CA 94024 (US)
  • POKAM, Gilles A.
    Fremont, CA 94555 (US)
  • PEREIRA, Cristiano L.
    Groveland, CA 95321 (US)
  • GOTTSCHLICH, Justin E.
    Santa Clara, CA 95054 (US)

(74)Representative: Rummler, Felix et al
Maucher Jenkins 26 Caxton Street
London SW1H 0RJ (GB)


(56)References cited:
WO-A1-2012/038780
US-A1- 2003 140 326
US-A1- 2006 200 823
US-A1- 2009 282 288
US-A1- 2003 056 149
US-A1- 2004 255 277
US-A1- 2006 224 873
US-B1- 7 543 187
  
      
    Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


    Description

    Technical Field



    [0001] Embodiments described herein generally relate to detecting possible race conditions in accesses to data by different threads of an application routine.

    Background



    [0002] It has become commonplace for a wide variety of application routines to be written for multi-threaded execution. This arises from the incorporation of processor components capable of supporting multi-threaded execution into an increasing variety of computing devices. However, while multi-threaded execution provides numerous advantages, the work of writing portions of an application routine to execute concurrently on different threads in a coordinated manner presents challenges. Mistakes resulting in uncoordinated accesses (e.g., read and write operations, or pairs of write operations) by different portions of an application routine to the same data are not uncommon, and can beget unexpected behavior that can be difficult to trace back to the mistakes made in writing the application routine.

    [0003] Of particular concern are instances in which one portion of an application routine reads data so close in time to when another portion of the application routine writes that same data that it is not reliably predictable which of the read and write operations will occur before the other. Thus, the read operation may retrieve the data either before or after it is modified by the write operation. Also of particular concern are instances in which the same data is twice written to in a pair of uncoordinated write operations such that the state of that data is not reliably predictable after those two write operations. In other words, a lack of coordination between two accesses to the same data leads to an unpredictable result.
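
    By way of a purely illustrative sketch (assuming a POSIX threads environment; the names are hypothetical), the following C fragment exhibits such an uncoordinated pair of accesses: two threads touch the same variable without any coordinating mechanism, so it is not reliably predictable whether the read observes the value before or after the write modifies it.

        #include <pthread.h>
        #include <stdio.h>

        /* Shared data accessed by two concurrently executed portions of a
           routine. Nothing coordinates the accesses, so the read in one
           thread races against the write in the other. */
        static int shared_data = 0;

        static void *writer_portion(void *arg)
        {
            (void)arg;
            shared_data = 42;            /* uncoordinated write */
            return NULL;
        }

        static void *reader_portion(void *arg)
        {
            (void)arg;
            int observed = shared_data;  /* may see 0 or 42, unpredictably */
            printf("observed %d\n", observed);
            return NULL;
        }

        int main(void)
        {
            pthread_t writer, reader;
            pthread_create(&writer, NULL, writer_portion, NULL);
            pthread_create(&reader, NULL, reader_portion, NULL);
            pthread_join(writer, NULL);
            pthread_join(reader, NULL);
            return 0;
        }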

    [0004] Various techniques have been devised to trace, step-by-step, the execution of each such portion of an application routine on its separate thread, including executing a debugging routine alongside those different portions to monitor various break points inserted among the instructions of those portions and/or to monitor their data accesses. However, such debugging routines typically themselves consume considerable processing and/or storage resources. Indeed, the very execution of such a debugging routine alongside an application routine may consume enough resources to alter the behavior of the application routine such that the uncoordinated accesses sought to be traced for debugging simply never occur. Further, in situations where the processing and/or storage resources provided by a computing device are closely matched to the resources required by an application routine, there may be insufficient resources available to accommodate the execution of both the application routine and such a debugging routine.

    [0005] US 2006/224873 A1 describes systems, methodologies, media, and other embodiments associated with acquiring instruction addresses associated with performance monitoring events. One exemplary system embodiment includes logic for recording instruction and state data associated with events countable by performance monitoring logic associated with a pipelined processor. The exemplary system embodiment may also include logic for traversing the instruction and state data on a cycle count basis. The exemplary system may also include logic for traversing the instruction and state data on a retirement count basis.

    Brief Description of the Drawings



    [0006] 

    FIG. 1 illustrates an embodiment of a race condition debugging system.

    FIG. 2 illustrates an alternate embodiment of a race condition debugging system.

    FIGS. 3-4 each illustrate detection of a cache event according to an embodiment.

    FIGS. 5-6 each illustrate a portion of an embodiment of a race condition debugging system.

    FIGS. 7-9 each illustrate a logic flow according to an embodiment.

    FIG. 10 illustrates a processing architecture according to an embodiment.


    Detailed Description



    [0007] Various embodiments are generally directed to detecting race conditions arising from uncoordinated accesses to data by different portions of an application routine or by related application routines by detecting occurrences of one or more specified cache events that tend to be associated with such uncoordinated accesses. A monitoring unit of a processor component is configured to capture indications of the state of the processor component associated with such cache events. The monitoring unit provides those captured indications to a detection driver for filtering and/or relaying to a debugging routine to enable debugging of the application routine or of the related application routines.

    [0008] As familiar to those skilled in the art, cache coherency mechanisms are employed to ensure that the contents of a cache do not become unsynchronized with the contents of other storages, including other caches, in a manner that would result in a processor component reading and using incorrect data. Such cache coherency mechanisms often entail operations that store data in a cache line, retrieve data from a cache line, instruct a cache to mark data in a cache line as invalid and/or to make a cache line available, and still other operations that change the state associated with data stored in a cache line. In some embodiments, the cache coherency mechanism employed may be based on the widely known and widely used modified-exclusive-shared-invalid (MESI) algorithm.
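
    A minimal sketch of the MESI states, assuming a deliberately simplified model in which a write to a line held in the shared state first broadcasts an invalidation to other caches, may help fix ideas; the helper broadcast_invalidate is a stub standing in for the hardware coherency message and is not part of any particular implementation.

        /* Simplified MESI model: a write to a line that is Shared or Invalid
           first invalidates copies held by other caches, then marks the line
           Modified. Illustrative only; not the hardware implementation. */
        typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } mesi_state_t;

        typedef struct {
            unsigned long tag;     /* which piece of data the line holds */
            mesi_state_t  state;
        } cache_line_t;

        /* Stub for the coherency message sent to the other caches. */
        static void broadcast_invalidate(unsigned long tag)
        {
            (void)tag;
        }

        static void write_line(cache_line_t *line, unsigned long tag)
        {
            if (line->state == SHARED || line->state == INVALID)
                broadcast_invalidate(tag);   /* other copies become stale */
            line->tag = tag;
            line->state = MODIFIED;          /* this cache now owns the data */
        }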

    [0009] Regardless of the specific cache coherency mechanism used, one or more of the operations affecting the state associated with data stored in a cache line may be cache events that are able to be detected by a monitoring unit incorporated into a processor component. Such a monitoring unit may be programmable to monitor for one or more specified cache events and/or to capture indications of one or more aspects of the state of a core of the processor component upon detecting the one or more specified cache events. The captured indications may include an indicator of the type of cache event that occurred, an indicator of the type of data access (e.g., a read or write operation) that triggered the cache event, an identifier of a process and/or thread of execution that caused the data access, an indication of an address of an instruction pointer, the contents of one or more registers of the processor component, etc.
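
    Purely as an assumed layout, such captured indications might be gathered into a record along the following lines; the field and type names are illustrative rather than prescribed.

        #include <stdint.h>

        /* Hypothetical record of one capture; names are illustrative. */
        enum cache_event_kind { EVENT_RFO, EVENT_HITM };
        enum access_kind      { ACCESS_READ, ACCESS_WRITE };

        struct captured_state {
            enum cache_event_kind event;       /* which specified cache event fired       */
            enum access_kind      access;      /* read or write that triggered it         */
            uint32_t              process_id;  /* process that made the data access       */
            uint32_t              thread_id;   /* thread of execution                     */
            uint64_t              instr_ptr;   /* address held by the instruction pointer */
            uint64_t              regs[16];    /* snapshot of selected core registers     */
        };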

    [0010] The one or more specified cache events monitored for may include one or more types of cache event that arise from a succession of accesses to the same piece of data that occur relatively close in time (e.g., one access occurring immediately after another). Of particular interest are cache events that may arise from an uncoordinated combination of read and write operations or from an uncoordinated pair of write operations involving the same piece of data by two concurrently executed portions of an application routine. Alternatively, such uncoordinated accesses may be made by two or more related application routines that are meant to coordinate such accesses as they are executed concurrently. Such a lack of coordination may result in a race condition between the read and write operations in which the read operation unpredictably retrieves the piece of data either before or after it has been altered by the write operation, or a race condition between two write operations in which the resulting state of the data becomes unpredictable. Such cache events may involve an explicit communication to a cache to preemptively invalidate a cache line as a result of an imminent or ongoing succession of accesses to the same data stored in another cache line of another cache. An example of such a cache event may be a "request for ownership" or "read for ownership" (RFO) cache event. Alternatively or additionally, such cache events may involve an access to data in a cache line of one cache that has been invalidated as a result of having been both stored and modified in another cache line of another cache. An example of such a cache event may be a "hit modified" (HITM) cache event.

    [0011] Indications of the state of a processor component associated with a specific cache event may be captured each time that cache event occurs, or may be captured at a lesser specified frequency. By way of example, such capturing may take place only on every fifth occurrence of the specified cache event. This may be done to reduce processing and/or storage requirements, especially where the specified cache event is expected to occur frequently. This may also be done with the expectation that a mistake within an application routine that causes a race condition ought to occur with sufficient frequency as to be likely to be captured, even though the capturing does not take place each time the specified cache event occurs.
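
    A minimal sketch of such decimation, assuming a hypothetical per-event hook and using the "every fifth occurrence" figure from above, follows; the capture helper is likewise hypothetical.

        #include <stdio.h>

        #define CAPTURE_EVERY_N 5          /* mirrors the "every fifth occurrence" example */

        static unsigned countdown = CAPTURE_EVERY_N;

        /* Hypothetical stand-in for copying the processor state into the
           monitoring data; here it merely reports that a capture happened. */
        static void record_snapshot(void)
        {
            puts("capture state of processor component");
        }

        /* Hypothetical hook invoked once per detected cache event. */
        void on_cache_event(void)
        {
            if (--countdown != 0)
                return;                    /* skip this occurrence */
            countdown = CAPTURE_EVERY_N;
            record_snapshot();
        }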

    [0012] The programming of the monitoring unit may be performed by a detection driver that also receives the captured indications of the one or more specified cache events from the monitoring unit. The detection driver may filter the captured indications it receives to generate reduced data made up of a more limited set of indications to reduce the overall quantity of data to be provided to a debugging routine. The debugging routine may be executed by a processor component of the same computing device in which the application routine is executed. Alternatively, the debugging routine may be executed by a processor component of a different computing device, and the reduced data may be transmitted to the other computing device via a network.

    [0013] With general reference to notations and nomenclature used herein, portions of the detailed description which follows may be presented in terms of program procedures executed on a computer or network of computers. These procedural descriptions and representations are used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. A procedure is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. These operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities.

    [0014] Further, these manipulations are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. However, no such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein that form part of one or more embodiments. Rather, these operations are machine operations. Useful machines for performing operations of various embodiments include general purpose digital computers as selectively activated or configured by a computer program stored within that is written in accordance with the teachings herein, and/or include apparatus specially constructed for the required purpose. Various embodiments also relate to apparatus or systems for performing these operations. These apparatus may be specially constructed for the required purpose or may include a general purpose computer. The required structure for a variety of these machines will be apparent from the description given.

    [0015] Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well known structures and devices are shown in block diagram form in order to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives within the scope of the claims.

    [0016] FIG. 1 is a block diagram of an embodiment of a race condition debugging system 1000 incorporating one or more of a server 100, a computing device 300 and a debugging device 500. Each of these computing devices 100, 300 and 500 may be any of a variety of types of computing device, including without limitation, a desktop computer system, a data entry terminal, a laptop computer, a netbook computer, a tablet computer, a handheld personal data assistant, a digital camera, a smartphone, a smart wristwatch, smart glasses, a body-worn computing device incorporated into clothing, a computing device integrated into a vehicle (e.g., a car, a bicycle, a wheelchair, etc.), a server, a cluster of servers, a server farm, etc.

    [0017] As depicted, subsets of these computing devices 100, 300 and 500 may exchange signals via a network 999 that are associated with the debugging of race conditions in accessing data arising from concurrent execution of portions of a single application routine 370 or of multiple related application routines 370 on the computing device 300. However, one or more of these computing devices may exchange other data entirely unrelated to such debugging with each other and/or with still other computing devices (not shown) via the network 999. In various embodiments, the network 999 may be a single network possibly limited to extending within a single building or other relatively limited area, a combination of connected networks possibly extending a considerable distance, and/or may include the Internet. Thus, the network 999 may be based on any of a variety (or combination) of communications technologies by which signals may be exchanged, including without limitation, wired technologies employing electrically and/or optically conductive cabling, and wireless technologies employing infrared, radio frequency or other forms of wireless transmission.

    [0018] In various embodiments, the computing device 300 incorporates one or more of a processor component 350, a storage 360, controls 320, a display 380 and an interface 390 to couple the computing device 300 to the network 999. The processor component 350 incorporates a monitoring unit 354, and one or more cores 355 that each incorporate a cache 357. Each of the caches 357 may be a level 1 (L1) and/or level 2 (L2) cache, and each may be dedicated for use by the one of the cores 355 into which it is incorporated. The computing device 300 may additionally incorporate an additional cache 367 that may be shared by multiple ones of the cores 355. The cache 367 may be incorporated into the processor component 350, or into a controller or other circuitry by which the storage 360 is coupled to the processor component 350, and may be an L2 or level 3 (L3) cache.

    [0019] The monitoring unit 354 may monitor the state of each of the cores 355 to detect occurrences of any of a variety of events and may be operable to capture aspects of the state of each of the cores 355, including the state of one or more registers of each of the cores 355. The monitoring unit 354 may be directly coupled to the caches 357 of each of the cores 355 to directly monitor cache events associated with the operation of each of the caches 357. In embodiments in which the cache 367 is incorporated into the processor component 350, the monitoring unit 354 may be directly coupled to the cache 367 to directly monitor cache events associated with the operation of the cache 367. In embodiments in which the cache 367 is not incorporated into the processor component 350 (e.g., the cache 367 is incorporated into or otherwise associated with the storage 360), the monitoring unit 354 may monitor communications exchanged between the processor component 350 and the cache 367 that may be indicative of cache events associated with the cache 367. Stated differently, the monitoring unit 354 monitors for the occurrence of one or more specific types of cache event in connection with each of the caches 357 and/or the cache 367, and captures indications of the state of one or more of the cores 355 in response to occurrences of the one or more specific types of cache event.

    [0020] The storage 360 stores one or more of an operating system 340, data 330, a detection driver 140, configuration data 135, monitoring data 334, reduced data 337 and a debugging routine 170. Also stored within the storage 360 is either one of the application routine 370 or multiple related ones of the application routine 370. Each of the operating system 340, the application routine(s) 370, the detection driver 140 and the debugging routine 170 incorporates a sequence of instructions operative on the processor component 350 in its role as a main processor component of the computing device 300 to implement logic to perform various functions.

    [0021] In executing the operating system 340, the processor component 350 may provide various forms of support for the execution of the application routine(s) 370, including callable system functions to provide various services. The operating system 340 may provide a layer of abstraction between the application routine(s) 370 and hardware components of the computing device 300 and/or their device drivers. Among the hardware components and related device drivers from which a layer of abstraction may be provided may be the monitoring unit 354 and the detection driver 140. The operating system 340 provides support for the concurrent execution of multiple portions of a single application routine 370 or for the concurrent execution of portions of related application routines 370 in separate ones of multiple threads of execution by the processor component 350. A single one of the application routines 370 may be any of a variety of types of routine selected for execution on the computing device 300 by the processor component 350 with the support of the operating system 340 to perform any of a variety of personal and/or professional functions, including word processors, website viewers, photograph editors, meeting schedule management, CAD/CAM, etc. Where there are multiple ones of the application routine 370, they may be related application routines of a set of application routines, such as, and not limited to, a word processing application and an illustration application of the same "productivity suite" of application routines, or a website browsing application and an "extension" application to enable viewing of a particular type of data provided by a website.

    [0022] In executing the detection driver 140, the processor component 350 configures the monitoring unit 354 to detect occurrences of one or more specific cache events associated with one or more of the caches 357 and/or the cache 367. In so doing, the processor component 350 may be caused to specify the one or more cache events through setting bits of one or more registers associated with the monitoring unit 354. The processor component 350 may retrieve indications of the one or more cache events to be detected from the configuration data 135. Of particular interest may be cache events that arise as a result of combinations of read and write operations involving the same piece of data that occur relatively close together in time. Given the closeness in time and a lack of coordination, the read operation may unpredictably occur either before or after the write operation, thereby causing a race condition between the read and write operations such that a wrong version of that piece of data is sometimes retrieved in the read operation and sometimes not. Alternatively or additionally, of particular interest may be cache events that arise as a result of combinations of multiple write operations involving the same piece of data that occur relatively close together in time. Given the closeness in time and a lack of coordination, one of the write operations may unpredictably occur either before or after another, thereby causing a race condition between the multiple write operations such that the state of the data following the multiple write operations is rendered unpredictable. It is possible that such uncoordinated combinations of accesses to the same piece of data result from a lack of coordination either between two concurrently executed portions of a single application routine 370, or between two concurrently executed application routines 370 that are related such that they are meant to coordinate their accesses.
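
    The register-level programming performed by the detection driver 140 is hardware-specific and is not reproduced here. As a loose user-space analogue only, a hardware event counter can be armed on Linux through the perf_event_open interface in a broadly similar way; the raw event encoding below is a placeholder rather than an actual RFO or HITM event code.

        #define _GNU_SOURCE
        #include <linux/perf_event.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <string.h>

        /* Placeholder: the raw encoding of an RFO/HITM-style cache event is
           model-specific and is not given in this description. */
        #define HYPOTHETICAL_CACHE_EVENT 0x0UL

        static int open_cache_event_counter(pid_t tid, unsigned long every_nth)
        {
            struct perf_event_attr attr;
            memset(&attr, 0, sizeof(attr));
            attr.size           = sizeof(attr);
            attr.type           = PERF_TYPE_RAW;
            attr.config         = HYPOTHETICAL_CACHE_EVENT;
            attr.sample_period  = every_nth;   /* capture every Nth occurrence */
            attr.sample_type    = PERF_SAMPLE_IP | PERF_SAMPLE_TID;
            attr.exclude_kernel = 1;
            attr.disabled       = 1;           /* enabled once the routine runs */
            return (int)syscall(__NR_perf_event_open, &attr, tid, -1, -1, 0);
        }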

    [0023] Upon being so configured, the monitoring unit 354 captures indications of the state of one or more of the cores 355 of the processor component 350 in response to the occurrence of one of the one or more specified cache events, and provides the captured indications to the detection driver 140 as the monitoring data 334. The captured indications may include indications of the state of one or more registers associated with one or more of the cores 355, indications of addresses stored in one or more instruction pointers, indications of instructions being executed, indications of identities of processes and/or threads being executed by one or more of the cores 355, etc. The captured indications may also include indications of the state of one or more cache lines of one or more of the caches 357 and/or of the cache 367. The captured indications may also include indications of cache protocol signals exchanged among the caches 357 and/or 367. This may include indications of cache protocol signals exchanged between the processor component 350 and the cache 367 in embodiments in which the cache 367 is external to the processor component 350. These various indications may provide indications of what access instruction(s) of what portion(s) of the application routine 370 were executed that triggered the occurrence of the specified cache event that triggered the capturing of the indications.

    [0024] In further executing the detection driver 140, the processor component 350 may also configure a counter of the monitoring unit 354 to recurringly count a specified number of occurrences of a specified cache event before capturing indications of the state of the processor component 350. In other words, instead of capturing indications of the state of the processor component 350 in response to every occurrence of a specified cache event, the monitoring unit 354 may be caused to capture indications of the state of the processor component 350 for a sample subset of the occurrences of the specified cache event, thereby reducing the frequency with which such capturing is performed. More specifically, the monitoring unit 354 may be configured to capture the state of the processor component 350 on only every "Nth" occurrence of the specified cache event, in which the frequency at which captures are performed is controlled by the value of N. The frequency with which the captures are performed (e.g., the value of N) may also be specified in the configuration data 135. Capturing the state of the processor component 350 less frequently than upon every occurrence of the specified cache event may be deemed desirable to reduce processing and/or storage resources required in processing and/or relaying the captured indications to a debugging routine. Further, this may be done with the expectation that a mistake in one of the application routines 370 that results in data access race conditions between concurrently executed portions of one or more than one of the application routines 370 will trigger the specified cache event sufficiently often as to render capturing the state of the processor component 350 in response to each occurrence unnecessary. Stated differently, such sampling of the state of the processor component 350 for less than all of such occurrences may be deemed likely to provide sufficient information to at least narrow down the location of a mistake causing a race condition within the instructions making up one of the application routines 370.

    [0025] In still further executing the detection driver 140, the processor component 350 may operate one or more registers of the monitoring unit 354 and/or signal the monitoring unit 354 via another mechanism to dynamically enable and/or disable the monitoring for the one or more specified cache events. The processor component 350 may be triggered to do so in response to indications of whether one or more than one of the application routines 370 are currently being executed. Such indications may be conveyed directly between the application routine(s) 370 and the detection driver 140, or may be relayed through the operating system 340 and/or through the debugging routine 170 (if present).
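
    Continuing the same loose user-space analogue (and only as an assumption about how such dynamic control could look, not as the driver mechanism itself), a counter opened with perf_event_open may be switched on and off around execution of the application routine(s) with the standard enable and disable ioctls:

        #include <sys/ioctl.h>
        #include <linux/perf_event.h>

        /* Enable monitoring while the application routine(s) execute... */
        static void enable_monitoring(int perf_fd)
        {
            ioctl(perf_fd, PERF_EVENT_IOC_RESET, 0);
            ioctl(perf_fd, PERF_EVENT_IOC_ENABLE, 0);
        }

        /* ...and disable it once they finish. */
        static void disable_monitoring(int perf_fd)
        {
            ioctl(perf_fd, PERF_EVENT_IOC_DISABLE, 0);
        }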

    [0026] Regardless of whether captures of the state of the processor component 350 are performed upon every occurrence of a specified cache event or less frequently, the processor component 350 may be caused by the detection driver 140 to filter the monitoring data 334 to generate the reduced data 337 with a subset of the indications of the monitoring data 334. The monitoring data 334 may contain indications of more pieces of information concerning the state of the processor component 350 than is actually needed to enable debugging of possible race conditions in accesses to data made by separate concurrently executed portions of the application routine 370. The indications of the monitoring data 334 may be filtered to remove indications associated with ones of the cores 355, processes and/or threads not involved in executing any portion of the application routine(s) 370. To enable this, the processor component 350 may retrieve identifiers of the cores 355, processes and/or threads that are involved in executing portions of the application routine(s) 370, and/or identifiers of the portions that are executed. Such identifiers may then be compared to identifiers of the monitoring data 334 to identify indications in the monitoring data 334 that are associated with a cache event unrelated to a possible race condition between accesses made by portions of the application routine(s) 370. Depending on the architecture of the operating system 340, such identifiers may be retrieved from the application routine(s) 370 through the operating system 340 and/or through the debugging routine 170 (if present). Alternatively or additionally, the indications of the monitoring data 334 may be filtered to remove indications associated with registers of the processor component 350 not associated with accessing data.
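
    One way such filtering could be sketched, assuming a flat array of captures and a list of thread identifiers known to execute portions of the application routine(s) (all names hypothetical), is:

        #include <stddef.h>
        #include <stdint.h>

        /* Hypothetical capture record; only the fields used here are shown. */
        struct capture { uint32_t thread_id; uint64_t instr_ptr; };

        /* Copy into 'out' only those captures whose thread identifier belongs
           to a thread executing a portion of the application routine(s);
           returns the number of retained captures (the reduced data). */
        static size_t filter_monitoring_data(const struct capture *in, size_t n_in,
                                             const uint32_t *app_tids, size_t n_tids,
                                             struct capture *out)
        {
            size_t n_out = 0;
            for (size_t i = 0; i < n_in; i++) {
                for (size_t j = 0; j < n_tids; j++) {
                    if (in[i].thread_id == app_tids[j]) {
                        out[n_out++] = in[i];
                        break;
                    }
                }
            }
            return n_out;
        }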

    [0027] In some embodiments, the processor component 350 may provide the reduced data 337 to the debugging routine 170 to enable debugging of possible race conditions in data accesses between portions of the application routine(s) 370 on the computing device 300. In so doing, and depending on the architecture of the operating system 340, such a conveyance of the reduced data 337 from the detection driver 140 to the debugging routine 170 may be performed at least partly through the operating system 340. In other embodiments, the processor component 350 may operate the interface 390 to transmit the reduced data 337 to the debugging device 500 to enable debugging to be performed thereon.

    [0028] FIGS. 3 and 4 each depict an embodiment of operation of components of the computing device 300 to detect a specific cache event. More specifically, FIG. 3 depicts detection of an RFO cache event, and FIG. 4 depicts detection of a HITM cache event. As depicted in both FIGS. 3 and 4, one or more of the cores 355 incorporates one or more registers 3555. The registers 3555 may include registers supporting arithmetic operations, bitwise logic operations, single-instruction-multiple-data (SIMD) operations, etc. As also depicted, each of the caches 357 and/or 367 incorporates a cache controller 3575 and multiple cache lines 3577 (of which one is depicted). As further depicted, the monitoring unit 354 incorporates a counter 3544 and one or more registers 3545 by which the monitoring unit 354 may be programmed to detect various events and to capture various pieces of information. The monitoring unit 354 may directly communicate with the cache controller 3575 of one or more of the caches 357 and/or 367 to detect cache events. Alternatively or additionally, the monitoring unit 354 may intercept communications among the cache controllers 3575 of two or more of the caches 357 and/or 367 to detect cache events.

    [0029] As still further depicted, the one or more application routines 370 incorporate multiple portions 377 to be concurrently executed, each of which may be assigned a unique identifier (ID) that may be generated by the operating system 340 as threads are allocated to execute each portion 377. Each portion 377 includes a subset of the sequence of instructions of one of the application routines 370 to perform a subset of the one or more functions of that one of the application routines 370. In some embodiments, the application portions 377 that may access the same piece of data (e.g., the data 330) may be portions 377 of the same application routine 370. In such embodiments, those portions 377 should coordinate their accesses to a piece of data with each other. Such coordination may be meant to occur entirely within that one application routine 370 of which both are a part via any of a variety of mechanisms including and not limited to flag bits, passing of parameters in function calls, etc., but a mistake in the writing of one or more of the portions may result in a lack of such coordination. In other embodiments, the application portions 377 that may access the same piece of data may be portions 377 of different ones of the application routines 370 that are related to the extent that they should coordinate their accesses to a piece of data with each other, but again, a mistake may result in a lack of such coordination. In still other embodiments, there may be multiple instances of the same portion 377 of the same application routine 370 that make accesses to the same piece of data; such instances of the same portion 377 should likewise coordinate those accesses.

    [0030] In preparation for detecting a cache event in either of the embodiments of FIGS. 3 or 4, the processor component 350 may be caused by its execution of the detection driver 140 to program one or more of the registers 3545 of the monitoring unit 354 to configure the monitoring unit 354 to detect occurrences of one or more specific cache events. The processor component 350 may also be caused to program one or more of the registers 3545 to respond to detection of those occurrences by capturing aspects of the state of the processor component 350. The processor component 350 may be further caused to program the counter 3544 to cause the monitoring unit 354 to perform such capturing less frequently than upon each of such occurrences. For example, the monitoring unit 354 may be configured, via the counter 3544, to perform such capturing on only every 3rd occurrence of a specified cache event.
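
    The bit layout of the registers 3545 and of the counter 3544 is not defined here, so the sketch below assumes an entirely hypothetical layout solely to illustrate the programming step just described (capture on every 3rd RFO or HITM occurrence):

        #include <stdint.h>

        /* Hypothetical bit assignments; not taken from this description. */
        #define EVENT_SELECT_RFO   (1u << 0)
        #define EVENT_SELECT_HITM  (1u << 1)
        #define CAPTURE_ENABLE     (1u << 31)

        struct monitoring_unit_regs {
            volatile uint32_t control;          /* stands in for a register 3545  */
            volatile uint32_t counter_reload;   /* stands in for the counter 3544 */
        };

        /* Program the unit to capture on every 3rd RFO or HITM occurrence. */
        static void configure_monitoring_unit(struct monitoring_unit_regs *mu)
        {
            mu->counter_reload = 3;
            mu->control = EVENT_SELECT_RFO | EVENT_SELECT_HITM | CAPTURE_ENABLE;
        }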

    [0031] Turning to FIG. 3, one of the cores 355 of the processor component 350, in executing one portion 377, executes a read instruction to load the data 330 into one of the registers 3555. Immediately thereafter, the core 355 of the processor component 350, in executing another portion 377 (of either the same application routine 370 or a different, but related application routine 370) or in executing another instance of the same portion 377, executes a write instruction to write the data 330 with a value from one of the registers 3555 (either the same one of the registers 3555, or a different one) to modify the data 330. Thus, as a result of concurrent execution of two different portions 377, a read operation and a write operation that both access the same data 330 occur relatively close in time. As has been discussed, this may be an instance of two uncoordinated accesses to the data 330 in which either of the read and write operations may have occurred ahead of the other, with the result that the read operation could retrieve a wrong version of the data 330.

    [0032] The execution of the read operation may cause the retrieval of the data 330 either from the storage 360 or from another cache (e.g., the cache 367 or another of the caches 357) depending on whether an up-to-date copy of the data 330 is stored in another cache at the time the read operation is executed. Either way, the cache line 3577 of the cache 357 is loaded with the data 330 and the data 330 is also provided to whichever one of the registers 3555 was meant to be loaded with the data 330 by execution of the read operation.

    [0033] However, the fact that this read operation is followed, either immediately or at least relatively quickly, by a write operation that will modify the same data 330 may trigger the cache controller 3575 of the cache 357 to signal the cache controller 3575 of the other cache 357 or 367 to invalidate whatever copy it may have of the data 330. In response to receiving this signal, the cache controller 3575 of the other cache 357 or 367 invalidates its copy of the data 330 in whichever one of its cache lines 3577 that copy may be stored (this invalidation depicted with hash lines). This signal to the other cache 357 or 367 to invalidate whatever copy of the data 330 it may have is a preemptive signal announcing that whatever copy of the data 330 the other cache 357 or 367 may have will shortly become out of date, given the write operation that is about to be executed after the read operation.

    [0034] This combination of the read operation to retrieve the data 330 for storage in the cache 357 and the signal to the other cache 357 or 367 to invalidate whatever copy of the data 330 it may have is what is referred to as a "read for ownership" (RFO) cache event. In essence, the cache 357 is "taking ownership" of the data 330 away from any other caches that may have a copy of the data 330 in preparation for the write operation that will change the data 330, which will result in the cache 357 "owning" the up-to-date version of what the data 330 will be following the write operation. Use of such a preemptive notice among the caches 357 and/or 367 occurs in at least some implementations of cache coherency mechanisms based on the MESI algorithm, and may be deemed desirable to increase efficiency in maintaining cache coherency. Depending on whether the monitoring unit 354 is in direct communication with the cache controllers 3575 of either the cache 357 or the other cache 357 or 367, the monitoring unit 354 may be signaled by one or both of these cache controllers 3575 of the occurrence of this RFO cache event and/or may intercept the invalidation signal exchanged between them. It should be noted that in some embodiments, the invalidation signal may incorporate an indication of being part of an RFO cache event.

    [0035] Turning to FIG. 4, one of the cores 355 of the processor component 350, specifically the core designated as core 355x in FIG. 4, is caused by its execution of a portion 377 to perform a write operation to modify the data 330, which happens to be stored in one of the cache lines 3577 of the cache 357 associated with the core 355x. As a result, the copy of the data 330 in that cache line 3577 is now the up-to-date version of the data 330. Also, in response to this modification of this copy of the data 330, the cache controller 3575 of the cache 357 of the core 355x transmits a signal to the cache controllers 3575 of other caches to invalidate whatever copies they may have of the data 330, including others of the caches 357 and/or the cache 367. Among those other controllers 3575 of those other caches is the controller 3575 of the cache 357 of another of the cores 355, specifically the core designated as core 355y in FIG. 4. In response to receiving the invalidation signal, the cache controller 3575 of the cache 357 of the core 355y invalidates its copy of the data 330 stored in one of its cache lines 3577 (this invalidation depicted with hash lines).

    [0036] Subsequently, the core 355y is caused by its execution of another portion 377 (of either the same application routine 370 or a different, but related application routine 370) or by its execution of another instance of the same portion 377 to perform either a read operation to read the data 330 or a write operation to write to the data 330. Given that the most up-to-date copy of the data 330 is stored in one of the cache lines 3577 of the cache 357 of the core 355x, the cache controller 3575 of the cache 357 of the core 355x signals the cache controller 3575 of the cache 357 of the core 355y to the effect that the cache 357 of the core 355x has a modified copy of the data 330 such that this is the most up-to-date copy. Stated differently, this signal conveys the fact that this subsequent read or write operation has resulted in a cache "hit" arising from the cache 357 of the core 355x having a copy of the data 330 such that it need not be accessed at the slower storage 360, and this signal conveys the fact that this copy has been modified such that it is the most up-to-date copy. The combination of these two indications in this signal signifies a "hit-modified" or HITM cache event. Thus, a HITM cache event, like the RFO cache event of FIG. 3, arises from a combination of two access operations involving the same data in which one is executed within a relatively short period of time after the other, possibly with one immediately following the other. Again, this may be indicative of an instance of two uncoordinated accesses to the same piece of data caused by the concurrent execution of two different portions 377 of either the same application routine 370 or of different application routines 370. Depending on whether the monitoring unit 354 is in direct communications with the cache controllers 3575 of either the caches 357 of the cores 355x or 355y, the monitoring unit 354 may be signaled by one or both of these cache controllers 3575 of the occurrence of this HITM cache event and/or may intercept the HITM signal exchanged between them.

    [0037] Referring back to both FIGS. 3 and 4, the monitoring unit 354 may respond to the occurrence of either of these RFO or HITM cache events by capturing indications of the state of at least a subset of the cores 355 of the processor component 350, and providing those indications to the detection driver 140 as the monitoring data 334. As has been discussed, in executing the detection driver 140, the processor component 350 may filter those indications of the monitoring data 334 to generate the reduced data 337 with fewer of those indications. Again, the reduced data 337 may be conveyed to the debugging routine 170 or may be transmitted to the debugging device 500. Again, such filtering may include removing indications of the state of ones of the cores 355, processes and/or threads not involved in executing a portion 377 of the one or more application routines 370. To enable identification of the cores 355, processes and/or threads that are involved in executing portions 377 of the application routine(s) 370, the processor component 350 may retrieve identifiers for those portions 377, cores 355, processes and/or threads.

    [0038] Returning to FIG. 1, in embodiments in which the reduced data 337 is provided to the debugging routine 170, the processor component 350 may be caused by its execution of the debugging routine 170 to operate the controls 320 and/or the display 380 to provide a user interface by which information relevant to debugging potentially uncoordinated data accesses may be presented. More specifically, the indications of possible instances of uncoordinated accesses arising from concurrent execution of different portions 377 of either the same application routine 370 or of different, but related, application routines 370 may be visually presented on the display 380. Alternatively or additionally, sequences of instructions from the concurrently executed portions 377 identified as having caused accesses to the same piece of data (e.g., the data 330) relatively close in time, as indicated in the reduced data 337, may be visually presented on the display 380 to enable inspection of those sequences.

    [0039] Alternatively, in embodiments that include the debugging device and in which the reduced data 337 is provided to the debugging device 500, the debugging device 500 incorporates one or more of a processor component 550, a storage 560, controls 520, a display 580 and an interface 590 to couple the debugging device 500 to the network 999. The storage 560 stores the reduced data 337 and a debugging routine 570. The debugging routine 570 incorporates a sequence of instructions operative on the processor component 550 in its role as a main processor component of the remote computing device 500 to implement logic to perform various functions. Not unlike the debugging routine 170, in executing the debugging routine 570, the processor component 550 may operate the controls 520 and/or the display 580 to provide a user interface by which information relevant to debugging potentially uncoordinated data accesses by portions 377 of the application routine(s) 370 may be presented. Again, alternatively or additionally, indications of possible instances of uncoordinated access arising from concurrent execution of different portions 377 and/or sequences of instructions from those different portions may be visually presented on the display 580.

    [0040] In embodiments that include the server 100, the computing device 300 may be provided with one or more of the detection driver 140, the configuration data 135 and the debugging routine 170 therefrom, and such provisioning may be performed through the network 999. It should be noted that the debugging routine 170 may be a component of one or more of the application routines 370 such that the debugging routine 170 may be provided to the computing device 300 along with or as part of one or more of the application routines 370.

    [0041] FIG. 2 illustrates a block diagram of an alternate embodiment of the race condition debugging system 1000 that includes an alternate embodiment of the computing device 300. The alternate embodiment of the race condition debugging system 1000 of FIG. 2 is similar to the embodiment of FIG. 1 in many ways, and thus, like reference numerals are used to refer to like components throughout. However, unlike the computing device 300 of FIG. 1, the computing device 300 of FIG. 2 incorporates more than one of the processor components 350, designated in FIG. 2 as processor components 350a and 350b that may each execute a portion of the same application routine 370 or of different, but related, application routines 370. Thus, the race conditions in accesses to the same piece of data by different portions of one or more application routines 370 may involve races between accesses made by different ones of the processor components 350a and 350b.

    [0042] To accommodate this, execution of the detection driver 140 by one or both of the processor components 350a and 350b may cause the monitoring units 354 of both of these processor components to be configured to monitor for specific cache events and to capture aspects of the state of their respective ones of these processor components. Thus, both of the depicted monitoring units 354 may contribute to the contents of the monitoring data 334. Further, the detection driver 140 may collate the indications of the state of each of the processor components 350a and 350b into a combined set of indications in generating the reduced data 337. By way of example, the detection driver 140 may filter out redundant information in the indications in the monitoring data 334 received from each of the monitoring units 354.
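
    A sketch of such collation, assuming both monitoring units emit the same hypothetical record type, could simply drop records that appear in both sets:

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        /* Hypothetical capture record emitted by both monitoring units 354. */
        struct capture { uint32_t thread_id; uint64_t instr_ptr; uint64_t data_addr; };

        static bool same_capture(const struct capture *a, const struct capture *b)
        {
            return a->thread_id == b->thread_id &&
                   a->instr_ptr == b->instr_ptr &&
                   a->data_addr == b->data_addr;
        }

        /* Collate captures from processor components 350a and 350b into 'out',
           dropping records duplicated across the two monitoring units. */
        static size_t collate_captures(const struct capture *a, size_t na,
                                       const struct capture *b, size_t nb,
                                       struct capture *out)
        {
            size_t n = 0;
            for (size_t i = 0; i < na; i++)
                out[n++] = a[i];
            for (size_t i = 0; i < nb; i++) {
                bool duplicate = false;
                for (size_t j = 0; j < na; j++) {
                    if (same_capture(&b[i], &a[j])) { duplicate = true; break; }
                }
                if (!duplicate)
                    out[n++] = b[i];
            }
            return n;
        }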

    [0043] In various embodiments, each of the processor components 350 and 550 may include any of a wide variety of commercially available processors. Further, one or more of these processor components may include multiple processors, a multi-threaded processor, a multi-core processor (whether the multiple cores coexist on the same or separate dies), and/or a multi-processor architecture of some other variety by which multiple physically separate processors are in some way linked.

    [0044] In various embodiments, each of the storages 360 and 560 may be based on any of a wide variety of information storage technologies, possibly including volatile technologies requiring the uninterrupted provision of electric power, and possibly including technologies entailing the use of machine-readable storage media that may or may not be removable. Thus, each of these storages may include any of a wide variety of types (or combination of types) of storage device, including without limitation, read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDR-DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory (e.g., ferroelectric polymer memory), ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, one or more individual ferromagnetic disk drives, or a plurality of storage devices organized into one or more arrays (e.g., multiple ferromagnetic disk drives organized into a Redundant Array of Independent Disks array, or RAID array). It should be noted that although each of these storages is depicted as a single block, one or more of these may include multiple storage devices that may be based on differing storage technologies. Thus, for example, one or more of each of these depicted storages may represent a combination of an optical drive or flash memory card reader by which programs and/or data may be stored and conveyed on some form of machine-readable storage media, a ferromagnetic disk drive to store programs and/or data locally for a relatively extended period, and one or more volatile solid state memory devices enabling relatively quick access to programs and/or data (e.g., SRAM or DRAM). It should also be noted that each of these storages may be made up of multiple storage components based on identical storage technology, but which may be maintained separately as a result of specialization in use (e.g., some DRAM devices employed as a main storage while other DRAM devices employed as a distinct frame buffer of a graphics controller).

    [0045] In various embodiments, each of the interfaces 390 and 590 may employ any of a wide variety of signaling technologies enabling computing devices to be coupled to other devices as has been described. Each of these interfaces may include circuitry providing at least some of the requisite functionality to enable such coupling. However, each of these interfaces may also be at least partially implemented with sequences of instructions executed by corresponding ones of the processor components (e.g., to implement a protocol stack or other features). Where electrically and/or optically conductive cabling is employed, these interfaces may employ signaling and/or protocols conforming to any of a variety of industry standards, including without limitation, RS-232C, RS-422, USB, Ethernet (IEEE-802.3) or IEEE-1394. Where the use of wireless signal transmission is entailed, these interfaces may employ signaling and/or protocols conforming to any of a variety of industry standards, including without limitation, IEEE 802.11a, 802.11b, 802.11g, 802.16, 802.20 (commonly referred to as "Mobile Broadband Wireless Access"); Bluetooth; ZigBee; or a cellular radiotelephone service such as GSM with General Packet Radio Service (GSM/GPRS), CDMA/lxRTT, Enhanced Data Rates for Global Evolution (EDGE), Evolution Data Only/Optimized (EV-DO), Evolution For Data and Voice (EV-DV), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), 4G LTE, etc.

    [0046] FIG. 5 illustrates a block diagram of a portion of an embodiment of the race condition debugging system 1000 of FIG. 1 in greater detail. More specifically, FIG. 5 depicts aspects of the operating environment of the computing device 300 in which the processor component 350, in executing at least the detection driver 140, detects cache events that may be associated with a race condition between two accesses to the same piece of data. FIG. 6 illustrates a block diagram of a portion of an embodiment of the race condition debugging system 1000 of FIG. 2 in greater detail. More specifically, FIG. 6 depicts aspects of the operating environment of the computing device 300 in which either of the processor components 350a or 350b, in executing at least the detection driver 140, detects cache events that may be associated with a race condition between two accesses to the same piece of data.

    [0047] Referring to both FIGS. 5 and 6, as recognizable to those skilled in the art, the operating system 340, the application routine(s) 370, the detection driver 140 and the debugging routine 170, including the components of which each is composed, are selected to be operative on whatever type of processor or processors that are selected to implement applicable ones of the processor components 350, 350a or 350b. In various embodiments, the sequences of instructions making up each may include one or more of an operating system, device drivers and/or application-level routines (e.g., so-called "software suites" provided on disc media, "applets" obtained from a remote server, etc.). Where an operating system is included, the operating system may be any of a variety of available operating systems appropriate for whatever corresponding ones of the processor components 350, 350a or 350b. Where one or more device drivers are included, those device drivers may provide support for any of a variety of other components, whether hardware or software components, of the computing device 300.

    [0048] The detection driver 140 may include a configuration component 145 executable by the processor component 350 or by the processor components 350a and/or 350b to configure one or more of the monitoring units 354 to detect one or more specific cache events (e.g., RFO, HITM, etc.) associated with race conditions between accesses to the same piece of data (e.g., the data 330). More specifically, a trigger component 1455 of the configuration component 145 programs one or more of the registers 3545 of one or more of the monitoring units 354 to specify one or more particular cache events to respond to by capturing various aspects of the state of the processor component 350 or of the processor components 350a and/or 350b. The configuration component 145 may retrieve indications of what those particular cache events are from the configuration data 135. The registers 3545 may enable the selection of which aspects of the state of the processor component 350 or of the processor components 350a and/or 350b are to be captured. Further, the registers 3545 may enable some control over the manner in which the monitoring data 334 (or alternatively monitoring data 334a and 334b, as depicted in FIG. 6) generated by the monitoring unit(s) 354 is provided.

    [0049] A counter component 1454 of the configuration component 145 may program the counter 3544 of the monitoring unit(s) 354 to capture the state of the processor component 350 or of the processor components 350a and/or 350b at a specified interval of occurrences of a specified cache event (e.g., every "Nth" occurrence) to reduce the frequency at which such captures are performed. In embodiments in which there is more than one of the processor components 350 (e.g., the processor components 350a and 350b of FIG. 6), the configuration component 145 may program the monitoring units 354 of each of the processor components to coordinate the capturing of cache events at the same frequency and/or for the same occurrences of a cache event.

    [0050] The trigger component 1455 may also operate one or more of the registers 3545 of one or more of the monitoring units 354 to dynamically enable and/or disable detection of the one or more specific cache events. The configuration component 145 may do so in response to indications received from the operating system 340, the debugging routine 170, and/or at least one of the application routines 370 of when the at least one of the application routines 370 is being executed.

    [0051] The detection driver 140 may include a filtering component 144 executable by the processor component 350 or by the processor components 350a and/or 350b to receive the monitoring data 334 or the monitoring data 334a and 334b from the monitoring unit(s) 354. Upon receipt of the monitoring data 334 or the monitoring data 334a-b, the filtering component 144 parses the indications of the status of the processor component 350 or the processor components 350a and/or 350b to generate the reduced data 337 with a subset of those indications that are deemed germane to identifying what portions 377 of one or more of the application routines 370 may be involved in a race condition in accessing the same piece of data. In so doing, the filtering component 144 may compare addresses of routines and/or data associated with cache events to one or more known addresses or ranges of addresses either of portions 377 of one or more of the application routines 370 believed to be responsible for such race conditions or of the data being accessed amidst such race conditions. In embodiments in which there is more than one of the processor components 350 (e.g., the processor components 350a and 350b of FIG. 6), the filtering component 144 may eliminate redundant information in the indications of the state of those processor components in generating the reduced data 337.
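
    A sketch of the address comparison just described follows; the range values would in practice be obtained from the operating system 340, the debugging routine 170 or the application routine(s) 370, and the names are assumptions:

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical half-open address range [start, end). */
        struct addr_range { uint64_t start; uint64_t end; };

        static bool in_range(uint64_t addr, const struct addr_range *r)
        {
            return addr >= r->start && addr < r->end;
        }

        /* A capture is kept if its instruction pointer falls within a portion 377
           of the application routine(s) 370 or its data address falls within the
           data being accessed amidst the suspected race condition. */
        static bool capture_is_germane(uint64_t instr_ptr, uint64_t data_addr,
                                       const struct addr_range *portion_code,
                                       const struct addr_range *watched_data)
        {
            return in_range(instr_ptr, portion_code) || in_range(data_addr, watched_data);
        }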

    [0052] The filtering component 144 may also remove indications of the state of ones of the cores 355, processes and/or threads not involved in executing any portion 377 of one or more of the application routines 370 believed to be responsible for such race conditions. To enable this, the filtering component 144 may retrieve identifiers of the cores 355, processes and/or threads that are involved in executing portions 377 of the one or more application routines 370. The filtering component 144 may compare such identifiers to similar identifiers included in the indications to distinguish indications associated with a cache event arising from a race condition between accesses made by different portions 377 of the application routine(s) 370, from indications associated with other cache events. In various embodiments, depending on the architecture of the operating system 340 and/or other factors, the filtering component 144 may receive such identifiers directly from the application routine(s) 370 (e.g., as a result of one or more of the application routines 370 operating in a "debug mode"), from the debugging routine 170, or from the operating system 340. The debugging routine 170 may retrieve such identifiers from the application routine(s) 370 or the operating system 340 before relaying them to the filtering component 144.

    [0053] The detection driver 140 may include a relaying component 147 executable by the processor component 350 or by the processor components 350a and/or 350b to relay the reduced data 337 to a debugging routine executed within one or both of the computing device 300 and another computing device (e.g., the debugging device 500). In relaying the reduced data 337 to a debugging routine executed within the computing device 300 (e.g., the debugging routine 170), the relaying component 147 may relay the reduced data 337 through the operating system 340. This may be necessary in embodiments in which the detection driver 140 is executed as a device driver at a lower level, similar to the operating system 340, such that the operating system provides a layer of abstraction between device drivers and other routines, such as the debugging routine 170. In relaying the reduced data 337 to a debugging routine executed within another computing device, the relaying component 147 may employ the interface 390 to transmit the reduced data 337 to another computing device (e.g., the debugging device 500) via a network (e.g., the network 999).
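
    For the networked case, one possible realization is to stream the reduced data over an ordinary TCP connection, as in the sketch below. The address, port and framing are assumptions made only for illustration; the specification requires merely that the reduced data 337 reach a debugging routine, locally or via a network such as the network 999.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Transmit a buffer of reduced data to a remote debugging device over TCP. */
static int relay_reduced_data(const void *buf, size_t len,
                              const char *ipv4_addr, unsigned short port)
{
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
                return -1;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        if (inet_pton(AF_INET, ipv4_addr, &addr.sin_addr) != 1 ||
            connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            write(fd, buf, len) != (ssize_t)len) {
                close(fd);
                return -1;
        }
        close(fd);
        return 0;
}
```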

    [0054] FIG. 7 illustrates one embodiment of a logic flow 2100. The logic flow 2100 may be representative of some or all of the operations executed by one or more embodiments described herein. More specifically, the logic flow 2100 may illustrate operations performed by the processor component 350 or the processor components 350a and/or 350b in executing at least the detection driver 140, and/or performed by other component(s) of the computing device 300.

    [0055] At 2110, a processor component of a computing device of a race condition debugging system (e.g., the processor component 350 or the processor component 350a and/or 350b of the computing device 300 of the race condition debugging system 1000) awaits an indication of the start of execution of one or more application routines (e.g., the application routine(s) 370). As has been discussed, such an indication may be conveyed directly from the one or more application routines, through an operating system (e.g., the operating system 340) or from a debugging routine (e.g., the debugging routine 170).

    [0056] At 2120, a monitoring unit (e.g., the monitoring unit 354) of a processor component that executes at least one portion of the application routine(s) is configured to detect occurrences of specific cache event(s) associated with race conditions between accesses to the same piece of data (e.g., RFO, HITM, etc.), and to respond to those occurrences by capturing indications of the state of a processor component that executes one or more application routines. The processor component(s) that execute at least a portion of the application routine(s) may or may not be the same processor component that configures the monitoring unit. As previously discussed, the accesses may be a combination of a read operation and a write operation or may be a combination of multiple write operations. Regardless, the accesses may be made by different portions of a single application routine, portions of two different application routines, or two instances of the same portion of the same application routine that have failed to coordinate these accesses, even though they are meant to do so (e.g., portions 377 of either a single application routine 370 or of different ones of the application routines 370, or a single portion 377 of a single application routine 370). As also previously discussed, the captured indications may include an indicator of the type of cache event that occurred, an indicator of the type of data access (e.g., a read operation or a write operation) that triggered the cache event, an identifier of a process and/or thread of execution that caused the data access, an indication of an address of an instruction pointer, the contents of one or more registers of the processor component, etc.
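
    The list of captured indications above can be modeled as a plain record, as in the sketch below. The type and field names are illustrative only and are not taken from this specification.

```c
#include <stdint.h>
#include <sys/types.h>

enum cache_event_kind { CACHE_EVENT_RFO, CACHE_EVENT_HITM };
enum access_kind { ACCESS_READ, ACCESS_WRITE };

/* One captured indication, following the list given in paragraph [0056]. */
struct captured_indication {
        enum cache_event_kind event;    /* type of cache event that occurred */
        enum access_kind access;        /* type of data access that triggered it */
        pid_t pid;                      /* process that caused the access */
        pid_t tid;                      /* thread of execution that caused it */
        int cpu;                        /* core on which the access executed */
        uintptr_t instruction_pointer;  /* address pointed to by the instruction pointer */
        uint64_t registers[16];         /* contents of selected registers */
};
```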

    [0057] At 2130, a counter of the monitoring unit is configured to perform such captures at an interval of occurrences of the cache event(s) other than every occurrence. As has been discussed, such an interval may be every "Nth" occurrence of the cache event(s), such as every 3rd occurrence, every 5th occurrence, etc.

    [0058] At 2140, the end of the execution of the one or more application routines is awaited. At 2150, the end of execution of the application routine(s) is responded to by configuring the monitoring unit to cease monitoring for occurrences of the cache event(s). In this way, the monitoring for occurrences of the cache events and accompanying capture operations may be dynamically enabled and disabled depending on whether the application routine(s) are currently being executed.

    [0059] FIG. 8 illustrates one embodiment of a logic flow 2200. The logic flow 2200 may be representative of some or all of the operations executed by one or more embodiments described herein. More specifically, the logic flow 2200 may illustrate operations performed by the processor component 350 or the processor components 350a and/or 350b in executing at least the detection driver 140, and/or performed by other component(s) of the computing device 300.

    [0060] At 2210, a monitoring unit of a processor component of a computing device of a race condition debugging system (e.g., the monitoring unit 354 of the processor component 350 or the processor components 350a and/or 350b of the computing device 300 of the race condition debugging system 1000) monitors one or more cores (e.g., one or more of the cores 355 or one or both of the cores 355x and 355y) of a processor component executing at least a portion of at least one application routine (e.g., one of the portions 377 of one of the application routines 370) for an occurrence of a specified cache event. As has been discussed, the monitoring unit is configured to monitor for one or more specific cache events associated with race conditions between accesses to the same piece of data, such as a race condition between a read operation and a write operation or between two write operations involving the same piece of data.

    [0061] At 2220, an occurrence of the specified event is detected. At 2230, a check is made to determine whether the end of an interval programmed into a counter of the monitoring unit (e.g., the counter 3544) has been reached. If the end of the interval has not been reached, then the one or more cores of the processor component executing at least a portion of the application routine are again monitored at 2210.

    [0062] However, if the end of the interval has been reached, then the state of the one or more cores of the processor component executing at least one portion of at least one application routine is captured at 2240. As has been discussed, such capturing may entail capturing the state of one or more of the registers of one or more of the cores (e.g., the registers 3555) and/or an address pointed to by an instruction pointer of one or more of the cores. Following such a capture, the monitoring of the one or more cores continues at 2210.
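
    Logic flow 2200 can be restated in software as a short loop, shown below. The two helper functions are trivial stubs standing in for behaviour that, in a real system, is performed by the monitoring unit 354 itself; the sketch only mirrors the control flow of blocks 2210 through 2240.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stubs standing in for the hardware behaviour of the monitoring unit 354. */
static bool cache_event_detected(void) { return true; }           /* block 2220 */
static void capture_core_state(void)  { puts("state captured"); } /* block 2240 */

/* Monitor (2210), detect an occurrence (2220), check the counter interval
 * (2230), and capture core state only when the interval has elapsed (2240). */
static void monitoring_loop(unsigned interval, unsigned simulated_events)
{
        unsigned count = 0;

        for (unsigned e = 0; e < simulated_events; e++) {   /* block 2210 */
                if (!cache_event_detected())                 /* block 2220 */
                        continue;
                if (++count < interval)                      /* block 2230 */
                        continue;
                count = 0;
                capture_core_state();                        /* block 2240 */
        }
}

int main(void)
{
        monitoring_loop(3, 10);   /* capture on every 3rd of 10 simulated events */
        return 0;
}
```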

    [0063] FIG. 9 illustrates one embodiment of a logic flow 2300. The logic flow 2300 may be representative of some or all of the operations executed by one or more embodiments described herein. More specifically, the logic flow 2300 may illustrate operations performed by the processor component 350 or the processor components 350a and/or 350b in executing at least the detection driver 140, and/or performed by other component(s) of the computing device 300.

    [0064] At 2310, a processor component of a computing device of a race condition debugging system (e.g., the processor component 350 or the processor component 350a and/or 350b of the computing device 300 of the race condition debugging system 1000) receives indications of the identity of at least one core (e.g., one of the cores 355), process and/or thread involved in executing at least one portion of at least one application routine (e.g., a portion 377 of one of the application routines 370). As has been discussed, such an indication may be conveyed directly from the application routine(s), through an operating system (e.g., the operating system 340) or from a debugging routine (e.g., the debugging routine 170).

    [0065] At 2320, indications of the state of one or more cores of a processor component that were captured in response to an occurrence of a specified cache event are received. As has been discussed, the specified cache event may be a type of cache event associated with occurrences of race conditions in accessing the same piece of memory between two portions of a single application routine or between portions of two different (but related) application routines. The accesses between which a race condition may exist may be a combination of a read operation and a write operation, or may be a combination of two or more write operations.

    [0066] At 2330, indications associated with one or more cores, processes and/or threads not involved in executing any portion of the application routine(s) are filtered out to generate reduced data. Such filtering employs the indications of the identity of the one or more cores, processes and/or threads that are involved in executing at least one portion of the application routine(s).

    [0067] At 2340, the reduced data generated, at least in part, by the above filtering is provided to a debugging routine. As has been discussed, the debugging routine may be executed within the same computing device in which the application routine(s) are executed, and the reduced data may be provided to the debugging routine through an operating system of that computing device. Otherwise, the reduced data may be transmitted via network to another computing device on which a debugging routine is executed.
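
    Taken together, blocks 2310 through 2340 amount to a receive, filter and forward sequence. The self-contained sketch below ties the steps together; all function names are illustrative stubs (records are reduced here to bare thread identifiers for brevity), and in a real system they would be backed by the operating system 340, the monitoring unit(s) 354 and the debugging routine 170.

```c
#include <stddef.h>
#include <stdio.h>
#include <sys/types.h>

/* Trivial stubs standing in for blocks 2310, 2320 and 2340. */
static size_t receive_identifiers(pid_t *tids, size_t max)
{ (void)max; tids[0] = 1234; return 1; }                              /* block 2310 */
static size_t receive_indications(pid_t *record_tids, size_t max)
{ (void)max; record_tids[0] = 1234; record_tids[1] = 99; return 2; }  /* block 2320 */
static int provide_to_debugger(const pid_t *reduced, size_t n)
{ (void)reduced; printf("%zu record(s) relayed\n", n); return 0; }    /* block 2340 */

/* Block 2330: keep only the indications whose thread identifier matches
 * one of the identifiers received at block 2310. */
static size_t filter_indications(pid_t *record_tids, size_t n,
                                 const pid_t *tids, size_t n_tids)
{
        size_t kept = 0;
        for (size_t i = 0; i < n; i++)
                for (size_t j = 0; j < n_tids; j++)
                        if (record_tids[i] == tids[j]) {
                                record_tids[kept++] = record_tids[i];
                                break;
                        }
        return kept;
}

int main(void)
{
        pid_t tids[4], records[8];
        size_t n_tids = receive_identifiers(tids, 4);
        size_t n = receive_indications(records, 8);

        n = filter_indications(records, n, tids, n_tids);
        return provide_to_debugger(records, n);
}
```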

    [0068] FIG. 10 illustrates an embodiment of a processing architecture 3000 suitable for implementing various embodiments as previously described. More specifically, the processing architecture 3000 (or variants thereof) may be implemented as part of one or more of the computing devices 100, 300 or 500. It should be noted that components of the processing architecture 3000 are given reference numbers in which the last two digits correspond to the last two digits of reference numbers of at least some of the components earlier depicted and described as part of these computing devices. This is done as an aid to correlating components of each.

    [0069] The processing architecture 3000 may include various elements commonly employed in digital processing, including without limitation, one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, power supplies, etc. As used in this application, the terms "system" and "component" are intended to refer to an entity of a computing device in which digital processing is carried out, that entity being hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by this depicted exemplary processing architecture. For example, a component can be, but is not limited to being, a process running on a processor component, the processor component itself, a storage device (e.g., a hard disk drive, multiple storage drives in an array, etc.) that may employ an optical and/or magnetic storage medium, a software object, an executable sequence of instructions, a thread of execution, a program, and/or an entire computing device (e.g., an entire computer). By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computing device and/or distributed between two or more computing devices. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the uni-directional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to one or more signal lines. A message (including a command, status, address or data message) may be one of such signals or may be a plurality of such signals, and may be transmitted either serially or substantially in parallel through any of a variety of connections and/or interfaces.

    [0070] As depicted, in implementing the processing architecture 3000, a computing device may include at least a processor component 950, a storage 960, an interface 990 to one or more other devices, and a coupling 959. As will be explained, depending on various aspects of a computing device implementing the processing architecture 3000, including its intended use and/or conditions of use, such a computing device may further include additional components, such as without limitation, a display interface 985, or one or more processing subsystems 900. In whatever computing device may implement the processing architecture 3000, one of the various depicted components that are implemented with circuitry may be implemented with discrete components and/or as blocks of circuitry within a single or relatively small number of semiconductor devices (e.g., a "system on a chip" or SOC).

    [0071] The coupling 959 may include one or more buses, point-to-point interconnects, transceivers, buffers, crosspoint switches, and/or other conductors and/or logic that communicatively couples at least the processor component 950 to the storage 960. Coupling 959 may further couple the processor component 950 to one or more of the interface 990, the audio subsystem 970 and the display interface 985 (depending on which of these and/or other components are also present). With the processor component 950 being so coupled by couplings 959, the processor component 950 is able to perform the various ones of the tasks described at length, above, for whichever one(s) of the aforedescribed computing devices implement the processing architecture 3000. Coupling 959 may be implemented with any of a variety of technologies or combinations of technologies by which signals are optically and/or electrically conveyed. Further, at least portions of couplings 959 may employ timings and/or protocols conforming to any of a wide variety of industry standards, including without limitation, Accelerated Graphics Port (AGP), CardBus, Extended Industry Standard Architecture (E-ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI-X), PCI Express (PCI-E), Personal Computer Memory Card International Association (PCMCIA) bus, HyperTransport™, QuickPath, and the like.

    [0072] As previously discussed, the processor component 950 (corresponding to one or more of the processor components 350 or 550) may include any of a wide variety of commercially available processors, employing any of a wide variety of technologies and implemented with one or more cores physically combined in any of a number of ways.

    [0073] As previously discussed, the storage 960 (corresponding to one or more of the storages 360 or 560) may be made up of one or more distinct storage devices based on any of a wide variety of technologies or combinations of technologies. More specifically, as depicted, the storage 960 may include one or more of a volatile storage 961 (e.g., solid state storage based on one or more forms of RAM technology), a non-volatile storage 962 (e.g., solid state, ferromagnetic or other storage not requiring a constant provision of electric power to preserve their contents), and a removable media storage 963 (e.g., removable disc or solid state memory card storage by which information may be conveyed between computing devices). This depiction of the storage 960 as possibly including multiple distinct types of storage is in recognition of the commonplace use of more than one type of storage device in computing devices in which one type provides relatively rapid reading and writing capabilities enabling more rapid manipulation of data by the processor component 950 (but possibly using a "volatile" technology constantly requiring electric power) while another type provides relatively high density of non-volatile storage (but likely provides relatively slow reading and writing capabilities).

    [0074] Given the often different characteristics of different storage devices employing different technologies, it is also commonplace for such different storage devices to be coupled to other portions of a computing device through different storage controllers coupled to their differing storage devices through different interfaces. By way of example, where the volatile storage 961 is present and is based on RAM technology, the volatile storage 961 may be communicatively coupled to coupling 959 through a storage controller 965a providing an appropriate interface to the volatile storage 961 that perhaps employs row and column addressing, and where the storage controller 965a may perform row refreshing and/or other maintenance tasks to aid in preserving information stored within the volatile storage 961. By way of another example, where the non-volatile storage 962 is present and includes one or more ferromagnetic and/or solid-state disk drives, the non-volatile storage 962 may be communicatively coupled to coupling 959 through a storage controller 965b providing an appropriate interface to the non-volatile storage 962 that perhaps employs addressing of blocks of information and/or of cylinders and sectors. By way of still another example, where the removable media storage 963 is present and includes one or more optical and/or solid-state disk drives employing one or more pieces of machine-readable storage medium 969, the removable media storage 963 may be communicatively coupled to coupling 959 through a storage controller 965c providing an appropriate interface to the removable media storage 963 that perhaps employs addressing of blocks of information, and where the storage controller 965c may coordinate read, erase and write operations in a manner specific to extending the lifespan of the machine-readable storage medium 969.

    [0075] One or the other of the volatile storage 961 or the non-volatile storage 962 may include an article of manufacture in the form of a machine-readable storage medium on which a routine including a sequence of instructions executable by the processor component 950 to implement various embodiments may be stored, depending on the technologies on which each is based. By way of example, where the non-volatile storage 962 includes ferromagnetic-based disk drives (e.g., so-called "hard drives"), each such disk drive typically employs one or more rotating platters on which a coating of magnetically responsive particles is deposited and magnetically oriented in various patterns to store information, such as a sequence of instructions, in a manner akin to a storage medium such as a floppy diskette. By way of another example, the non-volatile storage 962 may be made up of banks of solid-state storage devices to store information, such as sequences of instructions, in a manner akin to a compact flash card. Again, it is commonplace to employ differing types of storage devices in a computing device at different times to store executable routines and/or data. Thus, a routine including a sequence of instructions to be executed by the processor component 950 to implement various embodiments may initially be stored on the machine-readable storage medium 969, and the removable media storage 963 may be subsequently employed in copying that routine to the non-volatile storage 962 for longer term storage not requiring the continuing presence of the machine-readable storage medium 969 and/or the volatile storage 961 to enable more rapid access by the processor component 950 as that routine is executed.

    [0076] As previously discussed, the interface 990 (corresponding to one or more of the interfaces 390 or 590) may employ any of a variety of signaling technologies corresponding to any of a variety of communications technologies that may be employed to communicatively couple a computing device to one or more other devices. Again, one or both of various forms of wired or wireless signaling may be employed to enable the processor component 950 to interact with input/output devices (e.g., the depicted example keyboard 920 or printer 925) and/or other computing devices, possibly through a network (e.g., the network 999) or an interconnected set of networks. In recognition of the often greatly different character of multiple types of signaling and/or protocols that must often be supported by any one computing device, the interface 990 is depicted as including multiple different interface controllers 995a, 995b and 995c. The interface controller 995a may employ any of a variety of types of wired digital serial interface or radio frequency wireless interface to receive serially transmitted messages from user input devices, such as the depicted keyboard 920. The interface controller 995b may employ any of a variety of cabling-based or wireless signaling, timings and/or protocols to access other computing devices through the depicted network 999 (perhaps a network made up of one or more links, smaller networks, or perhaps the Internet). The interface controller 995c may employ any of a variety of electrically conductive cabling enabling the use of either serial or parallel signal transmission to convey data to the depicted printer 925. Other examples of devices that may be communicatively coupled through one or more interface controllers of the interface 990 include, without limitation, microphones, remote controls, stylus pens, card readers, fingerprint readers, virtual reality interaction gloves, graphical input tablets, joysticks, other keyboards, retina scanners, the touch input component of touch screens, trackballs, various sensors, a camera or camera array to monitor movement of persons to accept commands and/or data signaled by those persons via gestures and/or facial expressions, laser printers, inkjet printers, mechanical robots, milling machines, etc.

    [0077] Where a computing device is communicatively coupled to (or perhaps, actually incorporates) a display (e.g., the depicted example display 980, corresponding to the display 580), such a computing device implementing the processing architecture 3000 may also include the display interface 985. Although more generalized types of interface may be employed in communicatively coupling to a display, the somewhat specialized additional processing often required in visually displaying various forms of content on a display, as well as the somewhat specialized nature of the cabling-based interfaces used, often makes the provision of a distinct display interface desirable. Wired and/or wireless signaling technologies that may be employed by the display interface 985 in a communicative coupling of the display 980 may make use of signaling and/or protocols that conform to any of a variety of industry standards, including without limitation, any of a variety of analog video interfaces, Digital Visual Interface (DVI), DisplayPort, etc.

    [0078] More generally, the various elements of the computing devices described and depicted herein may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor components, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. However, determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.

    [0079] Some embodiments may be described using the expression "one embodiment" or "an embodiment" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Further, some embodiments may be described using the expression "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. Furthermore, aspects or elements from different embodiments may be combined.

    [0080] It is emphasized that the Abstract of the Disclosure is provided to allow a reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects.


    Claims

    1. An apparatus to detect race conditions comprising:

    a processor component (350), comprising a monitoring unit (354);

    a trigger component (1455) for execution by the processor component to configure the monitoring unit to detect a cache event associated with a race condition between accesses to a piece of data including to detect a cache event associated with a race condition between a first access to the piece of data by a first portion of an application routine executed by the processor component and a second access to the piece of data by a second portion of the application routine, and to capture an indication of a state of the processor component to generate monitoring data in response to an occurrence of the cache event; and

    a counter component (1454) for execution by the processor component to configure a counter of the monitoring unit to enable capture of the indication of the state of the processor component at a frequency less than every occurrence of the cache event; and

    a filter component for execution by the processor component to:

    retrieve identifiers of at least one of the first portion, the second portion, a core of the processor component that executes one of the first and second portions, or a thread of execution of one of the first and second portions;

    compare said identifiers to identifiers of the monitoring data to determine whether the race condition is between the first and second accesses; and

    remove from the monitoring data the captured indication based on the determination that the cache event does not arise from the race condition between the first and the second accesses to generate reduced data comprising a subset of multiple indications of the state of the processor component of the monitoring data.


     
    2. The apparatus of claim 1, the trigger component to dynamically enable the capture of the indication based on whether an application routine comprising multiple portions for concurrent execution by the processor component is currently executed by the processor component.
     
    3. The apparatus of claim 1, wherein the first access comprises a write operation and the second access comprises one of a read operation and a write operation.
     
    4. The apparatus of claim 1, the race condition comprising a race condition between a read operation of a first application routine executed by the processor component in a first thread of execution to read the piece of data, and a write operation of a second application routine executed by the processor component in a second thread of execution to write to the piece of data.
     
    5. The apparatus of claim 1, wherein the processor component is a first processor component comprising a first monitoring unit and the monitoring data are first monitoring data, the apparatus further comprising a second processor component (350b) comprising a second monitoring unit (354), the trigger component to further configure the second monitoring unit to detect the cache event and to capture an indication of a state of the second processor component as a second monitoring data in response to an occurrence of the cache event.
     
    6. A computer-implemented method for detecting race conditions comprising:

    detecting (2220) a cache event associated with a race condition between accesses to a piece of data by at least one application routine executed by a processor component; and

    recurringly capturing (2240) an indication of a state of the processor component as monitoring data in response to an occurrence of the cache event at a selected interval of multiple occurrences of the cache event;

    characterized by

    retrieving identifiers of at least one of a first portion of the application routine, a second portion of the application routine, a core of the processor component that executes one of the first and second portions, or a thread of execution of one of the first and second portions;

    comparing said identifiers to identifiers of the monitoring data to determine whether the race condition is between a read operation of the first portion and a write operation of the second portion; and

    removing from the monitoring data the captured indication based on the determination that the cache event does not arise from the race condition between the read operation of the first portion of the application routine and the write operation of the second portion of the application routine to generate a reduced data comprising a subset of multiple indications of the state of the processor component of the monitoring data.


     
    7. The computer-implemented method of claim 6, comprising dynamically enabling capture of the indication based on whether the application routine is currently at least partly executed by the processor component.
     
    8. The computer-implemented method of claim 6, the race condition comprising a race condition between a first write operation of the application routine executed by the processor component in a first thread of execution to write to the piece of data, and a second write operation of another application routine executed by the processor component in a second thread of execution to write to the piece of data.
     
    9. The computer-implemented method of claim 6, the indication comprising at least one of contents of a register of the processor component, an address of an instruction pointer of the processor component, an indication of what access operation triggered the cache event, a type of the cache event, an identifier of a core of the processor that executed an access operation that triggered the cache event, or an identifier of a thread of execution on which an access operation that triggered the cache event was executed.
     
    10. The computer-implemented method of claim 6, the cache event involving a cache of the processor component and comprising at least one of a read-for-ownership [RFO] cache event or a hit-modified [HITM] cache event.
     
    11. The computer-implemented method of claim 6, wherein the processor component is a first processor component and the monitoring data are first monitoring data, the computer-implemented method further comprising capturing an indication of a state of a second processor component as a second monitoring data in response to the occurrence of the cache event at the selected interval of multiple occurrences of the cache event.
     
    12. The computer-implemented method of claim 11, comprising generating a reduced data comprising a subset of multiple indications of the state of the first processor component of the first monitoring data and a subset of multiple indications of the state of the second processor component of the second monitoring data by removing the indication of the state of the second processor component from the second monitoring data based on the cache event not arising from a race condition between a first access to the piece of data by a first portion of the application routine and a second access to the piece of data by a second portion of the application routine.
     
    13. At least one machine-readable storage medium comprising instructions that when executed by a processor component, cause the processor component to perform the method of any of claims 6-12.
     


    Drawing
