(19) European Patent Office
(11) EP 3 017 372 B1

(12) EUROPEAN PATENT SPECIFICATION

(45) Mention of the grant of the patent:
04.12.2019 Bulletin 2019/49

(21) Application number: 14820229.4

(22) Date of filing: 20.06.2014

(51) International Patent Classification (IPC):
G06F 12/00 (2006.01)
G06F 12/0862 (2016.01)
G06F 12/06 (2006.01)
G06F 12/0897 (2016.01)
G06F 9/06 (2006.01)
G06F 9/30 (2018.01)
G06F 13/28 (2006.01)
G11C 7/10 (2006.01)

(86) International application number:
PCT/US2014/043384

(87) International publication number:
WO 2015/002753 (08.01.2015 Gazette 2015/01)

(54) MEMORY CONTROLLED DATA MOVEMENT AND TIMING


(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30) Priority: 03.07.2013 US 201313935303

(43) Date of publication of application:
11.05.2016 Bulletin 2016/19

(60) Divisional application:
19197613.3

(73) Proprietor: Micron Technology, Inc.
Boise, ID 83716-9632 (US)

(72) Inventor:
  • MURPHY, Richard C.
    Boise, Idaho 83713 (US)

(74) Representative: Gill Jennings & Every LLP
The Broadgate Tower
20 Primrose Street
London EC2A 2ES (GB)


(56) References cited:
US-A1- 2007 266 206
US-A1- 2010 110 745
US-A1- 2012 158 682
US-A1- 2012 256 653
US-A1- 2013 039 112
US-A1- 2013 148 402
US-A1- 2008 183 984
US-A1- 2010 121 994
US-A1- 2012 254 591
US-A1- 2012 284 436
US-B1- 6 754 779
  
  • August et al.: "Hybrid Memory Cube (HMC)", Proceedings of the 23rd Hot Chips Symposium, 1 January 2011 (2011-01-01), pages 1-24, XP055237450. Retrieved from the Internet: URL:http://www.hotchips.org/wp-content/uploads/hc_archives/hc23/HC23.18.3-memory-FPGA/HC23.18.320-HybridCube-Pawlowski-Micron.pdf [retrieved on 2015-12-18]
  • Vivek Seshadri et al.: "Gather-scatter DRAM", Proceedings of the 48th International Symposium on Microarchitecture, MICRO-48, 1 January 2015 (2015-01-01), pages 267-280, XP055339753, New York, New York, USA. DOI: 10.1145/2830772.2830820. ISBN: 978-1-4503-4034-2
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

Technical Field



[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to apparatuses, electronic device readable media, and methods for memory controlled data movement and timing.

Background



[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computing devices or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data (e.g., user data, error data, etc.) and includes random access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), Electrically Erasable Programmable ROM (EEPROM), Erasable Programmable ROM (EPROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), among others.

[0003] Computing devices typically include a number of processors coupled to a main memory (e.g., DRAM) and a secondary memory (e.g., storage, such as a hard drive or solid state drive). Main memory interfaces are typically tightly coupled or slaved to the processor. In DRAM, this is accomplished by the processor's memory controller managing explicitly timed interfaces (e.g., via a row address strobe (RAS) / column address strobe (CAS) protocol). Memory has typically been optimized for density and processors have typically been optimized for speed, causing a disparity between the two known as the memory wall or the von Neumann bottleneck. This disparity typically results in bandwidth between the processor and memory being a more limiting resource than the speed of the processor or the density of the memory.

[0004] By way of further background, reference is made to the following publications:

United States Patent Application Publication No. US 2012/0254591 A1, to Hughes et al., discloses a method for performing gather and scatter stride instructions in a computer processor, which method includes: receiving a request from a requesting device for data stored in a memory, wherein the requested data is distributed in the memory; gathering the requested data from the memory; and sending the requested data to the requesting device without sending non-requested data surrounding the requested data. The execution of a gather stride instruction causes a conditional storage of strided data elements from memory into the destination register according to bit values of a write mask. Data elements are read from a memory source, such as an entire cache line, prior to gathering the requested data. Based on a write mask read with the data elements, one or more of the read data elements (e.g., the entire cache line) are sent to a destination register.

United States Patent Application Publication No. US 2013/0148402 A1, to Chang et al., discloses a control scheme for a 3D memory integrated circuit that includes a master chip and at least one slave chip. The master chip includes a main memory core, a first local timer, an I/O buffer, a first pad and a second pad. The at least one slave chip is stacked with the master chip. Each slave chip includes a slave memory core, a second local timer and a third pad. A timing control circuit is provided on the master chip, instead of on the slave chips, to improve circuit yield during manufacture.

United States Patent Application Publication No. US 2007/266206 A1, to Kim et al., discloses a scatter/gather technique which optimises unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.

United States Patent Application Publication No. US 2012/158682 A1, to Yarnell et al., discloses a method of reading and writing data to and from a transactional database using a scatter-gather routine that minimises the time in which the database is subject to a transaction log.

United States Patent Application Publication No. US 2008/183984 A1, to Beucler et al., discloses an integrated circuit including an array of memory cells, addressing circuitry, and timing and control logic. The addressing circuitry is configured to address multiple locations of memory cells in response to a clock signal. The timing and control logic is also responsive to the clock signal and is configured to control a read-modify-write operation to read a first group of data bits from a first address location, modify at least one of the bits of the first group, and write the modified first group back to the first address location. The read-modify-write operations are performed within one cycle of the clock signal.

"Hybrid Memory Cube (HMC)", August et al., Proceedings of the 23rd Hot Chips Symposium, 1 January 2011, pages 1-24, XP055237450, provides useful background description of hybrid memory cubes and their applications.

United States Patent Application Publication No. US 2010/110745 A1, to Jeddeloh et al., discloses a method of operating a memory apparatus which includes receiving a request for data stored in a main memory device of the apparatus, from any one or more of a plurality of requesting devices, wherein the requested data is distributed in the main memory device. The requested data is gathered from the main memory device and sent collectively to the one or more requesting devices, without also sending the non-requested data that surrounds the requested data in the main memory device. The main memory device comprises multiple stacked-die memory vaults. A wide data word is delivered from multiple memory vaults to a cache line.

United States Patent No. US 6,754,779, to Magro, seeks to improve the performance of data read operations in a read buffer that receives and stores requested information in response to read requests from multiple requesting master devices. A full cache line of data is read from a memory device into the read buffer in response to any read request. Reading of the memory device is performed under the control of an SDRAM controller operating at its own frequency, asynchronously with respect to the frequencies of the requesting devices. The read buffer is capable of operation in either a demand read mode or a prefetch mode. In the demand read mode, the read buffer immediately forwards data to the requesting master. A full cache line of data containing the requested data is concurrently stored in the read buffer, such that a subsequent read request, from the same or another requesting device, that matches either the previously requested data or other data within the stored cache line is satisfied from the read buffer in zero wait states. Read prefetch is supported by receiving and storing additional data from memory; the prefetch data is retrieved without an immediate need for that data. It is asserted that the performance of read operations is increased in that any subsequent read request for data matching either the previously demanded data or the prefetch data causes the matching data to be provided to the requesting master in zero wait states, without an additional memory access.


Summary of the Invention



[0005] The present invention is defined in the appended independent claims to which reference should be made. Advantageous features are set out in the appended dependent claims.

[0006] The embodiments or examples of the following description which are not covered by the appended claims are considered as not being part of the invention.

Brief Description of the Drawings



[0007] 

Figure 1 illustrates a block diagram of an apparatus in the form of a computing device including a number of processors, a number of main memory devices, and an interface therebetween in accordance with a number of embodiments of the present disclosure.

Figure 2 illustrates a data movement operation between a processor and a main memory in accordance with a number of embodiments of the present disclosure.

Figure 3 includes an illustration of a more detailed view of a logic device coupled between the memory devices and the requesting devices in accordance with a number of embodiments of the present disclosure.

Figure 4 illustrates a block diagram of a number of address and offset based requests and responses in accordance with a number of embodiments of the present disclosure.


Detailed Description



[0008] An abstracted memory interface between a processor and a main memory can provide for decoupled timing (and, in some instances, decoupled naming) from explicit control by the processor. An example of a main memory with an abstracted interface is a hybrid memory cube (HMC). In HMC, this function is achieved by a packetized network protocol coupled with hardware logic (e.g., logic-layer memory control). Such interfaces can allow for a simplified processor-side memory controller interface, out-of-order return of main memory requests, localized RAS and/or CAS management of the main memory, advanced memory topologies and sharing strategies in multiprocessor apparatuses, both homogeneous and heterogeneous, locally managed synchronization functions and metadata storage, and resilience (e.g., where failed portions of memory such as words or blocks can be remapped, such as by a logic layer in memory).

[0009] Applications such as high performance computing, graph-based analysis, data mining, national security, database technology, and other commercial drivers exhibit sparse memory access patterns that are ill-suited to the cache based architectures of many processors, because the data generally exhibits poor spatial locality and/or temporal locality. Generalized data movement functions for main memory can afford an opportunity to better utilize memory bandwidth and cache based architectures.

[0010] The present disclosure includes apparatuses, electronic device (e.g., computing device) readable media, and methods for memory controlled data movement and timing. A number of electronic device readable media store instructions executable by an electronic device to provide programmable control of data movement operations within a memory (e.g., a main memory). The main memory can provide timing control, independent of any associated processor, for interaction between the memory and the associated processor. As will be appreciated by one of ordinary skill in the art, "main memory" is a term of art that describes memory storing data that can be directly accessed and manipulated by a processor. An example of main memory is DRAM. Main memory provides primary storage of data and can be volatile memory or non-volatile memory (e.g., in the case of non-volatile RAM managed as a main memory, such as a non-volatile dual in-line memory module (DIMM)). Secondary memory can be used to provide secondary storage of data and may not be directly accessible by the processor. However, as used herein, "main memory" does not necessarily have to be volatile memory, and can in some embodiments be non-volatile memory.

[0011] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. As used herein, the designators "B", "L", "M", "N", and "P", particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. As used herein, "a number of" a particular thing can refer to one or more of such things (e.g., a number of memory devices can refer to one or more memory devices).

[0012] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 106 may reference element "06" in Figure 1, and a similar element may be referenced as 206 in Figure 2. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, as will be appreciated, the proportion and the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present invention, and should not be taken in a limiting sense.

[0013] Figure 1 illustrates a block diagram of an apparatus in the form of a computing device 100 including a number of processors 102-1,..., 102-P, a number of main memory devices 104-1,..., 104-M, and an interface 106 therebetween in accordance with a number of embodiments of the present disclosure. As used herein, a computing device 100, a processor 102, a memory device 104, or an interface 106 might also be separately considered an "apparatus." The computing device 100 can be any electronic device including a processor and main memory storing data that is accessed by the processor. Examples of computing devices 100 include supercomputers, personal computers, video cards, sound cards, and mobile electronic devices such as laptop computers, tablets, smartphones, and the like.

[0014] The processors 102-1,..., 102-P can be any type of general purpose processor. For example, the processors 102-1,..., 102-P can be cache based processors, vector processors (e.g., single instruction multiple data (SIMD) processors), scalar processors (e.g., single instruction single data (SISD) processors), multiple instruction single data (MISD) processors, multiple instruction multiple data (MIMD) processors, etc. In some embodiments, the processors 102-1,..., 102-P do not provide timing control of the main memory devices 104-1,..., 104-M. The processors 102-1,..., 102-P can be configured to send a request via the interface 106 to the main memory devices 104-1,..., 104-M without being aware of a read time associated with the request (e.g., the processors 102-1,..., 102-P may not control and/or be aware of when the requested data will be received by the processors 102-1,..., 102-P). The request from the processors 102-1,..., 102-P may not have timing associated therewith, leaving the determination of when to respond to the request to the main memory devices 104-1,..., 104-M.

[0015] The main memory devices 104-1,..., 104-M can store data that is operated on by the processors 102-1,..., 102-P. Examples of main memory devices include DRAM and HMC, among others. However, according to a number of embodiments of the present disclosure, the main memory devices 104-1,..., 104-M can control their timing independently of the processors 102-1,..., 102-P for interaction between the main memory devices 104-1,..., 104-M and the processors 102-1,..., 102-P. For example, the main memory devices 104-1,..., 104-M can provide their own timing control of a row address strobe (RAS) and/or a column address strobe (CAS) for accessing the main memory devices 104-1,..., 104-M. Examples of such timing control include random read or write cycle time, access time, etc.

[0016] In some embodiments, programmable control of data movement operations within the main memory devices 104-1,..., 104-M can be provided (e.g., by executable instructions provided by a programmer). Examples of such operations include gather/scatter operations, address based operations, offset based operations, strided operations, pointer based operations, etc. Enhanced data movement semantics can be exposed to the programmer (according to some previous approaches, the programmer was not provided with an ability to control data movement operations in a main memory). Such embodiments can be beneficial in allowing instructions to be written that reduce use of the bandwidth of the interface 106 by moving data within the main memory devices 104-1,..., 104-M before it is transferred across the interface 106 to the processors for further operation. Benefits can include reducing the overall latency of a computation or a sequence of such operations. More specific examples of such movement operations within the main memory devices 104-1,..., 104-M are described herein. For example, the main memory devices 104-1,..., 104-M can store a data structure and traverse the data structure independent of an instruction stream from the processors 102-1,..., 102-P. Although the processors 102-1,..., 102-P may request certain data from the main memory devices 104-1,..., 104-M, the main memory devices 104-1,..., 104-M can independently traverse the data structure and move data in order to respond more efficiently (e.g., in terms of use of the bandwidth of the interface 106), even though the request from the processors 102-1,..., 102-P did not specifically call for the data movement. In addition to more efficiently utilizing the bandwidth of the interface 106, embodiments of the present disclosure can reduce the power consumption associated with use of the interface 106 by transmitting data across the interface 106 fewer times for equivalent results (e.g., transmitting dense data, which requires fewer transfers than transmitting sparse data).

[0017] Figure 2 illustrates a data movement operation between a processor 202 and a main memory 204 in accordance with a number of embodiments of the present disclosure. The processor 202 can be analogous to the processors 102-1,..., 102-P illustrated in Figure 1. The processor 202 can be a cache based processor and can include, for example, a processing unit (e.g., a central processing unit "CPU") 208, a first level cache "L1" 210-1 coupled to the processing unit 208, a second level cache "L2" 210-2 coupled to the first level cache 210-1, and a number of additional levels of cache "LN" 210-L coupled to the second level cache 210-2. The first level cache 210-1, second level cache 210-2, and additional levels of cache 210-L may be referred to herein generically as cache 210. Embodiments are not limited to a particular number of levels of cache and can include more or fewer than those illustrated in Figure 2. The cache 210 can be used by the processing unit 208 to reduce average time to access the main memory 204 by storing frequently used data from the main memory 204. The latency for accessing the cache 210 by the processing unit 208 is less than the latency for accessing the main memory 204 by the processing unit 208 via the interface 206. The interface 206 can be analogous to the interface 106 illustrated in Figure 1 and the main memory 204 can be analogous to the main memory devices 104-1,..., 104-M illustrated in Figure 1.

[0018] Figure 2 also illustrates a representation of a cache line 212 in the cache 210. Each row in the diagram under the cache 210 represents a cache line, and a particular cache line 212 is illustrated with data as indicated by the "x" in each block. A cache 210 can have a fixed cache line size (e.g., a certain number of bytes of data that can be stored in a cache line). Interactions with the cache (e.g., from the main memory 204) can occur in fixed data increments that are equal to a single cache line sized portion of data. According to a number of embodiments of the present disclosure, the main memory 204 can be configured to respond to a request for data from the processor 202 by gathering data that is distributed in the main memory 204 into a single cache line sized portion of data 216. With respect to Figure 2, the data that is distributed in the main memory 204 is represented by the "x" entries in the rows 214 of the main memory 204. The main memory 204 can be configured to gather the data (x's), as shown, into a single cache line sized portion of data 216 before transferring the data across the interface 206 to the processor 202. As is illustrated, the requested data can be non-contiguously distributed in the main memory 204 such that the requested data and surrounding data comprise a plurality of cache line sized portions of data. As described herein, the main memory 204 can control a timing of the operation (e.g., the gather operation) and the processor 202 may be unaware of a read time associated with the request for data (e.g., the processor may not know or control when the data will be sent from the main memory 204 to the processor 202).
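
By way of illustration only, the gather semantics described above can be sketched in C: scattered elements are packed into one cache line sized buffer so that only that buffer need cross the interface 206. The fixed 64-byte line, the element list, and the function names are assumptions made for this example and do not describe any particular implementation.

    #include <stdint.h>
    #include <string.h>

    #define CACHE_LINE_BYTES 64 /* illustrative fixed cache line size */

    /* Pack elements scattered through memory-side storage into one cache
     * line sized buffer; addrs plays the role of the locations holding
     * the "x" entries of Figure 2. Returns the number of bytes packed. */
    size_t gather_to_cache_line(const uint8_t *mem,
                                const size_t *addrs, size_t n_elems,
                                size_t elem_size,
                                uint8_t line[CACHE_LINE_BYTES])
    {
        size_t used = 0;
        for (size_t i = 0; i < n_elems; i++) {
            if (used + elem_size > CACHE_LINE_BYTES)
                break; /* a response carries at most one cache line */
            memcpy(&line[used], &mem[addrs[i]], elem_size);
            used += elem_size;
        }
        return used;
    }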

[0019] In contrast, some previous approaches to accessing the main memory 204 included the processor 202 controlling the timing of the operation. Furthermore, such a request for data would have been met with a plurality of responses from the main memory 204 by transferring each row 214 containing the requested data (x's) without first gathering the data into a single cache line sized portion of data 216. Such previous approaches would consume more bandwidth of the interface 206, because each row 214 would be sent separately across the interface, potentially with non-requested data (e.g., surrounding data (represented by the blank boxes) that was not requested by the processor 202). It would then be up to the processor 202 to process (e.g., filter) the data received from the main memory 204 to isolate and further operate on the requested data (e.g., the x's). However, according to the present disclosure, the requested data can be sent to the processor without sending the non-requested data.

[0020] According to a number of embodiments of the present disclosure, the request from the processor can include an indication of a number of attributes. An attribute can be a specification that defines a property of an object, element, or file. The attribute may refer to the specific value for a given instance of data. For example, where the main memory stores data comprising an image, an attribute of the image may be pixel values that have a particular color (where the particular color is the attribute). In response to a request from the processor for attributes stored in the main memory 204, the main memory 204 can examine a data structure stored in the main memory to determine whether the data structure includes the attribute. The main memory 204 can return data that indicates the attribute (e.g., data indicating a particular color in an image) to the processor 202 in response to determining that the data structure includes the attribute. The number of attributes can be attributes that are to be gathered from the main memory 204 or attributes that are to be scattered to the main memory 204. Examples of the type of request (from the processor 202) for which gather operations may be particularly beneficial are search requests (e.g., "among the data stored in the data structure, please return values that match a criterion," where the "x's" represent the data that matches the criterion) and filter requests (e.g., "among the data stored in the data structure, please return values for which a given predicate returns a Boolean true value," where the "x's" represent the data that returns a Boolean true value for the predicate).
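
The search and filter requests just described can be pictured with a short C sketch in which memory-side logic scans data and returns only matching values; the flat array, the predicate type, and all names here are assumptions for illustration, not a description of any particular device.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical predicate type: returns true for values the request
     * asked for (e.g., pixels of a particular color). */
    typedef bool (*predicate_t)(uint32_t value, uint32_t attribute);

    static bool matches_color(uint32_t pixel, uint32_t color)
    {
        return pixel == color; /* here the attribute is a pixel value */
    }

    /* Scan a flat array with memory-side logic and collect only the
     * matching values; only out would cross the interface, e.g.
     * filter_in_memory(image, n, matches_color, red, out, capacity). */
    size_t filter_in_memory(const uint32_t *data, size_t n,
                            predicate_t pred, uint32_t attribute,
                            uint32_t *out, size_t out_cap)
    {
        size_t hits = 0;
        for (size_t i = 0; i < n && hits < out_cap; i++)
            if (pred(data[i], attribute))
                out[hits++] = data[i];
        return hits;
    }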

[0021] According to some embodiments of the present disclosure, after the processor 202 has modified the requested data, it can return the modified data to the main memory device 204. The main memory device 204 can receive the modified data (e.g., a single cache line sized portion of modified data 216) and scatter the modified data such that it is stored in the data structure of the main memory device 204 in the same locations from which the requested data was gathered. Thus, programmable control can be provided over data movement operations (e.g., gather and/or scatter operations) within the main memory 204.
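
Continuing the illustrative sketch above, the scatter operation is the inverse of the gather: the modified cache line sized buffer is unpacked back to the same scattered locations recorded when the data was gathered. The layout is again an assumption.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Inverse of the gather sketch above: unpack a modified cache line
     * sized buffer back to the same scattered locations recorded when
     * the data was gathered. */
    void scatter_from_cache_line(uint8_t *mem,
                                 const size_t *addrs, size_t n_elems,
                                 size_t elem_size,
                                 const uint8_t *line)
    {
        for (size_t i = 0; i < n_elems; i++)
            memcpy(&mem[addrs[i]], &line[i * elem_size], elem_size);
    }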

[0022] The main memory 204 (e.g., hardware logic associated with the main memory 204) can be configured to provide an indication that locations in the main memory 204, from which data is gathered, are unavailable until the gathered data is released by the processor 202. Such embodiments can provide a synchronization mechanism (e.g., so that stale data is not delivered in response to another request such as a direct memory access (DMA) request or a request from another processor). The indication can be a table, a full/empty bit, or a series of base and bounds registers, among other indications.
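
One of the indications mentioned above, the full/empty bit, can be sketched as a bitmap held by memory-side logic, with one bit per gatherable location that is cleared while the data is checked out and set again when the processor 202 releases it; the granularity and sizes below are assumptions made for the example.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* One full/empty bit per gatherable location: cleared ("empty")
     * while gathered data is checked out to a requester, set again when
     * the requester releases it. A second requester (e.g., a DMA
     * device) would test the bit before reading, so stale data is not
     * delivered. */
    static uint64_t full_empty[1024]; /* covers 65536 locations */

    static void mark_empty(size_t loc)
    {
        full_empty[loc / 64] &= ~(1ULL << (loc % 64));
    }

    static void mark_full(size_t loc)
    {
        full_empty[loc / 64] |= 1ULL << (loc % 64);
    }

    static bool is_available(size_t loc)
    {
        return (full_empty[loc / 64] >> (loc % 64)) & 1ULL;
    }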

[0023] In a number of embodiments, the request from the processor 202 can be for data to be modified by the main memory 204, where the data is distributed in a data structure of the main memory 204. The main memory 204 (e.g., hardware logic thereof) can be configured to provide modifications to data within the main memory 204 (e.g., via processing in memory (PIM)). In response to the request, the main memory 204 can originate and perform a data gather operation in the main memory 204 based on the data structure. The main memory 204 can be configured to originate and perform a data modify operation on the gathered data in the main memory (e.g., without transferring the data across the memory interface 206 and/or without use of a processor). An example of a modification operation includes adjusting a value of the gathered data (e.g., each unit of the gathered data). Such an example may be beneficial where the apparatus is, for example, a video card and the requested modification is, for example, to increase a brightness of a particular color in an image stored in the main memory 204 before the processor 202 performs a more complicated operation on the data comprising the image or transfers it to a peripheral device (e.g., a monitor). The main memory 204 can be configured to send the modified data to the processor 202 after completing the modification. As described herein, the main memory 204 can control timing of the main memory 204 independent of the processor 202. For those embodiments in which the main memory 204 includes an ability to modify data (as well as move data) without direct processor 202 control, the main memory 204 can be treated as a peer of the processor 202 (e.g., the main memory 204 can be treated as an extension of the processor's 202 cache 210 to extend the address space).
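
A modification operation of the kind just described (adjusting each unit of the gathered data, such as increasing the brightness of one color channel) could look like the following C sketch executed by memory-side logic; the byte-per-channel layout and the saturating arithmetic are assumptions for illustration.

    #include <stddef.h>
    #include <stdint.h>

    /* Gather-and-modify performed by memory-side logic: add delta to
     * every element whose location is listed in the request, saturating
     * at 255 (think "increase the brightness of one color channel"
     * before the image is handed to the processor or a peripheral). */
    void modify_in_memory(uint8_t *mem, const size_t *addrs, size_t n,
                          uint8_t delta)
    {
        for (size_t i = 0; i < n; i++) {
            unsigned v = mem[addrs[i]] + delta;
            mem[addrs[i]] = (v > 255u) ? 255u : (uint8_t)v;
        }
    }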

[0024] Figure 3 illustrates a block diagram of an apparatus in the form of a computing device 300 including a main memory device 304 and a number of requesting devices 302, 318, 321 in accordance with a number of embodiments of the present disclosure. Examples of requesting devices can include a processor 302, a DMA device 318, and/or a memory unit 321, among others. The processor(s) 302 can be analogous to the processors 102-1,..., 102-P illustrated in Figure 1. The memory unit 321 can be analogous to the main memory 104 illustrated in Figure 1 and/or to another memory unit other than a main memory. The computing device 300 can be analogous to the computing device 100 illustrated in Figure 1. In Figure 3, more detail is shown regarding a specific example of a main memory 304 that is a hybrid memory cube (HMC). The main memory HMC 304 illustrated in Figure 3 can be analogous to the main memory devices 104-1,..., 104-M illustrated in Figure 1.

[0025] An HMC 304 can be a single package including multiple memory devices 320-1, 320-2, 320-3,..., 320-B (e.g., DRAM dies) and a hardware logic device 322 (e.g., a logic die, application-specific integrated circuit (ASIC), corresponding logic in another device, etc.) stacked together using through silicon vias (TSVs), although other embodiments may differ (e.g., the hardware logic device 322 may not necessarily be stacked with the memory devices 320). The memory within the HMC 304 can be organized into subsets (e.g., vaults) 324, where each vault 324 is functionally and operationally independent of other vaults 324. Each vault 324 can include a partition of memory from each of the memory devices 320. Each vault 324 can include a hardware logic unit 328 (e.g., vault controller) in the logic device 322 that functions analogously to a memory controller for the vault 324. Each vault controller 328 can be coupled to a respective subset of the plurality of memory devices 320. For example, the vault controller 328 can manage memory operations for the vault 324 including determining its own timing requirements (e.g., instead of being managed by a requesting device such as a processor). The vault controller 328 can include a number of buffers for requests and responses with a processor 302 and can utilize the number of buffers to send responses to the processor 302 out of order with respect to an order in which the requests were received from the processor 302. Thus, the processor 302 can be configured to send a request via an interface to the HMC 304 without being aware of a read time associated with the request.

[0026] Figure 3 includes an illustration of a more detailed view of a logic device 322 coupled between the memory devices 320 and the requesting devices 302, 318, 321. The logic device 322 can include memory control logic 328 for each vault (e.g., vault control). The vault controller 328 can be coupled to a shared memory control logic 330 for the HMC 304 that can consolidate functions of the vaults 324. However, the shared memory control logic 330 does not necessarily comprise a central memory controller in the traditional sense, because each of the vaults 324 can be directly controlled (e.g., controlled timing, access, etc.) independently of each other and because the shared memory control logic 330 does not necessarily interface (e.g., directly interface) with the requesting devices 302, 318, 321. Thus, in some embodiments, the computing device 300 and/or the main memory 304 does not include a central memory controller. The memory control logic 330 can be coupled to a switch 332 (e.g., a crossbar switch). The switch 332 can provide availability of the collective internal bandwidth from the vaults 324 to the input/output (I/O) links 336. The switch 332 can be coupled to link interface controllers 334, which control I/O links 336 to a requesting device 302, 318, 321. For example, the I/O links 336 can be serial fully duplexed input/output links. The logic device 322 can provide a logical/physical interface for the main memory 304.

[0027] The main memory 304 can receive requests from requesting devices such as a processor 302, a DMA device 318, and/or a memory unit 321, among others. As described herein, in some embodiments, the main memory 304 can be configured to provide an indication that locations in the main memory 304, from which data is gathered, are unavailable until the gathered data is released by the requesting device 302, 318, 321. Such embodiments can provide a synchronization mechanism (e.g., so that stale data is not delivered in response to a request from the DMA device 318 while the data is being operated on by the processor 302).

[0028] Figure 4 illustrates a block diagram of a number of address and offset based requests and responses in accordance with a number of embodiments of the present disclosure. A requesting device (e.g., a number of cache based processors) can provide a request to a main memory device and, in some embodiments, the request can include a tag 448 to allow for buffering by the requesting device until the main memory device responds to the request. Because the main memory device controls the timing of operations in response to requests from the requesting device, the requesting device can benefit from keeping track of tags 448 for its requests so that when the main memory device responds to the request, the requesting device can quickly identify to which request the response applies based on the tag 448. The main memory device can store a data structure (e.g., a linked list, a tree, and a graph, among others) and be configured to traverse the data structure in response to the request from the requesting device. Requests can include address based requests 440, offset based requests 444, strided requests, and pointer based requests, among others. The main memory device can be configured to respond in kind (e.g., an address based request 440 can be met with an address based response 442 and an offset based request 444 can be met with an offset based response 446, etc.).
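
Because responses arrive whenever the main memory device chooses, the requester-side tag bookkeeping described above can be pictured as a small pending-request table; the table size, fields, and names below are assumptions made for the example.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_PENDING 64 /* assumed table size */

    /* Requester-side bookkeeping: the memory device answers in its own
     * time, possibly out of order, so the requester buffers context per
     * tag and matches each arriving response by its tag. */
    struct pending {
        bool     in_use;
        uint32_t tag;
        void    *buffer; /* where the response data should land */
    };
    static struct pending table[MAX_PENDING];

    static bool remember(uint32_t tag, void *buffer)
    {
        for (size_t i = 0; i < MAX_PENDING; i++)
            if (!table[i].in_use) {
                table[i] = (struct pending){ true, tag, buffer };
                return true;
            }
        return false; /* table full: throttle new requests */
    }

    static void *complete(uint32_t tag) /* called on response arrival */
    {
        for (size_t i = 0; i < MAX_PENDING; i++)
            if (table[i].in_use && table[i].tag == tag) {
                table[i].in_use = false;
                return table[i].buffer;
            }
        return NULL; /* unknown tag */
    }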

[0029] An address based request can include a tag 448 that identifies the request, an indication of a type and/or size of requested elements 450 (e.g., a number of bytes, words, etc.), an indication of a number of requested elements 452, and a number of addresses 454-1,..., 454-N where the elements are stored. An address based response can include the tag 448 and the data elements 456-1,..., 456-N corresponding to the request.

[0030] An offset based request can include a tag 448 that identifies the request, an indication of a type and/or size of requested elements 450 (e.g., a number of bytes, words, etc.), a base address 458, an indication of a number of requested elements 452, and a number of offset indices 460-1,..., 460-N where the elements are stored. An offset based response can include the tag 448 and the data elements 456-1,..., 456-N corresponding to the request.
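
The address based and offset based packet layouts of the two preceding paragraphs can be transcribed into C structures as below; the description fixes only the order and meaning of the fields, so the field widths chosen here are assumptions.

    #include <stdint.h>

    /* Packet layouts transcribing the fields of Figure 4. */
    struct address_request {
        uint32_t tag;       /* 448: identifies the request           */
        uint16_t elem_type; /* 450: type/size of requested elements  */
        uint16_t n_elems;   /* 452: number of requested elements     */
        uint64_t addrs[];   /* 454-1..454-N: one address per element */
    };

    struct offset_request {
        uint32_t tag;       /* 448 */
        uint16_t elem_type; /* 450 */
        uint16_t n_elems;   /* 452 */
        uint64_t base;      /* 458: base address                     */
        uint32_t offsets[]; /* 460-1..460-N: indices from the base   */
    };

    struct response { /* shape shared by both response kinds */
        uint32_t tag;    /* 448: echoed so the requester can match it */
        uint8_t  data[]; /* 456-1..456-N: the gathered elements       */
    };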

[0031] Although not specifically illustrated, a strided request can be similar to an offset based request in that the request can include a tag that identifies the request, an indication of a type and/or size of requested elements (e.g., a number of bytes, words, etc.), a base address, and an indication of a number of requested elements. However, instead of including offsets, the strided request includes a stride (e.g., a number that can be added to the base address or the previously accessed address to find the next desired address). A strided response can include the tag and the data elements corresponding to the request.
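
A strided request can thus be expanded into explicit element addresses with one multiply-add per element, as in the following sketch (a byte-addressed stride from the base is assumed); the resulting addresses could feed a gather such as the one sketched earlier.

    #include <stddef.h>
    #include <stdint.h>

    /* Expand a strided request into the element addresses it denotes:
     * addr_i = base + i * stride. */
    void expand_stride(uint64_t base, uint64_t stride, size_t n_elems,
                       uint64_t *addrs_out)
    {
        for (size_t i = 0; i < n_elems; i++)
            addrs_out[i] = base + i * stride;
    }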

[0032] Although not specifically illustrated, a pointer based request can include a pointer to a data structure in the main memory, an indication of a number of attributes (e.g., a number of attributes to be gathered by the main memory from the data structure and returned to the requesting device), an indication of a list of pointers to be dereferenced (e.g., including an indication of how many pointers are to be dereferenced), and a number of conditions and corresponding operations for the request. The main memory can examine the data structure according to the pointer, the indication of the number of attributes, and the indication of the list of pointers and perform an operation in response to a corresponding condition being met. The main memory can examine the data structure by dereferencing the list of pointers until an end of the list is reached or until a threshold number of the pointers have been dereferenced. Dereferencing a pointer can include retrieving data indicated by the pointer (e.g., retrieving data from a location in the main memory indicated by the pointer). Examining the data structure can include determining whether the data structure referenced by a particular pointer includes the attribute, and the main memory can return data including the attribute to the requesting device in response to determining that the data structure referenced by the pointer includes the attribute. In some embodiments, the main memory can generate a new request by cloning the request from the requesting device in response to determining that the data structure referenced by the pointer includes the attribute.

[0033] Examples of conditions and corresponding operations include returning data including an attribute to the requesting device in response to the data matching a set value or search key. Another example includes performing an atomic update of data including an attribute in response to a match. Another example includes ending operations for the request in response to a remaining number of pointers being a sentinel value (e.g., a value that causes the operations to end, for example, zero) or a threshold number of pointers having been dereferenced. Examples of the data structure include a linked list, a tree, and a graph.
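
The pointer-dereferencing traversal described in the two preceding paragraphs can be sketched as a linked-list walk that stops at a NULL sentinel or after a threshold number of dereferences, collecting nodes whose attribute matches the request; the node layout is a hypothetical one chosen for the example.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical node layout for a linked data structure held in the
     * main memory; only next-pointer chasing and a single attribute are
     * modeled. */
    struct node {
        struct node *next;
        uint32_t     attribute;
    };

    /* Dereference the pointer chain until the end of the list (a NULL
     * sentinel) or until a threshold number of pointers have been
     * dereferenced, collecting nodes whose attribute matches. */
    size_t traverse_for_attribute(const struct node *head,
                                  uint32_t wanted, size_t max_derefs,
                                  const struct node **hits,
                                  size_t hits_cap)
    {
        size_t n = 0, derefs = 0;
        for (const struct node *p = head;
             p != NULL && derefs < max_derefs;
             p = p->next, derefs++)
            if (p->attribute == wanted && n < hits_cap)
                hits[n++] = p;
        return n;
    }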

[0034] Although not specifically illustrated as such, a non-transitory computing device readable medium for storing executable instructions can include all forms of volatile and non-volatile memory, including, by way of example, semiconductor memory devices, DRAM, HMC, EPROM, EEPROM, flash memory devices, magnetic disks such as fixed, floppy, and removable disks, other magnetic media including tape, optical media such as compact discs (CDs), digital versatile discs (DVDs), and Blu-Ray discs (BD). The instructions may be supplemented by, or incorporated in, ASICs.

[0035] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and methods are used.

[0036] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.


Claims

1. A method comprising:

receiving, by a main memory device (104-1, 104-M, 204, 304), a request from a cache based processor (102-1, 102-P, 202, 302) for data (216) stored in the main memory device (104-1, 104-M, 204, 304), wherein the requested data (216) is distributed in the main memory device (104-1, 104-M, 204, 304) such that the requested data (216) and non-requested data surrounding the requested data (216) comprise a plurality of cache line (212) sized portions of data;

gathering the requested data (216) from the main memory device (104-1, 104-M, 204, 304), wherein the gathering is performed by the main memory device (104-1, 104-M, 204, 304);

controlling timing of gathering the requested data (216) independent of the cache based processor (102-1, 102-P, 202, 302) via a random read or write cycle time, or an access time for the request, such that the main memory device (104-1, 104-M, 204, 304) determines when to respond to the request, wherein the controlling of the timing is performed by the main memory device (104-1, 104-M, 204, 304); and

sending, by the main memory device, the requested data (216) collectively in a single cache line (212) sized portion of data to the cache based processor (102-1, 102-P, 202, 302) without sending the non-requested data surrounding the requested data (216).


 
2. The method of claim 1, wherein the request comprises a direct memory access, DMA, request.
 
3. The method of either one of claims 1 or 2, wherein the method includes, in association with gathering the requested data (216), providing an indication that locations in the main memory device (104-1, 104-M, 204, 304), from which the requested data (216) is gathered, are unavailable until the requested data (216) is released by the cache based processor (102-1, 102-P, 202, 302).
 
4. The method of either one of claims 1 or 2, wherein the method includes, subsequent to sending the requested data (216) to the cache based processor (102-1, 102-P, 202, 302):

receiving from the cache based processor (102-1, 102-P, 202, 302), modified data corresponding to the requested data (216); and

scattering the modified data in the main memory device (104-1, 104-M, 204, 304) such that the modified data is stored in the same locations in which the requested data (216) was previously stored.


 
5. The method of any one of claims 1-4, wherein the request includes at least one request selected from the group consisting of an address based request (440), an offset based request (444), a strided request and a pointer based request.
 
6. The method of any one of claims 1-5, wherein the request includes at least one request selected from the group consisting of a search request and a filter request.
 
7. An apparatus, comprising:
a main memory device configured to:

receive a request from a cache based processor (102-1, 102-P, 202, 302) for data (216) stored in the main memory device (104-1, 104-M, 204, 304), wherein the requested data (216) is distributed in the main memory device (104-1, 104-M, 204, 304) such that the requested data (216) and non-requested data surrounding the requested data (216) comprise a plurality of cache line (212) sized portions of data;

gather the requested data (216);

control timing of gathering the requested data (216) independent of the cache based processor (102-1, 102-P, 202, 302) via a random read or write cycle time, or an access time for the request such that the main memory device (104-1, 104-M, 204, 304) determines when to respond to the request; and

send the requested data (216) collectively in a single cache line (212) sized portion of data to the cache based processor (102-1, 102-P, 202, 302) without sending non-requested data surrounding the requested data (216).


 
8. The apparatus of claim 7, wherein the main memory device (104-1, 104-M, 204, 304) is a hybrid memory cube, HMC, (304).
 
9. The apparatus of claim 8, wherein the main memory device (104-1, 104-M, 204, 304) includes a hardware logic device (322) and a plurality of memory devices (320-1, 320-2, 320-3, 320-B) formed in a single package.
 
10. The apparatus of claim 9, wherein the hardware logic device (322) and the plurality of memory devices (320-1, 320-2, 320-3, 320-B) are stacked with each other.
 
11. The apparatus of claim 9 or 10, wherein the plurality of memory devices (320-1, 320-2, 320-3, 320-B) are organized into vaults (324); and
wherein the hardware logic device (322) comprises a plurality of logic units (328) each coupled to a respective vault (324) of the plurality of memory devices (320-1, 320-2, 320-3, 320-B).
 
12. The apparatus of any one of claims 9-11, wherein the hardware logic device (322) is configured to provide timing control by providing timing control of at least one of a row address strobe (RAS) or a column address strobe, CAS, of the plurality of memory devices (320-1, 320-2, 320-3, 320-B).
 
13. A non-transitory computing device readable storage medium storing a set of instructions, wherein said instructions, when executed on the apparatus of claim 7, cause said apparatus to carry out the steps of the method of claim 1.
 





