PRIORITY APPLICATION
BACKGROUND
[0002] Memory bandwidth has become a bottleneck to system performance in high-performance
computing, high-end servers, graphics, and (very soon) mid-level servers. Microprocessor
enablers are doubling the number of cores and threads per core to greatly increase performance and
workload capabilities by dividing work sets into smaller blocks and distributing
them among an increasing number of work elements, i.e., cores. Having multiple compute
elements per processor results in an increasing amount of memory per compute element.
This results in a greater need for memory bandwidth and memory density to be tightly
coupled to a processor to address these challenges. Current memory technology roadmaps
do not provide the performance to meet the central processing unit (CPU) and graphics
processing unit (GPU) memory bandwidth goals.
[0003] To address the need for memory bandwidth and memory density to be tightly coupled
to a processor, a hybrid memory cube (HMC) may be implemented so that memory may be
placed on the same substrate as a controller, enabling the memory system to perform
its intended task more efficiently. The HMC may feature a stack of individual memory
die connected by internal vertical conductors, such as through-silicon vias (TSVs),
which electrically connect the stacked memory die with a controller, such as to combine
high-performance logic with dynamic random-access memory (DRAM). HMC delivers high
bandwidth and efficiency, uses less energy to transfer data, and provides a small
form factor. In one embodiment of a HMC, the controller
comprises a high-speed logic layer that interfaces with vertical stacks of DRAM that
are connected using TSVs. The DRAM handles data, while the logic layer handles DRAM
control within the HMC.
[0004] In other embodiments, a HMC may be implemented on, for example, a multi-chip module
(MCM) substrate or on a silicon interposer. A MCM is a specialized electronic package
where multiple integrated circuits (ICs), semiconductor dies or other discrete components
are packaged onto a unifying substrate thereby facilitating their use as a component
(e.g., thus appearing as one larger IC). A silicon interposer is an electrical interface
that routes from one connection (e.g., a socket) to another. The purpose of an interposer
is to spread a connection to a wider pitch or to reroute a connection to a different
connection.
[0005] However, the DRAM stack in a HMC has more bandwidth and signal count than many applications
can use. The high signal count and high bandwidth of the DRAM stack in a HMC make
a cost-effective host interface difficult.
[0006] US2009/021974 discloses a semiconductor device where multiple chips of identical design can be
stacked, and the spacer and interposer eliminated, to improve three-dimensional coupling
information transmission capability.
[0007] US2011/246746 discloses apparatuses, stacked devices and methods of forming dice stacks on an interface
die. A dice stack includes at least a first die and a second die, and conductive paths
coupling the first die and the second die to the common control die.
[0008] US2010/195421 discloses systems and methods such as those that operate to control a set of delays
associated with one or more data clocks to clock a set of data bits into one or more
transmit registers, one or more data strobes to transfer the set of data bits to at
least one receive register, and/or a set of memory array timing signals to access
a memory array on a die associated with a stacked-die memory vault.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009]
Fig. 1 illustrates a 72 bit vault for a flexible memory system according to an embodiment;
Fig. 2 illustrates a 36 bit vault for a flexible memory system according to another
embodiment;
Fig. 3 illustrates a 36 bit vault for a flexible memory system according to another
embodiment;
Fig. 4 illustrates a flexible memory system according to an embodiment;
Fig. 5 illustrates a block diagram of a computer system according to an embodiment;
Fig. 6 illustrates a block diagram of another computer system;
Figs. 7a-b illustrate a flexible memory system according to an embodiment;
Fig. 8 is a plot showing the power savings according to an embodiment; and
Fig. 9 is a flowchart of a method for forming a flexible memory system according to
an embodiment.
DETAILED DESCRIPTION
[0010] The present invention is as defined in claim 1.
[0011] The following description and the drawings sufficiently illustrate specific embodiments
to enable those skilled in the art to practice them. Other embodiments may incorporate
structural, logical, electrical, process, and other changes. Portions and features
of some embodiments may be included in, or substituted for, those of other embodiments.
Embodiments set forth in the claims encompass available equivalents of those claims.
[0012] A flexible memory system may be provided by tying vaults together (e.g., above, within
or below the DRAM stack) to create a solution with a low contact count while keeping
a low power profile. Herein, contacts refer to the leads, pins, solder balls or other
types of interconnects that couple an integrated circuit to another device, such as
a circuit board. Thus, the terms leads, pins, solder balls and other types of interconnects
may be used interchangeably herein.
[0013] The flexible memory system provides a range of solutions from no vaults tied together
for the highest bandwidth to tying the available vaults together for a low contact
count solution. The low contact count solution can be applied to high density memory
modules and low cost/low power system on a chip (SOC).
[0014] Fig. 1 illustrates a 72 bit vault interface block 100 of a controller in a flexible
memory system according to an embodiment. The 72 bit vault interface block 100 includes
a command interface block (CIB) 110 and two data interface blocks (DIB) 120, 122.
The CIB 110 includes contacts 112, including contacts for a first set of command signals,
serial command signals and a second set of command signals. Two Data Interface Blocks
(DIBs) 120, 122 are also illustrated in Fig. 1. Each of the DIBs 120, 122 provides
a plurality of contacts 124, including contacts for data input/output (I/O), the data
bus, clock signals, and reset data I/O.
[0015] Memory vaults may be formed by a stacked plurality of memory arrays, wherein each
memory array of a respective vault is located on a respective one of a plurality of
stacked memory dies. The command interfaces of the vaults of the vault pair may be
tied together such that a vault pair shares a common command interface block of a
vault interface block (e.g., below the DRAM stack) to create a solution having a low
contact count while keeping a low power profile.
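The contact-count arithmetic behind this sharing can be sketched in a few lines; the per-block contact counts below are hypothetical placeholders, not figures from this description:

```python
# Illustrative sketch (not from the specification): contact-count effect of
# sharing one command interface block (CIB) per vault pair. The per-block
# contact counts below are hypothetical placeholders.

CIB_CONTACTS = 24   # hypothetical contacts per command interface block
DIB_CONTACTS = 44   # hypothetical contacts per data interface block

def contacts(num_vaults: int, vaults_per_cib: int) -> int:
    """Total controller contacts when `vaults_per_cib` vaults share a CIB.

    Each vault keeps its own data interface block; only the command
    interface is shared among the vaults tied together.
    """
    dibs = num_vaults                      # one DIB per vault
    cibs = num_vaults // vaults_per_cib    # one CIB per tied-together group
    return dibs * DIB_CONTACTS + cibs * CIB_CONTACTS

# Untied vaults (highest bandwidth) vs. paired vaults (lower contact count):
untied = contacts(16, 1)
paired = contacts(16, 2)
```

Under these placeholder counts, pairing halves the number of command interface blocks, saving one CIB's worth of contacts per vault pair.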
[0016] Considering ball grid arrays, for example, existing fine pitch flip-chip technologies
may be used and may provide a 50 µm (130) x 150 µm (132) contact pitch in a die package
having a vault pitch in length 140 of 1.35 mm and a width 142 of 1.8 mm. The vault
interface block 100 may be matched in width to the effective DRAM vault pitch to minimize
the footprint on the controller.
[0017] Fig. 2 illustrates a 36 bit vault interface block 200 of a controller in a flexible
memory system according to another embodiment. In Fig. 2, one command interface block
(CIB) 210 having contacts 212 and one data interface block (DIB) 220 having contacts
224 are shown. Contacts in use are represented by the unfilled circles. Existing
fine pitch flip-chip technologies may be used to provide an appropriate contact pitch,
e.g., 50 µm (230) x 150 µm (232), in a die package having an appropriate vault pitch,
e.g., a length 240 of 0.9 mm and a width 242 of 1.8 mm.
[0018] Fig. 3 illustrates a 36 bit vault interface block 300 of a controller in a flexible
memory system according to another embodiment. In Fig. 3, one command interface block
(CIB) 310 having contacts 312 and one data interface block (DIB) 320 having contacts
324 are shown. In the embodiment shown in Fig. 3, the contact field, which is the
area of the die where the contacts are located, may include 6 rows of contacts 350.
Unused contacts are merely presented to show that a larger die may be used to provide
a 36 bit vault. Using a 150 µm (330) x 150 µm (332) contact pitch, the 36 bit vault
interface block 300 may have a length 340, e.g., of 0.9 mm, and a width 342, e.g., of
1.8 mm. The area 360 of the total contact field may be 2.7 mm2 (0.9 mm x 3.0 mm).
[0019] Fig. 4 illustrates a flexible memory system 400 according to an embodiment. The flexible
memory system 400 shown in Fig. 4 may include a controller 410 having a number n
of 72 bit vault interface blocks. However, those skilled in the art will recognize
that alternative vault interface blocks may be implemented. Pairing vaults using eight
36 bit vault interface blocks uses 21.6 mm2 of die area for the contact field (i.e.,
2.7 mm2 x 8).
[0020] In Fig. 4, the controller 410 includes a number n of 72 bit vault interface
blocks 420, 422, 424 similar to the 72 bit vault interface block shown in Fig. 1.
A 72 bit vault interface block 420, 422, 424 as shown in Fig.
4 may be implemented as vault interface block 100 as shown in Fig. 1. However, those
skilled in the art will recognize other implementations of vault interface blocks
may be used.
[0021] Each of the n 72 bit vault interface blocks 420, 422, 424 may include a command interface block
(CIB) 430 and two data interface blocks (DIB) 440, 450. As described above, memory
vaults may be formed by a stacked plurality of memory arrays and tied together (e.g.,
below the DRAM stack) to create a low contact count solution while keeping a low
power profile. As shown above with respect to Fig. 1, for example, existing fine pitch
flip-chip technologies may be used to provide a 50 µm x 150 µm contact pitch
in a die package having an effective vault length of 1.35 mm and a width of
1.8 mm. However, those skilled in the art will recognize that alternative contact
pitch, lengths and widths may be implemented. The vault interface blocks may be matched
in width to the effective DRAM vault pitch to minimize the footprint on the controller.
[0022] As shown in Fig. 4, the n vault interface blocks 420, 422, 424 included in
the controller 410 provide a total length of n times the individual length of the
vaults, e.g., with n = 8, 8 x 1.35 mm = 10.8 mm ≈ 11.0 mm. Thus, the total area of
the n vault interface blocks would be the total length times the width, e.g., 1.8
mm x 11 mm = 19.8 mm2.
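As a check, the stated area figures follow from simple arithmetic (a sketch reproducing only the numbers given in the text, with n = 8):

```python
# Reproducing the die-area arithmetic from the text, with n = 8 vault
# interface blocks in both configurations.

n = 8

# 72 bit vault interface blocks (Fig. 1): 1.35 mm vault pitch, 1.8 mm width.
length_72 = n * 1.35              # 10.8 mm; the text rounds this to ~11.0 mm
area_72 = 1.8 * 11.0              # 19.8 mm^2, using the rounded length

# Paired 36 bit blocks (Fig. 3): a 2.7 mm^2 contact field each (0.9 mm x 3.0 mm).
area_36_each = 0.9 * 3.0          # 2.7 mm^2
area_36_total = n * area_36_each  # 21.6 mm^2
```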
[0023] Memory 460 is also shown in Fig. 4. The memory 460 might comprise vertical stacks
of DRAM die forming a DRAM hypercube 470. The vertical stacks of DRAM are connected
together using through-silicon-via (TSV) interconnects (not shown, see Figs. 7a-b).
Vaults 472, 474 of the DRAM hypercube 470 are tied together to form vault pair 490.
Vaults 476, 478 and vaults 480, 482 are tied together to form vault pairs 492, 494,
respectively. Thus, a vault interface block (e.g., VIB 1 420) may serve both pairs
of vaults (e.g., Vault 1 472 and Vault 2 474) of a vault pair (e.g., vault pair 490).
Although the preceding embodiments discuss tying together pairs of vaults to share
a vault interface block, embodiments are not limited thereto, as any number of vaults
might be tied together to share a vault interface block. Each pair of vaults is depicted
as sharing a command interface block.
[0024] The DRAM hybrid memory cube (HMC) 470 provides memory on the same substrate as a
controller. As described above with reference to Fig. 1, each of the DIBs 440, 450
of vault interface block 420, for example, may provide contacts, including contacts
for data input/output (I/O), the data bus, clock signals, and reset data I/O. Logic
blocks 498 may be associated with each of the vault interface blocks 420. Logic may
alternatively be provided at the DRAM hypercube 470. An ASIC (see Figs. 7a-b) may
implement logic blocks 498 associated with the vault interface blocks 420. The logic
blocks 498 provide host interface logic for processing signals between a host and
the DRAM hypercube 470. Data is handled by the DRAM hypercube 470, while the logic
blocks 498 handle control of the DRAM hypercube 470. For example, the number of contacts
may be reduced by including timing logic 496. While shown separately in Fig. 4, the
timing logic may be included in logic blocks 498. Timing logic 496 may be used to
determine whether a request is destined to a particular one of the vaults 472-482.
In some embodiments, the timing logic 496 may comprise timing and chip select logic.
[0025] A low power solution may be obtained by slightly increasing the individual input/output
(IO or I/O) buffer drive strength versus generating power for an interconnect that
multiplexes vaults 472, 474, vaults 476, 478, and vaults 480, 482, respectively. Signal
count can be further reduced by combining the address/command bus with the data line
(DQ) bus and using a header. This resembles a packet interface to the DRAM hypercube
470. The first few clocks of the request carry a command header. This is followed
by write data for a write command. A very low contact count solution is useful for
large modules. Bandwidth may be obtained through the use of multiple stacks. The buffer
cost and density of the module is driven by the signal count to the DRAM hypercube
470. Thus, a reduction in contact count reduces the buffer cost and density.
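A packet-style request of the kind described above might be framed as in the following sketch; the header layout, field widths, and two-byte beat size are illustrative assumptions, not part of this description:

```python
# Hypothetical framing sketch for the packet-style interface described above:
# the first few bus clocks carry a command header (command + address), and a
# write command is followed by its write data. Field widths and the header
# layout are illustrative assumptions, not taken from the specification.

WRITE, READ = 0x1, 0x2

def frame_request(command: int, address: int, write_data: bytes = b"") -> list[bytes]:
    """Return the per-clock bus beats for one request.

    The header occupies the first beats of the shared command/data (DQ)
    bus; write data beats follow only for a write command.
    """
    header = command.to_bytes(1, "big") + address.to_bytes(4, "big")
    beats = [header[i:i + 2] for i in range(0, len(header), 2)]  # 2-byte beats
    if command == WRITE:
        beats += [write_data[i:i + 2] for i in range(0, len(write_data), 2)]
    return beats

read_beats = frame_request(READ, 0x1000)
write_beats = frame_request(WRITE, 0x1000, b"\x00\x11\x22\x33")
```

A read occupies only the header beats, while a write appends its data beats, which is why combining the buses reduces signal count at some cost in per-request clocks.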
[0026] Thus, the DRAM hypercube 470 provides a flexible method to configure a host physical
layer and multi-chip module (MCM) interconnect for a wide range of solutions. The
highest bandwidth may be provided by not tying any of the vaults 472-482 together, whereas
a low pin count solution may be provided by tying all the vaults 472-482 together.
Accordingly, the low pin count solution can be applied to high density memory modules
and low cost/low power SOCs.
[0027] Fig. 5 illustrates a block diagram of a computer system 500 according to an embodiment.
In Fig. 5, a CPU 510 is coupled to double data rate type three (DDR type 3 or simply
DDR3) dynamic random access memory (DRAM) 520, 522. The CPU 510 is also coupled to
a primary memory controller 530, e.g., Northbridge. The primary memory controller
530 may include a Peripheral Component Interconnect (PCI) Express controller 540 and
handle communications between the CPU 510, PCI-E (or accelerated graphics processor
(AGP)) video adapters 550, 552, 554, and a secondary memory controller 560.
[0028] Fig. 6 illustrates a computer system 600 according to an embodiment. In Fig. 6, the
CPU 610 is coupled to the flexible memory system 620. The flexible memory system includes
a controller, such as a controller implemented in an application specific integrated
circuit (ASIC) 630 that includes logic blocks corresponding to vault interface blocks
640, and a DRAM hypercube 650. The use of an ASIC 630 may allow customization for
a particular use, rather than use of a general processor that may be arranged for
general-purpose use. The flexible memory system 620 can be coupled to the processor
core through a high speed link 660, e.g., a serialize/deserialize (SERDES) data link.
A high speed link 670 may also be used to couple the DRAM hypercube 650 to the ASIC
630.
[0029] Figs. 7a-b illustrate a flexible MCM memory system 700 according to an embodiment.
In Figs. 7a-b, an ASIC 710 is mounted to a MCM substrate 720. A DRAM hypercube 730
is also mounted to the MCM substrate 720. Signals from the connections 712 of the
ASIC 710 and the connections 732 of the DRAM hypercube 730 flow through blind vias
that do not fully penetrate the MCM substrate 720. The blind vias only go deep enough
to reach a routing layer. Other signals from either the ASIC or DRAM which need to
connect to the system through solder balls 722 will use vias that fully penetrate
the MCM substrate. The MCM memory system 700 provides a specialized electronic package
where multiple integrated circuits (ICs), semiconductor dies or other discrete components
are packaged onto a unifying substrate, thereby facilitating their use as a component
(e.g., appearing as one larger IC). The ASIC 710 may also include logic blocks 750
corresponding to the vault interface blocks. The logic blocks 750 may provide host
interface logic for processing signals between a host (e.g., CPU 610 in Fig. 6) and
the DRAM hypercube 730, and control logic for controlling the DRAM hypercube.
[0030] In some embodiments, the functionality of a logic layer may be implemented at the
ASIC 710, e.g., in logic blocks 750. Thus, the DRAM hypercube 730 may not include
a high-speed logic layer coupled to the vertical stacks of DRAM 736. However, in other
embodiments, the DRAM hypercube 730 may include a high-speed logic layer that is coupled
to the vertical stacks of DRAM 736.
[0031] The DRAM 736, along with the logic blocks 750, may handle data and DRAM control within
the hypercube 730. The TSVs 738 that pass through the DRAM 736 provide a high level
of concurrent connections. Memory access by the controller 710 is carried out on a
highly efficient interface 780 that supports high transfer rates, e.g., 1 Tb/s or
more.
[0032] Vaults 760, 762 of the DRAM hypercube 730 are paired to form vault pair 770. Thus,
the vault pair 770 serves one of vault interface blocks 1-8 (e.g., 752) of the controller
710. However, those skilled in the art will recognize that a different number of vault
interface blocks may be implemented. Moreover, vault blocks 1-8 may be tied together
in pairs, fours, eights, etc., depending on the number of vault interface blocks to
which they will be coupled, for example.
[0033] Referring to Fig. 4 and Figs. 7a-b, clock signals may be reduced by including timing
logic 496, whether on a separate logic layer in the hypercube 730 or on the DRAM 736
itself, as may be the case when a separate logic layer is not included in the hypercube
730. Timing logic 496 may snoop and analyze clock signals from ASIC 710 to identify
a vault targeted by a request, e.g., to determine whether a particular request is
destined to a particular vault. For example, timing logic 496 may determine that a
request is destined to vault 760 rather than vault 762. Responsive to identifying
a targeted vault, the timing logic 496 activates the targeted vault to receive the
request and to return data. The timing logic 496 may thus reduce clock count by analyzing
the clock signals. Host interface logic block 750 may be used to save the adjusted timing
for a clock signal targeted to an identified vault and to adjust the clock signal according
to the identified vault. The timing logic 496 consumes very little power.
[0034] Fig. 8 is a plot 800 showing the power savings according to an embodiment. In Fig.
8, the flexible memory system 810 is compared to a DDR3 memory system 820 in terms
of host physical power (PHY) 830 and the DRAM power 840. The flexible memory system
810 requires a host PHY power 830 of approximately 1.5 watts 832 and requires a DRAM
power 840 of approximately 2.5 watts 842. In contrast, the DDR3 memory system 820
requires a host PHY power 830 of approximately 6.0 watts 834 and requires a DRAM power
840 of approximately 33 watts 844. The flexible memory system 810 has an area of 10
mm2 850, while the DDR3 memory system 820 has an area of 21.2 mm2 860. Thus, the
flexible memory system 810 enables the implementation of a lower contact count while
maintaining a lower power profile than the DDR3 memory system 820.
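The Fig. 8 comparison reduces to arithmetic on the approximate wattages stated above:

```python
# Reproducing the Fig. 8 power comparison as arithmetic. Values are the
# approximate wattages given in the text.

flexible = {"host_phy": 1.5, "dram": 2.5}   # watts
ddr3 = {"host_phy": 6.0, "dram": 33.0}      # watts

flexible_total = sum(flexible.values())      # 4.0 W
ddr3_total = sum(ddr3.values())              # 39.0 W
savings = ddr3_total - flexible_total        # 35.0 W
```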
[0035] Fig. 9 is a flowchart 900 of a method for forming a flexible memory system according
to an embodiment. In block 910, a substrate is formed. In block 920, a plurality of
vault interface blocks of an interface interconnect are formed with a width associated
with a pitch of a vault of a DRAM. In block 930, a plurality of the vaults are tied
together to reduce a contact count for the DRAM.
[0036] The above detailed description includes references to the accompanying drawings,
which form a part of the detailed description. The drawings show, by way of illustration,
specific embodiments that may be practiced. These embodiments are also referred to
herein as "examples." Such examples can include elements in addition to those shown
or described. However, also contemplated are examples that include the elements shown
or described. Moreover, also contemplated are examples using any combination or permutation
of those elements shown or described (or one or more aspects thereof), either with
respect to a particular example (or one or more aspects thereof), or with respect
to other examples (or one or more aspects thereof) shown or described herein.
[0037] In this document, the terms "a" or "an" are used, as is common in patent documents,
to include one or more than one, independent of any other instances or usages of "at
least one" or "one or more." In this document, the term "or" is used to refer to a
nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A
and B," unless otherwise indicated. In the appended claims, the terms "including"
and "in which" are used as the plain-English equivalents of the respective terms "comprising"
and "wherein." Also, in the following claims, the terms "including" and "comprising"
are open-ended; that is, a system, device, article, or process that includes elements
in addition to those listed after such a term in a claim is still deemed to fall within
the scope of that claim. Moreover, in the following claims, the terms "first," "second,"
and "third," etc. are used merely as labels, and are not intended to suggest a numerical
order for their objects.
[0038] The above description is intended to be illustrative, and not restrictive. For example,
the above-described examples (or one or more aspects thereof) may be used in combination
with each other. Other embodiments can be used, such as by one of ordinary skill in
the art upon reviewing the above description. Also, in the above Detailed Description,
various features may be grouped together to streamline the disclosure. An unclaimed
disclosed feature is not to be interpreted as essential to any claim. Rather, embodiments
may include fewer features than those disclosed in a particular example. Thus, the
following claims are hereby incorporated into the Detailed Description, with an individual
claim standing on its own as a separate embodiment. The scope of the embodiments disclosed
herein is to be determined with reference to the appended claims.
1. A memory system, comprising:
a substrate;
a stack of dynamic random-access memory (DRAM) coupled to the substrate and comprising
a number of vaults, wherein each vault of the number of vaults comprises a respective
stacked plurality of memory arrays; and
a controller coupled to the substrate and comprising a number of vault interface blocks
(100; 420) coupled to the number of vaults (472, 474) of the stack of dynamic random-access
memory, wherein each of the number of vault interface blocks comprises:
a plurality of data interface blocks (120, 122; 440, 450), each data interface block
providing a first plurality of contacts for data signals and timing signals; and
a command interface block (110; 430) providing a second plurality of contacts for
command signals;
wherein at least two vaults of the vaults of the stack of dynamic random-access memory
are tied together to form a vault pair to share a common command interface block of
a respective vault interface block, and wherein the number of vault interface blocks
is less than the number of vaults.
2. The memory system of claim 1, wherein the vault interface blocks are stacked with
the stack of dynamic random-access memory (DRAM), and wherein a width of at least
one of the number of vault interface blocks is matched to a pitch of at least one
of the number of vaults.
3. The memory system of claim 1, wherein each of the number of data interface blocks
is coupled to a respective vault of the number of vaults.
4. The memory system of claim 1, wherein the command interface block is coupled to a
respective plurality of vaults of the number of vaults.
5. The memory system of claim 1, wherein the number of data interface blocks in each
of the number of vault interface blocks comprises two data interface blocks in each
of the number of vault interface blocks, and wherein a respective vault of the number
of vaults is coupled to each of the data interface blocks.
6. The memory system of claim 5, wherein the command interface block in each of the number
of vault interface blocks is coupled to a respective two vaults of the number of vaults.
7. The memory system of claim 1, wherein each of the plurality of memory arrays of a
respective vault is located on a respective one of a plurality of stacked memory dies.
8. The memory system of claim 1, wherein the controller further comprises a number of
logic blocks that are associated with the number of vault interface blocks.
9. The memory system of claim 8, wherein the logic blocks comprise host interface logic
for processing signals between a host and the stack of dynamic random-access memory.
10. The memory system of claim 8, wherein the logic blocks comprise control logic for
controlling the stack of dynamic random-access memory.
11. The memory system of claim 1, wherein the controller is arranged to adjust a clock
signal for a targeted vault of the number of vaults.
12. The memory system of claim 1, wherein the stack of dynamic random-access memory comprises
timing logic arranged to analyze clock signals received from the controller to identify
a vault of the number of vaults targeted by a request.
13. The memory system of claim 12, wherein the timing logic is further configured to activate
the vault of the number of vaults responsive to identifying the vault as being targeted
by the request.