(11) EP 2 895 958 B1

(12) EUROPEAN PATENT SPECIFICATION

(45) Mention of the grant of the patent:
25.12.2019 Bulletin 2019/52

(21) Application number: 13836722.2

(22) Date of filing: 13.09.2013
(51) International Patent Classification (IPC):
G06F 12/02 (2006.01)
G06F 12/10 (2016.01)
(86) International application number:
PCT/US2013/059645
(87) International publication number:
WO 2014/043459 (20.03.2014 Gazette 2014/12)

(54) ADDRESS MAPPING

ADRESSABBILDUNG

MISE EN CORRESPONDANCE D'ADRESSE


(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30) Priority: 14.09.2012 US 201213616175

(43) Date of publication of application:
22.07.2015 Bulletin 2015/30

(73) Proprietor: Micron Technology, Inc.
Boise, ID 83716-9632 (US)

(72) Inventor:
  • Li, Tieniu
    San Jose, California 95120 (US)

(74) Representative: Gill Jennings & Every LLP
The Broadgate Tower
20 Primrose Street
London EC2A 2ES (GB)


(56) References cited:
US-A- 5 375 214
US-A1- 2009 089 518
US-A1- 2011 283 048
US-A1- 2012 054 419
US-B2- 8 255 661
US-A1- 2006 212 674
US-A1- 2012 023 282
US-B1- 7 278 008
   
       
    Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


    Description

    Technical Field



    [0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to address mapping.

    Background



    [0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data (e.g., information) and includes random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, resistance variable memory, such as phase change random access memory (PCRAM) and resistive random access memory (RRAM), and magnetic random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), among others.

    [0003] Memory devices can be combined together to form a solid state drive (SSD). A solid state drive can include non-volatile memory such as NAND flash memory and/or NOR flash memory, and/or can include volatile memory such as DRAM, among various other types of non-volatile and volatile memory. Flash memory devices, including floating gate flash devices and charge trap flash (CTF) devices, can comprise memory cells having a storage structure (e.g., a floating gate or a charge trapping structure) used to store charge and may be utilized as non-volatile memory for a wide range of electronic applications.

    [0004] Various apparatuses (e.g., computing systems) can comprise an SSD coupled to a host (e.g., a personal laptop computer, a desktop computer, a digital camera, a mobile telephone, or a memory card reader, among various other types of hosts). Memory management processes associated with SSD operation can suffer inefficiencies due to sub-page writes, misaligned writes, and/or unpredictable temporal and spatial locality, for example. Such inefficiencies can be due to factors such as input/output (I/O) workload pattern irregularity associated with commands (e.g., write, read, erase, etc.) received from the host, which can increase write amplification and/or reduce garbage collection efficiency, among other drawbacks. Memory management processes often employ logical to physical (L2P) mapping data structures (e.g., tables) to map between logical address space and physical address space (e.g., to determine locations of physical data stored on a memory). However, many current L2P mapping approaches are not able to effectively account for host I/O workload pattern irregularity.

    [0005] By way of further background, reference is made to the following two patents, United States Patent No. US 7278008 B1 to Case et al., and United States Patent No. US 5375214 to Mirza et al.

    [0006] The first of these references, US 7278008 B1, discloses that a virtual address translation table and an on-chip address cache are usable for translating virtual addresses to physical addresses. Address translation information is provided using a cluster that is associated with some range of virtual addresses and that can be used to translate any virtual address in its range to a physical address, where the sizes of the ranges mapped by different clusters may be different. Clusters are stored in an address translation table that is indexed by virtual address so that, starting from any valid virtual address, the appropriate cluster for translating that address can be retrieved from the translation table. Recently retrieved clusters are stored in an on-chip cache, and a cached cluster can be used to translate any virtual address in its range without accessing the address translation table again.

    [0007] In particular, this reference discloses a method for address mapping of a type comprising: providing a mapping unit having logical to physical mapping data corresponding to a number of logical addresses, the mapping unit having a data unit type associated therewith, wherein the data unit type defines the particular amount of physical data to which the number of logical to physical mapping entries correspond, and wherein the data unit type is variable such that the amount of physical data to which the number of logical to physical mapping entries correspond is variable.

    [0008] This reference also discloses an apparatus of a type comprising: a memory storing a number of mapping units each having a data unit type of a number of different data unit types associated therewith, the data unit type defining a data unit size mapped by the respective mapping unit, wherein the data unit type defines the particular amount of physical data to which the number of logical to physical mapping entries correspond, and wherein the data unit type is variable such that the amount of physical data to which the number of logical to physical mapping entries correspond is variable; and a controller coupled to the memory and configured to: access a particular mapping unit based on a logical address associated with a write command received from a host.

    [0009] The second of these references, US 5375214, discloses a dynamic address translation mechanism which uses a single translation lookaside buffer (TLB) facility for pages of various sizes. The single TLB is supported by a small amount of special hardware. This hardware includes logic for detecting a page size prior to translation and generating a mask. The logic selects a set of virtual address bits for addressing the entries in the TLB. Parts of the virtual address are masked and merged with the address read out of the TLB to form the real address.

    Summary of the Invention



    [0010] The present invention provides a method for address mapping, comprising: providing a mapping unit having logical to physical mapping data corresponding to a fixed range of non-overlapping logical addresses, the mapping unit having a data unit type associated therewith, wherein the data unit type defines the particular amount of physical data to which a quantity of logical to physical mapping entries correspond, and wherein the data unit type is variable such that the amount of physical data to which the quantity of logical to physical mapping entries correspond is variable; characterised by: said mapping unit comprising both: a first portion comprising mapping data comprising the quantity of logical to physical mapping entries each corresponding to a particular amount of physical data stored on a memory, wherein the quantity of logical to physical mapping entries depends on a data unit size defined by the data unit type; and a second portion comprising: mapping data indicating locations on the memory of a plurality of other mapping units of a mapping unit group to which the mapping unit belongs; and attribute data indicating the data unit type associated with the mapping units of the mapping unit group; and said mapping units of the plurality of other mapping units each comprise: a first fixed amount of space allocated for a data unit mapping table; and a second fixed amount of space allocated for a mapping unit mapping table; wherein the data unit mapping table of at least one of the plurality of other mapping units uses less than the first fixed amount of space allocated for the data unit mapping table and comprises: a first quantity of logical to physical mapping entries mapping to physical data units having a size defined by a data unit type of the respective mapping unit; and a second quantity of logical to physical mapping entries mapping to physical data units having a size other than the size defined by the data unit type of the respective mapping unit; and wherein said data unit type comprises: at least one data unit type that corresponds with a particular amount of physical data that is less than a physical page size corresponding to a memory device to which the data is mapped, and at least one data unit type that corresponds with a particular amount of physical data that is greater than a physical page size corresponding to a memory device to which the data is mapped.

    [0011] The present invention also provides an apparatus, comprising: a memory storing a logical to physical mapping data structure comprising a plurality of mapping units each having a data unit type of a number of different data unit types associated therewith, the data unit type defining a data unit size mapped by the respective mapping unit, wherein the data unit type defines the particular amount of physical data to which a quantity of logical to physical mapping entries correspond, and wherein the data unit type is variable such that an amount of physical data to which the quantity of logical to physical mapping entries correspond is variable; and a controller coupled to the memory and configured to: access a particular mapping unit based on a logical address associated with a write command received from a host; characterised by said controller further configured to: update a first portion of the particular mapping unit, the first portion comprising a first fixed amount of space allocated for a data unit mapping table comprising a quantity of logical to physical mapping entries each corresponding to a particular amount of physical data stored on the memory, wherein the quantity of logical to physical mapping entries depends on the size defined by the data unit type of the particular mapping unit; and update a second portion of the particular mapping unit, the second portion comprising attribute data indicating the data unit type associated with a mapping unit group to which the particular mapping unit belongs; wherein at least one data unit type corresponds with a particular amount of physical data that is less than a physical page size corresponding to a memory device to which the data is mapped; wherein at least one data unit type corresponds with a particular amount of physical data that is greater than a physical page size corresponding to a memory device to which the data is mapped; wherein at least one of the plurality of mapping units has an associated data unit type that corresponds to a quantity of logical to physical mapping entries that uses the whole of the first fixed amount of space allocated for the data unit mapping table; and wherein at least one of the plurality of mapping units has an associated data unit type that corresponds to a quantity of logical to physical mapping entries that uses less than the whole of the first fixed amount of space allocated for the data unit mapping table, with a remaining portion of the first fixed amount of space comprising logical to physical mapping entries to physical data units having a size less than the size defined by the data unit type of the respective one of the plurality of mapping units.

    Brief Description of the Drawings



    [0012] 

    Figure 1 is a block diagram of an apparatus in the form of a computing system including at least one memory system in accordance with a number of embodiments of the present disclosure.

    Figure 2 is a logical to physical address map in accordance with a previous address mapping approach.

    Figure 3 is a logical to physical address map in accordance with a number of embodiments of the present disclosure.

    Figure 4 illustrates a number of mapping unit groups in accordance with a number of embodiments of the present disclosure.

    Figure 5 illustrates a number of mapping units associated with address mapping in accordance with a number of embodiments of the present disclosure.

    Figure 6 illustrates a number of mapping units associated with address mapping in accordance with a number of embodiments of the present disclosure.

    Figure 7 illustrates a number of mapping units associated with address mapping in accordance with a number of embodiments of the present disclosure.

    Figure 8 illustrates a number of mapping units associated with address mapping in accordance with a number of embodiments of the present disclosure.

    Figure 9 illustrates a number of mapping units associated with address mapping in accordance with a number of embodiments of the present disclosure.

    Figure 10 illustrates a functional flow chart associated with updating a mapping unit in accordance with a number of embodiments of the present disclosure.


    Detailed Description



    [0013] The present disclosure includes methods, memory units, and apparatuses for address mapping. One method includes providing a mapping unit having logical to physical mapping data corresponding to a number of logical addresses. The mapping unit has a variable data unit type associated therewith and comprises a first portion comprising mapping data indicating locations on a memory of a number of physical data units having a size defined by the variable data unit type, and a second portion comprising mapping data indicating locations on the memory of a number of other mapping units of a mapping unit group to which the mapping unit belongs.

    [0014] A number of embodiments of the present disclosure can provide a logical to physical address mapping approach that can be adjusted based on the I/O workload of a host, for example. As such, a number of embodiments can provide benefits such as improved garbage collection efficiency and/or reduced write amplification as compared to prior address mapping techniques.

    [0015] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how a number of embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. As used herein, the designators "D" and "m", particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with a number of embodiments of the present disclosure. As used herein, "a number of" something can refer to one or more of such things.

    [0016] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 301 may reference element "01" in Figure 3, and a similar element may be referenced as 401 in Figure 4. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, as will be appreciated, the proportion and the relative scale of the elements provided in the figures are intended to illustrate the embodiments of the present invention, and should not be taken in a limiting sense.

    [0017] Figure 1 is a block diagram of an apparatus in the form of a computing system 100 including at least one memory system 104 in accordance with a number of embodiments of the present disclosure. As used herein, a memory system 104, a controller 108, or a memory device 110 might also be separately considered an "apparatus". The memory system 104 can be a solid state drive (SSD), for instance, and can include a host interface 106, a controller 108 (e.g., a processor and/or other control circuitry), and a number of memory devices 110-1, ..., 110-D (e.g., solid state memory devices such as NAND flash devices), which provide a storage volume for the memory system 104. In a number of embodiments, the controller 108, a memory device 110-1 to 110-D, and/or the host interface 106 can be physically located on a single die or within a single package (e.g., a managed NAND application). Also, in a number of embodiments, a memory (e.g., memory devices 110-1 to 110-D) can include a single memory device. In this example, each of the memory devices 110-1 to 110-D corresponds to a respective memory channel, which can comprise a group of memory devices (e.g., dies or chips); however, embodiments are not so limited. Additionally, in a number of embodiments, the memory devices 110 can comprise different types of memory. For instance, memory devices 110-1 to 110-D can comprise phase change memory devices, DRAM devices, multilevel cell (MLC) NAND flash devices, single level cell (SLC) NAND flash devices, and/or combinations thereof.

    [0018] The host interface 106 can be used to transfer data between the memory system 104 and a host 102. The interface 106 can be in the form of a standardized interface. For example, when the memory system 104 is used for data storage in a computing system 100, the interface 106 can be a serial advanced technology attachment (SATA), a serial attached SCSI (SAS), a peripheral component interconnect express (PCIe), or a universal serial bus (USB), among other connectors and interfaces. In general, however, interface 106 can provide an interface for passing control, address, data, and other signals between the memory system 104 and a host 102 having compatible receptors for the interface 106. Host 102 can be a host system such as a personal laptop computer, a desktop computer, a digital camera, a mobile telephone, or a memory card reader, among various other types of hosts. Host 102 can include a system motherboard and/or backplane and can include a number of memory access devices (e.g., a number of processors).

    [0019] The controller 108 can communicate with the memory (e.g., memory devices 110-1 to 110-D) to control data read, write, and erase operations, among other operations. The controller 108 can include, for example, a number of components in the form of hardware and/or firmware (e.g., one or more integrated circuits) and/or software for controlling access to the memory and/or for facilitating data transfer between the host 102 and memory.

    [0020] In the example illustrated in Figure 1, the controller 108 includes a host I/O management component 112, a flash translation layer (FTL) 114, and a mapping unit management component 116. However, the controller 108 can include various other components not illustrated so as not to obscure embodiments of the present disclosure. Also, although the components 112, 114, and 116 are illustrated as resident on the controller 108, in some embodiments, the components 112, 114, and/or 116 may reside elsewhere in the system 100 (e.g., as an independent component or resident on a different component of the system).

    [0021] In embodiments in which the memory (e.g., memory devices 110-1 to 110-D) includes a number of arrays of memory cells, the arrays can be flash arrays with a NAND architecture, for example. However, embodiments are not limited to a particular type of memory array or array architecture. The memory cells can be grouped, for instance, into a number of blocks, which are erased together as a group and can store a number of pages of data per block. A number of blocks can be included in a plane of memory cells and an array can include a number of planes. As used herein, a "page of data" refers to an amount of data that the controller 108 is configured to write/read to/from the memory 110 as part of a single write/read operation and can be referred to as a "flash page". As an example, a memory device may have a page size of 8KB (kilobytes) and may be configured to store 128 pages of data per block, 2048 blocks per plane, and 16 planes per device.
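
    As a quick check of the example geometry above, the following minimal C sketch (not part of the specification) computes the resulting device capacity:

        /* Toy capacity check for the example geometry: an 8KB page,
           128 pages per block, 2048 blocks per plane, 16 planes. */
        #include <stdio.h>

        int main(void)
        {
            const long long page_size  = 8LL * 1024;  /* 8KB flash page */
            const long long pages_blk  = 128;
            const long long blks_plane = 2048;
            const long long planes     = 16;

            long long device_bytes = page_size * pages_blk * blks_plane * planes;
            printf("device capacity: %lld bytes (%lld GB)\n",
                   device_bytes, device_bytes >> 30);   /* 32 GB */
            return 0;
        }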

    [0022] Unlike with traditional hard disk drives, data stored in flash memory cannot be directly overwritten. That is, a block of flash cells must be erased prior to rewriting data thereto (e.g., a page at a time). In embodiments in which at least one of memory devices 110-1 to 110-D comprises flash memory cells, the controller 108 can manage data transferred between the host 102 and the memory 110 via a logical to physical mapping scheme. For instance, the flash translation layer 114 can employ a logical addressing scheme (e.g., logical block addressing (LBA)). As an example, when new data received from host 102 is to replace older data already written to memory 110, the controller 108 can write the new data in a new location on memory 110 and the logical to physical mapping of FTL 114 can be updated such that the corresponding logical address(es) associated with the new data being written indicates (e.g., points to) the new physical location. The old location, which no longer stores valid data, will be erased prior to being written again.
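
    The out-of-place write just described can be illustrated with a small, self-contained C sketch; the l2p[] array, the toy page allocator, and the invalid-page flags are hypothetical names for illustration, not elements of the specification:

        #include <stdint.h>
        #include <stdio.h>

        #define NUM_LPAGES 16

        static uint32_t l2p[NUM_LPAGES]; /* logical page -> physical page        */
        static uint32_t invalid[64];     /* toy invalid-page flags, by phys page */
        static uint32_t next_free = 1;   /* toy allocator over erased pages      */

        static void overwrite_logical_page(uint32_t lpa)
        {
            uint32_t new_pa = next_free++; /* flash cannot overwrite in place:   */
                                           /* program the data at a fresh page.. */
            invalid[l2p[lpa]] = 1;         /* ..the old copy becomes invalid..   */
            l2p[lpa] = new_pa;             /* ..and the FTL map is repointed.    */
        }

        int main(void)
        {
            overwrite_logical_page(3);
            overwrite_logical_page(3);     /* a second write relocates again     */
            printf("LPA 3 -> PA %u\n", l2p[3]);
            return 0;
        }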

    [0023] Flash memory cells can be cycled (e.g., programmed/erased) a limited number of times before they become unreliable. The controller 108 can implement wear leveling to control the wear rate on the memory 110, which can reduce the number of program/erase cycles performed on a particular group (e.g., block) by spreading the cycles more evenly over the entire array. Wear leveling can include a technique called garbage collection, which can include reclaiming (e.g., erasing and making available for writing) blocks that have the most invalid pages. An invalid page can refer to a page containing invalid data (e.g., a page that no longer has an up to date mapping associated therewith). Alternatively, garbage collection can include reclaiming blocks with more than a threshold amount of invalid pages. If sufficient free blocks exist for a writing operation, then a garbage collection operation may not occur.
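
    A hedged sketch of the reclamation policy just described: the block with the most invalid pages is chosen as the victim, and no victim is chosen while enough free blocks remain. The FREE_BLOCK_MIN threshold and all names here are assumptions for illustration:

        #include <stdio.h>

        #define NUM_BLOCKS      8
        #define FREE_BLOCK_MIN  2   /* assumed: GC runs only below this */

        static int invalid_pages[NUM_BLOCKS] = {3, 0, 12, 7, 0, 9, 1, 5};
        static int free_blocks = 1;

        static int pick_gc_victim(void)
        {
            if (free_blocks >= FREE_BLOCK_MIN)
                return -1;                        /* enough space: no GC needed */
            int victim = 0;
            for (int b = 1; b < NUM_BLOCKS; b++)  /* most invalid pages wins    */
                if (invalid_pages[b] > invalid_pages[victim])
                    victim = b;
            return victim;
        }

        int main(void) { printf("victim block: %d\n", pick_gc_victim()); return 0; }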

    [0024] Write amplification may occur when writing data to flash memory devices 110. When randomly writing data to a memory array, the controller 108 scans for available space in the array. Available space in a memory array can be individual cells, pages, and/or blocks of memory cells that are not storing data and/or have been erased. If there is enough available space to write the data in a selected location, then the data is written to the selected location of the memory array. If there is not enough available space in the selected location, the data in the memory array is rearranged by reading, copying, moving, or otherwise rewriting and erasing the data that is already present in the selected location to a new location, leaving available space for the new data that is to be written in the selected location. The relocation of valid data in the memory array is referred to as write amplification because the amount of data actually written to the memory is greater than the amount that would be written if there were sufficient available space in the selected location (e.g., the physical amount of data written is greater than the logical amount intended to be written). Write amplification is undesirable since it can consume bandwidth, which reduces performance, and can reduce the useful lifetime of an SSD. The amount of write amplification can be affected by various factors such as garbage collection efficiency, wear leveling efficiency, amount of random writes (e.g., writes to non-sequential logical addresses), and/or over-provisioning (e.g., the difference between the physical capacity of flash memory and the logical capacity presented through the operating system as available to the user), among other factors.
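
    Write amplification can be expressed as the ratio of data physically programmed to data logically requested. A toy computation (the byte counts are invented) follows:

        #include <stdio.h>

        int main(void)
        {
            double host_bytes      = 4096.0;       /* one 4KB host write            */
            double relocated_bytes = 3 * 4096.0;   /* valid data moved to make room */
            double wa = (host_bytes + relocated_bytes) / host_bytes;
            printf("write amplification factor: %.1f\n", wa);   /* 4.0 */
            return 0;
        }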

    [0025] The flash translation layer 114 can, in collaboration with the host I/O management component 112 and the mapping unit management component 116, perform address mapping in accordance with a number of embodiments described herein. In a number of embodiments, the host I/O management component 112 manages data received in association with write commands from the host 102 (e.g., prior to mapping via FTL 114). The I/O workload of the host 102 can be irregular and/or variable. For instance, large file writes (e.g., writes corresponding to a large amount of data) can often be mixed with small writes (e.g., writes corresponding to a small amount of data). In this context, "large" and "small" refer only to a relative difference in size. As an example, a small write may refer to writing of 4KB of metadata corresponding to a larger file. A large file write may include writing of 128KB of data, for instance. A large file write may comprise a number of consecutive large file writes. For instance, writing of a 2GB video file may comprise a number of consecutive 128KB write commands from the host 102.

    [0026] The host I/O management 112 can be configured to manage received data in units of a particular size. The particular size of the data units managed by host I/O management component 112 can correspond to a size of available buffers and may be 4KB, for instance. As such, the host I/O management 112 can organize data received from host 102 as 4KB units to be mapped via FTL 114. As used herein, the data unit size managed by host I/O management 112 can be referred to as a "system page size".

    [0027] As described further herein, the mapping unit (MU) management 116 can, in a number of embodiments, monitor the I/O workload of the host 102. For instance, the MU management 116 can track, for multiple mapping units (MUs) and/or mapping unit groups (MUGs), the logical addresses associated with incoming write commands from host 102 and the corresponding sizes (e.g., amount of data) associated therewith, which can be used to determine a data unit size for the respective MUs and/or MUGs.

    [0028] Figure 2 is a logical to physical (L2P) address map 201 in accordance with a previous address mapping approach. The L2P map 201 can be in the form of a data structure such as a table, for instance. The L2P map 201 includes a number of MUs 203-0, 203-1, 203-2, ..., 203-i, etc., which will be collectively referred to as MUs 203. Each of the MUs 203 can store a fixed amount of mapping data used to map between logical and physical addresses. The fixed amount of data can be equal to the page size associated with the memory to which data is to be written. For instance, if the memory (e.g., memory devices 110 shown in Figure 1) has an associated page size of 8KB, then each MU 203 may store 8KB of mapping data. Each MU 203 comprises a physical address (PA) data structure (e.g., table) comprising a fixed number of entries each mapping a logical page to a physical page. For instance, as shown in Figure 2, PA table 205 comprises a number of entries (e.g., PA[0], PA[1], PA[2], ..., PA[PA_PER_MU - 1]), where "PA_PER_MU" can be 2,048, for example. As such, if each MU 203 comprises 2,048 entries and each entry maps to an 8KB page, each MU 203 can map 16MB of memory (e.g., 8KB per entry multiplied by 2,048 entries).
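
    Under the stated numbers (2,048 entries per MU, each mapping one 8KB physical page), a mapping unit of map 201 might be modeled as below; the struct layout is an illustrative assumption, not the specification's encoding:

        #include <stdint.h>
        #include <stdio.h>

        #define PA_PER_MU 2048
        #define PAGE_SIZE (8 * 1024)

        struct mu_prior {               /* one fixed mapping unit of map 201  */
            uint32_t pa[PA_PER_MU];     /* PA[0..PA_PER_MU-1]: physical pages */
        };

        int main(void)
        {
            long long mapped = (long long)PA_PER_MU * PAGE_SIZE;
            printf("one MU maps %lld MB\n", mapped >> 20);   /* 16 MB */
            return 0;
        }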

    [0029] Due to the relatively large size of the address map 201, the MUs 203 can be stored on a memory (e.g., memory 110) and can be accessed, when appropriate, based on the logical addresses to be written in association with write operations from a host (e.g., host 102). For instance, one or more of the MUs 203 can be transferred from memory 110 to memory of the controller (e.g., DRAM, SRAM, etc.), for instance. The mapping data of the accessed MU 203 can then be updated such that it maps to the appropriate physical location on the memory, and the updated MU 203 can be written back to the memory.

    [0030] However, prior art address mapping approaches, such as the address mapping approach described in Figure 2, can have various drawbacks, which can lead to increased write amplification, for instance. For example, each of the entries of the PA table 205 corresponding to each of the respective MUs 203 maps to a same amount of physical data (e.g., an 8KB page size in this example). Since the host I/O workload can be irregular and/or variable, the lack of flexibility with respect to the size of physical data units mapped by MUs 203 can lead to an increase in the amount of MU management overhead, such as a large number of updates (e.g., to mapping table entries) due to long consecutive data writes, sub-page writes (e.g., writes of amounts of data less than a page size) and/or misaligned writes (e.g., writes of data units that are not aligned with physical page boundaries) due to small data writes to random memory locations, for instance. The variable and/or irregular host I/O workload can also lead to unpredictable temporal and/or spatial locality, which can also lead to increased write amplification due to the lack of flexibility (e.g., adaptability) with respect to the size of the physical data units mapped by the MUs 203 of address map 201. As described further below in association with Figures 3 through 10, a number of embodiments of the present disclosure provide a more flexible address mapping approach as compared to approaches such as that described in connection with Figure 2, which can reduce write amplification, among other benefits.

    [0031] Figure 3 is a L2P map 301 in accordance with a number of embodiments of the present disclosure. The L2P map 301 can be in the form of a data structure such as a table, for instance. The L2P map 301 includes a number of MUs 303-0, 303-1, 303-2, ..., 303-i, etc., collectively referred to as MUs 303. In a number of embodiments, and similar to the MUs 203 described in Figure 2, each of the MUs 303 can store a fixed amount of mapping data used to map between logical and physical addresses. The fixed amount of data can be equal to the page size associated with the memory (e.g., memory 110 shown in Figure 1) to which data is to be written. For instance, if the memory has an associated page size of 8KB, then each MU 303 may comprise 8KB of available mapping data space.

    [0032] Each MU 303 corresponds to a number of logical addresses. In a number of embodiments, each MU 303 corresponds to a fixed range of non-overlapping logical addresses. For instance, MU 303-0 can correspond to logical addresses 0 - 1,023, MU 303-1 can correspond to logical addresses 1,024 - 2,047, etc. Also, as described further herein, each MU 303 can have an associated variable data unit type, which can define the size of physical data units mapped by a particular MU 303.

    [0033] As illustrated by MU 303-i of Figure 3, each MU 303 can comprise a first portion 305 of mapping data and a second portion 307 of mapping data. The first portion 305 can comprise mapping data indicating locations on a memory (e.g., memory 110) of a number of physical data units having a size defined by the variable data unit type. As an example, the first portion 305 can comprise a physical address (PA) table (e.g., a data unit mapping table). However, unlike prior approaches using PA tables such as PA table 205 shown in Figure 2, which have entries each mapping to a physical page, the entries of PA table 305 each map to a physical data unit having a particular size, which may or may not be equal to a page size. For instance, in a number of embodiments, the L2P map 301 has a base data unit size associated therewith. As used herein, the base data unit size can refer to a smallest data unit size mapped via L2P map 301. For instance, the base data unit size can be smaller than, equal to, or larger than a page size of the memory. As an example, if the page size of the memory is 8KB, then the base data unit size may be 4KB, 8KB, 16KB, etc. In a number of embodiments, the base data unit size can be equal to a system page size (e.g., a data size managed by host I/O management 112 shown in Figure 1).

    [0034] In various instances, a system page size may not be the same as a page size of the memory (e.g., a flash page size). For example, the system page size may be 4KB and the page size of the memory may be 8KB, which can lead to data fragmentation and/or misalignment associated with I/O requests (e.g., if the logical to physical mapping maps only to 8KB page sizes, such as in previous approaches). As such, in a number of embodiments of the present disclosure, the base data unit size is the same as the system page size; however, embodiments are not so limited.

    [0035] In the example shown in Figure 3, the PA table 305 of MU 303-i comprises entries PA[0] through PA[PA_PER_MU - 1], where "PA_PER_MU" can be 1,024, for instance. As described further below, the number of entries in mapping table 305 can depend on factors such as the available mapping data space used for table 305 and the size of the data units mapped by the entries PA[0] through PA[PA_PER_MU - 1], among other factors. As an example, each of the entries of mapping table 305 can indicate the location on memory of a physical data unit having the base data unit size. However, as described further below, in a number of embodiments, the entries of a mapping table can indicate locations on the memory of physical data units having a size other than the base data unit size, which may be a multiple of the base data unit size, for example.

    [0036] The second portion 307 of a mapping unit 303 can comprise mapping data indicating locations on the memory of a number of other mapping units of a mapping unit group to which the mapping unit 303 belongs. As an example, the second portion 307 can be a mapping table (e.g., a MU mapping table) comprising entries pointing to locations (e.g., on memory such as memory 110) of each of the mapping units of a particular mapping unit group to which MU 303-i belongs. As such, MU mapping table 307 entries comprise physical mapping unit addresses PMUA[0], PMUA[1], PMUA[2], ..., PMUA[m-1], where "m" can be 128, 256, 512, 2,048, etc. As described further below, the L2P map 301 can comprise a number of mapping unit groups (MUGs), which can each be a fixed range of non-overlapping MUs 303. For instance, if each MUG comprises 512 MUs 303, then a first MUG could comprise MU[0] - MU[511] and a next MUG could comprise MU[512] - MU[1023], etc.
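
    One plausible in-memory layout for a mapping unit of map 301, assuming 1,024 PA entries and m = 512 as in the examples above; the field names and packing are assumptions made for illustration:

        #include <stdint.h>
        #include <stdio.h>

        #define PA_PER_MU   1024  /* data unit entries at the base data unit size */
        #define MUS_PER_MUG  512  /* "m": mapping units per mapping unit group    */

        struct mu_entry {
            uint32_t pmua;        /* location on memory of a neighbor MU          */
            uint8_t  tp;          /* attribute data: that MU's data unit type     */
        };

        struct mapping_unit {
            uint32_t        pa[PA_PER_MU];          /* first portion 305          */
            struct mu_entry mu_table[MUS_PER_MUG];  /* second portion 307         */
        };

        int main(void)
        {
            /* With typical padding this lands at 8KB, one flash page. */
            printf("mapping unit size: %zu bytes\n", sizeof(struct mapping_unit));
            return 0;
        }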

    [0037] As described further herein, in a number of embodiments, the second portion 307 can include data indicating the variable data unit type corresponding to the respective MUs of the MUG. For instance, each entry (e.g., PMUA[0], PMUA[1], PMUA[2], ..., PMUA[m-1]) of table 307 can include data indicating the variable data unit type corresponding to the particular MU to which the respective entry maps.

    [0038] In a number of embodiments, the total amount of available mapping data space corresponding to an MU (e.g., 303) can be allotted in various manners (e.g., between first portion 305 and second portion 307), while maintaining a particular total amount of space mapped by a MUG. That is, the table sizes can be adjusted such that they differ between MUGs. As one example, MU 303-i can be one of "m" MUs in a particular MUG that maps 2GB of space. For this example, assume each entry of table 305 maps to a physical data unit equal to a base data unit size of 4KB. Therefore, if the number of entries in table 305 (e.g., PA_PER_MU) is 1024, then MU 303-i maps 4MB of space (1024 x 4KB). As such, in order for the MUG to which MU 303-i belongs to map 2GB of space, the number of entries in table 307 (e.g., "m") must be 512 (4MB x 512 = 2GB). Alternatively, PA_PER_MU can be 128 such that MU 303-i maps 512KB (128 x 4KB) and "m" can be 4096 (512KB x 4096 = 2GB). However, embodiments are not limited to this example.
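
    The paragraph's arithmetic can be checked directly; both allocations map the same 2GB per MUG:

        #include <stdio.h>

        int main(void)
        {
            long long base = 4LL * 1024;                       /* 4KB base unit   */
            printf("%lld MB per MU, %lld GB per MUG\n",
                   (1024 * base) >> 20, (512 * 1024 * base) >> 30);  /* 4MB, 2GB   */
            printf("%lld KB per MU, %lld GB per MUG\n",
                   (128 * base) >> 10, (4096LL * 128 * base) >> 30); /* 512KB, 2GB */
            return 0;
        }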

    [0039] Figure 4 illustrates a number of mapping unit groups in accordance with a number of embodiments of the present disclosure. In a number of embodiments, the MUs of a L2P map can be organized into a number of MUGs. Figure 4 illustrates an L2P map 401 comprising a number of MUGs 409-0 (MUG[0]) and 409-1 (MUG[1]); however, a number of embodiments can include more than two MUGs. The MUs of Figure 4 can be analogous to the MUs 303 described in Figure 3. Each of the MUGs 409 comprises a number of MUs (e.g., MU[0], MU[1], MU[2], ..., MU[i]...). In a number of embodiments, each of the MUs maps a same amount of memory space (e.g., 4MB, 16MB, 128MB, etc.). However, the MUs of a particular MUG (e.g., 409-0) may map a different amount of memory space than the MUs of a different MUG (e.g., 409-1). In a number of embodiments, each of the MUGs 409 maps a same amount of memory space (e.g., 256MB, 2GB, 4GB, etc.); however, embodiments are not so limited. In a number of embodiments, each MUG 409 corresponds to a particular range of logical addresses and the logical addresses corresponding to respective MUGs 409 do not overlap. As such, the logical address space mapped by MUG 409-0 is different than the logical address space mapped by MUG 409-1.

    [0040] As described above in association with Figure 3, the MUs of each MUG 409 can store a fixed amount of mapping data in a mapping data space that comprises a first portion 405 and a second portion 407. As shown in Figure 4, MU[2] of MUG 409-0 comprises a first portion 405-0 and a second portion 407-0. Similarly, MU[i] of MUG 409-1 comprises a first portion 405-1 and a second portion 407-1. The second portion 407 can be a MU mapping table comprising entries pointing to locations on memory of each of the MUs of a MUG 409. The table 407 can also indicate the variable data unit type corresponding to the MUs of the MUG 409.

    [0041] In operation, a host (e.g., 102) can issue a write command associated with a number of logical addresses. A controller (e.g., 108) can receive the write command and can access (e.g., via an FTL such as FTL 114) the particular MU (which may be stored on a memory such as memory 110) corresponding to the number of logical addresses. The data corresponding to the write command can be written to the memory and the mapping data of portion 405 of the MU can be updated such that the logical addresses map to the location on the memory of the newly written data. In a number of embodiments, the mapping data of the second portion 407 of the MU can also be updated (e.g., prior to writing the MU back to memory). As further described in association with Figure 10, maintaining up to date mapping data of the second portion 407 can include determining a current up to date one of the MUs of the MUG 409 (e.g., the MU which was most recently written to memory), and replacing the invalid indicators corresponding to the second portion 407 of the MU being updated with the up to date indicators corresponding to the second portion 407 of the current up to date MU of the MUG.
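
    A condensed, self-contained sketch of this update flow (expanded in Figure 10): a write repoints an entry in the MU's first portion, then the MU's second portion is refreshed from the group's most recently written MU before the MU is written back. The toy sizes, the sequence numbers used to find the most recent MU, and the address arithmetic are all assumptions:

        #include <stdint.h>
        #include <stdio.h>

        #define MUS_PER_MUG 4   /* toy group size */

        struct mu {
            uint32_t pa[8];              /* first portion: data unit table (toy)  */
            uint32_t pmua[MUS_PER_MUG];  /* second portion: neighbor MU locations */
            uint32_t seq;                /* assumed: tells which MU is newest     */
        };

        static struct mu group[MUS_PER_MUG];  /* the MUG, resident for the toy    */
        static uint32_t next_seq = 1;

        static void handle_host_write(int mu_idx, int entry, uint32_t new_pa)
        {
            struct mu *m = &group[mu_idx];

            /* 1. Update the first portion: the entry points at the new data. */
            m->pa[entry] = new_pa;

            /* 2. Refresh the second portion from the group's most recent MU. */
            struct mu *latest = &group[0];
            for (int i = 1; i < MUS_PER_MUG; i++)
                if (group[i].seq > latest->seq)
                    latest = &group[i];
            for (int i = 0; i < MUS_PER_MUG; i++)
                m->pmua[i] = latest->pmua[i];

            /* 3. "Write back": the MU itself lands at a new flash location.  */
            m->seq = next_seq++;
            m->pmua[mu_idx] = 0x1000 + m->seq;   /* assumed new MU address    */
        }

        int main(void)
        {
            handle_host_write(2, 5, 0xABCD);
            printf("MU[2] entry 5 -> 0x%X\n", group[2].pa[5]);
            return 0;
        }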

    [0042] Figure 5 illustrates a number of mapping units (503-0, 503-1, 503-2, ..., 503-i, ...) associated with address mapping in accordance with a number of embodiments of the present disclosure. As described above, the MUs 503 can be organized as a number of MUGs such as MUG 509 in a L2P map 501. The MUs 503 and MUG 509 of Figure 5 can be analogous to the MUs and MUGs described in association with prior Figures, for instance. In this example, the MUG 509 comprises MU 503-i (MU[i]), as well as a number of other "neighboring" MUs 503. As shown in Figure 5, MU[i] comprises a first portion 505 comprising mapping data indicating locations on a memory 510 of a number of physical data units, and a second portion 507 comprising mapping data indicating locations on the memory 510 of a number of other MUs of MUG 509, to which MU[i] belongs. In a number of embodiments, the mapping data of portion 507 can indicate locations on memory other than memory 510 (e.g., on a different memory device and/or different type of memory).

    [0043] The memory 510 can comprise a number of memory devices such as memory devices 110-1 to 110-D shown in Figure 1, for instance. In the example shown in Figure 5, the memory 510 comprises a first die (Die 0) and a second die (Die 1), each comprising a first plane (Plane 0) and a second plane (Plane 1) of memory cells; however, embodiments are not limited to a particular number of dies and/or planes associated with memory 510.

    [0044] The portion 505 of MU[i] can comprise a data unit mapping table, for instance. In the example shown in Figure 5, the portion 505 comprises a table having entries PA[0] through PA[PA_PER_MU - 1], with each entry indicating a location of (e.g., pointing to) a physical data unit on memory 510. For instance, in this example, the arrows 511-0, 511-1, and 511-2 corresponding to respective table entries PA[0], PA[1], and PA[2], point to 4KB physical data units on memory 510. That is, each table entry of portion 505 can indicate a physical address (PA) of a data unit on the memory 510. As described herein, the size of the data units to which the table entries of portion 505 point is defined by a variable data unit type corresponding to MU[i].

    [0045] The portion 507 of MU[i] can comprise a MU mapping table, for instance. In the example shown in Figure 5, the portion 507 comprises a table including an entry corresponding to each respective MU of MUG 509. In this example, the number of MUs corresponding to MUG 509 is indicated by index "m". As such, each table entry of portion 507 indicates a location on memory 510 of a particular one of the "m" MUs of MUG 509 (e.g., as indicated by physical mapping unit addresses PMUA[0] through PMUA[m-1]).

    [0046] Each table entry of portion 507 also includes attribute data corresponding thereto which indicates the variable data unit type (TP) of the MU to which the entry corresponds. The variable data unit type can, for instance, define the data unit size of the corresponding MU. As an example, TP=0 can indicate that the data unit size of the corresponding MU is equal to the base data unit size (e.g., a smallest data unit size of the number of variable data unit sizes). TP=1 can indicate that the data unit size of the corresponding MU is equal to twice the base unit size, and TP=2 can indicate that the data unit size of the corresponding MU is equal to four times the base unit size. In this example, the base data unit size is 4KB. As described above, in a number of embodiments, the base data unit size can be less than the page size corresponding to the memory (e.g., 510), which can be 8KB, for instance. As indicated by table entry 513-i, MU[i] has a corresponding variable data unit type of 0 (e.g., TP=0), which is reflected by the fact that the table entries of portion 505 point to 4KB data units on memory 510 (e.g., as indicated by arrows 511-0, 511-1, and 511-2). Embodiments are not limited to this example.
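
    The TP values in this example read as power-of-two multipliers on the base data unit size, so one plausible realization is a shift; only the example values TP=0, 1, and 2 are fixed by the text:

        #include <stdio.h>

        #define BASE_DATA_UNIT (4 * 1024)   /* smallest mapped data unit: 4KB */

        static long data_unit_size(int tp)
        {
            return (long)BASE_DATA_UNIT << tp;   /* 2^tp times the base size  */
        }

        int main(void)
        {
            for (int tp = 0; tp <= 2; tp++)      /* 4KB, 8KB, 16KB            */
                printf("TP=%d -> %ld KB data units\n", tp, data_unit_size(tp) >> 10);
            return 0;
        }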

    [0047] Figure 6 illustrates a number of mapping units (603-0, 603-1, 603-2, ..., 603-i, ...) associated with address mapping in accordance with a number of embodiments of the present disclosure. As described above, the MUs 603 can be organized as a number of MUGs such as MUG 609 in a L2P map 601. The MUs 603 and MUG 609 of Figure 6 can be analogous to the MUs and MUGs described in association with prior Figures, for instance. In this example, the MUG 609 comprises MU 603-i (MU[i]), as well as a number of other "neighboring" MUs 603 (e.g., the other MUs 603 of the MUG 609 to which MU 603-i belongs). As shown in Figure 6, MU[i] comprises a first portion 605-i comprising mapping data indicating locations on a memory 610 of a number of physical data units, and a second portion 607-i comprising mapping data indicating locations on the memory 610 of a number of other MUs of MUG 609, to which MU[i] belongs. In a number of embodiments, the mapping data of portion 607-i can indicate locations on memory other than memory 610 (e.g., on a different memory device and/or different type of memory).

    [0048] The memory 610 can comprise a number of memory devices such as memory devices 110-1 to 110-D shown in Figure 1, for instance. In the example shown in Figure 6, the memory 610 comprises a first die (Die 0) and a second die (Die 1), each comprising a first plane (Plane 0) and a second plane (Plane 1) of memory cells; however, embodiments are not limited to a particular number of dies and/or planes associated with memory 610.

    [0049] The portion 605-i of MU[i] can comprise a fixed amount of space. In a number of embodiments, an amount of the space 605-i used for indicating locations (e.g., physical addresses) of the data units corresponding to the MU (e.g., the size of the data unit mapping table) may be less than the available amount of space corresponding to portion 605-i. In the example shown in Figure 6, a portion of available space 605-i comprises a data unit mapping table 615-i indicating the physical locations of data units corresponding to MU[i] that have a size defined by a variable data unit type corresponding to MU[i]. The amount of space 615-i can depend on the variable data unit type corresponding to MU[i]. As described further below, in this example, the variable data unit type corresponding to MU[i] is TP=1, which can indicate that the data unit size (e.g., 8KB) corresponding to MU[i] is twice that of the base data unit size (e.g., 4KB). As such, there are half as many entries in table 615-i as there would be if the variable data unit type corresponding to MU[i] were TP=0. That is, since the size of the data units corresponding to MU[i] is twice as large as the base data unit size, half as many table entries corresponding to portion 615-i are used in order to map the same amount of logical space. Therefore, if PA_PER_MU indicates the number of entries in table 615-i assuming a base data unit size, then PA_PER_MU/2 is the number of table entries in table 615-i assuming the data unit size corresponding to MU[i] is twice the base data unit size. As such, table 615-i shown in Figure 6 includes entries PA[0] through PA[(PA_PER_MU/2) - 1], and each of the entries points to an 8KB data unit on memory 610 (e.g., as indicated by arrows 611-i).

    [0050] As shown in Figure 6, in a number of embodiments, the variable data unit type corresponding to an MU (e.g., MU[i]) may be such that the amount of space (e.g., 615-i) used for mapping data corresponding to data units having a size defined by the variable data unit type is less than the amount of space available (e.g., 605-i). In such embodiments, at least a portion (e.g., 625-i) of the available space (605-i) can provide indicators pointing to data units corresponding to the MU that have a size other than the size defined by the variable data unit type. For instance, the portion 625-i can comprise a data structure such as a tree (e.g., a hash tree, b-tree, etc.), for example, that maintains map updates associated with overwrites to the logical space mapped by MU[i] (e.g., due to a variable/irregular host I/O workload). For instance, one or more 4KB write operations initiated by a host and associated with the logical address range corresponding to MU[i] may be interspersed with several 8KB write operations to the same logical address range corresponding to MU[i]. Rather than adjusting the current 8KB data unit size corresponding to MU[i], due to the 4KB writes, the portion 625-i can be used to indicate the locations of the smaller 4KB data units on memory 610 (e.g., as indicated by arrows 619-i in Figure 6). Due to the limited amount of space associated with portion 625-i, in a number of embodiments, the variable data unit type can be adjusted responsive to the host I/O workload. For instance, if the number of updates associated with overwrites to the logical address range corresponding to MU[i] is such that the available update space 625-i becomes filled, then the variable data unit type can be adjusted (e.g., changed from TP=1 to TP=0).
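
    A hedged sketch of the adjustment trigger just described: sub-unit overwrites consume entries in the update space 625-i until it would overflow, at which point the data unit type is lowered. The counters, the capacity check, and the rebuild step are assumptions:

        #include <stdio.h>

        struct mu_state {
            int tp;               /* current variable data unit type            */
            int updates_used;     /* entries consumed in update space (625-i)   */
            int updates_capacity; /* fixed room left beside table 615-i         */
        };

        /* Record one sub-unit overwrite; lower the data unit type when the
           update space would overflow (re-encoding of table 615-i omitted). */
        static void note_small_overwrite(struct mu_state *mu)
        {
            if (mu->updates_used + 1 > mu->updates_capacity && mu->tp > 0) {
                mu->tp--;             /* e.g., TP=1 -> TP=0: smaller data units */
                mu->updates_used = 0; /* table rebuilt at the new unit size     */
            } else {
                mu->updates_used++;
            }
        }

        int main(void)
        {
            struct mu_state mu = { 1, 0, 3 };  /* TP=1, room for 3 updates      */
            for (int i = 0; i < 4; i++)
                note_small_overwrite(&mu);     /* fourth overwrite overflows    */
            printf("TP is now %d\n", mu.tp);   /* prints: TP is now 0           */
            return 0;
        }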

    [0051] The portion 607-i of MU[i] can comprise a MU mapping table, for instance, having entries indicating the locations of the respective MUs 603 corresponding to MUG 609. The entries of MU mapping table 607-i can also indicate a variable data unit type (TP) corresponding to the respective MU. In this example, the number of MUs corresponding to MUG 609 is indicated by index "m". As such, each table entry of portion 607-i indicates a location on memory 610 of a particular one of the "m" MUs of MUG 609 (e.g., as indicated by physical mapping unit addresses PMUA[0] through PMUA[m-1]).

    [0052] As shown in Figure 6, entry 613-i, which corresponds to MU[i], indicates a variable data unit type TP=1, which in this example indicates that the data unit size mapped by portion 615-i is twice that of the base data unit size of 4KB. As such, in this example, the variable data unit type TP=1 also indicates that half of space 605-i (e.g., 615-i) is used to map 8KB data units and half of space 605-i (e.g., 625-i) is used to map data units (e.g., 4KB data units) associated with overwrites to the logical addresses corresponding to MU[i].

    [0053] MU[j] 603-j shown in Figure 6 is within the same MUG 609 as MU[i] 603-i. As such, portion 607-i of MU[i] comprises an entry 613-j, which indicates (via PMUA[j] represented by arrow 617-j) a physical location on memory 610 of MU[j], as well as a variable data unit type (e.g., TP=3) corresponding to MU[j]. In this example, TP=3 indicates that the data unit size corresponding to MU[j] is 32KB (e.g., eight times the base data unit size of 4KB). Since the data unit size corresponding to MU[j] is eight times that of the base data unit size, eight times fewer entries in data unit mapping table 615-j are required to map the amount of logical space mapped by MU[j] than if the data unit size corresponding to MU[j] were the base data unit size. As such, portion 615-j comprises entries PA[0] through PA[(PA_PER_MU/8) - 1], which provide indicators (e.g., as represented by arrows 611-j) pointing to the locations on memory 610 of 32KB data units. Therefore, 7/8 of the space 605-j available for mapping to data units having a size defined by the variable data unit type corresponding to MU[j] is unused.

    [0054] As described above, the portion 625-j can comprise a data structure such as a hash tree and/or b-tree, for instance, which can provide indicators pointing to data units corresponding to MU[j] that have a size other than the size defined by the variable data unit type (e.g., a size smaller than 32KB, such as a size equal to the base data unit size). In this example, the portion 625-j provides pointers (e.g., as represented by arrows 619-j) to 4KB data units on memory 610. The mapping data (e.g., tree entries) of portion 625-j can correspond to overwrites to the logical address space mapped by MU[j] (e.g., host initiated overwrites corresponding to logical space amounts less than the data unit size of 32KB, in this example).

    [0055] For a number of reasons, it may be useful to adjust the particular variable data unit type of one or more MUs (e.g., 603) of a MUG (e.g., to reduce the likelihood of fragmented and/or misaligned writes, to maintain accurate mapping data, etc.). For instance, if a host issues a number of relatively small write commands, such as a number of 4KB metadata writes to the logical address space of a particular MU, and the data unit size defined by the particular variable data unit type of the MU is larger than the 4KB write size, then the space available for maintaining updates (e.g., 625-i) may reach and/or exceed capacity. As such, the particular variable data unit type of the MU can be adjusted to a different data unit type (e.g., a data unit type that defines a smaller data unit size). In a number of embodiments, the particular variable data unit type can be adjusted such that the defined data unit size of a particular MU is increased. For instance, if a host issues a number of 32KB writes to the logical address space of a particular MU having a defined data unit size of 4KB (e.g., TP=0), it can be beneficial to increase the defined data unit size of the MU to 32KB (e.g., TP=3), for example. Increasing the data unit size of the MU can provide benefits such as reducing the amount of mapping data space (e.g., 605-i) used by the data unit mapping table (e.g., 615-i), which increases the space (e.g., 625-i) available for mapping overwrites to the logical address space of the MU.
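
    A sketch of one possible adjustment heuristic consistent with this paragraph, using hypothetical per-MU counters of the kind MU management 116 might keep; the thresholds are invented for illustration:

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical per-MU counters kept by MU management (component 116). */
        struct mu_stats { uint32_t small_writes; uint32_t large_writes; };

        static int suggest_tp(const struct mu_stats *s)
        {
            /* Assumed heuristic: mostly 32KB writes -> TP=3; mostly 4KB -> TP=0. */
            if (s->large_writes > 4 * s->small_writes) return 3;
            if (s->small_writes > 4 * s->large_writes) return 0;
            return 1;                        /* mixed workload: 8KB data units   */
        }

        int main(void)
        {
            struct mu_stats s = { 2, 100 };  /* streaming-heavy logical range    */
            printf("suggested TP: %d\n", suggest_tp(&s));   /* prints 3          */
            return 0;
        }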

    [0056] In a number of embodiments, a mapping unit management component (e.g., MU management 116 shown in Figure 1) can monitor (e.g., track) the I/O workload of a host (e.g., host 102). For instance, the MU management component can track the sizes of incoming write commands and the MUs and/or MUGs to which they correspond (e.g., based on the logical addresses corresponding to the write commands). The MU management component can use the monitored information to, for example, determine the variable data unit types of the various MUs and/or whether to adjust the variable data unit types corresponding thereto.

    [0057] As such, a number of embodiments of the present disclosure provide the ability to adjust the variable data unit type of MUs. Adjusting the variable data unit types can provide various benefits by providing a flexible address mapping scheme that can be implemented by an FTL (e.g., FTL 114 shown in Figure 1), for instance, and that can be adapted to account for host I/O workload variability and/or irregularity, among other benefits.

    [0058] Figure 7 illustrates a number of mapping units (703-0, 703-1, 703-2, ..., 703-i, ...) associated with address mapping in accordance with a number of embodiments of the present disclosure. As described above, the MUs 703 can be organized as a number of MUGs such as MUG 709 in a L2P map 701. The MUs 703 and MUG 709 of Figure 7 can be analogous to the MUs and MUGs described in association with prior Figures, for instance. In this example, the MUG 709 comprises MU 703-i (MU[i]), as well as a number of other "neighboring" MUs 703 (e.g., the other MUs 703 of the MUG 709 to which MU 703-i belongs). As shown in Figure 7, MU[i] comprises a first portion 705-i comprising a data unit mapping table 715-i indicating locations on a memory 710 of a number of physical data units, and a second portion 707 comprising a MU mapping table indicating locations on the memory 710 of a number of other MUs of MUG 709, to which MU[i] belongs.

    [0059] As described above, the portion 705-i of MU[i] can comprise a fixed amount of space. In a number of embodiments, the amount of the space 705-i used for indicating locations (e.g., physical addresses) of data units having a size defined by the variable data unit type corresponding to MU[i] (e.g., the size of the data unit mapping table 715-i) may be less than the available amount of space corresponding to portion 705-i. The amount of space 715-i can depend on the variable data unit type corresponding to MU[i].

    [0060] In this example, as indicated by entry 713-i of MU table 707, the variable data unit type corresponding to MU[i] is TP=1, which can indicate that the data unit size corresponding to MU[i] is twice that of the base data unit size (e.g., 4KB). That is, the data unit size defined by TP=1 is 8KB. As such, there are half as many entries in data unit mapping table 715-i as there would be if the variable data unit type corresponding to MU[i] were TP=0 (with TP=0 corresponding to a defined variable data unit size equal to the base data unit size, e.g., 4KB, in this example). That is, since the size of the data units corresponding to MU[i] is twice as large as the base data unit size, half as many table entries corresponding to portion 715-i are used in order to map the same amount of logical space mapped by MU[i] (4MB, in this example). Therefore, if PA_PER_MU indicates the number of entries in table 715-i assuming a base data unit size, then PA_PER_MU/2 is the number of table entries in table 715-i assuming the data unit size corresponding to MU[i] is twice the base data unit size. As such, table 715-i shown in Figure 7 includes entries PA[0] through PA[(PA_PER_MU/2) - 1], and although not shown in Figure 7, each of the entries points to an 8KB data unit on memory (e.g., memory 710).

    [0061] Since the variable data unit type (e.g., TP=1) corresponding to MU[i] is such that the size of the data unit mapping table 715-i is less than the amount of space 705-i available, a portion 725-i of the available space 705-i can comprise a data structure such as a tree (e.g., a hash tree, b-tree, etc.), for example, that maintains map updates associated with overwrites to the logical space mapped by MU[i] (e.g., due to host I/O workload variability/irregularity).
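    A lookup through such an MU might consult the update structure before the coarse table (a hypothetical sketch in which a plain dictionary stands in for the hash tree or b-tree of portion 725-i; all names are assumptions):

        # Hypothetical MU whose leftover table space holds overwrite updates.
        class MUWithUpdates:
            def __init__(self, data_unit_table, unit_size):
                self.table = data_unit_table   # coarse entries (e.g., 8KB units)
                self.unit_size = unit_size
                self.updates = {}              # logical offset -> physical addr

            def overwrite(self, offset, phys_addr):
                self.updates[offset] = phys_addr   # record a sub-unit overwrite

            def lookup(self, offset):
                if offset in self.updates:         # newest mapping wins
                    return self.updates[offset]
                return self.table[offset // self.unit_size]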

    [0062] The portion 707 of MU[i] can comprise a MU mapping table, for instance, having entries indicating the locations of the respective MUs 703 corresponding to MUG 709. The entries of MU mapping table 707 can also indicate a variable data unit type (TP) corresponding to the respective MU. In this example, the number of MUs 703 corresponding to MUG 709 is indicated by index "m". In a number of embodiments, a number of the MUs 703 corresponding to MUG 709 can be direct data unit (DDU) MUs. As used herein, a DDU MU can refer to a MU (e.g., 703-j) whose corresponding data unit size is equal to the amount of data mapped by each of the respective MUs (e.g., 703) of the MUG (e.g., 709) to which the particular MU (e.g., 703-j) belongs.

    [0063] For instance, if each of the MUs 703 maps 4MB of data, then a DDU MU (e.g., 703-j) maps to a single 4MB physical data unit on memory, which is referred to as a DDU. In Figure 7, mapping entry 713-j corresponding to MU[j] provides an indicator (represented by arrow 721-j) pointing directly to a location on memory 710 of the 4MB data unit mapped by MU[j]. That is, PMUA[j] points directly to the DDU corresponding to DDU MU[j]. In a number of embodiments, the variable data unit type corresponding to a DDU MU is TP=A (where "A" can be a hexadecimal number). As such, entry 713-j includes attribute data comprising TP=A, as shown in Figure 7. Since PMUA[j] points directly to the DDU corresponding to MU[j], as opposed to pointing to a location on memory 710 of MU[j] 703-j, MU[j] need not be stored on memory 710 and accessed in order to determine the location of the data units corresponding thereto. As such, the presence of one or more DDU MUs can reduce the total number of stored MUs, for instance.
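    Address resolution through the MU mapping table can therefore short-circuit for DDU MUs (a sketch; TP_DDU = 0xA follows the document's TP=A convention, while the entry type and function name are assumptions):

        from collections import namedtuple

        MUEntry = namedtuple("MUEntry", ["pmua", "tp"])  # hypothetical entry
        TP_DDU = 0xA   # variable data unit type marking a DDU MU

        def resolve(entry):
            if entry.tp == TP_DDU:
                # PMUA points directly at the 4MB data unit on memory, so the
                # stored MU never needs to be read to locate its data.
                return ("data_unit", entry.pmua)
            # Otherwise PMUA locates the stored MU, which is then read to
            # reach its data unit mapping table.
            return ("mapping_unit", entry.pmua)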

    [0064] Figure 8 illustrates a number of mapping units (803-0, 803-1, 803-2, ..., 803-i, ...) associated with address mapping in accordance with a number of embodiments of the present disclosure. The MUs 803 are organized as a number of MUGs including MUG 809 in an L2P map 801. The MUs 803 and MUG 809 of Figure 8 can be analogous to the MUs and MUGs described in association with prior Figures, for instance. In this example, MUG 809 comprises MU 803-i (MU[i]), as well as the other MUs 803 (including MU[j]) of the MUG 809 to which MU 803-i belongs.

    [0065] In a number of embodiments, and as shown in Figure 8, a MUG (e.g., MUG 809) can comprise all DDU MUs, such as DDU MU[j] described in association with Figure 7. For instance, each of the MUs 803 of MUG 809 is a DDU MU. In the examples described herein, the variable data unit type corresponding to DDU MUs is TP=A. In such examples, TP=A defines the data unit size to be equal to the amount of data mapped by the MUs of a MUG, which in this example is 4MB. Since each of the MUs 803 of MUG 809 is a DDU MU, each of the entries (e.g., 813-i, 813-j, etc.) in MU mapping table 807 includes a physical mapping unit address (e.g., PMUA[0] through PMUA[m-1]) indicating the location on memory 810 of a 4MB data unit corresponding to a respective MU 803. For instance, as indicated by arrow 821-i, PMUA[i] points to the 4MB data unit corresponding to MU[i], and as indicated by arrow 821-j, PMUA[j] points to the 4MB data unit corresponding to MU[j].

    [0066] In embodiments in which a MUG includes only DDU MUs, the amount of space allocated for a data unit mapping table (e.g., mapping table 715-i shown in Figure 7) can be omitted. As such, the space that would otherwise have been allocated for a data unit mapping table can be used for other purposes. For instance, in the embodiment illustrated in Figure 8, the portion 805-i of MU[i] is used to maintain mapping updates associated with overwrites to the logical space mapped by MUG 809. For example, the portion 805-i can comprise a data structure such as a tree (e.g., hash tree, b-tree, etc.) to maintain the mapping updates corresponding to MUG 809. As shown in Figure 8, the entries of the data structure of portion 805-i provide pointers (e.g., as represented by arrows 819) to data units on memory 810. In this example, the arrows 819 point to data units having a size equal to the base data unit size (e.g., 4KB); however, embodiments are not so limited.
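    In an all-DDU MUG, a lookup might first consult the MUG-wide update structure of portion 805-i and only then fall back to the direct pointers (a hypothetical sketch; a dictionary keyed by 4KB-aligned logical address stands in for the tree, and a 4MB span per MU is assumed):

        MU_SPAN = 4 * 1024 * 1024   # assumed logical space mapped per MU (4MB)

        def mug_lookup(mug_updates, pmua_table, logical_addr):
            # mug_updates: MUG-wide overwrite updates (portion 805-i)
            # pmua_table:  PMUA[0..m-1], each pointing directly at a 4MB DDU
            if logical_addr in mug_updates:
                return mug_updates[logical_addr]      # overwritten base unit
            mu_index = (logical_addr // MU_SPAN) % len(pmua_table)
            return pmua_table[mu_index]               # direct 4MB data unit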

    [0067] The data structure associated with portion 805-i can be similar to the data structures associated with portions 725-i described in Figure 7 and 625-i described in Figure 6, for instance. However, the data structure of portion 805-i can be used to maintain updates corresponding to logical addresses across the entire logical address space mapped by a MUG (e.g., as opposed to maintaining updates corresponding to logical addresses across the address space mapped by a particular MU within a MUG).

    [0068] Figure 9 illustrates a number of mapping units (903-0, 903-1, 903-2, ..., 903-i, ...) associated with address mapping in accordance with a number of embodiments of the present disclosure. The MUs 903 are organized as a number of MUGs including MUG 909 in an L2P map 901. The MUs 903 and MUG 909 of Figure 9 can be analogous to the MUs and MUGs described in association with prior Figures, for instance. In this example, MUG 909 comprises MU 903-i (MU[i]), as well as the other MUs 903 of the MUG 909 to which MU 903-i belongs.

    [0069] The MU 903-i is a DDU MU, such as DDU MU 803-i described in association with Figure 8. For instance, the data unit size defined by the variable data unit type of MU 903-i is equal to the total amount of data mapped by MU 903-i (e.g., 4MB). However, MU 903-i has a different variable data unit type than MU 803-i. In the example shown in Figure 9, the variable data unit type corresponding to MU 903-i is TP=F (as opposed to TP=A). As used herein, an MU having a variable data unit type of TP=F will be referred to as a "packet DDU" MU.

    [0070] Packet DDU MU 903-i includes mapping data comprising a first portion 905-i and a second portion 907. The second portion 907 can be analogous to the second portion 807 of MU 803-i. That is, portion 907 can comprise a MU mapping table corresponding to the MUs 903 of MUG 909. However, the first portion 905-i comprises management information associated with packet DDU MU 903-i (e.g., as opposed to a data unit mapping table and/or tree data structure for maintaining updates as described above). The management information can include various information, which in this example includes medium type information and location information (e.g., PA[n][m] as shown in Figure 9). The medium type information can indicate a particular type of medium to which the physical data unit corresponding to MU 903-i is mapped, and the location information can indicate a physical address on the particular type of medium of the physical data unit, for instance.
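    The management information of a packet DDU MU might be captured as follows (a sketch; the medium-type strings and field names are illustrative assumptions):

        from dataclasses import dataclass

        @dataclass
        class PacketDDUInfo:
            medium_type: str   # e.g., "SLC", "MLC", "DRAM", "PCM", "SAN", "NAS"
            location: int      # physical address on that medium
                               # (corresponding to PA[n][m] in Figure 9)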

    [0071] As an example, a packet DDU MU such as MU 903-i can be written to and/or retrieved from various types of media such as flash memory (e.g., MLC, SLC, etc.), DRAM, resistance variable memory (e.g., phase change memory, RRAM, etc.), storage area networks (SANs), and/or network attached storage (NAS), among other media types. As such, although various examples provided herein are based on a flash-page-based framework, in various embodiments, the MUs of an L2P map (e.g., 901) can be managed outside of a flash-page-based framework. As an example, a proxy component 931 (e.g., a proxy processor) associated with one or more different media types 933, 935, 937, and/or 939 can be used to manage the MUs.

    [0072] Figure 10 illustrates a functional flow chart associated with updating a mapping unit in accordance with a number of embodiments of the present disclosure. In a number of embodiments, MUs such as those described herein can be stored in memory and garbage collected as appropriate. The MUs can be stored in a dedicated region of memory, for instance. As described herein, MUs can comprise a mapping unit table (e.g., 307, 407, 507, 607, 707, 807, 907) that can indicate the locations on memory of the MUs of a particular MUG. In a number of embodiments, a most recently written one of the MUs of a MUG includes the currently up to date MU table for the group. As such, the most recently written MU (e.g., the last MU to receive updates to its L2P mappings) is the current up to date MU and includes up to date indicators pointing to the physical locations of the MUs of a MUG. An indication of the current up to date MU of a MUG can be maintained by the controller, for instance.

    [0073] In Figure 10, MU[i] 1003-i represents a currently up to date MU. As such, MU 1003-i includes the currently up to date mapping table 1007-i. MU[j] 1003-j represents an MU that is to be updated and subsequently written to memory. For instance, the entries associated with data unit table 1005-j of MU 1003-j may need to be updated responsive to a host-initiated write operation. Since MU 1003-j is not the currently up to date MU, its MU table 1007-j contains invalid indicators (e.g., MU table 1007-j is stale). After the data unit table 1005-j of MU 1003-j is updated, it will be written back to memory and will become the new currently up to date MU (e.g., the MU with the currently up to date MU mapping table).

    [0074] As such, prior to writing MU 1003-j back to memory, its MU mapping table 1007-j is replaced with the currently up to date MU mapping table 1007-i. As shown on the right side of Figure 10, the new MU[j] 1003-j includes the updated data unit mapping table (e.g., PA[]) and the currently up to date mapping unit mapping table (e.g., PMUA[]). The new MU[j] 1003-j now contains the currently up to date MU mapping table 1007-j, which can be used to replace the stale MU table of a next MU to be updated and rewritten to memory.
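    The update flow of Figure 10 thus reduces to a few steps (a hypothetical sketch; write_mu stands in for the controller's actual write path, and the data unit table is modeled as a dictionary of entry index to physical address):

        def update_mu(stale_mu, current_mu, data_table_updates, write_mu):
            # stale_mu:   MU[j], whose data unit table must change but whose
            #             own MU mapping table is out of date
            # current_mu: MU[i], the most recently written MU, carrying the
            #             group's up to date MU mapping table
            stale_mu.data_unit_table.update(data_table_updates)  # host writes
            stale_mu.mu_table = list(current_mu.mu_table)        # refresh table
            new_location = write_mu(stale_mu)                    # write back
            # MU[j] is now the group's current up to date MU; its MU table
            # will seed the next MU to be updated and rewritten.
            return new_location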

    Conclusion



    [0075] The present disclosure includes methods, memory units, and apparatuses for address mapping. One method includes providing a mapping unit having logical to physical mapping data corresponding to a number of logical addresses. The mapping unit has a variable data unit type associated therewith and comprises a first portion comprising mapping data indicating locations on a memory of a number of physical data units having a size defined by the variable data unit type, and a second portion comprising mapping data indicating locations on the memory of a number of other mapping units of a mapping unit group to which the mapping unit belongs.

    [0076] As used herein, the term "and/or" includes any and all combinations of a number of the associated listed items. As used herein the term "or," unless otherwise noted, means logically inclusive or. That is, "A or B" can include (only A), (only B), or (both A and B). In other words, "A or B" can mean "A and/or B" or "a number of A and B."

    [0077] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of a number of embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of a number of embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of a number of embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.

    [0078] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.


    Claims

    1. A method for address mapping, comprising:
    providing a mapping unit (303; 503; 603; 703; 803; 903; 1003) having logical to physical mapping data corresponding to a fixed range of non-overlapping logical addresses, the mapping unit having a data unit type associated therewith, wherein the data unit type defines the particular amount of physical data to which a quantity of logical to physical mapping entries correspond, and wherein the data unit type is variable such that the amount of physical data to which the quantity of logical to physical mapping entries correspond is variable;
    characterised by:

    said mapping unit comprising both:
    a first portion (305; 405; 505; 605; 705; 805; 905; 1005) comprising mapping data comprising the quantity of logical to physical mapping entries each corresponding to a particular amount of physical data stored on a memory (110; 510; 610; 710; 810), wherein the quantity of logical to physical mapping entries depends on a data unit size defined by the data unit type; and a second portion (307; 407; 507; 607; 707; 807; 907; 1007) comprising:

    mapping data indicating locations on the memory (110; 510; 610; 710; 810) of a plurality of other mapping units (303; 503; 603; 703; 803; 903; 1003) of a mapping unit group (409; 509; 609; 709; 809; 909) to which the mapping unit belongs; and

    attribute data indicating the data unit type associated with the mapping units of the mapping unit group; and

    said mapping units of the plurality of other mapping units each comprise:

    a first fixed amount of space (605-i) allocated for a data unit mapping table (615-i); and

    a second fixed amount of space (607-i) allocated for a mapping unit mapping table;

    wherein the data unit mapping table (615-i) of at least one of the plurality of other mapping units uses less than the first fixed amount of space (605-i) allocated for the data unit mapping table (615-i) and comprises:

    a first quantity of logical to physical mapping entries (611-i) mapping to physical data units having a size defined by a data unit type of the respective mapping unit; and

    a second quantity of logical to physical mapping entries (619-i) mapping to physical data units having a size other than the size defined by the data unit type of the respective mapping unit; and

    wherein said data unit type comprises:

    at least one data unit type that corresponds with a particular amount of physical data that is less than a physical page size corresponding to a memory device to which the data is mapped, and

    at least one data unit type that corresponds with a particular amount of physical data that is greater than a physical page size corresponding to a memory device to which the data is mapped.


     
    2. The method of claim 1, wherein the method includes updating the mapping unit (303; 503; 603; 703; 803; 903; 1003) responsive to a change in the logical to physical mapping data corresponding to one or more of the number of logical addresses.
     
    3. The method of claim 1, wherein the method includes monitoring a host (102) input/output (I/O) workload and adjusting the data unit size associated with the mapping unit (303; 503; 603; 703; 803; 903; 1003) based, at least partially, on the host (102) I/O workload.
     
    4. The method of claim 3, wherein adjusting the data unit size associated with the mapping unit includes adjusting the quantity of the number of logical to physical mapping entries corresponding to the first portion (305; 405; 505; 605; 705; 805; 905; 1005).
     
    5. The method of claim 1, wherein the method includes providing a number of mapping unit groups (409; 509; 609; 709; 809; 909), wherein each of the number of mapping unit groups (409; 509; 609; 709; 809; 909) comprises a same number of mapping units (303; 503; 603; 703; 803; 903; 1003).
     
    6. The method of claim 5, including providing the number of mapping unit groups (409; 509; 609; 709; 809; 909) such that each respective mapping unit corresponds to a same number of non-overlapping logical addresses.
     
    7. An apparatus, comprising:

    a memory (110; 510; 610; 710; 810) storing a logical to physical mapping data structure comprising a plurality of mapping units (303; 503; 603; 703; 803; 903; 1003) each having a data unit type of a number of different data unit types associated therewith, the data unit type defining a data unit size mapped by the respective mapping unit (303; 503; 603; 703; 803; 903; 1003), wherein the data unit type defines the particular amount of physical data to which a quantity of logical to physical mapping entries correspond, and wherein the data unit type is variable such that an amount of physical data to which the quantity of logical to physical mapping entries correspond is variable; and

    a controller (108) coupled to the memory (110; 510; 610; 710; 810) and configured to:

    access a particular mapping unit (303; 503; 603; 703; 803; 903; 1003) based on a logical address associated with a write command received from a host (102);
    characterised by said controller further configured to:

    update a first portion (305; 405; 505; 605; 705; 805; 905; 1005) of the particular mapping unit (303; 503; 603; 703; 803; 903; 1003), the first portion (305; 405; 505; 605; 705; 805; 905; 1005) comprising a first fixed amount of space allocated for a data unit mapping table comprising a quantity of logical to physical mapping entries each corresponding to a particular amount of physical data stored on the memory (110; 510; 610; 710; 810), wherein the quantity of logical to physical mapping entries depends on the size defined by the data unit type of the particular mapping unit (303; 503; 603; 703; 803; 903; 1003); and

    update a second portion of the particular mapping unit (303; 503; 603; 703; 803; 903; 1003), the second portion (307; 407; 507; 607; 707; 807; 907; 1007) comprising attribute data indicating the data unit type associated with a mapping unit group (409; 509; 609; 709; 809; 909) to which the particular mapping unit (303; 503; 603; 703; 803; 903; 1003) belongs;

    wherein at least one data unit type corresponds with a particular amount of physical data that is less than a physical page size corresponding to a memory device to which the data is mapped;

    wherein at least one data unit type corresponds with a particular amount of physical data that is greater than a physical page size corresponding to a memory device to which the data is mapped;

    wherein at least one of the plurality of mapping units has an associated data unit type that corresponds to a quantity of logical to physical mapping entries that uses the whole of the first fixed amount of space allocated for the data unit mapping table; and

    wherein at least one of the plurality of mapping units has an associated data unit type that corresponds to a quantity of logical to physical mapping entries that uses less than the whole of the first fixed amount of space allocated for the data unit mapping table, with a remaining portion of the first fixed amount of space comprising logical to physical mapping entries to physical data units having a size less than the size defined by the data unit type of the respective one of the plurality of mapping units.


     
    8. The apparatus of claim 7, wherein the mapping data of the second portion (307; 407; 507; 607; 707; 807; 907; 1007) comprises a mapping unit table and wherein the controller is configured to update the mapping unit table by replacing the mapping unit table with a currently up to date mapping unit table of another mapping unit (303; 503; 603; 703; 803; 903; 1003) of the mapping unit group (409; 509; 609; 709; 809; 909).
     
    9. The apparatus of claim 8, wherein the controller is configured to:

    write the particular mapping unit (303; 503; 603; 703; 803; 903; 1003) to memory (110; 510; 610; 710; 810) subsequent to replacing the mapping unit table with the currently up to date mapping unit table of the another mapping unit (303; 503; 603; 703; 803; 903; 1003); and

    subsequently determine a location on the memory (110; 510; 610; 710; 810) of a different mapping unit (303; 503; 603; 703; 803; 903; 1003) of the mapping unit group (409; 509; 609; 709; 809; 909) by accessing the replaced mapping unit table of the particular mapping unit (303; 503; 603; 703; 803; 903; 1003).


     
    10. The apparatus of claim 7, further comprising a memory management unit (116) configured to:

    monitor an input/output (I/O) workload of a host (102); and

    determine whether to adjust the data unit type of at least one of the number of mapping units (303; 503; 603; 703; 803; 903; 1003) based on the I/O workload of the host (102).


     
    11. The apparatus of claim 10, wherein the memory management unit (116) is further configured to track a size of write commands corresponding to logical addresses mapped by the plurality of mapping units (303; 503; 603; 703; 803; 903; 1003).
     







