BACKGROUND
[0001] Limitations and disadvantages of conventional approaches to data storage will become
apparent to one of skill in the art, through comparison of such approaches with some
aspects of the present method and system set forth in the remainder of this disclosure
with reference to the drawings.
US 2013/073717 A1 discloses a clustered network-attached storage (NAS) system comprising a set of nodes/servers
attached to the Internet.
BRIEF SUMMARY
[0002] Methods and systems are provided for load balanced network file accesses substantially
as illustrated by and/or described in connection with at least one of the figures,
as set forth in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003]
FIG. 1 illustrates various example configurations of a distributed electronic storage
system in accordance with aspects of this disclosure.
FIG. 2 illustrates various example configurations of a compute node that uses a distributed
electronic storage system in accordance with aspects of this disclosure.
FIG. 3 illustrates various example configurations of a dedicated distributed electronic
storage system node in accordance with aspects of this disclosure.
FIG. 4 illustrates various example configurations of a dedicated storage node in accordance
with aspects of this disclosure.
FIG. 5A is a flowchart of an example process for load balancing the handling of file
system requests in accordance with aspects of this disclosure.
FIGS. 5B-5D illustrate an example DESS during the process of FIG. 5A.
FIG. 6A is a flowchart of an example process for load balancing the handling of file
system requests in accordance with aspects of this disclosure.
FIGS. 6B-6C illustrate an example DESS during the process of FIG. 6A.
FIG. 7A is a flowchart of an example process for handling of file system requests
in accordance with aspects of this disclosure.
FIGS. 7B-7C illustrate an example DESS during the process of FIG. 7A.
FIG. 8A is a flowchart of an example process for load balancing the handling of file
system requests in accordance with aspects of this disclosure.
FIGS. 8B-8C illustrate an example DESS during the process of FIG. 8A.
DETAILED DESCRIPTION
[0004] FIG. 1 illustrates various example configurations of a distributed electronic storage
system in accordance with aspects of this disclosure. Shown in FIG. 1 is a local area
network (LAN) 102 comprising one or more virtual distributed electronic storage system
(DESS) nodes 120 (indexed by integers from 1 to J, for J ≥ 1), and optionally comprising
(indicated by dashed lines): one or more dedicated storage nodes 106 (indexed by integers
from 1 to M, for M ≥ 1), one or more compute nodes 104 (indexed by integers from 1
to N, for N ≥ 1), and/or an edge router that connects the LAN 102 to a remote network
118. The remote network 118 optionally comprises one or more storage services 114
(indexed by integers from 1 to K, for K ≥ 1), and/or one or more dedicated storage
nodes 115 (indexed by integers from 1 to L, for L ≥ 1). The nodes of the LAN 102 are
communicatively coupled via interconnect 101 (e.g., copper cables, fiber cables, wireless
links, switches, bridges, hubs, and/or the like).
[0005] Each compute node 104n (n an integer, where 1 ≤ n ≤ N) is a networked computing device
(e.g., a server, personal computer, or the like) that comprises circuitry for running
a variety of client processes (either directly on an operating system of the device
104n and/or in one or more virtual machines/containers running in the device 104n)
and for interfacing with one or more DESS nodes 120. As used in this disclosure,
a "client process" is a process that reads data from storage and/or writes data to
storage in the course of performing its primary function, but whose primary function
is not storage-related (i.e., the process is only concerned that its data is reliably
stored and retrievable when needed, and not concerned with where, when, or how the
data is stored). Example applications which give rise to such processes include: an
email server application, a web server application, office productivity applications,
customer relationship management (CRM) applications, and enterprise resource planning
(ERP) applications, just to name a few. Example configurations of a compute node 104n
are described below with reference to FIG. 2.
[0006] Each DESS node 120j (j an integer, where 1 ≤ j ≤ J) is a networked computing device
(e.g., a server, personal computer, or the like) that comprises circuitry for running
DESS processes and, optionally, client processes (either directly on an operating
system of the device 120j and/or in one or more virtual machines running in the device
120j). As used in this disclosure, a "DESS process" is a process that implements one or
more of: the DESS driver, the DESS front end, the DESS back end, and the DESS memory
controller described below in this disclosure. Example configurations of a DESS node
120j are described below with reference to FIG. 3. Thus, in an example implementation,
resources (e.g., processing and memory resources) of the DESS node 120j may be shared
among client processes and DESS processes. The processes of the DESS may be configured
to demand relatively small amounts of the resources to minimize the impact on the
performance of the client applications. From the perspective of the client process(es),
the interface with the DESS may be independent of the particular physical machine(s)
on which the DESS process(es) are running.
[0007] Each on-premises dedicated storage node 106m (m an integer, where 1 ≤ m ≤ M) is a
networked computing device and comprises one or more storage devices and associated
circuitry for making the storage device(s) accessible via the LAN 102. An example
configuration of a dedicated storage node 106m is described below with reference to FIG. 4.
[0008] Each storage service 114k (k an integer, where 1 ≤ k ≤ K) may be a cloud-based storage
service such as Amazon S3, Microsoft Azure, Google Cloud, Rackspace, Amazon Glacier,
or Google Nearline.
[0009] Each remote dedicated storage node 115l (l an integer, where 1 ≤ l ≤ L) may be similar
to, or the same as, an on-premises dedicated storage node 106. In an example implementation,
a remote dedicated storage node 115l may store data in a different format and/or be
accessed using different protocols than an on-premises dedicated storage node 106
(e.g., HTTP as opposed to Ethernet-based or RDMA-based protocols).
[0010] FIG. 2 illustrates various example configurations of a compute node that uses a DESS
in accordance with aspects of this disclosure. The example compute node 104n comprises
hardware 202 that, in turn, comprises a processor chipset 204 and a network adaptor 208.
[0011] The processor chipset 204 may comprise, for example, an x86-based chipset comprising
a single or multi-core processor system on chip, one or more RAM ICs, and a platform
controller hub IC. The chipset 204 may comprise one or more bus adaptors of various
types for connecting to other components of hardware 202 (e.g., PCIe, USB, SATA, and/or
the like).
[0012] The network adaptor 208 may, for example, comprise circuitry for interfacing to an
Ethernet-based and/or RDMA-based network. In an example implementation, the network
adaptor 208 may comprise a processor (e.g., an ARM-based processor) and one or more
of the illustrated software components may run on that processor. The network adaptor
208 interfaces with other members of the LAN 102 via a (wired, wireless, or optical)
link 226. In an example implementation, the network adaptor 208 may be integrated
with the chipset 204.
[0013] Software running on the hardware 202 includes at least: an operating system and/or
hypervisor 212, one or more client processes 218 (indexed by integers from 1 to Q,
for Q ≥ 1) and one or both of: a DESS driver 221 and DESS front end 220. Additional
software that may optionally run on the compute node 104n includes: one or more virtual
machines (VMs) and/or containers 216 (indexed by integers from 1 to R, for R ≥ 1).
[0014] Each client process 218q (q an integer, where 1 ≤ q ≤ Q) may run directly on an operating
system/hypervisor 212 or may run in a virtual machine and/or container 216r (r an
integer, where 1 ≤ r ≤ R) serviced by the OS and/or hypervisor 212. Each client
process 218 is a process that reads data from storage and/or writes data to storage
in the course of performing its primary function, but whose primary function is not
storage-related (i.e., the process is only concerned that its data is reliably stored
and is retrievable when needed, and not concerned with where, when, or how the data
is stored). Example applications which give rise to such processes include: an email
server application, a web server application, office productivity applications, customer
relationship management (CRM) applications, and enterprise resource planning (ERP)
applications, just to name a few.
[0015] The DESS driver 221 is operable to receive/intercept local file system commands (e.g.,
POSIX commands) and generate corresponding file system requests (e.g., read, write,
create, make directory, remove, remove directory, link, etc.) to be transmitted onto
the interconnect 101. In some instances, the file system requests transmitted on the
interconnect 101 may be of a format customized for use with the DESS front end 220
and/or DESS back end 222 described herein. In some instances, the file system requests
transmitted on the interconnect 101 may adhere to a standard such as Network File
System (NFS), Server Message Block (SMB), Common Internet File System (CIFS), and/or
the like.
[0016] Each DESS front end instance 220s (s an integer, where 1 ≤ s ≤ S if at least one
front end instance is present on compute node 104n) provides an interface for routing
file system requests to an appropriate DESS back end instance (running on a DESS node),
where the file system requests may originate from one or more of the client processes 218,
one or more of the VMs and/or containers 216, and/or the OS and/or hypervisor 212.
Each DESS front end instance 220s may run on the processor of chipset 204 or on the
processor of the network adaptor 208. For a multi-core processor of chipset 204,
different instances of the DESS front end 220 may run on different cores.
[0017] FIG. 3 shows various example configurations of a dedicated distributed electronic
storage system node in accordance with aspects of this disclosure. The example DESS
node 120j comprises hardware 302 that, in turn, comprises a processor chipset 304,
a network adaptor 308, and, optionally, one or more storage devices 306 (indexed by
integers from 1 to P, for P ≥ 1).
[0018] Each storage device 306p (p an integer, where 1 ≤ p ≤ P if at least one storage device
is present) may comprise any suitable storage device for realizing a tier of storage
that it is desired to realize within the DESS node 120j.
[0019] The processor chipset 304 may be similar to the chipset 204 described above with
reference to FIG. 2. The network adaptor 308 may be similar to the network adaptor
208 described above with reference to FIG. 2 and may interface with other nodes of
LAN 102 via link 326.
[0020] Software running on the hardware 302 includes at least: an operating system and/or
hypervisor 212, and at least one of: one or more instances of DESS front end 220 (indexed
by integers from 1 to W, for W ≥ 1), one or more instances of DESS back end 222 (indexed
by integers from 1 to X, for X ≥ 1), and one or more instances of DESS memory controller
224 (indexed by integers from 1 to Y, for Y ≥ 1). Additional software that may optionally
run on the hardware 302 includes: one or more virtual machines (VMs) and/or containers
216 (indexed by integers from 1 to R, for R ≥ 1), and/or one or more client processes
218 (indexed by integers from 1 to Q, for Q ≥ 1). Thus, as mentioned above, DESS processes
and client processes may share resources on a DESS node and/or may reside on separate
nodes.
[0021] The client processes 218 and VM(s) and/or container(s) 216 may be as described above
with reference to FIG. 2.
[0022] Each DESS front end instance 220w (w an integer, where 1 ≤ w ≤ W if at least one
front end instance is present on DESS node 120j) provides an interface for routing
file system requests to an appropriate DESS back end instance (running on the same
or a different DESS node), where the file system requests may originate from one or
more of the client processes 218, one or more of the VMs and/or containers 216, and/or
the OS and/or hypervisor 212. Each DESS front end instance 220w may run on the processor
of chipset 304 or on the processor of the network adaptor 308. For a multi-core processor
of chipset 304, different instances of the DESS front end 220 may run on different cores.
[0023] Each DESS back end instance 222x (x an integer, where 1 ≤ x ≤ X if at least one back
end instance is present on DESS node 120j) services the file system requests that it
receives and carries out tasks to otherwise manage the DESS (e.g., load balancing,
journaling, maintaining metadata, caching, moving of data between tiers, removing
stale data, correcting corrupted data, etc.). Each DESS back end instance 222x may
run on the processor of chipset 304 or on the processor of the network adaptor 308.
For a multi-core processor of chipset 304, different instances of the DESS back end
222 may run on different cores.
[0024] Each DESS memory controller instance 224y (y an integer, where 1 ≤ y ≤ Y if at least
one DESS memory controller instance is present on DESS node 120j) handles interactions
with a respective storage device 306 (which may reside in the DESS node 120j or another
DESS node 120 or a storage node 106). This may include, for example, translating addresses,
and generating the commands that are issued to the storage device (e.g., on a SATA,
PCIe, or other suitable bus). Thus, the DESS memory controller instance 224y operates
as an intermediary between a storage device and the various DESS back end instances
of the DESS.
[0025] FIG. 4 illustrates various example configurations of a dedicated storage node in
accordance with aspects of this disclosure. The example dedicated storage node 106m
comprises hardware 402 which, in turn, comprises a network adaptor 408 and at least
one storage device 406 (indexed by integers from 1 to Z, for Z ≥ 1). Each storage
device 406z may be the same as a storage device 306 described above with reference
to FIG. 3. The network adaptor 408 may comprise circuitry (e.g., an ARM-based processor)
and a bus (e.g., SATA, PCIe, or other) adaptor operable to access (read, write, etc.)
storage device(s) 406-1 through 406-Z in response to commands received over network
link 426. The commands may adhere to a standard protocol. For example, the dedicated
storage node 106m may support RDMA-based protocols (e.g., InfiniBand, RoCE, iWARP,
etc.) and/or protocols which ride on RDMA (e.g., NVMe over fabrics).
[0026] In an example implementation, tier 1 memory is distributed across one or more storage
devices 306 (e.g., FLASH devices) residing in one or more storage node(s) 106 and/or
one or more DESS node(s) 120. Data written to the DESS is initially stored to tier
1 memory and then migrated to one or more other tier(s) as dictated by data migration
policies, which may be user-defined and/or adaptive based on machine learning.
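As one illustration of such a policy, the sketch below applies a simple age-based rule for selecting tier 1 objects to migrate. The rule, function name, and data layout are assumptions for illustration only; the disclosure leaves the migration policy user-defined and/or adaptive.

```python
import time

def select_for_migration(objects, max_age_seconds, now=None):
    """Return names of tier 1 objects whose age exceeds the threshold.

    `objects` maps object name -> write timestamp (seconds since epoch).
    The age-based rule here is an illustrative stand-in for the
    user-defined/adaptive migration policies described above.
    """
    now = time.time() if now is None else now
    return [name for name, written in objects.items()
            if now - written > max_age_seconds]
```

A migration daemon could invoke such a selector periodically and hand the resulting list to the back end instances that move data between tiers.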
[0027] FIG. 5A is a flowchart of an example process for load balancing the handling of file
system requests in accordance with aspects of this disclosure. The process is described
with reference to FIGS. 5B-5D which depict a plurality (three in the non-limiting
example shown) of computing devices 554 which operate as file system servers to serve
file system requests transmitted over the interconnect 101 by a plurality (nine in
the non-limiting example shown) of computing devices 552 which operate as file system
clients.
[0028] Each of the computing devices 552 comprises an instance of driver 221. Each instance
of driver 221 is configured to send file system requests (e.g., transmitted in accordance
with NFS and/or SMB standards) to the network address (an IPv4 address in the non-limiting
example shown) stored in its respective memory location 560.
[0029] Each of the computing devices 554 comprises an instance of DESS front end 220. Each
instance of the DESS front end 220 is configured to serve any file system requests
received via interconnect 101 that are destined for the network addresses stored in
its respective memory location 556. Also, each instance of the DESS front end 220
is configured to track statistics, in a respective memory location 558, regarding
file system requests that it serves. The statistics may comprise, for example: a count
of file system requests destined, during a determined time interval, to each of the
network addresses in its memory location 556, a count of bits (e.g., total and/or average)
sent, during a determined time interval, to and/or from each of the network addresses
in its memory location 556, an amount of time (e.g., total and/or average milliseconds)
to serve received file system requests, and/or the like. FIG. 5B depicts the network
before a failure condition, FIG. 5C depicts the network during the failure condition,
and FIG. 5D depicts the network after the failure condition.
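The statistics kept in memory location 558 might be structured as follows. This is a minimal sketch; the class name, field names, and the choice of request count as the scalar "load" figure are assumptions, since the disclosure does not prescribe a concrete layout.

```python
from collections import defaultdict

class AddressStats:
    """Per-destination-address statistics of the kind described for
    memory location 558: request counts, bits/bytes transferred, and
    service time over the current measurement interval."""

    def __init__(self):
        self.requests = defaultdict(int)      # address -> request count
        self.bytes = defaultdict(int)         # address -> total bytes to/from
        self.service_ms = defaultdict(float)  # address -> total serving time

    def record(self, address, num_bytes, elapsed_ms):
        self.requests[address] += 1
        self.bytes[address] += num_bytes
        self.service_ms[address] += elapsed_ms

    def load(self, address):
        # One possible scalar load figure per address; any tracked
        # statistic (or a combination) could be used instead.
        return self.requests[address]

    def reset(self):
        # Called at the end of each determined time interval.
        self.__init__()
```

Each DESS front end instance would call `record` as it serves requests, and the per-interval figures would then feed the (re)assignment decisions described below.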
[0030] Returning to FIG. 5A, the process begins in block 502 in which a set of network addresses
is allocated for use as the destination addresses of file system requests. In the
example of FIG. 5B, the allocated addresses are IPv4 addresses 0.0.0.1 to 0.0.0.9.
[0031] In block 504, a unique subset comprising multiple ones of the IP addresses is allocated
to each of a plurality of computing devices that will operate to serve file system requests.
In the example of FIG. 5B, IPv4 addresses 0.0.0.1 to 0.0.0.3 are allocated to device
554-1, IPv4 addresses 0.0.0.4 to 0.0.0.6 are allocated to device 554-2, and IPv4
addresses 0.0.0.7 to 0.0.0.9 are allocated to device 554-3.
[0032] In block 506, each client device that will issue network file system requests is
assigned or selects one or more IP addresses from the set. The assignment may, for
example, comprise manual configuration by a network administrator or automatic configuration
by the devices themselves and/or a device operating as a management entity (e.g.,
a router to which the devices 552-1 through 552-9 are connected). In the example of
FIG. 5B, each of devices 552-1 through 552-9 is assigned a respective one of IPv4
addresses 0.0.0.1 to 0.0.0.9.
[0033] In block 508, the client devices transmit file system requests using their respective
one or more of the IP addresses. In the example of FIG. 5B, each of devices 552-1
through 552-3 transmits file system requests to a respective one of addresses
0.0.0.1-0.0.0.3 and the requests are served by device 554-1, each of devices 552-4
through 552-6 transmits file system requests to a respective one of addresses
0.0.0.4-0.0.0.6 and the requests are served by device 554-2, and each of devices
552-7 through 552-9 transmits file system requests to a respective one of addresses
0.0.0.7-0.0.0.9 and the requests are served by device 554-3. Each of the devices
554-1 through 554-3 maintains statistics regarding the file system requests that it serves.
[0034] In block 510, a network event occurs which triggers load balancing. Example network
events include: loss of a device (e.g., due to failure or simple removal from the
network), a recovery of a previously failed device, an addition of a device to the
network, a lapse of a determined time interval, a detection of an imbalance in the
load imposed on the devices 554 by file system requests, number of file requests per
time interval going above or falling below a threshold, and/or the like. In the example
of FIG. 5C, the network event is a failure of device 554-2.
[0035] In block 512, the statistics regarding the network addresses are used to distribute
the failed server's subset of the network addresses among the remaining servers in
an effort to evenly distribute the failed server's load among the remaining servers.
In the example of FIG. 5C, the statistics indicate the load on IP address 0.0.0.4
is larger than the load on IP addresses 0.0.0.5 and 0.0.0.6 (i.e., client 552-4 is
generating more file system traffic than clients 552-5 and 552-6). Accordingly, to
redistribute the load as evenly as possible, 0.0.0.4 is reassigned to device 554-1
and addresses 0.0.0.5 and 0.0.0.6 are reassigned to device 554-3. Thus, the fact
that device 554-2 served multiple network addresses enables redistributing its load
among multiple other devices 554 in a manner that is transparent to the client devices
552-1 through 552-9.
[0036] In block 514, a network event occurs which triggers load balancing. In the example
of FIG. 5D, the event is the recovery of device 554-2 such that it is ready to again
begin serving file system requests.
[0037] In block 516, the statistics regarding the network addresses are used to offload
some of the network addresses to the recovered or newly added server in an effort
to evenly distribute the load of file system requests among the servers. In the example
of FIG. 5D, the statistics indicate that the most even distribution is to reassign
0.0.0.4 and 0.0.0.7 to device 554-2.
[0038] The reassignment may comprise, for example, ranking the loads on all the IP addresses
and then using the ranking to assign network addresses in an effort to distribute
the load as evenly as possible. In FIG. 5D, for example, the ranking may be 0.0.0.4
> 0.0.0.1 > 0.0.0.8 > 0.0.0.2 > 0.0.0.9 > 0.0.0.3 > 0.0.0.5 > 0.0.0.6 > 0.0.0.7. In
some instances, such as in FIG. 5D, the different servers may be assigned different
numbers of network addresses in order to more uniformly distribute the load.
[0039] As just one example of a method for (re)assignment of network addresses, the (re)assignment
may comprise assigning a weight to the load on each network address such that the
total load assigned to each server is as uniform as possible. For example, the
normalized loads may be as shown in the following table:
| Ranked IP address | Normalized load |
| 0.0.0.4 | 4 |
| 0.0.0.1 | 3.5 |
| 0.0.0.8 | 3.2 |
| 0.0.0.2 | 2.1 |
| 0.0.0.9 | 2 |
| 0.0.0.3 | 1.8 |
| 0.0.0.5 | 1.3 |
| 0.0.0.6 | 1.1 |
| 0.0.0.7 | 1 |
The total load is then 4+3.5+3.2+2.1+2+1.8+1.3+1.1+1=20 and thus the redistribution
may seek to assign each server a normalized load that is as close as possible to 20/3=6.67.
For the example values in the table, this may result in an assignment of 0.0.0.4,
0.0.0.3, 0.0.0.7 (total normalized load of 6.8) to a first one of the servers, 0.0.0.8,
0.0.0.2, 0.0.0.5 (total normalized load of 6.6) to a second one of the servers, and
0.0.0.1, 0.0.0.9, 0.0.0.6 (total normalized load of 6.6) to a third one of the servers.
[0040] In an example implementation, the (re)assignment of network addresses may take into
account the available resources of the various servers. That is, a first server may
have more available resources (e.g., processor cycles, network bandwidth, memory,
etc.) than a second server and thus the first server may be assigned a larger percentage
of the file system request load than the second server. For example, again using the
example from the table above, if one of the servers can handle twice the load of each
of the other two servers, then the first may be assigned a total normalized load close
to 10 (e.g., 0.0.0.4, 0.0.0.1, 0.0.0.5, 0.0.0.6 for a total of 9.9) while each of
the other two is assigned a load close to 5 (e.g., 0.0.0.8, 0.0.0.3 for a total of
5, and 0.0.0.2, 0.0.0.9, 0.0.0.7 for a total of 5.1).
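One way to realize the ranking-and-weighting (re)assignment described above is a greedy pass that hands each address, largest load first, to the server that is currently least loaded relative to its capacity. This heuristic is a sketch, not a prescribed algorithm; the function and parameter names are illustrative. With three equal-capacity servers and the table's normalized loads it reproduces the 6.8/6.6/6.6 split of the example above, and with unequal capacities it approximates a capacity-proportional split.

```python
def assign_addresses(loads, num_servers, capacities=None):
    """Greedily assign network addresses to servers.

    `loads` maps address -> normalized load. Each address (largest load
    first) goes to the server with the smallest load-to-capacity ratio,
    approximating an even (or capacity-proportional) distribution.
    """
    capacities = capacities or [1.0] * num_servers
    assigned = [[] for _ in range(num_servers)]
    totals = [0.0] * num_servers
    for addr in sorted(loads, key=loads.get, reverse=True):
        # Pick the server currently least loaded relative to its
        # capacity; ties go to the lowest-indexed server.
        i = min(range(num_servers), key=lambda s: totals[s] / capacities[s])
        assigned[i].append(addr)
        totals[i] += loads[addr]
    return assigned, totals
```

For example, `assign_addresses({...table loads...}, 3)` yields one server holding 0.0.0.4, 0.0.0.3, and 0.0.0.7 (total 6.8) and the other two holding totals of 6.6 each, matching the grouping given in the example of paragraph [0039].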
[0041] In parallel with blocks 508-516 are blocks 518 and 520, which may, for example, run
continually or periodically as a background process. In block 518, each server tracks
statistics regarding each IP address of its subset of the IP addresses. In block 520,
the statistics generated in block 518 are distributed among the servers and/or provided
to a coordinator device (e.g., one of the servers elected as coordinator through some
voting process or other selection algorithm) such that the statistics of all the servers
are available for performing reassignment of network addresses.
[0042] FIG. 6A is a flowchart of an example process for load balancing the handling of file
system requests in accordance with aspects of this disclosure. The process is described
with reference to FIGS. 6B-6C which depict a plurality (three in the non-limiting
example shown) of computing devices 654 which operate as file system servers to serve
file system requests transmitted over the interconnect 101 by a plurality (nine in
the non-limiting example shown) of computing devices 652 which operate as file system
clients. FIG. 6B illustrates the network before and after a device failure and FIG.
6C shows the network during a failure condition.
[0043] Each of the computing devices 652 comprises an instance of driver 221. Each instance
of driver 221 is configured to send file system requests (e.g., transmitted in accordance
with NFS and/or SMB standards) to a network address selected by its address selector
circuitry 662. The address selector 662 may select from the set of addresses in its
memory 664. The address selector 662 may sequentially cycle through the set of addresses
in its memory 664, or may randomly select from among the set of addresses in its memory
664 such that requests are uniformly distributed among the network addresses. A new
address may, for example, be selected periodically, prior to each file system request
to be sent, every Nth (N being an integer) file system request, and/or the like.
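The behavior of the address selector 662 just described can be sketched as follows. The class name and method names are illustrative assumptions; the `remove` and `add` methods correspond to updating the set of addresses in memory 664 when a serving device fails or recovers, as described below with reference to blocks 610 and 616.

```python
import random

class AddressSelector:
    """Sketch of address selector 662: holds the set of server network
    addresses (memory 664) and picks a destination for each file system
    request, either by cycling sequentially or uniformly at random."""

    def __init__(self, addresses, mode="round_robin"):
        self.addresses = list(addresses)
        self.mode = mode
        self._next = 0

    def select(self):
        if self.mode == "round_robin":
            addr = self.addresses[self._next % len(self.addresses)]
            self._next += 1
            return addr
        return random.choice(self.addresses)   # uniform random selection

    def remove(self, address):
        # Drop the address of a failed server from the set (block 610).
        self.addresses.remove(address)
        self._next = 0

    def add(self, address):
        # Re-add the address of a recovered/new server (block 616).
        self.addresses.append(address)
```

Either mode distributes requests uniformly over the current address set, which is what allows a failed server's address to be removed without the clients needing to know which physical server serves which address.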
[0044] Each of the computing devices 654 comprises an instance of DESS front end 220. Each
instance of the DESS front end 220 is configured to serve any file system requests
received via interconnect 101 that are destined for the network addresses stored in
its respective memory location 556.
[0045] Returning to FIG. 6A, the process begins in block 604 in which each of a plurality
of computing devices that will serve file system requests is assigned a network address.
In the example of FIG. 6B, device 654-1 is assigned to handle file system requests
destined for 0.0.0.1, device 654-2 is assigned to handle file system requests destined
for 0.0.0.2, and device 654-3 is assigned to handle file system requests destined
for 0.0.0.3.
[0046] In block 606, the client devices begin transmitting file system requests onto the
interconnect 101 with the destination addresses of the requests being uniformly distributed
among the network addresses in the set of network addresses.
[0047] Blocks 608 and 610 represent one possible sequence of events after block 606 and
blocks 612-616 represent another possible sequence of events after block 606.
[0048] In block 608, one of the devices serving the file system requests fails. In the example
of FIG. 6C, the device 654-2 fails.
[0049] In block 610, the network address assigned to the failed device is removed from (or
flagged as "do not use" in) the set of network addresses from which the address selectors
662 are selecting addresses for file system requests. Thus, in the example of FIG.
6C, IPv4 address 0.0.0.2 associated with failed device 654-2 is removed from each
of the memory locations 664.
[0050] In block 612, the failed device recovers or a new device for serving file system
requests is added. In the example of FIG. 6B, server 654-2 recovers and is ready to
again begin serving file system requests.
[0051] In block 614, the recovered device is assigned a network address. In the example
of FIG. 6B, the device 654-2 is again assigned 0.0.0.2.
[0052] In block 616, the network address assigned to the recovered or newly-added computing
device is added to the set from which the address selectors 662 are selecting addresses
for file system requests. Thus, in the example of FIG. 6B, IPv4 address 0.0.0.2 associated
with the new or recovered device 654-2 is added to the set in each of the memory
locations 664.
[0053] FIG. 7A is a flowchart of an example process for handling of file system requests
in accordance with aspects of this disclosure. In block 702, one or more first devices
are servicing file system requests while one or more second devices are in standby.
In the example of FIG. 7B, devices 754-1 and 754-2 are servicing file system requests
while 754-3 is in standby. In block 704, one of the first devices fails. In the example
of FIG. 7C, device 754-2 fails. In block 706, one or more of the second devices come
out of standby to handle the file system requests previously handled by the failed
device. In the example of FIG. 7C, the device 754-3 comes out of standby and takes
over the IP addresses that were being handled by the failed device 754-2. In block
708, rebalancing is triggered as a result of, for example, the device 754-3 having
different available resources than the device 754-2. The rebalancing may, for example,
result in some of the addresses being shifted from device 754-1 to 754-3, or vice
versa. Where devices 754-2 and 754-3 are identical, for example, such rebalancing
may be unnecessary.
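The standby takeover of blocks 704-706 can be sketched as below. The function name and the data shapes (a map from server to its served addresses, plus a list of standby servers) are illustrative assumptions.

```python
def take_over(address_map, standby, failed):
    """Sketch of blocks 704-706: when an active server fails, a standby
    server comes out of standby and takes over the failed server's
    network addresses.

    `address_map` maps server name -> list of network addresses it
    serves; `standby` is a list of idle server names. Returns the name
    of the activated server."""
    replacement = standby.pop(0)                     # activate one standby device
    address_map[replacement] = address_map.pop(failed)  # transfer the address subset
    return replacement
```

A subsequent rebalancing step (block 708), if needed, would then redistribute individual addresses between the surviving servers as in the statistics-based schemes described earlier.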
[0054] FIG. 8A is a flowchart of an example process for load balancing the handling of file
system requests in accordance with aspects of this disclosure. In block 802, one or
more first devices are servicing file system requests while one or more second devices
are in standby. In the example of FIG. 8B, devices 854-1 and 854-2 are servicing file
system requests while 854-3 is in standby. In block 804, the file system request load
on the devices (e.g., measured in terms of number of file system requests per time
interval, number of currently pending file system requests, average time for one or
more of the devices 854 to service a file system request, and/or the like) exceeds
a threshold. In block 806, one or more of the second devices come out of standby.
In the example of FIG. 8C, the device 854-3 comes out of standby. In block 808,
rebalancing is triggered and the network addresses are redistributed among the devices
including the device(s) which came out of standby. In the example of FIG. 8C, the
load balancing results in device 854-1 handling addresses 0.0.0.2 and 0.0.0.3, device
854-2 handling addresses 0.0.0.5 and 0.0.0.6, and device 854-3 handling addresses
0.0.0.1 and 0.0.0.4.
[0055] In accordance with an example implementation of this disclosure, a system comprises
a plurality of computing devices (e.g., 552-1 through 552-9, 554-1 through 554-3,
and/or one or more devices (e.g., router) of interconnect 101) and control circuitry
(e.g., hardware 202 and associated software and/or firmware of one or more of the
devices 554-1 through 554-3, and/or hardware and associated software and/or firmware
of a device (e.g., router) of interconnect 101). The control circuitry is operable
to: assign, prior to a network
event, a first plurality of IP addresses of a set of IP addresses to a first server
of the plurality of computing devices such that file system requests destined to any
of the first plurality of IP addresses are to be served by the first server; assign,
prior to the network event, a second plurality of IP addresses of the set of IP addresses
to a second server of the plurality of computing devices such that file system requests
destined to any of the second plurality of IP addresses are to be served by the second
server; and assign, prior to the network event, a third plurality of IP addresses
of the set of IP addresses to a third server of the plurality of computing devices
such that file system requests destined to any of the third plurality of the IP addresses
are to be served by the third server. The control circuitry is operable to maintain
statistics regarding file system requests sent to each IP address of the set of IP
addresses. The control circuitry is operable to determine, based on the statistics,
a first portion of the first plurality of IP addresses to reassign to the second server
and a second portion of the first plurality of IP addresses to reassign to the third
server. The control circuitry is operable to reassign, subsequent to the network event,
the first portion of the first plurality of IP addresses to the second server such
that file system requests destined to any of the first portion of the first plurality
of IP addresses are to be served by the second server. The control circuitry is operable
to reassign, subsequent to the network event, a second portion of the first plurality
of IP addresses to the third server such that file system requests destined to any
of the second portion of the first plurality of IP addresses are to be served by the
third server, wherein the reassignment is based on the statistics.
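The determination of the two portions in paragraph [0055] can be sketched as a statistics-driven split of the first server's addresses between the two remaining servers. The greedy heaviest-first policy below, and all names in it, are assumptions for illustration; the disclosure requires only that the split be based on the maintained statistics.

```python
# Illustrative sketch: after a network event affecting the first server,
# split its IP addresses between the second and third servers so that the
# measured request load (the maintained statistics) ends up roughly
# balanced. The greedy policy is an assumption, not the claimed method.

def split_by_load(failed_addrs, stats, load_a, load_b):
    """Greedily send each address (heaviest first) to the lighter server."""
    portion_a, portion_b = [], []
    for addr in sorted(failed_addrs, key=lambda a: -stats[a]):
        if load_a <= load_b:
            portion_a.append(addr)
            load_a += stats[addr]
        else:
            portion_b.append(addr)
            load_b += stats[addr]
    return portion_a, portion_b

# Example: per-address request counts from the maintained statistics.
stats = {"0.0.0.1": 900, "0.0.0.2": 500, "0.0.0.3": 100}
print(split_by_load(list(stats), stats, 0, 0))
# → (['0.0.0.1'], ['0.0.0.2', '0.0.0.3'])
```

Here the heavily loaded address goes to one server alone while the two lighter addresses go to the other, leaving the two servers with comparable added load (900 vs. 600 requests).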
[0056] In accordance with an example implementation of this disclosure, a system comprises
a plurality of computing devices (e.g., 552₁-552₉, 554₁-554₃, and/or one or more devices (e.g., router) of interconnect 101) and control circuitry
(e.g., hardware 202 and associated software and/or firmware of one or more of the
devices 554₁-554₃, and/or hardware and associated software and/or firmware of a device (e.g., router)
of interconnect 101). The control circuitry is operable to assign a first of the computing
devices (e.g., 554₁) to serve file system requests destined for any of a first plurality of network addresses;
assign a second of the computing devices (e.g., 554₂) to serve file system requests destined for any of a second plurality of network
addresses; maintain statistics regarding file system requests sent to each of the
first plurality of network addresses and the second plurality of network addresses;
and reassign, based on the statistics, the first of the computing devices to serve
file system requests destined for a selected one of the second plurality of network
addresses. The plurality of computing devices may comprise a plurality of third computing
devices (e.g., 552₁-552₉), each of which is assigned to send its file system requests to a respective one
of the first plurality of network addresses and the second plurality of network addresses.
The plurality of computing devices may comprise a plurality of third computing devices
(e.g., 552₁-552₉) operable to generate a plurality of file system requests, wherein destination network
addresses of the plurality of file system requests are uniformly distributed among
the first plurality of network addresses.
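The statistics maintained in paragraph [0056] can be as simple as a per-address request counter accumulated over a measurement interval, from which the most-loaded address can be selected for reassignment. The class and method names below are illustrative assumptions:

```python
from collections import defaultdict

# Minimal sketch of the maintained statistics: per-address request
# counts over a measurement interval. The busiest address is a natural
# candidate for reassignment to a less-loaded computing device.

class RequestStats:
    def __init__(self):
        self.counts = defaultdict(int)  # destination address -> request count

    def record(self, dest_addr):
        """Called once per file system request observed."""
        self.counts[dest_addr] += 1

    def busiest(self):
        """Return the address that received the most requests."""
        return max(self.counts, key=self.counts.get)

stats = RequestStats()
for addr in ["10.0.0.1", "10.0.0.2", "10.0.0.2"]:
    stats.record(addr)
print(stats.busiest())  # → 10.0.0.2
```

Comparable counters keyed by bits transferred or service time would realize the other statistics variants described in the claims.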
[0057] Thus, the present methods and systems may be realized in hardware, software, or a
combination of hardware and software. The present methods and/or systems may be realized
in a centralized fashion in at least one computing system, or in a distributed fashion
where different elements are spread across several interconnected computing systems.
Any kind of computing system or other apparatus adapted for carrying out the methods
described herein is suited. A typical combination of hardware and software may be
a general-purpose computing system with a program or other code that, when loaded
and executed, controls the computing system such that it carries out the methods described
herein. Another typical implementation may comprise an application specific integrated
circuit or chip. Some implementations may comprise a non-transitory machine-readable
medium (e.g., FLASH drive(s), optical disk(s), magnetic storage disk(s), and/or the
like) having stored thereon one or more lines of code executable by a computing device,
thereby configuring the machine to implement one or more aspects
of the virtual file system described herein.
[0058] While the present method and/or system has been described with reference to certain
implementations, it will be understood by those skilled in the art that various changes
may be made and equivalents may be substituted without departing from the scope of
the present method and/or system. In addition, many modifications may be made to adapt
a particular situation or material to the teachings of the present disclosure without
departing from its scope. Therefore, it is intended that the present method and/or
system not be limited to the particular implementations disclosed, but that the present
method and/or system will include all implementations falling within the scope of
the appended claims.
[0059] As utilized herein the terms "circuits" and "circuitry" refer to physical electronic
components (i.e. hardware) and any software and/or firmware ("code") which may configure
the hardware, be executed by the hardware, and/or otherwise be associated with the
hardware. As used herein, for example, a particular processor and memory may comprise
first "circuitry" when executing a first one or more lines of code and may comprise
second "circuitry" when executing a second one or more lines of code. As utilized
herein, "and/or" means any one or more of the items in the list joined by "and/or".
As an example, "x and/or y" means any element of the three-element set {(x), (y),
(x, y)}. In other words, "x and/or y" means "one or both of x and y". As another example,
"x, y, and/or z" means any element of the seven-element set {(x), (y), (z), (x, y),
(x, z), (y, z), (x, y, z)}. In other words, "x, y and/or z" means "one or more of
x, y and z". As utilized herein, the term "exemplary" means serving as a non-limiting
example, instance, or illustration. As utilized herein, the terms "e.g.," and "for
example" set off lists of one or more non-limiting examples, instances, or illustrations.
As utilized herein, circuitry is "operable" to perform a function whenever the circuitry
comprises the necessary hardware and code (if any is necessary) to perform the function,
regardless of whether performance of the function is disabled or not enabled (e.g.,
by a user-configurable setting, factory trim, etc.).
1. A system comprising:
control circuitry, wherein the control circuitry is operable to maintain statistics
(518) regarding file system requests sent by one or more client devices of a plurality
of client devices to one or more IP addresses of a set of IP addresses,
wherein each IP address of the set of IP addresses is an address of one of a plurality
of computing devices and each one of the plurality of client devices is allocated
a respective subset of the set of IP addresses to be used as a destination address
for one or more file system requests,
and wherein prior to a network event:
a first server of the plurality of computing devices is operable to serve (504) a
file system request destined to any of a first plurality of IP addresses of the set
of IP addresses,
a second server of the plurality of computing devices is operable to serve (504) a
file system request destined to any of a second plurality of IP addresses of the set
of IP addresses, and
a third server of the plurality of computing devices is operable to serve (504) a
file system request destined to any of a third plurality of IP addresses of the set
of IP addresses,
and wherein subsequent to the network event (510, 514), based on the statistics maintained
by the control circuitry:
a first portion of the first plurality of IP addresses is reassigned to the second
server (512, 516), and
a second portion of the first plurality of IP addresses is reassigned to the third
server (512, 516).
2. The system of claim 1, wherein the network event
(i) is a failure of the first server; and/or
(ii) is a lapse of a determined time interval; and/or
(iii) is a detection of an imbalance in the load imposed on the
first server and the second server by the one or more file system requests.
3. The system of claim 1, wherein the statistics comprise, for each IP address of the
set of IP addresses, a count of how many of the file system requests are sent, during
a determined time interval, to the IP address.
4. The system of claim 1, wherein the statistics comprise, for each IP address of the
set of IP addresses, a count of the number of bits sent, during a determined time interval,
to or from the IP address.
5. The system of claim 1, wherein the statistics comprise, for each IP address of the
set of IP addresses, an indication of an amount of time required to serve those of
the file system requests that are sent to the IP address.
6. The system of claim 1, wherein the reassignment is based on a goal of uniform distribution
of file system requests.
7. The system of claim 1, wherein the control circuitry comprises one or more network
adaptors.
8. A system comprising control circuitry, wherein the control circuitry is operable to:
assign a first computing device to serve file system requests (504) destined for any
of a first plurality of network addresses;
assign a second computing device to serve file system requests (504) destined for
any of a second plurality of network addresses;
assign a third computing device to serve file system requests (504) destined for any
of a third plurality of network addresses;
maintain statistics (518) regarding file system requests sent to each of the first
plurality of network addresses, the second plurality of network addresses, and the
third plurality of network addresses; and
reassign, based on the statistics, (512, 516) the first computing device to serve
file system requests destined for one or more of the second plurality of network addresses
and one or more of the third plurality of network addresses.
9. The system of claim 8, wherein the reassignment is in response to a network event.
10. The system of claim 9, wherein the network event
(i) is a failure of one or both of the second computing device and the third computing
device; and/or
(ii) is a lapse of a determined time interval; and/or
(iii) is a detection of an imbalance in the load imposed on
the first computing device, the second computing device and the third computing device
by the one or more file system requests.
11. The system of claim 8, wherein the statistics comprise, for each network address of
the first plurality of network addresses, the second plurality of network addresses
and the third plurality of network addresses, a count of the number of file system requests
sent, during a determined time interval, to the network address.
12. The system of claim 8, wherein the statistics comprise, for each network address of
the first plurality of network addresses, the second plurality of network addresses
and the third plurality of network addresses, a count of the number of bits sent, during
a determined time interval, to or from the network address.
13. The system of claim 8, wherein the statistics comprise, for each network address of
the first plurality of network addresses, the second plurality of network addresses
and the third plurality of network addresses, an indication of an amount of time required
to serve file system requests sent to the network address.
14. The system of claim 8, wherein the control circuitry is operable to:
detect a network event; and
in response to the network event, trigger a computing device to transition out of
a standby mode and take over serving of file system requests destined for a selected
one or more of the third plurality of network addresses and/or a selected one or more
of the second plurality of network addresses.
15. The system of claim 14, wherein the network event
(i) is a failure of one or both of the second computing device and the third computing
device; and/or
(ii) is a number of file system requests per time interval, on one or both of the
third computing device and the second computing device, going above a determined threshold.