(19)
(11)EP 3 281 369 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
04.11.2020 Bulletin 2020/45

(21)Application number: 15896561.6

(22)Date of filing:  26.06.2015
(51)International Patent Classification (IPC): 
H04L 29/08(2006.01)
H04L 12/803(2013.01)
(86)International application number:
PCT/US2015/038048
(87)International publication number:
WO 2016/209275 (29.12.2016 Gazette  2016/52)

(54)

SERVER LOAD BALANCING

SERVERLASTAUSGLEICH

ÉQUILIBRAGE DE CHARGE DE SERVEUR


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(43)Date of publication of application:
14.02.2018 Bulletin 2018/07

(73)Proprietor: Hewlett Packard Enterprise Development LP
Houston, TX 77070 (US)

(72)Inventors:
  • VACARO, Juliano
    90619-900 Porto Alegre (BR)
  • TANDEL, Sebastien
    90619-900 Porto Alegre (BR)
  • STIEKES, Bryan
    Brownstown Township, Michigan 48173 (US)

(74)Representative: Haseltine Lake Kempner LLP 
Redcliff Quay 120 Redcliff Street
Bristol BS1 6HU (GB)


(56)References cited:
EP-A1- 2 284 700
US-A1- 2006 221 973
US-A1- 2010 332 664
US-A1- 2014 304 415
US-A1- 2016 156 708
US-A1- 2002 163 919
US-A1- 2008 066 073
US-A1- 2013 268 646
US-A1- 2014 325 636
  
      
    Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


    Description

    BACKGROUND



    [0001] Network communications between computing devices are often carried out by transmitting network packets from one device to another, e.g., using a packet-switched network. In some client-server network environments, multiple server computers may be used to handle communications to and from a variety of client devices. Network load balancing techniques may be used in a manner designed to ensure server computers do not get overloaded when processing network communications. US2006/0221973 discloses a traffic distribution device. US2010/0332664 discloses a load-balancing cluster.

    BRIEF DESCRIPTION OF THE DRAWINGS



    [0002] The following detailed description references the drawings, wherein:

    FIG. 1 is a block diagram of an example computing device for server load balancing.

    FIG. 2A is an example data flow for server load balancing.

    FIG. 2B is an example data flow for load balancing a server.

    FIG. 3 is a flowchart of an example method for server load balancing.


    DETAILED DESCRIPTION



    [0003] In a packet-switched network, various devices, such as personal computers, mobile phones, and server computers, often send data to one another using network packets. Network packets are formatted units of data that include a variety of information, such as source and destination identifiers and payload data intended for a destination device. In some networking environments, such as a cloud computing environment that uses multiple server computers to provide services to multiple client devices in communication with the server computers, it is useful to distribute the incoming network traffic between server computers in a manner designed to ensure that servers do not get overloaded with network traffic.

    [0004] Intermediary network devices, which are hardware devices used to route network traffic between network devices, e.g., routers, switches, programmable network cards, programmable network components, or other such devices, can be configured to perform operations that proactively and/or reactively load balance server computers. In certain types of networks, such as software defined networks (SDNs), a network controller manages intermediary network devices by providing the network devices with configuration data used to create rules for forwarding network traffic throughout the network. For example, an SDN network controller may provide switches within the SDN with rules that are used by the switches to forward network traffic to servers within the SDN. The rules may, for example, cause a switch to determine which server a particular network packet should be sent to by determining the modulus of a random or pseudo-random value associated with the particular network packet. Modulus, which provides the remainder of division, may be used for pseudo-random server selection for speed and efficiency. For example, a more complex random number generation algorithm or hashing function may be incapable of being performed by, or may perform relatively poorly on, an SDN switch.

    [0005] In operation, an intermediary network device may receive rules from a network controller, and the rules may identify servers to which network traffic is to be forwarded and buckets associated with those servers. When the intermediary device receives a new network packet, a value associated with the network packet - such as the source IP or source port - may be divided by the number of buckets associated with the servers. The division results in a remainder, the value of which ranges from 0 to one less than the number of buckets, and the remainder may be used to select a bucket associated with one of the destination servers. The intermediary network device may then forward the network packet to the destination server associated with the selected bucket. In situations where the value associated with the network packet is random or pseudo-randomly generated, the remainder used to select a server will also be random or pseudo-random.
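The remainder-based selection described above can be sketched as follows. This is an illustrative sketch, not the claimed implementation; the server names and the one-bucket-per-server arrangement are assumptions:

```python
def select_server(packet_value: int, servers: list) -> str:
    """Select a destination server from the remainder of a
    packet-derived value (e.g., source port) divided by the
    bucket count (one bucket per server in this sketch)."""
    bucket = packet_value % len(servers)  # remainder in 0..len(servers)-1
    return servers[bucket]

# Example: a source port of 49153 against four servers.
servers = ["server-1", "server-2", "server-3", "server-4"]
print(select_server(49153, servers))  # 49153 % 4 == 1 -> "server-2"
```

When the packet-derived value is random or pseudo-random, the bucket index, and therefore the chosen server, is likewise random or pseudo-random.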

    [0006] When a particular server is to be load balanced, the network controller may provide the intermediary network device with instructions for load balancing the particular server. The rules may cause the intermediary network device to forward, to a load balancing device, network traffic that would otherwise be forwarded to the particular server. The load balancing device determines whether each received network packet should be forwarded to the server in need of load balancing or forwarded to a different server. The determination may be made, for example, based on whether or not the network packets are new packets or part of an existing network flow being processed by the particular server, e.g., determined based on a TCP_SYN flag or other metadata included in each network packet. After making the determination, the load balancing device may modify the destination of the received network packet and provide the modified network packet to the intermediary network device, which forwards the modified network packet to its intended destination.

    [0007] When server load balancing is managed in a manner similar to that described above, intermediary network devices may facilitate relatively efficient distribution of network traffic between server devices. Communications between the SDN switches and an SDN controller, for example, may be relatively light. In addition, load balancing servers in the manner described above may be performed on hardware devices that have fewer available hardware resources than a software-implemented load balancer operating on a server computer. The potential for increased load balancing speed and reduced network latency introduced by the manner of load balancing described above may be beneficial to both users and administrators of a network implementing the technology. Further details regarding load balancing of network packets, and the transition of network packet flows from one server to another, are discussed in the paragraphs that follow.

    [0008] FIG. 1 is a block diagram of an example computing device 100 for server load balancing. Computing device 100 may be, for example, an intermediary network device, such as a programmable network switch, router, or any other electronic device suitable for use as an intermediary device in a packet-switched network, including a software defined network (SDN) programmable network element. In the embodiment of FIG. 1, computing device 100 includes a hardware processor 110 and a machine-readable storage medium 120.

    [0009] Hardware processor 110 may be one or more central processing units (CPUs), semiconductor-based microprocessors, and/or other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium 120. Hardware processor 110 may fetch, decode, and execute instructions, such as 122-128, to control the process for server load balancing. As an alternative or in addition to retrieving and executing instructions, hardware processor 110 may include one or more electronic circuits that include electronic components for performing the functionality of one or more of the instructions.

    [0010] A machine-readable storage medium, such as 120, may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. Thus, machine-readable storage medium 120 may be, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, and the like. In some implementations, storage medium 120 may be a non-transitory storage medium, where the term "non-transitory" does not encompass transitory propagating signals. As described in detail below, machine-readable storage medium 120 may be encoded with a series of executable instructions 122-128 for load balancing network communications received by the computing device 100.

    [0011] As shown in FIG. 1, the computing device 100 receives a network packet 132 from a source device 130, and the network packet 132 includes data specifying a value (122). The network packet 132 may be, for example, an internet protocol (IP) packet comprising a header portion and a payload portion. The value may be any of a number of values associated with the network packet 132, such as the source port, at least a portion of the source and/or destination IP address, and/or a numerical representation of characters included in the network packet 132. The source device 130 is an end-point device, such as a personal computer, mobile phone, server computer, or other computing device from which the network packet 132 originates. The network packet 132 need not be sent directly to the computing device 100, but may instead be routed through various intermediary network devices and, in some implementations, other end-point devices.

    [0012] The computing device 100 divides the value included in the network packet by a divisor (124). The divisor is a non-zero number that may be provided by the network controller and/or based on a variety of things, such as the number of servers to which the computing device 100 may forward the network packet 132. In some implementations, each server to which the computing device 100 may forward the network packet 132 may correspond to one or more buckets, and the divisor may be based on the number of buckets. For example, in a situation with five servers, two buckets may be associated with each server, making a total of 10 buckets. A divisor based on the number of buckets may be, for example, the number of buckets itself, e.g., 10, or another multiple of the number of servers, e.g., 5, 15, etc. In a situation where the value is the last set of digits in a source IP address of the network packet 132, e.g., the value 105, the computing device 100 may divide 105 by 10.
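A minimal sketch of the bucket arithmetic in this example (the five server names are hypothetical; the two-buckets-per-server arrangement follows the example above):

```python
# Five hypothetical servers, two buckets each: the divisor is 10.
SERVERS = ["s1", "s2", "s3", "s4", "s5"]
BUCKETS_PER_SERVER = 2
DIVISOR = len(SERVERS) * BUCKETS_PER_SERVER  # 10 buckets in total

def bucket_for(value: int) -> int:
    """Remainder of the packet-derived value divided by the bucket count."""
    return value % DIVISOR

# The last octet 105 of a source IP address selects bucket 5.
print(bucket_for(105))  # 105 % 10 == 5
```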

    [0013] In some implementations, the division of the value and/or the number of servers and/or buckets is based on one or more rules provided by a network controller. For example, in situations where the computing device 100 is a software-defined switch operating in a SDN, an SDN controller may generate rules for how network traffic should be forwarded by the switch and send the rules to the switch, e.g., for storage in the storage medium 120 or a separate machine-readable storage medium. Using the example above, a set of rules provided to the computing device 100 may be:

    If (Source IP Address) % (number of buckets) == 0; fwd port 1.

    If (Source IP Address) % (number of buckets) == 1; fwd port 2.

    If (Source IP Address) % (number of buckets) == 2; fwd port 3.

    If (Source IP Address) % (number of buckets) == 3; fwd port 4.

    If (Source IP Address) % (number of buckets) == 4; fwd port 5.

    If (Source IP Address) % (number of buckets) == 5; fwd port 1.

    If (Source IP Address) % (number of buckets) == 6; fwd port 2.

    If (Source IP Address) % (number of buckets) == 7; fwd port 3.

    If (Source IP Address) % (number of buckets) == 8; fwd port 4.

    If (Source IP Address) % (number of buckets) == 9; fwd port 5.



    [0014] For each of the above example rules, performing the modulus operation, e.g., represented by the "%" symbol, obtains a remainder by first dividing the Source IP Address by the number of buckets. As described below, the result of the division may be used to load balance network packets.
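The ten example rules above collapse into a remainder-to-port table; a sketch, assuming ports 1-5 each correspond to one of the five destination servers:

```python
NUM_BUCKETS = 10  # five servers, two buckets each

# Remainders 0..4 map to ports 1..5, and remainders 5..9 wrap
# around to ports 1..5 again, matching the ten rules above.
RULES = {r: (r % 5) + 1 for r in range(NUM_BUCKETS)}

def forward_port(source_ip_value: int) -> int:
    """Apply the modulus rules: the remainder selects the egress port."""
    return RULES[source_ip_value % NUM_BUCKETS]

print(forward_port(105))  # remainder 5 -> port 1
```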

    [0015] The computing device 100 determines, from a group of servers, a destination server 140 for the network packet 132 based on the remainder of the division (126). Using the example values provided above, the remainder of 105 divided by 10 is 5. The remainder, 5, may be used to select a bucket, and the server that corresponds to the selected bucket may be chosen as the destination server 140 for the network packet 132. In the example implementation above, where rules are provided by the network controller, the condition specified by the sixth rule is met. For example, 105 % 10 == 5. Accordingly, in this situation, the rule specifies that the computing device 100 should forward the network packet 132 to the destination server associated with port 1. In the foregoing example, the use of 5 different ports may indicate that there are 5 different servers to which network packets may be forwarded, e.g., each server corresponding to one of the 5 ports. In situations where the Source IP Address value is random, the distribution of network packets between servers will also be random.

    [0016] The computing device 100 forwards the network packet 132 to the destination server 140 (128). In the foregoing example, the computing device 100 would forward the network packet 132 to the destination server 140 through port 1. The manner in which the computing device 100 forwards the network packet 132 may vary. For example, the network packet 132 may be encapsulated and the encapsulated packet may be forwarded to the destination server. In some implementations, the destination address of the network packet 132 is changed to the address of the destination server 140.

    [0017] In some implementations, the computing device 100 generates a rule specifying that additional network packets included in the same flow as the network packet 132 are to be forwarded to the same destination server 140. Network packets may be considered part of the same flow, for example, in situations where - within a predetermined period of time - the network packets have the same source address/port and the same destination address/port. The rule generated to match the network flow, because it is more specific than the general rule(s) that would cause random or pseudo-random distribution, is designed to ensure that all network packets of the same network flow are processed by the same destination device. Using the foregoing example values, an example rule may be:
    If ((source IP == 105) & (destination IP == [IP of destination server 140])); fwd port 1.

    [0018] In this situation, additional network packets received by the computing device 100 that specify the same source IP as the source device 130 and the destination IP address of the destination server 140 will be forwarded to the destination server 140 through port 1, without the need to check against the other rules. Rules may have additional features or characteristics not provided in the above examples. For example, rules may have expiration timers, after which they are deleted, removed, or ignored. Expiration may be used to prevent unnecessary forwarding of unrelated network packets to a particular device. Another example rule feature or characteristic is a rule priority that indicates which rule(s) take precedence over other rules. For example, the more specific forwarding rule that keeps network packets of the same flow together may have a higher priority than the rules for distributing network packets using the modulus calculation.
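The rule features described above, priority and expiration, can be sketched as a small lookup structure. The class shape and field names are illustrative assumptions; real SDN switches implement this as flow-table entries with idle/hard timeouts:

```python
import time

class FlowRule:
    """Illustrative flow rule with a priority and an optional expiry."""
    def __init__(self, match, port, priority=0, ttl=None):
        self.match = match        # predicate over a packet dict
        self.port = port
        self.priority = priority  # higher priority wins
        self.expires = time.time() + ttl if ttl is not None else None

    def active(self):
        return self.expires is None or time.time() < self.expires

def lookup(rules, packet):
    """Egress port of the highest-priority live rule that matches."""
    live = [r for r in rules if r.active() and r.match(packet)]
    return max(live, key=lambda r: r.priority).port if live else None

# The flow-specific rule outranks a general fallback rule and expires
# after its timeout; ports and the 60-second TTL are hypothetical.
rules = [
    FlowRule(lambda p: True, port=2, priority=0),
    FlowRule(lambda p: p["src_ip"] == 105, port=1, priority=10, ttl=60.0),
]
print(lookup(rules, {"src_ip": 105}))  # 1: specific rule wins
print(lookup(rules, {"src_ip": 42}))   # 2: falls through to general rule
```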

    [0019] While FIG. 1 depicts the intermediary computing device 100 as the only intermediary network device between the source device 130 and destination server 140, there may be any number of other intermediary network devices between the source device 130 and the computing device 100 and/or between the computing device 100 and the destination server 140 and other destination server(s). For example, a computing device 100 may logically reside at the edge of a private network, receive network packets from a switch or router operating in a public network, such as the Internet, and forward packets, as instructed, to various destination servers within the private network through one or more private network switches/routers.

    [0020] In some implementations, network traffic to the destination server 140, or to another destination server, may be imbalanced. For example, if the random or pseudo-random distribution of network packets is skewed, one server may become overloaded. In these situations, the computing device 100 may execute instructions designed to load balance the overloaded server or servers, e.g., by transitioning network traffic from each overloaded server to another available destination server. The process for transitioning network traffic from one server to another is described in further detail below, e.g., with respect to FIGs. 2A and 2B.

    [0021] FIG. 2A is an example data flow 200 for server load balancing. FIG. 2B is an example data flow 205 for load balancing a server. In the example data flow 200, an intermediary network device 230 is in communication with a source device 210 and a network controller 220. The network controller 220 provides at least one rule 202 identifying servers as recipients of network packets, each of the servers being associated with a bucket. For example, the rules 202 provided to the intermediary network device 230 specify four different servers to which network traffic may be forwarded via various ports of the intermediary network device, and each of the four servers is associated with a bucket, e.g., a value representing the possible remainder of division by the number of buckets. The example rules 202 specify that the remainder of dividing a network packet's source port by the number of buckets, e.g., 4, will be used to determine which port the network packet will be forwarded through.

    [0022] The source device 210, which may be, for example, a device like the source device 130 of FIG. 1, provides a network packet 204 to the intermediary network device 230. The network packet 204 includes a variety of information, including a source port, source address, destination address, and a payload. The information includes a value, such as a non-zero numerical value, or data that may be converted into a divisible value. In some implementations, particular portions of the information included in the network packet 204 are non-zero numerical values. In some implementations, non-numerical values may be converted into numerical values in a variety of ways, e.g., characters may be converted into their corresponding hexadecimal values, or alphabetical characters may be converted to numerical digits.
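One way to sketch the conversion of non-numerical packet data into a divisible value; the big-endian byte interpretation is an assumption, since the paragraph leaves the exact conversion open:

```python
def value_from_bytes(data: bytes) -> int:
    """Fold raw packet bytes into a single integer (big-endian)."""
    return int.from_bytes(data, "big")

# Characters are read as their hexadecimal byte values:
# "ab" -> 0x61 0x62 -> 0x6162 == 24930.
print(value_from_bytes(b"ab"))  # 24930
```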

    [0023] The intermediary network device 230, upon receipt of the network packet 204, divides the value included in the network packet 204 by a total number of buckets associated with the servers. For example, in the data flow 200, there may be 4 buckets, one for each destination server. In some implementations, the number of buckets may be a multiple of the number of destination servers. For example, using a multiple of the number of destination servers may ensure that each server corresponds to the same number of buckets as each other server.

    [0024] The intermediary network device 230 determines a destination server for the network packet 204 based on the rules 202 and the remainder of the division. In the example data flow 200, the example rules 202 indicate which destination server will be used for each possible remainder of the division. When dividing by the number of buckets, e.g., 4 in this situation, the remainder will always be a value between 0 and one less than the number of buckets - 0, 1, 2, or 3, in the example data flow 200.

    [0025] As indicated in the example rules 202, when the remainder of the division is 0, the intermediary network device 230 forwards the network packet 204 through a numbered port, e.g., port 1, which, in this example, corresponds to the first server 240. When the remainder of the division is 1, network packets are forwarded through port 2, which corresponds to the second server 250. When the remainder of the division is 2, network packets are forwarded through port 3, which corresponds to the third server 260. When the remainder of the division is 3, network packets are forwarded through port 4, which corresponds to the fourth server 270.

    [0026] In some implementations, the value was generated in a manner that results in a substantially even distribution of the remainder of division. Each source port, for example, may be sequentially selected by various computing devices, such as the source device 210. Given many sequentially selected numbers, the remainders of the division of the sequentially selected numbers may have a substantially even distribution. E.g., dividing each number between 1 and 100 by four would result in 25 occurrences of 0 as a remainder, 25 occurrences of 1 as a remainder, 25 occurrences of 2 as a remainder, and 25 occurrences of 3 as a remainder. In the example data flow 200, this would result in an even distribution of network flows between each destination server.
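The even-distribution property can be checked directly: dividing the sequential values 1 through 100 by four yields each remainder exactly 25 times.

```python
from collections import Counter

# Remainders of the sequential values 1..100 divided by four;
# each remainder 0, 1, 2, 3 occurs exactly 25 times.
counts = Counter(n % 4 for n in range(1, 101))
print(dict(sorted(counts.items())))  # {0: 25, 1: 25, 2: 25, 3: 25}
```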

    [0027] After determining which destination server the network packet 204 is to be forwarded to, the intermediary network device 230 forwards the network packet 204 to the destination server, e.g., the first server 240 in the example data flow 200. While the example rules 202 identify a port through which the intermediary network device 230 will forward network packets, other methods may also be used, e.g., forwarding to a particular address or to a different network device for forwarding to the destination server. In some implementations, the destination address of the network packet 204 may be updated to the address of the destination server, and/or the network packet 204 may be encapsulated for forwarding to its destination server. Other methods for forwarding network traffic to the various destination servers may be used to provide the network packet 204 to the intended destination.

    [0028] In some implementations, the intermediary network device 230 may generate an additional rule or rules designed to ensure that network packets included in the same network flow are forwarded to the same destination server. In the example data flow 200, the intermediary network device 230 generates an additional rule 206 that specifies that additional network packets arriving with the same source address as the network packet 204 are to be forwarded through port 1, e.g., to the first server 240. The rule 206 may also be associated with a duration, after which the rule will timeout, and no longer be applied. The additional rules 206 may have a higher priority than the general rules 202 provided by the network controller 220, which enables network packets that match the condition specified by the rule 206 to be forwarded to the appropriate destination server without computing the modulus of a value included in those packets.

    [0029] As noted above, FIG. 2B depicts an example data flow 205 for load balancing a server. In some implementations, the network controller 220 may determine that one or more of the destination servers is unbalanced, e.g., based on monitoring the server's network traffic and system resources. In situations where the network controller 220 determines that a particular server is in need of load balancing, it may send the intermediary network device 230 instructions to load balance the particular server.

    [0030] In some implementations, the intermediary network device 230 may determine when a particular server is unbalanced. The determination may be made in a variety of ways. For example, the intermediary network device may use counters to determine when a destination server has received a threshold number of network packets or flows. The threshold may be a predetermined value, or may be based on network traffic handled by other destination servers, e.g., a threshold may be used to determine that a particular destination server is unbalanced when it has received 75% more network packets than any other destination server, or 50% more network packets than the number of network packets received by the destination server with the second highest network traffic. Other methods may also be used by the intermediary network device 230 to determine whether a particular server is imbalanced. For example, the aggregate size of the network packets may be tracked to determine when a server is imbalanced. As another example, in situations where temporary rules are generated for handling network traffic belonging to the same network flow, the number of rules active for destination servers may be tracked to determine when a server is imbalanced.
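A sketch of the counter-based imbalance check; the function shape and server names are assumptions, and the 1.5 factor corresponds to the "50% more than the busiest other server" example above:

```python
def imbalanced_servers(packet_counts: dict, factor: float = 1.5) -> list:
    """Flag servers whose packet count exceeds the busiest other
    server's count by the given factor (1.5 == 50% more)."""
    flagged = []
    for server, count in packet_counts.items():
        others = [c for s, c in packet_counts.items() if s != server]
        if others and count > factor * max(others):
            flagged.append(server)
    return flagged

print(imbalanced_servers({"s1": 900, "s2": 400, "s3": 350}))  # ['s1']
```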

    [0031] In the example data flow 205, the intermediary network device 230 receives the load balancing instructions 212 from the network controller 220. The instructions 212 specify that the first server 240 is to be load balanced. The manner in which servers are load balanced may vary, and in the example data flow 205, the intermediary network device 230 changes any rules that specify network packets are to be sent to the first server 240 so that the rules now specify that the network packets are to be forwarded to a load balancing device 280. In the example data flow 205, some of the rules 216, e.g., those previously specifying port 1 as the destination port, have been updated to specify port 5 as the port through which network packets will be forwarded.

    [0032] The intermediary network device 230 receives a second network packet 214 from a second source device 215. The second network packet may include the same source address as the first network packet 204, e.g., in situations where the second source device 215 is the same device as the first source device 210. In this situation, the source address would meet the condition specified by the rule, "Source Address == <source address>; fwd port 5," and the second network packet 214 would be forwarded to the load balancing device 280 through port 5. In situations where the second source device 215 is different from the first source device 210, and where the modulus of the source port is 0, the rules 216 also specify that the second network packet 214 be forwarded to the load balancing device 280.

    [0033] The load balancing device 280 may be any computing device capable of network communications and data processing, and may include virtual computing devices, such as a virtual switch, e.g., implemented by a server computer in communication with the intermediary network device. In particular, the load balancing device 280 determines whether the second network packet 214 is to be forwarded to the imbalanced server, e.g., the first server 240, or if the second network packet 214 should instead be forwarded to a different destination server, e.g., one of the existing destination servers or a new destination server.

    [0034] In some implementations, the load balancing device 280 may determine that network packets of new network flows should be forwarded to a new destination server, while network packets of network flows currently being processed by the imbalanced server should continue to be sent to the imbalanced server. This may be determined, for example, by examining the TCP_SYN flag or other metadata of a network packet, which may indicate whether the network packet is requesting a new connection. After making the determination, the load balancing device 280 may modify the second network packet 214, and send the modified network packet 218 back to the intermediary network device 230. The modified network packet 218 specifies which destination device is to be the recipient of the modified network packet 218, and the intermediary network device 230 forwards the modified network packet 218 to the intended recipient.
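The SYN-based decision described above can be sketched as follows; the dict-based packet representation is hypothetical, as real implementations read the flags from the TCP header:

```python
def choose_destination(packet: dict, imbalanced: str, alternate: str) -> str:
    """Divert packets that open a new connection (SYN set, ACK clear)
    to an alternate server; keep in-flight flows on the original."""
    flags = packet.get("tcp_flags", {})
    if flags.get("SYN") and not flags.get("ACK"):
        return alternate   # new flow: send it elsewhere
    return imbalanced      # existing flow: keep it in place

new_flow = {"tcp_flags": {"SYN": True, "ACK": False}}
old_flow = {"tcp_flags": {"SYN": False, "ACK": True}}
print(choose_destination(new_flow, "server-1", "server-2"))  # server-2
print(choose_destination(old_flow, "server-1", "server-2"))  # server-1
```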

    [0035] By way of example, in situations where the second source device 215 is the same as the first source device 210, the source address of the second network packet 214 is the same as that of the first network packet 204. In this situation, the second network packet 214 meets the conditions specified by the rule, "Source Address == <source address>; fwd port 5," and the intermediary network device 230 forwards the second network packet 214 to the load balancing device 280. The load balancing device 280 determines, based on session metadata included in the second network packet 214, that the second network packet 214 belongs to the same network flow as the first network packet 204 and should be handled by the same server, e.g., the first server 240. The load balancing device 280 modifies the second network packet 214, or generates a new modified network packet 218, which may include data included in the first network packet 204 and specifies the first server 240 as the destination, and provides the modified network packet 218 to the intermediary network device 230. The intermediary network device 230 then forwards the modified network packet 218 to the first server 240.

    [0036] As another example, in situations where the second source device 215 is different from the first source device 210, the source address of the second network packet 214 may be different from that of the first network packet 204. In this situation, the general rules 216 may be applied to the second network packet 214 to determine its destination. If, in the example data flow 205, the modulus of the second network packet's source port is 1, 2, or 3, the intermediary network device 230 will forward the second network packet to, respectively, the second 250, third 260, or fourth server 270. If the modulus of the second network packet's source port is 0, the updated rule causes the intermediary network device 230 to forward the second network packet 214 to the load balancing device 280.

    [0037] The load balancing device 280 determines, based on the TCP_SYN flag of the second network packet 214, that the second network packet 214 is the beginning of a new network flow and should be handled by a server other than the imbalanced server. The load balancing device 280 chooses a destination server for the second network packet 214, such as the second server 250, modifies the second network packet 214, or generates a new modified network packet 218, which specifies the chosen destination server and provides the modified network packet 218 to the intermediary network device 230. The intermediary network device 230 then forwards the modified network packet 218 to the chosen destination server which, in the example data flow 205, is the second server 250. In some implementations, the intermediary network device 230 may also generate a new rule based on the modified network packet, e.g., a rule causing subsequent network traffic belonging to the same network flow to be forwarded to the second server 250. This allows subsequent network packets of the same flow to be forwarded directly to the second server 250, rather than being provided to the load balancing device 280.

    [0038] The load balancing device 280 may determine which destination server is to receive network packets from new network flows in a variety of ways. In some implementations, the load balancing device 280 may be in communication with the network controller 220 and distribute new network flows in the manner the network controller specifies. In some implementations, the load balancing device may randomly, or pseudo-randomly, cause network packets to be sent to existing destination servers and/or new destination servers. In another implementation, the load balancing device 280 implements logic designed to choose particular destination servers based on their current load, e.g., as monitored by the load balancing device 280 or another device in communication with the load balancing device 280.
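    A minimal sketch of the last two selection strategies from paragraph [0038]. The load metrics, strategy names, and server names here are assumptions for illustration; the specification does not prescribe how load is measured:

```python
import random

def pick_destination(loads: dict, strategy: str = "least_load") -> str:
    # loads maps each candidate server to its current load metric, e.g.
    # {"server_2": 3, "server_3": 1}; the source of the metric (the load
    # balancing device or a peer monitor) is assumed.
    if strategy == "least_load":
        return min(loads, key=loads.get)
    # Fallback: pseudo-random spread across all candidate servers.
    return random.choice(sorted(loads))
```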

    [0039] FIG. 3 is a flowchart of an example method 300 for server load balancing. The method 300 may be performed by an intermediary network device, such as the computing device described in FIG. 1 and/or the intermediary network device described in FIGs. 2A and 2B. Other computing devices may also be used to execute method 300. Method 300 may be implemented in the form of executable instructions stored on a machine-readable storage medium, such as the storage medium 120, and/or in the form of electronic circuitry.

    [0040] At least one rule is received from a network controller, each rule specifying one of a plurality of servers as a recipient of network packets that meet a condition specified by the rule (302). For example, a condition specified by each rule may be that the modulus of the source address of a network packet matches a particular value. If the modulus of the source address of a particular packet matches the particular value for a particular rule, the recipient of the network packet may be the recipient specified by that particular rule.

    [0041] A network packet is received from a source device, the network packet including data specifying a value (304). Network packets may include many different types of values, numerical or otherwise. For load balancing purposes, the specified value may be one that is randomly or pseudo-randomly generated, e.g., a source port or source address.

    [0042] A function is applied to the value to yield a remainder (306). For example, the modulus function may be applied to the value to obtain the remainder. The modulus function is applied with a particular divisor and results in the remainder of division of the value by the divisor. The divisor used for the modulus function may, in some implementations, depend upon the number of buckets associated with the rules and potential destination servers. Each bucket may represent the rules and/or values that result in network packets being forwarded to a particular server. For example, one bucket may be associated with modulus results of 1 and 5, e.g., with rules that cause network packets having values that result in a modulus of 1 or 5 being provided to a particular destination server.
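    The bucket arrangement of paragraph [0042] can be sketched as follows. The divisor of 6, the bucket-to-server map, and the server names are illustrative assumptions, chosen so that remainders 1 and 5 share a server as in the example above:

```python
# Illustrative bucket map: with divisor 6, the remainder of the packet's
# value selects a bucket, and remainders 1 and 5 both map to server_2.
BUCKETS = {
    0: "server_1", 1: "server_2", 2: "server_3",
    3: "server_4", 4: "server_1", 5: "server_2",
}
DIVISOR = len(BUCKETS)  # 6

def destination_for(value: int) -> str:
    # Apply the modulus function to the packet's value (e.g. its source
    # port) and look up the server that owns the resulting bucket.
    return BUCKETS[value % DIVISOR]
```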

    [0043] The computing device determines that the remainder meets the condition specified by a particular rule of the rules (308). For example, modulus (5) applied to the received network packet's source port may result in a remainder between 0 and 4. One of the rules specifies a condition that is met by the remainder.

    [0044] The network packet is forwarded to a destination server specified by the particular rule (310). For example, the particular rule having a condition met by the result of the modulus function is associated with a particular destination server. That particular destination server is chosen as the recipient for the network packet.

    [0045] In some implementations, a first rule is generated for the source device, the first rule specifying: additional network packets included in the same flow as the network packet are to be forwarded to the destination server; and a period of time after which the first rule will timeout. The first rule is designed to ensure that network packets included in the same flow are handled by the same destination server, and the timeout is designed to ensure that future network flows are capable of being provided to different servers.
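    One way to represent the per-flow rule with a timeout described in paragraph [0045], as a sketch; the field names and the 30-second default are assumptions, not values taken from the specification:

```python
import time
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    # Match fields identifying the flow (illustrative; a real SDN rule
    # would typically match on a full 5-tuple).
    source_addr: str
    source_port: int
    destination_server: str
    timeout_s: float = 30.0  # assumed timeout value
    created: float = field(default_factory=time.monotonic)

    def expired(self, now=None) -> bool:
        # Once expired, the rule no longer pins the flow to its server,
        # so future flows can be directed elsewhere.
        now = time.monotonic() if now is None else now
        return now - self.created > self.timeout_s
```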

    [0046] In some implementations, instructions to load balance the destination server are received from a network controller. For example, when the method 300 is implemented in an SDN switch, the switch may receive instructions to change the forwarding rules that identify the destination server as the recipient to specify a load balancing device, such as a virtual switch, as the recipient instead.

    [0047] In situations where load balancing is being implemented, e.g., where traffic is transitioning from an imbalanced server to one or more other servers, the load balancing device may determine to which server incoming network packets will be sent. In some implementations of method 300, a second network packet may be received from a second source device. A determination may be made that the second network packet meets a condition specified by one of the changed rules, e.g., that the source address of the second network packet matches a source address specified by one of the changed rules. In response to the determination, the second network packet may be forwarded to the load balancing device. A modified network packet may then be received from the load balancing device, the modified network packet specifying the destination server as the recipient of the modified network packet, and the modified network packet being based on the second network packet. The modified network packet may then be forwarded to its intended recipient, e.g., the destination server. The foregoing may take place, for example, when the load balancing device determines that the second network packet belongs to a network flow that is already being handled by the destination server.

    [0048] In some implementations of method 300, a second network packet including second data specifying a second value is received from a second source device. A function may be applied to the second value to yield a second remainder, and it may be determined that the second remainder meets a condition specified by one of the changed rules. For example, the remainder may match a value specified by one of the changed rules. In response to the determination, the second network packet may be forwarded to the load balancing device. A modified network packet may be received from the load balancing device, the modified network packet specifying, as the recipient of the modified network packet, a target server that is different from the destination server, and the modified network packet being based on the second network packet. The modified network packet may then be forwarded to its intended recipient, e.g., the target server.
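    The divert-to-load-balancer path of paragraphs [0047] and [0048] might be sketched like this. It is a toy model: the remainder-based match predicate, the packet representation, and the forwarding callbacks are all assumptions:

```python
class ChangedRule:
    """Illustrative changed rule: matches packets whose value yields a
    given remainder, so they are diverted to the load balancing device."""
    def __init__(self, divisor: int, remainder: int):
        self.divisor, self.remainder = divisor, remainder

    def matches(self, pkt: dict) -> bool:
        return pkt["source_port"] % self.divisor == self.remainder

def handle_packet(pkt, changed_rules, forward, send_to_lb):
    # While the destination server is being load balanced, a packet that
    # matches a changed rule is handed to the load balancing device, which
    # later returns a modified packet naming the real recipient; all other
    # packets take the ordinary forwarding path.
    for rule in changed_rules:
        if rule.matches(pkt):
            send_to_lb(pkt)
            return "diverted"
    forward(pkt)
    return "forwarded"
```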

    [0049] The foregoing disclosure describes a number of example implementations for load balancing servers. As detailed above, examples provide a mechanism for load balancing network traffic at an intermediary network device and causing the packets to be distributed to separate destination servers. Other examples provide a mechanism for using an intermediary network device to transition network traffic from an imbalanced server to one or more other servers. The scope of protection is defined by the claims.


    Claims

    1. A method for server load balancing, implemented by a hardware processor of an intermediary network device for server load balancing and a hardware processor of a load balancing device, the method comprising:

    receiving (122) at the intermediary device a network packet (204) from a source device (210), the network packet (204) including data specifying a value;

    dividing (124) at the intermediary device the value included in the network packet (204) by a divisor;

    determining at the intermediary device, from a plurality of servers, a destination server (240) for the network packet (204) based on a remainder of the division;

    forwarding from the intermediary device (128) the network packet to the destination server (240);

    receiving at the intermediary device from a network controller (220), instructions to load balance the destination server (240);

    receiving at the intermediary device, from a second source device (215), a second network packet (214) including second data specifying a second value;

    determining at the intermediary device that the second network packet (214) is destined for the destination server (240); and characterized by:

    in response to determining that the second network packet (214) is destined for the destination server (240), forwarding by the intermediary device the second network packet (214) to a load balancing device (280);

    receiving at the intermediary device, from the load balancing device (280), a modified network packet (218) that specifies, as a recipient of the modified network packet (218), i) the destination server (240), or ii) a target server that is different from the destination server (240), the modified network packet (218) being based on the second network packet (214); and

    forwarding by the intermediary device the modified network packet (218) to the recipient, wherein:

    the determination that the second network packet (214) is to be forwarded to the destination server is based on:

    division of the second value by the divisor; or

    a rule specifying that network packets received from the second source device (215) are to be forwarded to the destination server;

    wherein the load balancing device (280) determines that the recipient of the modified network packet is the destination server by determining that the second network packet (214) is part of an existing flow, and determines that the recipient of the modified network packet is the target server that is different from the destination server by determining that the second network packet is a first packet of a new flow;

    generating at the intermediary device, when the recipient of the received modified network packet is the target server, a destination rule specifying that subsequent network packets included in the same flow as the received modified network packet are to be forwarded to the target server; and

    sending by the intermediary device the subsequent network packets matching the generated destination rule directly to the target server.


     
    2. The method of claim 1, wherein the destination rule specifies a period of time after which the destination rule will timeout.
     
    3. The method of claim 1, further comprising:

    applying a function to the second value to yield a second remainder;

    determining, from the plurality of servers, a destination server for the second network packet (214) based on the second remainder.


     
    4. The method of claim 2, wherein the instructions further cause the hardware processor (110) to:

    divide the second value included in the second network packet (214) by a total number of buckets associated with the plurality of servers;

    determine, based on a second remainder of the division of the second value, that the second network packet (214) matches one of the at least one bucket associated with the destination server.


     
    5. The method of claim 1, wherein:

    each of a plurality of buckets corresponds to only one of the plurality of servers;

    each of the plurality of servers corresponds to at least one of the plurality of buckets; and

    the divisor is based on a number of buckets included in the plurality of buckets;

    wherein the divisor based on the number of buckets is
    the number of buckets, particularly 10, or
    a number evenly divisible by the number of buckets, particularly 5 or 10 or 15.


     
    6. A system (230) for server load balancing, the system (230) comprising an intermediary device and a load balancing device, the system being able to operate according to any of method claims 1 to 5.
     
    7. A non-transitory machine-readable storage medium (120) encoded with instructions which when executed by a system, cause the system to operate according to any of method claims 1 to 5.
     







    Drawing

















    Cited references

    REFERENCES CITED IN THE DESCRIPTION



    This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

    Patent documents cited in the description