(19)
(11) EP 2 922 249 B1

(12) EUROPEAN PATENT SPECIFICATION

(45) Mention of the grant of the patent:
29.05.2019 Bulletin 2019/22

(21) Application number: 15160035.0

(22) Date of filing: 20.03.2015

(51) International Patent Classification (IPC):
H04L 12/715 (2013.01)
H04L 12/803 (2013.01)
H04L 12/717 (2013.01)

(54) CONTROL PLANE OPTIMIZATION OF COMMUNICATION NETWORKS

STEUERUNGSEBENENOPTIMIERUNG VON KOMMUNIKATIONSNETZWERKEN

OPTIMISATION DE PLAN DE COMMANDE DE RÉSEAUX DE COMMUNICATION


(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30) Priority: 20.03.2014 IN MU09212014

(43) Date of publication of application:
23.09.2015 Bulletin 2015/39

(73) Proprietor: Tata Consultancy Services Limited
Maharashtra (IN)

(72) Inventors:
  • Rath, Hemant Kumar
    560066 Karnataka (IN)
  • Revoori, Vishvesh
    560066 Karnataka (IN)
  • Nadaf, Shameemraj Mohinuddin
    560066 Karnataka (IN)
  • Simha, Anantha
    560066 Karnataka (IN)

(74) Representative: Shipp, Nicholas et al
Kilburn & Strode LLP
Lacon London, 84 Theobalds Road
London WC1X 8NL (GB)


(56) References cited:
  
  • ADVAIT DIXIT ET AL: "Towards an elastic distributed SDN controller", Hot Topics in Software Defined Networking, ACM, New York, NY, USA, 16 August 2013 (2013-08-16), pages 7-12, XP058030691, DOI: 10.1145/2491185.2491193, ISBN: 978-1-4503-2178-5
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

TECHNICAL FIELD



[0001] The present subject matter relates, in general, to communication networks and, in particular, to optimization of the control plane in Software Defined Networks (SDNs).

BACKGROUND



[0002] Communication networks are widely used and relied upon across the globe to share information between two or more end users. A communication network, also referred to as a network, typically includes one or more network devices, such as network switches and network routers, among other components, for transferring information among the end users.

[0003] The information is transferred in the form of digitized data packets, simply referred to as packets. At a network device, packets are received at one or more input ports of the network device and are forwarded to one or more output ports of the network device. The forwarding is based on a path or route along which the packet is to be forwarded to an end user, which may in turn be based on the configuration of the network. Typically, each forwarding device in a network is configured with in-built control logic, also referred to as the control plane. The control plane determines forwarding rules or conditions that allow the network device to control the forwarding behaviour or flow of packets between the input and output port(s) of the network device.

[0004] More recently, computer networks with dynamic architectures, such as Software Defined Networks (SDNs), which allow the control logic to be decoupled from the network device and moved to external central controllers, are increasingly being used. The SDN architecture decouples the control plane of the network from the data plane and provides direct control of the network devices such that the network may be managed with greater flexibility and efficiency.
Reference is made to Advait Dixit et al: "Towards an elastic distributed SDN controller", Hot Topics in Software Defined Networking, ACM, 16 August 2013, pages 7-12.

BRIEF DESCRIPTION OF DRAWINGS



[0005] The detailed description is described with reference to the accompanying figure(s). In the figure(s), the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figure(s) to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figure(s), in which:

Figure 1 illustrates a network environment implementing a system for network control plane optimization in software defined network (SDN), according to an implementation of the present subject matter.

Figure 2 illustrates a network controller, according to an implementation of the present subject matter.

Figure 3 illustrates a central optimization controller, according to an implementation of the present subject matter.

Figure 4 illustrates a network control plane optimization method, according to an implementation of the present subject matter.

Figure 5(a) illustrates an SDN topology implementing the non-zero sum game based network control plane optimization operation, according to an implementation of the present subject matter.

Figure 5(b) illustrates an SDN topology implementing the non-zero sum game based network control plane optimization operation for a decreasing network load, according to an implementation of the present subject matter.

Figure 6(a) illustrates an SDN topology implementing the non-zero sum game based network control plane optimization operation, according to an implementation of the present subject matter.

Figure 6(b) illustrates an SDN topology implementing the non-zero sum game based network control plane optimization operation for an increasing network load, according to an implementation of the present subject matter.

Figure 7(a) illustrates an SDN topology implementing the non-zero sum game based network control plane optimization operation, according to an implementation of the present subject matter.

Figure 7(b) illustrates an SDN topology implementing the non-zero sum game based network control plane optimization operation for a change in network load, according to an implementation of the present subject matter.



[0006] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

[0007] The object of the present invention is achieved by the features of the appended independent claims.

DETAILED DESCRIPTION



[0008] Software Defined Network (SDN) technology provides customization and optimization of data forwarding in communication networks. SDN technology simplifies modern communication networks by decoupling the data-forwarding layer and the control layer, that is, the data plane and the control plane. In conventional communication networks, control plane functions, such as routing, resource allocation, and management, are performed in the network devices, such as a switch or a router itself, whereas, in communication networks supporting SDN, the network devices are configured to implement the data plane functions, while the control plane functions are provided by an SDN controller mapped to the network devices. Open Application Programming Interface (API) services, such as the OpenFlow protocol, are implemented to manage the interactions between the data plane and the control plane. SDN in conjunction with an open API service provides flexibility and increased control over the network devices.

[0009] Conventionally, communication networks implemented based on the SDN architecture provide logically centralized control of a physically distributed control plane. Such systems implement a distributed SDN controller with the mapping between a network device, such as a switch or a router, and a controller being statically configured. The terms SDN controller, network controller, and controller have been used interchangeably in this specification. Statically configured controllers make it difficult for the control plane to adapt to traffic load variations in communication networks, such as data centre networks and enterprise networks, that have significant variations in temporal and spatial traffic characteristics. With statically configured controllers, a controller may become overloaded if the network devices mapped to it experience heavy traffic. Further, some controllers in the communication network may be in an overload condition while other controllers may be underutilized. The load may shift across controllers over time, depending on the temporal and spatial variations in traffic conditions, and static mapping can result in non-optimal performance.

[0010] A majority of the conventional techniques follow a centralized control plane architecture, where a central controller decides the number of controllers required and their allocation to network devices. Certain conventional techniques also provide a distributed control plane architecture for communication networks implemented based on the SDN architecture. The load in such an architecture is dynamically shifted to allow the controllers to operate within a specified load restriction. As the load on the communication network changes, the load on each controller also changes, and the architecture dynamically expands or shrinks the controller pool based on the change in the network load. As load imbalance occurs, a controller with a heavy network load transfers its load onto another controller with relatively less load. The algorithms and techniques underlying such architectures for changing the controller pool are generally based on the existing OpenFlow standard.

[0011] However, the presently available methods and systems for distributed controller architectures, as described above, do not provide optimal solutions for controller placement. Further, such methods and systems provide for addition and deletion of controllers based on the load of the communication network, but the number of controllers in the network may not be optimum. If the number of controllers in the communication network is higher than the optimum, it may lead to underutilization of some controllers, delay in control resolution, higher electricity consumption, and high operational and capital expenditure for the communication network. On the other hand, if the number of controllers in the communication network is too low, it may result in poor Quality-of-Service (QoS) of the communication network due to packet drops and delayed resolution of flows. Moreover, the decision of addition and deletion of network controllers based on the load of the communication network is taken by a centralized control entity. More often than not, a malfunction of the centralized control entity results in failure or improper functioning of the communication network.

[0012] Further, conventionally available methods are topology specific and may not be compatible with different types of communication networks. Also, conventionally available methods are often not backward compatible, making them difficult to implement in existing communication networks. Also, some conventionally known techniques provide solutions that require significant cost and expenditure of resources for their implementation.

[0013] The present subject matter describes systems and methods for control plane optimization in a communication network. In an embodiment, the systems and methods allow determination of an optimum number of network controllers in the communication network. Further, according to an implementation of the present subject matter, the determined optimum number of controllers may be placed at optimal locations in the control plane of the communication network. Placement of the controllers may be defined as the mapping of controllers onto network devices, such as network switches, in order to achieve a uniform load over the network, maximum utilization of the controllers, and minimum delay in control resolution.

[0014] According to an implementation of the present subject matter, the communication network may be implemented based on the SDN architecture. In one implementation, the optimum number of controller(s) is determined based on the load on the communication network. Since the load on a communication network is a function of time and changes dynamically, the number of controllers required to support the load may also change dynamically. Providing an optimal number of controllers may include adding or deleting controllers dynamically. Accordingly, in one implementation of the present subject matter, network controllers may be dynamically added or deleted in the communication network, such as an SDN. Further, in one embodiment, the placement of the network controllers in the control plane may be dynamically varied.

[0015] In one embodiment of the present subject matter, the optimization of the number of the controllers and their respective placement may be determined in accordance with a non-zero sum game based network control plane optimization operation. In the non-zero sum game based network control plane optimization operation, hereinafter referred to as the control plane optimization operation, each network controller in the communication network computes its self payoff value. The self payoff value is indicative of whether the controller is optimally utilized, underutilized or overutilized.
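
The classification implied by the self payoff value may be sketched as follows. This is a minimal illustration only; the threshold names and values are assumptions, not values prescribed by the present subject matter.

    # Minimal sketch: classifying a controller from its self payoff value.
    # The two threshold payoff values below are illustrative assumptions.
    PAYOFF_MIN = -0.3   # assumed minimum threshold payoff value
    PAYOFF_MAX = 0.2    # assumed maximum threshold payoff value

    def classify(self_payoff):
        """Map a self payoff value to the utilization state it indicates."""
        if self_payoff < PAYOFF_MIN:
            return "underutilized"
        if self_payoff > PAYOFF_MAX:
            return "overutilized"
        return "optimally utilized"

    print(classify(-0.8))   # underutilized
    print(classify(0.0))    # optimally utilized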

[0016] In one implementation, any controller of the communication network which is underutilized and has the capacity to take over more load may be considered a greedy controller. Based on the control plane optimization operation, the greedy controller may increase its utilization by sharing the load of one or more neighbouring controllers. However, in case the controller is significantly underutilized, it may transfer its existing load to one or more neighbouring greedy controllers and enter an inactive mode. This approach not only enables equal distribution of load across the various controllers but also ensures that controllers with significantly low utilization are no longer active, thus allowing optimization of the operational cost of the communication network.

[0017] In another embodiment, an over-utilized controller may off-load some of its load to one or more neighbouring controllers to balance its load. For example, the load may be off-loaded to a neighbouring controller that is underutilized. In one embodiment, in case the over-utilized controller is unable to off-load its load to a neighbouring controller or is facing excessive load in spite of the off-loading, the overutilized controller may generate a request for activation of an additional controller in the communication network. This, again, ensures equal distribution of load across the various controllers. Also, instances where additional controllers may have to be added in the communication network are promptly identified such that there is no loss of QoS. Activation of the additional controller only at such instances ensures optimization of the operational cost of the communication network.
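
In the same illustrative vein, the decision of an overutilized controller, namely offload to an underutilized neighbour if possible, otherwise ask for an additional controller, may be sketched as below; the neighbour utilization map and the returned action labels are assumptions for illustration only.

    # Sketch of an overutilized controller's offloading decision.
    # Utilization threshold and sample values are illustrative assumptions.
    U_TH = 0.8

    def balance_overutilized(self_util, neighbour_utils):
        """Offload excess load to the least-loaded underutilized neighbour;
        failing that, request activation of an additional controller."""
        excess = self_util - U_TH
        spare = [(u, n) for n, u in neighbour_utils.items() if u < U_TH]
        if spare:
            util, target = min(spare)
            return ("offload", target, min(excess, U_TH - util))
        return ("request_additional_controller", None, excess)

    print(balance_overutilized(0.95, {"C2": 0.5, "C3": 0.9}))
    # -> offloads roughly 0.15 of load to C2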

[0018] The control plane optimization operation is carried out by each of the controllers in the communication network. The decision to add network controllers to or delete them from the communication network is not taken by a centralized control entity but is rather distributed across the various controllers of the communication network. Thus, the performance of the communication network is unaffected by any delay or failure in the functioning of a centralized control entity. Further, the systems and methods for control plane optimization described in accordance with various embodiments of the present subject matter are backward compatible and may also be implemented in legacy communication networks. Furthermore, the systems and methods for control plane optimization described herein are independent of the topology of the communication network. Additionally, the systems and methods for control plane optimization provide a scalable solution for network control plane optimization that may be implemented in any communication network irrespective of the size of the communication network or the amount of load that the communication network handles.

[0019] The following disclosure describes systems and methods for control plane optimization in a communication network. It should be noted that the description merely illustrates the principles of the present subject matter. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present subject matter and are included within its scope. Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the present subject matter and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the present subject matter, as well as specific examples thereof, are intended to encompass equivalents thereof.

[0020] While aspects of the described system and method can be implemented in any number of different computing systems, environments, and/or configurations, embodiments for control plane optimization are described in the context of the following system(s) and method(s).

[0021] Figure 1 illustrates a network environment implementing a system 100 for control plane optimization in a communication network 102, such as a software defined network (SDN), according to an implementation of the present subject matter. In one implementation, the communication network 102 can be a public network, including multiple computing devices 104-1, 104-2, ..., 104-N, individually and commonly referred to as computing device(s) 104 hereinafter. The computing devices 104 may be personal computers, laptops, various servers, such as blade servers, and other computing devices connected to the communication network 102 to communicate with each other. In another implementation, the communication network 102 can be a private network with a limited number of computing devices 104, such as personal computers, servers, laptops, and/or communication devices, such as PDAs, tablets, mobile phones, and smart phones, connected to the communication network 102 to communicate with each other.

[0022] The network environment allows the computing devices 104 to transmit and receive data to and from each other. The computing devices 104 may belong to an end user, such as an individual, a service provider, an organization or an enterprise. The network environment may be understood as a public or a private network system, implementing the system 100 for control plane optimization of the communication network 102 over which the computing devices 104 may communicate with each other.

[0023] The communication network 102 may be a wireless network, a wired network, or a combination thereof. The communication network 102 can be a combination of individual networks, interconnected with each other and functioning as a single large network, for example, the Internet or an intranet. The communication network 102 may be any public or private network, including a local area network (LAN), a wide area network (WAN), the Internet, an intranet, a peer to peer network, and a virtual private network (VPN). According to an implementation of the present subject matter, the communication network 102 may be a software defined network. Further, in embodiments of the present subject matter, the concepts of SDN may also be extended to non-SDN networks.

[0024] In an implementation, the communication network 102 may include a plurality of network devices 106-1, 106-2, 106-3, ..., 106-N, individually and commonly referred to as network device(s) 106 hereinafter. A network device 106 may be any network hardware device, such as a network switch, a simple forwarder, a router, a gateway, a network bridge, or a hub, for mediation of data in the communication network 102. Further, a network device 106 may be a hybrid network device, such as a multilayer switch, a proxy server, or a firewall. The network devices 106 are utilized for communication through the communication network 102 and may communicate with other network devices 106 of the communication network 102 over communication links 108.

[0025] The communication network 102 may further include a plurality of network controllers 110-1, 110-2, ..., 110-N, individually and commonly referred to as controller(s) 110 hereinafter. The controller(s) 110 may be employed on a control plane of the communication network 102 and may manage the flow control of the communication network 102. The controller(s) 110 may receive data from the network devices 106 employed on a data plane of the communication network 102. Further, the controller(s) 110 may obtain a forwarding path for the requests coming from the network devices 106 and configure the network devices 106 such that the network devices 106 may forward data to other network devices 106 or to a computing device 104. A controller 110 may be a virtual controller or a physical controller.

[0026] In one embodiment of the present subject matter, the system 100 determines an optimum number of controller(s) 110 for the communication network 102 based on the load on the communication network 102. In one embodiment, the system 100 performs a non-zero sum game based network control plane optimization operation, interchangeably referred to as the control plane optimization operation, to determine the optimum number of controller(s) 110 for the communication network 102. The control plane optimization operation is explained in detail later in this specification.

[0027] In accordance with one implementation of the present subject matter, the system 100 includes a central optimization controller (COC) 112 in the communication network 102. In another embodiment of the present subject matter, the COC 112 may be a controller 110 of the communication network 102 assigned to work as the COC 112. The COC 112 may optimize the number of controllers 110 in the communication network 102. The COC 112 may be communicatively coupled to the controllers 110 through communication link(s) 108-1, 108-2, ..., 108-N. The COC 112 may receive requests from one or more of the controllers 110 for activation or deactivation of additional controllers in the communication network 102. Based on factors such as a current traffic profile of the controller that sends the request, the network load, and quality of service parameters, the COC 112 may allow or refuse the request for activation or deactivation of controllers.

[0028] Activation of an additional network controller may include addition of a virtual network controller or invoking an existing dormant physical network controller. Deactivating a network controller may include deleting a virtual controller or putting an active physical controller in a dormant mode. In one example, the controllers 110 may run on virtual machines. In such a network configuration, the COC 112 may provide for logical addition and deletion of the controllers 110 in the communication network 102. Logical addition and deletion of controllers 110 may be achieved through the virtual machines running the controllers 110. For instance, each controller 110 may run on a separate virtual machine, and the capacity of each virtual machine, such as the number of cores or CPUs, memory, and disk, may be assigned dynamically. In another example, where the network configuration includes physical network controllers, the physical controllers may be dynamically invoked from a dormant mode or put in a dormant mode. The dormant mode may be a sleep mode or a switch-off mode. The COC 112 may determine to put a physical network controller in either mode based on factors such as time or traffic profile variation of the communication network 102.
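
A rough sketch of this activation and deactivation logic is given below; the record structure and state names are assumptions used only to make the two cases, virtual versus physical controllers, concrete.

    # Sketch: COC-side activation/deactivation of controllers 110.
    class ControllerRecord:
        def __init__(self, name, kind):
            self.name = name
            self.kind = kind          # "virtual" or "physical" (assumed labels)
            self.state = "active"

    def activate(pool, dormant):
        """Prefer waking a dormant physical controller; else add a virtual one."""
        if dormant:
            ctrl = dormant.pop()
            ctrl.state = "active"
        else:
            ctrl = ControllerRecord("C%d" % (len(pool) + 1), "virtual")
        pool.append(ctrl)
        return ctrl

    def deactivate(pool, dormant, ctrl):
        """Delete a virtual controller; put a physical one in a dormant mode."""
        pool.remove(ctrl)
        if ctrl.kind == "physical":
            ctrl.state = "dormant"    # realized as 'sleep' or switch-off
            dormant.append(ctrl)

    pool, dormant = [ControllerRecord("C1", "physical")], []
    deactivate(pool, dormant, pool[0])   # physical controller goes dormant
    activate(pool, dormant)              # and is preferred on the next activation
    print([(c.name, c.state) for c in pool])   # [('C1', 'active')]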

[0029] To explain the functioning of the COC 112 in optimizing the number of controllers 110 in the communication network 102, the number of controllers 110 in the communication network 102 may be represented by k, where the value of k may vary dynamically. At any instant, the value of k may be within the range of k1 and k2, such that k1 ≤ k ≤ k2, where k1 may be the minimum number of controller(s) 110 and k2 may be the maximum number of controllers 110 in the communication network 102. In the worst case, k1 = 1 and k2 = M, where M may be the number of network devices 106 in the communication network 102. In one embodiment, the optimized number of controllers 110, i.e., the value of k, may be determined based on the non-zero sum game based control plane optimization operation, and k1 and k2 may be obtained from the statistics of the network load change.

[0030] Further, based on the control plane optimization operation, the system 100 not only provides the optimum number of controller(s) 110 at a given instance of time but also indicates an optimum placement of the respective controller(s) 110 such that the delay and utilization of each of the controller(s) 110 are balanced. In this context, determining the placement of a controller 110 may be understood as ascertaining the number of network devices that may be managed by the controller 110 at a given time and identifying such network devices, such that the load between the various controllers 110 is balanced.

[0031] Placement of controllers 110 may be explained with reference to Figure 1, which depicts the controllers 110 in the communication network 102 as communicatively coupled with at least one of the network device(s) 106 through communication links 108-1, 108-2, ..., 108-N. Each controller 110 may run in a master-slave mode. Accordingly, a controller 110 may be the master for one set of network devices 106 and may be a slave for another set of network devices 106 that is controlled by another master controller 110. A controller 110 is the master for a network device 106, such as a switch, if the switch refers to the controller 110 for flow table updates and routing. Master and slave controllers 110 may communicate with each other based on an inter-SDN communication protocol. Further, in one embodiment of the present subject matter, the master and slave roles of controllers 110 may be interchanged based on the load in the communication network 102. A change in the number of controllers 110 and/or a change in placement of the existing controllers 110 may result in a change of state of a master or slave controller 110. The control plane optimization operation to determine the change in number and placement of controller(s) may be based on optimization parameters as follows:

minimize f(k, c)     ... (1)

subject to Ui ≤ Uth and Δti ≤ Δtth, for all i = 1, 2, ..., k     ... (2)

where f is a non-linear function of the number of active controllers 110 and the cost associated with the number of active controllers 110, k represents the number of active controllers 110, c represents the capital and operational expenditure associated with implementation of the communication network 102, Ui represents the utilization of the ith controller, Δti represents the delay associated with the ith controller, Δtth represents a pre-defined threshold value for the delay constraint of the ith controller, and Uth represents a pre-defined threshold value of utilization for the ith controller. As the load on the communication network 102 varies, equation 2 may be solved such that one or more additional controllers 110 may be added or invoked to an active state, or an existing controller may be deleted or put in a dormant state. A solution of equation 2 may be obtained by designing a non-zero sum game, in which each controller 110 plays the game independently and may take its decisions independently. Since the load on the network is dynamic, obtaining an optimal solution of equation 2 applicable for all load conditions may not be possible. Hence, the use of a non-zero sum game is appropriate.
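
For illustration, a candidate configuration may be checked against the constraints of equation (2) roughly as follows; the cost function used here is an arbitrary stand-in for the non-linear function f, whose exact form the present subject matter does not fix, and all numeric values are assumed.

    # Sketch: evaluating a candidate configuration against equation (2).
    def feasible(utils, delays, u_th=0.8, dt_th=5.0):
        """Every active controller must satisfy Ui <= Uth and dti <= dtth."""
        return all(u <= u_th for u in utils) and all(d <= dt_th for d in delays)

    def cost(k, c=1.0):
        """Assumed stand-in for the non-linear objective f(k, c)."""
        return c * k ** 1.2

    # Two candidate placements of k = 2 controllers for the same load:
    print(feasible([0.70, 0.75], [3.1, 4.0]), round(cost(2), 2))  # True 2.3
    print(feasible([0.95, 0.40], [6.2, 2.0]), round(cost(2), 2))  # False 2.3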

[0032] As the load on the communication network 102 changes, the utilization of the controllers 110 also changes. Each controller 110 performs the non-zero sum game based control plane optimization operation to maximize its utilization. Each controller 110 obtains peer information from one or more neighbouring controllers 110. Further, each controller 110 computes its payoff, which indicates whether the controller 110 is optimally utilized, underutilized or overutilized. Based on the payoff, the controllers 110 manage their load and may send controller deletion message(s) or controller addition message(s) to the COC 112. Further, based on the payoff, an overutilized controller 110 may transfer its excess load to a neighbouring greedy controller 110, while an underutilized controller 110 may take over additional load from one or more neighbouring controllers 110. In this context, the load of a controller 110 may be based on the number of network devices 106 that are managed by the controller 110 at any given instance of time and the volume of traffic each of these network devices 106 is handling. Further, any controller 110 in the communication network 102 having the capacity to take more load may be considered a greedy controller.
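
The transfer of excess load may be viewed as a reassignment of switch mastership from the overutilized controller to a greedy neighbour, sketched below; the per-switch load figures and the capacity value are assumptions for illustration.

    # Sketch: moving switch mastership from an overloaded controller 110 to
    # a greedy neighbour. Loads and capacity are illustrative assumptions.
    def rebalance(src, dst, capacity=100.0, u_th=0.8):
        """Move the lightest switches from src to dst until src's utilization
        drops to the threshold or dst runs out of spare capacity."""
        for sw in sorted(src, key=src.get):          # lightest first
            if sum(src.values()) / capacity <= u_th:
                break
            if (sum(dst.values()) + src[sw]) / capacity > u_th:
                continue
            dst[sw] = src.pop(sw)
        return src, dst

    c7 = {"S1": 30.0, "S5": 25.0, "S6": 20.0, "S16": 15.0}  # 90% utilized
    c4 = {}                                                  # greedy neighbour
    rebalance(c7, c4)
    print(sorted(c7), sorted(c4))   # ['S1', 'S5', 'S6'] ['S16']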

[0033] Figure 2 illustrates a network controller 110, according to an implementation of the present subject matter. According to an implementation, the controller 110 may include processor(s) 202, interface(s) 204, and memory 206 coupled to the processor(s) 202. The processor(s) 202 of the controller 110 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) 202 may be configured to fetch and execute computer-readable instructions stored in the memory 206.

[0034] Further, the interface(s) 204 of the controller 110 may include a variety of software and hardware interfaces that allow the controller 110 to interact with other entities of the communication network 102, or with each other. For example, the interface(s) 204 may enable the controller 110 to communicate with network devices 106 and other devices, such as web servers and external repositories. The interface(s) 204 may also facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. For the purpose, the interface(s) 204 may include one or more ports.

[0035] The memory 206 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM), and dynamic random access memory (DRAM), and/or nonvolatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. Further, the controller 110 may include module(s) 208 and data 210. The module(s) 208 include, for example, a communication module 212, a control module 214, and other module(s) 218.

[0036] The data 210 may include network device data 220 and other data 224. The network device data 220 may further include peer information 222. The other data 224, amongst other things, may serve as a repository for storing data that is processed, received, or generated as a result of the execution of one or more modules in the module(s) 208.

[0037] According to an implementation, the communication module 212 of the controller 110 may communicate with several network devices 106 of the communication network 102. Further, the communication module 212 may communicate with neighbouring controllers in the communication network 102. Such communication may be based on inter-SDN communication protocols. Communication of the controller 110 with neighbouring controllers 110 may include requests for peer information, comprising routing updates and self payoff values, or may include information messages, such as an offloading message and a state change message. The communication module 212 may communicate with the module(s) 208 of the controller 110 for exchange of controller messages. In one embodiment, the communication module 212 may communicate the control messages to the neighbouring controller(s) 110 at the time instance of routing updates.
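
The peer-information and information messages enumerated above may be represented along the following lines; the field sets are illustrative assumptions rather than a message format defined by the present subject matter.

    # Sketch: illustrative structures for inter-controller messages.
    from dataclasses import dataclass, field

    @dataclass
    class PeerInfo:                     # answers a peer-information request
        controller_id: str
        self_payoff: float
        routing_updates: list = field(default_factory=list)

    @dataclass
    class OffloadingMessage:            # announces a transfer of mastership
        src: str
        dst: str
        switches: list

    @dataclass
    class StateChangeMessage:           # announces active/dormant transitions
        controller_id: str
        new_state: str

    print(PeerInfo("C4", -0.1, routing_updates=["S17 via C4"]))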

[0038] The network controller 110 includes the control module 214 to determine a traffic profile variation, compute a self payoff value, and update the routing tables. The configuration information of the controller 110, such as the routing table, may be stored in the network device data 220. Further, the peer information received by the communication module 212 from the neighbouring controllers 110 may be stored as the peer information 222.

[0039] In accordance with one implementation of the present subject matter, the control module 214 determines a traffic profile variation in the communication network 102. The control module 214 may receive network traffic and load information from time to time from the communication module 212. The control module 214 analyses the traffic information received at various instances of time and determines the traffic profile variation. The traffic profile variation may be indicative of changes in a current traffic profile of the controller 110 and the neighbouring controller(s) 110 with respect to a previous traffic profile.

[0040] The control module 214 further computes a self payoff value for the controller 110. The self payoff value, also referred to as the payoff, may be determined based on equation 3.

fi = λi (Ui − Uth) + δi (Δti − Δtth)     ... (3)

where fi represents the self payoff value of the ith controller, λi represents a non-linear function or a constant for the ith controller related to the usage of the controller, Ui represents the utilization of the ith controller, Uth represents a pre-defined threshold value of utilization for the ith controller, δi represents a non-linear function or a constant for delay payoff computation for a controller i related to the delay experienced by it, Δtth represents a pre-defined threshold value for the delay constraint of the ith controller, and Δti represents the delay associated with the ith controller. The values of λi, Ui, Uth, δi, Δtth, Δti and the self payoff value may also be stored in the network device data 220.
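
A direct transcription of equation (3), as reconstructed above, may look as follows; the weights, thresholds, and sample inputs are illustrative assumptions only.

    # Sketch: self payoff per equation (3), with assumed constants.
    def self_payoff(u_i, dt_i, u_th=0.8, dt_th=5.0, lam=1.0, delta=0.1):
        """fi = lam * (Ui - Uth) + delta * (dti - dtth)."""
        return lam * (u_i - u_th) + delta * (dt_i - dt_th)

    # A lightly loaded controller yields a low (negative) payoff,
    # a heavily loaded, delayed controller a high (positive) one:
    print(round(self_payoff(0.30, 2.0), 2))   # -0.8, suggests underutilization
    print(round(self_payoff(0.95, 7.0), 2))   #  0.35, suggests overutilization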

[0041] In accordance with one implementation of the present subject matter, the control module 214 includes an optimization module 216 for optimization of the controller(s) 110 in the communication network 102. The optimization module 216 performs the control plane optimization operation with the neighbouring controller(s). Based on the solution achieved by performing the operation, the optimization module 216 may decide the process to be executed to achieve maximum utilization. In one example, the optimization module 216 of an overutilized controller 110 may offload several network device(s) 106 based on the solution of the control plane optimization operation. In another example, the optimization module 216 of an underutilized controller 110 may master several network device(s) 106 of the communication network 102, based on the solution of the control plane optimization operation.

[0042] In an example, based on the non-zero sum game based control plane optimization operation, when the controller 110 transfers control of some of the network devices 106 that it may be managing to a neighbouring controller 110, or when the controller 110 acquires control of additional network devices 106 from one or more neighbouring controllers 110, a change in placement of the controller 110 occurs. The communication module 212 may communicate control messages to inform one or more neighbouring controllers 110 of such changes in placement or transfers of control. In one embodiment, the communication module 212 may also communicate the control messages to the COC 112. Also, in cases where, based on the non-zero sum game based control plane optimization operation, a decision to activate or deactivate a controller 110 is taken, the communication module 212 may further communicate the request to the COC 112.

[0043] In one implementation of the present subject matter, the various control messages may be communicated asynchronously. In one more implementation of the present subject matter, the communication module 212 communicates with the neighbouring controllers in an asynchronous manner to obtain the peer information. Such asynchronous communication, wherein not all controllers 110 talk to each other at the same time, ensures that, at any given instance of time, the volume of control messages being exchanged between the various controllers 110 of the communication network 102 is within acceptable limits and that the control messages do not overload the controllers 110.
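
One simple way to achieve such staggering is to jitter each controller's polling instant so that peers are not all queried at the same time, as in the sketch below; the period and jitter values are assumed for illustration.

    # Sketch: staggered (asynchronous) scheduling of peer-information polls.
    import random

    BASE_PERIOD = 10.0   # assumed seconds between routing updates

    def next_poll_time(now, jitter=0.3):
        """Schedule this controller's next poll with random jitter."""
        return now + BASE_PERIOD * (1.0 + random.uniform(-jitter, jitter))

    random.seed(1)
    for c in ("C1", "C2", "C3"):
        print(c, "polls at", round(next_poll_time(0.0), 2))
    # The three controllers poll at distinct instants rather than in lockstep.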

[0044] Figure 3 illustrates a central optimization controller (COC) 112, according to an implementation of the present subject matter. According to an implementation, the COC 112 may include processor(s) 302, interface(s) 304, and memory 306 coupled to the processor(s) 302. The processor(s) 302 of the COC 112 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) 302 may be configured to fetch and execute computer-readable instructions stored in the memory 306.

[0045] Further, the interface(s) 304 of the COC 112 may include a variety of software and hardware interfaces that allow the COC 112 to interact with other entities of the communication network 102, or with each other. For example, the interface(s) 304 may enable the COC 112 to communicate with network devices 106 and other devices, such as web servers and external repositories. The interface(s) 304 may also facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. For the purpose, the interface(s) 304 may include one or more ports.

[0046] The memory 306 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM), and dynamic random access memory (DRAM), and/or nonvolatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. Further, the COC 112 may include module(s) 308 and data 310. The module(s) 308 include, for example, a communication module 312, a controller optimization module 314, and other module(s) 316.

[0047] The data 310 may include controller optimization data 318, and other data 320. The other data 320 amongst other things, may serve as a repository for storing data that is processed, received, or generated as a result of the execution of one or more modules in the module(s) 308.

[0048] According to an implementation, the communication module 312 of the COC 112 may communicate with several controller(s) 110 of the communication network 102. The communication module 312 may receive requests from the controllers 110 for optimization of the number of controllers 110 in the communication network 102. A request may be a request for activation of a new controller 110 in the communication network 102 or a request for deactivation of an existing controller 110 from the communication network 102.

[0049] The COC 112 includes the controller optimization module 314 for execution of the request received by the communication module 312. The controller optimization module 314 may activate an additional controller in the communication network 102 or may deactivate an existing controller in the communication network 102, based on the request received. Information related to controllers 110, such as the number of controllers 110 in the communication network 102, status of controllers 110, and number of controllers 110 in 'sleep' or switched-off mode, may be stored as the controller optimization data 318.

[0050] Although in the above described embodiment the COC 112 deactivates existing controllers based on received requests, in other embodiments, deactivation of a controller 110 in the communication network 102 may be executed by the controllers 110 themselves. For example, a controller 110 may perform the control plane optimization operation and, upon determining that it is substantially underutilized, execute a process to enter the inactive mode. Thus, the controller 110 may offload the associated network devices 106 and deactivate itself without intervention of the COC 112. For example, an inactive virtual network controller may be deleted, while an inactive physical network controller may become dormant. Such inactive physical controllers may be considered to be in a 'dormant' mode, which can be realized either by switching off the controller 110 or by keeping the controller 110 in an idle state with limited operations, such as a 'sleep' mode. A controller 110 which is in 'sleep' mode can be activated and made fully operational by a control message received from the COC 112 or any other controller 110. A change of state, i.e., active to inactive and vice versa, may be stored in the data of the COC 112 and in the data of all neighbouring controllers 110.

[0051] Figure 4 illustrates a control plane optimization method, according to an implementation of the present subject matter. The method 400 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 400 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in either a local or a remote computer storage media, including memory storage devices.

[0052] The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 400, or alternative methods. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the method 400 can be implemented in any suitable hardware, software, firmware, or combination thereof.

[0053] Referring to figure 4, at block 402, the method 400 may include obtaining, by a network controller 110, peer information of at least one neighbouring network controller 110. The peer information may be indicative of utilization and delay associated with the performance of the at least one neighbouring network controller 110.

[0054] At block 404, the method 400 includes determining a traffic profile variation by the network controller 110. The traffic profile variation may be indicative of changes in a current traffic profile of the network controller 110 and the at least one neighbouring network controller 110 with respect to a previous traffic profile of the network controller 110 and the at least one neighbouring network controller 110.

[0055] At block 406, the method 400 includes computing a self payoff value for the network controller 110, by the network controller 110. The self payoff value may be indicative of one of optimum utilization, underutilization and overutilization of the network controller 110. Further, as explained previously, the computing of self payoff value may be based on predefined QoS parameters, which may include parameters, such as maximum and minimum utilization threshold values.

[0056] At block 408, the method 400 includes initiating, by the network controller 110, a non-zero sum game based control plane optimization operation based on the self payoff value and the neighbours' payoff values. The non-zero sum game based control plane optimization operation provides for optimizing the number and placement of controllers 110 in the communication network. The control plane optimization operation may include one of activating at least one additional network controller 110, transferring control of one or more network devices 106 managed by the network controller 110 to a neighbouring network controller 110, deactivating the network controller 110, and transferring control of one or more additional network devices 106 to the network controller 110.
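
Blocks 402 to 408 may be read as one optimization round per controller, strung together in the sketch below; the Controller class, thresholds, and returned action labels are hypothetical placeholders, and the payoff follows equation (3) as reconstructed earlier.

    # Sketch: one round of method 400 for a single controller.
    class Controller:
        def __init__(self, name, util, delay):
            self.name, self.util, self.delay = name, util, delay

        def self_payoff(self, u_th=0.8, dt_th=5.0, lam=1.0, delta=0.1):
            return lam * (self.util - u_th) + delta * (self.delay - dt_th)

    def optimization_round(ctrl, neighbours, p_min=-0.3, p_max=0.2):
        # Block 402: obtain peer information (here, neighbour payoffs).
        peer_payoffs = {n.name: n.self_payoff() for n in neighbours}
        # Block 404 (traffic profile variation) is omitted in this sketch.
        payoff = ctrl.self_payoff()                     # block 406
        greedy = [n for n, p in peer_payoffs.items() if p < p_min]
        if payoff > p_max:                              # block 408
            return ("offload to %s" % greedy[0]) if greedy else "request controller"
        if payoff < p_min:
            return "acquire load or deactivate"
        return "optimally utilized"

    c1 = Controller("C1", 0.95, 7.0)
    print(optimization_round(c1, [Controller("C2", 0.5, 3.0)]))   # offload to C2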

[0057] Figure 5(a) illustrates an SDN topology implementing the non-zero sum game based control plane optimization operation, according to an implementation of the present subject matter. The figure illustrates the SDN topology with 7 active controllers 110 (C1, C2, ..., C7) in the communication network 102, each represented by a 'star'. The figure further illustrates 28 network devices 106 represented by 'circles'. For ease of explanation, the network devices 106 may be assumed to be switches (S1, S2, ..., S28). The switches may be categorized in three categories based on their loads: lightly loaded, moderately loaded, and heavily loaded. In the example illustrated in the figure, the switches are uniformly loaded. Further, the figure illustrates communication links 108 for sharing of information amongst switches. According to the example illustrated in the figure, 323 flow requests may be served by the 7 active controllers 110, and 47% utilization of the controllers 110 may be achieved in the illustrated example. In accordance with one embodiment of the present subject matter, the configuration of the communication network 102 may be modified, i.e., the number and placement of the controllers 110 may be changed, to achieve improved utilization of the controllers 110. In one example, the SDN topology as depicted in Figure 5(a) may be modified to the SDN topology as depicted in Figure 5(b) to achieve improved utilization of the controllers 110, since the 47% utilization of the controllers 110 as depicted in the example of Figure 5(a) may be considered low.

[0058] Figure 5(b) illustrates an SDN topology implementing the non-zero sum game based control plane optimization operation for a decreasing load, according to an implementation of the present subject matter. The network topology is the same as that of Figure 5(a), with the same 28 switches but with a smaller number of flow requests. The topology serves 272 flow requests with 5 active controllers, achieving a utilization of 55%.

[0059] According to an implementation of the present subject matter, each controller 110 as illustrated in Figures 5(a) and 5(b) computes a self payoff value and runs the load-optimization process at several instances of time to achieve maximum utilization. For example, controller C4 may obtain peer information of neighbouring controllers C1, C2, and C3 and information about the switches of such controllers. The controller C4 determines the change in the traffic profile based on the current traffic profile and the previously received traffic profile. In such a situation, where the number of flow requests decreases, the controller performs the control plane optimization operation to maximize utilization. Hence, the controller C4 compares its self payoff value with the minimum threshold payoff value and the maximum threshold payoff value to determine underutilization or overutilization of the controller. Based on this information about its utilization, each controller performs the control plane optimization operation along with one or more neighbouring controllers.

[0060] In the illustrated example, the process of performing the control plane optimization operation by each of the controllers in the network results in increased utilization by deletion of 2 controllers, C2 and C5. The switches mastered by controllers C2 and C5 may be offloaded to other active controllers before deletion. For example, the switches S17, S18, S26, and S27, originally mastered by controller C2, may be mastered by controller C4 on deletion of controller C2. Thus, controller C4, originally mastering 3 switches, may master 7 switches. Similarly, the controller C5 may be deleted for achieving maximum utilization of controllers. The switches S1, S5, S6, and S7, originally mastered by controller C5, may be mastered by controller C6 on deletion of controller C5. Thus, the controller C6, originally mastering 3 switches, may master 7 switches. In the above described example, the decision of deletion of controllers results in an increase in the utilization of controllers. The utilization of 47%, as illustrated in Figure 5(a), increases to 55% on deletion of two controllers, as illustrated in Figure 5(b). Such deletion of controllers further results in a decrease in the operational expenditure of the communication network 102. Optimization of the number of controllers for changing network traffic is further illustrated based on Figures 6(a), 6(b), 7(a), and 7(b).
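
The utilization figures quoted for Figures 5(a) and 5(b) are consistent with a simple model in which utilization is the ratio of served flow requests to aggregate controller capacity. The per-controller capacity of roughly 100 flow requests used below is an assumption inferred from the quoted numbers, not a value stated in the specification.

    # Sketch: checking the quoted utilization figures under an assumed
    # per-controller capacity of about 100 flow requests.
    CAPACITY = 100.0

    def utilization(flows, k):
        return flows / (k * CAPACITY)

    print(round(utilization(323, 7), 2))   # ~0.46, quoted as 47% for Figure 5(a)
    print(round(utilization(272, 5), 2))   # ~0.54, quoted as 55% for Figure 5(b)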

[0061] Figure 6(a) illustrates an SDN topology implementing the non-zero sum game based control plane optimization operation, according to an implementation of the present subject matter. The figure illustrates an SDN topology with 5 active controllers (C2, C3, C5, C6, and C7) in the network, each represented by a 'star'. The figure further illustrates 28 network devices 106 represented by 'circles'. The network devices 106 may be assumed to be switches (S1, S2, ..., S28). The switches may be categorized in three categories based on their loads: lightly loaded, moderately loaded, and heavily loaded. Such categories may be represented visually based on the size of the circle in the figure. Lightly loaded switches may be represented by a circle with a small diameter; moderately loaded switches may be represented by a circle of diameter larger than that of a lightly loaded switch; while a heavily loaded switch may be represented by a circle of the largest diameter among the three categories. Further, the figure illustrates communication links 108 for sharing of information amongst switches.

[0062] According to the example illustrated in the figure, 367 flow requests may be served by the 5 active controllers, and 74% utilization of the controllers may be achieved in the illustrated example. The figure further illustrates a non-uniform distribution of load over the switches. In accordance with one embodiment of the present subject matter, the configuration of the communication network may be modified, i.e., the number of controllers and the mapping of the controllers to the switches may be changed, to achieve optimum utilization of the controllers based on the network load. In one example, the SDN topology as depicted in Figure 6(a) may be modified to the SDN topology as depicted in Figure 6(b) to achieve optimum utilization of the controllers.

[0063] Figure 6(b) illustrates an SDN topology implementing the non-zero sum game based control plane optimization operation for an increasing network load, according to an implementation of the present subject matter. The network topology is similar to that of Figure 6(a), with the same 28 switches but with an increased number of flow requests, namely 416. The topology serves the 416 flow requests with 6 active controllers, achieving a utilization of 70%.

[0064] According to the illustrated example, on an increase in the number of flow requests, the load on the switches S1, S2, S3, S4, S7, S9, S15, S16, S17, S22, and S24 increases, the load on S10 decreases, while the load on the other switches does not change. The network load on the switches causes an imbalance of the load distribution on the 5 active controllers as depicted in Figure 6(a). As the traffic profiles of the controllers change, each controller performs the control plane optimization operation to optimize the load. Based on the solutions of the control plane optimization operation performed by the controllers, a message for activation of an additional controller may be sent to the COC 112. The COC 112 activates the additional controller C4 in the network. The controllers, based on the results of the control plane optimization operation, reallocate the switches mastered by each controller. Thus, the new controller C4 masters switches S1, S5, and S6, offloaded by controller C7. Since the total load on the controller C7 reduces on offloading of switches to C4, the controller C7 performs the control plane optimization operation with C2 and masters switches S16 and S19. Further, as the controller C2 also performs the control plane optimization operation with its neighbouring controllers, controller C2 masters switch S27, offloaded by controller C3. Thus, as illustrated in the example, an increase in the number of flow requests may be served by increasing the number of controllers in the communication network.

[0065] Figure 7(a) illustrates an SDN topology implementing the non-zero sum game based control plane optimization operation, according to an implementation of the present subject matter. The figure illustrates an SDN topology with 6 active controllers (C2, C3, ..., C7) in the communication network, each represented by a 'star'. The figure further illustrates 28 network devices represented by 'circles'. The network devices 106 may be assumed to be switches (S1, S2, ..., S28). The switches may be categorized in three categories based on their loads: lightly loaded, moderately loaded, and heavily loaded. Such categories may be represented visually based on the size of the circle in the figure. Lightly loaded switches may be represented by a circle with a small diameter; moderately loaded switches may be represented by a circle of diameter larger than that of a lightly loaded switch; while a heavily loaded switch may be represented by a circle of the largest diameter among the three categories. Further, the figure illustrates communication links 108 for sharing of information amongst switches.

[0066] According to the example illustrated in the figure, 442 flow requests may be served by the 6 active controllers, and 74% utilization of the controllers may be achieved in the illustrated example. The figure further illustrates a non-uniform distribution of load over the switches. In accordance with one embodiment of the present subject matter, the configuration of the communication network 102 may be modified, i.e., the number of controllers and the mapping of the controllers to the switches may be changed, to achieve optimum utilization of the controllers based on the network load. In one example, the SDN topology as depicted in Figure 7(a) may be modified to the SDN topology as depicted in Figure 7(b) to achieve optimum utilization of the controllers.

[0067] Figure 7(b) illustrates an SDN topology implementing the non-zero sum game based control plane optimization operation for a change in network load, according to an implementation of the present subject matter. The network topology is similar to that of Figure 7(a), with the same 28 switches but with a decreased number of flow requests, namely 438. The topology serves the 438 flow requests with 6 active controllers, achieving a utilization of 73%.

[0068] According to the illustrated example, on a decrease in the number of flow requests, the load on the switches S1, S3, S6, S13, S20, and S23 increases, the load on switches S2, S7, S9, S10, S11, S12, S14, S21, S24, S25, and S27 decreases, while the load on the other switches does not change. The network load on the switches causes an imbalance of the load distribution on the 6 active controllers as depicted in Figure 7(a). As the traffic profiles of the controllers change, each controller performs the control plane optimization operation to optimize the load. As illustrated, since the decrease in the number of flow requests is small, deactivation of a controller may not provide an optimized solution. Thus, based on the results of the control plane optimization operation performed by the controllers, the controllers reallocate the switches to optimize the load distribution. Switches S7 and S9 are mastered by controller C5 on offloading by controller C4, based on the control plane optimization operation performed by controller C4 with controller C5. Thus, as illustrated in the example, a decrease in the number of flow requests is handled based on the control plane optimization operation to achieve optimum utilization of the controllers.


Claims

1. A method for optimization of a control plane comprising network controllers in a communication network, the method comprising:

obtaining (402), by a network controller (110), peer information of at least one neighbouring network controller, wherein the peer information is indicative of utilization and delay associated with performance of the at least one neighbouring network controller, said peer information comprising at least one of routing updates, self payoff value, an offloading message, and a state change message, wherein said peer information of the at least one neighbouring network controller is used to determine a traffic profile of the at least one neighbouring network controller;

determining (404), by the network controller, a traffic profile variation, wherein the traffic profile variation is indicative of changes in a current traffic profile of the network controller and the at least one neighbouring network controller with respect to a previous traffic profile of the network controller and the at least one neighbouring network controller;

computing (406), by the network controller, a self payoff value for the network controller, wherein the self payoff value is indicative of one of optimum utilization, underutilization and overutilization of the network controller, and wherein the computing is based on predefined QoS parameters, wherein the computing by the network controller indicates utilization, underutilization, or overutilization of a payload; and

initiating (408), by the network controller, a control plane optimization operation based on the self payoff value and the traffic profile of at least one neighbouring network controller, and when a number of flow requests to the network controller decreases, the network controller performs the control plane optimization operation to maximize utilization, wherein the network controller compares the computed self payoff value with a minimum threshold payoff value and a maximum threshold payoff value to determine the underutilization or the overutilization of the network controller, wherein the network control plane optimization operation comprises one of:

activating at least one additional network controller, transferring control of one or more network devices managed by the network controller to the at least one neighbouring network controller, deactivating the network controller, and transferring control of one or more network devices managed by the at least one neighbouring network controller to the network controller, wherein the at least one additional network controller is activated for the overutilization of the network controller.


 
2. The method as claimed in claim 1, wherein for the self payoff value indicative of overutilization of the network controller, the control plane optimization operation comprises:

generating a request for activation of the at least one additional network controller;

receiving an indication of activation of the at least one additional network controller;

transferring control of one or more network devices managed by the network controller to the at least one additional network controller; and

generating a control message to inform the transferring to the at least one neighbouring network controller.


 
3. The method as claimed in claim 2, wherein the activation of the at least one additional network controller comprises one of adding a virtual network controller and invoking a dormant physical network controller.
 
4. The method as claimed in claim 1, wherein for the self payoff value indicative of overutilization of the network controller, the control plane optimization operation comprises:

identifying a greedy controller from amongst at least one neighbouring network controller, to undertake more load;

requesting the greedy controller to accept control of one or more network devices managed by the network controller;

receiving a response from the greedy controller;

transferring control of one or more network devices managed by the network controller to the greedy controller based on the response; and

generating a control message to inform the transferring to the at least one neighbouring network controller.


 
5. The method as claimed in claim 1, wherein, for the self payoff value indicative of underutilization of the network controller, the control plane optimization operation, comprises deactivating the network controller.
 
6. The method as claimed in claim 5, wherein the deactivating comprises one of deleting a virtual network controller and putting an active physical network controller in a dormant mode, and wherein putting the active physical network controller in the dormant mode further comprises one of switching off the network controller and putting the active physical network controller in a sleep mode.
 
7. The method as claimed in claim 1, wherein, for the self payoff value indicative of underutilization of the network controller, the control plane optimization operation comprises:

identifying a greedy controller from amongst at least one neighbouring network controller, to undertake more load;

off-loading control of one or more network devices managed by the network controller to the greedy controller;

generating a control message to inform the off-loading to the at least one neighbouring network controller; and

initiating a dormant mode for the network controller.


 
8. A network controller (110) comprising:

a processor (202);

a communication module (212) coupled to the processor (202) configured to obtain peer information of at least one neighbouring network controller (110), wherein the peer information is indicative of utilization and delay associated with performance of the at least one neighbouring network controller (110), said peer information comprising at least one of routing updates, self payoff value, an offloading message, and a state change message, wherein said peer information of the at least one neighbouring network controller is used to determine a traffic profile of the at least one neighbouring network controller;

a control module (214) coupled to the processor (202) configured to:

determine a traffic profile variation, wherein the traffic profile variation is indicative of changes in a current traffic profile of the network controller (110) and the at least one neighbouring network controller (110) with respect to a previous traffic profile of the network controller (110) and the at least one neighbouring network controller (110);

compute a self payoff value for the network controller (110), wherein the self payoff value is indicative of one of optimum utilization, underutilization and overutilization of the network controller (110), and wherein the computing is based on predefined QoS parameters; and

initiate a control plane optimization operation based on the self payoff value and the traffic profile of the at least one neighbouring network controller, and when a number of flow requests to the network controller decreases, the network controller is further configured to perform the control plane optimization operation to maximize utilization, wherein the network controller is further configured to compare the computed self payoff value with a minimum threshold payoff value and a maximum threshold payoff value to determine underutilization or overutilization of the network controller, wherein the control plane optimization operation comprises one of:
activating at least one additional network controller (110), transferring control of one or more network devices (106) managed by the network controller (110) to the at least one neighbouring network controller (110), deactivating the network controller (110), and transferring control of one or more network devices (106) managed by the at least one neighbouring network controller to the network controller (110), wherein the at least one additional network controller is activated for the overutilization of the network controller.


 
9. The network controller (110) as claimed in claim 8, wherein the communication module (212) is further configured to communicate control messages to the at least one neighbouring network controller (110), wherein the control messages are indicative of activation of the at least one additional network controller (110), transfer of control of the one or more network devices (106) managed by the network controller (110) to the at least one neighbouring network controller (110), deactivation of the network controller (110), and transfer of control of the one or more network devices (106) managed by the at least one neighbouring network controller to the network controller (110).
 
10. The network controller (110) as claimed in claim 9, wherein the communication module (212) is further configured to obtain peer information of the at least one neighbouring network controller (110) and communicates the control messages to the at least one neighbouring network controller (110) asynchronously.
 
11. The network controller (110) as claimed in claim 10, wherein the communication module (212) is further configured to communicate the control messages to the at least one neighbouring network controller (110) at a time instance of routing updates.
 
12. The network controller (110) as claimed in claim 8, wherein the communication module (212) is further configured to send a request to a central optimization controller (112) for one of activating the at least one additional network controller (110) and deactivating the network controller (110).
 
13. The network controller (110) as claimed in claim 8, wherein the control module (214) is further configured to update the routing table of the network controller (110).
 
14. The network controller (110) as claimed in claim 8, wherein the QoS parameters comprise maximum utilization threshold value of the network controller (110), minimum utilization threshold value of the network controller (110), and minimum delay of the control resolution.
 







