(19)
(11)EP 2 932 693 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
14.10.2020 Bulletin 2020/42

(21)Application number: 13821255.0

(22)Date of filing:  17.12.2013
(51)Int. Cl.: 
H04L 29/08  (2006.01)
H04L 12/26  (2006.01)
H04L 29/06  (2006.01)
(86)International application number:
PCT/US2013/075655
(87)International publication number:
WO 2014/099906 (26.06.2014 Gazette  2014/26)

(54)

EXCHANGE OF SERVER STATUS AND CLIENT INFORMATION THROUGH HEADERS FOR REQUEST MANAGEMENT AND LOAD BALANCING

AUSTAUSCH VON SERVERSTATUS- UND CLIENTEN-INFORMATIONEN DURCH HEADER ZUR ANFRAGESTEUERUNG UND LASTVERTEILUNG

ÉCHANGE D'INFORMATION D'ÉTAT DU SERVEUR ET D'INFORMATION DE CLIENTS PAR LES EN-TÊTES POUR LA GESTION DES DEMANDES ET RÉPARTITION DE CHARGE


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 17.12.2012 US 201213716913

(43)Date of publication of application:
21.10.2015 Bulletin 2015/43

(73)Proprietor: Microsoft Technology Licensing, LLC
Redmond, WA 98052 (US)

(72)Inventors:
  • ULUDERYA, Gokhan
    Redmond, Washington 98052-6399 (US)
  • FURTWANGLER, Tyler
    Redmond, Washington 98052-6399 (US)
  • SONI, Bijul
    Redmond, Washington 98052-6399 (US)
  • FOX, Eric
    Redmond, Washington 98052-6399 (US)
  • RAMA, Sanjay
    Redmond, Washington 98052-6399 (US)
  • AMI-AD, Kfir
    Redmond, Washington 98052-6399 (US)
  • SILVA, Roshane
    Redmond, Washington 98052-6399 (US)

(74)Representative: Grünecker Patent- und Rechtsanwälte PartG mbB 
Leopoldstraße 4
80802 München (DE)


(56)References cited:
US-A1- 2002 002 686
US-A1- 2011 138 053
US-A1- 2002 032 727
US-A1- 2012 084 419
  
  • Anonymous: "In Introduction to HTTP Basics", 13 May 2012 (2012-05-13), XP055112665, Retrieved from the Internet: URL:http://web.archive.org/web/20120513121251/http://www3.ntu.edu.sg/home/ehchua/programming/webprogramming/HTTP_Basics.html [retrieved on 2014-04-08]
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

BACKGROUND



[0001] Modern data communication architectures commonly involve "server farms", collectives of servers that manage storage, processing, and exchange of data for a variety of purposes. Many services are increasingly provided as web applications: hosted applications that may be accessed by users through browsers or similar thin clients without burdening the users with local installations, updates, compatibility issues, and the like. Thus, a server farm may include up to thousands of servers providing web applications for productivity, communications, data analysis, data storage, and comparable services. Client applications (thin or thick) interact with the hosted applications through "requests". For example, a word processing application provided as a web application may receive a request from a client application to open a document, find the document in a networked store, retrieve its contents, and render it at the client application. Another example may be a "Save" request: when the user is done, they may select a "Save" control on the client application, which may send a save request to the web application resulting in an update of the stored document.

[0002] Because a number of servers may be involved with the web application, an incoming request needs to be directed to the proper server(s) such that the requested task can be completed. Request management is a management approach that helps a server farm handle incoming requests by evaluating logic rules against the requests in order to determine which action to take and which server or servers in the farm (if any) should handle the requests.

[0003] Traditional network load balancer devices are relatively expensive dedicated hardware devices for routing requests. Performing multi-layer routing, traditional routers may create a bottleneck as a network grows, causing the network traffic to slow down. Furthermore, conventional routing is based on static rules that fail to take into account dynamic changes in servers, requests, and network loads.

[0004] US 2012/084419 A1 discloses a method and system of a service load balancer in which a service response includes server status information.

SUMMARY



[0005] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to exclusively identify key features or essential features of the claimed subject matter, nor is it intended as an aid in determining the scope of the claimed subject matter.

[0006] The invention is defined by the appended independent claims. Further embodiments are defined in the dependent claims.

[0007] These and other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory and do not restrict aspects as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS



[0008] 

FIG. 1 illustrates an example network diagram, where server health information may be exchanged through service communication headers between a request management entity and servers according to some embodiments;

FIG. 2 illustrates an example network diagram, where client information may be exchanged through request headers between a request management entity and clients according to other embodiments;

FIG. 3 illustrates a functional breakdown of a request manager according to embodiments;

FIG. 4 illustrates conceptually an example server farm infrastructure in a dedicated mode deployment according to some embodiments;

FIG. 5 illustrates conceptually an example server farm infrastructure in an integrated mode deployment according to other embodiments;

FIG. 6 is a networked environment, where a system according to embodiments may be implemented;

FIG. 7 is a block diagram of an example computing operating environment, where embodiments may be implemented; and

FIG. 8 illustrates a logic flow diagram for a process of exchanging server health and client information through headers for request management according to embodiments.


DETAILED DESCRIPTION



[0009] As briefly described above, headers in Hypertext Transfer Protocol (HTTP) or similar protocol based communications between servers and a management module (e.g., a router or a throttler) may be used to exchange server health and client information such that requests can be throttled, routed, and/or load balanced based on server health information, client information, etc.

[0010] In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown by way of illustrations specific embodiments or examples. These aspects may be combined, other aspects may be utilized, and structural changes may be made. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims.

[0011] While the embodiments will be described in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a computing device, those skilled in the art will recognize that aspects may also be implemented in combination with other program modules.

[0012] Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that embodiments may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and comparable computing devices. Embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

[0013] Embodiments may be implemented as a computer-implemented process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media. The computer program product may be a computer storage medium readable by a computer system and encoding a computer program that comprises instructions for causing a computer or computing system to perform example process(es). The computer-readable storage medium is a computer-readable memory device. The computer-readable storage medium can for example be implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk, or a compact disk, and comparable physical media.

[0014] Throughout this specification, the term "platform" may be a combination of software and hardware components for exchanging headers in server communications to convey server health and client information as part of request management. Examples of platforms include, but are not limited to, a hosted service executed over a plurality of servers, an application executed on a single computing device, and comparable systems. The term "server" generally refers to a computing device executing one or more software programs typically in a networked environment. More detail on these technologies and example operations is provided below.

[0015] Referring to FIG. 1, diagram 100 illustrates an example network diagram, where server health information may be exchanged through service communication headers between a request management entity and servers according to some embodiments. The components and environments shown in diagram 100 are for illustration purposes. Embodiments may be implemented in various local, networked, cloud-based and similar computing environments employing a variety of computing devices and systems, hardware and software.

[0016] In an example environment illustrated in diagram 100, servers 112 may be part of a server farm or similar infrastructure (e.g. cloud 108) providing one or more hosted services to users accessing the services through client applications (e.g., browsers) executed on client devices 102 and 104, for example. A server 106 may receive requests from the client applications and forward those to a request management server 104 configured to route the requests to proper servers.

[0017] In an example scenario, a collaboration service such as SharePoint® by Microsoft Corporation of Redmond, WA may be provided as the service. The collaboration service may enable storage, sharing, and editing of documents of various types, among other things. Thus, a user may access the collaboration service through a browser on their client device to view a document, edit the document, and save it at its server location. These actions may be facilitated through requests submitted by the browser to the server 106 and routed by the request management server 104. Different servers in the cloud 108 may be responsible for different aspects of the service. For example, one server may be responsible for storage of certain types of documents, while another may be responsible for facilitating the editing functionality. In addition, multiple servers may be responsible for the same task to provide capacity, redundancy, etc. Request management server 104 may send a request through router 110 to a proper server based on that server's availability, health status, request type, and so on. In some embodiments, the routing (as well as throttling and/or load balancing) functionality may be integrated into the router 110 instead of the request management server 104.

[0018] In deciding which server to send a request to, the request management server may take into account, as discussed above, the server health status. The server health status may be provided to the request management server 104 by the individual servers in the form of a score or more detailed information (118) in headers 114 of service communications 116, as part of the regular communication exchange between the request management server 104 and the servers 112.

[0019] As part of the routine operations, the servers 112 and other components of the service may exchange service communications 116 periodically or on-demand. The service communications 116 may include headers 114. Examples of headers, depending on communication type, may include HTTP headers, SharePoint headers, etc. In an example system, each server may determine its health status (e.g., processor capacity, memory capacity, bandwidth, current load, etc.) and transmit the health status to the request management server 104.

[0020] The health information 118 or score may be customizable by an administrator. For example, the health status may be a single score, multiple scores (for groups or each of the health metrics), or more detailed information (health metric value). The information may be sent to the request management server 104 in the headers. The request management server 104 (or router 110) may take the health information into account in deciding whether to block or forward a request, which server to route the request to, and how to perform load balancing.
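The exchange described in the preceding paragraphs can be sketched as follows. This is a minimal illustration only: the header name "X-ServerHealth", the 0-10 score scale, and the metric-folding rule are all assumptions for the sake of the example, since the specification fixes neither a header name nor a scoring scheme.

```python
def compute_health_score(cpu_load, memory_used, bandwidth_used):
    """Fold individual health metrics (utilization fractions in 0.0-1.0)
    into a single 0-10 score; 0 is healthiest. Scale is an assumption."""
    worst = max(cpu_load, memory_used, bandwidth_used)
    return round(worst * 10)


def attach_health_header(headers, cpu_load, memory_used, bandwidth_used):
    """Attach the score to an outgoing service communication's headers,
    using the hypothetical "X-ServerHealth" header name."""
    score = compute_health_score(cpu_load, memory_used, bandwidth_used)
    headers["X-ServerHealth"] = str(score)
    return headers
```

On the request management side, the same header can be read back from each server's service communications and folded into the blocking, routing, and load-balancing decisions.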

[0021] The header-based exchange of health information and routing may be application intelligent, with request routing (or throttling) prioritized based on the application the request is associated with. For example, requests associated with an application that has higher resource demand compared to another application may be deprioritized. Similarly, if a tenant uses a particular application (e.g., a productivity application), requests directed to servers associated with that tenant may be prioritized if they are associated with the tenant's preferred application.

[0022] In other embodiments, routing (and/or throttling) may be based on a hosted functionality. For example, different functionalities may be hosted on the same server, and requests for one or more of those functionalities may be prioritized. An example implementation may prioritize requests for accessing a team site over requests for downloading a large file, and so on. Similarly, functionality based routing may be performed among multiple servers (i.e., servers providing certain functionality may be selected over others providing other functionality).
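The application-intelligent and functionality-based prioritization described in the two preceding paragraphs might be sketched as a priority table consulted when ordering pending requests. The application names and priority values below are purely illustrative assumptions, not part of the specification.

```python
# Lower number = higher priority. All names and values are hypothetical.
PRIORITIES = {
    "team_site_access": 1,   # prioritized interactive functionality
    "productivity_app": 2,   # e.g., a tenant's preferred application
    "large_download": 8,     # heavy functionality, deprioritized
    "crawler": 9,            # background traffic, throttled first
}


def prioritize(requests):
    """Order pending requests so that those targeting higher-priority
    applications or functionalities are handled first; requests for
    unknown applications get a middle priority."""
    return sorted(requests, key=lambda r: PRIORITIES.get(r["app"], 5))
```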

[0023] The health score based, application intelligent, and functionality based routing features may be combined in various ways customizable by an administrator, for example. According to some embodiments, the request management server 104 and/or the router 110 may be implemented as software, hardware, or a combination of software and hardware.

[0024] FIG. 2 illustrates an example network diagram, where client information may be exchanged through request headers between a request management entity and clients according to other embodiments. Diagram 200 displays a cloud-based example infrastructure similar to that in FIG. 1. Servers 212 may perform various tasks associated with providing one or more hosted services and communicate with a request management server 204 through router 210 within cloud 208. Request management server 204 may receive requests 220 from a server 206, which in turn may receive multiple requests from client applications executed on computing devices 202 and 204, for example.

[0025] Each request may include a header 214, and each header 214 may include, in addition to other information, client information 222. The client information 222 may identify a client and/or a request. For example, the client may be identified as a real user or a bot. The request may be an "Open" request, a "Save" request, and any other task associated with the service provided by the servers 212.

[0026] The routing/throttling decisions may be made at the request management server 204 based on rules or a script. If the decision is made based on rules, client applications may be enabled to send information about themselves and/or the request such that the router can make the decision also based on the client information. For example, if the request is a bot request, it may be blocked or sent to a low-health server because it is likely to be repeated. On the other hand, a "Save" request from a word processor to a collaboration server may be high priority because the user may lose their document if the request is not processed correctly and timely.
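A rule-based triage of the kind just described might look like the sketch below. The header names "X-ClientType" and "X-RequestType" and the rule set are hypothetical; the specification does not fix header names or a rule syntax.

```python
def decide(request_headers, healthy_servers, low_health_servers):
    """Return an (action, target) pair for a request, based on the client
    information carried in the request headers. Header names are assumed."""
    client_type = request_headers.get("X-ClientType", "unknown")
    request_type = request_headers.get("X-RequestType", "unknown")

    if client_type == "bot":
        # A bot request is likely to be repeated, so it can be blocked
        # or diverted to a low-health server.
        if low_health_servers:
            return ("route", low_health_servers[0])
        return ("block", None)

    if request_type == "save":
        # A "Save" request is high priority: the user may lose work
        # if it is not processed correctly and in a timely way.
        return ("route", healthy_servers[0])

    return ("route", healthy_servers[-1])
```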

[0027] According to some embodiments, the routing and throttling components may be configured for each application during its creation. The components may be executed prior to other service (e.g., collaboration) code. An example system may be scalable, where multiple routers manage groups of servers (e.g., hierarchical structure). In a multi-router environment, the routing/throttling rules (or script) may be provided to one and automatically disseminated to others. Similarly, client information discussed above may also be disseminated to all routers.

[0028] In some embodiments, the request management server may internally keep track of two sets of servers: (1) a set of servers defined by the farm itself, which may not be removed or added manually, although changes to the farm may be reflected, and where the availability of each server may be modified per web application; and (2) a custom set of machines defined by the administrator, which may be removed or added manually, and where the availability of each may be modified per web application.

[0029] This differentiation of machine sets may allow for ease of use, with optional customization. The entry bar for enabling routing may be minimal if the administrator does not need to specify all servers. Additionally, the customization of additional servers may be needed for dedicated routing farm scenarios. The health status of each server in a routing targets list may be used to perform weighted load balancing.
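The weighted load balancing mentioned above can be sketched as follows, assuming a 0-10 health score where lower means healthier; both the score scale and the linear weighting scheme are illustrative assumptions, not fixed by the specification.

```python
import random


def pick_server(targets, rng=random):
    """targets: list of (server_name, health_score) pairs, scores in 0..10
    (lower is healthier). Healthier servers receive proportionally more
    of the incoming requests."""
    weights = [11 - score for _, score in targets]  # healthiest weighs most
    names = [name for name, _ in targets]
    return rng.choices(names, weights=weights, k=1)[0]
```

Over many requests, a server reporting a score of 0 would be selected roughly eleven times as often as one reporting a score of 10 under this particular weighting.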

[0030] In order to route the request to a final target, once determined, a new request may be created. The incoming request may be copied to this new request. This may include values such as headers, cookies, and the request body. The response from the server may be copied to the outgoing stream, which may also include values such as headers, cookies, and the response body.
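The copying step just described might be sketched as below. The `Message` class is a placeholder standing in for a real HTTP request/response object, and the use of a "Host" header to point the copy at the selected server is an assumption for illustration.

```python
class Message:
    """Illustrative stand-in for an HTTP request or response."""

    def __init__(self, headers=None, cookies=None, body=b""):
        self.headers = dict(headers or {})   # copy, so originals are untouched
        self.cookies = dict(cookies or {})
        self.body = body


def copy_to_new_request(incoming, target):
    """Create the new request to the final target by copying the incoming
    request's headers, cookies, and body."""
    out = Message(incoming.headers, incoming.cookies, incoming.body)
    out.headers["Host"] = target  # direct the copy at the selected server
    return out


def copy_response_to_outgoing(server_response):
    """Copy the server's response values onto the stream returned to the
    client: headers, cookies, and response body."""
    return Message(server_response.headers, server_response.cookies,
                   server_response.body)
```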

[0031] FIG. 3 illustrates a functional breakdown of a request manager according to embodiments. Diagram 300 displays a Request Manager (RM) 330, whose task is to decide whether a request may be allowed into the service, and if so, to which WFE server the request may be sent. These two decisions may be made by the three major functional parts of RM 330: Request Routing (RR) 332, Request Throttling and Prioritizing (RTP) 334, and Request Load Balancing (RLB) 336. In some embodiments, RM 330 may perform request management on a per-web-application basis.

[0032] In an example configuration, RR 332 may select which WFE servers the request may be sent to. RTP 334 may filter WFEs to select those healthy enough to accept the request (e.g., processor capacity, memory capacity, etc.). RLB 336 may select a single WFE server to route the request based on weighting scheme(s). The load balancing feature may also take into account server health, in addition to other considerations such as network bandwidth, request type, and comparable ones.
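The three-stage decision described for RM 330 can be sketched as a pipeline. The server records, the 0-10 score scale (lower is healthier), and the throttling threshold below are illustrative assumptions, not part of the specification.

```python
def request_routing(request, wfes):
    """RR: select which WFE servers the request may be sent to."""
    return [w for w in wfes if request["app"] in w["apps"]]


def request_throttling(candidates, max_score=7):
    """RTP: filter WFEs to those healthy enough to accept the request
    (threshold and score scale are assumptions)."""
    return [w for w in candidates if w["health"] <= max_score]


def request_load_balancing(candidates):
    """RLB: select a single WFE; here simply the healthiest one."""
    if not candidates:
        return None  # deny: no WFE can take the request
    return min(candidates, key=lambda w: w["health"])["name"]


def manage(request, wfes):
    """Full RM decision: a WFE name, or None if the request is denied."""
    return request_load_balancing(
        request_throttling(request_routing(request, wfes)))
```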

[0033] FIG. 4 illustrates conceptually an example server farm infrastructure in a dedicated mode deployment according to some embodiments.

[0034] An RM according to embodiments supports two deployment modes: a dedicated mode and an integrated mode. Either mode may be used without restriction depending on an administrator decision regarding which mode is more suitable for a particular configuration; for example, both modes may be used simultaneously. A dedicated RM deployment may statically route to any WFE in a server farm, rather than a specific WFE in the server farm. An RM running on the WFE may then determine which WFE within the farm may process a request based on farm-level information, such as WFE health.

[0035] The dedicated mode deployment, as shown in diagram 400, may include a set of WFEs 452 dedicated to performing RM duties only. This is a logical division of labor. The dedicated RM WFEs 452 may be placed in their own RM farm 450 that is located between the Hardware Load Balancers (HLBs) 442, 444, 446 and the regular SP farms 460, 470. The HLBs 442, 444, 446 may send all requests to the RM WFEs 452. RMs executed on these WFEs may decide to which service (SP) WFEs 462, 472 the requests may be sent, if any, and send them there without any more service processing. The SP WFEs 462, 472 may perform their regular tasks in processing the requests and then send responses back through the RM WFEs 452 to the clients.

[0036] The SP farms 460 and 470 may be configured to perform the same work as any other. Their difference from regular service farms may be that the RM WFEs have RM enabled, whereas, in a regular service farm, the WFEs may have RM disabled. Dedicated mode may be advantageous in larger-scale deployments when physical machines are readily available, since RM processing and service processing may not have to compete over resources as they would if they were executed on the same machine.

[0037] FIG. 5 illustrates conceptually an example server farm infrastructure in an integrated mode deployment according to other embodiments.

[0038] In an integrated mode deployment, as shown in diagram 500, all regular WFEs 562 within a service (SP) farm 560 may execute RM. Thus, HLBs (542) may send requests to all WFEs 562. When a WFE receives a request, RM may decide whether to allow it to be processed locally, route it to a different WFE, or deny it from being processed at all.

[0039] Integrated mode may be advantageous in smaller-scale deployments, when physical machines are not readily available. This mode may allow RM and the rest of service applications to be executed on all machines, rather than requiring machines dedicated to each. In some embodiments, RM may have several configurable parts, which may be grouped into two main categories: general settings and decision information. General settings may include parameters that encompass the feature as a whole, such as enabling/disabling RR and RTP. Decision information may include the information that is used during the routing and throttling processes, such as routing and throttling rules (or scripts).

[0040] The example scenarios and schemas in FIG. 1 through 5 are shown with specific components, communication protocols, data types, and configurations. Embodiments are not limited to systems according to these example configurations. Other protocols, configurations, headers, and so on may be employed in implementing exchange of server health and client information through headers for request management using the principles described herein.

[0041] FIG. 6 is a networked environment, where a system according to embodiments may be implemented. Local and remote resources may be provided by one or more servers 614 or a single server (e.g. web server) 616 such as a hosted service. A request management application, such as a routing application, may be executed on a management server (e.g., server 614) directing requests from client applications on individual computing devices such as a smart phone 613, a tablet device 612, or a laptop computer 611 ('client devices') to proper servers (e.g., database server 618) through network(s) 610.

[0042] As discussed above, server health and client information may be exchanged through headers for request management. Routing / throttling decisions may be made based on rules or a script at the router. If the decision is made based on rules, clients may be enabled to send information about themselves and/or the request such that the router can make the decision also based on the client information. Client devices 611-613 may enable access to applications executed on remote server(s) (e.g. one of servers 614) as discussed previously. The server(s) may retrieve or store relevant data from/to data store(s) 619 directly or through database server 618.

[0043] Network(s) 610 may comprise any topology of servers, clients, Internet service providers, and communication media. A system according to embodiments may have a static or dynamic topology. Network(s) 610 may include secure networks such as an enterprise network, an unsecure network such as a wireless open network, or the Internet. Network(s) 610 may also coordinate communication over other networks such as Public Switched Telephone Network (PSTN) or cellular networks. Furthermore, network(s) 610 may include short range wireless networks such as Bluetooth or similar ones. Network(s) 610 provide communication between the nodes described herein. By way of example, and not limitation, network(s) 610 may include wireless media such as acoustic, RF, infrared and other wireless media.

[0044] Many other configurations of computing devices, applications, data sources, and data distribution systems may be employed to exchange server health and client information through headers in server communications for request management. Furthermore, the networked environments discussed in FIG. 6 are for illustration purposes only. Embodiments are not limited to the example applications, modules, or processes.

[0045] FIG. 7 and the associated discussion are intended to provide a brief, general description of a suitable computing environment in which embodiments may be implemented. With reference to FIG. 7, a block diagram of an example computing operating environment for an application according to embodiments is illustrated, such as computing device 700. In a basic configuration, computing device 700 may include at least one processing unit 702 and system memory 704. Computing device 700 may also include a plurality of processing units that cooperate in executing programs. Depending on the exact configuration and type of computing device, the system memory 704 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. System memory 704 typically includes an operating system 705 suitable for controlling the operation of the platform, such as the WINDOWS® and WINDOWS PHONE® operating systems from MICROSOFT CORPORATION of Redmond, Washington. The system memory 704 may also include one or more software applications such as program modules 706, a request management application 722, and a routing module 724.

[0046] The request management application 722 may manage incoming requests including directing of requests to proper servers, maintenance of server status information, management of routing/throttling/load balancing rules and scripts according to embodiments. The routing module 724 may route incoming requests based on server health status and/or client information received through headers in communications with the servers and client applications. This basic configuration is illustrated in FIG. 7 by those components within dashed line 708.

[0047] Computing device 700 may have additional features or functionality. For example, the computing device 700 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 7 by removable storage 709 and non-removable storage 710. Computer readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Computer readable storage media is a computer readable memory device. System memory 704, removable storage 709 and non-removable storage 710 are all examples of computer readable storage media. Computer readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 700. Any such computer readable storage media may be part of computing device 700. Computing device 700 may also have input device(s) 712 such as keyboard, mouse, pen, voice input device, touch input device, and comparable input devices. Output device(s) 714 such as a display, speakers, printer, and other types of output devices may also be included. These devices are well known in the art and need not be discussed at length here.

[0048] Computing device 700 may also contain communication connections 716 that allow the device to communicate with other devices 718, such as over a wireless network in a distributed computing environment, a satellite link, a cellular link, and comparable mechanisms. Other devices 718 may include computer device(s) that execute communication applications, storage servers, and comparable devices. Communication connection(s) 716 is one example of communication media. Communication media can include therein computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.

[0049] Example embodiments also include methods. These methods can be implemented in any number of ways, including the structures described in this document. One such way is by machine operations, of devices of the type described in this document.

[0050] Another optional way is for one or more of the individual operations of the methods to be performed in conjunction with one or more human operators performing some of the operations. These human operators need not be co-located with each other, but each can be only with a machine that performs a portion of the program.

[0051] FIG. 8 illustrates a logic flow diagram for a process of exchanging server health and client information through headers for request management according to embodiments. Process 800 may be implemented by a request management application such as a router in a server farm management in some examples.

[0052] Process 800 may begin with operation 810, where server health information and/or client information (e.g., client type) may be received in headers of service communications from servers in a service infrastructure or in headers of request communications from one or more clients, respectively.

[0053] At operation 820, a request may be received from a client followed by operation 830, where a server to receive the request may be selected based on evaluating the received server health and/or client information. At operation 840, the request may be routed, throttled, or load balanced to the selected server.

[0054] Server health and/or client information based request management according to some embodiments may enable advanced routing and throttling behaviors. Examples of such behaviors may include routing requests to web servers with a good health score and preventing impact on lower health web servers; prioritizing important requests (e.g., end user traffic) by throttling other types of requests (e.g., crawlers); routing requests to web servers based upon HTTP elements such as host name or client IP Address; routing traffic to specific servers based on type (e.g., search, client applications, etc.); identifying and blocking harmful requests so the web servers never process them; routing heavy requests to web servers with more resources; and allowing for easier troubleshooting by routing to specific machines experiencing problems and/or from particular client computers.

[0055] Some embodiments may be implemented in a computing device that includes a communication module, a memory, and a processor, where the processor executes a method as described above or comparable ones in conjunction with instructions stored in the memory. Other embodiments may be implemented as a computer readable storage medium with instructions stored thereon for executing a method as described above or similar ones.

[0056] The operations included in process 800 are for illustration purposes. Exchange of server health and client information through headers for request management according to embodiments may be implemented by similar processes with fewer or additional steps, as well as in different order of operations using the principles described herein.

[0057] The above specification, examples and data provide a complete description of the manufacture and use of the composition of the embodiments. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims and embodiments.


Claims

1. A method executed on a computing device in a service infrastructure, where the computing device is configured to be a request management server for exchange of server health and client information through headers in request management, the method comprising:

receiving (810), at the request management server, server health information from one or more servers in the service infrastructure in headers of service communication;

receiving (820), at the request management server, a request from a client, wherein the request includes client information in a header of the request;

determining a client type from the client information;

selecting (830), by the request management server, a server from the one or more servers for the received request to be routed to based on the received server health information and the client information; and

one or more of routing (840) the request, throttling the request, and load balancing the request to the selected server,

characterized in that the service infrastructure is a server farm infrastructure in a dedicated mode deployment, where a set of servers is dedicated to performing request management duties, or in an integrated mode deployment, in which all servers in the farm execute request management; and

wherein the request management server is configured to support the dedicated mode by being enabled to perform request management duties only, and to support the integrated mode by being enabled to execute request management duties and service applications for processing client requests on the same machine.
 
2. The method of claim 1, further comprising:
receiving the server health information in the form of one of a single health score, multiple health scores, and a health metric value.
 
3. The method of claim 1, further comprising:
making at least one from a set of a routing decision, a throttling decision, and a load balancing decision based on one of a rule and a script.
 
4. The method of claim 1, wherein the service infrastructure is a scalable system comprising a plurality of request managers receiving the server health information and the client information.
 
5. A computing device (700) connectable to a service infrastructure, where the computing device is configured to be a request management server for employing exchange of server health and client information through headers in request management, the computing device comprising:

a communication module configured to communicate with one or more servers in the service infrastructure and one or more clients;

a memory (704) configured to store instructions; and

a processor (702) coupled to the memory, the processor configured to execute a request management application (722) in conjunction with the instructions stored in the memory, wherein the request management application is configured to:

receive, at the request management server, server health information in headers of service communication from the one or more servers;

receive, at the request management server, a request from a client;

receive client information associated with a client type in a header of communication providing the request from the client; and

make at least one from a set of a routing decision, a throttling decision, and a load balancing decision based on the received client information;

select, by the request management server, a server from the one or more servers for the received request to be routed to based on the received server health information and the client information; and

one or more of routing the request, throttling the request, and load balancing the request to the selected server;

characterized in that the service infrastructure is a server farm infrastructure in

a dedicated mode deployment, where a set of servers is dedicated to performing request management duties, or in

an integrated mode deployment, in which all servers in the farm execute request management;

wherein the request management server is configured to support the dedicated mode by being enabled to perform request management duties only, and to support the integrated mode by being enabled to execute request management duties and service applications for processing client requests on the same machine.
 
6. The computing device of claim 5, wherein the request management application is further configured to:

receive client information associated with a request type in a header of communication providing the request from the client; and

make at least one from a set of a routing decision, a throttling decision, and a load balancing decision based on the received client information.


 
7. The computing device of claim 6, wherein the request management application is further configured to:
employ one or more rules to make the decision, wherein the one or more rules are received from one of an administrator and the client, and disseminated to a plurality of request managers in the service infrastructure.
 
8. The computing device of claim 5, wherein the service communication is one of a periodic service communication and an on-demand service communication.
 
9. A computer-readable memory device comprising instructions stored thereon for exchange of server health and client information through headers in request management, where the instructions, when executed by a computer in a service infrastructure, cause the computer to be a request management server by carrying out the steps of:

receiving, at the request management server, server health information from one or more servers in the service infrastructure in headers of service communication between the servers and the request management server;

receiving, at the request management server, a request from a client, wherein the request includes client information identifying a client type in a header of the request;

making at least one from a set of a routing decision, a throttling decision, and a load balancing decision based on the received server health information and the client type;

selecting, by the request management server, a server from the one or more servers for the received request to be routed to based on the received server health information and the client information; and

one or more of routing the request, throttling the request, and load balancing the request to the selected server,

characterized in that the service infrastructure is a server farm infrastructure in a dedicated mode deployment, where a set of servers is dedicated to performing request management duties, or in an integrated mode deployment, in which all servers in the farm execute request management;

wherein the request management server is configured to support the dedicated mode by being enabled to perform request management duties only, and to support the integrated mode by being enabled to execute request management duties and service applications for processing client requests on the same machine.


 
10. The computer-readable memory device of claim 9, wherein the server health information is in the form of one of a single health score, multiple health scores, and a health metric value.
 
11. The computer-readable memory device of claim 9, wherein the at least one from a set of a routing decision, a throttling decision, and a load balancing decision is made using one of a rule and a script.
 


Ansprüche

1. Auf einer Computervorrichtung in einer Dienstinfrastruktur ausgeführtes Verfahren, wobei die Computervorrichtung als ein Anforderungsmanagementserver zum Austausch von Serverintegritäts- und Clientinformationen durch Header im Anforderungsmanagement ausgebildet ist, wobei das Verfahren umfasst:

Empfangen (810) einer Serverintegritätsinformation von einem oder mehreren Servern in der Dienstinfrastruktur in Headern der Dienstkommunikation auf dem Anforderungsmanagementserver,

Empfangen (820) einer Anforderung von einem Client auf dem Anforderungsmanagementserver, wobei die Anforderung eine Clientinformation in einem Header der Anforderung umfasst; Bestimmen eines Clienttyps aus der Clientinformation;

Auswählen (830) eines Servers von den einen oder mehreren Servern, an den die empfangene Anforderung weiterzuleiten ist, durch den Anforderungsmanagementserver auf der Basis der empfangenen Serverintegritätsinformation und der Clientinformation; und

Weiterleiten (840) der Anforderung, Drosseln der Anforderung oder/und Lastenausgleich der Anforderung zum ausgewählten Server;

dadurch gekennzeichnet, dass die Dienstinfrastruktur eine Serverfarm-Infrastruktur in einer Bereitstellung im dedizierten Modus ist, wobei eine Menge von Servern zum Ausführen von Anforderungsmanagementaufgaben vorgesehen ist, oder in einer Bereitstellung im integrierten Modus ist, in dem alle Server in der Farm das Anforderungsmanagement ausführen; und

wobei der Anforderungsmanagementserver zum Unterstützen des dedizierten Modus durch Befähigung zum Ausführen von ausschließlich Anforderungsmanagementaufgaben und zum Unterstützen des integrierten Modus durch Befähigung zum Ausführen von Anforderungsmanagementaufgaben und Dienstanwendungen zum Verarbeiten von Clientanforderungen auf dem gleichen Gerät ausgebildet ist.


 
2. Verfahren nach Anspruch 1, ferner umfassend:
Empfangen der Serverintegritätsinformation in der Form eines einzelnen Integritätswerts, von mehreren Integritätswerten oder eines Integritätsmesswerts.
 
3. Verfahren nach Anspruch 1, ferner umfassend:
Treffen wenigstens einer von einer Menge einer Weiterleitungsentscheidung, einer Drosselungsentscheidung und einer Lastenausgleichsentscheidung auf der Basis einer Regel oder eines Skripts.
 
4. Verfahren nach Anspruch 1, wobei die Dienstinfrastruktur ein skalierbares System umfassend eine Vielzahl von die Serverintegritätsinformation und die Clientinformation empfangenden Anforderungsmanagern ist.
 
5. Mit einer Dienstinfrastruktur verbindbare Computervorrichtung (700), wobei die Computervorrichtung als ein Anforderungsmanagementserver zum Verwenden eines Austauschs von Serverintegritäts- und Clientinformationen durch Header im Anforderungsmanagement ausgebildet ist, wobei die Computervorrichtung umfasst:

ein zum Kommunizieren mit einem oder mehreren Servern in der Dienstinfrastruktur und einem oder mehreren Clients ausgebildetes Kommunikationsmodul;

einen zum Speichern von Anweisungen ausgebildeten Speicher (704); und

einen mit dem Speicher gekoppelten Prozessor (702), wobei der Prozessor zum Ausführen einer Anforderungsmanagementanwendung (722) in Verbindung mit den im Speicher gespeicherten Anweisungen ausgebildet ist, wobei die Anforderungsmanagementanwendung ausgebildet ist zum:

Empfangen einer Serverintegritätsinformation in Headern der Dienstkommunikation von den einen oder mehreren Servern auf dem Anforderungsmanagementserver;

Empfangen einer Anforderung von einem Client auf dem Anforderungsmanagementserver;

Empfangen einer mit einem Clienttyp verknüpften Clientinformation in einem Header der die Anforderung vom Client bereitstellenden Kommunikation; und

Treffen wenigstens einer von einer Menge einer Weiterleitungsentscheidung, einer Drosselungsentscheidung und einer Lastenausgleichsentscheidung auf der Basis der empfangenen Clientinformation,

Auswählen eines Servers von den einen oder mehreren Servern, an den die empfangene Anforderung weiterzuleiten ist, durch den Anforderungsmanagementserver auf der Basis der empfangenen Serverintegritätsinformation und der Clientinformation; und

Weiterleiten der Anforderung, Drosseln der Anforderung oder/und Lastenausgleich der Anforderung zum ausgewählten Server;

dadurch gekennzeichnet, dass die Dienstinfrastruktur eine Serverfarm-Infrastruktur in einer Bereitstellung im dedizierten Modus ist, wobei eine Menge von Servern zum Ausführen von Anforderungsmanagementaufgaben vorgesehen ist, oder in einer Bereitstellung im integrierten Modus ist, in dem alle Server in der Farm das Anforderungsmanagement ausführen;

wobei der Anforderungsmanagementserver zum Unterstützen des dedizierten Modus durch Befähigung zum Ausführen von ausschließlich Anforderungsmanagementaufgaben und zum Unterstützen des integrierten Modus durch Befähigung zum Ausführen von Anforderungsmanagementaufgaben und Dienstanwendungen zum Verarbeiten von Clientanforderungen auf dem gleichen Gerät ausgebildet ist.


 
6. Computervorrichtung nach Anspruch 5, wobei die Anforderungsmanagementanwendung ferner ausgebildet ist zum:

Empfangen einer mit einem Anforderungstyp verknüpften Clientinformation in einem Header der die Anforderung vom Client bereitstellenden Kommunikation; und

Treffen wenigstens einer von einer Menge einer Weiterleitungsentscheidung, einer Drosselungsentscheidung und einer Lastenausgleichsentscheidung auf der Basis der empfangenen Clientinformation.


 
7. Computervorrichtung nach Anspruch 6, wobei die Anforderungsmanagementanwendung ferner ausgebildet ist zum:
Verwenden von einer oder mehreren Regeln zum Treffen der Entscheidung, wobei die eine oder mehreren Regeln von einem Administrator oder vom Client empfangen werden und an eine Vielzahl von Anforderungsmanagern in der Dienstinfrastruktur verteilt werden.
 
8. Computervorrichtung nach Anspruch 5, wobei die Dienstkommunikation eine periodische Dienstkommunikation oder eine Dienstkommunikation auf Anforderung ist.
 
9. Computerlesbare Speichervorrichtung umfassend darauf gespeicherte Anweisungen für den Austausch von Serverintegritäts- und Clientinformationen durch Header im Anforderungsmanagement, wobei die Anweisungen, wenn von einem Computer in einer Dienstinfrastruktur ausgeführt, einen Computer veranlassen, ein Anforderungsmanagementserver zu sein, durch Ausführen der Schritte zum:

Empfangen einer Serverintegritätsinformation von einem oder mehreren Servern in der Dienstinfrastruktur in Headern der Dienstkommunikation zwischen den Servern und dem Anforderungsmanagementserver auf dem Anforderungsmanagementserver;

Empfangen einer Anforderung von einem Client auf dem Anforderungsmanagementserver, wobei die Anforderung eine Clientinformation zum Identifizieren eines Clienttyps in einem Header der Anforderung umfasst;

Treffen wenigstens einer von einer Menge einer Weiterleitungsentscheidung, einer Drosselungsentscheidung und einer Lastenausgleichsentscheidung auf der Basis der empfangenen Serverintegritätsinformation und des Clienttyps;

Auswählen eines Servers von den einen oder mehreren Servern, an den die empfangene Anforderung weiterzuleiten ist, durch den Anforderungsmanagementserver auf der Basis der empfangenen Serverintegritätsinformation und der Clientinformation; und

Weiterleiten der Anforderung, Drosseln der Anforderung oder/und Lastenausgleich der Anforderung zum ausgewählten Server,

dadurch gekennzeichnet, dass die Dienstinfrastruktur eine Serverfarm-Infrastruktur in einer Bereitstellung im dedizierten Modus ist, wobei eine Menge von Servern zum Ausführen von Anforderungsmanagementaufgaben vorgesehen ist, oder in einer Bereitstellung im integrierten Modus ist, in dem alle Server in der Farm das Anforderungsmanagement ausführen;

wobei der Anforderungsmanagementserver zum Unterstützen des dedizierten Modus durch Befähigung zum Ausführen von ausschließlich Anforderungsmanagementaufgaben und zum Unterstützen des integrierten Modus durch Befähigung zum Ausführen von Anforderungsmanagementaufgaben und Dienstanwendungen zum Verarbeiten von Clientanforderungen auf dem gleichen Gerät ausgebildet ist.


 
10. Computerlesbare Speichervorrichtung nach Anspruch 9, wobei die Serverintegritätsinformation in der Form eines einzelnen Integritätswerts, von mehreren Integritätswerten oder eines Integritätsmesswerts vorliegt.
 
11. Computerlesbare Speichervorrichtung nach Anspruch 9, wobei die wenigstens eine von einer Menge einer Weiterleitungsentscheidung, einer Drosselungsentscheidung und einer Lastenausgleichsentscheidung unter Verwendung einer Regel oder eines Skripts erfolgt.
 


Revendications

1. Procédé exécuté sur un dispositif informatique dans une infrastructure de services, le dispositif informatique étant configuré pour être un serveur de gestion de requêtes concernant l'état de santé de serveurs et des informations sur des clients par l'intermédiaire d'en-têtes dans la gestion des requêtes, le procédé comprenant :

la réception (810), au niveau du serveur de gestion des requêtes, d'informations sur l'état de serveurs en provenance d'un ou plusieurs serveurs dans l'infrastructure de services dans des en-têtes de communication de service,

la réception (820), au niveau du serveur de gestion de requêtes, d'une requête provenant d'un client, la requête incluant des informations sur le client dans un en-tête de la requête,

la détermination d'un type de client à partir des informations sur le client,

la sélection (830), par le serveur de gestion de requêtes, d'un serveur parmi le ou les serveurs vers lequel la requête reçue est à acheminer sur la base des informations d'état de serveurs reçues et des informations sur le client, et

une ou plusieurs actions choisies parmi l'acheminement (840) de la requête, la limitation de la requête et l'équilibrage de charge de la requête pour le serveur sélectionné,

caractérisé en ce que l'infrastructure de services est une infrastructure de grappe de serveurs selon un déploiement en mode spécialisé dans lequel un ensemble de serveurs est spécialisé à la réalisation de tâches de gestion de requêtes, ou bien selon un déploiement en mode intégré dans lequel tous les serveurs dans la grappe exécutent la gestion de requêtes, et

dans lequel le serveur de gestion de requêtes est configuré pour prendre en charge le mode spécialisé en étant activé pour n'effectuer que des tâches de gestion de requêtes, et pour prendre en charge le mode intégré en étant activé pour exécuter des tâches de gestion de requêtes et des applications de service destinées à traiter des requêtes de clients sur la même machine.


 
2. Procédé selon la revendication 1, comprenant en outre :
la réception des informations sur l'état de serveurs sous la forme de l'une parmi une note unique sur l'état, des notes multiples sur les états et une valeur de mesure de l'état.
 
3. Procédé selon la revendication 1, comprenant en outre :
la réalisation d'au moins l'une parmi un ensemble constitué d'une décision d'acheminement, d'une décision de limitation et d'une décision d'équilibrage de charge sur la base de l'une parmi une règle et une séquence type.
 
4. Procédé selon la revendication 1, dans lequel l'infrastructure de services est un système évolutif comprenant une pluralité de gestionnaires de requêtes recevant les informations sur l'état du serveur et les informations sur le client.
 
5. Dispositif informatique (700) pouvant être connecté à une infrastructure de services, le dispositif informatique étant configuré pour être un serveur de gestion de requêtes destiné à utiliser l'échange d'informations sur l'état de santé de serveurs et sur le client par l'intermédiaire d'en-têtes dans une gestion de requêtes, le dispositif informatique comprenant :

un module de communication configuré pour communiquer avec un ou plusieurs serveurs dans l'infrastructure de services et avec un ou plusieurs clients,

une mémoire (704) configurée pour stocker des instructions, et

un processeur (702) couplé à la mémoire, le processeur étant configuré pour exécuter une application de gestion de requêtes (722) conjointement avec les instructions stockées dans la mémoire, l'application de gestion de requêtes étant configurée pour :

recevoir, au niveau du serveur de gestion des requêtes, des informations sur l'état de serveurs dans des en-têtes de communication de service provenant du ou des serveurs,

recevoir, au niveau du serveur de gestion de requêtes, une requête provenant d'un client,

recevoir des informations sur le client associées à un type de client dans un en-tête de communication provenant de la requête émise par le client, et

la détermination d'un type de client à partir des informations sur le client,

prendre au moins l'une parmi un ensemble constitué d'une décision d'acheminement, d'une décision de limitation et d'une décision d'équilibrage de la charge sur la base des informations reçues sur le client,

sélectionner, par le serveur de gestion de requêtes, un serveur parmi le ou les serveurs pour la requête reçue à acheminer sur la base des informations reçues sur l'état du serveur et des informations sur le client, et

une ou plusieurs actions choisies parmi l'acheminement de la requête, la limitation de la requête et l'équilibrage de charge de la requête pour le serveur sélectionné,

caractérisé en ce que l'infrastructure de services est une infrastructure de grappe de serveurs selon un déploiement en mode spécialisé dans lequel un ensemble de serveurs est spécialisé à la réalisation de tâches de gestion de requêtes, ou bien selon un déploiement en mode intégré dans lequel tous les serveurs dans la grappe exécutent la gestion de requêtes, et

dans lequel le serveur de gestion de requêtes est configuré pour prendre en charge le mode spécialisé en étant activé pour n'effectuer que des tâches de gestion de requêtes, et pour prendre en charge le mode intégré en étant activé pour exécuter des tâches de gestion de requêtes et des applications de service destinées à traiter des requêtes de clients sur la même machine.


 
6. Dispositif informatique selon la revendication 5, dans lequel l'application de gestion de requêtes est en outre configurée pour :

recevoir des informations sur le client associées à un type de requête dans un en-tête de communication fournissant la requête émise par le client, et

prendre au moins une décision parmi un ensemble constitué d'une décision d'acheminement, d'une décision de limitation et d'une décision d'équilibrage de la charge sur la base des informations reçues sur le client.


 
7. Dispositif informatique selon la revendication 6, dans lequel l'application de gestion de requêtes est en outre configurée pour :
utiliser une ou plusieurs règles pour prendre la décision, la ou les règles étant reçues en provenance de l'un d'un administrateur et du client, et étant disséminées sur une pluralité de gestionnaires de requêtes dans l'infrastructure de services.
 
8. Dispositif informatique selon la revendication 5, dans lequel la communication de service est l'une d'une communication de service périodique et d'une communication de service à la demande.
 
9. Composant mémoire pouvant être lu par ordinateur comprenant des instructions stockées en vue de l'échange d'informations sur la santé de serveurs et sur le client par l'intermédiaire d'en-têtes dans une gestion de requêtes, les instructions, lorsqu'elles sont exécutées par un ordinateur dans une infrastructure de services, amenant un ordinateur à être un serveur de gestion de requêtes en exécutant les étapes suivantes :

la réception, au niveau du serveur de gestion de requêtes, d'informations sur l'état de serveurs en provenance d'un ou plusieurs serveurs dans l'infrastructure de services dans des en-têtes de communication de service entre les serveurs et le serveur de gestion de requêtes,

la réception, au niveau du serveur de gestion de requêtes, d'une requête émise par un client, la requête incluant des informations sur le client identifiant le type de client dans un en-tête de la requête,

la prise d'au moins une décision choisie à partir d'un ensemble constitué d'une décision d'acheminement, d'une décision de limitation et d'une décision d'équilibrage de la charge sur la base des informations reçues sur l'état du serveur et sur le type de client,

la sélection, par le serveur de gestion de requêtes, d'un serveur choisi parmi le ou les serveurs vers lequel acheminer la requête reçue sur la base des informations reçues sur l'état du serveur et des informations sur le client, et

une ou plusieurs actions parmi l'acheminement de la requête, la limitation de la requête et l'équilibrage de la charge de la requête pour le serveur sélectionné,

caractérisé en ce que l'infrastructure de services est une infrastructure de grappe de serveurs selon un déploiement en mode spécialisé dans lequel un ensemble de serveurs est spécialisé à la réalisation de tâches de gestion de requêtes, ou bien selon un déploiement en mode intégré dans lequel tous les serveurs dans la grappe exécutent la gestion de requêtes, et

dans lequel le serveur de gestion de requêtes est configuré pour prendre en charge le mode spécialisé en étant activé pour n'effectuer que des tâches de gestion de requêtes, et pour prendre en charge le mode intégré en étant activé pour exécuter des tâches de gestion de requêtes et des applications de service destinées à traiter des requêtes de clients sur la même machine.


 
10. Composant mémoire pouvant être lu par ordinateur selon la revendication 9, dans lequel les informations sur l'état de serveurs se trouvent sous la forme de l'une parmi une note unique sur l'état, des notes multiples sur les états et une valeur de mesure de l'état.
 
11. Composant mémoire pouvant être lu par ordinateur selon la revendication 9, dans lequel la ou les décisions prises parmi un ensemble constitué d'une décision d'acheminement, d'une décision de limitation et d'une décision d'équilibrage de la charge, sont prises en utilisant l'une d'une règle et d'une séquence type.
 




Drawing
