(19)
(11)EP 3 518 504 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
16.09.2020 Bulletin 2020/38

(21)Application number: 19163542.4

(22)Date of filing:  30.12.2011
(51)International Patent Classification (IPC): 
H04L 29/08(2006.01)
H04L 12/761(2013.01)
H04L 29/12(2006.01)
H04L 12/66(2006.01)
H04L 12/707(2013.01)
H04L 29/06(2006.01)

(54)

METHODS AND SYSTEMS FOR TRANSMISSION OF DATA OVER COMPUTER NETWORKS

VERFAHREN UND SYSTEME ZUM ÜBERTRAGEN VON DATEN ÜBER RECHNERNETZWERKE

PROCÉDÉS ET SYSTÈMES DE TRANSMISSION DE DONNÉES SUR DES RÉSEAUX INFORMATIQUES


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 30.12.2010 US 201061428527 P

(43)Date of publication of application:
31.07.2019 Bulletin 2019/31

(62)Application number of the earlier application in accordance with Art. 76 EPC:
11854377.6 / 2659623

(73)Proprietor: Peerapp, Ltd.
Newton Upper Falls, MA 02464 (US)

(72)Inventors:
  • AROLOVITCH, Alan
    Brookline, Massachusetts 02446 (US)
  • BACHAR, Shmuel
    Hertzliya (IL)
  • GAVISH, Dror, Moshe
    Shoham (IL)
  • GRIN, Shahar, Guy
    Ramat Hasharon (IL)
  • SHEMER, Shay
    Hod Hasharon (IL)

(74)Representative: Gill, David Alan 
WP Thompson 138 Fetter Lane
London EC4A 1BT (GB)


(56)References cited:
US-A1- 2010 318 665
  
  • Björn Knutsson et al: "Transparent proxy signalling", Journal of Communications and Networks, 1 January 2001 (2001-01-01), page 164, XP055379740, Retrieved from the Internet: URL:https://pdfs.semanticscholar.org/5787/7aa83767a4a31c9e05ba1c38df24e0f676a6.pdf [retrieved on 2017-06-09]
  • ARIEL COHEN ET AL: "Supporting Transparent Caching with Standard Proxy Caches", INTERNET CITATION, 31 March 1999 (1999-03-31), XP002166031, Retrieved from the Internet: URL:http://www.ircache.net/CACHE/workshop99/ [retrieved on 2001-04-25]
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

Cross Reference to Related Application



[0001] This application claims priority from U.S. Provisional Patent Application No. 61/428,527, filed on December 30, 2010, entitled METHODS AND SYSTEMS FOR TRANSMISSION OF DATA OVER COMPUTER NETWORKS.

Background



[0002] The present application relates generally to transmission of data over a computer network such as, for example, the Internet, a local area network, a wide area network, a wireless network, and others.

[0003] Both enterprise and consumer broadband networks have undergone significant and continuous growth of traffic volumes over the last 3-5 years. The traffic growth is driven by the introduction of faster end-user connectivity options, the adoption of various bandwidth-intensive applications, and the introduction of various Internet-connected consumer electronics products.

[0004] To respond to the network congestion, degradation of application performance, and need to continuously upgrade their networks caused by the broadband growth, broadband network operators have introduced various network optimization solutions and services aimed at controlling their network costs, containing growth of network scale, improving the performance and security of Internet applications, and creating new revenue sources for the operators.

[0005] Such solutions include content caching, video transcoding and transrating, content adaptation, content filtering, intrusion detection and prevention, among others.

[0006] All these solution classes share several common deployment requirements. They should be deployed in a transparent way, so that Internet applications may operate without change.

[0007] It is also common for some network optimization solutions to modify the Internet content flow and/or content payload itself.

[0008] Furthermore, the network optimization solutions should address the scale requirements of modern broadband networks that frequently operate on 10 Gbps, 40 Gbps and 100 Gbps scale.

[0009] A common solution architecture for network-based optimization involves a network optimization platform deployed in conjunction with a network element (e.g., routing, switching, or dedicated DPI equipment) that sits in the data path and redirects traffic to the network optimization platform.

[0010] Network elements typically employ selective redirection of network traffic, matching types of traffic flows to the network optimization service used.

[0011] Network optimization services commonly use an application proxy architecture. A connection that would otherwise be established between two endpoints 'A' and 'B' (e.g., an Internet browser and a Web server) is terminated by a proxy 'P', and two distinct transport sessions (TCP or UDP) are created: one between A and P, and one between P and B. Following the connection setup, the proxy P relays data between the two sessions at the application level.
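
For illustration, the following minimal sketch shows the conventional application proxy pattern described above, which the present disclosure seeks to avoid: a proxy P accepts the connection from A, opens its own TCP session to B, and copies bytes between the two sockets at the application level. The addresses, ports, and buffer size are hypothetical placeholders.

```python
# Minimal sketch of a conventional application-level TCP proxy (endpoint P).
# Illustrative only; addresses, ports and buffer size are hypothetical.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)        # where endpoint A connects (session A-P)
UPSTREAM_ADDR = ("198.51.100.10", 80)  # endpoint B (session P-B)

def pump(src, dst):
    """Copy application-level data from one socket to the other."""
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def handle(client_sock):
    upstream = socket.create_connection(UPSTREAM_ADDR)
    # Two independent transport sessions: A<->P and P<->B.
    threading.Thread(target=pump, args=(client_sock, upstream), daemon=True).start()
    pump(upstream, client_sock)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen(128)
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Every relayed byte traverses two full TCP stacks and crosses the kernel/user boundary twice, which is the per-flow overhead attributed to this architecture in [0012]-[0013].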

[0012] The proxy architecture carries significant performance penalties due to the need to maintain a transport (TCP or UDP) stack for all sessions flowing across the network, to copy data in order to relay it at the application level, and to convert data frames to application buffers and back.

[0013] As a result of these limitations, the proxy architecture limits the throughput of network optimization applications to 1-2 Gbps per standard Intel-based server, and the number of concurrently supported flows to tens of thousands. This performance limitation effectively blocks network optimization solutions from scaling to 10/40/100 Gbps networks in an economical fashion.

[0014] Thus, there exists a need for an alternative architecture for network optimization platforms that would eliminate the above bottlenecks of the application proxy architecture.

Brief Summary of the Disclosure



[0015] In accordance with one or more embodiments, a computer-implemented method is provided for transparently optimizing data transmission between a first endpoint and a second endpoint in a computer network. The endpoints have a directly established data session therebetween. The data session is identified by each endpoint at least to itself in the same way throughout the session. The method includes the steps of: relaying data between the endpoints transparently in the session using a network optimization service; and transparently modifying or storing at least some of the data transmitted from the second endpoint to the first endpoint using the network optimization service in order to optimize data communications between the endpoints, wherein transparently modifying at least some of the data comprises changing the data, replacing the data, or inserting additional data such that the first endpoint receives different data than was sent by the second endpoint.

[0016] In accordance with one or more further embodiments, an optimization service is provided for transparently optimizing data transmission between a first endpoint and a second endpoint in a computer network. The endpoints have a directly established data session therebetween. The data session is identified by each endpoint at least to itself in the same way throughout the session. The optimization service is configured to: relay data between the endpoints transparently in the session using a network optimization service; and transparently modify or store at least some of the data transmitted from the second endpoint to the first endpoint using the network optimization service in order to optimize data communications between the endpoints, wherein modification of data comprises changing the data, replacing the data, or inserting additional data such that the first endpoint receives different data than was sent by the second endpoint.

[0017] Various embodiments of the invention are provided in the following detailed description. As will be realized, the invention is capable of other and different embodiments, and its several details may be capable of modifications in various respects, all without departing from the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not in a restrictive or limiting sense, with the scope of the application being indicated in the claims.

Brief Description of the Drawings



[0018] 

FIG. 1 is a flow diagram illustrating creation of a session between endpoints in accordance with one or more embodiments.

FIGS. 2A and 2B (collectively FIG. 2) are simplified diagrams illustrating deployment of an optimization service in accordance with one or more embodiments.

FIG. 3 is a simplified diagram illustrating deployment of an optimization service operating in a tunnel in accordance with one or more embodiments.

FIG. 4 is a flow diagram illustrating response caching in accordance with one or more embodiments.

FIG. 5 is a flow diagram illustrating data modification in accordance with one or more embodiments.

FIG. 6 is a flow diagram illustrating new request introduction in accordance with one or more embodiments.

FIG. 7 is a simplified diagram illustrating an exemplary network architecture in accordance with one or more embodiments.



[0019] Like or identical reference characters are used to identify common or similar elements.

Detailed Description



[0020] Various embodiments disclosed herein are directed to a service for optimizing data transmission in a computer network between endpoints having a directly established session therebetween. The optimization service transparently modifies or stores at least some of the data transmitted between the endpoints or introduces a new request to an endpoint in order to optimize data communications between the endpoints. Each endpoint identifies the session to itself in the same way throughout the session.

[0021] As used herein, the term "network node" refers to any device connected to an IP-based network, including, without limitation, computer servers, personal computers (including desktop, notebook, and tablet computers), smart phones, and other network-connected devices.

[0022] As used herein, the term "endpoint" refers to an end point of a bi-directional inter-process communication flow across an IP-based network, residing on a network node connected to such network. Examples of endpoints include, without limitation, TCP sockets, SCTP sockets, UDP sockets, and raw IP sockets.

[0023] The optimization service operates as part of a device involved in relaying data between network nodes on an IP-based network. Examples of such devices include, without limitation, residential home gateways, WiFi hotspots, firewalls, routers, Metro Ethernet switches, optical switches, DPI devices, computer servers, application gateways, cable modem termination systems (CMTS), optical line terminals (OLT), broadband network gateways (BNG), broadband remote access servers (BRAS), DSL access multiplexers (DSLAM), gateway GPRS support nodes (GGSN), and PDN gateways (PGW).

[0024] As shown in FIGS. 1 and 2, two endpoints 'A' and 'B' on an IP-based computer network, e.g., an ISP subscriber and an Internet-based web server, establish a data session 'S' between each other. In the case of the TCP protocol, the session setup phase involves a TCP session handshake, including negotiation of network and transport parameters.

[0025] The session S between endpoints A and B involves data queries sent by A to B and, in some cases, data responses sent by B to A. The session may optionally include queries and responses sent by both endpoints.

[0026] Each respective endpoint typically identifies the data session S with at least a 5-tuple: the IP address and port of the local endpoint, the IP address and port of the remote endpoint, and the protocol used (e.g., TCP, UDP, or other). The definitions of the session S held by the endpoints A and B may not be identical in case of network address translation (NAT) taking place in the network between A and B.

[0027] The endpoints A and B optionally keep track of the data sent and received by counting bytes and/or frames sent and received. The endpoints A and B may further keep track of the data sent and received by the remote endpoint, for purposes of packet loss detection and retransmission, congestion avoidance, and congestion control, among others.

[0028] The identification of the session S by each respective endpoint does not change throughout the session lifetime.
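
By way of a hedged illustration only, a session identified by such a 5-tuple could be keyed and its relayed traffic counted as in the following sketch; the class and function names are hypothetical and not part of the disclosure.

```python
# Sketch: keying a session by its 5-tuple and counting relayed bytes/frames.
# Names are illustrative only; this is not an implementation of the claimed service.
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    local_ip: str
    local_port: int
    remote_ip: str
    remote_port: int
    protocol: str  # "TCP", "UDP", ...

@dataclass
class SessionCounters:
    bytes_sent: int = 0
    bytes_received: int = 0
    frames_sent: int = 0
    frames_received: int = 0

sessions: dict[FiveTuple, SessionCounters] = {}

def on_frame(key: FiveTuple, payload_len: int, outgoing: bool) -> None:
    ctr = sessions.setdefault(key, SessionCounters())
    if outgoing:
        ctr.bytes_sent += payload_len
        ctr.frames_sent += 1
    else:
        ctr.bytes_received += payload_len
        ctr.frames_received += 1
```

Note that, as stated in [0026], NAT between A and B means the two endpoints may hold different 5-tuples for the same session S.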

[0029] In accordance with one or more embodiments, following an establishment of the session S between endpoints A and B, an optimization service 'C' creates a transparent endpoint CA facing the endpoint A that appears to endpoint A as endpoint B, at both the network and transport levels, as defined by the TCP/IP model per RFC 1122.

[0030] As illustrated in FIG. 1, the service C may optionally create two transparent endpoints CA and CB, with the endpoint CA appearing to endpoint A as endpoint B, and the endpoint CB appearing to endpoint B as endpoint A.

[0031] In accordance with one or more embodiments, the service C creates transparent endpoints per [0035-0036] in only some sessions it processes, with the decision being taken by C based on at least one variable including, e.g., temporal information, ordinal information, frequency information, endpoint identification information, session identification information, network state information, and external policy information.

[0032] In accordance with one or more embodiments, the service C may relay all data frames in the session S between A and B, either by being in the data path between the session endpoints, or through the use of one or more dedicated redirection devices (e.g., a load balancer, router, DPI device, etc.) that sit in the data path and redirect specific data sessions to the service C, as depicted in FIGS. 2A and 2B.

[0033] In accordance with other embodiments, service C may relay only a portion of the data frames in the session between A and B. For example, the redirection device may redirect the session S to the service C starting from a certain frame within the session, using Layer 7 analysis of the session to determine whether the session should be redirected to service C.

[0034] When the service C relays data frames between endpoint A and endpoint B without creating transparent endpoints, it may do so at the physical level (e.g., by switching data frames from port to port), at the link level (e.g., by changing MAC addresses and/or VLAN tags), or by a combination of the above.
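
As a hedged illustration of link-level relaying, the sketch below rewrites the MAC addresses and an 802.1Q VLAN tag of a captured frame using scapy; the interface names, MAC addresses, and VLAN ID are hypothetical placeholders.

```python
# Sketch: relaying a frame at link level by rewriting MAC addresses / VLAN tag.
# Requires scapy; all concrete values below are placeholders.
from scapy.all import Ether, Dot1Q, sniff, sendp

EGRESS_IFACE = "eth1"              # hypothetical egress port
NEXT_HOP_MAC = "00:11:22:33:44:55"
OUR_MAC = "66:77:88:99:aa:bb"
EGRESS_VLAN = 200

def relay(frame):
    frame[Ether].src = OUR_MAC
    frame[Ether].dst = NEXT_HOP_MAC
    if Dot1Q in frame:
        frame[Dot1Q].vlan = EGRESS_VLAN   # retag for the egress segment
    sendp(frame, iface=EGRESS_IFACE, verbose=False)

# Forward everything arriving on the ingress port (hypothetical "eth0").
sniff(iface="eth0", prn=relay, store=False)
```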

[0035] In accordance with one or more embodiments, the service C may optionally perform a network address translation (NAT) of the session it processes.

[0036] As part of the relaying, C continuously tracks and stores the state of the connection (see the sketch following this list), including all or some of the variables from the following group:
  • static session identifiers (endpoint addresses and port numbers, transport protocol used)
  • dynamic transport state of each endpoint, including but not limited to sequence identifiers of data sent and acknowledged by each endpoint
  • negotiated transport attributes of the session and individual endpoints, including but not limited to TCP options, such as selective ACK, timestamp, scaled window and others
  • dynamic network-level attributes of data frames sent in each direction, including but not limited to IP DSCP, IP TOS, IPv6 flow label
  • dynamic link-level attributes of data frames sent in each direction, including but not limited to source and destination MAC addresses, 802.1Q VLAN tags, 802.1P priority bits, QinQ stacked VLAN tags
  • dynamic circuit-level attributes of data frames sent in each direction, including but not limited to identity of ingress and egress ports, physical port properties
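
A minimal sketch of a per-session state record covering the variables listed above; the field names, types, and defaults are hypothetical and chosen for illustration only.

```python
# Sketch: state tracked per relayed session (names are illustrative only).
from dataclasses import dataclass, field

@dataclass
class TrackedState:
    # static session identifiers
    five_tuple: tuple                      # (ip_a, port_a, ip_b, port_b, proto)
    # dynamic transport state per endpoint
    seq_a: int = 0                         # last sequence sent by A
    ack_a: int = 0                         # last data acknowledged by A
    seq_b: int = 0
    ack_b: int = 0
    # negotiated transport attributes (e.g. TCP options)
    tcp_options: dict = field(default_factory=dict)  # {"sack": True, "wscale": 7, ...}
    # dynamic network-level attributes
    dscp: int = 0
    ipv6_flow_label: int = 0
    # dynamic link-level attributes
    mac_src: str = ""
    mac_dst: str = ""
    vlan_tags: list = field(default_factory=list)    # 802.1Q / QinQ stack
    priority_bits: int = 0
    # dynamic circuit-level attributes
    ingress_port: str = ""
    egress_port: str = ""
```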


[0037] According to one or more embodiments, the service C provides data modification and caching services for one or more data sessions S' between A and B that traverse the service C in a tunnel established between two endpoints T1 and T2, as illustrated in FIG. 3.

[0038] Tunnel protocols supported by the service C can include, but are not limited to, L2TP, PPPoE, PPPoA, GRE, GTP-U, IP in IP, MPLS, Teredo, 6RD, 6to4, and PMIP.

[0039] According to one or more embodiments, as shown in FIG. 3, the service C tracks the state of the tunneled session between endpoints T1 and T2, across multiple connections between endpoints A and B that traverse the tunnel.

[0040] As discussed in further detail below, the service C in accordance with various embodiments provides a number of session modification and other capabilities, including (a) data response caching (b) modification of data queries and data responses (c) introduction of new requests.

(a) Response Caching



[0041] Following an establishment of data session S between endpoints A and B as described above in [0030-0034], the data query from endpoint A to endpoint B reaches the service C.

[0042] According to one or more embodiments, the service C analyzes the data query to match it with previously stored data responses. To do so, C analyzes the query received from endpoint A based on at least one variable, selected from the group consisting of temporal information, ordinal information, frequency information, client information, and identification information.

[0043] If a matching response is found in storage, C delivers the stored response to the endpoint A by itself.

[0044] According to one or more embodiments, should a matching response be previously stored by the service C, the service C does not relay the query to the endpoint B, but rather responds to the query by itself.

[0045] In accordance with one or more alternate embodiments, the service C relays the query received from endpoint A to endpoint B, receives a response or portion of it from endpoint B, and matches the data query received from endpoint A and the data response, or portion of it, received from endpoint B, against data responses previously stored by C.

[0046] In this case, should a matching stored response be identified by the service C, it delivers the stored response, or a portion of it, to the endpoint A, and blocks relaying of the response received from endpoint B.
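
A hedged sketch of the caching decision described in [0042]-[0046]: look the query up in local storage; on a hit, answer on behalf of B and either withhold the query or block B's own response; on a miss, relay transparently. The cache key derivation and helper names are hypothetical.

```python
# Sketch of the response-caching decision flow in service C.
# Illustrative only: lookup keys, helpers and policies are hypothetical.
stored_responses: dict[bytes, bytes] = {}   # cache: normalized query -> response

def cache_key(query: bytes) -> bytes:
    # Hypothetical normalization; real matching may also use temporal,
    # ordinal, frequency, client or identification information.
    return query.strip().lower()

def handle_query(query: bytes, send_to_a, relay_to_b) -> None:
    hit = stored_responses.get(cache_key(query))
    if hit is not None:
        send_to_a(hit)          # C answers on behalf of B ...
        return                  # ... and does not relay the query to B
    relay_to_b(query)           # cache miss: relay transparently

def handle_response(query: bytes, response: bytes, send_to_a) -> None:
    # Alternate mode ([0045]-[0046]): the query was relayed; C may still
    # serve a stored response and block the one arriving from B.
    hit = stored_responses.get(cache_key(query))
    send_to_a(hit if hit is not None else response)
    stored_responses.setdefault(cache_key(query), response)
```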

[0047] When the data response is delivered by C, C may cause endpoint B to terminate the data session S on its end or to stall delivery of its response.

[0048] When sending new data frames to endpoints A and B within session S, i.e., frames that were not received from the opposite endpoint, the service C utilizes the IP address and port of the opposite endpoint as well as the session state that is continuously stored by it, as described above in [0042].

[0049] Assume that endpoints A and B started sequencing their data streams in session S with X0 and Y0, respectively, as depicted in FIG. 4. By the time service C receives the data query from endpoint A that C responds to, service C may have relayed NA bytes of data from A to B and NB bytes of data from B to A, where NA and NB can each be greater than or equal to zero.

[0050] Service C keeps track of the sequences of both endpoints A and B and of the data acknowledged by each endpoint. When service C starts delivering its response to endpoint A, it starts sequencing its data with Y0+NB, in continuation of the data sequence used by endpoint B earlier, while expecting new data from endpoint A starting from X0+NA, in continuation of the sequence sent by endpoint A earlier. It can be said that C initializes an endpoint CA with TCP sequence number Y0+NB and acknowledgement number X0+NA.

[0051] As a result, the data delivered by C appears to endpoint A as a seamless continuation of the session S between A and B.
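
The sequence arithmetic of [0049]-[0051] can be illustrated with a hedged scapy sketch: C emits a segment toward A that carries B's IP address and port as its source, a sequence number of Y0+NB, and an acknowledgement number of X0+NA, so that it reads as a seamless continuation of session S. Every concrete address, port, and counter below is a hypothetical placeholder.

```python
# Sketch: sending C's own data to endpoint A as a continuation of session S.
# Requires scapy; every concrete value below is a placeholder.
from scapy.all import IP, TCP, Raw, send

IP_A, PORT_A = "192.0.2.10", 51000     # endpoint A (hypothetical)
IP_B, PORT_B = "198.51.100.20", 80     # endpoint B (hypothetical)

Y0, NB = 1_000_000, 4_380              # B's initial sequence and bytes already relayed B->A
X0, NA = 2_000_000, 512                # A's initial sequence and bytes already relayed A->B

payload = b"previously stored response data"

segment = (
    IP(src=IP_B, dst=IP_A) /                   # appears to come from B
    TCP(sport=PORT_B, dport=PORT_A,
        flags="PA",
        seq=Y0 + NB,                           # continue B's byte sequence
        ack=X0 + NA) /                         # acknowledge A's data so far
    Raw(load=payload)
)
send(segment, verbose=False)                   # checksums are recomputed on build
```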

[0052] If packets sent by endpoint B to endpoint A, sequenced between Y0 and Y0+NB, are lost in the network segment between C and A, endpoint A responds by sending back frames with an acknowledgment sequence smaller than Y0+NB after service C has started sending its own data sequenced at Y0+NB and higher. According to one or more embodiments, C relays such packets to endpoint B, causing endpoint B to retransmit the lost packets. In this case, C shall relay back to endpoint A only the data in the range between Y0 and Y0+NB.

[0053] Similarly, when sending its own data (i.e., not data received from the other endpoint) to endpoint B, the service C utilizes the current state of endpoint A within session S, as seen by service C.

[0054] The description of the sequencing of sent and received data done by service C in [0054-0059] applies equally to TCP-like semantics based on individual bytes of data as well as to other semantics, including but not limited to sequencing of individual frames exchanged between the two endpoints.

[0055] Service C can apply the same method of sequencing data as described in [0054-0059] to multiple protocol layers within the same session, including but not limited to a TCP/IP session over PPP and PPP-like protocols, a TCP/IP session over a UDP/IP tunnel, or a session created in an IPv6-over-IPv4 tunnel, utilizing the data stored using multi-level session tracking as described above in [0043-0044].

[0056] According to one or more embodiments, to deliver the previously stored data response or other data, the service C transparently creates a transport endpoint CA (e.g., TCP/IP or UDP/IP socket), allowing it to deal with packet loss and retransmission, congestion detection and avoidance, and other aspects of transport data transmission, as done by the endpoints A and B.

[0057] Service C may create a single endpoint CA facing endpoint A, or a pair of endpoints CA and CB, facing A and B respectively. The endpoint CA facing endpoint A shall have the address of the opposite endpoint B (IP address IPB and port PB per [0054]) and the transport state of endpoint B as stored by service C as a result of session tracking prior to creation of endpoint CA. In the same way, the endpoint CB shall have the attributes of endpoint A (IP address IPA and port PA).

[0058] According to one or more embodiments, the service C stores data queries and data responses as they are relayed between endpoints A and B, without becoming a transport-level endpoint.

[0059] According to other embodiments, the service C may retrieve the data responses from one of the endpoints, or receive them from another data source.

[0060] According to one or more embodiments, the service C may respond to data queries from both endpoints A and B.

[0061] According to one or more embodiments, the service C responds to data queries from endpoints A and/or B, based on at least one variable from the following group: configuration information, temporal information, frequency information, ordinal information, system load information, network state information, client information and identification information.

[0062] According to one or more embodiments, endpoint A sends query QA1 to endpoint B to which service C responds by sending previously stored response RC1. Upon receiving response RC1, endpoint A sends another data query QA2. If service C does not have a matching response to the query QA2 stored, it relays the query to endpoint B, receives response RB2 and relays it to endpoint A.

[0063] As a result of the response RC1 delivered to endpoint A by service C, the counters of sent and received data of endpoints A and B may be in disagreement. To allow switching back to relay mode, where queries and responses are again relayed between A and B, service C performs an ongoing modification of sequences for the data frames it relays between A and B, as shown, e.g., in FIG. 4. For example, when request QA2 is received from endpoint A, endpoint A reports receiving data up to Y3', which reflects the data received from endpoint CA as part of the RC1 response. At the same time, endpoint B has sent data only up to sequence Y2, as part of its communication with endpoint CB.

[0064] Similarly, the counters of the data that endpoint B has received and endpoint A has sent (X3' and X4, respectively) do not match either.

[0065] To eliminate the delta between Y3' and Y2, as well as between X3' and X4, service C modifies the sequences of sent and received data when relaying data between A and B, in both directions.
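
A hedged sketch of the ongoing sequence adjustment described in [0063]-[0065]: once C has injected its own response, it stores one offset per direction and rewrites the sequence and acknowledgement fields of every frame it subsequently relays so that the counters of A and B agree again. The class and variable names are hypothetical.

```python
# Sketch: rewriting sequence/acknowledgement numbers when switching back to
# relay mode after C answered a query itself. Names and offsets are illustrative.
SEQ_MOD = 2 ** 32   # TCP sequence space wraps at 32 bits

class SeqAdjuster:
    def __init__(self, delta_b_to_a: int, delta_a_to_b: int):
        # delta_b_to_a: bytes C delivered to A on B's behalf that B never sent
        # delta_a_to_b: bytes A sent that C absorbed and did not relay to B
        self.delta_b_to_a = delta_b_to_a
        self.delta_a_to_b = delta_a_to_b

    def rewrite_b_to_a(self, seq: int, ack: int) -> tuple[int, int]:
        # Frames relayed from B toward A: shift B's sequence forward and
        # shift B's acknowledgement of A's data forward by the absorbed bytes.
        return ((seq + self.delta_b_to_a) % SEQ_MOD,
                (ack + self.delta_a_to_b) % SEQ_MOD)

    def rewrite_a_to_b(self, seq: int, ack: int) -> tuple[int, int]:
        # Frames relayed from A toward B: the inverse adjustment.
        return ((seq - self.delta_a_to_b) % SEQ_MOD,
                (ack - self.delta_b_to_a) % SEQ_MOD)
```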

[0066] Furthermore, endpoint A may initiate another query QA3, which can be replied to by service C, using previously stored response RC3.

[0067] In other words, service C may alternate between responding to endpoint queries from one or both endpoints, and relaying queries and responses between two endpoints.

(b) Queries and Responses Modification



[0068] According to one or more embodiments, following establishment of session S, as described above in [0028-0031], the service C modifies data queries and/or data responses as relayed between two endpoints A and B, as illustrated by way of example in FIG. 5.

[0069] In accordance with one or more embodiments, service C does not utilize a transport endpoint for purposes of sending the modified data, but rather continues to track the transport state of endpoint A and B and relies on the sending endpoint to re-send the data in case of packet loss.

[0070] As part of the modification of the relayed data, service C may need to recalculate the protocol checksums of the frames to reflect the new payload.
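
For example, with scapy (a hedged sketch; a production implementation might instead update checksums incrementally), deleting the stored checksum and length fields before re-serializing a modified packet causes them to be recomputed over the new payload:

```python
# Sketch: refreshing IP/TCP checksums after modifying a relayed payload.
# Requires scapy; the payload modification itself is a placeholder.
from scapy.all import IP, TCP, Raw

def modify_and_refresh(pkt, new_payload: bytes):
    pkt[Raw].load = new_payload
    # Invalidate fields derived from the payload so scapy recomputes them.
    del pkt[IP].len
    del pkt[IP].chksum
    del pkt[TCP].chksum
    return pkt.__class__(bytes(pkt))   # rebuild with fresh length and checksums
```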

[0071] When C modifies data sent from A to B, packet loss of the modified data may occur between C and B. In this case, service C relays the data frames reflecting such loss from B to A, causing endpoint A to retransmit the lost frames. C tracks the retransmitted frames using the stored session state information and re-applies the modification to them.

[0072] According to other embodiments, to deliver modified data to endpoint B, service C creates a new transport endpoint CB facing endpoint B. Such endpoint CB utilizes IP address IPA and port PA of endpoint A, and relays the modified data in continuation of the frames previously relayed from endpoint A to endpoint B.

[0073] When service C creates an endpoint CB to deliver modified data to endpoint B, service C may optionally create an endpoint CA to facilitate communication with endpoint A, for example, for purposes of receiving data responses from it. Similarly to the endpoint CB, the endpoint CA utilizes IP address IPB and port PB of the opposite endpoint B, and communicates with endpoint A in continuation of the frames previously relayed from endpoint B to endpoint A.

[0074] Upon completion of delivery of the modified data, service C may fall back to relaying frames between A and B, while making necessary adjustments for sequences of sent and received data, as described in [0069-0071].

[0075] According to one or more embodiments, service C modifies data queries and/or responses relayed from endpoint A to endpoint B, as part of negotiation of endpoint capabilities, in order to affect format, protocol, or other attribute of session S.

[0076] According to one or more embodiments, service C modifies parameters of a data query sent by endpoint A to negate a capability reported by endpoint A. For example, service C may modify the capability to receive a response in a compressed format reported by A, causing the opposite endpoint B to transmit its response in a compressed format.

[0077] Service C subsequently receives the compressed response RB10, modifies it by decompressing the payload, and delivers it to endpoint A in a modified form, resulting in optimization of the network between B and C and improved performance.
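
A hedged sketch of this compression example, assuming HTTP semantics over the session and gzip compression (header handling is deliberately simplified and the helper names are hypothetical): C overrides the compression capability in A's query so that B replies compressed, and decompresses B's reply before handing it to A.

```python
# Sketch (assumes HTTP over the session and gzip compression): C overrides the
# compression capability in A's request so that B replies compressed, then
# decompresses B's reply before delivering it to A. Header handling simplified.
import gzip

def force_compressed_transfer(request: bytes) -> bytes:
    head, _, body = request.partition(b"\r\n\r\n")
    lines = [line for line in head.split(b"\r\n")
             if not line.lower().startswith(b"accept-encoding:")]
    lines.append(b"Accept-Encoding: gzip")        # override A's reported capability
    return b"\r\n".join(lines) + b"\r\n\r\n" + body

def decompress_for_a(response_body: bytes) -> bytes:
    # B sent gzip because of the modified query; A receives plain data.
    return gzip.decompress(response_body)
```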

[0078] According to one or more embodiments, service C modifies the response RB11 received from endpoint B, including but not limited to rendering of textual data in a different format, image adaptation to endpoint device capabilities, change in video quality, and transcoding of audio and/or video data into a different format, among others.

[0079] The modification of responses as described in [0084] can be done for a number of purposes, including improving utilization of network resources between the service C and the endpoint receiving the modified data, adapting the data responses to the endpoint application capabilities, improving application performance, among others.

[0080] According to one or more embodiments, the service C may modify data responses relayed between endpoints A and B that pertain to data items or portions of data items available at one or both endpoints, e.g., as utilized in peer-to-peer protocols like BitTorrent, eDonkey, and others.

(c) Introduction of New Requests



[0081] According to one or more embodiments, the service C may introduce new requests to endpoint A and/or endpoint B within session S, in addition to and/or instead of queries sent by the respective endpoints, as depicted in FIG. 6.

[0082] According to one or more embodiments, the service C may utilize an endpoint approach to transmission of new queries and reception of responses from endpoints A and B, as described in [0035-0037] and [0054-0059].

[0083] According to one or more embodiments, the service C combines, in the same session S, caching of responses with response modification, introduction of new requests with response caching, and relaying of data between the endpoints.

[0084] According to one or more embodiments, the service C modifies data availability information as reported by one or both endpoints to improve the cache hit ratio of service C, by including in it such data items (or portions of items) that are stored by the service C and/or excluding such data items (or portions of items) that are not.

[0085] According to additional embodiments, the service C modifies the data availability information as reported by one or both endpoints to force the endpoints to transfer such data items (or portions of items) that are currently not stored by the service C, as a way to populate the cache managed by the service C.
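
A hedged sketch, assuming a BitTorrent-style piece bitfield (most-significant bit first within each byte): to raise its hit ratio, C can trim a peer's advertisement to the pieces it has cached ([0084]); to populate its cache, it can instead hide the cached pieces so that the uncached ones are requested and transferred through it ([0085]). The helper names and framing are hypothetical.

```python
# Sketch: rewriting a peer's piece-availability bitfield (BitTorrent-style).
# Illustrative only; message framing and piece indexing are simplified.
def advertise_only_cached(bitfield: bytearray, cached: set[int]) -> bytearray:
    mask = bytearray(len(bitfield))
    for piece in cached:
        mask[piece // 8] |= 0x80 >> (piece % 8)
    # keep only pieces that are both reported available and held in C's cache
    return bytearray(a & b for a, b in zip(bitfield, mask))

def hide_cached_pieces(bitfield: bytearray, cached: set[int]) -> bytearray:
    out = bytearray(bitfield)
    for piece in cached:
        out[piece // 8] &= ~(0x80 >> (piece % 8)) & 0xFF
    # the requesting peer now asks for the remaining (uncached) pieces,
    # which C can store as they pass through the session
    return out
```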

[0086] According to one or more embodiments, the service C modifies data queries between endpoint A and endpoint B to disable use of end-to-end encryption, to allow subsequent caching of data responses.

[0087] According to one or more embodiments, the service C stores modified data responses as delivered by it to the endpoints, and may retrieve a stored copy of modified data response, rather than perform the modification on the fly.

[0088] According to one or more embodiments, the service C may deliver a data response stored through the response caching mechanism as described above, if the stored copy of the data response matches the needs for modification.

[0089] According to one or more embodiments, the service C may utilize a stored copy of a data response, stored through a response storing mechanism as described above, as an input for data modification, rather than allowing the full data response to be delivered from the endpoint B.

[0090] According to one or more embodiments, the service C may introduce new requests into session S in order to trigger endpoint responses needed for optimal response caching.

[0091] Such data responses may include, but are not limited to, missing portions of content objects already stored by service C, content objects that have been identified as popular but have not yet been stored by service C, and content objects associated with other objects known to the service C (e.g., objects referenced by an HTML page, additional playback levels for adaptive bitrate video, etc.).

[0092] According to one or more embodiments, a system is provided for transparent modification of at least one data communications session between two endpoints A and B, in a way that requires endpoints A and B to first establish a data session between each other, the system including at least one node of an IP network designed and configured to provide at least one of the services (a) to (c) described above.

[0093] According to one or more embodiments, the optimization system can reside in single or multiple service provider networks, dedicated hosting locations, datacenters, enterprise premises, or residential premises, as described with reference to FIG. 7 below.

[0094] According to one or more embodiments, the system comprises multiple components in different physical locations.

[0095] According to one or more embodiments, multiple systems can reside in the data path of same connection S between two endpoints A and B in series.

[0096] According to one or more embodiments, the optimization service can operate at the same network node, on which one of the endpoints resides.

[0097] According to one or more embodiments, multiple optimization services can operate in series, as illustrated in Fig. 7.

[0098] According to one or more embodiments, multiple optimization services can operate in parallel, e.g., as part of load balancing of sessions performed by a redirecting device.

[0099] According to one or more embodiments, multiple instances of optimization services can operate in series and/or in parallel, wherein each instance of optimization carries out different and/or same data modification and storing operations.

[0100] According to one or more embodiments, multiple instances of optimization services can operate in series and/or in parallel, wherein each instance of optimization carries out different data modification and storing operations, in coordination with one another.

[0101] The processes of the optimization service described above may be implemented in software, hardware, firmware, or any combination thereof. The processes are preferably implemented in one or more computer programs executing on a programmable device including a processor, a storage medium readable by the processor (including, e.g., volatile and non-volatile memory and/or storage elements), and input and output devices. Each computer program can be a set of instructions (program code) in a code module resident in the random access memory of the device. Until required by the device, the set of instructions may be stored in another computer memory (e.g., in a hard disk drive, or in a removable memory such as an optical disk, external hard drive, memory card, or flash drive) or stored on another computer system and downloaded via the Internet or other network.


Claims

1. A computer-implemented method for transparently optimizing data transmission between a first endpoint (A) and a second endpoint (B) in a computer network, the endpoints (A, B) having a directly established data session (S) therebetween, the data session being identified by each endpoint at least to itself in the same way throughout the session, the method comprising:

relaying application level data between the endpoints (A, B) transparently in the session using a network optimization service (C), wherein the step of relaying data includes creating a first transparent endpoint (Ca) facing the first endpoint (A), wherein the first transparent endpoint (Ca) appears to the first endpoint (A) as the second endpoint (B), such that the first endpoint (A) communicates with the first transparent endpoint (Ca); and

transparently modifying parameters of at least some of the application level data transmitted from the second endpoint (B) to the first endpoint (A) using the network optimization service (C) to modify a capability to receive a response in a compressed format reported by the first endpoint (A) to cause the second endpoint (B) to transmit a response in a compressed format and modifying a compressed response from the second endpoint (B) to the first endpoint (A) by decompressing the response in order to optimize data communications between the endpoints (A, B), wherein transparently modifying at least some of the application level data further comprises changing the application level data, replacing the application level data, or inserting additional application level data such that the first endpoint (A) receives different application level data than was sent by the second endpoint (B).


 
2. The method of claim 1, wherein the session is identified by identifiers including an IP address and transport port of the first endpoint (A), an IP address and transport port of the second endpoint (B), and a transport protocol used, and wherein the identifiers do not change throughout a lifetime of the session.
 
3. The method of claim 1 or 2, wherein the step of relaying data is performed by tracking the transport state of the endpoints (A, B), and relying on the endpoint sending data to resend data in case of packet loss, and modifying the resent data to be sent to the second endpoint (B).
 
4. The method as claimed in any one or more of the preceding claims, wherein the step of relaying data includes receiving data communicated between the endpoints (A, B) through a redirection device, and wherein the step of relaying data comprises relaying only a portion of data frames in the session between the first and second endpoints (A, B).
 
5. The method as claimed in any one or more of the preceding claims, wherein the step of relaying data generally includes continuously tracking and storing a state of the session, including tracking what data has been sent and received by each endpoint and transport-level attributes, link-level attributes, or network-level attributes of the session,
wherein tracking includes monitoring multiple tiers of tunneling.
 
6. The method as claimed in any one or more of the preceding claims, wherein the step of transparently modifying comprises analyzing a data query from the first endpoint (A) directed to the second endpoint (B) and/or a data response to the data query received from the second endpoint (B), matching the data query and the data response with a previously stored data response, and delivering the previously stored data response to the first endpoint (A), while blocking relaying of the data response from the second endpoint (B),
wherein the method further comprises causing the second endpoint (B) to terminate the session on its end or to stall delivery of the data response, and
wherein the step of delivering the previously stored data response comprises utilizing the IP and port address of the second endpoint (B) and the session state of the session as a continuation of the session between the first and second endpoints (A, B).
 
7. The method of claim 6, wherein the step of matching is based on at least one variable selected from the group consisting of temporal information, ordinal information, frequency information, client information, and identification information.
 
8. The method as claimed in any one or more of the preceding claims, wherein the step of transparently modifying comprises:

analyzing a first data query from the first endpoint (A) directed to the second endpoint (B), matching the first data query with a previously stored data response, and delivering the previously stored data response to the first endpoint (A) through a separate transparent endpoint; and

receiving a second data query from the first endpoint (A) directed to the second endpoint (B), determining that a data response responding to the second data query has not been stored at the optimization service, relaying the second data query to the second endpoint (B), receiving the data response corresponding to the second query from the second endpoint (B), and relaying the data response from the second endpoint (B) to the first endpoint (A) without use of a separate transparent endpoint, while modifying the transport parameters of such response so that it appears as a continuation of the session between the first and second endpoints (A, B).


 
9. The method as claimed in any one or more of the preceding claims, wherein the step of transparently modifying data comprises rendering textual data in a different format, adapting data to endpoint device capabilities, changing video quality, or transcoding audio and/or video data into a different format.
 
10. The method as claimed in any one or more of the preceding claims, wherein the step of relaying data is performed by tracking the transport state of the endpoints (A, B), and relying on the endpoint sending data to resend data in case of packet loss, and modifying the resent data to be sent to the second endpoint (B).
 
11. The method as claimed in any one or more of the preceding claims, where the step of transparently modifying data comprises:

modifying parameters of a data query or response from one endpoint to another to affect an attribute of subsequent data queries or responses so that the cacheability of data queries in the session is improved; and

utilizing previously stored data responses for subsequent modification of a data response; or optionally

wherein the step of transparently modifying data comprises modifying data availability responses reported by an endpoint to improve a cache hit ratio by the optimization service or to cause an endpoint to transfer such data items that are currently not stored at the optimization service, if the endpoints (A, B) exchange information on data availability.
 
12. An optimization service for transparently optimizing data transmission between a first endpoint (A) and a second endpoint (B) in a computer network, the endpoints (A, B) having a directly established data session (S) therebetween, the data session being identified by each endpoint at least to itself in the same way throughout the session, the optimization service being configured to:

relay application level data between the endpoints (A, B) transparently in the session using a network optimization service (C), wherein the step of relaying data includes creating a first transparent endpoint (Ca) facing the first endpoint (A), wherein the first transparent endpoint (Ca) appears to the first endpoint (A) as the second endpoint (B), such that the first endpoint (A) communicates with the first transparent endpoint (Ca); and

transparently modify at least some of the application level data transmitted from the second endpoint (B) to the first endpoint (A) using the network optimization service (C) to modify a capability to receive a response in a compressed format reported by the first endpoint (A) to cause the second endpoint (B) to transmit a response in a compressed format and modify a compressed response from the second endpoint (B) to the first endpoint (A) by decompressing the response in order to optimize data communications between the endpoints (A, B), wherein to transparently modify at least some of the application level data further comprises to change the application level data, to replace the application level data, or to insert additional application level data such that the first endpoint (A) receives different application level data than was sent by the second endpoint (B).


 
13. The optimization service of claim 12 wherein:

the optimization service is implemented in a device involved in relaying of data between network nodes on an IP-based network;

the optimization service operates on a network node, on which one of the endpoints (A, B) resides;

the optimization service comprises one or more optimization service instances operating in series or operating in parallel; and

each instance of the optimization service carries out different and/or same data modification and storing operations, in coordination with one another.


 


Ansprüche

1. Computerimplementiertes Verfahren zum transparenten Optimieren von Datenübertragungen zwischen einem ersten Endpunkt (A) und einem zweiten Endpunkt (B) in einem Computernetzwerk, wobei die Endpunkte (A, B) eine direkt aufgebaute Datensitzung (S) dazwischen haben, wobei die Datensitzung wenigstens sich selbst von jedem Endpunkt auf dieselbe Weise während der Sitzung identifiziert wird, wobei das Verfahren Folgendes beinhaltet:

Weiterleiten von Daten auf Anwendungsebene zwischen den Endpunkten (A, B) auf transparente Weise in der Sitzung mit einem Netzwerkoptimierungsdienst (C), wobei der Schritt des Weiterleitens von Daten das Erzeugen eines ersten transparenten Endpunkts (Ca) beinhaltet, der dem ersten Endpunkt (A) zugewandt ist, wobei der erste transparente Endpunkt (Ca) dem ersten Endpunkt (A) als zweiter Endpunkt (B) erscheint, so dass der erste Endpunkt (A) mit dem ersten transparenten Endpunkt (Ca) kommuniziert; und

transparentes Modifizieren von Parametern von wenigstens einigen der mit dem Netzwerkoptimierungsdienst (C) vom zweiten Endpunkt (B) zum ersten Endpunkt (A) übertragenen Daten auf Anwendungsebene, um eine Fähigkeit zum Empfangen einer Antwort in einem komprimierten Format wie vom ersten Endpunkt (A) gemeldet zu modifizieren, um zu bewirken, dass der zweite Endpunkt (B) eine Antwort in einem komprimierten Format überträgt, und Modifizieren einer komprimierten Antwort vom zweiten Endpunkt (B) an den ersten Endpunkt (A) durch Dekomprimieren der Antwort, um Datenkommunikationen zwischen den Endpunkten (A, B) zu optimieren, wobei das transparente Modifizieren von wenigstens einigen der Daten auf Anwendungsebene ferner das Ändern der Daten auf Anwendungsebene, das Ersetzen der Daten auf Anwendungsebene oder das Einfügen zusätzlicher Daten auf Anwendungsebene beinhaltet, so dass der erste Endpunkt (A) andere Daten auf Anwendungsebene empfängt als die, die vom zweiten Endpunkt (B) gesendet wurden.


 
2. Verfahren nach Anspruch 1, wobei die Sitzung durch Kennungen einschließlich einer IP-Adresse und eines Transport-Ports des ersten Endpunkts (A), einer IP-Adresse und eines Transport-Ports des zweiten Endpunkts (B) und eines benutzten Transportprotokolls identifiziert wird und wobei sich die Kennungen während einer Lebensdauer der Sitzung nicht ändern.
 
3. Verfahren nach Anspruch 1 oder 2, wobei der Schritt des Weiterleitens von Daten durch Verfolgen des Transportzustands der Endpunkte (A, B) und Verlassen darauf, dass der Endpunkt Daten zum Neusenden von Daten im Falle von Paketverlust sendet, und Modifizieren der neugesendeten Daten durchgeführt wird, die zum zweiten Endpunkt (B) zu senden sind.
 
4. Verfahren nach einem oder mehreren der vorherigen Ansprüche, wobei der Schritt des Weiterleitens von Daten das Empfangen von Daten beinhaltet, die zwischen den Endpunkten (A, B) durch ein Umleitungsgerät übermittelt wurden, und wobei der Schritt des Weiterleitens von Daten das Weiterleiten nur eines Teils von Daten-Frames in der Sitzung zwischen dem ersten und zweiten Endpunkt (A, B) beinhaltet.
 
5. Verfahren nach einem oder mehreren der vorherigen Ansprüche, wobei der Schritt des Weiterleitens von Daten allgemein das kontinuierliche Verfolgen und Speichern eines Zustands der Sitzung beinhaltet, einschließlich des Verfolgens, welche Daten gesendet und von jedem Endpunkt empfangen wurden, sowie von Transport-Ebenen-Attributen, Link-Ebenen-Attributen oder Netzwerkebenen-Attributen der Sitzung,
wobei das Verfolgen das Überwachen mehrerer Tunnelungsstufen beinhaltet.
 
6. Verfahren nach einem oder mehreren der vorherigen Ansprüche, wobei der Schritt des transparenten Modifizierens Folgendes beinhaltet: Analysieren einer Datenabfrage vom ersten Endpunkt (A), die an den zweiten Endpunkt (B) gerichtet ist, und/oder einer Datenantwort auf die vom zweiten Endpunkt (B) empfangene Datenabfrage, Abgleichen der Datenabfrage und der Datenantwort mit einer zuvor gespeicherten Datenantwort, und Liefern der zuvor gespeicherten Datenantwort zum ersten Endpunkt (A), während das Weiterleiten der Datenantwort vom zweiten Endpunkt (B) blockiert wird,
wobei das Verfahren ferner das Bewirken beinhaltet, dass der zweite Endpunkt (B) die Sitzung an seinem Ende beendet oder die Lieferung der Datenantwort stoppt, und
wobei der Schritt des Lieferns der zuvor gespeicherten Datenantwort das Benutzen der IP- und Port-Adresse des zweiten Endpunkts (B) und des Sitzungszustands der Sitzung als eine Fortsetzung der Sitzung zwischen dem ersten und zweiten Endpunkt (B) beinhaltet.
 
7. Verfahren nach Anspruch 6, wobei der Abgleichschritt auf wenigstens einer Variablen basiert, ausgewählt aus der Gruppe bestehend aus zeitlichen Informationen, ordinalen Informationen, Frequenzinformationen, Client-Informationen und Identifikationsinformationen.
 
8. Verfahren nach einem oder mehreren der vorherigen Ansprüche, wobei der Schritt des transparenten Modifizierens Folgendes beinhaltet:

Analysieren einer ersten Datenabfrage vom ersten Endpunkt (A), die an den zweiten Endpunkt (B) gerichtet ist, Abgleichen der ersten Datenabfrage mit einer zuvor gespeicherten Datenantwort und Liefern der zuvor gespeicherten Datenantwort zum ersten Endpunkt (A) durch einen separaten transparenten Endpunkt; und

Empfangen einer zweiten Datenabfrage vom ersten Endpunkt (A), die an den zweiten Endpunkt (B) gerichtet ist, Feststellen, dass eine auf die zweite Datenabfrage antwortende Datenantwort nicht am Optimierungsdienst gespeichert wurde, Weiterleiten der zweiten Datenabfrage zum zweiten Endpunkt (B), Empfangen der Datenantwort entsprechend der zweiten Abfrage vom zweiten Endpunkt (B) und Weiterleiten der Datenantwort vom zweiten Endpunkt (B) zum ersten Endpunkt (A), ohne einen separaten transparenten Endpunkt zu benutzen, während die Transportparameter einer solchen Antwort modifiziert werden, so dass sie als Fortsetzung der Sitzung zwischen dem ersten und zweiten Endpunkt (B) erscheint.


 
9. Verfahren nach einem oder mehreren der vorherigen Ansprüche, wobei der Schritt des transparenten Modifizierens von Daten das Rendern von Textdaten in einem anderen Format, das Adaptieren von Daten an Endpunktgerätefähigkeiten, das Ändern von Videoqualität oder das Transcodieren von Audio- und/oder Videodaten in ein anderes Format beinhaltet.
 
10. Verfahren nach einem oder mehreren der vorherigen Ansprüche, wobei der Schritt des Weiterleitens von Daten durch Verfolgen des Transportzustands der Endpunkte (A, B) und Verlassen darauf, dass der Endpunkt Daten zum Neusenden von Daten im Falle von Paketverlust sendet, und Modifizieren der neugesendeten Daten durchgeführt wird, die zum zweiten Endpunkt (B) zu senden sind.
 
11. Verfahren nach einem oder mehreren der vorherigen Ansprüche, wobei der Schritt des transparenten Modifizierens von Daten Folgendes beinhaltet:

Modifizieren von Parametern einer Datenabfrage oder Antwort von einem Endpunkt zum anderen, um ein Attribut von nachfolgenden Datenabfragen oder Antworten zu beeinflussen, so dass die Cache-Fähigkeit von Datenabfragen in der Sitzung verbessert wird; und

Benutzen von zuvor gespeicherten Datenantworten für eine nachfolgende Modifikation einer Datenantwort; oder wobei optional

der Schritt des transparenten Modifizierens von Daten das Modifizieren von Datenverfügbarkeitsantworten beinhaltet, gemeldet von einem Endpunkt zum Verbessern eines Cache-Hit-Verhältnisses durch den Optimierungsdienst, oder zum Bewirken, dass ein Endpunkt solche Datenelemente überträgt, die derzeit nicht am Optimierungsdienst gespeichert sind, wenn die Endpunkte (A, B) Informationen über Datenverfügbarkeit austauschen.


 
12. Optimierungsdienst zum transparenten Optimieren von Datenübertragungen zwischen einem ersten Endpunkt (A) und einem zweiten Endpunkt (B) in einem Computernetzwerk, wobei die Endpunkte (A, B) eine direkt aufgebaute Datensitzung (S) dazwischen haben, wobei die Datensitzung wenigstens sich selbst von jedem Endpunkt auf dieselbe Weise während der gesamten Sitzung identifiziert wird, wobei der Optimierungsdienst konfiguriert ist zum:

Weiterleiten von Daten auf Anwendungsebene zwischen den Endpunkten (A, B) auf transparente Weise in der Sitzung mit einem Netzwerkoptimierungsdienst (C), wobei der Schritt des Weiterleitens von Daten das Erzeugen eines ersten transparenten Endpunkts (Ca) beinhaltet, der dem ersten Endpunkt (A) zugewandt ist, wobei der erste transparente Endpunkt (Ca) dem ersten Endpunkt (A) als der zweite Endpunkt (B) erscheint, so dass der erste Endpunkt (A) mit dem ersten transparenten Endpunkt (Ca) kommuniziert; und

transparentes Modifizieren wenigstens einiger der vom zweiten Endpunkt (B) zum ersten Endpunkt (A) mit dem Netzwerkoptimierungsdienst (C) übertragenen Daten auf Anwendungsebene, um eine Fähigkeit zum Empfangen einer Antwort in einem komprimierten Format wie vom ersten Endpunkt (A) gemeldet zu modifizieren, um zu bewirken, dass der zweite Endpunkt (B) eine Antwort in einem komprimierten Format überträgt, und Modifizieren einer komprimierten Antwort vom zweiten Endpunkt (B) an den ersten Endpunkt (A) durch Dekomprimieren der Antwort, um Datenkommunikationen zwischen den Endpunkten (A, B) zu optimieren, wobei das transparente Modifizieren wenigstens einiger der Daten auf Anwendungsebene ferner das Ändern der Daten auf Anwendungsebene, das Ersetzen der Daten auf Anwendungsebene oder das Einfügen zusätzlicher Daten auf Anwendungsebene beinhaltet, so dass der erste Endpunkt (A) andere Daten auf Anwendungsebene empfängt als die, die vom zweiten Endpunkt (B) gesendet wurden.


 
13. Optimierungsdienst nach Anspruch 12, wobei:

der Optimierungsdienst in einem Gerät implementiert wird, das in der Weiterleitung von Daten zwischen Netzwerkknoten auf einem IP-gestützten Netzwerk involviert ist;

der Optimierungsdienst an einem Netzwerkknoten arbeitet, an dem sich einer der Endpunkte (A, B) befindet;

der Optimierungsdienst eine oder mehrere Optimierungsdienstinstanzen umfasst, die in Serie oder parallel arbeiten; und

jede Instanz des Optimierungsdienstes unterschiedliche und/oder gleiche Datenmodifikations- und -speicheroperationen in Koordination miteinander durchführt.


 


Revendications

1. Un procédé mis en œuvre par ordinateur d'optimisation transparente d'une transmission de données entre un premier point d'extrémité (A) et un deuxième point d'extrémité (B) dans un réseau informatique, les points d'extrémité (A, B) possédant une session de données établie directement (S) entre eux, la session de données étant identifiée par chaque point d'extrémité au moins à elle-même de la même manière tout au long de la session, le procédé comprenant :

le relais de données de niveau application entre les points d'extrémité (A, B) de manière transparente dans la session au moyen d'un service d'optimisation de réseau (C), où l'opération de relais de données comprend la création d'un premier point d'extrémité transparent (Ca) tourné vers le premier point d'extrémité (A), où le premier point d'extrémité transparent (Ca) apparaît au premier point d'extrémité (A) sous la forme du deuxième point d'extrémité (B), de sorte que le premier point d'extrémité (A) communique avec le premier point d'extrémité transparent (Ca), et

la modification transparente de paramètres d'au moins certaines des données de niveau application transmises à partir du deuxième point d'extrémité (B) au premier point d'extrémité (A) au moyen du service d'optimisation de réseau (C) de façon à modifier une capacité de réception d'une réponse dans un format compressé signalé par le premier point d'extrémité (A) de façon à amener le deuxième point d'extrémité (B) à transmettre une réponse dans un format compressé, et la modification d'une réponse compressée du deuxième point d'extrémité (B) au premier point d'extrémité (A) par la décompression de la réponse afin d'optimiser des communications de données entre les points d'extrémité (A, B), où la modification transparente d'au moins certaines des données de niveau application comprend en outre la modification des données de niveau application, le remplacement des données de niveau application ou l'insertion de données de niveau application additionnelles de sorte que le premier point d'extrémité (A) reçoive des données de niveau application différentes de celles qui ont été envoyées par le deuxième point d'extrémité (B).


 
2. Le procédé selon la Revendication 1, où la session est identifiée par des identifiants comprenant une adresse IP et un port de transport du premier point d'extrémité (A), une adresse IP et un port de transport du deuxième point d'extrémité (B) et un protocole de transport utilisé, et où les identifiants ne changent pas au long de la durée de vie de la session.
 
3. Le procédé selon la Revendication 1 ou 2, où l'opération de relais de données est exécutée par le suivi de l'état de transport des points d'extrémité (A, B), et en comptant sur l'envoi par le point d'extrémité de données de façon à renvoyer des données en cas de perte de paquets, et la modification des données renvoyées à envoyer au deuxième point d'extrémité (B).
 
4. Le procédé selon l'une quelconque ou plusieurs des Revendications précédentes, où l'opération de relais de données comprend la réception de données communiquées entre les points d'extrémité (A, B) par l'intermédiaire d'un dispositif de redirection, et où l'opération de relais de données comprend le relais d'une partie uniquement de trames de données dans la session entre les premier et deuxième points d'extrémité (A, B).
 
5. The method according to any one or more of the preceding claims, wherein the data relaying operation generally comprises continuously tracking and storing a state of the session, including tracking the data that has been sent and received by each endpoint, and transport-level attributes, link-level attributes or network-level attributes of the session,
wherein the tracking comprises monitoring a plurality of tunnelling stages.
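
The per-session bookkeeping of claim 5 could, under assumptions not stated in the claim, be held in a structure along the following lines; the attribute names and the string encoding of tunnelling stages are invented for illustration.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class SessionState:
    bytes_sent_by_a: int = 0                # data seen from endpoint A
    bytes_sent_by_b: int = 0                # data seen from endpoint B
    last_seq_from_a: Optional[int] = None   # transport-level attribute (e.g. TCP sequence)
    last_seq_from_b: Optional[int] = None
    tunnel_stages: list = field(default_factory=list)  # e.g. ["GRE", "GTP-U"]

    def record_segment(self, from_a: bool, payload_len: int,
                       seq: Optional[int] = None) -> None:
        """Continuously update the stored view of what each endpoint has sent."""
        if from_a:
            self.bytes_sent_by_a += payload_len
            if seq is not None:
                self.last_seq_from_a = seq
        else:
            self.bytes_sent_by_b += payload_len
            if seq is not None:
                self.last_seq_from_b = seq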
 
6. The method according to any one or more of the preceding claims, wherein the transparent modification operation comprises analyzing a data request from the first endpoint (A) directed to the second endpoint (B) and/or a data response to the data request received from the second endpoint (B), matching the data request and the data response with a previously stored data response, and delivering the previously stored data response to the first endpoint (A), while blocking the relaying of the data response from the second endpoint (B),
wherein the method further comprises the operation of causing the second endpoint (B) to terminate the session at its end or to stop delivering the data response, and
wherein the operation of delivering the previously stored data response comprises using the IP address and port of the second endpoint (B) and the session state of the session as a continuation of the session between the first and second endpoints (A, B).
 
7. The method according to Claim 6, wherein the matching operation is based on at least one variable selected from the group consisting of timing information, ordinal information, frequency information, client information and identification information.
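
As a rough, non-authoritative sketch of the match-and-serve behaviour described in claims 6 and 7, the fragment below matches a request key against previously stored responses using timing information (the age of the stored entry); the cache policy, key format and function names are assumptions, not part of the claimed method.

import time


class ResponseCache:
    """Previously stored data responses, keyed by a request identifier."""

    def __init__(self, max_age_seconds: float = 3600.0):
        self.max_age = max_age_seconds
        self._store = {}  # request key -> (time stored, response bytes)

    def put(self, request_key: str, response: bytes) -> None:
        self._store[request_key] = (time.time(), response)

    def match(self, request_key: str):
        entry = self._store.get(request_key)
        if entry is None:
            return None
        stored_at, response = entry
        if time.time() - stored_at > self.max_age:
            return None  # timing information says the stored response is stale
        return response


def handle_request(cache, request_key, deliver_to_a, relay_request_to_b):
    """Deliver a stored response to endpoint A and do not relay B's response,
    or fall back to ordinary relaying when nothing matches."""
    cached = cache.match(request_key)
    if cached is not None:
        deliver_to_a(cached)        # previously stored response is delivered
        return "served_from_store"  # the live response from B is blocked
    relay_request_to_b()            # no match: let endpoint B answer normally
    return "relayed"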
 
8. The method according to any one or more of the preceding claims, wherein the transparent modification operation comprises:

analyzing a first data request from the first endpoint (A) directed to the second endpoint (B), matching the first data request with a previously stored data response, and delivering the previously stored data response to the first endpoint (A) via a separate transparent endpoint, and

receiving a second data request from the first endpoint (A) directed to the second endpoint (B), determining that a data response responding to the second data request has not been stored at the optimization service, relaying the second data request to the second endpoint (B), receiving the data response corresponding to the second request from the second endpoint (B), and relaying the data response from the second endpoint (B) to the first endpoint (A) without using a separate transparent endpoint while modifying transport parameters of said response so that it appears as a continuation of the session between the first and second endpoints (A, B).
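
The last step of claim 8 (making a relayed response appear as a continuation of the existing session) implies some transport-level renumbering once earlier responses have been served locally. The sketch below is one assumed way to track the required TCP sequence/acknowledgement offsets; it is not taken from the patent.

class SequenceTranslator:
    """Keep the two endpoints' TCP sequence spaces consistent after some
    responses were served locally instead of by endpoint B."""

    def __init__(self):
        self.extra_bytes_to_a = 0       # bytes delivered to A that B never sent
        self.absorbed_bytes_from_a = 0  # bytes sent by A that were never forwarded to B

    def note_local_delivery(self, served_len: int, absorbed_request_len: int) -> None:
        self.extra_bytes_to_a += served_len
        self.absorbed_bytes_from_a += absorbed_request_len

    def rewrite_b_to_a(self, seq: int, ack: int):
        """Shift a segment from B so it continues the session as A sees it."""
        return ((seq + self.extra_bytes_to_a) & 0xFFFFFFFF,
                (ack + self.absorbed_bytes_from_a) & 0xFFFFFFFF)

    def rewrite_a_to_b(self, seq: int, ack: int):
        """Shift a segment from A so it continues the session as B sees it."""
        return ((seq - self.absorbed_bytes_from_a) & 0xFFFFFFFF,
                (ack - self.extra_bytes_to_a) & 0xFFFFFFFF)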


 
9. The method according to any one or more of the preceding claims, wherein the transparent data modification operation comprises rendering textual data in a different format, adapting data to endpoint device capabilities, modifying a video quality, or transcoding audio and/or video data into a different format.
 
10. The method according to any one or more of the preceding claims, wherein the data relaying operation is performed by tracking the transport state of the endpoints (A, B), and relying on the data-sending endpoint to resend data in the event of packet loss, and modifying the resent data to be sent to the second endpoint (B).
 
11. The method according to any one or more of the preceding claims, wherein the transparent data modification operation comprises:

modifying parameters of a data request or of a data response from one endpoint to another so as to affect an attribute of subsequent data requests or responses such that the cacheability of data requests in the session is improved, and

using previously stored data responses for a subsequent modification of a data response, or optionally

wherein the transparent data modification operation comprises modifying data-availability responses signalled by an endpoint so as to improve a cache hit rate by the optimization service or to cause an endpoint to transfer such data items as are not currently stored at the optimization service, if the endpoints (A, B) exchange information regarding data availability.
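
For the data-availability alternative at the end of claim 11, a loose sketch under assumed peer-to-peer-style semantics (data pieces identified by integers, availability exchanged as sets) is given below; both helper names and the set representation are assumptions, not details from the patent.

def advertise_with_cache(remote_pieces: set, cached_pieces: set) -> set:
    """Improve the cache hit rate: also advertise pieces already held by the
    optimization service, so requests for them can be answered locally."""
    return remote_pieces | cached_pieces


def steer_to_uncached(remote_pieces: set, cached_pieces: set) -> set:
    """Cause the remote endpoint to be asked only for data the service does
    not yet hold; already-cached pieces are served by the service itself."""
    return remote_pieces - cached_pieces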
 
12. An optimization service for transparent optimization of a data transmission between a first endpoint (A) and a second endpoint (B) in a computer network, the endpoints (A, B) having a directly established data session (S) between them, the data session being identified by each endpoint at least to itself in the same manner throughout the session, the optimization service being configured to:

relay application-level data between the endpoints (A, B) transparently within the session by means of a network optimization service (C), wherein the data relaying operation comprises creating a first transparent endpoint (Ca) facing the first endpoint (A), wherein the first transparent endpoint (Ca) appears to the first endpoint (A) as the second endpoint (B), such that the first endpoint (A) communicates with the first transparent endpoint (Ca), and

transparently modify at least some of the application-level data transmitted from the second endpoint (B) to the first endpoint (A) by means of the network optimization service (C) so as to modify a capability of receiving a response in a compressed format signalled by the first endpoint (A) so as to cause the second endpoint (B) to transmit a response in a compressed format, and to modify a compressed response from the second endpoint (B) to the first endpoint (A) by decompressing the response in order to optimize data communications between the endpoints (A, B), wherein the transparent modification of at least some of the application-level data further comprises modifying the application-level data, replacing the application-level data or inserting additional application-level data such that the first endpoint (A) receives application-level data different from that sent by the second endpoint (B).


 
13. The optimization service according to Claim 12, wherein:

the optimization service is implemented in a device involved in relaying data between network nodes over an IP-type network,

the optimization service operates on a network node on which one of the endpoints (A, B) resides,

the optimization service comprises one or more optimization service instances operating in series or operating in parallel, and

each instance of the optimization service performs different and/or identical data modification and storage operations in coordination with one another.


 




Drawing

[The figures of the published specification are not reproduced in this text version.]

Cited references

REFERENCES CITED IN THE DESCRIPTION



This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

Patent documents cited in the description