(19)
(11)EP 2 712 479 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
30.10.2019 Bulletin 2019/44

(21)Application number: 12760718.2

(22)Date of filing:  23.03.2012
(51)International Patent Classification (IPC): 
H04L 12/70(2013.01)
H04L 29/12(2006.01)
(86)International application number:
PCT/US2012/030448
(87)International publication number:
WO 2012/129540 (27.09.2012 Gazette  2012/39)

(54)

TIME MACHINE DEVICE AND METHODS THEREOF

ZEITMASCHINENVORRICHTUNG UND VERFAHREN DAFÜR

DISPOSITIF DE MACHINE À MESURER LE TEMPS ET PROCÉDÉS ASSOCIÉS


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 23.03.2011 US 201113070086

(43)Date of publication of application:
02.04.2014 Bulletin 2014/14

(73)Proprietor: Keysight Technologies Singapore (Sales) Pte. Ltd.
Singapore 768923 (SG)

(72)Inventors:
  • MATITYAHU, Eldad
    Santa Clara, US 95054 (US)
  • SHAW, Robert
    Santa Clara, US 95054 (US)
  • CARPIO, Dennis
    Santa Clara, US 95054 (US)
  • FUNG, Randy
    Santa Clara, US 95054 (US)

(74)Representative: Murgitroyd & Company 
Scotland House 165-169 Scotland Street
Glasgow G5 8PL (GB)


(56)References cited:
US-A- 5 835 726
US-A1- 2006 153 092
US-A1- 2010 195 538
US-B2- 7 773 529
US-A1- 2005 278 565
US-A1- 2009 168 659
  
  • "CISCO IOS Netflow Overview", INTERNET CITATION, 5 February 2006 (2006-02-05), XP002396719, Retrieved from the Internet: URL:http://www.cisco.com [retrieved on 2006-08-29]
  
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


Description

BACKGROUND OF THE INVENTION



[0001] In today's society, a company may depend upon its network being fully functional in order to conduct business. Thus, a company may monitor its network in order to ensure reliable performance, enable fault detection, and detect unauthorized activities. Monitoring may be performed by connecting network taps to the network to gather information about the data traffic and to share that information with monitoring tools.

[0002] To facilitate discussion, Fig. 1 shows a simple diagram of a network environment with a network tap. Consider the situation wherein, for example, a network environment 100 has two network devices (a router 102 and a switch 104). Data traffic may be flowing through the two network devices. To monitor the health of the network environment, a network tap 106 may be positioned between the two network devices in order to gather information about the data flowing between the two network devices. In an example, a data packet is received by router 102. Before the data packet is forwarded to switch 104, network tap 106 may make a copy of the data packet and forward the copied data packet to a monitoring device, such as an analyzer 108.

[0003] Since most network taps are configured as a bypass device, network tap 106 does not have storage capability. In other words, original data packets flow from router 102 to switch 104 via network tap 106. Further, data packets copied by network tap 106 are forwarded to one or more monitoring devices. In both situations, a copy of the data packets being handled is not stored by network tap 106. Thus, if a problem arises in regard to the origin of a 'bad' data packet, network tap 106 is usually unable to provide useful information in resolving the problem.

[0004] United States Patent Application US 2009/0168659 discloses a director device arrangement for enabling a plurality of monitoring functions to be performed on data traffic traversing a network. The arrangement includes a set of network ports for receiving data traffic and outputting the data traffic.

[0005] Accordingly, an improved intelligent network tap for managing and/or storing the data packets flowing through the network environment is desirable.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS



[0006] The invention provides for a time machine arrangement per claim 1 and corresponding method claim 10. Preferred embodiments are defined by dependent claims. The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:

Fig. 1 shows a simple diagram of a network environment with a network tap.

Fig. 2A shows, in an embodiment of the invention, a simple diagram of a network environment with a time machine device.

Fig. 2B shows, in an embodiment of the invention, a simple logical diagram of a time machine.

Fig. 3 shows, in an embodiment of the invention, a simple flow chart for managing incoming data traffic.

Fig. 4 shows, in an embodiment of the invention, a simple flow chart for managing storage and playback.

Fig. 5 shows, in an embodiment of the invention, a simple diagram illustrating an arrangement and/or method for exporting data packets from the time machine device.

Fig. 6A shows, in an embodiment of the invention, a simple block diagram illustrating an arrangement for maintaining a link after a power disruption.

Fig. 6B shows, in an embodiment, examples of data paths between two network devices.

Fig. 7 shows, in an embodiment of the invention, a simple flow chart illustrating a method for maintaining a link after a power disruption in the primary power source has occurred.

Fig. 8 shows, in an embodiment of the invention, a simple block diagram illustrating an arrangement for maintaining zero delay within a fast Ethernet environment.


DETAILED DESCRIPTION OF EMBODIMENTS



[0007] The present invention will now be described in detail with reference to a few embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present invention.

[0008] Various embodiments are described hereinbelow, including methods and techniques. It should be kept in mind that the invention might also cover articles of manufacture that include a computer readable medium on which computer-readable instructions for carrying out embodiments of the inventive technique are stored. The computer readable medium may include, for example, semiconductor, magnetic, optomagnetic, optical, or other forms of computer readable medium for storing computer readable code. Further, the invention may also cover apparatuses for practicing embodiments of the invention. Such apparatus may include circuits, dedicated and/or programmable, to carry out tasks pertaining to embodiments of the invention. Examples of such apparatus include a general-purpose computer and/or a dedicated computing device when appropriately programmed and may include a combination of a computer/computing device and dedicated/programmable circuits adapted for the various tasks pertaining to embodiments of the invention.

[0009] In accordance with embodiments of the present invention, a time machine device is provided for storing and/or managing network traffic. Embodiments of the invention include arrangements and methods for establishing conditions for storing network traffic. Embodiments of the invention also include arrangements and methods for encrypting the network traffic. Embodiments of the invention further include arrangements and methods for distributing network traffic flow to minimize impact on line rate.

[0010] In this document, various implementations may be discussed using a network tap as an example. This invention, however, is not limited to network taps and may be applied to any network and/or security appliance (e.g., routers, switches, hubs, bridges, load balancers, firewalls, packet shapers, and the like). The discussions are meant as examples, and the invention is not limited by the examples presented.

[0011] In an embodiment of the invention, a time machine device is provided for performing health checks on a network environment. The time machine, in an embodiment, may be configured to capture data traffic and to store the data for analysis. In an embodiment, the time machine may include a pre-processing module, a set of processors, a storage memory component, and an export component.

[0012] In an embodiment, the time machine may employ the pre-processing module to perform preliminary analysis (e.g., aggregation, filtering, etc.) on the data flowing through the network devices. Preliminary analysis may be performed if certain conditions are met, in an example. For example, data packets coming from an IP address that is known for propagating viruses may be excluded. The pre-processing module is an optional module and is not required for the implementation of the invention.

[0013] The time machine, in an embodiment, may employ the set of processors to manage the data traffic. The number of processors that may be required may vary depending upon the amount of traffic flowing through the time machine and/or the type of analysis that is being performed on the data traffic. For example, for a company that has a high volume of data traffic, the time machine may be configured to have more processors than for a company that has a fairly low volume of data traffic.

[0014] In an embodiment, the set of processors may include a scheduler component, a filtering component, an encryption component, and a trigger component. The scheduler component, in an embodiment, may be configured to direct data traffic, thereby enabling the scheduler component to redirect data traffic as needed. In an embodiment, the filtering component may include logic for performing filtering, including ingress filtering, egress filtering and/or deep packet inspection (DPI). Data flowing through the time machine may also be encrypted by the encryption component, in an embodiment, thereby minimizing the possibility of unapproved tapping. In an embodiment, the time machine may employ a trigger component to define the condition for storing a data packet.

[0015] The time machine, in an embodiment, may store the data packets using the storage memory component. The amount of memory available in the storage memory component may be configured to meet the user's needs. In an embodiment, the storage memory component may be an internal component that is integrated with the time machine. Additionally or alternatively, the storage memory component may be an external component, such as a set of external hard drives. In an embodiment, a memory controller may be employed to manage the storage memory component. The memory controller may be employed to control how the data is stored, where the data is stored, and how to redirect the data when one of the memory devices is not available.

[0016] In an embodiment, data traffic saved on the time machine may be exported and made available to other devices through the export component. In an example, the data may be exported to SATA-supported devices. In another example, the data may be exported through an Ethernet interface. In yet another example, the data may be exported to USB-type devices. With the export capability, data analysis may be performed off-site.

[0017] The features and advantages of the present invention may be better understood with reference to the figures and discussions that follow.

[0018] Fig. 2A shows, in an embodiment of the invention, a simple diagram of a network environment with a time machine device. Fig. 2A will be discussed in relation to Fig. 2B. Fig. 2B shows, in an embodiment of the invention, a simple logical diagram of a time machine. Consider the situation wherein, for example, a network environment 200 has two network devices (such as a router 202 and a switch 204). Although a router and a switch are shown, the invention is not limited by the type of network devices. Instead, the network devices are provided as examples only.

[0019] Data traffic may be flowing through the two network devices (router 202 and switch 204). In an embodiment, a time machine device 206 may be positioned between the two network devices (router 202 and switch 204). Time machine 206 may be configured to manage the data traffic flowing through the network environment and may include programmable logic for performing inline and/or span functions.

[0020] In an embodiment, time machine 206 may include a pre-processing module 210 that may include at least one of an aggregate component 212 and a filtering component 214. In an example, data (such as data packets) may be flowing through multiple ports. The data packets from the ports may be aggregated into a single data stream, for example, by aggregate component 212 of pre-processing module 210. In another example, simple filtering functionalities may be performed by filtering component 214 on the data stream before the data stream is sent for further processing. For example, a filter may drop all data commencing from a specific internet address. As a result, time machine 206 may not only control the type of data that may be flowing to analyzer 208 but may also control the flow of data traffic between the two network devices (such as router 202 and switch 204).
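
Purely as an illustration of the pre-processing behaviour described above, the following Python sketch models an aggregate component that merges several port streams into a single data stream and a preliminary filter that drops packets commencing from a blocked source address. The Packet representation, the port contents and the blocked address are hypothetical and not taken from the specification.

from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Packet:
    src_ip: str      # source address of the packet (hypothetical field)
    payload: bytes   # raw packet contents

# Hypothetical address known for propagating viruses (cf. paragraph [0012]).
BLOCKED_SOURCES = {"198.51.100.7"}

def aggregate(port_streams: Iterable[List[Packet]]) -> List[Packet]:
    # Combine the packets arriving on several ports into a single data stream.
    merged: List[Packet] = []
    for stream in port_streams:
        merged.extend(stream)
    return merged

def preliminary_filter(stream: List[Packet]) -> List[Packet]:
    # Drop all data commencing from a blocked internet address.
    return [p for p in stream if p.src_ip not in BLOCKED_SOURCES]

# Usage: two ports feed the pre-processing module, which aggregates and filters.
port_a = [Packet("192.0.2.1", b"mail"), Packet("198.51.100.7", b"junk")]
port_b = [Packet("192.0.2.9", b"web")]
single_stream = preliminary_filter(aggregate([port_a, port_b]))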

[0021] In an embodiment, pre-processing module 210 (such as a field-programmable gate array (FPGA)) may be configured to perform packet ordering and time stamping. As can be appreciated from the foregoing, no particular order is required in aggregating and/or filtering the data. Further, pre-processing module 210 is an optional module and is not required for the implementation of the invention.

[0022] In an embodiment, time machine 206 may include a set of processors 216. The set of processors may include one or more processors for handling the flow of data traffic through time machine 206. The number of processors that may be required may depend upon the amount of data traffic and/or the amount of processing that may be handled by time machine 206. In order to manage the flow of traffic, set of processors 216 may also include a scheduler component 218, which is configured to direct data traffic. In an example, scheduler component 218 may determine the percentage of data traffic that may be handled by each processor. In another example, scheduler component 218 may be configured to redirect data traffic to other processors when a processor is not working properly. By managing the data traffic with scheduler component 218, data being handled by set of processors 216 may be managed at or close to line rate.
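
As an illustration only, the following Python sketch models a scheduler of the kind described above: each processor is given a share of the traffic, and traffic headed for an unavailable processor is redirected to the remaining ones. The processor names and percentage shares are invented for the example and are not values from the specification.

import random

# Hypothetical share of the data traffic (in percent) assigned to each processor.
PROCESSOR_SHARES = {"cpu0": 50, "cpu1": 30, "cpu2": 20}
AVAILABLE = {"cpu0": True, "cpu1": True, "cpu2": True}

def schedule(packet):
    # Pick a processor in proportion to its configured share of the traffic;
    # if a processor is marked unavailable, its traffic is redirected to the others.
    usable = [name for name in PROCESSOR_SHARES if AVAILABLE[name]]
    weights = [PROCESSOR_SHARES[name] for name in usable]
    return random.choices(usable, weights=weights, k=1)[0]

# Usage: simulate a failed processor and confirm traffic is redirected.
AVAILABLE["cpu1"] = False
assigned = schedule(b"packet")   # always one of cpu0 or cpu2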

[0023] In an embodiment, set of processors 216 may include a filtering component 220, which may be configured to perform filtering on the data traffic. In an embodiment, filtering component 220 may be configured to perform at least one of ingress filtering, egress filtering and/or deep packet inspection (DPI). As discussed herein, ingress filtering refers to a technique for verifying the origination of the data packets. This type of filtering is usually performed to protect the network from malicious senders. As discussed herein, egress filtering refers to a technique for restricting the flow of outbound data traffic if the data traffic fails a set of security policies. As discussed herein, deep packet inspection refers to a technique for analyzing the data for security and/or data mining purposes. As can be appreciated, other filtering techniques may be implemented and filtering component 220 is not limited to those discussed above.
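
The following Python sketch illustrates, under assumed and deliberately simplified policies, what ingress filtering, egress filtering and deep packet inspection might look like. The trusted address prefix, the allowed destination ports and the payload signature are placeholders, not values from the specification.

def ingress_filter(packet, trusted_prefix="10.0."):
    # Verify the origination of the packet: accept only sources from a trusted range.
    return packet["src_ip"].startswith(trusted_prefix)

def egress_filter(packet, allowed_ports={25, 80, 443}):
    # Restrict outbound traffic that fails a simple security policy on destination ports.
    return packet["dst_port"] in allowed_ports

def deep_packet_inspection(packet, signature=b"FORBIDDEN"):
    # Inspect the payload itself rather than only the headers.
    return signature not in packet["payload"]

def passes_filters(packet):
    return (ingress_filter(packet)
            and egress_filter(packet)
            and deep_packet_inspection(packet))

# Usage
sample = {"src_ip": "10.0.4.2", "dst_port": 443, "payload": b"GET / HTTP/1.1"}
keep = passes_filters(sample)   # True for this sample packet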

[0024] In an embodiment, set of processors 216 may also include an encryption component 222, which may be employed to encrypt the data managed by time machine device 206. The invention is not limited by the type of encryption technique that may be employed. By encrypting the data, unapproved tapping may be prevented from listening to the data traffic that may be flowing through time machine device 206.

[0025] In an embodiment, encryption component 222 may be a configurable component. In an example, a user may have the option of determining whether or not encryption component 222 is active. In an example, if a user turns off the encryption function, the data packets flowing through time machine 206 are not encrypted. In another example, if the encryption function is turned on, then the data traffic is encrypted and only a key may be employed to decrypt the data traffic.
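
The specification does not mandate a particular encryption technique. Purely for illustration, the Python sketch below (assuming the third-party "cryptography" package is installed) shows one possible configurable encryption component using AES-GCM; the on/off switch, the key handling and the 12-byte nonce layout are assumptions, not taken from the specification.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

ENCRYPTION_ENABLED = True                   # user-configurable switch (assumption)
KEY = AESGCM.generate_key(bit_length=256)   # only this key can decrypt the traffic

def protect(payload: bytes) -> bytes:
    # Encrypt the copied payload when the encryption function is turned on;
    # otherwise pass it through unencrypted.
    if not ENCRYPTION_ENABLED:
        return payload
    nonce = os.urandom(12)
    return nonce + AESGCM(KEY).encrypt(nonce, payload, None)

def recover(blob: bytes) -> bytes:
    # Decrypt a stored payload using the key.
    if not ENCRYPTION_ENABLED:
        return blob
    return AESGCM(KEY).decrypt(blob[:12], blob[12:], None)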

[0026] In an embodiment, time machine device 206 may be configured to capture the data traffic flowing between the two network devices. In an example, a data packet is received by router 202. Before the data packet is forwarded to switch 204, time machine device 206 may make a copy of the data packet and forward the copied data packet to a monitoring device, such as an analyzer 208.

[0027] Unlike the prior art, not all of the data traffic is automatically captured, copied and forwarded to a monitoring device (such as analyzer 208). Instead, filtering may be performed (via set of processors 216 and/or pre-processing module 210), and only data packets that meet the criteria established for the monitoring device may be forwarded to the monitoring device. In an example, analyzer 208 is only interested in monitoring data packets related to emails. Thus, only email data packets are forwarded to analyzer 208. By sending only data packets that are relevant to analyzer 208, the path between time machine device 206 and analyzer 208 is not burdened by unnecessary traffic. Also, analyzer 208 does not have to perform additional processing to extract the data that is relevant to its analysis.

[0028] In the prior art, once the data packets have been forwarded to the monitoring device, the network tap does not usually maintain a copy of the data streams. Unlike the prior art, time machine device 206 includes a storage memory component 224. In an embodiment, the storage memory component is a set of memory devices internally integrated with time machine device 206. In another embodiment, storage memory component 224 may be a set of external memory devices coupled to time machine device 206. In yet another embodiment, storage memory component 224 may be both a set of internal and external memory devices. The amount of memory required may vary depending upon a user's requirements.

[0029] In an embodiment, a memory controller 226 may be provided for managing storage memory component 224. In an example, storage memory component 224 may include four memory devices (e.g., RAID 5, RAID 0, etc.). Suppose that, after a time, the first memory device needs to be replaced. Memory controller 226 may be employed to redirect the flow of data to the other three memory devices while the first memory device is being replaced. Thus, disruption is minimized while part of the device is being repaired or replaced.
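
A minimal Python sketch of the memory-controller behaviour described above follows; it assumes four hypothetical drives and simply returns the drive a packet would be written to, redirecting writes away from a drive that has been marked offline. The drive names are illustrative only.

class MemoryController:
    # Spreads writes across a set of memory devices and redirects the flow
    # of data when one of the devices is taken out of service.

    def __init__(self, devices):
        self.devices = list(devices)   # e.g. ["disk0", "disk1", "disk2", "disk3"]
        self.offline = set()
        self._next = 0

    def mark_offline(self, device):
        self.offline.add(device)       # e.g. the drive currently being replaced

    def write(self, packet):
        usable = [d for d in self.devices if d not in self.offline]
        if not usable:
            raise RuntimeError("no storage device available")
        target = usable[self._next % len(usable)]
        self._next += 1
        return target                  # a real controller would write the packet here

# Usage: traffic is redirected to the remaining three drives during a replacement.
controller = MemoryController(["disk0", "disk1", "disk2", "disk3"])
controller.mark_offline("disk0")
destination = controller.write(b"packet")   # one of disk1..disk3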

[0030] In an embodiment, data traffic that is copied by time machine device 206 may be stored within storage memory component 224. In an embodiment, a time stamp may be added to each data packet to establish an order sequence. Since most data traffic may not provide useful information after a period of time, most data traffic may be eliminated after a predefined period of time. In an embodiment, time machine device 206 may be configured to save incoming data packets over "old data" once storage memory component 224 has reached its maximum capacity.
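
One simple way to realise the overwrite-when-full behaviour is a bounded ring buffer. The Python sketch below is an assumption-level illustration (the capacity figure is arbitrary) that time-stamps each packet before it is stored and lets the oldest entries be overwritten once the store is full.

import time
from collections import deque

MAX_PACKETS = 1_000_000                    # hypothetical capacity of the storage component
capture_store = deque(maxlen=MAX_PACKETS)  # oldest entries are overwritten when full

def store(packet: bytes):
    # Add a time stamp to each data packet to establish an order sequence,
    # then save it; once capacity is reached, new packets overwrite the old data.
    capture_store.append((time.time(), packet))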

[0031] However, some data packets may require a longer "saved" period. In an embodiment, set of processors 216 may include a trigger component 228, which may define the conditions under which a set of data packets is protected from being overwritten. In an embodiment, the conditions may be user-configurable. In an example, the user may define the conditions for protecting the set of data packets. For example, all emails from accounting are to be saved for six months. In another example, all emails from the president are to be kept indefinitely.
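
The following Python sketch illustrates how user-configured trigger conditions might map packets to retention periods, echoing the two examples above. The rule predicates, the addresses and the durations are hypothetical illustrations, not values from the specification.

SIX_MONTHS = 183 * 24 * 3600      # retention period in seconds (assumption)
STANDARD_RETENTION = 24 * 3600    # default period for unprotected packets (assumption)
KEEP_FOREVER = None               # sentinel meaning "never overwrite"

# Hypothetical user-configured trigger conditions: (predicate, retention period).
TRIGGER_CONDITIONS = [
    (lambda pkt: pkt["sender"].endswith("@accounting.example.com"), SIX_MONTHS),
    (lambda pkt: pkt["sender"] == "president@example.com", KEEP_FOREVER),
]

def retention_for(pkt):
    # Return how long the packet is protected from being overwritten.
    for predicate, period in TRIGGER_CONDITIONS:
        if predicate(pkt):
            return period
    return STANDARD_RETENTION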

[0032] In an embodiment, data traffic from time machine device 206 may be exported to other media types instead of just to Ethernet-type media (such as analyzer 208). In an embodiment, an export component 230 may be configured to export data through a plurality of media types, including but not limited to, SATA, USB, and the like. By enabling the data traffic to be exported, data traffic may be monitored and/or analyzed off-site.

[0033] As aforementioned, time machine device 206 is configured for storing data packets. In an embodiment, the conditions for storing the data are user-configurable. In an example, all of the incoming data traffic is stored. In another example, only data packets that meet specific conditions are stored. Since the data packets are stored, time machine device 206 may include a playback feature that enables the user to analyze the stored data and statistical data relating to the data being analyzed. The playback feature may enable analysis to be performed at a later date and may be employed to address problems that may arise.

[0034] Fig. 3 shows, in an embodiment of the invention, a simple flow chart for managing incoming data traffic.

[0035] At a first step 302, a set of data packets is received by a time machine device.

[0036] At a next step 304, the set of data packets is copied by the time machine device. In other words, before the set of data packets is sent onward to the next network device, a copy of the set of data packets is made by the time machine device.

[0037] At a next step 306, pre-processing is performed. In an embodiment, if more than one data packet is received, the pre-processing module may aggregate the data packets into a single data stream. In another embodiment, the pre-processing module may perform some preliminary filtering. In an example, all data packets from a known bad IP address may be dropped.

[0038] Step 306 may be optional. Once pre-processing has been performed, the set of processors may perform its functions at a next step 308. In an embodiment, additional filtering may be performed on the copied set of data packets. In another embodiment, the set of data packets may be encrypted to prevent snooping.

[0039] Once the set of data packets has been filtered and/or encrypted, at a next step 310, the set of data packets may be stored within a storage memory component.

[0040] In an embodiment, the set of data packets may also be exported to an external location, at a next step 312. In an example, at least a part of the data packets may be forwarded to a monitoring device. In another example, at least a part of the data packets may be forwarded off-site to a USB device. In yet another example, at least a part of the data packets may be forwarded to a SATA device.

[0041] Steps 310 and 312 are not dependent upon one another.

[0042] Fig. 4 shows, in an embodiment of the invention, a simple flow chart for managing storage and playback.

At a first step 402, the set of data packets is received.

[0043] At a next step 404, the set of processors may make a determination whether a set of trigger conditions has been met. If the set of trigger conditions has been met, then at a next step 406, the save condition is applied to the set of data packets. In an example, all data packets with an email address from the accounting department are saved for six months. As can be appreciated from the foregoing, the set of trigger conditions may be employed to help determine the type of content to save and the duration for saving the content.

[0044] At a next step 408, the set of data packets which meets the trigger conditions is forwarded to the memory controller, which is configured for storing the set of data packets (step 410) in a storage memory component (such as a hard drive).

[0045] Referring back to step 404, if the set of trigger conditions is not met, then the set of data packets is sent to the memory controller (408) and is stored within the storage memory component (410) for the standard duration. As can be seen, in this example, the set of trigger conditions is employed to differentiate the duration for saving a data packet. However, the set of trigger conditions may also be employed to determine what type of content is saved. For example, a trigger condition may be set whereby all personal emails are dropped.

[0046] Once stored, the data is available for playback (step 412). In an embodiment, playback may be a full playback or a partial playback based on a user's command. In an example, the user may have to analyze all stored data to determine the cause of a virus within the company's network. In another example, the user may only want to analyze data from the last six months to determine network utilization by the accounting department.
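
A playback routine along the lines described above might look like the Python sketch below. It assumes a time-stamped store of the kind shown in the earlier storage sketch and lets the caller request a full playback or restrict it to a time window and/or a packet predicate; these parameters are illustrative assumptions.

def playback(capture_store, since=None, keep=None):
    # Replay stored packets: a full playback by default, or a partial playback
    # restricted to a time window and/or a packet predicate chosen by the user.
    for timestamp, packet in capture_store:
        if since is not None and timestamp < since:
            continue
        if keep is not None and not keep(packet):
            continue
        yield timestamp, packet

# Usage: replay only the last six months of stored traffic.
# recent = list(playback(capture_store, since=time.time() - 183 * 24 * 3600))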

[0047] In addition, the data is also available for exporting (step 414). All or a portion of the copied data packets may be exported to one or more monitoring devices for analysis. The data may also be exported to external drives for long-term storage and/or for off-site analysis, for example.

[0048] Fig. 5 shows, in an embodiment of the invention, a simple diagram illustrating an arrangement and/or method for exporting data packets from the time machine device.

[0049] A command for exporting a set of data packets may be received through one of a web interface 502 or a command line interface 504. The interfaces (502 and 504) may interact with a configuration manager 506 of a memory controller 508. In an embodiment, configuration manager 506 may be configured to set up the rules on how the data is configured. In an embodiment, memory controller 508 is configured to set up the control for the storage memory components 510 (e.g., disk drives). By employing memory controller 508, problems that may occur with one or more disk drives may be handled while minimizing the impact on the time machine device. In an example, memory controller 508 may divert data packets away from a "bad" disk drive to the other disk drives while the "bad" disk drive is being repaired and/or replaced.

[0050] In an embodiment, the time machine device may also include an export manager 512. The export manager may be part of the set of processors and may be configured to export the data through one of the ports (e.g., 516A, 516B, 516C, 516D, etc.). In an example, the data may be exported to one of the monitoring ports. In another example, the data may be exported to an external drive, such as a SATA device or a USB device. In an embodiment, an export filtering engine 514 may be employed to perform additional filtering before the set of data packets is exported.
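
For illustration only, the Python sketch below models an export manager that applies an optional export filter before handing packets to a destination writer (which could represent a monitoring port, a USB drive or a SATA drive). The file path and the filter shown in the usage comment are hypothetical.

def export(capture_store, write, export_filter=None):
    # Send stored packets to an export destination, applying an optional
    # export filter before each packet is written out.
    exported = 0
    for timestamp, packet in capture_store:
        if export_filter is not None and not export_filter(packet):
            continue
        write(timestamp, packet)
        exported += 1
    return exported

# Usage: export only email-related packets to a file on an external drive.
# with open("/mnt/usb0/capture.bin", "wb") as fh:
#     export(capture_store,
#            write=lambda ts, pkt: fh.write(pkt),
#            export_filter=lambda pkt: b"SMTP" in pkt)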

[0051] In an embodiment, the time machine may be applied in a high-speed Ethernet environment, such as a gigabit Ethernet environment, to establish a communication link between network devices. A communication link is usually established between the network devices; however, the data traffic over the link is usually bidirectional and unpredictable.

[0052] In the prior art, each time a network tap experiences a power disruption, the path between the network devices may have to be renegotiated since the communication link is lost and a new communication link has to be established. In an example, when the communication link is broken, a set of mechanical relays may be triggered to create a new path. Unfortunately, triggering the set of mechanical relays and enabling the two network devices to perform auto-negotiation may require a few milliseconds. The latency experienced during this time period may have dire financial consequences. In an example, in the financial industry, a latency of a few milliseconds can result in losses of millions of dollars.

[0053] In an embodiment of the invention, the time machine may include a zero-delay arrangement for establishing an alternative path. In an embodiment, the zero-delay arrangement may include a sensor controller, which may be configured to monitor the power flowing into the tap. In an embodiment, the sensor controller may be configured to compare the power flowing into the time machine against a predefined threshold. If the power level is below the predefined threshold, then a set of capacitors may be employed to provide a temporary power source to the time machine to maintain the current communication link while a set of relays is establishing an alternative path (communication link) between the network devices. In an example, a direct communication path between the network devices (moving said set of relays from an opened position to a closed position) may be established when the current communication link is failing. Since the alternative path is established when the power drop is first detected and the communication link between the network devices has not yet been broken, no data packet loss is experienced. Thus, disruption to a company's network traffic may be substantially minimized, thereby enabling the company to maintain its quality of service and limit its financial loss.
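
A minimal sketch of the sensor-controller decision described above is given below in Python, under assumed values: the threshold voltage and the callbacks (read_power_level, close_relays) are placeholders, not parts of the specification.

POWER_THRESHOLD_VOLTS = 10.5   # hypothetical threshold under a nominal 12 V supply

def sensor_controller_step(read_power_level, relays_closed, close_relays):
    # One monitoring cycle: when the supply falls below the threshold and the
    # relays are still open, close them to create the alternative path while
    # the capacitors keep the current communication link powered.
    if not relays_closed and read_power_level() < POWER_THRESHOLD_VOLTS:
        close_relays()
        return True               # alternative path is now established
    return relays_closed

# Usage: a failing 12 V supply read at 9.8 V triggers the relay switch.
state = sensor_controller_step(lambda: 9.8, False, lambda: None)   # -> True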

[0054] Fig. 6A shows, in an embodiment of the invention, a simple block diagram illustrating an arrangement for maintaining a link after a power disruption. Consider the situation wherein, for example, data traffic is flowing between two network devices, between a port 602 of Network A and a port 604 of Network B. Both port 602 and port 604 may be RJ45 jacks that support Ethernet over twisted pairs. To monitor the data traffic, a gigabit network tap (such as a time machine) 606 may be provided. As aforementioned, in order for network tap 606 to monitor the data traffic, a communication link may be established between network tap 606 and port 602 of Network A and network tap 606 and port 604 of Network B.

[0055] Those skilled in the art are aware that a gigabit network tap may include a set of PHYs for establishing communication links with the network devices. In an embodiment, when network tap 606 is first turned on, the master-slave mode of a set of PHYs 608 may be configured. In an embodiment, a sensor controller 614 may be employed to configure set of PHYs 608 via a path 616. In an example, side 610 of set of PHYs 608 may be set up in a master mode while side 612 of set of PHYs 608 may be set up in a slave mode. Once the master-slave mode has been established, network tap 606 may participate in auto-negotiation to establish a communication link with each of the network devices.

[0056] Since side 610 of set of PHYs has been set up in a master mode, port 602 of Network A may be set up in a slave mode. Likewise, since side 612 of set of PHYs has been set up in a slave mode, port 604 of Network B may be set up in a master mode. In an example, data traffic may flow from network twisted pair pins 1-2 of port 604 to tap twisted pair pins 3'-6' of side 612 of set of PHYs 608. The data traffic is then forwarded by tap twisted pair pins 1-2 of side 610 of set of PHYs 608 to network twisted pair pins 3'-6' of port 602. In another example, data traffic may flow from network twisted pair pins 4-5 of port 604 to tap twisted pair pins 7'-8' of side 612 of set of PHYs 608. The data traffic is then forwarded by tap twisted pair pins 4-5 of side 610 of set of PHYs 608 to network twisted pair pins 7'-8' of port 602.

[0057] In an embodiment, sensor controller 614 may also be configured to monitor the power level flowing to network tap 606. In an example, a primary power source 620 (such as a 12 volt power adaptor) may be available to provide power to network tap 606. Similar to Fig. 3, sensor controller 614 may be configured to compare the power level from primary power source 620 to a predefined threshold. If the power level falls below the predefined threshold, then the sensor controller may switch a set of relays 622 from an opened position to a closed position to create an alternative data path.

[0058] Fig. 6B shows, in an embodiment, examples of data paths between two network devices. In an example, data traffic may be flowing from port 604 (network twisted pair pins 1-2) through network tap 606 to port 602 (network twisted pair pins 3'-6'). In other words, data traffic may flow from network twisted pair pins 1-2 of port 604 through a relay 622a (paths 650a/650b) to tap twisted pair pins 3'-6' of side 612 of set of PHYs (paths 652a/652b). The data traffic is then forwarded by tap twisted pair pins 1-2 of side 610 of set of PHYs 608 through a relay 622b (paths 654a/654b) to network twisted pair pins 3'-6' of port 602 (paths 656a/656b). However, when a power disruption occurs, set of relays 622 may be switched to establish a set of alternative paths. In an example, instead of flowing through paths 652a/652b and paths 654a/654b, data traffic may be directed from relay 622a along paths 658a/658b to relay 622b (without going through network tap 606) before flowing onward to port 602 of Network A.

[0059] In an embodiment, auto-negotiation is not required to establish a new communication link. Since port 602 of Network A has been previously set up in a slave mode, for example, and port 604 of Network B has been previously set up in a master mode, for example, auto-negotiation is not required to set up a new communication link since the master-slave mode has already been defined and has not changed.

[0060] In the prior art, the set of relays may be activated to establish a new path after power has been lost. As a result, renegotiation is usually required to set up an alternative path between Network A and Network B. Unlike the prior art, the set of relays is activated by sensor controller 614 before the power disruption causes a power drop that is unable to maintain the current communication link, in an embodiment. In other words, the set of relays may be activated before all power has been lost. By creating an alternative path prior to the loss of all power, the path may be established while minimizing data loss. In an embodiment, a set of capacitor modules 624 may be employed to store power and provide sufficient power to network tap 606 (via a path 626) to maintain the current communication links while set of relays 622 is setting up an alternative path. In an embodiment, since the master-slave mode has already been established, auto-renegotiation is not necessary to establish a new communication link between the network devices.

[0061] In an embodiment, the set of relays is a modular component and may be removable. In an example, the set of relays may be connected to a set of PHYs via a set of sockets. Thus, the set of relays may be quickly connected and disconnected for maintenance.

[0062] Fig. 7 shows, in an embodiment of the invention, a simple flow chart illustrating a method for maintaining a link after a power disruption in the primary power source has occurred.

[0063] At a first step 702, power is provided to a network tap, which is configured to monitor data traffic flowing between two network devices. In an example, primary power source 620 is turned on.

[0064] At a next step 704, power level is monitored by a sensor controller. In an example, sensor controller 614 may be monitoring the power level flowing from primary power source 620 to network tap 606.

[0065] At a next step 706, the sensor controller determines if a power disruption has occurred. In an example, sensor controller 614 may be comparing the power level flowing from primary power source 620 against a predefined threshold. If the power level is above the predefined threshold, power continues to flow from primary power source (step 702).

[0066] However, if the power level is below the predefined threshold, the sensor controller may make a determination whether an alternative path has already been established (step 708). In an example, if power is currently flowing from primary power source 620, then an alternative path has not yet been established. Thus, when sensor controller 614 determines that a power drop has occurred, sensor controller 614 may close a set of relays to create an alternative path (step 710). In an embodiment of the invention, a set of capacitors may be available to provide a source of temporary power to network tap 606 in order to maintain the current communication link, giving set of relays 622 sufficient time to establish an alternative path for data traffic to flow between Network A and Network B (step 712).

[0067] However, if an alternative path has already been established, then the data traffic continues to flow through the alternative path (step 712).

[0068] As can be appreciated from Figs. 6 and 7, an arrangement and methods are provided for maintaining a link when power disruption may occur causing the network tap to go offline. By monitoring the power level, an alternative path may be established to maintain the link between two network devices. Thus, even though the network tap may no longer be available to monitor the data traffic, an alternative data path may be established. As a result, financial losses that may be experienced due to latency delay may be minimized.

[0069] Fig. 8 shows, in an embodiment of the invention, a simple block diagram illustrating an arrangement for maintaining zero delay within a fast Ethernet environment. Consider the situation wherein, for example, data traffic is flowing between two network devices, between a port 802 of Network A and a port 804 of Network B. Both port 802 and port 804 may be RJ45 jacks that support Ethernet over twisted pairs. To monitor the data traffic, a gigabit network tap 806 (such as a time machine) may be provided.

[0070] In an embodiment, a set of PHYs 810 may be configured to assign data traffic flowing from each specific set of twisted pair pins along a designated data path. In an embodiment, a set of direction passive couplers 808 may be employed to direct traffic to network tap 806 along the designated data paths. Set of direction passive couplers 808 may be configured to at least receive a copy of the data traffic, determine the direction of the data traffic, and route the data traffic through a designated path. In an example, data traffic flowing from twisted pair pins 1-2 of port 802 may be directed by set of direction passive couplers 808 along a path 820. In another example, data traffic flowing from twisted pair pins 1'-2' of port 804 may be directed by set of direction passive couplers 808 along a path 822. Since data traffic is flowing into set of PHYs 810 along a designated path, set of PHYs 810 is able to route the data traffic onward to one or more monitoring devices.

[0071] As can be appreciated from Fig. 8, an arrangement is provided for providing zero delay in a fast Ethernet environment. Given that the inline set of direction passive couplers is passive and does not require power, the possibility of auto-negotiation due to power disruption is substantially eliminated. Thus, even if the network tap suffers a power disruption, the power situation of the network tap does not affect the communication link between Network A and Network B.

[0072] Discussion of the zero-delay arrangement is provided in a related application entitled "Gigabits Zero-Delay Tap and Methods Thereof," US Application No. 61/308,981, Attorney Docket No. NETO-P017P1, filed on 2/28/2010, by inventors Matityahu et al., which is the priority document of the present patent application filed initially as International patent application WO 2011/106589 A2.

[0073] As can be appreciated from the foregoing, one or more embodiments of the present invention provide for a time machine device for managing data traffic through a network. With a time machine device, data are stored at line rate, thereby enabling the data to be readily available for analysis. By providing for playback, data may be extracted and analyzed at a later date. Further, the time machine device provides for the data to be forwarded to other media types.

[0074] While this invention has been described in terms of several preferred embodiments, there are alterations, permutations, and equivalents, which fall within the scope of this invention. Although various examples are provided herein, it is intended that these examples be illustrative and not limiting with respect to the invention.

[0075] Also, the title and summary are provided herein for convenience and should not be used to construe the scope of the claims herein. Further, the abstract is written in a highly abbreviated form and is provided herein for convenience and thus should not be employed to construe or limit the overall invention, which is expressed in the claims. If the term "set" is employed herein, such term is intended to have its commonly understood mathematical meaning to cover zero, one, or more than one member. It should also be noted that there are many alternative ways of implementing the methods and apparatuses of the present invention.


Claims

1. A time machine arrangement (206, 606) configured to capture network data traffic between at least two network devices (202, 204) in a network environment in which the time machine arrangement (206, 606) is connected, the network environment containing said at least two network devices (202, 204), an analyzer (208) and the time machine arrangement (206, 606) being positioned between the at least two network devices (202, 204), the time machine arrangement (206, 606) comprising:
a set of network ports (610, 612), said set of network ports coupling the time machine arrangement (206,606) to said at least two network devices and said set of network ports including a set of input network ports (610) configured to receive data traffic and a set of output network ports (612) configured to output said data traffic from said time machine arrangement (206, 606), and a set of processors (216) configured to manage and perform filtering on the flow of data traffic through the time machine arrangement (206, 608) and to forward to said analyzer (208) only data packets that meet a criteria established for the analyzer (208),
characterized in that
said set of processors (216) includes:

a scheduler component (218) configured to determine a percentage of data traffic that can be handled by each processor of the set of processors (216);

a filtering component (220) configured to apply a set of filters on said data traffic, the filtering component (220) configured to apply at least one of ingress filtering, egress filtering and deep packet inspection of received data traffic, wherein ingress filtering refers to a technique for verifying the origination of data packets of said data traffic, wherein egress filtering refers to a technique for restricting the flow of outbound data traffic to said output network ports, and wherein deep packet inspection refers to a technique for analyzing data for security and/or data mining purposes;

a storage memory component (224) configured to store data traffic copied by time machine arrangement (206, 606), wherein only data packets from said data traffic that meet certain conditions are stored and wherein said time machine arrangement (206, 606) is configured to save incoming data packets over old data once said storage memory component (224) has reached its maximum capacity; and

a trigger component (228) configured to define a set of conditions under which a set of data packets stored within said storage component (224) is protected from being overwritten.


 
2. The time machine arrangement (206, 606) of claim 1, wherein said scheduler component (218) is configured to direct said data traffic based on a percentage rule.
 
3. The time machine arrangement (206, 606) of claim 2, wherein said scheduler component (218) is configured for redirecting a first set of data packets flowing to a first processor of said set of processors (216) if said first processor is not available to perform processing.
 
4. The time machine arrangement (206, 606) of any preceding claim, wherein said trigger component (228) is configurable, thereby enabling said set of conditions to be configured to a user's specification.
 
5. The time machine arrangement (206, 606) of any preceding claim, wherein said storage memory (224) component includes
a set of memory devices configured to store said data traffic, and
a memory controller (226) configured at least for managing the flow of said data traffic to said set of memory devices.
 
6. The time machine arrangement (206, 606) of claim 5, wherein said memory controller (226) is configured for redirecting said flow of said data traffic when a first memory device of said set of memory devices is inaccessible.
 
7. The time machine arrangement (206, 606) of claim 5, wherein a time stamp is added to each data packet of said data traffic before storing said each data packet in one of said set of memory devices.
 
8. The time machine arrangement (206, 606) of any preceding claim, further including a pre-processing module (210) configured for performing preliminary analysis on said data traffic flowing through said network environment, wherein said pre-processing module (210) includes at least an aggregating component (212) configured for combining a plurality of data packets flowing through said set of network ports into a single data stream.
 
9. The time machine arrangement (206, 606) of one of the claims 1 to 8, further comprising an encryption component (222) configured to encrypt data traffic flowing through said time machine arrangement (206, 606).
 
10. A method carried out in a time machine arrangement (206, 606), for capturing network data traffic between at least two network devices (202, 204) in a network environment in which the time machine arrangement (206, 606) is connected, the network environment containing said at least two network devices (202, 204), an analyzer (208) and the time machine arrangement (206, 606) being positioned between the at least two network devices (202, 204), the method comprising:

- coupling, by a set of network ports (610, 612), said time machine arrangement (206,606) to said at least two network devices, said set of network ports including a set of input network ports (610) for receiving data traffic and a set of output network ports (612) for outputting said data traffic from said time machine arrangement (206, 606);

- managing and performing, by a set of processors (216), filtering on the flow of data traffic through the time machine arrangement (206, 608), and forwarding, by said set of processors, to said analyzer (208) only data packets that meet a criteria established for the analyzer (208), the method being characterized in that

- said set of processors (216) includes :

a scheduler component (218) for determining a percentage of data traffic that can be handled by each processor of the set of processors (216);

a filtering component (220) for applying a set of filters on said data traffic, the filtering component (220) applying at least one of ingress filtering, egress filtering and deep packet inspection of received data traffic, wherein ingress filtering refers to a technique for verifying the origination of data packets of said data traffic, wherein egress filtering refers to a technique for restricting the flow of outbound data traffic to said output network ports, and wherein deep packet inspection refers to a technique for analyzing data for security and/or data mining purposes;

a storage memory component (224) for storing data traffic copied by time machine arrangement (206, 606), wherein only data packets from said data traffic that meet certain conditions are stored and wherein said time machine arrangement (206, 606) saves incoming data packets over old data once said storage memory component (224) has reached its maximum capacity; and

a trigger component (228) for defining a set of conditions under which a set of data packets stored within said storage component (224) is protected from being overwritten.


 
11. The method of claim 10, further including performing preliminary assessment on said monitored set of data packets wherein said preliminary assessment including at least one of aggregation and preliminary filtering.
 
12. The method of claim 10 or 11, further including exporting at least a portion of said monitored set of data packets to an external location, wherein said external locations include at least one of a monitoring device and an external memory device.
 


Ansprüche

1. Eine Zeitmaschinenanordnung (206, 606), die konfiguriert ist, um Netzwerkdatenverkehr zwischen mindestens zwei Netzwerkvorrichtungen (202, 204) in einer Netzwerkumgebung, in der die Zeitmaschinenanordnung (206, 606) verbunden ist, zu erfassen, wobei die Netzwerkumgebung die mindestens zwei Netzwerkvorrichtungen (202, 204), einen Analysator (208) und die Zeitmaschinenanordnung (206, 606), die zwischen den mindestens zwei Netzwerkvorrichtungen (202, 204) positioniert ist, enthält, wobei die Zeitmaschinenanordnung (206, 606) Folgendes beinhaltet:

einen Satz Netzwerk-Ports (610, 612), wobei der Satz Netzwerk-Ports die Zeitmaschinenanordnung (206, 606) mit den mindestens zwei Netzwerkvorrichtungen koppelt und der Satz Netzwerk-Ports einen Satz Eingangs-Netzwerk-Ports (610), die konfiguriert sind, um Datenverkehr zu empfangen, und einen Satz Ausgangs-Netzwerk-Ports (612), die konfiguriert sind, um den Datenverkehr aus der Zeitmaschinenanordnung (206, 606) auszugeben, umfasst, und

einen Satz Prozessoren (216), die konfiguriert sind, um das Filtern des Datenverkehrsflusses durch die Zeitmaschinenanordnung (206, 608) zu managen und durchzuführen und um lediglich Datenpakete, die ein für den Analysator (208) erstelltes Kriterium erfüllen, an den Analysator (208) weiterzuleiten,

dadurch gekennzeichnet, dass

der Satz Prozessoren (216) Folgendes umfasst:

eine Scheduler-Komponente (218), die konfiguriert ist, um einen Prozentanteil des Datenverkehrs, der von jedem Prozessor des Satzes Prozessoren (216) bewältigt werden kann, zu bestimmen;

eine Filterkomponente (220), die konfiguriert ist, um einen Satz Filter auf den Datenverkehr anzuwenden, wobei die Filterkomponente (220) konfiguriert ist, um mindestens eines von Ingress-Filtern, Egress-Filtern und Deep Packet Inspection von empfangenem Datenverkehr anzuwenden, wobei sich Ingress-Filtern auf eine Technik zum Verifizieren des Ursprungs von Datenpaketen des Datenverkehrs bezieht, wobei sich Egress-Filtern auf eine Technik zum Beschränken des ausgehenden Datenverkehrsflusses zu den Ausgangs-Netzwerk-Ports bezieht und wobei sich Deep Packet Inspection auf eine Technik zum Analysieren von Daten für Sicherheits- und/oder Data-Mining-Zwecke bezieht;

eine Datenspeicher-Speicherkomponente (224), die konfiguriert ist, um von der Zeitmaschinenanordnung (206, 606) kopierten Datenverkehr zu speichern, wobei lediglich Datenpakete aus dem Datenverkehr, die gewisse Bedingungen erfüllen, gespeichert werden, und wobei die Zeitmaschinenanordnung (206, 606) konfiguriert ist, um eingehende Datenpakete über alten Daten abzuspeichern, sobald die Datenspeicher-Speicherkomponente (224) ihre maximale Kapazität erreicht hat; und

eine Auslösekomponente (228), die konfiguriert ist, um einen Satz Bedingungen zu definieren, unter denen ein innerhalb der Datenspeicherkomponente (224) gespeicherter Satz Datenpakete vor dem Überschreiben geschützt ist.


 
2. Zeitmaschinenanordnung (206, 606) gemäß Anspruch 1, wobei die Scheduler-Komponente (218) konfiguriert ist, um den Datenverkehr basierend auf einer Prozentanteilsregel zu leiten.
 
3. Zeitmaschinenanordnung (206, 606) gemäß Anspruch 2, wobei die Scheduler-Komponente (218) zum Umleiten eines ersten Satzes Datenpakete, die zu einem ersten Prozessor des Satzes Prozessoren (216) fließen, falls der erste Prozessor nicht zum Durchführen der Verarbeitung zur Verfügung steht, konfiguriert ist.
 
4. Zeitmaschinenanordnung (206, 606) gemäß einem der vorhergehenden Ansprüche, wobei die Auslösekomponente (228) konfigurierbar ist, wodurch ermöglicht wird, dass der Satz Bedingungen gemäß einer Benutzerspezifikation konfiguriert wird.
 
5. Zeitmaschinenanordnung (206, 606) gemäß einem der vorhergehenden Ansprüche, wobei die Datenspeicher-Speicherkomponente (224) Folgendes umfasst:

einen Satz Speichervorrichtungen, die konfiguriert sind, um den Datenverkehr zu speichern, und

eine Speichersteuerung (226), die mindestens zum Managen des Datenverkehrsflusses zu dem Satz Speichervorrichtungen konfiguriert ist.


 
6. Zeitmaschinenanordnung (206, 606) gemäß Anspruch 5, wobei die Speichersteuerung (226) zum Umleiten des Datenverkehrsflusses, wenn auf eine erste Speichervorrichtung des Satzes Speichervorrichtungen nicht zugegriffen werden kann, konfiguriert ist.
 
7. Zeitmaschinenanordnung (206, 606) gemäß Anspruch 5, wobei jedem Datenpaket des Datenverkehrs ein Zeitstempel hinzugefügt wird, bevor jedes Datenpaket in einer von dem Satz Speichervorrichtungen gespeichert wird.
 
8. Zeitmaschinenanordnung (206, 606) gemäß einem der vorhergehenden Ansprüche, ferner umfassend ein Vorverarbeitungsmodul (210), das zum Durchführen einer vorausgehenden Analyse des durch die Netzwerkumgebung fließenden Datenverkehrs konfiguriert ist, wobei das Vorverarbeitungsmodul (210) mindestens eine Aggregierkomponente (212) umfasst, die zum Kombinieren einer Vielzahl von durch den Satz Netzwerk-Ports fließenden Datenpaketen zu einem einzigen Datenstrom konfiguriert ist.
 
9. Zeitmaschinenanordnung (206, 606) gemäß einem der Ansprüche 1 bis 8, ferner beinhaltend eine Verschlüsselungskomponente (222), die konfiguriert ist, um durch die Zeitmaschinenanordnung (206, 606) fließenden Datenverkehr zu verschlüsseln.
 
10. Ein Verfahren, das in einer Zeitmaschinenanordnung (206, 606) ausgeführt wird, zum Erfassen von Netzwerkdatenverkehr zwischen mindestens zwei Netzwerkvorrichtungen (202, 204) in einer Netzwerkumgebung, in der die Zeitmaschinenanordnung (206, 606) verbunden ist, wobei die Netzwerkumgebung die mindestens zwei Netzwerkvorrichtungen (202, 204), einen Analysator (208) und die Zeitmaschinenanordnung (206, 606), die zwischen den mindestens zwei Netzwerkvorrichtungen (202, 204) positioniert ist, enthält, wobei das Verfahren Folgendes beinhaltet:

- Koppeln, durch einen Satz Netzwerk-Ports (610, 612), der Zeitmaschinenanordnung (206, 606) mit den mindestens zwei Netzwerkvorrichtungen, wobei der Satz Netzwerk-Ports einen Satz Eingangs-Netzwerk-Ports (610) zum Empfangen von Datenverkehr und einen Satz Ausgangs-Netzwerk-Ports (612) zum Ausgeben des Datenverkehrs aus der Zeitmaschinenanordnung (206, 606) umfasst;

- Managen und Durchführen, durch einen Satz Prozessoren (216), des Filterns des Datenverkehrsflusses durch die Zeitmaschinenanordnung (206, 608) und Weiterleiten, durch den Satz Prozessoren, lediglich der Datenpakete, die ein für den Analysator (208) erstelltes Kriterium erfüllen, an den Analysator (208), wobei das Verfahren dadurch gekennzeichnet ist, dass

- der Satz Prozessoren (216) Folgendes umfasst:

eine Scheduler-Komponente (218) zum Bestimmen eines Prozentanteils des Datenverkehrs, der von jedem Prozessor des Satzes Prozessoren (216) bewältigt werden kann;

eine Filterkomponente (220) zum Anwenden eines Satzes Filter auf den Datenverkehr, wobei die Filterkomponente (220) mindestens eines von Ingress-Filtern, Egress-Filtern und Deep Packet Inspection von empfangenem Datenverkehr anwendet, wobei sich Ingress-Filtern auf eine Technik zum Verifizieren des Ursprungs von Datenpaketen des Datenverkehrs bezieht, wobei sich Egress-Filtern auf eine Technik zum Beschränken des ausgehenden Datenverkehrsflusses zu den Ausgangs-Netzwerk-Ports bezieht und wobei sich Deep Packet Inspection auf eine Technik zum Analysieren von Daten für Sicherheits- und/oder Data-Mining-Zwecke bezieht;

eine Datenspeicher-Speicherkomponente (224) zum Speichern von von der Zeitmaschinenanordnung (206, 606) kopiertem Datenverkehr, wobei lediglich Datenpakete aus dem Datenverkehr, die gewisse Bedingungen erfüllen, gespeichert werden, und wobei die Zeitmaschinenanordnung (206, 606) eingehende Datenpakete über alten Daten abspeichert, sobald die Datenspeicher-Speicherkomponente (224) ihre maximale Kapazität erreicht hat; und

eine Auslösekomponente (228) zum Definieren eines Satzes Bedingungen, unter denen ein innerhalb der Datenspeicherkomponente (224) gespeicherter Satz Datenpakete vor dem Überschreiben geschützt ist.


 
11. Method according to claim 10, further comprising performing a preliminary assessment of the monitored set of data packets, wherein the preliminary assessment comprises at least one of aggregation and preliminary filtering.
 
12. Method according to claim 10 or 11, further comprising exporting at least part of the monitored set of data packets to an external location, wherein the external location comprises at least one of a monitoring device and an external memory device.
 


Claims

1. A time machine arrangement (206, 606) configured to capture network data traffic between at least two network devices (202, 204) in a network environment in which the time machine arrangement (206, 606) is connected, the network environment containing said at least two network devices (202, 204), an analyzer (208), and the time machine arrangement (206, 606) positioned between the at least two network devices (202, 204), the time machine arrangement (206, 606) comprising:

a set of network ports (610, 612), said set of network ports coupling the time machine arrangement (206, 606) to said at least two network devices, and said set of network ports including a set of ingress network ports (610) configured to receive data traffic and a set of egress network ports (612) configured to output said data traffic from said time machine arrangement (206, 606), and a set of processors (216) configured to manage and perform filtering on the data traffic flow through the time machine arrangement (206, 606) and to forward to said analyzer (208) only data packets that fulfil a criterion established for the analyzer (208),

characterized in that

said set of processors (216) includes:

a scheduler component (218) configured to determine a percentage of data traffic that can be handled by each processor of the set of processors (216);

a filtering component (220) configured to apply a set of filters to said data traffic, the filtering component (220) being configured to apply at least one of ingress filtering, egress filtering, and deep packet inspection of received data traffic, ingress filtering referring to a technique for verifying the origin of data packets of said data traffic, egress filtering referring to a technique for restricting the outgoing data traffic flow to said egress network ports, and deep packet inspection referring to a technique for analyzing data for security and/or data mining purposes;

a storage memory component (224) configured to store data traffic copied by the time machine arrangement (206, 606), wherein only data packets from said data traffic that fulfil certain conditions are stored, and wherein said time machine arrangement (206, 606) is configured to save incoming data packets over old data once said storage memory component (224) has reached its maximum capacity; and

a trigger component (228) configured to define a set of conditions under which a set of data packets stored within said storage memory component (224) is protected from being overwritten.
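
The filtering component (220) of claim 1 names three techniques: ingress filtering, egress filtering, and deep packet inspection. A rough, assumption-laden picture of what each check could look like is sketched below; the trusted-source list, the allowed output ports, and the inspected byte pattern are invented purely for the example.

    TRUSTED_SOURCES = {"router-102", "switch-104"}      # hypothetical trusted origins
    ALLOWED_EGRESS_PORTS = {"port-612a", "port-612b"}   # hypothetical egress ports

    def ingress_filter(packet):
        # Verify the origin of the data packet.
        return packet["source"] in TRUSTED_SOURCES

    def egress_filter(packet):
        # Restrict the outgoing traffic flow to the permitted egress network ports.
        return packet["out_port"] in ALLOWED_EGRESS_PORTS

    def deep_packet_inspection(packet):
        # Inspect the payload itself, e.g. for a security-relevant byte pattern.
        return b"suspicious-pattern" not in packet["payload"]

    sample = {"source": "router-102", "out_port": "port-612a", "payload": b"hello"}
    print(ingress_filter(sample) and egress_filter(sample) and deep_packet_inspection(sample))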


 
2. The time machine arrangement (206, 606) of claim 1, wherein said scheduler component (218) is configured to direct said data traffic based on a percentage rule.

3. The time machine arrangement (206, 606) of claim 2, wherein said scheduler component (218) is configured to redirect a first set of data packets flowing toward a first processor of said set of processors (216) if said first processor is unavailable to perform processing.
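
Claims 2 and 3 describe percentage-based direction of traffic and redirection away from an unavailable processor. A minimal sketch of such a scheduler follows; the processor names, the share table, and the availability flags are assumptions of the example.

    import random

    class Scheduler:
        # Toy scheduler: each processor is assigned a percentage of the traffic;
        # packets headed for an unavailable processor are redirected.
        def __init__(self, shares):
            self.shares = shares                                  # {processor: percent}
            self.available = {name: True for name in shares}

        def pick(self):
            names = list(self.shares)
            weights = [self.shares[n] for n in names]
            chosen = random.choices(names, weights=weights, k=1)[0]
            if self.available[chosen]:
                return chosen
            fallback = [n for n in names if self.available[n]]    # redirect elsewhere
            return fallback[0] if fallback else None

    sched = Scheduler({"cpu0": 50, "cpu1": 30, "cpu2": 20})
    sched.available["cpu1"] = False            # cpu1 cannot perform processing
    print([sched.pick() for _ in range(5)])    # cpu1 never appears in the output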
 
4. The time machine arrangement (206, 606) of any preceding claim, wherein said trigger component (228) is configurable, thereby allowing said set of conditions to be configured according to a user's specification.

5. The time machine arrangement (206, 606) of any preceding claim, wherein said storage memory component (224) includes
a set of memory devices configured to store said data traffic, and
a memory controller (226) configured at least to manage the flow of said data traffic to said set of memory devices.

6. The time machine arrangement (206, 606) of claim 5, wherein said memory controller (226) is configured to redirect said flow of said data traffic when a first memory device of said set of memory devices is inaccessible.
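
The memory controller (226) of claims 5 and 6 manages the flow of stored traffic across a set of memory devices and reroutes it when a device becomes inaccessible. The sketch below assumes a simple round-robin placement policy and in-memory lists as stand-ins for the memory devices.

    class MemoryController:
        # Toy controller: spread stored packets across memory devices,
        # skipping any device currently marked inaccessible.
        def __init__(self, device_names):
            self.devices = {name: [] for name in device_names}
            self.accessible = {name: True for name in device_names}
            self._next = 0

        def store(self, packet):
            names = list(self.devices)
            for _ in range(len(names)):
                name = names[self._next % len(names)]
                self._next += 1
                if self.accessible[name]:
                    self.devices[name].append(packet)     # normal flow
                    return name
            raise RuntimeError("no accessible memory device")

    controller = MemoryController(["device-0", "device-1"])
    controller.accessible["device-0"] = False                # device-0 is inaccessible
    print([controller.store(f"pkt{i}") for i in range(3)])   # flow redirected to device-1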
 
7. The time machine arrangement (206, 606) of claim 5, wherein a timestamp is added to each data packet of said data traffic before each data packet is stored in a device of said set of memory devices.

8. The time machine arrangement (206, 606) of any preceding claim, further including a pre-processing module (210) configured to perform a preliminary analysis on said data traffic flowing through said network environment, said pre-processing module (210) including at least an aggregating component (212) configured to combine a plurality of data packets flowing through said set of network ports into a single data stream.

9. The time machine arrangement (206, 606) of any one of claims 1 to 8, further comprising an encryption component (222) configured to encrypt data traffic flowing through said time machine arrangement (206, 606).
 
10. A method, performed in a time machine arrangement (206, 606), for capturing network data traffic between at least two network devices (202, 204) in a network environment in which the time machine arrangement (206, 606) is connected, the network environment containing said at least two network devices (202, 204), an analyzer (208), and the time machine arrangement (206, 606) positioned between the at least two network devices (202, 204), the method comprising:

- coupling, by a set of network ports (610, 612), said time machine arrangement (206, 606) to said at least two network devices, said set of network ports including a set of ingress network ports (610) for receiving data traffic and a set of egress network ports (612) for outputting said data traffic from said time machine arrangement (206, 606);

- managing and performing, by a set of processors (216), filtering on the data traffic flow through the time machine arrangement (206, 606), and forwarding, by said set of processors, to said analyzer (208) only data packets that fulfil a criterion established for the analyzer (208), the method being characterized in that

- said set of processors (216) includes:

a scheduler component (218) for determining a percentage of data traffic that can be handled by each processor of the set of processors (216);

a filtering component (220) for applying a set of filters to said data traffic, the filtering component (220) applying at least one of ingress filtering, egress filtering, and deep packet inspection of received data traffic, ingress filtering referring to a technique for verifying the origin of data packets of said data traffic, egress filtering referring to a technique for restricting the outgoing data traffic flow to said egress network ports, and deep packet inspection referring to a technique for analyzing data for security and/or data mining purposes;

a storage memory component (224) for storing data traffic copied by the time machine arrangement (206, 606), wherein only data packets from said data traffic that fulfil certain conditions are stored and wherein said time machine arrangement (206, 606) saves incoming data packets over old data once said storage memory component (224) has reached its maximum capacity; and

a trigger component (228) for defining a set of conditions under which a set of data packets stored within said storage memory component (224) is protected from being overwritten.


 
11. The method of claim 10, further including performing a preliminary assessment on said monitored set of data packets, said preliminary assessment including at least one of aggregation and preliminary filtering.

12. The method of claim 10 or claim 11, further including exporting at least part of said monitored set of data packets to an external location, said external location including at least one of a monitoring device and an external memory device.
 




Drawing