Network Working Group                                     Brooks Hickman
Internet-Draft                                    Spirent Communications
Expiration Date: December 2001                              David Newman
                                                            Network Test
                                                            Terry Martin
                                                           M2networx INC
                                                               June 2001

           Benchmarking Methodology for Firewall Performance
                    <draft-ietf-bmwg-firewall-02.txt>

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Table of Contents

   1. Introduction
   2. Requirements
   3. Scope
   4. Test Setup
      4.1 Test Considerations
      4.2 Virtual Clients/Servers
      4.3 Test Traffic Requirements
      4.4 DUT/SUT Traffic Flows
      4.5 Multiple Client/Server Testing
      4.6 NAT (Network Address Translation)
      4.7 Rule Sets
      4.8 Web Caching
      4.9 Authentication
   5. Benchmarking Tests
      5.1 Concurrent Connection Capacity
      5.2 Maximum Connection Setup Rate
      5.3 Connection Establishment Time
      5.4 Connection Teardown Time
      5.5 Denial Of Service Handling
      5.6 HTTP Goodput
      5.7 IP Fragmentation Handling
      5.8 Illegal Traffic Handling
      5.9 Latency
   Appendices
      A. HyperText Transfer Protocol (HTTP)
   References
1. Introduction

   This document provides methodologies for the performance
   benchmarking of firewalls. It covers benchmarks in four areas:
   forwarding, connection, latency and filtering. In addition to
   defining the tests, this document also describes specific formats
   for reporting the results of the tests.

   A previous document, "Benchmarking Terminology for Firewall
   Performance" (RFC 2647), defines many of the terms that are used in
   this document. The terminology document SHOULD be consulted before
   attempting to make use of this document.

2. Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119.

3. Scope

   Firewalls can provide a single point of defense between networks.
   Usually, a firewall protects private networks from the public or
   shared networks to which it is connected. A firewall can be as
   simple as a device that filters different packets or as complex as
   a group of devices that combine packet filtering and
   application-level proxy or network translation services. This RFC
   will focus on developing benchmark testing of DUT/SUTs, wherever
   possible, independent of any firewall implementation. These tests
   will evaluate the ability of firewall devices to control and manage
   applications like the World Wide Web, file transfer and e-mail.
   Even though there are many different methods of managing
   application-level traffic, this RFC does not condone or promote any
   particular process or procedure.
   Its goal is to present procedures that evaluate firewall
   performance independent of their implementation.

4. Test Setup

   Test configurations defined in this document will be confined to
   dual-homed and tri-homed, as shown in figure 1 and figure 2
   respectively.

   Firewalls employing dual-homed configurations connect two networks.
   One interface of the firewall is attached to the unprotected
   network, typically the public network (Internet). The other
   interface is connected to the protected network, typically the
   internal LAN. In the case of dual-homed configurations, servers
   which are made accessible to the public (unprotected) network are
   attached to the private (protected) network.

                        +----------+
         +----------+   |          |   +----------+
         | Servers/ |---|          |---| Servers/ |
         | Clients  |   | DUT/SUT  |   | Clients  |
         +----------+   |          |   +----------+
                        +----------+
          Protected                     Unprotected
           Network                        Network

                    Figure 1 (Dual-Homed)

   Tri-homed configurations employ a third segment called a DMZ. With
   tri-homed configurations, servers accessible to the public network
   are attached to the DMZ. Tri-homed configurations offer additional
   security by separating servers accessible to the public network
   from internal hosts.

                        +----------+
         +----------+   |          |   +----------+
         | Clients  |---|          |---| Servers/ |
         +----------+   | DUT/SUT  |   | Clients  |
                        |          |   +----------+
                        +----+-----+
          Protected          |          Unprotected
           Network           |            Network
                             |
                            DMZ
                             |
                       +-----------+
                       |  Servers  |
                       +-----------+

                    Figure 2 (Tri-Homed)

4.1 Test Considerations

4.2 Virtual Clients/Servers

   Since firewall testing may involve data sources which emulate
   multiple users or hosts, the methodology uses the terms virtual
   clients/servers. For these firewall tests, virtual clients/servers
   specify application layer entities which may not be associated with
   a unique physical interface.
   For example, four virtual clients may originate from the same data
   source. The test report SHOULD indicate the number of virtual
   clients and virtual servers participating in the test on a per
   interface basis (See 4.4).

   Testers MUST synchronize the start of a test; all data sources
   participating in the test MUST begin initiating connections within
   a specified time of each other.

4.3 Test Traffic Requirements

   While the function of a firewall is to enforce access control
   policies, the criteria by which those policies are defined vary
   depending on the implementation. Firewalls may use network layer,
   transport layer or, in many cases, application layer criteria to
   make access-control decisions. Therefore, the test equipment used
   to benchmark the DUT/SUT MUST consist of real clients and servers
   generating legitimate layer seven conversations.

   For the purposes of benchmarking firewall performance, HTTP 1.1
   will be referenced in this document, although the methodologies may
   be used as a template for benchmarking with other applications.
   Since testing may involve proxy based DUT/SUTs, HTTP version
   considerations are discussed in appendix A.

4.4 DUT/SUT Traffic Flows

   Since the number of interfaces is not fixed, the traffic flows will
   be dependent upon the configuration used in benchmarking the
   DUT/SUT. Note that the term "traffic flows" is associated with
   client-to-server requests.
   For dual-homed configurations, there are two unique traffic flows:

      Client               Server
      ------               ------
      Protected         -> Unprotected
      Unprotected       -> Protected

   For tri-homed configurations, there are three unique traffic flows:

      Client               Server
      ------               ------
      Protected         -> Unprotected
      Protected         -> DMZ
      Unprotected       -> DMZ

4.5 Multiple Client/Server Testing

   One or more clients may target multiple servers for a given
   application. Each virtual client MUST initiate requests
   (connections, object transfers, etc.) in a round-robin fashion. For
   example, if the test consisted of six virtual clients targeting
   three servers, the pattern would be as follows:

      Client      Target Server (in order of request)
      #1          1 2 3 1...
      #2          2 3 1 2...
      #3          3 1 2 3...
      #4          1 2 3 1...
      #5          2 3 1 2...
      #6          3 1 2 3...

4.6 NAT (Network Address Translation)

   Many firewalls implement network address translation (NAT), a
   function which translates internal host IP addresses attached to
   the protected network to a virtual IP address for communicating
   across the unprotected network (Internet). This involves additional
   processing on the part of the DUT/SUT and may impact performance.
   Therefore, tests SHOULD be run with NAT disabled and NAT enabled to
   determine the performance differentials. The test report SHOULD
   indicate whether NAT was enabled or disabled.

4.7 Rule Sets

   Rule sets are a collection of access control policies that
   determine which packets the DUT/SUT will forward and which it will
   reject. The criteria by which these access control policies may be
   defined will vary depending on the capabilities of the DUT/SUT. The
   scope of this document is limited to how the rule sets should be
   applied when testing the DUT/SUT.

   The firewall monitors the incoming traffic and checks to make sure
   that the traffic meets one of the defined rules before allowing it
   to be forwarded. It is RECOMMENDED that a rule be entered for each
   host (virtual client). Although many firewalls permit groups of IP
   addresses to be defined for a given rule, tests SHOULD be performed
   with large rule sets, which are more stressful to the DUT/SUT. The
   DUT/SUT SHOULD be configured to deny access to all traffic which
   was not previously defined in the rule set.

4.8 Web Caching

   Some firewalls include caching agents in order to reduce network
   load. When making a request through a caching agent, the caching
   agent attempts to service the response from its internal memory.
   The cache itself saves responses it receives, such as responses for
   HTTP GET requests. The test report SHOULD indicate whether web
   caching was enabled or disabled on the DUT/SUT.

4.9 Authentication

   Access control may involve authentication processes such as user,
   client or session authentication. Authentication is usually
   performed by devices external to the firewall itself, such as
   authentication servers, and may add to the latency of the system.
   Any authentication processes MUST be included as part of the
   connection setup process.

5. Benchmarking Tests

5.1 Concurrent Connection Capacity

5.1.1 Objective

   To determine the maximum number of concurrent TCP connections
   supported through or with the DUT/SUT, as defined in RFC 2647. This
   test will employ a step search algorithm to obtain the maximum
   number of concurrent TCP connections that the DUT/SUT can maintain.

5.1.2 Setup Parameters

   The following parameters MUST be defined for all tests.

   Connection Attempt Rate - The rate, expressed in connections per
   second, at which new TCP connection requests are attempted. The
   rate SHOULD be set lower than the maximum rate at which the DUT/SUT
   can accept new connection requests.
   Connection Step Count - Defines the number of additional TCP
   connections attempted for each iteration of the step search
   algorithm.

   Object Size - Defines the number of bytes to be transferred in
   response to an HTTP 1.1 GET request. It is RECOMMENDED to use the
   minimum object size supported by the media.

5.1.3 Procedure

   Each virtual client will attempt to establish TCP connections to
   its target server(s), using either the target server's IP address
   or NAT proxy address, at a fixed rate in a round robin fashion.
   Each iteration will involve the virtual clients attempting to
   establish a fixed number of additional TCP connections. This search
   algorithm will be repeated until either:

      - One or more of the additional connection attempts fail to
        complete.
      - One or more of the previously established connections fail.

   The test MUST also include application layer data transfers in
   order to validate the TCP connections since, in the case of proxy
   based DUT/SUTs, the tester does not own both sides of the
   connection. For the purposes of validation, the virtual client(s)
   will request an object from its target server(s) using an HTTP 1.1
   GET request, with both the client request and server response
   excluding the close connection-token in the connection header.
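The validation transfer described above can be sketched concretely. The following is a minimal illustration of building an HTTP 1.1 GET request whose headers exclude the "close" connection-token, so the underlying TCP connection stays open; the object path and host name are hypothetical, not mandated by this document.

```python
def build_validation_get(host: str, path: str = "/object.txt") -> bytes:
    """Build an HTTP 1.1 GET request for connection validation.

    HTTP 1.1 connections are persistent by default, so the request
    simply omits the "Connection: close" token, leaving the
    underlying TCP connection open for later periodic GETs.
    """
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        # No "Connection: close" header: the connection stays open.
    ]
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

request = build_validation_get("server1.example")
```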
   In addition, periodic HTTP GET requests MAY be required to keep the
   underlying TCP connection open (See Appendix A).

5.1.4 Measurements

   Maximum concurrent connections - Total number of TCP connections
   open for the last successful iteration performed in the search
   algorithm.

5.1.5 Reporting Format

5.1.5.1 Transport-Layer Reporting:

   The test report MUST note the connection attempt rate, connection
   step count and maximum concurrent connections measured.

5.1.5.2 Application-Layer Reporting:

   The test report MUST note the object size(s) and the use of HTTP
   1.1 client and server.

5.1.5.3 Log Files

   A log file MAY be generated which includes the TCP connection
   attempt rate, HTTP object size and, for each iteration:

      - Step Iteration
      - Pass/Fail Status
      - Total TCP connections established
      - Number of previously established TCP connections dropped
      - Number of the additional TCP connections that failed to
        complete

5.2 Maximum Connection Setup Rate

5.2.1 Objective

   To determine the maximum TCP connection setup rate through or with
   the DUT/SUT, as defined by RFC 2647. This test will employ a search
   algorithm to obtain the maximum rate at which TCP connections can
   be established through or with the DUT/SUT.

5.2.2 Setup Parameters

   The following parameters MUST be defined.
   Initial Attempt Rate - The rate, expressed in connections per
   second, at which the initial TCP connection requests are attempted.

   Number of Connections - Defines the number of TCP connections that
   must be established. The number MUST be between the number of
   participating virtual clients and the maximum number supported by
   the DUT/SUT. It is RECOMMENDED not to exceed the concurrent
   connection capacity found in section 5.1.

   Connection Teardown Rate - The rate, expressed in connections per
   second, at which the tester will attempt to teardown TCP
   connections between each iteration. The rate SHOULD be set lower
   than the rate at which the DUT/SUT can teardown TCP connections.

   Age Time - The time, expressed in seconds, the DUT/SUT will keep a
   connection in its state table after receiving a TCP FIN or RST
   packet.

   Object Size - Defines the number of bytes to be transferred in
   response to an HTTP 1.1 GET request. It is RECOMMENDED to use the
   minimum object size supported by the media.

5.2.3 Procedure

   An iterative search algorithm will be used to determine the maximum
   connection rate. This test iterates through different connection
   rates with a fixed number of connections attempted by the virtual
   clients to their associated server(s). Each iteration will use the
   same connection establishment and connection validation algorithms
   defined in the concurrent capacity test (See section 5.1).

   Between each iteration of the test, the tester must close all
   connections completed for the previous iteration. In addition, it
   is RECOMMENDED to abort all unsuccessful connection attempts. The
   tester will wait for the period of time, specified by age time,
   before continuing to the next iteration.
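The iteration loop above can be sketched as follows. This is a simplified model, not a mandated algorithm: the fixed step increment and the `attempt_at_rate` callback (returning True only when every connection and its validation transfer succeeds) are illustrative assumptions.

```python
def find_max_connection_rate(attempt_at_rate, start_rate, step, max_rate):
    """Sketch of the iterative search for the highest connection rate.

    attempt_at_rate(rate) models one iteration: attempt the fixed
    number of connections at `rate`, returning True only if all of
    them (and their validation transfers) succeed.  Between
    iterations the real tester closes all completed connections and
    waits for the DUT/SUT's age time before continuing.
    """
    best = 0
    rate = start_rate
    while rate <= max_rate:
        if attempt_at_rate(rate):
            best = rate      # iteration passed; try a higher rate
            rate += step
        else:
            break            # iteration failed; last passing rate wins
    return best

# Model a DUT/SUT that sustains at most 2500 connections/second.
print(find_max_connection_rate(lambda r: r <= 2500, 500, 500, 10000))
# prints 2500
```

A binary search over the same callback would converge faster; the linear step is shown only because it mirrors the step searches used elsewhere in this document.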
5.2.4 Measurements

   Highest connection rate - Highest rate, in connections per second,
   for which all TCP connections completed successfully.

5.2.5 Reporting Format

5.2.5.1 Transport-Layer Reporting:

   The test report MUST note the number of connections attempted,
   connection teardown rate, age time, and highest connection rate
   measured.

5.2.5.2 Application-Layer Reporting:

   The test report MUST note the object size(s) and the use of HTTP
   1.1 client and server.

5.2.5.3 Log Files

   A log file MAY be generated which includes the total TCP
   connections attempted, TCP connection teardown rate, age time, HTTP
   object size and, for each iteration:

      - Step Iteration
      - Pass/Fail Status
      - Total TCP connections established
      - Number of TCP connections that failed to complete

5.3 Connection Establishment Time

5.3.1 Objective

   To determine the connection establishment times through or with the
   DUT/SUT as a function of the number of open connections. A
   connection for a client/server application is not atomic, in that
   it not only involves transactions at the application layer, but
   involves first establishing a connection using one or more
   underlying connection oriented protocols (TCP, ATM, etc.).
   Therefore, it is encouraged to make separate measurements for each
   connection oriented protocol required in order to perform the
   application layer transaction.

5.3.2 Setup Parameters

   The following parameters MUST be defined.
   Connection Attempt Rate - The rate, expressed in connections per
   second, at which new TCP connection requests are attempted. It is
   RECOMMENDED not to exceed the maximum connection rate found in
   section 5.2.

   Connection Attempt Step Count - Defines the number of additional
   TCP connections attempted for each iteration of the step algorithm.

   Maximum Attempt Connection Count - Defines the maximum number of
   TCP connections attempted in the test. It is RECOMMENDED not to
   exceed the concurrent connection capacity found in section 5.1.

   Object Size - Defines the number of bytes to be transferred in
   response to an HTTP 1.1 GET request.

   Number of Requests - Defines the number of HTTP 1.1 GET requests
   per connection. Note that connection, in this case, refers to the
   underlying transport protocol.
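For a DUT/SUT that is not proxy based, the client-side handshake time can be measured directly around connect(). The sketch below is an illustration only (the address and attempt count are hypothetical); it reports the minimum, average and maximum values this test calls for, holding connections open so each sample is taken against a growing number of open connections.

```python
import socket
import time

def measure_establishment_times(address, attempts):
    """Time TCP connection establishment for `attempts` connections
    and return (minimum, average, maximum) in seconds.

    connect() returns once the client has completed the three-way
    handshake, so this direct measurement only applies to DUT/SUTs
    that are not proxy based.
    """
    times, socks = [], []
    for _ in range(attempts):
        t0 = time.perf_counter()
        sock = socket.create_connection(address)
        times.append(time.perf_counter() - t0)
        socks.append(sock)              # keep the connection open
    for sock in socks:
        sock.close()
    return min(times), sum(times) / len(times), max(times)
```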
5.3.3 Procedure

   Each virtual client will attempt to establish TCP connections to
   its target server(s) at a fixed rate in a round robin fashion. Each
   iteration will involve the virtual clients attempting to establish
   a fixed number of additional connections until the maximum attempt
   connection count is reached. As with the concurrent capacity tests,
   application layer data transfers will be performed. Each virtual
   client will request one or more objects from its target server(s)
   using one or more HTTP 1.1 GET requests, with both the client
   request and server response excluding the close connection-token in
   the connection header. In addition, periodic HTTP GET requests MAY
   be required to keep the underlying TCP connection open (See
   appendix A).

   Since testing may involve proxy based DUT/SUTs, which terminate the
   TCP connection, making a direct measurement of the TCP connection
   establishment time is not possible since the protocol involves an
   odd number of messages in establishing a connection. Therefore,
   when testing with proxy based firewalls, the datagram following the
   final ACK of the three-way handshake will be used in determining
   the connection setup time.

   The following shows the timeline for a TCP connection setup
   involving a proxy DUT/SUT and is referenced in the measurement
   section. Note that this method may be applied when measuring other
   connection oriented protocols involving an odd number of messages
   in establishing a connection.

      t0: Client sends a SYN.
      t1: Proxy sends a SYN/ACK.
      t2: Client sends the final ACK.
      t3: Proxy establishes separate connection with server.
      t4: Client sends TCP datagram to server.
     *t5: Proxy sends ACK of the datagram to client.

   * While t5 is not considered part of the TCP connection
   establishment, acknowledgement of t4 must be received for the
   connection to be considered successful.

5.3.4 Measurements

   For each iteration of the test, the tester MUST measure the
   minimum, maximum and average TCP connection establishment times.
   Measuring TCP connection establishment times will be done in two
   different ways, depending on whether or not the DUT/SUT is proxy
   based.

   If proxy based, the connection establishment time is considered to
   be from the time the first bit of the first SYN packet is
   transmitted by the client to the time the client transmits the
   first bit of the first TCP datagram, provided that the TCP datagram
   gets acknowledged (t4-t0 in the above timeline).

   For DUT/SUTs that are not proxy based, the TCP establishment time
   shall be directly measured and is considered to be from the time
   the first bit of the first SYN packet is transmitted by the client
   to the time the last bit of the final ACK in the three-way
   handshake is received by the target server.

   In addition, the tester SHOULD measure the minimum, maximum and
   average connection establishment times for other underlying
   connection oriented protocols which are required to be established
   for the client/server application to transfer an object. Each
   connection oriented protocol has its own set of transactions
   required for establishing a connection between two hosts, or a host
   and the DUT/SUT. For purposes of benchmarking firewall performance,
   the connection establishment time will be considered the interval
   between the transmission of the first bit of the first octet of the
   packet carrying the connection request to receipt of the last bit
   of the last octet of the last packet of the connection setup
   traffic received on the client or server, depending on whether a
   given connection requires an even or odd number of messages,
   respectively.

5.3.5 Reporting Format

   The test report MUST note the TCP connection attempt rate, TCP
   connection attempt step count, maximum TCP connections attempted,
   HTTP object size and number of GET requests per connection.

   For each connection oriented protocol the tester measured, the
   connection establishment time results SHOULD be in tabular form
   with a row for each iteration of the test. There SHOULD be a column
   for the iteration count, minimum connection establishment time,
   average connection establishment time, maximum connection
   establishment time, attempted connections completed, and attempted
   connections failed.

5.4 Connection Teardown Time

5.4.1 Objective

   To determine the connection teardown time through or with the
   DUT/SUT as a function of the number of open connections. As with
   the connection establishment time, separate measurements will be
   taken for each connection oriented protocol involved in closing a
   connection.

5.4.2 Setup Parameters

   The following parameters MUST be defined.
Number of ConnectionsInitial connections - Defines the number of TCP connections to be opened for transferring data objects. Number MUST be equal or greater than the number of virtual clients participating ininitialize the test. The number SHOULD be a multiple oftest with. It is RECOMMENDED not to exceed the virtual client participatingconcurrent connection capacity found in the test. Connection Ratesection 5.1. Initial connection rate - Defines the raterate, in connections per second, at which the initial TCP connections are established. Object Size - Defines the number ofattempted. It is RECOMMENDED not to exceed the maximum Connection setup rate found in section 5.2. Teardown attempt rate - The rate at which the tester will attempt to teardown TCP connections. Teardown step count - Defines the number of TCP connections the tester will attempt to teardown for each iteration of the step algorithm. Object size - Defines the number of bytes to be transferred across each connection. 188.8.131.52 Procedure Each virtual client will establish a FTPconnection to its respective server(s)in response to an HTTP 1.1 GET request during the initialization phase of the test as well as periodic GET requests, if required. 5.4.3 Procedure Prior to beginning a round robin fashion atstep algorithm, the connection rate.tester will initialize the test by establishing connections defined by initial connections. The transaction involved intest will use the same algorithm for establishing the FTPconnection should follow the procedure definedas described in Appendix A. Afterthe login process has been completed,connection capacity test(Section 5.1). For each iteration of the virtual clientstep algorithm, the tester will initiate a file transferattempt teardown the number of connections defined by issuingteardown step count at a "Get" command. The "Get" commandrate defined by teardown attempt rate. This will target a predefined file of Object Size bytes. 
be repeated until the tester has attempted to teardown all of the connections.

5.4.4 Measurements

For each iteration of the test, the tester MUST measure the minimum, average and maximum connection teardown times. As with the connection establishment time test, the tester SHOULD measure all connection oriented protocols which are being torn down.

5.4.5 Reporting Format

The test report MUST note the initial connections, initial connection rate, teardown attempt rate, teardown step count and object size. For each connection oriented protocol the tester measured, the connection teardown time results SHOULD be in tabular form with a row for each iteration of the test. There SHOULD be a column for the iteration count, minimum connection teardown time, average connection teardown time, maximum connection teardown time, attempted teardowns completed and attempted teardowns failed.

5.5 Denial Of Service Handling

5.5.1 Objective

To determine the effect of a denial of service attack on a DUT/SUT's connection establishment rates and/or goodput. The Denial Of Service Handling test MUST be run after obtaining baseline measurements from sections 5.2 and/or 5.6.
The TCP SYN flood attack exploits TCP's three-way handshake mechanism by having an attacking source host generate TCP SYN packets with random source addresses towards a victim host, thereby consuming that host's resources. Some firewalls employ mechanisms to guard against SYN attacks. If such mechanisms exist on the DUT/SUT, tests SHOULD be run with these mechanisms enabled to determine how well the DUT/SUT can maintain, under such attacks, the baseline connection rates and goodput determined in section 5.2 and section 5.6, respectively.

5.5.2 Setup Parameters

Use the same setup parameters as defined in section 5.2.2 or 5.6.2, depending on whether testing against the baseline connection setup rate test or goodput test, respectively. In addition, the following setup parameters MUST be defined.

SYN attack rate - Defines the rate, in packets per second, at which
the server(s) are targeted with TCP SYN packets.

5.5.3 Procedure

Use the same procedure as defined in section 5.2.3 or 5.6.3, depending on whether testing against the baseline connection setup rate test or goodput test, respectively. In addition, the tester will generate TCP SYN packets targeting the server(s) IP address or NAT proxy address at a rate defined by the SYN attack rate. The tester originating the TCP SYN attack MUST be attached to the unprotected network. In addition, the tester MUST NOT respond to the SYN/ACK packets sent by the target server in response to the SYN packets.

5.5.4 Measurements

Perform the same measurements as defined in section 5.2.4 or 5.6.4, depending on whether testing against the baseline connection setup rate test or goodput test, respectively. In addition, the tester SHOULD track SYN packets associated with the SYN attack which the DUT/SUT forwards on the protected or DMZ interface(s).
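The SYN flood traffic described above consists of ordinary TCP segments with only the SYN control bit set and randomized source addresses. The following non-normative Python sketch (not part of this methodology; the function name and field choices are illustrative) shows the shape of such a segment. The checksum is left at zero for brevity; a conforming tester would compute it over the IP pseudo-header.

```python
import random
import socket
import struct

def build_syn_segment(dst_port=80):
    """Build a minimal 20-byte TCP header with only the SYN control
    bit set, paired with a randomly chosen source address, as a SYN
    flood generator would. Checksum is left at zero for brevity."""
    src_ip = socket.inet_ntoa(struct.pack("!I", random.getrandbits(32)))
    src_port = random.randint(1024, 65535)
    seq = random.getrandbits(32)
    offset_flags = (5 << 12) | 0x02   # data offset = 5 words, SYN flag only
    header = struct.pack("!HHIIHHHH",
                         src_port, dst_port,
                         seq, 0,          # sequence / acknowledgment numbers
                         offset_flags,
                         65535,           # advertised window
                         0, 0)            # checksum (unset), urgent pointer
    return src_ip, header

ip, hdr = build_syn_segment()
```

Because the tester never answers the resulting SYN/ACKs, each such segment leaves a half-open connection on the target, which is precisely the resource exhaustion the DUT/SUT's SYN guard mechanisms are meant to absorb.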
5.5.5 Reporting Format

The test SHOULD use the same reporting format as described in section 5.2.5 or 5.6.5, depending on whether testing against baseline connection setup rates or goodput, respectively. In addition, the report MUST indicate that this is a denial of service handling test and note the SYN attack rate, the number of SYN attack packets transmitted, the number of SYN attack packets received and whether or not the DUT has any SYN attack guard mechanisms enabled.

5.6 HTTP

5.6.1 Objective

To determine the goodput, as defined by RFC 2647, of the DUT/SUT when presented with HTTP traffic flows. The goodput measurement will be based on HTTP objects forwarded to the correct destination interface of the DUT/SUT.

5.6.2 Setup Parameters

The following parameters MUST be defined.
Each variable is configured with the following considerations.

Number of sessions - Defines the number of HTTP 1.1 sessions to be attempted for transferring an HTTP object(s). The number MUST be equal to or greater than the number of virtual clients participating in the test and SHOULD be a multiple of the virtual clients participating in the test. Note that each session will use one underlying transport layer connection.

Session rate - Defines the rate, in sessions per second, that the HTTP sessions are attempted.

Requests per session - Defines the number of HTTP GET requests per session.

Object size - Defines the number of bytes to be transferred in response to an HTTP GET request.

5.6.3 Procedure

Each HTTP 1.1 virtual client will attempt to establish sessions to its HTTP 1.1 target server(s), using either the target server's IP address or NAT proxy address, at a fixed rate in a round robin fashion.
Baseline measurements SHOULD be performed using a single GET request per HTTP session with the minimal object size supported by the media. If the tester makes multiple HTTP GET requests per session, it MUST request the same-sized object each time. Testers may run multiple iterations of this test with objects of different sizes. See appendix A when testing proxy based DUT/SUTs regarding HTTP version considerations.

5.6.4 Measurement

Aggregate Goodput - The aggregate bit forwarding rate of the requested HTTP objects. The measurement will start on receipt of the first bit of the first packet containing a requested object which has been successfully transferred and will end on receipt of the last packet containing the last requested object that has been successfully transferred. The goodput, in bits per second, can be calculated using the following formula:

                 OBJECTS * OBJECTSIZE * 8
       Goodput = ------------------------
                        DURATION

       OBJECTS    - Objects successfully transferred
       OBJECTSIZE - Object size in bytes
       DURATION   - Aggregate transfer time based on the
                    aforementioned time references

5.6.5 Reporting Format

The test report MUST note the object size(s), number of sessions, session rate and requests per session.
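As a non-normative illustration, the goodput formula above reduces to a one-line computation; the function name and the values below are illustrative only.

```python
def goodput_bps(objects, object_size_bytes, duration_s):
    """Goodput per the formula above: bits of successfully
    transferred HTTP objects divided by the aggregate duration."""
    return objects * object_size_bytes * 8 / duration_s

# Illustrative values: 10,000 objects of 1,024 bytes in 30 seconds.
rate = goodput_bps(10_000, 1_024, 30.0)
```

Note that only successfully transferred objects enter the numerator; failed or partial transfers reduce goodput even though their packets consumed media bandwidth.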
The goodput results SHOULD be reported in tabular form with a row for each of the object sizes. There SHOULD be columns for the object size, measured goodput and number of successfully transferred objects.

Failure analysis: The test report SHOULD indicate the number and percentage of HTTP sessions that failed to complete the requested number of transactions, with a transaction being the GET request and successfully returned object.

Version information: The test report MUST note the use of an HTTP 1.1 client and server.

5.7 IP Fragmentation

5.7.1 Objective

To determine the performance impact when the DUT/SUT is presented with IP fragmented traffic. IP datagrams which have been fragmented, due to crossing a network that supports a smaller MTU (Maximum Transmission Unit) than the actual datagram, may require the firewall to perform re-assembly prior to the datagram being applied to the rule set. While IP fragmentation is a common form of
attack, either on the firewall itself or on internal hosts, this test will focus on determining the impact that the additional processing associated with the re-assembly of the datagrams has on the goodput of the DUT/SUT.

5.7.2 Setup Parameters

The following parameters MUST be defined.

Trial duration - Trial duration SHOULD be set for 30 seconds.

5.7.2.1 Non-Fragmented Traffic Parameters

Session rate - Defines the rate, in sessions per second, that the HTTP sessions are attempted.

Requests per session - Defines the number of HTTP GET requests per session.

Object size - Defines the number of bytes to be transferred in response to an HTTP GET request.

5.7.2.2 Fragmented Traffic Parameters

Packet size, expressed as the number of bytes in the IP/UDP packet, exclusive of link-layer headers and checksums.

Fragmentation length - Defines the length of the data portion of the IP datagram and MUST be a multiple of 8.
Testers SHOULD use the minimum value, but MAY use other sizes as well.

Intended load - Intended load, expressed as a percentage of media utilization.

5.7.3 Procedure

Each HTTP 1.1 virtual client will attempt to establish sessions to its HTTP 1.1 target server(s), using either the target server's IP address or NAT proxy address, at a fixed rate in a round robin fashion. At the same time, a client attached to the unprotected side of the network will offer a unidirectional stream of unicast UDP/IP packets to a server connected to the protected side of the network. The tester MUST offer the IP/UDP packets in a steady state.

Baseline measurements SHOULD be performed with a deny rule(s) that filters the fragmented traffic. If the DUT/SUT has logging capability, the log SHOULD be checked to determine if it contains the correct information regarding the fragmented traffic. The test SHOULD be repeated with the DUT/SUT rule set changed to allow the fragmented traffic through. When running multiple iterations of the test, it is RECOMMENDED to vary the fragment length while keeping all other parameters constant.

5.7.4 Measurements

Aggregate Goodput - The aggregate bit forwarding rate of the requested HTTP objects (see section 5.6). Only objects which have successfully completed transferring within the trial duration are to be included in the goodput measurement.
Transmitted UDP/IP packets - Number of UDP/IP packets transmitted by the client.

Received UDP/IP packets - Number of UDP/IP packets received by the server.

5.7.5 Reporting Format

The test report MUST note the trial duration. The test report MUST note the packet size(s), offered load(s) and IP fragmentation length of the UDP/IP traffic. It SHOULD also note whether the DUT/SUT egresses the offered UDP/IP traffic fragmented or not. The test report MUST note the object size(s), session rate and requests per session. The results SHOULD be reported in the format of a table with a row for each of the fragmentation lengths. There SHOULD be columns for the fragmentation length, IP/UDP packets transmitted by the client, IP/UDP packets received by the server, HTTP object size, and measured goodput.

5.8 Illegal Traffic Handling

5.8.1 Objective

To determine the behavior of the DUT/SUT when presented with a combination of both legal and illegal traffic flows.
Note that illegal traffic does not refer to an attack, but to traffic which has been explicitly defined by a rule(s) to drop.

5.8.2 Setup Parameters

The following parameters MUST be defined.

Number of sessions - Defines the number of HTTP 1.1 sessions to be attempted for transferring an HTTP object(s). The number MUST be equal to or greater than the number of virtual clients participating in the test. The number SHOULD be a multiple of the virtual clients participating in the test. Note that each session will use one underlying transport layer connection.

Session rate - Defines the rate, in sessions per second, that the HTTP sessions are attempted.

Requests per session - Defines the number of HTTP GET requests per session.

Object size - Defines the number of bytes to be transferred in response to an HTTP GET request.

Illegal traffic percentage - Percentage of the HTTP 1.1 sessions which have been explicitly defined in a rule(s) to drop.

5.8.3 Procedure

Each HTTP 1.1 virtual client will
attempt to establish sessions to its HTTP 1.1 target server(s), using either the target server's IP address or NAT proxy address, at a fixed rate in a round robin fashion. The tester MUST present the connection requests, both legal and illegal, in an evenly distributed manner.

Many firewalls have the capability to filter on different traffic criteria (IP addresses, port numbers, etc.). Testers may run multiple iterations of this test with the DUT/SUT configured to filter on different traffic criteria.

5.8.4 Measurements

Legal sessions allowed - Number and percentage of legal HTTP sessions which completed.

Illegal sessions allowed - Number and percentage of illegal HTTP sessions which completed.

5.8.5 Reporting Format

The test report MUST note the number of sessions, session rate, requests per session, percentage of illegal sessions and measurement results. The results SHOULD be reported in the form of a table with a row for each of the object sizes. There SHOULD be columns for the object size, number of legal sessions attempted, number of legal sessions successful, number of illegal sessions attempted and number of illegal sessions successful.
5.9 Latency

5.9.1 Objective

To determine the latency of network-layer or application-layer data traversing the DUT/SUT. RFC 1242 defines latency.

5.9.2 Setup Parameters

The following parameters MUST be defined:

5.9.2.1 Network-layer Measurements

Packet size, expressed as the number of bytes in the IP packet, exclusive of link-layer headers and checksums.

Intended load, expressed as a percentage of media utilization.

Offered load, expressed as a percentage of media utilization.

Test duration, expressed in seconds.

Test instruments MUST generate packets with unique timestamp signatures.

5.9.2.2 Application-layer Measurements

Object size, expressed as the number of bytes to be transferred across a connection in response to an HTTP GET request. Testers SHOULD use the minimum object size supported by the media, but MAY use other object sizes as well.

Connection type. The tester MUST use one HTTP 1.1 connection for latency measurements.

Number of objects requested.

Number of objects transferred.

Test duration, expressed in seconds.

Test instruments MUST generate packets with unique timestamp signatures.

5.9.3 Network-layer procedure

A client will offer a unidirectional stream of unicast packets to a server.
The packets MUST use a connectionless protocol like IP or UDP/IP. The tester MUST offer the packets in a steady state. As noted in the latency discussion in RFC 2544, latency measurements MUST be taken at the throughput level -- that is, at the highest offered load with zero packet loss. Measurements taken at the throughput level are the only ones that can legitimately be termed latency.

It is RECOMMENDED that implementers use offered loads not only at the throughput level, but also at load levels that are less than or greater than the throughput level. To avoid confusion with existing terminology, measurements from such tests MUST be labeled as delay rather than latency.

If desired, the tester MAY use a step test in which offered loads increment or decrement through a range of load levels. The duration of the test portion of each trial MUST be at least 30 seconds.
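Given per-packet delays (receive time minus transmit time, with packets matched via their unique timestamp signatures), the summary figures called for by the measurements section reduce to a small computation. A non-normative sketch (function name and values are illustrative):

```python
def delay_stats(delays):
    """Reduce per-packet delays (receive minus transmit timestamps,
    matched via unique timestamp signatures) to the minimum,
    average and maximum figures reported for latency/delay tests."""
    if not delays:
        raise ValueError("no delay samples")
    return min(delays), sum(delays) / len(delays), max(delays)

# Illustrative per-packet delays in microseconds.
lo, avg, hi = delay_stats([120.0, 95.0, 140.0, 110.0, 135.0])
```

The same reduction applies at either layer; only the timestamping points differ (first/last bit of the packet at the network layer, request and object boundaries at the application layer).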
5.9.4 Application-layer procedure

An HTTP 1.1 client will request one or more objects from an HTTP 1.1 server using one or more HTTP GET requests. If the tester makes multiple HTTP GET requests, it MUST request the same-sized object each time. Testers may run multiple iterations of this test with objects of different sizes.

Implementers MAY configure the tester to run for a fixed duration. In this case, the tester MUST report the number of objects requested and returned for the duration of the test. For fixed-duration tests it is RECOMMENDED that the duration be at least 30 seconds.

5.9.5 Measurements

Minimum delay - The smallest delay incurred by data traversing the DUT/SUT at the network layer or application layer, as appropriate.

Maximum delay - The largest delay incurred by data traversing the DUT/SUT at the network layer or application layer, as appropriate.

Average delay - The mean of all measurements of delay incurred by data traversing the DUT/SUT at the network layer or application layer, as appropriate.
Delay distribution - A set of histograms of all delay measurements observed for data traversing the DUT/SUT at the network layer or application layer, as appropriate.

5.9.6 Network-layer reporting format

The test report MUST note the packet size(s), offered load(s) and test duration used. The latency results SHOULD be reported in the format of a table with a row for each of the tested packet sizes. There SHOULD be columns for the packet size, the intended rate, the offered rate, and the resultant latency or delay values for each test.

5.9.7 Application-layer reporting format

The test report MUST note the object size(s) and the number of completed requests and responses. The report MUST note the test duration if a fixed duration was used. The latency results SHOULD be reported in the format of a table with a row for each of the object sizes.
There SHOULD be columns for the object size, the number of completed requests, the number of completed responses, and the resultant latency or delay values for each test.

Failure analysis: The test report SHOULD indicate the number and percentage of HTTP GET requests or responses that failed to complete within the test duration.

Version information: The test report MUST note the use of an HTTP 1.1 client and server.

APPENDICES

APPENDIX A: HTTP (HyperText Transfer Protocol)

The most common versions of HTTP in use today are HTTP/1.0 and HTTP/1.1, with the main difference being in regard to persistent connections. HTTP 1.0, by default, does not support persistent connections. A separate TCP connection is opened up for each GET request the client wants to initiate and closed after the requested object transfer is completed. Some implementations of HTTP/1.0 support persistence by adding an additional header to the request/response:

      Connection: Keep-Alive

However, under HTTP 1.0 there is no official specification for how the keep-alive operates. In addition, HTTP 1.0 proxies do not support persistent connections, as they do not recognize the connection header. HTTP/1.1, by default, does support persistent connections and is therefore the version that is referenced in this methodology.
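The persistence signaling described above can be illustrated with a short non-normative sketch that assembles the request line and headers for each version (the function name and host value are illustrative only, not part of this methodology):

```python
def build_get(host, path="/", version="1.1", close=False):
    """Assemble an HTTP GET request line plus headers, showing the
    persistence signaling: HTTP/1.0 opts in with Connection:
    Keep-Alive, while HTTP/1.1 is persistent unless a Connection:
    close token is sent."""
    lines = ["GET %s HTTP/%s" % (path, version), "Host: %s" % host]
    if version == "1.0" and not close:
        lines.append("Connection: Keep-Alive")   # non-standard opt-in
    elif version == "1.1" and close:
        lines.append("Connection: close")        # explicit opt-out
    return "\r\n".join(lines) + "\r\n\r\n"

req = build_get("server.test", close=True)
```

For the benchmarks in this document, the HTTP 1.1 default (no Connection header) keeps one transport connection open across all GET requests of a session.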
When HTTP/1.1 entities want the underlying transport layer connection closed after a transaction has completed, the request/response will include a connection-token "close" in the Connection header:

      Connection: close

If no such connection-token is present, the connection remains open after the transaction is completed. In addition, proxy based DUT/SUTs may monitor the TCP connection and, after a timeout, close the connection if no activity is detected. The duration of this timeout is not defined in the HTTP/1.1 specification and will vary between DUT/SUTs. When performing concurrent connection testing, GET requests MAY need to be issued at a periodic rate so that the proxy does not close the connection.

While this document cannot foresee future changes to HTTP and their impact on the methodologies defined herein, such changes should be accommodated so that newer versions of HTTP may be used in benchmarking firewall performance.

APPENDIX B: References

D. Newman, "Benchmarking Terminology for Firewall Devices", RFC 2647, August 1999.

R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.

S. Bradner, editor, "Benchmarking Terminology for Network Interconnection Devices", RFC 1242, July 1991.

S. Bradner, J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999.

David C. Clark, "IP Datagram Reassembly Algorithm", RFC 815, July 1982.