Most speed tests do not report throughput. Throughput is the amount of data transferred over a period of time, and it is the true measure of the user experience on the connection being tested.
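To make this concrete, here is a minimal Python sketch of the calculation; the function name and sample figures are illustrative assumptions, not taken from any particular test tool:

```python
# Minimal sketch: throughput is bytes transferred divided by elapsed time.
# Function name and figures are illustrative.

def throughput_mbps(bytes_transferred: int, seconds: float) -> float:
    """Return throughput in megabits per second."""
    return (bytes_transferred * 8) / (seconds * 1_000_000)

# 25 MB moved in 10 seconds -> 20.0 Mbps
print(throughput_mbps(25_000_000, 10.0))
```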
Measuring the efficiency of a network is important if you want to understand how the connection performs under load. As load climbs, efficiency drops, just like traffic on the roads. The percentage of slowdown is essentially the drop in throughput for a given variation in load. Poor networks have low efficiency factors.
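A minimal sketch of the arithmetic, assuming efficiency is expressed as the loaded throughput retained relative to an unloaded baseline (the exact formula a given test tool uses may differ):

```python
# Illustrative sketch: efficiency as the share of baseline throughput
# retained under load. Names and figures are assumptions.

def efficiency_pct(baseline_mbps: float, loaded_mbps: float) -> float:
    return 100.0 * loaded_mbps / baseline_mbps

# A connection that tests at 50 Mbps idle but 35 Mbps under load
# retains 70% of its capacity, i.e. a 30% slowdown.
print(efficiency_pct(50.0, 35.0))
```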
Max delay is the amount of time the TCP stack waits for data. Understanding when max delay is bad versus good is very important, because a max delay value that is too high signifies that quality problems exist. Max delay must not exceed the trip latency time minus the data consumption time, plus 20%. If max delay is very high, for example 500ms, this is a good indication of timeout retransmissions. Timeout retransmissions carry severe penalties.
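One reading of that rule as a quick check, assuming all times are in milliseconds; the function and figures are illustrative:

```python
# Sketch of the rule above, assuming times in milliseconds: the 20%
# margin is applied to the (trip latency - consumption time) budget.

def max_delay_ok(max_delay_ms: float, trip_ms: float, consume_ms: float) -> bool:
    budget = (trip_ms - consume_ms) * 1.20
    return max_delay_ms <= budget

print(max_delay_ok(80.0, 90.0, 15.0))   # True: within budget
print(max_delay_ok(500.0, 90.0, 15.0))  # False: likely timeout retransmissions
```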
The best performing connections are ones that run consistently, meaning that the same number of bytes pass in the same interval of time, every time. Consistent does not mean fast or slow: 1 byte per minute would be consistent but also very slow. Inconsistent data flow is a sign of data flow issues such as packet loss, duplicates and retransmissions. It can also mean that the connection is poorly provisioned. For example, 20Mbps could be delivered as 100Mbps for 1 second and then nothing for 4 seconds; the average is 20Mbps, but that is not good for media-based applications.
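One way to quantify consistency is the coefficient of variation (standard deviation divided by mean) of per-interval byte counts. This particular metric is an assumption, not something the test defines, but it makes the burst example above concrete:

```python
# Illustrative sketch: coefficient of variation of per-second rates.
# 0.0 means perfectly consistent; higher means burstier.
from statistics import mean, pstdev

def consistency_cv(rates_per_interval: list[float]) -> float:
    return pstdev(rates_per_interval) / mean(rates_per_interval)

smooth = [20, 20, 20, 20, 20]   # steady 20Mbps
bursty = [100, 0, 0, 0, 0]      # 100Mbps for 1s, idle 4s; same 20Mbps average
print(consistency_cv(smooth))   # 0.0 -> perfectly consistent
print(consistency_cv(bursty))   # 2.0 -> highly inconsistent
```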
TCP forced idle is the amount of spare capacity available on the connection under test, which results when the capacity limit is greater than the amount of unacknowledged TCP payload in flight (called a window). It is the same as an idle road that has little traffic: it is not the road's fault that only a few cars are using it.
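A sketch under the standard TCP assumption that a window-limited flow can move at most one window of unacknowledged payload per round trip, so achievable throughput is capped at window / RTT; the capacity left above that cap is the forced idle. Figures are illustrative:

```python
# Window-limited throughput cap: window / RTT. Forced idle is the
# capacity the cap leaves unused.

def window_limited_mbps(window_bytes: int, rtt_s: float) -> float:
    return (window_bytes * 8) / (rtt_s * 1_000_000)

capacity_mbps = 100.0
achievable = window_limited_mbps(window_bytes=65_535, rtt_s=0.05)  # 64KB window, 50ms RTT
forced_idle = max(0.0, capacity_mbps - achievable)
print(f"{achievable:.1f} Mbps used, {forced_idle:.1f} Mbps forced idle")
```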
Duplicates and part duplicates are serious problems and imply that retransmissions are occurring. Duplicates occur when packets are delayed but never actually lost, resulting in a retransmission, and both packets eventually arrive. Whichever packet arrives last (although it can be the first packet sent) is flagged as a duplicate. Part duplicates are more problematic than full duplicates because they imply that packets are being fragmented.
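A minimal sketch of how duplicates can be spotted from sequence numbers; the counting rule here is an illustrative assumption:

```python
# Illustrative sketch: flag any sequence number already seen as a duplicate,
# i.e. the delayed original and its retransmission both arrived.

def count_duplicates(seq_numbers: list[int]) -> int:
    seen: set[int] = set()
    dups = 0
    for seq in seq_numbers:
        if seq in seen:
            dups += 1
        else:
            seen.add(seq)
    return dups

print(count_duplicates([1, 2, 3, 2, 4, 5, 3]))  # 2 duplicates
```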
There are two types of retransmit: a fast retransmit and a timeout retransmit. Both are problems that cause slow connections, although the timeout retransmit is the worse of the two. The best way to describe a retransmit is to consider two people talking on the phone who repeatedly ask each other to repeat what they just said because they couldn't hear it. That makes it hard to have a good conversation.
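An illustrative sketch of the two triggers, not a real TCP implementation: three duplicate ACKs for the same sequence prompt a fast retransmit, while silence past the retransmission timeout (RTO) forces a timeout retransmit:

```python
# Sketch of the two retransmit triggers. Names and thresholds follow
# the standard TCP descriptions; this is not a working stack.

def classify_retransmit(dup_acks: int, since_last_ack_s: float, rto_s: float) -> str:
    if since_last_ack_s >= rto_s:
        return "timeout retransmit (severe: sender stalls, window collapses)"
    if dup_acks >= 3:
        return "fast retransmit (milder: recovery without a full stall)"
    return "no retransmit"

print(classify_retransmit(dup_acks=3, since_last_ack_s=0.02, rto_s=0.2))
print(classify_retransmit(dup_acks=0, since_last_ack_s=0.5, rto_s=0.2))
```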
Out-of-order packets are measured only in a quality test. An out-of-order packet is a problem because it slows the data flow and can cause network timeouts and duplicates. These are significant problems when they occur.
Bytes that arrive outside the TCP leading or trailing edge will be discarded as outside window. This is a serious problem because it means the data has no place to go. A bit like driving into a multi-story car park which broadcasts at the entrance that there are spaces available when there aren’t any.
Packet loss is a measure of how many packets did not reach the destination for one reason or another, expressed as a percentage of the total number of packets sent. So, if 5 packets are sent per hop and one is dropped, this will show as 20% packet loss. Any packet loss is bad and affects the quality of applications.
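The arithmetic as a worked example:

```python
# Worked example of the percentage above; names are illustrative.

def packet_loss_pct(sent: int, received: int) -> float:
    return 100.0 * (sent - received) / sent

print(packet_loss_pct(sent=5, received=4))  # 20.0
```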
Hop latency is a measure of the time taken for a route-testing packet to reach a particular hop on the route and return. By measuring each hop along the route to the destination, including the destination itself, it is possible to see where high latency may be causing degraded throughput or dropped packets. Latency should only be higher when distance is involved. For example, a route from London to New York has to cross the Atlantic Ocean, and 3,000 miles will obviously cause higher latency for that hop. If you see high latency on a route, validating the geography is an important step in making sense of any potential routing issues.
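A quick geography sanity check: light in optical fiber travels at roughly two-thirds of c, so distance sets a hard floor on round-trip latency. The two-thirds factor is a common rule of thumb, not a figure from the test itself:

```python
# Distance-based floor on round-trip latency, assuming light in fiber
# at roughly two-thirds the vacuum speed of light.

C_KM_PER_MS = 300.0    # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 2 / 3   # typical propagation speed in optical fiber

def min_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km / (C_KM_PER_MS * FIBER_FACTOR)

# London to New York is roughly 3,000 miles (~4,800 km):
print(f"{min_rtt_ms(4800):.0f} ms minimum round trip")  # ~48 ms
```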
An Internet route is similar in concept to a road route. To navigate a car from the beginning of a journey to the destination, the journey will comprise a series of points along the route where the car has to change direction to complete the journey end-to-end. At each point on the Internet where there is a choice of direction, there is a router device (termed a 'hop') that is responsible for sending the data in the right direction for the next hop. Just like road traffic, if a router hop is situated on a popular route then it can become congested, causing heavy delays in throughput or even lost packets. Routers are owned by the companies that operate them; when two different companies send traffic to each other, the hops where their routes join are called Peering Points. Because of the natural issues invoked when more than one organization is involved, Peering Points are the hops where problems are most likely to occur.
Consistency of Service is a measure of how smoothly data packets are moving. If a connection is uncongested and unregulated then every packet should flow at a rate that matches the maximum capacity of the slowest part of the connection's route. If regulation or congestion causes this pattern to change then the data flow QoS score should drop. Note that if there are problems affecting packets but the impact is evenly spread, e.g. all or nearly all packets are affected, then the QoS score may still be high.
This is the time it takes for a packet to be sent end-to-end between the client and the server and back. The length and consistency of the trip time ultimately define the TCP throughput speed. A long trip time will dramatically slow connection throughput. An erratic trip time is an early indication of regulation or congestion problems.
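A sketch of why trip time dominates TCP speed: with a fixed window, throughput is capped at window / RTT, so doubling the trip time halves the speed. The window size is illustrative:

```python
# How window-limited throughput falls as the round trip lengthens.

WINDOW_BYTES = 65_535  # classic 64KB window, illustrative

for rtt_ms in (10, 50, 100, 200):
    mbps = (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1_000_000
    print(f"RTT {rtt_ms:>3} ms -> {mbps:5.1f} Mbps")
```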
Max delay is the amount of time spent idle waiting for data to arrive from the other end of the connection being tested. A high (greater than 100ms) max delay is an indication of a significant quality problem.
Score is a measure from 1 (being the worst) to 5 (being the best). MOS is quite subjective, as it originated with the phone companies and used human input from related quality tests. Software applications have adopted the MOS score and scale, namely: 5 – clear, as if in a real face-to-face conversation; 4 – fair, small interference but sound still clear (everyday cell phone calls are a good example); 3 – not fair, enough interference to start to annoy; 2 – poor, very annoying and almost unusable; 1 – not fit for purpose.
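The scale lends itself to a simple lookup; the wording below condenses the descriptions above:

```python
# The MOS scale as a lookup table, condensed from the text above.

MOS_SCALE = {
    5: "Clear, as if in a real face-to-face conversation",
    4: "Fair, small interference but sound still clear",
    3: "Not fair, enough interference to start to annoy",
    2: "Poor, very annoying and almost unusable",
    1: "Not fit for purpose",
}

def describe_mos(score: float) -> str:
    return MOS_SCALE[round(score)]

print(describe_mos(4.2))  # "Fair, small interference but sound still clear"
```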
Jitter is a measure of the difference in time that each packet takes to reach the destination. In an ideal world each packet sent would take exactly the same time to travel between the client and the server (0% jitter), but in reality this is seldom the case: packets vary in the length of time (latency) they take to reach the destination, and on a bad connection this variation can be very large. Jitter is an expression of that variance.
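One common estimator is the RTP interarrival jitter from RFC 3550, a smoothed mean deviation of transit-time differences; using it here is an assumption, as the test does not publish its formula:

```python
# RFC 3550 interarrival jitter: a running, smoothed mean deviation of
# the differences between consecutive packet transit times.

def rtp_jitter(transit_times_ms: list[float]) -> float:
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16  # RFC 3550 smoothing gain
    return jitter

print(f"{rtp_jitter([20, 22, 21, 35, 20, 24]):.2f} ms")
```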
Packet order is a measure, in percentage terms, of how many packets arrived in order. Packets do not necessarily take the same route or the same time to reach the destination. This results in packets arriving out of order, which causes other packets to be delayed or, in very bad cases, discarded. Delayed or discarded packets cause quality problems for the application.
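A sketch of one possible counting rule, judging each packet against the highest sequence number seen so far; the rule is an assumption, not the test's published method:

```python
# Percentage of packets arriving in order, judged against the highest
# sequence number seen so far.

def in_order_pct(seq_numbers: list[int]) -> float:
    highest = -1
    in_order = 0
    for seq in seq_numbers:
        if seq > highest:
            in_order += 1
            highest = seq
    return 100.0 * in_order / len(seq_numbers)

print(in_order_pct([1, 2, 4, 3, 5, 7, 6]))  # ~71.4: packets 3 and 6 arrived late
```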
Discards are a measure of packets that arrive too late to be used by the application. Packets are very time-dependent when it comes to media-based applications. There is a time window within which a packet can be used, after which it is too late and the packet has to be intentionally discarded when it arrives. A bit like missing a connecting flight because the first flight was delayed and arrived after the second flight had taken off.
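A sketch of the playout-window idea: a packet whose transit time exceeds the playout budget must be discarded. The 60ms budget is an illustrative assumption:

```python
# A packet is usable only if it arrives within the playout budget;
# anything later is intentionally discarded. The budget is illustrative.

PLAYOUT_BUDGET_MS = 60  # assumed jitter-buffer depth

def usable(arrival_ms: float, sent_ms: float) -> bool:
    return (arrival_ms - sent_ms) <= PLAYOUT_BUDGET_MS

print(usable(arrival_ms=140, sent_ms=100))  # True: 40ms transit, plays fine
print(usable(arrival_ms=180, sent_ms=100))  # False: 80ms transit, discarded
```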