What is Network Latency and How to Minimize It


Network latency is not a new term. It plays a significant role in performance measurement for modern Ethernet switches, clustered computing, and high-performance networking applications. Most of us have heard of network latency, but fewer know what it actually is. This article covers the essentials: what network latency is, what causes it, and how to minimize it.

  1. What is Network Latency?
    Latency is the time it takes for data to travel from one point on a network to another. Let’s say Server A in New York sends a packet of data to Server B in London. Server A sends the packet at 04:38:00.000 GMT and Server B receives it at 04:38:00.145 GMT. The amount of latency on this path is the difference between those two times: 0.145 seconds or 145 milliseconds.
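
As a quick sanity check, here is a minimal Python sketch (not from the original article) that reproduces the arithmetic in the example above:

```python
from datetime import datetime

# Timestamps from the example above (both in GMT).
sent = datetime.strptime("04:38:00.000", "%H:%M:%S.%f")
received = datetime.strptime("04:38:00.145", "%H:%M:%S.%f")

latency_ms = (received - sent).total_seconds() * 1000
print(f"One-way latency: {latency_ms:.0f} ms")  # -> One-way latency: 145 ms
```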


Most often, latency is measured between a user’s device (the “client” device) and a data center. This measurement helps developers understand how quickly a web page or application will load for the user.

Although data on the Internet travels at nearly the speed of light, distance and delays introduced by Internet infrastructure equipment mean that latency can never be eliminated entirely. It can and should be minimized, though: high latency results in poor website performance, hurts SEO, and can drive users to abandon the site or application.

  2. Why Network Latency Matters
    Latency is especially important for businesses that serve visitors in a specific geographic location. For example, say you run an e-commerce business in Chicago and 90% of your customers are in the United States. Your business will clearly benefit if your website is hosted on a server in the United States rather than in Europe or Australia, because there is a big difference between loading a website from a server nearby and loading it from across the world.

Distance is one of the main causes of latency, but other factors can also prevent your website from having low latency. Below, we'll go into more detail about the causes of latency and give you essential tools to find out whether you've chosen the right location to host your website.

  3. Latency vs. Bandwidth vs. Throughput
    While latency, bandwidth, and throughput all work together, they have different meanings. It’s easier to visualize how each term works when referring to a pipe:

- Bandwidth determines how narrow or wide the pipe is. The narrower it is, the less data can be pushed through it at one time, and vice versa.
- Latency determines how quickly the contents of the pipe travel from the client to the server and back.
- Throughput is the amount of data actually transferred over a given period of time.
If latency is low but bandwidth is also low, throughput will inherently be low, since the narrow pipe caps how much data can pass at once. If latency is low and bandwidth is high, the connection achieves greater throughput and works more efficiently. High latency, by contrast, effectively congests the network and reduces the amount of data transferred over time. The sketch below illustrates this interplay for a single TCP connection.
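
To make this concrete, here is a small Python sketch, not from the original article, that bounds single-connection TCP throughput by both the link bandwidth and the classic window/RTT limit. The 64 KB window is an illustrative assumption:

```python
def max_throughput_mbps(bandwidth_mbps: float, rtt_ms: float,
                        window_bytes: int = 64 * 1024) -> float:
    """Upper bound on single-connection TCP throughput.

    Throughput is capped both by the link bandwidth (how wide the pipe is)
    and by the classic window-size limit, window / RTT (how fast data can
    be acknowledged over the pipe).
    """
    window_limit_mbps = window_bytes * 8 / (rtt_ms / 1000) / 1_000_000
    return min(bandwidth_mbps, window_limit_mbps)

# Low bandwidth: the pipe itself is the bottleneck, whatever the latency.
print(max_throughput_mbps(bandwidth_mbps=5, rtt_ms=10))      # -> 5.0
# High bandwidth but high latency: the round trips become the bottleneck.
print(max_throughput_mbps(bandwidth_mbps=1000, rtt_ms=150))  # -> ~3.5
```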

  4. Causes of Network Latency
    With “what is latency” answered, the next question is where latency comes from. Four primary factors affect network latency:
    - Transmission medium: physical links such as WAN circuits and fiber optic cables have inherent limitations and affect latency simply by their nature.
    - Propagation: the time it takes a packet to travel from source to destination, bounded by the speed of light.
    - Routers: routers take time to analyze a packet’s header information and, in some cases, to add to it. Every hop from router to router increases latency.
    - Storage delays: packets may be stored and then accessed by intermediary devices such as switches and bridges, adding delay along the way.
    Of these, one of the biggest contributors is distance: specifically, the distance between the client device making the request and the server responding to it. If a website is hosted in a data center in Columbus, Ohio, a request from a user in Cincinnati (about 100 miles away) arrives quickly, likely within 5-10 milliseconds. A request from a user in Los Angeles (about 2,200 miles away), on the other hand, takes much longer, closer to 40-50 milliseconds.
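
For a rough sense of how distance alone translates into delay, the following sketch (an illustration, not from the article) estimates one-way propagation time, assuming light travels at about two-thirds of c in fiber, roughly 200,000 km/s. Real-world figures like those above are higher because of routing detours, queuing, and processing:

```python
# Light travels at roughly two-thirds of c inside optical fiber,
# i.e. about 200,000 km/s. This is a hard lower bound on latency;
# routing, queuing, and processing push real measurements higher.
SPEED_IN_FIBER_KM_PER_S = 200_000

def one_way_propagation_ms(distance_km: float) -> float:
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

print(one_way_propagation_ms(160))   # Columbus -> Cincinnati (~100 mi): ~0.8 ms
print(one_way_propagation_ms(3540))  # Columbus -> Los Angeles (~2,200 mi): ~17.7 ms
```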

A few milliseconds may not seem like much, but the delay is compounded by all the back-and-forth communication required for the client and server to establish a connection, by the total size and load time of the page, and by any problems with the network equipment the data passes through along the way. The time it takes for a response to reach the client after a request is known as the round-trip time (RTT). RTT is roughly twice the one-way latency, because the data has to travel in both directions: out and back again.
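
One practical way to observe RTT is to time a TCP handshake, which takes almost exactly one round trip. The sketch below uses only the Python standard library; `example.com` is a placeholder host:

```python
import socket
import time

def tcp_connect_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Estimate RTT by timing TCP handshakes.

    connect() returns after the SYN / SYN-ACK exchange, i.e. after
    roughly one round trip. The minimum over several samples is
    closest to the pure network RTT.
    """
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass  # close immediately; we only care about the handshake
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)

print(f"Estimated RTT: {tcp_connect_rtt_ms('example.com'):.1f} ms")
```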

Data traveling across the Internet typically has to pass through not just one, but several networks. The more networks an HTTP response has to pass through, the more opportunities there are for delays. For example, as data packets travel across networks, they pass through Internet Exchange Points (IXPs). There, routers must process and route the data packets, and sometimes routers may need to break them up into smaller packets, all of which add a few milliseconds to the RTT.
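
To see how many networks and routers sit between you and a server, you can wrap the system traceroute tool. This sketch assumes `traceroute` (or `tracert` on Windows) is installed, and `example.com` is again a placeholder:

```python
import platform
import subprocess

def trace_hops(host: str) -> None:
    """Print the router hops between this machine and `host`.

    Each hop shown is a router that must process the packet, and
    each one adds a little latency along the way.
    """
    cmd = ["tracert", host] if platform.system() == "Windows" else ["traceroute", host]
    subprocess.run(cmd, check=False)

trace_hops("example.com")
```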

  5. How to Reduce Latency
    Latency can be reduced using a few different techniques, explained below. Reducing server latency helps your web resources load faster and improves the overall page load time for your visitors.
    - Use a CDN: As mentioned earlier, the distance between the client making the request and the server responding plays a significant role. A content delivery network (CDN) brings resources closer to the user by caching them in multiple locations around the world. Once a resource is cached, a user’s request only has to travel to the nearest point of presence to fetch the data, instead of going back to the origin server every time.
    - Use HTTP/2: The widely adopted HTTP/2 protocol is another great way to minimize latency. HTTP/2 reduces latency by transferring resources in parallel and minimizing the number of round trips between sender and receiver. KeyCDN is proud to offer HTTP/2 support to customers on all of our edge servers.
    - Make fewer external HTTP requests: Reducing the number of HTTP requests applies to images as well as external resources such as CSS or JS files. Whenever you reference resources on a server other than your own, you make external HTTP requests that can significantly increase your website’s latency, depending on the speed and quality of the third-party server.
    - Use prefetching: Prefetching web resources does not necessarily reduce latency itself, but it improves perceived performance. With prefetching, high-latency work happens in the background while the user browses the current page, so tasks such as DNS lookups are already done by the time the user clicks through to the next page, which then loads faster (see the sketch after this list).
    - Use browser caching: Another type of caching that can reduce latency is browser caching. Browsers can store certain website resources locally to improve latency and reduce the number of requests back to the server. Read more about browser caching and the various directives in our Cache-Control article.
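
As an illustration of the prefetching idea, here is a minimal, hypothetical DNS-warming sketch in Python. The hostnames are placeholders; the point is simply that lookups done ahead of time cost nothing later:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical hosts the next page is likely to contact.
LIKELY_NEXT_HOSTS = ["cdn.example.com", "api.example.com"]

def _resolve(host: str) -> None:
    try:
        socket.getaddrinfo(host, 443)  # the result can be cached by the resolver
    except socket.gaierror:
        pass  # prefetching is best-effort; failures are harmless

def warm_dns(hosts: list[str]) -> None:
    """Resolve hostnames in the background so the lookup cost is already
    paid by the time the user actually navigates (the dns-prefetch idea)."""
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        pool.map(_resolve, hosts)

warm_dns(LIKELY_NEXT_HOSTS)
```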

  6. How to Measure Network Latency
    Latency is typically measured in one of two ways:
    - Round-trip time (RTT)
    - Time to first byte (TTFB)
    RTT is the time between a client sending a request to a server and receiving the response back, and it can be measured with the techniques described above. TTFB, on the other hand, is the time between the client sending a request and receiving the first byte of data in response. You can use our performance testing tool to measure the TTFB of any asset across our network of 10 test sites.
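
If you want to measure TTFB yourself without external tools, a minimal Python sketch might look like the following. Note that, as with a browser's first request to a site, the timing here includes DNS, TCP, and TLS setup:

```python
import http.client
import time

def measure_ttfb_ms(host: str, path: str = "/") -> float:
    """Time from issuing a GET request to receiving the first response byte.

    The connection is opened lazily by request(), so the measurement
    also includes DNS, TCP, and TLS setup time.
    """
    conn = http.client.HTTPSConnection(host, timeout=10)
    try:
        start = time.perf_counter()
        conn.request("GET", path)
        response = conn.getresponse()  # returns once status line and headers arrive
        response.read(1)               # pull the first byte of the body
        return (time.perf_counter() - start) * 1000
    finally:
        conn.close()

print(f"TTFB: {measure_ttfb_ms('example.com'):.1f} ms")
```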


Hopefully this article has answered the question of what latency is and given you a better understanding of what causes it. Latency is an inevitable part of today's networking ecosystem: something we can minimize but not eliminate. The suggestions above, however, are important steps toward reducing your website's latency and improving page load times for your users. After all, in today's Internet age, website speed is measured in milliseconds, and those milliseconds can be worth millions of dollars in profit gained or lost.
