
What Does Load Balancing In Computer Networks Mean?

Load balancing is a technique used to distribute network traffic among a collection of servers called a server farm.

This technique improves the reliability and capacity of the network and reduces latency, because demand for resources is spread evenly across multiple servers and computing resources.

A load balancer is a physical or virtual (software) device that determines in real time which server in a pool can best respond to a client request, while ensuring that heavy network traffic does not overwhelm any single server.

In addition to maximizing network capacity and ensuring high performance, load balancing is an effective way to handle failures: if one of the servers fails, the load balancer immediately redirects its workload to a backup server, reducing the impact on end users.

Load balancing is usually performed at layer 4 or layer 7 of the Open Systems Interconnection (OSI) model. A layer 4 load balancer distributes traffic based on transport-layer data, such as IP addresses and TCP port numbers. A layer 7 load balancer makes routing decisions based on application-level attributes, including Hypertext Transfer Protocol (HTTP) header information and the actual message content, such as URLs and cookies. Layer 7 load balancing is the more common approach, but layer 4 load balancing remains popular, especially in edge deployments.
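As a rough illustration, the following Python sketch contrasts the information each type of load balancer can act on. It is not a real proxy: the backend addresses, the "/api" path prefix, and the cookie name are assumptions made up for the example.

    import hashlib

    BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # assumed backend pool

    def pick_layer4(client_ip: str, client_port: int) -> str:
        """Layer 4: only transport-level data (IP address, port) is visible."""
        key = f"{client_ip}:{client_port}".encode()
        return BACKENDS[int(hashlib.md5(key).hexdigest(), 16) % len(BACKENDS)]

    def pick_layer7(http_path: str, cookies: dict) -> str:
        """Layer 7: application data (URLs, headers, cookies) can drive routing."""
        if http_path.startswith("/api"):
            return BACKENDS[0]                 # send API calls to a dedicated backend
        if "session_id" in cookies:            # sticky routing keyed on a cookie
            return BACKENDS[hash(cookies["session_id"]) % len(BACKENDS)]
        return BACKENDS[-1]                    # default backend for everything else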

How does the load-balancing mechanism work?

A load balancer handles incoming user requests for information and other services. It sits between the Internet and the servers that respond to those requests. When a request arrives, the load balancer determines which server in the pool is online and available and routes the request to that server. A load balancer reacts quickly when traffic loads become heavy and can dynamically add servers to the network to absorb traffic spikes; it can also remove servers when demand is low.
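The dispatch behavior described above can be sketched in a few lines of Python. This is a simplified illustration, not production code: the server names are invented, and health checking is reduced to flags that an operator or monitoring loop would toggle.

    import itertools

    class LoadBalancer:
        def __init__(self, servers):
            self.servers = list(servers)               # pool of backend addresses
            self.online = set(self.servers)            # servers currently healthy
            self._rotation = itertools.cycle(self.servers)

        def mark_down(self, server):
            self.online.discard(server)                # stop routing to a failed server

        def mark_up(self, server):
            self.online.add(server)                    # backup/recovered server rejoins the pool

        def route(self):
            # Walk the rotation, skipping offline servers; fail if none are healthy.
            for _ in range(len(self.servers)):
                server = next(self._rotation)
                if server in self.online:
                    return server                      # forward the request to this server
            raise RuntimeError("no healthy servers available")

    lb = LoadBalancer(["web-1", "web-2", "web-3"])
    lb.mark_down("web-2")
    print([lb.route() for _ in range(4)])              # ['web-1', 'web-3', 'web-1', 'web-3']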

Types of load balancers

Load balancing is a critical component of high-availability infrastructure. Depending on a network’s needs, different load balancers can be used, with varying capabilities, features, and levels of complexity.

A load balancer can be a physical appliance, a software solution, or a combination. Below are two types of load balancers:

Hardware load balancer: a dedicated, rack-mounted appliance built to process high volumes of traffic.

Software load balancer: an application installed on standard x86 servers, virtual machines, or cloud instances.

Cloud load balancer

Enterprises can also use cloud-based load balancing, which relies on cloud infrastructure to balance workloads across cloud computing environments.

Common cloud load-balancing models include the following:

Load balancing algorithms

Load balancing algorithms determine which servers receive incoming requests from specific clients. There are two main types of load-balancing algorithms: static and dynamic.

1. Static load-balancing algorithms

In the IP-hash-based approach, the load balancer selects the target server for a client’s request based on defined criteria, such as HTTP headers or the client’s IP address information. This method supports session persistence, or stickiness, which makes it a good option for applications that rely on user-specific state stored on the server; a typical example is the shopping cart in e-commerce.
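A minimal Python sketch of hash-based selection, assuming a three-server pool; hashing the client IP always yields the same index, which is what gives this method its stickiness.

    import hashlib

    SERVERS = ["app-1", "app-2", "app-3"]  # assumed server pool

    def ip_hash(client_ip: str) -> str:
        # Hash the client IP and map it deterministically onto the pool.
        digest = hashlib.sha256(client_ip.encode()).hexdigest()
        return SERVERS[int(digest, 16) % len(SERVERS)]

    # The same client IP always lands on the same server (stickiness).
    assert ip_hash("203.0.113.7") == ip_hash("203.0.113.7")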

The round-robin method cycles through the available servers sequentially, distributing traffic across the list in rotation, often using the Domain Name System (DNS). An authoritative name server maintains a list of different “A” records and returns a different one in response to each DNS query.
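A short Python sketch of the rotation, standing in for a name server that answers successive queries with the next “A” record in its list; the addresses come from the documentation range and are not real servers.

    from itertools import cycle

    A_RECORDS = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]  # example addresses
    rotation = cycle(A_RECORDS)

    def answer_query() -> str:
        return next(rotation)  # each query receives the next record in the rotation

    print([answer_query() for _ in range(5)])
    # ['192.0.2.10', '192.0.2.11', '192.0.2.12', '192.0.2.10', '192.0.2.11']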

The weighted round-robin approach allows administrators to assign a different weight to each server. In this way, servers that can handle more traffic receive proportionally more requests based on their weight. The weighting is configured in the DNS records.
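A sketch of weighted round robin in Python: a server with weight 3 simply appears three times in the rotation, so it receives three times as many requests as a weight-1 server. The server names and weights are illustrative assumptions.

    from itertools import cycle

    WEIGHTS = {"big-server": 3, "medium-server": 2, "small-server": 1}  # assumed weights

    # A server with weight w is repeated w times in the rotation.
    rotation = cycle([name for name, w in WEIGHTS.items() for _ in range(w)])

    def next_server() -> str:
        return next(rotation)

    # Over six requests, big-server is chosen three times and small-server once.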

2. Dynamic load balancing algorithms

In the least-connections approach, the load balancer checks which servers have the fewest ongoing transactions and sends traffic to the server with the fewest open connections. This algorithm assumes that all connections require approximately equal processing power.
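A sketch of least-connections selection in Python, assuming the balancer keeps a counter of open connections per server; the server names and counts are made up for the example.

    open_connections = {"app-1": 12, "app-2": 4, "app-3": 9}  # example counters

    def least_connections() -> str:
        # Pick the server with the fewest open connections right now.
        return min(open_connections, key=open_connections.get)

    server = least_connections()      # "app-2"
    open_connections[server] += 1     # account for the newly routed connection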

The weighted least-connections method assumes that some servers can handle more traffic than others, so it enables administrators to assign a different weight to each server.
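Extending the previous sketch, weighted least connections divides each server’s connection count by its weight before comparing; the weights here are assumed values.

    open_connections = {"app-1": 12, "app-2": 4, "app-3": 9}
    weights = {"app-1": 4, "app-2": 1, "app-3": 2}  # assumed capacity weights

    def weighted_least_connections() -> str:
        # Normalise the connection count by each server's weight before comparing.
        return min(open_connections, key=lambda s: open_connections[s] / weights[s])

    # app-1 scores 3.0, app-2 scores 4.0, app-3 scores 4.5, so app-1 is chosen.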

The weighted response time approach uses the average response time of each server and combines it with the number of open connections on each server to find the best destination for the traffic. This algorithm provides faster service by forwarding traffic to the servers that respond the quickest.
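A sketch of a response-time-aware choice in Python. The exact scoring formula below (average response time multiplied by the open-connection count) is an assumption for illustration; real load balancers combine these signals in their own ways.

    avg_response_ms = {"app-1": 35.0, "app-2": 80.0, "app-3": 22.0}  # example averages
    open_connections = {"app-1": 10, "app-2": 3, "app-3": 14}

    def fastest_server() -> str:
        # Lower score = faster responses and/or fewer open connections.
        score = lambda s: avg_response_ms[s] * (open_connections[s] + 1)
        return min(avg_response_ms, key=score)

    # Scores: app-1 = 385, app-2 = 320, app-3 = 330, so app-2 is chosen here.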

The resource-based algorithm distributes the load according to the resources available on each server at that moment. To use this method, dedicated software called an agent must run on each server to measure CPU and memory availability before traffic is distributed.
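A sketch of resource-based selection in Python, assuming each server’s agent reports CPU and memory utilisation as fractions; the report format and the values are invented for the example.

    agent_reports = {                      # invented utilisation reports (0.0 - 1.0)
        "app-1": {"cpu": 0.82, "mem": 0.61},
        "app-2": {"cpu": 0.35, "mem": 0.48},
        "app-3": {"cpu": 0.57, "mem": 0.90},
    }

    def most_available_server() -> str:
        # Prefer the server whose most loaded resource is still the least loaded.
        return min(agent_reports, key=lambda s: max(agent_reports[s].values()))

    print(most_available_server())         # "app-2"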

Benefits of load balancing

Organizations that manage multiple servers can significantly benefit from load-balancing their network traffic. The main benefits of using a load balancer are as follows:

Hardware vs. software load balancer

Hardware and software load balancers have specific use cases. Hardware load balancers are used to manage very large traffic loads, while software solutions are typically sized according to the bandwidth they must handle. Hardware load balancers require rack-and-stack appliances, while software load balancers are installed on standard x86 servers, virtual machines, or cloud instances.

The advantages and disadvantages of hardware and software load-balancing mechanisms are as follows:

Hardware load balancer

Advantages

Disadvantages

Software load balancer

Advantages

Disadvantages
