Quality of Service in Networks

QoS Building Blocks

Let’s now take a look at some of the building blocks of QoS.

There is a wide range of QoS services. Queuing, traffic shaping, and filtering are essential to traffic prioritization and congestion control: they determine how a router or switch handles incoming and outgoing traffic. QoS signaling services determine how network nodes communicate to deliver the specific end-to-end service required by applications, flows, or sets of users.

Here are a few of these, grouped by function.

Classification

   - IP Precedence
   - Committed Access Rate (CAR)
   - Differentiated Services Code Point (DSCP)
   - IP-to-ATM Class of Service
   - Network-Based Application Recognition (NBAR)
   - Resource Reservation Protocol (RSVP)
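Classification mechanisms such as DSCP work on the six high-order bits of the IP TOS byte. As a small illustration (a sketch of host-side marking, not router configuration), an application can request a DSCP marking on its own packets through the standard socket API; note that some operating systems ignore or restrict this request, and routers may re-mark packets at trust boundaries:

```python
import socket

# DSCP occupies the upper six bits of the IP TOS byte,
# so the TOS value is the DSCP code point shifted left by 2.
DSCP_EF = 46                 # Expedited Forwarding
TOS_EF = DSCP_EF << 2        # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# Datagrams sent on this socket now carry DSCP EF, which routers
# and switches along the path can classify and prioritize on.
sock.close()
```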

Policing

   - Committed Access Rate (CAR)

Congestion Management

   - Class-Based Weighted Fair Queuing (CBWFQ)
   - Weighted Fair Queuing (WFQ)

Shaping

   - Generic Traffic Shaping (GTS)
   - Distributed Traffic Shaping (DTS)
   - Frame Relay Traffic Shaping (FRTS)

Congestion Avoidance

   - Weighted Random Early Detection (WRED)
   - Flow-Based WRED (Flow RED)

Congestion Management: Fancy Queuing

Weighted fair queuing (WFQ) is a queuing mechanism that gives high priority to delay-sensitive sessions while still ensuring that other applications receive fair treatment.

For instance, on a Cisco network, Oracle SQLnet traffic, which consumes relatively little bandwidth, jumps straight to the head of the queue, while video and HTTP are serviced as well. This works out well because such applications do not require much bandwidth as long as their delay requirements are met.

A sophisticated algorithm looks at the size and frequency of packets to determine whether a specific session has a heavy traffic flow or a light traffic flow. It then treats the respective queues of each session accordingly.



Weighted fair queuing is self-configuring and dynamic. On Cisco routers it is also enabled by default on serial interfaces running at E1 speeds (2.048 Mbps) or below.
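The WFQ idea can be sketched with per-flow virtual finish times: each packet is stamped size × weight beyond its flow's previous finish time, and the scheduler always sends the packet with the smallest stamp, which is why small low-weight packets such as SQLnet jump ahead of large transfers. This is an illustrative model, not Cisco's actual implementation:

```python
import heapq

class WeightedFairQueue:
    """Minimal WFQ sketch: packets are dequeued in order of virtual finish time.
    A lower weight means a larger share (IOS derives weights from IP precedence)."""
    def __init__(self):
        self.heap = []            # (finish_time, seq, flow_id, packet)
        self.flow_finish = {}     # last finish time assigned to each flow
        self.virtual_time = 0.0
        self.seq = 0              # tie-breaker for equal finish times

    def enqueue(self, flow_id, packet, size, weight):
        # A packet cannot start before the flow's previous packet finishes,
        # nor before the scheduler's current virtual time.
        start = max(self.virtual_time, self.flow_finish.get(flow_id, 0.0))
        finish = start + size * weight
        self.flow_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow_id, packet))
        self.seq += 1

    def dequeue(self):
        finish, _, flow_id, packet = heapq.heappop(self.heap)
        self.virtual_time = finish
        return flow_id, packet

wfq = WeightedFairQueue()
wfq.enqueue("sqlnet", "query", size=100, weight=1)   # light, delay-sensitive flow
wfq.enqueue("http", "page", size=1500, weight=4)     # heavy flow, larger weight
wfq.enqueue("video", "frame", size=1000, weight=2)
# The small low-weight SQLnet packet jumps to the head of the line:
print([wfq.dequeue()[0] for _ in range(3)])   # → ['sqlnet', 'video', 'http']
```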
Other options include:

   - Priority queuing assigns different priority levels to traffic according to traffic type or source and destination address. Priority queuing does not allow any lower-priority traffic to pass until all high-priority packets have passed. This works very well in certain situations; for instance, it has been implemented very successfully in Systems Network Architecture (SNA) environments, which are highly sensitive to delay.

   - Custom queuing provides a guaranteed level of bandwidth to each application, much as a time-division multiplexer (TDM) divides bandwidth among channels. The advantage of custom queuing is that if a specific application is not using all the bandwidth it is allotted, other applications can use it. This ensures that mission-critical applications receive the bandwidth they need to run efficiently, while other applications do not time out either.

Custom queuing has been implemented especially effectively in applications where SNA leased lines have been replaced, because it provides guaranteed transmission times for very time-sensitive SNA traffic. What does “no bandwidth wasted” mean? Traffic loads are redirected when and if space becomes available: if there is space and there is traffic, the bandwidth is used.
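Custom queuing's round-robin byte-count behavior, including the “no bandwidth wasted” property, can be sketched as follows (an illustrative model with made-up queue names and byte counts, not IOS syntax):

```python
from collections import deque

def custom_queue_scheduler(queues, byte_counts):
    """Custom-queuing sketch: visit each queue round-robin, sending up to its
    configured byte count per pass; an idle queue's share goes to the others."""
    order = []
    while any(queues):
        for q, allowance in zip(queues, byte_counts):
            sent = 0
            # Keep sending from this queue until its byte allowance is used
            # up for this pass, or the queue runs empty.
            while q and sent < allowance:
                name, size = q.popleft()
                order.append(name)
                sent += size
    return order

sna  = deque([("sna", 500)] * 2)     # mission-critical queue, 1000-byte share
bulk = deque([("bulk", 1500)] * 2)   # best-effort queue, 1500-byte share
print(custom_queue_scheduler([sna, bulk], [1000, 1500]))
# SNA gets its guaranteed share each pass; once it is idle,
# the remaining bandwidth goes to the bulk queue.
```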

Random Early Detection (RED)

Random Early Detection (RED) is a congestion-avoidance mechanism for packet-switched networks that controls the average queue size by signaling end hosts to slow down temporarily. RED takes advantage of TCP’s congestion control mechanism: by randomly dropping packets before periods of high congestion, RED tells the packet source to decrease its transmission rate.

Assuming the packet source is using TCP, it will decrease its transmission rate until all the packets reach their destination, indicating that the congestion is cleared. You can use RED as a way to cause TCP to back off traffic. TCP not only pauses, but it also restarts quickly and adapts its transmission rate to the rate that the network can support.

RED distributes losses over time and normally keeps the queue depth low while absorbing traffic spikes. When enabled on an interface, RED begins dropping packets, at a rate you select during configuration, when congestion occurs.

RED is recommended only for TCP/IP networks. It is not recommended for protocols, such as AppleTalk or Novell NetWare, that respond to dropped packets by retransmitting them at the same rate.
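The RED drop decision described above is commonly modeled as a linear ramp: below a minimum threshold nothing is dropped, above a maximum threshold everything is dropped, and in between the drop probability rises toward a configured maximum. The decision is made against an exponentially weighted moving average of the queue depth, so short bursts are absorbed. A sketch, with illustrative threshold values:

```python
import random

def red_drop(avg_queue, min_th=20, max_th=40, max_p=0.1):
    """RED sketch: drop probability rises linearly from 0 at min_th to
    max_p at max_th; at or above max_th every packet is dropped.
    (Thresholds here are illustrative, measured in packets.)"""
    if avg_queue < min_th:
        return False                      # no congestion: never drop
    if avg_queue >= max_th:
        return True                       # severe congestion: drop everything
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p            # early, probabilistic drop

def update_avg(avg, instant_queue, weight=0.002):
    """Exponentially weighted average keeps the estimate stable across bursts."""
    return (1 - weight) * avg + weight * instant_queue
```

Because the drops are probabilistic, different TCP flows back off at different moments, which is what lets RED spread losses out in time instead of synchronizing every sender.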
