Network traffic optimizing algorithms in Linux.


Bucur, Ion; Enaceanu, Alexandru; Tabusca, Alexandru


1. INTRODUCTION

Traffic shaping has been an issue for network administrators for years. There are several solutions to this problem.

The first solution is to buy high-cost branded products (routers, multilayer switches) from companies like Cisco or HP, or medium-cost networking equipment from Zyxel, Nortel, and 3Com.

The second solution is to use a Linux-based router, as the Linux kernel contains a native framework for network traffic optimization. For a mid-sized company or campus, the first solution is expensive and not very scalable. The second solution, however, is well suited for such an environment.

The problem is that the algorithms included in the Linux kernel are not well documented, hands-on examples are missing, and it is therefore hard to select the best algorithm for a given network.

The "Linux Advanced Routing and Traffic Control" website (Hubert, 2005) is well documented and very well known by the network administrators, but the examples are not applied to every algorithm. Also, the examples and implementations are very old and the low bandwidth and few connections are not realistic these days.

This article focuses on analyzing, testing and implementing two of the most widely used Linux traffic shaping algorithms, CBQ and HTB, within a university campus.

2. CBQ AND HTB ALGORITHMS

CBQ (Class Based Queuing) is a token-bucket-based algorithm that provides traffic shaping on subsets (classes), as shown in Figure 1.

For each subclass, CBQ uses the leaky bucket algorithm. CBQ visits each class in round-robin order, which causes significant jitter.

CBQ works by deriving the idle time from the number of microseconds that elapse between requests from the hardware layer for more data. Combined, these measurements can be used to approximate how full or empty the link is.

This does not always produce correct results. For example, what about an interface whose actual link speed never reaches the nominal 100 Mbit/s? Such situations often arise because of restrictions imposed by the ISP or the carrier.

It gets even worse if one considers virtual network devices like PPP over Ethernet or PPTP over TCP/IP. In those cases, the effective bandwidth cannot be determined at all.

[FIGURE 1 OMITTED]
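For reference, a minimal CBQ setup under tc looks like the following; this is only a sketch in the spirit of the LARTC examples, with the interface, rate and address being illustrative assumptions:

# CBQ must be told the link bandwidth explicitly; this is exactly the
# estimation weakness discussed above
tc qdisc add dev eth0 root handle 1: cbq avpkt 1000 bandwidth 100mbit
tc class add dev eth0 parent 1: classid 1:1 cbq rate 30mbit \
    allot 1500 prio 5 bounded isolated
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 1.1.1.1 flowid 1:1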

HTB (Hierarchical Token Bucket) is the other widespread kernel traffic shaping algorithm; it can be used instead of CBQ due to its ease of use.

The hierarchical approach is well suited to setups where you have a fixed amount of bandwidth that you want to divide among different purposes, giving each purpose a guaranteed bandwidth, with the possibility of specifying how much bandwidth can be borrowed.

HTB works much like CBQ but does not use idle time calculations for shaping. Instead, it is a classful Token Bucket Filter, hence the name. It has only a few parameters, which are well documented on the HTB website (Devera, 2003). HTB can allocate one or more clients to every class, and when the bandwidth allocated to a class is exhausted, the class temporarily borrows bandwidth from sibling classes.
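A minimal illustration of the rate/ceil pair through which this borrowing is configured (interface name and numbers are assumptions):

tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
# rate is the guaranteed share; ceil is how far a class may grow by borrowing
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 40mbit ceil 100mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 60mbit ceil 100mbit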

3. ACCURACY COMPARISON

In order to make an accurate performance comparison, we simulated a specific configuration with three leaves and two interior classes.

Let us assume that we have a fixed amount of bandwidth (100 Mbps) and that we split it into two zones, denoted A and B.

Zone A gets 40% of the total bandwidth and zone B gets 60%.

Then, the traffic within zone A is split into 30 Mbps for WWW and 10 Mbps for other traffic. Any unused bandwidth can be borrowed from the other class.
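In tc terms, the simulated hierarchy corresponds roughly to the following sketch (the interface name is an assumption; zone B stays a leaf, while zone A is split further):

tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit               # total link
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 40mbit ceil 100mbit  # zone A (40%)
tc class add dev eth0 parent 1:1 classid 1:3 htb rate 60mbit ceil 100mbit  # zone B (60%), leaf
tc class add dev eth0 parent 1:2 classid 1:20 htb rate 30mbit ceil 100mbit # A: WWW
tc class add dev eth0 parent 1:2 classid 1:30 htb rate 10mbit ceil 100mbit # A: other traffic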

We used the freeware tool "pbntg" (Perl Based Network Traffic Generator) as the network traffic generator. Although "pbntg" is less flexible than the "pktgen" tool included in every Linux kernel, it can sustain high network traffic rates.
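The paper does not show how the generator was invoked; for reference, the in-kernel pktgen mentioned above is driven through its /proc interface along these lines (interface, packet count and addresses are assumptions):

modprobe pktgen
echo "rem_device_all"  > /proc/net/pktgen/kpktgend_0
echo "add_device eth0" > /proc/net/pktgen/kpktgend_0
echo "count 1000000" > /proc/net/pktgen/eth0
echo "pkt_size 1500" > /proc/net/pktgen/eth0
echo "dst 10.0.0.2"  > /proc/net/pktgen/eth0
echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth0
echo "start" > /proc/net/pktgen/pgctrl    # blocks while traffic is sent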

Table 1 shows the bandwidth used for each class and leaf. We can see HTB performing at the full 100 Mbit/s while maintaining the guaranteed bandwidth for zones A and B. Table 2 shows how CBQ performs with the same settings and traffic patterns as HTB.

While HTB performed well at the sustained generated traffic rates and its calculations produced no errors, CBQ queued packets so that the WWW rate was virtually increased, as were the other rates and the total rate. Basically, 10 percent of the packets were queued at the given rates.

4. PERFORMANCE COMPARISON

In order to create a large qdisc setup with a single root class and n leaves under it, we wrote a Perl script and tested it with n taking values between 1200 and 5000.

Each leaf has a rate of 8 kbit, while the root has 30 Mbit (except for n = 5000, where we set the root rate to 50 Mbit). The script generated various qdisc configurations and, for each one, simulated the flow with varying numbers of backlogged classes.
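The authors' generator was a Perl script; a shell sketch with the same effect might look as follows (the interface name and the use of hexadecimal class ids are assumptions):

n=1200
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 30mbit
i=2
while [ $i -le $((n + 1)) ]; do
    # tc minor class ids are hexadecimal, so print them with %x
    tc class add dev eth0 parent 1:1 classid 1:$(printf '%x' $i) htb rate 8kbit
    i=$((i + 1))
done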

From the measurements in Table 3, we can say that HTB's complexity is predictable and scales well with both the rate and the number of (active) classes.

The CBQ performance measurements (Table 4) showed that as the number of active leaves increases, the time spent in the qdisc increases linearly, together with the running time.

5. IMPLEMENTATION

Based on the tests described above, we decided to implement HTB within the university's campus network.

The campus network contains one MDF (Main Distribution Facility) and four IDFs (Intermediate Distribution Facilities). There are five separate VLANs:

* Maintenance;

* Administrative;

* Laboratories;

* Internet Cafe;

* Cisco Academy.

Using RRDTool (Oetiker, 2009) for a week, we estimated the traffic needs and created three sub-classes (a minimal RRD sketch follows the list):

a) Administrative offices and servers (10);

b) Laboratories, hostel, Internet cafe and Cisco Academy (20);

c) P2P transfers (30).
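For reference, a minimal sketch of the kind of round-robin database behind such a week-long measurement (file name, step and data source are assumptions, not taken from the paper):

# one COUNTER data source sampled every 5 minutes, kept for 7 days
rrdtool create vlan-traffic.rrd --step 300 \
    DS:octets:COUNTER:600:0:U \
    RRA:AVERAGE:0.5:1:2016
# fed periodically from the interface byte counters, e.g.
# rrdtool update vlan-traffic.rrd N:123456789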

Using the variable Q for the total available bandwidth and C1, C2, C3 for the bandwidth allocated to the subclasses 1:10, 1:20 and 1:30, the script is (Gheorghe, 2006):

#!/bin/sh

Q=60mbit
C1=20mbit
C2=20mbit
C3=20mbit

# clear any existing root qdisc
tc qdisc del dev eth0 root

# unclassified traffic falls into class 1:50
tc qdisc add dev eth0 root handle 1: htb default 50 r2q 40
tc class add dev eth0 parent 1: classid 1:1 htb rate $Q
# end of main class

tc class add dev eth0 parent 1:1 classid 1:10 htb rate $C1
tc class add dev eth0 parent 1:1 classid 1:20 htb rate $C2
tc class add dev eth0 parent 1:1 classid 1:30 htb rate $C3
# end of subclasses

# fair queueing inside each subclass
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10

U32="tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32"
$U32 match ip src 1.1.1.0/24 flowid 1:10
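Once the script has run, the resulting class hierarchy and its per-class packet and byte counters can be inspected with:

tc -s class show dev eth0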

For P2P packet marking we use the "IPP2P" extension. Once marked, P2P packets are assigned to the third class, 1:30 (Sachs et al., 2005).

I=iptables

# flush the mangle table
$I -t mangle -F

# restore any existing connection mark onto the packet
$I -t mangle -A PREROUTING -p tcp -j CONNMARK --restore-mark

# packets of already-marked connections need no further inspection
$I -t mangle -A PREROUTING -p tcp -m mark ! --mark 0 -j ACCEPT

# mark packets that ipp2p identifies as P2P traffic
$I -t mangle -A PREROUTING -p tcp -m ipp2p --ipp2p -j MARK --set-mark 1

# save the packet mark back onto the connection
$I -t mangle -A PREROUTING -p tcp -m mark --mark 1 -j CONNMARK --save-mark

# steer marked packets into HTB class 1:30
$I -t mangle -A POSTROUTING -m mark --mark 1 -j CLASSIFY --set-class 1:30
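Note that the ipp2p match is not part of the stock kernel; it has to be built and installed separately as an iptables extension and kernel module, available from the IPP2P website listed in the references.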

6. CONCLUSION AND FURTHER RESEARCH

This article focuses on traffic shaping algorithms that can be used on Linux-based routers. Using real-life measurements within a mid-sized network, we selected the algorithm best suited to this kind of infrastructure and were able to increase the bandwidth and the number of clients of the university campus while decreasing costs.

Further research includes measuring and testing these algorithms on Gigabit and 10 Gigabit uplinks.

7. REFERENCES

Devera M. (2003), HTB Website, Available from: http://luxik.cdi.cz/~devik/qos/htb/, Accessed: 2009-06-02

Gheorghe L. (2006), Designing and Implementing Linux Firewalls with QoS using netfilter, iproute2, NAT and L7-filter, Packt Publishing, ISBN 1904811655

Hubert B. (2005), LARTC Website, Available from: http://lartc.org/, Accessed: 2009-06-01

Oetiker T. (2009), RRDTool Website, Available from: http://oss.oetiker.ch/rrdtool/, Accessed: 2009-05-06

Sachs M., Piccard P., Baskin B. & Spillman G. (2005), Securing IM and P2P Applications for the Enterprise, Syngress, ISBN 1597490172

*** (2006), IPP2P Website, Available from: http://www.ipp2p.org/, Accessed: 2009-05-20
Tab. 1. HTB measurements (all rates in Mbit/s)

Time (s)   WWW rate   Other rate   A rate   B rate   Total rate
    2          40         20         60       40        100
    6           0         40         40       60        100
   10          60          0         60       40        100
   20          80          0         80       20        100

Tab. 2. CBQ measurements (all rates in Mbit/s)

Time (s)   WWW rate   Other rate   A rate   B rate   Total rate
    2          50         30         70       40        110
    6           0         70         70       40        110
   10          70          0         70       40        110
   20          90          0         90        0         90

Tab. 3. HTB performance measurements

Active leaves count   Time spent by qdisc (ms)   Running time (s)
        100                     450                     3
       1000                     550                     4
       4000                     550                     4.2
       5000                     600                     4.4

Tab. 4. CBQ performance measurements

Active leaves count   Time spent by qdisc (ms)   Running time (s)
        100                     450                     3
       1000                     600                     4
       4000                     650                     5
       5000                     700                    >5