IPFire supports Quality of Service (QoS) policies, which allow bandwidth to be reserved for specific types of traffic. This means that when a network connection is congested, high-priority traffic is let through at the expense of low-priority traffic, which is slowed by dropping its packets.
When QoS is enabled, the fq_codel packet scheduling algorithm is also active. This algorithm prevents excessive buffering, a problem known as bufferbloat. Bufferbloat occurs when too much buffering of packets (temporary storage in memory) causes high latency (a long time between message and response) and packet delay variation (known as jitter).
Upload and download limits can be configured separately. Different types of traffic (Classes) can be given different priorities and a set amount of guaranteed (reserved) bandwidth. When a Class needs bandwidth, the QoS algorithm decides whether it must borrow bandwidth from a lower-priority Class in order to maintain the guaranteed minimum that has been defined. Even when most of the bandwidth is in use, the reserved amount is still held back and handed out if needed, based on the user-configurable settings “Priority”, “Guaranteed Bandwidth” and “Maximum Bandwidth” under each Class.
Traffic in a Class with priority 1 takes precedence over all other traffic; priority 7 is the lowest.
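The sharing behaviour described above can be sketched roughly as follows. This is a simplified illustration of how an HTB-style scheduler might distribute bandwidth between Classes, not IPFire's actual implementation; the Class names and figures are hypothetical.

```python
# Simplified sketch of priority-based bandwidth sharing (illustrative only;
# not IPFire's actual scheduler). Each Class first receives its guaranteed
# bandwidth; leftover bandwidth is then handed out in priority order
# (1 is highest), capped at each Class's maximum.

def allocate(total_kbit, classes):
    # Start every Class at its guaranteed rate.
    alloc = {c["name"]: c["guaranteed"] for c in classes}
    spare = total_kbit - sum(alloc.values())
    # Hand out the remainder to higher-priority Classes first.
    for c in sorted(classes, key=lambda c: c["priority"]):
        extra = min(spare, c["maximum"] - alloc[c["name"]])
        alloc[c["name"]] += extra
        spare -= extra
    return alloc

# Hypothetical Classes on a 10,000 kbit/s uplink.
classes = [
    {"name": "VoIP", "priority": 1, "guaranteed": 2000, "maximum": 4000},
    {"name": "Web",  "priority": 3, "guaranteed": 3000, "maximum": 10000},
    {"name": "Bulk", "priority": 7, "guaranteed": 1000, "maximum": 10000},
]
print(allocate(10000, classes))
# {'VoIP': 4000, 'Web': 5000, 'Bulk': 1000}
```

Note that the Bulk Class still keeps its guaranteed 1000 kbit/s even though the two higher-priority Classes could absorb all the spare bandwidth.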
Carefully consider which types of communication are most important. For example, your network may be used for VoIP and video calls. These need high priority, because any delayed packets will be discarded, leaving users with choppy audio and video. Also, if your VPN is mostly used after hours, it may make sense to prioritize it below web traffic; otherwise, when someone does use the VPN during work hours and it is prioritized above web traffic, it could degrade web performance for many users. In contrast, if most users work from home, it may make sense for VPN traffic to have a higher priority than web traffic.
Use Kilobits per second. Guaranteed bandwidth reserves a certain amount of your bandwidth for a Class regardless of its priority; in most cases it is good to be conservative so you do not over-commit your available bandwidth. Maximum bandwidth is how much bandwidth you would like the Class to have when extra bandwidth is available.
This optional field defines the packet size for data that will be sent at the Class maximum bandwidth. When you leave this field blank, IPFire defaults to 1600 bytes. Some experimenting with this value has determined that the default seems to work best, but you may choose to change it manually. The value you enter should be in kilobytes: if you wanted a value of 5120 bytes, you would enter 5 (5 KB = 5120 bytes). In other words, divide the byte value by 1024 to get the value you will input.
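The conversion is just a division by 1024; as a trivial sketch:

```python
# Convert a desired size to the kilobyte value entered in the QoS form:
# divide the raw value by 1024.
def form_value(raw):
    return raw / 1024

print(form_value(5120))  # 5.0 -> enter 5 in the form
```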
This optional field defines the packet size for data that will be sent at maximum link speed (the downlink and uplink speeds you define at the top of the QoS page). It also defaults to 1600 bytes, which seems to be ideal for most cases. See this forum thread for more discussion.
Type of Service (TOS) lets you mark traffic sent from each Class with an IP header value that receiving routers can use to further prioritize it. Select the appropriate value from the drop-down box.
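For reference, the classic TOS values come from RFC 1349. The sketch below lists those standard values; the exact options offered in IPFire's drop-down box may differ, and modern networks largely use DSCP instead.

```python
# Legacy IP Type of Service values (RFC 1349). Routers that honour TOS use
# these bits to decide how to handle a packet. The labels here are the RFC's
# own names, not necessarily the wording in IPFire's drop-down box.
TOS_VALUES = {
    "Minimize delay":         0x10,  # interactive traffic, e.g. SSH, VoIP
    "Maximize throughput":    0x08,  # bulk transfers, e.g. FTP data
    "Maximize reliability":   0x04,
    "Minimize monetary cost": 0x02,
    "Normal service":         0x00,
}
print(TOS_VALUES["Minimize delay"])  # 16
```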
If you are an advanced user and comfortable working in a Linux shell, the easiest way to make major changes to QoS configuration is to edit the files directly in IPFire.
Buffering occurs when traffic passes from a link with high bandwidth to a link with lower bandwidth. It prevents the packet loss that would otherwise occur because the lower-bandwidth link cannot handle traffic at the same rate.
Buffering is more of a problem today because people have higher internet speeds and memory is cheap; as a result, manufacturers have increased the buffers in their hardware.
The problem occurs when it takes a long time for traffic to pass through a big buffer (hundreds to thousands of milliseconds). If a large file download is occupying the buffer at the same time as shorter communications (like loading a simple web page), everything waits, and the simple web page takes a long time to load.
The solution to bufferbloat is Active Queue Management (AQM). fq_codel combines two techniques:

- CoDel measures the latency between traffic entering and leaving the buffer. If the latency is too high, it drops a packet, which causes the TCP connection to slow down.
- Fair Queuing (FQ) organizes traffic into separate queues based on type.
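The drop decision can be sketched like this. It is a heavily simplified illustration of the CoDel idea (drop once queue latency stays above a target for a full interval); the real algorithm also adapts its drop rate over time, and the constants here are only the commonly cited defaults.

```python
TARGET_MS = 5      # acceptable time a packet may sit in the buffer
INTERVAL_MS = 100  # how long latency must stay high before dropping

def should_drop(samples):
    """Simplified CoDel-style check (illustration, not the real algorithm).

    samples: list of (timestamp_ms, latency_ms) pairs, oldest first, where
    latency_ms is how long a packet spent in the buffer (sojourn time).
    Returns True once latency has stayed above TARGET_MS for INTERVAL_MS.
    """
    above_since = None
    for ts, latency in samples:
        if latency > TARGET_MS:
            if above_since is None:
                above_since = ts          # latency just went bad
            if ts - above_since >= INTERVAL_MS:
                return True               # bad for a full interval: drop
        else:
            above_since = None            # latency recovered: reset timer
    return False

# Persistently high latency triggers a drop; a brief spike does not.
print(should_drop([(0, 20), (50, 30), (100, 25)]))  # True
print(should_drop([(0, 20), (50, 2), (100, 25)]))   # False
```

Dropping a packet signals the TCP sender to slow down, which is what drains the standing queue.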
Most ADSL and cable modems do not implement AQM. Even if your router (IPFire) shapes traffic using fq_codel, the modem will still buffer traffic. To work around this, limit your upload and download bandwidth to slightly below the full amount available.
While your network is quiet (there is very little other traffic), use a speed test to find your upload and download speeds. Sites such as speedtest.net, speedof.me and testmy.net are helpful, and specific tests for bufferbloat are available at Netalyzr and the DSLReports Speedtest.
If your speed is in Megabits, multiply that number by 1000 to get Kilobits.
Multiply that number by about 97% (.97)
60 Megabits download = 60 × 1000 × 0.97 = 58200 Kilobits
12 Megabits upload = 12 × 1000 × 0.97 = 11640 Kilobits
You can adjust that percentage based on your observed latency.
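The steps above amount to a one-line calculation; a small sketch (the function name and default headroom are made up for illustration):

```python
def qos_limit_kbit(measured_mbit, headroom=0.97):
    """Convert a measured speed in Megabits/s to the Kilobit/s value to
    enter on the QoS page, leaving ~3% headroom so the modem never has
    to buffer. Adjust headroom based on your observed latency."""
    return round(measured_mbit * 1000 * headroom)

print(qos_limit_kbit(60))  # 58200 (the download example above)
print(qos_limit_kbit(12))  # 11640 (the upload example above)
```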
This makes the speed of the link between your router and your modem roughly match your internet connection (the ISP limit). Because IPFire and your internet connection run at about the same speed, your modem will not need to buffer traffic.