QOS Tutorial

Discussion in 'Tomato Firmware' started by Toastman, Jul 21, 2013.

  1. Toastman

    Toastman Super Moderator Staff Member Member

    Using Tomato QOS

    N.B. This applies to Toastman versions; some other releases have broken QOS.


    I am involved with supplying WiFi to large residential buildings. Sharing an internet connection using Tomato's QOS system allows large numbers of residents to coexist and obtain a decent experience from the likes of YouTube and all of the normal apps in use by mortals...

    While the article is concerned with a large number of users in my big premises, obviously, if the QOS examples used in Toastman builds work here, they will also work (and probably even better) for you as a normal home/standalone user. So this thread is useful for everybody! Now - when I say it will work - I don't mean it will be optimum in every case, but it will give you a base to start from. You will see how I have used the different classes for the various protocols, and you can modify them if they aren't right for your setup. It's up to you to decide what to do and how to change it to suit your needs. But to do that you do need to understand how QOS works and the reasoning behind the rules.

    I should point out the difference between a standalone user or one with a couple of family members and a residential building like mine. These lucky "standalone" people have control over each PC and know what applications they are running. Whereas in a residential building, we have no idea what people are running on their PC's and we have no access to, and no control over them at all. So the only thing we can do to prevent one or two users or applications from hogging our valuable bandwidth, is to set up the router's QOS system in a way that prevents it happening. If you are a standalone user, it is often much easier to change what is happening on the PC than it is to try to use a QOS rule to control it afterwards, but we simply don't have that luxury. If you have a family you may have the same problem at home.

    Now, the comment about even a "basic" home setup. People often think they have a "basic" setup. That they browse the web and nothing more, so a single rule to prioritize port 80 HTTP is all they think is necessary. But they fail to understand that almost every single web page has links to other pages, flash videos, advertisers, scam sites, online video, links to messenger, facebook, photo sharing sites. Some of these are secure connections so that involves other ports. Some play music - now we have other streaming protocols. They also use Windows update service and usually Messenger - which itself uses several protocols and many ports.

    [And of course, don't forget almost every web page on the internet has embedded code that causes your browser to report all your details to advertising analysis sites. Google is the biggest of these, but there are hundreds of them. All of them are stealing information from your PC, your friends' email addresses, anything they can get hold of. Why? Because you gave them permission and signed away any copyright to your own photos when you check the "I agree" buttons without reading the small print. Nothing you do is private any more, and your PC is constantly sending data about you to places you never knew existed and for reasons you would most definitely not like.

    While on this subject, have you ever noticed how slow some sites are from time to time? It's usually not because of the site, but some tracker or spy service that your browser has been referred to, that is so overloaded it hasn't replied yet, and your desired website is stuck waiting for it. e.g. googleanalytics.com and other adservices. ]

    That is why QOS rules tend to get quite complex after several months of hard use, even for a home system with a couple of users. A single user is OK, he knows what he is doing. Add another user, and immediately one of them gets annoyed when the other gets his windows update or downloads his email !

    So, we use QOS for a variety of reasons. What I have to do in residential buildings is to KEEP THE SYSTEM RUNNING when a hundred or so people are all trying to use it at the same time. Ideally, we would like to not only keep it running, but have it so that each user wasn't even aware he was sharing an internet connection. And that is perfectly possible. To do that it is often necessary to limit some types of traffic; this is a trade-off. How much you need to do this depends on your own system. That's why you need to understand it and tweak the settings yourself.

    While reading this series of articles it is important to remember that they were originally separate posts - some of which have now been bundled together - so please forgive any repetition or duplication of information.

    This link is a useful place to find answers to a lot of common problems: http://www.linksysinfo.org/forums/showthread.php?t=63486


    The author has been involved in setting up WiFi in several large residential blocks, where it was important that the result not only worked but was simple to maintain by reception staff. What was achieved has surprised many people here, including myself. Why? Because we share one internet line between a few hundred people, and none of them are aware that they are sharing because the speed is still more than adequate.

    Ever sat in an internet shop, a hotel room or lobby, a local hotspot, and wondered why you can't access your email? Unknown to you, the guy in the next room or at the next table is hogging the internet bandwidth to download the Lord Of The Rings Trilogy Special Extended Edition in HDTV format. You're screwed - because the hotspot router does not have an effective QOS system. In fact, I haven't come across a shop, hotel, or an apartment block that has any QOS system in use at all. Most residents are not very happy with the service they [usually] pay for.

    So what is "QOS" ??

    A "QOS" (Quality Of Service) system is a firmware strategy used in a router connected to the internet gateway to allow it to give priority to those applications which are important. Without it, anarchy rules, and the downloader will usually wreck the internet access for everybody else.

    The normal systems installed in hotspots and residential buildings use a simple router with no QOS, running splash screen and access portal software, and a bunch of AP's nailed to the walls. The user often has to buy a card with an access code, and somebody makes heaps of money out of administering the access controls. Unfortunately, the actual web access is so slow and congested as to be unusable, the router regularly fails, and everyone in the block is angry and feels cheated.

    It doesn't have to be like this!

    Almost all normal SOHO (small office/home) routers have no real way to prioritise applications and make sure that P2P downloaders do not take over. However, some routers which happen to run Linux as an operating system can use third-party firmware (software) to turn a cheap lump of plastic into something akin to a professional router. All for around 50 - 100 dollars! And hotspot owners, cafes, and hotels can also use them to provide a superior WIFI system to the one they currently have. That firmware is called TOMATO - it was written by Jonathan Zarate, and subsequent developers have been adding to it ever since.

    It is quite easy for residential block owners to install and run a system themselves, with the benefit that the web access works well, and they don't have to pay anyone for a third rate access control service. And best of all - it doesn't have to be prohibitively expensive. A side benefit of installing what is in effect a wireless network covering your building, is that you can also use it for other purposes. For instance, I also have a 32 camera security system online.

    Now, you don't have to use expensive equipment. The Linksys WRT54GL is adequate for most purposes. We aren't aiming to supply ultra-high-speed internet to all users, and ADSL lines from 2 to 5 Mbps are available easily and cheaply in most countries. This will provide adequate service for most users. 8 and 16Mbps lines will be better, but not by as much as you might think! Most users will never see any difference. A router with a bit more memory is useful and more stable - try to get the ASUS WL500gP v2 (32MB RAM, 8MB Flash). Even better, if you can get it, is the WRT54G-TM, a router that also has 32MB RAM and 8MB Flash and runs nicely overclocked to 250MHz. It's faster than the ASUS WL500gP v2, the wireless is better, and it would be a better router for this application.

    Faster and better routers will become available as time goes by, but we do need to be able to run Tomato on them. Tomato, a third-party firmware which uses Linux, is the secret of getting this stuff to work properly in an apartment block using cheap hardware. On a WRT54GL clocked at 250MHz, 1,000 mixed connections (mostly P2P) usually result in a CPU load of about 20%. At this level, it's still fast.

    JANUARY 2010

    The ASUS RT-N16 router is now available in most countries; it is clocked at 480MHz and has 128MB of RAM. Teddy Bear is the first to port Tomato over to it - and even the first "beta" is stable. Keep an eye on this thread: http://www.linksysinfo.org/forums/showthread.php?t=63587 From now on, it would be best to use this for the main router and WRT54GL's for AP's.
    There seems little point in type "N" AP's unless they operate on the 5GHz band, due to interference problems. However, if you use 5GHz, the poor penetration of walls and the short range make it almost useless in most apartment buildings. A "G" 54Mbps connection is going to be the standard for some years yet, and for these reasons it will be the best solution.

    You'll need more access points, just use more WRT54GL's and set them up as AP's wired with CAT5e cable to your main router, via switches if necessary. For God's sake don't try to do it with WDS. There is a very severe speed and reliability penalty even with a single WDS connected AP, with a couple or more you will be lucky to download anything this century.

    If you wish to use the network in your building for other purposes too, such as office, security cameras, then it might be a good idea to use gigabit switches, otherwise at the moment they aren't necessary and are more expensive.

    You may find cheaper AP's but the twin external antennas on the WRT54GL's and the ability to set higher transmit power have been an advantage for me. The additional information given by using Tomato firmware on the WRT54GL even when used as an AP is an invaluable tool for faultfinding.

    This is the easy part of the setup. The rest is up to you to get right and maintain.

    Tomato firmware probably now has the most effective and configurable QOS of any SOHO router around. If you have a real need for QOS to control multiple users, you will find DD-WRT etc. quite useless.

    The secret of a successful residential system is the ability of Tomato's QOS to allow you to actually share your ISP's service between all of your clients, hence the title of the first article. And the methods used here can and will work for anyone, what will work for a large residential system should work just about anywhere else, just modify to suit your needs.

    Now, a warning - you'll find some people tell you that you cannot do this job with a small SOHO router, with or without Tomato firmware, because there are too many users. Please don't think about the number of users because it doesn't matter how many USERS there are. The overall throughput is limited by the connection to your ISP and it makes no difference if you have one user or 100, as long as the firmware can handle the overall number of connections and the throughput. Tomato makes this possible. In fact, many of my colleagues have been replacing their business Cisco routers with routers running Tomato, because they are just too difficult for them to administer. Yes, I'm serious. Most small businesses simply don't need them.
  2. Toastman


    Let's begin by making some things a little clearer for newcomers to Tomato.

    "Incoming" versus "Outgoing" QOS

    Unfortunately many posts on the subject of QOS confuse people, especially newcomers, into misunderstanding what the router's QOS is, what it is NOT, what it is used for, and what it can really achieve if understood and used properly. QOS runs on the router connected to the internet gateway or ISP. It works on the WAN (Wide Area Network) port or Internet port.

    NOW - let's get this straight. There isn't a "QOS for Upload" or a "QOS for Download" in Tomato :wall: Tomato's QOS system operates on outgoing data, but it also has class limits on incoming data which can be used to drop packets, and cause link stabilization, at those class limits. We use both of these as part of QOS.

    This ongoing battle seems to arise from the fact that the QOS system operates on outgoing traffic. Therefore, many people do not understand how it can manipulate the situation to control INCOMING traffic. So they confuse everyone by swamping the forums with comments like "QOS doesn't work" and "the Incoming QOS is rubbish" - etc. This actually makes me extremely angry, because it is just not true. If it were true, then none of the people in the apartments I administer could use the internet. So throughout these articles you will find warnings to disregard posts by such persons. I'm an engineer, I believe in things that work, and only if they work.

    QOS would actually be of no interest whatsoever to us unless it helped us with our incoming data flow. It really doesn't help to look at it as either "incoming" or "outgoing" QOS. Those people who keep insisting that because QOS only works on outgoing traffic (uploads) then it's useless, are missing the whole point. I must stress this, because there are hundreds of people making stupid statements like this in the forums and unfortunately, too many people believe what they are saying. These people are spreading misinformation, based on ignorance. You CAN control incoming data to a great extent, but there's no "magic button". You have to learn how to do so.


    and also, adopt this philosophy:

    Router QOS is best viewed as an OVERALL strategy for improving your flow of data.

    So HOW does the router's QOS work, how does it make any difference to incoming traffic - if it only acts on the outgoing data?

    Well, it's actually very simple.

    Take this analogy. Suppose there are a thousand people out there who will send you letters or parcels in the mail if you give them your address and request it (by ordering some goods, for example). Until you make your request, they don't know you and will not send you anything. But send them your address and a request for 10 letters and 10 parcels and they will send you 10 letters and 10 parcels. Ask for that number to be reduced or increased, or ask (pay!) for only letters and no parcels, and they will do so. If you get too much mail, you stop sending the requests or acknowledgements until it has slowed down to a manageable level. Unsolicited mail can be dealt with by ignoring it or by delaying receipt (payment) and the sender will send less and give up after a while.

    In other words, you stop more goods arriving at your house by simply not ordering more goods!

    If you have letters arriving from several different sources, you stop or delay sending new orders to the ones you don't feel are important.

    That's it!

    The amount of mail you receive is usually directly proportional to the requests you send. If you send one request and get 10 deliveries, that is a 1:10 ratio. You've controlled the large amount of deliveries you receive with only the one order which you sent. Sending 1,000 requests at a 1:10 ratio would likely result in 10,000 letters received - more than your postman can deliver. So based on your experience, you can figure out the ratio of packets you are likely to receive from a particular request, and then LIMIT the number of your requests so that your postman can carry the incoming mail. But if you don't limit what you ask for, then the situation quickly gets out of control.
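The arithmetic behind the analogy can be sketched in a few lines (a toy illustration of the ratios in the text, not anything the router actually runs):

```python
# Toy model of the mail analogy: each request brings back `ratio` deliveries,
# so to keep within what the postman can carry, limit the requests you send.

def deliveries_expected(requests_sent, ratio):
    """Deliveries generated by a given number of requests at a 1:ratio ratio."""
    return requests_sent * ratio

def max_requests(postman_capacity, ratio):
    """Largest number of requests whose replies still fit the postman's bag."""
    return postman_capacity // ratio

print(deliveries_expected(1000, 10))  # 10000 - more than the postman can deliver
print(max_requests(2000, 10))         # 200 - so limit yourself to 200 requests
```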

    It's not a perfect analogy, sure, but router QOS works in a similar way. You have to limit the requests and receipts that you send - and the incoming data reduces according to the ratio you determine by experience. We will ignore, for the moment, the fact that we can also set a limit on incoming data, and use this to also reduce the amount of traffic.

    The problem is you can have no absolute control what arrives at your PC - because your router does not know - and can never know - how many packets are in transit to you at any given time, in what order, and from what server. The only thing your router can directly control is what you SEND, see what comes back, and then respond to it. And the QOS system attempts to influence your incoming data stream indirectly by changing the data that you SEND in much the same way that you can control incoming mail simply by reducing your demand for it.

    That is the whole purpose of router-based QOS systems, and that is why they have been developed - not merely to control uploads! However, you can't just check a magic box marked "limit all my P2P when I am busy with something more important" - you have to give the router clear instructions on how to accomplish your aim. To do this it is necessary to understand how to control your incoming data by manipulating your outgoing requests, class priorities, and receipts for received packets. Added to this, we also have the ability to shape traffic by using bandwidth limits on outgoing total traffic, and also on the incoming individual traffic classes. Then we have to consider UDP packets (rather less easy to control) and how to effectively control applications that use primarily UDP (VOIP, multimedia etc). Depending on your requirements that may take hours or months to get working satisfactorily.

    Tomato originally had about 4 rules, as I recall. Something has to be said, so I'll just come right out and say it:

    The default QOS rules in the old original Tomato are almost completely USELESS and should be immediately changed.

    The worst problem is the feeble attempt at classifying P2P. P2P cannot be classified by the means shown - assuming that it will be using ports 1024-65535. Neither do IPP2P or L7 filters work (used in most SOHO routers as the usual way to magically "LIMIT ALL P2P" - that is just advertising BS for the nice glossy box). If you want to use the original short QOS ruleset from Jon Zarate, the ONLY way that will work is to set a default class (I use class D) and then delete ANY rule pertaining to P2P (the one referring to ports 1024-65535). Then make sure everything that you REALLY want to use on your system is addressed by placing it in higher classes. Now, anything that is NOT addressed by one of your rules will bypass them and end up in your default class D. This will include almost all P2P!

    Next you have to define the rule for your DEFAULT class as mentioned above, so decide what you actually want to do with P2P. Usually we want to permit some but prevent it from hogging the bandwidth. As an example, set outgoing rate and limit to 1% and 5%. Set the incoming class limit at 50%. Now you should see it throttled. After this, you can adjust it to suit your own needs.
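To see what those percentages mean in absolute terms, here is a toy calculation. The 1%/5%/50% figures are the example settings above; the line speeds are assumed for illustration:

```python
# Translate the example class percentages into absolute rates.
# Line speeds are assumptions for illustration, not a recommendation.

def pct(rate_kbps, percent):
    """Absolute rate corresponding to a percentage of the line speed."""
    return rate_kbps * percent / 100

UPLINK_KBPS = 512      # assumed uplink
DOWNLINK_KBPS = 2048   # assumed downlink

out_rate  = pct(UPLINK_KBPS, 1)     # guaranteed outgoing rate for the default class
out_limit = pct(UPLINK_KBPS, 5)     # outgoing ceiling for the default class
in_limit  = pct(DOWNLINK_KBPS, 50)  # incoming ceiling for the default class

print(out_rate, out_limit, in_limit)  # 5.12 25.6 1024.0 (all in kbps)
```

Even a busy default class can therefore never claim more than a small slice of the uplink, while still being allowed half the downlink when nothing more important needs it.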

    The rest of these articles will expand on this, and show you how to effectively control your traffic.
  4. Toastman


    What do we need QOS to do?

    Before we begin, let’s define a few things and clear up some confusion.

    QOS stands for "Quality Of Service". Its aim is to provide an improved level of service for critical applications. The term "QOS" in engineering circles refers to the marking of packets for critical applications so that they can be easily identified - and given priority by every internet router between your PC and the remote server - hence providing a guaranteed "quality" of service necessary for some applications, like VOIP, to work without jitter, dropouts, and delays.

    The SOHO router's simple "QOS" system doesn't do this, and a somewhat different method is implemented in the router to make sure that our most critical applications are accorded at least some priority by the link. Actually, it is little more than traffic shaping used in conjunction with a priority system. But nonetheless, we can still make it work for us provided that we understand how to do so.

    The router QOS system attempts to ensure that all important traffic is sent to the ISP first, and by implication, the remote server will also reply first. It is actually incoming traffic which we are really interested in improving. We try to control or "shape" outgoing traffic so that the higher priority incoming data is not delayed.

    Packets from your PC will be “inspected” and compared with the router’s QOS settings in order to find out if they need priority, and then assigned a place in the outgoing queue waiting to be sent to your ISP. Other mechanisms may also be used to manage the traffic so that the returning data from the remote server is delivered before that which is less important.

    But someone has to define a set of QOS rules for a particular environment. Hey Dude - that's YOU! :p

    If you are a standalone user with one PC then you probably don't need QOS at all. If you are a P2P user and wish to download at absolute maximum speed, you will usually find QOS counter-productive.

    The worst problem faced by all of us in multi-user environments is P2P traffic, which tries to take all available bandwidth. Hence, most discussions of QOS operation refer to P2P when giving examples of traffic control. We normally give P2P a low priority because most people want to browse online websites - and the P2P traffic slows their web browsing down.

    The faster your Internet Connection, the better your system will work, the more P2P you can allow on your network, and the better your VOIP and games will work. This is because of two things - firstly, obviously the overall speed improves. Secondly and more important, it is more difficult for P2P applications to actually generate enough traffic to fill the pipe. Overall, everything becomes less critical.

    The assumption made here is that you will have between 1Mbps and 5Mbps at least, with 512k uplink.

    If you have a small network of 2 or 3 PC's then you may benefit from QOS, but it sometimes doesn't have to be too complicated. But if you have a larger network similar to mine - large apartment blocks with about 250-400 rooms and maybe around 600-1200 residents - then QOS is absolutely essential. Without it, nobody will be able to do anything. Just a single P2P user will often ruin it for everyone else. However, the rules for correct QOS operation are the same for large or small networks - but you must decide for yourself how complex you want your rules to be, and what applications running on your PC's you need to address.

    Why are there so many rule examples?

    In a large block like mine, the clients of the rooms are completely uncontrollable, we can't ask them to change settings, ports, etc. We therefore have to try to cover everything with QOS. So your rules need a lot of thought. What we do is of the utmost importance if we want things to work properly, because if we screw up, everyone is dead in the water. Just one resident can ruin the whole thing, often unwittingly. Unfortunately, that means a very steep learning curve. It's also important to keep an open mind, and to understand that if a set of rules don't work, there is a reason - that reason is usually that you have overlooked and failed to address a particular set of circumstances.

    The QOS in our router can only operate on outgoing data, but by “cause and effect” – this has a significant influence on the incoming data stream. After all, the incoming data to our router is what our QOS is *really* trying to control. QOS works by assigning a priority to certain classes of data at the expense of others, and also by controlling traffic by limits and other means - so as to enable prioritised traffic to actually get that priority.

    Since UDP is connectionless, the main methods our router uses to control traffic - manipulation of TCP packets - are not effective on it.

    UDP, used for VOIP and IPTV applications, can't be controlled as such, but it can be helped by the reduction of TCP and other traffic congestion on the same link. In fact, some kinds of UDP traffic can be a huge drain on resources - and we will place a limit on it to prevent it from swamping our router.
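Since UDP sends no receipts to throttle, a limit on it is a blunter instrument: conceptually a token bucket, which is the usual mechanism behind a class bandwidth limit. Here is a minimal sketch (an illustration only, not Tomato's code):

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: tokens refill at `rate` bytes/second,
    a packet is forwarded only if enough tokens remain, otherwise dropped."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes, now=None):
        now = time.monotonic() if now is None else now
        # refill tokens for the time elapsed, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # over the limit: the UDP packet is simply dropped

bucket = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
print(bucket.allow(1500, now=bucket.last))  # True: the burst fits
print(bucket.allow(1500, now=bucket.last))  # False: bucket drained, packet dropped
```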

    We would usually like to allow WWW browsing to work quickly, and get our email, but aren’t too bothered about the speed of P2P – for example. In the event of huge amounts of traffic occurring which is too much for our bandwidth limitations, we also have to control the maximum amount of data which we attempt to send or receive over those links.

    This is called “capping”, “bandwidth limiting” or “traffic management”. This is also managed by the QOS system in our router and is a ***part*** of QOS.

    So, once again a reminder - we must not refer to "incoming" or "outgoing" QOS. All of these mechanisms are PART of the "QOS" system on the router.

    We can however, refer to the incoming part of QOS as the "ingress" and the outgoing is the "egress" part.
    Last edited: Oct 7, 2013

  6. Toastman


    Now, let’s have a look to see why many people fail to get QOS to work properly or at all, especially in the presence of large amounts of P2P.

    Firstly, let’s start by making the statement that “slow” web sessions are usually due to “bottlenecks” – your data is stuck in a queue somewhere. Let’s first assume that the route from your ISP to the remote web server is fast and stable. That leaves us with our router - which is something that we have some control over.

    We are left with two points commonly responsible for bottlenecks.

    1) Data sent by your PC’s, having been processed by QOS, is queued in the router waiting to be sent over the relatively slow “outgoing” uplink to your ISP. Let’s assume a 500kbps uplink.

    2) Data coming from the remote web server, in response to your PC’s requests, is queued at the ISP waiting to be sent to your router. Let’s assume a 2Mbps downlink.

    Bottleneck No. 1

    Our PC's can usually send data to the router much faster than the router can pass it on to the ISP. This is the cause of the first "bottleneck". But there is now another function associated with the sending of data by your router - to the ISP, which is the key to QOS operation. Let me try to explain:

    The incoming/outgoing data is queued in sections of the memory in the routers - these are known as “buffers”. It is important not to let these “buffers” become full. If they are full, they are unable to receive more data, which is therefore delayed or lost. The lost data has to be resent, resulting in a delay.

    We need to be sure that we are not sending data to the ISP faster than the ISP can receive it.
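What happens when such a buffer fills can be pictured with a toy drop-tail queue (sizes are invented for illustration):

```python
from collections import deque

class TxBuffer:
    """Toy drop-tail transmit buffer: once full, further packets are lost
    and the sending PC must back off and resend them later."""

    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def push(self, packet):
        if len(self.q) >= self.capacity:
            self.dropped += 1   # buffer full: packet lost, causing a resend delay
            return False
        self.q.append(packet)
        return True

buf = TxBuffer(capacity=3)
results = [buf.push(f"pkt{i}") for i in range(5)]
print(results, buf.dropped)  # [True, True, True, False, False] 2
```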

    I must stress that it is an absolute necessity that you set the outgoing limit at about 85% of the MINIMUM bandwidth that you EVER observe on the line, and often even less. Note that I did NOT say "AVERAGE" - I said use the "MINIMUM" reading - and I do mean MINIMUM. If you set an average of the readings, don't complain to me when your QOS doesn't work.


    You must measure the speed at different times throughout the day and night with an online speed test utility, with QOS turned off, and no other traffic - to determine the lowest speed obtained for that line. You then set 85% of this figure as your maximum permitted outgoing bandwidth usage. Just because this seems low to you, don't be tempted to set a higher figure. If you do, then the QOS system will not work. In fact, you will get BETTER performance by setting a LOWER limit; this will be covered later.
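As a sketch of that rule (the sample readings are hypothetical speed-test results):

```python
# Take the MINIMUM of the measured uplink speeds - never the average -
# and set the QOS max outgoing bandwidth to 85% of it.

readings_kbps = [498, 512, 476, 505, 460, 489]  # speed tests over a day and night

qos_outbound_limit = min(readings_kbps) * 85 // 100   # integer kbps

print(qos_outbound_limit)  # 391 - versus 416 if you (wrongly) used the average
```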

    When a maximum outgoing bandwidth limit is reached - packets from the PC's are dropped by the router, causing the PC's on your network to slow down by backing off, and to resend the data after a wait period. This takes care of itself and is only mentioned in passing. You don't have to do anything.

    [*** ADDITION - Since this article was first posted I have had HUNDREDS of people PM me to say they have followed my advice and QOS still does not work. When I follow this up I almost always find they have NOT followed even the basic advice above. They have neither set the Maximum Bandwidth limit to 85% (or less) of the lowest measured speed nor have they set their limits, either class or incoming, as I advised. The moral is - if you don't do what I recommend, don't complain when it doesn't work !! ]

    Next, let’s consider QOS in operation.

    Imagine some unimportant data that you wish to send to your ISP, presently stored in the router's transmit buffer. As it is being sent, you might start up a new WWW session which you would prefer took priority. What we need to do is to insert this new data at the head of the queue so that it will be sent first. When you set a “priority” for a particular class, you are instructing the router that packets in certain class groups need to be sent before other classes, and the router will then try to arrange the packets in the correct order to be sent, with the highest priority data at the front of the queue, and the lowest at the back. This is quite independent of any limits, or traffic shaping, that the QOS system may ALSO do.
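The queue behaviour described above can be sketched with a tiny priority queue (class names and priority numbers are invented for illustration, not Tomato's internals):

```python
import heapq
import itertools

class TxQueue:
    """Toy priority transmit queue: lower priority number is sent first,
    and packets within the same class keep their arrival order."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserving FIFO within a class

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = TxQueue()
q.enqueue(5, "bulk P2P chunk")   # queued first...
q.enqueue(1, "WWW request")      # ...but the WWW packet jumps to the head
print(q.dequeue())  # WWW request
print(q.dequeue())  # bulk P2P chunk
```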

    Now, we are going to assume that we have defined a WWW class of HIGH with no limits. Let’s imagine the router has just been switched on, and we then open a WWW session. A packet (or packets) is sent to the remote server requesting a connection - this is quite a small amount of data. The server responds by sending us an acknowledgment, and the session begins by our requesting the server to send us pages and/or images/files. The server sends quite large amounts of data to us, but we respond with quite a small stream of ACK packets acknowledging receipt. There is an approximate ratio between the received data and our sent traffic, consisting mostly of receipts for that data, and requests for resends.

    Bottleneck No. 2 - The BIG ONE

    This relationship between the data we send and the data we receive varies with the applications and protocols in use, but is usually of the order of 1:10 or 1:20, though it can rise to around 1:50 with downloads and P2P connections. So an unlimited outgoing data rate of 500kbps *could* result in an incoming data stream of anything from 5 to 25Mbps - which would of course be far too much for our downlink of 2Mbps – and our data would therefore be queued at the ISP waiting to be sent to our router. Most of it will never be received – it will be “dropped” by the ISP’s router. All other traffic will also be stuck in the same queue, and our response time is awful. This is bottleneck no. 2 in the above list.
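A back-of-envelope check of those figures, using the example ratios and line speeds from the text:

```python
# Estimate the incoming stream pulled in by a given outgoing rate,
# using the article's example send:receive ratios of 1:10 and 1:50.

def incoming_estimate_kbps(out_kbps, ratio):
    """Rough incoming rate generated by `out_kbps` of requests/ACKs."""
    return out_kbps * ratio

DOWNLINK_KBPS = 2000  # the 2 Mbps downlink assumed above

for ratio in (10, 50):
    inbound = incoming_estimate_kbps(500, ratio)
    print(ratio, inbound, inbound > DOWNLINK_KBPS)
# 1:10 -> 5000 kbps, 1:50 -> 25000 kbps: both far exceed the 2 Mbps downlink,
# so the excess queues up (and is dropped) at the ISP.
```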

    It is vitally important that the incoming "pipe" never gets full. If you allow it to do this, you have lost the battle. At this point let me warn you about the many forum posters who insist that if your pipe isn't FULL there was no need for QOS in the first place. Please ignore them because they clearly don't understand what they are talking about. You will see why later.

    So how do you prevent this bottleneck? Well, you have to restrict the amount of data that you SEND to the remote server so that it will NOT send too much data back for your router to process. You have absolutely no control over anything else - you cannot do anything except play around with what you SEND to the remote server. And what you SEND determines what, and how much, traffic will RETURN. Understanding how to use the former to control the latter is the key to successful QOS operation. And how to do that, you can only learn from experience.

    (Recent versions of Tomato have a better ingress system, and we have better control over incoming data than previously. However, let us stick to the principles used in the earlier versions, because they still have lessons for us. We will assume for the moment that we are using an older version of QOS with no effective incoming data limit and no proper priority system on incoming data. Thus we will try to control traffic by manipulating outgoing data alone).
    Let's go back for a moment to the analogy in the introduction:

    So, we have to understand how the amount of incoming data is influenced by what we send. Experience tells us that for some applications approximately a 10% ratio of sent to received data is normal, while for others it can be less than 5% (esp. P2P). In the case of UDP packets, we have much less control. UDP operates in a connectionless state and no receipt is sent for incoming packets. Hence we lose the use of this mechanism for controlling the data flow. The ratio of transmitted to received data is not known with any certainty either. In a VOIP connection, for example, a large data stream will be initiated to your PC when the other person speaks, while your own computer sends almost no traffic at all unless you are also speaking. Therefore you can improve VOIP / IPTV by slowing your TCP connections to make room.

    To examine the effect of this "ratio" between sent and received TCP data in more detail we’ll use P2P – the real PITA for most routers. We will define a class of LOWEST for P2P with a rate of 10% (50kbps) and a limit of 50% (250k). Now we look at the result. The link starts sending at 50kbps and quickly increases to 250kbps outgoing data, and following the 5% ratio between send and receive, we get perhaps 5Mbps INCOMING data from the P2P seeders in response. That is far too fast for our miserable little downlink of 2Mbps, and is queued at the ISP’s router waiting for our own router to accept it. The downlink has become saturated. Any other traffic is also stuck in this queue. When most of these packets fail to be delivered, after a preset period of time they are discarded by the ISP’s router and are lost.

    As it does not receive any acknowledgement of receipt from our PC for the missing packets, the originating server “backs off” and resends the lost data after a short delay. It keeps doing this, increasing the delay exponentially each time, until the data rate is slowed down enough that packets are no longer dropped. It may take a few rounds of this, but eventually the link will stabilize.
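    The back-off behaviour can be sketched as follows. The delays are illustrative only, not the exact TCP retransmission timer values:

```shell
# Illustrative exponential back-off: each unacknowledged resend doubles the wait.
delay=1
for attempt in 1 2 3 4 5; do
  echo "resend ${attempt}: wait ${delay}s"
  delay=$(( delay * 2 ))
done
# After a few rounds the sender's effective rate has fallen far enough
# that packets stop being dropped, and the link stabilizes.
```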

    Incidentally, by looking at the “realtime” bandwidth graph in Tomato, it is easy to see when your downlink is being saturated. The graph will “flat top” at maximum bandwidth, with very few and small peaks and troughs noticeable in the graph. This is usually a sign that your QOS isn't working well.

    Right - let’s see what we can do about this !

    There are several different mechanisms available for us to use which will have the effect of slowing down an incoming data stream. At first I will concentrate on showing you the most important ones, which can produce the best speed and response for other classes even with several P2P clients online.

    1) Reducing outgoing traffic for a class.

    We drop the P2P class rate down to 1% (5k) and the limit to 10% (50k) - and watch what happens. Don't SEED anything, ok?

    The incoming data from the remote server(s) now also drops to 500kbps - 1Mbps (cause and effect). This is OK and fits within our available 2Mbps bandwidth downlink, while a simultaneous WWW session is still quite fast and responsive. However, this is a simplistic view, because the “5-10% ratio” is not *always* applicable, and high-bandwidth seeders may actually send you more data than expected, nevertheless it will still probably be within the 2 Mbps link speed. However, if you try to do better than this and increase the outgoing limit to 20%, it MIGHT still be OK – or it more probably might NOT, depending on the material being sent to you, the number of seeders, the number of connections open at any given time, and many other factors which all have an effect on the link. And at 20% the simultaneous WWW session starts to slow down, and is generally unresponsive as the link starts to saturate. So you really do need to err on the low side to be absolutely certain that the downlink does NOT become saturated, or the QOS will break. I will discuss the pros and cons of increasing this setting to enable us to download more P2P later. For the moment, stay with me.

    TO RECAP - It is quite likely that setting your outgoing P2P traffic limit to more than 15-20% will saturate your downlink with P2P, causing QOS to be much less effective. You have to decide on a compromise setting that allows higher P2P activity while still allowing a reasonably quick response to priority traffic like HTTP. [Shortly, we will see how to combine two methods to achieve this].

    Still, let’s set it to 20% (100k UP) and be optimistic - phew – everything’s still OK. But we’ve hit a snag already – especially with P2P applications :angry:

    Consider what happens, for example, when your P2P application needs to SEED or UPLOAD a lot of files in order to gain “credits”. Your PC uploads a lot of data, perhaps quickly filling your “upload” allocation of 100k. BUT this class is shared with the receipts you are sending out in response to incoming files. These packets no longer have exclusive access to the router's buffers, and since they have no special priority in the queue, may be delayed. Now your downloads can no longer reach their normal speed - they may even drop down to almost nothing. At this point you might think there is something wrong with QOS. But QOS is actually working correctly, and it is your application of the rules that is in question.

    Your uploads have dominated the connection because you didn't anticipate what might happen. You allowed uploads to dominate your connection, when what you really wanted to do was to allow downloads.

    2) Limiting the incoming data rate of a class

    A partial solution can be achieved by using the “incoming” traffic limit in Tomato P2P class to set a limit on incoming P2P data. So how does this work? The connection tracking section of the router firmware keeps a record of all outgoing P2P packets and then attempts to keep a tally on any incoming packets associated with it. It can therefore add them all up and then calculate the speed of the incoming P2P, which can then be limited. So we could theoretically set an incoming limit of something under 2 Mbps. If this is exceeded, the router will drop packets, forcing the sender to back off and resend the data – once again allowing the link to stabilize. It is actually a form of crude congestion control. To better understand how the normal built-in backoff strategies of the TCP/IP protocols operate, you must use Google and read up primers on TCP/IP operation.

    This is, of course, the reason why a maximum incoming limit is sometimes recommended to be initially set in QOS/BASIC for rather less than the maximum “real” speed normally achievable from your ISP. It is an attempt to slow down the link before it becomes saturated. That is why it is often recommended to set it to something LOWER than the maximum, usually 85% or so. However, if you run a busy network, you've probably noticed that in practice this is actually unable to keep the incoming data pegged low. This is partly because while the link is busily "stabilizing itself", new connections are constantly being opened by WWW, Mail, Messenger, and especially other P2P seeders, while other connections may close unpredictably, and that upsets the whole thing. The goalposts are constantly moving! You will see from this that P2P in particular is very difficult to accurately control. Over a period, the average should approximate the limit figure. Best latency is achieved with a combination of 1) and 2).

    If you want to see your QOS working quickly and with good latency, set the incoming limit low at around 66% of your ISP's maximum speed.

    Here is a graph showing the latency of a 1.5Mbps ADSL line under differing loads, and the result of limiting inbound traffic. If you use VOIP you must limit your incoming bandwidth to allow low latency. You can't have both low latency and a full pipe!!


    (graph thanks to Jared Valentine).

    More recent versions of Toastman Tomato have a much improved QOS. The incoming Limit is now a true limit, and the ingress system also has priorities in a similar way to the Outgoing section.

    The way the incoming section of the QOS system works is by simply dropping packets, which will cause the remote server to back off and send again after a short delay. Each time a packet is dropped on that connection, the remote server will again back off exponentially, until the link stabilizes.

    Therefore, our QOS setup is now much less critical than before. You can set limits on each class and that will usually prevent congestion, if you get it right. Remember that UDP is not delayed, so leave some room for it in your incoming bandwidth plan. (We can drop UDP packets but they will not be retransmitted. For that reason, there is a tick box in the setup page to exclude UDP from the QOS ingress.)

    It is important not to rely 100% on the incoming limits especially while you set up QOS. Set it only when all else has been adjusted and you can see if your outgoing settings are causing congestion. If you try to set up your QOS with incoming limits set, it will actually make it rather difficult for you to see what is happening as a result of your settings, because the limit will kick in and mask what is going on. Initially, it is useful to set the incoming overall limit to 999999 so that it is in effect switched off, this will make things easier for you while examining your graphs and adjusting your QOS parameters. But once your QOS rules are in place it ALWAYS pays to impose an incoming limit for many applications as well as an overall limit.

    To recap - For best throughput and reasonable response times and speeds, set incoming limits quite high, near the inbound maximum limit if you wish. For best latency, set incoming limits lower, see the graph below. I found 50% maximum limit to be extremely responsive, 66% good, 80% still reasonable but ping times beginning to suffer under load, and things dropped off noticeably after that. As a compromise, I use 80% for my maximum incoming limit, and most residents appear to be happy with the result.

    You sacrifice bandwidth for response/latency.

    In order for WWW to be snappy when using a restriction on other traffic, I usually set my WWW class to "NONE" so that it will ignore any limits.

    What about uploading seeds?

    We're always stuck with a problem with P2P on ADSL connections. We can't upload enough, especially on a shared link, to get enough "credit" or "points" to make our downloads go quickly. Apart from "leecher mods", there isn't really a lot you can do, so if you have complaining customers, you had better be good at explaining this. P2P really isn't very successful on shared "public" WiFi networks. In order to get what I call good downloads, I need to continuously upload at least 200k (not seeds - just outgoing P2P requests/receipts). This isn't usually possible in shared networks. Users can help themselves a little by limiting their seeds to the lowest possible setting (e.g. 1k) or switching them off altogether (if allowed in the client) so as to use what little uplink bandwidth is available for actually downloading files. Not many people appreciate that seeding is not necessary or desirable, if your aim is just to download files. [FYI, killing off all uploads in uTorrent (e.g. set upload=1kbps) while in a session will show an INSTANT increase in download speed.]

    The net effect of this is: P2P users on low upload bandwidth (e.g. ADSL) shared networks who seed files not only screw up their own P2P operation but everyone else's too.

    ADSL isn't so good for people trying to use private trackers who insist on good upload ratios and limit your download speeds drastically if you don't seed files. Public torrents work OK but don't try to seed anything. Yes, I know that it's against the idea of sharing files, but do you need it to work for you or not?


    What about when the available bandwidth is taken by people on your network uploading seeds who don't know or haven't taken the trouble to eliminate or at least limit them?

    Well, there would be some justification for an extra rule to move "seed" uploads out of your P2P (default) class, to allow QOS to work on the downloads as a separate issue. Your normal P2P rule (which should be the default class) would then control only your downloads, and the extra rule controls the uploads. This is in theory. In practice, I haven't found a good way to differentiate seeds from normal P2P operation, so I no longer try to use class E for this. If you find a way that will work, please post it for us all to benefit.

    While on this subject of classes, do make proper use of the available classes to keep applications separated for clarity. For example, many people place WWW and DNS traffic in the same class. But this slows up DNS response. There are ten classes in Tomato from Highest down to E. There doesn't seem to be any noticeable performance hit involved with using them all. When you have split things into different groups it is much easier to see what is going on using the QOS pie charts and "View Details".

    Here's an example of how you might use them - replace with whatever suits your situation:

    Highest---DNS, NTP
    High------Game Control Ports
    Medium---IPTV Control Ports (RTP, RTSP, etc)
    Low-------WWW, HTTPS, Web Proxies
    Lowest----Shoutcast/IPTV/Messenger Video etc. data streams
    A----------Mail POP, SMTP, IMAP
    B----------IRC/Chat/Messenger text
    C----------File Uploads/Downloads (HTTP) (FTP)
    D----------Default (P2P and anything unidentifiable or annoying!)
    E----------P2P Uploads/unwanted UDP/anything really annoying (as suggested above - a "crawl" class...)

    As you can see, most of the QOS settings interact with each other and don't have precisely defined points at which things can be seen to be “pegged”. You should perhaps think of the QOS system as a mechanism that "steers" your overall traffic distribution in the direction that you would like to achieve. There is nothing precise about it, no "quick fix" when you have many connections that you have to keep in their place. It may take you some days of staring at your monitor to properly evaluate the results of even a small tweak in your parameters!

    So does QOS really achieve its aim? The answer is that it is never 100% successful but can do a “best effort” job which minimizes the delays and dropped packets. A point is reached where no further twiddling with parameters will provide an improvement - and the result, for better or worse, is what you’re stuck with. Compulsive twiddlers will eventually go insane trying to achieve the impossible :wall:

    One big piece of advice I have is for those who would like to make maximum use of their bandwidth for downloading P2P, keeping both inbound and outbound links pegged at close to 100%. Doing this will make your QOS slow and ineffective. Web (HTTP) response will not be snappy and users will complain about it. You cannot have it both ways. If you wish to do it, set up QOS as above and then just increase the amount of outgoing and incoming bandwidth available to the P2P class. You can often go up to 80% - Oh, and be absolutely sure to uncheck the "prioritize ACKS" box. As to how well it works - this is a matter of personal opinion. It does slow things down, which is unacceptable to me personally. It may be acceptable to you.

    (original graphs thanks to Jared Valentine)

    Last edited: Oct 3, 2013

    Limiting numbers of TCP and UDP connections

    If your router crashes or becomes unstable due to P2P applications opening large numbers of connections, try limiting the number of connections each user can open. Here is a collection of useful scripts. Put one or more of the following in the "Administration/Scripts/Firewall" box, and check that it functions before adding another rule. You may list the iptables rules by telnetting to the router and issuing the command "iptables -L". If you are running a recent Tomato mod, you can also do this from the "System" command line entry box.

    These are just things for you to try; they can be found in many places on the web. Whether they work for you, you need to test.

    #Limit TCP connections per user
    #(the iprange below is an example - substitute your own LAN address range)
    iptables -I PREROUTING -p tcp --syn -m iprange --src-range 192.168.1.2-192.168.1.254 -m connlimit --connlimit-above 80 -j DROP
    iptables -I INPUT -p tcp --syn -m iprange --src-range 192.168.1.2-192.168.1.254 -m connlimit --connlimit-above 100 -j DROP

    #Limit all *other* connections per user including UDP
    iptables -I PREROUTING -m iprange --src-range 192.168.1.2-192.168.1.254 -p ! tcp -m connlimit --connlimit-above 20 -j DROP
    iptables -I INPUT -m iprange --src-range 192.168.1.2-192.168.1.254 -p ! tcp -m connlimit --connlimit-above 50 -j DROP

    #Limit outgoing SMTP simultaneous connections
    iptables -I PREROUTING -p tcp --dport 25 -m connlimit --connlimit-above 10 -j DROP

    The next script is to prevent a machine with a virus from opening thousands of connections too quickly and taking up our bandwidth.

    #Limit UDP packet opens from all users - UDP to Router
    iptables -I INPUT -p udp -m limit --limit 10/s --limit-burst 20 -j ACCEPT

    #Limit UDP packet opens from all users - UDP out to WAN
    iptables -I PREROUTING -p udp -m limit --limit 10/s --limit-burst 20 -j ACCEPT
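    One caveat worth knowing: a "-m limit ... -j ACCEPT" rule on its own only fast-tracks the first packets each second; the excess is not actually blocked unless some later rule drops it. A hedged sketch of the usual pattern (because -I inserts at the top of the chain, the DROP goes in first so that the rate-limited ACCEPT ends up above it):

```shell
# Hypothetical pattern - adapt the rate and chain to your own setup.
# Packets within the limit are accepted; everything over it hits the DROP.
iptables -I INPUT -p udp -j DROP
iptables -I INPUT -p udp -m limit --limit 10/s --limit-burst 20 -j ACCEPT
```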

    QOS still works and shows all of the affected users in the appropriate class in the graphs. If you test the above scripts with a limit of, say, 5 connections, you will often see that it doesn't appear to be working - you will have many more connections than your limit, maybe 30-100, that you can't explain. Some of these may be old connections that have not yet timed out, and waiting for a while will fix it. Be aware that these may often be Teredo or other connections associated with IPv6 (Windows Vista and 7), which is enabled by default. You can disable it on your PC from the command line:

    netsh interface teredo set state disabled
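    To see who is actually holding those connections, you can count conntrack entries per LAN source address. A sketch using sample data - on the router itself you would read /proc/net/ip_conntrack (or /proc/net/nf_conntrack on newer kernels) instead of the here-document, and adjust the address pattern to your own LAN:

```shell
# Count tracked connections per LAN source address.
# The here-document stands in for real /proc/net/ip_conntrack output.
grep -o 'src=192\.168\.1\.[0-9]*' <<'EOF' | sort | uniq -c | sort -rn
tcp  6 117 TIME_WAIT src=192.168.1.10 dst=93.184.216.34 sport=51512 dport=80
udp  17 25 src=192.168.1.22 dst=8.8.8.8 sport=5353 dport=53
tcp  6 431999 ESTABLISHED src=192.168.1.10 dst=192.0.2.7 sport=51614 dport=443
EOF
```

    The busiest clients appear at the top of the list. Note that each real conntrack line also carries a reply-direction src= field, so filter carefully on a live router.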


    If your router becomes unstable, perhaps freezing or rebooting, apparently randomly, then it may have been asked to open too many connections, filling the connection tracking table and running the router low on memory. Often this can happen because poorly behaved applications (usually P2P clients) can attempt to open thousands of connections, mostly UDP, in a short space of time, just a few seconds. The router often does not record these "connection storms" in the logs, because it runs out of memory and crashes before it has time to do so.

    Obviously, there is a flaw in the firmware, which most definitely should never allow this situation to happen. Until such time as we can correct this situation, we must resort to some means of damage prevention and control. Setting the timeout value of TCP and especially UDP connections is necessary.

    Setting the number of allowed connections high (say 8192) makes the situation worse. In fact this number is almost never required. Most connections shown in the conntrack page will actually be old connections waiting to be timed out. Leaving the limit low, say 2000 to 3000 connections, gives the router more breathing space to act before it crashes.

    The following settings have been found to help limit the connection storm problem without too many side effects.

    None 100
    Established 1200
    Syn Sent 20
    Syn Received 20
    FIN Wait 20
    Time Wait 20
    Close 20
    Close Wait 20
    Last Ack 20
    Listen 120

    Unreplied 10 (the odd user has found 25 necessary for some VOIP applications to work; otherwise reduce it to 10)
    Assured 25 (Sometimes this needs to be increased up to 300 for VOIP use. Choose the smallest number that is reliable for your own VOIP system)

    VOIP is a bit of a mess. If you have problems, there's actually quite a lot of help on the web. Google for it.

    Generic 10 (use 10 for both of these)
    ICMP 10

    ICMP is self-explanatory; "Generic" is the timeout used for all connections that don't have their own timeout setting.
    Last edited: Aug 27, 2013


    I just want to add something to clarify what your maximum bandwidth limits should be set to.

    I see several people advising to take the AVERAGE speed measurement from speedtest and then deduct, say, 15%. Several people who followed this advice have recently mailed me and told me their QOS doesn't work properly.

    The AVERAGE value is NOT what is required. You stand the risk of your QOS failing.

    Let us consider an example.

    Suppose that you have a 1Mbps/10Mbps (up/down) line and can reach full speed for 12 hours each day.

    Suppose that in the evening/night, as demand increases, you can only obtain 500Kbps/5Mbps for the next 12 hours.

    If you take the AVERAGE value (750Kbps/7.5Mbps, i.e. 75%) and enter this less 15%, think about what happens:

    1) In the daytime, everything is fine.
    2) In the evening whenever your throughput tries to exceed 500Kbps, QOS will no longer work.

    This will only be noticed when you try to utilize your full bandwidth, so under some circumstances you may not notice it at all.

    To reiterate - IF you want your QOS to work reliably under ALL circumstances and at any time of the day, you must enter the MINIMUM figure that you obtained from your speedtests, less the usual 15%.

    In the above example, we would enter 425Kbps / 4250Kbps.
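    The arithmetic, as a quick sketch using the example figures:

```shell
# Enter 85% of the MINIMUM measured speeds, never the average.
min_up_kbps=500
min_down_kbps=5000
echo "QOS outbound max: $(( min_up_kbps * 85 / 100 )) kbps"
echo "QOS inbound max:  $(( min_down_kbps * 85 / 100 )) kbps"
```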



    Classifying applications

    It may appear relatively easy to classify an application, but that isn't always the case. FTP, for instance, is listed as using port 21. So do you just stick it in a rule and forget it? No. There is more to it than that!

    Some applications and protocols, FTP being one of them, use more than one port. Port 21 is actually a control port - the actual data transfer, upload or download, is usually done on a separate port 20, but occasionally on a port which is unknown and therefore cannot be easily classified.

    A variation of FTP known as "Passive FTP" negotiates each transfer over the normal control connection, with the server then allocating a dynamically chosen port for the data. You need to learn about this in order that you might try to address those ports. Priority based on a range of ports and the size of the transfer may be necessary. You must research your applications carefully before you act.

    IPTV likewise has a control and setup port, but the data is usually carried on a separate port, often HTTP port 80! The connection tracker in Tomato has "helpers" for some applications such as FTP, GRE / PPTP, H.323, and RTSP, but this doesn't give any priority to the control or data channels - you have to do that yourself.

    In the case of IPTV using TCP over port 80, the downlink will get the priority that you use for HTTP. But you usually won’t be able to predict what port the data links will be on. It may be possible to find some information on Google, but often the best you can do is to prioritise the control port. You might just try to cover most of the commonest data ports used by the popular applications like Messenger services. Often you won't ever know if you have really been successful. But even if you are not, the worst that can happen is that the application falls through the rules into the default class, which it then has to share with your P2P and any other "bulk" traffic. If sometimes your rules work badly and some traffic ends up in a higher class than you intended, that may also not be the end of the world.

    You can control file uploads reasonably well by classifying any very large data movement as a file transfer, and sticking it in an appropriately limited class. If you can’t make this work, take comfort in the fact that, luckily, most file transfers don't last very long, especially if they are done on HTTP port 80 - which usually has a high priority. And here, you can move such a transfer out of your WWW class by making a rule to treat any large outgoing data transfer on port 80 - say 512k plus - as a file upload, and again stick it in the appropriate class. (The default Tomato rules, which I hate, put it into "bulk" along with P2P.) Note that this transfer-size setting is for outgoing data only - it's not easy to apply to incoming data because the port is unknown. But since it is our outgoing bandwidth that is particularly scarce, this isn't so bad.

    MSN and other messenger services can be very complex.

    Sometimes they are able to tunnel through firewalls using port 80. Some browser-based file downloads and video applications are done in this way. Audio and video applications often have ports used for opening the connection and for control purposes, but the data itself is carried on another port or even a set of ports in the range 1024-65535. Again, it isn't easy to classify them properly, and very hard to give them the necessary priorities. There are many services broadly referred to as Messenger - white board, file transfer, chat, video, VOIP. In an apartment complex it isn't necessary or desirable to cover them all. VOIP and personal video (webcams), for instance, we felt should not be given particular priority over other users, but if there is spare bandwidth then we try to accommodate it. Because we have a lot of foreign residents who like to watch news and TV from home, we give IPTV some priority when bandwidth is available. Actually, in practice, it seems to be fine, and most of the time there is no excessive IPTV traffic.

    Many video streaming protocols exist and have control and setup ports. Hence you may want a "Control" class near the top of the list to make sure they set up speedily and any control signals are given priority so that the application does not "stutter". Again, you may give priority to whatever you feel is important!

    Multicast TV has to be supplied as a service by your ISP – it is his decision whether to allow it on his network or not. It is not normally available. The “Enable Multicast” button in “Firewall” settings is therefore best left unchecked, as it rarely does anything useful but may flood your LAN with packets from the web. [For your interest, I hear that the IGMP in DD-WRT is also unusable, as it can transfer multicast onto your LAN and completely swamp it!] It seems that the multicast protocol support in Tomato is broken anyway. So for the moment, I believe it is useless trying to use multicast with Tomato.

    UDP traffic is harder to control as the protocol is a connectionless one and no receipts are sent for incoming data. You can do a limited amount of shaping only. Priority can be given to UDP by reducing TCP and other traffic with QOS rules to give it increased bandwidth. Incoming limits on your other classes may be necessary to ensure that this happens.

    All my rules are port-based unless absolutely necessary, with some size variations set. IPP2P rules are very broad and not much use. L7 filters are likewise, but even slower and more processor intensive. Most of them don't work very well, and some don't work at all. Address and Protocol/Port are the fastest and most efficient ways to match. If at all possible, use Address and Protocol/Port before resorting to IPP2P or L7. And don't forget we have a new L7 filter that works well for YouTube videos, thanks to Porter on Linksysinfo.org.

    For stability with high usage, it's best to completely avoid them. For example, four such filters used on an ASUS WL500gP v2 slowed the router almost to a crawl. Too many L7 or IPP2P rules can also cause your router to crash or restart. If you are experiencing frequent crashes and restarts under heavy load, these may be the cause. On an ASUS RT-N16 router running at twice the speed, use of a few L7 filters is much less likely to cause problems.

    I have a mix of Linux, Windows, Apple MAC users, with all sorts of odd applications that need to be covered. When I see something new going on, I try to find out what it is and then consider whether it is covered by an existing rule, whether that is sufficient, and if not – does it require to be addressed, or just left to fall past the filters into the “default” class.

    Get yourself a few lists of common port numbers, and learn what each port is used for. Think carefully before assigning a port to a class - or does it need a new class of its own? Do you need to cover it at all?

    QOS rules are processed from the top down. The first rule that matches will apply. You must be sure that all your rules co-exist, that the order is correct, or things will be siphoned off into the wrong class. It is not always obvious when this has happened. Think carefully and re-examine ALL your rules when you make a new one, or even alter an existing rule. Make sure that your new rule is in the correct place in the list.

    Especially, don’t try to make a "rule" for P2P - because you will fail. Note that the L7 filters are almost useless for P2P, and not so good for a lot of other things too (which is why the QOS on most SOHO routers doesn't actually work). Use the following strategy to deal with P2P and you won't need any special filters at all.

    Set your default class to something suitable for P2P - usually this will be the lowest or penultimate class. Don't be afraid to use classes A to E. Then just make rules for all applications that you want to give priority. Anything NOT in those rules, including P2P, will fall through them - into the default class.

    I used to set all small packets to get priority, and that included ACKs - though many people would recommend that this box be unchecked. My thought was that unchecking it would delay traffic unnecessarily. However, there is a problem here which is not mentioned anywhere in Tomato FAQs or Wikis. The "small packets" check boxes use the "Highest" class to prioritize them. So, if you check the ACK box, any ACK packets for e.g. P2P will move out of the P2P class into the "Highest" class. The data stream, however, will still be identified and correctly classed as P2P, and will still respond to limits. The problem may not be noticed if you have set up QOS for best latency by limiting outgoing P2P severely. But for most people, checking the box will slow down their QOS rules by giving an unfair advantage to P2P - effectively giving P2P downloads a high priority. This is because most outgoing traffic for P2P is actually ACKs.

    The moral of this is - if you run P2P on your network, and wish to limit it, uncheck the ACK box. If you want the best P2P speeds, check it.

    Don't use DHT or uTP (methods used by uTorrent and others where UDP packets are used for transfer of data) as they take up a ridiculous amount of bandwidth for no significant gain. Get your users to turn them off if you can. If that can't be done, make sure that you use firewall scripts to limit the number of UDP connections that can be opened per user. BitTorrent DNA is similarly useless and spends a lot of its time using your computer and link for someone else's benefit - Google it and see how it is screwing up your network before you kill it. uTP is newer, but so far it has caused a lot of UDP traffic for no observed benefit at this location. Keep your eye on it to see how it turns out.
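    As a sketch of the kind of per-user UDP limit meant here - the figure of 100 and the LAN bridge name br0 are illustrative assumptions, and your build must include the iptables connlimit module - something like this could go in the Firewall script box:

    ```shell
    # Drop UDP flows from any single LAN host holding more than 100
    # concurrent conntrack entries (100 is an example figure - tune it)
    iptables -I FORWARD -i br0 -p udp -m connlimit --connlimit-above 100 -j DROP
    ```

    Each new UDP packet from a host already above the limit is simply dropped, which stops a DHT/uTP connection storm from one client starving everyone else.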
  17. Toastman

    Toastman Super Moderator Staff Member Member


    DO get everyone to use UPnP or NAT-PMP. If P2P is allowed to run without any incoming ports being forwarded, it will take up a LOT of your bandwidth but users only get a very, very low download speed. Users aren't generally aware of this, and they will not know what is responsible, so some guidelines at the time the user signs up for internet (a user guide) are the best way to educate them.

    UPnP is often regarded as a security risk and many people are so paranoid they will recommend it is not used. That's your choice, but with no UPnP, most of your users' applications won't work. Is that acceptable? I think not. The simple fact is - if the residents can't use their PC's to do what they want, they will often move out. If it's any comfort, as far as I am aware none of our apartment blocks have ever had any security problems from using UPnP/NAT-PMP.

    Try to set up QOS so that bandwidth is not unduly wasted, but latency is still reasonably low. Do the best you can. To recap - set up a ping session to your ISP gateway on your PC. With ACK priority OFF, start with an incoming P2P limit of 50% and an outbound P2P limit of 20%, and increase that slowly while watching ping times. You should be able to get downloads to around 80% of max bandwidth with response times still around 50 ms. Experiment with outgoing and incoming limits to quickly gain experience. The higher your available bandwidth, the less critical your settings need to be, so for those lucky enough to have 16Mbps or higher, your task will be a lot easier than someone with 1Mbps.

    A lot of software does not always close UPnP forwarded ports on exit; P2P clients and sometimes Messenger are particularly bad for this. Tomato originally supported only 25 UPnP port forwards. If the table filled up, Tomato did not automatically delete entries. You had to cancel them manually, or use a script to restart the service periodically, which would reset all of the rules.

    A better option has been available in Tomato since February 2009. A rather nifty UPnP daemon called "miniUPnPd" has been integrated by modders. This daemon has no limit on the number of port forwards (excepting memory limitations) and also automatically releases unused rules after a configurable period. It also supports NAT-PMP (Apple's equivalent of UPnP). This has now also been adopted by Jon Zarate in the official Tomato release.

    Online Games Enthusiasts

    A word to owners of apartment blocks. From time to time you will get problems with games players. No matter what you do, games will never work as well as if they had a dedicated line, more often than not they don't work very well at all. There are thousands of games and you can spend all of your time fiddling with the router QOS and forwarding ports every time someone changes his preferences. It just isn't practicable. It is very important to make games players fully aware of this fact and that they are sharing the line with many others. Human nature being what it is, though, they usually never stop bitching about everybody else on the network, so you may have to make it quite clear before anyone signs up. Fanatical downloaders also give a lot of trouble, it is common for them to want to take all of the bandwidth for themselves and they will accuse everyone else in the block of downloading so that you, the owner, will cut them off. Sorry to be blunt about it - games players are usually very immature and obsessive. Often if they can't play games they will lock themselves in their rooms and cry - we've seen dozens of those.

    Be firm and let the QOS deal with it, don't overreact. Please also take note - in our experience, disgruntled P2P and Games users will often roam the floors late at night attempting to find your AP's and routers and fiddle with them, often cutting cables and removing power to them, hoping they can get more speed or sometimes revenge against the management. Don't underestimate the damage they can do - this isn't a joke, it is just what we have experienced with several thousand users in our apartment blocks. You must always make sure your AP's and switches have nothing exposed that they can screw up, no exposed cable, and the main routers should be in locked steel cabinets with all cabling inside a conduit, and preferably in the reception area where the staff can see it.
  18. Toastman

    Toastman Super Moderator Staff Member Member

    Unclassified connections

    I get a lot of people asking about unclassified connections which they can't seem to get rid of.

    Any connection that terminates at the router is not classified by QOS. For example, when you look at your router's web GUI, you open connections to the router - usually about 10-20 or so will be seen.

    P2P is notorious for generating masses of "unclassified" connections. You download a file and at the same time notify the "cloud" that you have part of it for seeding. Machines will therefore know how to connect to your P2P client. But what happens when you disconnect? An external machine that tries to open a connection to, say, a P2P client on one of your machines that is no longer listening, will NOT be classified, as the connection is terminated at the router! Remote machines may continue to connect to your clients long after they have been switched off. You can't stop it - just don't worry about unclassified connections; that is the very reason WHY they are unclassified.

    NOW - a lot of people keep asking me why their QOS doesn't work between LAN machines. Please note that it is supposed to work on traffic between the router and the ISP - i.e. on the router's WAN port. NOT on the LAN.
    Last edited: Aug 27, 2013
  20. Toastman

    Toastman Super Moderator Staff Member Member

    If you have need for more bandwidth but have no available higher speed options, you may have no option but to get another ADSL line and assign half of your users to a second gateway.

    Split traffic between gateways with these scripts - enter in Advanced - DHCP custom configuration box:

    dhcp-mac=red,00:0D:87:2D:1C:7A #(one example MAC SHOWN)
    #This entry must be duplicated for each MAC address to use the second gateway


    #This entry sends anything in this address range to the second gateway


    #wildcard used on MAC header can be used to split if the above IP address range does not work

    dhcp-option=net:red, 3, #Assigns "red" to the second gateway
    dhcp-option=net:red, 6, #Assigns "red's" DNS server to the second gateway

    The second gateway is merely an Access Point set up as normal, but with an attached ADSL PPPoE modem which provides the second connection to the internet. Switch off the DHCP/DNS on the second gateway (the script assigns gateway details along with the IP by DHCP). Once it is verified that your split is working, you can add access restrictions (if needed) and QOS to this router also.
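    For illustration only, here is a filled-in version of the dnsmasq entries above, under an assumed 192.168.1.0/24 LAN with the main gateway at 192.168.1.1 and the second gateway at 192.168.1.2 - every address and the tagged-range line are my own examples of how the missing pieces would look, so substitute your own values:

    ```
    # tag this client's MAC as "red" (duplicate this line for each MAC)
    dhcp-mac=red,00:0D:87:2D:1C:7A

    # or tag a whole address range as "red" instead
    dhcp-range=net:red,192.168.1.200,192.168.1.250,24h

    # send "red" clients to the second gateway (option 3) and its DNS (option 6)
    dhcp-option=net:red,3,192.168.1.2
    dhcp-option=net:red,6,192.168.1.2
    ```
    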

    One warning - I see that some Android devices cannot be reassigned to a second gateway, as they ignore the commands sent to them by the router. Hopefully Google will fix this later.
  21. Toastman

    Toastman Super Moderator Staff Member Member


    Connecting two devices by cable remains the best method in terms of speed and reliability.
    The router connected to the internet is known as the "gateway".
    The secondary router will now be called just an "AP" (access point). Set it up as follows:
    • Change mode to AP only
    • Set "gateway" to the IP of the gateway
    • Make a DNS entry for the IP of the gateway
    • Disable DHCP.
    • Use the same security settings and SSID as the main gateway.
    • Leave the router in "Gateway" mode.
    • Decide what wireless channel to use for the AP - usually a different channel to the gateway.
    • Connect a cable between LAN port on the AP and a LAN port on the gateway router.
    If you want ultimate stability on your gateway machine, you may choose to turn off the wireless and just use the AP for wireless access. Without the complications of wireless, these routers are almost 100% stable and will usually stay up for several months or more.

    I would also encourage anyone NOT to use WDS or any other form of wirelessly connected access point, as they are inherently slow and unstable. Use CABLE to connect devices wherever possible - and save yourself a lot of tears.
  23. Toastman

    Toastman Super Moderator Staff Member Member

    Access Control

    Now for a topic that's especially important in an apartment block. How do we stop unauthorized people from gaining access to the network? This is especially important when we want to make a charge for use of the internet.

    Firstly, the method used for Wireless encryption does not matter at all as far as access control is concerned. Some people may think I am mad for making that statement. But the reason is very simple. Human nature being what it is, within a week of changing the access code residents have given it to all of their mates. In the blocks I have upgraded, it was commonplace that there were more freeloaders than paid-up users. As a method of controlling access in residential buildings, wireless encryption is completely useless.

    The true function of the encryption methods is, of course, to stop people seeing your traffic.

    I do use WEP - more as a deterrent and to hide plain text from view, than as a serious attempt at security. Why WEP? Because it's simple and reliable, whereas some of the other options seem to have various problems which have rather put me off using them, and there is also a speed issue involved. However, WPA2/AES would be the preferred option for many people. In Bangkok, it's hardly worth bothering to hack a private network since the city is covered with free ones. So far I have never noticed that anyone outside the buildings has ever bothered to hack into the networks, and I have a large number of sites active 24/7. There are, however, many people who are so totally paranoid about security that they have crippled their setups so badly they are unusable. To each their own :biggrin:

    I use DHCP to issue fixed IP addresses for residents. So each resident has to give the reception desk their wireless MAC - and some cash. I use the "Hostname" field to enter a unique code for each user - actually it's their room number - so that I can easily see who is doing what. Without this it would be very difficult indeed to set up QOS, deal with customer complaints, and maintain the network. Tomato originally supported 100 users in Static DHCP, so I enabled more (140) for my own use. Since the RT-N16 came out, this is now 250.

    Average takeup of wifi is about 30% of the rooms, incidentally.

    The main router's DHCP is set to issue only one address - which is actually already assigned by Static DHCP to the system administrator's PC. Anyone requesting an IP who is not in the "Static DHCP" list can therefore only be issued with .100. BUT - as this address is already in use by the administrator - the freeloader gets the message "address not available" and is thus blocked. [You could instead issue a different IP and then restrict access for that IP with an "Access Restriction" rule - "Deny Access to this list".]

    Next, I use an Access Restrictions rule to prevent MAC addresses not in the list from accessing the internet. You should be able to figure it out from this:

    Access Restriction

    Rule description: "Allow access to this list" / All Day, Every Day / Normal Access Restriction / Applies to: All Except / Blocked Resources: Block all internet access/

    Tomato originally only supported 50 entries in Access Restriction. Victek's 1.23 mod enabled 100, and I've increased this to 250 - but some NVRAM space issues do need to be addressed, as this is the limiting factor. You may be able to enter hundreds if, for example, your QOS rules and other stuff are small. BTW - there is a trick to allow the use of more entries than are allowed by the GUI, which may help if you want to do this with any older Tomato version. Use iptables -L to see the chain used by the rule - it will be something like "rdev01". Then you can add more addresses to the rule with e.g.:

    iptables -A rdev01 -m mac --mac-source 00:00:00:00:00:01 -j RETURN

    Add them to one of the script boxes if necessary. When there is no space left Tomato will become unstable and do strange things, so be careful with this. If you have NVRAM space problems you can just use the restriction MAC addresses and forget about assigning static DHCP to clients. You do lose some of the convenience though. This way you may be able to add several hundred MAC address restrictions.
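    If you maintain the extra MACs as a plain list, a small loop saves typing them one by one - this is only a sketch, and the chain name rdev01 and the MACs are placeholders for whatever iptables -L showed on your own router:

    ```shell
    # Append extra allowed MAC addresses to the access-restriction chain.
    # Replace rdev01 with the chain name from "iptables -L" and the MACs
    # with your real ones.
    for MAC in 00:00:00:00:00:01 00:00:00:00:00:02 00:00:00:00:00:03; do
        iptables -A rdev01 -m mac --mac-source $MAC -j RETURN
    done
    ```
    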

    The Wireless restrictions aren't of any use in my situation, because they apply to each individual AP - and not to the whole network including LAN users. However, up to 500 are allowed now if space permits.

    Unauthorized users could still gain access to the network by using a fixed IP and changing their MAC address with utility software, but so far nobody in any of my residences has ever done so, as far as I am aware. But if someone is determined enough, there is no way to stop them. If someone does this, I would block them using netcut until they got fed up. [Toastman mod also allows Static ARP binding, google for info on how this can help prevent someone assigning himself an IP address].

    At the beginning of each month, residents' payment records are checked and the router updated. Two entries have to be maintained: the Static DHCP table and the Access Restrictions table. MAC addresses from Static DHCP in one browser window can easily be cut and pasted into the Restrictions list in another window. The lists can be remotely maintained quite easily over the internet. [See note below].

    Upgrading Firmware via WWW

    I was forced to upgrade 24 AP's and 2 routers over the web, some 200 km from here. I made the discovery that flashing over the web was actually quite reliable. The secret is to WAIT and not panic if the router does not accept a flash in what you might think is a reasonable amount of time. It is quite normal for a remote flash to take up to 10-15 minutes and sometimes longer. If the flash has not completed after 15 minutes, I would wait an hour or two before giving up, because once the connection is closed, then you have a big problem!

    TIP: You can check the remote router's GUI in another browser window to see if it still responds - if it does, then the router is obviously not accepting the flash and it is safe to disconnect. If it doesn't, either a) it's accepting the flash b) it has rebooted and DDNS not yet updated, or c) it's dead Jim ....

    I have now flashed remote sites hundreds of times with no failures whatsoever.
  25. Toastman

    Toastman Super Moderator Staff Member Member

    DHT and BT DNA

    With P2P, e.g. BT & uTorrent, I never notice any improvement in downloads, nor any extra sources, by using DHT. It seems to generate a lot of completely useless traffic and is best turned off - if you can get your users to do so. [I just spent a week examining my uTorrent downloads, and never saw a single download using DHT]. Kademlia, on the other hand, is a pretty good system, but eMule and the ed2k system are not used much here these days, and are a bit redundant.

    Bit Torrent DNA is a real pest - it can hog your bandwidth while resulting in no benefits for you at all. You should always uninstall this. It is a sneaky attempt to use YOUR bandwidth for commercial gain. If you google for DNA you will see what I mean.

    Since I can't tell hundreds of users what they should or should not turn off, I restrict the numbers of their connections for them, particularly UDP, using the scripts in the previous post :biggrin: The average number of connections at any one time is now around 1000, but on occasions it does reach 2000 or so. There are usually between 5 and 11 P2P users online at any given time in the blocks I administer, and about 3 or 4 of these are what I would call "power users".

    Whatever you do, there will be occasions when it seems that the router will reboot for some as yet unknown reason - because of this I switched to using the ASUS WL500gP v2, which has 32MB of memory, as the routers. They are more stable, but a little sluggish due to the "budget" chipset. The probable cause of the rebooting with 16MB routers is connection storms from a client, usually from P2P, and more often from a virus on an infected machine. A better choice than the WL500gP v2 would be the WRT54G-TM.

    Nowadays, there are many faster routers such as the ASUS RT-N16, E3000, RT-N66 etc.

    A later post will discuss strategies for dealing with UDP "connection storms" etc.
  27. Toastman

    Toastman Super Moderator Staff Member Member


    It doesn't do anything, basically. It does not work at all on connections passing "through" the router. It *may* do something on connections from the router itself, browser, ftp, etc... but even then, does it depend on BOTH ends also running similar protocols?

    My tests show no improvement.

    **** SNAKE OIL ****
  28. Toastman

    Toastman Super Moderator Staff Member Member

    Accessing bridged modem via the router:

    Give your modem an IP in a different subnet to your router. Normally it's easy to use which is probably the one in most common usage, as the router's default is

    Enter the following scripts into these sections of ADMIN/SCRIPTS page:

    init: or wan up: ip addr add dev $(nvram get wan_ifname) brd +
    (you may need a "sleep 5" line first to give a delay)

    firewall: iptables -I POSTROUTING -t nat -o $(nvram get wan_ifname) -d -j MASQUERADE

    The first allocates an IP in a different subnet to the appropriate vlan interface (WAN port) of your router. Normally this port doesn't have an IP, so we have to give it one, so that we can use it to add a route. There's nothing special about the .13 - choose what you want. Since we wish to access your modem, the subnet is chosen to be the same as that of the modem.

    The second sets a route for that subnet via that vlan interface (WAN port) to the modem.

    The scripts will discover the correct vlan for your router from NVRAM.

    You can check your routing list in Advanced-Routing menu to see if the route is there.

    Now you should be able to access your modem just by typing its IP into the browser. Please note that not all modems seem to respond to this, though, even if they work just fine when connected directly to a PC.

    Exactly where you put these scripts is up to you. The first is usually put in init, but that may cause you to lose access to the modem when a new IP is issued on lease renewal. Placing it in the WAN UP box should cure that. Or, you can stick them all in the firewall. It's up to you!
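    To make the shape of those scripts concrete, here is a filled-in sketch under one assumed arrangement: the modem sits at 192.168.0.1, the LAN uses 192.168.1.x, and the WAN port gets the .13 alias mentioned above - all of these addresses are examples, not prescriptions:

    ```shell
    # WAN Up script: give the WAN interface an alias IP in the modem's subnet
    sleep 5    # give the interface time to come up
    ip addr add 192.168.0.13/24 dev $(nvram get wan_ifname) brd +

    # Firewall script: masquerade traffic heading for the modem
    iptables -I POSTROUTING -t nat -o $(nvram get wan_ifname) -d 192.168.0.1 -j MASQUERADE
    ```
    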


    For those reading through this thread - as of late 2011 Toastman tomato has modem routing built into the GUI so you don't need to use these scripts. You MUST however give the modem an IP that is different to the ones used by your router and LAN clients.
    RichtigFalsch likes this.
  29. Toastman

    Toastman Super Moderator Staff Member Member

    Confusion about the term "QOS"

    Unmanaged switches, and many routers, just forward traffic without looking at it and without doing anything special to it. Some switches and routers have several priority queues for network traffic (e.g. Tomato has 10 - which are Highest/High/Medium/Low/Lowest/A/B/C/D/E). These provide a basic kind of QoS by giving priority treatment to certain types of network traffic.

    However, anyone searching the web for "QOS" will find that in engineering circles, QOS means something quite different to our simple little router's so-called "QOS".

    Okay, so all these QOS systems do use the idea of priority queues to give some types of traffic an advantage over another. How do they classify which network packets go to which queue?

    The methods used in small SOHO routers can be quite effective but rather limited.

    Simple router "QoS" can classify by, for example, the UDP port number: SIP traffic for VoIP is often on port 5060, so we usually have a QoS rule that reads the destination port in each packet and sends all packets destined for port 5060 to a high priority queue. Traffic in this queue will be sent before that in a lower priority queue.

    Another classification method could use the physical LAN port on the switch, i.e. all traffic going down the wire plugged into socket X gets priority.

    So, what methods are being referred to in the engineering papers and articles we often find on the internet?

    There are methods which tag each packet with a code that can be read by hardware along the traffic route, to tell that hardware how to send the traffic (assuming the hardware is configured to obey the codes). The idea being that all routers across the internet would recognize these tags and give priority to the marked traffic as needed.

    Two common methods, ToS and the newer DSCP, refer to packet marking schemes that use the packet header.

    ToS uses only 3 bits of the packet header and with 3 binary bits this gives you 7 different possible priority codes other than 0. Namely: 001, 010, 011, 100, 101, 110, 111. DSCP uses 6 bits of the header giving 63 possible codes. These two marking schemes overlap. Packets marked using DSCP can also be understood by hardware that only reads the 3 ToS bits. Using these methods to mark packets traversing the internet should, in theory, give priority to latency-sensitive applications such as VOIP (as long as every router on the internet actually reads and acts upon the tags).
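    The overlap between the two schemes is easiest to see in the bit layout. A small sketch (the function names are mine, invented for illustration) of how a DSCP code point lands in the same header byte that old ToS-only hardware reads:

    ```python
    # The 8-bit (former) ToS field: the top 3 bits are IP precedence,
    # the top 6 bits are the DSCP code point - so they overlap.

    def tos_byte_from_precedence(prec):
        """Build the header byte from a 3-bit precedence value (0-7)."""
        assert 0 <= prec <= 7
        return prec << 5            # precedence occupies bits 7-5

    def dscp_from_tos_byte(tos):
        return tos >> 2             # DSCP is the top 6 bits

    def precedence_from_tos_byte(tos):
        return tos >> 5             # ToS-only hardware sees the top 3 bits

    # Example: DSCP "EF" (Expedited Forwarding, used for VoIP) is code point 46.
    ef_byte = 46 << 2               # 184, the value written into the header
    print(precedence_from_tos_byte(ef_byte))   # -> 5: old gear sees precedence 5
    ```

    This is why packets marked with DSCP can still be understood, coarsely, by hardware that only reads the three ToS bits.
    
    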

    You can, for example, purchase little adapters which mark packets they send, such as the popular Linksys PAP2. These plug between an analog phone and an ethernet jack, allowing use of the phone for VOIP. Traffic marked by these adapters will therefore give priority to your VOIP traffic as it traverses the internet.

    VoIP calls via SIP in fact consist of SIP traffic, which initially sets up the call, and RTP traffic, which actually carries the voice. Some devices can mark these two types of packets differently - so you could prioritise them differently if you had the hardware to do so.

    But now for the big problem. You see, routers on the internet mostly ignore these marks, so it doesn't actually work. Even if it did, everyone and his dog would immediately just classify all of their own traffic to give it priority, and since EVERYONE has "priority", NOBODY actually has it. It is quite well known that even Microsoft did this - Windows 2000 reportedly always tagged its traffic with IP precedence 5, which alone would have made the global traffic classing completely useless. So let's stop even thinking about it - OK?

    Next, for all the pundits who keep muttering on about how router QOS doesn't accomplish anything for incoming traffic except drop packets when the link is saturated. Yes, of course it does! That's exactly what it's supposed to do; that is how it works (for TCP anyway). Dropping packets forces the distant server to back off and retransmit, thus slowing down the link. That is exactly what happens anyway with ToS and DSCP marked traffic! What difference do they think marking can make? If the downlink to your router is saturated, packets can't be delivered and will be dropped, marking or not - causing the distant server to back off. It's exactly the same. So please think it through before complaining.

    Our simple router QOS as used in the vast majority of SOHO routers does not mark traffic in this way. We have to classify each protocol/application by ports, L7 filters, etc. which can be a challenge. Packets are then marked, sure. But this marking is only used internally by the router.

    This thread tries to show how this may be achieved.

    I hope this clears up some of the confusion.
  32. Toastman

    Toastman Super Moderator Staff Member Member

    QOS in the presence of P2P again...

    Hi chadrew !

    Some things are indeed not quite what they seem. The big problem is that there is not much genuine information about QOS operation, and most of what you read is actually just somebody's assumption. Even the wikis have misleading data. Because of this, it's really necessary to test things out yourself and see if the claims hold water. The reason I had my arm twisted to begin this thread was to help make people aware of why they were not being successful, and to offer some suggestions and explanations. I searched this forum and others and found so little factual information that I thought it would be a great idea. I'm still learning too, and whenever I find something I have written is incorrect or badly explained, rather than leave it on the forum to spread even more misinformation, I go back periodically and update it.


    Prioritising ACKs - which, incidentally, places them in the Highest class - is often done to improve the speed of normal connections.

    If P2P is used on your network, however, MOST of the outgoing traffic consists of ACKS, and the rest is usually tracker traffic. Prioritizing ACKS for P2P can therefore effectively give a high priority to your P2P by moving most of its traffic that QOS is able to control out of its own class into Highest. That is often exactly the opposite of what we intended to do. It is better on reflection to uncheck this box. Unchecking this box places ACKS back in their appropriate classes.


    Regarding overstatement of rates, clearly 250% (total) allocation of initial bandwidth can't be done. Tomato therefore must adjust the figures dynamically. And yes, it seems to take a significant time to do so. This is WHY we sometimes overstate, so that the initial rate allocation is sure to be big enough for that class.

    For example, if we have only two classes in use, WWW and Mail - and state each one as 100%. If a web connection begins, it is allocated 100% and that's as fast as it gets. Now if a Mail connection also opens, Tomato recalculates, and each would be allocated 50%.

    As for WHY we do it:

    If we started with say 1% for everything, which is perfectly valid, then the initial allocation is usually NOT big enough and has to be increased dynamically, taking some time to stabilise. But if we put 50% for every class - that's a total of several hundred percent. BUT if we follow conventional "wisdom" and enter figures that add up to 100%, then our WWW "rate" may be only 20% for example. And that may not be big enough, but unfortunately you've set a hard limit! Clearly, this is not satisfactory.

    If you experiment you will find that it is often necessary to overstate your "rate" to get the fastest response times / best latency. For example, for WWW, allocating 1% and 100% will take a while to pick up, whereas a 50% / 100% rate/limit apparently results in a much faster response. To illustrate, it is the difference between opening a typical web page in 4 seconds instead of 9. In fact, many of us use 100/100 for our DNS and WWW classes - there's 200% right away!

    It may be better to imagine that this "rate" figure is the allocated minimum bandwidth for a particular class IF that much bandwidth is available, if not, all classes will be scaled down to fit.
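    A toy model of that behaviour - my own simplification for illustration, not Tomato's actual scheduler arithmetic - in which each active class asks for its stated rate and everything is scaled down proportionally when the total exceeds 100%:

    ```python
    # Toy model: stated "rate" percentages for the classes that are
    # currently active, scaled down proportionally if they total > 100%.

    def allocate(stated_rates):
        """stated_rates: dict of {class_name: rate_percent} for active classes."""
        total = sum(stated_rates.values())
        scale = min(1.0, 100.0 / total)
        return {name: rate * scale for name, rate in stated_rates.items()}

    # Only a WWW connection open, stated at 100%: it gets the whole link.
    print(allocate({"WWW": 100}))               # {'WWW': 100.0}

    # A Mail connection (also stated 100%) opens: each is scaled to 50%.
    print(allocate({"WWW": 100, "Mail": 100}))  # {'WWW': 50.0, 'Mail': 50.0}
    ```

    With only WWW active it gets the full 100%; the moment Mail opens, both drop to 50%. That is why overstating the rate costs nothing once the link is busy, but lets a lone class start fast.
    
    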

    Your last comment on the P2P: this is the area which is least understood and the cause of most of the complaints about QOS not working, so it is very important for me to get this across correctly. I will *try* to explain more clearly, but you need to re-read my earlier posts again in conjunction with this.

    The bottleneck that causes poor performance is generally the download link from the ISP to our router. If too much data is sent to us, it cannot be delivered to the router quickly enough, and a queue will build up at the ISP. Many packets will time out before being sent to us, and are lost. The remote server receives no acknowledgement for them, so it "backs off" and resends the data. Again some packets will be lost, so the server backs off again (exponentially), and sends the data yet again. This time, let's say the delay is sufficient to slow down the data stream, and the queue is reduced just enough so that our PC receives the packets, and sends acknowledgements. Now the server proceeds at the new data rate, and the bottleneck is cleared. That is a normal TCP mechanism and there is plenty of information if you google for it.

    The problem is, we are not just dealing with a simple case of ONE server sending data to one application. In my case, with residential complexes, I always have several hundred to several thousand open connections. And of course, the link NEVER stabilises, because new connections are opening and old ones closing all the time. Over a period of time, the "average" effect of our QOS system is what matters, but it is very difficult to step back and see the "big picture".

    You can see that something else is needed to prevent the buildup of incoming data at the ISP. Generally, we want to allow fast WWW browsing, so we give that priority and unlimited bandwidth. It is the P2P applications that are the worst, so let's concentrate on that.

    Let's go back for a moment to the analogy in the introduction:

    Suppose there are a thousand people out there who will send you letters or parcels in the mail if you give them your address and request it. Until you request it, they don't know you and will not send you anything. Send them your address and a request for 10 letters and 10 parcels and they will send you 10 letters and 10 parcels. Ask for that number to be reduced or increased, or ask for only letters and no parcels, and they will do so. If you get too much mail, you stop sending the requests or acknowledgements until it has slowed down to a manageable level. Unsolicited mail can be dealt with by ignoring it or delaying receipt and the sender will send less and give up after a while.

    The amount of mail you receive is usually directly proportional to the requests you send. If you send one request and get 10 letters, that is a 1:10 ratio. You've controlled the large amount of letters you receive with only the one letter which you sent. Sending 1,000 requests at a 1:10 ratio would result in 10,000 letters received - more than your postman can deliver. So based on your experience, you can figure out the ratio of letters you are likely to receive from a particular request, and then LIMIT the number of your requests so that your postman can carry the incoming mail. But if you don't limit what you ask for, then the situation quickly gets out of control.

    OK, so we need to understand how to prevent too much incoming P2P from saturating our link. Now the above analogy introduces the concept of an approximate "ratio" between what we send, and what we typically receive for a given application. We send out a *small* amount of data in the form of download requests and acknowledgements, but we actually get back between 10 and 50 times what we send - this ratio varies with protocol and application, amongst other things.

    With P2P applications we get back about 20 to 25 times what we send (my estimated figure, but this varies all the time, so you need to find your own). So if we send out 100kbps of P2P data (mostly ACKs), we will get well over 2Mbps of data back! That strangles our connection immediately and leaves almost no chance for other applications to work.
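    The arithmetic above is worth making explicit. A minimal sketch (the 1 Mbps link and the 20:1 ratio are illustrative assumptions, not measurements - find your own figures as described above):

```shell
#!/bin/sh
# Sketch: derive a safe outgoing P2P cap from an observed download:upload ratio.
# DOWN_KBPS and RATIO are illustrative assumptions - measure your own.
DOWN_KBPS=1000   # download bandwidth of the link, in kbps (1 Mbps)
RATIO=20         # typical P2P ratio: kb received per kb of requests/ACKs sent
MAX_OUT_KBPS=$((DOWN_KBPS / RATIO))
echo "Keep outgoing P2P below ${MAX_OUT_KBPS} kbps"
```

    On a 1 Mbps downlink with a 20:1 ratio this gives a 50 kbps ceiling, which is where the "LESS THAN 50kbps" figure below comes from.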

    1) Restricting outgoing P2P traffic

    So if we have a 1Mbps download bandwidth, we must send out LESS THAN 50kbps - or the returning P2P will begin to saturate the link. That is why I suggest a 1% rate and no more than 10-20% limit on the P2P class uplink. This will immediately place a check on the amount of P2P files (data) sent back to our client PC's. This is the point at which P2P will not usually saturate your link badly, but will do so from time to time. This will give best latency. You can increase this to allow more downloads, but the latency will suffer.
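    Tomato builds these classes for you from the GUI figures, but underneath, the uplink shaping is plain HTB. A hand-rolled sketch of what a "1% rate, 10% limit" P2P class works out to on a 1 Mbps uplink (the interface name, class ids and link speed here are my assumptions for illustration, not Tomato's actual generated script):

```shell
#!/bin/sh
# Sketch: print the tc/HTB command equivalent to "1% rate, 10% limit"
# for the P2P class. UPLINK_KBPS, vlan1 and classid 1:50 are assumptions.
UPLINK_KBPS=1000
RATE_KBPS=$((UPLINK_KBPS * 1 / 100))    # guaranteed share: 1%
CEIL_KBPS=$((UPLINK_KBPS * 10 / 100))   # hard ceiling:    10%
echo "tc class add dev vlan1 parent 1:1 classid 1:50 htb rate ${RATE_KBPS}kbit ceil ${CEIL_KBPS}kbit"
```

    The "rate" is what the class is always guaranteed; the "ceil" is what it may borrow up to when the link is otherwise idle.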

    2) Restricting Incoming P2P Traffic

    NOTE this is quite different from sending out 500k of data and then dealing with the resulting incoming 10Mbps+ of data purely by trying to place an incoming limit on it, which would not succeed immediately. There would be a delay while packets are dropped, the server resends them after a backoff delay, packets are dropped again, another backoff follows before sending resumes, and so on - before the connection becomes stable at the new lower speed. At the same time, new connections will open and the process never ends. The latency will be worse.

    Both ways of adjusting the throughput have their place, but the incoming limit is best used in conjunction with an outgoing one, as a "backup" in case the "ratio" that we have guessed at is wrong for a given user/day/seeder or whatever.

    By way of example, by limiting outgoing P2P to 10% and the incoming limit to 66%, I can reduce ping times to 21-30 ms most of the time. By increasing the outgoing limit to 80% to allow more P2P, the ping time goes out to over 100 ms. You must "juggle" both figures.

    I always set an incoming limit on P2P of around 50% - this will help to stabilise things by forcing the normal TCP backoff mechanisms to cut in IF the incoming P2P does rise too high. It also ensures that there is always SOME room for other applications to get a look in. (NOTE - although I want to emphasize the importance of limiting outgoing requests, you will almost certainly find this incoming limit on P2P bandwidth to be essential).

    10-20% outgoing seems like a *very* small figure, and many people are extremely reluctant to set this limit, or even 50%. You will see people insisting quite vehemently that they want to allow uploads to fill their bandwidth - which is why you will see so many complaints in the forums that QOS doesn't work. They have no BALANCE. Just imagine what would happen if we let our P2P applications send out a total of 512kbps of data. The returning P2P file downloads might be of the order of 50 times bigger - say 25Mbps - and the router cannot accept it, so a huge queue forms. The reply to any ping we send out will often time out before it gets back to us. In this eventuality, an incoming limit of say 80% does allow the whole link to stabilize by the normal TCP backoff mechanisms, BUT it takes time - and new connections are being opened all the time, so this isn't the best approach. The latency is better with a lower limit on outbound traffic AND a lower incoming limit.

    [NB - If you are using uTorrent, the best way to get files fast is to prevent seeds altogether, or limit them to say 10kbps. Just try it and see. By doing this I am able to max out incoming P2P if I wish at 8Mbps.]

    So, let's examine what happens if we set our QOS to allow P2P at a rate that actually does allow it to reach 100% of outgoing bandwidth when no other application is using it. Let's suppose we then PING the gateway at our own ISP. The returning ping reply is held up in the queue of P2P data files waiting to be sent to our router. There is no priority on it; it has to wait its turn in the queue. And there will ALWAYS be a queue. Our aim is to make sure that the queue is small enough that the returning ping is not held up longer than is necessary for our application to work. Now, if that application is a game requiring fast response, or VOIP audio, then we have to make a choice between it and the P2P. Reducing other traffic will improve your response time, and using an "upload" or "crawl" class MAY help, but problems still occur. A compromise is necessary!

    And this answers the last part of your questions. Into the "P2P Upload" or "crawl" class we dump UDP, uTP, P2P uploads, and anything else we don't want to influence our normal data. By increasing the limit on this class you can see other applications slowly lose their edge. You have to strike a balance which suits your own usage. It may be difficult to generate enough P2P traffic to actually see the effect of much of the above, so be careful about drawing conclusions too quickly.

    To recap - if the P2P class is running amok, always REDUCE the outgoing P2P class data to something small, like 5%, and you will usually see an immediate reduction. Remove any incoming limit on that class just so you can see the effect of your changes. Then slowly increase to a safe figure. Keep experimenting until you are sure you see the relationship. Now reinstate an incoming limit. Use the client monitor feature in the latest Toastman compiles to see what is happening to your traffic.

    So, let me be quite brutal about this. YOU CANNOT HAVE YOUR CAKE AND EAT IT TOO, as my father used to say.

    In a large multi-user environment there has to be an emphasis on speed or customers will complain. We aren't so concerned about P2P or file downloads and their throughput figures.

    But, what setting would allow good P2P downloads with reasonable latency and good response for WWW browsing? Try P2P outgoing 5% rate 80% limit - incoming limit on that class 80% to 100%. Overall INBOUND LIMIT set to the 80% of measured speed. WWW class outgoing rate 40% limit 100% OR 100 - 100 and incoming limit NONE.
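    Put as actual numbers, assuming an illustrative 10 Mbps down / 1 Mbps up link (substitute your own measured speeds), those percentages work out as:

```shell
#!/bin/sh
# Worked example of the suggested settings above. The 10000/1000 kbps
# link speeds are assumptions - use your own measured figures.
DOWN_KBPS=10000; UP_KBPS=1000
echo "Overall inbound limit:   $((DOWN_KBPS * 80 / 100)) kbps (80% of measured)"
echo "P2P outgoing rate:       $((UP_KBPS * 5 / 100)) kbps (5%)"
echo "P2P outgoing limit:      $((UP_KBPS * 80 / 100)) kbps (80%)"
echo "P2P class inbound limit: $((DOWN_KBPS * 80 / 100)) kbps (80%)"
```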

    As a home user, you may not need to go quite as far as I have; I need to make sure that everything works sufficiently well that most people do not even realize they are on a shared connection with around 100 other users.
  33. Toastman


    Unclassified Connections - an explanation

    One of the biggest puzzles for many users has been the plethora of connections cluttering up the "unclassified" section of the QOS page. The wiki states that this is traffic destined for the router, and offers no more explanation. Users can easily see, however, that most of it has a source and destination port that is not the router, and is usually associated with p2p traffic. They find it impossible to classify this with any rule.

    OK, here are some clues. See the source (= local client) port in most of these offending connections? Using that port number, let's make a new QOS rule for both source and destination port, and put it into some class we can monitor - it doesn't matter which, as long as you know what it is. Commit. Now all of this traffic should be classified, right? Er .. no, it's still there.

    Now switch off our p2p client. Go to ADVANCED/CONNTRACK and click on "drop idle" connections. Switch back to QOS graphs. The junk is still there, right? And it keeps changing, some disappear, some new ones open, right? Wait for some time, even after several hours. Some are still there, yes? Wait a week, they're (mostly) gone...

    Now, force a reconnect to your ISP to change your WAN IP address. Immediately they all stop.

    So what is the explanation?

    They are usually incoming connections from remote computers that have been contacted by your p2p application (etc) and are attempting to connect TO your assigned incoming port to download your shared files. Your p2p client may not acknowledge some or all of these, as they may have already been timed out by the router - the port is probably no longer open. Or you may in fact have switched off your computer and gone to bed, or maybe you restarted your p2p client with a new randomly selected port number. But the remote site will not know that and will still keep trying to connect to your p2p application. Depending on what P2P software the remote computer is using and how it is configured, it can do this for a very long time. And there can be many of these connections which don't go anywhere - i.e. they stop at your router. I often have over 100, some of them days old. To check this, I changed the p2p port assignment, but many incoming connections were still trying to use the OLD port several hours later. Often, these are machines trying to contact your P2P client because the tracker registered that you have a file part that they wish to download.

    So, they are not classified because they were unsuccessful attempts by a remote server at making a connection - and they STOPPED AT YOUR ROUTER because there was nothing listening for them to connect to. They will time out, but new ones will keep being opened until the remote site(s) gives up altogether.
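    You can spot these dead attempts in the conntrack table yourself: they show up as UNREPLIED entries. A sketch (SAMPLE below is a fabricated line in the old ip_conntrack style, for illustration only; on the router you would read the live table from /proc/net/ip_conntrack or /proc/net/nf_conntrack, depending on kernel):

```shell
#!/bin/sh
# Sketch: an "unclassified" connection is typically an UNREPLIED conntrack
# entry - a remote peer knocking on a port nothing is listening on any more.
# SAMPLE is a made-up entry for illustration, not real router output.
SAMPLE='tcp 6 117 SYN_SENT src=93.184.216.34 dst=10.0.0.5 sport=40120 dport=51413 [UNREPLIED]'
case "$SAMPLE" in
  *UNREPLIED*) echo "stopped at the router - the remote side never got an answer" ;;
esac
```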

    They do no harm in the "unclassified" section, so just leave them alone....

    NB - There are many connections that also stop at the router; you should examine the source and destination ports very carefully for clues. Don't assume your QOS is faulty and screw it up by fixing things that weren't broken!

    There are quite a few cases where incoming connections have been stopped by Tomato's firewall etc. and these may show up and confuse you. Rest easy - Tomato did its job! Stop worrying, and don't keep posting complaints about something being broken when it isn't.

    Be aware that many P2P programs will cause connection storms that will crash your router. It's up to you to find ways to protect it.

  35. Toastman


    YouTube used to be really quite difficult to do much about. But things are a bit better now. There are supposedly two protocols used, normal HTTP and RTSP. RTSP can be covered but HTTP is not so easy. TBH I think that there are many YouTube videos which seem not to even come from YouTube servers, and I think several protocols may actually be in use.

    Happily, a new YouTube filter has recently been designed by "Porter" (youtube-2012) and is worth a try; it's included in all the latest versions of Toastman Tomato. I find it works extremely well! So - belt and braces time:

    We will use the FLASH and HTTPVIDEO L7 filters, which seem to work most of the time. To test them I used a fairly high quality video (480p) on YouTube.

    This video is 10 minutes long, and just so I could see if the classifications worked, I placed them in HIGH (Games) class above WWW, where it streamed happily without a single hiccup. After confirming that it worked reliably on several other streams, I moved it back to the MEDIUM (Media) class.

    After verifying that these settings work to some degree, I enabled the new YouTube filter, being careful to put it above the others in the list. Now all YouTube videos appear to be correctly classified.

    I am currently streaming a shoutcast TV from Germany, another HD shoutcast from USA, 8 local IPTV sessions, while downloading uTorrent at 7Mbps, and I have just opened a 2 way web TV session with my brother. There are also nine other people online, everything is still working, and I've been streaming the same shoutcast station from Germany all day now. So I guess it's Bingo!

    I will add these new rules to the last QOS example setup in the QOS thread: http://www.linksysinfo.org/forums/showpost.php?p=357556&postcount=135 . If you try them, please let me know how well they work for your YouTube viewing.

    The fact that YouTube delivers its files over HTTP is precisely why the service (and online video in general) was able to become so popular. While streaming protocols have failed (and still do) over small, unstable or walled connections, a simple download always succeeds. No special cases, no proprietary protocols; just the same old HTTP packets that all internet-related hardware and software is optimized for. But - we do need to identify it and give it a priority if we run a busy router.

    I think that, in common with shoutcast streaming over HTTP, after a series of dropped packets the client opens a new connection to the server. Thus, after a while, there are many old connections waiting to time out. So you need to be very harsh when setting conntrack timeouts. FAST Conntrack timeouts are very important to getting the router's QOS working well. And I do mean FAST. Pare it right down to the bone.
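    Tomato exposes these timeouts on the Advanced -> Conntrack/Netfilter page, but the same knobs can be set from the shell. A sketch (the proc paths vary by kernel, ip_conntrack_* vs nf_conntrack_*, and the values here are illustrative assumptions, not recommendations for every link):

```shell
# Sketch: tightening conntrack timeouts from the shell. Values in seconds,
# and purely illustrative - the stock established timeout is 5 days (432000),
# which is far too long for a busy router full of dead streaming connections.
echo 1200 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established
echo 30   > /proc/sys/net/ipv4/netfilter/ip_conntrack_udp_timeout
echo 10   > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_time_wait
```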

    In fact, it is said that globally, streaming video represents 36% of all HTTP traffic. And of that 36%, YouTube represents 20% – equating to 10% of all internet traffic. And there's very little anyone can do about it. So enjoy it!

    TIP - A big improvement in YouTube video can often be obtained by "accelerators", which open several streams for the same video and recombine them at the receiving end.

    Useful information site on Video providers: http://en.wikipedia.org/wiki/Comparison_of_video_services

  36. Toastman



    nvram set qos_orules="0<<-1<d<53<0<<0:10<<0<DNS>0<<-1<d<37<0<<0:10<<0<Time>0<<17<d<123<0<<0:10<<0<NTP>0<<-1<d<3455<0<<0:10<<0<RSVP>2<<-1<a<<0<<<<0<TESTER>0<<-1<d<9<0<<0:50<<4<SCTP, Discard>0<<-1<x<135,2101,2103,2105<0<<<<4<RPC (Microsoft)>0<<17<d<3544<0<<<<-1<Teredo Tunnel>0<<6<x<22,2222<0<<<<3<SSH>0<<6<d<23,992<0<<<<3<Telnet>0<<6<s<80,5938,8080,2222<0<<<<3<Remote Access>0<<-1<x<8050,34567<0<<<<1<DVR>0<<-1<x<3389<0<<<<3<Remote Assistance>0<<-1<x<6970:7170,8554<0<<<<2<Quicktime/RealAudio>0<<-1<d<1220,7070<0<<<<2<Quicktime/RealAudio>0<<-1<x<554,5004,5005<0<<<<2<RTP, RTSP>0<<-1<x<1755<0<<<<2<MMS (Microsoft)>0<<-1<d<3478,3479,5060:5063<0<<<<1<SIP, Sipgate Stun Services>0<<-1<s<53,88,3074<0<<<<1<Xbox Live>0<<6<d<1718:1720<0<<<<1<H323>0<<-1<d<11031,11235:11335,11999,2300:2400,6073,28800:29100,47624<0<<<<1<Various Games>0<<-1<d<1493,1502,1503,1542,1863,1963,3389,5061,5190:5193,7001<0<<<<6<MSGR1 - Windows Live>0<<-1<d<1071:1074,1455,1638,1644,5000:5010,5050,5100,5101,5150,8000:8002<0<<<<6<MSGR2 - Yahoo>0<<-1<d<194,1720,1730:1732,5220:5223,5298,6660:6669,22555<0<<<<6<MSGR3 - Additional>0<<-1<d<19294:19310<0<<<<6<Google+ & Voice>0<<6<d<6005,6006<0<<<<6<Camfrog>0<<-1<x<6571,6891:6901<0<<<<6<WLM File/Webcam>0<<-1<x<29613<0<<<<6<Skype incoming>0<<17<x<3478:3497,16384:16387,16393:16402<0<<<<6<Apple Facetime/Game Center>0<<-1<a<<0<skypetoskype<<<1<Skype to Skype>0<<-1<a<<0<skypeout<<<-1<Skype Phone (deprecated)>0<<-1<a<<0<youtube-2012<<<2<YouTube 2012 (Youtube)>0<<6<d<119,563<0<<<<7<NNTP News & Downloads>0<<-1<a<<0<httpvideo<<<2<HTTP Video (Youtube)>0<<-1<a<<0<flash<<<2<Flash Video (Youtube)>0<<-1<a<<0<rtp<<<2<RTP>0<<-1<a<<0<rtmp<<<2<RTMP>0<<-2<a<<0<rtmpt<<<2<RTMPT (RTMP over HTTP)>0<<-1<a<<0<shoutcast<<<2<Shoutcast>0<<-1<a<<0<irc<<<6<IRC>0<<6<d<80,443,8080<0<<0:512<<4<HTTP, HTTPS, HTTP Proxy>0<<6<d<80,443,8080<0<<512:<<7<HTTP, SSL File Transfers>0<<6<d<20,21,989,990<0<<<<7<FTP>0<<6<d<25,587,465,2525<0<<<<5<SMTP, Submission Mail>0<<6<d<110,995<0<<<<5<POP3 Mail>0<<6<d<143,220,585,993<0<<<<5<IMAP Mail>0<<17<d<1:65535<0<<<<9<P2P (uTP, UDP)"
    nvram commit
  37. Toastman


    Several people have asked what my focus is. It's easily answered - just a stable, minimalist build without bloat or unnecessary changes that don't add value to Tomato. You won't find a torrent client, Photoshop plugins, or spyware. It is intended for routing, not as a plaything or a kid's toy. It is intended for 24/7 use with 200-250 users online simultaneously. If it's not stable under these conditions, it's no use to me.

    I hope you also find it useful.

    Be warned, several recent Tomato releases by other developers have very broken QOS. Be careful!