
QoS Rules: how to classify upload to webserver

Discussion in 'Tomato Firmware' started by jochen, Nov 23, 2013.

  1. jochen

    jochen LI Guru Member

    I have Owncloud sync installed, which often uploads large amounts of data via WebDAV to my web server on port 443.

    The standard QoS rules include a rule that classifies TCP traffic to ports 80 and 443 as download. But uploads to a web server should be classified as "remote".

    Any ideas how to distinguish uploads from downloads?
     
  2. Porter

    Porter LI Guru Member

    Are you aware of the consequences that classifying uploads of large amounts of data as "remote" would have on your connection? I'd recommend leaving the classification untouched and just increasing the maximum upload of the "FileXfer" class to 90%, maybe even 100%. Test it. This seems like the better idea.
     
  3. jochen

    jochen LI Guru Member

    Yes, I am. If you have read Toastman's tutorial on how QoS works, you would know that you can't control incoming traffic, only outgoing traffic. So you have to estimate the relationship between incoming and outgoing traffic for each kind of traffic, and limit the outgoing side of it. With normal desktop PC usage you have small amounts of outgoing data, which return large amounts of incoming data. With servers on your network this relationship is reversed, and for that the "remote" class is right. Uploading data through a browser is the same as having a web or FTP server.
     
  4. Porter

    Porter LI Guru Member

    I read Toastman's tutorial, and if it says we don't have control over incoming traffic, it's outdated. We have very good control over outgoing traffic, and nowadays we also have reasonably good control over incoming traffic. Please forget the idea that we limit our downstream by limiting our upstream. That is unnecessary.

    So maybe I've gotten you wrong the first time: are you saying the webserver is in your LAN? I assumed it was on your WAN side. Could you explain to me in more detail which traffic is coming from where and traveling to where?
     
  5. jochen

    jochen LI Guru Member

    No, the webserver is on the WAN side. But I'm uploading, not downloading. So there is a lot of traffic going out and very little coming in (only ACKs).
     
  6. Porter

    Porter LI Guru Member

    Ok, then I don't understand why you don't want your cloud traffic in the FileXfer class. Is it too slow? Do you have other web-traffic uploads that are stealing bandwidth?
     
  7. jochen

    jochen LI Guru Member

    Yes, my son is downloading around the clock :-(
     
  8. Marcel Tunks

    Marcel Tunks Networkin' Nut Member

    You could apply the bandwidth limiter to his devices instead of using QoS.
     
  9. Porter

    Porter LI Guru Member

    How much bandwidth do you have? What is the maximum bandwidth in % for the FileXfer class in outbound direction? Just downloading shouldn't use that much upstream bandwidth.

    There are at least two solutions to this problem: You can either make a filter with your son's IP and put _all_ his traffic in a low class. The problem is that he might find out, and if he then changes his address, you'll have to enforce it again. Another option would be to disable QoS and use the Bandwidth Limiter. But I'm not familiar with this. What's certain is that you would lose prioritization of certain traffic (e.g. YouTube vs. downloads).

    The next option has some difficulties, too. If the server you are uploading to has a static IP (which would mean it's not part of a CDN) then you could just make a filter with this IP and put the traffic in a higher class.
     
  10. Toastman

    Toastman Super Moderator Staff Member

    The traffic counter in the QOS system counts outgoing traffic, not incoming. So actually, since you wish to control upload traffic, that is precisely what you want.

    As soon as your upload exceeds the figure allowed for website browsing, (256k?) then all further upload traffic will switch to the FILEXFER class. You can change the settings for that class to suit your own needs or place it in a different class to suit yourself.
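    The threshold-based switch Toastman describes can be sketched as a toy model. Note this is purely illustrative: the 256 KB cutoff, the function name, and the class names here are assumptions for the sketch, not Tomato's actual implementation (which does this with netfilter byte-count matching).

```python
# Toy model of byte-count-based reclassification (illustrative only;
# the cutoff and class names are assumptions, not Tomato's real config).

WWW_BYTE_LIMIT = 256 * 1024  # hypothetical 256 KB cutoff for the browsing class

def classify(dst_port: int, bytes_sent: int) -> str:
    """Pick a QoS class for an outgoing TCP connection."""
    if dst_port in (80, 443):
        # Small transfers stay in the browsing class; once a connection
        # has uploaded more than the cutoff, it moves to FileXfer.
        return "WWW" if bytes_sent <= WWW_BYTE_LIMIT else "FileXfer"
    return "Default"

print(classify(443, 10_000))      # a small HTTPS upload stays in WWW
print(classify(443, 5_000_000))   # a large one lands in FileXfer
```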

    (BTW - Anyone who thinks that my QOS notes have ever said that we have no control over incoming traffic either did not read the thread, or totally failed to understand it. In fact I repeatedly emphasize in several places to totally ignore persons who make that statement).
     
  11. Porter

    Porter LI Guru Member

    I've just checked and it seems as if the traffic counter always checks both directions. So the direction doesn't matter.

    As far as I understood jochen, his son uses too much upstream. Therefore he probably needs to set up new filters. I'm just curious how much upstream jochen actually has, because I find it difficult to believe that he gets so little upstream, even while downloading.
     
  12. jochen

    jochen LI Guru Member

    My bandwidth is 500kbit/s upstream and 6000kbit/s downstream. I configured 450/5000 in QoS.
    The Download class is configured with:

    Outbound rate 10% - 80%
    Inbound rate 80% - 100%

    I think these are Toastman's defaults.
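    In absolute terms, those percentages work out as follows (plain arithmetic on the limits above):

```python
# Convert the Download class percentages into kbit/s, given the
# configured QoS limits of 450 kbit/s up and 5000 kbit/s down.
up_limit, down_limit = 450, 5000  # kbit/s, as configured above

outbound_min = 10 * up_limit // 100    # 45 kbit/s guaranteed
outbound_max = 80 * up_limit // 100    # 360 kbit/s ceiling
inbound_min = 80 * down_limit // 100   # 4000 kbit/s guaranteed
inbound_max = down_limit               # 5000 kbit/s ceiling

print(outbound_min, outbound_max, inbound_min, inbound_max)
```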

    I already tried applying the bandwidth limiter to my son's PC, but it seems that QoS and the bandwidth limiter do not work together. When I enable both, my internet becomes mostly unusable.

    As I understand QoS, inbound traffic is limited by controlling outgoing traffic (the more goes out, the more comes in). Each kind of traffic has a typical incoming/outgoing ratio. Assuming the download class has a ratio of 100:1, we limit outgoing traffic to 1/100 of what we allow for incoming. This ratio is reversed for uploading. So we need a separate "upload" class, but we cannot distinguish it on a port basis.
     
  13. Porter

    Porter LI Guru Member

    You probably don't need to limit your overall bandwidth that much. If you are using DSL and you are from Germany, try the ADSL overhead feature on QOS/Basic Settings. Choose a value of 32; it doesn't matter which entry you choose.
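    The reason an overhead setting matters on ADSL is that traffic is carried in fixed-size ATM cells: each IP packet plus per-packet link-layer overhead is split into 48-byte cell payloads, and every cell costs 53 bytes on the wire. A rough sketch of the wire cost (the 32-byte overhead follows the suggestion above; real encapsulations vary, and this ignores details like the AAL5 trailer):

```python
import math

# Rough ATM wire-cost model for a DSL line. Each IP packet plus
# per-packet overhead is carried in 48-byte ATM cell payloads;
# every 53-byte cell is paid for in full, including padding.
OVERHEAD = 32                 # per-packet link-layer overhead in bytes
CELL_PAYLOAD, CELL_SIZE = 48, 53

def wire_bytes(ip_packet_len: int) -> int:
    """Bytes actually consumed on the ATM link by one IP packet."""
    cells = math.ceil((ip_packet_len + OVERHEAD) / CELL_PAYLOAD)
    return cells * CELL_SIZE

# A full 1500-byte packet costs noticeably more than 1500 bytes:
print(wire_bytes(1500))   # 32 cells -> 1696 bytes
# Tiny packets (e.g. a 40-byte ACK) are proportionally much worse:
print(wire_bytes(40))     # 2 cells -> 106 bytes
```

    This is why a QoS limit that ignores the overhead overshoots the real line capacity, especially with many small packets.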

    Why does your inbound rate have a guaranteed 80%?! This is very wrong. Did you change this? Please set it back to 5%, maybe 10%. The sum of the left-hand values of all classes in one direction must not exceed 100%. Which firmware are you using, by the way? You should probably post some screenshots of your QoS config.

    Bandwidth-Limiter and QoS don't work together! It's either the one or the other!

    Please forget what you think about QoS. I already told you it's wrong and Toastman told you it's wrong, while also explicitly telling you it's not even in his guide!

    The only question here is whether you get enough upload bandwidth to your cloud service. Did you measure how much you can get? Are you sure your son only downloads and doesn't upload at the same time? You can set the outbound rate of your FileXfer class to 100%. This shouldn't have big adverse effects. But you'll have to see how it influences your connection in real life.
     
  14. jochen

    jochen LI Guru Member

    Ok, I can try that. What limits would you then suggest for a German DSL 6000 line?

    I think my setup was based on this: http://tomatousb.org/tut:easy-toastman-qos-setup

    It is the latest Shibby mod for an Asus RT-N16. I'll attach screenshots later; I have to make them on some other PC, because of problems in Ubuntu with AMD drivers.

    I don't think I misunderstood Toastman's tutorial.
    http://www.linksysinfo.org/index.php?threads/using-qos-tutorial-and-discussion.28349/

    That's exactly what I said above. You cannot directly control incoming traffic. You can only limit it by not requesting more data.

    Why do you think this is wrong?
     
  15. Toastman

    Toastman Super Moderator Staff Member

    Your statement was:

     
  16. jochen

    jochen LI Guru Member

    Yes, and that still is true. Maybe I had to be more specific and should have said:

    "you can't directly control incoming traffic, only outgoing traffic."
     
  17. Porter

    Porter LI Guru Member

    My modem says I'm getting 570/5600 (roughly). I'm using 530/5400. But make sure you use the DSL-Overhead feature!

    The setup you used is outdated. Shibby probably has good default values; please use those. Somewhere in this thread there are probably newer examples, so you don't have to reset your config.
    Those are mine, but I altered them: http://linksysinfo.org/index.php?threads/using-qos-tutorial-and-discussion.28349/page-11#post-234920


    Ok, right now I read Toastman's guide as if he's arguing against controlling downstream by limiting upstream, but then he explains exactly that as a means of controlling traffic. (Toastman: correct me if I'm wrong. I just read those few paragraphs.)

    Here's how I think this works:
    The one big mechanism missing from that argument is TCP's own mechanism for controlling traffic. TCP knows when packets are not reaching the receiver. If this occurs, it will decrease the rate at which it sends out packets and resend the missing ones. QoS exploits this mechanism by artificially dropping packets when artificially imposed limits are exceeded. The limits are artificial because they are lower than our true, physical line capacity, so we prevent large queues. This is how QoS works in both directions (that's a rough explanation and not true for every protocol).
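    The back-off behaviour described above can be illustrated with a toy additive-increase/multiplicative-decrease loop (the numbers and step sizes are arbitrary; real TCP congestion control is considerably more involved):

```python
# Toy TCP-style rate control: add bandwidth each round, halve the
# rate whenever the artificial QoS limit causes packet loss.
LIMIT = 400        # artificial cap in kbit/s (below the true line speed)
rate = 100.0       # arbitrary starting rate in kbit/s
history = []

for _ in range(20):
    if rate > LIMIT:        # router drops packets above the cap...
        rate = rate / 2     # ...sender reacts with multiplicative decrease
    else:
        rate += 50          # otherwise additive increase, probing for room
    history.append(rate)

# The rate ends up oscillating around the artificial limit instead of
# filling the physical line and building a large queue.
print(history)
```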

    Also, we don't limit ACK packets, which are responsible for increasing the rate at which the sender sends data. It can happen that your upstream is so small that it is completely saturated by ACK packets, and your downstream is then limited because the receiver simply cannot acknowledge all the received packets. This usually isn't the case: upstreams are usually large enough to accommodate saturation of the downstream and still have some bandwidth to spare. So in essence, we don't limit how much data we request. If we did, we would never be able to upload at our full upstream line speed, because ACK packets are very small while data packets are big; to limit the downstream through upstream ACK packets, we would have to dramatically reduce our upstream and simply waste bandwidth. We are not indirectly limiting how much downstream we receive; we are directly throwing away packets on our downstream, and TCP then recognizes the packet loss, resends the lost packets, and decreases the speed at which packets are sent. You could argue that this is an indirect approach too, but it's a different one.
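    Using the line from this thread (6000 kbit/s down, 500 kbit/s up) as an example, the pure-ACK cost of saturating the downstream is easy to estimate. The figures below are rough assumptions for the sketch: full 1500-byte segments, a bare 40-byte ACK per two segments (delayed ACK), and no link-layer overhead:

```python
# Rough estimate of the upstream consumed by ACKs while the
# downstream is fully saturated. All sizes are assumptions.
down_kbit = 6000              # downstream line rate from this thread
segment_bytes = 1500          # typical full-size TCP segment
ack_bytes = 40                # IP + TCP header, no payload
acks_per_segment = 0.5        # delayed ACK: one ACK per two segments

segments_per_s = down_kbit * 1000 / 8 / segment_bytes          # 500 segments/s
ack_kbit = segments_per_s * acks_per_segment * ack_bytes * 8 / 1000
print(ack_kbit)
```

    That comes to 80 kbit/s, i.e. only about 16% of a 500 kbit/s upstream, which is why a healthy upstream normally has plenty of room left for real uploads.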

    One last thing: you really can't know in advance how much traffic is going to hit your router. This is why control over your downstream is worse than the control you have over your upstream, and why you will probably need a higher safety margin in the downstream direction. But downstream limiting works on its own.
     
  18. jochen

    jochen LI Guru Member

    Ok, I will try it with your settings and DSL overhead enabled. Do you prioritize small packets?
     
  19. Porter

    Porter LI Guru Member

    Yes, I prioritize ACK, SYN, FIN, RST. As for ACKs, I never actually know whether it helps or not.
     
  20. koitsu

    koitsu Network Guru Member

    Note: I have no familiarity with QoS.

    You're discussing TCP flags here, and what isn't made clear (to you? Not sure) is that these flags can come in combinations, i.e. SYN+ACK, FIN+ACK, ACK+PSH, etc.

    The payload of most packets is going to come in the form of ACK or ACK+PSH. (URG is possible too, but I've actually never seen it used; I'm sure it is on some networks.)

    SYN is what a client sends to a server to initiate a new TCP connection. The server responds with SYN+ACK (both flags set), then the client responds with an ACK. Details: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_establishment

    From that point forward, depending on what is negotiated (TCP has many capabilities/options/extensions), repeated ACKs may go back and forth between client/server to deliver payload (1:1 ratio of ACKs), or repeated ACKs before a response ACK is given (ex. 4:1 ACK ratio) which is commonly used for delivering large amounts of data with a decrease in latency (less time waiting for the client to send the server a "yeah I got that last packet" response). This kind of packet flow gets very complex very quickly depending on what TCP options/capabilities are negotiated between client and server during the initial handshake (and sometimes subsequent ACK packets, i.e. window resizing).

    FIN is what a client and server will send to one another (mainly FIN sent, responded to by ACK, responded to by FIN, responded to by another ACK) during socket closure. You may have heard of things like TIME_WAIT or FIN_WAIT_1 or FIN_WAIT_2 states -- that's what these are. It's a way for a client or server to inform the other end "I'm done with you, but I'm still watching for packets you send me past this point".
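    From an application's point of view, all of this flag-level work is done by the kernel; a program only sees connect(), send/recv, and close(). A minimal localhost sketch (the echo server here is just a stand-in to have both ends in one script):

```python
import socket
import threading

# Minimal localhost client/server. The kernel performs the packet-level
# work described above: connect() triggers the SYN / SYN+ACK / ACK
# handshake, the sendall()/recv() exchange rides on ACK (+PSH) packets,
# and the close() calls at the end trigger the FIN/ACK teardown.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()          # handshake completes here
    conn.sendall(conn.recv(1024))      # echo the payload back
    conn.close()                       # starts the FIN exchange

t = threading.Thread(target=serve_once)
t.start()

client = socket.create_connection(("127.0.0.1", port))  # sends the SYN
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)
```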

    RST can be sent by either client or server and can indicate a broken or finally closed TCP connection (something crapped out, socket closed); it's abrupt and "rude" per se. It's also been known to be injected by middleman throttling devices (ex. Sandvine) to induce slowdowns. RST can, effectively, be treated as an error condition.

    If you're trying to qualify packets for "high priority" processing, i.e. you want that connection to come up fast and packets to start flowing ASAP, you need to prioritise SYN and SYN+ACK. For fast closure, prioritise FIN and/or RST.

    I can't help past this point.
     
  21. Porter

    Porter LI Guru Member

    Tomato's QoS as default prioritizes small packets where those flags are set: SYN, FIN, RST. As I understand it, QoS doesn't care whether more than one flag is set.

    The discussion revolves around the question of whether prioritizing small ACK packets actually helps. I don't know the answer, but from my perspective it does seem to make sense, and I haven't seen adverse effects.

    http://www.bufferbloat.net/projects/bloat/wiki/ACK_prioritization
     
  22. Marcel Tunks

    Marcel Tunks Networkin' Nut Member

    @koitsu
    That was a really nice and concise explanation!

    It fits with the theory of Toastman, Porter, and the bufferbloat guys' comments that ACK prioritization should reduce latency under some conditions, but like Porter, I haven't seen a difference in a SOHO setting. Mind you, I'm not a gamer anymore...

    The bufferbloat folks do lots of testing, but haven't posted any results on that specific topic.
     
  23. koitsu

    koitsu Network Guru Member

    It might not make a difference given how the NAT layer handles stateful connections -- state tracking (ex. conntrack) is very common and is used to skip parts of the networking stack for speed reasons (i.e. the networking stack already knows the TCP handshake has completed, so there's no need to keep sending all the subsequent ACK/ACK+PSH packets through certain layers of netfilter).

    I don't know "where" the QoS layer on Tomato sits within the full realm of the entire networking stack on Linux, so I can't tell you if prioritising ACK in QoS rules would have any effect.

    For what I'm talking about, re: state tracking, take a peek at /proc/net/nf_conntrack or /proc/net/ip_conntrack sometime (just cat them). There is something similar in the OpenBSD/FreeBSD pf firewalling layer, but there you look at things with pfctl -s state; example from my FreeBSD VPS box:

    Code:
    root@omake:~ # pfctl -s state
    all tcp 208.79.90.130:39610 -> 192.81.135.201:6667  ESTABLISHED:ESTABLISHED
    all tcp 208.79.90.130:6667 <- 76.102.14.35:2642  ESTABLISHED:ESTABLISHED
    all tcp 208.79.90.130:6667 <- 176.11.146.197:49245  ESTABLISHED:ESTABLISHED
    all tcp 208.79.90.130:995 <- 76.102.14.35:39247  FIN_WAIT_2:FIN_WAIT_2
    all icmp 208.79.90.130:37143 -> 192.81.135.201:37143  0:0
    all icmp 208.79.90.130:37655 -> 76.102.14.35:37655  0:0
    all tcp 208.79.90.130:995 <- 76.102.14.35:11313  FIN_WAIT_2:FIN_WAIT_2
    all icmp 208.79.90.130:47145 <- 76.102.14.35:47145  0:0
    all tcp 208.79.90.130:22 <- 76.102.14.35:35140  ESTABLISHED:ESTABLISHED
    all udp 208.79.90.130:123 -> 208.90.144.52:123  MULTIPLE:SINGLE
    
    root@omake:~ # pfctl -s info
    Status: Enabled for 4 days 07:22:49  Debug: Urgent
    
    Interface Stats for em0  IPv4  IPv6
      Bytes In  365193755  0
      Bytes Out  371165557  0
      Packets In
      Passed  4939761  0
      Blocked  3002  0
      Packets Out
      Passed  4936940  0
      Blocked  0  0
    
    State Table  Total  Rate
      current entries  9
      searches  9879401  26.5/s
      inserts  69588  0.2/s
      removals  69579  0.2/s
    Counters
      match  72353  0.2/s
      bad-offset  0  0.0/s
      fragment  2  0.0/s
      short  0  0.0/s
      normalize  0  0.0/s
      memory  0  0.0/s
      bad-timestamp  0  0.0/s
      congestion  0  0.0/s
      ip-option  0  0.0/s
      proto-cksum  0  0.0/s
      state-mismatch  3  0.0/s
      state-insert  0  0.0/s
      state-limit  0  0.0/s
      src-limit  0  0.0/s
      synproxy  0  0.0/s
    
    root@omake:~ # pfctl -s memory
    states  hard limit  10000
    src-nodes  hard limit  10000
    frags  hard limit  5000
    tables  hard limit  1000
    table-entries hard limit  200000
    
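    A Linux conntrack entry from /proc/net/ip_conntrack can be picked apart with a few lines of script. The sample line below is made up for illustration (documentation addresses), and the exact field layout varies across kernel versions, so this only handles the key=value fields:

```python
# Parse one (illustrative) /proc/net/ip_conntrack-style entry into a dict.
# The sample line is fabricated; real layouts differ between kernels.
sample = ("tcp 6 431999 ESTABLISHED src=192.168.1.10 dst=203.0.113.5 "
          "sport=51512 dport=443 src=203.0.113.5 dst=192.168.1.10 "
          "sport=443 dport=51512 [ASSURED] use=1")

fields = sample.split()
entry = {"proto": fields[0], "timeout": int(fields[2]), "state": fields[3]}
# Keep only the first (original-direction) occurrence of each key;
# the second src/dst pair describes the reply direction.
for token in fields[4:]:
    if "=" in token:
        key, value = token.split("=", 1)
        entry.setdefault(key, value)

print(entry["state"], entry["dst"], entry["dport"])
```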
     
  24. cloneman

    cloneman Networkin' Nut Member

    Simple solution: Remove port 443 from one of your high classes. Leave it as port 80 only.

    Create a new rule that gives port 443 a 20% minimum and 100% maximum, and put it near the bottom, e.g. class "Bulk" or class #9, whatever the second-to-last one is.

    This will:

    - allow uploads to be fast when the connection is idle (100% speed)
    - guarantee 20% of the bandwidth for secure websites when the internet connection is saturated
    - allow more important traffic to use up to 80% of the bandwidth while SSL uploads are in progress
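    On the 450 kbit/s upstream discussed earlier in this thread, those percentages translate to concrete rates (simple arithmetic, shown only to make the proposal tangible):

```python
# What cloneman's proposed port-443 class yields on a 450 kbit/s upstream.
up_limit = 450                      # kbit/s, the QoS upstream cap from above

https_min = 20 * up_limit // 100    # guaranteed to the new port-443 class
https_max = up_limit                # available when the line is otherwise idle
others_max = 80 * up_limit // 100   # left over for higher classes during uploads

print(https_min, https_max, others_max)
```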
     
