
Dilemma with Tomato QOS and the Netgear R7000

Discussion in 'Tomato Firmware' started by AllenJ, Oct 8, 2017.

  1. AllenJ

    AllenJ Serious Server Member

    I've been using Tomato for more than 10 years now... my first few were all Linksys WRT54G(L). I migrated over to ASUS as they had a pretty good run, and even ran RMerlin's excellent code for a while. I found that my ASUS routers weren't keeping up with the traffic in my house and bugs in the QOS code prevented me from isolating devices that would swamp the network and kill VOIP conversations and other real-time sensitive devices. I switched to Toastman on all my devices a couple of years ago specifically for QOS, and all was right with the world.

    I recently upgraded from Comcast 50/10 to 300/25, which is a major improvement. However, I learned several things in the process. Even with my speedy R7000 ARM-based router, the only way to get to 300Mbps+ is to disable QOS and turn on Cut-Through Forwarding (CTF). I used the BufferBloat grade on the DSLReports speed test to gauge how easy it was to saturate my network... which was VERY easy without some sort of QOS or BW limit applied to control spikes in network activity. Over the years I found many causes, from remote CrashPlan backups to P2P clients, that would just suddenly kill us during the work day. QOS was pretty foolproof on Tomato when configured to operate 10-20% underneath the maximum speeds.
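    (On the old 50/10 line, for example, that rule of thumb worked out to limits of roughly 40-45Mbps down and 8-9Mbps up.)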

    HOWEVER, now with 300Mb I found two big issues with my configuration:
    1) The Cisco DPC3008 was not adequate to support "up to 340Mbps" as claimed (250-260 max)
    2) With QOS on, the max speed was capped at no more than about 160Mbps

    Upgrading to the Netgear CM1000 solved the cable modem problem in minutes using the Comcast autoconfig! And using CTF seems to resolve the QOS handicap, but then allows clients to be bad citizens and wreck my network. So I've implemented bandwidth limiting on various ranges that I already had configured with static IPs, and I have deliberately crippled the ranges to operate well below the max tested speed of about 360/30 Mbps. The fastest I allow even my own desktop to operate is about 150 Mbps.

    There are several benefits that I like with this config -- I can allocate the most bandwidth and priority to our desktops, laptops, and VOIP devices, while stepping down speeds for other less critical clients. I really like the "Default Class for unlisted MAC / IPs in LAN (br0)" bucket where I can put any unknown devices (they get about a 5% rate / 10% ceiling share of the WAN).
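    (For the curious: under the hood a limiter entry like that boils down to an HTB class plus a u32 filter on the source range. A rough, hypothetical sketch of one entry in tc terms -- assuming WAN egress on vlan2, a 25Mbit uplink, and an example range of 192.168.1.96/27 capped at 5Mbit rate / 15Mbit ceiling:)
    Code:
    # hypothetical standalone example, not Tomato's actual script
    tc qdisc add dev vlan2 root handle 1: htb default 20
    tc class add dev vlan2 parent 1: classid 1:1 htb rate 25mbit ceil 25mbit
    # catch-all class for unmatched traffic
    tc class add dev vlan2 parent 1:1 classid 1:20 htb rate 5mbit ceil 25mbit
    # one limiter row: guaranteed 5Mbit, allowed to borrow up to 15Mbit
    tc class add dev vlan2 parent 1:1 classid 1:100 htb rate 5mbit ceil 15mbit
    # steer everything sourced from the range into that class
    tc filter add dev vlan2 parent 1: protocol ip prio 5 u32 match ip src 192.168.1.96/27 flowid 1:100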

    So my dilemma... the old-timers are leaving (or left) the Tomato ecosystem and have said for years what a mess the codebase is. Other than a few nvram scripts to save and transfer my static DHCP addresses, I'm not a big tweaker except to the extent that I want my network to have reliable, predictable performance. Should I stay with Tomato and jump to a new branch? Should I just keep the (stable) version I have on my router and 3 APs? Or (gasp) jump to DD-WRT or (double gasp) go back to stock firmware?
     
  2. AllenJ

    AllenJ Serious Server Member

    Incidentally, I'm sure most of you know that the issue with having your network swamped is typically about what is flowing out. Running a cloud backup, sending a large e-mail, or any large upload from your network can easily bog down the entire link, even with 10x as much bandwidth available on the download side. Here are my results with only CTF managing traffic and with the Bandwidth Limiter in effect, respectively (dunno why one is HTTPS):

    [screenshot: DSLReports bufferbloat result with CTF only]
    [screenshot: DSLReports bufferbloat result with the Bandwidth Limiter]
     
  3. cloneman

    cloneman Networkin' Nut Member

    You could try to move to another platform that has bufferbloat control, like Smart Queue on the ERX. Most of the magic in Tomato QoS isn't the classification; it's the fq_codel or SFQ + global bandwidth limit that automatically slows large file transfers in favor of smaller flows like web browsing and VOIP, by keeping track in real time of flows/connections that use unfairly high amounts of bandwidth. As such, smaller real-time traffic is never dropped (on purpose) even without any special rules.
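    To illustrate the idea (a minimal sketch, not Tomato's actual script -- assumes a 10Mbit uplink on vlan2): one HTB class pinned just under the link rate with fq_codel inside it already gets you most of the benefit:
    Code:
    # cap egress just below the real uplink so the queue builds here
    # instead of in the modem, then let fq_codel share it per-flow
    tc qdisc add dev vlan2 root handle 1: htb default 10
    tc class add dev vlan2 parent 1: classid 1:10 htb rate 9mbit ceil 9mbit
    tc qdisc add dev vlan2 parent 1:10 handle 10: fq_codel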

    On the Tomato platform you can try to bring down the ifb0 interface (for MIPS it's imq0), which for me disables download QoS while keeping upload QoS enabled. Of course this doesn't turn on CTF, but it could help someone on an asymmetrical connection.
     
  4. thomaz

    thomaz Networkin' Nut Member

    How do I do this?

    AllenJ: I only get ~118 Mbps with QoS enabled and the default values :(
    [screenshot: speed test result with QoS enabled]
     
  5. cloneman

    cloneman Networkin' Nut Member

    For me, the command is:
    Code:
    ip link set dev imq0 down
    For ARM routers like yours it might be ifb0 instead of imq0; you can get a list of interfaces with just 'ip link'.

    This will likely have to be performed at every reboot (WAN Up script?) and every time you manually press Save on a QoS page.

    I have no idea what the real impacts of this command are, but it might be interesting to try. In my case I see nearly 80% CPU usage with only 50 Mbps on MIPS, so it's possible that this command won't help you at all. Make sure your PPTP server is disabled if you turned it on, as in my experience it doubles CPU usage.
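    If you want to automate it, something like this in Administration -> Scripts -> WAN Up should cover both platforms (untested sketch -- it just downs whichever of the two devices exists):
    Code:
    #!/bin/sh
    # down the inbound-QoS device if present: ifb0 on ARM, imq0 on MIPS
    for dev in ifb0 imq0; do
        ip link show "$dev" >/dev/null 2>&1 && ip link set dev "$dev" down
    done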
     
  6. thomaz

    thomaz Networkin' Nut Member

    Thanks a lot, but it doesn't work for me.
    I enabled QoS and ran "ip link set dev ifb0 down",
    but then I have no internet connection anymore.

    This is what ip link outputs with QoS enabled (I x'ed out the MAC addresses):
    Code:
    1: lo: <LOOPBACK,MULTICAST,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN mode DEFAULT  
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 
    2: ifb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN mode DEFAULT qlen 32 
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    3: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 32 
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    4: ifb2: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 32 
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    5: ifb3: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 32 
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000 
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    7: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000 
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    8: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000 
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    9: vlan1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT  
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    10: vlan2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP mode DEFAULT  
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    11: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT  
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    12: wl0.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000 
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    13: wl0.2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000 
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    14: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT  
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    15: br2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT  
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    16: br3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT  
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff 
    
     
  7. thomaz

    thomaz Networkin' Nut Member

    Another question:
    Is there any other way to throttle speed except QoS and the BW Limiter?
    I only need to lower my upload speed a little bit so that pings stay under 100ms.
    Or can I throttle WiFi only with some other kind of mechanism?
    As I said, my Linux experience is zero.
    On Windows, tools like NetLimiter have nearly 0% CPU usage. :)
     
  8. cloneman

    cloneman Networkin' Nut Member

    Hmm, that exceeds my knowledge level then, especially with no access to ARM routers. In theory you can poke around in /etc/qos to see how to disable the inbound queuing portion, but there are several layers of complexity to overcome.

    If you upload your /etc/qos, maybe someone can offer you a command that will turn off inbound shaping. No guarantees that this will be enough to reach 300Mbps.

    The EdgeRouter X supports upload-only Smart Queue, so I would probably switch to that.
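    On the upload-only throttle question: the lightest thing I can think of is a single token-bucket qdisc on the WAN device. A hedged sketch, assuming vlan2 is your WAN and both QoS and the BW Limiter are switched off so nothing else owns the root qdisc:
    Code:
    # simple hard cap on egress only -- no classes, no per-flow state,
    # just a token bucket (numbers are examples for a ~10Mbit uplink)
    tc qdisc add dev vlan2 root tbf rate 9mbit burst 32k latency 50ms
    CPU cost is close to zero since there's no per-flow tracking, but unlike fq_codel it won't protect small flows from big ones.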
     
  9. thomaz

    thomaz Networkin' Nut Member

    There is no qos in /etc.
    I only found a file called wan_qos:
    Code:
    #!/bin/sh
    WAN_DEV=vlan2
    IMQ_DEV=ifb0
    TQA="tc qdisc add dev $WAN_DEV"
    TCA="tc class add dev $WAN_DEV"
    TFA="tc filter add dev $WAN_DEV"
    TQA_IMQ="tc qdisc add dev $IMQ_DEV"
    TCA_IMQ="tc class add dev $IMQ_DEV"
    TFA_IMQ="tc filter add dev $IMQ_DEV"
    Q="fq_codel"
    
    case "$1" in
    start)
    	tc qdisc del dev $WAN_DEV root 2>/dev/null
    	$TQA root handle 1: htb default 90 r2q 22
    	$TCA parent 1: classid 1:1 htb rate 10240kbit ceil 10240kbit 
    # egress 0: 5-100%
    	$TCA parent 1:1 classid 1:10 htb rate 512kbit ceil 10240kbit   prio 1 quantum 1500
    	$TQA parent 1:10 handle 10: $Q
    	$TFA parent 1: prio 10 protocol ip handle 1 fw flowid 1:10
    	$TFA parent 1: prio 110 protocol ipv6 handle 1 fw flowid 1:10
    # egress 1: 5-30%
    	$TCA parent 1:1 classid 1:20 htb rate 512kbit ceil 3072kbit   prio 2 quantum 1500
    	$TQA parent 1:20 handle 20: $Q
    	$TFA parent 1: prio 20 protocol ip handle 2 fw flowid 1:20
    	$TFA parent 1: prio 120 protocol ipv6 handle 2 fw flowid 1:20
    # egress 2: 5-100%
    	$TCA parent 1:1 classid 1:30 htb rate 512kbit ceil 10240kbit   prio 3 quantum 1500
    	$TQA parent 1:30 handle 30: $Q
    	$TFA parent 1: prio 30 protocol ip handle 3 fw flowid 1:30
    	$TFA parent 1: prio 130 protocol ipv6 handle 3 fw flowid 1:30
    # egress 3: 5-70%
    	$TCA parent 1:1 classid 1:40 htb rate 512kbit ceil 7168kbit   prio 4 quantum 1500
    	$TQA parent 1:40 handle 40: $Q
    	$TFA parent 1: prio 40 protocol ip handle 4 fw flowid 1:40
    	$TFA parent 1: prio 140 protocol ipv6 handle 4 fw flowid 1:40
    # egress 4: 5-70%
    	$TCA parent 1:1 classid 1:50 htb rate 512kbit ceil 7168kbit   prio 5 quantum 1500
    	$TQA parent 1:50 handle 50: $Q
    	$TFA parent 1: prio 50 protocol ip handle 5 fw flowid 1:50
    	$TFA parent 1: prio 150 protocol ipv6 handle 5 fw flowid 1:50
    # egress 5: 5-70%
    	$TCA parent 1:1 classid 1:60 htb rate 512kbit ceil 7168kbit   prio 6 quantum 1500
    	$TQA parent 1:60 handle 60: $Q
    	$TFA parent 1: prio 60 protocol ip handle 6 fw flowid 1:60
    	$TFA parent 1: prio 160 protocol ipv6 handle 6 fw flowid 1:60
    # egress 6: 5-70%
    	$TCA parent 1:1 classid 1:70 htb rate 512kbit ceil 7168kbit   prio 7 quantum 1500
    	$TQA parent 1:70 handle 70: $Q
    	$TFA parent 1: prio 70 protocol ip handle 7 fw flowid 1:70
    	$TFA parent 1: prio 170 protocol ipv6 handle 7 fw flowid 1:70
    # egress 7: 5-100%
    	$TCA parent 1:1 classid 1:80 htb rate 512kbit ceil 10240kbit   prio 8 quantum 1500
    	$TQA parent 1:80 handle 80: $Q
    	$TFA parent 1: prio 80 protocol ip handle 8 fw flowid 1:80
    	$TFA parent 1: prio 180 protocol ipv6 handle 8 fw flowid 1:80
    # egress 8: 5-30%
    	$TCA parent 1:1 classid 1:90 htb rate 512kbit ceil 3072kbit   prio 9 quantum 1500
    	$TQA parent 1:90 handle 90: $Q
    	$TFA parent 1: prio 90 protocol ip handle 9 fw flowid 1:90
    	$TFA parent 1: prio 190 protocol ipv6 handle 9 fw flowid 1:90
    
    	$TFA parent 1: prio 15 protocol ip u32 match ip protocol 6 0xff match u8 0x05 0x0f at 0 match u16 0x0000 0xffc0 at 2 match u8 0x02 0x02 at 33 flowid 1:10
    
    	$TFA parent 1: prio 17 protocol ip u32 match ip protocol 6 0xff match u8 0x05 0x0f at 0 match u16 0x0000 0xffc0 at 2 match u8 0x01 0x01 at 33 flowid 1:10
    
    	$TFA parent 1: prio 19 protocol ip u32 match ip protocol 6 0xff match u8 0x05 0x0f at 0 match u16 0x0000 0xffc0 at 2 match u8 0x04 0x04 at 33 flowid 1:10
    
    	$TFA parent 1: prio 13 protocol ip u32 match ip protocol 1 0xff flowid 1:10
    
    	tc qdisc del dev $WAN_DEV ingress 2>/dev/null
    	$TQA handle ffff: ingress
    
    	ip link set $IMQ_DEV up
    	tc qdisc del dev $IMQ_DEV 2>/dev/null
    	$TQA_IMQ handle 1: root htb default 90 r2q 427
    	$TCA_IMQ parent 1: classid 1:1 htb rate 204800kbit ceil 204800kbit
    
    	$TFA parent ffff: prio 10 u32 match ip dst 0.0.0.0/0 action mirred egress redirect dev $IMQ_DEV
    
    	# class id 10: rate 10240kbit ceil 204800kbit
    	$TCA_IMQ parent 1:1 classid 1:10 htb rate 10240kbit ceil 204800kbit prio 1 quantum 1500
    	$TQA_IMQ parent 1:10 handle 10: $Q
    	$TFA_IMQ parent 1: prio 10 protocol ip handle 1 fw flowid 1:10 
    	$TFA_IMQ parent 1: prio 110 protocol ipv6 handle 1 fw flowid 1:10 
    
    	# class id 20: rate 4096kbit ceil 40960kbit
    	$TCA_IMQ parent 1:1 classid 1:20 htb rate 4096kbit ceil 40960kbit prio 2 quantum 1500
    	$TQA_IMQ parent 1:20 handle 20: $Q
    	$TFA_IMQ parent 1: prio 20 protocol ip handle 2 fw flowid 1:20 
    	$TFA_IMQ parent 1: prio 120 protocol ipv6 handle 2 fw flowid 1:20 
    
    	# class id 30: rate 10240kbit ceil 204800kbit
    	$TCA_IMQ parent 1:1 classid 1:30 htb rate 10240kbit ceil 204800kbit prio 3 quantum 1500
    	$TQA_IMQ parent 1:30 handle 30: $Q
    	$TFA_IMQ parent 1: prio 30 protocol ip handle 3 fw flowid 1:30 
    	$TFA_IMQ parent 1: prio 130 protocol ipv6 handle 3 fw flowid 1:30 
    
    	# class id 40: rate 20480kbit ceil 184320kbit
    	$TCA_IMQ parent 1:1 classid 1:40 htb rate 20480kbit ceil 184320kbit prio 4 quantum 1500
    	$TQA_IMQ parent 1:40 handle 40: $Q
    	$TFA_IMQ parent 1: prio 40 protocol ip handle 4 fw flowid 1:40 
    	$TFA_IMQ parent 1: prio 140 protocol ipv6 handle 4 fw flowid 1:40 
    
    	# class id 50: rate 40960kbit ceil 184320kbit
    	$TCA_IMQ parent 1:1 classid 1:50 htb rate 40960kbit ceil 184320kbit prio 5 quantum 1500
    	$TQA_IMQ parent 1:50 handle 50: $Q
    	$TFA_IMQ parent 1: prio 50 protocol ip handle 5 fw flowid 1:50 
    	$TFA_IMQ parent 1: prio 150 protocol ipv6 handle 5 fw flowid 1:50 
    
    	# class id 60: rate 10240kbit ceil 184320kbit
    	$TCA_IMQ parent 1:1 classid 1:60 htb rate 10240kbit ceil 184320kbit prio 6 quantum 1500
    	$TQA_IMQ parent 1:60 handle 60: $Q
    	$TFA_IMQ parent 1: prio 60 protocol ip handle 6 fw flowid 1:60 
    	$TFA_IMQ parent 1: prio 160 protocol ipv6 handle 6 fw flowid 1:60 
    
    	# class id 70: rate 10240kbit ceil 143360kbit
    	$TCA_IMQ parent 1:1 classid 1:70 htb rate 10240kbit ceil 143360kbit prio 7 quantum 1500
    	$TQA_IMQ parent 1:70 handle 70: $Q
    	$TFA_IMQ parent 1: prio 70 protocol ip handle 7 fw flowid 1:70 
    	$TFA_IMQ parent 1: prio 170 protocol ipv6 handle 7 fw flowid 1:70 
    
    	# class id 80: rate 10240kbit ceil 204800kbit
    	$TCA_IMQ parent 1:1 classid 1:80 htb rate 10240kbit ceil 204800kbit prio 8 quantum 1500
    	$TQA_IMQ parent 1:80 handle 80: $Q
    	$TFA_IMQ parent 1: prio 80 protocol ip handle 8 fw flowid 1:80 
    	$TFA_IMQ parent 1: prio 180 protocol ipv6 handle 8 fw flowid 1:80 
    
    	# class id 90: rate 10240kbit ceil 61440kbit
    	$TCA_IMQ parent 1:1 classid 1:90 htb rate 10240kbit ceil 61440kbit prio 9 quantum 1500
    	$TQA_IMQ parent 1:90 handle 90: $Q
    	$TFA_IMQ parent 1: prio 90 protocol ip handle 9 fw flowid 1:90 
    	$TFA_IMQ parent 1: prio 190 protocol ipv6 handle 9 fw flowid 1:90 
    
    	# set up the IMQ device (otherwise this won't work) to limit the incoming data
    	ip link set $IMQ_DEV up
    	;;
    stop)
    	ip link set $IMQ_DEV down
    	tc qdisc del dev $WAN_DEV root 2>/dev/null
    	tc qdisc del dev $IMQ_DEV root 2>/dev/null
    	tc filter del dev $WAN_DEV parent ffff: prio 10 u32 match ip dst 0.0.0.0/0 action mirred egress redirect dev $IMQ_DEV 2>/dev/null
    	;;
    *)
    	echo "..."
    	echo "... OUTGOING QDISCS AND CLASSES FOR $WAN_DEV"
    	echo "..."
    	tc -s -d qdisc ls dev $WAN_DEV
    	echo
    	tc -s -d class ls dev $WAN_DEV
    	echo
    	echo "..."
    	echo "... INCOMING QDISCS AND CLASSES FOR $WAN_DEV (routed through $IMQ_DEV)"
    	echo "..."
    	tc -s -d qdisc ls dev $IMQ_DEV
    	echo
    	tc -s -d class ls dev $IMQ_DEV
    	echo
    esac
    
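    (Side note: judging from the catch-all case at the bottom, running /etc/wan_qos with no argument dumps the qdisc and class statistics for both directions -- handy for checking whether traffic is actually landing in the classes you expect.)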
     
  10. Monk E. Boy

    Monk E. Boy Network Guru Member

    Based on my limited reading of qos & wan_qos, it looks like imq0 and ifb0 are used for both incoming and outgoing QoS, so bringing the interface down would seem to bring down all QoS, not just incoming. It sounds like there is no fallback under ARM if ifb0 goes down, while MIPS has some kind of fallback when imq0 is down. It's probably something buried in iptables, but without physical access to an ARM router and sufficient time it's beyond me (I have plenty of MIPS routers to poke and prod at).
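    Looking at the wan_qos thomaz posted, though, the inbound half seems to hang off the ffff: ingress qdisc on vlan2, which redirects into ifb0. If I'm reading it right, a hedged guess at killing only inbound shaping would be to remove the redirect before downing the device (downing ifb0 while the redirect is live makes mirred drop every inbound packet, which would explain thomaz losing connectivity):
    Code:
    # delete the ingress qdisc on the WAN device; this also removes the
    # mirred filter that shovels inbound traffic into ifb0
    tc qdisc del dev vlan2 ingress
    # only now is it safe to take ifb0 down
    ip link set dev ifb0 down
    The egress HTB tree on vlan2 stays untouched, so upload QoS should keep working.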

    If all you want is speedy internet at 300Mb, it's hard to go wrong with an EdgeRouter X. They keep adding hardware-accelerated features to it all the time; when it launched it was almost entirely CPU-bound, but now most of the important features are offloaded to hardware. Even some forms of encryption have hardware offload now. I have one driving a 150/30 line with Smart Queue enabled, and the only time the CPU goes above 10% is when I'm poking around the web interface. For $50 it offers incredible performance.
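    The whole Smart Queue setup is a handful of CLI lines (this is from memory of the EdgeOS CLI, so double-check the syntax -- the interface and rates here are just examples; drop the download line if you only want upload shaping):
    Code:
    configure
    set traffic-control smart-queue sq wan-interface eth0
    set traffic-control smart-queue sq upload rate 30mbit
    set traffic-control smart-queue sq download rate 150mbit
    commit
    save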
     
