Tomato Toastman's Releases

Discussion in 'Tomato Firmware' started by Toastman, Dec 18, 2011.

  1. dupondje

    dupondje Serious Server Member

    It indeed affects only the IP address assignment of the LAN clients.
    Your ISP probably gives you a /64 or bigger.
    Depending on the method, you get an IP via SLAAC, DHCPv6, or both.

    Please see my blog for more detailed info :)
  2. Morac

    Morac Network Guru Member

    I read your blog, I'm just not seeing what you saw. Clients are getting an IPv6 address via DHCPv6 on my LAN. As far as I'm aware, they should also get SLAAC addresses automatically, but they are not.

    And yes I have a /64 prefix.

    Edit: Is the second IP address the link-local one beginning with fe80? I do see that on my clients, but my understanding is that this is normal and isn't a public address.

    Edit 2: Well, now I'm really confused, as my iOS devices are showing 3 different IPv6 addresses with the assigned prefix in the NDP list. None contain the MAC address.

    Edit 3: I decided to use a Windows PC since that makes seeing the IP addresses easier. My PC actually has four IPv6 addresses: two "IPv6 addresses", one "Temporary IPv6 address" and one "Link-local IPv6 Address". All are shown as "preferred", but none of them contain the MAC address. Of the two "IPv6 addresses", one appears to contain the IPv4 address, since it only has 32 bits in the non-prefix part. The address used publicly is the "Temporary" one.
    Last edited: Aug 10, 2014
  3. Morac

    Morac Network Guru Member

    I'm not seeing this version in the 4Shared site. The latest version I see there is 7506.
  4. dupondje

    dupondje Serious Server Member

    @Morac: what does "netsh interface ipv6 show address" give exactly?
    It normally shows each address's origin there: Other/Public/Dhcp
  5. Morac

    Morac Network Guru Member

    Ok, I see a DHCP, a public and a temporary IPv6 address. I did some digging, and apparently Windows doesn't base the SLAAC address on the MAC address for some reason. Also, Windows will use a temporary IPv6 address when initiating outbound connections, for privacy reasons.

    So it looks like I am getting both a SLAAC and a DHCP IPv6 address.

    I tested pinging all 3 (including the temp one) using a ping-IPv6 web page and they all worked, which means all three are publicly reachable.

    That makes me wonder: why bother disabling one or the other? There are plenty of IPv6 addresses available, and obviously an interface isn't limited to just one, so I don't see the downside of having two (or more) IPv6 addresses per client.
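    For context, the classic SLAAC interface identifier Morac was looking for is derived from the MAC address via EUI-64: flip the universal/local bit of the first octet and splice ff:fe into the middle. Windows deviates by using randomized identifiers instead, which is why no MAC shows up there. A minimal sketch of the derivation (the prefix and MAC below are made-up examples):

    ```python
    import ipaddress

    def eui64_interface_id(mac: str) -> bytes:
        octets = bytes(int(b, 16) for b in mac.split(":"))
        # flip the universal/local bit of the first octet, splice in ff:fe
        return bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:6]

    def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
        net = ipaddress.IPv6Network(prefix)
        iid = int.from_bytes(eui64_interface_id(mac), "big")
        return ipaddress.IPv6Address(int(net.network_address) | iid)

    print(slaac_address("2001:db8::/64", "00:1a:2b:3c:4d:5e"))
    # → 2001:db8::21a:2bff:fe3c:4d5e
    ```

    An address in this form makes the client's hardware address visible to every peer, which is exactly the privacy concern temporary addresses were introduced to counter.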
    Last edited: Aug 10, 2014
  6. dupondje

    dupondje Serious Server Member

    - You might only want 1 IP
    - You might not want a SLAAC IP address, because it exposes your MAC address to others
    - You might want SLAAC because Android doesn't support DHCPv6
    - You might want static IPs without reservations, and not care about MAC address privacy

    And if you still want both, you can just tick both options. Issue solved :)
  7. Morac

    Morac Network Guru Member

    What's the default setting?

    And I still don't see the update.
  8. dupondje

    dupondje Serious Server Member

    SLAAC & DHCPv6 are enabled by default.
  9. Toastman

    Toastman Super Moderator Staff Member Member

    Yep, sources will be updated on repo.or when time is available, and remember, some of us don't have 100Mbps upload speeds! Similarly, files will appear as they are uploaded; it takes a while! Be patient. There are 2GB of files.
  10. Grimson

    Grimson Networkin' Nut Member

    I have at best 300kbit upload speed, so I know how it is. That's why I prefer the sources and build it myself, as updating the repo should be a lot faster than uploading all the compiled images. Though I understand you prefer to get the images out first, so I will wait. Thanks again for your work.
    HitheLightz and Toastman like this.
  11. lancethepants

    lancethepants Network Guru Member

    Wow, and I thought that the 1Mbps upload that centurylink has (US) was terrible. Yes, source should be pretty speedy to UL, only a minute or so. I like to roll my own as well :)
  12. leandroong

    leandroong LI Guru Member

    Self-compilation is becoming SOP. Even for my rt-n56u (Padavan FW), they don't give a compiled version; they just teach you how to compile, with a sample of up-to-date changes as an example.
    I'm curious how to update a FW library: they don't just replace old source code, they make modifications and even delete folders. How should I, for instance, go about updating an outdated library, say curl, to the latest version?
  13. Morac

    Morac Network Guru Member

    I realize it takes time to upload the files and have no problem with that, but I would suggest holding off on announcing that a version has been released until the files have actually been uploaded. It would cut down on the confusion.
  14. Morac

    Morac Network Guru Member

  15. lancethepants

    lancethepants Network Guru Member

    I created the GUI that shibby and raf use, and can create a patch for Toastman if he wants it (hoping for a 'git push' sometime too). They have it enabled by default. Toastman seems like a disabled-by-default kind of guy, but it sure would be nice to have the GUI option for us Comcast users.
    HitheLightz, Toastman and bripab007 like this.
  16. Monk E. Boy

    Monk E. Boy Network Guru Member

    Except then people would be coming here and posting questions about the new version on the server while he's in the process of uploading it. Unless he could block people from accessing the folder until he's done uploading, confusion is inevitable. At least this way it's just the people who refuse to read who get confused, and there's not much you can do for them.
  17. Toastman

    Toastman Super Moderator Staff Member Member


    I leave the upload running after I start it, and go about my normal life. When it finishes uploading, I may be 400km away.

    BTW, I'll take a look at the comcast fix, see what it is about.
  18. Morac

    Morac Network Guru Member

  19. Toastman

    Toastman Super Moderator Staff Member Member

    Looks like a good addition, I will add the fix and Lance's GUI - thanks Lance!
    Morac likes this.
  20. HunterZ

    HunterZ Network Guru Member

    Edit: Figured out what I had originally posted about. It's off-topic for this thread.
    Last edited: Aug 13, 2014
  21. Grimson

    Grimson Networkin' Nut Member

    There is a bug in the commit for the Comcast dhcp fix, the line:
    { title: 'Enable SYN cookies', name: 'f_syncookies', type: 'checkbox', value: nvram.ne_syncookies != '0' }
    in advanced-firewall.asp is missing the comma "," at the end, so the input fields for the firewall section aren't generated.
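    As a hypothetical minimal reproduction (the surrounding entries and the nvram object here are stand-ins, not the actual advanced-firewall.asp contents; only the syncookies entry is taken from the real file): each entry in the page's field array must be comma-separated, or the whole array literal fails to parse and no inputs get rendered. The fix is restoring the trailing comma:

    ```javascript
    // Stand-in for the nvram values the page script normally has available.
    const nvram = { block_wan: '0', ne_syncookies: '1' };

    // Hypothetical field list; without the comma after the syncookies entry,
    // this literal is a syntax error and the whole section fails to render.
    const fields = [
      { title: 'Respond to ICMP ping', name: 'f_icmp', type: 'checkbox', value: nvram.block_wan != '1' },
      { title: 'Enable SYN cookies', name: 'f_syncookies', type: 'checkbox', value: nvram.ne_syncookies != '0' }, // comma restored
      { title: 'Another field', name: 'f_other', type: 'checkbox', value: false }
    ];

    console.log(fields.length); // → 3
    ```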
  22. lancethepants

    lancethepants Network Guru Member

    Another slight issue, for building anyway. I remember this popping up a while ago when running my own rolled version (with updated OpenVPN).
    I think with version 2.3.4 the OpenVPN developers' system had a different version of automake or something. Trying to build the latest stops with OpenVPN automake issues.
    I would throw in the following
    autoreconf -vi
    just before ./configure is run. It doesn't hurt to have it whether it's needed or not, and certainly wouldn't hurt for future versions.

    edit: ah, yes, OpenVPN wants automake 1.13 specifically. Debian doesn't have a package for it, and I manually added automake 1.14 some time ago, but it wants that specific version, which I think is a little erroneous on their part.
    Last edited: Aug 13, 2014
  23. Grimson

    Grimson Networkin' Nut Member

    Yes, it wants automake 1.13.2; it even complains about 1.13.4. I also had to update autoconf from 2.68 to 2.69, because OpenVPN randomly failed to build even with the right automake version.
  24. leandroong

    leandroong LI Guru Member

    Sounds similar to my Entware gettext repo compilation problem. The error looks like this:
    make[3]: Entering directory `/home/leandroong/entware/openwrt_trunk/build_dir/host/gettext-0.19.2'
    CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /home/leandroong/entware/openwrt_trunk/build_dir/host/gettext-0.19.2/build-aux/missing aclocal-1.13 -I m4
    /home/leandroong/entware/openwrt_trunk/build_dir/host/gettext-0.19.2/build-aux/missing: line 81: aclocal-1.13: command not found
    WARNING: 'aclocal-1.13' is missing on your system.
             You should only need it if you modified 'acinclude.m4' or
             '' or m4 files included by ''.
             The 'aclocal' program is part of the GNU Automake package:
             It also requires GNU Autoconf, GNU m4 and Perl in order to run:
    make[3]: *** [aclocal.m4] Error 127
  25. koitsu

    koitsu Network Guru Member

    @Grimson @leandroong @lancethepants, I welcome you all to GNU autotools hell. You can include libtool in that mix. Here's some advice from someone (me) who used to be a FreeBSD ports committer and had to deal with autoconf/automake idiocy all the time:

    Open-source programs that come with tarballs that already contain a configure script or Makefile should, generally speaking, not have automake or autoconf (autoreconf) run on them. Try to use the ones that were included with the software. The versions of automake and autoconf used to generate those files are provided within those files -- do not try to use any other version than what you see in those files.

    The GNU autotools have a horrible, HORRIBLE history (we're talking almost 2 decades worth) of not being backwards compatible or shitting the bed when it comes to even minor revisional changes.

    The GNU autotools also have a horrible, HORRIBLE history (just as long) -- especially autoconf -- of writing improper shell scripts (that's all configure is! libtool is similar in this regard) that do not work on some platforms (Solaris is a common pain point).

    On FreeBSD, we actually have a whole series of "wrapper" scripts that get used to patch/fix bugs on a per-autoconf/per-automake version basis, plus the one that comes with the ports framework by default (read the commit logs for some laughs and tears), and special one-offs for autoconf 2.13 and automake 1.4. No I'm not kidding. And libtool is also a pain point. The port maintainer for those wrappers (ade@) is a guy nobody dares upset or irritate because his stress level is through the roof already having to deal with it all. This is why you'll see most BSD-centric people writing Makefiles by hand (and they work on GNU make as well), and not bothering with configure. And yes, many of the BSDs are multi-arch, so don't tell me they're needed. :) It's just that for whatever reason Linux people have this awful fixation with GNU autotools when the things should be shipped off to an island never to be heard from again (oh wait... Australia proves that doesn't work either... ;-) ).

    Alternately there's CMake, but that's entering a whole new/different world of hurt.

    TL;DR -- Don't regenerate configure or Makefiles if you can avoid it. If you need to, use the exact versions that are specified in the existing files. Yes you will have to go through utter hell to make that work and very likely break other software in the process. You can thank GNU for that.

    And gettext is always a PITA in every way too, by the way. I actually make a deliberate effort to build all my FreeBSD boxes without NLS support across the board (usually that's --disable-nls during the configure phase of software dependent upon gettext, but sometimes it's a lot more than that because the authors don't actually test --disable-nls all the time, sigh) just so I don't have to deal with gettext. It really doesn't serve a purpose unless you plan on converting between character sets (i18n), including Unicode.
    Last edited: Aug 14, 2014
  26. lancethepants

    lancethepants Network Guru Member

    @koitsu Maybe you can give some additional insight on an issue. I cannot figure out a way to commit the OpenVPN source, re-clone the git repo, and have it build successfully. From what I can tell, if you commit the source and keep the repo without ever re-cloning, it seems to work fine. If however you re-clone the repo from scratch, somehow the OpenVPN source or something else has changed, causing the build to fail on OpenVPN. It didn't have this issue with previous versions. I'm wondering if it could be a timestamp issue. Maybe something similar to this thread?

    I've also had the issue with the latest pre-release version of tinc, and not previous versions. My solution there was, instead of using the prepared source tarball, to clone the tinc git repo, check out the latest tag, delete the .git directory, and commit that. configure isn't kept in git, so it is generated using autoreconf. I thought to try something similar with OpenVPN, but am having issues.

    It's just funny. The source tarball works fine immediately after extracting it, and doesn't give autoconf issues; only after the individual source files have been committed to git and re-cloned.
  27. HunterZ

    HunterZ Network Guru Member

  28. koitsu

    koitsu Network Guru Member

    The issue to me sounds like one of two things (maybe both):

    a) Clock-related. I forget, but doesn't Tomatoware run on the router itself? If so, I'm not surprised by this problem -- clock drift there happens regularly/constantly due to the lack of any decent timecounter. ntpd can help alleviate some of this, but I'm really not sure how reliable ntpd would be on an embedded device that lacks decent timecounters. I imagine ntpd would have to make adjustments a lot more often than usual.

    This often manifests itself during make-time as things "randomly" not building/being rebuilt, and sometimes in other ways like make skipping over things that do need to be built. Welcome to how important decent timecounters are!

    If it's happening on actual Linux systems, then they probably aren't running ntpd or chrony. A cronjob running ntpdate/rdate is the wrong approach (and will definitely break make and other things very badly during major clock skew) -- don't let anyone tell you that using a cronjob + ntpdate/rdate is the way to go. (That method is used on TomatoUSB, however, because the desire to keep accurate time via ntpd isn't justified in 99% of the cases out there; there isn't all that much going on that demands "accurate time". Plus there's the complexity of NTP server selection vs. geographic region and so on... And it's not really all that reliable, FYI -- there are broken NTP servers in their pool all the time.) The only time ntpdate/rdate should be used is immediately at boot, immediately prior to ntpd being launched; ntpd can take care of clock skew gradually from there.

    b) Badly-written Makefiles. It's remarkable how common this is. The more common situation I see here is when people try to use "make -j" and it barfs horribly all over everything because none of the Makefiles are written with parallel jobs in mind (all sorts of things can break this, all the way down to using pipes without a semaphore). But I have seen Makefiles that had incorrect dependencies in their target lines. "touch foo.c && make. What do you mean there's no changes?! rm foo.o && make. Oh that works?!?!?! Um..."

    You could use GNU touch (Busybox touch does not support any of these flags) to recursively change both the access and modification times of all the files within the directory before doing the tar if you wanted. The way I usually do this is through something like this:

    touch /tmp/tempfile
    find dir | xargs touch -amc -r /tmp/tempfile
    rm /tmp/tempfile
    tar -pcf tarball.tar dir
    gzip -9 tarball.tar
    You could also remove use of /tmp/tempfile entirely and use the -A flag, assuming you're willing to work out an exact date/time in the format of YYYYMMDDhhmm.SS (I think date +'%Y%m%d%H%M.%S' would get you this; stick that into a variable in the script then refer to that. It's probably a better way, IMO, than relying on a file in /tmp).
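    A runnable sketch of that variable-based variant, assuming GNU coreutils (GNU touch spells the explicit-timestamp flag -t, and its [[CC]YY]MMDDhhmm[.ss] format matches the one given above; 'dir' is a placeholder source tree):

    ```shell
    # Stamp every file in the tree with one fixed timestamp, then tar it up,
    # so a later re-clone/extract doesn't trip make's mtime comparisons.
    STAMP=$(date +'%Y%m%d%H%M.%S')
    mkdir -p dir
    echo 'int main(void){return 0;}' > dir/file.c
    find dir -exec touch -c -t "$STAMP" {} +
    tar -pcf tarball.tar dir
    gzip -9 tarball.tar
    ```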

    This is how I do it as part of my bsdhwmon project (Makefile syntax coming up): no I don't touch all the files exported with cvs export, but at least you can get an idea of what I do.

    # Assign YYYYMMDD to $NOW variable
    NOW!=   /bin/date +%Y%m%d

    release:
            @echo cvs tag bsdhwmon-${NOW}
            @echo cd /tmp
            @echo cvs export -d bsdhwmon-${NOW} -r bsdhwmon-${NOW} bsdhwmon
            @echo tar -pcf bsdhwmon-${NOW}.tar bsdhwmon-${NOW}
            @echo gzip -9 bsdhwmon-${NOW}.tar
            @echo chmod 0644 bsdhwmon-${NOW}.tar.gz
            @echo rm -fr bsdhwmon-${NOW}
            @echo ls -l /tmp/bsdhwmon-${NOW}.tar.gz
    The spaces before @echo and after NOW!= are a single literal tab (yes, it matters -- GNU make lets you get away with spaces, but all other makes do not; good habit with Makefiles = use tabs). And the reason I use @echo rather than just running the actual commands is that I prefer to just type "make release" and have it show me all the commands I need to run/copy-paste. What can I say, I like doing things manually.
    Last edited: Aug 15, 2014
  29. HunterZ

    HunterZ Network Guru Member

    Just a side note about NTP: ntpd has command line parameters that allow it to perform a one time large initial adjustment, rendering ntpdate wholly unnecessary.

    Also, I run an ntpd cluster on my LAN using 3 routers and two real Linux boxes, with the gateway bringing in time from the WAN and intercepting NTP requests made by LAN clients to addresses outside of the LAN. It all works quite well.
  30. koitsu

    koitsu Network Guru Member

    You're referring to the -g flag. Yup -- but it hasn't always behaved that way (I'm an old UNIX admin, old habits die hard), and it's always important to review the behaviour of the exact version of ntpd you plan on using beforehand.

    But none of that addresses the fact that consumer routers do not have good timecounters (if any at all) -- the hardware is simply missing. I've talked about this in the recent past. The fact that you have a total of 5 separate timekeeping devices all running ntpd as part of an NTP server pool is good (the more the better) -- but the reality is that most people would be running ntpd on their router and then making their LAN clients sync off that (see previous link for proof), when the routers themselves have crap for actual timecounters and skew time way worse than most x86 PCs.
  31. Toink

    Toink Network Guru Member

    Currently flashed my E3000 with tomato-E3000USB-NVRAM60K-1.28.0506.2MIPSR2Toastman-RT-N-VLAN-VPN-NOCAT.bin

    I've configured my IPv6 using this method

    With the current firmware, could someone please point me to where the 'Respond to ICMP Ping' option went in the Firewall section? Without it I'm getting this error:

    'IP is not ICMP pingable. Please make sure ICMP is not blocked. If you are blocking ICMP, please allow it through your firewall.'

    I appreciate any help. Thanks!
  32. Grimson

    Grimson Networkin' Nut Member

    As I posted earlier, the firewall section is currently broken in the GUI. You'll need to change the nvram setting by hand: set "block_wan" to "0".
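    On the router's shell (via telnet/SSH), that would look something like the following; this is a sketch of the usual Tomato nvram workflow, with block_wan being the variable named above:

    ```shell
    # Set the variable, persist it across reboots, then rebuild firewall rules.
    nvram set block_wan=0
    nvram commit
    service firewall restart
    ```

    (A full reboot also works in place of the service restart.)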
  33. QSxx

    QSxx Network Guru Member

    Quick question: I'm going for an upgrade, so I'm wondering whether I can use Toastman's firmware on the RT-AC66U. If yes, which one? Or do I simply go for the standard K26USB RT-N version? Does the 5 GHz band work on the AC66U?
  34. Toastman

    Toastman Super Moderator Staff Member Member

    I'm home next few days, I will fix the missing comma issue, thanks to Grimson for spotting it!

    I just lost 2 hard disks, both approx 5 years old, on two different machines, on the same day. How about that for a coincidence? Uptime about 43,000 hours, not bad, I suppose.
  35. Grimson

    Grimson Networkin' Nut Member

    If those disks are in Windows machines that BSOD while booting, look here:

    I too thought one of my disks had died, but it was just that faulty Windows update.
  36. PBandJ

    PBandJ Addicted to LI Member

    That's a bummer, man.
    If those two HDDs are from the same make/model/production series, maybe it's not that big of a coincidence.
    Check this out (careful, PDF):
    There are some other nice articles with a bit more consumer appeal that can be found here:
  37. Toastman

    Toastman Super Moderator Staff Member Member

    Grimson, it's not the BSOD problem. One is totally screwed, but the data can actually be recovered by exchanging the electronics. The other has some platter problem in the first 2MB -- it is unformattable, and low-level diags don't appear to help. It would be usable by creating a 2MB partition and then not formatting it, not assigning it a drive letter, and then not using it. I may do that.

    NB - There is no longer much of a choice in HDDs. Seagate and WD have bought up and asset-stripped most everyone else, and have now reduced their own warranty periods, probably because of the lack of competition. The best HDDs I ever used were Samsung Spinpoints: fast, reliable, and inexpensive! I still have many, and this failure is the first trouble I've had with them. I can't say the same for Seagate or WD -- always had the occasional bad drive. (Don't ever mention Maxtor... shudder...)

    The disks can be RMA'd. Nothing of value lost, except the time to restore from backups. Does anyone know if a returned Samsung disk would be replaced with a new (Seagate) drive or a reconditioned one?

    A few errors fixed in Tomato.

    1.28.7506.3 and variants will be uploading shortly. Keep an eye out for it and please be patient while it uploads!
    Monk E. Boy likes this.
  38. HunterZ

    HunterZ Network Guru Member

    Not to go too far off topic, but my experience is that WD drives at least die within the first 30 days, so I give them a good burn-in trial period. WD has a good RMA process (at least in the US), so it's not too painful to get them replaced if they die before I'm too dependent on them.

    Seagate drives, on the other hand, like to act flaky and then die sometime later. I also have some Seagate SATA drives that have SMD inductors on a circuit board on the outside of the drives, which makes them frustratingly vulnerable to damage while they (or adjacent drives) are being inserted into a drive bay.
  39. Toastman

    Toastman Super Moderator Staff Member Member

    August 19 2014 - 1.28.7506.3 and variants

    fix missing comma and some strange \r additions in several files

  40. jbarbieri

    jbarbieri Network Newbie Member

    I was successful (I think) in building my own version.

    I followed the many tutorials out there to download the source code via git, and just to make sure, I compiled without making any changes -- and it was too big for my e1200.

    I then removed a couple things (most notably, some of the webgui themes).

    Whether or not I am truly on the "newest" I don't know, but it has been working fine for me.

    It took 15 tries of changing things around to get it built small enough for the e1200.
  41. Monk E. Boy

    Monk E. Boy Network Guru Member

    In my experience they get replaced with Samsung drives that have been remanufactured and are issued under Seagate part#s. After Seagate acquired Samsung's HDD business they not-so-quietly reissued all the Samsung drives as Seagate drives, those drives bear both Seagate and Samsung part#s. The remanufactured replacement drives only bear the Seagate part#. However I've only had a couple Samsung drives die and none of them particularly recently so YMMV.

    The new ST#000DM00# drives are next-generation Samsung units that were under development at the time of the acquisition, though as usual Seagate has managed to destroy the reliability of certain models. I swear Seagate drives used to be dependable. It's like they went straight to hell once they acquired Maxtor and their extremely unreliable product lineup.
  42. RMerlin

    RMerlin Network Guru Member

    We stopped selling Seagates a few years ago at work, after getting a failure rate of over 25% within the first 2-3 years on the 7200.11 and 7200.12 models. By contrast, we get maybe one defective WD every 6 months at most.
  43. Toastman

    Toastman Super Moderator Staff Member Member

    @jbarbieri, if you would be kind enough to tell me what you changed (if you can remember!), I will try to incorporate the changes in the next build. Themes alone made little difference.
  44. BikeHelmet

    BikeHelmet Addicted to LI Member

    Experience mirrored elsewhere, and statistically significant:


    Almost exclusively WD now for me.
  45. RMerlin

    RMerlin Network Guru Member

    I remember reading that Backblaze article a few months ago. Their results were indeed very similar to ours at work.

    On average we sell maybe 50 HDDs a year (in PCs, USB enclosures, NASes, etc...).

    This morning I'm going to a customer's office to replace a failing Seagate in their QNAP NAS. The Seagate is only two or three years old, and the NAS is under light use.
  46. BikeHelmet

    BikeHelmet Addicted to LI Member

    I've had good luck with their Momentus XT drives, but that's their laptop line, and 3yr warranty is standard. Also, they don't make them anymore. (New SSHDs are different tech; not 7200RPM, so different motors, different platter density, different firmware, etc.)

    I did pick up some Barracudas to evaluate them. 100% of them are sitting around (still working) with a few CRC errors and a bad sector or two. I don't put anything mission critical on them... mostly Steam games and FRAPS footage. All the important business stuff goes on WD Blacks.

    Got a NAS filled with WD Greens and Reds at home. The Greens were originally EARS drives that died and were RMA'd, but I suspect it was the NAS at fault rather than the drives. I had a weird NAS that mounted them upside down, and when I originally put them in there, I noticed their tone changed from the tone they had had for the first 3 months of their lives in my main PC. They also made slight screeching noises the first time they were powered up in that orientation. They picked up bad sectors shortly after. Now the Green EARX RMAs and Reds are sitting in a NAS on their sides, approaching 30k power-on hours. No bad sectors, CRC errors, or other issues.

    Got some WD Blacks near 50k power-on hours -- no bad sectors, odd noises, or other issues with them. Just got some new 4TB Blacks which I've moved my data over to, but I keep the smaller, older Blacks in use as mirrors of critical data, as I trust them more than the other drives.

    At work we're using Reds in our NASes. No failures so far, but the sample size is pretty small.

  47. Beast

    Beast Network Guru Member

    -BikeHelmet (New SSHDs are different tech; not 7200RPM, so different motors, different platter density, different firmware, etc.)

    My thinking was that SSHD = Solid State Hard Drive, more like a big USB stick. ???
  48. BikeHelmet

    BikeHelmet Addicted to LI Member

    SSHD = Solid State Hybrid Drive
    HDD = Hard Disk Drive
    SSD = Solid State Drive

    SSHDs are spinners just like HDDs, but have some extra NAND cache to store boot files, frequently used programs, and tiny 4KB files (usually 8GB of cache).

  49. Marcel Tunks

    Marcel Tunks Networkin' Nut Member

    Most of my HDD failures have been with flaky input power. The external drive on my first computer (a Kaypro II) used to crap out every time someone turned on an appliance or opened the garage door. More recently, any of my failures have been with an absent/failing/cheap UPS (though anecdotally more with Maxtor and Seagate). I don't think I will ever run a desktop, workstation, or server without a commercial-grade UPS.

    Getting back to Toastman's firmware, and in keeping with the question from @QSxx: does anyone know if @Toastman is planning to add AC66U support at any point, or just refining existing builds and eventually considering ARM?
  50. BikeHelmet

    BikeHelmet Addicted to LI Member

    I found most BSODs were caused by flaky power, but I think my drive failures were caused by other issues. I run APC BACK-UPS on all systems at home and all desktops at work - servers and NAS protected by APC Smart-UPS.

    I would say "not this year" is a likely answer. Most likely when Toastman has to do another big deployment, he'll get interested. Supporting ARM is not like tacking on support for another MIPS model - it's a rather large endeavour... I suspect he'd have to spend hundreds of hours working on it to get a stable version put out.

    Right now RMerlin and Shibby are doing the heavy lifting as far as ARM ports go?

  51. Monk E. Boy

    Monk E. Boy Network Guru Member

    Indeed, Toastman's primary goal is to support hardware he uses; we just benefit from his installed hardware and other hardware that's trivial to add support for. ARM isn't trivial to add support for, but once the worst of the kinks are worked out by everyone working on that branch, it may become easier to add that support. And he may end up needing to add it if MIPS routers become harder to find.

    I think of the ARM branch as a "development" branch and Toastman's branch as a "stable" branch. Stable is always a few versions behind development. It's not a perfect analogy, because most of the other branches are what I'd call stable, but it kind of fits.
  52. Beast

    Beast Network Guru Member

    Thanks for the schooling BikeHelmet

    Looks like old Dogs can learn new tricks
  53. RMerlin

    RMerlin Network Guru Member

    Indirectly in my case, and my work is done. I fixed compatibility of ipt_account with the newer ARM kernel and added support for the new Ethernet switch to robocfg, but that was for my own firmware. The Tomato-specific work is only done by Shibby and Victek, and they are working alone on this, since both are developing behind closed doors rather than on a public Git repository.

    Sent from my Nexus 4 using Tapatalk
  54. HunterZ

    HunterZ Network Guru Member

  55. Mercjoe

    Mercjoe Network Guru Member

    Bug report:

    Flashed v1.28.7506.3 MIPSR2Toastman-RT K26 USB VLAN-VPN-NOCAT to a WNR3500lv1

    CIFS will not mount. The USB drive automounts, but CIFS will sit there saying 'mounting' for as long as I let the router run.

    In the logs all I get is this:

    Dec 31 19:00:57 Tomato user.err kernel: CIFS VFS: Error connecting to IPv4 socket. Aborting operation
    Dec 31 19:00:57 Tomato user.err kernel: CIFS VFS: cifs_mount failed w/return code = -146
    Dec 31 19:01:07 Tomato user.err kernel: CIFS VFS: Error connecting to IPv4 socket. Aborting operation
    Dec 31 19:01:07 Tomato user.err kernel: CIFS VFS: cifs_mount failed w/return code = -146
    Dec 31 19:01:22 Tomato user.err kernel: CIFS VFS: Error connecting to IPv4 socket. Aborting operation
    Dec 31 19:01:22 Tomato user.err kernel: CIFS VFS: cifs_mount failed w/return code = -146

    I looked on the network and I could access the USB share without issues.

    I flashed back to 7506.1 and the CIFS mounted right up no problems.
  56. Toastman

    Toastman Super Moderator Staff Member Member

    @Mercjoe, I don't have that router to test with now, but CIFS works fine on all the routers that I have. I'm unable to reproduce the problem or think of any reason for it. I tested with W7 and a 1TB drive; I got an 850GB share OK.
  57. RMerlin

    RMerlin Network Guru Member

    No, it's actually a voluntary decision by their respective authors. One which, to be honest, I completely disagree with, for numerous reasons.
  58. Mercjoe

    Mercjoe Network Guru Member

    Bummer. I went through all the settings and tried all sorts of different ways to get it to mount. The USB drive comes right up, but I cannot for the life of me get CIFS to mount. That is where I store all the bandwidth, IP traffic data and such.

    This has happened to me before. One version has an error of some sort but the next one works fine.

    I will just have to wait a bit and see what happens.
  59. Toastman

    Toastman Super Moderator Staff Member Member

    @Mercjoe I must admit that, over the years, I've seen many strange things like this reported on the forums with that particular router.
  60. koitsu

    koitsu Network Guru Member

    Re: cifs_mount errors: clearly userland is usable at this point, even if the CIFS/SMB share won't mount. Sounds like the perfect opportunity to use tcpdump and capture packets going between the router and whatever the CIFS/SMB server is.

    The return code isn't very helpful, admittedly (almost looks like it's a signed number that overflowed -- sigh...), but the "Error connecting to IPv4 socket" seems to imply the router (CIFS/SMB client) isn't able to establish a TCP connection to the CIFS/SMB server on TCP port 445. I have a couple other theories as well (mainly on the server side, including why "rolling back" would suddenly "make things work again"), but I'd start there.
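    For what it's worth, the odd-looking -146 may not be an overflow at all: the Broadcom SoCs in these routers are MIPS, and MIPS Linux inherits SysV-style errno numbering in which ECONNREFUSED is 146 rather than the x86 value of 111. A small sketch of that decoding (the 146 value is taken from asm-mips/errno.h; treat it as an assumption to verify against your own kernel headers):

```python
# Hypothetical decoder for CIFS mount return codes seen on MIPS routers.
# Assumption: the kernel passes the raw negative errno up, and the MIPS
# kernel uses SysV-derived numbering where ECONNREFUSED is 146.
MIPS_ERRNO = {
    146: "ECONNREFUSED (Connection refused)",
}

def decode_mips_rc(rc):
    """Map a negative CIFS mount return code to a MIPS errno name."""
    return MIPS_ERRNO.get(-rc, "unknown; check asm-mips/errno.h")
```

    If that reading is right, decode_mips_rc(-146) gives ECONNREFUSED, which lines up with the "Error connecting to IPv4 socket" message: the server actively refused the TCP connection on port 445.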

    I say all this with the knowledge that the CIFS/SMB client module used within the Linux kernel on TomatoUSB is not exactly the most stable thing on the planet. But I don't want to use that as an excuse when it is possible to troubleshoot what's going on in this situation -- the troubleshooting will just take a lot of time, and it'd be something I could do in an afternoon if I was physically present. Troubleshooting over the Internet takes a crapload of time.

    TL;DR -- there is not enough information presented to debug this problem. For starters I'd love to see a screenshot of the mounting options in the GUI (all of them; you can black out the username and password fields), or if it's accomplished via a manual mount line somewhere in Scripts, I want to see the entire line (change username and password into "XXX" but do not change any other part of the syntax). I'd also like to know what the CIFS/SMB server actually is -- is it something running Samba (and if so what version and what OS + kernel version), or is it something running Windows (and if so what version + have you looked in the Event Log?)
  61. jbarbieri

    jbarbieri Network Newbie Member

    Yeah, I think next time I need to make a list of what I have changed, because I honestly cannot remember.

    I also enabled EMF as well, although, it doesn't seem to help in my situation (I need to run multicast video, but not have it flood the wireless interface until requested).
  62. jbarbieri

    jbarbieri Network Newbie Member

    Pretty sure this is the line I used to build it.

    make e1200v1f TCONFIG_EMF=y NO_JFFS=y NO_CIFS=y NO_ZEBRA=y NO_SAMBA=y NO_XXTP=y V1=0506.03 V2=jbarbieri09

    That resulted in an image of 3,818,496 bytes, which I was able to load onto my router.
  63. Mercjoe

    Mercjoe Network Guru Member

    Mounting options? As in the USB support menu, or the Admin/CIFS client page? Or both?

    What else would you like to see?

    To be honest, I have never had a reason to post screenshots so I have no idea how you do it. Bear with me while I figure it out.
  64. qingz

    qingz Network Guru Member

    How do I download the firmware from 4shared?
  65. Marcel Tunks

    Marcel Tunks Networkin' Nut Member

    You have to create an account and sign in.
  66. qingz

    qingz Network Guru Member

    Got it.
    Thank you!
  67. Dr Strangelove

    Dr Strangelove Addicted to LI Member

    Installed Toastman 1.28.0506.3 (VLAN-VPN) onto my Linksys E4200v1 and E900.

    Basic install running IPv4 DNS and OpenVPN servers, without any problems I'm aware of at this time.

    Thank you Toastman and those who contribute.
  68. Toastman

    Toastman Super Moderator Staff Member Member

    Thanks, jbarbieri and Dr Strangelove ... :D
  69. though

    though Network Guru Member

    OK, I have mentioned this in the past but have still not come across a remedy, and I am hoping some of you router/internet nerds can help me (and others) out. When I upload videos to YouTube using my phone on the wireless network, the internet becomes 100% unusable. I am talking 100% ping loss for both wireless AND wired users across the whole network until the upload completes.

    If you take the same video and upload it via a WIRED computer on the network, there is just a slight increase in ping times and the connection remains 100% usable for all wired and wireless clients.

    I haven't tried the factory Asus firmware, but I am wondering: is this problem specific to Tomato firmware? Or maybe just certain wireless routers?


    Router: Asus RT-N66
    Firmware: Toastman recent release
    30/5 Internet Speeds
  70. HunterZ

    HunterZ Network Guru Member

    Maybe try messing with the WMM settings in the advanced wireless options on the router?
  71. Grimson

    Grimson Networkin' Nut Member

    How high is the CPU load on the router while this is happening? It could be that encrypting the wireless data is hogging the router's CPU.
  72. though

    though Network Guru Member

    I tried all the different configurations with WMM on/off and the other sub-settings below it. Same results.
  73. though

    though Network Guru Member

    When I initiate a video share to YouTube on the phone, the CPU sits around 15-20%.
  74. though

    though Network Guru Member

    Another board member, @d2globalinc, uses the same router (N66) with @shibby20's firmware and has the exact same problem. He has Comcast 100/10. Just for fun he went to stock Asus firmware to test this, and he said the problem disappears on the stock firmware. So this seems like a problem specific to Tomato firmware :(
  75. gfunkdave

    gfunkdave LI Guru Member

    I have an RT-N66 with the current Toastman and several WiFi devices that upload - and no issues at all. I'd suggest resetting the advanced WiFi settings to defaults. Also try turning off QoS and see if the issue goes away.
  76. though

    though Network Guru Member

    Please read my post again. This problem is specific to Tomato firmware and, so far, only to Android devices uploading to YouTube. I have confirmed it in 2 different states, with 2 different cable providers and 2 different routers, using all of these devices:

    Nexus 5
    Nexus 4
    Nexus 10
    Nexus 7
    Nexus 7 2012
    Samsung S3
  77. d2globalinc

    d2globalinc Addicted to LI Member

    With tomato when uploading from my Nexus 4, Nexus 7 2013, Nexus 7 2012, etc. as listed above to youtube I can't even load a web page, get 100% packet loss on all pings, etc, on my main workstation. With the Asus firmware, I get no (or maybe 1 at the start) packet loss, can browse multiple websites, etc. It's a night and day difference. With both firmwares, the devices speedtest around the same speeds, and have no other issues with uploading to Google Drive, G+ Photos, etc. Just youtube uploads with tomato kill the entire network with the devices above.
  78. HunterZ

    HunterZ Network Guru Member

    It's possible that the Asus firmware (and your desktop OS when uploading from a desktop) is doing some kind of QoS behind the scenes, while Tomato is allowing your Android devices to have the full upstream WAN bandwidth by default.

    If this is a major issue for you but you want to keep using Tomato, you may want to consider setting up some QoS rules in Tomato that only allow ~80% of your upstream bandwidth to be used for Youtube uploads.
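    The ~80% figure above is simple arithmetic on the measured line speed. A sketch (function and key names are illustrative, not actual Tomato settings; the 30/5 Mbit connection is the one mentioned earlier in this thread):

```python
# Sketch of the "cap QoS limits at ~80% of the line rate" suggestion.
# Rates are in kbit/s; names here are invented for illustration.
def qos_rates(down_kbit, up_kbit, fraction=0.8):
    """Suggest QoS inbound/outbound limits as a fraction of line speed."""
    return {
        "inbound_limit_kbit": int(down_kbit * fraction),
        "outbound_limit_kbit": int(up_kbit * fraction),
    }

# For a 30/5 Mbit connection: qos_rates(30000, 5000)
# caps outbound at 4000 kbit/s, leaving headroom for pings and ACKs.
```

    Holding a little bandwidth back like this is what keeps latency sane: the queue then builds in the router's shaper, where it can be managed, rather than in the modem's buffer.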
  79. koitsu

    koitsu Network Guru Member

    For what it's worth, I use an RT-N66U running tomato-K26USB-NVRAM64K-1.28.0506.3MIPSR2Toastman-RT-N-Ext.trx and do have a mobile phone (Motorola Moto G + KitKat 4.4.3) that uses WiFi and have not witnessed anything like this. However, the cell is mainly downloading; the only uploads might be a few pictures to Dropbox on occasion.

    That said: I'm happy to try some tests out if folks can give a clean/concise walkthrough of how to reproduce the issue they're having. Explain very clearly what apps to run, what to touch/do, etc. and I can give it a shot.
  80. though

    though Network Guru Member

    What is your internet connection speed?
  81. QSxx

    QSxx Network Guru Member

    Can I use this version on the RT-AC66U? Or rather, will both bands work, or just the 2.4 GHz one?
  82. koitsu

    koitsu Network Guru Member

    ISP is Comcast, "Blast" tier (105mbit down, 10mbit up).
  83. koitsu

    koitsu Network Guru Member

    No idea. See other threads on this forum about RT-AC66U and what firmwares work/don't work on it. I would not risk it until someone gives you the go-ahead.

    I can at least confirm that on the RT-N66U, both radios and bands (2.4GHz and 5GHz) show up and are independently configurable. I disable the 5GHz radio (I have no equipment which can use it, and enabling the radio increases the temperature of the router by about 4-5C).
    QSxx likes this.
  84. HunterZ

    HunterZ Network Guru Member

    They seem to be specifically talking about uploading a video to Youtube.
  85. though

    though Network Guru Member

    Yes, from the YouTube app on your Android phone.
  86. d2globalinc

    d2globalinc Addicted to LI Member

    Reproducing the problem is simple. On a notebook or PC with a wired connection plugged directly into the N66U, go to the command prompt and run "ping -t" - this will start pinging google non-stop.

    Next, on your Android mobile device connected via WiFi through the same N66U (2.4GHz or 5GHz, it doesn't matter), use the YouTube app and upload a video about a minute long or more. Now watch that ping time out. The PC/notebook will have no access to anything on the internet until the YouTube upload is finished. YouTube uploading seems to be the only way we can get this to happen; with Drive, Dropbox, Gmail, etc., the connection doesn't drop out. Also, uploading to YouTube from any wired or wireless notebook/PC does not cause this issue.
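    As a stand-in for "ping -t" that behaves the same on any OS (and doesn't need ICMP privileges), the stall can also be quantified by timing TCP handshakes to some reachable host while the upload runs. A sketch, with host and port as placeholders:

```python
import socket
import time

def tcp_rtt_ms(host, port, timeout=2.0):
    """Time one TCP handshake to (host, port); None on timeout/refusal."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return round((time.monotonic() - start) * 1000.0, 1)
    except OSError:
        return None

def probe(host, port=80, count=10, interval=1.0):
    """Print one handshake-RTT sample per interval, like 'ping -t' over TCP."""
    for _ in range(count):
        rtt = tcp_rtt_ms(host, port)
        print("timeout" if rtt is None else f"{rtt} ms")
        time.sleep(interval)
```

    Running probe("example.com") during the YouTube upload should show the same pattern as the ping test: sub-20ms samples suddenly giving way to multi-second times or timeouts.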
  87. koitsu

    koitsu Network Guru Member

    Thanks -- that's what I needed to know specifically, re: uploading from a mobile phone using the Youtube app.

    Below is a ping from my desktop, where at the same time I uploaded a 9.2MByte video to Youtube via my mobile phone:

    Reply from bytes=32 time=17ms TTL=55
    Reply from bytes=32 time=20ms TTL=55
    Reply from bytes=32 time=13ms TTL=55
    Reply from bytes=32 time=14ms TTL=55
    Request timed out.
    Reply from bytes=32 time=1107ms TTL=55
    Reply from bytes=32 time=14ms TTL=55
    Reply from bytes=32 time=14ms TTL=55
    Reply from bytes=32 time=13ms TTL=55
    Reply from bytes=32 time=13ms TTL=55
    The problem only happens with WAN-facing traffic; meaning, turn the ping into ping -t (local router) and there's no impact.

    I then decided to try the same test, but instead of using Youtube, uploading a file to Dropbox:

    Reply from bytes=32 time=13ms TTL=55
    Reply from bytes=32 time=14ms TTL=55
    Reply from bytes=32 time=14ms TTL=55
    Reply from bytes=32 time=14ms TTL=55
    Reply from bytes=32 time=52ms TTL=55
    Reply from bytes=32 time=1609ms TTL=55
    Reply from bytes=32 time=9ms TTL=55
    Reply from bytes=32 time=14ms TTL=55
    Reply from bytes=32 time=2360ms TTL=55
    Reply from bytes=32 time=13ms TTL=55
    Oh look, same problem. It doesn't matter that during the Youtube upload there was a dropped packet -- this could happen with the Dropbox upload too, it's just chance.

    And now finally the same test, but instead of uploading a file to Dropbox from my mobile phone, I uploaded a 12MByte file using the same PC as what's doing the ping. Please note that the Dropbox PC client has a feature (which I have TURNED OFF) that limits the upload rate in the Dropbox client, so by turning it off I should be able to induce the problem as well:

    Reply from bytes=32 time=13ms TTL=55
    Reply from bytes=32 time=13ms TTL=55
    Reply from bytes=32 time=13ms TTL=55
    Reply from bytes=32 time=503ms TTL=55
    Reply from bytes=32 time=16ms TTL=55
    Reply from bytes=32 time=15ms TTL=55
    Reply from bytes=32 time=17ms TTL=55
    Reply from bytes=32 time=16ms TTL=55
    Reply from bytes=32 time=16ms TTL=55
    Reply from bytes=32 time=14ms TTL=55
    Reply from bytes=32 time=24ms TTL=55
    Reply from bytes=32 time=4420ms TTL=55
    Reply from bytes=32 time=22ms TTL=55
    Reply from bytes=32 time=13ms TTL=55
    Reply from bytes=32 time=15ms TTL=55
    Same thing.

    And here's an mtr from a box on my LAN to my VPS in southern California, running at the same moment those uploads happened. Note the Wrst column (time is in ms), and how the latency starts at hop #2 (Comcast router on the other side of my connection, i.e. WAN link):

    === Thu Aug 28 17:39:00 PDT 2014  (1409272740)
    Start: Thu Aug 28 17:39:00 2014
    HOST: icarus.home.lan                                                Loss%   Snt   Rcv  Last   Avg  Best  Wrst
      1.|-- gw.home.lan (                                       0.0%    30    30   0.2   0.2   0.1   0.4
      2.|--                                                    0.0%    30    30   7.8 193.1   7.8 2035.6
      3.|-- (  0.0%    30    30  17.1 187.0   8.0 1979.6
      4.|-- (   0.0%    30    30  11.6 181.7   9.7 1924.6
      5.|-- (    0.0%    30    30  11.9 176.6  11.3 1870.9
      6.|--                                                   0.0%    30    30  11.2 165.6  10.5 1810.6
      7.|-- (                   0.0%    30    30  53.6 206.7  52.1 1797.0
      8.|-- ???                                                            100.0    30     0   0.0   0.0   0.0   0.0
      9.|-- (                  0.0%    30    30  52.6 200.2  52.0 1680.8
    10.|-- ???                                                            100.0    30     0   0.0   0.0   0.0   0.0
    11.|-- ???                                                            100.0    30     0   0.0   0.0   0.0   0.0
    12.|-- (                    0.0%    30    30  52.9 260.8  51.8 2534.8
    13.|-- ???                                                            100.0    30     0   0.0   0.0   0.0   0.0
    14.|-- (                66.7%    30    10 754.0 1124.  52.3 3290.7
    15.|-- (              0.0%    30    30  53.9 288.3  52.3 2365.9
    16.|--                                                   0.0%    30    30  53.0 276.8  52.5 2308.8
    17.|--                                                   0.0%    30    30  52.6 265.3  52.3 2251.8
    18.|-- (                               0.0%    30    30  54.0 256.1  52.5 2194.9
    === END
    So, the root cause is simple: you have a device on your network (your mobile phone) which is saturating your Internet upstream. This is what bandwidth limiters and/or QoS can be used for. Ten bucks the Asus firmware has this kind of stuff enabled by default, or has "something" going on under the hood to try and even out the traffic flow.

    CPU utilisation on the RT-N66U remains low, with softirq reaching up to 30% at times. I'm not surprised in the least there; there's nothing we can do about that (old Linux kernel, wireless drivers which are binary blobs and don't work with newer versions, etc.).

    Thus, I do not classify this as a "bug" at all. If I was to do the same thing on a local Ethernet network it would behave the same way. This is basic networking 101 stuff, really nothing to see here.
    Last edited: Aug 29, 2014
  88. d2globalinc

    d2globalinc Addicted to LI Member

    I understand network saturation, and I would not have raised this issue if it were not for the fact that PCs, Chromebooks, notebooks, etc. do not cause it, on WiFi or wired connections. I never get complete packet loss during an upload to YouTube from any of those. We get complete packet loss - not high pings, but complete packet loss - to the internet on all workstations on the network, the entire time, when a single Android phone/tablet uploads to YouTube. And only YouTube: not FTP from the same device, not Dropbox, not Google Drive, etc. Our test files are also around 50+ MB, which is about a 1 minute video.
  89. though

    though Network Guru Member

    Same here: complete packet loss when uploading via the YouTube app on an Android phone; the entire internet is wiped out until the upload is finished. Our test files are 100-200MB. I can now confirm this with 4 different Tomato routers in 3 different states. One of the users is on Cox Ultimate - his upload speed is 20+ Mbit - and he has the exact same issue :(
  90. koitsu

    koitsu Network Guru Member

    @d2globalinc I'm sorry to say but my experience (as documented in my post) shows that I can reproduce the problem using a desktop PC as much as I can a mobile phone. I'll make it crystal clear and simple (mainly for @though):

    My own experience confirms and shows 1) it isn't specific to a particular service (i.e. Youtube uploads and Dropbox uploads behave the same way (remember, Dropbox PC client has upload rate-limiting applied unless you disable it)), 2) it isn't specific to mobile phones (i.e. wireless), and 3) it isn't even specific to a particular traffic direction (i.e. I can reproduce the same behaviour saturating downstream, because most protocols are bidirectional (TCP, ICMP, etc.)).

    I do not use rate-limiting anywhere, and I do not use QoS anywhere. Thus, when I saturate the hell out of my Internet connection (doing anything from uploading a file at the full speed possible, to downloading a file at the full speed possible, regardless of where to/from), I expect some latency (and in extreme situations packet loss). Like I said, even on a classic pure wired Ethernet network, this is what would happen.

    If your experience shows otherwise, then please be my guest and present the evidence, do the troubleshooting/profiling, do what you can. I know you're asking for help while also complaining about a problem, but what I'm trying to tell you is that this problem is the universal norm; it is why things like bandwidth/rate limiters exist and why QoS can sometimes help.

    The only thing that's "unique" to wireless traffic compared to wired on these routers is with regards to the use of a software bridge (br0 interface) and use of VLANs (there is some hardware offloading for VLAN IDs on certain Ethernet frames though).

    As far as the high softirq usage during wireless transfers (down or up) -- yup, we've known about that for quite some time, and there isn't anything we can do about it at this time. I explained why at the bottom of my previous post.

    I'll end this post with a common mantra that I both follow and advocate, especially on this forum: if another firmware works for you / doesn't have issues (for whatever the reason), then by all means use that firmware! It's completely okay. I believe in people having choice, e.g. run Linux, run Windows, run OS X, run MS-DOS -- I don't care as long as whatever you're running meets your needs. I can't tell you how many times in my life I've stopped using an OS, stopped using an application, or stopped using a piece of hardware due to incompatibilities, bugs, or anomalies (the count is probably in the thousands by now). It's up to you to prioritise what matters more to you.
    Last edited: Aug 29, 2014
  91. HunterZ

    HunterZ Network Guru Member

    The fact that it doesn't happen on a desktop likely means that Windows is doing its own QoS (it does this by default unless you disable it) or is otherwise limited on its ability to utilize the upstream WAN connection. It's kind of a credit to Android and Tomato that they're actually letting you squeeze every drop out of your Youtube transfer by default, and to Youtube that they're taking upload data as quickly as your phone and WAN connection are able to pipe it through.
  92. Toastman

    Toastman Super Moderator Staff Member Member

    I just read this thread and don't know at this moment what to make of it. However, the building I am in has 30+ access points and up to 200 clients registered, many of those are android mobile phones and many people do upload videos using them. All clients in rooms use wifi, I have both wifi and LAN access. QOS is always in use. In my apartment I have S3, Note 1, and a Samsung tablet. All run Android Jellybean. A quick test uploading videos to my youtube or facebook accounts from any of those did not make any discernible difference to the operation of the network. My PC is still able to ping, surf the web and view youtube, either via LAN or Wifi.

    I also tried the experiment using wifi on the main router facing the internet (which I don't normally use) with the same result.

    I did turn off QOS briefly, and while the whole throughput for the apartment block was shot to hell, we didn't lose access or loss of ping replies, just everything became slow and uncontrolled, as expected.

    So, I'm unable to reproduce the problem here.

    I wonder if you are able to upload video using a browser instead of the app?
  93. though

    though Network Guru Member

    I use a desktop (10/100/1000) wired directly to the N66. This is exactly what happens when I upload a video to YouTube from a wireless Android device. I used a 125MB video file. You can see exactly when it started and stopped. As you can see, it causes the entire network to be 100% unusable. Remember, this is from 1 device.

    Last edited: Aug 29, 2014
  94. HunterZ

    HunterZ Network Guru Member

    Did you upload to youtube from an Android device while QoS was disabled and while observing pings originating from another device on the network?
    though likes this.
  95. though

    though Network Guru Member

    I've now confirmed the problem on 6 different routers running Tomato, with every Android YouTube device I have come across. Bottom line: no single device should be able to take down an entire network. How many times has this happened where users think their ISP, cable modem, or router is experiencing an outage, when there really isn't any problem with them at all? My sister texted me last night saying she had to unplug her cable modem for 30 seconds when she noticed her internet "wasn't working". I asked if she had uploaded a YouTube video from her phone by chance, and sure enough, that is exactly what she did and what took down the network. I told her that, unfortunately, this is normal behavior for the time being: you just have to wait for the upload to finish on your phone before you can do ANYTHING online.

    If a simple, minimal QoS setup is all that is required to keep this from happening, then it really needs to be turned on by default. Users can always disable or change it if they don't want it for whatever reason. Apparently it's built into the default Asus firmware (and probably all OEM firmwares).
  96. Grimson

    Grimson Networkin' Nut Member

    Toastman's builds come with a good default QoS rule-set; all you need to do is set the inbound and outbound rates to match your connection. And no, turning stuff like this on by default is a very bad idea, as you can see from the need to first set the in/out rates correctly.

    Remember, Tomato is a custom firmware made by enthusiasts for enthusiasts (and those willing to learn and become one). It's not created by a big company for the dumb mass market.
  97. though

    though Network Guru Member

    I don't agree with this at all, and I am sure I am not the only one. The "dumb mass market" simply doesn't know about this and never would. A dynamic (adaptive) outbound QoS rule would likely cure this, in theory.
    Last edited: Sep 1, 2014
  98. HunterZ

    HunterZ Network Guru Member

    It's worth mentioning that Toastman maintains his fork of Tomato for his own uses, and graciously shares it with us and even provides some support for it as a favor to the community. As such, it's NOT meant to be a drop-in replacement for stock firmware, optimized for whatever someone decides is the average end-user's use case (something that could never be agreed on anyway, especially in an enthusiast community like this one). If you don't believe me, just look at the fact that Toastman disables DHCP by default in his builds (which is a totally fine and reasonable thing to do, even though most of us turn it right back on)!

    You're always free to fork someone else's build and maintain it with whatever differing default settings you might want as an alternative to what the primary maintainers are shipping. Or you can just change the settings to whatever you like after the install, as the whole point of Tomato is to give you the freedom to do that.

    Grimson is also correct that one-size-fits-all default QoS settings are not really possible for Tomato, because the admin has to at least specify how much upstream and downstream bandwidth there is to work with. Toastman graciously provides an extensive set of default QoS rules that are battle-tested through use in an apartment complex, and are thus a good starting point, but I believe they are disabled by default because of that need to supply accurate in/out rates, which differ for every WAN connection.
    koitsu likes this.
  99. though

    though Network Guru Member

    That is why I mentioned dynamic (adaptive, i.e. adjusts on the fly): to work with everyone's pipe size.
    Last edited: Sep 1, 2014
  100. Marcel Tunks

    Marcel Tunks Networkin' Nut Member

    There is no good universal QoS ruleset. An example that does some of what you propose can be seen in cerowrt, but even then they suggest combining it with traffic shaping customized to each network and adjusted over time.
    Toastman and koitsu like this.
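    The "dynamic (adaptive)" shaping proposed above is essentially a feedback loop: shrink the shaper's rate when measured latency climbs, creep back up when latency is healthy (roughly what later autorate scripts in the OpenWrt world do). A toy sketch of one control step; all thresholds and constants here are invented for illustration, not tuned values:

```python
def adjust_rate(rate_kbit, rtt_ms, target_ms=100.0,
                floor_kbit=1000, ceil_kbit=10000):
    """One step of a naive adaptive shaper: back off sharply when the
    measured RTT exceeds the latency target (bufferbloat), and recover
    slowly when it does not. Constants are illustrative only."""
    if rtt_ms > target_ms:
        rate_kbit = int(rate_kbit * 0.9)   # cut 10% under bloat
    else:
        rate_kbit = int(rate_kbit * 1.02)  # regain 2% when healthy
    return max(floor_kbit, min(ceil_kbit, rate_kbit))
```

    Called once per RTT sample, a loop like this converges toward the largest rate that keeps latency near the target, which is exactly why it can track "everyone's pipe size" without manual configuration; the hard part in practice is measuring RTT reliably and picking constants that don't oscillate.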