
bandwidth usage per client

Discussion in 'Tomato Firmware' started by tonysa13, Apr 11, 2009.

  1. tonysa13

    tonysa13 Addicted to LI Member

    Hi there,

    I'm new to Tomato. Started using it a few months ago. I like the bandwidth monitoring tools, but I think it's missing something: the ability to monitor bandwidth on a per client basis.

    For example, in my case, we share an internet connection through the router, and the cable company has a data usage limit. When we hit the limit, I get charged extra on my monthly bill, but I don't know which of my roommates used too much bandwidth.

    I've searched around to try to find a solution, but I haven't been able to find one. So, I started fooling around with it myself; here's what I came up with, and I'm looking for some help to get it working. I started reading about iptables, and thought I could use it to count how much data someone was using by inserting some custom FORWARD chains/rules...

    From tomato's main GUI, under Administration -> Scripts, I entered a firewall script:
    /usr/sbin/iptables -N mymac
    /usr/sbin/iptables -I FORWARD -m mac --mac-source XX:XX:XX:XX:XX:XX -j mymac
    (using my real MAC address)

    This creates a new chain with no rules and inserts a FORWARD rule that sends matching packets through it (ie: all forwarded packets matching my MAC address; this could also be done by IP address instead of MAC address). Since the chain has no rules, the packets just continue on through however Tomato has configured the rest of the iptables chains/rules.
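
    For anyone who would rather match on an IP address, a minimal sketch of the same trick keyed on an address instead of a MAC (192.168.1.50 is just a placeholder, not my real setup):

    /usr/sbin/iptables -N myip
    # count forwarded traffic going to and coming from one LAN address
    /usr/sbin/iptables -I FORWARD -d 192.168.1.50 -j myip
    /usr/sbin/iptables -I FORWARD -s 192.168.1.50 -j myip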

    So, now, when I list the iptables via the SSH session, I see:
    # iptables -L -nv
    Chain FORWARD (policy DROP 0 packets, 0 bytes)
    pkts bytes target prot opt in out source destination
    989K 44M mymac 0 -- * * MAC XX:XX:XX:XX:XX:XX

    This gives me a packet count, but more importantly, the number of bytes of data from all packets with my MAC address in there.

    What I've noticed, though, is that the whole iptables seems to get zero'd out on occasion. I don't know when/how the tables get cleared.

    So, I thought that I would write a script to run as a cron job every couple of minutes to continuously monitor the byte count and write it to a file, then occasionally get the file from the router and store it on my local machine. At the end of each month, I could have the total bandwidth usage per client.

    Does this make sense?
    Can anyone see any problems with it?
    Is there a better way? Easier way?
    Could this somehow be done all via the tomato gui? I don't care to have fancy graphs, just the per-client count in text somewhere on the page.
    Will these kinds of rules in the FORWARD table have accurate counts of data used per client?

    Tomato keeps such accurate bandwidth data, I'm surprised that it doesn't already do this.

    I'm stuck now because I don't know how to write the script to check the iptables byte counts and store them.
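
    The best I've come up with so far is something along these lines (untested, and I'm not sure it's right; mymac_bytes.sh is just a placeholder name, and -x asks iptables for exact byte counts):

    #!/bin/sh
    # mymac_bytes.sh - append a timestamped snapshot of the byte counter on the
    # FORWARD rule that jumps to the mymac chain (column 2 of the -vnx listing)
    BYTES=`/usr/sbin/iptables -L FORWARD -vnx | grep mymac | awk '{print $2}'`
    echo "`date '+%Y%m%d%H%M%S'` $BYTES" >> /tmp/home/root/mymac_bytes.txt

    and then, in the Init script, schedule it every 5 minutes with cru:

    /usr/sbin/cru a MymacCount "0,5,10,15,20,25,30,35,40,45,50,55 * * * * /tmp/home/root/mymac_bytes.sh"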

    If I can get this to work, whenever we go over our bandwidth limit, I can tell exactly who used what, and so who should foot the bill for the extra costs associated with going over the limit.

    Thanks in advance for any help or suggestions!
  2. tonysa13

    tonysa13 Addicted to LI Member

    Sorry, forgot to say that I'm using Tomato Firmware v1.23.1607 on a Linksys WRT54GL router.
  3. SoftCoder

    SoftCoder Addicted to LI Member

    Hello, I'm looking at the same thing

    My family has a bandwidth cap as well, so I'm working on getting a per-IP / per-MAC bandwidth count working with Tomato. So far I have 2 scripts that produce a working audit trail of bandwidth usage: the 1st script adds the iptables chains to track the bandwidth per IP, while the 2nd is a scheduled script that copies the output of those chains to my DNS-323 NAS (via a cifs share).

    Today I downloaded the Linux source code for the WRT54GL from Linksys and can compile it fine (in Ubuntu 8.10), but I get compile errors when compiling the Tomato 1.23 code from polarcloud. I think the posted source code for 1.23 has syntax errors, but I don't know for sure. I plan on adding something that integrates with the Static DHCP part of Tomato, where you simply add a checkbox beside the IPs you want to track bandwidth for.

    If I get tomato compiling then I'll add my changes and post the code / binary links when complete.

  4. Joro711

    Joro711 Network Guru Member

    This is a very good idea.
  5. SoftCoder

    SoftCoder Addicted to LI Member

    Got Tomato compiled

    I was successful in getting Tomato compiling in Ubuntu 8.10. Over the next short while I'll add an easy-to-use interface to enable per-client bandwidth tracking, and I'll post the source / binaries on my website (and here) when done. Thanks everyone for having a great community here; it is full of useful information.
  6. Victek

    Victek Network Guru Member

    Look at this site; it's a dual-WAN Tomato with per-IP traffic accounting and many other features, and it works... the author is a bit of a "mystic" when it comes to the GPL.

  7. Joro711

    Joro711 Network Guru Member

    Is this firmware in English or in Chinese?
  8. Victek

    Victek Network Guru Member

    Obviously ... Chinese.... not difficult to run .. just pure intuition :biggrin:
  9. Toastman

    Toastman Super Moderator Staff Member Member

    If you want to look for some ideas, there is also an older English version here... no source code, just a binary. Scripts for dual-WAN load balancing etc. I tried PPPoE on 2 ADSL lines and it seemed to work OK; the scripts weren't tested. But I don't remember if the older English version had bandwidth per MAC.

  10. tonysa13

    tonysa13 Addicted to LI Member

    That's exactly what I was originally looking for! Let me know if you get it working, I'd love to use it.
  11. Joro711

    Joro711 Network Guru Member

    This is a good idea, monitoring the consumption of each of the router's clients. It would be great if it were available in the firmware.
  12. tonysa13

    tonysa13 Addicted to LI Member

    Let me know if you have any luck with this. What is your website?

    You mentioned that you have 2 scripts working, could I use those 2 scripts even without the web interface?
  13. SoftCoder

    SoftCoder Addicted to LI Member

    Ok, I whipped together this little parser, written in pure C, to take a folder of logfiles and display bandwidth usage per client. It uses the iptables chaining method and works ok even if your router reboots and your stats get reset. The key item here is that your scheduler must save the logfiles as frequently as you want, to avoid losing usage in case of a power outage or reboot. This small utility has some intelligence to figure out whether the stats were reset and do the right calculation. I tried it on logfiles spanning over 5 days, during which I had reset the router many times, and the stats given by the tool matched the bandwidth counter of my ISP almost exactly. Here are the outputs (one is console and the other is html):

    Console mode output:

    softcoder@softhauslinux:~/Code/ipt-parse/ipt-parse/bin/Debug$ ./ipt-parse 0 20090420 20090425 /media/dlinknas/InternetConnection/usagelogs/ traffic_in_ traffic_out_
    Copyright (C) 2009 Mark Vejvoda

    Bandwidth usage from: 20090420 to 20090425
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [2776.15 MB] Outgoing [510.04 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [6363.44 MB] Outgoing [277.18 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [450.28 MB] Outgoing [20.57 MB]
    Totals for host [] Incoming [146.59 MB] Outgoing [18.58 MB]
    Totals for host [] Incoming [0.00 MB] Outgoing [0.00 MB]
    Totals for host [] Incoming [467.16 MB] Outgoing [53.51 MB]
    Totals for host [] Incoming [524.41 MB] Outgoing [271.02 MB]
    Totals for all hosts in filter Incoming [10728.04 MB] Outgoing [1150.89 MB]

    Html mode output:

    softcoder@softhauslinux:~/Code/ipt-parse/ipt-parse/bin/Debug$ ./ipt-parse 1 20090425 20090425 /media/dlinknas/InternetConnection/usagelogs/ traffic_in_ traffic_out_

    Bandwidth usage from: 20090425 to 20090425
    Host      Download      Upload      Total
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              11.00 MB      2.33 MB     13.33 MB
              0.00 MB       0.00 MB     0.00 MB
              8.00 MB       2.72 MB     10.71 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.01 MB     0.01 MB
              0.14 MB       0.15 MB     0.29 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.22 MB     0.22 MB
              57.08 MB      1.65 MB     58.73 MB
    Total     76.22 MB      7.07 MB     83.30 MB

    I plan on packaging the linux binary and source code on my website shortly (www.soft-haus.com/blog) explaining how to setup tomato via iptables and logfiles to track bandwidth usage per IP /MAC. This tool was compiled with Ubuntu 9.04 (jaunty) but I will now proceed to compiling it using the tomato runtime libs to see if I can auto-run it from the tomato scheduler.

    P.S. I set this up in my firewall script:

    iptables -N traffic_in
    iptables -N traffic_out
    iptables -I FORWARD 1 -j traffic_in
    iptables -I FORWARD 2 -j traffic_out
    iptables -A traffic_in -d
    iptables -A traffic_out -s
    iptables -A traffic_in -d
    iptables -A traffic_out -s
    .. repeat above two lines for each additional IP Address
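
    For illustration, with made-up addresses filled in (192.168.1.10 and 192.168.1.11 are placeholders, not the actual hosts from this setup), each pair of rules looks like:

    iptables -A traffic_in -d 192.168.1.10     # bytes forwarded to this client (download)
    iptables -A traffic_out -s 192.168.1.10    # bytes forwarded from this client (upload)
    iptables -A traffic_in -d 192.168.1.11
    iptables -A traffic_out -s 192.168.1.11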

    In my scheduler I run this:

    cd /cifs1
    ./backupusage.sh

    where backupusage.sh looks like:

    iptables -L traffic_in -vn > usagelogs/traffic_in_`date '+%Y%m%d%H%M%S'`
    iptables -L traffic_out -vn > usagelogs/traffic_out_`date '+%Y%m%d%H%M%S'`

  14. SoftCoder

    SoftCoder Addicted to LI Member

    Initial solution for bandwidth monitoring Per IP Address

    I have packaged up my work (with source code) available at:

    This includes a README.TXT containing all instructions required to set up
    your Tomato (or any other iptables-based device / operating system) to
    handle bandwidth monitoring.
  15. sizzaone

    sizzaone Addicted to LI Member

    Hey man, that's great, thanks! It seemed to work fine after I created the usagelogs directory in /cifs1, and I've added a few lines to the script to also create daily and monthly reports. I've got a few questions; are the following possible?

    Can you run ipt-parse to view the usage of just the last hour? Or 5 mins even?
    Can I view the usage since the 27th of the last month, without updating the date each month? This way I can view usage for the current month only.
    Is it possible to monitor by MAC address instead of IP, so that you don't need to set static addresses, and a PC is still monitored even if its IP changes?
    Is it safe to run the script every minute? It won't overload the router, even after months of logs are accumulated?

    Thanks again.
  16. SoftCoder

    SoftCoder Addicted to LI Member

    The frequency is currently assumed to be by date (not by time). The code could easily be modified to do by time, but I would prefer to perfect the date based calcs first and then tackle doing things by time later.

    Currently the parser just calculates from the logs (and has some built-in arbitrary limits, like a 31-day maximum date range and 100 IP addresses max). One thing to do would be to create a cached version of processed files so that the parser wouldn't have to re-process days that were already parsed. I'll consider adding that in the next release (I didn't know how much interest this would create, so I am currently waiting to hear feedback to see what things need to be looked at first).

    I'll check into the possibility of tracking by MAC address and see if that is something available via iptables. I'm no expert, I'm just a coder like anyone else, so I have a lot of things to learn too.

  17. TrUzApalOOza

    TrUzApalOOza Addicted to LI Member

    questions about IPT-PARSE, plus my quick mods


    Took me a few minutes but I got it working. A few questions/comments.

    1. Are the # (pound) signs needed on these lines? It seems they are not... My older WRT54G was being picky; I finally figured out that a full reboot after enabling them helped guarantee iptables was updated properly. I then telnetted into the router and issued an iptables -L command, looking for the rules. Once I saw them I knew they were working.
    iptables -N traffic_in
    iptables -N traffic_out
    2. Does ipt-parse go in /cifs1 or /cifs1/usagelogs? I put it in both places just to be covered.

    3. I hacked the chmod for ipt-parse; let me know what is correct so I can clean up.

    4. I've hacked your scheduler with changes (notes inline):
    cd /cifs1  #location for saving logs
    iptables -L traffic_in -vn >> usagelogs/traffic_in_`date '+%Y%m%d%H%M%S'`
    iptables -L traffic_out -vn >> usagelogs/traffic_out_`date '+%Y%m%d%H%M%S'`
    # not sure if needed, but your docs said make executable, yeah 777 is overkill
    chmod 777 /cifs1/usagelogs/ipt-parse
    chmod 777 /cifs1/ipt-parse
    # I appended YYYY-mm-dd to the html filename so I can have a running log
    ./ipt-parse 1 today-7 today usagelogs/ traffic_in_ traffic_out_ > weeklybandwidth_`date '+%Y-%m-%d'`.html
    # custom table defining my users (see below)
    cat /cifs1/user_listing_table.txt >> weeklybandwidth_`date '+%Y-%m-%d'`.html 
    5. The last line above cheats and appends my user table (incorrect HTML since it's after your ending BODY tag, but it works). Assuming you could embed this into your code easily? Here is my user_listing_table.txt file, which I stole from the Tomato Static DHCP screen and added a few more notes of my own.
    00:04:76:2F:xx:xx	ABBA530	BROTHER DESKTOP
    00:21:9B:06:xx:xx	KISS530	SISTER DESKTOP
    00:1F:3B:4D:xx:xx	TRUZ6860	DAD LAPTOP
    00:1C:BE:A7:xx:xx	Wii		NINTENDO WII CONSOLE
    00:24:2B:1D:xx:xx	JADE12	MOM LAPTOP
    00:13:20:5C:xx:xx	DASERVER	HOME SERVER
    6. Someone earlier in the thread asked about counting by MAC; wondering if this would work?
     # iptables -A INPUT --mac-source 00:0B:DB:45:56:42
    Thanks for the cool little addon, just what I was looking for and dirt simple for the most part.

  18. TrUzApalOOza

    TrUzApalOOza Addicted to LI Member

    I know most people wouldn't care, but could iptables be used to track all of the traffic inside the network (all 192.x.x.x to 192.x.x.x chatter)? I'm a glutton for stats, just curious.
  19. sizzaone

    sizzaone Addicted to LI Member

    The instructions seem to indicate that these lines shouldn't be commented; it's working for me without the #'s anyway.

    It works for me in /cifs1.
  20. sizzaone

    sizzaone Addicted to LI Member

    Sounds good man, I'm looking forward to seeing these changes if you decide to implement them.

    One other thing I'd like to see is more exact stats, maybe with the iptables --exact switch. I've noticed that when a host runs up a large number of gigs, iptables reports in GB instead of MB, and I start to only see updates in the logs every gig instead of every meg. For example, a host has done 46000 MB so far this month, and the log page won't update until it does 47000 MB, leaving me with no way of knowing the actual usage in between, and an inaccurate usage displayed (could be up to 999 MB above or below). I hope I'm clear enough on this, and thanks again for all your work.
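
    For reference, the --exact switch only changes how the counters are printed; a quick sketch using the traffic_in chain from earlier in the thread:

    # Without -x, iptables rounds large counters to K/M/G units in its listing,
    # which is why a big host only appears to tick over once per meg or gig.
    iptables -L traffic_in -vn
    # With -x (--exact), the bytes column is printed as the full, unrounded number.
    iptables -L traffic_in -vnx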
  21. tonysa13

    tonysa13 Addicted to LI Member

    I had to use:
    iptables -I FORWARD -m mac --mac-source 00:0B:DB:45:56:42
    But, I don't use MAC filtering anymore... it's a combination of static DHCP with ip range filtering...

    I've worked out my own solution to this problem; it looks a lot like SoftCoder's, but with no C program. I'll post all the details here as soon as I find some time, hopefully tonight...
  22. tonysa13

    tonysa13 Addicted to LI Member

    Ok, here's how I solved this problem... comments/feedback would be great. Note that I'm using Tomato Firmware v1.23.1607 on a Linksys WRT54GL router.

    The problem I'm trying to solve is to be able to know how much bandwidth each of my roommates has used every month.


    I decided to split up my network into blocks of 8 IP addresses. I "give" each roommate a block of 8 IP addresses to use, in case they have multiple internet devices (ie: laptop, Wii, etc)... When I say "give", I mean that I manage Tomato's Static DHCP to map their device MAC addresses to an IP address in their block. Using blocks of IP addresses will allow me to use subnet masks in my iptables to catch all traffic in/out for all devices from each roommate. You could choose to split up the network into different sized blocks, or do this same thing on a per IP basis or even per MAC address basis. I give one block per roommate, and I use one block for DHCP...

    For DHCP, I restrict the DHCP IP range to the addresses from the IP block I just mentioned. This is so that if a new device connects to the router before I have a chance to statically assign its DHCP address, I will have an iptables rule to catch all DHCP traffic, and hence I won't miss counting some traffic. This 'catch-all' rule should rarely see any traffic, especially if you use wireless MAC address filtering so that new wireless clients can't connect until you let them.

    Next, I add iptables rules to count traffic in/out of each block of IPs. Then, I add two cron jobs: one to count the totals every 5 minutes, and write them to a file; another to sum up the file and write a new file with the daily totals.

    One important thing is to put all the scripts and output files on a CIFS mount so that they don't get erased when the router reboots, however, I cannot get my CIFS to work for some reason... (anyone know what "cifs_mount failed w/return code = -145" means?)

    Example of IP block assignments (these are just for reference):
    IP_START       IP_END            IP / MASK              NAME
         -                                                  DO NOT GIVE OUT THIS BLOCK
         -                                                  Ali
         -                                                  Bob
         -                                                  Carl
         -                                                  Dave
        ...            ...               ...
         -                                                  DHCP
        ...            ...               ...

    Router Settings / Configuration:

    So, let's say that Ali has a laptop with MAC address 00:00:00:00:00:00, and a Wii with MAC address 11:11:11:11:11:11. Bob has a laptop with MAC address 22:22:22:22:22:22.

    I use Tomato's Static DHCP to set:
    MAC Address	IP Address	Hostname
    00:00:00:00:00:00	Ali-Laptop
    11:11:11:11:11:11	Ali-Wii
    22:22:22:22:22:22	Bob-Laptop
    This ensures that my own IP segmentation rules are being followed... Ali's IPs are within his range, Bob's in his, etc.

    Now, we need to add the iptables rules. Under Administration->Scripts->Firewall:
    /usr/sbin/iptables -N Count
    /usr/sbin/iptables -I FORWARD -j Count
    /usr/sbin/iptables -A Count -d
    /usr/sbin/iptables -A Count -s
    /usr/sbin/iptables -A Count -d
    /usr/sbin/iptables -A Count -s
    /usr/sbin/iptables -A Count -d
    /usr/sbin/iptables -A Count -s
    This creates a chain called Count and inserts a rule in the FORWARD chain causing all traffic to go through the Count chain (from what I understand, all traffic that goes *through* your router will fall into the FORWARD chain). Then, I add rules to the Count chain... for every block of IPs in use at this time (ie: Ali, Bob, DHCP), there's a rule for incoming and outgoing traffic. The "/29" masks the IP such that all IPs in the appropriate range are caught by that rule...
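
    For illustration only, with made-up blocks filled in (these example ranges are not my actual assignments), the Count rules would look something like:

    /usr/sbin/iptables -A Count -d 192.168.1.8/29     # traffic to Ali's block (download)
    /usr/sbin/iptables -A Count -s 192.168.1.8/29     # traffic from Ali's block (upload)
    /usr/sbin/iptables -A Count -d 192.168.1.16/29    # Bob's block
    /usr/sbin/iptables -A Count -s 192.168.1.16/29
    /usr/sbin/iptables -A Count -d 192.168.1.200/29   # DHCP catch-all block
    /usr/sbin/iptables -A Count -s 192.168.1.200/29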

    Without doing anything else, you can start to see how much traffic is going through these rules by SSHing into the router, and then running an iptables command:
    # iptables -L Count -nvx
    Chain Count (1 references)
        pkts      bytes target     prot opt in     out     source               destination
          10     1628            0    --  *      *  
          10     2161            0    --  *      *
           0        0            0    --  *      *  
           0        0            0    --  *      *
           0        0            0    --  *      *  
           0        0            0    --  *      *
    But, now there are a few scripts I wrote to help keep track of the stats...

    First, I've put all my stuff in /tmp/home/root, but they'd be better in a CIFS mount as mentioned earlier (ie: /cifs1)... Here are the scripts, and a brief explanation of each:

    count.awk : used to parse the iptables output and count up how many bytes each roommate has used
       for(i=1;i<=NF;i++) {
          if ($i~/192\.168\.1.*/) {
       for(name in counter)
          printf("%s %d ",name,counter[name])
    count.sh : The executable script that will sum the iptables stats every 5 minutes, write them to the file data.txt and clear the iptables Count chain
    /usr/sbin/iptables -L Count -nvx | awk -f count.awk >> data.txt
    /usr/sbin/iptables -Z Count
    sum.awk : sums up the data collected by the previous script and prints it in KiloBytes (you could keep things in bytes by removing the divide by 1024)
       for(name in counter)
          printf("%s %d\n",name,counter[name]/1024);
       printf("TOTAL = %d\n", total/1024);

    sum.sh : The executable script that will add up the data collected throughout the day and store the summary in a new file, removes the data.txt file
    DATE=`/bin/date | awk {'printf("%d_%s_%d.txt",$6, $2, $3)'}`
    /bin/cat /tmp/home/root/data.txt | awk -f /tmp/home/root/sum.awk > /tmp/home/root/$DATE
    /bin/rm -f /tmp/home/root/data.txt
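
    For anyone reconstructing the pieces elided above, here is one way the two awk programs might look in full. This is only a sketch: the 192.168.1.x/29 blocks are the made-up examples from earlier, and it assumes the byte counter is field 2 of the -nvx listing.

    count.awk (sketch):
    {
       for(i=1;i<=NF;i++) {
          if ($i~/192\.168\.1.*/) {
             name=$i
             sub(/192\.168\.1\.8\/29/,"Ali",name)       # one sub line per roommate's block
             sub(/192\.168\.1\.16\/29/,"Bob",name)
             sub(/192\.168\.1\.200\/29/,"DHCP",name)
             counter[name]+=$2                          # field 2 of -nvx output is the byte count
          }
       }
    }
    END {
       for(name in counter)
          printf("%s %d ",name,counter[name])
       printf("\n")
    }

    sum.awk (sketch):
    {
       for(i=1;i<NF;i+=2) {                             # each data.txt line is "name bytes" pairs
          counter[$i]+=$(i+1)
          total+=$(i+1)
       }
    }
    END {
       for(name in counter)
          printf("%s %d\n",name,counter[name]/1024);
       printf("TOTAL = %d\n", total/1024);
    }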

    Now, we need those scripts to run, so under Administration->Scripts->Init add:
    /usr/sbin/cru a BandwidthCounter "0,5,10,15,20,25,30,35,40,45,50,55,58 * * * * /tmp/home/root/count.sh"
    /usr/sbin/cru a BandwidthTotals "59 23 * * * /tmp/home/root/sum.sh"
    This will run the counting script every 5 minutes, and the daily sum script at 11:59pm each day.

    Finally, go into Basic->Network and under DHCP IP Address Range, change it to be - 207. In this way, if one of the roommates plugs in a new device, the DHCP-assigned address will still have its traffic tracked.

    That should be it!

    Every time I add a new roommate, I give him a new block of IPs, and then I modify:
    - Static DHCP settings for the new roommate's devices to be mapped to his IP address block
    - Administration->Scripts->Firewall to add the iptables rules for that block of IPs
    - count.awk to add a new 'sub' line for each new roommate

    Now, if you let it run for a while, you'll end up with a few data files: daily sum files (in KB) and today's data file (in Bytes).
    # pwd
    # ls -l
    -rw-r--r--    1 root     root           86 Apr 29 23:59 2009_Apr_29.txt
    -rw-r--r--    1 root     root           86 Apr 30 23:59 2009_Apr_30.txt
    -rw-r--r--    1 root     root          612 May  1 00:19 count.awk
    -rwxr-xr-x    1 root     root           92 May  1 00:10 count.sh
    -rw-r--r--    1 root     root          511 May  1 00:35 data.txt
    -rw-r--r--    1 root     root          240 May  1 00:23 sum.awk
    -rwxr-xr-x    1 root     root          184 May  1 00:29 sum.sh
    Looking into one of the daily summary files, we see:
    # cat 2009_Apr_30.txt
    DHCP 0
    Ali 13607
    Bob 4254
    TOTAL = 17861

    More can be done with this... next I'd like to get CIFS working so that the data is saved more permanently. Then, something to parse the daily totals into a monthly total. Ideally, a script to email me the monthly totals. And, even better, integration into the Tomato GUI...

    Anyways, I hope this helps other people thinking about trying this, comments/suggestions/feedback are welcome and appreciated!

  23. SoftCoder

    SoftCoder Addicted to LI Member

    Ok, I did some thinking here and recognized that we'll potentially be dealing with loads of stats as this progresses. Instead of dealing with logfiles, how about using sqlite3? So I snagged the code from their website, cross-compiled it for Tomato, and voila, sqlite3 runs from the shell.

    Now I'll see about importing stats every minute or every 5 minutes into a sqlite database, which can then be queried every which way we want to produce a zillion kinds of stats. This solution will work with any iptables-based distro, which makes it VERY useful, even beyond this little router platform.
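
    Roughly, from the shell the idea looks like this (just a sketch; the bytes_in / bytes_out column names here are illustrative, not necessarily what ipt-parse will end up using):

    # create a table for the 5-minute samples
    sqlite3 /cifs1/BANDWIDTH "CREATE TABLE IF NOT EXISTS bandwidthusage (log_datetime TEXT, host TEXT, bytes_in INTEGER, bytes_out INTEGER);"
    # insert one sample for one host
    sqlite3 /cifs1/BANDWIDTH "INSERT INTO bandwidthusage VALUES ('20090511 22:15', '192.168.1.10', 123456, 7890);"
    # totals per host then become a simple query
    sqlite3 /cifs1/BANDWIDTH "SELECT host, SUM(bytes_in), SUM(bytes_out) FROM bandwidthusage GROUP BY host;"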

    I'll post more info once I make some progress. By the way, keep posting your ideas and experiences here, it helps.
  24. TrUzApalOOza

    TrUzApalOOza Addicted to LI Member

    I plan on adding a second router to my house, and will set it up the same way. I plan on pointing both Tomato (iptables) logs to the same spot, just with the two routers on different schedules. This should give me combined traffic for my entire home network nicely.

    I'm logging hourly, so that will be 48 files a day. I probably should set up a routine to gzip them every month so I don't take a hit on the file allocation tables?
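
    Something like this would probably do it (a sketch; it assumes the router's busybox build of find supports -mtime and -exec):

    #!/bin/sh
    # gziplogs.sh (placeholder name) - compress usage log files older than ~31 days
    find /cifs1/usagelogs -name 'traffic_*' ! -name '*.gz' -mtime +31 -exec gzip {} \;

    scheduled once a day from the Init script with something like:

    /usr/sbin/cru a GzipOldLogs "30 0 * * * /cifs1/gziplogs.sh"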
  25. SoftCoder

    SoftCoder Addicted to LI Member

    Oh boy! I've gotten into a HUGE mess! All I want to do is save this bandwidth info in an organized way, so I thought of using sqlite3 (from sqlite.org). WHAT A CAN OF WORMS! After spending days of investigation, I am able to run sqlite3 under Tomato, but unfortunately it looks like busybox has a defect related to cifs mounts. Because there appears to be NO WAY to turn off the default locking that is hard-coded in mount.c for cifs mounts, I cannot use the suggested work-around, which is to use the nobrl (or nomand) option when mounting my cifs share (those options don't seem to exist). Here is what I mean (in busybox's mount.c):

    // lock is required
    vfsflags |= MS_MANDLOCK;

    In the mount command everywhere else (full-blown versions of Linux) there is the nobrl option, which fixes the cifs locking issue that renders sqlite3 useless when writing to a cifs share. I AM able to write to the router's local file system... but that is equally useless since the bandwidth log will grow. Can anyone tell me how I can compile JUST mount.c, with my modification, as a standalone binary for Tomato? If not I'll have to stick to log files until I can find a solution and get sqlite3 working with cifs shares.

    To validate, I compiled the sample program listed in this link:


    Once I compiled it and ran it on my local Tomato file system, it ran fine, but running it against a file on my cifs share I get: write lock failed. (same error mentioned in the link)

  26. xorglub

    xorglub Addicted to LI Member

    I worked on a similar thing some time ago. I have posted my work there:

    I am also using the iptables trick described in this thread. The integration with tomato still needs to be polished.

    For some reason unknown to me, the shell scripts are slow on Tomato (20-30 seconds to generate the html report with 15 users, versus less than 5s on OpenWRT and DD-WRT on the exact same router).
  27. sizzaone

    sizzaone Addicted to LI Member

    Sounds great.

    Looks interesting, can you post an example of the report created by your script? Thanks.
  28. xorglub

    xorglub Addicted to LI Member

  29. SoftCoder

    SoftCoder Addicted to LI Member

    The madness has only just begun. I spent the last 4 days agonizing over locking bugs related to cifs / samba and the Linux 2.4 kernel used by Tomato. A complete and total waste of time as far as this bandwidth work is concerned, but hey, I now know a lot more about low-level file system locking. With a few special compiler flags I FINALLY got sqlite3 operational!

    Now I'll start the real work of adding code to save bandwidth logs into the sqlite database. I'll support both the existing multi-logfile format and the new sqlite setting. I'll also add support to pre-populate the sqlite DB using existing file logs if they already exist (for those of us on v1 of this solution).

  30. SoftCoder

    SoftCoder Addicted to LI Member

    Ok the new ipt-parse (supporting both sqlite and old logfile parsing) can be read about and downloaded from:


    I have updated the attached README.TXT detailing how to set things up using either method, and I am using the sqlite method to save stats every 5 minutes to my cifs1 share.

  31. sizzaone

    sizzaone Addicted to LI Member

    Sounds awesome man, I can't wait to test this tonight, but I can't see a download link for the new version on your blog's download page? Edit: whoops, found it.

    Btw, did you start to use --exact / -x now to ensure stats displayed are correct? Cheers.

    Edit: Just set it up and removed the old version; it seems to be working fine with the sqlite config. Seems faster already; I'll report back again after some more testing. Btw, is viewing the last 5 mins or hour implemented yet? Thanks.
  32. SoftCoder

    SoftCoder Addicted to LI Member

    The exact command is posted in the README.txt (I only use 1 combined iptables chain for in and out data now, and YES, I use -vnx for exact stats).

    I just updated the archive on my site (and the README.TXT inside) indicating how to run up-to-the-minute queries, and also added a few indexes to the pre-packaged BANDWIDTH database in the archive. For people using yesterday's build, you'll need to connect to the BANDWIDTH database and add these indexes manually inside of sqlite3 using these SQL commands:

    CREATE INDEX idx_host ON bandwidthusage (host);
    CREATE INDEX idx_log_datetime ON bandwidthusage (log_datetime);
    CREATE INDEX idx_host_logdatetime ON bandwidthusage (log_datetime, host);

    Here is the section pasted out of the new readme.txt related to querying by times:

    # General Notes:
    # To query using up to the minute intervals (if you are recording stats that frequently)
    # you may pass dates with times to produce a more finite data report like this:
    # ./ipt-parse 3 "20090511 22:15" "20090512 01:45" BANDWIDTH
    # ***NOTICE: the format MUST be exactly as above (with a space between the date and time portions)

  33. i1135t

    i1135t Network Guru Member

    Cool, it seems to be working as it should. Do you have any plan to also log what websites are being accessed by whom as well? I think that would be a great addition to this, if it's even possible. Thanks!
  34. TrUzApalOOza

    TrUzApalOOza Addicted to LI Member


    Any simple way to suck in my last 10 days' worth of old IPT-PARSE log files and get them into SQLite? I can live without them, but if you have a simple way to get them injected....
  35. SoftCoder

    SoftCoder Addicted to LI Member

    Updated the code/binary on my site. You can import existing logs (assuming they were captured with -vn as opposed to -vnx, since I originally told people to use -vn, so I assume the stats are in that format). The commandline is:

    ./ipt-parse 0 20090401 20090501 usagelogs/ traffic_in_ traffic_out_ flags=import=BANDWIDTH

    Here is the info pasted out of the README.TXT:

    # If you have existing iptables logfiles (and it is assumed their values are stored
    # in the format:
    # iptables -L traffic_out -vn
    # then you may import these logfiles into a sqlite database using this commandline:
    # ./ipt-parse 0 20090401 20090501 usagelogs/ traffic_in_ traffic_out_ flags=import=BANDWIDTH
    # where all parameters are the same as if you were generating a statistical
    # output for a given date range with the addition of the following parameter:
    # flags=import=BANDWIDTH
    # where BANDWIDTH is the path and location of the sqlite database to import into.
    # It should be safe to run multiple times (Even for the same date range) because
    # there are validations to ensure that the same data does not get imported more than once.
  36. TrUzApalOOza

    TrUzApalOOza Addicted to LI Member

    import runs, outputs, but not tallied...

    Hmmm. I grabbed your new code, and then ran the above from the shell; it chewed on all the data and spit output to the screen (20 GB worth of detailed breakdowns), but it doesn't appear to be in my totals from the scheduler .html files.

    Right, I only ask since the parameter has 2 equal signs.

  37. SoftCoder

    SoftCoder Addicted to LI Member

    Yes the:


    is correct. A few questions:
    - If you used these parameters exactly as shown, did you run from the SAME FOLDER as the BANDWIDTH sqlite database file?
    - Did you see output relating to SQL INSERT statements? If so can you show me any?
    - Did your BANDWIDTH database file grow in size?
    - What date range did you use for the commandline?
    - What commandline did you use to test whether it imported or not?

  38. sizzaone

    sizzaone Addicted to LI Member

    I think I'm still seeing some incorrect stats, starting about when the upload or download hits 2 GB. For example, the upload for one host has shown 2147.48 MB for the past few hours when I'm sure it should be higher; I have a feeling the report won't update until it hits 3147.48 MB.

    As a result the daily and weekly logs both show 2147.48 MB uploaded for this host, while every other stat (less than 2 GB) is shown correctly (I believe).
  39. mactogo

    mactogo Addicted to LI Member

    Reporting a similar problem as well. I'm getting 2147.48 upload and download for all my IPs in the weekly logs.
  40. SoftCoder

    SoftCoder Addicted to LI Member

    Ok the problem must be related to:


    Shows that an int has the maximum value range of:
    -2,147,483,648 to 2,147,483,647

    I thought I was using type double everywhere, but I'll need to (double check) :)

    I'll post again once I find something
  41. SoftCoder

    SoftCoder Addicted to LI Member

    Ok, a new binary is up on my site that fixes the large-number bug. I was using double everywhere EXCEPT for one line of code where I convert from char * to double; I was using atoi(). I switched to strtod() and all is good.

    I also added a new commandline option, 5, for shrinking the database over a date range where you no longer care about storing intra-day stats and can live with one daily summary stat per client. This lets you keep detailed data for recent stats and then sort-of archive older data by dropping the old intra-day time-based stats and replacing them with one daily summary per client per day in the date range. Here is the info from the README.txt:

    # It is possible to shrink the size of the database IF, and ONLY IF you are willing to
    # remove granularity by time for a specific date range (for example for an old
    # date range where you really don't care what happened throughout the day)
    # To shrink the database for a range of dates (compress intra-day values into
    # daily values) use this syntax:
    # ./ipt-parse 5 20090301 20090510 ~/Code/sqlite-3.6.13/BANDWIDTH
    # The commandline above will scan all data between March 1st, 2009 - May 10, 2009
    # and summarize stats daily, then delete old records for that date range and insert
    # the new daily summarized values, and resize the database.
  42. sizzaone

    sizzaone Addicted to LI Member

    Go SoftCoder! Very impressed with the speed of updates and bugfixes to your script; I'm testing the new version now and will report back...

    Edit: Ok I think there still might be a bug, either that or I didn't upgrade it properly.

    I overwrote the old BANDWIDTH and ipt-parse files to upgrade the script and wipe my db, then rebooted the router. 5 mins after it came back up it created new reports, showing 0.11 MB downloaded for one host, presumably the correct amount. I then downloaded about 700 MB to that host, but it shows as 3201.9 MB downloaded. Any idea what's going on?

    Edit: Another test. I wiped the db again and rebooted once more, waited 5 mins, and the report was created showing 0.18 MB on one host. I waited another 5 mins without downloading anything and the next report showed the host as having used 0.9 MB. I'm not really sure exactly how much I did use in those 10 mins (less than 1 MB), but I then downloaded a 270 MB file, and the next report at 15 mins showed 2747 MB used, much more than I had really downloaded (~271 MB).
  43. SoftCoder

    SoftCoder Addicted to LI Member

    I'm definitely not seeing this behaviour on my end. What is your primary operating system, Linux or Windows?

    To test the output of the parse you can always run:

    ipt-parse 3 today today /media/dlinknas/InternetConnection/BANDWIDTH

    from a command prompt (substitute the path to the BANDWIDTH database file) and it will output to the console the stats the same way the script does to html.

    Are you still seeing incorrect #'s? I have a page from my ISP which displays how much bandwidth I used, and ipt-parse is showing nearly exactly the same #.
  44. sizzaone

    sizzaone Addicted to LI Member

    Primary OS is Windows.

    I tried that command and I'm still seeing heavily inflated stats in both the console and the html reports. I downloaded maybe 2 gigs to one host and it shows as 20621.37 MB.
  45. SoftCoder

    SoftCoder Addicted to LI Member

    Does the problem ONLY show on hosts that have downloaded > 1 GB in the reporting period (ie: between the last time you saved to the database and the current save)?

    I have not yet tested traffic in the GB range for my reporting frequency (I save every 5 mins to the database). I am somewhat limited in my total monthly bandwidth allotment, so I cannot test GB traffic. Perhaps you could manually capture the output of iptables -L traffic_all -vnx and post it here, or email me the info so I can test importing it (email mark_vejvoda @ h o t m a i l . c o m).

  46. sizzaone

    sizzaone Addicted to LI Member

    Nah, this can't be the case, as my connection only allows me to download about 330 MB in a reporting period (5 mins).

    When should I run the iptables command manually? Do I need to stop the script from zeroing the iptables counters?

    Edit: Another test, with timelines, logs and manual iptables output. I hope this helps you debug it. The host I'm testing with is

    11:10:00 replaced the BANDWIDTH file to wipe the db
    11:14:50 iptables manually run, host shows as 1.55 MB downloaded http://pastebin.com/m14265701
    11:15:02 1st report generated, host shows 1.71 MB downloaded
    now I download a 120 MB file to that host
    11:19:50 iptables manually run, host shows as 126.08 MB downloaded http://pastebin.com/m566caa6
    11:20:02 2nd report generated, host shows as 1324.35 MB downloaded http://pastebin.com/m2c2fdca9
  47. TrUzApalOOza

    TrUzApalOOza Addicted to LI Member

    I guess I'd better get the new code, as my totals are capped at 2G (running the 5/13 version).
    Bandwidth usage from: 20090507 to 20090514 - page last calculated at: 2009-05-14 20:37:16
    Host      Download      Upload      Total
              234.38 MB     15.18 MB    249.56 MB
    [B]       2147.48 MB    37.81 MB    2185.30 MB [/B]
              0.94 MB       0.12 MB     1.06 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              90.97 MB      9.65 MB     100.63 MB
              1158.08 MB    10.74 MB    1168.81 MB
    Total     3631.85 MB    73.51 MB    3705.36 MB

    hmmm... just installed today's build 5/14 -- and I'm still capping out at . I did replace the DB with the new BANDWIDTH, and just tagged 2+ GB to download to test it out.
    Bandwidth usage from: 20090514 to 20090514 - page last calculated at: 2009-05-14 21:54:05
    Host      Download      Upload      Total
              36.66 MB      1.33 MB     37.99 MB
              553.79 MB     10.86 MB    564.64 MB
              0.01 MB       0.00 MB     0.02 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
              0.19 MB       0.06 MB     0.25 MB
              0.00 MB       0.00 MB     0.00 MB
              0.00 MB       0.00 MB     0.00 MB
    [B]       2147.48 MB    47.40 MB    2194.88 MB [/B]
    Total     2738.13 MB    59.66 MB    2797.79 MB
    I am updating every 5 minutes. Looks like the upload and download columns are not converted to the new large integers?

  48. i1135t

    i1135t Network Guru Member

    Cool, I got this mod working.. looks very promising... The only downside is that this doesn't give you a daily vs weekly report like SoftCoder's setup does. There probably is a way to do it by writing some scripts to rename files, etc., but that's another story. Thanks for the contribution, guys...
  49. SoftCoder

    SoftCoder Addicted to LI Member

    Ok, thanks Sizzaone, the logfile helped me find the bug. When parsing the output from iptables I didn't store a large enough string for the bytes before converting to a number, which caused this problem. A new binary is up on my site and should fix this issue.

    P.S. I'll cross-compile ipt-parse so that Windows users can run it against the logfiles or the sqlite database and produce console or html results ad-hoc, independent of the Linux distro. That way a Windows-based CGI script could use this to dynamically produce results on a Windows-based website (which is already possible with the Linux binaries attached). I'll let everyone know when the Windows version is attached to the archive.

    Thanks again.
  50. sizzaone

    sizzaone Addicted to LI Member

    You're very welcome, once again well done with the quick fix. :thumbup: I'll test this version out and let you know how it goes.

    The Windows binary sounds nice, though it would probably be of more advantage to people who are better coders than me. :p I've got an Ubuntu machine to run the Linux one on if I have to anyway....

    Edit: The bug seems to be fixed now. I performed a similar test to before and downloaded a 180 MB file; this time the report shows 186 MB used. :thumbup:

    BTW, I didn't get a chance to test the minute queries yet; querying specific small dates and times is unfortunately not that useful to me. I'm really interested in more real-time data, such as a constantly generated report that always shows only the usage in the last 5 mins, last hour, or even last report period. It'd be great to see that feature added. (today-5m, today-1h maybe?)

    On the same topic, another thing that would be nice to see eventually is the ability to set a "first day of the month" date, similar to what's in Tomato's built-in bandwidth monitoring, and then the ability to query for the period from the start of the current month to now. Currently this is quite easy to do manually; I just added a line to the scheduler with the following command, but a way to query back to the 27th of last month would mean I don't have to change the date each month....

    ./ipt-parse 4 20090427 today BANDWIDTH > monthlybandwidthlive.html
  51. SoftCoder

    SoftCoder Addicted to LI Member

    Hey try the new build (The one that fixed sizzaone's problem) as I think it will fix this too. Either way let me know.
  52. sizzaone

    sizzaone Addicted to LI Member

    Quick update, I can confirm the 2147.48 MB bug is fixed now.
  53. mactogo

    mactogo Addicted to LI Member

    Likewise, all working so far for me on the latest script. Thanks for a speedy fix.
  54. SoftCoder

    SoftCoder Addicted to LI Member

    Ok, added a folder for linux, windows and tomato binaries in the archive.

    Also added a parameter for the bandwidth start day of month (as requested)

    # Special Note: for the two input date parameters you may use the following special
    # notations:
    # today (meaning use todays date)
    # today-4 (meaning use todays date minus 4 days as the date)
    # bw_startday=21 (meaning the 21st is the bandwidth monthly start
    # date. if today > x then use current month and change day
    # of month to x, if today < x use last month and change
    # day of month to x)

    so if you wanted to see the usage for the current period and the period starts on the 21st of each month and you are using sqlite:

    ./ipt-parse 3 bw_startday=21 today BANDWIDTH

    if you want to show BOTH the IP and hostname you can always tack this on the end:

    ./ipt-parse 3 bw_startday=21 today BANDWIDTH flags=morehostinfo

    New binaries are on my site.
  55. TrUzApalOOza

    TrUzApalOOza Addicted to LI Member


    Your 5/15 version is calculating beyond 2 GB; thanks! I'll have to try out the Windows ipt-parse this weekend.

    Many thanks for your work on this addon for Tomato.
  56. sizzaone

    sizzaone Addicted to LI Member

    Thanks for adding the start day feature, everything seems to work fine. :thumbup:
    Also the windows binary works well.
  57. SoftCoder

    SoftCoder Addicted to LI Member

    I've been looking at how to make this tool as easy to use as possible so a broader audience can use it. I'm testing (it works well so far) a new option to run the tool with a special parameter that replaces having to figure out which hosts you set up in the Static DHCP page. I poked around the Tomato code and found out how to access the static DHCP list and use it, thus creating the iptables rules automatically for you. This means the only requirements to use this tool are:

    #1: Some type of external mount (cifs) as you need the disk space
    #2: one line in the Firewall script:

    /cifs1/ipt-parse 6

    #3: two lines in a custom Scheduler Event (Set at the interval you want to
    capture data):

    /cifs1/ipt-parse 2 /cifs1/BANDWIDTH > /cifs1/ipt-parse.log
    /cifs1/ipt-parse 4 today today /cifs1/BANDWIDTH > /cifs1/dailybandwidthlive.html

    And that is it. Next I'll look at getting the tool to run as a daemon under tomato and see how to possibly provide real-time stats. This one will take a lot longer as I'll need to do more detailed investigation. I have not yet posted my new binary on my website but will likely do it this weekend sometime.
  58. TrUzApalOOza

    TrUzApalOOza Addicted to LI Member


    Can you have the stats output list the version of IPT-PARSE? That way we can quickly know which version we have compared to your latest on the web.
  59. SoftCoder

    SoftCoder Addicted to LI Member

    Ok, I found a way to 'sort of' embed custom content into Tomato WITHOUT having to recompile and distribute the binary. I started investigating how to add my bandwidth output to the Tomato UI and found that the filesystem is read-only (done on purpose for security reasons). There is, however, a way to add content under the /ext/ part of the URL when loading the Tomato UI (the same mechanism used for custom css files). Using some tricks I found a way to add my wrapper HTML parent page, host the Tomato UI inside it, and manipulate the UI to add my new menu item. The hostname of my router is wrt in the screenshot below:
    *NOTICE the new menu item called 'Per Client Usage'.
    I also added an optional parameter to show the hostname of each IP address
    when you hover over it in the generated HTML (not shown here, as I couldn't get the screenshot to capture the hint popup while I hovered over the IP address).


    You may always revert to the default Tomato UI by changing the URL to http://wrt/ as opposed to the http://wrt/ext/ shown in the image. There are a number of steps to set the UI up like this, but hey, I got the Tomato http server serving up my modified UI!

    I'll post an update this evening on my site with the version stamp and also explaining how to modify the Tomato UI.
  60. SoftCoder

    SoftCoder Addicted to LI Member

    Ok new version 1.1 is up on my website, README.txt explains it all.

  61. mactogo

    mactogo Addicted to LI Member

    Just want to say a big thank you!

    A little advice :) Considering this thread is getting a little long, I think it's best that you place all your updates on the first page along with the download links, much like what's being done for other tomato projects. I think this merits a link in the FAQ section as well.

    UPDATE: I'm getting a 500 Unknown Read Error when I try the GUI.
  62. SoftCoder

    SoftCoder Addicted to LI Member

    A few reasons for the 500 would be:

    - You didn't change the script to match your environment (see the file/path locations it looks for)
    - The Per Client Usage uses a number of iframes and ultimately looks for dailybandwidthlive_1.asp (which is the output of ipt-parse named this filename)

    My Custom Scheduler event looks like:

    cd /cifs1
    ./ipt-parse 2 BANDWIDTH > ipt.log
    ./ipt-parse 4 today today BANDWIDTH flags=morehostinfo > dailybandwidthlive.html
    cp /cifs1/dailybandwidthlive.html /var/wwwext/dailybandwidthlive_1.asp

    My Firewall Script looks like:

    cd /cifs1
    ./ipt-parse 6

    ./SetupUI.sh > SetupUI.log

    And SetupUI.sh (packaged in the download file) sits in /cifs1/ and has execute permission, via: chmod 777 /cifs1/SetupUI.sh

    The folder tomato_dynamic_files along with its contents (packaged in the download file) should be copied into /cifs1/

    Then reboot the router, then you can see the updated GUI by going to http://<your router name>/ext/
    Notice the trailing slash.

    *NOTE: You can always run SetupUI.sh manually from the Tomato shell to see if it is working (it needs to run every time Tomato boots up). Look for the folder /www/ext containing the contents of /cifs1/tomato_dynamic_files/, which should be 5 files:

    # ls -la /www/ext/
    drwxr-xr-x 1 root root 0 May 18 13:09 .
    drwxr-xr-x 1 root root 0 Dec 31 1969 ..
    -rwxr-xr-x 1 root root 2257 May 18 13:09 dailybandwidthlive.asp
    -rwxr-xr-x 1 root root 5527 May 19 05:21 dailybandwidthlive_1.asp
    -rwxr-xr-x 1 root root 3179 May 18 13:09 index.asp
    -rwxr-xr-x 1 root root 6009 May 18 13:09 tomato.css
    -rwxr-xr-x 1 root root 47318 May 18 13:09 tomato.js

    Let me know if this helps

  63. Joro711

    Joro711 Network Guru Member

    These scripts are very hard for me. Would it be possible to implement this code in the next version of Tomato?
  64. mstombs

    mstombs Network Guru Member

    Do you really want to do this every time the WAN reconnects? I would have thought you could use the Tomato "Execute When Mounted" config to run it only once, when the CIFS share is mounted.
  65. sizzaone

    sizzaone Addicted to LI Member

    Hey SoftCoder, the new integrated-to-tomato gui looks great, i'll be using it soon, thanks.
  66. SoftCoder

    SoftCoder Addicted to LI Member

    I tried that first, but the feature doesn't work for me (if it works for you, go ahead). I checked around and found others having the same problem with the "execute when mounted" event; they also moved their script content into the firewall event (which works for me).

  67. SoftCoder

    SoftCoder Addicted to LI Member

    About integrating stuff into Tomato... does ANYONE know if John (the original author) is still around?? I emailed him a while ago with no reply and have not seen much of a trace of him since the December update of v1.23. Does anyone know where or how to contact John?
  68. SoftCoder

    SoftCoder Addicted to LI Member

    Posted a bugfix for the 'compress' option. This option allows you to combine the stats by day for a date range and thus make the database file smaller. This is useful for dates where you don't care about intra-day statistics (with times), which may be the case for older data. Currently I have summarized my data since March up to today and the sqlite database is only 750K, which makes me think this tool might be usable even for people without a cifs share device, using the jffs space instead. Of course the space there is limited, but at least it opens up the opportunity for some bandwidth monitoring.

    I think I'll move this tool to an open source website and create more formal docs with forums for discussion etc... I think there might be a lot of potential for future growth. I'll post once I have set things up.

    In the meantime v1.2 is up on my website and contains the generate type #5 bugfix (data compress) as mentioned above.

  69. Joro711

    Joro711 Network Guru Member

    Send an e-mail to Victek. He might know how it works with the implementation of the code.
  70. SoftCoder

    SoftCoder Addicted to LI Member

    Ok this project is now setup at: http://www.launchpad.net/ipt-parse and will be maintained there from now on. I'll be adding FAQ's, Reviewing bug reports and answering questions from there from now on.

  71. sizzaone

    sizzaone Addicted to LI Member

    Hey SoftCoder, I really liked the ability in the old version to show both the IP and hostname in the reports table. The new way of showing it only on mouse-over can be annoying, as it's quite hard to see at a glance which IP is which host.

    Do you think you could add a switch to show both ip and host like before, instead of the mouse-over behaviour? Thanks
  72. SoftCoder

    SoftCoder Addicted to LI Member

  73. SoftCoder

    SoftCoder Addicted to LI Member

    New update v1.3 is available with new command-line options. Check out the launchpad.net website for the announcement and download.
  74. TrUzApalOOza

    TrUzApalOOza Addicted to LI Member

    BANDWIDTH database too big

    I noticed my BANDWIDTH db hit 8 MB, and ipt-parse was no longer working. I grabbed v1.31 and it still wouldn't work. I tried option 7 to shrink it, which started working from the Tomato side, but also ended with a [terminated]; it only partially completed, based on the existence of a .journal file.

    I finally just ran the Windows ipt-parse and it shrank it without issues; I've set up a scheduled job on my home server to keep it small.

    Just reporting this to you in case you have any better idea, or want me to try something. I kept a backup copy of the 8+ MB DB that wouldn't grow anymore, or shrink via the Tomato ipt-parse.

  75. celtoid

    celtoid Guest

    Can someone help me?

    I've tried using both versions of the ipt-parse setup (logfile and sqlite) and all I get in the html file is
    I'm using Tomato v1.27 on a WRT54GL.
    ipt-parse is set to 777 permissions.
    I can't figure out what is going on, unless it needs an entire week of data before it starts displaying, but I doubt that is the issue, because the live bandwidth report should just pick up from when it started.
