I've been running Tomato (previously Toastman, currently Shibby) on a Netgear WNR3500L v2 for several years, mainly as a home router. Behind it I also have a Synology DiskStation NAS and a server running Ubuntu. The LAN setup I've had for a while is as follows:

- br0 (main LAN): 10.1.0.1/24, DHCP enabled
- br1 (server network): 172.17.1.1/24, DHCP disabled

The server and NAS are each connected directly to their own ports on the router, and those ports are assigned to the VLAN used by br1. In the LAN access settings, the main LAN can access the server network, but not the other way around.

The problem is that I recently noticed, while transferring a large amount of data between my workstation and the NAS, that throughput is nowhere near what I would expect from a gigabit connection. During a transfer (copying to/from a CIFS share on the NAS from my workstation), the router's CPU usage skyrockets to nearly 100%, internet traffic almost grinds to a halt, and connections sometimes even time out.

I installed iperf on the NAS and the server and ran some tests: "iperf -s -i 1 -f m" on the server and NAS, and "iperf -c 172.17.1.10 -t 15 -i 1 -f m" on my workstation (the exact commands are also listed at the end of this post). TCP bandwidth measured around 100Mbit/s, which is pretty awful considering it's supposed to be a gigabit connection. UDP tests came in at ~153Mbit/s. This was on a Toastman build.

Since I had recently seen Shibby's firmware recommended, I switched over to it to see if that would improve matters. I did a full 30/30/30 reset both before and after flashing, and set everything up the same way as before. Throughput actually did improve quite a bit: TCP is now ~155Mbit/s and UDP ~250Mbit/s. That's still pretty bad for a gigabit connection, though, so I ran another experiment: I changed the server and NAS IPs to addresses in my main LAN (10.1.0.5 and 10.1.0.10 instead of their 172.17.1.x equivalents) and switched the ports they were connected to back to the main VLAN (br0). To my surprise, I then got a whopping 750Mbit/s (TCP) and ~800Mbit/s (UDP), which is much closer to the expected 1Gbit/s.

While I could see the additional routing between the two subnets adding a little overhead, this difference is just enormous. My question is basically: why does this seemingly minor change make such a huge difference in throughput, and are there any settings that can be changed to improve the throughput between different VLANs and subnets?
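In case it's useful to anyone answering, here is roughly how the two LANs are configured. I set everything up through the GUI, so the nvram variable names below are my best guess at how Shibby stores it, not a dump from the router:

    # br0 - main LAN (DHCP server enabled)
    lan_ifname=br0
    lan_ipaddr=10.1.0.1
    lan_netmask=255.255.255.0

    # br1 - server network (DHCP server disabled)
    lan1_ifname=br1
    lan1_ipaddr=172.17.1.1
    lan1_netmask=255.255.255.0

As I understand it, the one-way LAN access rule ends up as an iptables FORWARD rule along the lines of "iptables -A FORWARD -i br0 -o br1 -j ACCEPT", but I haven't verified that on my build.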
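For reference, here are the iperf commands. The TCP commands are exactly what I quoted above; for the UDP runs I'm reconstructing the flags from memory, using iperf's standard -u/-b options, so treat those as approximate:

    # On the server / NAS (listening side):
    iperf -s -i 1 -f m

    # On my workstation (TCP test against the NAS):
    iperf -c 172.17.1.10 -t 15 -i 1 -f m

    # UDP variant (from memory, may not be the exact flags I used):
    iperf -s -u -i 1 -f m
    iperf -c 172.17.1.10 -u -b 1000M -t 15 -i 1 -f m

(-t 15 runs each test for 15 seconds, -i 1 prints a report every second, and -f m formats the results in Mbits/s.)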