
OFFTOPIC: Accelerated file transfer software over poor WANs, e.g. via UDP

Discussion in 'Tomato Firmware' started by occamsrazor, Apr 20, 2013.

  1. occamsrazor

    occamsrazor Network Guru Member

    Offtopic I know, but as we have some very experienced networking guys on this here friendly forum, I thought maybe someone might have some experience with this, as I am in no way an expert on such matters.

    I regularly have to send image and video files, ranging from 1MB to 200MB, halfway around the world, and currently we use FTP. Files are sent over a variety of internet connections including internet cafes, 3G and satellite. Sometimes the connections are congested (e.g. overloaded cell towers) and sometimes the latency is very high indeed (e.g. 1000ms+ over satellite). Sometimes I also suspect weird traffic-shaping is going on. Other times the ISP network infrastructure is just badly planned or implemented. The key aspect is that I DO NOT have control over any of these networks.

    So I have been looking for a better system of file transfer, and in doing so I've come across various options for transferring files over UDP instead of TCP. That may or may not be the answer - I don't know - but I suspect FTP over TCP does not obtain optimal speed in such conditions. I need something that will traverse firewalls, achieve optimal speeds for any given connection (and work over a variety of such connections), and require no router/network setup at each end - i.e. work from anywhere to anywhere, ideally. It can be server-client based with different software for server and client, or it can be peer-to-peer. It does not need to be "friendly" to other users on a shared connection - in fact I want it to suck up all the available bandwidth if possible. Platform-wise it needs to work on Mac OSX at the client end, and on either Mac OSX or Windows at the server end. Anything else is a bonus.

    I'm aware of commercial options that claim to achieve improved file transfer speed (DataExpedition's Expedat, for example), but I am more interested in open-source/free offerings. A few protocols I have been reading up on include:

    Tsunami UDP
    UFTP

    Does anyone have experience in such matters or have knowledge of improving file transfer speed over networks with high-latency and other problems?

    My other thought was to somehow leverage the BitTorrent protocol, which seems extremely robust and is also about the only thing capable of maxing out my network connection. But I can't see a user-friendly way to automate the transfer process when sending many small files sequentially (e.g. the image files), and the whole "swarm" aspect of it is unneeded in my case as the transfer just needs to go point-to-point.

    Any thoughts?
  2. koitsu

    koitsu Network Guru Member

    1) In general, do not use UDP to transfer files.

    UDP is stateless, lacks packet acknowledgement (TCP works off an "I sent the packet" / "I got the packet and its checksum is okay" response model to guarantee integrity in 99% of cases out there), lacks RFC 1323 window scaling, lacks automatic retransmission, and lacks the algorithmic designs that combat certain scenarios (such as the Nagle algorithm).
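
    To make that concrete, here is roughly the bare minimum any "reliable UDP" file sender has to re-invent for itself -- sequence numbers, acknowledgements, timeouts and retransmission -- and this toy stop-and-wait sketch doesn't even attempt windowing, congestion control or selective ACK. The address, port and chunk size are made-up examples, not a recommendation:

    Code:
    # Toy sender only: the minimum a "reliable UDP" transfer must re-implement itself.
    # Address, port and chunk size are assumed example values.
    import socket
    import struct

    CHUNK = 1024            # payload bytes per datagram (assumed)
    TIMEOUT = 1.0           # seconds to wait for an ACK before retransmitting (assumed)

    def send_file(path, addr=("203.0.113.10", 9000)):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(TIMEOUT)
        seq = 0
        with open(path, "rb") as f:
            while True:
                data = f.read(CHUNK)
                packet = struct.pack("!I", seq) + data   # 4-byte sequence number header
                while True:                              # stop-and-wait: resend until ACKed
                    sock.sendto(packet, addr)
                    try:
                        ack, _ = sock.recvfrom(4)
                        if struct.unpack("!I", ack)[0] == seq:
                            break                        # receiver confirmed this chunk
                    except socket.timeout:
                        pass                             # lost data or lost ACK: retransmit
                if not data:                             # empty payload signals end-of-file
                    break
                seq += 1
        sock.close()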

    All of those things, and more, are what make TCP fast. Do not let the FUD created during the late 90s/early 2000s about "UDP being faster because it has less overhead" make you think otherwise (this FUD was generally created by "PC enthusiast gamers" and gaming companies who insisted UDP provided better performance in gaming -- while there is some truth to that, in general it's utter nonsense).

    UDP for a long time was thought to be the Ymodem-G of protocols, but the downsides -- just like with Ymodem-G -- greatly outweigh the positives. Furthermore, as I stated, TCP has greatly increased in capability/speed while providing reliability.

    You can read this over at stackoverflow for details -- read the post from Robert S. Barnes, and the replies. This individual understands, while the top-rated comment (despite being true) is really missing the bigger picture.

    2) With high latency networks, you cannot do anything to combat the "slow transfer speed" problem. I repeat: there is nothing you can do. There isn't "magic software" that can work around this situation. The solutions you may have heard about being used by satellite-based ISPs involve caching proxies -- because they know they cannot work around the speed of light. :)
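
    To put a rough number on why a 1000ms RTT is so punishing: a TCP sender can have at most one receive window of unacknowledged data in flight per round trip, so throughput tops out at roughly window / RTT. A back-of-the-envelope sketch (the window sizes here are just assumed examples):

    Code:
    # Rough upper bound on TCP throughput: at most one window of data per round trip.
    # Window sizes below are assumed examples; real stacks negotiate them (RFC 1323).
    def max_throughput_kbit(window_bytes, rtt_seconds):
        return window_bytes * 8 / rtt_seconds / 1000.0

    print(max_throughput_kbit(64 * 1024, 1.0))    # ~524 kbit/s: classic 64KB window, 1000ms RTT
    print(max_throughput_kbit(64 * 1024, 0.05))   # ~10486 kbit/s: same window, 50ms RTT
    print(max_throughput_kbit(1024 * 1024, 1.0))  # ~8389 kbit/s: bigger (scaled) window, latency unchanged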

    3) FTP is one of the fastest point-to-point TCP-based protocols out there. Here's actual proof (read, do not skim).
    Given that you're distributing files to multiple geographic locations and want something that's fast and can guarantee integrity, there is one program/solution I can think of that will benefit you greatly: BitTorrent. I'm not talking about putting your files up publicly on some tracker -- I'm talking about finding some software that implements the BitTorrent distributed model. This method/model is used at some Fortune 500 companies for distributing things like OS images or patch updates across multiple datacenters, rather than distributing everything from one single point -- instead, as that single point distributes things to points X, Y and Z, points X, Y and Z also distribute some of that data to each other, as well as to other points (A, B and C, which might be closer network-latency-wise to X, Y or Z than to the original sender point). But I am not going to discuss implementing that or how to go about doing it, because from the way you worded your post (use of the word "we"), it implies an organisation or business, in which case you should hire some software and network engineers to look into the advantages. There are C libraries which provide torrent capability and offer APIs.
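
    Purely to show the general shape of such an API (the calls below are from the older libtorrent Python bindings as I remember them -- treat them as assumptions, they change between versions -- and the file names, ports and paths are made up):

    Code:
    # Minimal "download, then keep seeding" loop with the (older) libtorrent Python bindings.
    # Torrent name, ports and save path are assumed example values.
    import time
    import libtorrent as lt

    ses = lt.session()
    ses.listen_on(6881, 6891)                    # older-style API; newer versions use settings packs
    info = lt.torrent_info("os_image.torrent")   # hypothetical torrent describing the payload
    handle = ses.add_torrent({"ti": info, "save_path": "./incoming"})

    while not handle.is_seed():                  # pull pieces from whichever peers have them...
        s = handle.status()
        print("%.1f%% done, %d kB/s down" % (s.progress * 100, s.download_rate // 1000))
        time.sleep(5)

    print("complete; still seeding so other sites can pull from here too")
    time.sleep(3600)                             # ...then stay up as a seed for points X, Y and Z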

    Good luck.
  3. blackwind

    blackwind Serious Server Member

    With such questionable connections, any potential gain from transferring data via UDP will be mostly, if not entirely, offset by the need for the alternate protocol to do its own error-handling.

    My suggestion would be to keep your existing setup and simply use a download accelerator like DownThemAll and/or an FTP client capable of concurrent transfers like FileZilla. If you believe you need to subvert traffic shaping, switch to SFTP (or even FTPS).
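
    If you want to test whether concurrency alone helps before changing anything else, a few lines of Python will do it: one FTP connection per file, uploaded in parallel. The host, credentials and file names below are obviously placeholders, and FTP can be swapped for FTP_TLS if you go the FTPS route:

    Code:
    # Quick concurrent-upload test: one control+data connection per file, all in parallel.
    # Host, credentials and file list are placeholders.
    import os
    import threading
    from ftplib import FTP   # use FTP_TLS here instead for FTPS

    def upload(host, user, password, path):
        ftp = FTP(host)
        ftp.login(user, password)
        with open(path, "rb") as f:
            ftp.storbinary("STOR " + os.path.basename(path), f)
        ftp.quit()

    files = ["IMG_0001.jpg", "IMG_0002.jpg", "clip_001.mp4"]
    threads = [threading.Thread(target=upload, args=("ftp.example.org", "user", "secret", p))
               for p in files]
    for t in threads:
        t.start()
    for t in threads:
        t.join()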

    Alternatively, if you trust them, you could try MEGA:


    Note the "Number of parallel upload connections" setting in the screenshot.
  4. occamsrazor

    occamsrazor Network Guru Member

    First off, thank you both for your replies, much appreciated...

    I respect your expertise, which is clearly much greater than mine, and am not saying you are wrong, but it would appear that some serious organisations, including many large media companies, are using such methods. My organisation has used DataExpedition's Expedat before, and I know that, at least in many situations, the transfer speeds vs FTP are much higher.

    Staying within TCP, I know from experience, for example, that over BGAN satellite phone links the use of Inmarsat's TCP/IP accelerator [PDF link] can improve FTP throughput very significantly indeed.

    So I'm not doubting your expertise; it's just that there appear to be some solutions out there, whether UDP-based or TCP tweaks, that do at least seem to offer improvements.

    Reading the description of UFTP for example I see this:

    This sounds to me like a much smarter way of transferring files than slowing down the whole transfer rate each time errors or delays occur.

    Do you think? I wonder about that... A small number of errors could drop the speed of an FTP connection a lot, yet the portion of the file that would actually need resending would be tiny as a percentage. No?
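
    For a rough sanity check on that, there's the well-known Mathis approximation for loss-limited TCP throughput (roughly MSS x 1.22 / (RTT x sqrt(loss))). The numbers below are just assumed examples, but they show how a tiny loss rate combined with a huge RTT caps the rate even though the data actually resent is negligible:

    Code:
    # Mathis et al. approximation: loss-limited TCP throughput ~= MSS * 1.22 / (RTT * sqrt(p)).
    # MSS, RTT and loss-rate values below are assumed examples.
    import math

    def tcp_throughput_kbit(mss_bytes, rtt_seconds, loss_rate):
        return mss_bytes * 8 * 1.22 / (rtt_seconds * math.sqrt(loss_rate)) / 1000.0

    print(tcp_throughput_kbit(1460, 1.0, 0.001))   # ~451 kbit/s: 0.1% loss over a 1000ms satellite RTT
    print(tcp_throughput_kbit(1460, 0.05, 0.001))  # ~9012 kbit/s: same loss, 50ms RTT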

    Perhaps I should clarify my setup a bit more. I am not doing as described above i.e. "one to many", but instead doing "many to one" and the files are all unique. For the former I agree Bittorrent would be an ideal candidate. My situation however is maybe a hundred or so "clients" around the world uploading image/video files to a central "server". Each client is transferring unique files to the server - and the other clients don't need to receive them ever.

    I work in the organisation, but I am just an end user of the system ("client" in the terminology above) and have no responsibility for tech decisions. No one else higher up is really interested in changing things; this is just a "pet project" of mine to see if, hypothetically, it could be done better. As I mentioned above, some parts of the organisation are using Expedat and find it to be successful, but the per-user pricing model has prevented its wider adoption within the company, which is why I'm looking mostly at non-commercial methods.

    Thanks again for your insight and comments...
  5. blackwind

    blackwind Serious Server Member

    It is. Will this or any of these other fancy new protocols make an appreciable difference, though? I strongly question that, but the proof is in the pudding, I suppose -- you won't know for sure until you try it. It's possible you could see some degree of benefit on some of the more unreliable connections, but probably no more so than you would with concurrent uploads.

    I find it interesting that there appear to be no FTP clients capable of upload multi-sourcing. If I still had the time (and drive) I did in my earlier years, I'd probably be firing up Visual Studio right about now and hacking together a proof-of-concept.
  6. occamsrazor

    occamsrazor Network Guru Member

    What does "upload multi-sourcing" mean?
  7. blackwind

    blackwind Serious Server Member

    Uploading multiple chunks of the same file simultaneously (just as download accelerators do with downloads), which would be useful for single-file uploads where concurrency is impossible. My assumption/hope is that this is what MEGA is doing with its "parallel upload connections" feature.
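
    If anyone wants to experiment, the client half of that isn't much code: split the file, push each chunk over its own connection as a separate part file, and have the server (or a later step) concatenate the parts. Everything below -- host, credentials, chunk size, part-naming scheme -- is a made-up sketch, not necessarily how MEGA does it:

    Code:
    # Sketch of "upload multi-sourcing": each chunk of one file goes up over its own FTP
    # connection as name.partNN; the server side must reassemble the parts afterwards.
    # Host, credentials, chunk size and naming convention are all assumptions.
    import io
    import os
    import threading
    from ftplib import FTP

    CHUNK = 8 * 1024 * 1024   # 8MB per part (assumed)

    def upload_part(host, user, password, path, index, offset, length):
        with open(path, "rb") as f:
            f.seek(offset)
            data = f.read(length)
        ftp = FTP(host)
        ftp.login(user, password)
        ftp.storbinary("STOR %s.part%02d" % (os.path.basename(path), index), io.BytesIO(data))
        ftp.quit()

    def upload_multisourced(host, user, password, path):
        size = os.path.getsize(path)
        threads = [threading.Thread(target=upload_part,
                                    args=(host, user, password, path, i, off, CHUNK))
                   for i, off in enumerate(range(0, size, CHUNK))]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # server side (not shown) concatenates name.part00, name.part01, ... back into name

    upload_multisourced("ftp.example.org", "user", "secret", "clip_001.mp4")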
  8. occamsrazor

    occamsrazor Network Guru Member

  9. koitsu

    koitsu Network Guru Member

    1. This statement from the UFTP people -- "because UDP does not guarantee that packets will arrive in order" -- is amazingly ignorant. Guess what: TCP doesn't guarantee packets arrive on the wire in order either. In fact, nothing does these days. Packet reordering is an extremely common problem on Internet routers, 24x7x365, which do forms of packet load-balancing across multiple (physical) links. Once again: TCP has mechanisms to deal with this (sequence-based reassembly and selective ACK), UDP doesn't. I couldn't care less if some program implemented its own model, I really couldn't, because RFC 2018 already exists.

    2. I'm not going to read the PDF involving a "TCP/IP accelerator", sorry. I simply do not have the time or interest when it comes to such things. If in the future I end up working at a place where high latency network links are common, maybe then I'll have reason to.

    The only stuff I know "about" on a general basis pertaining to I/O over high latency links is alternate TCP congestion algorithms, which some kernels offer. However, in your case this isn't feasible -- you stated clearly that you have no control over the clients' computers, thus you have no control over which TCP algorithm is used (and it's usually all-or-nothing, meaning you can't change the TCP stack "per application").
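
    For reference, on a Linux box that's a system-wide knob. A quick way to see what the running kernel offers (standard Linux /proc paths; nothing here applies to the OS X clients):

    Code:
    # Linux only: show which TCP congestion control algorithms the kernel offers and which
    # one is active, system-wide. Changing it requires root (sysctl) and affects every socket.
    def read_proc(path):
        with open(path) as f:
            return f.read().strip()

    print("available: " + read_proc("/proc/sys/net/ipv4/tcp_available_congestion_control"))
    print("in use:    " + read_proc("/proc/sys/net/ipv4/tcp_congestion_control"))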

    All of those algorithms -- I repeat, ALL OF THEM -- need to be looked at and reviewed for implementation by a senior network engineer. I'm not talking about some guy who brags about having his CCNA. I'm talking about someone who is very, very familiar with packet analysis, spends a lot of time in Wireshark, and actually reads/understands all the related TCP RFCs. I personally know of 3 people who meet such requirements, and their salaries reflect their seniority (we're talking 6 digits and hefty), combined with the fact that they're all employed by very key Fortune 500 companies. I worked with 2 of the 3 at my last job, and we regularly got into discussions about stuff like this. But that's work, not volunteer.

    3. There are FTP clients out there which open multiple (simultaneous) sockets (FTP is TCP-based remember) to a server, transmitting portions of a single file across each socket. The FTP server has to have support for this (for proper reassembly of each "piece"). I know about this because I had to use it to fetch files from a friend of mine's FTP server in Sweden, where his ISP implemented an incredibly stupid form of traffic shaping: limiting throughput on a per-socket basis (~512kbit/sec), with no actual aggregate rate-limit ("bandwidth/speed cap"). Open up 16 simultaneous sockets and you can now push 512*16=8192kbit/sec. That was the only place, and the only time, I have ever had to use such a thing. And it was very easy to determine their traffic shaping model. Otherwise, for ISPs which don't implement that model of shaping, the multi-socket method provides zero benefit (in fact, if anything it's worse -- it wastes network buffers / socket TCP window allocation space for multiple sockets when really all that's needed is 1). I can't remember what FTP server my friend used, and I can't remember what FTP client I used (it was Windows-based). The terminology the client authors use varies; some call it "multi-session" for example.

    4. I recommend you re-think your existing model (multiple geographically distributed clients --> single server). Use a distributed server model. I hate to mention this (I really do, because I'm generally against such crap), but Amazon's "cloud" concept backed by their load balancer implementation might actually provide you some benefits. Otherwise your network bottleneck becomes your single server -- that is, if you're even hitting such limits to begin with (I have no idea).

    Bottom line: use TCP. You will thank me later.
  10. blackwind

    blackwind Serious Server Member

    Not entirely true. Given his goal...
    ...multi-sourcing absolutely has its place in such a setup.

    I'd be interested to test the FTP client/server pairing of which you spoke. If the names come to you at any point, please do follow up.
  11. leandroong

    leandroong Addicted to LI Member

    I would recommend ncftp from Optware. It is very fast, and it can automatically download or upload folder contents. I use it when I upload large batches of vacation videos to 4shared, and when I download dramas that have 150+ episodes, each episode being about 700-800MB.
    Important to note here: the router is doing the FTP without human intervention. Lastly, it can saturate my whole available bandwidth when the connection is otherwise idle or I'm not doing anything.
  12. koitsu

    koitsu Network Guru Member

    You can still suck up all the bandwidth of a network connection with one socket just as easily as you can with multiples. So no, multi-socket doesn't provide anything for him unless he knows for a fact there's a client who has an ISP that has implemented a per-socket rate limiting model (such as with that Swedish ISP I dealt with), in which case yes the multi-socket method will greatly increase total (aggregate) throughput.
  13. blackwind

    blackwind Serious Server Member

    You can, but you're not accounting for competing traffic on the network, which very much applies to, at minimum, his internet cafe usage case. As anyone who's shared a residence with a heavy torrenter knows, the more connections a client uses, the less bandwidth other users on the network have to work with.
  14. koitsu

    koitsu Network Guru Member

    What relevance does that have? I'm not following. If the cafe has no form of rate-limiting/throttling (as I described above, re: that Swedish ISP), then one cafe customer can take up all the bandwidth using a single socket. Now cafe customer #2 comes along using an FTP client that lacks support for multi-session and begins uploading a file. Now customer #1 and customer #2 effectively get equal amounts of bandwidth (50/50). Now let's say cafe customer #2 switches to an FTP client that has multi-session support -- the two customers still get equal amounts of bandwidth (50/50), just that the 50% customer #2 gets is now effectively divvied up (i.e. that 50% of total bandwidth now gets split across, say, 8 TCP sockets -- they don't suddenly get 8x more bandwidth, layer 1 is still the limiting factor).

    The server side has no bearing on this, unless of course on the server side there is per-socket rate-limiting put in place, which the OP did not say there was (I assume there is not).

    So in summary, using a multi-session FTP client when there isn't per-socket rate-limiting put in place by the client's ISP gains you absolutely nothing, other than wasting TCP sockets / wasting tons of network buffers and NAT translated sockets (especially on the cafe's router).
  15. blackwind

    blackwind Serious Server Member

    I'm sure it's possible with the right configuration, but I, for one, have never used a network that distributes bandwidth per-user rather than per-socket. On a typical network, if customer #1 and customer #2 are each using a single socket, the split should be approximately 50/50; if customer #1 adds a second socket, customer #1 gets ~67% of the total bandwidth, while customer #2 is dropped to ~33%.
    Customer #2 doesn't get 8x more bandwidth, no -- assuming all nine connections are capable of saturating the network and no traffic shaping is taking place, he gets eight ninths of the total bandwidth available.

    Again, I point you to the BitTorrent example. Disable QoS on your home network, fire up a torrent on one of your computers with enough sockets to saturate your connection, then download a file via HTTP or FTP on another. If you witness a 50/50 bandwidth split, I'll eat my hat. ;)
  16. Malitiacurt

    Malitiacurt Networkin' Nut Member

    Sorry, but using BT to prove your point shows your lack of expertise in this area. The upload bandwidth of each connection is unknown and will vary for multiple reasons, not just at the transport layer but also in how each uploader's BT client allocates upload bandwidth.

    Using it as an example to prove your point is like leaving some food out in front of your doorstep, coming home the next day to find it missing, and saying this proves cats like that food because you've seen a few cats wandering around your neighbourhood.
  17. koitsu

    koitsu Network Guru Member

    I don't use QoS of any kind, and I can saturate the connection with a single socket (i.e. a single download via HTTP, a single download via FTP, etc.). I don't need to use BT (meaning: opening up hundreds of sockets) to accomplish that task.

    Refocusing on the issue: the OP admits to having no control over the network on the client end.

    When trying to solve complexities like this, you really need control over both ends of a network. When you only have control (and partial control at that) over one endpoint, there is very, very little you can do.

    I'm going to Unwatch this thread -- I've stated my case: use TCP. UDP offers absolutely nothing in the way of solving the OP's dilemma, and will only result in whatever program uses UDP having to (effectively) re-invent the mechanisms TCP already has natively (and those re-inventions are often done wrong by the application authors). Use TCP. :)
  18. blackwind

    blackwind Serious Server Member

    Right, and yet, even the test case I proposed should be sufficient to illustrate the point.

    For a more "scientific" test, use a download accelerator to download a file in eight chunks, then download the same file from the same server on another computer. That won't produce a 50/50 result either.

    In any case, I, too, am not particularly interested in belaboring the point any further, but neither am I interested in being made to look like a fool when I'm not wrong. Your essay responses, koitsu, while generally on-point and informative, tend to have that effect on the recipient.

    I suspect you have more experience in the field than I do, and there are many topics on which I wouldn't presume to cross you. This wasn't one of them, but hopefully there aren't any hard feelings on your end. There aren't on mine.

    Moving on... :)
