The question was:
I’m catching up on Aspera, Digital Rapids, FileCatalyst, Kencast, RepliWeb, RocketStream (TIBCO), Signiant, SmartJog, etc…
What problems are each of these companies solving today? Tomorrow?
Any other suggestions?
My response was an impromptu stream… written on the fly… and since Google+ wouldn’t actually let me post it from my account (too long? thread locked?) I thought I would reproduce it here in this blog instead:
Hi Jay, good question. Some of the companies you mention (including the one I work for, FileCatalyst) are definitely interested in seeing businesses replace FTP for their file transfer needs. Some of our marketing materials even say, “FTP was born in the 70’s… and you’re still using it? Seriously?” I believe a few others are using FTP for transport, but adding a layer of management.
So my own short answer to the question is: FileCatalyst is trying to solve the problem of (primarily large, primarily high-speed) file transfer on WAN.
The broader answer would be an essay. And those who know me will say that I’m just the man to ramble a subject into the ground. But I’ll try to keep it brief. My first disclaimer: I’m not an engineer for the company; if any of the information I present is in dispute, I can get clarification from smarter folks than me.
The FTP RFC didn’t anticipate the huge files and fast speeds we face today. Being based on TCP, it had reliability as job #1 rather than efficiency (though its reliability model hasn’t really held up either). And it performs well enough up to about 10Mbps without tuning. Until recently, that described most residential and even many business connections, so people were (and still are) more or less satisfied and didn’t see the need to replace this absolutely ubiquitous technology. Sure, there were other problems (reliability, ease-of-use, auditing/tracking), but people seemed more or less content to stick with FTP, or at least FTP-based Managed File Transfer.
Larger enterprises and other businesses with file transfer as a daily, mission-critical task knew better. They were the first to notice that even with shiny new 45Mbps connections (since then we’ve seen 100Mbps, then 1Gbps, and now 10Gbps become the “shiny new” links), they weren’t getting the speeds they expected, and they started looking for alternatives.
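The arithmetic behind that shortfall is the bandwidth-delay product: a single TCP stream can never move data faster than its window size divided by the round-trip time, no matter how big the pipe is. A back-of-the-envelope sketch (the window and RTT figures here are illustrative, not measurements from any vendor):

```python
# Upper bound on one TCP stream: throughput <= window_size / round_trip_time.
# Illustrative numbers: a classic 64 KB window over a 70 ms WAN link.

def tcp_max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Ceiling on a single TCP stream's throughput, in megabits per second."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

# 64 KB window, 70 ms RTT: roughly 7.5 Mbps -- even on a 10 Gbps pipe.
print(tcp_max_throughput_mbps(64 * 1024, 0.070))
```

This is why an untuned FTP session on a fat long-haul link tops out at single-digit megabits: the window fills, and the sender sits idle waiting a full round trip for acknowledgements.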
There were only a few of us at the time who had, independently of one another, seen the possibilities UDP had to offer. UDP is only a few years newer than TCP, truth be told. But it wasn’t meant for file transfer, because it has NO reliability built into it… it was meant for streaming and broadcasting. Adding a layer of reliability on top of UDP is what makes the file transfer protocols of FileCatalyst, Aspera, Signiant, and our other peers unique. It’s also what makes each of our solutions proprietary.
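None of the vendors publish their wire protocols, but the general shape of “reliability on top of UDP” can be sketched in a few lines: number every datagram, let the receiver report the gaps (negative acknowledgements), and retransmit only what was actually lost. A toy in-memory simulation of that idea — no real sockets, and no resemblance to any vendor’s actual protocol:

```python
def transfer(chunks, drop_first_pass):
    """Simulate one send pass plus NACK-driven retransmits.

    chunks: list of payloads, implicitly numbered by index (sequence numbers).
    drop_first_pass: set of sequence numbers the 'network' loses on pass one.
    """
    received = {}
    # Pass 1: send every numbered chunk across the lossy link.
    for seq, payload in enumerate(chunks):
        if seq not in drop_first_pass:
            received[seq] = payload
    # Receiver reports the gaps -- negative acknowledgements, not per-packet ACKs.
    nacks = [seq for seq in range(len(chunks)) if seq not in received]
    # Retransmit only the missing chunks (assume the second pass succeeds).
    for seq in nacks:
        received[seq] = chunks[seq]
    # Reassemble in sequence order.
    return b"".join(received[seq] for seq in range(len(chunks)))

data = [b"he", b"ll", b"o ", b"wo", b"rl", b"d"]
print(transfer(data, drop_first_pass={1, 4}))  # b'hello world'
```

The key property: lost packets cost one retransmission each, not a window stall, so throughput stays near line rate even with latency and loss on the path.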
I’m still not answering the question directly, though, so I’ll move on. I would like to add my second disclaimer which is that I’m trying to answer the question “what do we solve” and therefore answers will inherently look like marketing pieces. This is not my intention!
- For the enterprise, FileCatalyst is definitely solving the problem of large file transfer on high-speed networks. We’re constantly pushing and testing the limits of FTP (as a ‘competitor’, we need to give it the benefit of the doubt! And our technology can also “speak” TCP when needed) as well as of the FileCatalyst protocol. A single TCP stream is inherently bottlenecked by latency and packet loss; over a real-world WAN it will never reach 10Gbps for a large file. We are reaching those speeds on relatively modest single-space units, with encryption.
- FileCatalyst also solves reliability problems present in FTP. TCP’s checksum model is archaic and unreliable, as anyone who has had a failed transfer can attest. And most of us who transfer large files have also seen that FTP does not always reliably resume a transfer (assuming resume is even enabled, which is not always the case). By contrast, we have a whole set of retry/resume facilities that ensure delivery of files of any size under any network conditions.
- Speaking of network conditions, this is the main problem that our technology (and other UDP-based technologies) solves. We’re immune to latency and packet loss, adjust to network conditions, and recover any missing packets in a highly efficient manner (I can’t speak for all of the UDP-based competitors, but we do not just blast packets willy-nilly and collect the lost ones later; the protocol tries to lose as few as possible in the first place).
- What differentiates each of the competitors is the feature set on top of the protocol. In other words, we also solve client-option and “management” problems in different ways. FileCatalyst offers too many management features to list here, but suffice it to say that you can adjust file transfers on the fly, collect and record transactional information and logs, be aware of when files are sent and to whom, and access an entire deployment via a central console on the web. There isn’t much that you can’t monitor and manage remotely, and since it may all be secured with SSL (and AES for data), you are able to meet security and compliance mandates.
- Another problem with FTP is the end-user experience. Those of us who are a bit more technical don’t mind installing and configuring an FTP client, but remember that there are many people out there for whom using an FTP client feels like engaging in mysterious and forbidden arts. For those people we have a number of options that will get them transferring files in moments, some of which require zero installation or configuration while still allowing UDP-based transfers.
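On the integrity point a couple of bullets up: whatever the transport, a common belt-and-braces practice is to compare a strong hash of the file on both ends, since TCP’s 16-bit checksum can let corruption slip through on multi-gigabyte transfers. A generic sketch of that idea (this is a standard technique, not a description of FileCatalyst’s actual mechanism):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so huge files never need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Sender publishes sha256_of(source); receiver recomputes after the transfer.
# A mismatch means the file must be re-sent (or resumed from a verified point).
```

Hashing per-block rather than per-file is what makes trustworthy resume possible: you only re-send from the last block that verified, not from byte zero.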
I’ve lost most of your readers already, especially once the “marketing-sounding” parts kicked in… but in short: every problem FTP presents, whether technical or usability-related, we have solved or are trying our best to solve.
Should I mention that people use email for file transfer, too? A whole other problem to be solved….
What problems are we solving for tomorrow? In terms of the technology, we’re always pushing for efficiency. We’re going to turn a corner where 10Gbps is no longer top dog; it’ll be 20Gbps… then 100Gbps… and as long as our protocol remains efficient, we’ll have a strategy for scaling. In terms of usability, there’s always room for improvement or lateral thinking. How can we break down the barriers of adoption? How can people easily move from FTP to an alternative? As mobile devices become a primary mode of computing, how can we ensure a seamless experience in that space? I certainly have more questions on this one than answers. I wonder what other people see as being the file transfer problems of tomorrow?
There’s a lot of talk in the media right now about storage SPACE. And of course as a file transfer software company, we always read articles and think, “OK, you need space… but you also need to get your files there in a timely fashion!” A lot of companies are going to be faced with shipping physical media to their cloud storage provider. We have solutions already, but people are still shipping discs or using FTP/HTTP. This is one of the problems of today that we’ll also be solving for tomorrow.