150 years after Sam Morse developed the electrical telegraph, people are still devising new protocols for controlling data communications. Hardly a month passes without a new program claiming to transfer your files faster or better than ever. That sounds perfectly reasonable, since today's personal computer may be twice as fast as last year's screamer. But which protocol or program should you use? The selection of the best file transfer protocol for your application is important but not obvious. A poor protocol can waste your time, patience and data. In the next few articles I hope to teach you how to choose the file transfer protocols best suited to your needs, and how to get the best possible results with those protocols.
What should you look for in a file transfer protocol? While not mentioned very often in magazine articles, a protocol's reliability is a paramount concern. What good is it to transfer a file at three times the speed of light if we don't get it right? We'll return to the question of reliability throughout this series because reliability in file transfers is all important.
File transfer speed is a hot topic today. Who wants to waste time and bloat phone bills with slow file transfers? No survey of data communications software is "complete" without some attempt at comparing performance of different programs. Some surveys tabulate simple speed measurements with impressive three dimensional color charts that look to be very meaningful. Unfortunately, tests of file transfers under sanitized laboratory conditions may not have much relevance to your needs. After all, if you only had to move data across a desktop or two, you'd use a 10 megabit LAN or just swap disks.
A protocol's speed is profoundly affected by the application and environment it is used in. The original 1970s Ward Christensen file transfer protocol ("XMODEM") was more than 90 percent efficient with the 300 bit per second modems then popular with microcomputers.
XMODEM obliged both the sender and the receiver to keyboard a file name, and only one file could be sent at a time. You couldn't go out and have a cup of coffee while a disk's worth of files were moving over the wire.
The first extension to XMODEM was batch transfer. Batch transfer protocols allow many files to be sent with a single command. Even if you only need to send one file, a batch protocol is valuable because it saves you from typing the filenames twice.
MODEM7 BATCH (sometimes called BATCH XMODEM) sends the file name ahead of each file transferred with XMODEM. MODEM7 BATCH had a number of shortcomings which provided the mother of invention for alternative XMODEM descendants. MODEM7 BATCH sent file names one character at a time, a slow and error prone process. One such alternative was the batch protocol introduced by Chuck Forsberg's CP/M "YAM" program. YMODEM transmits the file pathname, length, and date in a regular XMODEM block, and transmits the file with XMODEM. Unlike MODEM7 BATCH, the YMODEM pathname block was sent as reliably and quickly as a regular data block. In 1985 Ward Christensen invented the name YMODEM to identify this protocol.
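The layout of that YMODEM header block can be sketched in a few lines of Python. This is an illustrative approximation of the block-0 payload only (the octal date convention follows the published YMODEM description; the NUL padding shown here is an assumption):

```python
def ymodem_block0(pathname: str, length: int, mtime_octal: str = "") -> bytes:
    """Payload of YMODEM's block 0: the pathname, a NUL, then the file
    length and modification date as ASCII text, NUL-padded to a full
    128-byte XMODEM data block."""
    info = pathname.encode("ascii") + b"\0"
    info += f"{length} {mtime_octal}".strip().encode("ascii")
    if len(info) > 128:
        raise ValueError("header too long for a 128-byte block")
    return info.ljust(128, b"\0")
```

Because this header travels in an ordinary XMODEM block, it is checksummed and retransmitted exactly like file data, which is what made it more robust than MODEM7 BATCH's character-at-a-time name exchange.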
While we're discussing speed, let's not forget that batch protocols were developed to increase the speed of real life transfer operations, not magazine benchmarks. (What would DOS be without "COPY *.*"?) When studying magazine reviews comparing the speeds of different file transfer protocols, you must add the extra time it takes to type in file names to the listed transfer times for protocols that make you keyboard this information twice.
By 1985 personal computer owners were upgrading their 300 bit per second (bps) modems to 1200 and 2400 bps units. But while a 2400 bps modem transmits bits eight times faster than a 300 bps modem, 2400 bps XMODEM transfers are nowhere near eight times as fast as 300 bps XMODEM transfers. This was a keen disappointment to owners of these new $1000 modems.
What causes this disappointing XMODEM file transfer performance? The culprit is transit time, the same phenomenon that makes satellite telephone conversations so frustrating. XMODEM, YMODEM, Jmodem and related protocols stop at the end of each block of data to wait for an acknowledgement that the receiver has correctly received the data. Even if the sending and receiving programs processed the data and acknowledgements in zero time (some come close), XMODEM cannot do anything useful while the receiver's acknowledgement is making its way back to the sender.
The length of this delay has increased tremendously in the last decade. The 300 bps modems prevalent in XMODEM's infancy did not introduce significant delays. Today's high speed modems and networks can introduce delays that more than double the time to send an XMODEM data block.
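The cost of that idle time is easy to quantify. Here is a back-of-the-envelope sketch in Python; the half-second round trip and 4-byte framing overhead are illustrative assumptions, not measurements:

```python
def xmodem_throughput(bps, round_trip_s, block=128, overhead=4):
    """Approximate stop-and-wait throughput in characters per second.

    Each XMODEM block carries `block` data bytes plus `overhead` framing
    bytes (SOH, block number, complement, check), at 10 bits per
    character on the wire.  After each block the sender idles for one
    round trip waiting for the ACK."""
    block_time = (block + overhead) * 10 / bps
    return block / (block_time + round_trip_s)

# With an assumed half-second round trip, a modem that moves bits eight
# times faster does not move files eight times faster:
slow = xmodem_throughput(300, 0.5)    # roughly 26 cps of a possible 30
fast = xmodem_throughput(2400, 0.5)   # roughly 122 cps of a possible 240
```

The fixed round-trip delay matters little at 300 bps but dominates at 2400 bps, which is exactly the disappointment the new modem owners felt.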
An easy performance enhancement was to increase the XMODEM/YMODEM data block length from 128 bytes to 1024. This reduced protocol overhead by 87 percent, not bad for a few dozen lines of code. Other protocols such as Jmodem and Long Packet Kermit allow even longer data blocks. Long packet protocols gave good results under ideal conditions, but speed and reliability fell apart when conditions were less than ideal.
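The arithmetic behind the 87 percent figure, assuming the per-block overhead (framing bytes plus one ACK turnaround) is roughly fixed:

```python
# Per-block overhead is roughly constant, so total overhead is
# proportional to the number of blocks a file needs.
blocks_per_byte_128  = 1 / 128    # blocks (and overheads) per data byte
blocks_per_byte_1024 = 1 / 1024
reduction = 1 - blocks_per_byte_1024 / blocks_per_byte_128
# An eightfold larger block means one-eighth as many per-block
# overheads: a 87.5 percent reduction.
```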
Programs with non-standard "YMODEM" implementations have plagued YMODEM users since the early days of the protocol. Some programmers simply refused to abide by Ward Christensen's definition of YMODEM as stated in his 1985 message introducing the term. Today more and more programs are being brought into compliance with the YMODEM standard, but as of this writing some widely marketed programs including CROSSTALK still don't meet the standard.
Even when both the sending and receiving programs agree on the protocol, XMODEM and YMODEM do not always work in a given application. Some applications depend on wide area networks which use control characters to control their operation. Some mainframe computers have similar restrictions on the transmission and reception of control characters. (ASCII control characters are reserved for controlling devices and networks, not for printing.) Since XMODEM and YMODEM use all 256 possible character codes for transferring data and control information, some of XMODEM's data appears as control characters and will be "eaten" by the network. This confusion sinks XMODEM, possibly taking your phone call or even a terminal port down with it.
The Kermit protocol was developed at Columbia University to allow file transfers in computer environments hostile to XMODEM. This feature makes Kermit an essential part of any general purpose communications program.
Kermit avoids network sensitive control characters with a technique called control character quoting. If a control character appears in the data, Kermit sends a special printing character that indicates the printing character following should be translated to its control character equivalent. In this way, a Control-P character (which controls many networks) may be sent as "#P". Likewise, Kermit can transmit characters with the parity bit (8th bit) set as "&c" where "c" is the corresponding character without the 8th bit. When sending ARC and ZIP files, this character translation in combination with character quoting adds considerable overhead with a corresponding decrease in the speed of data transfer.
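The quoting rules can be sketched in a few lines of Python. This is an illustrative encoder only, using Kermit's conventional "#" control quote and "&" 8th-bit quote; a real Kermit implementation negotiates these quote characters during its initial exchange:

```python
QUOTE = ord("#")   # Kermit's conventional control-quote character
EBQ   = ord("&")   # 8th-bit quote, used when the link strips parity

def kermit_encode(data: bytes) -> bytes:
    """Quote control characters and 8th-bit-set characters so every
    transmitted byte is a printable 7-bit character."""
    out = bytearray()
    for b in data:
        if b & 0x80:               # strip the 8th bit, mark with '&'
            out.append(EBQ)
            b &= 0x7F
        if b < 0x20 or b == 0x7F:  # control char: '#' + printable twin
            out += bytes((QUOTE, b ^ 0x40))
        elif b in (QUOTE, EBQ):    # the quote characters quote themselves
            out += bytes((QUOTE, b))
        else:
            out.append(b)
    return bytes(out)
```

A Control-P (0x10) becomes "#P", and a Control-P with the 8th bit set (0x90) becomes "&#P" — both prefixes stack. Every binary byte in an ARC or ZIP file that needs quoting doubles or triples in size on the wire, which is the overhead the paragraph above describes.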
The original and most popular form of Kermit uses packets that are even shorter than XMODEM's 128 byte blocks. Kermit was designed to work with computers that choked on input that didn't look very much like regular text, so limiting packet length to one line of text was quite logical.
Kermit Sliding Windows ("SuperKermit") improved throughput over networks at the cost of increased complexity. As with regular Kermit, SuperKermit sends an ACK packet for each 96 byte data packet. Unlike regular Kermit, the SuperKermit sender does not wait for each packet to be acknowledged. Instead, SuperKermit programs maintain a set of buffers that allow the sender to "get ahead" of the receiver by a specified amount (window size) up to several thousand bytes. By sending ahead, SuperKermit can transmit data continuously, largely eliminating the transmission delays that slow XMODEM and YMODEM.
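The send-ahead idea can be reduced to a toy model. This is a sketch only, with hypothetical names; it ignores the Kermit packet format, timeouts, and retransmission, and models only how the window lets the sender run ahead of the ACKs:

```python
from collections import deque

def windowed_transfer(packets, window):
    """Toy send-ahead model: the sender may be up to `window` packets
    ahead of the oldest unacknowledged one, so the line never idles
    waiting for an ACK."""
    base = 0                  # oldest unacknowledged packet
    next_seq = 0              # next packet to transmit
    in_flight = deque()
    delivered = []
    while base < len(packets):
        # Sender: keep transmitting while the window has room.
        while next_seq < len(packets) and next_seq - base < window:
            in_flight.append(next_seq)
            next_seq += 1
        # Receiver: accept the oldest packet in flight and ACK it.
        seq = in_flight.popleft()
        delivered.append(packets[seq])
        base = seq + 1        # the ACK slides the window forward
    return delivered
```

The buffering that makes this work — holding a window's worth of unacknowledged data in case any of it must be resent — is precisely the complexity and memory cost the paragraph above mentions.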
For a number of technical reasons, SuperKermit has not been widely accepted. Those computers capable of running SuperKermit usually support the simpler YMODEM-1k or more efficient ZMODEM protocols. However, the lessons I learned from YMODEM's and SuperKermit's development were to serve me later in the design of ZMODEM.
Recent changes in the Kermit protocol have allowed knowledgeable users to remove much of Kermit's overhead in many environments. At the same time, Columbia University placed restrictive copyright terms on their Kermit code, and few developers have the expertise to upgrade the older "The Source" SuperKermit code to the current protocol. The computing resources needed to support higher speed Kermit transfers have also deterred Kermit's widespread deployment. Currently, Omen Technology's Professional-YAM and ZCOMM are the only programs not marketed by Columbia University that include the recent Kermit performance enhancements.
In early 1986, Telenet funded a project to develop an improved public domain application to application file transfer protocol. This protocol would alleviate the throughput problems their network customers were experiencing with XMODEM and Kermit file transfers.
As I started work on what was to become ZMODEM, I hoped a few simple modifications to XMODEM technology could provide high performance and reliability over packet switched networks while preserving XMODEM's simplicity. YMODEM and YMODEM-1k were popular because programmers inexperienced in protocol design could, more or less, support the newer protocols without major effort, and I wanted the new protocol to be popular.
The initial concept added block numbers to XMODEM's ACK and NAK characters. The resultant protocol would allow the sender to send more than one block before waiting for a response.
But how should the new protocol add a block number to XMODEM's ACK and NAK? The WXMODEM, SEAlink, and Megalink protocols use binary bytes to indicate the block number. After careful consideration, I decided raw binary was unsuitable for ZMODEM because binary codes do not pass backwards through some modems, networks and operating systems.
But there were other problems with the streaming ACK technique used by SuperKermit, SEAlink, and Unix's UUCP-g protocols. Even if the receiver's acknowledgements were sent as printing digits, some operating systems could not recognize ACK packets coming back from the receiver without first halting transmission to wait for a response. There had to be a better way.
Another problem that had to be addressed was how to manage "the window". (The window is the data in transit between sender and receiver at any given time.) Experience gained in debugging The Source's SuperKermit protocol indicated a window size of about 1000 characters is needed to fully drive Telenet's network at 1200 bps. A larger window is needed in higher speed applications. Some high speed modems require a window of 20000 or more characters to achieve full throughput. Much of the SuperKermit's inefficiency, complexity, and debugging time centered around its ring buffering and window management logic. Again, there had to be a better way.
A sore point with XMODEM and its progeny is error recovery. More to the point, how can the receiver determine whether the sender has responded, or is ready to respond, to a retransmission request? XMODEM attacks the problem by throwing away characters until a certain period of silence has elapsed. Too short a time allows a spurious pause in output (network or timesharing congestion) to masquerade as error recovery. Too long a timeout devastates throughput, and allows a noisy line to "lock up" the protocol. Kermit and ZMODEM solve this problem with a distinct start of packet.
A further error recovery problem arises in streaming protocols. How does the receiver know when (or if) the sender has recognized its error signal? Is the next packet the correct response to the error signal? Is it something left over "in the queue" from before the sender received the error signal? Or is this new subpacket one of many that will have to be discarded because the sender did not receive the error signal? How long should this continue before sending another error signal? How can the protocol prevent this from degenerating into an argument about mixed signals?
For ZMODEM, I decided to forgo the complexity of SuperKermit's packet assembly scheme and its associated buffer management logic and memory requirements. ZMODEM uses the actual file position in its headers instead of block numbers. While a few more bytes are required to represent the file position than a block number, the protocol logic is much simpler. Unlike XMODEM, YMODEM, SEAlink, Kermit, etc., ZMODEM cannot get "out of sync" because the range of synchronization is the entire file length.
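The effect of addressing data by file offset can be shown with a toy model. The names here are hypothetical and the "bad CRC" is simulated; real ZMODEM carries the offset in a binary or hex frame header:

```python
def transfer_by_offset(data: bytes, chunk: int = 1024,
                       garble_once_at: int = -1) -> bytes:
    """Each subpacket is addressed by its file offset.  On a bad CRC
    the receiver simply asks for the data at its confirmed position
    again; there are no block numbers to fall out of sync."""
    received = bytearray()
    confirmed = 0                       # receiver's file position
    garbled = False
    while confirmed < len(data):
        payload = data[confirmed:confirmed + chunk]
        if garble_once_at == confirmed and not garbled:
            garbled = True              # simulate one bad CRC: the
            continue                    # receiver re-requests this offset
        received += payload
        confirmed += len(payload)
    return bytes(received)
```

However the conversation gets scrambled, the receiver's confirmed file position is always an unambiguous statement of what it has, so sender and receiver can never disagree about where they are in the file.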
ZMODEM is not a single protocol in the sense of XMODEM or YMODEM. Rather, ZMODEM is an extensible language for implementing protocols. The next article in this series will show how ZMODEM can be adapted for vastly different environments.
ZMODEM normally sends data non-stop and the receiver is silent unless an error is detected. When required by the sender's operating system or network, the sending program can specify a break signal or other interrupt sequence for the receiver to use when requesting error correction.
To simplify logic and minimize memory consumption, most ZMODEM programs return to the point of error when retransmitting garbled data. While not as elegant as SuperKermit's selective retransmission logic, ZMODEM avoids the considerable overhead required to support selective retransmission. My experience with SuperKermit's selective retransmission indicates ZMODEM does not suffer from this lack of selective retransmission. If selective retransmission is required, ZMODEM is extensible enough to support it.
Yet another sore point with XMODEM is the garbage XMODEM adds to files. This was acceptable in the days of CP/M, when files had no exact length. It is not desirable with contemporary systems such as DOS and Unix. Full YMODEM uses the file length information transmitted in the header block to trim garbage from the output file, but this causes data loss when transferring files that grow during a transfer. In some cases, the file length may be unknown, as when data is obtained from a "pipe". ZMODEM's variable length data subpackets solve both of these problems.
Since ZMODEM has to be "network friendly", certain control characters are escaped. Six network control characters (DLE, XON and XOFF in both parities) and the ZMODEM flag character (Ctrl-X) are replaced with two character sequences when they appear in raw data. This network protection exacts a speed penalty of about three percent when sending compressed files (ARC, ZOO, ZIP, etc.). Most, but not all, users feel this is a small penalty to pay for a protocol that works properly in many network applications where XMODEM, YMODEM, SEAlink, et al. fail.
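The escaping mechanism follows the published ZMODEM description: an escaped character is sent as ZDLE (Ctrl-X) followed by the character XOR 0x40. A sketch covering the seven characters named above (a real implementation also handles optional modes such as escaping all control characters):

```python
ZDLE = 0x18   # Ctrl-X, ZMODEM's flag/escape character
# DLE, XON and XOFF in both parities, plus ZDLE itself.
ESCAPED = {0x10, 0x90, 0x11, 0x91, 0x13, 0x93, ZDLE}

def zdle_escape(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        if b in ESCAPED:
            out += bytes((ZDLE, b ^ 0x40))   # e.g. XON 0x11 -> ZDLE 'Q'
        else:
            out.append(b)
    return bytes(out)

def zdle_unescape(data: bytes) -> bytes:
    out = bytearray()
    it = iter(data)
    for b in it:
        if b == ZDLE:
            out.append(next(it) ^ 0x40)      # undo the escape
        else:
            out.append(b)
    return bytes(out)
```

Only 7 of 256 byte values are doubled, which is where the roughly three percent penalty on compressed (statistically random) files comes from.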
Since some characters had to be escaped anyway, there wasn't much point wasting bytes to fill out to a fixed packet length. In ZMODEM, the length of data subpackets is denoted by ending each subpacket with an escape sequence similar to BISYNC and HDLC.
The end result was a ZMODEM header containing a "frame type", supervisory information, and a CRC protecting the header's information. Data frames consist of a header followed by one or more data subpackets. (A data subpacket consists of 0 to 1024 bytes of data followed by a 16 or 32 bit CRC.) In the absence of transmission errors, an entire file can be sent in one data frame. The ZMODEM "block length" can be as long as the entire file, yet error correction can begin within 1024 bytes of the garbled data. Unlike XMODEM et al., "block length" issues do not apply to ZMODEM. People who talk about "ZMODEM block length" in the same breath as XMODEM/YMODEM block length simply do not understand ZMODEM.
Since the sending system may be sensitive to numerous control characters or strip parity in the reverse data path, all of the headers sent by the ZMODEM receiver are in hex.
With equivalent binary (efficient) and hex (application friendly) frames, the sending program can send an "invitation to receive" sequence to activate the receiver without crashing the remote application with many unexpected control characters.
Going "back to scratch" in the protocol design allowed me to steal many good ideas from the existing protocols while adding a minimum of brand new ideas. (It's usually the new ideas that don't work right at first.)
From Kermit and Unix's UUCP came the concept of an initial dialog to exchange system parameters. Kermit inspired ZMODEM headers. The concatenation of headers and an arbitrary number of data subpackets into a frame of unlimited length was inspired by the ETB character in IBM's BISYNC.
ZMODEM generalized CompuServe Protocol's concept of host controlled transfers to provide ZMODEM AutoDownload™. An available Security Challenge prevents password hackers from abusing ZMODEM's power. With ZMODEM automatic downloads, the host program can automate the complete file transfer process without requiring any action on the user's part.
We were also keen to the pain and $uffering of legions of telecommunicators whose file transfers have been ruined by communications and timesharing system crashes. ZMODEM Crash Recovery is a natural result of ZMODEM's use of the actual file position instead of protocol block numbers. Instead of getting angry at the cat when it knocks the phone off the hook, you can dial back and use ZMODEM Crash Recovery to continue the download without wasting time or money. Recent compatible proprietary extensions to ZMODEM extend Crash Recovery technology to support file updates by comparing file contents and transferring new data only as necessary.
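Because every subpacket is addressed by file offset, crash recovery is little more than "tell me how much you already have." A minimal sketch, with hypothetical names (real ZMODEM negotiates this in its startup dialog; the proprietary update extensions also compare file contents, which this sketch does not):

```python
import os

def resume_offset(path: str) -> int:
    """How much of the file already arrived in an earlier, interrupted
    session; the sender seeks here instead of starting over."""
    try:
        return os.path.getsize(path)
    except OSError:
        return 0               # nothing on disk yet: start from zero

def resume_transfer(source: bytes, path: str, chunk: int = 1024) -> None:
    """Append only the missing tail of `source` to `path`."""
    offset = resume_offset(path)
    with open(path, "ab") as f:
        for pos in range(offset, len(source), chunk):
            f.write(source[pos:pos + chunk])
```

A block-numbered protocol cannot do this cleanly, because after a crash neither side remembers which block number corresponds to which part of the file.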
For many years the quality of phone lines at Omen's Sauvie Island location has exposed the weaknesses of many traditional and proprietary file transfer protocols. Necessity being the Mother of Invention, I have over the years added "secret tweaks" to my XMODEM and YMODEM protocol handlers to compensate for XMODEM's marginal reliability at higher speeds.
Kermit is easier to make reliable because Kermit does not depend on single character control sequences (ACK/NAK/EOT) used by XMODEM, YMODEM, SEAlink, etc. But, as we have seen, Kermit is too slow in many applications.
One advantage of starting ZMODEM's design "from scratch" was the opportunity to avoid the mistakes that plague the old protocols. (I could always make new ones.)
Two ideas express the sine qua non of reliable protocol design. The truism "Modem speak with forked tongue" bites in many ways. While no protocol is error-proof, a protocol is only as reliable as its weakest link, except when tested on a table top. ZMODEM obeys this precept by protecting every data subpacket and command with a 16 or 32 bit CRC.
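ZMODEM's 16-bit check uses the same CRC-CCITT polynomial as XMODEM-CRC, and its 32-bit option is the standard CRC-32. A bit-at-a-time reference implementation of the 16-bit check (illustrative; production code uses a table-driven version for speed):

```python
import binascii

def crc16_xmodem(data: bytes, crc: int = 0) -> int:
    """CRC-16 with the XMODEM conventions: polynomial 0x1021,
    initial value 0, most significant bit first."""
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

crc16_xmodem(b"123456789")     # 0x31C3, the standard check value
binascii.crc32(b"123456789")   # 0xCBF43926 for the standard CRC-32
```

The weakest-link precept is why the CRC covers headers and commands, not just file data: a corrupted command is at least as dangerous as a corrupted data block.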
Another protocol design commandment is: "Don't burn your bridges until you are absolutely positively certain you really have crossed them." ZMODEM's use of the actual file position instead of block numbers helps, but the program logic must be carefully designed to prevent race and deadlock conditions.
The basic ZMODEM technology was introduced into the public domain in 1986. Many perceptive programmers and theoreticians have examined and implemented ZMODEM in their programs. They have exposed weaknesses in ZMODEM which go uncorrected in undocumented proprietary protocols. ZMODEM today is a stable, reliable protocol supported by communications programs from dozens of major authors.