ARL's Role in Scientific Computing
Computing at the Ballistic Research Laboratory (BRL) and its successor, the Army Research Laboratory (ARL), is a long tradition dating back to the 1930s. The history of these programs can be separated into three distinct time periods, each with its own unique impact on U.S. Army initiatives.
Laboratory Computing — The Early Years (1935 – 1976)
Historically, the Ballistic Research Laboratory (BRL) played a significant role in the evolution of scientific computing architectures and technologies. The automatic computing and data processing industry is a direct outgrowth of research, sponsored by the U.S. Army Ordnance Corps, that produced the ENIAC, the world's first general-purpose electronic digital computer.
The following timeline shows the lineage of scientific computing systems that were either procured by visionary BRL managers or designed by BRL scientists and engineers during the time period from 1935 to 1966. In their day, these scientific computing systems were among the most powerful and technologically advanced computers in the world.
- Bush Differential Analyzer
- ENIAC
- EDVAC
- ORDVAC
- BRLESC I
- BRLESC II
Early Commercial Scientific Computing Systems — The Army Supercomputing Program
By the mid-1970s, the commercial high performance computing (HPC) industry had matured to the point that the laboratory could exploit commercial scientific computing technologies more effectively, without the time and cost of building in-house systems. The laboratory continued to play a key role in the design and development of these commercial systems by driving the technical requirements for HPC technologies such as large memory, use of the UNIX operating system, and innovative graphics interfaces and tools.
The Army Supercomputing Program was established to make HPC-class systems, which had become essential to the Army research community, readily available. BRL again led the way by providing top-level leadership, guidance, and funding strategies that enabled the Army (and the Department of Defense) to establish, mature, and sustain a viable and highly successful HPC program.
- CDC Cyber 7600
- Denelcor HEP
- Cray X-MP
- Cray-2
Pioneers in Scalable Scientific Computing Systems
- Kendall Square Research KSR-1
- Silicon Graphics Power Challenge Array
The Modern Era of Supercomputing
In 1996, ARL was selected to host one of four large-scale Defense supercomputing centers within the Department of Defense High Performance Computing Modernization Program. The ARL Major Shared Resource Center (MSRC), now the ARL DoD Supercomputing Resource Center (DSRC), is one of the premier centers in the national HPC infrastructure and proudly carries on the laboratory's tradition as a world leader in exploiting the power, capability, performance, and utility of today's scientific computing systems.
The ARL DSRC has evolved tremendously since 1996 and the following timeline provides a brief overview of the rapid evolution of DSRC computing systems to date.
Please visit the ARL DSRC Web site to learn more about the current state of supercomputing at ARL.
- Sun Enterprise 10000
- IBM NH-2 SMP P3
- SGI Origin 3800
- IBM p690 SP
- IBM Cluster 1350
- Linux Networx Evolocity II
- SGI Altix 3700
- Linux Networx Advanced Technology Cluster
- Linux Networx LS-V
The Story of the PING Program
As written by Mike Muuss
Yes, it's true! I'm the author of ping for UNIX. Ping is a little thousand-line hack that I wrote in an evening which practically everyone seems to know about.
I named it after the sound that a sonar makes, inspired by the whole principle of echo-location. In college I'd done a lot of modeling of sonar and radar systems, so the "Cyberspace" analogy seemed very apt. It's exactly the same paradigm applied to a new problem domain: ping uses timed IP/ICMP ECHO_REQUEST and ECHO_REPLY packets to probe the "distance" to the target machine.
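The mechanism can be sketched in a few lines. The following is a hypothetical illustration (not Mike's original C code): it builds an ICMP ECHO_REQUEST the way ping does, including the RFC 1071 Internet checksum. Actually sending it would need a raw socket and root privileges, so only packet construction is shown.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"                                   # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                                    # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP header: type=8 (ECHO_REQUEST), code=0, checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum field zeroed
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

# 56 data bytes, matching the "56 data bytes" in ping's default output.
packet = build_echo_request(ident=0x1234, seq=0, payload=b"\x00" * 56)
```

The target host echoes the payload back in an ECHO_REPLY (type 0); ping derives the round-trip time by embedding a timestamp in the payload and comparing it with the clock when the reply arrives.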
My original impetus for writing PING for 4.2a BSD UNIX came from an offhand remark in July 1983 by Dr. Dave Mills while we were attending a DARPA meeting in Norway, in which he described some work that he had done on his "Fuzzball" LSI-11 systems to measure path latency using timed ICMP Echo packets.
In December of 1983 I encountered some odd behavior of the IP network at BRL. Recalling Dr. Mills' comments, I quickly coded up the PING program, which revolved around opening an ICMP style SOCK_RAW AF_INET Berkeley-style socket(). The code compiled just fine, but it didn't work — there was no kernel support for raw ICMP sockets! Incensed, I coded up the kernel support and had everything working well before sunrise. Not surprisingly, Chuck Kennedy (aka "Kermit") had found and fixed the network hardware before I was able to launch my very first "ping" packet. But I've used it a few times since then. *grin* If I'd known then that it would be my most famous accomplishment in life, I might have worked on it another day or two and added some more options.
The folks at Berkeley eagerly took back my kernel modifications and the PING source code, and it's been a standard part of Berkeley UNIX ever since. Since it's free, it has been ported to many systems since then, including Microsoft Windows 95 and Windows NT. You can identify it by the distinctive messages that it prints, which look like this:
PING vapor.arl.army.mil (126.96.36.199): 56 data bytes
64 bytes from 188.8.131.52: icmp_seq=0 time=16 ms
64 bytes from 184.108.40.206: icmp_seq=1 time=9 ms
64 bytes from 220.127.116.11: icmp_seq=2 time=9 ms
64 bytes from 18.104.22.168: icmp_seq=3 time=8 ms
64 bytes from 22.214.171.124: icmp_seq=4 time=8 ms
----vapor.arl.army.mil PING Statistics----
5 packets transmitted, 5 packets received, 0% packet loss
round-trip (ms) min/avg/max = 8/10/16
In 1993, ten years after I wrote PING, the USENIX Association presented me with a handsome scroll pronouncing me a joint recipient of the USENIX Association 1993 Lifetime Achievement Award, presented to the Computer Systems Research Group, University of California at Berkeley, 1979-1993: "Presented to honor profound intellectual achievement and unparalleled service to our Community. At the behest of CSRG principals we hereby recognize the following individuals and organizations as CSRG participants, contributors and supporters." Wow!
Want to see the source code? (40k)
From my point of view PING is not an acronym standing for Packet InterNet Grouper, it's a sonar analogy. However, I've heard second-hand that Dave Mills offered this expansion of the name, so perhaps we're both right. Sheesh, and I thought the government was bad about expanding acronyms!
Phil Dykstra added ICMP Record Route support to PING, but in those early days few routers processed them, making this feature almost useless. The limitation on the number of hops that could be recorded in the IP header precluded this from measuring very long paths.
I was insanely jealous when Van Jacobson of LBL used my kernel ICMP support to write TRACEROUTE, after realizing that he could elicit ICMP Time-to-Live Exceeded messages by modulating the IP time to live (TTL) field. I wish I had thought of that! Of course, the real traceroute uses UDP datagrams, because routers aren't supposed to generate ICMP error messages for ICMP messages.
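Van Jacobson's trick can be illustrated with a toy model (purely hypothetical, using a simulated path instead of real sockets): send probes with TTL 1, 2, 3, and so on; each probe dies one hop further along, and the router where it dies reveals itself in the Time Exceeded message.

```python
def traceroute(path, max_hops=30):
    """Simulated traceroute over `path`, a list of router names ending at the
    destination.  A probe sent with a given TTL expires at router number TTL,
    which then sends back a Time Exceeded message naming itself."""
    hops = []
    for ttl in range(1, max_hops + 1):
        if ttl < len(path):
            hops.append(path[ttl - 1])   # probe expired: this hop is revealed
        else:
            hops.append(path[-1])        # probe reached the destination
            break
    return hops

# Hypothetical three-hop path for illustration.
print(traceroute(["gateway", "core-router", "vapor.arl.army.mil"]))
```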
The best ping story I've ever heard was told to me at a USENIX conference, where a network administrator with an intermittent Ethernet had linked the ping program to his vocoder program, in essence writing:
ping goodhost | sed -e 's/.*/ping/' | vocoder
He wired the vocoder's output into his office stereo and turned up the volume as loud as he could stand. The computer sat there shouting "Ping, ping, ping..." once a second and he wandered through the building wiggling Ethernet connectors until the sound stopped. And that's how he found the intermittent failure.
Fun Network Hacks and the Story of the TTCP Program
As written by Mike Muuss
Along with PING, there are a few other fun bits of network code that I got to write back in the early days of TCP/IP.
Fun Network Hacks
Some other bits of kernel code that I originated include the "default route" support, which a lot of people depend on to get their packets to an Internet router when the full generality of a dynamic routing protocol is not required, or isn't working.
I also devised the "TCP max segment size (MSS) follows departing interface maximum transmission unit (MTU)" algorithm, which greatly improved TCP/IP efficiency in the face of dropped datagrams by allowing TCP to avoid using IP fragmentation.
I then further extended the algorithm to bring BSD UNIX into strict conformance with the TCP specification, and limit the TCP segment size when transmitting to faraway systems to 576 bytes. This is the origin of the "subnets-are-local" flag, which sometimes frustrates LAN sys-admins, but allowed mail to flow to Multics and Univac machines which (back then) adhered to the letter of the specification and could only handle 512 byte TCP segments. Since at the time I was the moderator of the TCP-IP Digest, the Unix-Wizards Digest and the INFO-UNIX Digest, I had a lot of packets to send to Multics and Univac machines, and the mail had to go through.
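The idea can be sketched as follows (illustrative constants, not the historical BSD code): the MSS is derived from the outgoing interface's MTU so that a full segment never needs IP fragmentation, and for non-local destinations it is further clamped so the whole datagram stays within the 576 bytes every host was required to accept.

```python
IP_HEADER = 20    # bytes, assuming no IP options
TCP_HEADER = 20   # bytes, assuming no TCP options

def choose_mss(interface_mtu: int, destination_is_local: bool) -> int:
    """TCP max segment size follows the departing interface's MTU."""
    mss_for_link = interface_mtu - IP_HEADER - TCP_HEADER
    if destination_is_local:
        return mss_for_link                        # same network: use the link fully
    # Far away: keep the whole datagram within the 576-byte minimum size.
    return min(mss_for_link, 576 - IP_HEADER - TCP_HEADER)

print(choose_mss(1500, True))    # Ethernet, local destination
print(choose_mss(1500, False))   # remote destination
```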
The TTCP Program
Along with Terry Slattery (then of the US Naval Academy), we took inspiration from an (Excelan? Interlan?) network test program, and evolved it into a program called TTCP, to "Test TCP". In addition to performing its intended function of testing TCP performance from user-memory to user-memory, TTCP has remained an excellent tool for bootstrapping hosts onto the network, by providing (essentially) a UNIX "pipe" between two machines across the network. For example, on the destination machine, use:
ttcp -r | tar xvpf -
and on the source machine:
tar cf - directory | ttcp -t dest_machine
To work around routing problems, additional intermediate machines can be included by:
ttcp -r | ttcp -t next_machine
TTCP has become another standard UNIX networking tool.
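The "network pipe" behavior can be sketched with ordinary TCP sockets. This is a hypothetical miniature, not the real TTCP source: the receiver accepts one connection and drains it, the transmitter connects and streams a payload; in real TTCP those ends are wired to standard input and output, which is what makes the cross-machine pipe work.

```python
import socket
import threading

def ttcp_receiver(ready, sink):
    # Like "ttcp -r": accept one connection and drain everything sent on it.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # ephemeral port for this demo
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()
    conn, _ = srv.accept()
    while chunk := conn.recv(65536):
        sink.extend(chunk)              # real ttcp would write to stdout
    conn.close()
    srv.close()

def ttcp_transmitter(port, data):
    # Like "ttcp -t dest": connect and stream the data.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(data)                 # real ttcp would read from stdin

ready = {"event": threading.Event()}
received = bytearray()
rx = threading.Thread(target=ttcp_receiver, args=(ready, received))
rx.start()
ready["event"].wait()
ttcp_transmitter(ready["port"], b"hello, network\n" * 1000)
rx.join()
```

Timing the transfer and dividing bytes moved by elapsed time gives the user-memory-to-user-memory throughput figure TTCP was built to report.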
- John von Neumann
- Herman Goldstine
- John Mauchly
- J. Presper Eckert
- Harry Huskey
- Lt. Paul Gillon
- Michael John Muuss
- Women Pioneers
- 50 Years of Army Computing: From ENIAC to DSRC
- The Computer: From Pascal to Von Neumann by H.H. Goldstine
- Guide to the ARL Major Shared Resource Center
- IEEE Annals: These online journals are chock full of articles about many of the historical computers, and the computing pioneers, of the Ballistic Research Laboratory. Use the link above to visit the IEEE website and perform a basic search on any of the computer names.
- Association for Computing Machinery (ACM) Digital Library
The technologies that actually make the hardware useful are discussed in this section, where we provide brief insight into the history, evolution, and growth of technologies such as software, networking, data storage, I/O devices, and information assurance. The chart below shows the growth and capabilities of some of these technologies, while the links provide more detail on the specific technologies developed and used within the laboratory's scientific computing environment.
To make these machines communicate with each other, we needed to invent some networks. Originally, computing was easy: there were one or two computers, and you went to them and did your work there. But when we started to have a dozen, and then hundreds of machines around, computing became much more distributed. At that point it was not always easy to move your data from the computer on which they were stored to the one on which you wanted to compute. To solve that problem, we needed to build networks.
BRL was one of the first nodes on the ARPANET, back in the time of the earliest experiments. All the equipment was finally installed in 1974, and that really carried BRL through more than ten years before there was a change in that technology. In local area networks there were 16-megabit, ten-megabit, and then 50-megabit communication links between machines, and then nothing really changed after 1985. Local area networking technology matured fast.
It was the campus area network, hooking together the buildings of the laboratory, that really took the most work. There were some really interesting experiments here in 1980, duplicating what the ARPANET had done using 56-kilobit communication lines, and that served very well all the way up to 1985, when fiber optics were installed, a task that took three years from start to finish.
Historic Events in Networking
These are some historic events in networking. BRL did its first remote computing exercises in 1953, using the then-extensive teletype network of Western Union. The ORDVAC computer could read its programs from, and write its output to, paper tape. People at the University of Illinois, the National Bureau of Standards, and other institutions would send a program on paper tape to BRL via Western Union; the ORDVAC operator would tear it off and feed it into the machine; it would calculate for a while; and the operator would tear the tape off the output punch, take it back to Western Union, and zip it back across the country. In 1953: network computing.
A lot of time in the early 1980s was spent working on electronic mail and the TCP/IP protocols. It was very gratifying in 1984 when all this home-brew stuff that BRL had been testing with DARPA became military standard: the work done here wound up embodied in Military Standards 1777 and 1778, which are now the foundation for communications throughout the world. Universities everywhere run these communications protocols, and BRL had a big hand in building that. Another BRL first: the laboratory put the first supercomputer on the Internet.
- The History of Computing Project
- The Computer Tree from Electronic Computers Within the Ordnance Corps: by Karl Kempf
- IEEE Annals of the History of Computing
- American University Computing History Museum: by Professor Tim Bergin
- Computer History Museum
- The History of Computing: Virginia Tech
- The Turing Archive for the History of Computing
- The History of Modern Computing in General
- Konrad Zuse: A Guided Tour of His Computers