In 2014, the Computer History Museum released the Xerox Alto file server archive, constituting about 15,000 files from the Xerox Alto personal computer including the Alto operating system; BCPL, Mesa, and (portions of the) Smalltalk programming environments; applications such as Bravo, Draw, and the Laurel email client; fonts and printing software (PARC had the first laser printers); and server software (including the IFS file server and the Grapevine distributed mail and name server). I told the story behind that archive here.
Today CHM released the Xerox PARC Interim File System (IFS) archive:
The archive contains nearly 150,000 unique files—around four gigabytes of information—and covers an astonishing landscape: programming languages; graphics; printing and typography; mathematics; networking; databases; file systems; electronic mail; servers; voice; artificial intelligence; hardware design; integrated circuit design tools and simulators; and additions to the Alto archive.
A blog post by David Brock introduces the archive. Access to the archive itself is available here.
I began working on this project in 2018 under an NDA with PARC: reading the old media prepared years earlier by Al Kossow, updating the conversion software I’d written for the earlier Alto project, and winnowing down a list of 300,000 files to the 150,000 files that I submitted to PARC management for approval. David Brock’s post ends with an Acknowledgments section noting all the people at CHM and PARC who contributed.
One of the artifacts preserved from the CAL Timesharing System project is a deck of 14 80-column binary cards labeled “TSS PM DUMP”. This is a program for a CDC 6000 series peripheral processor unit (PPU) to perform a post mortem dump to magnetic tape of the complete state of a (crashed) system: PPU memories, CPU memory, exchange package, and extended core storage. Another system utility program, TSS PP DUMP-TAPE SCANNER, allows selective display of portions of the dump to either the teletype or one of the system console displays. I believe Howard Sturgis wrote the PPU program and Keith Standiford wrote the CPU program.
I suspected this card deck was a version of a PPU program called DMP, for which a listing exists. Carl Claunch very generously offered to read (digitize) the cards. He doesn’t live nearby, so we used postal mail. He produced a binary file tss.hex. I queried the ControlFreaks mailing list about a PPU disassembler, and Daiyu Hurst sent me a program ppdis.c that generates a listing showing, for each 12-bit word, its address, its octal value, and a textual representation, with opcodes resolved. After updating it to eliminate some K&R C function definitions and changing its input format to match tss.hex, I ran it, captured the output, and began annotating it. As expected, it matched the DMP listing very closely, so I used those variable names, labels, and comments to update the output of ppdis.c, adding a few additional comments, including notes on slight differences from the DMP listing.
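To give a feel for what such a listing looks like, here is a toy Python sketch of the per-word formatting (this is my invention for illustration, not Daiyu Hurst’s actual ppdis.c, and the character table is only a rough approximation of CDC display code):

```python
# Approximate CDC display code: 64 six-bit character codes.
# Only the first few entries (: A B ...) are relied on here.
CHARSET = (":ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
           "+-*/()$= ,.#[]%\"_!&'?<>@\\^;")

def listing_line(addr, word):
    """Format one 12-bit PPU word as a listing line:
    octal address, octal value, and the two 6-bit halves as text."""
    hi = (word >> 6) & 0o77   # upper 6 bits
    lo = word & 0o77          # lower 6 bits
    text = CHARSET[hi] + CHARSET[lo]
    return f"{addr:04o}  {word:04o}  {text}"

# e.g. the word 0102 (octal) at address 0100 displays as "AB"
print(listing_line(0o100, 0o102))
```

A real disassembler would additionally decode the opcode field and print a mnemonic, but the address/octal/text columns are the skeleton of the listing.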
The first card is a loader that loads subsequent cards up until one with a 6-7-8-9 punch in column one is encountered. The first card is loaded via the deadstart panel. Page 14 of the CAL TSS Operator’s Manual explains:
HOW TO MAKE A DIAGNOSTIC DUMP OF A SICK SYSTEM
Unfortunately, the dump program requires a different deadstart panel from the system dead start program. Reset the deadstart panel to
CAL TSS I, push the deadstart button, read the deck “TSS POST MORTEM” into the card reader, mount a tape on unit 0, and stand back and
watch it go. After the tape unloads, reset the deadstart panel to CAL TSS II and dead start the system as usual. Record the reel on which
the dump was made along with the other information relevant to the crash.
The Operator’s Manual also contains a set of CAL-TSS FAILURE LOG forms recording crashes and attempts to diagnose them.
The program on the cards is very similar to the DMP listing (which doesn’t include the loader card), with slightly different addresses and one or two small changes in the code.
The general structure of the program is to dump PPU 0 (whose memory is partially overlaid by the DMP program itself), then use this working space to dump PPUs 1-9, the exchange package (CPU registers), the CPU memory, and extended core storage, and finally to write a trailer record. The console display is used for operator messages: a request to mount a tape on drive zero, progress messages indicating which phase is taking place, and several error messages.
This doesn’t sound like a terribly difficult task, but it requires about 1000 instructions on the PPU, which has 12-bit words, one register, no multiply or divide, and an instruction time of 1 to 4 microseconds. There are some additional complications:
A PPU can’t access the memory of another PPU. When the overall system is deadstarted, PPU 0 begins running a 12-instruction program loaded from toggle switches on the deadstart panel, and the other PPUs are each suspended on an input instruction on a different I/O channel. Thus PPU 0 sends a short program to each one instructing it to output its own memory on a channel, which PPU 0 inputs and then outputs to the tape drive.
Similarly, a PPU can’t access extended core storage (ECS). So PPU 0 repeatedly writes a short program to the CPU memory that reads the next block from ECS to CPU memory, then does an “exchange jump” to cause the CPU to execute that program. The PPU then reads the block from CPU memory and writes it to tape.
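The overall control flow can be sketched in Python (the real thing is of course 12-bit PPU machine code; all names and the FakeMachine stand-in below are invented for illustration, but the relay structure — PPU 0 reaching other PPUs and ECS only indirectly — follows the description above):

```python
class FakeMachine:
    """A stand-in for the hardware, just enough to run the sketch."""
    def __init__(self):
        self.ppus = {p: [p] * 4 for p in range(10)}  # tiny fake PPU memories
        self.ecs = [[100], [101], [102]]             # fake ECS blocks

    def send_stub_and_read(self, p):
        # DMP sends a short program over an I/O channel telling PPU p
        # to output its own memory; here we just return that memory.
        return self.ppus[p]

    def exchange_jump_read_ecs(self, i):
        # DMP writes a short CPU program that copies ECS block i into
        # CPU memory, then exchange-jumps so the CPU runs it.
        return self.ecs[i]

def post_mortem_dump(m):
    records = [("PPU0", m.ppus[0])]        # PPU 0 dumped first (partially overlaid)
    for p in range(1, 10):                 # PPUs 1-9, via channel stubs
        records.append((f"PPU{p}", m.send_stub_and_read(p)))
    records.append(("XP", "exchange package"))   # CPU registers
    records.append(("CPU", "central memory"))
    for i in range(len(m.ecs)):            # ECS, one block per exchange jump
        records.append((f"ECS{i}", m.exchange_jump_read_ecs(i)))
    records.append(("TRAILER", None))      # trailer record ends the tape
    return records
```

Each tuple corresponds to a record written to the tape on drive zero, in the order the manual’s dump phases occur.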
In 2023 computers are all around us: our phones, tablets, laptops, and desktops, and lurking inside our television sets, appliances, and automobiles, to say nothing of our workplaces and the internet. It wasn’t always that way: I was born in 1949, just as the first stored-program digital computers were going into operation. Those computers were big, filling a room, and difficult to use. Initially a user would sign up for a block of time to test and run a program that had been written and punched into paper tape or 80-column cards. The fact that an expensive computer sat idle while the user was thinking or mounting tapes seemed wasteful, so people designed batch operating systems that would run programs one after the other, with a trained operator mounting tapes just before they were needed. The users submitted their card decks and waited in their offices until their programs had run and the listings had been printed. While this was more efficient, there was a demand for computers that operated in “real time”, interacting with people and other equipment. MIT’s Whirlwind, TX-0, and TX-2 and Wes Clark’s LINC are examples.
The ability to interact directly with a computer via a terminal (especially when a display was available) was compelling, and computers were becoming much faster, which led to the idea of timesharing: making the computer divide its attention among a set of users, each with a terminal. Ideally the computer would have enough memory and speed so each user would get good service. Early timesharing projects included CTSS at MIT, DTSS at Dartmouth, and Project Genie at Berkeley. By 1966, Berkeley (that is, the University of California at Berkeley) decided to replace its IBM batch system with a larger computer that would provide interactive (time-shared) service as well as batch computing. None of the large commercial computers came with a timesharing system, so Berkeley decided they would build their own. The story of that project—from conception, through funding, design, implementation, (brief) usage, to termination—is told here:
Paul McJones and Dave Redell. History of the CAL Timesharing System. To appear in the July-September 2023 issue of IEEE Annals of the History of Computing. IEEE Xplore (open access)
How did I come to write that paper? In the winter of 1968-1969 I was invited to join the timesharing project. At that time I had about 2 years of programming experience gained in classes and on-the-job experience during high school and college (Berkeley). That wasn’t much, but it included one good-sized project—a Snobol4 implementation with Charles Simonyi—so the team welcomed me to the project. For the next three years I helped build the CAL Timesharing System, performed some maintenance on the Snobol4 system, and finished my bachelor’s degree. In December 1971, CAL TSS development was canceled, and I graduated and moved on to the CRMS APL project elsewhere on campus.
Those three years were hectic but immensely enjoyable. The team was small, with under a dozen people, housed first in an old apartment on Channing Way and then in the brand-new Evans Hall. Lifelong friendships were formed. People often worked into the night, when the computer was available, and then trooped over to a nearby hamburger joint for a late meal. Exciting things were going on around us. There were protests, the Vietnam War, and the first moon landings. Rock music seemed fresh and exciting. I had met my future wife in 1968, and we were married in 1970.
As CAL TSS came to an end, we all agreed the experience could never be equalled. But we didn’t realize people in the future would be interested in studying our system, so we weren’t careful about preserving the magnetic tapes. However, many of us kept manuals, design documents, and listings, plus a few tapes. In 1980 and again in 1991 we had reunions, and I offered to store everything until it became clear what to do for the long run. Around 2003 I started scanning the materials and organizing a web site. In 2022 the Computer History Museum agreed to accept the physical artifacts, and this year they agreed to host the web site:
In April 2020, just as the Covid pandemic began, Annie Liu, a professor at Stony Brook University, emailed me to chat about programming language history. She suggested that Python, with its antecedents SETL, ABC, and C, would be a good topic for historical study and preservation. I mentioned that I’d considered SETL as an interesting topic back in the early 2000s, but unfortunately had not acted. After a few more rounds of email with her, I began looking around the web and Annie introduced me to several SETL people. Starting with these people, a few other personal contacts, and some persistence, I was soon in touch with much of the small but friendly SETL community, who very generously pored through their files and donated a wide variety of materials. The result is an historical archive of materials on the SETL programming language, including source code, documentation, and an extensive set of design notes that is available at the Software Preservation Group web site:
In addition, the digital artifacts and some of the physical artifacts are now part of the Computer History Museum’s permanent collection.
The SETL programming language was designed by Jack Schwartz at the Courant Institute of Mathematical Sciences at New York University. Schwartz was an accomplished mathematician who became interested in computer science during the 1960s. While working with John Cocke to learn and document a variety of compiler optimization algorithms, he got the idea of a high-level programming language able to describe such complex algorithms and data structures. It occurred to him that set theory could be the basis for such a language, since it was rich enough to serve as a foundation for all of mathematics. As his colleagues Martin Davis and Edmond Schonberg described it in their biographical memoir of him:
The central feature of the language is the use of sets and mappings over arbitrary domains, as well as the use of universally and existentially quantified expressions to describe predicates and iterations over composite structures. This set-theoretic core is embedded in a conventional imperative language with familiar control structures, subprograms, recursion, and global state in order to make the language widely accessible. Conservative for its time, it did not include higher-order functions. The final version of the language incorporated a backtracking mechanism (with success and fail primitives) as well as database operations. The popular scripting and general purpose programming language Python is understood to be a descendent of SETL, and its lineage is apparent in Python’s popularization of the use of mappings over arbitrary domains.
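That set-theoretic core maps almost directly onto SETL’s descendant Python; a rough illustration (Python syntax, not actual SETL):

```python
# Sets, mappings over arbitrary domains, and quantified expressions --
# the SETL core features named above, expressed in Python.

s = set(range(2, 30))

# A "mapping over an arbitrary domain": a set of pairs in SETL,
# a dict comprehension in Python.
square = {x: x * x for x in s}

# Existentially and universally quantified expressions become
# any(...) and all(...).
has_even = any(x % 2 == 0 for x in s)

# "The set of x in s such that no smaller y in s divides x":
primes = {x for x in s if all(x % y != 0 for y in s if y < x)}
```

The lineage the memoir mentions is visible here: comprehension syntax and dict-style mappings entered Python by way of ABC, whose designers knew SETL.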
Schwartz viewed SETL first as a specification language allowing complex algorithms and data structures to be written down, conveyed to other humans, and executed as a part of algorithm development or even as a component of a complete prototype system. Actual production use would typically require reprogramming in terms of data structures closer to the machine such as arrays and lists. Schwartz believed that SETL programs could be optimized “by a combination of automatic and programmer-assisted procedures.” [3, page 70] He wrote several memos about his ideas for SETL [4, 5], and began assembling a project team — mostly graduate students. A series of design notes and memos called the SETL Newsletter was launched.  Malcolm Harrison, another NYU professor, had designed an extensible LISP-like language called BALM; in the first SETL Newsletter he sketched a simple prototype of SETL as a BALM extension. 
Over the following years the SETL Newsletters chronicled a long and sometimes confusing series of SETL implementations, built with various versions of BALM and also with LITTLE, a low-level systems programming language.
BALMSETL (1971-1972) consisted of a runtime library of procedures corresponding to the various SETL operations, and a modification of BALM which replaced the standard BALM syntactic forms with calls to the appropriate procedures in the library. This runtime library used a hash-based representation of sets (earlier prototypes had used lists).
SETLB (spring 1972) consisted of a preprocessor (written in Fortran) that translated a simplified subset of SETL to BALMSETL. BALM was converted from producing interpretative code for a generalized BALM machine to producing CDC 6600 machine code.
SETLB.2 (1973?) was based upon a version of the BALM interpreter written in LITTLE, plus the SETL Run Time Library. It offered a limited capability for variation of the semantics of subroutine and function invocation by the SETLB programmer.
SETLA (1974?)’s input language was closer to SETL, but it still used the BALMSETL-based runtime library and BALM-based name scoping.
SETLC (1975?) consisted of a lexical scanner and syntactic analyzer (written in LITTLE), tree-walking routines (written in BALM) that built BALM parse trees, a translator (written in BALM) that emitted LITTLE from the parse trees, and the LITTLE compiler. The generated LITTLE code used the SETL Run Time Library.
SETL/LITTLE (1977-1978?) consisted of a SETL-to-LITTLE translator, a runtime library, and a LITTLE-to-CDC 6600 machine code compiler (all written in LITTLE).
The final system (the only one for which source code is available) was ported to the IBM System/370, Amdahl UTS, DECsystem-10, and DEC VAX. There was also a sophisticated optimizer, itself written in SETL, which however was too large and slow to use in production. Work stopped around the end of 1984 as Schwartz’s focus moved to other fields such as parallel computing and robotics and many of the graduate students received their degrees. A follow-on SETL2 project produced more SETL Newsletters but no system.
Many reports and theses were written and papers were published. Perhaps the best-known result was the NYU Ada project, an “executable specification” of Ada that became the first validated Ada implementation. Project members went on to found AdaCore and create the GNAT Ada compiler.
John Cocke and Jacob T. Schwartz. Programming Languages and Their Compilers. Preliminary Notes. 1968-1969; second revised version, April 1970. Courant Institute of Mathematical Sciences, New York University. PDF at softwarepreservation.org
Martin Davis and Edmond Schonberg. Jacob Theodore Schwartz 1930-2009: A Biographical Memoir. National Academy of Sciences, 2011. PDF at nasonline.org
 Jacob T. Schwartz. On Programming: An Interim Report on the SETL Project. Installment 1: Generalities; Installment 2: The SETL Language, and Examples of Its Use. Computer Science Department, Courant Institute of Mathematical Sciences, New York University, 1973; revised June 1975. PDF at softwarepreservation.org
 Jacob T. Schwartz. Set theory as a language for program specification and programming. Courant Institute of Mathematical Sciences, September 1970, 97 pages.
 Jacob T. Schwartz. Abstract algorithms and a set theoretic language for their expression. Computer Science Department, Courant Institute of Mathematical Sciences, New York University. Preliminary draft, first part. 1970-1971, 16+289 pages. PDF at softwarepreservation.org
Maarten van Emden died on January 4, 2023, at the age of 85. He was a pioneer of logic programming, a field he explored for much of his career. I was not in his field, and only got to know him starting in 2010, so this is a personal, but not professional, remembrance of a very dear friend.
Maarten was born in Velp, the Netherlands, but his family soon moved to the Dutch East Indies, where his botanist father worked on improving tea plants. In 1942 the Japanese invaded. Maarten’s father escaped to join the resistance, but Maarten, his younger sister, and his mother were sent to a detention camp. As the war came to a close, his father was able to rescue and reunite the family. Over the next few years they returned to the Netherlands, with a brief return to the newly-formed Indonesia, followed by boarding school in Australia for Maarten. They were finally reunited in the Netherlands in 1954, where Maarten began his final year of high school. After graduating in 1955, he went to national flight school (Rijksluchtvaartschool). He did a year of military service, including flight training, and then joined KLM Royal Dutch Airlines. But KLM was adopting DC-8 jets for transatlantic service, whose speed, capacity, and ease of operation led to the need for fewer pilots. Maarten took advantage of a company program to enroll part-time in an engineering curriculum at the University of Delft. Later he was laid off by KLM and finished a master’s degree as a full-time student. He then enrolled in the PhD program administered by the University of Amsterdam with research at the Mathematisch Centrum (now CWI), and also made several visits to the University of Edinburgh. His 1971 dissertation was An Analysis of Complexity and his advisor was Adriaan van Wijngaarden. Maarten was awarded a post-doctoral fellowship by IBM, which he spent at the Thomas J. Watson Research Center in Yorktown Heights, NY during the 1971-1972 academic year, before returning to Edinburgh for a research position under Donald Michie in the Department of Machine Intelligence. In 1975 he accepted a professorship at the University of Waterloo, and in 1987 he moved to the University of Victoria.
Maarten was one of 15 individuals recognized as Founders of Logic Programming by the Association for Logic Programming. His work began with an early collaboration with Bob Kowalski and continued throughout his career with collaborations and individual projects to explore many aspects of the field. Underlying his interest in logic programming was a fascination with programming and programming languages of all sorts. His first language was Algol 60, which he taught himself using McCracken’s new book when his university suddenly switched from Marchant calculators to a Telefunken TR-4 computer for the numerical methods course. Moving on to the MC, he was surrounded by ALGOL experts (his advisor van Wijngaarden was a member of the ALGOL 60 Committee and the instigator of the infamous ALGOL 68). Maarten was originally attracted to Edinburgh after hearing about the POP-2 timesharing system of Burstall and Popplestone; it was only later that he realized he’d initially used POP-2 as if it were ALGOL rather than a rich functional programming language. During his post-doc at IBM he learned APL and Lisp. Fred Blair was implementing a statically-scoped Lisp for the SCRATCHPAD computer algebra group. And William Burge, who had worked with Burstall and Landin, was spreading the gospel of functional programming. Ensconced in Edinburgh in 1972, he became an early convert to Kowalski’s logic programming, which he noted could be traced back as early as Cordell Green’s paper at the 4th Machine Intelligence workshop. But Maarten’s first impression of Preliminary Prolog was not positive — the frequent control annotations seemed to detract from the logic. Nevertheless, he and Kowalski began writing short programs to explore the ideas. And when David Warren returned from a visit to Marseille with a box of cards containing Final Prolog as well as his short but powerful WARPLAN program, things changed.
The language no longer needed the control annotations, and Warren quickly ported its Fortran-coded lowest layer to the local DEC-10. WARPLAN served as a tutorial for all sorts of programs in the new language. He was surprised his friend Alan Robinson, the inventor of resolution logic, wouldn’t give up Lisp for logic programming. At Waterloo, he advised Grant Roberts, who built Waterloo Prolog for the IBM System/370, and a series of students who built several Prologs for Unix. At Victoria, he wrote a first-year textbook for science and engineering students based on C:
It is indeed true that object-oriented programming represents a great advance. It is also true that polymorphism in object-oriented programming does away with many if-statements and switch statements; that iterators replace or simplify many loops. But experience has shown that introducing objects first does not lead to a first course that produces better programmers; on the contrary. It is as necessary as in the old days to make sure that students master variables, functions, branches, loops, arrays, and structures.
I had the good fortune to grow up in three distinctive programming cultures: the Mathematical Centre in Amsterdam, the Lisp group in the IBM T.J. Watson Research Center, and the Department of Machine Intelligence in the University of Edinburgh. Though all of these entities have ceased to exist, I trust I am not the only surviving beneficiary.
If this book is better than others, it is due to my choice of those who were, often without knowing it, my teachers: H. Abelson, J. Bentley, W. Burge, R. Burstall, M. Cheng, A. Colmerauer, T. Dekker, E. Dijkstra, D. Gries, C. Hoare, D. Hoffman, N. Horspool, B. Kernighan, D. Knuth, R. O’Keefe, P. Plauger, R. Popplestone, F. Roberts, G. Sussman, A. van Wijngaarden, N. Wirth.
As different as we were, Maarten and I had a few things in common: fathers who piloted B-24 bombers in WWII, a charismatic mutual friend named Jim Gray, attendance at the 1973 NATO Summer School on Structured Programming, books named Elements of Programming, and a fascination with the early development of programming languages. Jim Gray had been an informal mentor for me at UC Berkeley as I worked on CAL Snobol and CAL TSS. Then he left Berkeley for IBM Research in Yorktown, and made friends with Maarten. Jim soon decided he couldn’t tolerate life on the east coast, but before leaving he made Maarten and his wife Jos promise to drive across the country and visit him in California, where he would show them around. They took him up on the offer, and during a brief stay in fall 1972 at Jim’s home in Berkeley I met Maarten, but didn’t make much of an impression on him (although he later told me Jim had told him about the “great programmers on Cal TSS”). The next summer both Maarten and I attended the NATO Summer School on Structured Programming at Marktoberdorf, but neither of us remembered encountering the other. Maarten mentioned the summer school in his remembrance of Dijkstra.
In 1974 I caught up with Jim Gray again, joining IBM Research in San Jose (before Almaden). The next summer Maarten visited Jim, although I didn’t learn of it until much later:
“After I returned to Europe Jim and I kept writing letters. In the summer of 1975 I was in a workshop in Santa Cruz and Jim came up in a beautiful old Porsche. I was at the height of my logic programming infatuation. Jim was rather dismissive of it. Nothing of what he told me about System R turned me on; the relationship died with that meeting. How I wish I could talk to him now about the mathematics of RDBs, which I started working on recently.”
[Maarten van Emden, personal communication, September 3, 2010]
Maarten apparently left three technical reports with Jim, who passed them along to me. I looked at them, and then put them aside for the next 35 years. In the fall of 2010 I had retired and was spending more time on software history projects. I’d been following Maarten’s blog; a recent pair of articles about the Fifth Generation Computer System project and the languages Prolog and Lisp prompted me to contact him about a project I was contemplating: an historical archive of implementations of Prolog. That began a friendship carried out mostly through some 2000 emails and almost 400 weekly video calls, plus one in-person visit when Maarten visited the Bay Area in early 2011. I will always remember his charming manners, gentle humor, wide-ranging interests, and intriguing stories.
Thanks to Maarten’s daughter Eva van Emden for information about his life.
50 years ago Alain Colmerauer and his colleagues were working on Prolog 0:
“A draconian decision was made: at the cost of incompleteness, we chose linear resolution with unification only between the heads of clauses. Without knowing it, we had discovered the strategy that is complete when only Horn clauses are used.”
Now the friends of Alain Colmerauer are calling for 2022 to be “The Year of Prolog”. They’re marking the 50th anniversary with:
An Alain COLMERAUER Prize awarded by an international jury for the most significant achievement in Prolog technology.
A “Prolog School Bus” that will travel to reintroduce declarative programming concepts to the younger generation. This is a long-term initiative that will be launched during the year. The purpose of this “Tour de France” (and elsewhere) will be to familiarize schoolchildren with Prolog, as they are already familiar with the Scratch language. At the end of this school caravan, a prize will be awarded to the ‘nicest’ Prolog program written by a student.
It’s called An idea crazy enough… Artificial Intelligence and it’s being developed by Colmerauer’s friends at Prolog Heritage via a crowd-funded project at Ulule.
Colmerauer died 12 May 2017; the hoped-for tribute this fall has evolved:
The project is a film to portray Alain Colmerauer’s life and work – his contribution to Logic Programming and Constraint Logic Programming – all brought to life through interviews with some of the key participants of his time, complemented by images and documents from the archives. Interviews were, in fact, the best way to involve witnesses during this period of pandemic restrictions.
A 20 Euro contribution gets you an invitation to an exclusive preview of the film online; a 50 Euro contribution gets you the invitation, your name in the credits as a donor, and a digital version of the documentary.
On May 22 and 23, 2017, the Computer History Museum held a two-day meeting with more than 15 pioneering participants involved in the creation of the desktop publishing industry. There was a series of moderated group sessions and one-on-one oral histories of some of the participants, all of which were video recorded and transcribed.
Building on this meeting, three special issues of the IEEE Annals of the History of Computing were published, telling the stories of people, technologies, companies, and industries — far too much for me to cover here, so I will provide these links:
Web extras: Dave Walden’s page of corrections, links to the original video recordings and transcriptions, and legend for the group photograph heading this post.
Last but not least, I had the pleasure of interviewing Liz Bond Crews, who worked first at Xerox and then Adobe to forge relationships and understanding between the purveyors of new technology (laser printers and PostScript) and the type designers, typographers, and designers who adopted that technology. An edited version of that interview appears in the third special issue of Annals:
Logic programming has a long and interesting history with a rich literature comprising newsletters, journals, monographs, and workshop and conference proceedings. Much of that literature is accessible online, at least to people with the appropriate subscriptions. And there are a number of logic programming systems being actively developed, many of which are released as open source software.
Unfortunately, the early years of logic programming are not as consistently preserved. For example, according to dblp.org, the proceedings of the first two International Logic Programming Conferences are not available online, and according to worldcat.org, the closest library copies of the two are 1236 km. and 8850 km. away from my home in Silicon Valley. Early workshop proceedings and many technical reports are similarly hard to find (but see [1, 2]!). And the source code of the early systems, although at one time freely distributed from university to university, is now even more difficult to find.
As noted by people like Donald Knuth, Len Shustek, and Roberto Di Cosmo, software is a form of literature, and deserves to be preserved and studied in its original form: source code. Publications can provide overviews and algorithms, but ultimately the details are in the source code. About a year ago I began a project to collect and preserve primary and secondary source materials (including specifications, source code, manuals, and papers discussing design and implementation) from the history of logic programming, beginning with Marseille Prolog. This article is intended to bring awareness of the project to a larger circle than the few dozen people I’ve contacted so far. A web site with the materials I’ve found is available. I would appreciate suggestions for additional material, especially for the early years (say up through the mid 1980s). The web site is hosted by the Computer History Museum, which welcomes donations of historic physical and digital artifacts. It’s also worth noting the Software Heritage Acquisition Process, a process designed by Software Heritage in collaboration with the University of Pisa to curate and archive historic software source code.
Maarten van Emden provided the initial artifacts and introductions enabling me to begin this project. Luís Moniz Pereira provided enthusiastic support, scanned literature from the early 1980s [1, 2], and encouraged me to write this article. And a number of other people have generously contributed time and artifacts; they are listed in the Acknowledgements section as well as in individual entries of that web site.
A few months after my article “The LISP 2 Project” was published, I learned from Paul Kimpel that the language GTL includes a “non-standard” version of LISP 2. GTL stands for Georgia Tech Language. It is an extension of the Burroughs B 5500 Algol language, and its implementation extends the Burroughs Algol compiler. There is a new data type, SYMBOL, whose value can be an atomic symbol, a number, or a dotted pair. There is a garbage collector, and a way to save and restore memory using the file system. GTL was designed by Martin Alexander at the Georgia Institute of Technology between 1968 and 1969. The source code is available as part of the Burroughs CUBE library, version 13, and the manual is available via bitsavers.org; see here for details.
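A SYMBOL value, as described above, is a tagged union: an atomic symbol, a number, or a dotted pair of SYMBOLs. A minimal Python model of that structure (my invention for illustration; GTL itself extends Burroughs Algol, not Python) might look like:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    """An atomic symbol, e.g. A or NIL."""
    name: str

@dataclass(frozen=True)
class Pair:
    """A dotted pair of SYMBOL values."""
    car: "Symbol"
    cdr: "Symbol"

# A SYMBOL is an atom, a number, or a dotted pair.
Symbol = Union[Atom, int, float, Pair]

def cons(a: Symbol, b: Symbol) -> Pair:
    return Pair(a, b)

# (A . (1 . NIL)) -- a two-element list in dotted-pair notation
lst = cons(Atom("A"), cons(1, Atom("NIL")))
```

In GTL this union is a single runtime data type managed by the garbage collector; the sketch only shows the shape of the values.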
I first heard about LISP 2 around 1971, from a 1966 conference paper included in the reading for a U.C. Berkeley seminar on advanced programming languages. The goal of LISP 2 was to combine the strengths of numerically-oriented languages such as ALGOL and FORTRAN with the symbolic capabilities of LISP. The paper described the language and its implementation at some length, but by 1971 it was pretty clear that LISP 2 had not caught on; instead, the original LISP 1.5 had spawned a variety of dialects such as BBN-LISP, MACLISP, and Stanford LISP 1.6.
In 2005 I began a project to archive LISP history and kept encountering people who’d been involved with LISP 2, including Paul Abrahams, Jeff Barnett, Lowell Hawkinson, Michael Levin, Clark Weissman, Fred Blair, Warren Teitelman, and Danny Bobrow. By 2010 I had been able to scan LISP 2 documents and source code belonging to Barnett, Herbert Stoyan, and Clark Weissman. In 2012, after writing about Hawkinson and others in an extended blog post “Harold V. McIntosh and his students: Lisp escapes MIT,” I decided to try to tell the story of the LISP 2 project, where so many interesting people’s paths had crossed. My sources included original project documents as well as telephone and email interviews with participants, and several participants were kind enough to provide feedback on multiple drafts. I let the article sit in limbo for five years, but last year after I published another anecdote in the Annals, editor Dave Walden encouraged me to submit this one.
On December 28, 2017, as the article was about to go to press, Lowell Hawkinson died suddenly from an accident.
Lowell Hawkinson passed away at the age of 74 on December 28, 2017 as a result of an accident. Lowell was a pioneer in LISP implementation and artificial intelligence. He co-founded Gensym Corporation in 1986 and served as its CEO through 2006. This obituary gives more details of his life and accomplishments.
I first got in touch with Lowell in 2010 because of my interest in archiving LISP history. We exchanged emails (and had one phone conversation), and over the years I wrote several blog posts and a journal article about work involving him:
I wrote the article to chronicle the search I began in late 2003 to find the source code for the original FORTRAN compiler for the IBM 704. Much of the search was documented right here on this Dusty Decks blog, which I created in July 2004 as a sort of advertisement.
I’d like to thank Burt Grad for encouraging me to write the article. Burt is a friend who began working on computer software in 1954 and hasn’t stopped — for example, see here and here and here and here.
We are very grateful to our publisher and our translator Yoshiki Shibata for this opportunity to address our Japanese readers.
This book is in the spirit of Japanese esthetics: it tries to say as much as possible in as few words as necessary. We could not reduce it to 17 sounds like a traditional haiku, but we were inspired by its minimalist approach. The book does not have much fat. We actually hope that it does not have any fat at all. We tried to make every word matter.
We grew up when Japanese engineering was beginning to revolutionize the world with cars that did not break and television sets that never needed repairs. Our hope is that our approach to software will enable programmers to produce artifacts as solid as those products. Japanese engineers changed the experience of consumers from needing to know about cars, or at least to know a good mechanic, to assuming that the vehicle always runs. We hope that the software industry will become as predictable as the automotive industry and that software will be produced by competent engineers and not by inspired artist-programmers.
Our book is just a starting point; most work to industrialize software development is in the future. We hope that readers of this book will bring this future closer to reality.
We would like to thank Yoshiki Shibata not only for his very careful translation, but also for finding several mistakes in the original.
Update 5/21/2019: Genaro J. Martínez, Juan C. Seck-Tuoh-Mora, Sergio V. Chapa-Vergara, and Christian Lemaitre recently published a paper “Brief Notes and History [of] Computing in Mexico during 50 years” centered around McIntosh’s accomplishments. arXiv:1905.07527 [cs.GL] DOI
Update 3/31/2019: Here are photos from a November 2017 memorial held for McIntosh at the Faculty of Computer Science of the Institute of Science at Autonomous University of Puebla, Puebla, Mexico.
Harold V. McIntosh died November 30, 2015 in Puebla, Mexico. He was an American mathematician who became interested in what is now known as computer algebra to solve problems in physics, leading to his early adoption of the programming language LISP and to his design of the languages CONVERT (in collaboration with Adolfo Guzmán) and REC. His early education and employment was in the United States, but he spent the last 50+ years in Mexico, and received a Ph.D. in Quantum Chemistry at the University of Uppsala in 1972.
McIntosh was born in Colorado in 1929, the oldest of four sons of Charles Roy and Lillian (Martin) McIntosh. He attended Brighton High School in Brighton, near Denver. In 1949 he received a Bachelor of Science in physics from the Colorado Agricultural and Mechanical College, and in 1952 he received a Master of Science in mathematics from Cornell University. He did further graduate studies at Cornell and Brandeis, but stopped before receiving a Ph.D. to take a job at the Aberdeen Proving Ground. Two years later, he moved to RIAS (Research Institute for Advanced Studies), a division of the Glenn L. Martin Company. Around 1962 he accepted a position in the Physics and Astronomy department and the Quantum Theory Project at the University of Florida. After two years at the University of Florida, McIntosh accepted an offer at CENAC (Centro Nacional de Calculo, Instituto Politecnico Nacional) in Mexico. Over the next years, McIntosh worked in various positions in Mexico at Instituto Politecnico Nacional, Instituto Nacional de Energía Nuclear, and, from 1975 on, Universidad Autónoma de Puebla.
McIntosh was widely regarded for his research, writing and teaching; for details, see Gerardo Cisneros-S.: “La computación en México y la influencia de H. V. McIntosh en su desarrollo” (PDF). He organized several special summer programs in the early 1960s that introduced a number of students to higher mathematics and computer programming (see here for example). He also had a lifelong interest in flexagons, which he shared with his students. A symposium in his honor was held a month before he died.
Celebration of late Prof. Harold V. McIntosh achievements. Faculty of Computer Science of the Institute of Science at Autonomous University of Puebla, Puebla, Mexico, November 29, 2017. Online at uncomp.uwe.ac.uk
José Manuel Gómez Soto for notifying me of McIntosh’s death and supplying the link to this obituary; Robert Yates, Lowell Hawkinson, and Adolfo Guzmán Arenas for their contributions to “Harold V. McIntosh and his students: Lisp escapes MIT”; and Genaro Juarez Martinez for informing me about the memorial celebration.
Deutsch is a computer scientist who made important contributions to interactive implementations of Lisp and Smalltalk. While he was in high school, he implemented the first interactive Lisp interpreter, running on a DEC PDP-1 computer. While still in high school, he worked with Calvin Mooers on the design of TRAC, and implemented the language on a PDP-1 at BBN. Then Deutsch enrolled at the University of California at Berkeley, where he soon joined Project Genie, one of the earliest timesharing systems. Meanwhile, at BBN, Deutsch’s original PDP-1 Lisp became the “conceptual predecessor” of BBN-Lisp, running first on the PDP-1, then the SDS-940 (running the Project Genie timesharing system), and finally the PDP-10 running BBN’s own TENEX. After several of the BBN-Lisp creators, including Deutsch, moved to Xerox PARC, BBN-Lisp became INTERLISP. By this time, Deutsch had received his bachelor’s degree at Berkeley, and with other Project Genie alumni had co-founded Berkeley Computer Corporation, which built a large timeshared computer (the BCC-500) but then went bankrupt. While working at PARC, Deutsch also attended graduate school at Berkeley, carrying out the research on program verification that produced the PIVOT system.
Deutsch was kind enough to donate his only source listing of PIVOT to the Computer History Museum (Lot number X7485.2015), and to allow scans of the listing and his thesis to be posted on the SPG web site.
Programming languages have emerged over the last fifty or so years as one of the most important tools allowing humans to convert computers from theoretical curiosities to the foundation of a new era. By inventing C++ and working tirelessly to evolve, implement, standardize, and propagate it, Bjarne has provided a foundation for software development across the gamut from systems to applications, embedded to desktop to server, commercial to scientific, across CPUs and operating systems.
C was the first systems programming language to successfully abstract the machine model of modern computers. It contained a simple but expressive set of built-in types and structuring methods (structs, arrays, and pointers) allowing efficient access to both the processor and the memory. C++ preserves C’s core model, but adds abstraction mechanisms allowing programmers to maintain efficiency and control while extending the set of types and structures.
While several of the individual ideas in C++ occurred in earlier research or programming languages, Bjarne has synthesized them into an industrial-strength language, making them available to production programmers in a coherent language. These ideas include type safety, abstract data types, inheritance, parametric polymorphism, exception handling, and more.
In addition to its synthesis of earlier paradigms, C++ pioneered a thorough and robust implementation of templates and overloading. This has enabled the field of generic programming to advance from early research to sufficient maturity for inclusion in the C++ standard library in the form of STL. Additional generic libraries for graphs, matrix algebra, image processing, and other areas are available from other sources such as Boost.
By combining efficiency and powerful abstraction mechanisms in a single language, and by achieving ubiquity across machines and operating systems, C++ has replaced a variety of more specialized languages such as C, Ada, Smalltalk, and even Fortran. Programmers need to learn fewer languages; platform providers need to invest in fewer compilers. Even newer languages such as Java and C# are clearly influenced by the design of C++.
C++ has been the major focus of Bjarne’s career, and he has had unprecedented success at incremental development of C++, keeping it stable enough for use by real programmers while gradually evolving the language to serve the broadest possible audience. As he evolved the language, Bjarne took on many additional roles in order to make C++ successful: author (books, papers, articles, lectures, and more), teacher, standardization leader, and marketing person.
Bjarne started working on C with Classes, a C++ precursor, in 1979; the first commercial release of C++ was in 1985; the first standard was in 1998, and a major revision was completed in 2011. C++ was mature enough by 1993 to be included in the second History of Programming Languages conference, and a follow-on paper focusing on standardization appeared in the third HOPL conference in 2007.
I hope you enjoy reading the oral history as much as I did interviewing Bjarne.