John Allen (1937-2022) and Anatomy of LISP

When I began researching the history of LISP in 2005 [1], one of the first people I got in touch with was John Allen. Not only had he helped create Stanford LISP 1.6 and written an influential book on the LISP language and its implementation (Anatomy of LISP), but he and his wife, Ruth Davis, had also organized (and funded!) the 1980 LISP conference [2] and founded The LISP Company, which produced TLC Lisp for Intel 8080- and 8086-based microcomputers. By the time I got in touch with him he was busy preparing a talk for another LISP conference [3], but told me:

in 1964 i got interested in lisp and wrote to mit. tim hart responded, sending the distribution tape and wished my luck. i needed it because the mit tape was for a machine related, but not identical, to the one i had at work. so to make a long story short, i got the tape converted and running.

what i have is a november 1964 listing of the tape’s contents. that includes the source card-images for lisp 1.5 plus the initial lisp library written in lisp. that includes bunches of test cases, the compiled compiler and other random crap. the listing does have some of my comments related to the conversion, but the original text is quite clear.

i was planning to bring it along when i go to stanford june 19-22, and would be willing to let someone scan it if desired. i’ve got other crap around in random piles, boxes, and “archives.”

I gave a 5-minute pitch at the conference for my LISP history project, but did not catch up with John. I tried to stay in touch with periodic emails, and ran into John and Ruth in person at John McCarthy’s 2012 Stanford memorial. But somehow we could never coordinate to scan his listing and other items from his “archives”.

Then in 2022 Ruth contacted me with the sad news that John had died in March of that year. Remembering our long correspondence regarding John’s LISP materials, she invited me to help her sort through John’s papers:

I have come across a PDP-1 notebook with a THOR manual (I think), some tapes containing I know not what, some copies of handwritten lecture notes (I think) of Dana Scott and Georg Kreisel. And I haven’t made it to the closet yet. I am throwing out a lot, and I may not have the right sensibilities to know what may be of interest.

I jumped at the chance, and spent an afternoon with her, bringing home several boxes of materials including the 1964 LISP 1.5 listing. Also she told me she’d discovered the copyright release for John’s famous book Anatomy of LISP and would be happy for an electronic edition to be posted online. That suggested an opportunity: ACM had included Anatomy in their 2006 Classic Books collection, but at that time the copyright status was not clear and an electronic edition could not be included, as was possible for many of the books in the collection. John wrote the book using Larry Tesler’s PUB document compiler, and many drafts were preserved in Bruce Baumgart’s SAILDART archive. John’s original plan was to adapt PUB’s output to a phototypesetter to achieve “book quality” output, but that did not work out so the book was published from pages printed on a Xerox XGP printer, at a low 192 dots/inch. Bruce Baumgart very generously did some work to recreate a bitmap-based PDF from SAILDART files, but it was still at 192 dpi and the content didn’t quite match the published version. So Ruth and I offered a clean, scanned PDF to ACM, and after verifying the rights they added this PDF to the website. [4]

The Computer History Museum accepted Ruth’s donation of the LISP 1.5 listing, some Stanford PDP-1 timesharing system documents (TVEDIT, RAID, and THOR), John’s Alvine LISP editor manual, John’s 1971-1972 lecture notes for the E123A course he taught at UCLA, three early MDL/MUDDLE documents, some ECL documents from Harvard, some early theorem proving reports (John implemented a theorem prover with David Luckham [5]), reports by Christopher Strachey, Dana Scott, and Peter Wegner, an early 1979 version (by Harold Abelson and Robert Fano) of Structure and Interpretation of Computer Programs, and the four magnetic tapes Ruth had mentioned. (Al Kossow has promised to digitize the tapes; I suspect one contains John’s theorem prover; two may contain other LISP code.)

John Allen was a passionate computer scientist and educator. He held programming or research positions at Burroughs Sierra Madre, UC Santa Barbara, and GE Research (Goleta) in the early 1960s and at HP Labs and Signetics in the 1970s, as well as programming and research positions at Stanford from 1965 to about 1975, interspersed with teaching assignments at UCLA, UC Santa Cruz, and San Jose State. He also periodically taught part-time at Santa Clara University from 1984 to 2005. I wish I could have gotten to know him better.

[1] Paul McJones. Archiving LISP History, Dusty Decks blog, 22 May 2005.

[2] Ruth E. Davis and John R. Allen, co-organizers. Conference Record of the 1980 LISP Conference. Later reissued as: LFP ’80: Proceedings of the 1980 ACM conference on LISP and functional programming

[3] John Allen. History, Mystery, and Ballast. International Lisp Conference, Stanford University, 2005.

[4] John Allen. Anatomy of LISP. McGraw-Hill, Inc., 1978. Part of ACM’s Classic Books Collection. ACM Digital Library (open access)

[5] John Allen and David Luckham. An interactive theorem proving program. In Machine Intelligence 5, B. Meltzer and D. Michie, Eds., Edinburgh University Press, Edinburgh, 1970, pp. 321-336.

P.S. After some study of the Georg Kreisel notes that Ruth mentioned, I believe they correspond to this item:

57. Kreisel, G. Intuitionistic Mathematics. Lecture delivered at Stanford University, 1962?, 270 pp. [Mimeographed].

in this Kreisel bibliography:

Xerox PARC IFS archive

In 2014, the Computer History Museum released the Xerox Alto file server archive, constituting about 15,000 files for the Xerox Alto personal computer, including the Alto operating system; BCPL, Mesa, and (portions of the) Smalltalk programming environments; applications such as Bravo, Draw, and the Laurel email client; fonts and printing software (PARC had the first laser printers); and server software (including the IFS file server and the Grapevine distributed mail and name server). I told the story behind that archive here.

Today CHM released the Xerox PARC Interim File System (IFS) archive:

The archive contains nearly 150,000 unique files—around four gigabytes of information—and covers an astonishing landscape: programming languages; graphics; printing and typography; mathematics; networking; databases; file systems; electronic mail; servers; voice; artificial intelligence; hardware design; integrated circuit design tools and simulators; and additions to the Alto archive.

A blog post by David Brock introduces the archive. Access to the archive itself is available here.

I began working on this project in 2018 under an NDA with PARC: reading the old media prepared years earlier by Al Kossow, updating the conversion software I’d written for the earlier Alto project, and winnowing down a list of 300,000 files to the 150,000 files that I submitted to PARC management for approval. David Brock’s post ends with an Acknowledgments section noting all the people at CHM and PARC who contributed.

A CAL TSS debugging tool

One of the artifacts preserved from the CAL Timesharing System project is a deck of fourteen 80-column binary cards labeled “TSS PM DUMP”. This is a program for a CDC 6000 series peripheral processor unit (PPU) that performs a post mortem dump to magnetic tape of the complete state of a (crashed) system: PPU memories, CPU memory, the exchange package, and extended core storage. Another system utility program, TSS PP DUMP-TAPE SCANNER, allows selective display of portions of the dump on either the teletype or one of the system console displays. I believe Howard Sturgis wrote the PPU program and Keith Standiford wrote the CPU program.

I suspected this card deck was a version of a PPU program called DMP, for which a listing exists. Carl Claunch very generously offered to read (digitize) the cards, producing a binary file tss.hex. I queried the ControlFreaks mailing list regarding a PPU disassembler, and Daiyu Hurst sent me a program ppdis.c that generates a listing with resolved opcodes and the address, octal, and textual representations of each 12-bit word. After upgrading it to eliminate some K&R C function definitions and changing its input format to match tss.hex, I ran it, captured the output, and began annotating it. As expected, the code matched the DMP listing very closely, so I used that listing’s variable names, labels, and comments to update the output of ppdis.c, adding a few additional comments, including notes on slight differences from the DMP listing.
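To give a flavor of the octal-and-text columns in such a listing, here is a small, hypothetical Python sketch. The display-code table is my assumption based on the standard CDC 6-bit character set (01-32 octal for A-Z, 33-44 octal for 0-9); ppdis.c’s actual output format may differ.

```python
# Hypothetical sketch of one listing line: a 12-bit PPU word shown as
# octal plus its two 6-bit CDC display-code characters.

def display_char(code):
    """Map a 6-bit code to a character; codes outside A-Z/0-9 print as '.'."""
    if 0o01 <= code <= 0o32:
        return chr(ord('A') + code - 0o01)   # 01-32 octal -> A..Z (assumed)
    if 0o33 <= code <= 0o44:
        return chr(ord('0') + code - 0o33)   # 33-44 octal -> 0..9 (assumed)
    return '.'

def format_word(addr, word):
    word &= 0o7777                           # PPU words are 12 bits
    hi, lo = (word >> 6) & 0o77, word & 0o77 # two 6-bit characters per word
    return f"{addr:04o}  {word:04o}  {display_char(hi)}{display_char(lo)}"

print(format_word(0o100, 0o0102))  # → 0100  0102  AB
```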

The first card is a loader that loads the subsequent cards until it encounters one with a 6-7-8-9 punch in column one. The loader card itself is loaded via the deadstart panel. Page 14 of the CAL TSS Operator’s Manual explains:


    Unfortunately, the dump program requires a different deadstart panel from the system dead start program. Reset the deadstart panel to CAL TSS I, push the deadstart button, read the deck “TSS POST MORTEM” into the card reader, mount a tape on unit 0, and stand back and watch it go. After the tape unloads, reset the deadstart panel to CAL TSS II and dead start the system as usual. Record the reel on which the dump was made along with the other information relevant to the crash.

The Operator’s Manual also contains a set of CAL-TSS FAILURE LOG forms recording crashes and attempts to diagnose them.

The program on the cards is very similar to the DMP listing (which doesn’t include the loader card), with slightly different addresses and one or two small changes in the code.

The general structure of the program is to dump PPU 0 (whose memory is partially overlaid by the DMP program itself), then reuse that memory as working space to dump PPUs 1-9, the exchange package (CPU registers), the CPU memory, and extended core storage, and finally to write a trailer record. The console display is used for operator messages: a request to mount a tape on drive zero, progress messages indicating which phase is under way, and several error messages.

This doesn’t sound like a terribly difficult task, but it requires about 1000 instructions on the PPU, which has 12-bit words, one register, no multiply or divide, and an instruction time of 1 to 4 microseconds. There are some additional complications:

  1. A PPU can’t access the memory of another PPU. When the overall system is deadstarted, PPU 0 begins running a 12-instruction program loaded from toggle switches on the deadstart panel, and the other PPUs are each suspended on an input instruction on a different I/O channel. Thus PPU 0 sends a short program to each one instructing it to output its own memory on a channel, which PPU 0 inputs and then outputs to the tape drive.
  2. Similarly, a PPU can’t access extended core storage (ECS). So PPU 0 repeatedly writes a short program to the CPU memory that reads the next block from ECS to CPU memory, then does an “exchange jump” to cause the CPU to execute that program. The PPU then reads the block from CPU memory and writes it to tape.
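To make the ECS relay in item 2 concrete, here is a toy Python simulation of the loop: the PPU cannot read ECS directly, so it repeatedly asks the CPU (via an exchange jump) to copy one block of ECS into CPU memory, then reads that block and writes it to tape. The block size and data are invented for illustration; the real program worked in PPU/CPU words and I/O channels, not Python lists.

```python
# Toy simulation of the ECS dump loop described above (all values invented).

BLOCK = 4                        # words per block (tiny, for illustration)

ecs = list(range(12))            # pretend extended core storage contents
cpu_memory = [0] * BLOCK         # staging buffer in CPU memory
tape = []                        # the dump tape, one record per block

def cpu_copy_block(start):
    """Stand-in for the short CPU program the PPU starts via an exchange jump."""
    for i in range(BLOCK):
        cpu_memory[i] = ecs[start + i]

for start in range(0, len(ecs), BLOCK):
    cpu_copy_block(start)          # PPU plants the program, exchange-jumps the CPU
    tape.append(list(cpu_memory))  # PPU reads CPU memory, writes a tape record

print(tape)  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
```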

Here is the annotated listing.

Update 2024/01/18: Linked to the copy of the listing (etc.) at

CAL Timesharing System: Before computers were personal

In 2023 computers are all around us: our phones, tablets, laptops, and desktops, and lurking inside our television sets, appliances, and automobiles, to say nothing of our workplaces and the internet. It wasn’t always that way: I was born in 1949, just as the first stored-program digital computers were going into operation. Those computers were big, filling a room, and difficult to use. Initially a user would sign up for a block of time to test and run a program that had been written and punched onto paper tape or 80-column cards.

The fact that an expensive computer sat idle while the user was thinking or mounting tapes seemed wasteful, so people designed batch operating systems that would run programs one after another, with a trained operator mounting tapes just before they were needed. The users submitted their card decks and waited in their offices until their programs had run and the listings had been printed. While this was more efficient, there was a demand for computers that operated in “real time”, interacting with people and other equipment. MIT’s Whirlwind, TX-0, and TX-2 and Wes Clark’s LINC are examples.

The ability to interact directly with a computer via a terminal (especially when a display was available) was compelling, and computers were becoming much faster, which led to the idea of timesharing: making the computer divide its attention among a set of users, each with a terminal. Ideally the computer would have enough memory and speed so each user would get good service. Early timesharing projects included CTSS at MIT, DTSS at Dartmouth, and Project Genie at Berkeley. By 1966, Berkeley (that is, the University of California at Berkeley) decided to replace its IBM batch system with a larger computer that would provide interactive (time-shared) service as well as batch computing. None of the large commercial computers came with a timesharing system, so Berkeley decided they would build their own. The story of that project—from conception, through funding, design, implementation, (brief) usage, to termination—is told here:

  • Paul McJones and Dave Redell. History of the CAL Timesharing System. IEEE Annals of the History of Computing, Vol. 45, No. 3 (July-September 2023). IEEE Xplore (Open access)

How did I come to write that paper? In the winter of 1968-1969 I was invited to join the timesharing project. At that time I had about two years of programming experience, gained in classes and on the job during high school and college (Berkeley). That wasn’t much, but it included one good-sized project—a Snobol4 implementation with Charles Simonyi—so the team welcomed me. For the next three years I helped build the CAL Timesharing System, performed some maintenance on the Snobol4 system, and finished my bachelor’s degree. In December 1971, CAL TSS development was canceled, and I graduated and moved on to the CRMS APL project elsewhere on campus.

Those three years were hectic but immensely enjoyable. The team was small, with under a dozen people, housed first in an old apartment on Channing Way and then in the brand-new Evans Hall. Lifelong friendships were formed. People often worked into the night, when the computer was available, and then trooped over to a nearby hamburger joint for a late meal. Exciting things were going on around us. There were protests, the Vietnam War, and the first moon landings. Rock music seemed fresh and exciting. I had met my future wife in 1968, and we were married in 1970.

As CAL TSS came to an end, we all agreed the experience could never be equalled. But we didn’t realize people in the future would be interested in studying our system, so we weren’t careful about preserving the magnetic tapes. However, many of us kept manuals, design documents, and listings, plus a few tapes. In 1980 and again in 1991 we had reunions, and I offered to store everything until it became clear what to do for the long run. Around 2003 I started scanning the materials and organizing a web site. In 2022 the Computer History Museum agreed to accept the physical artifacts, and this year they agreed to host the web site:

Jack Schwartz and SETL

In April 2020, just as the Covid pandemic began, Annie Liu, a professor at Stony Brook University, emailed me to chat about programming language history. She suggested that Python, with its antecedents SETL, ABC, and C, would be a good topic for historical study and preservation. I mentioned that I’d considered SETL as an interesting topic back in the early 2000s, but unfortunately had not acted. After a few more rounds of email with her, I began looking around the web, and Annie introduced me to several SETL people. Starting with these people, a few other personal contacts, and some persistence, I was soon in touch with much of the small but friendly SETL community, who very generously pored through their files and donated a wide variety of materials. The result is an historical archive of materials on the SETL programming language, including source code, documentation, and an extensive set of design notes; it is available at the Software Preservation Group web site:

In addition, the digital artifacts and some of the physical artifacts are now part of the Computer History Museum’s permanent collection.

The SETL programming language was designed by Jack Schwartz at the Courant Institute of Mathematical Sciences at New York University. Schwartz was an accomplished mathematician who became interested in computer science during the 1960s. While working with John Cocke to learn and document a variety of compiler optimization algorithms, he got the idea of a high-level programming language able to describe such complex algorithms and data structures. [1] It occurred to him that set theory could be the basis for such a language, since it was rich enough to serve as a foundation for all of mathematics. As his colleagues Martin Davis and Edmond Schonberg described it in their biographical memoir of him: [2]

The central feature of the language is the use of sets and mappings over arbitrary domains, as well as the use of universally and existentially quantified expressions to describe predicates and iterations over composite structures. This set-theoretic core is embedded in a conventional imperative language with familiar control structures, subprograms, recursion, and global state in order to make the language widely accessible. Conservative for its time, it did not include higher-order functions. The final version of the language incorporated a backtracking mechanism (with success and fail primitives) as well as database operations. The popular scripting and general purpose programming language Python is understood to be a descendent of SETL, and its lineage is apparent in Python’s popularization of the use of mappings over arbitrary domains.
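The set-theoretic core Davis and Schonberg describe maps naturally onto a modern language with set comprehensions. Here is a rough Python analogue; the SETL fragments in the comments are paraphrased for flavor, not verbatim SETL syntax.

```python
# Rough Python analogue of SETL's core idioms (SETL fragments in comments
# are paraphrased, not exact SETL syntax).

s = {1, 2, 3, 4, 5}                      # s := {1..5};
squares = {x * x for x in s}             # squares := {x*x : x in s};

# A "mapping" in SETL is just a set of ordered pairs over arbitrary domains.
f = {(x, x * x) for x in s}
image = {y for (x, y) in f if x > 2}     # image of f over {x in s | x > 2}

# Universally / existentially quantified expressions used as predicates.
all_positive = all(x > 0 for x in s)     # forall x in s | x > 0
has_even = any(x % 2 == 0 for x in s)    # exists x in s | x mod 2 = 0

print(sorted(squares), sorted(image), all_positive, has_even)
```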

Schwartz viewed SETL first as a specification language allowing complex algorithms and data structures to be written down, conveyed to other humans, and executed as a part of algorithm development or even as a component of a complete prototype system. Actual production use would typically require reprogramming in terms of data structures closer to the machine such as arrays and lists. Schwartz believed that SETL programs could be optimized “by a combination of automatic and programmer-assisted procedures.” [3, page 70] He wrote several memos about his ideas for SETL [4, 5], and began assembling a project team — mostly graduate students. A series of design notes and memos called the SETL Newsletter was launched. [6] Malcolm Harrison, another NYU professor, had designed an extensible LISP-like language called BALM; in the first SETL Newsletter he sketched a simple prototype of SETL as a BALM extension. [7]

Over the following years the SETL Newsletters chronicled a long and sometimes confusing series of SETL implementations, built with various versions of BALM and also with LITTLE, a low-level systems programming language.

  • BALMSETL (1971-1972) consisted of a runtime library of procedures corresponding to the various SETL operations, and a modification of BALM which replaced the standard BALM syntactic forms with calls to the appropriate procedures in the library. This runtime library used a hash-based representation of sets (earlier prototypes had used lists).
  • SETLB (spring 1972) consisted of a preprocessor (written in Fortran) that translated a simplified subset of SETL to BALMSETL. BALM was converted from producing interpretative code for a generalized BALM machine to producing CDC 6600 machine code.
  • SETLB.2 (1973?) was based upon a version of the BALM interpreter written in LITTLE, plus the SETL Run Time Library. It offered a limited capability for variation of the semantics of subroutine and function invocation by the SETLB programmer.
  • SETLA (1974?)’s input language was closer to SETL, but it still used the BALMSETL-based runtime library and BALM-based name scoping.
  • SETLC (1975?) consisted of a lexical scanner and syntactic analyzer (written in LITTLE), tree-walking routines (written in BALM) that built BALM parse trees, a translator (written in BALM) that emitted LITTLE from the parse trees, and the LITTLE compiler. The generated LITTLE code used the SETL Run Time Library.
  • SETL/LITTLE (1977-1978?) consisted of a SETL-to-LITTLE translator, a runtime library, and a LITTLE-to-CDC 6600 machine code compiler (all written in LITTLE).

The final system (the only one for which source code is available) was ported to the IBM System/370, Amdahl UTS, DECsystem-10, and DEC VAX. There was also a sophisticated optimizer, itself written in SETL, which was, however, too large and slow to use in production. Work stopped around the end of 1984, as Schwartz’s focus moved to other fields such as parallel computing and robotics and many of the graduate students received their degrees. A follow-on SETL2 project produced more SETL Newsletters but no system.

Other SETL implementations

Starting in the mid-1970s, SETL-influenced languages were implemented at other institutions, including Akademgorodok in Novosibirsk, and then at NYU itself. After a 30-year gestation period, GNU SETL was released in 2022. See for more.


Many reports and theses were written, and papers were published. Perhaps the best-known result was the NYU Ada project, an “executable specification” of Ada that became the first validated Ada implementation. Project members went on to found AdaCore and develop the GNAT Ada compiler.


In addition to Annie Liu, many people helped me on this project; see .


[1] John Cocke and Jacob T. Schwartz. Programming Languages and Their Compilers. Preliminary Notes. 1968-1969; second revised version, April 1970. Courant Institute of Mathematical Sciences, New York University.

[2] Martin Davis and Edmond Schonberg. Jacob Theodore Schwartz 1930-2009: A Biographical Memoir. National Academy of Sciences, 2011.

[3] Jacob T. Schwartz. On Programming: An Interim Report on the SETL Project. Installment 1: Generalities; Installment 2: The SETL Language, and Examples of Its Use. Computer Science Department, Courant Institute of Mathematical Sciences, New York University, 1973; revised June 1975.

[4] Jacob T. Schwartz. Set theory as a language for program specification and programming. Courant Institute of Mathematical Sciences, September 1970, 97 pages.

[5] Jacob T. Schwartz. Abstract algorithms and a set theoretic language for their expression. Computer Science Department, Courant Institute of Mathematical Sciences, New York University. Preliminary draft, first part. 1970-1971, 16+289 pages.

[6] SETL Newsletter. #1-#217, November 1970 – November 1981; #220-#233, April 1987 – November 1988.

[7] M. C. Harrison. BALM-SETL: A simple implementation of SETL. SETL Newsletter #1, 5 November 1970.

Remembering Maarten van Emden

Maarten van Emden died on January 4, 2023, at the age of 85.[1] He was a pioneer of logic programming, a field he explored for much of his career. I was not in his field, and only got to know him starting in 2010, so this is a personal, but not professional, remembrance of a very dear friend.

Maarten van Emden, 26 February 2011

His life

Maarten was born in Velp, the Netherlands, but his family soon moved to the Dutch East Indies, where his botanist father was working on improving tea plants. In 1942 the Japanese invaded. Maarten’s father escaped to join the resistance, but Maarten, his younger sister, and his mother were sent to a detention camp. As the war came to a close, his father was able to rescue and reunite the family. Over the next few years they returned to the Netherlands, with a brief return to the newly-formed Indonesia, followed by boarding school in Australia for Maarten. They were finally reunited in the Netherlands in 1954, where Maarten began his final year of high school. After graduating in 1955, he went to national flight school (Rijksluchtvaartschool). He did a year of military service, including flight training, and then joined KLM Royal Dutch Airlines. But KLM was adopting DC-8 jets for transatlantic service, whose speed, capacity, and ease of operation led to the need for fewer pilots. Maarten took advantage of a company program to enroll part-time in an engineering curriculum at the University of Delft. Later he was laid off by KLM and finished a master’s degree as a full-time student. He then enrolled in the PhD program administered by the University of Amsterdam with research at the Mathematisch Centrum (now CWI), and also made several visits to the University of Edinburgh. His 1971 dissertation was An Analysis of Complexity, and his advisor was Adriaan van Wijngaarden. Maarten was awarded a post-doctoral fellowship by IBM, which he spent at the Thomas J. Watson Research Center in Yorktown Heights, NY during the 1971-1972 academic year, before returning to Edinburgh for a research position under Donald Michie in the Department of Machine Intelligence. In 1975 he accepted a professorship at the University of Waterloo, and in 1987 he moved to the University of Victoria.


Maarten was one of 15 individuals recognized as Founders of Logic Programming by the Association for Logic Programming.[2] His work began with an early collaboration with Bob Kowalski[3] and continued throughout his career with collaborations and individual projects exploring many aspects of the field. Underlying his interest in logic programming was a fascination with programming and programming languages of all sorts.[4] His first language was Algol 60, which he taught himself using McCracken’s new book[5] when his university suddenly switched from Marchant calculators to a Telefunken TR-4 computer for the numerical methods course.[6] Moving on to the MC, he was surrounded by ALGOL experts (his advisor van Wijngaarden was a member of the ALGOL 60 Committee and the instigator of the infamous ALGOL 68). Maarten was originally attracted to Edinburgh after hearing about the POP-2 timesharing system of Burstall and Popplestone; only later did he realize he’d initially used POP-2 as if it were ALGOL rather than a rich functional programming language. During his post-doc at IBM he learned APL and Lisp: Fred Blair was implementing a statically-scoped Lisp for the SCRATCHPAD computer algebra group,[7] and William Burge, who had worked with Burstall and Landin, was spreading the gospel of functional programming.[8]

Ensconced in Edinburgh in 1972, he became an early convert to Kowalski’s logic programming, which he noted could be traced back as early as Cordell Green’s paper at the 4th Machine Intelligence workshop.[9] But Maarten’s first impression of Preliminary Prolog was not positive — the frequent control annotations seemed to detract from the logic. Nevertheless, he and Kowalski began writing short programs to explore the ideas. And when David Warren returned from a visit to Marseille with a box of cards containing Final Prolog as well as his short but powerful WARPLAN program, things changed. The language no longer needed the control annotations, and Warren quickly ported its Fortran-coded lowest layer to the local DEC-10. WARPLAN served as a tutorial for all sorts of programs in the new language. Maarten was surprised that his friend Alan Robinson, the inventor of resolution logic, wouldn’t give up Lisp for logic programming.[10] At Waterloo, he advised Grant Roberts, who built Waterloo Prolog for the IBM System/370, and a series of students who built several Prologs for Unix. At Victoria, he wrote a first-year textbook for science and engineering students based on C:

It is indeed true that object-oriented programming represents a great advance. It is also true that polymorphism in object-oriented programming does away with many if-statements and switch statements; that iterators replace or simplify many loops. But experience has shown that introducing objects first does not lead to a first course that produces better programmers; on the contrary. It is as much necessary as in the old days to make sure that students master variables, functions, branches, loops, arrays, and structures.

[11], page xi

In the acknowledgements of the book, he wrote:

I had the good fortune to grow up in three distinctive programming cultures: the Mathematical Centre in Amsterdam, the Lisp group in the IBM T.J. Watson Research Center, and the Department of Machine Intelligence in the University of Edinburgh. Though all of these entities have ceased to exist, I trust I am not the only surviving beneficiary.

If this book is better than others, it is due to my choice of those who were, often without knowing it, my teachers: H. Abelson, J. Bentley, W. Burge, R. Burstall, M. Cheng, A. Colmerauer, T. Dekker, E. Dijkstra, D. Gries, C. Hoare, D. Hoffman, N. Horspool, B. Kernighan, D. Knuth, R. O’Keefe, P. Plauger, R. Popplestone, F. Roberts, G. Sussman, A. van Wijngaarden, N. Wirth.

[11], page xi

Getting to know Maarten

As different as we were, Maarten and I had a few things in common: fathers who piloted B-24 bombers in WWII, a charismatic mutual friend named Jim Gray, attendance at the 1973 NATO Summer School on Structured Programming, books named Elements of Programming, and a fascination with the early development of programming languages. Jim Gray had been an informal mentor for me at UC Berkeley as I worked on CAL Snobol and Cal TSS. Then he left Berkeley for IBM Research in Yorktown, and made friends with Maarten. Jim soon decided he couldn’t tolerate life on the east coast, but before leaving he encouraged Maarten and his wife Jos to drive across the country and visit him in California, where he would show them around. They took him up on the offer, and during a brief stay in fall 1972 at Jim’s home in Berkeley I met Maarten, but didn’t make much of an impression on him (although he later told me Jim had mentioned the “great programmers on Cal TSS”). The next summer both Maarten and I attended the NATO Summer School on Structured Programming at Marktoberdorf, but neither of us remembered encountering the other. Maarten mentioned the summer school in his remembrance of Dijkstra.[12]

In 1974 I caught up with Jim Gray again, joining IBM Research in San Jose (before Almaden). The next summer Maarten visited Jim, although I didn’t learn of it until much later:

“After I returned to Europe Jim and I kept writing letters. In the summer of 1975 I was in a workshop in Santa Cruz and Jim came up in a beautiful old Porsche.  I was at the height of my logic programming infatuation. Jim was rather dismissive of it. Nothing of what he told me about System R  turned me on; the relationship died with that meeting. How I wish I could talk to him now about the mathematics of RDBs, which I started working on recently.”

[Maarten van Emden, personal communication, September 3, 2010]

Maarten left three technical reports with Jim, who passed them along to me.[13] [14] [15] I looked at them, and then put them aside for the next 35 years. In the fall of 2010 I had retired and was spending more time on software history projects. I’d been following Maarten’s blog; a recent pair of articles about the Fifth Generation Computer System project and the languages Prolog and Lisp[16] [17] prompted me to contact him about a project I was contemplating: an historical archive of implementations of Prolog.[18] That began a friendship carried out mostly through some 2000 emails and almost 400 weekly video calls, plus one in-person visit when Maarten visited the Bay Area in early 2011. I will always remember his charming manners, gentle humor, wide-ranging interests, and intriguing stories.


Thanks to Maarten’s daughter Eva van Emden for information about his life.

For more of his writing, see:

Update 1 January 2024

Shortly after I wrote my post, Maarten’s colleagues wrote this article for the Association for Logic Programming: In Memoriam: Maarten van Emden.

Maarten’s web site at UVic has moved to


[1] Eva van Emden. Maarten van Emden Obituary. The Times Colonist, January 10, 2023.
[2] ALP Awards. Association for Logic Programming.
[3] M. H. van Emden and R. A. Kowalski. The Semantics of Predicate Logic as a Programming Language. Journal of the ACM, Vol. 23, No. 4, 1976, pp. 733-742.
[4] Maarten van Emden. The Early Days of Logic Programming: A Personal Perspective. Association for Logic Programming Newsletter, August 2006.
[5] Daniel McCracken. A Guide to ALGOL Programming. John Wiley and Sons, 1962.
[6] Maarten van Emden. On Finding a Discarded Copy of “A guide to Algol Programming.” 1993 email to Frank Ruskey.
[7] J. H. Griesmer and R. D. Jenks. SCRATCHPAD/1: An interactive facility for symbolic mathematics. In Proceedings of the second ACM symposium on Symbolic and algebraic manipulation (SYMSAC ’71). Association for Computing Machinery, New York, NY, USA, 42–58.
[8] William Burge. Recursive Programming Techniques. Addison-Wesley 1975.
[9] Cordell Green. Theorem-Proving by Resolution as a Basis for Question-Answering Systems. Machine Intelligence 4, Bernard Meltzer and Donald Michie, editors, Edinburgh University Press, Edinburgh, Scotland, 1969, pages 183–205.
[10] Maarten van Emden. Interview with Alan Robinson, inventor of resolution logic. June 8, 2010.
[11] M. H. van Emden. Elements of Programming. Andromeda Research Associates, Ltd. Third edition, 2009, page ix.
[12] Maarten van Emden. I remember Edsger Dijkstra (1930 – 2002). August 2008.
[13] Robert Kowalski. Predicate Logic as Programming Language. Department of Computational Logic, University of Edinburgh, Memo No. 70, November 1973.
[14] M. H. van Emden and R. A. Kowalski. The semantics of predicate logic as a programming language. School of Artificial Intelligence, University of Edinburgh, MIP-R-103, February 1974.
[15] M. H. van Emden. First-order predicate logic as a high-level program language. School of Artificial Intelligence, University of Edinburgh, MIP-R-106, May 1974.
[16] Maarten van Emden. Who Killed Prolog? A Programmer’s Place blog, August 21, 2010.
[17] Maarten van Emden. The Fatal Choice. A Programmer’s Place blog, August 31, 2010.
[18] Paul McJones, editor. Prolog and Logic Programming Historical Sources Archive.

The Year of Prolog (1972-2022)

Prolog 50 1972-2022

Fifty years ago, Alain Colmerauer and his colleagues were working on Prolog 0:

“A draconian decision was made: at the cost of incompleteness, we chose linear resolution with unification only between the heads of clauses. Without knowing it, we had discovered the strategy that is complete when only Horn clauses are used.”

[Colmerauer and Roussel, 2006]

With this system, and the idea of “metamorphosis grammars” (a generalization of what later became known as “definite clause grammars”), the team was able to implement a natural language man-machine communication system. The following year, the team released Prolog 1, the classic Marseille Prolog that quickly spread to Edinburgh, Leuven, Warsaw, Budapest, London, Waterloo, and beyond.
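The restricted strategy described in the quote, unifying a selected goal only against the heads of program clauses, was later formalized as SLD resolution. As a rough illustration only, here is a toy depth-first resolver in Python; the term representation and clause encoding are invented for this sketch and have nothing to do with the actual Marseille implementation:

```python
# Toy SLD resolution over Horn clauses (illustrative only).
# A term is a constant (str), a variable ('var', name), or a
# compound (functor, arg1, ...). A clause is (head, [body goals]).

def walk(t, s):
    # Chase variable bindings in substitution s.
    while isinstance(t, tuple) and t[0] == 'var' and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Return an extended substitution, or None on failure.
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, tuple) and a[0] == 'var':
        return {**s, a: b}
    if isinstance(b, tuple) and b[0] == 'var':
        return {**s, b: a}
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] and len(a) == len(b)):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, suffix):
    # Freshen a clause's variables so separate uses don't clash.
    if isinstance(t, tuple):
        if t[0] == 'var':
            return ('var', t[1] + suffix)
        return (t[0],) + tuple(rename(x, suffix) for x in t[1:])
    return t

def solve(goals, program, s, depth=0):
    # Depth-first SLD: the first goal is unified only with clause
    # heads; on success the clause body replaces it in the goal list.
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for i, (head, body) in enumerate(program):
        suffix = '_%d_%d' % (depth, i)
        s2 = unify(first, rename(head, suffix), s)
        if s2 is not None:
            renamed_body = [rename(g, suffix) for g in body]
            yield from solve(renamed_body + rest, program, s2, depth + 1)

# Example: parent facts plus a recursive ancestor relation.
V = lambda n: ('var', n)
program = [
    (('parent', 'tom', 'bob'), []),
    (('parent', 'bob', 'ann'), []),
    (('anc', V('X'), V('Y')), [('parent', V('X'), V('Y'))]),
    (('anc', V('X'), V('Y')),
     [('parent', V('X'), V('Z')), ('anc', V('Z'), V('Y'))]),
]
answers = [walk(V('Q'), s) for s in solve([('anc', 'tom', V('Q'))], program, {})]
# answers == ['bob', 'ann']
```

Querying anc(tom, Q) enumerates the bindings bob and then ann, mirroring how a Prolog interpreter backtracks through matching clause heads.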

Now the friends of Alain Colmerauer are calling for 2022 to be “The Year of Prolog”. They’re marking the 50th anniversary with:

  1. An Alain COLMERAUER Prize awarded by an international jury for the most significant achievement in Prolog technology.
  2. A “Prolog School Bus” that will travel to reintroduce declarative programming concepts to the younger generation. This long-term initiative will be launched during the year. The purpose of this “Tour de France” (and elsewhere) will be to familiarize schoolchildren with Prolog, just as they are already familiar with the Scratch language. At the end of this school caravan, a prize will be awarded to the ‘nicest’ Prolog program written by a student.

For more information, see:

A documentary about Alain Colmerauer

Alain Colmerauer – photo from his web site

It’s called An idea crazy enough… Artificial Intelligence and it’s being developed by Colmerauer’s friends at Prolog Heritage via a crowd-funded project at Ulule.

Colmerauer died 12 May 2017; the hoped-for tribute this fall has evolved:

The project is a film to portray Alain Colmerauer’s life and work – his contribution to Logic Programming and Constraint Logic Programming – all brought to life through interviews with some of the key participants of his time, complemented by images and documents from the archives. This was the best way to involve witnesses during a period of pandemic restrictions.

Alain Colmerauer documentary, Ulule

A 20 Euro contribution gets you an invitation to an exclusive preview of the film online; a 50 Euro contribution gets you the invitation, your name in the credits as a donor, and a digital version of the documentary.

Colmerauer’s web site:

Prolog Heritage:

Prolog and Logic Programming Historical Sources Archive:

Liz Bond Crews; Desktop Publishing Meeting

May 2017 Desktop Publishing Pioneers meeting, Computer History Museum. Liz Bond Crews is third from left in the front row. © Douglas Fairbairn Photography; courtesy of the Computer History Museum

On May 22 and 23, 2017, the Computer History Museum held a two-day meeting with more than 15 pioneering participants involved in the creation of the desktop publishing industry. There was a series of moderated group sessions and one-on-one oral histories of some of the participants, all of which were video recorded and transcribed.

Building on this meeting, three special issues of the IEEE Annals of the History of Computing were published, telling the stories of people, technologies, companies, and industries — far too much for me to cover here, so I will provide these links:

Last but not least, I had the pleasure of interviewing Liz Bond Crews, who worked first at Xerox and then Adobe to forge relationships and understanding between the purveyors of new technology (laser printers and PostScript) and the type designers, typographers, and designers who adopted that technology. An edited version of that interview appears in the third special issue of Annals:

Preserving (more of) the history of logic programming and Prolog

This is a preview of an article that later appeared in the Newsletter of the Association for Logic Programming. I posted it here first because of the infrequent publication schedule of that newsletter and my desire to announce a new website: Prolog and Logic Programming Historical Sources Archive.

Logic programming has a long and interesting history with a rich literature comprising newsletters, journals, monographs, and workshop and conference proceedings. Much of that literature is accessible online, at least to people with the appropriate subscriptions. And there are a number of logic programming systems being actively developed, many of which are released as open source software.

Unfortunately, the early years of logic programming are not as consistently preserved. For example, the proceedings of the first two International Logic Programming Conferences are not available online, and the closest library copies of the two are 1236 km and 8850 km away from my home in Silicon Valley. Early workshop proceedings and many technical reports are similarly hard to find (but see [1, 2]!). And the source code of the early systems, although at one time freely distributed from university to university, is now even more difficult to find.

As noted by people like Donald Knuth [3], Len Shustek [4], and Roberto Di Cosmo [5], software is a form of literature, and deserves to be preserved and studied in its original form: source code. Publications can provide overviews and algorithms, but ultimately the details are in the source code. About a year ago I began a project to collect and preserve primary and secondary source materials (including specifications, source code, manuals, and papers discussing design and implementation) from the history of logic programming, beginning with Marseille Prolog. This article is intended to bring awareness of the project to a larger circle than the few dozen people I’ve contacted so far. A web site with the materials I’ve found is available [6]. I would appreciate suggestions for additional material [7], especially for the early years (say up through the mid 1980s). The web site is hosted by the Computer History Museum [8], which welcomes donations of historic physical and digital artifacts. It’s also worth noting the Software Heritage Acquisition Process [9], a process designed by Software Heritage [5] in collaboration with the University of Pisa to curate and archive historic software source code.

Maarten van Emden provided the initial artifacts and introductions enabling me to begin this project. Luís Moniz Pereira provided enthusiastic support, scanned literature from the early 1980s [1, 2], and encouraged me to write this article. And a number of other people have generously contributed time and artifacts; they are listed in the Acknowledgements section of [6] as well as in individual entries of that web site.


  1. Luís Moniz Pereira, editor. Logic Programming Newsletter, Universidade Nova de Lisboa, Departamento de Informática. Issues #1-#5, 1981-1984. [Note that this newsletter was typeset, galley-proofed, and printed in color.]
  2. Luís Moniz Pereira, António Porto, Luís Monteiro, and Miguel Figueiras, editors. Proceedings of Logic Programming Workshop’83. Praia da Falésia, Algarve / PORTUGAL, 26 June – 1 July, 1983. Núcleo de Intelligência Artificial, Universidade Nova de Lisboa. and
  3. Donald Knuth. Let’s not dumb down the history of computer science. Kailath Lecture, Stanford University, May 7, 2014.
  4. Len Shustek. What Should We Collect to Preserve the History of Software? IEEE Annals of the History of Computing, Vol. 28, No. 4, October-December 2006.
  5. Roberto Di Cosmo, founder and CEO. Software Heritage.
  6. Paul McJones, editor. Prolog and Logic Programming Historical Sources Archive.
  7. Do you have a card deck or listing of the original Marseille interpreter? Or the source code for NIM IGÜSZI PROLOG, IC-Prolog, EMAS Prolog, or LogLisp, just to name a few?
  8. Computer History Museum. Mountain View, California.
  9. The Software Heritage Acquisition Process (SWHAP).

Update 1/21/2021: added the link to the article as published in the ALP Newsletter.

Update 5/5/2024: updated the link to the ALP Newsletter.

Authors’ Edition of Elements of Programming

Cover of the book Elements of Programming by Alexander Stepanov and Paul McJones
Elements of Programming by Alexander Stepanov and Paul McJones

After almost 10 years in print, Addison-Wesley elected to stop reprinting Elements of Programming and has returned the rights to us. We are releasing an “Authors’ Edition” in two versions:

1. A free PDF (download from here)

2. A trade paperback from our imprint, Semigroup Press (purchase here or at Amazon – US$14.20 in either case).

The text is unchanged from the previous printing, except for corrections for all errata reported to us.

Update 5/15/2020: Lulu seems to have dropped their discount feature, so the price is the same everywhere.

Update 11/6/2019: Added Amazon link.

GTL is a LISP 2 implementation

A few months after my article “The LISP 2 Project” was published, I learned from Paul Kimpel that the language GTL includes a “non-standard” version of LISP 2. GTL stands for Georgia Tech Language. It is an extension of the Burroughs B 5500 Algol language, and its implementation extends the Burroughs Algol compiler. There is a new data type, SYMBOL, whose value can be an atomic symbol, a number, or a dotted pair. There is a garbage collector, and a way to save and restore memory using the file system. GTL was designed by Martin Alexander at the Georgia Institute of Technology between 1968 and 1969. The source code is available as part of the Burroughs CUBE library, version 13, and the manual is also available; see here for details.
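For readers who haven’t seen a LISP-style tagged value before, here is a hypothetical Python sketch of what GTL’s SYMBOL amounts to (an atomic symbol, a number, or a dotted pair); GTL itself expressed this inside Burroughs Algol, so the names here are illustrative only:

```python
# Illustrative model of a SYMBOL-like tagged value: an atomic
# symbol, a number, or a dotted pair (cons cell), as in LISP.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Pair:
    car: 'Symbol'
    cdr: 'Symbol'

Symbol = Union[Atom, int, float, Pair]

def from_list(items):
    # Build a LISP-style list: (A 2 B) is (A . (2 . (B . NIL))).
    result = Atom('NIL')
    for item in reversed(items):
        result = Pair(item, result)
    return result

def to_str(s):
    # Print a value in dotted-pair notation.
    if isinstance(s, Atom):
        return s.name
    if isinstance(s, Pair):
        return '(%s . %s)' % (to_str(s.car), to_str(s.cdr))
    return str(s)

print(to_str(from_list([Atom('A'), 2, Atom('B')])))
# prints (A . (2 . (B . NIL)))
```

In GTL this machinery came with its own garbage collector, whereas the sketch above simply leans on Python’s memory management.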

The LISP 2 Project

“The LISP 2 Project” appears in the October-December 2017 issue of IEEE Annals of the History of Computing (open access).

I first heard about LISP 2 around 1971, from a 1966 conference paper included in the reading for a U.C. Berkeley seminar on advanced programming languages. The goal of LISP 2 was to combine the strengths of numerically-oriented languages such as ALGOL and FORTRAN with the symbolic capabilities of LISP. The paper described the language and its implementation at some length, but by 1971 it was pretty clear that LISP 2 had not caught on; instead, the original LISP 1.5 had spawned a variety of dialects such as BBN-LISP, MACLISP, and Stanford LISP 1.6.

In 2005 I began a project to archive LISP history and kept encountering people who’d been involved with LISP 2, including Paul Abrahams, Jeff Barnett, Lowell Hawkinson, Michael Levin, Clark Weissman, Fred Blair, Warren Teitelman, and Danny Bobrow. By 2010 I had been able to scan LISP 2 documents and source code belonging to Barnett, Herbert Stoyan, and Clark Weissman. In 2012, after writing about Hawkinson and others in an extended blog post “Harold V. McIntosh and his students: Lisp escapes MIT,” I decided to try to tell the story of the LISP 2 project, where so many interesting people’s paths had crossed. My sources included original project documents as well as telephone and email interviews with participants, and several participants were kind enough to provide feedback on multiple drafts. I let the article sit in limbo for five years, but last year, after I published another anecdote in the Annals, editor Dave Walden encouraged me to submit this one.

On December 28, 2017, as the article was about to go to press, Lowell Hawkinson died suddenly from an accident.

Lowell Hawkinson, 1943 – 2017

Lowell Hawkinson passed away at the age of 74 on December 28, 2017 as a result of an accident. Lowell was a pioneer in LISP implementation and artificial intelligence. He co-founded Gensym Corporation in 1986 and served as its CEO through 2006. This obituary gives more details of his life and accomplishments.

I first got in touch with Lowell in 2010 because of my interest in archiving LISP history. We exchanged emails (and had one phone conversation), and over the years I wrote several blog posts and a journal article about work involving him:

Although my interactions with Lowell were brief, his kindness and modesty were manifest. He will be deeply missed by his family and friends.

In Search of the Original Fortran Compiler

“In Search of the Original Fortran Compiler” appears in the April-June 2017 issue of IEEE Annals of the History of Computing. If that link doesn’t work, you can read my final submitted version here.

I wrote the article to chronicle the search I began in late 2003 to find the source code for the original FORTRAN compiler for the IBM 704. Much of the search was documented right here on this Dusty Decks blog, which I created in July 2004 as a sort of advertisement.

I’d like to thank Burt Grad for encouraging me to write the article. Burt is a friend who began working on computer software in 1954 and hasn’t stopped — for example, see here and here and here and here.

Miscellaneous Lisp updates

Recently I made some long-delayed updates to History of LISP.  In the Lisp I/1.5 for IBM 704, 709, 7090 section, I added links to the excellent work by Andru Livisi (here) and Dave Pitts (here) for running LISP on emulators.

In the Other Lisp 1.5 implementations section, I added a mention of LISP 1.5 for IBM M44. The M44 was an experimental machine that served as a testbed for some of the earliest virtual machine research.

In the Other Lisps section I added Lisp 1.6 for IBM 1130 (Boston Latin School), which was the first Lisp of Guy L. Steele Jr., who went on to work on MacLisp, Scheme, NIL, Common Lisp, and Connection Machine Lisp. I also added PDP-11 LISP (Massachusetts Institute of Technology), which was the first Lisp of Richard M. Stallman, who went on to work on MacLisp, Lisp Machine Lisp, and Emacs Lisp.

In the Embedded Lisps section I added XLISP.

I made various additions in other sections including Scheme and Common Lisp.

New Japanese edition of Elements of Programming

Second Japanese edition of Elements of Programming
Elements of Programming, second Japanese edition
The original Japanese translation of Elements of Programming went out of print. But Yoshiki Shibata, the translator, proposed to Tokyo Denki University Press that they publish a new edition, and they agreed. It is available now, and joins the English, Russian, Chinese, and Korean editions.

We wrote a special preface for this edition:

To our Japanese readers:

We are very grateful to our publisher and our translator Yoshiki Shibata for this opportunity to address our Japanese readers.

This book is in the spirit of Japanese esthetics: it tries to say as much as possible in as few words as necessary. We could not reduce it to 17 sounds like a traditional haiku, but we were inspired by its minimalist approach. The book does not have much fat. We actually hope that it does not have any fat at all. We tried to make every word matter.

We grew up when Japanese engineering was beginning to revolutionize the world with cars that did not break and television sets that never needed repairs. Our hope is that our approach to software will enable programmers to produce artifacts as solid as those products. Japanese engineers changed the experience of consumers from needing to know about cars, or, at least, knowing a good mechanic, to assuming that the vehicle always runs. We hope that the software industry will become as predictable as the automotive industry and software will be produced by competent engineers and not by inspired artist-programmers.

Our book is just a starting point; most work to industrialize software development is in the future. We hope that readers of this book will bring this future closer to reality.

We would like to thank Yoshiki Shibata not only for his very careful translation, but also for finding several mistakes in the original.

Harold V. McIntosh, 1929-2015

Update 5/21/2019: Genaro J. Martínez, Juan C. Seck-Tuoh-Mora, Sergio V. Chapa-Vergara, and Christian Lemaitre recently published a paper “Brief Notes and History [of] Computing in Mexico during 50 years” centered around McIntosh’s accomplishments. arXiv:1905.07527 [cs.GL] DOI

Update 3/31/2019: Here are photos from a November 2017 memorial held for McIntosh at the Faculty of Computer Science of the Institute of Science at Autonomous University of Puebla, Puebla, Mexico.

Update 1/5/2017: For more on McIntosh’s professional career, see these obituaries at Physics Today and Journal of Cellular Automata.

Harold V. McIntosh died November 30, 2015 in Puebla, Mexico. He was an American mathematician who became interested in what is now known as computer algebra to solve problems in physics, leading to his early adoption of the programming language LISP and to his design of the languages CONVERT (in collaboration with Adolfo Guzmán) and REC. His early education and employment were in the United States, but he spent the last 50+ years in Mexico, and received a Ph.D. in Quantum Chemistry at the University of Uppsala in 1972.

McIntosh was born in Colorado in 1929, the oldest of four sons of Charles Roy and Lillian (Martin) McIntosh. He attended Brighton High School in Brighton, near Denver. In 1949 he received a Bachelor of Science in physics from the Colorado Agricultural and Mechanical College, and in 1952 he received a Master of Science in mathematics from Cornell University. He did further graduate studies at Cornell and Brandeis, but stopped before receiving a Ph.D. to take a job at the Aberdeen Proving Ground. Two years later, he moved to RIAS (Research Institute for Advanced Studies), a division of the Glenn L. Martin Company. Around 1962 he accepted a position in the Physics and Astronomy department and the Quantum Theory Project at the University of Florida. After two years at the University of Florida, McIntosh accepted an offer at CENAC (Centro Nacional de Calculo, Instituto Politecnico Nacional) in Mexico. Over the next years, McIntosh worked in various positions in Mexico at Instituto Politecnico Nacional, Instituto Nacional de Energía Nuclear, and, from 1975 on, Universidad Autónoma de Puebla.

McIntosh was highly regarded for his research, writing, and teaching; for details, see Gerardo Cisneros-S.: “La computación en México y la influencia de H. V. McIntosh en su desarrollo” (PDF). He organized several special summer programs in the early 1960s that introduced a number of students to higher mathematics and computer programming (see here for example). He also had a lifelong interest in flexagons, which he shared with his students. A symposium in his honor was held a month before he died.

Other resources


1961 Annual report of RIAS. PDF at

Paul McJones. The First International LISP Conference (1963). Dusty Decks blog, April 23, 2012

Paul McJones. Harold V. McIntosh and his students: Lisp escapes MIT. Dusty Decks blog, July 6, 2012

Paul McJones, editor. History of Lisp : Other Lisp 1.5 implementations : MBLISP. Online at

Celebration of late Prof. Harold V. McIntosh achievements. Faculty of Computer Science of the Institute of Science at Autonomous University of Puebla, Puebla, Mexico, November 29, 2017. Online at


Thanks to José Manuel Gómez Soto for notifying me of McIntosh’s death and supplying the link to this obituary; Robert Yates, Lowell Hawkinson, and Adolfo Guzmán Arenas for their contributions to “Harold V. McIntosh and his students: Lisp escapes MIT”; and Genaro Juarez Martinez for informing me about the memorial celebration.

L. Peter Deutsch’s PIVOT program verification system

L. Peter Deutsch in his office at Xerox PARC, around 1972.

PIVOT, the program verification system written in BBN-Lisp by L. Peter Deutsch and described in his PhD thesis, “An interactive program verifier,” is a recent addition to the Software Preservation Group web site.

Deutsch is a computer scientist who made important contributions to interactive implementations of Lisp and Smalltalk. While he was in high school, he implemented the first interactive Lisp interpreter, running on a DEC PDP-1 computer. While still in high school, he worked with Calvin Mooers on the design of TRAC, and implemented the language on a PDP-1 at BBN. Then Deutsch enrolled at the University of California at Berkeley, where he soon joined Project Genie, one of the earliest timesharing systems. Meanwhile, at BBN, Deutsch’s original PDP-1 Lisp became the “conceptual predecessor” of BBN-Lisp, running first on the PDP-1, then the SDS-940 (running the Project Genie timesharing system), and finally the PDP-10 running BBN’s own TENEX. After several of the BBN-Lisp creators, including Deutsch, moved to Xerox PARC, BBN-Lisp became INTERLISP. By this time, Deutsch had received his bachelor’s degree at Berkeley, and with other Project Genie alumni had co-founded Berkeley Computer Corporation, which built a large timeshared computer (the BCC-500) but then went bankrupt. While working at PARC, Deutsch also attended graduate school at Berkeley, carrying out the research on program verification that produced the PIVOT system.

Deutsch was kind enough to donate his only source listing of PIVOT to the Computer History Museum (Lot number X7485.2015), and to allow scans of the listing and his thesis to be posted on the SPG web site.