Remembering
Michael Levin (1940-2025)

L-R: Michael Levin, Lowell Hawkinson, Ed Fredkin. 16 August 2016. Courtesy of Mark David.

Michael I. Levin died on 14 August 2025. He played important roles in the original LISP 1.5 project and the follow-on LISP 2 project. Later he cofounded Gensym, which developed a real-time expert system. He had a lifelong interest in teaching, perhaps influenced by his parents, who were both school teachers. This account was pieced together from my email connections with him between 2005 and 2017, my research on the LISP 1.5 and LISP 2 projects, a 2019 interview by his cousin Alan Kadin, and a 2009 resume. It only covers Levin’s professional career, although an important part of his life was following the teaching of Chögyam Trungpa.

Levin was born in Paterson, New Jersey and grew up in nearby Totowa. He did well in school and applied to five colleges, choosing M.I.T. and enrolling in fall 1958. He majored in mathematics but was attracted to computer science, taking a graduate course by Marvin Minsky in his sophomore year. Minsky proposed a problem—providing an effective definition of “Random Sequence”—that engaged Levin and apparently provided a topic for his undergraduate thesis. He also coauthored a technical report with Minsky and Roland Silver entitled “On the effective definition of ‘Random sequence’” (AI Memo 36 revised; undated).

Apparently even before he graduated (in 1962), Levin had joined McCarthy’s Lisp project. His first publication was “Arithmetic in LISP 1.5” (AI Memo 24, April 1961), which noted it was an excerpt from the upcoming LISP 1.5 Programmer’s Manual. A version of that manual was released on July 14, 1961; its preface notes “This manual was written by Michael Levin starting from the LISP I Programmer’s Manual by Phyllis Fox.” But the final August 17, 1962 version simply states “This manual was written by Michael I. Levin.” There was another significant change between the two versions: the former said “The compiler was written by Robert Brayton with the assistance of David Park” while the latter said “The compiler and assembler were written by Timothy P. Hart and Michael I. Levin. An earlier compiler was written by Robert Brayton.” Apparently Brayton’s compiler, which was written in assembly language, was more difficult to maintain than the Hart-Levin compiler, which was written in Lisp (see “The New Compiler”, AI Memo 39) and could compile itself (after being “bootstrapped” using the interpreter). In 2013, Levin mentioned to me that one of his contributions to the compiler was converting a class of tail-recursive calls to iteration, a transformation that became much better known in the 1970s. See this recent explanation of how Levin’s PROGITER function worked.
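As a rough illustration of the idea behind that transformation—in Python rather than Lisp, and not Levin’s actual PROGITER code—a tail-recursive function, where the recursive call is the last thing the function does, can be mechanically rewritten as a loop that updates the arguments and jumps back to the top:

```python
# Tail-recursive form: nothing remains to do after the recursive call
# returns, so the call frame is not really needed.
def length_rec(xs, acc=0):
    if not xs:
        return acc
    return length_rec(xs[1:], acc + 1)

# Iterative form produced by the transformation: the tail call becomes
# an assignment to the parameters followed by a jump to the top,
# i.e., a while loop. No stack growth, same result.
def length_iter(xs, acc=0):
    while xs:
        xs, acc = xs[1:], acc + 1
    return acc
```

A compiler performing this rewrite avoids both the call overhead and unbounded stack growth, which mattered greatly on the small machines of the early 1960s.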

In 1962 McCarthy left M.I.T. but continued his work at Bolt Beranek & Newman (BBN); Levin joined him there. In 1963 McCarthy moved to Stanford to found an AI Laboratory. Levin may have briefly worked with McCarthy there, but by 1963 he was at M.I.T. working on a system for proof checking, as evidenced by his report “Primitive Recursion” (AI Memo 55, July 1963). Also he and his colleague Tim Hart wrote “LISP Exercises” (AI Memo 64, January 1964), intended to be used with the first chapter of The LISP 1.5 Programmer’s Manual. Perhaps they were teaching a course? Later in 1964, as the hardware for M.I.T.’s Multics operating system was being designed, Levin wrote: “Proposed Instructions on the GE 635 for List Processing and Push Down Stacks” (AI Memo 72, September 1964). Tom Van Vleck, creator of Multicians.org, told me:

I was at a big meeting where the 645 was presented to many users of the Project MAC facilities, mostly CTSS at the time. Corby led the presentation and multiple others covered various aspects. Ted Glaser talked about the associative memory and paging.

Many folks had questions. Mike raised the issues in his memo. … Ted Glaser responded, and Mike responded to that, and it was clear that the issue had many details that would have to be resolved. Mike kept asking for commitments that the presenters weren’t ready to talk about. Finally, Ted said, “We can’t do it that way, it would overload the busses.”

It was possible that Ted was making this up to end the discussion… but Ted was incredibly smart, and thought faster than anybody, and who was going to argue with a blind genius?

In any event Levin’s LISP instructions were not added to the 645.

Around this time discussions at M.I.T. and Stanford began on a follow-on language, eventually named LISP 2. Looking back in 1978, McCarthy noted:

As a programming language LISP had many limitations. Some of the most evident in the early 1960s were ultra-slow numerical computation, inability to represent objects by blocks of registers and garbage collect the blocks, and lack of a good system for input-output of symbolic expressions in conventional notations. All these problems and others were to be fixed in LISP 2.

The project expanded beyond what either of the AI labs could take on and moved to the Systems Development Corporation (SDC) in Santa Monica, California. SDC was experienced at software development, but not at Lisp, so Information International, Inc. (III) was brought in as a subcontractor. III, founded by Ed Fredkin, employed a number of experienced M.I.T. Lisp programmers, including Levin and Lowell Hawkinson.

One defining feature of LISP 2 was a syntax based on Algol 60, designed by Levin: “Syntax of the New Language” (AI Memo 68, May 1964). But in 2012, he told me:

An important idea for LISP II, which I emphasized from the beginning, but which was never implemented, and was eventually shown not to be needed, was the source language translator. This had its roots as a pedagogical device used by John McCarthy to show that there is a formal analogy between LISP and Gödel numbering. It was important to Gödel’s proof of the incompleteness of arithmetic to show that a language of logical theorems and proofs could be mapped into the domain of non-negative integers. McCarthy wanted to emphasize that LISP had the same theoretical foundation as Gödel’s proof, only with the difference that the mapping was easy and intuitive rather than being humanly intractable.

Although Levin changed his mind about its importance, I think it had an impact through the paper “The LISP 2 Programming Language and System” (FJCC 1966), planting the idea of Algol-like languages with dynamic data structures.

Levin was also thinking about the runtime requirements for the new language at this time: “New Language Storage Conventions” (AI Memo 69, May 1964). Around this time, it appears he moved to Santa Monica to work more closely with the SDC team, and wrote a series of technical reports probably best studied in context at this web site dedicated to the project.

Levin returned to M.I.T. in 1965, apparently as an instructor, thinking about mathematical logic: “Topics in Model Theory” (AI Memo 78, May 1965; revised as AI Memo 92, January 1966). His resume notes: “Developed and taught graduate course entitled Mathematical Logic for Computer Scientists”. The resultant book was published as an M.I.T. LCS report (TR 131, June 1974). The book caught the eye of famous logician Stephen Cole Kleene, who noted in a footnote of a 1978 paper:

It was pointed out to me in Oslo on June 13, 1977 by David MacQueen that computer scientists have developed the theory of working with the expressions of a given formal language as a generalized arithmetic to the extent that it is not necessary to reduce the language to the simple arithmetic of the natural numbers by a Gödel numbering or indexing. Cf. Levin 1974, Scott 1976. (An early example of generalized arithmetic is in IM § 50.)

(In the interview by Alan Kadin, Levin joked: “During my time at Stanford, I studied a book by Stephen Cole Kleene, Introduction to Metamathematics. As far as I know, I may be the only person who carefully read this entire book.”)

In addition to BBN and III, Levin worked for Composition Technology, where he said he implemented the world’s first typesetting program.

In the early 1970s Levin went to Colorado, apparently attracted by the teaching of Chögyam Trungpa. He held several teaching and research positions there, and then in 1982 he became an associate professor of computer science at Acadia University in Nova Scotia.

Returning to Massachusetts in 1984, he joined LISP Machine, Inc. (LMI), one of the two companies that spun out of M.I.T.’s Lisp machine project. Levin designed the inference engine for PICON, LMI’s real-time expert system. By 1986 it became clear to him and several of his colleagues (including Lowell Hawkinson, with whom he’d worked at III) that LMI was failing, so they left and co-founded Gensym. Gensym’s G2 real-time expert system, similar to PICON but with a from-scratch implementation, was very successful for industrial process control. A 1996 initial public offering allowed Levin to finance his later retirement, although he also did consulting work from 1999 to 2004.

John McCarthy had a vision of a programming language for symbolic processing in artificial intelligence. Michael Levin was a central member of the team that turned that vision into a practical implementation. Throughout his career he combined effective engineering with mathematical insights.

This essay benefited from comments by Mark David, Roger Frye, and Alan Kadin. I’m also indebted to Alan for his oral history.

The Software Preservation Group

Twenty-two years ago the Computer History Museum established an advisory committee (with staff, trustees, and volunteers) to explore issues of identifying, collecting, preserving, and presenting software. The Museum had just moved into its present quarters; the main Revolution exhibit was still far in the future. But the previous year, Grady Booch had gotten the ball rolling with a mass email entitled “Preserving classic software products” asking for people’s “top ten list of classic software products.” The replies to that email and the ensuing discussion made it clear that software collection and preservation were worth doing and needed to begin immediately, before more pioneers died and more source code was lost or destroyed.

Monthly meetings of the Software Collection Committee began in November 2003, covering a wide range of topics. Should collection be proactive or reactive? What priorities should guide proactive collecting? (Grady’s compiled top-ten lists provided initial input.) How should software be cataloged? How should it be presented? An initial workshop in October 2003 mostly asked questions, but a follow-up in May 2006 showed off work from CHM and from other organizations and individuals.

While many discussions raged for months, restless folks began “pilot projects” to collect materials from the original IBM 704 FORTRAN project, Doug Engelbart’s NLS/Augment project, and the APL programming language. A web site was set up to present the results of these pilot projects and the monthly meetings. An email list with an archive allowed people who couldn’t attend the meetings to participate in many discussions.

As time passed, interest in software preservation expanded. The Museum opened its Revolution! exhibit in 2011 followed by Make Software: Change the World! in 2017. Its Software History Center centralized and expanded the software-related curatorial staff. Source code releases became a regular occurrence. Work also went on outside the Museum. Software Heritage, founded by INRIA, maintains a replicated repository and populates it by crawling the world’s open-source software “forges” such as GitHub.com. 

At the same time, the committee (renamed Software Preservation Group) lost mindshare: the meetings wound down in 2007, and traffic on the email list tapered off to a standstill by 2017. However, encouraged by the success of my original FORTRAN project, I had continued with a series of projects including LISP and C++. After retiring in 2010 I took on larger projects on ALGOL, Prolog, SETL, and BCPL and smaller ones on GEDANKEN, Mesa, PAL, and Poplar, plus program verification systems AFFIRM and PIVOT. Along the way, a few other people found their way to the Software Preservation Group and created projects: Interactive C Environments (Wendell R. Pepperdine), Emacs (Lars Brinkhoff), and FOCAL (Bruce Ray).

By 2025, the “Projects” section of the Software Preservation Group web site was still serving a useful purpose, but the overall web site was showing its age: the landing page suggested activities no longer in progress; the underlying Plone content management system hadn’t been updated in 20 years, creating a burden on the Museum’s IT group to keep it running and backed up; and the use of a different domain deemphasized the connection to the Museum. Discussions with David Brock and Hansen Hsu of the Museum’s Software History Center led to the idea of rebuilding the web site with static HTML on a subdomain of the Museum’s regular computerhistory.org.

As the author of most of the content, I took on this effort in July 2025. It was fairly easy to convert the project web pages I’d created, but I wasn’t sure what to do with projects created by others that had seen little or no activity for years and decades. I managed to contact the original authors, and worked with them to tidy things up (especially Emacs). I also tracked down materials from the 2003 workshop, almost all the meetings (agendas, minutes, and presentation handouts), and the email archive.

While waiting for word back from the NLS/Augment and APL projects, I couldn’t resist starting a brand-new project: the Modula-3 programming language. That will be the subject of another post.

So please visit the new web site at:

https://softwarepreservation.computerhistory.org/

Most links to the old web site will be automatically redirected. If you see things amiss, let me know.

Xerox PARC IFS archive

In 2014, the Computer History Museum released the Xerox Alto file server archive, comprising about 15,000 files from the Xerox Alto personal computer including the Alto operating system; BCPL, Mesa, and (portions of the) Smalltalk programming environments; applications such as Bravo, Draw, and the Laurel email client; fonts and printing software (PARC had the first laser printers); and server software (including the IFS file server and the Grapevine distributed mail and name server). I told the story behind that archive here.

Today CHM released the Xerox PARC Interim File System (IFS) archive:

The archive contains nearly 150,000 unique files—around four gigabytes of information—and covers an astonishing landscape: programming languages; graphics; printing and typography; mathematics; networking; databases; file systems; electronic mail; servers; voice; artificial intelligence; hardware design; integrated circuit design tools and simulators; and additions to the Alto archive.

A blog post by David Brock introduces the archive. Access to the archive itself is available here.

I began working on this project in 2018 under an NDA with PARC: reading the old media prepared years earlier by Al Kossow, updating the conversion software I’d written for the earlier Alto project, and winnowing down a list of 300,000 files to the 150,000 files that I submitted to PARC management for approval. David Brock’s post ends with an Acknowledgments section noting all the people at CHM and PARC who contributed.

Liz Bond Crews; Desktop Publishing Meeting

May 2017 Desktop Publishing Pioneers meeting, Computer History Museum. Liz Bond Crews is third from left in the front row. © Douglas Fairbairn Photography; courtesy of the Computer History Museum

On May 22 and 23, 2017, the Computer History Museum held a two-day meeting with more than 15 pioneering participants involved in the creation of the desktop publishing industry. There was a series of moderated group sessions and one-on-one oral histories of some of the participants, all of which were video recorded and transcribed.

Building on this meeting, three special issues of the IEEE Annals of the History of Computing were published, telling the stories of people, technologies, companies, and industries — far too much for me to cover here, so I will provide these links:

Last but not least, I had the pleasure of interviewing Liz Bond Crews, who worked first at Xerox and then Adobe to forge relationships and understanding between the purveyors of new technology (laser printers and PostScript) and the type designers, typographers, and designers who adopted that technology. An edited version of that interview appears in the third special issue of Annals:

Computer History Museum videos coming to YouTube

The Computer History Museum has just launched a partnership with YouTube to provide a ComputerHistory “channel”. Right now it has 23 videos from various events and lectures at the museum; if you subscribe (via the orange button), you’ll be notified when more are uploaded. In the meantime, the museum maintains a calendar of past events with links to video, where available.

Update 1/1/2016: Updated URL for CHM past events.

Introduction

My name is Paul McJones. I am using this weblog to discuss historic computer software and hardware among other topics. For several months I’ve been studying the early history of Fortran, and trying to track down the source code for the original Fortran compiler. Although I set up this weblog only recently (June-July 2004), I’ve created back-dated entries to document my quest in chronological order, starting here.

I welcome suggestions for additional topics, and also would like to invite others to contribute articles on the history of early programming languages, operating systems, database management systems, and applications.

If you like this web log, you might be interested in the System R website documenting the history of the System R relational database research project, which gave birth to the SQL query language.

Paul McJones (photo by Kelly Castro)

Software Collection Committee

The first meeting of the Software Collection Committee of the Computer History Museum was held on November 19th. The purpose of the committee is to help the museum, whose mission statement is “To preserve and present for posterity the artifacts and stories of the information age”, bootstrap its software activities. Expected activities include establishing standards for categorization and preservation, testing those standards on representative software, and setting priorities for software to collect. The committee was launched partly as a response to a workshop on Preserving Classic Software moderated by Grady Booch and held October 16-17.

The discussion of what software would be worth collecting first was quite interesting — people proposed various criteria, but clearly age and historical significance are key. At this meeting, or shortly thereafter, I began thinking about the first Fortran compiler, and my friend Alex Stepanov reinforced this interest. Fortran was arguably the first higher-level programming language, and its compiler was the first optimizing compiler.

Update 2024/05/07: Changed link for Alex Stepanov from (broken) DBLP to stepanovpapers.com.