A CAL TSS debugging tool

One of the artifacts preserved from the CAL Timesharing System project is a deck of 14 80-column binary cards labeled “TSS PM DUMP”. This is a program for a CDC 6000 series peripheral processor unit (PPU) to perform a post mortem dump to magnetic tape of the complete state of a (crashed) system: PPU memories, CPU memory, exchange package, and extended core storage. Another system utility program, TSS PP DUMP-TAPE SCANNER, allows selective display of portions of the dump to either the teletype or one of the system console displays. I believe Howard Sturgis wrote the PPU program and Keith Standiford wrote the CPU program.

I suspected this card deck was a version of a PPU program called DMP, for which a listing exists. Carl Claunch very generously offered to read (digitize) the cards, and produced a binary file tss.hex. I queried the ControlFreaks mailing list about a PPU disassembler, and Daiyu Hurst sent me a program ppdis.c that generates a listing with opcodes resolved and with the address, octal value, and textual representation of each 12-bit word. After updating it to eliminate some K&R C function definitions and changing its input format to match tss.hex, I ran it, captured the output, and began annotating it. As expected, it matched the DMP listing very closely, so I used that listing’s variable names, labels, and comments to update the output of ppdis.c, and added a few additional comments, including notes on slight differences from the DMP listing.
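To give a concrete sense of such a listing, here is a small Python sketch. It is purely illustrative (it is not Daiyu Hurst’s ppdis.c and does not attempt opcode resolution), and it maps only the A-Z and 0-9 portion of CDC display code; it renders a 12-bit word as an address, the word’s octal value, and its two 6-bit characters.

    # Illustrative sketch only, not ppdis.c: format one listing line for a
    # 12-bit PPU word as address, octal value, and two 6-bit display-code
    # characters. Only A-Z (octal 01-32) and 0-9 (octal 33-44) are mapped;
    # other codes are shown as '.'.
    DISPLAY = {i + 1: chr(ord('A') + i) for i in range(26)}
    DISPLAY.update({0o33 + i: chr(ord('0') + i) for i in range(10)})

    def listing_line(address, word):
        hi, lo = (word >> 6) & 0o77, word & 0o77   # split into two 6-bit characters
        text = DISPLAY.get(hi, '.') + DISPLAY.get(lo, '.')
        return f"{address:04o}  {word:04o}  {text}"

    print(listing_line(0o100, 0o0102))   # -> 0100  0102  AB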

The first card is a loader that loads subsequent cards up until one with a 6-7-8-9 punch in column one is encountered. The first card is loaded via the deadstart panel. Page 14 of the CAL TSS Operator’s Manual explains:

    HOW TO MAKE A DIAGNOSTIC DUMP OF A SICK SYSTEM

    Unfortunately, the dump program requires a different deadstart panel from the system dead start program. Reset the deadstart panel to CAL TSS I, push the deadstart button, read the deck “TSS POST MORTEM” into the card reader, mount a tape on unit 0, and stand back and watch it go. After the tape unloads, reset the deadstart panel to CAL TSS II and dead start the system as usual. Record the reel on which the dump was made along with the other information relevant to the crash.

The Operator’s Manual also contains a set of CAL-TSS FAILURE LOG forms recording crashes and attempts to diagnose them.

The program on the cards is very similar to the DMP listing (which doesn’t include the loader card), with slightly different addresses and one or two small changes in the code.

The general structure of the program is to dump PPU 0 (whose memory is partially overlaid by the DMP program itself), then reuse that memory as working space to dump PPUs 1-9, the exchange package (CPU registers), the CPU memory, and extended core storage, and finally to write a trailer record. The console display is used for operator messages: a request to mount a tape on drive zero, progress messages indicating which phase is in progress, and several error messages.

This doesn’t sound like a terribly difficult task, but it requires about 1000 instructions on the PPU, which has 12-bit words, one register, no multiply or divide, and an instruction time of 1 to 4 microseconds. There are some additional complications:

  1. A PPU can’t access the memory of another PPU. When the overall system is deadstarted, PPU 0 begins running a 12-instruction program loaded from toggle switches on the deadstart panel, and the other PPUs are each suspended on an input instruction on a different I/O channel. Thus PPU 0 sends a short program to each one instructing it to output its own memory on a channel, which PPU 0 inputs and then outputs to the tape drive.
  2. Similarly, a PPU can’t access extended core storage (ECS). So PPU 0 repeatedly writes a short program to the CPU memory that reads the next block from ECS to CPU memory, then does an “exchange jump” to cause the CPU to execute that program. The PPU then reads the block from CPU memory and writes it to tape.
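Putting those pieces together, the overall control flow looks roughly like the following Python sketch. It is only a sketch with stubbed-out hardware operations (the helper names and the tape “labels” are invented for illustration); the real program is roughly 1000 PPU instructions driving the channels and the tape drive directly.

    # Hypothetical sketch of the dump sequence; the names and record labels
    # are invented, and write_tape() merely prints what the real program
    # would copy to the tape drive.
    ECS_BLOCKS = 4   # placeholder; the real count depends on installed ECS

    def write_tape(record):
        print("tape <-", record)

    def post_mortem_dump():
        write_tape("PPU 0 memory (partially overlaid by DMP itself)")
        for ppu in range(1, 10):
            # Send each PPU a short program telling it to output its own
            # memory on its channel, then copy that data to tape.
            write_tape(f"PPU {ppu} memory, read via its I/O channel")
        write_tape("exchange package (CPU registers)")
        write_tape("CPU (central) memory")
        for block in range(ECS_BLOCKS):
            # ECS is reachable only from the CPU: plant a short CPU program
            # that copies one ECS block into CPU memory, exchange jump to
            # run it, then read the block back and write it to tape.
            write_tape(f"ECS block {block}, staged through CPU memory")
        write_tape("trailer record")

    post_mortem_dump()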

Here is the annotated listing.

Updated 18 January 2024 to link to the copy of the listing (etc.) at https://caltss.computerhistory.org.

CAL Timesharing System: Before computers were personal

In 2023 computers are all around us: our phones, tablets, laptops, and desktops, and lurking inside our television sets, appliances, automobiles, to say nothing of our workplaces and the internet. It wasn’t always that way: I was born in 1949, just as the first stored-program digital computers were going into operation. Those computers were big, filling a room, and difficult to use. Initially a user would sign up for a block of time to test and run a program that had been written and punched into paper tape or 80-column cards. The fact that an expensive computer sat idle while the user was thinking or mounting tapes seemed wasteful, so people designed batch operating systems that would run programs one after the other, with a trained operator mounting tapes just before they were needed. The users submitted their card decks and waited in their offices until their programs had run and the listings had been printed. While this was more efficient, there was a demand for computers that operated in “real time”, interacting with people and other equipment. MIT’s Whirlwind, TX-0, and TX-2 and Wes Clark’s LINC are examples.

The ability to interact directly with a computer via a terminal (especially when a display was available) was compelling, and computers were becoming much faster, which led to the idea of timesharing: making the computer divide its attention among a set of users, each with a terminal. Ideally the computer would have enough memory and speed so each user would get good service. Early timesharing projects included CTSS at MIT, DTSS at Dartmouth, and Project Genie at Berkeley. By 1966, Berkeley (that is, the University of California at Berkeley) decided to replace its IBM batch system with a larger computer that would provide interactive (time-shared) service as well as batch computing. None of the large commercial computers came with a timesharing system, so Berkeley decided they would build their own. The story of that project—from conception, through funding, design, implementation, (brief) usage, to termination—is told here:

  • Paul McJones and Dave Redell. History of the CAL Timesharing System. IEEE Annals of the History of Computing, Vol. 45, No. 3 (July-September 2023). IEEE Xplore (Open access)

How did I come to write that paper? In the winter of 1968-1969 I was invited to join the timesharing project. At that time I had about two years of programming experience, gained in classes and on the job during high school and college (Berkeley). That wasn’t much, but it included one good-sized project—a Snobol4 implementation with Charles Simonyi—so the team welcomed me to the project. For the next three years I helped build the CAL Timesharing System, performed some maintenance on the Snobol4 system, and finished my bachelor’s degree. In December 1971, CAL TSS development was canceled, and I graduated and moved on to the CRMS APL project elsewhere on campus.

Those three years were hectic but immensely enjoyable. The team was small, with under a dozen people, housed first in an old apartment on Channing Way and then in the brand-new Evans Hall. Lifelong friendships were formed. People often worked into the night, when the computer was available, and then trooped over to a nearby hamburger joint for a late meal. Exciting things were going on around us. There were protests, the Vietnam War, and the first moon landings. Rock music seemed fresh and exciting. I had met my future wife in 1968, and we were married in 1970.

As CAL TSS came to an end, we all agreed the experience could never be equalled. But we didn’t realize people in the future would be interested in studying our system, so we weren’t careful about preserving the magnetic tapes. However, many of us kept manuals, design documents, and listings, plus a few tapes. In 1980 and again in 1991 we had reunions, and I offered to store everything until it became clear what to do for the long run. Around 2003 I started scanning the materials and organizing a web site. In 2022 the Computer History Museum agreed to accept the physical artifacts, and this year they agreed to host the web site at https://caltss.computerhistory.org.

Jack Schwartz and SETL

In April 2020, just as the Covid pandemic began, Annie Liu, a professor at Stony Brook University, emailed me to chat about programming language history. She suggested that Python, with its antecedents SETL, ABC, and C, would be a good topic for historical study and preservation. I mentioned that I’d considered SETL an interesting topic back in the early 2000s, but unfortunately had not acted. After a few more rounds of email with her, I began looking around the web, and Annie introduced me to several SETL people. Starting with these people, a few other personal contacts, and some persistence, I was soon in touch with much of the small but friendly SETL community, who very generously pored over their files and donated a wide variety of materials. The result is an historical archive of materials on the SETL programming language, including source code, documentation, and an extensive set of design notes, which is available at the Software Preservation Group web site:

http://www.softwarepreservation.org/projects/SETL/ 

In addition, the digital artifacts and some of the physical artifacts are now part of the Computer History Museum’s permanent collection.

The SETL programming language was designed by Jack Schwartz at the Courant Institute of Mathematical Sciences at New York University. Schwartz was an accomplished mathematician who became interested in computer science during the 1960s. While working with John Cocke to learn and document a variety of compiler optimization algorithms, he got the idea of a high-level programming language able to describe such complex algorithms and data structures. [1] It occurred to him that set theory could be the basis for such a language, since it was rich enough to serve as a foundation for all of mathematics. As his colleagues Martin Davis and Edmond Schonberg described it in their biographical memoir of him: [2]

The central feature of the language is the use of sets and mappings over arbitrary domains, as well as the use of universally and existentially quantified expressions to describe predicates and iterations over composite structures. This set-theoretic core is embedded in a conventional imperative language with familiar control structures, subprograms, recursion, and global state in order to make the language widely accessible. Conservative for its time, it did not include higher-order functions. The final version of the language incorporated a backtracking mechanism (with success and fail primitives) as well as database operations. The popular scripting and general purpose programming language Python is understood to be a descendent of SETL, and its lineage is apparent in Python’s popularization of the use of mappings over arbitrary domains.
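The flavor of that set-theoretic core is easy to suggest in Python, the descendant mentioned in the memoir. The fragment below is ordinary Python, not SETL, but it shows the same ingredients: a set, a mapping over an arbitrary domain, and universally and existentially quantified expressions.

    # Plain Python, illustrating the style the memoir describes (not SETL syntax).
    s = {2, 3, 5, 8, 13}

    # A mapping over an arbitrary domain: each element of s to its square.
    squares = {x: x * x for x in s}

    # Quantified expressions: "for all x in s, x > 1" and "there exists an even x in s".
    all_greater_than_one = all(x > 1 for x in s)
    some_even = any(x % 2 == 0 for x in s)

    print(squares, all_greater_than_one, some_even)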

Schwartz viewed SETL first as a specification language allowing complex algorithms and data structures to be written down, conveyed to other humans, and executed as a part of algorithm development or even as a component of a complete prototype system. Actual production use would typically require reprogramming in terms of data structures closer to the machine such as arrays and lists. Schwartz believed that SETL programs could be optimized “by a combination of automatic and programmer-assisted procedures.” [3, page 70] He wrote several memos about his ideas for SETL [4, 5], and began assembling a project team — mostly graduate students. A series of design notes and memos called the SETL Newsletter was launched. [6] Malcolm Harrison, another NYU professor, had designed an extensible LISP-like language called BALM; in the first SETL Newsletter he sketched a simple prototype of SETL as a BALM extension. [7]

Over the following years the SETL Newsletters chronicled a long and confusing series of SETL implementations, built with various versions of BALM and also with LITTLE, a low-level systems programming language.

  • BALMSETL (1971-1972) consisted of a runtime library of procedures corresponding to the various SETL operations, and a modification of BALM which replaced the standard BALM syntactic forms with calls to the appropriate procedures in the library. This runtime library used a hash-based representation of sets (earlier prototypes had used lists).
  • SETLB (spring 1972) consisted of a preprocessor (written in Fortran) that translated a simplified subset of SETL to BALMSETL. BALM was converted from producing interpretative code for a generalized BALM machine to producing CDC 6600 machine code.
  • SETLB.2 (1973?) was based upon a version of the BALM interpreter written in LITTLE, plus the SETL Run Time Library. It offered a limited capability for variation of the semantics of subroutine and function invocation by the SETLB programmer.
  • SETLA (1974?)’s input language was closer to SETL, but it still used the BALMSETL-based runtime library and BALM-based name scoping.
  • SETLC (1975?) consisted of a lexical scanner and syntactic analyzer (written in LITTLE), tree-walking routines (written in BALM) that built BALM parse trees, a translator (written in BALM) that emitted LITTLE from the parse trees, and the LITTLE compiler. The generated LITTLE code used the SETL Run Time Library.
  • SETL/LITTLE (1977-1978?) consisted of a SETL-to-LITTLE translator, a runtime library, and a LITTLE-to-CDC 6600 machine code compiler (all written in LITTLE).

The final system (the only one for which source code is available) was ported to the IBM System/370, Amdahl UTS, DECsystem-10, and DEC VAX. There was also a sophisticated optimizer, itself written in SETL, which, however, was too large and slow to use in production. Work stopped around the end of 1984 as Schwartz’s focus moved to other fields such as parallel computing and robotics, and as many of the graduate students received their degrees. A follow-on SETL2 project produced more SETL Newsletters but no system.

Other SETL implementations

Starting in the mid-1970s, SETL-influenced languages were implemented at other institutions, including Akademgorodok in Novosibirsk, and then at NYU itself. After a 30-year gestation period, GNU SETL was released in 2022. See https://www.softwarepreservation.org/projects/SETL/#Dialects for more.

Aftermath

Many reports and theses were written and papers were published. Perhaps the best-known result was the NYU Ada project, an “executable specification” of Ada that became the first validated Ada implementation. The project members went on to develop the GNAT Ada compiler and to found AdaCore.

Acknowledgements

In addition to Annie Liu, many people helped me on this project; see https://www.softwarepreservation.org/projects/SETL/#Acknowledgments.

References

[1] John Cocke and Jacob T. Schwartz. Programming Languages and Their Compilers. Preliminary Notes. 1968-1969; second revised version, April 1970. Courant Institute of Mathematical Sciences, New York University. PDF at softwarepreservation.org

[2] Martin Davis and Edmond Schonberg. Jacob Theodore Schwartz 1930-2009: A Biographical Memoir. National Academy of Sciences, 2011. PDF at nasonline.org

[3] Jacob T. Schwartz. On Programming: An Interim Report on the SETL Project. Installment 1: Generalities; Installment 2: The SETL Language, and Examples of Its Use. Computer Science Department, Courant Institute of Mathematical Sciences, New York University, 1973; revised June 1975. PDF at softwarepreservation.org

[4] Jacob T. Schwartz. Set theory as a language for program specification and programming. Courant Institute of Mathematical Sciences, September 1970, 97 pages.

[5] Jacob T. Schwartz. Abstract algorithms and a set theoretic language for their expression. Computer Science Department, Courant Institute of Mathematical Sciences, New York University. Preliminary draft, first part. 1970-1971, 16+289 pages. PDF at softwarepreservation.org

[6] SETL Newsletter. #1-#217, November 1970 – November 1981; #220-#233, April 1987 – November 1988. Online at softwarepreservation.org

[7] M. C. Harrison. BALM-SETL: A simple implementation of SETL. SETL Newsletter #1, 5 November 1970. PDF at softwarepreservation.org