Decades ago, compilation was much simpler.
The first compiler was written by Grace Hopper in 1952 while the Lisp interpreter was written in 1958 by John McCarthy’s student Steve Russell. Writing a compiler seems like a much harder problem than an interpreter. If that is so, why was the first compiler written six years before the first interpreter?
See the full, original question and all answers here.
Euphoric answers (78 votes):
Writing a compiler seems like a much harder problem than an interpreter.
That might be true today, but I would argue that it was not the case some 60 years ago. A few reasons why:
- With an interpreter, you have to keep both it and the program in memory at once. In an age where 1 KB of memory was a massive luxury, keeping the running memory footprint low was key, and interpreting inherently requires more memory than running an already-compiled program.
- Modern CPUs are extremely complex with huge catalogs of instructions. So writing a good compiler is truly a challenge. Old CPUs were much simpler, so even compilation was simpler.
- Modern languages are much more complex than old ones, so modern compilers are much more complex too. The simple languages of that era admitted correspondingly simple compilers.
mctylr answers (35 votes):
The fundamental point is that the computing hardware environment of the 1950s, with its batch-oriented processing, made only a compiler feasible.
At the time, user interfaces were primarily limited to punch cards and teleprinters. In 1961 the SAGE system became the first computer with a Cathode-Ray Tube (CRT) display. So the interactive style of use that an interpreter invites was neither preferable nor natural until much later.
Numerous computers in the 1950s used front-panel switches to load instructions, with output read from rows of indicator lamps. Hobbyists were still using front-panel switches and LEDs into the 1970s; you may be familiar with the famous Altair 8800.
Other hardware limitations also made interpreters infeasible. There was the extremely limited availability of primary memory (e.g. RAM) in computers in the 1950s. Prior to the semiconductor integrated circuit (which didn't come until 1958), memory was limited to magnetic core memory or delay line memory, with capacities measured in plain bits or words, no metric prefix. Combined with the slowness of secondary storage (e.g. disk or tape), it would have been considered wasteful, if not infeasible, to devote much of that memory to the interpreter, even before the program being interpreted was loaded.
Memory limitations were still a major factor when the team led by John Backus at IBM created the FORTRAN compiler in 1954-57. This innovative compiler was successful only because it was an optimizing compiler.
Most computers in the 1950s barely had an operating system, let alone modern features such as dynamic linking and virtual memory management, so the idea of an interpreter was too radical and impractical at that time.
The languages of the 1950s were primitive. They included only a small handful of operations, often influenced either by the underlying hardware’s instructions or the problem definition of their targeted use.
At that time, computers were rarely general purpose computers in the sense that we think of computers today. That they were reprogrammable without having to be rebuilt was considered a revolutionary concept—previously people had been using electromechanical machines (typically calculators) to compute or calculate answers (the majority of applications in the 1950s were numeric in nature).
From a Computer Science point of view, compilers and interpreters are both translators, and roughly equal in complexity to implement.
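The "both are translators" point can be made concrete with a toy arithmetic language. The sketch below (all names are illustrative, not from the original post) walks the same expression tree twice: once evaluating it directly (an interpreter), and once translating it into instructions for a simple stack machine (a compiler). Note that the two translators are about the same size.

```python
# AST for the toy language: ("num", value) or (op, left, right) with op in {"+", "*"}.
ast = ("+", ("num", 2), ("*", ("num", 3), ("num", 4)))  # represents 2 + 3 * 4

def interpret(node):
    """Interpreter: evaluate the tree directly, on the fly."""
    op = node[0]
    if op == "num":
        return node[1]
    left, right = interpret(node[1]), interpret(node[2])
    return left + right if op == "+" else left * right

def compile_expr(node, code=None):
    """Compiler: translate the tree into stack-machine instructions."""
    if code is None:
        code = []
    op = node[0]
    if op == "num":
        code.append(("push", node[1]))
    else:
        compile_expr(node[1], code)   # emit code for the left subtree
        compile_expr(node[2], code)   # emit code for the right subtree
        code.append(("add",) if op == "+" else ("mul",))
    return code

def run(code):
    """Target machine: execute the compiled stack program."""
    stack = []
    for instr in code:
        if instr[0] == "push":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "add" else a * b)
    return stack[0]

assert interpret(ast) == run(compile_expr(ast)) == 14
```

Of course, real 1950s compilers targeted real hardware rather than a clean stack machine, and that back end is where the difficulty actually lived; but the front-end translation work is shared between the two approaches.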
Find more answers or leave your own answer at the original post. See more Q&A like this at Programmers, a question and answer site for professional programmers interested in conceptual questions about software development. If you’ve got your own programming problem that requires a solution, log in to Programmers and ask a question (it’s free).