This series complements the “Introduction to modern computers” (ITMC) series and focuses on software rather than hardware. I recommend reading the introduction series first, as it goes in depth and low-level into elements like memory (and the use of memory) and assembly programming, while this series makes free use of these elements. The idea of this series is to give a complete, up-to-date and high/low-level picture of how operating systems work (along with a historical review). Let’s begin.
“The history of software in the United States has been somewhat under documented.” – This is the first line of the abstract of a short RAND document (“the document” for this post), and it’s even more true today than it was back in 1987. The birth and early evolution of operating systems is one of those scarcely documented subjects which, in my opinion, is important to know in order to understand why operating systems today look and behave the way they do.
The document was written by Robert L. Patrick, and gives an interesting view of how computers were used back in the 1940s and 50s. Try to put yourself in the shoes of a student or a scientist in those days (from the dawn of history until the 1960s) – you couldn’t just pick up a calculator to multiply two real numbers (like π and e), or get the result of sin(3.4). There weren’t any handheld calculators back then, and humanity was on the brink of harnessing electricity to build circuits that could perform these kinds of calculations. In Part 3 of the ITMC, for example, we saw how a combination of transistors and some simple logic can create a circuit that adds two small numbers. After figuring out how to add numbers, engineers and mathematicians (and any other curious scientist who got involved with computers back then) figured out and began to standardize circuits that could perform subtraction (along with the representation of signed numbers), multiplication and division (along with the representation of floating-point numbers – here’s a nice video showing how to manually represent real numbers in binary).
Programmers back then had to figure out how to do complex calculations with only addition, subtraction, multiplication and division. There were mathematical methods for this – using a Taylor series, for example, a programmer could write code that approximated the result of a trigonometric function to good precision using nothing but multiplication, division and addition. Since a Taylor-series calculation gets more precise with each iteration, but more iterations mean more time to compute, the programmer was in charge of the balance between precision and time. Questions like “How can I solve this problem using a computer?” and “How can I get these calculations to run faster?” gave birth to the field of computer science.
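To make this concrete, here’s a minimal sketch of the idea in modern Fortran (free-form syntax that didn’t exist back then; the program and variable names like taylor_sin and n_terms are mine). It approximates sin(x) with a few terms of its Taylor series, using nothing but multiplication, division and addition, and the number of terms is the knob that trades precision against computation time:

program taylor_sin
  implicit none
  real    :: x, term, approx
  integer :: n, n_terms

  x = 3.4          ! the value whose sine we want
  n_terms = 7      ! more terms -> more precision, but more work

  ! sin(x) = x - x**3/3! + x**5/5! - x**7/7! + ...
  term   = x       ! first term of the series
  approx = x
  do n = 1, n_terms
     ! each new term is the previous one times -x**2 / ((2n)*(2n+1))
     term   = -term * x * x / real(2*n * (2*n + 1))
     approx = approx + term
  end do

  print *, "Taylor approximation of sin(x):", approx
  print *, "Intrinsic sin(x) for comparison:", sin(x)
end program taylor_sin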
Now back to the subject. During the 1940s and early 50s, computers were used without operating systems at all. This means they could run a single program at a time, and a considerable amount of hardware setup was needed between programs. Consider the fact that these early computers were far from being a common sight back then, and that they were far from “personal”. A computer would fill an entire room (remember the vacuum tubes), cost a small fortune, and be shared by a large number of workers. This snip from the above-mentioned document will help you imagine how it looked:
To use this computer (and all of its resources and peripherals, like the punched-card reader, the magnetic-core memory, and the magnetic-tape storage media) you’d have to get in a FIFO queue and wait until your turn came up. Once in the computer room, you had a limited time quantum to run your program through the computer. Part 7 of the ITMC briefly mentioned the punched cards that were used back then. Imagine a programmer sitting at his desk, punching holes in a bunch of cards, running back and forth to the computer room to check if it’s his turn to use the computer, and carrying around a stack of punched cards that is his computer program:
This was of course a mess (those of you who work as programmers probably understand why). As Patrick described in the document:
So the very expensive computer sat idle most of the time, and its usage was inefficient (to say the least). The lack of efficiency mainly came from the fact that each programmer had to manually set up the entire computer system before he could run his program. Programmers also had to “reinvent the wheel” and write their own I/O handling code and service routines (like a binary-to-decimal converter for printer output). Patrick mentions that around mid-1955 an IBM computer users’ group (SHARE) was formed to tackle these subjects – code and knowledge sharing for mutual benefit. The highlights of the programmers’ proposal, derived during these meetings, are described on pages 7-10 of Patrick’s document. The main thought was that in order to make the usage of the computer system more efficient, a main program should reside in memory to handle user input and output, following these guidelines:
- Input and output would be implemented using magnetic tapes (not punched cards) containing files known as SYSIN and SYSOUT – operators now came with their tapes, ran the program (free of paper-card jams), and took their tapes back to a machine that would print out the results (allowing someone else to use the computer).
- SYSIN contained a batch of independent jobs (programs), along with metadata describing each job.
- I/O peripherals and external memory modules were standardized. This removed the need to set the hardware up, and programmers made use of different memory modules and I/O peripherals by programmatic means (described in Part 8A and Part 8B of the ITMC).
- Programmers were kicked out of the computer room. Computer operators handled the machinery while the programmers wrote the code.
- Standard decimal-to-binary/binary-to-decimal routines were available for use. The programmer only needed to know how to call these routines (a toy sketch of such a routine follows this list).
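To illustrate that last point, here’s a hypothetical toy in modern Fortran (nothing like the actual SHARE routines; the names conversion_demo and bin_to_dec are mine) of what a shared binary-to-decimal service routine might look like: it turns a non-negative integer, held in binary by the machine, into a string of decimal digit characters ready to be sent to the printer.

program conversion_demo
  implicit none
  character(len=12) :: line

  call bin_to_dec(1957, line)   ! the programmer just calls the shared routine
  print *, "printer output:", line

contains

  ! Hypothetical shared service routine: convert a non-negative integer
  ! (stored in binary) into right-aligned decimal digit characters.
  subroutine bin_to_dec(value, text)
    integer, intent(in)           :: value
    character(len=*), intent(out) :: text
    integer :: v, pos

    text = ' '
    v    = value
    pos  = len(text)

    if (v == 0) then
       text(pos:pos) = '0'
       return
    end if

    do while (v > 0 .and. pos > 0)
       ! peel off the least-significant decimal digit
       text(pos:pos) = achar(iachar('0') + mod(v, 10))
       v   = v / 10
       pos = pos - 1
    end do
  end subroutine bin_to_dec

end program conversion_demo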
These ideas marked the birth of the first operating system, the GM-NAA I/O System (General Motors and North American Aviation Input/Output system), in 1957. The first OS was born from the frustration of programmers and the interest of corporations in maximizing the usage of a computer that was very expensive to rent. The operating system contained code which provided the infrastructure to assemble and run programs. SYSIN contained the job’s assembly code, and with a flick of a few switches, the OS read the code from the tape, assembled it into object code (which was kept in the much faster magnetic-core memory), sampled the system’s hardware clock and finally ran the program (the object code) while keeping track of the time and resources used. Once the program ended (completed or failed somewhere along the way), the OS would add an advisory invoice to the trailing page of each printout so that the programmer would be aware of the resources used and their cost each time a run was made. To quote Patrick’s document: “We found the resulting self-discipline of great benefit since programmers naturally do more desk checking when a wasted shot at the machine costs more than a day’s wage”. Fun times. When one program finished, the OS was ready to run the next one (tapes still needed to be manually switched in and out by the computer operators).
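As a rough illustration of that flow, here’s another hypothetical toy in modern Fortran (not GM-NAA I/O code; the names batch_monitor and run_job are mine) of what the monitor’s main loop boils down to: take the next job from the batch, sample the clock, run it, and report the resources it consumed.

program batch_monitor
  implicit none
  integer :: n_jobs, job
  real    :: t_start, t_end

  n_jobs = 3                   ! pretend SYSIN holds three independent jobs
  do job = 1, n_jobs
     call cpu_time(t_start)    ! sample the clock before the job starts
     call run_job(job)         ! "assemble and run" the user's program
     call cpu_time(t_end)      ! sample the clock again when it ends
     ! the advisory invoice: report the resources the job consumed
     print *, 'Job', job, 'used', t_end - t_start, 'seconds of CPU time'
  end do

contains

  subroutine run_job(id)
    integer, intent(in) :: id
    integer :: i
    real    :: s
    s = 0.0
    do i = 1, 1000000          ! stand-in for the user's actual program
       s = s + sqrt(real(i))
    end do
    print *, 'Job', id, 'finished, result =', s
  end subroutine run_job

end program batch_monitor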
Besides compiling and running user programs, the OS offered standard, ready-to-use routines that the programmer could use for his (and everyone’s) benefit – an idea that stands at the base of modern operating systems. This will be discussed in the following parts of the series.
The OS’s code did not sit in ROM (it probably sat on magnetic tape and was loaded into the magnetic-core memory at run time), meaning that rogue programs could corrupt the OS’s code. Furthermore, programs had unrestricted access to all of the computer’s resources. Thinking in more malicious terms, a programmer could theoretically modify the records of resource usage to get a nice “discount” on the usage of the equipment. Here’s a snip concerning that subject:
The first thing operating-system developers are taught today is to treat the user as an entity that will always try to destroy the OS and burn the hardware on which it runs (intentionally or not). This way of thinking has its roots in the very first operating systems, and it is not baseless.
Soon after the GM-NAA I/O System was implemented, it was upgraded to include a FORTRAN compiler (which compiled FORTRAN code to object code). FORTRAN was a high-level programming language, which means it provided an abstraction over the hardware. For example, programmers writing FORTRAN programs could now use a “print” statement which handled the peripheral I/O code for them. Instead of writing god knows how many lines of IBM-704 assembly code, here’s a program that prints the “Hello, World!” string (using the standard output device, whichever was set at the time):
program Hello
print *, "Hello, World!"
end program Hello
Again, the OS came to the programmer’s rescue by including the code for the compiler. If by this point you are thinking “generating the object code for a ‘print’ statement for each program over and over again is a horrible waste of time”, then you are starting to think like an OS developer, and these are exactly the thoughts that turned these basic I/O systems into the OSs we know today.
This is how the beginning of operating systems looked. OSs were created out of a need to provide the user with an infrastructure to run his programs as quickly and efficiently as possible (because there are 30 angry programmers with stacks of punched cards/tapes in their hands waiting in line). In the next post we’ll continue going over the historical development of operating systems with new concepts of standardization, portability and protection. Hope you found this post informative. Feel free to leave comments and ask questions.