These classnotes are deprecated. As of 2005, I no longer teach the classes. Notes will remain online for legacy purposes.
Pipes and Files
A pipe, in computer terms, is an interface by which the output of one program is fed as the input to another program. By the time UNIX was first being deployed in 1971, pipes were not a new idea by any means. However, few OSes at the time were utilizing them (and even fewer were using them effectively).
Under UNIX, pipes took on a whole new role: they became the basis of core interaction in the system. The OS, its user interface, and its communication were broken down into "bite-sized chunks": many small programs that could be assembled into larger, more complex functionality. Imagine Lego blocks for computer OSes, and you start to get the idea.
The team adopted a set of design principles for UNIX:
- "Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features."
- "Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input."
- "Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them."
- "Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them."
In addition to pipes, the UNIX developers adopted another excellent concept: the file abstraction. Under the file abstraction, all things (well, most things) in the OS are handled as though they were files: they can be read from, written to, and altered. Reading from a pipe is no different than reading from a file, and writing to a pipe is no different than writing to a file (okay, this isn't exactly correct, but we will assume it is for the purposes of this course). Using this abstraction, otherwise complex functionality such as device interaction could be hidden behind a simple file-like interface.
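You can see the file abstraction at work from the shell. The same tool, cat, happily reads from an ordinary file, a device, or a pipe, because each is presented through the same file interface (a sketch; paths like /etc/passwd assume a typical UNIX system):

```shell
# One tool, three very different sources, one interface:
cat /etc/passwd        # an ordinary file on disk
cat /dev/null          # a device, read like a file (always empty)
echo "hello" | cat     # a pipe, read like a file

# Redirection works for the same reason: file descriptor 0 (stdin)
# and 1 (stdout) can point at a terminal, a file, or a pipe, and
# programs neither know nor care which.
ls /tmp > listing.txt  # stdout redirected into a file
wc -l < listing.txt    # stdin redirected from a file
rm listing.txt
```

This is why tools written for the terminal work unchanged in scripts and pipelines: they only ever see file descriptors.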
Universities and Expansion
Because of the anti-trust restrictions placed on AT&T, the company could not legally charge much money for the UNIX operating system. As a result, AT&T practically gave UNIX away (universities could purchase unlimited licenses to UNIX and its source code for as little as $150).
This actually helped many universities at the time. Computer Science was still relatively new, and many Computer Science departments were struggling financially against their technology costs. Often the software for a given mainframe was as expensive as the mainframe itself. Because UNIX was written in C, and because universities could obtain licenses allowing them access to the source of UNIX (and even allowing them to change the code), it wasn't long before many universities were running custom-made UNIXes behind the scenes on their otherwise expensive hardware.
As you can imagine, having access to the source code of their OS (and being able to modify it) was often seen as a godsend in the classroom. An entire generation of programmers was trained working against the UNIX source code.
Many of these programmers contributed back to UNIX, and much of the progress and advancement of the original UNIX during the late 1970s and early 1980s was due to the efforts of countless programmers at Universities all over the country.
One group of developers was located at UC Berkeley. They modified UNIX and released what was known as BSD (Berkeley Software Distribution). Their UNIX-like BSD is what we will look at next week.
By the mid-1980s, the legal restrictions placed on AT&T subsided, and AT&T decided to sell UNIX commercially. The UNIX code that they had licensed was always their 'property' (at the time, the concept of software as 'property' was still quite new), and they even had rights to much of the code contributed by programmers at universities across the country.
They created something called AT&T System V, began touting it as better than the previous AT&T UNIX and the BSDs, and sold it to vendors. System V was sold under a very restrictive license that forced vendors to keep the source code to themselves and disallowed cooperation between vendors.
Thus, the open and free development and contribution to UNIX ended. An OS that had been largely a grass-roots effort became the ammunition in a corporate machine (at least, this is the way many at the time and even now see it).
Soon there was an explosion of commercial UNIXes, and places such as universities, which had previously enjoyed so much freedom, were now required to pay expensive fees for restrictive licenses to their UNIX-like OSes.
- Why didn't they switch from UNIX to something else? Well, consider what the alternatives were at the time. You had a fledgling and primitive OS in the form of MS-DOS, which was tied to low-end IBM machines and was not multi-tasking, multi-user, or anything else they needed. There was CP/M, which was only marginally better and was on its way out. You had VAX/VMS, but it required significant relearning for UNIX greybeards and wasn't much cheaper than UNIX. The options for real alternatives were limited...
Rise of GNU
As you can imagine, this action ruffled the feathers of many old UNIX hands around the country. Two groups formed to combat it. One was centered around BSD (which we will look at next week); the other was centered around a man named Richard Stallman and was called GNU.
GNU means "GNU's Not Unix". The GNU in that also means "GNU's Not Unix" (etc. etc.; this sort of recursive humor is common amongst hardcore computer geeks). GNU was started to create a complete alternative "UNIX-like" operating system and give it away (much as UNIX had been given away previously, except with safeguards to prevent the sort of thing that happened with AT&T's UNIX from happening again).
The initial announcement for GNU in 1983 is here: http://www.gnu.org/gnu/initial-announcement.html
This movement, started by Richard Stallman (RMS), became known as the "Free Software" movement, and it centered around a scheme of copyright licensing known as "copyleft".
A copyleft license, in a nutshell, is one which requires that all derivative works of a given piece of software code must remain free and open to all. In copyleft, the term "Free" means "Freedom", as in "The Freedom to contribute and share code". Often people will misinterpret this "Free" to mean "no cost". While it is true that the result of "Free Software" is that it is generally "no cost" (at least for the code), that is not the aim of "Free Software."
GNU started in 1984, but as of the time of this writing it has yet to complete the OS kernel for its system. The project has completed every other component necessary for a UNIX-like system, but it does not have a finished kernel of its own.
As a result, when the Linux kernel entered the scene, it had an otherwise complete OS waiting for it in the form of GNU utilities. When you combine a Linux kernel with the GNU system, you get a complete UNIX-like OS.
The most common copyleft license in use today is the GNU GPL, or the GNU General Public License. The Linux OS Kernel is published under the GNU GPL, as is much of the software included in a Linux distribution.
As a result, many people prefer to call a complete Linux distribution "GNU/Linux" (as Linux is just the kernel, and is but one part among the many GNU utilities comprising the complete OS). While it is an argument that I tend to agree with, I am a very lazy person, and will typically refer to an entire GNU/Linux system as simply "Linux."
For now, we will leave our discussion of Linux and GNU until another day. But if you would like to read more about the FSF, GNU GPL, or other things, you may want to look at the following links: