The History Of Computers

Companies promote it for their employees. Parents demand it for their children. Those who have it believe they have a competitive edge. Those who don't have it seek it out. "It" is the computer. We are rapidly becoming a computer society. Until recently computers were found only in environmentally controlled rooms behind locked doors; only computer professionals dared enter these secured premises. Today, in contrast, computers are found in millions of homes and in just about every office. In fact, there is a computer for one in every eight people. Eventually all of us will have at least one computer and use it every day in our work and leisure.

The history of modern electronic computers may have begun in 1942, but several earlier events helped to set the stage.


An Abacus

Early History:

The abacus was probably the original mechanical counting device. It has been traced back at least 5,000 years, and its effectiveness has withstood the test of time.

The inventor and painter Leonardo da Vinci (1452-1519) sketched ideas for a mechanical adding machine. A century and a half later the French philosopher and mathematician Blaise Pascal (1623-1662) finally invented and built the first mechanical adding machine. It was called the Pascaline and used gear-driven counting wheels to do addition.

Pascal's Adding Machine

Another device, the Jacquard loom, was developed by the French inventor Joseph Marie Jacquard in 1804 to automate weaving on a loom. The device used holes punched in cards to determine the settings for the loom. By using a set of punched cards, the loom could be "programmed" to weave an entire rug in a complicated pattern. This system of encoding information by punching a series of holes in paper provided the basis for the data-handling methods that would eventually be used in the early computers.

Jacquard's Loom

A few years later Charles Babbage (1793-1871), an English visionary and Cambridge professor, advanced the state of computational hardware by inventing a "difference engine" capable of computing mathematical tables. In 1834 Babbage conceived the idea of an "analytical engine"; in essence, this was a general-purpose computer. As designed, his analytical engine would add, subtract, multiply, and divide in automatic sequence at a rate of 60 additions per minute. The design called for thousands of gears and drives that would cover the area of a football field and be powered by a locomotive engine. Babbage worked on his analytical engine until his death.

Forty years later, Dr. Herman Hollerith, an employee of the U.S. Census Bureau, put Jacquard's punched-card concept together with some of the ideas that had been proposed by Charles Babbage to solve a real-world problem. The bureau had taken seven and a half years to complete the 1880 census. By 1890, with a substantial increase in the population, it anticipated that the census would take even longer. Hollerith proposed a solution based on what he termed a census machine, which would count data fed in on punched cards. The holes in the cards represented the census data, and as the machine detected the holes, their values were incremented on numbered dials.

Hollerith's Census Machine

Hollerith formed the Tabulating Machine Company in 1896, which later merged into an organization that would grow and evolve into International Business Machines (IBM), the world's largest computer company.

Although computational machines continued to evolve, the invention of modern computers could not come about until the supporting technology of electrical switching devices was in place.

By 1937 electricity was in general use in most of the world's cities and the principles of radio were well understood. Using these new tools, several researchers were working on electrically powered versions of the earlier computing devices. Among them was Howard Aiken of Harvard University. In 1944 he completed the basic development of a machine dubbed the Mark I. The machine, also known as the Automatic Sequence Controlled Calculator, is now seen as the first full-sized digital computer. The Mark I weighed 5 tons, included 500 miles of wire, was used only for numeric calculations, and took three seconds to carry out one multiplication.


By 1946 John Mauchly and J. Presper Eckert were developing a large-scale computing device at the University of Pennsylvania. They had a working device based on electronic switches and radio vacuum tubes. This device, known as the Electronic Numerical Integrator and Calculator (ENIAC), is now seen as the first electronic computer. It could perform thousands of calculations per second and was used for a variety of purposes, including scientific research and weather prediction.

John von Neumann, a member of the Mauchly and Eckert team, introduced a significant improvement in the method of controlling this computer. Instead of changing around the cables that connected one part of the computer with another in order to set up a particular computing function, he suggested that a set of standard connections between machine components be established, and that a special area of computer memory be developed to store both data and program instructions.

The ENIAC

This concept of stored computer memory is still used in today's computers. The imposing scale and general applicability of the ENIAC signaled the beginning of the first generation of computers.

The first generation computer

IBM 709

The IAS computer can be taken as representative of what are now called first-generation computers. It includes the main registers, processing circuits, and information paths within the central processing unit.

Architecture of a first-generation computer

Structure of a first-generation computer: the IAS

Information format

The basic unit of information in the IAS machine is a 40-bit word, which may be defined as the amount of information that can be transferred between the main memory (M) and the CPU in one step. The memory M has a capacity of 4096 words. A word stored in M can represent either instructions or data. IAS instructions are 20 bits long, so that two instructions can be stored in each 40-bit memory word. An instruction consists of two parts: an 8-bit operation code that defines the operation to be performed, and a 12-bit address part that can identify any of the 2^12 locations that may be used to store an operand of the instruction. While each EDVAC instruction contained four memory addresses, the IAS instruction allows only one. This results in a substantial reduction in word length. Two aspects of the IAS organization make this possible:
  1. Fixed registers in the CPU are used to store operands and results. The IAS instructions automatically make use of these registers as required. In other words, CPU register addresses are implicitly specified by the operation code.
  2. The instructions of a program are stored in main memory in approximately the sequence in which they are executed. Hence the address of the next instruction pair is usually the address of the current instruction pair plus one. The need for a next-instruction address in the instruction format is eliminated.
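The word and instruction formats just described can be sketched in a few lines of Python. This is an illustration only: the placement of the two instructions within the 40-bit word (left half executed first) and the ordering of the opcode and address fields are assumptions made for the example.

```python
# Sketch: unpacking a 40-bit IAS memory word into its two 20-bit
# instructions, each an 8-bit opcode plus a 12-bit address.
# The exact bit layout is an assumption for illustration.

def decode_word(word40):
    """Split a 40-bit word into two (opcode, address) instruction pairs."""
    left = (word40 >> 20) & 0xFFFFF      # upper 20 bits: first instruction
    right = word40 & 0xFFFFF             # lower 20 bits: second instruction
    def fields(instr20):
        opcode = (instr20 >> 12) & 0xFF  # 8-bit operation code
        address = instr20 & 0xFFF        # 12-bit address (0..4095)
        return opcode, address
    return fields(left), fields(right)

# Example: a word holding opcodes 1 and 2 with addresses 0x123 and 0x456
word = (0x01 << 32) | (0x123 << 20) | (0x02 << 12) | 0x456
print(decode_word(word))  # ((1, 291), (2, 1110))
```

Note how the 12-bit address field is exactly what limits the memory to 2^12 = 4096 words.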

System organization

The CPU of the IAS computer consists of a data processing unit and a program control unit. It contains various processing and control circuits, along with a set of high-speed registers (AC, MQ, DR, IBR, PC, IR, AR) intended for the temporary storage of instructions, memory addresses, and data. The main actions specified by instructions are performed by the arithmetic-logic circuits of the data processing unit. The control circuits are responsible for routing information correctly through the system and for providing the proper control signals for all CPU actions. An electronic clock circuit generates the basic timing signals needed to synchronize the operation of the different parts of the system.

The main memory M is used for storing programs and data. A word transfer can take place between the 40-bit data register DR of the CPU and any location M(X) with address X in M. The address X to be used is stored in the 12-bit address register AR. The DR may be used to store an operand during the execution of an instruction. Two additional registers for the temporary storage of operands and results are included: the accumulator AC and the multiplier-quotient register MQ.

Two instructions are fetched simultaneously from M and transferred to the program control unit. The instruction that is not to be executed immediately is placed in the instruction buffer register IBR. The operation code of the other instruction is placed in the instruction register IR, where it is decoded. The address field of the current instruction is transferred to the memory address register AR. Another address register, called the instruction address register or program counter PC, is used to store the address of the next instruction to be executed.

Method of operation

Partial flowchart of IAS computer operation.

Instructions are fetched and executed in two separate consecutive steps called the fetch cycle and the execution cycle. Together they form the instruction cycle; the image above shows the principal actions carried out in each cycle. The fetch cycle is common to all instructions. Since two instructions are obtained simultaneously from M, the next instruction may already be in the instruction buffer register IBR. If not, the previously incremented contents of the program counter are transferred to the address register, and a READ request is sent to M. The required word at memory location X is then transferred to the data register DR. The operation code of the required instruction is sent to the instruction register and decoded. The address part of the instruction goes to the address register, while the second instruction may be transferred to the instruction buffer register IBR. The program counter PC is incremented whenever the next instruction is not in the IBR. The computer then enters the execution cycle, and its subsequent actions depend on the particular instruction being executed.
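The fetch cycle described above can be modelled as a small toy simulator. Register names (PC, IBR, IR, AR) follow the text; the bit layout and the sentinel used to mark an empty IBR are assumptions of this sketch, not details of the real machine.

```python
# Minimal sketch of the IAS fetch cycle: two instructions arrive per
# 40-bit word, and the second is buffered in IBR so that only every
# other fetch needs a memory access.

class IASFetch:
    def __init__(self, memory):
        self.M = memory      # list of 40-bit words
        self.PC = 0          # program counter (word address)
        self.IBR = None      # instruction buffer register (None = empty)

    def fetch(self):
        """Return (opcode, address) of the next instruction."""
        if self.IBR is not None:            # next instruction already buffered
            instr, self.IBR = self.IBR, None
        else:
            word = self.M[self.PC]          # READ request to M
            instr = (word >> 20) & 0xFFFFF  # left instruction executes first
            self.IBR = word & 0xFFFFF       # buffer the right instruction
            self.PC += 1                    # PC incremented on a real fetch
        IR = (instr >> 12) & 0xFF           # opcode to instruction register
        AR = instr & 0xFFF                  # address part to address register
        return IR, AR

program = [(0x01 << 32) | (0x100 << 20) | (0x02 << 12) | 0x200]
cpu = IASFetch(program)
print(cpu.fetch())  # (1, 256)  left instruction, fetched from M
print(cpu.fetch())  # (2, 512)  right instruction, taken from the IBR
```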

The second generation computer

The so-called second-generation computers can be taken to be those produced during the second decade of the electronic computer era (approximately 1955-1964). They are mainly characterized by the change from vacuum tube to transistor technology; however, several other important developments also occurred, which are summarized below.

The earliest transistor computer appears to have been an experimental machine, the TX-0, which was operational in 1953. Many of the improvements associated with second-generation computers actually first appeared in vacuum-tube or hybrid machines. The IBM 704, a vacuum-tube machine, had index registers, floating-point hardware, and a rudimentary operating system. The later models, such as the 709, had input-output processors (then called "data synchronizers" and later "channels"), which were special-purpose processors used exclusively to control IO; previously, IO actions were controlled directly by the CPU, a scheme now termed programmed IO. These machines were very successful commercially. With the second generation it became necessary to talk about computer systems, since the number of memory units, processors, IO devices, and other system components could vary between different installations, even though the same basic computer was used.

IBM 7094

The IBM 7094 Model I was a middle member of IBM's second generation of scientific computers, built with discrete transistors. The image shows the console. The 7094 system is about the same size as the 709 and has the same general organization. The IBM 2302 disk drive was used with the 7094; its disk platters are 24 inches in diameter, and the head assembly is positioned with compressed air. It is one of the last models of this size and can store 300 MB.

IBM 2302 disk drive

A second generation computer

Structure of a second-generation computer: the IBM 7094

The IBM 7094 is a representative large-scale scientific machine of the second generation. Its CPU differs from that of the IAS computer mainly in the addition of a set of index registers, and of arithmetic circuits that can handle both floating-point and fixed-point operations. All input-output operations are controlled by a set of IO processors (IOPs), which have direct access to the main memory M. A control unit is used to switch the memory between the CPU and the various IOPs. In the following description of the 7094, only those aspects that are significantly different from the IAS machine are discussed.

Most of the CPU registers are similar to those in the IAS computer and have been assigned the same names. During an instruction cycle, the CPU fetches two successive instructions from memory; the second instruction is stored in the instruction buffer register IBR. The 7094 has seven index registers, each of which can store a 15-bit address. A 3-bit "tag" subfield of the operation code of an instruction is used to indicate whether indexing is required and which index register is to be used. If index register XR(i) is indicated, then the address field currently in the address register AR has the contents of XR(i) subtracted from it, using a special set of index adders, to form the effective address AR - XR(i). The effective address is used to access main memory. The program does not alter its own instructions, since the address modifications are carried out in the CPU and not in main memory.

The instruction repertoire of the 7094 has more than 200 types of instructions. They can be classified as follows:
  1. Data-transfer instructions for transferring a word of information between the CPU and memory or between two CPU registers.
  2. Fixed-point and floating-point arithmetic instructions.
  3. Logical (nonnumerical) instructions.
  4. Instructions for modifying index registers.
  5. Conditional and unconditional branching, and related control instructions.
  6. Input-output operations for transferring data between IO devices and main memory (some executed in the CPU and some in the IOPs).
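The 7094 effective-address calculation described above (the index-register contents are subtracted from the address field) can be sketched as follows. The register values and the modulo-2^15 wraparound convention here are illustrative assumptions.

```python
# Sketch of the 7094 effective-address calculation: a nonzero 3-bit tag
# selects index register XR(i), whose contents are subtracted from the
# 15-bit address field by the index adders.

def effective_address(ar, tag, xr):
    """ar: 15-bit address field; tag: index-register selector (0 = none);
    xr: dict mapping tag -> index-register contents."""
    if tag == 0:
        return ar                        # no indexing requested
    return (ar - xr[tag]) % (1 << 15)    # AR - XR(i), kept to 15 bits

xr = {4: 3}                              # index register XR(4) holds 3
print(effective_address(1000, 4, xr))    # 997: indexed access
print(effective_address(1000, 0, xr))    # 1000: no indexing
```

Because the subtraction happens in the CPU's index adders, the instruction word in memory is never modified, exactly as the text notes.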

An important feature of second-generation machines is the provision of special branch instructions to facilitate the transfer of control between different programs, e.g., calling subroutines. In the 7094 an instruction TSX (transfer and set index) is available for this purpose. Suppose execution of a subroutine that begins in location SUB is desired. Then the instruction LINK TSX SUB,4 causes its own address (the link) to be placed in the designated index register XR(4), and the next instruction is taken from memory location SUB. In order to return control to the calling program, the subroutine must terminate with an instruction such as TRA 1,4, meaning go (transfer) to the address 1+XR(4), which contains the next instruction after LINK in the main program.
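The TSX/TRA linkage above can be imitated with a toy interpreter. This is a simplified model (the real 7094 stores the link in complemented form, which this sketch ignores); the dictionary-based machine state is purely illustrative.

```python
# Toy model of 7094 subroutine linkage: TSX saves the caller's address
# (the link) in an index register, and TRA 1,4 returns to link + 1.

def tsx(state, sub_addr, reg=4):
    state['XR'][reg] = state['PC']   # save the address of the TSX itself
    state['PC'] = sub_addr           # jump to subroutine entry SUB

def tra(state, offset=1, reg=4):
    state['PC'] = offset + state['XR'][reg]   # transfer to offset + XR(reg)

state = {'PC': 100, 'XR': {}}        # LINK TSX SUB,4 sits at location 100
tsx(state, sub_addr=500)             # call: PC becomes 500, XR[4] holds 100
tra(state)                           # return: PC becomes 101, the word after LINK
print(state['PC'])                   # 101
```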

Input-output processing

IO processors, such as those of the 7094, supervise the flow of information between main storage and IO devices. They do so by executing special IO programs, which are composed of IO instructions and are stored in main memory. An IOP begins execution of an IO program only when an initiation instruction is sent to it by the CPU. This instruction typically contains the address of the first instruction in the IO program to be executed. The IOP can then execute the program without reference to the CPU. The CPU can, however, monitor the IO operations by means of instructions that obtain status information from the IOPs. An IOP may also be able to communicate with the CPU to indicate unusual conditions, such as the end of an IO operation, by means of special control signals called interrupts. Interrupt facilities, introduced in some second-generation machines, enable the CPU to respond rapidly to changes in IO activity and greatly improve its overall efficiency.

The structure of an IOP is based here on that of the IBM 7094 computer system (see image). Data is transferred between the IOP and main memory a word (36 bits) at a time, but transfer between the IOP and IO devices, e.g., magnetic tapes, is by character (6 bits). The IOP therefore contains circuits for assembling characters into words and disassembling words into characters. The main data register DR stores one word and is connected to the memory data bus; its role is that of a buffer register. A 5-bit instruction register IR stores the operation-code part of the current IO instruction, while the address register AR holds a 15-bit memory address. The number of words to be transferred during data-transfer operations is stored in the data count register DC.
A program counter register PC stores the address of the next IO instruction to be executed by the IOP. Finally, the status of the current IO operation is maintained in a status register SR in the IOP. This register may be used to store abnormal- or error-condition information and can be examined by the CPU. An IO operation typically proceeds in the following way:
  1. The CPU initiates the operation when it encounters an IO instruction while executing some program. This instruction specifies that IO device d, connected to IOP c, is to be selected for an input (read) or output (write) operation. It also specifies the address a in main memory of the first instruction in the IO program to be executed by the designated IOP.
  2. The CPU transfers the IO device name d and the IO program starting address a to IOP c.
  3. IOP c then proceeds to execute the IO program in question. When the IO operation terminates, either normally or abnormally, the status register SR is set accordingly, and an interrupt signal may be sent to the CPU.

IOP instructions fall into three groups:

  1. IO device-control instructions. These are transmitted from the IOP to the active IO device and are peculiar to the device in question. Two examples are: rewind a magnetic tape, and position a magnetic-disk unit's read/write head.
  2. Data-transfer instructions. These have the form: transfer n words between the active IO device and main storage. Each such instruction contains the data count n, which is stored in the IOP data count register DC, and the initial address of the main memory data area to be used, which is placed in the address register AR. The IOP then proceeds to carry out the data transfer. Each time a word is transmitted through the IOP, DC is decremented by 1 and AR is incremented by 1. When DC reaches zero, i.e., all n words have been transferred, the IOP can proceed to the next instruction, whose address is in its program counter PC.
  3. IOP control instructions. These are mainly conditional and unconditional branch instructions, not unlike those executed by the CPU.
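The data-transfer loop in item 2 (AR counting up while DC counts down to zero) can be sketched directly. Memory and the device are modelled as plain Python lists; this is a behavioural illustration, not IOP code.

```python
# Sketch of the IOP data-transfer loop: each word moved decrements the
# data count DC and increments the address register AR until DC is zero.

def iop_transfer_in(memory, device_words, start_addr, count):
    """Transfer `count` words from an input device into memory at start_addr."""
    AR, DC = start_addr, count
    i = 0
    while DC > 0:
        memory[AR] = device_words[i]   # one word through the IOP data register
        AR += 1                        # next memory location
        DC -= 1                        # one fewer word to transfer
        i += 1
    return AR, DC                      # DC == 0: proceed to next IO instruction

mem = [0] * 16
print(iop_transfer_in(mem, [11, 22, 33], start_addr=4, count=3))  # (7, 0)
print(mem[4:7])                                                   # [11, 22, 33]
```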

IOPs and the CPU share a common access path to main memory, usually via a memory control unit and a set of shared communication lines called a system bus. Since IO operations are typically very slow compared with CPU speed, most of the memory-access requests can be expected to come from the CPU.

Large systems

In the early days all programs were run separately, and the computer had to be halted and prepared manually for each new program to be executed. With the improved IO equipment and programming that came with second-generation computers, it became feasible to prepare a batch of jobs in advance, store them on magnetic tape, and then have the computer process them in one continuous sequence, placing the results on another magnetic tape. This mode of operation management is termed batch processing. It also became common to employ a small auxiliary computer to process the input and output tapes off-line, allowing the main computer to devote all its attention to executing user programs. Batch processing requires the use of a supervisory program termed a monitor, which is permanently resident in main memory. Later operating systems were designed to enable a single CPU to process a set of independent programs concurrently, a technique called multiprogramming. This is accomplished by having the CPU temporarily suspend execution of its current program, begin execution of another program, and then return to the first program later. Multiprogramming systems that process many users' programs concurrently in an interactive manner are called timesharing systems.

Third Generation




The year 1965 is considered the first in which third-generation computers began to operate. The distinction between the second and third generations is not clear-cut, but it is usually based on the following developments:
  1. INTEGRATED CIRCUITS (ICs) began to replace the DISCRETE TRANSISTOR CIRCUITS used in second-generation machines. The result was a substantial reduction in physical size and cost.

  2. SEMICONDUCTOR (IC) memories began to augment and finally replaced ferrite cores in main memory designs. The two main types are READ-ONLY MEMORIES (ROMs) and read-write memories, also called RANDOM ACCESS MEMORIES (RAMs).

  3. A technique called MICROPROGRAMMING became widespread; it simplified the design of CPUs and increased their flexibility.

  4. A variety of techniques for concurrent or PARALLEL PROCESSING emerged, such as PIPELINING and MULTIPROCESSING. The result was an acceleration of the speed at which programs could be executed.

  5. Efficient methods for the automatic sharing of the facilities or resources of a computer system, such as processors and memory space, were developed and incorporated into operating systems.

The number of different third-generation computers is very great. The most influential computer introduced was IBM's System/360 series. This is a family of computers intended to cover a wide range of computing performance; the various models are largely compatible, in that a program written for one model should be capable of being executed on any other model in the series. What may differ is the execution time and perhaps the memory space requirements. Many of IBM's features have become standards in the computer industry. The design of large, powerful computers began with the LARC. Other models were Control Data Corporation's CDC 6600 in 1964, the 7600 in 1969, and the subsequent CYBER series. These machines were characterized by the inclusion of many IOPs (called peripheral processors) with a high degree of autonomy. In addition, each CPU is subdivided into a number of independent processing units which can be operated simultaneously. A CPU organization called PIPELINING was used to achieve very fast processing in several computers, such as the CDC STAR-100 (string array computer) and the Texas Instruments ASC (Advanced Scientific Computer). The ILLIAC IV was another supercomputer, designed in the late 1960s at the University of Illinois. It had 64 separate CPU-like processing elements, all supervised by a common control unit and all capable of operating simultaneously.


PDP 11

An interesting contrasting development in this generation was the mass production of small, low-cost computers called MINICOMPUTERS. An important early representative was the LINC, developed at MIT in 1963. Minicomputers are characterized by short word lengths of 8 to 32 bits, limited hardware and software facilities, and small physical size. Their low cost makes them suitable for a wide variety of applications such as industrial control, where a dedicated computer, i.e., a computer permanently assigned to one application, is needed. In recent years, improvements in device technology have resulted in minicomputers that are comparable in performance to large second-generation machines and greatly exceed the performance of most first-generation machines.


Microprogramming is a technique for implementing the control function of a processor in a systematic, flexible manner. Maurice V. Wilkes enunciated the concept in 1951. Although it was implemented in several first- and second-generation machines, it was not until the mid-1960s, with its appearance in some models of the IBM System/360 series, that the use of microprogramming became widespread. Microprogramming may be considered an alternative to HARDWIRED CONTROL. A hardwired control unit for a processor is typically a sequential circuit, as shown in figure "shekef-1." A microprogrammed processor control circuit, on the other hand, has the structure shown in "shekef-2" and works this way:

* Each instruction of the processor being controlled causes a sequence of microinstructions, called a MICROPROGRAM, to be fetched from a special ROM or RAM, called a CONTROL MEMORY.

* The microinstructions specify the sequence of microoperations or register transfer operations needed to interpret and execute the main instruction.

* Each instruction fetch from main memory thus initiates a sequence of microinstruction fetches from control memory.

Microprogramming provides a simpler and more systematic way of designing control circuits, and it greatly increases the flexibility of a computer. The instruction set of a microprogrammed machine can be changed merely by replacing the contents of the control memory. This makes it possible for a microprogrammed computer to execute directly programs written in the machine language of a different computer, a process called EMULATION. Microprogrammed control units tend to be more costly and slower than hardwired units, but these drawbacks are generally outweighed by the greater flexibility that microprogramming provides. Because of the close interaction of software and hardware in microprogrammed systems, microprograms are sometimes referred to as FIRMWARE.
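The scheme above can be sketched as a toy control memory: each opcode indexes a microprogram, i.e., a list of register-transfer steps. The opcodes and register-transfer strings here are invented for illustration; the point is that replacing the table changes the machine's instruction set, which is exactly what makes emulation possible.

```python
# Toy microprogrammed control: the opcode of each machine instruction
# selects a microprogram (a sequence of register-transfer microoperations)
# stored in a control memory. All names are illustrative.

CONTROL_MEMORY = {
    'LOAD':  ['MAR<-addr', 'MDR<-M[MAR]', 'AC<-MDR'],
    'ADD':   ['MAR<-addr', 'MDR<-M[MAR]', 'AC<-AC+MDR'],
    'STORE': ['MAR<-addr', 'MDR<-AC', 'M[MAR]<-MDR'],
}

def execute(opcode):
    """Return the microinstruction sequence the control unit steps through."""
    return list(CONTROL_MEMORY[opcode])   # one control-memory fetch per step

print(execute('ADD'))  # ['MAR<-addr', 'MDR<-M[MAR]', 'AC<-AC+MDR']
```

Swapping in a different CONTROL_MEMORY table, with microprograms for another machine's opcodes, is the essence of emulation as described above.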

Parallel processing

The increased level of parallel processing characteristic of the third generation was achieved in part by the use of multiple processors with a high degree of autonomy and flexible intrasystem communication facilities. This is illustrated in "shekef-3," which shows a possible configuration of the Burroughs B5000 and its successor, the B5500. The main memory is partitioned into eight independently accessible modules M1 to M8. These are connected via the memory exchange to two CPUs and four IOPs. The memory exchange, which is a crossbar switching network, permits simultaneous access to main memory by the six processors, provided that they each access different modules. A similar interconnection network, the IO exchange, connects the IOPs to up to 32 input-output devices. This organization permits many operations to take place simultaneously.

Parallelism can also be introduced on a lower level by overlapping the fetching and the execution of individual instructions by a single CPU. Two distinct methods of achieving this have evolved.

1. More than one unit can be provided to carry out a particular operation, say addition. By employing n independent adders, n additions can be performed simultaneously. This type of structure permits array operations to be performed very rapidly.

2. A processing unit can be designed in the form of a pipeline, which allows the execution of a sequence of microoperations to be overlapped.
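The two methods above can be contrasted in an abstract sketch: method 1 does n element-sums "at once" with n adders, while method 2 overlaps the stages of successive operations in a pipeline. The stage counts are illustrative assumptions.

```python
# Abstract sketch of the two parallelism methods described above.

def array_add(a, b):
    """Method 1: with n independent adders, all n sums happen simultaneously
    (one step); here we merely compute the same result sequentially."""
    return [x + y for x, y in zip(a, b)]

def pipeline_steps(n_ops, n_stages):
    """Method 2: a pipeline delivers its first result after n_stages steps,
    then one more result every step, overlapping successive operations."""
    return n_stages + (n_ops - 1)

print(array_add([1, 2, 3], [10, 20, 30]))       # [11, 22, 33]
print(pipeline_steps(n_ops=100, n_stages=4))    # 103 steps, vs 400 unpipelined
```

The payoff of pipelining shows in the arithmetic: 100 four-stage operations take 103 overlapped steps instead of 400 sequential ones.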

Operating systems

The existence of many concurrent processes in a computer system requires the presence of an entity that exercises overall control, supervises the allocation of system resources, schedules operations, prevents interference between different programs, and more. This is termed an executive, a master control program (Burroughs), or, more commonly, an OPERATING SYSTEM. The operating system of a large computer is generally a complex program, although some of its functions may be implemented in hardware. The widespread use of operating systems is an important characteristic of third-generation computers. The development of operating systems can be traced to the batch-processing monitors designed in the 1950s. Manchester University's Atlas computer, which became operational around 1961, had one of the first operating systems. The design of timesharing systems, which allow many users simultaneous access to a computer in an interactive or "conversational" manner, must also be mentioned. CTSS (Compatible Time-Sharing System), developed at MIT in the early 1960s, had considerable influence on the design of subsequent timesharing and operating systems.

Multiprogramming and multiprocessing usually involve a number of concurrently executing programs sharing the same main memory. Because main memory capacity is limited by cost considerations, it is generally impossible to store all executing programs and their data sets in main memory simultaneously. Thus it becomes necessary to allocate memory space dynamically among different competing programs and to move or "swap" information back and forth between main and secondary memory as required. A major function of an operating system is to perform these memory-management operations automatically.
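The swapping idea above can be sketched as a tiny model: when main memory is full, some resident program is moved out to secondary storage to make room. The evict-the-oldest policy and the class interface are illustrative assumptions, not how any particular system worked.

```python
# Minimal sketch of swapping between main and secondary memory.
from collections import OrderedDict

class Swapper:
    def __init__(self, capacity):
        self.capacity = capacity     # main-memory slots available
        self.main = OrderedDict()    # program name -> image, in load order
        self.secondary = {}          # swapped-out programs

    def load(self, name, image):
        """Bring a program into main memory, swapping out the oldest if full."""
        if name in self.secondary:                         # swap back in on demand
            image = self.secondary.pop(name)
        if len(self.main) >= self.capacity:
            victim, vimg = self.main.popitem(last=False)   # evict oldest resident
            self.secondary[victim] = vimg                  # swap it out
        self.main[name] = image

sw = Swapper(capacity=2)
sw.load('A', 'a-code')
sw.load('B', 'b-code')
sw.load('C', 'c-code')               # memory full: A is swapped out
print(list(sw.main), list(sw.secondary))   # ['B', 'C'] ['A']
```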

The Fourth Generation

The VLSI Era

Since the 1960s the dominant technology for manufacturing computer components has been the integrated circuit (IC).

This technology has evolved steadily from ICs containing just a few transistors to those containing hundreds of thousands of transistors; the latter case is termed "VERY LARGE SCALE INTEGRATION", or VLSI.

The impact of VLSI technology on computer design has been profound. It has made it possible to fabricate an entire CPU, main memory, or similar device with a single IC that can be mass-produced at very low cost. This has resulted in new classes of machines, such as inexpensive personal computers and high-performance parallel processors that contain thousands of CPUs.

The term "FOURTH GENERATION" is occasionally applied to VLSI-based computer architectures.

Integrated Circuits

An "INTEGRATED CIRCUIT" (IC) incorporates a complete transistor circuit into a tiny rectangle or "chip" of semiconductor material, typically silicon. The IC chip is then mounted in a suitable package that protects it and provides electrical connection points (pins or leads) to allow several ICs to be connected to one another, to IO devices, and to power supplies. In the "DUAL IN-LINE PACKAGE" (DIP), the pins are organized in two parallel rows with 2.54-mm spacing between adjacent pins in each row. For very complex chips requiring a hundred or more pins, a "PIN-GRID ARRAY" (PGA) package may be used, in which less surface area is needed to accommodate a given number of pins.

A complete computer system can be constructed by mounting a set of ICs on carriers or substrates that provide both mechanical support for the ICs and a means of interconnecting them. A typical IC carrier is a circuit board made of fiberglass or a similar insulating material. The interconnections can be formed either by discrete wires or by conductors that are printed (again a manufacturing technology that facilitates low-cost mass production) in one or more layers on the circuit board. In the latter case, the substrate is called a "PRINTED-CIRCUIT BOARD" (PCB). Finally, a set of circuit boards can be mounted in a metal enclosure or cabinet that contains power supplies, cooling fans to dissipate the heat generated by the ICs as they operate, and possibly some IO equipment.

IC types

Two of the more important IC technologies are "BIPOLAR" and "MOS". Both use transistors as the basic switching elements; they differ, however, in the polarities of the charges associated with the primary carriers of electric current within the IC chips.

Bipolar circuits use both negative and positive charge carriers. MOS circuits use field-effect transistors in which there is only one type of charge carrier: positive in the case of P-type MOS and negative in the case of N-type MOS. The term MOS (metal-oxide semiconductor) describes the materials from which MOS circuits are typically formed; the term unipolar might be more appropriate, but it is not used. An important MOS subtechnology called CMOS combines N- and P-type MOS transistors in the same IC in a very efficient manner. MOS ICs are generally smaller and consume less power than the corresponding bipolar circuits. On the other hand, bipolar ICs generally have faster switching speeds. Although most ICs are presently manufactured from silicon, increasing attention is being given to other semiconducting materials such as gallium arsenide. Gallium arsenide ICs are more difficult to process than silicon ICs, but they are inherently faster by a factor of about 5.

Integrated circuits may be roughly classified on the basis of their DENSITY, which is defined either as the number of transistors per chip or as the number of logic gates per chip, where a typical logic gate is composed of about five transistors. The earliest ICs, which contained from 1 to about 10 gates, represent SMALL-SCALE INTEGRATION (SSI). MEDIUM-SCALE INTEGRATION (MSI) implies a density of 10 to 100 gates per chip, while the term LARGE-SCALE INTEGRATION (LSI) covers ICs containing hundreds or thousands of gates. The term VERY-LARGE-SCALE INTEGRATION (VLSI) is employed for the densest ICs, such as the 1M-bit memory chips first marketed in 1986, each of which contains more than 1 million MOS transistors.
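As a rough illustration (my own sketch, not part of any standard), the density classes above can be expressed as a small classification function, using the stated figure of about five transistors per logic gate:

```python
def gates_from_transistors(transistors):
    """Estimate gate count, assuming roughly 5 transistors per logic gate."""
    return transistors // 5

def density_class(gates):
    """Classify a chip by gate count on the SSI/MSI/LSI/VLSI scale above."""
    if gates <= 10:
        return "SSI"    # small-scale integration: 1 to about 10 gates
    elif gates <= 100:
        return "MSI"    # medium-scale integration: 10 to 100 gates
    elif gates < 100_000:
        return "LSI"    # large-scale integration: hundreds or thousands
    else:
        return "VLSI"   # very-large-scale integration

# The 1986 1M-bit memory chip mentioned above, with over a million
# transistors, lands in the VLSI class:
print(density_class(gates_from_transistors(1_000_000)))  # VLSI
```

The exact boundary between LSI and VLSI is a judgment call; the threshold used here is only indicative.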

Because IC manufacture is almost entirely automated, the cost of making a complex IC is small, provided a high production volume is maintained.

LSI circuits began to be produced in large quantities around 1970 for computer main memories and pocket calculators. For the first time it became possible to fabricate a CPU, or even an entire computer, on a single IC chip. A CPU or similar programmable processor implemented on a single IC (or, occasionally, several ICs) is called a MICROPROCESSOR; a complete computer built this way is called a MICROCOMPUTER.

Because of the low manufacturing costs noted above, MICROPROCESSORS AND MICROCOMPUTERS are very inexpensive. This enables them to be sold to high-volume users at prices comparable to those of the discrete transistors used in second-generation computers. Low cost and small size also make microcomputers suitable for many applications where computers were not previously used.

VLSI design

IC technology has not only introduced a new class of mass-production, off-the-shelf computer components in the form of standard microprocessors and microcomputers; it has also made it possible for designers to produce nonstandard or "custom" computer components quickly and at relatively low cost. This has resulted from the introduction of computer-aided design (CAD) techniques for IC design that can easily be coupled to the already highly automated IC fabrication technology. The influential VLSI design methodology enables custom MOS ICs to be produced as follows:

1. The proposed design is "drawn" or laid out in terms of cells whose complexity can vary from a single transistor to a complex processor. Computer programs have been developed that convert this graphical input into a computer file in some standard format. Typically the CRT screen of a computer, termed a CAD workstation, serves as the designer's drawing board.

2. A variety of CAD programs have been developed to assist in the design process by, for instance, allowing trial layouts to be easily modified and allowing cells from a precomputed cell library to be incorporated into the current design. Programs called simulators are used to verify the correctness of a proposed design whose complexity may make it difficult to check manually.

3. The computer file describing the proposed circuit is processed automatically to create the optical templates or masks, which are the printing plates from which the ICs are manufactured.


Motorola's 601 RISC Microprocessor

The first commercial microprocessor was Intel Corp.'s 4004, which appeared in 1971. It was the CPU member of a set of four P-MOS LSI ICs called the MCS-4, which was originally designed for use in a calculator but was marketed as a "programmable controller for logic replacement". The 4004 is referred to as a 4-bit microprocessor since it processes only 4 bits of data at a time. This very short word size is due mainly to the limitations imposed by the maximum IC density then achievable.
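What a 4-bit word size means in practice can be sketched as follows (a hypothetical illustration of 4-bit arithmetic in general, not actual 4004 code): a 4-bit register holds values from 0 to 15, so arithmetic results wrap around.

```python
# Hypothetical sketch of 4-bit arithmetic (not actual Intel 4004 code):
# a 4-bit register holds values 0-15, so sums wrap around modulo 16.

WORD_BITS = 4
MASK = (1 << WORD_BITS) - 1   # 0b1111 = 15, the largest 4-bit value

def add4(a, b):
    """Add two values the way a 4-bit ALU would: keep only the low 4 bits."""
    return (a + b) & MASK

print(add4(9, 8))   # 17 does not fit in 4 bits, so it wraps to 1
```

Larger numbers on such a machine must be handled in multiple 4-bit pieces, which is one reason later microprocessors moved to 8-, 16-, and 32-bit words.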

The MCS-4 series was soon followed by a large number of microprocessor families produced by various manufacturers; most of these employ the faster N-MOS and CMOS technologies.

As IC densities increased with the rapid development of IC manufacturing technology, the power and performance of the microprocessors also increased.

This is reflected in the increase of the CPU word size to 4, 8, 16, and, by the mid-1980s, 32 bits. The smaller microprocessors have relatively simple instruction sets (e.g., no floating-point instructions), but they are nevertheless suitable as controllers for a very wide range of applications such as automobile engines and microwave ovens.

The larger and more recent microprocessor families have gradually acquired most of the features of large computers.

As the microprocessor industry has matured, several families of microprocessors have evolved into de facto industrial standards with multiple manufacturers and numerous "support" chips, including RAMs, ROMs, IO controllers, etc.


A powerful microcomputer with a large memory and many IO ports can typically be implemented with 50 or so ICs on a single printed-circuit board; the result is a single-board computer.

A microcomputer with a relatively small main memory and limited IO connections can be implemented on a single VLSI chip.

The resulting one-chip microcomputer is, in many respects, a landmark development in computer technology because it reduces the computer to a small, inexpensive, and easily replaceable design component.

The production of single-chip microcomputers began in the mid-1970s, shortly after the appearance of the first microprocessors. They are typically used as programmable controllers for a wide range of devices and are consequently sometimes referred to as microcontrollers.

Microcomputers have given rise to a new class of general-purpose machines called PERSONAL COMPUTERS. These are small, low-cost computers that are designed to sit on an ordinary office desk and, in some cases, to fold into a compact form that is easily carried. A typical PC has perhaps 1M bytes of main memory capacity and the following IO devices: a keyboard, a video monitor, a compact-disk drive, and interface circuits for connecting the PC to telephone or computer networks.

One of the most widely used PC families is the PC series from IBM, which, following its introduction in 1981 and precedents set by earlier IBM computers, has become a standard for this class of machine.

Another noteworthy PC is Apple Computer's Macintosh.

Their small size and low cost have made it feasible to use microcomputers in many applications that previously employed special-purpose logic circuits. The microcomputer is tailored to a particular application by means of programs, which are frequently stored in ROM chips. Changes are made merely by replacing the ROM programs.

PCs have proliferated to the point that they have become as common as typewriters in business offices. Their main applications are word processing, where PCs have assumed and greatly expanded many of the traditional typewriter's functions; accounting and similar data-processing tasks; and serving as communication terminals with other computers all over the world.

Fifth generation

Many believe that we are about to enter the fifth computer generation, a generation marked by the evolution of computers that use newer, faster technologies to carry out a broader variety of tasks.

Some of the tasks that computers will perform in the next generation of computing fall under the heading of ARTIFICIAL INTELLIGENCE.

Such computers would make decisions based on available evidence rather than on hard-and-fast rules. If computers could be taught the rules that humans use in decision making, they might be able to replace the human experts who are currently charged with those decisions.

Some versions of these EXPERT SYSTEMS are already in use, and new ones are now under development.
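The rule-based decision making described above can be sketched as a toy program (the rules and evidence here are entirely invented for illustration): an "expert system" in its simplest form is a list of if-then rules applied to the available evidence.

```python
# Toy sketch of a rule-based expert system (invented example, not any
# real system): each rule pairs a condition over the evidence with a
# conclusion, and the first rule whose condition holds decides.

RULES = [
    (lambda e: e["fever"] and e["rash"], "refer to specialist"),
    (lambda e: e["fever"], "recommend rest and fluids"),
]

def decide(evidence, default="no action"):
    """Return the conclusion of the first rule whose condition holds."""
    for condition, conclusion in RULES:
        if condition(evidence):
            return conclusion
    return default

print(decide({"fever": True, "rash": False}))  # recommend rest and fluids
```

Real expert systems of the period used far larger rule bases and attached confidence factors to each conclusion, but the basic shape (evidence in, rules matched, conclusion out) is the same.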

However, new computer programs and new methods of programming computers will have to be designed and put into operation before we can be said to be fully engaged in this latest generation of computers.


It is only a matter of time before the computer will be able to interpret a wide range of vocal commands. When this technology becomes reality and is combined with existing synthesized-speech technology, we should be able to have a meaningful verbal dialogue with a computer. Instead of keying in a computer program on a keyboard, you would simply enter each line verbally.

We are still a few steps away, but life-saving computer implants are just around the corner. These tiny computers will control mechanical devices that can replace living organs that have ceased to function. Other medical research has given paraplegics renewed hope that they may someday walk again with the assistance of a computerized nervous system. Within a few years, pocket computers will be able to read 50 different newspapers, turn on the heat at home, call a cab, order groceries, buy shares of stock, and make hotel reservations. Our pocket computers will also serve as credit cards.

ALBERT EINSTEIN said that "concern for man himself and his fate must always form the chief interest of all technical endeavors".

There are those who believe that a rapidly advancing computer technology exhibits little regard for "man himself and his fate".

Whether it is good or bad, society has reached the point of no return with regard to its dependence on computers.


1993 Grolier Electronic Publishing, Inc.
