The history of modern electronic computers may have begun in 1942, but several earlier events helped to set the stage.
The inventor and painter Leonardo da Vinci (1452-1519) sketched ideas for a mechanical adding machine. A century and a half later the French philosopher and mathematician Blaise Pascal (1623-1662) finally invented and built the first mechanical adding machine. It was called the Pascaline and used gear-driven counting wheels to do addition.
Pascal's Adding Machine
Another device, the Jacquard loom, was developed by the French inventor Joseph Marie Jacquard in 1804 to automate rug weaving on a loom. The device used holes punched in cards to determine the settings for the loom. By using a set of punched cards, the loom could be "programmed" to weave an entire rug in a complicated pattern. This system of encoding information by punching a series of holes in paper was to provide the basis for the data handling methods that would eventually be used in the early computers.
A few years later Charles Babbage (1793-1871), an English visionary and Cambridge professor, advanced the state of computational hardware by inventing a "difference engine" capable of computing mathematical tables. In 1834 Babbage conceived the idea of an "analytical engine". In essence this was a general-purpose computer. As designed, his analytical engine would add, subtract, multiply, and divide in automatic sequence at a rate of 60 additions per minute. The design called for thousands of gears and drives that would cover the area of a football field and be powered by a locomotive engine. Nevertheless, Babbage worked on his analytical engine until his death.
Forty years later, Herman Hollerith, an employee of the U.S. Census Bureau, put Jacquard's punched-card concept together with some of the same kinds of ideas that had been proposed by Charles Babbage to solve a real-world problem. The Census Bureau had taken seven and a half years to complete the 1880 census. By 1890, with a 3-million-person increase in the population, the bureau anticipated that the census would take even longer. Hollerith proposed a solution based on what he termed a census machine that would count data fed in on punched cards. The holes in the cards represented the census data, and as the machine detected the holes, their values were incremented on numbered dials.
Hollerith formed the Tabulating Machine Company in 1896, which merged into an organization that would grow and evolve into the International Business Machines (IBM) Corporation, the world's largest computer company.
Although computational machines continued to evolve, the invention of modern computers could not come about until the supporting technologies of electrical switching devices were in place.
By 1937 electricity was in general use in most of the world's cities, and the principles of radio were well understood. Using these new tools, several researchers were working on electrically powered versions of the earlier computing devices. Among them was Howard Aiken of Harvard University. In 1944 he completed the basic development of a machine that was dubbed the Mark I. The machine, which was also known as the Automatic Sequence Controlled Calculator, is now seen as the first full-sized digital computer. The Mark I weighed 5 tons, included 500 miles of wire, was used only for numeric calculations, and took three seconds to carry out one multiplication.
THE MARK I
By 1946 John Mauchly and J. Presper Eckert were developing a large-scale computing device at the University of Pennsylvania. They had a working device based on electronic switches and radio vacuum tubes. This device was known as the Electronic Numerical Integrator and Calculator (ENIAC) and is now seen as the first electronic computer. It could perform thousands of calculations per second and was used for a variety of purposes, including scientific research and weather prediction.
John von Neumann, a member of the Mauchly and Eckert team, introduced a significant improvement in the method of controlling this computer. He suggested that instead of changing around the cables that connected one part of the computer with another in order to set up a particular computing function, a set of standard connections between machine components be established, and that a special area of computer memory be developed to store both data and programming orders.
This concept of a stored program in memory is still used in today's computers. The imposing scale and general applicability of the ENIAC signaled the beginning of the first generation of computers.
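The stored-program idea can be made concrete with a toy fetch-execute loop in which instructions and data occupy the same memory. This is a hypothetical sketch for illustration only; the opcodes and memory layout are invented, not those of any historical machine.

```python
# Toy stored-program machine: instructions and data share one memory.
# Invented opcodes: ("LOAD", addr), ("ADD", addr), ("STORE", addr), ("HALT",)

def run(memory):
    """Fetch-execute loop over a memory holding both code and data."""
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op = memory[pc]                 # fetch the instruction at PC
        pc += 1
        if op[0] == "LOAD":
            acc = memory[op[1]]
        elif op[0] == "ADD":
            acc += memory[op[1]]
        elif op[0] == "STORE":
            memory[op[1]] = acc
        elif op[0] == "HALT":
            return memory

# Program in cells 0-3, data in cells 4-6: compute memory[4] + memory[5].
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT",), 20, 22, 0]
print(run(mem)[6])   # 42
```

Because the program itself sits in ordinary memory cells, it could be rewritten by another program, which is exactly what made the stored-program design so flexible.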
The IAS computer can be taken as representative of what are now called first-generation computers.
It includes main registers, processing circuits, and information paths within the central processing unit.
Architecture of a first-generation computer
Structure of a first-generation computer: IAS
System organization

The CPU of the IAS computer consists of a data processing unit and a program control unit. It contains various processing and control circuits, along with a set of high-speed registers (AC, MQ, DR, IBR, PC, IR, AR) intended for the temporary storage of instructions, memory addresses, and data. The main actions specified by instructions are performed by the arithmetic-logic circuits of the data processing unit. The control circuits are responsible for routing information correctly through the system and for providing the proper control signals for all CPU actions. An electronic clock circuit is used to generate the basic timing signals needed to synchronize the operation of the different parts of the system.

The main memory M is used for storing programs and data. A word transfer can take place between the 40-bit data register DR of the CPU and any location M(X) with address X in M. The address X to be used is stored in the 12-bit address register AR. The DR may be used to store an operand during the execution of an instruction. Two additional registers for the temporary storage of operands and results are included: the accumulator AC and the multiplier-quotient register MQ.

Two instructions are fetched simultaneously from M and transferred to the program control unit. The instruction that is not to be executed immediately is placed in an instruction buffer register IBR. The operation code of the other instruction is placed in the instruction register IR, where it is decoded. The address field of the current instruction is transferred to the memory address register AR. Another address register, called the instruction address register or the program counter PC, is used to store the address of the next instruction to be executed.
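The register traffic described above can be sketched as a much-simplified simulation. The instruction encoding and opcodes below are invented for illustration; the point is the division of labor among PC, IBR, IR, and AR when two instructions arrive per 40-bit word.

```python
# Simplified IAS-style fetch: each word of M holds two instructions.
# Registers: PC, IBR (buffer for the second instruction), IR (opcode), AR (address).

class IAS:
    def __init__(self, memory):
        self.M = memory          # cell: ((op, addr), (op, addr)) for code, int for data
        self.AC = 0              # accumulator
        self.PC = 0
        self.IBR = None          # instruction buffer register

    def fetch(self):
        if self.IBR is not None:            # second instruction of the pair is waiting
            inst, self.IBR = self.IBR, None
        else:
            left, right = self.M[self.PC]   # one word fetched, two instructions in it
            self.PC += 1
            inst, self.IBR = left, right
        self.IR, self.AR = inst             # opcode to IR, address field to AR
        return self.IR, self.AR

    def step(self):
        op, x = self.fetch()
        if op == "LOAD":
            self.AC = self.M[x]
        elif op == "ADD":
            self.AC += self.M[x]
        elif op == "STOR":
            self.M[x] = self.AC

# Word 0 holds LOAD/ADD, word 1 holds STOR plus a no-op; cells 2 and 3 hold data.
m = {0: (("LOAD", 2), ("ADD", 3)), 1: (("STOR", 4), ("NOP", 0)), 2: 10, 3: 32}
ias = IAS(m)
for _ in range(3):
    ias.step()
print(m[4])   # 42
```

Note how the second memory access is avoided on every other fetch: the spare instruction comes out of IBR, which is the whole purpose of that register.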
Method of operation
Partial flowchart of IAS computer operation.
The earliest transistor computer appears to have been an experimental machine, the TX-0, which was operational in 1953. Many of the improvements associated with second-generation computers actually first appeared in vacuum-tube or hybrid machines. The IBM 704 vacuum-tube machine had index registers, floating-point hardware, and a rudimentary operating system. The later models of the 704 and 709 had input-output processors (then called "data synchronizers" and later "channels"), which were special-purpose processors used exclusively to control IO; previously such actions were controlled directly by the CPU, an approach now termed programmed IO. These machines were very successful commercially. With the second generation it became necessary to talk about computer systems, since the number of memory units, processors, IO devices, and other system components could vary between different installations, even though the same basic computer was used.
The IBM 7094 Model I was a middle member of IBM's second generation of scientific computers, built with discrete transistors. The figure shows the console. The 7094 system is about the same size as the 709 and has the same kinds of peripherals. The IBM 2302 disk drive provided secondary storage for the 7094. Its disk platters are 24 inches in diameter and the head assembly is positioned with compressed air. It was one of the last models of this size and could store 300 MB.
IBM 2302 disk drive
An important feature of second-generation machines is the provision of special branch instructions to facilitate the transfer of control between different programs, e.g., calling subroutines. In the 7094 an instruction TSX (transfer and set index) is available for this purpose. Suppose execution of a subroutine that begins in location SUB is desired. Then the instruction LINK TSX SUB,4 causes its own address (LINK) to be placed in the designated index register XR(4), and the next instruction is taken from memory location SUB. In order to return control to the calling program, the subroutine must terminate with an instruction such as TRA 1,4, meaning go (transfer) to the address 1+XR(4), which contains the next instruction after LINK in the main program.
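As a sketch, the TSX/TRA linkage can be mimicked with a tiny interpreter. The instruction encoding here is invented; the essential behavior, that TSX saves its own address in XR(4) and TRA 1,4 resumes at the word after LINK, follows the description above.

```python
# Sketch of 7094-style subroutine linkage via index register XR(4).
# program: address -> (opcode, operand, tag); the encoding is hypothetical.

XR = {4: 0}                       # index registers
trace = []

program = {
    100: ("TSX", 200, 4),         # LINK: call subroutine at 200, save 100 in XR(4)
    101: ("MARK", "after-call", 0),
    102: ("HALT", 0, 0),
    200: ("MARK", "in-subroutine", 0),
    201: ("TRA", 1, 4),           # return to 1 + XR(4) = 101
}

pc = 100
while True:
    op, operand, tag = program[pc]
    if op == "TSX":               # save own address, jump to subroutine
        XR[tag] = pc
        pc = operand
    elif op == "TRA":             # transfer to operand + XR(tag)
        pc = operand + XR[tag]
    elif op == "MARK":            # record that this location was reached
        trace.append(operand)
        pc += 1
    elif op == "HALT":
        break

print(trace)   # ['in-subroutine', 'after-call']
```

The same indexed-transfer trick is the ancestor of the link registers and return-address conventions used in later architectures.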
IOP instructions fall into three groups.
IOPs and the CPU share a common access path to main memory, usually via a memory control unit and a set of shared communication lines called a system bus. Since IO operations are typically very slow compared with the CPU speed, most of the memory-access requests can be expected to come from the CPU.
The number of different third-generation computers is very great. The most influential computer introduced was the IBM System/360 series. This is a family of computers intended to cover a wide range of computing performance. The various models are largely compatible, in that a program written for one model should be capable of being executed on any other model in the series; what may differ are the execution time and perhaps the memory space requirements. Many of IBM's features have become standards in the computer industry. The design of large, powerful computers began with the LARC. Other models were the CDC 6600 of Control Data Corporation (CDC) in 1964, the 7600 in 1969, and the subsequent CYBER series. These machines were characterized by the inclusion of many IOPs (called peripheral processors) with a high degree of autonomy. In addition, each CPU is subdivided into a number of independent processing units which can be operated simultaneously. A CPU organization called PIPELINING was used to achieve very fast processing in several computers such as the CDC STAR-100 (string array computer) and the Texas Instruments ASC (Advanced Scientific Computer). The ILLIAC IV was another supercomputer, designed in the late 1960s at the University of Illinois. It had 64 separate CPU-like processing elements, all supervised by a common control unit and all capable of operating simultaneously.
* Each instruction of the processor being controlled causes a sequence of microinstructions, called a MICROPROGRAM, to be fetched from a special ROM or RAM, called a CONTROL MEMORY.
* The microinstructions specify the sequence of microoperations or register-transfer operations needed to interpret and execute the main instruction.
* Each instruction fetch from main memory thus initiates a sequence of microinstruction fetches from control memory.
Microprogramming provides a simpler and more systematic way of designing control circuits and greatly increases the flexibility of a computer. The instruction set of a microprogrammed machine can be changed merely by replacing the contents of the control memory. This makes it possible for a microprogrammed computer to execute directly programs written in the machine language of a different computer, a process called EMULATION. Microprogrammed control units tend to be more costly and slower than hardwired units, but these drawbacks are generally outweighed by the greater flexibility that microprogramming provides. Because of the close interaction of software and hardware in microprogrammed systems, microprograms are sometimes referred to as FIRMWARE.
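The scheme in the bullets above can be sketched directly: a control memory maps each machine opcode to a microprogram, i.e., a list of register-transfer micro-operations, and swapping the control memory contents would change the instruction set. The micro-operations and opcodes below are invented for illustration.

```python
# Microprogrammed control: each opcode indexes a microprogram in control memory.

regs = {"AC": 0, "DR": 0}
memory = {10: 7, 11: 35, 12: 0}

# Micro-operations: primitive register transfers (addr is unused by some,
# kept for a uniform signature).
def mem_to_dr(addr): regs["DR"] = memory[addr]
def dr_to_ac(addr):  regs["AC"] = regs["DR"]
def add_dr(addr):    regs["AC"] += regs["DR"]
def ac_to_mem(addr): memory[addr] = regs["AC"]

# Control memory: opcode -> microprogram (sequence of micro-operations).
control_memory = {
    "LOAD":  [mem_to_dr, dr_to_ac],
    "ADD":   [mem_to_dr, add_dr],
    "STORE": [ac_to_mem],
}

def execute(opcode, addr):
    """Each instruction fetch triggers a sequence of microinstruction fetches."""
    for micro_op in control_memory[opcode]:
        micro_op(addr)

for inst in [("LOAD", 10), ("ADD", 11), ("STORE", 12)]:
    execute(*inst)
print(memory[12])   # 42
```

Replacing the `control_memory` table with the microprograms of a different machine's instruction set is, in miniature, what EMULATION amounts to.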
Parallelism can also be introduced on a lower level by overlapping the fetching and the execution of individual instructions by a single CPU. Two distinct methods of achieving this have evolved.
1. More than one unit can be provided to carry out a particular operation, say addition. By employing n independent adders, n additions can be performed simultaneously. This type of structure permits array operations to be performed very rapidly.
2. A processing unit can be designed in the form of a pipeline, which allows the execution of a sequence of microoperations to be overlapped.
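The benefit of the pipeline in point 2 can be quantified with a simple cycle-count sketch: under the usual idealized assumptions (k stages, one instruction entering per cycle, no stalls), n instructions finish in k + (n - 1) cycles instead of n * k.

```python
# Cycle counts for sequential vs. pipelined execution of n instructions
# through k stages (e.g., fetch, decode, execute).

def sequential_cycles(n, k):
    """Each instruction passes through all k stages before the next one starts."""
    return n * k

def pipelined_cycles(n, k):
    """The first instruction takes k cycles; each later one finishes 1 cycle apart."""
    return k + (n - 1)

n, k = 100, 3
print(sequential_cycles(n, k))    # 300
print(pipelined_cycles(n, k))     # 102
```

For long instruction streams the speedup approaches k, which is why pipelining became a standard technique in the fast machines mentioned earlier.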
Multiprogramming and multiprocessing usually involve a number of concurrently executing programs sharing the same main memory. Because main memory capacity is limited by cost considerations, it is generally impossible to store all executing programs and their data sets in main memory simultaneously. Thus it becomes necessary to allocate memory space dynamically among different competing programs and move or "swap" information back and forth between main and secondary memory as required. A major function of an operating system is to perform these memory management operations automatically.
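A minimal sketch of the swapping just described, assuming a least-recently-used replacement policy and a toy two-program main memory (both are illustrative assumptions, not details from the text):

```python
from collections import OrderedDict

# Main memory holds CAPACITY programs; the rest live in secondary memory.
CAPACITY = 2
main_memory = OrderedDict()       # program name -> contents, in LRU order
secondary = {"A": "codeA", "B": "codeB", "C": "codeC"}
swaps = []                        # record of programs swapped out

def touch(name):
    """Bring a program into main memory, swapping out the LRU one if full."""
    if name in main_memory:
        main_memory.move_to_end(name)     # mark as most recently used
        return
    if len(main_memory) >= CAPACITY:
        victim, contents = main_memory.popitem(last=False)  # evict LRU program
        secondary[victim] = contents                        # write it back
        swaps.append(victim)
    main_memory[name] = secondary.pop(name)

for prog in ["A", "B", "A", "C"]:   # touching C forces B (least recently used) out
    touch(prog)
print(list(main_memory), swaps)     # ['A', 'C'] ['B']
```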
This technology has evolved steadily from ICs containing just a few transistors to those containing hundreds of thousands of transistors; the latter case is termed "very large scale integration", or VLSI.
The impact of VLSI technology on computer design has been profound. It has made it possible to fabricate an entire CPU, main memory, or similar device with a single IC that can be mass-produced at very low cost. This has resulted in new classes of machines, such as inexpensive personal computers and high-performance parallel processors that contain thousands of CPUs.
The term "FOURTH GENERATION" is occasionally applied to VLSI-based computer architectures.
An "INTEGRATED CIRCUIT" (IC) incorporates a complete transistor circuit into
a tiny rectangle or "chip" of semiconductor material, typically silicon.
The IC chip is then mounted in a suitable package that protects it and provides
electrical connection points (pins or leads) to allow several ICs to be
connected to one another, to IO devices, and to power supplies.
In the "DUAL IN-LINE PACKAGE (DIP)" the pins are organized in two parallel rows
with 2.54 mm spacing between adjacent pins in each row.
For very complex chips requiring a hundred or more pins, a "PIN-GRID ARRAY (PGA)"
package may be used, where less surface area is needed to accommodate a given
number of pins.
A complete computer system can be constructed by mounting a set of ICs on carriers or substrates that provide both mechanical support for the ICs and a means for interconnecting them. A typical IC carrier is a circuit board made of fiberglass or a similar insulating material. The interconnections can be formed either by discrete wires or by conductors that are printed - again a manufacturing technology that facilitates low-cost mass production - in one or more layers on the circuit board. In the latter case, the substrate is called a "PRINTED-CIRCUIT BOARD (PCB)". Finally, a set of circuit boards can be mounted in a metal enclosure or cabinet that contains power supplies, cooling fans to dissipate the heat generated by the ICs as they operate, and possibly some IO equipment.
Two of the more important IC technologies are "BIPOLAR" and "MOS".
They both use transistors as the basic switching elements; they differ, however,
in the polarities of the charges associated with the primary carriers of
electric current within the IC chips.
Bipolar circuits use both negative carriers and positive carriers. MOS circuits use field-effect transistors in which there is only one type of charge carrier: positive in the case of P-type MOS and negative in the case of N-type MOS. The term MOS (metal-oxide semiconductor) describes the materials from which MOS circuits are typically formed; the term unipolar might be more appropriate, but it is not used. An important MOS subtechnology called CMOS combines N- and P-type MOS transistors in the same IC in a very efficient manner. MOS ICs are generally smaller and consume less power than the corresponding bipolar circuits. On the other hand, bipolar ICs generally have faster switching speeds. Although most ICs are presently manufactured from silicon, increasing attention is being given to other semiconducting materials such as gallium arsenide. Gallium arsenide ICs are more difficult to process than silicon ICs, but they are inherently faster by a factor of about 5.
Integrated circuits may be roughly classified on the basis of their DENSITY, which is defined either as the number of transistors included in a chip, or else as the number of logic gates per chip, where a typical logic gate is composed of about five transistors. The earliest ICs contained from 1 to about 10 gates, a level termed SMALL-SCALE INTEGRATION (SSI). MEDIUM-SCALE INTEGRATION (MSI) implies a density of 10 to 100 gates per chip, while the term LARGE-SCALE INTEGRATION (LSI) covers ICs containing hundreds or thousands of gates. The term VERY LARGE SCALE INTEGRATION (VLSI) is employed for the densest ICs, such as the 1M-bit memory chips first marketed in 1986, each of which contains more than 1 million MOS transistors.
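The density classes can be captured in a small classifier following the thresholds stated above; the exact boundary values, in particular the LSI/VLSI cutoff, are arbitrary choices for illustration.

```python
def ic_class(gates):
    """Classify an IC by gates per chip, roughly following the text's thresholds."""
    if gates <= 10:
        return "SSI"          # earliest ICs: 1 to about 10 gates
    if gates <= 100:
        return "MSI"          # 10 to 100 gates per chip
    if gates <= 10_000:
        return "LSI"          # hundreds or thousands of gates
    return "VLSI"             # the densest ICs

# A 1M-bit memory chip at ~5 transistors per gate is well into VLSI territory.
print(ic_class(8), ic_class(50), ic_class(3_000), ic_class(200_000))
# SSI MSI LSI VLSI
```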
Because IC manufacture is almost entirely automated, the cost of making a complex IC is small provided a high production volume is maintained.
LSI circuits began to be produced in large quantities around 1970 for computer main memories and pocket calculators. For the first time it became possible to fabricate a CPU or even an entire computer on a single IC chip. A CPU or similar programmable processor implemented on a single IC or, occasionally, several ICs, is called a MICROPROCESSOR.
Because of the low manufacturing costs noted above, MICROPROCESSORS AND MICROCOMPUTERS are very inexpensive. This enables them to be sold to high-volume users at prices comparable to those of the discrete transistors used in second-generation machines. Low cost and small size also make microcomputers suitable for many applications where computers were not previously used.
1. The proposed design is "drawn" or laid out in terms of cells whose complexity can vary from a single transistor to a complex processor. Computer programs have been developed that convert the graphical input into a computer file in some standard format. Typically the CRT screen of a computer, termed a CAD workstation, serves as the designer's drawing board.
2. A variety of CAD programs have been developed to assist in the design process by, for instance, allowing trial layouts to be easily modified and allowing cells from a precomputed cell library to be incorporated into the current design. Programs called simulators are used to verify the correctness of a proposed design whose complexity may make it difficult to check manually.
3. The computer file describing the proposed circuit is processed automatically to create the optical templates or masks, which are the printing plates from which the ICs are manufactured.
The MCS-4 series was soon followed by a large number of microprocessor families produced by various manufacturers; most of these employ the faster N-MOS and CMOS technologies.
As IC densities increased with the rapid development of IC manufacturing technology, the power and performance of the microprocessors also increased.
This is reflected in the increase in the CPU word size to 4, 8, 16, and, by the mid-1980s, 32 bits. The smaller microprocessors have relatively simple instruction sets, e.g., no floating-point instructions, but they are nevertheless suitable as controllers for a very wide range of applications such as automobile engines and microwave ovens.
The larger and more recent microprocessor families have gradually acquired most of the features of large computers.
As the microprocessor industry has matured, several families of microprocessors have evolved into de facto industrial standards with multiple manufacturers and numerous "support" chips including RAMs, ROMs, IO controllers, etc.
A microcomputer with a relatively small main memory and limited IO connections can now be implemented on a single VLSI chip.
The resulting one-chip microcomputer is in many respects, a landmark development in computer technology because it reduces the computer to a small, inexpensive, and easily replaceable design component.
The production of single-chip microcomputers began in the mid-1970s, shortly after the appearance of the first microprocessors. They are typically used as programmable controllers for a wide range of devices and are consequently sometimes referred to as microcontrollers.
Microcomputers have given rise to a new class of general-purpose machines called PERSONAL COMPUTERS. These are small, low-cost computers that are designed to sit on an ordinary office desk and, in some cases, to fold into a compact form that is easily carried. A typical PC has perhaps 1M bytes of main memory capacity and the following IO devices: a keyboard, a video monitor, a compact disk drive, and interface circuits for connecting the PC to telephone or computer networks.
One of the most widely used PC families is the PC series from IBM, which, following its introduction in 1981 and following precedents set by earlier IBM computers, has become a standard for this class of machine.
Another noteworthy PC is Apple Computer's Macintosh.
Their small size and low cost have made it feasible to use microcomputers in many applications which previously employed special-purpose logic circuits. The microcomputer is tailored to a particular application by means of programs which are frequently stored in ROM chips. Changes are made merely by replacing the ROM programs.
PCs have proliferated to the point that they have become as common as typewriters in business offices. Their main applications are word processing, where PCs have assumed and greatly expanded many of the traditional typewriter's functions; accounting and similar data processing tasks; and communication with other computers all over the world.
Some of the tasks that computers will do in the next generation of computing can be described as ARTIFICIAL INTELLIGENCE.
Such computers would make decisions based on available evidence rather than on hard-and-fast rules. If computers could be taught the rules that are used in decision making, they might be able to replace the human experts who are currently charged with those decisions.
Some versions of these types of EXPERT SYSTEMS are already in use, and new ones are now under development.
However, new computer programs and new methods of programming computers will have to be designed and put into operation before we can be said to be fully engaged in this latest generation of computers.
We are still a few steps away, but life-saving computer implants are only around the corner. These tiny computers will control mechanical devices that can replace living organs that have ceased to function. Other medical research has given paraplegics renewed hope that they may someday walk again with the assistance of a computerized nervous system. Within a few years, pocket computers will be able to read 50 different newspapers, turn on the heat at home, call a cab, order groceries, buy shares of stock, and make hotel reservations. Our pocket computers will also serve as credit cards.
ALBERT EINSTEIN said that "concern for man himself and his fate must always form the chief interest of all technical endeavors".
There are those who believe that a rapidly advancing computer technology exhibits little regard for "man himself and his fate".
Whether it is good or bad, society has reached the point of no return with regard to its dependence on computers.
Bell, C. G., and A. Newell. Computer Structures: Readings and Examples. McGraw-Hill.
Patterson, David A., and John L. Hennessy. Computer Architecture: A Quantitative Approach. 1990.
Hayes, John P. Computer Architecture and Organization. 1988.
The Origins of Digital Computers: Selected Papers, 3rd ed. Springer-Verlag, Berlin, 1982.
Intel's Computers of the 1990s.
Long, Larry, and Nancy Long. Computers. Prentice-Hall, 1986.
Murdock, Everett E. Computers Today. 1995.