Chapter 1 The History and Future of Computers
Study Guide

The world's first electronic computer was born in the 1940s. Since then, with the development of vacuum tubes, transistors, integrated circuits, and very-large-scale integrated circuits and their application in computers, computers have advanced from the first generation to the fourth. Today, with science and technology changing day by day, computer development has entered a "generationless" era. After studying this chapter, readers should master the following:
- the common characteristics of modern computers and the features of each computer generation;
- the development trends of computer technology;
- the characteristics of English for science and technology, and the key points of translating it.

1.1 The Invention of the Computer
It is hard to say exactly when the modern computer was invented. Starting in the 1930s and through the 1940s, a number of machines were developed that were like computers. But most of these machines did not have all the characteristics that we associate with computers today. These characteristics are that the machine is electronic, that it has a stored program, and that it is general purpose.
One of the first computer-like devices was developed in Germany by Konrad Zuse in 1941. Called the Z3, it was a general-purpose, stored-program machine with many electronic parts, but it had a mechanical memory. Another electromechanical computing machine was developed by Howard Aiken, with financial assistance from IBM, at Harvard University in 1943. It was called the Automatic Sequence Controlled Calculator Mark I, or simply the Harvard Mark I. Neither of these machines was a true computer, however, because they were not entirely electronic.
1.1.1 The ENIAC
Perhaps the most influential of the early computer-like devices was the Electronic Numerical Integrator and Computer, or ENIAC. It was developed by J. Presper Eckert and John Mauchly at the University of Pennsylvania. The project began in 1943 and was completed in 1946. The machine was huge; it weighed 30 tons and contained over 18,000 vacuum tubes.
The ENIAC was a major advancement for its time. It was the first general-purpose, electronic computing machine and was capable of performing thousands of operations per second. It was controlled, however, by switches and plugs that had to be manually set. Thus, although it was a general-purpose electronic machine, it did not have a stored program, and so it did not have all the characteristics of a computer.
While working on the ENIAC, Eckert and Mauchly were joined by a brilliant mathematician, John von Neumann. Together, they developed the idea of a stored-program computer. This machine, called the Electronic Discrete Variable Automatic Computer, or EDVAC, was the first machine whose design included all the characteristics of a computer. It was not completed, however, until 1951.
Before the EDVAC was finished, several other machines were built that incorporated elements of the EDVAC design of Eckert, Mauchly, and von Neumann. One was the Electronic Delay Storage Automatic Computer, or EDSAC, which was developed in Cambridge, England. It first operated in May of 1949 and is probably the world's first electronic, stored-program, general-purpose computer to become operational. The first computer to operate in the United States was the Binary Automatic Computer, or BINAC, which became operational in August of 1949.
1.1.2 The UNIVAC I
Like other computing pioneers before them, Eckert and Mauchly formed a company in 1947 to develop a commercial computer. The company was called the Eckert-Mauchly Computer Corporation. Their objective was to design and build the Universal Automatic Computer, or UNIVAC. Because of the difficulty of obtaining financial support, they had to sell the company to Remington Rand in 1950. Eckert and Mauchly continued to work on the UNIVAC at Remington Rand and completed it in 1951. Known as the UNIVAC I, this machine was the first commercially available computer.
The first UNIVAC I was delivered to the Census Bureau and used for the 1950 census. The second UNIVAC I was used to predict that Dwight Eisenhower would win the 1952 presidential election, less than an hour after the polls closed. The UNIVAC I began the modern era of computer use.
New Words & Expressions
computer-like a. resembling a computer
electromechanical a. combining electrical and mechanical operation
vacuum tube an electron tube, used as an early electronic switching and amplifying component
Census Bureau the U.S. government agency that conducts the population census
thousands of a very large number of
known as generally called; famous as

Abbreviations
ENIAC (Electronic Numerical Integrator and Computer)
EDSAC (Electronic Delay Storage Automatic Computer)
BINAC (Binary Automatic Computer)
UNIVAC (Universal Automatic Computer)
1.2 Computer Generations
Since the UNIVAC I, computers have evolved rapidly. Their evolution has been the result of changes in technology that have occurred regularly. These changes have resulted in four main generations of computers.
1.2.1 First-Generation Computers: 1951~1958
First-generation computers were characterized by the use of vacuum tubes as their principal electronic component. Vacuum tubes are bulky and produce a lot of heat, so first-generation computers were large and required extensive air conditioning to keep them cool. In addition, because vacuum tubes do not operate very fast, these computers were relatively slow.
The UNIVAC I was the first commercial computer in this generation. As noted earlier, it was used in the Census Bureau in 1951. It was also the first computer to be used in a business application. In 1954, General Electric took delivery of a UNIVAC I and used it for some of its business data processing.
The UNIVAC I was not the most popular first-generation computer, however. This honor goes to the IBM 650. It was first delivered in 1955, before Remington Rand could come out with a successor to the UNIVAC I. With the IBM 650, IBM captured the majority of the computer market, a position it still holds today.
At the same time that hardware was evolving, software was developing. The first computers were programmed in machine language, but during the first computer generation, the ideas of programming-language translation and high-level languages emerged. Much of the credit for these ideas goes to Grace Hopper, who, as a Navy lieutenant in 1945, learned to program the Harvard Mark I. In 1952, she developed the first programming language translator, followed by others in later years. She also developed a language called FLOW-MATIC in 1957, which formed the basis for COBOL, the most commonly used business programming language today.
Other software developments during the first computer generation include the design of the FORTRAN programming language in 1957. This language became the first widely used high-level language. Also, the first simple operating systems became available with first-generation computers.
1.2.2 Second-Generation Computers: 1959~1963
In the second generation of computers, transistors replaced vacuum tubes. Although the transistor was invented in 1948, the first all-transistor computer did not become available until 1959. Transistors are smaller and less expensive than vacuum tubes, and they operate faster and produce less heat. Hence, with second-generation computers, the size and cost of computers decreased, their speed increased, and their air-conditioning needs were reduced.
Many companies that had not previously sold computers entered the industry with the second generation. One of these companies that still makes computers is Control Data Corporation (CDC), which was noted for making high-speed computers for scientific work.
Remington Rand, by then called Sperry Rand Corporation, made several second-generation UNIVAC computers. IBM, however, continued to dominate the industry. One of the most popular second-generation computers was the IBM 1401, a medium-sized computer used by many businesses.
All computers at this time were mainframe computers costing over a million dollars. The first minicomputer became available in 1960 and cost about $120,000. This was the PDP-1, manufactured by Digital Equipment Corporation (DEC).
Software also continued to develop during this time. Many new programming languages were designed, including COBOL in 1960. More and more businesses and organizations were beginning to use computers for their data processing needs.
1.2.3 Third-Generation Computers: 1964~1970
The technical development that marks the third generation of computers is the use of integrated circuits, or ICs, in computers. An integrated circuit is a piece of silicon (a chip) containing numerous transistors. One IC replaces many transistors in a computer, resulting in a continuation of the trends begun in the second generation. These trends include reduced size, reduced cost, increased speed, and reduced need for air conditioning.
Although integrated circuits were invented in 1958, the first computers to make extensive use of them were not available until 1964. In that year, IBM introduced a line of mainframe computers called the System/360. The computers in this line became the most widely used third-generation machines. There were many models in the System/360 line, ranging from small, relatively slow, and inexpensive ones, to large, very fast, and costly models. All models, however, were compatible, so that programs written for one model could be used on another. This feature of compatibility across many computers in a line was adopted by other manufacturers of third-generation computers.
The third computer generation was also the time when minicomputers became widespread. The most popular model was the PDP-8, manufactured by DEC. Other companies, including Data General Corporation and Hewlett-Packard Company, introduced minicomputers during the third generation.
The principal software development during the third computer generation was the increased sophistication of operating systems. Although simple operating systems were developed for first- and second-generation computers, many of the features of modern operating systems first appeared during the third generation. These include multiprogramming, virtual memory, and time-sharing. The first operating systems were mainly batch systems, but during the third generation, interactive systems, especially on minicomputers, became common. The BASIC programming language was designed in 1964 and became popular during the third computer generation because of its interactive nature.
1.2.4 Fourth-Generation Computers: 1971~?
The fourth generation of computers is more difficult to define than the other three generations. This generation is characterized by more and more transistors being contained on a silicon chip. First there was Large Scale Integration (LSI), with hundreds to thousands of transistors per chip; then came Very Large Scale Integration (VLSI), with tens to hundreds of thousands of transistors. The trend continues today.
Although not everyone agrees that there is a fourth computer generation, those who do feel that it began in 1971, when IBM introduced its successors to the System/360 line of computers. These mainframe computers were called the System/370, and current-model IBM computers, although not called System/370s, evolved directly from these computers.
Minicomputers also proliferated during the fourth computer generation. The most popular lines were the DEC PDP-11 models and the DEC VAX, both of which are available in various models today.
Supercomputers first became prominent in the fourth generation. Although many companies, including IBM and CDC, developed high-speed computers for scientific work, it was not until Cray Research, Inc., introduced the Cray-1 in 1975 that supercomputers became significant. Today, supercomputers are an important computer classification.
Perhaps the most important trend that began in the fourth generation is the proliferation of microcomputers. As more and more transistors were put on silicon chips, it eventually became possible to put an entire computer processor, called a microprocessor, on a chip. The first computers to use microprocessors became available in the mid-1970s. The first microcomputer designed for personal use was the Altair, which was sold in 1975. The first Apple computer was marketed in 1977, and the IBM PC followed in 1981. Today, microcomputers far outnumber all other types of computers combined.
Software development during the fourth computer generation started off with little change from the third generation. Operating systems were gradually improved, and new languages were designed. Database software became widely used during this time. The most important trend, however, resulted from the microcomputer revolution. Packaged software became widely available for microcomputers so that today most software is purchased, not developed from scratch.
1.2.5 Generationless Computers
We may have defined our last generation of computers and begun the era of generationless computers. Even though computer manufacturers talk of "fifth-" and "sixth-generation" computers, this talk is more a marketing ploy than a reflection of reality.
Advocates of the concept of generationless computers say that even though technological innovations are coming in rapid succession, no single innovation is, or will be, significant enough to characterize another generation of computers.

New Words & Expressions
result in to cause; to lead to
air conditioning the cooling and conditioning of air
take delivery of to formally receive (goods)
Navy lieutenant a commissioned officer rank in the navy
high-level language a programming language closer to human language than to machine code
mainframe n. a large, powerful computer
more and more an increasing number of
range from ... to ... to vary between ... and ...
multiprogramming n. running several programs on one computer concurrently
time-sharing n. sharing a computer's time among many simultaneous users
virtual memory a technique that uses disk storage to extend main memory
from scratch from the very beginning
start off v. to set out; to begin
proliferate v. to multiply; to spread rapidly

Abbreviations
COBOL (Common Business-Oriented Language)
DEC (Digital Equipment Corporation)
LSI (Large Scale Integration)
VLSI (Very Large Scale Integration)

Notes
1. IBM introduced a line of mainframe computers called the System/360. Here line means a series of related products.
Reading Material: Classes of Computing Applications and Their Characteristics
Although a common set of hardware technologies is used in computers ranging from smart home appliances to cell phones to the largest supercomputers, these different applications have different design requirements and employ the core hardware technologies in different ways. Broadly speaking, computers are used in three different classes of applications.

Personal computers (PCs) are possibly the best-known form of computing, which readers of this book have likely used extensively. Personal computers emphasize delivery of good performance to single users at low cost and usually execute third-party software. This class of computing, which is only about 35 years old, drove the evolution of many computing technologies.
Servers are the modern form of what were once much larger computers, and are usually
accessed only via a network. Servers are oriented to carrying large workloads, which may consist of either single complex applications—usually a scientific or engineering application—or handling many small jobs, such as would occur in building a large web server. These applications are usually based on software from another source (such as a database or simulation system), but are often modified or customized for a particular function. Servers are built from the same basic technology as desktop computers, but provide for greater computing, storage, and input/output capacity. In general, servers also place a greater emphasis on dependability, since a crash is usually more costly than it would be on a single-user PC.
Servers span the widest range in cost and capability. At the low end, a server may be little more than a desktop computer without a screen or keyboard and cost a thousand dollars. These low-end servers are typically used for file storage, small business applications, or simple web serving. At the other extreme are supercomputers, which at the present consist of tens of thousands of processors and many terabytes of memory, and cost tens to hundreds of millions of dollars. Supercomputers are usually used for high-end scientific and engineering calculations, such as weather forecasting, oil exploration, protein structure determination, and other large-scale problems. Although such supercomputers represent the peak of computing capability, they represent a relatively small fraction of the servers and a relatively small fraction of the overall computer market in terms of total revenue.
Embedded computers are the largest class of computers and span the widest range of applications
and performance. Embedded computers include the microprocessors found in your car, the computers in a television set, and the networks of processors that control a modern airplane or cargo ship. Embedded computing systems are designed to run one application or one set of related applications that are normally integrated with the hardware and delivered as a single system; thus, despite the large number of embedded computers, most users never really see that they are using a computer!
Embedded applications often have unique application requirements that combine a minimum performance with stringent limitations on cost or power. For example, consider a music player: the processor need only be as fast as necessary to handle its limited function, and beyond that, minimizing cost and power are the most important objectives. Despite their low cost, embedded computers often have lower tolerance for failure, since the results can vary from upsetting (when your new television crashes) to devastating (such as might occur when the computer in a plane or cargo ship crashes). In consumer-oriented embedded applications, such as a digital home appliance, dependability is achieved primarily through simplicity: the emphasis is on doing one function as perfectly as possible. In large embedded systems, techniques of redundancy from the server world are often employed. Although this book focuses on general-purpose computers, most concepts apply directly, or with slight modifications, to embedded computers.
Many embedded processors are designed using processor cores, a version of a processor written in a hardware description language, such as Verilog or VHDL. The core allows a designer to integrate other application-specific hardware with the processor core for fabrication on a single chip.
New Words & Expressions
server n. a computer that provides services to other computers over a network
fabrication n. manufacture (of chips)
workload n. the amount of work to be done
database n. an organized collection of data
low-end a. cheapest and least powerful (of a product range)
high-end a. most powerful and expensive (of a product range)
stringent a. strict; rigorous
terabyte n. 2^40 (about 10^12) bytes
processor n. the part of a computer that executes instructions
Verilog n. a hardware description language

Abbreviations
VHDL (VHSIC Hardware Description Language)
Characteristics of English for Science and Technology

Compared with general English, English for science and technology (EST) is marked by four tendencies: frequent long, complex sentences; frequent passive voice; frequent non-finite verbs; and frequent conversion of word class.

1. Long, complex sentences

Scientific writing demands precise statement and rigorous reasoning, so a single sentence containing three, four, or even five or six clauses is not unusual. When translating into Chinese, such a sentence must be broken into an appropriate number of shorter clauses, following Chinese usage, so that the ideas stay clear and the rendering does not sound stilted and foreign. These complex long sentences are the chief difficulty of EST; readers should learn to dissect them by grammatical analysis, replacing the long with the short and the hard with the easy. For example:

Factories will not buy machines unless they believe that the machine will produce goods that they are able to sell to consumers at a price that will cover all cost.

This sentence consists of one main clause and four subordinate clauses; only with the necessary grammatical analysis can it be correctly understood and translated. A possible Chinese rendering: 除非相信那些机器造出的产品卖给消费者的价格足够支付所有成本,否则厂家是不会买那些机器的。A more concise version: 要不相信那些机器造出的产品售价够本,厂家是不会买的。The latter uses only 24 characters against the former's 40, a saving of 40% with no loss of the basic content. Once the structure and meaning of the original are fully grasped, and the Chinese wording is polished through repeated revision, even complex long English sentences can be mastered. Another example:

There is an increasing belief in the idea that the "problem solving attitude" of the engineer must be buttressed not only by technical knowledge and "scientific analysis" but that the engineer must also be aware of economics and psychology and, perhaps even more important, that he must understand the world around him.

This long sentence consists of a main clause with three parallel subordinate clauses; a possible rendering: 越来越令人信服的想法是:工程师不仅必须用技术知识和科学分析来加强解决问题的意向,而且也一定要了解经济学和心理学,而可能更为重要的是:必须懂得周围世界。These two examples give a first illustration of the structure and translation of long, complex English sentences.

2. Frequent passive voice

English uses the passive voice far more than Chinese does. Even Shakespeare's Romeo and Juliet uses it twice in one sentence:

Juliet was torn between desire to keep Romeo near her and fear for his life, should his presence be detected.

(朱丽叶精神上受到折磨,既渴望和罗密欧形影不离,又担心罗密欧万一让人发现,难免有性命之忧。)

In EST this tendency is even stronger: more than a third of its sentences are passive. For example:
(a) No work can be done without energy. (没有能量决不能做功。)
(b) All business decisions must now be made in the light of the market. (所有企业现在必须根据市场来作出决策。)
(c) Automobiles may be manufactured with computer-driven robots or put together almost totally by hand. (汽车可以由计算机操纵的机器人来制造,或者几乎全部用手工装配。)

In sentence (c), the full predicate of the second coordinate clause would be "may be put together". Put is an irregular verb whose three principal forms are identical; here it is a past participle, with "may be" omitted for rhetorical economy, so the clause is passive, not present tense. EST favors the passive voice because it places the thing under discussion (work, business decisions, automobiles in the examples) at the head of the sentence as the subject, emphasizing its importance.

3. Frequent non-finite verbs

Each English simple sentence may contain only one finite verb; when several actions appear, the main action becomes the predicate, and the rest must take non-finite forms to satisfy English grammar. There are three non-finite forms: the gerund, the participle (present and past), and the infinitive. For example:
(a) 要成为一个名符其实的内行,需要学到老。Translated: To be a true professional requires lifelong learning. Here requires is chosen as the predicate, while "to be" takes the infinitive form and "learning" the gerund form.
(b) 任何具有重量并占有空间的东西都是物质。Translated: Matter is anything having weight and occupying space. Here is serves as the predicate (linking verb), while having and occupying are present participles which, together with their objects weight and space, form participial phrases modifying the noun anything.
(c) 这门学科为人所知的两大分支是无机化学和有机化学。Translated: The two great divisions of this science known are inorganic chemistry and organic chemistry. Here are serves as the predicate linking verb, and known is handled as a past participle.

These three examples illustrate the three kinds of non-finite verbs. All exist to satisfy the iron rule of English grammar that a simple sentence admits only one finite verb; this is why English, unlike some other languages, has non-finite verbs and uses them so frequently.

4. Frequent conversion of word class

Many English words belong to several word classes: the same form may serve as noun, verb, adjective, preposition, or adverb, with different functions and different meanings, and careless reading leads straight to error. For example:
(a) above — as a preposition: above all (first of all, most importantly); as an adjective: for the above reason; as an adverb: as (has been) indicated above.
(b) light — as a noun: in (the) light of (because of, according to), highlight(s), safety light; as an adjective: light industry, light room (a bright room), light blue, light coating (a thin coating); as a verb: light up the lamp; as an adverb: travel light, light come, light go (easily gained, easily lost).

Such conversions, rare in languages such as German and Russian, are common in EST, where almost every technical noun can be converted into an adjective of related meaning. Word-class conversion adds flexibility and expressive power to English, but readers must judge from context which class a word belongs to and what it means before the whole sentence can be understood correctly. In translating EST, we should give full weight to all the characteristics above, concentrate on conveying the information, and adjust sentence patterns and discourse structure so that the translation is orderly and logically coherent, while taking care to use technical terms accurately.
Exercises
I. Answer the following questions.
1. When was the modern computer invented?
2. What are the major characteristics of the four generations of modern computers?
3. Describe the near-future supercomputer directions.
4. What are the basic characteristics of modern computers?
II. Write a summary of Section 1.2 about computer generations in 300 words.
III. Talk about the trends of computer hardware and software.

Chapter 2 Basic Organization of Computers
Study Guide

A computer consists mainly of a central processing unit, storage devices, and input and output devices. After studying this chapter, readers should master the following:
- the main terms of computer structure and hardware;
- the organization of a computer and the functions of each part, and be able to describe them in English;
- the formation rules of technical vocabulary, especially common affixes and compound words.

2.1 Introduction
In this chapter, we examine the organization of basic computer systems. A simple computer has three primary subsystems. The central processing unit, or CPU, performs many operations and controls the computer. A microprocessor usually serves as the computer's CPU. The memory subsystem is used to store programs being executed by the CPU, along with the programs' data. The input/output, or I/O, subsystem allows the CPU to interact with input and output devices, such as the keyboard and monitor of a personal computer, or the keypad and digital display of a microwave oven.

Most computer systems, from the embedded controllers found in automobiles and consumer appliances to personal computers and mainframes, have the same basic organization. This organization has three main components: the CPU, the memory subsystem, and the I/O subsystem. The generic organization of these components is shown in Figure 2-1.

[Fig. 2-1 Generic computer organization: the CPU, the memory subsystem, and the I/O subsystem (I/O devices) connected by the address bus, data bus, and control bus]
In this chapter, we first describe the system buses used to connect the components in the computer system. Then we examine the instruction cycle, the sequence of operations that occurs within the computer as it fetches, decodes, and executes an instruction.

New Words & Expressions
subsystem n. a secondary or subordinate part of a larger system
operation n. an action carried out by the computer
microprocessor n. a CPU on a single chip
system buses the shared sets of wires connecting the major components
sequence n. an ordered series
fetch vt. to read (data or an instruction) from memory
decode vt. to determine the meaning of (an instruction code)
instruction n. a command that a processor can execute

Abbreviations
CPU (Central Processing Unit)
I/O (Input/Output)
2.2 System Buses
Physically, a bus is a set of wires. The components of the computer are connected to the buses. To send information from one component to another, the source component outputs data onto the bus. The destination component then inputs this data from the bus. As the complexity of a computer system increases, it becomes more efficient (in terms of minimizing connections) to use buses rather than direct connections between every pair of devices. Buses use less space on a circuit board and require less power than a large number of direct connections. They also require fewer pins on the chip or chips that comprise the CPU.

The system shown in Figure 2-1 has three buses. The uppermost bus in this figure is the address bus. When the CPU reads data or instructions from or writes data to memory, it must specify the address of the memory location it wishes to access. It outputs this address to the address bus; memory inputs this address from the address bus and uses it to access the proper memory location. Each I/O device, such as a keyboard, monitor, or disk drive, has a unique address as well. When accessing an I/O device, the CPU places the address of the device on the address bus. Each device can read the address off of the bus and determine whether it is the device being accessed by the CPU. Unlike the other buses, the address bus always receives data from the CPU; the CPU never reads the address bus.
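The parenthetical claim about minimizing connections is easy to quantify: n components need n(n-1)/2 point-to-point links to connect every pair, but only n attachments to one shared bus. A quick sketch (the function names are ours, not the text's):

```python
def direct_links(n: int) -> int:
    """Point-to-point links needed to connect every pair of n components."""
    return n * (n - 1) // 2

def bus_attachments(n: int) -> int:
    """With one shared bus, each component needs a single attachment."""
    return n

# 10 components: 45 separate links versus only 10 bus attachments.
print(direct_links(10), bus_attachments(10))  # → 45 10
```

The gap widens quadratically, which is why buses dominate as system complexity grows.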
Data is transferred via the data bus. When the CPU fetches data from memory, it first outputs the memory address on its address bus. Then memory outputs the data onto the data bus; the CPU can then read the data from the data bus. When writing data to memory, the CPU first outputs the address onto the address bus, then outputs the data onto the data bus. Memory then reads and stores the data at the proper location. The processes for reading data from and writing data to the I/O devices are similar.
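The two-step handshake just described (address out first, then data) can be mimicked in a toy model. This is our own illustrative sketch, not code from the chapter; the class `Memory` and its method names are invented:

```python
class Memory:
    """Toy memory subsystem driven in two steps, like the buses above."""

    def __init__(self, size: int):
        self.cells = [0] * size
        self.latched = None              # address captured from the address bus

    def latch_address(self, addr: int):  # CPU drives the address bus
        self.latched = addr

    def read(self) -> int:               # memory drives the data bus
        return self.cells[self.latched]

    def write(self, data: int):          # CPU drives the data bus
        self.cells[self.latched] = data

mem = Memory(256)
mem.latch_address(0x10)   # step 1: address goes out on the address bus
mem.write(42)             # step 2: data goes out on the data bus; memory stores it
mem.latch_address(0x10)   # a later read repeats the same two-step sequence
print(mem.read())         # → 42
```

Note that the same `latch_address` step precedes both reads and writes, mirroring the text: only the direction of the data bus transfer differs.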
The control bus is different from the other two buses. The address bus consists of n lines, which combine to transmit one n-bit address value. Similarly, the lines of the data bus work together to transmit a single multibit value. In contrast, the control bus is a collection of individual control signals. These signals indicate whether data is to be read into or written out of the CPU, whether the CPU is accessing memory or an I/O device, and whether the I/O device or memory is ready to transfer data. Although this bus is shown as bidirectional in Figure 2-1, it is really a collection of (mostly) unidirectional signals. Most of these signals are output from the CPU to the memory and I/O subsystems, although a few are output by these subsystems to the CPU. We examine these signals in more detail when we look at the instruction cycle and the subsystem interface.

A system may have a hierarchy of buses. For example, it may use its address, data, and control buses to access memory and an I/O controller. The I/O controller, in turn, may access all I/O devices using a second bus, often called an I/O bus or a local bus.

New Words & Expressions
pin n. a metal lead on a chip package
address bus the bus carrying memory and device addresses
uppermost a. highest; topmost
control bus the bus carrying individual control signals
data bus the bus carrying data values
via prep. by way of; through
multibit a. consisting of several bits
bidirectional a. operating in both directions
unidirectional a. operating in one direction only
hierarchy n. a system of levels
I/O bus a secondary bus connecting I/O devices
local bus a bus connecting devices close to the processor
2.3 Instruction Cycle
The instruction cycle is the procedure a microprocessor goes through to process an instruction. First the microprocessor fetches, or reads, the instruction from memory. Then it decodes the instruction, determining which instruction it has fetched. Finally, it performs the operations necessary to execute the instruction. (Some people also include an additional element in the instruction cycle to store results. Here, we include that operation as part of the execute function.) Each of these functions—fetch, decode, and execute—consists of a sequence of one or more operations.
Let's start where the computer starts, with the microprocessor fetching the instruction from memory. First, the microprocessor places the address of the instruction onto the address bus. The memory subsystem inputs this address and decodes it to access the desired memory location. (We look at how this decoding occurs when we examine the memory subsystem in more detail later in this chapter.)
After the microprocessor allows sufficient time for memory to decode the address and access the requested memory location, the microprocessor asserts a READ control signal. The READ signal is a signal on the control bus which the microprocessor asserts when it is ready to read data from memory or an I/O device. (Some processors have a different name for this signal, but all microprocessors have a signal to perform this function.) Depending on the microprocessor, the READ signal may be active high (asserted 1) or active low (asserted 0).
When the READ signal is asserted, the memory subsystem places the instruction code to be fetched onto the computer system's data bus. The microprocessor then inputs this data from the bus and stores it in one of its internal registers. At this point, the microprocessor has fetched the instruction.
Next, the microprocessor decodes the instruction. Each instruction may require a different sequence of operations to execute the instruction. When the microprocessor decodes the instruction, it determines which instruction it is in order to select the correct sequence of operations to perform. This is done entirely within the microprocessor; it does not use the system buses.
Finally, the microprocessor executes the instruction. The sequence of operations to execute the instruction varies from instruction to instruction. The execute routine may read data from memory, write data to memory, read data from or write data to an I/O device, perform only operations within the CPU, or perform some combination of these operations. We now look at how the computer performs these operations from a system perspective.
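The fetch-decode-execute cycle described above can be sketched as a toy interpreter. This is a hypothetical three-instruction machine of our own invention (not the Relatively Simple CPU or any real instruction set); memory here is just a table mapping addresses to instruction codes:

```python
# Program in "memory": address -> (opcode, operand).
memory = {0: ("LOAD", 5), 1: ("ADD", 3), 2: ("HALT", None)}

pc = 0    # program counter: address of the next instruction
acc = 0   # accumulator register
while True:
    opcode, operand = memory[pc]   # fetch the instruction at address pc
    pc += 1
    if opcode == "LOAD":           # decode: select the matching routine...
        acc = operand              # ...then execute it
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        break

print(acc)  # → 8
```

As in the text, each instruction selects a different execute routine, and the fetch step is identical for every instruction.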
To read data from memory, the microprocessor performs the same sequence of operations it uses to fetch an instruction from memory. After all, fetching an instruction is simply reading it from memory. Figure 2-2(a) shows the timing of the operations to read data from memory.
In Figure 2-2, notice the top symbol, CLK. This is the computer system clock; the microprocessor uses the system clock to synchronize its operations. The microprocessor places the address onto the bus at the beginning of a clock cycle, a 0/1 sequence of the system clock. One clock cycle later, to allow time for memory to decode the address and access its data, the microprocessor asserts the READ signal. This causes memory to place its data onto the system data bus. During this clock cycle, the microprocessor reads the data off the system bus and stores it in one of its registers. At the end of the clock cycle it removes the address from the address bus and deasserts the READ signal. Memory then removes the data from the data bus, completing the memory read operation.

[Fig. 2-2 Timing diagrams for (a) a memory read and (b) a memory write, each showing CLK, the address bus, the data bus, and the READ or WRITE signal over two clock cycles]

The timing of the memory write operation is shown in Figure 2-2(b). The processor places the address and data onto the system buses during the first clock cycle. The microprocessor then asserts a WRITE control signal (or its equivalent) at the start of the second clock cycle. Just as the READ signal causes memory to output data, the WRITE signal triggers memory to store data. Some time during this cycle, memory writes the data on the data bus to the memory location whose address is on the address bus. At the end of this cycle, the processor completes the memory write operation by removing the address and data from the system buses and deasserting the WRITE signal.
The I/O read and write operations are similar to the memory read and write operations. A processor may use either memory-mapped I/O or isolated I/O. If the processor supports memory-mapped I/O, it follows the same sequences of operations to input or output data as to read data from or write data to memory, the sequences shown in Figure 2-2. (Remember, in memory-mapped I/O, the processor treats an I/O port as a memory location, so it is reasonable to treat an I/O data access the same as a memory access.) Processors that use isolated I/O follow the same process but have a second control signal to distinguish between I/O and memory accesses. (CPUs that use isolated I/O can have a memory location and an I/O port with the same address, which makes this extra signal necessary.)
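The role of that second control signal can be sketched in a few lines. In this toy model (our own, hypothetical code), `io_select` stands in for the extra control line that an isolated-I/O processor asserts during I/O accesses, so the same address can name both a memory cell and an I/O port:

```python
def access(addr, io_select, memory, io_ports):
    """Isolated I/O: an extra control line picks the address space, so a
    memory location and an I/O port may legally share the same address."""
    return io_ports[addr] if io_select else memory[addr]

memory   = {0x20: "byte stored in RAM"}
io_ports = {0x20: "byte from the keyboard port"}

# Same address, different address spaces, selected by the control signal.
print(access(0x20, io_select=False, memory=memory, io_ports=io_ports))
print(access(0x20, io_select=True,  memory=memory, io_ports=io_ports))
```

With memory-mapped I/O there would be no `io_select` argument at all: the ports would simply occupy entries in the one `memory` table.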
Finally, consider instructions that are executed entirely within the microprocessor. The INAC instruction of the Relatively Simple CPU, and the MOV r1, r2 instruction of the 8085 microprocessor, can be executed without accessing memory or I/O devices. As with instruction decoding, the execution of these instructions does not make use of the system buses.

New Words & Expressions
instruction cycle 指令周期          memory map n. 存储映像,内存映射
register n. 寄存器                  port n. 端口
timing n. 定时;时序;时间选择        synchronize vt. 使...同步
assert vt. 主张,发出                deassert vt. 撤销
trigger vt. 引发,引起,触发          map v. 映射
2.4 CPU ORGANIZATION
The CPU controls the computer. It fetches instructions from memory, supplying the address and control signals needed by memory to access its data. The CPU decodes the instruction and controls the execution procedure. It performs some operations internally, and supplies the address, data, and control signals needed by memory and I/O devices to execute the instruction. Nothing happens in the computer unless the CPU causes it to happen. Internally, the CPU has three sections, as shown in Figure 2-3. The register section, as its name implies, includes a set of registers and a bus or other communication mechanism. The registers in a processor's instruction set architecture are found in this section of the CPU. The system address and data buses interact with this section of the CPU. The register section also contains other registers that are not directly accessible by the programmer. The Relatively Simple CPU includes registers to latch the address being accessed in memory and a temporary storage register, as well as other registers that are not a part of its instruction set architecture.

Fig. 2-3 CPU internal organization (register section, ALU, and control unit, exchanging control signals and data values, with the address, data, and control buses attached to the register and control sections)
During the fetch portion of the instruction cycle, the processor first outputs the address of the instruction onto the address bus. The processor has a register called the program counter; the CPU keeps the address of the next instruction to be fetched in this register. Before the CPU outputs the address onto the system's address bus, it retrieves the address from the program counter register. At the end of the instruction fetch, the CPU reads the instruction code from the system data bus. It stores this value in an internal register, usually called the instruction register or something similar.
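The fetch sequence just described can be sketched directly. This is a minimal model under the section's assumptions; the instruction codes and dictionary-based memory are invented for illustration.

```python
# Sketch of the instruction fetch described above: the program counter (PC)
# drives the address bus, the instruction code comes back over the data bus
# into the instruction register (IR), and PC advances to the next instruction.
# The memory contents (0xA1, 0xB2, 0xC3) are arbitrary illustrative opcodes.

memory = {0: 0xA1, 1: 0xB2, 2: 0xC3}
cpu = {"PC": 0, "IR": 0}

def fetch(cpu, memory):
    address_bus = cpu["PC"]          # PC supplies the address
    cpu["IR"] = memory[address_bus]  # instruction code latched into IR
    cpu["PC"] += 1                   # point at the next instruction

fetch(cpu, memory)   # IR = 0xA1, PC = 1
fetch(cpu, memory)   # IR = 0xB2, PC = 2
```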
The arithmetic logic unit or ALU performs most arithmetic and logical operations, such as adding or ANDing values. It receives its operands from the register section of the CPU and stores its results back in the register section. Since the ALU must complete its operations within a single clock cycle, it is constructed using only combinatorial logic. The ADD instructions in the Relatively Simple CPU and the 8085 microprocessor use the ALU during their executions.
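Because the ALU is purely combinatorial, it behaves like a stateless function of its operands and a function-select input. The sketch below is illustrative only: the function-select encoding, the 8-bit width, and the zero flag are assumptions, not the actual design of any CPU named in the text.

```python
# Sketch of an ALU as a pure (combinational) function: the outputs depend
# only on the current operands and the selected function; there is no
# internal state. The select encoding and flag are invented for illustration.

def alu(a, b, select, width=8):
    mask = (1 << width) - 1          # model a fixed-width datapath
    if select == "ADD":
        result = (a + b) & mask      # addition wraps at the word width
    elif select == "AND":
        result = a & b
    elif select == "XOR":
        result = a ^ b
    else:
        raise ValueError(f"unknown ALU function: {select}")
    zero_flag = int(result == 0)     # a typical flag fed to the control unit
    return result, zero_flag

print(alu(0x0F, 0x01, "ADD"))  # (16, 0)
print(alu(0xFF, 0x01, "ADD"))  # (0, 1) -- 8-bit overflow wraps to zero
```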
Just as the CPU controls the computer (in addition to its other functions), the control unit controls the CPU. This unit generates the internal control signals that cause registers to load data, increment or clear their contents, and output their contents, as well as cause the ALU to perform the correct function. These signals are shown as control signals in Figure 2-3. The control unit receives some data values from the register unit, which it uses to generate the control signals. This data includes the instruction code and the values of some flag registers. The control unit also generates the signals for the system control bus, such as the READ, WRITE, and IO/M signals. A microprocessor typically performs a sequence of operations to fetch, decode, and execute an instruction. By asserting these internal and external control signals in the proper sequence, the control unit causes the CPU and the rest of the computer to perform the operations needed to correctly process instructions.
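The fetch-decode-execute sequence the control unit steps through can be condensed into a toy loop. The two opcodes below (an increment-accumulator and a halt) are illustrative stand-ins, not the actual encodings of the Relatively Simple CPU or the 8085.

```python
# Toy fetch-decode-execute loop in the spirit of this section. Opcode values
# are invented; INAC-style execution happens entirely inside the CPU, so the
# execute step here touches no "bus" at all.

INAC, HALT = 0x01, 0x00  # illustrative opcode assignments

def run(program):
    cpu = {"PC": 0, "IR": 0, "AC": 0}
    while True:
        cpu["IR"] = program[cpu["PC"]]   # fetch: PC addresses memory
        cpu["PC"] += 1
        opcode = cpu["IR"]               # decode: internal, no bus access
        if opcode == INAC:               # execute entirely inside the CPU
            cpu["AC"] += 1
        elif opcode == HALT:
            return cpu
        else:
            raise ValueError(f"unknown opcode {opcode:#x}")

final = run([INAC, INAC, INAC, HALT])
print(final["AC"])  # 3
```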
This description of the CPU is incomplete. Current processors have more complex features that improve their performance. One such mechanism, the instruction pipeline, allows the CPU to fetch one instruction while simultaneously executing another instruction.
New Words & Expressions
latch v. 闭锁,锁存                  program counter 程序计数器
instruction register 指令寄存器     operand n. 操作数
increment n. 增量,加 1              flag register 标志寄存器
pipeline n. 流水线                  microsequenced 微定序的
Abbreviations
ALU (Arithmetic Logic Unit) 算术逻辑单元
Reading Material: Eight Great Ideas in Computer Architecture
We now introduce eight great ideas that computer architects have invented in the last 60 years of computer design. These ideas are so powerful they have lasted long after the first computer that used them, with newer architects demonstrating their admiration by imitating their predecessors. These great ideas are themes that we will weave through this and subsequent chapters as examples arise. To point out their influence, in this section we introduce icons and highlighted terms that represent the great ideas, and we use them to identify the nearly 100 sections of the book that feature use of the great ideas.

Design for Moore's Law
The one constant for computer designers is rapid change, which is driven largely by Moore’s
Law. It states that integrated circuit resources double every 18-24 months. Moore’s Law resulted
from a 1965 prediction of such growth in IC capacity made by Gordon Moore, one of the founders of Intel. As computer designs can take years, the resources available per chip can easily double or quadruple between the start and finish of the project. Like a skeet shooter, computer architects must anticipate where the technology will be when the design finishes rather than design for where it starts. We use an "up and to the right" Moore's Law graph to represent designing for rapid change.
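The design-time argument is simple arithmetic, sketched below with an assumed 18-month doubling period (the law is variously quoted as 18-24 months).

```python
# Back-of-the-envelope sketch: if IC resources double every 18-24 months,
# a chip should be designed for the capacity available when the design
# finishes, not when it starts. The 18-month period here is an assumption.

def moores_law_capacity(initial, months, doubling_period=18):
    return initial * 2 ** (months / doubling_period)

# A 3-year (36-month) project with an 18-month doubling period:
print(moores_law_capacity(1.0, 36))  # 4.0 -- resources quadruple
```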
Use Abstraction to Simplify Design
Both computer architects and programmers had to invent techniques to make themselves more productive, for otherwise design time would lengthen as dramatically as resources grew by Moore's Law. A major productivity technique for hardware and software is to use abstractions to represent the design at different levels of representation; lower-level details are hidden to offer a simpler model at higher levels. We'll use the abstract painting icon to represent this second great idea.
Make the Common Case Fast
Making the common case fast will tend to enhance performance better than optimizing the rare case. Ironically, the common case is often simpler than the rare case and hence is often easier to enhance. This common sense advice implies that you know what the common case is, which is only possible with careful experimentation and measurement. We use a sports car as the icon for making the common case fast, as the most common trip has one or two passengers, and it’s surely easier to
make a fast sports car than a fast minivan!
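"Make the common case fast" can be quantified with Amdahl's Law, a standard result relating the improved fraction of execution time to overall speedup. The fractions and factors below are invented for illustration.

```python
# Amdahl's Law sketch: overall speedup when a fraction f of execution time
# is improved by a factor s. The numeric scenarios are illustrative only.

def overall_speedup(f, s):
    return 1.0 / ((1.0 - f) + f / s)

# Speeding up a case that is 90% of execution time by 2x beats speeding up
# a 10% rare case by 10x:
print(round(overall_speedup(0.9, 2), 3))   # 1.818
print(round(overall_speedup(0.1, 10), 3))  # 1.099
```

The asymmetry is the point: even a modest improvement to the common case outweighs a dramatic improvement to the rare one.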
Performance via Parallelism
Since the dawn of computing, computer architects have offered designs that get more performance by performing operations in parallel. We’ll see many examples of parallelism in this book. We use multiple jet engines of a plane as our icon for parallel performance.
Performance via Pipelining
A particular pattern of parallelism is so prevalent in computer architecture that it merits its own name: pipelining. For example, before fire engines, a "bucket brigade" would respond to a fire, which many cowboy movies show in response to a dastardly act by the villain. The townsfolk form a human chain to carry water from its source to the fire, as they can move buckets up the chain much more quickly than individuals running back and forth. Our pipeline icon is a sequence of pipes, with each section representing one stage of the pipeline.
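The bucket-brigade arithmetic is worth making explicit. Assuming ideal, equal-length stages (a simplification), n tasks through k stages take k + (n - 1) stage-times instead of n × k, approaching a k-fold speedup for large n.

```python
# Sketch of ideal pipeline timing: fill the pipeline once, then complete
# one task per stage-time. Stage counts and task counts are illustrative.

def unpipelined_time(n_tasks, n_stages, stage_time=1):
    return n_tasks * n_stages * stage_time

def pipelined_time(n_tasks, n_stages, stage_time=1):
    # k stage-times to fill the pipeline, then one result per stage-time.
    return (n_stages + n_tasks - 1) * stage_time

print(unpipelined_time(100, 5))  # 500
print(pipelined_time(100, 5))    # 104 -- nearly a 5x speedup
```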
Performance via Prediction
Following the saying that it can be better to ask for forgiveness than to ask for permission, the final great idea is prediction. In some cases it can be faster on average to guess and start working rather than wait until you know for sure, assuming that the mechanism to recover from a misprediction is not too expensive and your prediction is relatively accurate. We use the fortuneteller’s crystal ball as our prediction icon.
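The trade-off stated above is an expected-value calculation. The cycle counts and accuracies below are invented for illustration; the structure, not the numbers, is the point.

```python
# Sketch of when speculation pays off: compare the expected cost of guessing
# (accuracy * fast path + misprediction recovery) against always waiting.
# All cycle counts and accuracies are illustrative assumptions.

def expected_cycles(accuracy, hit_cycles, misprediction_penalty):
    return accuracy * hit_cycles + (1 - accuracy) * misprediction_penalty

wait_cycles = 3.0                   # cost of always stalling for the outcome
print(expected_cycles(0.9, 1, 15))  # about 2.4 -- beats waiting on average
print(expected_cycles(0.5, 1, 15))  # about 8.0 -- too inaccurate; wait instead
```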
Hierarchy of Memories
Programmers want memory to be fast, large, and cheap, as memory speed often shapes performance, capacity limits the size of problems that can be solved, and the cost of memory today is often the majority of computer cost. Architects have found that they can address these conflicting demands with a hierarchy of memories, with the fastest, smallest, and most expensive memory per bit at the top of the hierarchy and the slowest, largest, and cheapest per bit at the bottom. Caches give the programmer the illusion that main memory is nearly as fast as the top of the hierarchy and nearly as big and cheap as the bottom of the hierarchy. We use a layered triangle icon to represent the memory hierarchy. The shape indicates speed, cost, and size: the closer to the top, the faster and more expensive per bit the memory; the wider the base of the layer, the bigger the memory.
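The standard way to see why the illusion works is average memory access time (AMAT). The latencies and hit rate below are invented for illustration.

```python
# Sketch of the memory-hierarchy payoff: average memory access time for a
# cache in front of main memory. Hit rate and latencies are illustrative.

def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# 1-cycle cache, 95% hit rate, 100-cycle main-memory penalty:
print(amat(hit_time=1, miss_rate=0.05, miss_penalty=100))  # about 6 cycles
```

A small, fast cache thus makes a 100-cycle memory feel like a 6-cycle one on average, which is the "nearly as fast as the top" illusion in numbers.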
Dependability via Redundancy
Computers not only need to be fast; they need to be dependable. Since any physical device can fail, we make systems dependable by including redundant components that can take over when a failure occurs and that help detect failures. We use the tractor-trailer as our icon, since the dual tires on each side of its rear axles allow the truck to continue driving even when one tire fails. (Presumably, the truck driver heads immediately to a repair facility so the flat tire can be fixed, thereby restoring redundancy!)

New Words & Expressions
admiration n. 钦佩;赞赏;羡慕        theme n. 主题
skeet n. 双向飞碟射击               shooter n. 枪;射手
abstraction n. 抽象,抽象化          parallelism n. 平行,并行
pipeline v. 管道输送                bucket brigade n. 救火队列
dastardly a. 懦弱的,卑鄙的          villain n. 恶棍;歹徒;坏人,罪犯
hierarchy n. 层次体系               cache n. 快速缓冲贮存区
redundancy n. 过剩,冗余             rear n. 后部,背部
axle n. 车轴
计算机英语专业词汇的构成
英语的词汇构成有很多种,真正英语的基本词汇是不多的,很大部分词汇属于构成型词汇。这里,仅介绍在专业英语中遇到的专业词汇及其构成。目前,各行各业都有一些自己领域的专业词汇,有的是随着本专业发展应运而生的,有的是借用公共英语中的词汇,有的是借用外来语言词汇,有的则是人为构造成的词汇。

一、派生词(derivation)

这类词汇非常多,它是根据已有的词加上某种前后缀,或以词根生成、或以构词成分形成新的词。科技英语词汇中有很大一部分来源于拉丁语、希腊语等外来语,有的是直接借用,有的是在它们之上不断创造出新的词汇。这些词汇的构词成分(前缀、后缀、词根等)较固定,构成新词以后便于读者揣度词义,易于记忆。

1.1 前缀

采用前缀构成的单词在计算机专业英语中占了很大比例,通过下面的实例可以了解这些常用的前缀构成的单词。

multi- 多        hyper- 超级        super- 超级
multiprogram 多道程序 hypercube 超立方 superhighway 超级公路
multimedia 多媒体 hypercard 超级卡片 superpipeline 超流水线
multiprocessor 多处理器 hypermedia 超媒体 superscalar 超标量
multiplex 多路复用 hypertext 超文本 superset 超集
multiprotocol 多协议 hyperswitch 超级交换机 superclass 超类
inter- 相互、在...间        micro- 微型        tele- 远程的
interface 接口、界面 microprocessor 微处理器 telephone 电话
interlace 隔行扫描 microkernel 微内核 teletext 图文电视
interlock 联锁 microcode 微代码 telemarketing 电话营销
internet 互联网络(因特网) microkid 微机迷 telecommuting 远程办公
interconnection 互联 microchannel 微通道 teleconference 远程会议
单词前缀还有很多,其构成可以同义而不同源(如拉丁、希腊),可以互换,例如:

multi-, poly- 相当于 many 如:multimedia 多媒体,polytechnic 各种工艺的
uni-, mono- 相当于 single 如:unicode 统一的字符编码标准,monochrome 单色
bi-, di- 相当于 twice 如:bichloride 二氯化物,dichloride 二氯化物
equi-, iso- 相当于 equal 如:equality 等同性,isoline 等值线
simili-, homo- 相当于 same 如:similarity 类似,homogeneous 同类的
semi-, hemi- 相当于 half 如:semiconductor 半导体,hemicycle 半圆形
hyper-, super- 相当于 over 如:hypertext 超文本,superscalar 超标量体系结构

1.2 后缀
后缀是在单词后部加上构词结构,形成新的单词。如:
-scope 探测仪器        -meter 计量仪器        -graph 记录仪器
baroscope 验压器 barometer 气压表 barograph 气压记录仪
telescope 望远镜 telemeter 测距仪 telegraph 电报
spectroscope 分光镜 spectrometer 分光仪 spectrograph 分光摄像仪
-able 可能的        -ware 件(部件)        -ity 性质
enable 允许、使能 hardware 硬件 reliability 可靠性
disable 禁止、不能 software 软件 availability 可用性
programmable 可编程的 firmware 固件 accountability 可核查性
portable 便携的 groupware 组件 integrity 完整性
scalable 可缩放的 freeware 赠件 confidentiality 保密性
二、复合词(compounding)

复合词是科技英语中另一大类词汇,其组成面广,通常分为复合名词、复合形容词、复合动词等。复合词通常以连字符"-"连接单词构成,或者采用短语构成。有的复合词进一步发展,去掉了连字符,并经过缩略成为另一类词汇,即混成词。复合词的实例有:

-based 基于,以……为基础            -centric 以……为中心的
rate-based 基于速率的               client-centric 以客户为中心的
credit-based 基于信誉的             user-centric 以用户为中心的
file-based 基于文件的               host-centered 以主机为中心的
Windows-based 以 Windows 为基础的

-oriented 面向……的                 -free 自由的,无……的
object-oriented 面向对象的          lead-free 无铅的
market-oriented 市场导向的          jumper-free 无跳线的
process-oriented 面向进程的         paper-free 无纸的
thread-oriented 面向线程的          charge-free 免费的

info- 信息,与信息有关的
infochannel 信息通道  infotree 信息树  infoworld 信息世界  infosec 信息安全

其他
point-to-point 点到点               point-and-click 点击
plug-and-play 即插即用              drag-and-drop 拖放
easy-to-use 易用的                  line-by-line 逐行
off-the-shelf 现成的                store-and-forward 存储转发
peer-to-peer 对等的                 operator-controllable 操作员可控制的
leading-edge 领先的                 over-hyped 过度宣扬的
end-user 最终用户                   front-user 前端用户
sign-on 登录                        sign-off 注销
pull-down 下拉                      pull-up 上拉
pop-up 弹出

此外,以名词 + 动词-ing 构成的复合形容词形成了一种典型的替换关系,即可以根据需要在结构中代入同一词类而构成新词,它们多为动宾关系。如:

man-carrying aircraft 载人飞船          earth-moving machine 推土机
time-consuming operation 耗时操作       ocean-going freighter 远洋货轮

然而,必须注意,复合词并非随意可以构造,否则会形成一种非正常的英语句子结构。虽然上述例子给出了多个连接单词组成的复合词,但不提倡这种冗长的复合方式。对于多个单词的非连线形式,要注意其顺序和主要针对对象。此外还应当注意,有时加连字符的复合词与不加连字符的词汇词意是不同的,必须通过文章的上下文推断。如:

force-feed 强迫接受(vt.),而 force feed 则为"加压润滑"。

随着词汇的专用化,复合词中间的连字符被省略掉,形成了一个单词,例如:
videotape 录像带 fanin 扇入 fanout 扇出
online 在线 onboard 在板 login 登录
logout 注销 pushup 推高 popup 弹出
三、混成词(blending)

混成词不论在公共英语还是科技英语中都大量出现,也有人将它们称为缩合词(与缩略词区别)、融会词,它们多是名词,也有地方将其作为动词用,对这类词汇可以通过其构词规律和词素进行理解。这类词汇将两个单词的前部拼接、前后拼接或者将一个单词前部与另一词拼接构成新的词汇,实例有:

brunch (breakfast + lunch) 早中饭
smog (smoke + fog) 烟雾
codec (coder + decoder) 编码译码器
compuser (computer + user) 计算机用户
transceiver (transmitter + receiver) 收发机
syscall (system + call) 系统调用
mechatronics (mechanical + electronics) 机械电子学
calputer (calculator + computer) 计算器式电脑

四、缩略词(shortening)

缩略词是将较长的英语单词取其首部或者主干构成与原词同义的短单词,或者将组成词汇短语的各个单词的首字母拼接为一个大写字母的字符串。随着科技发展,缩略词在文章索引、前序、摘要、文摘、电报、说明书、商标等科技文章中频繁采用。对计算机专业来说,在程序语句、程序注释、软件文档、文件描述中也采用了大量的缩略词作为标识符、名称等等。缩略词的出现方便了印刷、书写、速记以及口语交流等,但也同时增加了阅读和理解的困难。

缩略词开始出现时,通常采用破折号、引号或者括号将它们的原形单词和组合词一并列出,久而久之,人们对缩略词逐渐接受和认可,作为注释性的后者也就消失了。在通常情况下,缩略词多取自各个组合词(虚词除外)的首部第一、二字母。缩略词也可能有形同而义异的情
况。如果遇到这种情况,翻译时应当根据上下文确定词意,并在括号内给出其原形组合词汇。 缩略词可以分为如下几种。
4.1 压缩和省略
将某些太长、难拼、难记、使用频繁的单词压缩成一个短小的单词,或取其头部、或取 其关键音节。如:
flu = influenza 流感          lab = laboratory 实验室
math = mathematics 数学       iff = if and only if 当且仅当
rhino = rhinoceros 犀牛       ad = advertisement 广告

4.2 缩写(acronym)

将某些词组和单词集合中每个实意单词的第一或者首部几个字母重新组合,组成为一个新的词汇,作为专用词汇使用。在应用中它形成三种类型,即:

(1)通常以小写字母出现,并作为常规单词
radar (radio detecting and ranging) 雷达
laser (light amplification by stimulated emission of radiation) 激光
sonar (sound navigation and ranging) 声纳
spool (simultaneous peripheral operation on line) 假脱机

(2)以大写字母出现,具有主体发音音节
BASIC (Beginner's All-purpose Symbolic Instruction Code) 初学者通用符号指令代码
FORTRAN (Formula Translation) 公式翻译
COBOL (Common Business Oriented Language) 面向商务的通用语言

(3)以大写字母出现,没有读音音节,仅为字母头缩写
ADE (Application Development Environment) 应用开发环境
PCB (Process Control Block) 进程控制块
CGA (Color Graphics Adapter) 彩色图形适配器
DBMS (Data Base Management System) 数据库管理系统
FDD (Floppy Disk Device) 软盘驱动器
MBPS (Mega Bytes Per Second) 每秒兆字节
Mbps (Mega Bits Per Second) 每秒兆位
RISC (Reduced Instruction Set Computer) 精简指令集计算机
CISC (Complex Instruction Set Computer) 复杂指令集计算机

五、借用词

借用词一般来自厂商名、商标名、产品代号名、发明者名、地名等,它通过将普通公共英语词汇演变成专业词意而实现。有的则是将原来已经有的词汇赋予新的含义。例如:
woofer 低音喇叭 tweeter 高音喇叭 flag 标志、状态
cache 高速缓存 semaphore 信号量 firewall 防火墙
mailbomb 邮件炸弹 scratch pad 便笺式存储器 pitfall 陷阱
在现代科技英语中借用了大量的公共英语词汇、日常生活中的常用词汇,而且,以西方 特有的幽默和结构讲述科技内容。这时,读者必须在努力扩大自己专业词汇的同时,也要掌握 和丰富自己的生活词汇,并在阅读和翻译时正确采用适当的含义。
Exercises
I. Answer the following questions
1. Describe the organization of basic computer systems.
2. How does a processor process an instruction?
3. How many sections are there in a CPU, and what are their functions?

II. The eight great ideas in computer architecture are similar to ideas from other fields. Match the eight ideas from computer architecture, "Design for Moore's Law", "Use Abstraction to Simplify Design", "Make the Common Case Fast", "Performance via Parallelism", "Performance via Pipelining", "Performance via Prediction", "Hierarchy of Memories", and "Dependability via Redundancy", to the following ideas from other fields:
a. Assembly lines in automobile manufacturing
b. Suspension bridge cables
c. Aircraft and marine navigation systems that incorporate wind information
d. Express elevators in buildings
e. Library reserve desk
f. Increasing the gate area on a CMOS transistor to decrease its switching time
g. Adding electromagnetic aircraft catapults (which are electrically powered as opposed to current steam-powered models), allowed by the increased power generation offered by the new reactor technology
h. Building self-driving cars whose control systems partially rely on existing sensor systems already installed into the base vehicle, such as lane departure systems and smart cruise control systems