Question
Paper
The paper will deal with the evolution of computer technology.
Course Goal/Objective
Describe how concepts such as RISC, pipelining, cache memory, and virtual memory have evolved over the past 25 years to improve system performance.
Instructions
In this short research paper, you will investigate the evolution of and current trends in improving system performance with concepts such as RISC, pipelining, cache memory, and virtual memory. In this paper you must, as part of your conclusion, explicitly state the concept or approach that seems most important to you and explain your selection. A minimum of two references is required for this paper. At least one article should be from a peer-reviewed journal.
Submission of Paper
Submit your paper as a Word document in your Assignments folder. I will pay a great deal of attention to your thesis statement, the concise but logical flow of your information, your conclusion, and your grammar and syntax. Submitted papers that show evidence of plagiarism will be sent to the Dean's office for review. Your paper should have no spelling or grammatical mistakes, and the construction should be logical and easy to read. These are research papers, and should not contain colloquial or slang expressions. Provide a reference list at the end of the paper. Use a minimum of two outside references. Your papers are due by midnight, EST, on the date posted in the class schedule. Late papers will be accepted up until the end of the semester with no penalties.
Format and length
Your paper should be written using APA style. It should be no more than five pages long, but no less than three pages long. The font size should be 12 point, with one-inch margins and double spacing.
Explanation / Answer
From an architectural point of view, microprocessor chips can be classified into two categories: Complex Instruction Set Computers (CISC) and Reduced Instruction Set Computers (RISC). In either case the objective is to improve system performance, and the debate between the two architectures has made this research area interesting, challenging, and sometimes confusing.
CISC computers are based on a complex instruction set in which instructions are executed by microcode. Microcode allows developers to change hardware designs while maintaining backward compatibility with instructions for earlier computers by changing only the microcode, which makes a complex instruction set possible and flexible. Although CISC designs allow a great deal of hardware flexibility, the supporting microcode slows microprocessor performance because of the number of operations that must be performed to execute each CISC instruction. A CISC instruction set typically includes many instructions of different sizes and execution-cycle counts, which makes CISC instructions harder to pipeline [1]. From the 1960s onward, CISC microprocessors became prevalent, with each successive processor having more complicated hardware and a more complex instruction set; the trend ran from the Intel 80486 through the Pentium MMX to the Pentium III.
RISC chips evolved around the mid-1970s as a reaction to CISC chips. In the 1970s, John Cocke at IBM's T. J. Watson Research Center provided the fundamental concepts of RISC; the idea came from the IBM 801 minicomputer project, which built a fast controller for a very large telephone switching system. This design contained many of the traits a later RISC chip would have: few instructions, fixed-size instructions in a fixed format, execution in a single processor cycle, and a load/store architecture. These ideas were further refined and articulated by a group at the University of California, Berkeley led by David Patterson, who coined the term "RISC". They realized that RISC promised higher performance, lower cost, and shorter design time [1]. Simple load/store computers such as MIPS are commonly called RISC architectures; after Patterson coined the term, John L. Hennessy developed the MIPS architecture as a representative RISC design [2].
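A key practical consequence of the fixed instruction format is that decoding becomes a handful of shifts and masks. The short Python sketch below illustrates this under an assumed, MIPS-like 32-bit encoding (6-bit opcode, three 5-bit register fields); the exact field layout is an assumption made for the example, not a specification of any real processor.

# Decode a hypothetical fixed-format 32-bit RISC-style instruction word.
# Because every field sits at a fixed bit position, decoding is just shifts
# and masks, and every instruction can be fetched and decoded the same way --
# one reason fixed-format instructions are easy to pipeline.
def decode(word: int) -> dict:
    return {
        "opcode": (word >> 26) & 0x3F,  # bits 31..26
        "rs":     (word >> 21) & 0x1F,  # bits 25..21
        "rt":     (word >> 16) & 0x1F,  # bits 20..16
        "rd":     (word >> 11) & 0x1F,  # bits 15..11
    }

if __name__ == "__main__":
    # An ADD-like instruction with made-up field values.
    word = (0x00 << 26) | (1 << 21) | (2 << 16) | (3 << 11)
    print(decode(word))  # {'opcode': 0, 'rs': 1, 'rt': 2, 'rd': 3}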
Seminal uses of pipelining were in the ILLIAC II project and the IBM Stretch project, though a simple version was used earlier in the Z1 in 1939 and the Z3 in 1941.
Pipelining began in earnest in the late 1970s in supercomputers such as vector processors and array processors. One of the early supercomputers was the Cyber series built by Control Data Corporation. Its main architect, Seymour Cray, later headed Cray Research, and Cray developed the X-MP line of supercomputers, using pipelining for both the multiply and the add/subtract functions. Later, Star Technologies added parallelism (several pipelined functions working in parallel), developed by Roger Chen, and in 1984 it added the pipelined divide circuit developed by James Bradley. By the mid-1980s, supercomputing was in use at many different companies around the world.
Today, pipelining and most of the above innovations are implemented by the instruction unit of most microprocessors.
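To see why pipelining raises throughput, consider an idealized k-stage pipeline with no stalls or hazards and one instruction issued per cycle. The numbers in the sketch below (one million instructions, five stages) are illustrative assumptions, not figures taken from the systems mentioned above.

# Idealized pipeline timing: n instructions on a k-stage pipeline with no
# stalls take k + (n - 1) cycles, versus n * k cycles without pipelining.
def unpipelined_cycles(n: int, k: int) -> int:
    return n * k

def pipelined_cycles(n: int, k: int) -> int:
    return k + (n - 1)

if __name__ == "__main__":
    n, k = 1_000_000, 5  # assumed: one million instructions, five stages
    speedup = unpipelined_cycles(n, k) / pipelined_cycles(n, k)
    print(f"ideal speedup ~ {speedup:.2f}x")  # approaches k as n grows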
Faster speeds and better performance have always been significant goals in the development of computer processors. Meanwhile, software developers are continually striving to maximize existing hardware to create ever-faster applications. Ideally, of course, the fastest software benefits from the fastest hardware, with the combination facilitating extremely high-speed transactions.
Data caching was the first step in speeding up data-centric applications. It has been followed by distributed caching, or the "in-memory data grid," which extends the capacity and scalability of single-node caches. These caches are useful only in read-mostly scenarios, however, so workloads dominated by updates and transactions need another kind of solution.
That is why space-based architecture was developed. By keeping data and processing together in the same memory space, it allows a high-speed application to operate within a single server while achieving the same processing speed as an application spread over 100 servers with distributed caching. Space-based architecture therefore ensures faster transactions and better resource efficiency, along with reduced hardware, administration, and energy costs in the data center.
The first step in speeding applications was the development of data caching. Caching increases application performance by reducing constant database access.
Typically, the database and the application are located in separate machines, making data access expensive in terms of time and performance. Furthermore, every database access usually involves disk access, which creates contention and slows overall application performance.
Consider this extreme example: Amazon has 100 million users, with 2-3 million logged in at any given time. Imagine 2 million concurrent users hitting the same database at the same time to retrieve their user information; clearly, that makes no sense from either a performance or a hardware standpoint.
In reality, the software needs a user's data only while that user is logged in. To give each user a fast, efficient experience, a caching layer can save the user's personal data in the application's memory at login, speeding up subsequent accesses and easing the load on the database.
However, caching improves performance only when the same data is read repeatedly. If new data is required every time a user accesses the system, caching provides little benefit.
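One way to make this concrete is to compute the effective access time as a function of the cache hit ratio. The latencies below are illustrative assumptions, not measurements of any particular system.

# Effective access time for a cache in front of a slower data store:
#   t_eff = h * t_cache + (1 - h) * t_store
# where h is the fraction of reads served from the cache.
def effective_access_time(h: float, t_cache: float, t_store: float) -> float:
    return h * t_cache + (1 - h) * t_store

if __name__ == "__main__":
    t_cache, t_store = 0.1, 10.0  # assumed latencies in milliseconds
    for h in (0.0, 0.5, 0.95):
        print(f"hit ratio {h:.2f}: {effective_access_time(h, t_cache, t_store):.2f} ms")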
Also, caching accelerates only reads. If a user changes something, the change must be recorded in the database, which serves as the system of record for the application, not merely in the cache.
Traditional caching is also limited by the memory capacity of the hosting machine.
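The points above can be summarized in a minimal read-through cache sketch. The fetch_user_from_db function, the tiny capacity of two entries, and the least-recently-used eviction policy are all stand-ins invented for illustration; a production cache would sit in front of a real data store and use a tuned eviction strategy.

from collections import OrderedDict

# Stand-in for an expensive database lookup (hypothetical, for illustration).
def fetch_user_from_db(user_id: str) -> dict:
    print(f"DB access for {user_id}")
    return {"id": user_id, "name": f"user-{user_id}"}

class ReadThroughCache:
    """Keeps recently read records in memory. The capacity limit models the
    finite RAM of the hosting machine; the least-recently-used entry is
    evicted when the limit is exceeded. Writes still go to the database."""

    def __init__(self, capacity: int = 2):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, user_id: str) -> dict:
        if user_id in self.entries:            # cache hit: no database access
            self.entries.move_to_end(user_id)
            return self.entries[user_id]
        record = fetch_user_from_db(user_id)   # cache miss: go to the database
        self.entries[user_id] = record
        if len(self.entries) > self.capacity:  # evict the least recently used
            self.entries.popitem(last=False)
        return record

if __name__ == "__main__":
    cache = ReadThroughCache()
    cache.get("alice")   # miss -> database access
    cache.get("alice")   # hit  -> served from memory
    cache.get("bob")     # miss
    cache.get("carol")   # miss -> evicts "alice"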
Before the development of higher-level programming languages, programmers had to build storage allocation into their applications themselves. An application was divided into overlays that were loaded into memory one at a time, because memory was limited. This approach was workable while programmers knew both the machine's details and their own applications intimately, but with the arrival of higher-level languages and ever-larger programs the overlays became unmanageable. To overcome this problem, Fotheringham proposed "dynamic storage allocation" in 1961, in which memory allocation is handled by software and hardware rather than by the programmer. With dynamic allocation in place, programmers could focus on problem solving rather than on hardware details, and they were given the illusion of a vast amount of memory at their disposal.
Now, let us see how this concept was implemented in the Atlas computer.
The main memory in Atlas is divided into a core store and a drum store. The processor has immediate access to the core store, but the core store is limited in size, so some data must be kept in the drum store. Swapping data between the core store and the drum store is handled by the SUPERVISOR, a housekeeping program that resides in memory. The address space covers about one million 48-bit words, and the store is organized in units of 512 words; note that a 512-word block of information and a 512-word unit of memory are distinct concepts. A 512-word unit of the core store is called a "page," and it can hold one block of information. Associated with each page is an 11-bit page address register that stores the number of the block currently held in that page.
The "address" of a word in the main memory is 20 bits long and consists of an 11-bit block address and a 9-bit position within the block. This address is distinct from the word's actual physical location, and that distinction is the core idea of the system. When a block needs to be loaded into memory it is assigned an address within the 20-bit range, and this address is made known to the SUPERVISOR. When the block is brought into the core store, the SUPERVISOR arranges for that area of core store to answer to the appropriate addresses, and it keeps track of the blocks held on the drum with the help of a table. During execution, if the program refers to a block that is not in the core store, an interrupt is sent to the SUPERVISOR, which loads the required block into memory. To make room for it, some block in the core store must be written back to the drum; the drum transfer "learning program" selects which block to write back.
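The address split and page lookup described above can be modelled in a few lines. This is a simplified sketch based on the description (an 11-bit block number and a 9-bit word index, with page-address registers recording which block each core page holds); the tiny core size, the function names, and the FIFO choice of victim block are assumptions made for illustration, not details of the real Atlas SUPERVISOR.

from collections import deque

WORDS_PER_BLOCK = 512        # 9-bit word index within a block
NUM_CORE_PAGES = 4           # assumed tiny core store, for illustration only

page_registers = {}                          # block number -> core page
page_order = deque(range(NUM_CORE_PAGES))    # FIFO order of core pages

def split_address(addr: int):
    block = addr >> 9                     # upper 11 bits: block number
    word = addr & (WORDS_PER_BLOCK - 1)   # lower 9 bits: word within block
    return block, word

def access(addr: int):
    """Return (core page, word offset), bringing the block in on a fault."""
    block, word = split_address(addr)
    if block not in page_registers:              # "page fault"
        page = page_order.popleft()              # choose a victim page (FIFO)
        for b, p in list(page_registers.items()):
            if p == page:                        # old occupant is written
                del page_registers[b]            # back to the drum store
        page_registers[block] = page
        page_order.append(page)
        print(f"fault: block {block} loaded into core page {page}")
    return page_registers[block], word

if __name__ == "__main__":
    for addr in (0x00000, 0x00200, 0x00201, 0x80000):
        print(access(addr))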
Fotheringham's "Dynamic Storage Allocation in the Atlas Computer, Including an Automatic Use of a Backing Store" is the first paper on the virtual memory concept in operating systems. Significant later work on virtual memory was done by Peter J. Denning at Princeton University; his survey paper "Virtual Memory" extends the ideas introduced here.
References:
1. https://bithin.wordpress.com/2013/12/23/evolution-of-virtual-memory/
2. http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6448096&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F6%2F6373644%2F06448096.pdf%3Farnumber%3D6448096