


Question

Moore’s Law has ruled the development of computers for over 50 years (it has actually been officially tracked since 1958).

The trend is evident in the High Performance Computing (HPC) field, as shown in the accompanying diagram from the Top500 website.

Notice that in the period depicted in the diagram, from 1994 to 2016, there were dramatic changes in the system architecture of supercomputers. Hardware evolved from vector computers to massively parallel computers to clusters and now to accelerators. In the meantime, parallel computing tools shifted from PVM to MPI and OpenMP. Popular interconnects changed a few times between proprietary connectors and fast switches. The diagram is vivid testimony that Moore’s Law has been sustained by the combined effort of many advances working together.

The Top500 website contains a wealth of statistics on these fast computer systems. Browse the website, then answer Questions 1-3 using your knowledge of operating systems (Question 4 requires further reading of extra material).

2) Describe improvements that should be made to the currently popular OS (your answer to Question 1) with regard to processes and memory management, to make it more suitable for HPC over the next five years.

[Figure: Top500 "Performance Development" chart, with performance on a logarithmic scale (100 MFlops up to 100 TFlops and beyond) plotted against the years 1996-2016.]

Explanation / Answer

1) The next step in parallel processing was the introduction of multiprocessing. In these systems, two or more processors shared the work to be done. The earliest versions had a master/slave configuration. One processor (the master) was programmed to be responsible for all of the work in the system; the other (the slave) performed only those tasks it was assigned by the master. This arrangement was necessary because it was not then understood how to program the machines so they could cooperate in managing the resources of the system.
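As a concrete illustration of the master/slave (now usually called master/worker) idea, here is a minimal sketch in C using MPI, one of the parallel computing tools named in the question. The task itself (squaring an integer) and the rank numbering are illustrative assumptions, not part of the original answer:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in total? */

    if (rank == 0) {
        /* Master: assigns a task to every worker, then collects the results. */
        for (int w = 1; w < size; w++) {
            int task = w * 10;
            MPI_Send(&task, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        }
        for (int w = 1; w < size; w++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("master received %d from worker %d\n", result, w);
        }
    } else {
        /* Worker: performs only the task assigned by the master. */
        int task, result;
        MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = task * task;
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

With a typical MPI installation this compiles with mpicc and runs with, for example, mpirun -np 4 ./a.out.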

2) As the number of processors in SMP systems increases, the time it takes for data to propagate from one part of the system to all other parts also grows. When the number of processors is somewhere in the range of several dozen, the performance benefit of adding more processors to the system is too small to justify the additional expense. To get around the problem of long propagation times, message passing systems were created. In these systems, programs that share data send messages to each other to announce that particular operands have been assigned a new value. Instead of broadcasting an operand's new value to all parts of a system, the new value is communicated only to those programs that need to know it. Instead of a shared memory, there is a network to support the transfer of messages between programs. This simplification allows hundreds, even thousands, of processors to work together efficiently in one system. (In the vernacular of systems architecture, these systems "scale well.") Hence such systems have been given the name of massively parallel processing (MPP) systems.
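To make the contrast with shared memory concrete, here is a minimal C/MPI sketch of the idea described above: when a process computes a new value for an operand, it sends that value only to the one process that needs it, rather than broadcasting it to everyone. The value 42.0 and the choice of ranks 0 and 1 are purely illustrative:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    double x = 0.0;                 /* each process has its own private copy */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        x = 42.0;                   /* rank 0 assigns the operand a new value */
        /* announce the new value only to rank 1, the process that needs it */
        MPI_Send(&x, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received the updated value %.1f\n", x);
    }
    /* all other ranks are not involved; there is no shared memory to update */

    MPI_Finalize();
    return 0;
}

Run it with at least two processes (e.g., mpirun -np 2 ./a.out). Because the update travels over the interconnect as a message rather than through a shared memory, the same pattern scales to hundreds or thousands of processes, which is exactly the property that gives MPP systems their name.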
