Question
1) As you know, a single CPU processes one instruction at a time. Adding a second CPU (or core, in current terminology) allows the system to process two instructions at a time, simultaneously, effectively doubling the processing power of the system. A third core will offer triple the processing power of a single CPU, and so on. However, studies have shown that, in general, the expected increase in computing power starts to decline when the number of cores grows large, beyond eight or so. Why would you expect this to be the case? For what types of computing problems might this not be true?
2) Carefully describe the advantages and disadvantages of master-slave multiprocessing and symmetrical multiprocessing. Which would you select for fault-tolerant computing? Why?
3) Describe the trade-offs between the memory cache write-through and write-back techniques.
Explanation / Answer
Solution:
1)
The decline is not caused by the cores themselves but by the limits of parallelism. First, most programs contain a serial portion that cannot be split across cores; by Amdahl's law, that serial fraction caps the achievable speedup no matter how many cores are added. Second, the cores share resources — the memory bus, caches, and I/O — so as their number grows they increasingly contend with one another, and the operating system spends more time coordinating them. Beyond roughly eight cores, these overheads typically begin to outweigh the added raw processing power, so each additional core contributes less than the one before it.
This is not true for problems that are "embarrassingly parallel," where the work divides into many independent pieces that need little coordination: rendering the frames of an animation, Monte Carlo simulation, serving independent web requests, or large scientific computations. Supercomputers exploit exactly this — thousands of cores yield extraordinary computing power, provided the programs are written to use the parallel processors efficiently.
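The cap imposed by the serial fraction can be made concrete with Amdahl's law: if a fraction s of a program is inherently serial, the speedup on n cores is at most 1/(s + (1 - s)/n). A small sketch (the 5% serial fraction is an illustrative assumption, not a figure from the question):

```python
# Amdahl's law: best-case speedup on n cores when a fraction s of the
# work is inherently serial and cannot be parallelized.
def amdahl_speedup(n, s):
    return 1.0 / (s + (1.0 - s) / n)

# Even a modest 5% serial fraction sharply limits the benefit of more cores.
for n in (2, 4, 8, 16, 1000):
    print(n, round(amdahl_speedup(n, s=0.05), 2))
```

Note that with s = 0.05, even 1000 cores give a speedup under 20x — which is why only workloads with s near zero (the embarrassingly parallel ones) keep scaling as cores are added.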
2)
Master-slave multiprocessing: the master CPU runs the operating system, schedules work, and parcels tasks out to the slave CPUs. Its advantages are simplicity — only the master touches OS data structures, so little synchronization is needed — and straightforward sharing of page and disk buffers, with work distributed so that no slave sits idle. Its disadvantages are that the master becomes a bottleneck as the number of CPUs grows, and that it is a single point of failure: if the master fails, the whole system stops.
Symmetrical multiprocessing (SMP): there is one copy of the operating system that can be run by any CPU, so the load balances naturally and no single processor is a bottleneck. The disadvantage is complexity — this type is more complicated than master-slave because several CPUs may try to update shared kernel data at the same time, which requires careful synchronization.
For fault-tolerant computing I would select SMP. Because every CPU is equivalent, the failure of any one processor merely reduces capacity, and the remaining CPUs continue to run the operating system. In a master-slave design, the failure of the master brings down the entire system.
3)
Difference between write-back and write-through:
Write-back:
A caching technique in which modifications made to data in the cache are not written to main memory until necessary. The data is written into the cache every time a change occurs, but not to main memory: main memory holds the original, and the cache holds the working copy, so for a time the modification is not reflected in main memory. This creates a risk of losing the modified data if the system crashes.
When data is updated in write-back mode, the copy stored in the cache is considered fresh ("dirty") and the corresponding, unmodified copy in main memory is considered stale. Before another consumer reads that location, the cache controller must update main memory so that the application sees the current value.
Write-back shows better performance than write-through because the number of writes to main memory is low: memory is written only when required (for example, when a cache line is evicted) or at intervals, so the average write latency is low.
Write-through:
A caching technique in which modifications made to data are written to the cache and to main memory at the same time. The cache still allows fast retrieval of the data, and main memory always holds a current copy, ensuring that data will not be lost if a system crash occurs.
The number of main-memory writes is higher than with write-back, so writes are somewhat slower, but the risk of data loss is reduced: an exact copy of the data is always in main memory.
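The trade-off above can be sketched with a toy single-block cache that just counts main-memory writes (a hypothetical illustration of the two policies, not a real cache implementation):

```python
# Toy single-block cache contrasting the two write policies (illustrative only).
class ToyCache:
    def __init__(self, write_back):
        self.write_back = write_back
        self.value = None          # the cached copy of the block
        self.dirty = False         # modified in cache but not yet in memory?
        self.memory_writes = 0     # how many times main memory was written

    def write(self, value):
        self.value = value
        if self.write_back:
            self.dirty = True        # defer the memory write
        else:
            self.memory_writes += 1  # write-through: update memory every time

    def flush(self):
        if self.dirty:               # write-back pays the cost once, on eviction
            self.memory_writes += 1
            self.dirty = False

wb = ToyCache(write_back=True)
wt = ToyCache(write_back=False)
for v in range(100):                 # 100 updates to the same block
    wb.write(v)
    wt.write(v)
wb.flush()
print(wb.memory_writes, wt.memory_writes)  # → 1 100
```

The counts show the trade-off directly: write-back touched main memory once, so it is fast but the data was at risk until the flush; write-through touched it 100 times, so it is slower but main memory was always current.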