Question
Page replacement algorithms have to make decisions about what pages to evict when a system does not have enough free pages to handle the current page fault rate.
1.) Why might we consider a page replacement algorithm to have made a bad decision about an eviction?
2.) Generally, page replacement algorithms try to avoid evicting pages that have been modified since they were last brought into memory. Explain why this is so.
3.) If the OPT algorithm is so good at figuring out the minimum number of page faults we could have in a system under a certain load, why don’t we just implement the OPT algorithm as the page replacement algorithm for an operating system?
4.) Most systems implement some variation of the LRU algorithm to do page replacement. Briefly explain how the LRU algorithm works, and why it is so successful.
Explanation / Answer
Solution:
1)
An eviction is a bad decision when the evicted page is referenced again soon afterwards. The page then has to be faulted straight back in, so the eviction bought nothing: we paid for a disk read (and possibly a write-back) only to need the same page again almost immediately. If the algorithm repeatedly evicts such "hot" pages, most memory accesses become misses, the page fault rate climbs, and system performance degrades badly; in the extreme, the system spends more time servicing faults than doing useful work.
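The cost of bad eviction choices can be seen with a toy simulator. This is only an illustrative sketch: the reference string, frame count, and the two victim-selection policies (one that pathologically evicts the page needed soonest, one that evicts the page needed furthest in the future) are invented for the example.

```python
def count_faults(refs, capacity, pick_victim):
    """Replay a reference string, counting page faults.
    pick_victim(frames, refs, i) chooses which resident page to evict."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit: no fault
        faults += 1
        if len(frames) == capacity:
            frames.remove(pick_victim(frames, refs, i))
        frames.append(page)
    return faults

refs = ["A", "B", "C", "A", "B", "C", "A", "B"]

def evict_soonest(frames, refs, i):
    """Deliberately bad: evict the resident page that will be needed soonest."""
    future = refs[i + 1:]
    return min(frames, key=lambda p: future.index(p) if p in future else len(future))

def evict_furthest(frames, refs, i):
    """Better: evict the page whose next use is furthest away."""
    future = refs[i + 1:]
    return max(frames, key=lambda p: future.index(p) if p in future else len(future))

print(count_faults(refs, 2, evict_soonest))   # bad choices: every access faults
print(count_faults(refs, 2, evict_furthest))  # good choices: several hits
```

On this string with two frames, the bad policy faults on all eight references, while the better policy faults only five times.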
2)
A page that has not been modified since it was brought into memory (a clean page) still has an identical copy on disk, so it can be evicted simply by discarding it; the frame is free immediately. A modified (dirty) page, however, must first be written back to disk before its frame can be reused, or the changes would be lost. Evicting a dirty page therefore costs an extra disk write on top of the read for the incoming page, roughly doubling the I/O for the replacement. Given a choice, replacement algorithms prefer clean victims because they are cheaper to evict.
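The cost difference can be expressed as a tiny cost model. This is a hypothetical sketch, not a real OS interface; the class, the dirty flag, and the unit cost are assumptions made for illustration.

```python
DISK_WRITE_COST = 1  # one disk I/O needed to flush a modified page

class Frame:
    """A resident page frame with a hardware-maintained dirty bit."""
    def __init__(self, page, dirty=False):
        self.page = page
        self.dirty = dirty

def eviction_cost(frame):
    """A clean page can simply be discarded, since its on-disk copy is
    still valid; a dirty page must be written back before the frame
    can be reused."""
    return DISK_WRITE_COST if frame.dirty else 0

print(eviction_cost(Frame("A", dirty=False)))  # clean: free to discard
print(eviction_cost(Frame("B", dirty=True)))   # dirty: one extra write
```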
3)
Yes, the optimal (OPT, or Belady's) page replacement algorithm achieves the minimum possible number of page faults. The problem is that it cannot be implemented in a real operating system: to choose its victim, OPT must know, at the moment of each fault, which resident page will be referenced furthest in the future. An operating system has no knowledge of future memory accesses; it only sees references as they happen. OPT is therefore useful only as an off-line yardstick, run over a recorded reference string to measure how close a practical algorithm comes to the optimum. Real systems instead use algorithms such as LRU that approximate OPT by using past behaviour as a predictor of the future.
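OPT itself is easy to state as an off-line function; the sketch below makes its impracticality explicit, because the function must be handed the entire future reference string (`refs[i + 1:]`) to pick a victim. The helper name and reference string are chosen for this example.

```python
def opt_faults(refs, capacity):
    """Belady's OPT: on a fault with full frames, evict the resident
    page whose next use lies furthest in the future.  Note the crucial
    dependency: it inspects refs[i + 1:], the *entire* future reference
    string, which a running OS never has."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == capacity:
            future = refs[i + 1:]
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future
                                       else len(future) + 1)  # never used again: best victim
            frames.remove(victim)
        frames.append(page)
    return faults

print(opt_faults(list("ABCABCAB"), 2))  # minimum faults for this string
```

Such a routine is fine for analysing a trace after the fact, but inside the kernel the `future` list simply does not exist at fault time.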
4)
In the least recently used (LRU) algorithm, the page that was accessed longest ago is the one replaced when a new page must be brought in. The justification is locality of reference: programs tend to reuse the code and data they have touched recently. For example, when an instruction adds two operands and stores the result, that result was computed precisely so that later instructions could use it, so the memory just written is very likely to be read again shortly. Because a page used recently is likely to be used again soon, evicting the page that has gone unused the longest is usually a safe bet. LRU thus approximates OPT using only information about the past, which is why LRU and its approximations are so successful in practice.
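A minimal sketch of LRU, assuming an invented reference string, can be written with an ordered dictionary that keeps pages in recency order, so it needs only the past, never the future.

```python
from collections import OrderedDict

def lru_faults(refs, capacity):
    """LRU page replacement: on every access, move the page to the
    'most recently used' end; on a fault with full frames, evict the
    page at the 'least recently used' end."""
    frames = OrderedDict()
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # hit: refresh recency
            continue
        faults += 1
        if len(frames) == capacity:
            frames.popitem(last=False)     # evict least recently used
        frames[page] = True
    return faults

# A reference string with locality: recently used pages are reused.
print(lru_faults(["A", "A", "B", "A", "B", "B", "C", "B"], 2))
```

On this locality-heavy string, LRU faults only on the three first-time references; every repeated access hits, which is exactly the behaviour that makes it work well on real workloads.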