Question
In this class, we have learned that Big-O can be used to predict the running-time cost of a program. For example, with QuickSort, we learned that it will run in O(n log n) time for a data set of n elements to be sorted.
However, several highly optimized C libraries that implement QuickSort switch to InsertionSort when n is sufficiently small (say, about 12 elements), even though InsertionSort runs in O(n^2) time.
Describe a reason why this might be the case and why our understanding of Big O does not match real-world costs.
Explanation / Answer
Big-O describes only the asymptotic growth rate of the running time; it deliberately ignores constant factors and lower-order terms. For small n, those hidden constants dominate: InsertionSort has a tight inner loop with very low per-element overhead and good cache behavior, while QuickSort pays for recursive calls, partitioning, and pivot selection on every subarray. Below a small threshold (around 12 elements), InsertionSort's small constant beats QuickSort's larger one even though n^2 grows faster than n log n. Only as n becomes large does the gap between n^2 and n log n outweigh the constants, which is why libraries use QuickSort for the bulk of the work and hand off the tiny subarrays to InsertionSort.