Problem 2: 1) Determine if 2 arrays contain the same elements 2) Counting total duplicated characters in a string 3) Finding an empty row in a 2-D array
Question
Problem 2:
1) Determine if 2 arrays contain the same elements
2) Counting the total number of characters that have a duplicate within a string (e.g. "gigi the gato" would result in 7: g × 3 + i × 2 + t × 2)
3) Finding an empty row in a 2-D array where empty is defined as an element with a 0 entry.
For each problem segment given above, do the following:
- Create an algorithm to solve the problem
- Identify the factors that would influence the running time, and which can be known before the algorithm or code is executed. Assign names (such as n) to each factor.
- Identify the operations that must be counted. You need not count every statement separately. If a group of statements always executes together, treat the group as a single unit. If a method is called, and you do not know the running time of that method, count it as a single operation.
- Count the operations performed by the algorithm or code. Express the count as a function of the factors you identified in Step 2. If the count cannot be expressed as a simple function of those factors, define the bounds that can be placed on the count: the best case (lower bound) and worst case (upper bound).
- Determine the best-case inputs, the worst-case inputs, and the efficiency of your implementation
- Transform your count formula into big-O notation by:
- Taking the efficiency with worst case input
- Dropping insignificant terms.
- Dropping constant coefficients.
Explanation / Answer
1) Determine if 2 arrays contain the same elements:
Ans)
Executed Time = 2*array1.length - 1 + 2*array2.length - 1 + 5 = 2*array1.length + 2*array2.length + 3.
If array1.length = array2.length = N, then
Executed Time = 4N + 3.
It is O(1) in the best case and O(N) (per array) in the worst case.
1. Hash tables aren't guaranteed O(1) lookups. A hash-based solution lands somewhere between O(N log N) and O(N²), depending on the hash table implementation.
2. The O(1) case is when the arrays fail to match early; otherwise it is O(N) twice over in the worst case and O(N) in the best case. Hash tables are O(1) only when there is no chance of a collision.
3. Upon collision, a hash map is O(n) in most cases because it stores the colliding entries in a linked list. There are better approaches, though, and you should hardly see collisions anyway; if you did, the hash map would be useless. In all regular cases it is simply O(1), and since a single hash map is unlikely to see more than a small number of collisions, you can safely call it O(1), or almost O(1), because that small n can be ignored.
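To tie the discussion above together, here is a minimal Java sketch of a count-based comparison (my own illustration, not the original poster's code; the name sameElements and the choice of HashMap are assumptions). It treats "contain the same elements" as same values with the same multiplicities: one pass builds counts for array1 and a second pass consumes them against array2, matching the 2*array1.length + 2*array2.length flavor of the count above, and it is expected O(N), degrading only under heavy hash collisions as noted in the comments above.

import java.util.HashMap;
import java.util.Map;

public class SameElements {
    // Expected O(N): one counting pass over each array.
    static boolean sameElements(int[] a, int[] b) {
        if (a.length != b.length) return false;      // O(1) early rejection (the best case)
        Map<Integer, Integer> counts = new HashMap<>();
        for (int x : a) {                            // first pass: count array1's elements
            counts.merge(x, 1, Integer::sum);
        }
        for (int x : b) {                            // second pass: consume counts with array2
            Integer c = counts.get(x);
            if (c == null) return false;             // element of b never appeared in a
            if (c == 1) counts.remove(x);
            else counts.put(x, c - 1);
        }
        return true;                                 // equal lengths and every count consumed
    }

    public static void main(String[] args) {
        System.out.println(sameElements(new int[]{1, 2, 2}, new int[]{2, 1, 2})); // true
        System.out.println(sameElements(new int[]{1, 2, 3}, new int[]{1, 2, 2})); // false
    }
}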
2) Counting the total number of characters that have a duplicate within a string (e.g. "gigi the gato" would result in 7: g × 3 + i × 2 + t × 2):
Ans)
int n = s.length(); // one time
Executed Time = N + N² + (N−1)² + N − 1 + 6 = N² + (N−1)² + 2N + 5
So the number of operations required by this loop is
{1 + (N+1) + N} = 2N + 2 (one initialization, N+1 condition tests, and N increments).
Note: this may still be wrong, as I am not confident in my understanding of how to calculate time complexity.
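For reference, here is a runnable sketch of the nested-loop algorithm that the N²-style count above suggests (my reconstruction; the class and method names are made up). Note that it skips spaces so that "gigi the gato" yields 7 (g × 3 + i × 2 + t × 2), matching the example in the problem statement.

public class DuplicateCount {
    // Counts the positions (spaces excluded) holding a character that occurs more than once in s.
    static int countDuplicatedChars(String s) {
        int n = s.length();                   // one time
        int count = 0;
        for (int i = 0; i < n; i++) {         // outer loop: N iterations
            char c = s.charAt(i);
            if (c == ' ') continue;           // assumption: whitespace is not counted
            for (int j = 0; j < n; j++) {     // inner loop: up to N comparisons
                if (j != i && s.charAt(j) == c) {
                    count++;                  // position i holds a duplicated character
                    break;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countDuplicatedChars("gigi the gato")); // prints 7
    }
}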
What I want to know:
OK, so I think I can handle these small, basic calculations, but in most cases I have seen time complexities such as
O(N), O(n²), O(log n), O(n!), and many others.
Can anyone help me understand how one calculates the time complexity of an algorithm? I am sure there are plenty of newbies like me wanting to know this.
3) Finding an empty row in a 2-D array where empty is defined as an element with a 0 entry.
Ans)
Executed Time = 5 + 3N + 2(N·M), which is O(N·M) for an N×M array.
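A minimal sketch, under the assumption that a row counts as "empty" when every entry in it is 0 (the name findEmptyRow is mine, not from the original answer). The nested scan touches each element at most once, so in the worst case (no empty row, or the empty row comes last) it does work proportional to N·M, consistent with the count above:

public class EmptyRowFinder {
    // Returns the index of the first all-zero row, or -1 if none exists.
    static int findEmptyRow(int[][] a) {
        for (int r = 0; r < a.length; r++) {          // up to N rows
            boolean empty = true;
            for (int c = 0; c < a[r].length; c++) {   // up to M columns per row
                if (a[r][c] != 0) {                   // non-zero entry: row is not empty
                    empty = false;
                    break;
                }
            }
            if (empty) return r;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[][] grid = { {1, 2}, {0, 0}, {3, 0} };
        System.out.println(findEmptyRow(grid));       // prints 1
    }
}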
This bound is conveniently denoted with Landau notation (or big-O notation), and in the case of O(f(n)) it is an upper bound: for inputs of size n, the algorithm's complexity is guaranteed not to exceed (a constant times) f(n). Most of the time it is clear from context what "unit" this bound is measured in. Note that runtime measured in actual time units (seconds, minutes, etc.) is frowned upon, as you can't compare it meaningfully across different computers. Typically a "costly" elementary operation is identified, like compares and exchanges in sorting algorithms, pushes and pops if the algorithm uses a stack, or updates to a tree data structure used in the algorithm. This elementary operation is understood to be the dominating one, i.e., the one with the largest contribution to the algorithm's complexity, or perhaps one that is more expensive than another in a particular setting (multiplications are considered more expensive than additions, for example). It has to be selected so that the cost of the other operations is (at most) proportional to the number of elementary operations.
This upper bound is called the worst-case bound or worst-case complexity of the algorithm, because it has to hold for all inputs of size n, and the worst-case inputs incur the highest (worst) cost. The actual performance for a specific input of size n may be a lot lower than this upper bound.
While it is possible to do an exact analysis, it is usually much more involved to arrive at an exact result. An exact analysis also means accounting for all operations in the algorithm, which requires a rather detailed implementation, while counting elementary operations can usually be done from a mere sketch of the algorithm. Such an analysis is nice when possible, but it is not always necessary: because small inputs are not much of a problem, you want to learn what happens when the input size n gets large. Knowing that the algorithm's complexity is bounded by 5n² + 3n·log(n) is nice but overkill when looking at the performance asymptotically (n → ∞), because the term n² dominates the other for large n. So proving an upper bound of O(n²) suffices.
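As a quick worked check (my addition, using the bound above) that dropping the lower-order term is justified: since log(n) ≤ n for all n ≥ 1,

5n² + 3n·log(n) ≤ 5n² + 3n·n = 8n² for all n ≥ 1,

so the whole expression is at most a constant (here 8) times n², which is exactly what 5n² + 3n·log(n) ∈ O(n²) asserts.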
1. Time complexity
In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
2. Big O notation
The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity.
For example, if the time required by an algorithm on all inputs of size n is at most 5n³ + 3n, the asymptotic time complexity is O(n³). More on that later.
A few more examples:
3. O(1) Constant Time:
An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size.
Examples:
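A typical constant-time operation is indexing into an array; here is a small Java sketch (mine, for illustration):

public class ConstantTime {
    // O(1): a single array access, independent of arr.length.
    static int firstElement(int[] arr) {
        return arr[0]; // assumes arr is non-empty
    }

    public static void main(String[] args) {
        System.out.println(firstElement(new int[]{42, 7, 9})); // prints 42
    }
}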
4. O(n) Linear Time
An algorithm is said to run in linear time if its execution time is directly proportional to the input size, i.e. time grows linearly as the input size increases.
Consider the following example: below, I search linearly for an element, which has a time complexity of O(n).
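A standard linear-search sketch in Java (my illustration of the description above; the class and method names are made up):

public class LinearSearch {
    // Returns the index of target in arr, or -1 if absent.
    static int linearSearch(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {  // up to n comparisons
            if (arr[i] == target) {
                return i;                       // best case: found at index 0, O(1)
            }
        }
        return -1;                              // worst case: all n elements examined, O(n)
    }

    public static void main(String[] args) {
        System.out.println(linearSearch(new int[]{4, 8, 15, 16}, 15)); // prints 2
    }
}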