How can we add $n$ positive integers with binary expansions of $l_1, l_2, \ldots, l_n$ bits in $O(\sum_i l_i)$ time?
Question
How can we add $n$ positive integers with binary expansions of $l_1, l_2, \ldots, l_n$ bits so that the total complexity is $O(\sum_{i=1}^n l_i)$? More importantly, how can we show this complexity using amortized analysis (the potential method)?
I know that elementary-school addition of two numbers of lengths $r$ and $s$ is $O(r+s)$, and hence the addition of the $n$ integers is $O(\sum_i l_i)$. However, what potential function would you use to prove that bound? I don't seem to have any intuition for that... Maybe a function similar to the standard binary counter example (the number of $1$'s in the binary representation of the number)?
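For concreteness, the potential I have in mind comes from the standard binary-counter analysis (I am not sure it transfers to this problem): take
$$ \Phi(c) = \#\{\text{bits of } c \text{ equal to } 1\}. $$
An increment that flips $t$ trailing $1$'s to $0$ and one $0$ to $1$ has actual cost $t+1$ and changes the potential by $1-t$, so its amortized cost is $(t+1) + (1-t) = 2 = O(1)$.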
Explanation / Answer
You should be slightly more careful in your argument. Let the actual numbers you are adding be $x_1,\ldots,x_n > 0$, where $\ell_i = \lfloor \log_2 x_i \rfloor + 1$. The complexity of adding two numbers $x,y>0$ is $O(\log(x+y))$. In our case, assuming we do the additions sequentially, the numbers we add are $x_i$ and $x_1 + \cdots + x_{i-1}$, for $i=2,\ldots,n$. Therefore the entire complexity is $$ O\Big(\sum_{i=2}^n \log (x_1 + \cdots + x_i)\Big). $$ Suppose now that we arrange the numbers in non-decreasing order $x_1 \leq x_2 \leq \cdots \leq x_n$. Then $$ \sum_{i=2}^n \log (x_1 + \cdots + x_i) \leq \sum_{i=2}^n \log (i x_i) \leq n\log n + \ell_1 + \cdots + \ell_n. $$ This bound is tight (up to constant factors) when all $\ell_i$ are equal. In particular, when adding $1$ to itself $n$ times in this fashion, the complexity is $O(n\log n)$ rather than $O(n)$.
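Here is a minimal sketch (in Python, under the assumed cost model above: each two-number addition is charged the bit length of its sum, i.e. $\Theta(\log(x+y))$) illustrating the sequential strategy; the function names are my own, for illustration only.

```python
# Assumed cost model (not a real benchmark): adding two positive integers
# x and y is charged Theta(log(x+y)) bit operations, modeled as the bit
# length of their sum.
def bit_cost(x, y):
    return (x + y).bit_length()

def sequential_sum_cost(xs):
    """Sum xs left to right in non-decreasing order; return (sum, charged cost)."""
    xs = sorted(xs)                 # the ordering used in the argument above
    total, cost = xs[0], 0
    for x in xs[1:]:
        cost += bit_cost(total, x)  # cost of adding the next number to the running sum
        total += x
    return total, cost

if __name__ == "__main__":
    n = 1 << 10
    # Worst case for this strategy: n copies of 1. The bit lengths sum to n,
    # yet the charged cost grows like n * log2(n), matching the bound above.
    total, cost = sequential_sum_cost([1] * n)
    print(total, cost)              # total = 1024, cost of order n * log2(n)
```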
Can you think of a better way?