

Question

3 Problem - Compiler Optimization and Aliasing

Assume the following program fragment without any control-flow branches (straight-line code). Your job is to implement a compiler optimization called "constant folding" for straight-line code. This optimization identifies program variables whose values are known at compile time. Expressions that consist of only such variables can be evaluated at compile time. In our project, we do constant folding on ILOC instructions, not on the source code itself.

Would it always be safe for the compiler optimization of constant folding to replace the assignment "c = a-b" with "c = 1"? Note that there are no assignments to the variables a or b between "b = 3" and "c = a-b", and the control flow is linear, so there are no branches. Give an example where constant propagation would not be safe (incorrect) in this situation, without violating any of the above assumptions about the code fragment. Note: you can add declarations of other variables and other statements that do not mention a or b.

Explanation / Answer

Answer:

Optimization techniques can be grouped by scope, ranging from a single statement to the entire program. Generally speaking, locally scoped techniques are easier to implement than global ones but result in smaller gains. Some examples of scopes include:

Peephole optimizations
Usually performed late in the compilation process after machine code has been generated. This form of optimization examines a few adjacent instructions (like "looking through a peephole" at the code) to see whether they can be replaced by a single instruction or a shorter sequence of instructions. For instance, a multiplication of a value by 2 might be more efficiently executed by left-shifting the value or by adding the value to itself.

Local optimizations
These only consider information local to a basic block. Since basic blocks have no control flow, these optimizations need very little analysis, but this also means that no information is preserved across jumps.

Global optimizations
These are also called "intraprocedural methods" and act on whole functions. This gives them more information to work with, but often makes expensive computations necessary. Worst-case assumptions have to be made when function calls occur or global variables are accessed, because little information about them is available.

Loop optimizations
These act on the statements which make up a loop, such as a for loop (e.g., loop-invariant code motion). Loop optimizations can have a significant impact because many programs spend a large percentage of their time inside loops.

Prescient store optimizations
These allow store operations to occur earlier than would otherwise be permitted in the context of threads and locks. The process needs some way of knowing in advance what value will be stored by the assignment that it should have followed. The purpose of this relaxation is to let compiler optimizations perform certain kinds of code rearrangement that preserve the semantics of properly synchronized programs.

Interprocedural, whole-program or link-time optimization
These analyze all of a program's source code. The greater quantity of information extracted means that optimizations can be more effective than when they only have access to local information. This kind of analysis can also enable new techniques, for instance function inlining, where a call to a function is replaced by a copy of the function body.

Machine code optimization
These analyze the executable task image of the program after all of the executable machine code has been linked. Some techniques that can be applied in a more limited scope, such as macro compression (which saves space by collapsing common sequences of instructions), are more effective when the entire executable task image is available for analysis.