Academic Integrity: tutoring, explanations, and feedback — we don’t complete graded work or submit on a student’s behalf.

Question

1. Explain why 8 bits equal one byte.

2. Write an if decision statement for the following:

(a) An unsigned long integer variable that is greater than or equal to 100

(b) A string variable (you choose a name for the variable) that is equal to "No" OR "no"

3. Write an if statement that first checks whether the identifier divisor is not equal to 0 and, if it is not equal to zero, does the computation quotient = dividend / divisor.

4. Given the following if statement:
if (x < 12 && x > 2)

What values of x will make this if statement true?

5. Look at the following program:

#include <iostream>
using namespace std;

int main()
{
    int exponent = 1;
    int three = 1;
    while (exponent <= 5)
    {
        three = three * 3;
        cout << three << endl;
        ++exponent;
    }
    return 0;
}

(a) What is displayed?
(b) Change the while loop (and body of the loop) to a for loop.

6. Explain the problem with the following code segment and correct it:

int main()
{
    int add = 1, sum = 0;
    int number;
    while (add <= 10)
    {
        cin >> number;
        sum = sum + number;
        cout << sum << endl;
        ++sum
    }
    return 0;
}

7. Write a function to compute x³; recall x³ = x · x · x. The function should take a floating point value for x as input and return a floating point value equal to x³.

8. Use a while loop (or for loop) in a function to multiply the first n integers. That is, compute 1 · 2 · 3 · 4 · ... · n. Your function should take the value of the integer n as input and return the computed product (perhaps as an unsigned long int). For example, if n is 6, the function will compute 1 · 2 · 3 · 4 · 5 · 6 and return the value of 720 to main() or the calling function.

Explanation / Answer

1) Answer:

It is largely a matter of historical convention.

Byte
Historically, the byte was the number of bits used to encode a single character of text in a computer, and for this reason it is the smallest addressable unit of memory in many computer architectures. The size of the byte has historically been hardware dependent, and no definitive standard mandated a particular size. The de facto standard of eight bits is a convenient power of two, permitting the values 0 through 255 in one byte.


The C standard requires that the char type be at least 8 bits wide, so an unsigned char can hold at least 256 different values (clause 5.2.4.2.1). Various implementations of C and C++ have reserved 8, 9, 16, 32, or 36 bits for the storage of a byte. The actual number of bits in a particular implementation is documented as CHAR_BIT in the limits.h header (climits in C++).
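
If you want to see this value on your own machine, here is a minimal sketch (assuming only a standard-conforming C++ compiler; <climits> is the C++ spelling of limits.h):

#include <iostream>
#include <climits>   // defines CHAR_BIT

int main()
{
    // CHAR_BIT is the implementation-defined number of bits in one byte (one char).
    // On essentially all modern desktop and server systems this prints 8.
    std::cout << "Bits in one byte on this implementation: " << CHAR_BIT << std::endl;
    return 0;
}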

In 1964, IBM's new System/360 computers came onto the market and set the de facto worldwide standard of the 8-bit byte, making 12-bit and 36-bit word machines almost instantly obsolete (Computer History Museum).

Byte sizes from 1 to 48 bits are known to have been used in the past. The modern de facto standard of eight bits was documented in ISO/IEC 2382-1:1993, and the international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits, and processor designers optimize for this common usage; the popularity of major commercial computing architectures has also aided in the ubiquitous acceptance of the eight-bit size.
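
As a quick illustration of the power-of-two point, the following sketch (illustrative only; the comments assume the usual case where CHAR_BIT is 8) counts how many distinct values one byte can hold:

#include <iostream>
#include <climits>   // CHAR_BIT

int main()
{
    // The number of distinct values representable in CHAR_BIT bits is 2 raised to CHAR_BIT.
    // With the usual CHAR_BIT of 8 this is 256, i.e. the values 0 through 255.
    unsigned long values = 1UL << CHAR_BIT;

    std::cout << "A " << CHAR_BIT << "-bit byte can hold " << values
              << " distinct values (0 through " << values - 1 << ")." << std::endl;
    return 0;
}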

The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE), in contrast to the bit, whose IEEE symbol is a lower-case b. Internationally, the unit octet, symbol o, explicitly denotes a sequence of eight bits, eliminating the ambiguity of the byte.
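
To see why the distinction between bits (b) and bytes (B) matters in practice, here is a small sketch (the quantity is made up for the example) that converts a count of bits into bytes by dividing by CHAR_BIT:

#include <iostream>
#include <climits>   // CHAR_BIT

int main()
{
    // A hypothetical quantity quoted in bits (b), e.g. from a network data-rate figure.
    unsigned long long sizeInBits = 80000000ULL;            // 80,000,000 bits

    // Dividing by the bits-per-byte count converts it to bytes (B).
    unsigned long long sizeInBytes = sizeInBits / CHAR_BIT; // 10,000,000 bytes

    std::cout << sizeInBits << " bits = " << sizeInBytes << " bytes" << std::endl;
    return 0;
}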