Question

C# provides a data type called decimal for use in situations where one must do calculations such as monetary calculations.

What is the difference between the float, double, and decimal data types in C#?

Why is the decimal data type more appropriate to situations that involve computations with money?

Suppose that the language did not support the decimal data type (think C++). What data type makes sense to use in this case? Justify your answer. (Hint #1: it's not floating-point. Hint #2: Think about what the base unit of currency is.)

Explanation / Answer

1)
The float, double, and decimal types differ in size, precision, and representation:

float   ---> 32-bit binary floating point ---> ~7 significant digits
double  ---> 64-bit binary floating point ---> ~15-16 significant digits
decimal ---> 128-bit decimal floating point ---> 28-29 significant digits

Because decimal uses 128 bits and stores values in base 10, it has the greatest precision and accuracy of the three, but its arithmetic is also slower than float or double.
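For illustration, here is a small C# sketch (assuming a console program) showing the three types side by side; the exact digits printed can vary slightly by .NET version:

using System;

class PrecisionDemo
{
    static void Main()
    {
        float f = 1.0f / 3.0f;     // 32-bit binary float, ~7 significant digits
        double d = 1.0 / 3.0;      // 64-bit binary float, ~15-16 significant digits
        decimal m = 1.0m / 3.0m;   // 128-bit decimal float, 28-29 significant digits

        Console.WriteLine(f);  // roughly 0.3333333
        Console.WriteLine(d);  // roughly 0.333333333333333
        Console.WriteLine(m);  // 0.3333333333333333333333333333
    }
}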

2)
With money, even tiny fractions of a cent matter, because rounding errors accumulate over many items or transactions.
For example, suppose an item costs $1.005 per unit and the half cent is lost to rounding:
for 100 units the error is $0.50
for 1,000 units it grows to $5
for 10,000 units it grows to $50
...and it keeps growing with volume, so the computed total drifts further from the true cost.
Binary floating-point types (float and double) cannot represent most decimal fractions such as 0.10 or 1.005 exactly, so these small errors creep into every calculation. decimal stores values in base 10 with 28-29 digits of precision, so amounts like $1.005 are represented exactly, which is why it is preferable for money.
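A minimal sketch of how the error creeps in with a binary type but not with decimal (the exact double total varies, but it will not be exactly 1000):

using System;

class RoundingDemo
{
    static void Main()
    {
        double dTotal = 0.0;
        decimal mTotal = 0.0m;

        // Add ten cents 10,000 times.
        for (int i = 0; i < 10000; i++)
        {
            dTotal += 0.10;    // 0.10 has no exact binary representation
            mTotal += 0.10m;   // 0.10 is stored exactly in base 10
        }

        Console.WriteLine(dTotal); // slightly off from 1000
        Console.WriteLine(mTotal); // exactly 1000.00
    }
}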

3)
If the language did not support decimal (as in C++), the sensible choice is an integer type such as a 64-bit long, storing every amount in the smallest unit of the currency (cents) rather than in dollars.
Integer arithmetic is exact, so no rounding error occurs at all: $19.99 is simply stored as 1999 cents, and addition and multiplication stay exact. A floating-point type like double would reintroduce the rounding problems described in the second answer, because it still cannot represent most decimal fractions exactly; since money demands exact arithmetic, whole cents in an integer type are the right fit.
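A short sketch of the cents-as-integers idea (the price and quantity are made-up example values):

using System;

class CentsDemo
{
    static void Main()
    {
        long priceCents = 1999;    // $19.99 stored as whole cents
        long quantity = 3;
        long totalCents = priceCents * quantity;   // exact integer arithmetic: 5997

        // Convert to dollars only when displaying the amount.
        Console.WriteLine($"Total: ${totalCents / 100}.{totalCents % 100:D2}");  // Total: $59.97
    }
}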