My understanding was that a “float” type variable is simply a number with a decimal point, with fewer digits available than the “double” type for accuracy purposes. So for what purpose, then, is this code illegal?
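The code in question boils down to a declaration along these lines (the book uses the literal 3.4 and a float variable x):

    float x = 3.4;   // error CS0664: a literal of type double cannot be implicitly converted to type 'float'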
The book says: “This code looks perfectly legal. However, it is not. This is because the literal value 3.4 is a double precision value when expressed as a literal, and the variable x has been declared as a floating point.”
“the literal value 3.4 is a double precision value when expressed as a literal”
It’s a number with one digit after the decimal point. Why does it suddenly need double precision?
Perhaps the Yellow Book isn’t explaining the reason for this well. To me as a beginner, having to mark your numbers with an f or an extra (float) cast seems like a step that only serves to confuse, while increasing code verbosity and reducing readability. We already said it is a float, and it only contains one digit after the decimal. I’m having trouble wrapping my head around this one. If this is simply “the way it is” without a good reason, why not just define all decimal variables as double and not worry about this additional step?
Question 2:
Why doesn’t this Hello World code from the Microsoft docs print anything when run in Visual Studio 2019?
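For reference, it’s essentially the standard docs sample, something along these lines:

    using System;

    class Hello
    {
        static void Main()
        {
            // prints the text to the console window
            Console.WriteLine("Hello World!");
        }
    }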
Just put an f after the decimal number. As I understand it, 3.4 is being read as 3.4000…etc. out to its maximum precision. Without the f, the compiler assumes a double.
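In other words, something like this:

    float x = 3.4f;   // the f suffix makes the literal a float, so no conversion is needed
    double y = 3.4;   // with no suffix, the literal is a double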
Question 2: just put Console.ReadLine() after the WriteLine to prevent the console window from closing.
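For example:

    Console.WriteLine("Hello World!");
    Console.ReadLine();   // waits for Enter, so the window stays open long enough to read the output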
Question 1:
There are two things going on here:
First: a literal value of the form 12.34 is a double; that’s defined in the C# language specification. If you want a literal float, then it’s 12.34f. As far as the assignment is concerned, it doesn’t matter what the value is that you are trying to assign: the compiler sees you trying to convert a value from a ‘more precise type’ to a ‘less precise type’. It may be the case that the operation succeeds without losing precision, but in general such an operation will result in a loss of precision and so cannot be performed implicitly. You can explicitly cast to perform the conversion:
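    float x = (float)3.4;   // for example: the explicit cast tells the compiler you accept any loss of precision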
Second, you’re right in saying that your example has only one digit after the decimal point, but floats and doubles are binary floating point types, and as such you might not get an exact representation (Google ‘IEEE 754’ for more information on ‘Single’ and ‘Double’ floating point values). You should always (well, in most cases) treat float and double values as approximations, since rounding errors will creep into calculations and conversions. If you need an exact representation of decimal values (for monetary calculations, for example), then use the ‘decimal’ type.
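A quick illustration of the difference (the literals here are just for demonstration):

    Console.WriteLine(3.4f == 3.4);         // False: the float and double approximations of 3.4 differ
    Console.WriteLine(0.1 + 0.2 == 0.3);    // False: binary floating point cannot represent these values exactly
    Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal stores these base-10 values exactly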
Question 2:
It does print, but the application quits immediately. Just add Console.ReadKey(); at the end of your program so that the console stays open until you hit any key.
Edit: Corrected typo, added info on literal floating point values.
Hey, for the float, I honestly don’t know, I never really thought about it, but a plausible reason for not using doubles all the time could be memory space, since a double takes more memory than a float. For the second Hello World issue, what project type are you using? I have a feeling it is because the class is Hello. The .NET framework has quite a few things that operate by naming conventions, so I believe it’s likely not running, and the error being thrown says something along the lines of there being no entry point, since the IDE now has no idea where to find that Main method.
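On the size point, you can check it directly, since sizeof works on the built-in numeric types:

    Console.WriteLine(sizeof(float));    // 4 bytes
    Console.WriteLine(sizeof(double));   // 8 bytes
    Console.WriteLine(sizeof(decimal));  // 16 bytes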