Not until now, and it looks way beyond me. I'm more concerned about whether these new programming languages can do basic maths.
They can, but this is one of those comparatively rare occasions when an understanding of maths can be useful.
The bottom line is that computers store numbers as a series of bits. Traditionally, those numbers would have been integers, stored in various lengths - 8 bits gives you 0-255, 16 bits gives you 0-65535, and so on.
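Just to put numbers on that, the top of an n-bit unsigned range is 2^n - 1 - a quick Python check:

```python
# Maximum value an unsigned integer of n bits can hold is 2**n - 1
for bits in (8, 16, 32, 64):
    print(f"{bits:2d}-bit unsigned: 0 to {2**bits - 1}")
```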
But that doesn't help with any calculation that involves a non-integer value.
If you have long enough integers, you can apply a "scaling factor" (in software), so that the value 10000 (decimal) stored in your integer is actually interpreted as 1.0000, giving you 4 decimal places.
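Something like this, roughly, in Python - a sketch of the scaled-integer idea, with the helper names (to_fixed, to_display) made up for illustration:

```python
SCALE = 10_000  # 4 implied decimal places

def to_fixed(value_str: str) -> int:
    """Parse a decimal string into a scaled integer, e.g. '1.2345' -> 12345."""
    whole, _, frac = value_str.partition('.')
    frac = (frac + '0000')[:4]          # pad/truncate to 4 fractional digits
    sign = -1 if whole.startswith('-') else 1
    return sign * (abs(int(whole)) * SCALE + int(frac))

def to_display(fixed: int) -> str:
    """Format a scaled integer back as a decimal string."""
    sign = '-' if fixed < 0 else ''
    fixed = abs(fixed)
    return f"{sign}{fixed // SCALE}.{fixed % SCALE:04d}"

a = to_fixed("1.5000")      # 15000
b = to_fixed("2.2500")      # 22500
print(to_display(a + b))    # 3.7500 -- the addition is exact, it's just integer maths
```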
But that's clunky - it might be good for accounting or other "counting" jobs, but it falls down a bit when it comes to scientific calculations, where you might be dealing with a very wide range of numbers. For that we need floating-point maths.

On a computer, floating-point maths is roughly equivalent to the way scientists express very big or small numbers as a power of 10 (6.02x10^23 and the like), except it's done in binary. A lot of this was driven by the development of floating-point co-processors, to which a CPU could offload the task of performing a cycle-hungry floating-point calculation, and that got formalised into an IEEE standard for the representation of floating-point numbers. Essentially, you have an integer mantissa, which holds the significant digits of the number, and an (also integer) exponent, which is the power of 2 you multiply the mantissa by to get the real value out.
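Python floats happen to be 64-bit IEEE 754 doubles, and the standard library will happily show you that mantissa/exponent split - a quick sketch:

```python
import math

x = 0.1

# Pull the number apart into mantissa and exponent (value = m * 2**e).
m, e = math.frexp(x)
print(m, e)          # 0.8 -3

# Or view it as an exact fraction with a power-of-two denominator:
num, den = x.as_integer_ratio()
print(num, den)      # 3602879701896397 36028797018963968
print(den == 2**55)  # True -- so the stored 0.1 is really 3602879701896397 / 2**55
```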
The snag here is that, even though IEEE 754 (I looked it up) specifies some seriously big word lengths, some decimal numbers just won't resolve exactly into the mantissa/exponent format. Notably 0.1 - it's binary's equivalent of 1/3 in decimal, where no matter how many 3s you add after the decimal point, you never exactly express the value of that fraction.
And there are a lot more of those awkward numbers in binary than there are in decimal.
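The classic demonstration, in Python (which uses 64-bit doubles under the hood):

```python
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False -- the tiny representation errors add up
print(abs((0.1 + 0.2) - 0.3) < 1e-9)   # True -- compare with a tolerance instead
```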
I think one of those clunky old-skool programmer skills was in knowing how accurate your answer needed to be, letting the floating-point thing do its magic, and then representing the number (i.e. printing it) in a real-world format that ignores those weird digits down at the far end of the fractional part - maybe with a bit of judicious programmed tidying-up at intermediate phases of the process...
So, if you store and retrieve 0.1 as a floating-point number (a 32-bit single, in this example), what you get back is 0.100000001490116119384765625.
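You can reproduce exactly that figure from Python's standard library - the quoted digits are what you get when 0.1 is rounded to a 32-bit single; the 64-bit double version is on the second line:

```python
import struct
from decimal import Decimal

# Round 0.1 to the nearest 32-bit float, then widen it back to a double (the widening is exact).
as_single = struct.unpack('f', struct.pack('f', 0.1))[0]

print(Decimal(as_single))  # 0.100000001490116119384765625  (the figure quoted above)
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
```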
But you're never going to present it to your end user like that - you apply a format to the number so that it displays in a meaningful way, and the size of the error is too tiny to be significant for most purposes. Depending on the practicalities, you might round the display to 2 decimal places for, e.g., currency, or perhaps 4 for some measurement thing. Either way, the error noise doesn't show up until the ninth digit of the fractional part, so it's lost in the weeds. Except for some very involved calculations with FP numbers, those errors will rarely build up to the point where they fundamentally mess things up - although there are various defensive programming approaches aimed at catching/checking things along the way.
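In Python, that display-time tidying looks roughly like this:

```python
x = 0.100000001490116119384765625   # the value quoted above

print(f"{x:.2f}")   # 0.10   -- e.g. 2 places for currency
print(f"{x:.4f}")   # 0.1000 -- or 4 for some measurement thing

# And a bit of judicious tidying-up at an intermediate step:
print(round(x, 4))  # 0.1
```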
Oops, that went on a bit - I got a bit misty-eyed about the Good Old Days, and long integers, floating point coprocessors, etc.