It is common on modern computers that computing in double-precision (1 sign bit, 11 exponent bits, 52 explicit significand bits) is as fast as computing in single-precision (1 sign bit, 8 exponent bits, 23 explicit significand bits). Therefore, when you load float objects, calculate, and store float objects, the compiler may load the float values into double-precision registers, calculate in double-precision, and store single-precision results. This benefits you by providing extra precision at very little cost.
Results may more often be “correctly rounded” (the result returned is the representable value nearest the mathematically exact result), though this is not guaranteed, because rounding errors can still interact in unexpected ways. Results may also often be more accurate (closer to the exact result than float calculations would provide), but that is not guaranteed either; in rare cases, a double-precision calculation can return a result worse than the single-precision calculation.
I can't guarantee it by any means, but I'd guess what you ran into was really 53 bits rather than 50. The reason they'd use 53 bits is because that's the next standard size of floating point type. In the IEEE 754 standard, the smallest type is 32 bits total.
The next size up is 64 bits total, which has a 53-bit significand (aka mantissa). Since they already have hardware in place to deal specifically with that size, it's probably easiest (in most cases) to carry out the calculation at that size, and then round to the smaller size.