The compiler option you want for Visual Studio is /fp:strict, which is exposed in the IDE as Project->Properties->C/C++->Code Generation->Floating Point Model.
Yes, you'll have to change the FPU control word to avoid this. It is explained well for the most popular compilers on this web page. Beware that this is dramatically incompatible with what most libraries expect the FPU to do; don't mix and match.
Always restore the FPU control word after you're done.
_control87(_PC_53, _MCW_PC) or _control87(_PC_24, _MCW_PC) will do the trick. Those set the precision to double and single, respectively, with MSVC. You might want to use _controlfp_s(...) instead, as it lets you retrieve the current control word explicitly after setting it.
If you're using GCC, the SO answer here might help: stackoverflow.com/questions/2497825/gcc-... If you're using another compiler, you might be able to find some clues in that example (or maybe post a comment to that answer to see if Mike Dinsdale might know).
As others have noted, you can deal with this by setting the x87 control word to limit floating point precision. However, a better way would be to get MSVC to generate SSE/SSE2 code for the floating-point operations; I'm surprised that it doesn't do that by default in this day and age, given the performance advantages (and the fact that it prevents one from running into annoying bugs like what you're seeing), but there's no accounting for MSVC's idiosyncrasies. Ranting about MSVC aside, I believe that the /arch:SSE2 flag will cause MSVC to use SSE and SSE2 instructions for single- and double-precision arithmetic, which should resolve the issue.
The whole point of x87's extended precision arithmetic is to avoid having to resort to these ridiculous libraries for matrix operations in the first place. Who would want lower precision to be the default? – Gabe Apr 3 '10 at 6:09

@gabe: The x87 doesn't deliver nearly enough accuracy to do exact arithmetic, which is what these geometric primitives provide. In cases like this, all it does is break algorithms that behave properly on every other architecture. It also introduces all sorts of hard-to-predict bugs wherein numerical results are affected by semantically unrelated computations. There are definite benefits to x87 extended arithmetic, but not so many that it should be used without any critical consideration of the alternatives. – Stephen Canon Apr 3 '10 at 19:22

Wow. Are you also against fused multiply-add operations? Remember, x87 was the first implementation of IEEE 754, and almost certainly the most popular. Considering that Intel's Itanium and i960, and Motorola's 68881 and 88110, all have extended precision arithmetic, it's hardly the only one. – Gabe Apr 4 '10 at 1:15

@gabe: Of course not. I'm not opposed to extended precision, either. What I am against is it being blindly employed by compilers as the default evaluation mode, even when the user hasn't asked for it. This is a constant source of bugs and puzzles for inexperienced programmers. The cases where a naive algorithm is saved by the compiler inserting extended precision computations behind the programmer's back, by contrast, are few and far between. Having extended precision available is a great boon to numerical computing; having it be the default is much harder to justify. – Stephen Canon Apr 4 '10 at 3:11