Floats vs. doubles and 64-bit vs. 32-bit architectures
A brilliant short text from Richard Fine summarizing what everyone should know about this.
[Quote from somebody else] I think I have a fundamental misunderstanding of how the hardware works. I guess my assumption was that 64-bit meant that memory was allocated in 64-bit chunks, meaning that if you work with a float it still passes around a double’s worth of memory, and that operations happen in 64-bit chunks as well.
The main difference is that memory addresses can be 64 bits long. This means
a) that we now need 64 bits to store pointers rather than the 32 we used to need – this is what people are talking about when they say that 64-bit ‘uses more memory,’ they mean that every pointer in the app now takes twice as much memory to store. It doesn’t affect the amount of memory used for storing things that aren’t pointers.
b) we now have 64-bit integer registers in order to hold an entire memory address in one register – and these registers can be used for other things, e.g. copying a block of data from one location in memory to another can now be done 64 bits at a time rather than 32 (because the copy is done via a register). It doesn’t make operations faster in general.
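To make point (a) concrete, here's a minimal C++ sketch (my addition, not part of Richard's text): the pointer size follows the architecture, while float and double keep their 4 and 8 bytes either way.

```cpp
#include <cstdio>

int main() {
    // Pointer size follows the architecture: 4 bytes on a 32-bit
    // build, 8 bytes on a 64-bit build.
    std::printf("sizeof(void*)  = %zu\n", sizeof(void*));

    // Numeric types keep their sizes regardless of architecture:
    // moving to 64-bit does not turn floats into doubles.
    std::printf("sizeof(float)  = %zu\n", sizeof(float));
    std::printf("sizeof(double) = %zu\n", sizeof(double));
    return 0;
}
```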
Also, please pardon me while I ramble for a bit about number encodings, for anyone who’s interested…
Floats and doubles both have varying precision over their range. For a float, at 10 the next value it’s capable of storing is 10.000001, while at 100,000 the next value it can store is about 100,000.008, and this is why large worlds using floats have precision problems as you get away from the origin. In practical terms, this means you could have a game world about 200km by 200km square – around two-thirds the surface area of West Virginia – and still deal with positions down to the centimeter level. (In practice you need much more precision than the centimeter level for smooth animation, movement and physics, etc – but the point stands).
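These gaps are easy to verify yourself: std::nextafter returns the next representable value above its argument, so the difference is the gap at that magnitude. A minimal sketch (my addition):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // std::nextafter gives the next representable float above x;
    // the difference is the gap between adjacent floats at that scale.
    const float xs[] = {10.0f, 100000.0f};
    for (float x : xs) {
        float next = std::nextafter(x, INFINITY);
        std::printf("after %g comes %.9g (gap %.3g)\n", x, next, next - x);
    }
    return 0;
}
```

On a typical build this prints a gap of roughly 1e-06 at 10 and roughly 0.008 at 100,000.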
For a double, the same positions give 10 => 10.000000000000002 and 100,000 => 100000.00000000001, so the precision problems seen with float go away – but the problems aren’t solved, just moved. Once you’re dealing with values in the 50-thousand-million-km range, you’re again limited to 1cm precision. That’s about 0.01 light years of space in which you have 1cm precision or better – more than enough for the solar system, but several orders of magnitude short of what we’d need to model a game world in which you can fly from here to Alpha Centauri in ‘real space.’
It’s also, at the smaller end of things, very wasteful: at 10m away from the origin you’ve got precision down to the femtometre, but nobody needs it. All those bits are being used to provide a level of precision at that end which is unnecessary – so you have a storage cost for no gain.
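The same experiment with double bears both points out (again my addition): femtometre-scale gaps at 10 m, and gaps of roughly 0.8 cm out at 5e13 m, i.e. in the 50-thousand-million-km range.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Gap between adjacent doubles, in metres, at a few distances
    // from the origin: femtometre-scale near 10 m, roughly a
    // centimetre out at 5e13 m.
    const double xs[] = {10.0, 100000.0, 5e13};
    for (double x : xs) {
        double gap = std::nextafter(x, INFINITY) - x;
        std::printf("gap at %.0e m = %.3g m\n", x, gap);
    }
    return 0;
}
```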
By comparison, a 56:8 fixed-point format would have uniform precision – down to the 0.4cm level, everywhere – and we trade in all that unnecessary precision at the low end for a big boost in the range: 7.61 light years, enough to get to Alpha Centauri and most of the way back again 🙂
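For the curious, here's one way such a 56:8 type could look. This is my own construction of the format Richard describes, not code from him:

```cpp
#include <cstdint>
#include <cstdio>

// 56:8 fixed point: a signed 64-bit integer whose low 8 bits are the
// fraction, so one raw unit is 1/256 m (~0.39 cm) everywhere in the
// range, which spans roughly +/-3.6e16 m (~7.6 light years in total).
struct Fixed56_8 {
    std::int64_t raw;  // position in 1/256 m units

    static Fixed56_8 fromMetres(double m) {
        return { static_cast<std::int64_t>(m * 256.0) };
    }
};

int main() {
    const double cm = 0.01;

    // With double, nudging a position by 1 cm out at 3e16 m is lost
    // entirely: the gap between adjacent doubles there is ~4 m.
    double d = 3.0e16;
    std::printf("double: (3e16 + 1cm) - 3e16 = %g m\n", (d + cm) - d);

    // With 56:8 fixed point the step is ~0.39 cm everywhere, so the
    // nudge survives (quantised to whole 1/256 m units).
    Fixed56_8 p = Fixed56_8::fromMetres(3.0e16);
    std::int64_t nudge = static_cast<std::int64_t>(cm * 256.0);
    std::printf("fixed:  moved by %g m\n",
                static_cast<double>((p.raw + nudge) - p.raw) / 256.0);
    return 0;
}
```

The double branch prints 0 because the 1 cm nudge vanishes below the ~4 m gap between adjacent doubles at that distance, while the fixed-point branch keeps it.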
– Richard
(who really hopes that his math was all correct…)
Random numbers and procedural world generation
Rune Johansen writes this fantastic article comparing different methods of getting repeatable random numbers.
http://blog.runevision.com/2015/01/primer-on-repeatable-random-numbers.html
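The short version, as I read it: hash functions keyed on world coordinates beat sequential RNGs for this, because you get the same value for the same cell regardless of the order in which cells are generated. A sketch of that approach (my code, using the splitmix64 finalizer as the mixer rather than the hashes the article benchmarks):

```cpp
#include <cstdint>
#include <cstdio>

// Bit mixer (the splitmix64 finalizer) standing in for the stronger
// hashes (e.g. xxHash) compared in the article.
static std::uint64_t mix(std::uint64_t x) {
    x += 0x9E3779B97F4A7C15ull;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ull;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBull;
    return x ^ (x >> 31);
}

// Repeatable random value in [0, 1) keyed on a seed and a world
// coordinate: the same inputs always give the same output, no matter
// in which order the world's cells are generated or revisited.
static double randomAt(std::uint64_t seed, std::int64_t x, std::int64_t y) {
    std::uint64_t h = mix(mix(mix(seed) ^ static_cast<std::uint64_t>(x))
                          ^ static_cast<std::uint64_t>(y));
    return (h >> 11) * (1.0 / 9007199254740992.0);  // top 53 bits -> [0,1)
}

int main() {
    // Cell (3, 7) gets the same value on every run and in any order.
    std::printf("%f\n", randomAt(12345, 3, 7));
    std::printf("%f\n", randomAt(12345, 3, 7));
    return 0;
}
```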
Integration basics for physics simulation
This great article series provides a kick start on implementing proper numerical integration solvers for physics simulations. My vehicle physics developments apply the lessons I’ve learned here.
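As a taste of the topic, here's a minimal semi-implicit (symplectic) Euler step for a damped spring, one of the standard integrators such material compares against Runge-Kutta. The example is mine, not code from the series:

```cpp
#include <cstdio>

struct State { double x, v; };

// One semi-implicit Euler step: update velocity from the force first,
// then position from the *new* velocity. This ordering is what keeps
// the integrator stable where explicit Euler (position first, using
// the old velocity) gains energy and blows up.
static State step(State s, double dt) {
    const double k = 10.0;          // spring stiffness
    const double c = 0.5;           // damping coefficient
    double a = -k * s.x - c * s.v;  // acceleration, unit mass
    s.v += a * dt;
    s.x += s.v * dt;
    return s;
}

int main() {
    State s{1.0, 0.0};  // released from x = 1 at rest
    for (int i = 0; i < 100; ++i)
        s = step(s, 1.0 / 60.0);  // fixed 60 Hz timestep
    std::printf("x = %f, v = %f\n", s.x, s.v);
    return 0;
}
```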