>>10
It's not well discussed because floating-point arithmetic is usually implemented in hardware inside the processor (though that's less common on mobile phones and similar embedded devices).
Basically the big deal in floating point is the IEEE standard:
http://en.wikipedia.org/wiki/IEEE_floating-point_standard
As it says, the format looks like this:
 1    e bits       f bits       (width of field)
+-+----------+---------------+
|S|   Exp    |   Fraction    |
+-+----------+---------------+
 e+f                        0   (bit index)
As usual, S is the sign bit; the sizes of the Exp and Fraction fields depend on the width of the format (8 and 23 bits in the 32-bit single-precision format, 11 and 52 bits in the 64-bit double-precision format). So, to get the floating-point representation of a number (a small C sketch follows the steps below):
1. Take the sign.
2. Convert the magnitude to binary, keeping the binary point.
3. Normalize: shift the point so that a single 1 bit sits to its left (1.xxx form), and record how many places it moved.
So far we have the sign, the digits with the point in a convenient place, and the shift needed to recover the original number.
4. The shift count becomes the Exp, stored with a bias because the shift can be negative; the digits after the point become the Fraction, also called the mantissa.
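For example, take 0.15625: in binary that is 0.00101, normalized it is 1.01 * 2^-3, so in the 32-bit format the sign is 0, the stored Exp is -3 + 127 = 124 (127 is the single-precision bias) and the Fraction is 01 followed by 21 zeros. Here is a minimal C sketch (assuming 32-bit IEEE floats on your machine) that unpacks the three fields so you can check the steps above:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float x = 0.15625f;             /* 0.00101 in binary = 1.01 * 2^-3 */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits); /* reinterpret the bytes as-is */

    uint32_t sign = bits >> 31;           /* step 1: the sign bit S      */
    uint32_t exp  = (bits >> 23) & 0xFF;  /* steps 3-4: biased exponent  */
    uint32_t frac = bits & 0x7FFFFF;      /* step 4: fraction / mantissa */

    printf("S=%u Exp=%u (unbiased %d) Fraction=0x%06X\n",
           sign, exp, (int)exp - 127, frac);
    /* prints: S=0 Exp=124 (unbiased -3) Fraction=0x200000 */
    return 0;
}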
This document (linked from the Wikipedia page)
http://www.opencores.org/projects.cgi/web/fpu100/fpu_doc.pdf
goes deep into the format and into the ARITHMETIC and its algorithms.
It's not something usually taught in school; the algorithms are there for reference, so I won't walk through them in detail.
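Just to give the flavor of what those algorithms do, here is a toy sketch of addition (positive numbers only, no rounding or special cases, so nothing like the real hardware algorithms in that document): align the exponents, add the significands, renormalize.

#include <stdio.h>

/* Toy addition of two positive numbers given as significand * 2^exponent.
   Real FPU algorithms also handle signs, guard/round bits, overflow and
   special values (NaN, infinity, denormals). */
static void toy_add(unsigned long sa, int ea, unsigned long sb, int eb)
{
    while (ea > eb) { sb >>= 1; eb++; }  /* align to the larger exponent */
    while (eb > ea) { sa >>= 1; ea++; }

    unsigned long s = sa + sb;           /* add the aligned significands */
    int e = ea;

    while (s >= (1UL << 24)) { s >>= 1; e++; }  /* renormalize to 24 bits */

    printf("result: %lu * 2^%d\n", s, e);
}

int main(void)
{
    /* 1.5 = 0xC00000 * 2^-23 and 0.75 = 0xC00000 * 2^-24; the sum is 2.25 */
    toy_add(0xC00000UL, -23, 0xC00000UL, -24);  /* prints 9437184 * 2^-22 */
    return 0;
}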