Apparently, Binary Coded Decimal was an important feature in early CPUs. Can someone explain to me what its purpose is? I know that it makes it easier to print decimal values to a screen, but really, converting a binary integer into a string of decimal characters and vice versa are trivial problems. Is it really worth the extra CPU complexity and decreased data density just to give the programmer slightly less work to do when printing decimal numbers? You would think that with memory being so expensive back in the day, engineers would be more motivated to conserve its use.
Name:
Anonymous 2010-02-10 5:00
It was cheaper to incorporate well-understood BCD circuitry into the CPU for the purpose of displaying decimal values than it was to spend general-purpose CPU cycles (and programmer effort) converting binary to decimal.
Name:
!iN.MY.aRMs 2010-02-10 5:06
Binary Coded Decimal is unusable for atomic explosion calculations.
Get an array of decimal characters from the user
Starting from the rightmost character, do the following:
Read the character, convert it into the proper binary value
Multiply it by 10^(n-1), where n is the position of the digit counting from the right (rightmost digit: n = 1)
Add the value to a counter
Once you've done that for all of the characters in the input string, you've got a valid, machine-usable number. Why exactly would they waste money on the hardware necessary for BCD, and put up with lower data density, when decimal conversion is just a simple subroutine call for a command line interpreter?