Decimal computer

IBM 650 front panel with bi-quinary coded decimal displays

Decimal computers are computers that represent numbers and addresses in decimal and provide instructions to operate on those numbers and addresses directly in decimal, without conversion to a pure binary representation. Some also had a variable word length, which enabled operations on numbers with a large number of digits.

Early computers

Early computers that were exclusively decimal include the ENIAC, IBM NORC, IBM 650, IBM 1620, and IBM 7070. In these machines the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal (BCD), bi-quinary, excess-3, and two-out-of-five code. Except for the 1620, these machines used word addressing. When non-numeric characters were used in these machines, they were encoded as two decimal digits.
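
As an illustration, the sketch below (Python, not anything these machines ran) shows how a single decimal digit would be represented in each of the four schemes. Bit orderings and the two-out-of-five weights varied from machine to machine; the 7-4-2-1-0 weights used here, with digit 0 as the conventional exception, are one common assignment.

    def bcd(d):
        """8421 binary-coded decimal: the digit as a 4-bit binary value."""
        return format(d, '04b')

    def excess_3(d):
        """Excess-3: the digit plus three, in 4 bits."""
        return format(d + 3, '04b')

    def biquinary(d):
        """Bi-quinary, IBM 650 style: a two-bit 'bi' group (5, 0) plus a
        five-bit 'quinary' group (4..0); exactly two bits are ever set."""
        bi, q = divmod(d, 5)
        bi_bits = '10' if bi else '01'       # bit for 5, bit for 0
        q_bits = ''.join('1' if i == q else '0' for i in (4, 3, 2, 1, 0))
        return bi_bits + ' ' + q_bits

    def two_out_of_five(d):
        """Two-out-of-five code, assuming weights 7-4-2-1-0 and the usual
        exception that 0 is encoded with the 7 and 4 bits."""
        weights = (7, 4, 2, 1, 0)
        pair = {7, 4} if d == 0 else next(
            {a, b} for i, a in enumerate(weights)
            for b in weights[i + 1:] if a + b == d)
        return ''.join('1' if w in pair else '0' for w in weights)

    for d in (0, 6, 9):
        print(d, bcd(d), excess_3(d), biquinary(d), two_out_of_five(d))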

Other early computers were character oriented, but provided instructions for performing arithmetic on strings of characters consisting of decimal numerals. On these machines the basic data element was an alphanumeric character, typically encoded in 6 bits. The UNIVAC I and UNIVAC II used word addressing, with 12-character words. IBM examples include the IBM 702, IBM 705, IBM 7080, IBM 1401 and other members of the IBM 1400 series, including the IBM 7010. The IBM character machines nominally used decimal addressing, but some allowed non-decimal characters in addresses to expand the available address space. Each address referenced a single character encoded in 6 bits, with two additional bits per character: a word mark and a parity bit. The word mark enabled operations on variable-length words, as sketched below.
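
A minimal sketch of the word-mark idea, in Python rather than 1401 machine code: each memory location carries a character plus a word-mark flag, a field is addressed at its low-order (rightmost) character, and an operation scans leftward until the word mark terminates the field. The addresses and helper names here are illustrative only.

    memory = {}  # address -> (character, word-mark flag)

    def store_field(addr, digits):
        """Place a field ending at addr; word-mark its high-order character."""
        for offset, ch in enumerate(reversed(digits)):
            memory[addr - offset] = (ch, offset == len(digits) - 1)

    def fetch_field(addr):
        """Scan leftward from the low-order character until the word mark."""
        chars = []
        while True:
            ch, mark = memory[addr]
            chars.append(ch)
            if mark:
                return ''.join(reversed(chars))
            addr -= 1

    store_field(700, "00423")   # a five-digit field ending at address 700
    print(fetch_field(700))     # -> 00423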

Later computers

The IBM System/360, introduced in 1964 to unify IBM's product lines, used per-character binary addressing and included instructions for packed decimal arithmetic as well as binary integer and binary floating-point arithmetic. It used 8-bit characters and introduced EBCDIC encoding, though ASCII was also supported.[1] The Burroughs B2500, introduced in 1966, also used 8-bit EBCDIC or ASCII characters and could pack two decimal digits per byte, but it did not provide binary arithmetic, making it a purely decimal architecture.
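
The packed decimal format itself is simple: two digits per byte, with the low nibble of the last byte holding a sign code (0xC for plus and 0xD for minus in the preferred encodings). The following Python sketch packs and unpacks signed integers in this style; it illustrates the data format, not any System/360 instruction.

    def pack_decimal(n):
        """Pack a signed integer: two digits per byte, sign in the last nibble."""
        digits = str(abs(n))
        if len(digits) % 2 == 0:    # pad so digits plus sign fill whole bytes
            digits = '0' + digits
        nibbles = [int(d) for d in digits] + [0xC if n >= 0 else 0xD]
        return bytes(hi << 4 | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

    def unpack_decimal(b):
        """Recover the signed integer from its packed-decimal bytes."""
        nibbles = [x for byte in b for x in (byte >> 4, byte & 0xF)]
        sign = -1 if nibbles[-1] == 0xD else 1
        return sign * int(''.join(str(d) for d in nibbles[:-1]))

    p = pack_decimal(-1234)
    print(p.hex())              # -> 01234d
    print(unpack_decimal(p))    # -> -1234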

More modern computers

Several microprocessor families offer limited decimal support. For example, the 80x86 family of microprocessors provides instructions to convert one-byte BCD numbers (packed and unpacked) to binary format before or after arithmetic operations.[2] These operations were not extended to wider formats, and hence are now slower than using 32-bit or wider software techniques to compute in BCD. The x87 FPU has instructions to convert 10-byte (18-decimal-digit) packed decimal data, although it then operates on the values as floating-point numbers.
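
This hardware support amounts to a decimal-adjust step after an ordinary binary add. The Python sketch below mimics what x86 code does with ADD followed by DAA on one packed-BCD byte: add in binary, then add 6 to any decimal digit that overflowed its nibble. It models the adjustment logic, not the processor's exact flag behavior.

    def bcd_add_byte(a, b):
        """Add two packed-BCD bytes (two digits each): a plain binary add,
        then a decimal adjust. Returns (result byte, decimal carry out)."""
        s = a + b
        if (s & 0x0F) > 9 or ((a & 0x0F) + (b & 0x0F)) > 0x0F:
            s += 0x06              # fix the low digit
        if (s & 0x1F0) > 0x90:
            s += 0x60              # fix the high digit
        return s & 0xFF, s > 0xFF

    r, carry = bcd_add_byte(0x38, 0x45)
    print(hex(r), carry)           # -> 0x83 False  (38 + 45 = 83)
    r, carry = bcd_add_byte(0x99, 0x99)
    print(hex(r), carry)           # -> 0x98 True   (99 + 99 = 198)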

The 68000 provided instructions for BCD addition and subtraction;[3] these instructions were removed when the ColdFire instruction set was defined. All IBM mainframes also provide BCD integer arithmetic in hardware.

Decimal arithmetic is now becoming more common; for instance, three decimal types with two binary encodings were added to the IEEE 754-2008 standard, with 7-, 16-, and 34-digit decimal significands.[4]
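
Python's decimal module follows the General Decimal Arithmetic specification that underlies these formats, so it can illustrate the point in software: decimal arithmetic represents values such as 0.1 exactly, where binary floating point cannot. Setting the context precision to 34 digits below mimics the decimal128 significand; this is an illustration, not the hardware formats themselves.

    from decimal import Decimal, getcontext

    print(0.1 + 0.2)                        # binary float: 0.30000000000000004
    print(Decimal('0.1') + Decimal('0.2'))  # decimal: exactly 0.3

    getcontext().prec = 34                  # a decimal128-sized significand
    print(Decimal(1) / Decimal(3))          # 0.3333... to 34 digits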

The IBM POWER6 processor and the IBM System z9 have implemented these types using the Densely Packed Decimal encoding,[5] the former in hardware and the latter in microcode.
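
Densely Packed Decimal compresses three BCD digits (12 bits) into a 10-bit "declet". The Python sketch below follows the published encoding table, with the input digits written bitwise as (abcd)(efgh)(ijkm) and the output as pqrstuvwxy; when all three digits are small (0-7), their low three bits pass through unchanged.

    def dpd_encode(d1, d2, d3):
        """Compress three decimal digits (each 0-9) into one 10-bit declet."""
        a, b, c, d = ((d1 >> s) & 1 for s in (3, 2, 1, 0))
        e, f, g, h = ((d2 >> s) & 1 for s in (3, 2, 1, 0))
        i, j, k, m = ((d3 >> s) & 1 for s in (3, 2, 1, 0))
        table = {   # (a, e, i) -> output bits p q r s t u v w x y
            (0, 0, 0): (b, c, d, f, g, h, 0, j, k, m),
            (0, 0, 1): (b, c, d, f, g, h, 1, 0, 0, m),
            (0, 1, 0): (b, c, d, j, k, h, 1, 0, 1, m),
            (0, 1, 1): (b, c, d, 1, 0, h, 1, 1, 1, m),
            (1, 0, 0): (j, k, d, f, g, h, 1, 1, 0, m),
            (1, 0, 1): (f, g, d, 0, 1, h, 1, 1, 1, m),
            (1, 1, 0): (j, k, d, 0, 0, h, 1, 1, 1, m),
            (1, 1, 1): (0, 0, d, 1, 1, h, 1, 1, 1, m),
        }
        out = 0
        for bit in table[(a, e, i)]:
            out = out << 1 | bit
        return out

    print(format(dpd_encode(1, 2, 3), '010b'))  # small digits: BCD bits pass through
    print(format(dpd_encode(9, 9, 9), '010b'))  # large digits use the escape cases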

References

  1. IBM (1964). IBM System/360 Principles of Operation (PDF). First Edition. A22-6821-0.
  2. "MASM Programmer's Guide". Microsoft. 1992. Retrieved 2007-07-01.
  3. "Motorola M68000 Family Programmer's Reference Manual" (PDF). Retrieved 2007-07-01.
  4. "DRAFT Standard for Floating Point Arithmetic P754" (PDF). 2006-10-04. Retrieved 2007-07-01.
  5. Cowlishaw, Mike F. (2015) [1981, 2008]. "General Decimal Arithmetic". IBM. Retrieved 2016-01-02.
