Answer (1)

  1. Computer codes? You probably mean character encodings?

    Character encoding predates electronic computers, with Morse code being one of the first examples, followed by the other telegraph codes that came after it. The 5-bit Baudot code used for the Telex system was the first computer-like character encoding, complete with control characters.

    Fast forward to the computer age. The first computers operated on numbers and simply did not have the memory to work on character strings (after all, a 5-letter word requires 25 bits in Baudot code, and you can fit a lot of numeric data in 25 bits). Any string handling was therefore machine specific.
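    As a rough illustration of that size arithmetic, here is a minimal sketch that packs five 5-bit letter codes into a single 25-bit integer; the code table is a hypothetical placeholder, not real Baudot/ITA2 values.

    ```python
    # Sketch: pack a 5-letter word into 25 bits, 5 bits per letter.
    # LETTER_CODES is a hypothetical placeholder table, NOT actual Baudot/ITA2 codes.
    LETTER_CODES = {"H": 1, "E": 2, "L": 3, "O": 4}

    def pack_word(word):
        """Shift each letter's 5-bit code into one integer."""
        packed = 0
        for letter in word:
            packed = (packed << 5) | LETTER_CODES[letter]
        return packed

    word = "HELLO"
    print(f"{word} takes {5 * len(word)} bits, packed: {pack_word(word):025b}")
    ```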

    The first standards were:
    Fieldata for the US Armed Forces – unsurprisingly, the army was among the first to deploy real-time computer systems, so a universal character code was needed.
    BCD (character encoding) – IBM's 6-bit encoding. Business customers needed string manipulation, and IBM delivered a suitable coding.

    The second round of standards (which to some extent lasts today):
    ASCII – a 7-bit encoding; its 8-bit extensions (code pages) and the later UTF-8 encoding made this the most common standard of all.
    EBCDIC – IBM's extension to BCD. A rather confusing coding method, it still survives because IBM mainframes were extremely successful, and so is this encoding; a byte-level comparison with ASCII is sketched below.
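    To make the difference concrete, here is a small sketch using Python's standard codecs; cp037 is assumed here as a representative EBCDIC code page (many variants exist).

    ```python
    # Encode the same text in ASCII and in one EBCDIC variant (code page 037).
    text = "HELLO"

    ascii_bytes = text.encode("ascii")   # 7-bit ASCII, one byte per character
    ebcdic_bytes = text.encode("cp037")  # cp037 is one common EBCDIC code page

    print("ASCII :", ascii_bytes.hex(" "))   # 48 45 4c 4c 4f
    print("EBCDIC:", ebcdic_bytes.hex(" "))  # c8 c5 d3 d3 d6
    ```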

    Right now Unicode is the dominant standard, and one of its transformation formats, UTF-8, is backward compatible with ASCII.
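    A minimal sketch of that compatibility: pure-ASCII text produces exactly the same bytes under UTF-8, while characters outside ASCII become multi-byte sequences.

    ```python
    # ASCII text encodes to identical bytes under ASCII and UTF-8.
    s = "plain ASCII text"
    assert s.encode("ascii") == s.encode("utf-8")

    # Characters outside ASCII become multi-byte UTF-8 sequences.
    print("é".encode("utf-8").hex(" "))   # c3 a9    (2 bytes)
    print("中".encode("utf-8").hex(" "))  # e4 b8 ad (3 bytes)
    ```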

    As for other lesser-known standards:
    Shift JIS – it appeared due to the pressing need for Japanese support on computers. It is the only non-Unicode standard that still retains a large user base (and that will likely remain the case).
    GB 2312 – the first standard for handling Chinese on computers. Before that, the most common option was to store each Hanzi in memory using its Chinese Telegraph Code. It has been superseded by GB 18030, which is backward compatible with GB 2312 and is also a Unicode Transformation Format, making it Unicode compatible. A byte-level comparison of both encodings is sketched below.
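    For a concrete feel of these encodings, the sketch below encodes one kana and one Hanzi with Python's standard codecs and compares the bytes with UTF-8.

    ```python
    # The same characters in legacy East Asian encodings vs UTF-8.
    kana = "あ"   # Japanese hiragana 'a'
    hanzi = "中"  # Chinese character 'zhong'

    print("Shift JIS:", kana.encode("shift_jis").hex(" "))  # 82 a0
    print("UTF-8    :", kana.encode("utf-8").hex(" "))      # e3 81 82

    print("GB 2312  :", hanzi.encode("gb2312").hex(" "))    # d6 d0
    print("GB 18030 :", hanzi.encode("gb18030").hex(" "))   # d6 d0 (same bytes: backward compatible)
    print("UTF-8    :", hanzi.encode("utf-8").hex(" "))     # e4 b8 ad
    ```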
