UTF-16
UTF-16 (16-bit Unicode Transformation Format) is a character encoding capable of encoding all 1,112,064 valid code points of Unicode. The encoding is variable-length: code points are encoded with one or two 16-bit code units. (See Comparison of Unicode encodings for a comparison of UTF-8, UTF-16 and UTF-32.)
UTF-16 developed from an earlier fixed-width 16-bit encoding known as UCS-2 (for 2-byte Universal Character Set) once it became clear that a fixed-width 2-byte encoding could not encode enough characters to be truly universal.
History
In the late 1980s, work began on developing a uniform encoding for a "Universal Character Set" (UCS) that would replace earlier language-specific encodings with one coordinated system. The goal was to include all required characters from most of the world's languages, as well as symbols from technical domains such as science, mathematics, and music. The original idea was to replace the typical 256-character encodings, which required 1 byte per character, with an encoding using 2^16 = 65,536 values, requiring 2 bytes per character. Two groups worked on this in parallel, ISO/IEC JTC 1/SC 2 and the Unicode Consortium, the latter representing mostly manufacturers of computing equipment. The two groups attempted to synchronize their character assignments so that the developing encodings would be mutually compatible. The early 2-byte encoding was usually called "Unicode", but is now called "UCS-2".
Early in this process, however, it became increasingly clear that 2^16 characters would not suffice, and ISO/IEC introduced a larger 31-bit space with an encoding (UCS-4) that would require 4 bytes per character. This was resisted by the Unicode Consortium, both because 4 bytes per character wasted a lot of disk space and memory, and because some manufacturers were already heavily invested in 2-byte-per-character technology. The UTF-16 encoding scheme was developed as a compromise to resolve this impasse in version 2.0 of the Unicode standard in July 1996,[1] and is fully specified in RFC 2781, published in 2000 by the IETF.[2][3][4]
In UTF-16, code points greater than or equal to 2^16 are encoded using two 16-bit code units. The standards organizations chose the largest available block of unallocated 16-bit code points to serve as these code units; code points from this range are not individually encodable in UTF-16 (and not legally encodable in any UTF encoding).
UTF-16 is specified in the latest versions of both the international standard ISO/IEC 10646 and the Unicode Standard.
Description
U+0000 to U+D7FF and U+E000 to U+FFFF
Both UTF-16 and UCS-2 encode code points in this range as single 16-bit code units that are numerically equal to the corresponding code points. These code points in the BMP are the only code points that can be represented in UCS-2. Modern text almost exclusively consists of these code points.
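As a quick illustration, a minimal Python sketch using the standard library's UTF-16 codecs (the snippet is illustrative only): a BMP character encodes as a single 16-bit code unit whose value equals its code point.

```python
# U+20AC (euro sign) is in the BMP, so it encodes as the single code unit 0x20AC.
s = "\u20ac"
assert s.encode("utf-16-be") == b"\x20\xac"   # one code unit, big-endian byte order
assert s.encode("utf-16-le") == b"\xac\x20"   # same code unit, little-endian byte order
print(s.encode("utf-16-be").hex())            # prints "20ac"
```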
U+10000 to U+10FFFF
Code points from the other planes (called Supplementary Planes) are encoded as two 16-bit code units called surrogate pairs,[5] by the following scheme:
High \ Low | DC00 | DC01 | … | DFFF |
---|---|---|---|---|
D800 | 010000 | 010001 | … | 0103FF |
D801 | 010400 | 010401 | … | 0107FF |
⋮ | ⋮ | ⋮ | ⋱ | ⋮ |
DBFF | 10FC00 | 10FC01 | … | 10FFFF |
- 0x010000 is subtracted from the code point, leaving a 20-bit number in the range 0..0x0FFFFF.
- The top ten bits (a number in the range 0..0x03FF) are added to 0xD800 to give the first 16-bit code unit or high surrogate, which will be in the range 0xD800..0xDBFF.
- The low ten bits (also in the range 0..0x03FF) are added to 0xDC00 to give the second 16-bit code unit or low surrogate, which will be in the range 0xDC00..0xDFFF.
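The scheme above can be written directly as arithmetic. A minimal Python sketch (the function name encode_surrogate_pair is hypothetical, introduced here only for illustration):

```python
def encode_surrogate_pair(code_point: int):
    """Split a supplementary-plane code point (U+10000..U+10FFFF)
    into a (high surrogate, low surrogate) pair of 16-bit code units."""
    assert 0x10000 <= code_point <= 0x10FFFF
    v = code_point - 0x10000          # 20-bit value in the range 0..0xFFFFF
    high = 0xD800 + (v >> 10)         # top ten bits  -> 0xD800..0xDBFF
    low = 0xDC00 + (v & 0x3FF)        # low ten bits  -> 0xDC00..0xDFFF
    return high, low

print([hex(u) for u in encode_surrogate_pair(0x10437)])  # ['0xd801', '0xdc37']
```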
There was an attempt to rename the "high" and "low" surrogates to "leading" and "trailing" because their numerical values do not match their names; this appears to have been abandoned in recent Unicode standards.
Since the ranges for the high surrogates, low surrogates, and valid BMP characters are disjoint, it is not possible for a surrogate to match a BMP character, or for (parts of) two adjacent characters to look like a legal surrogate pair. This simplifies searches a great deal. It also means that UTF-16 is self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units. UTF-8 shares these advantages, but many earlier multi-byte encoding schemes (such as Shift JIS and other Asian multi-byte encodings) did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string. (UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte.)
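A scanner can therefore classify any 16-bit code unit in isolation. A minimal Python sketch (the helper name starts_character is hypothetical):

```python
def starts_character(unit: int) -> bool:
    """True if this 16-bit code unit begins a character: either a BMP value
    outside the surrogate block, or a high (leading) surrogate."""
    return not (0xDC00 <= unit <= 0xDFFF)   # only low surrogates continue a character

units = [0x0024, 0xD801, 0xDC37, 0x20AC]    # "$", U+10437 as a surrogate pair, "€"
print([starts_character(u) for u in units]) # [True, True, False, True]
```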
Because the most commonly used characters are all in the Basic Multilingual Plane, handling of surrogate pairs is often not thoroughly tested. This leads to persistent bugs and potential security holes, even in popular and well-reviewed application software (e.g. CVE-2008-2938, CVE-2012-2135).[6]
The Supplementary Planes contain emoji, historic scripts, less commonly used symbols, less commonly used Chinese ideographs, and more.
U+D800 to U+DFFF
The Unicode standard permanently reserves these code point values for UTF-16 encoding of the high and low surrogates, and they will never be assigned a character, so there should be no reason to encode them. The official Unicode standard says that no UTF forms, including UTF-16, can encode these code points.
However, UCS-2, UTF-8, and UTF-32 can encode these code points in trivial and obvious ways, and a large amount of software does so even though the standard states that such arrangements should be treated as encoding errors. It is possible to unambiguously encode them in UTF-16 by using a code unit equal to the code point, as long as no sequence of two code units can be interpreted as a legal surrogate pair (that is, as long as a high surrogate is never followed by a low surrogate). The majority of UTF-16 encoder and decoder implementations translate between encodings as though this were the case, and Windows allows such sequences in filenames.
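Python's standard codecs make the distinction easy to see: the strict UTF-16 codec rejects an unpaired surrogate, while the "surrogatepass" error handler encodes it as a bare code unit, much as lenient software does in practice. A minimal sketch:

```python
lone = "\ud800"                                    # an unpaired high surrogate
try:
    lone.encode("utf-16-le")                       # strict mode rejects it
except UnicodeEncodeError as e:
    print("strict codec refuses:", e.reason)

raw = lone.encode("utf-16-le", "surrogatepass")    # encode as a bare code unit
print(raw.hex())                                   # "00d8": 0xD800, little-endian
restored = raw.decode("utf-16-le", "surrogatepass")
print(restored == lone)                            # True: it round-trips unchanged
```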
Examples
Consider the encoding of U+10437 (𐐷):
- Subtract 0x10000 from 0x10437. The result is 0x00437, 0000 0000 0100 0011 0111.
- Split this into the high 10-bit value and the low 10-bit value: 0000000001 and 0000110111.
- Add 0xD800 to the high value to form the high surrogate: 0xD800 + 0x0001 = 0xD801.
- Add 0xDC00 to the low value to form the low surrogate: 0xDC00 + 0x0037 = 0xDC37.
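The same result can be checked against Python's built-in UTF-16 codecs (an illustrative sketch, not part of the worked example above):

```python
ch = "\U00010437"                       # U+10437
print(ch.encode("utf-16-be").hex())     # "d801dc37" — the surrogate pair D801 DC37
print(ch.encode("utf-16-le").hex())     # "01d837dc" — same code units, bytes swapped
```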
The following table summarizes this conversion, as well as others. In the "Binary UTF-16" column, the bits taken from the code point are interleaved with the fixed bit patterns (1101 10 and 1101 11) contributed by the surrogate encoding.
| Character | Code point | Binary code point | Binary UTF-16 | UTF-16 hex code units | UTF-16BE hex bytes | UTF-16LE hex bytes |
|---|---|---|---|---|---|---|
| $ | U+0024 | 0000 0000 0010 0100 | 0000 0000 0010 0100 | 0024 | 00 24 | 24 00 |
| € | U+20AC | 0010 0000 1010 1100 | 0010 0000 1010 1100 | 20AC | 20 AC | AC 20 |
| 𐐷 | U+10437 | 0001 0000 0100 0011 0111 | 1101 1000 0000 0001 1101 1100 0011 0111 | D801 DC37 | D8 01 DC 37 | 01 D8 37 DC |
| 𤭢 | U+24B62 | 0010 0100 1011 0110 0010 | 1101 1000 0101 0010 1101 1111 0110 0010 | D852 DF62 | D8 52 DF 62 | 52 D8 62 DF |
Note that for a surrogate pair UTF-16LE is mixed-endian at byte granularity: each 16-bit code unit is byte-swapped individually, so the most significant byte of the encoded value comes second and the least significant byte comes third, as shown here for U+10437 (𐐷):
| middle | most significant | least significant | middle |
|---|---|---|---|
| 01 | D8 | 37 | DC |
| 0000 0001 | 1101 1000 | 0011 0111 | 1101 1100 |
Byte order encoding schemes
UTF-16 and UCS-2 produce a sequence of 16-bit code units. Since most communication and storage protocols are defined for bytes, and each unit thus takes two 8-bit bytes, the order of the bytes may depend on the endianness (byte order) of the computer architecture.
To assist in recognizing the byte order of code units, UTF-16 allows a Byte Order Mark (BOM), a code point with the value U+FEFF, to precede the first actual coded value.[7] (U+FEFF is the invisible zero-width non-breaking space/ZWNBSP character.)[8] If the endian architecture of the decoder matches that of the encoder, the decoder detects the 0xFEFF value, but an opposite-endian decoder interprets the BOM as the non-character value U+FFFE reserved for this purpose. This incorrect result provides a hint to perform byte-swapping for the remaining values. If the BOM is missing, RFC 2781 says that big-endian encoding should be assumed. (In practice, due to Windows using little-endian order by default, many applications similarly assume little-endian encoding by default.) If there is no BOM, one method of recognizing a UTF-16 encoding is searching for the space character (U+0020) which is very common in texts in most languages.
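A sketch of BOM-based byte-order detection in Python (the helper name detect_utf16_order is hypothetical; Python's own "utf-16" codec performs the equivalent check and strips the BOM):

```python
def detect_utf16_order(data: bytes) -> str:
    """Guess the byte order of UTF-16 data from an initial BOM,
    falling back to big-endian as RFC 2781 recommends."""
    if data[:2] == b"\xFF\xFE":
        return "utf-16-le"
    if data[:2] == b"\xFE\xFF":
        return "utf-16-be"
    return "utf-16-be"                      # no BOM: assume big-endian

data = "€".encode("utf-16")                 # platform byte order with a BOM prepended
print(detect_utf16_order(data))             # "utf-16-le" on little-endian machines
print(data.decode("utf-16"))                # the codec strips the BOM itself: "€"
```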
The standard also allows the byte order to be stated explicitly by specifying UTF-16BE or UTF-16LE as the encoding type. When the byte order is specified explicitly this way, a BOM is specifically not supposed to be prepended to the text, and a U+FEFF at the beginning should be handled as a ZWNBSP character. Many applications ignore the BOM code at the start of any Unicode encoding. Web browsers often use a BOM as a hint in determining the character encoding.[9]
For Internet protocols, IANA has approved "UTF-16", "UTF-16BE", and "UTF-16LE" as the names for these encodings. (The names are case insensitive.) The aliases UTF_16 or UTF16 may be meaningful in some programming languages or software applications, but they are not standard names in Internet protocols.
Similar designations, UCS-2, UCS-2BE and UCS-2LE, are used to imitate the UTF-16 labels and behaviour. However, "UCS-2 should now be considered obsolete. It no longer refers to an encoding form in either 10646 or the Unicode Standard."[10]
Usage
UTF-16 is used for text in the OS API in Microsoft Windows 2000/XP/2003/Vista/7/8/CE.[11] Older Windows NT systems (prior to Windows 2000) only support UCS-2.[12] In Windows XP, no code point above U+FFFF is included in any font delivered with Windows for European languages.[13][14] Files and network data tend to be a mix of UTF-16, UTF-8, and legacy byte encodings.
IBM iSeries systems designate code page CCSID 13488 for UCS-2 character encoding, CCSID 1200 for UTF-16 encoding, and CCSID 1208 for UTF-8 encoding.[15]
UTF-16 is used by the Qualcomm BREW operating system, the .NET environments, and the Qt cross-platform graphical widget toolkit.
Symbian OS, used in Nokia S60 handsets and Sony Ericsson UIQ handsets, uses UCS-2. iPhone handsets use UTF-16 for the Short Message Service instead of the UCS-2 described in the 3GPP TS 23.038 (GSM) and IS-637 (CDMA) standards.[16]
The Joliet file system, used in CD-ROM media, encodes file names using UCS-2BE (up to sixty-four Unicode characters per file name).
The Python language environment has officially used only UCS-2 internally since version 2.0, but its UTF-8 decoder to "Unicode" produces correct UTF-16. Since Python 2.2, "wide" builds of Unicode are supported which use UTF-32 instead;[17] these are primarily used on Linux. Python 3.3 no longer uses UTF-16; instead, strings are stored in whichever of ASCII/Latin-1, UCS-2, or UTF-32 gives the most compact representation for the given string.[18]
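One rough way to observe this flexible representation on CPython 3.3 or later is to compare string sizes; the exact numbers are implementation details and vary by build, so this is only a hedged sketch:

```python
import sys

# Under PEP 393, a string's storage width depends on its widest code point:
# roughly 1, 2, or 4 bytes per character plus a fixed object header.
for text in ("a" * 1000, "\u20ac" * 1000, "\U00010437" * 1000):
    print(hex(ord(text[0])), sys.getsizeof(text))
```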
Java originally used UCS-2, and added UTF-16 supplementary character support in J2SE 5.0.
JavaScript uses UCS-2.[19]
In many languages, quoted strings need a new syntax for quoting non-BMP characters when the language's "\uXXXX" syntax explicitly limits itself to 4 hex digits. The most common convention (used by C#, D and several other languages) is an upper-case 'U' with 8 hex digits, such as "\U0001D11E".[20] In Java 7 regular expressions, ICU, and Perl, the syntax "\x{1D11E}" must be used. In many other cases (such as Java outside of regular expressions),[21] the only way to get a non-BMP character is to enter the surrogate halves individually, for example "\uD834\uDD1E" for U+1D11E.
These implementations all return the number of 16-bit code units rather than the number of Unicode code points when the corresponding string-length function is called on their strings, and indexing into a string returns the indexed code unit, not the indexed code point.[19][22][23][24] This leads some people to claim that UTF-16 is not supported. However, the term "character" is defined and used in multiple ways within Unicode terminology,[25] so an unambiguous count is not possible. Most of the confusion is due to obsolete ASCII-era documentation using the term "character" when a fixed-size "byte" or "octet" was intended.
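The distinction is easy to demonstrate in Python, where len() counts code points; dividing the length of the UTF-16 encoding by two gives the code-unit count that UTF-16-based string APIs report (an illustrative sketch):

```python
s = "\U0001D11E"                            # U+1D11E MUSICAL SYMBOL G CLEF
print(len(s))                               # 1 code point
print(len(s.encode("utf-16-le")) // 2)      # 2 UTF-16 code units (a surrogate pair)
```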
References
- ↑ "Questions about encoding forms". Retrieved 12 November 2010.
- ↑ ISO/IEC 10646:2014 "Information technology — Universal Coded Character Set (UCS)" sections 9 and 10.
- ↑ The Unicode Standard version 7.0 (2014) section 2.5.
- ↑ RFC 2781, February 2000.
- ↑ Allen, Julie D.; Anderson, Deborah; Becker, Joe; et al., eds. (2014). "3.8 Surrogates" (PDF). The Unicode Standard, Version 7.0—Core Specification. Mountain View: The Unicode Consortium. p. 118. Retrieved 3 November 2014.
- ↑ https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-2938 and https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-2135
- ↑ UTF-8 encoding produces byte values strictly less than 0xFE, so either byte in the BOM sequence also identifies the encoding as UTF-16.
- ↑ Use of U+FEFF as the character ZWNBSP instead of as a BOM has been deprecated in favor of U+2060 (WORD JOINER); see Byte Order Mark (BOM) FAQ at unicode.org. But if an application interprets an initial BOM as a character, the ZWNBSP character is invisible, so the impact is minimal.
- ↑ goto.glocalnet.net Encoding test pages (load these files and check the encoding that the browser selects)
- ↑ "The Unicode Standard, Version 8.0 – Core Specification" (PDF). Unicode Consortium. August 2015. Retrieved 4 September 2015.
- ↑ Unicode (Windows). Retrieved 08 March 2011 "These functions use UTF-16 (wide character) encoding (…) used for native Unicode encoding on Windows operating systems."
- ↑ "Description of storing UTF-8 data in SQL Server". microsoft.com. December 7, 2005. Retrieved 2008-02-01.
- ↑ "Unicode". microsoft.com. Retrieved 2009-07-20.
- ↑ "Surrogates and Supplementary Characters". microsoft.com. Retrieved 2009-07-20.
- ↑ "Character conversion". IBM. Retrieved 2012-05-22.
- ↑ Selph, Chad (2012-11-08). "Adventures in Unicode SMS". Twilio. Retrieved 2015-08-28.
- ↑ "PEP 261 – Support for "wide" Unicode characters". Python.org. Retrieved 29 May 2015.
- ↑ "PEP 0393 – Flexible String Representation". Python.org. Retrieved 29 May 2015.
- ↑ https://mathiasbynens.be/notes/javascript-encoding
- ↑ ECMA-334, section 9.4.1
- ↑ "Java SE Specifications". sun.com. Retrieved 29 May 2015.
- ↑ Austin, Calvin (May 2004). "J2SE 5.0 in a Nutshell". Supplementary Character Support. Retrieved 2008-06-17.
- ↑ "Javadoc for java.lang.String.charAt(int)".
- ↑ "C Sharp Language Specification". microsoft.com. Retrieved 2009-07-08.
- ↑ "Glossary". unicode.org. Retrieved 29 May 2015.
External links
- A very short algorithm for determining the surrogate pair for any codepoint
- Unicode Technical Note #12: UTF-16 for Processing
- Unicode FAQ: What is the difference between UCS-2 and UTF-16?
- Unicode Character Name Index
- RFC 2781: UTF-16, an encoding of ISO 10646
- java.lang.String documentation, discussing surrogate handling