Codec
A codec is a device or computer program for encoding or decoding a digital data stream or signal.[1][2][3] Codec is a portmanteau of coder-decoder or, less commonly, compressor-decompressor.
A codec encodes a data stream or signal for transmission, storage or encryption, or decodes it for playback or editing. Codecs are used in videoconferencing, streaming media, and video editing applications.
Related concepts
In the mid-20th century, a codec was a hardware device that coded analog signals into digital form using pulse-code modulation (PCM). Late in the century, the name was also applied to a class of software for converting between different digital signal formats, including compander functions.
A modem (a contraction of modulator-demodulator), referred to in the telecommunications industry as a dataset, converts digital data from computers into analog signals for transmission over telephone lines; on the receiving end, the analog signal is converted back to digital data.
An audio codec converts analog audio signals into digital signals for transmission or storage. A receiving device then converts the digital signals back to analog form for playback using an audio decoder. An example of this is the codecs used in the sound cards of personal computers. A video codec accomplishes the same task for video signals.
Compression quality
- Lossy codecs: Many of the more popular codecs in the software world are lossy, meaning that they sacrifice some quality in order to achieve compression. Depending on the codec and the settings used, the result is often virtually indistinguishable from the original uncompressed sound or images.[4] Smaller data sets ease the strain on relatively expensive storage sub-systems such as non-volatile memory and hard disk, as well as write-once-read-many formats such as CD-ROM, DVD and Blu-ray Disc. Lower data rates also reduce cost and improve performance when the data is transmitted.
- Lossless codecs: There are also many lossless codecs, which are typically used for archiving data in a compressed form while retaining all of the information present in the original stream. If preserving the original quality of the stream matters more than reducing the data size, lossless codecs are preferred. This is especially true if the data is to undergo further processing (for example, editing), in which case repeated encoding and decoding with lossy codecs progressively degrades the quality of the result (visually, audibly, or both). Chaining more than one lossy codec or encoding scheme can likewise degrade quality significantly. The decreasing cost of storage capacity and network bandwidth tends to reduce the need for lossy codecs for some media. A minimal sketch of the lossy/lossless distinction follows this list.
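The sketch below illustrates the distinction, assuming Python with NumPy available; zlib stands in for a lossless codec and a coarse requantizer stands in for a lossy one. Both are illustrative stand-ins, not real audio codecs.

```python
import zlib

import numpy as np

# A short block of 16-bit PCM-style samples standing in for an audio stream.
samples = np.array([1203, -4087, 905, 15000, -15001, 42], dtype=np.int16)
raw = samples.tobytes()

# Lossless: zlib (DEFLATE) round-trips to the exact original bytes.
compressed = zlib.compress(raw)
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.int16)
assert np.array_equal(samples, restored)        # every bit preserved

# Lossy (illustrative): coarse requantization discards the low-order bits,
# shrinking the stored values but making exact reconstruction impossible.
step = 64
lossy = (samples // step).astype(np.int16)       # what gets stored/transmitted
approx = (lossy * step).astype(np.int16)         # best-effort reconstruction
assert not np.array_equal(samples, approx)       # information was lost
print("max reconstruction error:", np.max(np.abs(samples - approx)))
```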
Media codecs
Two principal techniques are used in codecs: pulse-code modulation and delta modulation. Codecs are often designed to emphasize certain aspects of the media to be encoded. For example, a digital video (using a DV codec) of a sports event needs to encode motion well but not necessarily exact colors, while a video of an art exhibit needs to encode color and surface texture well.
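A rough sketch of the two techniques applied to a test tone, in Python with NumPy; the 8-bit PCM resolution and the delta-modulation step size are arbitrary illustrative choices, not values taken from any particular codec.

```python
import numpy as np

# One cycle of a test tone, "sampled" at 64 points.
t = np.linspace(0.0, 1.0, 64, endpoint=False)
analog = np.sin(2 * np.pi * t)

# Pulse-code modulation: each sample is quantized independently to an
# 8-bit code word (256 levels across the -1..1 range).
levels = 256
pcm_codes = np.round((analog + 1.0) / 2.0 * (levels - 1)).astype(np.uint8)
pcm_decoded = pcm_codes / (levels - 1) * 2.0 - 1.0

# Delta modulation: only the sign of the change is transmitted (1 bit per
# sample); the decoder integrates fixed steps to track the signal.
step = 0.1
bits = []
estimate = 0.0
for x in analog:
    bit = 1 if x > estimate else 0
    bits.append(bit)
    estimate += step if bit else -step

# Decoder side: rebuild the staircase approximation from the bit stream.
dm_decoded = np.cumsum([step if b else -step for b in bits])

print("PCM max error:  ", np.max(np.abs(analog - pcm_decoded)))
print("Delta max error:", np.max(np.abs(analog - dm_decoded)))
```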
Audio codecs for cell phones need to have very low latency between source encoding and playback. In contrast, audio codecs for recording or broadcast can use high-latency audio compression techniques to achieve higher fidelity at a lower bit-rate.
There are thousands of audio and video codecs, ranging in cost from free to hundreds of dollars or more. This variety of codecs can create compatibility and obsolescence issues. The impact is lessened for older formats, for which free or nearly free codecs have existed for a long time. The older formats are often ill-suited to modern applications, however, such as playback on small portable devices. For example, raw uncompressed PCM audio (44.1 kHz, 16-bit stereo, as represented on an audio CD or in a .wav or .aiff file) has long been a standard across multiple platforms, but its transmission over networks is slow and expensive compared with more modern compressed formats, such as Opus and MP3.
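As a worked example of why the raw data rate matters, the following compares CD-quality PCM with a common compressed-audio bitrate; the 128 kbit/s figure is a typical illustrative value, not a property of any specific codec.

```python
# Data rate of raw CD-quality PCM versus a typical compressed stream.
sample_rate = 44_100      # samples per second
bit_depth = 16            # bits per sample
channels = 2              # stereo

pcm_bps = sample_rate * bit_depth * channels
print(f"Uncompressed PCM: {pcm_bps:,} bit/s (~{pcm_bps / 1e6:.2f} Mbit/s)")
# -> 1,411,200 bit/s, roughly 1.41 Mbit/s, or about 10 MB per minute

mp3_bps = 128_000         # a common MP3/Opus-class bitrate for music
print(f"Compressed:       {mp3_bps:,} bit/s "
      f"({pcm_bps / mp3_bps:.0f}x smaller)")
```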
Many multimedia data streams contain both audio and video, and often some metadata that permit synchronization of audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data streams to be useful in stored or transmitted form, they must be encapsulated together in a container format.
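A toy illustration of what a container does, in Python; the Packet fields and the mux function are invented for this sketch and do not correspond to any real container specification.

```python
from dataclasses import dataclass

# Illustrative only: a toy "container" that interleaves packets from an
# audio stream, a video stream, and a metadata stream, each tagged with a
# stream id and a timestamp so a player can resynchronize them on playback.
@dataclass
class Packet:
    stream_id: str     # "audio", "video" or "meta"
    timestamp_ms: int  # presentation time used for A/V sync
    payload: bytes     # codec-specific encoded data

def mux(*streams):
    # Interleave packets from all streams in presentation order,
    # roughly what a real muxer for MP4 or Matroska would do.
    return sorted((p for s in streams for p in s), key=lambda p: p.timestamp_ms)

audio = [Packet("audio", t, b"\x00" * 4) for t in range(0, 100, 20)]
video = [Packet("video", t, b"\xff" * 8) for t in range(0, 100, 40)]
meta = [Packet("meta", 0, b"title=Example")]

container = mux(audio, video, meta)
for p in container:
    print(p.timestamp_ms, p.stream_id, len(p.payload))
```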
Lower-bitrate codecs allow more users to share the same transmission capacity, but they also introduce more distortion. Beyond that initial increase in distortion, lower-bitrate codecs achieve their rates with more complex algorithms that make assumptions about the media and about the expected packet loss rate; other codecs may not share those assumptions. When a user with a low-bitrate codec talks to a user with a different codec, each transcoding step introduces additional distortion.
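The compounding effect of transcoding can be illustrated with two mismatched quantizers standing in for two lossy codecs; a sketch in Python with NumPy, where the step sizes are arbitrary.

```python
import numpy as np

# Illustrative only: two mismatched "codecs" modeled as coarse quantizers
# with different step sizes, standing in for two low-bitrate speech codecs.
def codec_round_trip(signal, step):
    # Encode (quantize) and immediately decode (reconstruct).
    return np.round(signal / step) * step

rng = np.random.default_rng(0)
speech = rng.normal(0.0, 1.0, 10_000)            # stand-in for a speech signal

once = codec_round_trip(speech, step=0.05)       # caller's codec
twice = codec_round_trip(once, step=0.07)        # transcoded for the far end

err_once = np.sqrt(np.mean((speech - once) ** 2))
err_twice = np.sqrt(np.mean((speech - twice) ** 2))
print(f"RMS error after one codec:   {err_once:.4f}")
print(f"RMS error after transcoding: {err_twice:.4f}")   # larger than one pass
```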
AVI is sometimes erroneously described as a codec, but AVI is actually a container format, while a codec is the software or hardware that encodes or decodes the audio or video it contains. Audio and video encoded with many different codecs can be placed in an AVI container, although AVI is not an ISO standard. Other well-known container formats include Ogg, ASF, QuickTime, RealMedia, Matroska, and DivX Media Format. Container formats that are ISO standards include MPEG transport stream, MPEG program stream, MP4 and the ISO base media file format.
References
- ↑ "Using codecs". Microsoft. Retrieved 2009-12-21.
- ↑ "About.com - Codec". About.com. Retrieved 2009-12-21.
- ↑ "Ubuntu Documentation - What is a codec?". Ubuntu Documentation Team. Archived from the original on February 19, 2012. Retrieved 2009-12-21.
- ↑ "Audio quality of aac vs. mp3 vs. wma vs. ogg encoders". SoundExpert. Retrieved 2010-07-25.
above 5.0 – all sound artifacts will be beyond threshold of human perception with corresponding perception margin