Huffman coding

Example: character frequencies and codewords for the text "this is an example of a huffman tree"

Char    Freq   Code
space   7      111
a       4      010
e       4      000
f       3      1101
h       2      1010
i       2      1000
m       2      0111
n       2      0010
s       2      1011
t       2      0110
l       1      11001
o       1      00110
p       1      10011
r       1      11000
u       1      00111
x       1      10010
In computer science and information theory, Huffman coding is an entropy encoding algorithm used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file) where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for each possible value of the source symbol. It was developed by David A. Huffman while he was a Ph.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes." Huffman became a member of the MIT faculty upon graduation and was later the founding member of the Computer Science Department at the University of California, Santa Cruz, now a part of the Baskin School of Engineering.

Huffman coding uses a specific method for choosing the representation of each symbol, resulting in a prefix-free code (sometimes simply called a "prefix code"): the bit string representing one symbol is never a prefix of the bit string representing any other symbol. The most common characters are expressed using shorter strings of bits than are used for less common source symbols. Huffman was able to design the most efficient compression method of this type: no other mapping of individual source symbols to unique strings of bits produces a smaller average output size when the actual symbol frequencies agree with those used to create the code. A method was later found to do this in linear time if the input probabilities (also known as weights) are sorted.

For a set of symbols with a uniform probability distribution and a number of members which is a power of two, Huffman coding is equivalent to simple binary block encoding, e.g., ASCII coding. Huffman coding is such a widespread method for creating prefix-free codes that the term "Huffman code" is widely used as a synonym for "prefix-free code" even when such a code is not produced by Huffman's algorithm.
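
To see why the prefix property matters, consider decoding: a decoder can read bits one at a time and emit a symbol as soon as the bits read so far form a codeword, with no lookahead and no ambiguity. A minimal sketch in Python, using a small illustrative code table (the table and names here are made up for the example, not taken from the article):

    # Greedy decoding with a prefix-free code.
    # The code table below is illustrative only; no codeword is a prefix of another.
    CODE = {"a": "0", "b": "10", "c": "11"}
    DECODE = {bits: sym for sym, bits in CODE.items()}

    def decode(bitstring: str) -> str:
        out, buffer = [], ""
        for bit in bitstring:
            buffer += bit
            if buffer in DECODE:       # a complete codeword has been read
                out.append(DECODE[buffer])
                buffer = ""
        if buffer:
            raise ValueError("bit string ends in the middle of a codeword")
        return "".join(out)

    print(decode("010110"))            # -> "abca"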

Although Huffman coding is optimal for symbol-by-symbol coding with a known input probability distribution, its optimality is sometimes overstated. For example, arithmetic coding and LZW coding often achieve better compression. Both of these methods can combine an arbitrary number of symbols for more efficient coding, and both generally adapt to the actual input statistics; the latter property is useful when the input probabilities are not precisely known.

History

In 1951, David A. Huffman and his MIT information theory classmates were given the choice of a term paper or a final exam. The professor, Robert M. Fano, assigned a term paper on the problem of finding the most efficient binary code. Huffman, unable to prove any codes were the most efficient, was about to give up and start studying for the final when he hit upon the idea of using a frequency-sorted binary tree and quickly proved this method the most efficient.

In doing so, the student outdid his professor, who had worked with information theory inventor Claude Shannon to develop a similar code. Huffman avoided the major flaw of the suboptimal Shannon-Fano coding by building the tree from the bottom up instead of from the top down.

Problem definition

Informal description

Given: A set of symbols and their weights (usually proportional to probabilities).
Find: A prefix-free binary code (a set of codewords) with minimum expected codeword length (equivalently, a tree with minimum weighted path length).

Formalized description

Input.
Alphabet $A = \{a_1, a_2, \dots, a_n\}$, which is the symbol alphabet of size $n$.
Set $W = \{w_1, w_2, \dots, w_n\}$, which is the set of the (positive) symbol weights (usually proportional to probabilities), i.e. $w_i = \mathrm{weight}(a_i)$, $1 \le i \le n$.

Output.
Code $C(A, W) = \{c_1, c_2, \dots, c_n\}$, which is the set of (binary) codewords, where $c_i$ is the codeword for $a_i$, $1 \le i \le n$.

Goal.
Let $L(C) = \sum_{i=1}^{n} w_i \, \mathrm{length}(c_i)$ be the weighted path length of code $C$. Condition: $L(C) \le L(T)$ for any code $T(A, W)$.

Samples

Input (A, W)
  Symbol (a_i)                               a       b       c       d       e       Sum
  Weights (w_i)                              0.10    0.15    0.30    0.16    0.29    = 1
Output C
  Codewords (c_i)                            000     001     10      01      11
  Codeword length in bits (l_i)              3       3       2       2       2
  Weighted path length (l_i w_i)             0.30    0.45    0.60    0.32    0.58    L(C) = 2.25
Optimality
  Probability budget (2^-l_i)                1/8     1/8     1/4     1/4     1/4     = 1.00
  Information content in bits (−log2 w_i) ≈  3.32    2.74    1.74    2.64    1.79
  Entropy contribution (−w_i log2 w_i)       0.332   0.411   0.521   0.423   0.518   H(A) = 2.205


For any code that is biunique, meaning that the code is uniquely decodeable, the sum of the probability budgets across all symbols is always less than or equal to one. In this example, the sum is strictly equal to one; as a result, the code is termed a complete code. If this is not the case, you can always derive an equivalent code by adding extra symbols (with associated null probabilities), to make the code complete while keeping it biunique.
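
The probability budget row in the table above is an instance of the Kraft inequality: for any uniquely decodable binary code, the quantities $2^{-l_i}$ sum to at most 1, and a complete code reaches exactly 1. A quick check in Python for the sample code's lengths (a sketch, using only the figures given above):

    from fractions import Fraction

    # Codeword lengths of the sample code above (symbols a..e)
    lengths = [3, 3, 2, 2, 2]

    # Kraft sum: sum of 2^(-l) over all codewords.
    kraft_sum = sum(Fraction(1, 2 ** l) for l in lengths)
    print(kraft_sum)        # 1 -> the sample code is complete
    assert kraft_sum <= 1   # holds for every uniquely decodable binary code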

As defined by Shannon (1948), the information content h (in bits) of each symbol a_i with non-null probability is

$h(a_i) = \log_2 \frac{1}{w_i} = -\log_2 w_i.$

The information content of symbols with null probability is not defined; in practice it can be set to any finite value, because such symbols never appear in the encoded message (unless the message has infinite symbol length, in which case these symbols may occur with a vanishingly small positive probability).

The entropy H (in bits) is the weighted sum, across all symbols a_i with non-zero probability w_i, of the information content of each symbol:

$H(A) = \sum_{w_i > 0} w_i \, h(a_i) = \sum_{w_i > 0} w_i \log_2 \frac{1}{w_i} = -\sum_{w_i > 0} w_i \log_2 w_i.$

Symbols with zero probability have, in theory, infinite information content, but since they are necessarily absent from the original message they contribute nothing to the entropy of the encoded message (unless the message is infinite); formally, $-w \log_2 w \to 0$ as $w \to 0^+$, so such symbols can be included in the sum above with a zero term, removing the restriction to non-zero weights.

As a consequence of Shannon's Source coding theorem, the entropy is a measure of the smallest codeword length that is theoretically possible for the given alphabet with associated weights. In this example, the weighted average codeword length is 2.25 bits per symbol, only slightly larger than the calculated entropy of 2.205 bits per symbol. So not only is this code optimal in the sense that no other feasible code performs better, but it is very close to the theoretical limit established by Shannon.
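
Both figures quoted above can be reproduced directly from the sample weights and codeword lengths; a short sketch:

    import math

    weights = [0.10, 0.15, 0.30, 0.16, 0.29]   # sample weights for symbols a..e
    lengths = [3, 3, 2, 2, 2]                  # corresponding codeword lengths

    # Weighted average codeword length L(C) = sum of w_i * l_i
    avg_length = sum(w * l for w, l in zip(weights, lengths))

    # Shannon entropy H(A) = -sum of w_i * log2(w_i)
    entropy = -sum(w * math.log2(w) for w in weights)

    print(f"L(C) = {avg_length:.3f} bits/symbol")   # 2.250
    print(f"H(A) = {entropy:.3f} bits/symbol")      # 2.205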

Note that, in general, a Huffman code need not be unique, but it is always one of the codes minimizing $L(C)$.

Basic technique



The technique works by creating a binary tree of nodes. These can be stored in a regular array, the size of which depends on the number of symbols, N. A node can be either a leaf node or an internal node. Initially, all nodes are leaf nodes, each containing the symbol itself, the weight (frequency of appearance) of the symbol, and, optionally, a link to a parent node, which makes it easy to read the code (in reverse) starting from a leaf node. Internal nodes contain a weight, links to two child nodes, and an optional link to a parent node. By a common convention, bit '0' represents following the left child and bit '1' represents following the right child. A finished tree has N leaf nodes and N−1 internal nodes.

A linear-time* method to create a Huffman tree is to use two queues, the first one containing the initial weights (along with pointers to the associated leaves), and combined weights (along with pointers to the trees) being put in the back of the second queue. This assures that the lowest weight is always kept at the front of one of the two queues.

Creating the tree (a code sketch follows the list):
  1. Start with as many leaves as there are symbols.
  2. Enqueue all leaf nodes into the first queue (by probability in increasing order, so that the least likely item is at the head of the queue).
  3. While there is more than one node in the queues:
     1. Dequeue the two nodes with the lowest weight by examining the fronts of both queues.
     2. Create a new internal node, with the two just-removed nodes as children (either node can be either child) and the sum of their weights as the new weight.
     3. Enqueue the new node into the rear of the second queue.
  4. The remaining node is the root node; the tree has now been generated.
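
A compact Python sketch of the two-queue construction described above. The node representation (tuples of weight, symbol, and children) and the function names are choices made here for illustration; ties are broken in favour of the first queue, as suggested in the note on variance below.

    from collections import deque

    def huffman_code(freqs):
        """Build a Huffman code with the two-queue method.

        freqs: dict mapping symbol -> weight. Returns dict symbol -> bit string.
        A node is a tuple (weight, symbol_or_None, left_child, right_child).
        """
        if len(freqs) == 1:                          # degenerate case: one symbol, one bit
            return {next(iter(freqs)): "0"}

        # Queue 1: leaves by increasing weight (this sort is the O(n log n) part).
        q1 = deque(sorted((w, s, None, None) for s, w in freqs.items()))
        q2 = deque()                                 # Queue 2: combined (internal) nodes

        def pop_lightest():
            # The lightest remaining node is at the front of one of the two queues;
            # on ties, prefer the first queue (this keeps codeword-length variance low).
            if not q2 or (q1 and q1[0][0] <= q2[0][0]):
                return q1.popleft()
            return q2.popleft()

        while len(q1) + len(q2) > 1:
            a, b = pop_lightest(), pop_lightest()
            q2.append((a[0] + b[0], None, a, b))     # internal node, weight = sum of children

        code = {}

        def walk(node, prefix):                      # '0' = left child, '1' = right child
            _, sym, left, right = node
            if sym is not None:
                code[sym] = prefix
            else:
                walk(left, prefix + "0")
                walk(right, prefix + "1")

        walk(q2[0], "")
        return code

    # Codeword lengths match the sample above: a and b get 3 bits, c, d, e get 2 bits.
    print(huffman_code({"a": 0.10, "b": 0.15, "c": 0.30, "d": 0.16, "e": 0.29}))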


It is generally beneficial to minimize the variance of codeword length. For example, a communication buffer receiving Huffman-encoded data may need to be larger to deal with especially long codewords if the tree is especially unbalanced. To minimize variance, simply break ties between the queues by choosing the item in the first queue. This modification retains the mathematical optimality of Huffman coding while minimizing both the variance and the length of the longest codeword.

* This method is linear time assuming that the leaf nodes are already sorted by initial weight. If not, sorting them takes O(n log n) time.

Main properties

The frequencies used can be generic ones for the application domain that are based on average experience, or they can be the actual frequencies found in the text being compressed. (This variation requires that a frequency table or other hint as to the encoding must be stored with the compressed text; implementations employ various tricks to store tables efficiently.)

Huffman coding is optimal when the probability of each input symbol is a negative power of two. Prefix-free codes tend to have a slight inefficiency on small alphabets, where probabilities often fall between these optimal points. "Blocking", or expanding the alphabet size by coalescing multiple symbols into "words" of fixed or variable length before Huffman coding, usually helps, especially when adjacent symbols are correlated (as in the case of natural-language text). The worst case for Huffman coding can happen when the probability of a symbol exceeds $2^{-1} = 0.5$, making the upper limit of inefficiency unbounded. These situations often respond well to a form of blocking called run-length encoding.
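
As a rough illustration of this worst case (the numbers here are chosen for illustration, not taken from the article): for a two-symbol source with probabilities 0.99 and 0.01, the entropy is $H = -0.99\log_2 0.99 - 0.01\log_2 0.01 \approx 0.081$ bits per symbol, yet Huffman coding, like any code that assigns at least one whole bit to each symbol, must spend 1 bit per symbol, more than twelve times the entropy. The overhead grows without bound as the dominant probability approaches 1, which is exactly the situation that run-length encoding of the dominant symbol addresses.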

Arithmetic coding produces slight gains over Huffman coding, but in practice these gains have seldom been large enough to offset arithmetic coding's higher computational complexity and patent royalties. (As of July 2006, IBM owns patents on many methods of arithmetic coding in several jurisdictions; see US patents on arithmetic coding.)

Variations

Many variations of Huffman coding exist, some of which use a Huffman-like algorithm, and others of which find optimal prefix codes (while, for example, putting different restrictions on the output). Note that, in the latter case, the method need not be Huffman-like, and, indeed, need not even be polynomial time. An exhaustive list of papers on Huffman coding and its variations is given by "Code and Parse Trees for Lossless Source Encoding"[1].

n-ary Huffman coding

The n-ary Huffman algorithm uses the {0, 1, ..., n − 1} alphabet to encode messages and build an n-ary tree. This approach was considered by Huffman in his original paper.

Adaptive Huffman coding

A variation called adaptive Huffman coding calculates the frequencies dynamically based on recent actual frequencies in the source string. This is somewhat related to the LZ family of algorithms.

Huffman template algorithm

Most often, the weights used in implementations of Huffman coding represent numeric probabilities, but the algorithm given above does not require this; it requires only a way to order weights and to add them. The Huffman template algorithm enables one to use any kind of weights (costs, frequencies, pairs of weights, non-numerical weights) and one of many combining methods (not just addition). Such algorithms can solve other minimization problems, such as minimizing $\max_i\left[w_i + \mathrm{length}(c_i)\right]$, a problem first applied to circuit design[2].
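
A sketch of that generality in Python, keeping the greedy loop but making the combining step a parameter (a heap is used instead of the two-queue method for brevity; all names here are illustrative). With the default combiner it reproduces ordinary Huffman codeword lengths; swapping in max(a, b) + 1 aims the same loop at the minimax problem mentioned above, a known variant not spelled out in this article.

    import heapq
    import itertools

    def huffman_lengths(weights, combine=lambda a, b: a + b):
        """Huffman-like greedy pass returning a codeword length per symbol.

        weights: dict symbol -> weight.
        combine: how the weights of two merged nodes form the parent's weight.
                 Ordinary Huffman coding uses addition; combine = lambda a, b: max(a, b) + 1
                 targets the minimax problem min max_i (w_i + length_i).
        """
        tie = itertools.count()          # tie-breaker so the heap never compares symbol lists
        heap = [(w, next(tie), [s]) for s, w in weights.items()]
        heapq.heapify(heap)
        depth = {s: 0 for s in weights}

        while len(heap) > 1:
            w1, _, syms1 = heapq.heappop(heap)
            w2, _, syms2 = heapq.heappop(heap)
            for s in syms1 + syms2:      # every symbol under the new node moves one level deeper
                depth[s] += 1
            heapq.heappush(heap, (combine(w1, w2), next(tie), syms1 + syms2))
        return depth

    # Ordinary Huffman lengths for the sample weights: a, b -> 3 bits; c, d, e -> 2 bits.
    print(huffman_lengths({"a": 0.10, "b": 0.15, "c": 0.30, "d": 0.16, "e": 0.29}))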

Length-limited Huffman coding

Length-limited Huffman coding is a variant in which the goal is still to achieve a minimum weighted path length, but there is an additional restriction that the length of each codeword must be less than a given constant. The package-merge algorithm solves this problem with a simple greedy approach very similar to that used by Huffman's algorithm. Its time complexity is $O(nL)$, where $L$ is the maximum length of a codeword. No algorithm is known to solve this problem with the same efficiency as conventional Huffman coding.

Huffman coding with unequal letter costs

In the standard Huffman coding problem, it is assumed that each symbol in the set that the code words are constructed from has an equal cost to transmit: a code word whose length is N digits will always have a cost of N, no matter how many of those digits are 0s, how many are 1s, etc. When working under this assumption, minimizing the total cost of the message and minimizing the total number of digits are the same thing.

Huffman coding with unequal letter costs is the generalization in which this assumption is no longer assumed true: the letters of the encoding alphabet may have non-uniform lengths, due to characteristics of the transmission medium. An example is the encoding alphabet of Morse code, where a 'dash' takes longer to send than a 'dot', and therefore the cost of a dash in transmission time is higher. The goal is still to minimize the weighted average codeword length, but it is no longer sufficient just to minimize the number of symbols used by the message. No algorithm is known to solve this in the same manner or with the same efficiency as conventional Huffman coding.

Optimal alphabetic binary trees (Hu-Tucker coding and the canonical Huffman code)

In the standard Huffman coding problem, it is assumed that any codeword can correspond to any input symbol. In the alphabetic version, the alphabetic order of inputs and outputs must be identical. Thus, for example, the ordered symbols a < b < c could not be assigned the codewords {00, 1, 01} (which are not in lexicographic order), but should instead be assigned either {00, 01, 1} or {0, 10, 11}. This is also known as the Hu-Tucker problem, after the authors of the paper presenting the first linearithmic solution to this optimal binary alphabetic problem, which has some similarities to the Huffman algorithm but is not a variation of it. These optimal alphabetic binary trees are often used as binary search trees. If the weights corresponding to the alphabetically ordered inputs are in numerical order, the Huffman code has the same lengths as the optimal alphabetic code, which can be found by calculating these lengths. The resulting alphabetic code is sometimes called the canonical Huffman code and is often the code used in practice, due to ease of encoding/decoding. The technique for finding this code is sometimes called Huffman-Shannon-Fano coding, since it is optimal like Huffman coding, but alphabetic in weight probability, like Shannon-Fano coding. The Huffman-Shannon-Fano code corresponding to the example is {000, 001, 01, 10, 11}, which, having the same codeword lengths as the original solution, is also optimal.
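
A sketch of deriving such a code from codeword lengths alone: symbols are visited in the desired (alphabetic) order and each is given the smallest codeword of its length that does not conflict with the ones already assigned. This assumes the supplied lengths admit an ordered prefix-free assignment, as in the example above; the function and variable names are choices made here.

    def alphabetic_code_from_lengths(lengths):
        """Assign binary codewords to symbols in the given order, using only
        their codeword lengths.  Assumes the lengths admit an ordered
        prefix-free assignment (e.g. lengths taken from a Huffman code).

        lengths: dict symbol -> codeword length, iterated in the desired order.
        """
        code = {}
        value, prev_len = 0, 0              # 'value' is the next free codeword value
        for sym, l in lengths.items():
            if l >= prev_len:
                value <<= (l - prev_len)    # pad with zeros to reach the longer length
            else:
                shift = prev_len - l
                value = -(-value >> shift)  # truncate, rounding up to stay prefix-free
            assert value < (1 << l), "lengths do not admit an ordered prefix code"
            code[sym] = format(value, "0{}b".format(l))
            value, prev_len = value + 1, l
        return code

    # Lengths of the sample code (a..e) reproduce the codewords {000, 001, 01, 10, 11}.
    print(alphabetic_code_from_lengths({"a": 3, "b": 3, "c": 2, "d": 2, "e": 2}))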

Applications

Arithmetic coding can be viewed as a generalization of Huffman coding; indeed, in practice arithmetic coding is often preceded by Huffman coding, as it is easier to find an arithmetic code for a binary input than for a nonbinary input. Also, although arithmetic coding offers better compression performance than Huffman coding, Huffman coding is still in wide use because of its simplicity, high speed and lack of encumbrance by patents.

Huffman coding today is often used as a "back-end" to some other compression method. DEFLATE (PKZIP's algorithm) and multimedia codecs such as JPEG and MP3 have a front-end model and quantization followed by Huffman coding.

See also

References

External links


