## Prediction by partial matching


**Prediction by partial matching** (**PPM**) is an adaptive statistical data compression technique based on context modeling and prediction. PPM models use a set of previous symbols in the uncompressed symbol stream to predict the next symbol in the stream.

Predictions are usually reduced to symbol rankings. Each symbol (a letter, bit or any other amount of data) is ranked before it is compressed, and the ranking system determines the corresponding codeword (and therefore the compression rate). In many compression algorithms, the ranking is equivalent to probability mass function estimation: given the previous letters (or given a context), each symbol is assigned a probability. For instance, in arithmetic coding the symbols are ranked by their probabilities of appearing after the previous symbols, and the whole sequence is compressed into a single fraction computed according to these probabilities.
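The idea of estimating a probability mass function from a context can be illustrated with a minimal sketch. The function names and the example string below are illustrative, not part of any standard PPM implementation; this shows only single-order context counting:

```python
from collections import defaultdict, Counter

def build_model(text, order):
    """Count which symbols follow each context of length `order`."""
    counts = defaultdict(Counter)
    for i in range(order, len(text)):
        context = text[i - order : i]
        counts[context][text[i]] += 1
    return counts

def predict(counts, context):
    """Estimate a probability for each symbol seen after `context`."""
    seen = counts.get(context)
    if not seen:
        return {}
    total = sum(seen.values())
    return {sym: c / total for sym, c in seen.items()}

model = build_model("abracadabra", 1)
# After the context "a", the model has seen 'b' twice and
# 'c' and 'd' once each, so P('b' | "a") = 2/4 = 0.5.
print(predict(model, "a"))
```

The probabilities produced this way are exactly the symbol rankings that a coder such as an arithmetic coder would consume.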

The number of previous symbols, *n*, determines the order of the PPM model, which is denoted as PPM(*n*). Unbounded variants, where the context has no length limitations, also exist and are denoted as PPM\*. If no prediction can be made based on all *n* context symbols, a prediction is attempted with *n* − 1 symbols. This process is repeated until a match is found or no more symbols remain in context. At that point a fixed prediction is made.
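The fallback from order *n* down to a fixed prediction can be sketched as follows. This is a simplified model of the mechanism, not a real PPM coder (in particular it omits escape probabilities and exclusion); the function name and example inputs are illustrative:

```python
from collections import defaultdict, Counter

def ppm_predict(text, max_order, alphabet):
    """Sketch of PPM's context fallback: try the longest context
    first, shortening it until some prediction is available."""
    # Count followers for every context length up to max_order.
    counts = defaultdict(Counter)
    for order in range(max_order + 1):
        for i in range(order, len(text)):
            counts[text[i - order : i]][text[i]] += 1
    # Predict the next symbol after `text`, falling back from
    # order max_order to order 0.
    for order in range(max_order, -1, -1):
        context = text[len(text) - order :] if order else ""
        seen = counts.get(context)
        if seen:
            total = sum(seen.values())
            return order, {s: c / total for s, c in seen.items()}
    # No context matched at any order: fixed (uniform) prediction.
    return -1, {s: 1 / len(alphabet) for s in alphabet}

# The order-2 context "ra" has been seen once, followed by 'c'.
order, probs = ppm_predict("abracadabra", 2, set("abrcd"))
```

On an empty input no context of any order has been observed, so the sketch falls through to the fixed uniform prediction, mirroring the "at that point a fixed prediction is made" step above.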

Much of the work in optimizing a PPM model is handling inputs that have not already occurred in the input stream. The obvious way to handle them is to create a "never-seen" symbol which triggers the escape sequence. But what probability should be assigned to a symbol that has never been seen? This is called the zero-frequency problem. One variant uses the Laplace estimator, which assigns the "never-seen" symbol a fixed pseudocount of one. A variant called PPMd increments the pseudocount of the "never-seen" symbol every time it is used. (In other words, PPMd estimates the probability of a new symbol as the ratio of the number of unique symbols to the total number of symbols observed.)
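The PPMd-style escape estimate described above can be sketched directly from that ratio. The function name is illustrative, and this computes only the escape probability itself, not a full PPMd model:

```python
from collections import Counter

def ppmd_escape_probability(symbols):
    """PPMd-style estimate: the probability that the next symbol
    is one we have never seen is
    (number of unique symbols observed) / (total symbols observed)."""
    counts = Counter(symbols)
    unique = len(counts)
    total = sum(counts.values())
    return unique / total

print(ppmd_escape_probability("abracadabra"))  # 5 unique / 11 total
```

Under the Laplace-estimator variant, by contrast, the escape symbol would keep a fixed pseudocount of one regardless of how many distinct symbols have appeared.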
