# Kaplan-Meier estimator

The Kaplan-Meier estimator (also known as the product-limit estimator) estimates the survival function from lifetime data. In medical research, it might be used to measure the fraction of patients living for a certain amount of time after surgery. An economist might measure the length of time people remain unemployed after a job loss. An engineer might measure the time until failure of machine parts.

A plot of the Kaplan-Meier estimate of the survival function is a series of horizontal steps of declining magnitude which, when a large enough sample is taken, approaches the true survival function for that population. The value of the survival function between successive distinct sampled observations ("clicks") is assumed to be constant.

*Figure: an example of a Kaplan-Meier plot for two conditions associated with patient survival.*

An important advantage of the Kaplan-Meier curve is that the method can take into account "censored" data — losses from the sample before the final outcome is observed (for instance, if a patient withdraws from a study). On the plot, small vertical tick-marks indicate losses, where patient data has been censored. When no truncation or censoring occurs, the Kaplan-Meier curve is equivalent to the empirical survival function (one minus the empirical distribution function).

In medical statistics, a typical application might involve grouping patients into categories, for instance, those with Gene A profile and those with Gene B profile. In the graph, patients with Gene B die much more quickly than those with Gene A. After two years, about 80% of the Gene A patients still survive, but less than half of the patients with Gene B.

## Formulation

Let $S(t)$ be the probability that an item from a given population will have a lifetime exceeding $t$. For a sample from this population of size $N$, let the observed times until death of the $N$ sample members be

$$t_1 \le t_2 \le t_3 \le \cdots \le t_N.$$

Corresponding to each $t_i$ is $n_i$, the number "at risk" just prior to time $t_i$, and $d_i$, the number of deaths at time $t_i$.

Note that the intervals between successive event times typically will not be uniform. For example, a small data set might begin with 10 cases, have a death at day 3, a loss (censored case) at day 9, and another death at day 11. Then we have $t_1 = 3$ and $t_2 = 11$, $n_1 = 10$ and $n_2 = 8$, and $d_1 = d_2 = 1$ (the loss at day 9 is not an event time, but it reduces the number at risk from 9 to 8).
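The tabulation of event times, numbers at risk, and deaths can be sketched in a few lines of pure Python. The helper name `risk_table` and the `(time, observed)` pair format are illustrative assumptions, not any library's API; `observed=False` marks a censored case.

```python
def risk_table(data):
    """Return a list of (t_i, n_i, d_i) triples, one per distinct death time."""
    data = sorted(data)                    # order cases by time
    table = []
    at_risk = len(data)
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for time, obs in data if time == t and obs)
        removed = sum(1 for time, obs in data if time == t)
        if deaths > 0:                     # only death times enter the table
            table.append((t, at_risk, deaths))
        at_risk -= removed                 # deaths and losses both leave the risk set
        while i < len(data) and data[i][0] == t:
            i += 1                         # skip past all cases tied at time t
    return table

# The small data set from the text: 10 cases, a death at day 3, a censored
# loss at day 9, a death at day 11; the remaining 7 cases are (by assumption
# here) censored at day 12.
sample = [(3, True), (9, False), (11, True)] + [(12, False)] * 7
print(risk_table(sample))  # [(3, 10, 1), (11, 8, 1)]
```

This reproduces the values in the example: $n_1 = 10$, $n_2 = 8$, with one death at each event time.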

The Kaplan-Meier estimator is the nonparametric maximum likelihood estimate of $S(t)$. It is a product of the form

$$\hat S(t) = \prod_{t_i < t} \frac{n_i - d_i}{n_i}.$$

When there is no censoring, $n_i$ is just the number of survivors just prior to time $t_i$. With censoring, $n_i$ is the number of survivors minus the number of losses (censored cases). Only those surviving cases that are still being observed (have not yet been censored) are "at risk" of an (observed) death.
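The product-limit formula can be evaluated directly. This is a minimal pure-Python sketch; the function name `km_estimate` and the `(t_i, n_i, d_i)` triple format are illustrative assumptions, using the quantities defined in the text.

```python
def km_estimate(t, table):
    """Kaplan-Meier estimate: product of (n_i - d_i)/n_i over death times t_i < t."""
    s = 1.0
    for t_i, n_i, d_i in table:
        if t_i < t:
            s *= (n_i - d_i) / n_i
    return s

# The (t_i, n_i, d_i) table from the worked example in the text.
table = [(3, 10, 1), (11, 8, 1)]
print(km_estimate(5, table))   # 0.9          (one factor: 9/10)
print(km_estimate(12, table))  # 0.9 * 7/8 = 0.7875
```

Each death time contributes one factor, the conditional probability of surviving past $t_i$ given survival up to it; censored cases never contribute a factor, but they do shrink the $n_i$ of later death times.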

There is an alternative definition that is sometimes used, namely

$$\hat S(t) = \prod_{t_i \le t} \frac{n_i - d_i}{n_i}.$$

The two definitions differ only at the observed event times. The latter definition is right-continuous whereas the former definition is left-continuous.

Let $T$ be the random variable that measures the time of failure and let $F(t)$ be its cumulative distribution function. Note that

$$S(t) = \Pr(T > t) = 1 - \Pr(T \le t) = 1 - F(t).$$

Consequently, the right-continuous definition of $\hat S(t)$ may be preferred in order to make the estimate compatible with a right-continuous estimate of $F(t)$.
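A small sketch can show where the two conventions disagree. The `km` helper below is hypothetical (a toggle between the strict and non-strict product), evaluated on the `(t_i, n_i, d_i)` table from the text's worked example.

```python
def km(t, table, right_continuous=False):
    """Kaplan-Meier estimate; the flag switches between t_i < t and t_i <= t."""
    s = 1.0
    for t_i, n_i, d_i in table:
        if t_i < t or (right_continuous and t_i == t):
            s *= (n_i - d_i) / n_i
    return s

table = [(3, 10, 1), (11, 8, 1)]
# The definitions agree away from event times ...
assert km(5, table) == km(5, table, right_continuous=True)
# ... and differ exactly at them:
print(km(3, table))                          # 1.0 (left-continuous)
print(km(3, table, right_continuous=True))   # 0.9 (right-continuous)
```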

## Statistical considerations

The Kaplan-Meier estimator is a statistic, and several estimators are used to approximate its variance. One of the most common is Greenwood's formula:

$$\widehat{\operatorname{Var}}\bigl(\hat S(t)\bigr) = \hat S(t)^2 \sum_{t_i \le t} \frac{d_i}{n_i (n_i - d_i)}.$$

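Greenwood's sum can be accumulated alongside the survival product itself. This pure-Python sketch assumes death times tabulated as `(t_i, n_i, d_i)` triples, as defined in the text; the function name is illustrative.

```python
import math

def greenwood_variance(t, table):
    """Greenwood's estimate: S_hat(t)^2 * sum over t_i <= t of d_i / (n_i * (n_i - d_i))."""
    s = 1.0
    total = 0.0
    for t_i, n_i, d_i in table:
        if t_i <= t:
            s *= (n_i - d_i) / n_i
            total += d_i / (n_i * (n_i - d_i))
    return s * s * total

# Table from the text's worked example: deaths at days 3 and 11.
table = [(3, 10, 1), (11, 8, 1)]
var = greenwood_variance(12, table)
print(math.sqrt(var))  # standard error of S_hat at day 12
```

The standard error grows as the risk sets $n_i$ shrink, which is why the right-hand tail of a Kaplan-Meier curve is typically the least reliable part of the estimate.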
In some cases, one may wish to compare different Kaplan-Meier curves. This may be done by several methods, including the log-rank test and the Cox proportional hazards test.
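The log-rank statistic, one of the comparison methods just named, can be sketched compactly for two groups given as `(time, observed)` pairs. This is an illustrative pure-Python implementation, not any library's API; under the null hypothesis of identical survival, the returned statistic is approximately standard normal.

```python
import math

def logrank_z(group1, group2):
    """Two-sample log-rank statistic: Z = sum(O1 - E1) / sqrt(sum V)."""
    pooled = [(t, obs, 1) for t, obs in group1] + [(t, obs, 2) for t, obs in group2]
    event_times = sorted({t for t, obs, g in pooled if obs})
    o_minus_e = 0.0
    var = 0.0
    for t in event_times:
        n = sum(1 for s, _, _ in pooled if s >= t)            # at risk, both groups
        n1 = sum(1 for s, _, g in pooled if s >= t and g == 1)  # at risk, group 1
        d = sum(1 for s, obs, _ in pooled if s == t and obs)    # deaths at t
        d1 = sum(1 for s, obs, g in pooled if s == t and obs and g == 1)
        o_minus_e += d1 - d * n1 / n        # observed minus expected deaths in group 1
        if n > 1:                           # hypergeometric variance of d1
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / math.sqrt(var)

# Made-up illustrative data; False marks a censored case.
control = [(6, True), (13, True), (70, False), (100, False)]
treated = [(10, True), (110, False), (150, False), (200, False)]
print(round(logrank_z(control, treated), 3))
```

Two identical samples give a statistic of exactly zero, since the observed deaths in each group then match their expectation at every event time.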

## References

• Kaplan, E. L.; Meier, P. (1958). "Nonparametric estimation from incomplete observations". *Journal of the American Statistical Association* 53: 457–481.
• Kaplan, E. L. (1983). Retrospective on the seminal paper, in "This week's citation classic". *Current Contents* 24: 14. Available from UPenn as PDF.
• Greenwood, M. (1926). "The natural duration of cancer". *Reports on Public Health and Medical Subjects* 33: 1–26. London: Her Majesty's Stationery Office.
