# Half-logistic distribution

*Figure: probability density and cumulative distribution plots of the half-logistic distribution.*

In probability theory and statistics, the half-logistic distribution is a continuous probability distribution: the distribution of the absolute value of a random variable following the logistic distribution. That is, for

$$X = |Y|,$$

where Y is a logistic random variable, X is a half-logistic random variable.
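This relationship gives a direct way to sample from the half-logistic distribution: draw standard logistic variates and take absolute values. A minimal sketch in Python using NumPy (the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw from the standard logistic distribution, then take absolute
# values to obtain half-logistic samples.
y = rng.logistic(loc=0.0, scale=1.0, size=100_000)
x = np.abs(y)

# The mean of the standard half-logistic distribution is ln(4) ~= 1.386,
# so the sample mean should land close to that value.
print(x.mean())
```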

## Specification

### Cumulative distribution function

The cumulative distribution function (cdf) of the half-logistic distribution is intimately related to the cdf of the logistic distribution. Formally, if F(k) is the cdf of the logistic distribution, then G(k) = 2F(k) − 1 is the cdf of the half-logistic distribution. Specifically,

$$G(k) = \frac{1 - e^{-k}}{1 + e^{-k}}, \qquad k \ge 0.$$
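The relation G(k) = 2F(k) − 1 is easy to check numerically. A short sketch (the function names are illustrative, not from any particular library):

```python
import math

def logistic_cdf(k):
    """cdf of the standard logistic distribution: F(k) = 1 / (1 + e^{-k})."""
    return 1.0 / (1.0 + math.exp(-k))

def half_logistic_cdf(k):
    """cdf of the half-logistic distribution:
    G(k) = (1 - e^{-k}) / (1 + e^{-k}) for k >= 0, and 0 otherwise."""
    if k < 0:
        return 0.0
    return (1.0 - math.exp(-k)) / (1.0 + math.exp(-k))

for k in [0.0, 0.5, 1.0, 2.0, 5.0]:
    # The two expressions agree at every test point.
    assert math.isclose(half_logistic_cdf(k), 2.0 * logistic_cdf(k) - 1.0)
```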

### Probability density function

Similarly, the probability density function (pdf) of the half-logistic distribution is g(k) = 2f(k), where f(k) is the pdf of the logistic distribution. Explicitly,

$$g(k) = \frac{2\,e^{-k}}{\left(1 + e^{-k}\right)^2}, \qquad k \ge 0.$$
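The pdf can likewise be checked against g(k) = 2f(k), and verified to integrate to one over [0, ∞). A sketch using a simple trapezoidal rule (the cutoff 20 is an arbitrary choice; the tail mass beyond it is negligible):

```python
import math

def logistic_pdf(k):
    """pdf of the standard logistic distribution: f(k) = e^{-k} / (1 + e^{-k})^2."""
    e = math.exp(-k)
    return e / (1.0 + e) ** 2

def half_logistic_pdf(k):
    """pdf of the half-logistic distribution:
    g(k) = 2 e^{-k} / (1 + e^{-k})^2 for k >= 0, and 0 otherwise."""
    return 2.0 * logistic_pdf(k) if k >= 0 else 0.0

# g(k) = 2 f(k) for k >= 0:
assert math.isclose(half_logistic_pdf(1.0), 2.0 * logistic_pdf(1.0))

# Trapezoidal integration over [0, 20] captures essentially all the mass.
n, a, b = 200_000, 0.0, 20.0
h = (b - a) / n
total = sum(half_logistic_pdf(a + i * h) for i in range(n + 1))
total -= 0.5 * (half_logistic_pdf(a) + half_logistic_pdf(b))
total *= h
print(total)  # close to 1
```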

## References

• George, Olusegun; Devidas, Meenakshi (1992). "Some Related Distributions". In N. Balakrishnan (ed.), *Handbook of the Logistic Distribution*. New York: Marcel Dekker, Inc., pp. 232–234. ISBN 0-8247-8587-8.
• Olapade, A.K. (February 2003). "On Characterizations of the Half-Logistic Distribution". *InterStat* (2).