Half-logistic distribution

[Figure: probability density plots of the half-logistic distribution]
[Figure: cumulative distribution plots of the half-logistic distribution]

Parameters: none
Support: k ∈ [0, ∞)
Probability density function (pdf): 2e^(−k) / (1 + e^(−k))²
Cumulative distribution function (cdf): (1 − e^(−k)) / (1 + e^(−k))
Mean: ln 4 ≈ 1.386
Median: ln 3 ≈ 1.099
Mode: 0
Variance: π²/3 − (ln 4)² ≈ 1.368


In probability theory and statistics, the half-logistic distribution is a continuous probability distribution: the distribution of the absolute value of a random variable following the logistic distribution. That is, for

X = |Y|,

where Y is a logistic random variable, X is a half-logistic random variable.
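
As a quick numerical illustration of this relationship (a minimal sketch, assuming Python with NumPy and SciPy; scipy.stats.logistic and scipy.stats.halflogistic here refer to the standard, unit-scale forms of the two distributions), folding standard logistic draws about zero should reproduce the half-logistic distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Draw from the standard logistic distribution and take absolute values:
# X = |Y| is half-logistic by construction.
y = stats.logistic.rvs(size=100_000, random_state=rng)
x = np.abs(y)

# Compare the empirical CDF of |Y| with SciPy's half-logistic CDF on a grid.
grid = np.linspace(0.0, 6.0, 13)
empirical = np.array([(x <= t).mean() for t in grid])
print(np.max(np.abs(empirical - stats.halflogistic.cdf(grid))))  # should be small (~1e-3)
```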

Specification

Cumulative distribution function

The cumulative distribution function (cdf) of the half-logistic distribution is intimately related to the cdf of the logistic distribution. Formally, if F(k) is the cdf of the standard logistic distribution, F(k) = 1 / (1 + e^(−k)), then G(k) = 2F(k) − 1 is the cdf of the half-logistic distribution. Specifically, for k ≥ 0,

G(k) = (1 − e^(−k)) / (1 + e^(−k)),

and G(k) = 0 for k < 0.

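This identity can be checked numerically (a small sketch under the same assumptions as above: Python with NumPy/SciPy and the standard, unit-scale distributions):

```python
import numpy as np
from scipy import stats

k = np.linspace(0.0, 10.0, 101)

F = stats.logistic.cdf(k)                      # F(k) = 1 / (1 + e^(-k))
G_identity = 2.0 * F - 1.0                     # G(k) = 2 F(k) - 1
G_explicit = (1.0 - np.exp(-k)) / (1.0 + np.exp(-k))

assert np.allclose(G_identity, G_explicit)
assert np.allclose(G_identity, stats.halflogistic.cdf(k))
```
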
Probability density function

Similarly, the probability density function (pdf) of the half-logistic distribution is g(k) = 2f(k), where f(k) is the pdf of the standard logistic distribution. Explicitly, for k ≥ 0,

g(k) = 2e^(−k) / (1 + e^(−k))²,

and g(k) = 0 for k < 0.
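
As a hedged check (same assumptions as above: Python with NumPy/SciPy, standard unit-scale distributions), the explicit density can be compared against 2·f(k) and verified to integrate to 1 over [0, ∞):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

k = np.linspace(0.0, 10.0, 101)

f = stats.logistic.pdf(k)                          # f(k) = e^(-k) / (1 + e^(-k))^2
g_identity = 2.0 * f                               # g(k) = 2 f(k)
g_explicit = 2.0 * np.exp(-k) / (1.0 + np.exp(-k)) ** 2

assert np.allclose(g_identity, g_explicit)
assert np.allclose(g_identity, stats.halflogistic.pdf(k))

# The density should integrate to 1 on [0, inf).
total, _ = quad(lambda t: 2.0 * np.exp(-t) / (1.0 + np.exp(-t)) ** 2, 0.0, np.inf)
print(total)  # ~1.0
```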

References

  • George, Olusegun; Devidas, Meenakshi (1992). "Some Related Distributions". In N. Balakrishnan (ed.), Handbook of the Logistic Distribution. New York: Marcel Dekker, pp. 232–234. ISBN 0-8247-8587-8.
  • Olapade, A. K. (February 2003). "On Characterizations of the Half-Logistic Distribution". InterStat (2).