SOA Exam P (Probability) Glossary

37 essential terms and definitions for SOA Exam P (Probability). Each definition is written for exam preparation, covering the concepts as they are tested on the 2026 syllabus.

37 Terms
17 Sections
2026 Syllabus

B

Bayes' Theorem
Bayes' Theorem is a formula that describes how to update the probability of a hypothesis given new evidence, expressing the posterior probability in terms of the prior probability, the likelihood, and the marginal likelihood.
P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}
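As a numeric sketch of the update, consider a diagnostic-test setup (the 1% prevalence, 95% sensitivity, and 10% false-positive rate below are illustrative assumptions, not exam values):

```python
# Bayes' Theorem: P(D | +) = P(+ | D) P(D) / P(+)
p_d = 0.01          # prior P(D) -- assumed prevalence
p_pos_d = 0.95      # likelihood P(+ | D) -- assumed sensitivity
p_pos_not_d = 0.10  # assumed false-positive rate P(+ | not D)

# Marginal P(+) via the law of total probability
p_pos = p_pos_d * p_d + p_pos_not_d * (1 - p_d)

posterior = p_pos_d * p_d / p_pos  # about 0.0876
```

Note how a positive test raises the probability of disease from 1% to only about 8.8%, because false positives from the large healthy group dominate the marginal P(+).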
Bernoulli Distribution
Bernoulli distribution is a discrete probability distribution for a random variable that takes the value 1 with probability p and the value 0 with probability 1-p, modeling a single trial with two possible outcomes.
P(X=k) = p^k(1-p)^{1-k}, \quad k \in \{0,1\}
Beta Distribution
Beta distribution is a continuous probability distribution defined on the interval [0, 1] and parameterized by two positive shape parameters alpha and beta, commonly used to model proportions and prior distributions in Bayesian analysis.
f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}, \quad 0 \le x \le 1
Binomial Distribution
Binomial distribution is the discrete probability distribution of the number of successes in a fixed number of independent Bernoulli trials, each with the same probability of success.
P(X=k) = \binom{n}{k} p^k (1-p)^{n-k}
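A minimal sketch of the pmf using only the standard library (the n = 10, p = 0.4 values are illustrative):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: probability of exactly 3 successes in 10 trials with p = 0.4
prob = binom_pmf(3, 10, 0.4)   # about 0.2150

# Sanity check: the pmf sums to 1 over k = 0..n
total = sum(binom_pmf(k, 10, 0.4) for k in range(11))
```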

C

Central Limit Theorem
Central Limit Theorem states that the sampling distribution of the sample mean of independent, identically distributed random variables with finite variance approaches a normal distribution as the sample size increases, regardless of the underlying distribution.
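The theorem can be seen empirically by averaging uniform draws, which are far from normal individually (the sample sizes below are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

# Average n Uniform(0,1) draws; by the CLT the sample mean is
# approximately Normal(1/2, 1/(12n)) for moderately large n.
n, reps = 30, 2000
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(reps)]

grand_mean = statistics.fmean(means)   # close to 0.5
sd_of_means = statistics.stdev(means)  # close to sqrt(1/12)/sqrt(30) ~ 0.0527
```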
Chebyshev's Inequality
Chebyshev's Inequality is a probabilistic bound stating that for any random variable with finite mean and variance, the probability that it deviates from its mean by at least k standard deviations is at most 1/k^2.
P(|X - \mu| \ge k\sigma) \le \frac{1}{k^2}
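A quick check on a fair six-sided die (chosen only as an illustration) shows the bound holding, typically with plenty of slack:

```python
from math import sqrt

# Fair six-sided die: exact mean and standard deviation
outcomes = [1, 2, 3, 4, 5, 6]
mu = sum(outcomes) / 6                           # 3.5
var = sum((x - mu) ** 2 for x in outcomes) / 6   # 35/12
sigma = sqrt(var)

k = 1.2
# Exact tail probability P(|X - mu| >= k * sigma): only 1 and 6 qualify
tail = sum(1 for x in outcomes if abs(x - mu) >= k * sigma) / 6  # 1/3
bound = 1 / k**2                                                 # about 0.694
```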
Conditional Probability
Conditional probability is the probability of an event A occurring given that another event B has already occurred, defined as the ratio of the joint probability of A and B to the probability of B.
P(A|B) = \frac{P(A \cap B)}{P(B)}
Convolution
Convolution refers to the mathematical operation used to determine the probability distribution of the sum of two or more independent random variables, computed by integrating or summing the product of their individual density or mass functions.
Covariance
Covariance is a measure of the joint variability of two random variables, indicating the degree to which they tend to deviate from their respective means in the same direction.
\text{Cov}(X,Y) = E[XY] - E[X]\,E[Y]
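The shortcut formula can be applied directly to a joint pmf; the small grid below is an assumed example (probabilities sum to 1):

```python
# Assumed joint pmf on {0,1} x {0,1}
joint = {(0, 0): 0.2, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.6}

e_x  = sum(x * p for (x, y), p in joint.items())      # E[X]  = 0.7
e_y  = sum(y * p for (x, y), p in joint.items())      # E[Y]  = 0.7
e_xy = sum(x * y * p for (x, y), p in joint.items())  # E[XY] = 0.6

cov = e_xy - e_x * e_y   # 0.6 - 0.49 = 0.11 (positive: X and Y move together)
```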
Cumulative Distribution Function
Cumulative distribution function (CDF) is a function that gives the probability that a random variable takes a value less than or equal to a specified value, defined for all real numbers.
F(x) = P(X \le x)

D

Deductible
Deductible is the amount of a loss that the policyholder must pay before the insurer begins to pay. In actuarial modeling, an ordinary deductible reduces each payment by the deductible amount, while a franchise deductible pays nothing for losses below the threshold but pays the full loss once the threshold is exceeded.
Double Expectation Theorem
Double Expectation Theorem (Law of Total Expectation) states that the expected value of a random variable equals the expected value of its conditional expectation given another random variable.
E[X] = E[E[X|Y]]
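When Y is discrete, the outer expectation is just a probability-weighted average of the conditional means. A two-group sketch (all values assumed for illustration):

```python
# Assumed: P(Y=1) = 0.3 with E[X|Y=1] = 10, P(Y=2) = 0.7 with E[X|Y=2] = 20
p_y = {1: 0.3, 2: 0.7}
e_x_given_y = {1: 10.0, 2: 20.0}

# E[X] = E[E[X|Y]] = sum over y of E[X|Y=y] * P(Y=y)
e_x = sum(e_x_given_y[y] * p_y[y] for y in p_y)   # 3 + 14 = 17
```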

E

Exponential Distribution
Exponential distribution is a continuous probability distribution that models the time between events in a Poisson process, characterized by its constant hazard rate (memoryless property).
f(x) = \lambda e^{-\lambda x}, \quad x \ge 0
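The memoryless property, P(X > s + t | X > s) = P(X > t), can be verified numerically from the survival function (the rate and time values below are arbitrary illustrations):

```python
from math import exp

lam = 0.5  # assumed rate parameter

def survival(x):
    """S(x) = P(X > x) = e^{-lambda x} for the exponential distribution."""
    return exp(-lam * x)

s, t = 2.0, 3.0
lhs = survival(s + t) / survival(s)  # P(X > s + t | X > s)
rhs = survival(t)                    # P(X > t) -- equal by memorylessness
```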

G

Gamma Distribution
Gamma distribution is a two-parameter family of continuous probability distributions that generalizes the exponential distribution, commonly used to model waiting times for multiple events in a Poisson process.
f(x) = \frac{\lambda^\alpha x^{\alpha-1} e^{-\lambda x}}{\Gamma(\alpha)}, \quad x \ge 0
Geometric Distribution
Geometric distribution is a discrete probability distribution that models the number of Bernoulli trials needed to achieve the first success, with each trial having the same probability of success p.
P(X=k) = (1-p)^{k-1}p, \quad k = 1, 2, 3, \ldots

H

Hazard Rate Function
Hazard rate function (failure rate) is the instantaneous rate of failure at time t given survival up to time t, defined as the ratio of the probability density function to the survival function.
h(t) = \frac{f(t)}{S(t)}

I

Independence
Independence refers to the property where the occurrence of one event does not affect the probability of another event. Two events A and B are independent if and only if the probability of their intersection equals the product of their individual probabilities.
P(A \cap B) = P(A)\,P(B)

J

Joint Distribution
Joint distribution is the probability distribution that describes the simultaneous behavior of two or more random variables, specifying the probability of each combination of values the variables can take.

L

Law of Total Probability
Law of Total Probability states that if a set of events partitions the sample space, the probability of any event can be expressed as the sum of conditional probabilities weighted by the partition probabilities.
P(A) = \sum_{i} P(A|B_i)\,P(B_i)
Law of Total Variance
Law of Total Variance decomposes the variance of a random variable into the expected value of the conditional variance plus the variance of the conditional expectation.
\text{Var}(X) = E[\text{Var}(X|Y)] + \text{Var}(E[X|Y])
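For a discrete mixing variable Y the two pieces are simple weighted sums; the two-group conditional means and variances below are assumed for illustration:

```python
# Assumed: Y = 1 with prob 0.4, Y = 2 with prob 0.6
p_y  = {1: 0.4, 2: 0.6}
mean = {1: 0.0, 2: 5.0}   # E[X | Y = y]
var  = {1: 1.0, 2: 4.0}   # Var(X | Y = y)

# E[Var(X|Y)]: expected within-group variance
e_cond_var = sum(var[y] * p_y[y] for y in p_y)                       # 2.8

# Var(E[X|Y]): variance of the group means (between-group variance)
e_cond_mean = sum(mean[y] * p_y[y] for y in p_y)                     # 3.0
var_cond_mean = sum((mean[y] - e_cond_mean) ** 2 * p_y[y] for y in p_y)  # 6.0

total_var = e_cond_var + var_cond_mean   # Var(X) = 8.8
```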
Lognormal Distribution
Lognormal distribution is a continuous probability distribution of a random variable whose natural logarithm is normally distributed, commonly used to model quantities that are the product of many independent positive factors.
f(x) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \quad x > 0
Loss Elimination Ratio
Loss Elimination Ratio is the proportion of expected losses eliminated by imposing a deductible, calculated as the ratio of expected payments below the deductible to total expected losses.
\text{LER}(d) = \frac{E[X \wedge d]}{E[X]}
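For an exponential loss the closed form is LER(d) = 1 - e^{-lambda d}, since E[X ∧ d] = (1 - e^{-lambda d})/lambda. A sketch checking this against a numeric integral of the survival function (rate and deductible values are assumed):

```python
from math import exp

lam, d = 0.01, 100.0   # assumed exponential rate and deductible

# E[X ^ d] = integral of S(x) dx from 0 to d; midpoint Riemann sum as a check
n = 100_000
h = d / n
limited_ev = sum(exp(-lam * (i + 0.5) * h) for i in range(n)) * h

e_x = 1 / lam                    # E[X] for Exponential(lam)
ler = limited_ev / e_x           # numeric LER(d)
ler_closed = 1 - exp(-lam * d)   # closed form: 1 - e^{-1} ~ 0.632
```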

M

Marginal Distribution
Marginal distribution is the probability distribution of a subset of random variables obtained by summing or integrating the joint distribution over the values of the other variables.
Moment Generating Function
Moment generating function (MGF) is a function that encodes all the moments of a probability distribution, defined as the expected value of e raised to the power tX, and uniquely determines the distribution when it exists in a neighborhood of zero.
M_X(t) = E[e^{tX}]
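The n-th moment is the n-th derivative of the MGF at t = 0. A sketch recovering the mean and variance of a Poisson distribution (whose MGF is exp(lambda(e^t - 1))) by numeric differentiation; the rate is an assumed value:

```python
from math import exp

lam = 2.0  # assumed Poisson rate

def mgf(t):
    """MGF of Poisson(lam): M(t) = exp(lam * (e^t - 1))."""
    return exp(lam * (exp(t) - 1))

h = 1e-5
# E[X] = M'(0), estimated by central difference
mean_est = (mgf(h) - mgf(-h)) / (2 * h)                          # ~ lam = 2
# E[X^2] = M''(0), estimated by a second central difference
second_moment_est = (mgf(h) - 2 * mgf(0) + mgf(-h)) / h**2       # ~ lam + lam^2 = 6
var_est = second_moment_est - mean_est**2                        # ~ lam = 2
```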

N

Negative Binomial Distribution
Negative binomial distribution is a discrete probability distribution that models the number of trials needed to achieve a specified number of successes in a sequence of independent Bernoulli trials.
P(X=k) = \binom{k-1}{r-1} p^r (1-p)^{k-r}, \quad k = r, r+1, \ldots
Normal Distribution
Normal distribution is a symmetric, bell-shaped continuous probability distribution parameterized by its mean and variance, central to probability theory due to the Central Limit Theorem.
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)

O

Order Statistics
Order statistics refers to the values obtained by sorting a random sample in ascending order. The k-th order statistic is the k-th smallest value, used extensively in modeling insurance claim amounts and extreme values.

P

Pareto Distribution
Pareto distribution is a heavy-tailed continuous probability distribution used in actuarial science to model large losses, characterized by a power-law tail behavior.
F(x) = 1 - \left(\frac{\theta}{x}\right)^\alpha, \quad x \ge \theta
Poisson Distribution
Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space, assuming events occur independently at a constant average rate.
P(X=k) = \frac{\lambda^k e^{-\lambda}}{k!}
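A stdlib-only sketch of the pmf (the rate lambda = 3 is an illustrative assumption):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return lam**k * exp(-lam) / factorial(k)

lam = 3.0
p2 = poisson_pmf(2, lam)   # P(X = 2) = 4.5 * e^{-3}, about 0.224

# The pmf sums to 1; the tail beyond k = 50 is negligible for lam = 3
total = sum(poisson_pmf(k, lam) for k in range(50))
```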
Policy Limit
Policy limit is the maximum amount an insurer will pay for a covered loss. When a loss exceeds the policy limit, the insured bears the excess, and the insurer payment is capped at the limit minus any deductible.
Probability Density Function
Probability density function (PDF) is a function whose integral over an interval gives the probability that a continuous random variable falls within that interval. The PDF must be nonnegative and integrate to one over the entire real line.
P(a \le X \le b) = \int_a^b f(x)\,dx
Probability Mass Function
Probability mass function (PMF) is a function that gives the probability that a discrete random variable equals each of its possible values, with all probabilities summing to one.
p(x) = P(X = x)

S

Standard Deviation
Standard deviation is the positive square root of the variance of a random variable, measuring the typical distance of values from the mean in the same units as the variable.
\sigma = \sqrt{\text{Var}(X)}
Survival Function
Survival function is the complement of the cumulative distribution function, giving the probability that a random variable exceeds a specified value.
S(x) = P(X > x) = 1 - F(x)

U

Uniform Distribution
Uniform distribution is a probability distribution in which all outcomes in a specified range are equally likely. For the continuous case on [a, b], the density is constant at 1/(b-a).
f(x) = \frac{1}{b-a}, \quad a \le x \le b

V

Variance
Variance is a measure of the dispersion of a random variable around its mean, defined as the expected value of the squared deviation from the mean.
\text{Var}(X) = E[(X - \mu)^2] = E[X^2] - (E[X])^2
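The definitional and shortcut formulas agree on any distribution; a check on a small assumed pmf:

```python
# Assumed discrete pmf: P(X=0) = 0.5, P(X=1) = 0.3, P(X=4) = 0.2
pmf = {0: 0.5, 1: 0.3, 4: 0.2}

mu = sum(x * p for x, p in pmf.items())                    # E[X]   = 1.1
var_def = sum((x - mu) ** 2 * p for x, p in pmf.items())   # E[(X - mu)^2]
e_x2 = sum(x * x * p for x, p in pmf.items())              # E[X^2] = 3.5
var_short = e_x2 - mu ** 2                                 # 3.5 - 1.21 = 2.29
```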

W

Weibull Distribution
Weibull distribution is a continuous probability distribution used to model time-to-failure data, with a flexible hazard rate that can be increasing, decreasing, or constant depending on its shape parameter.
f(x) = \frac{\tau}{\theta}\left(\frac{x}{\theta}\right)^{\tau-1} e^{-(x/\theta)^\tau}, \quad x \ge 0

About FreeFellow

FreeFellow is an AI-native exam prep platform for actuarial (SOA & CAS), CFA, CFP, CPA, CAIA, and securities licensing candidates — built around modern AI as a core capability rather than as a bolt-on. Every lesson ships with AI-narrated audio. Every constructed-response item has a copy-to-AI prompt builder so candidates can paste their answer into their own ChatGPT or Claude for self-graded feedback. Fellow members get instant AI grading on essays against the official rubric (currently CFA Level III, expanding to other essay-bearing sections).

The 70% you need to pass — question bank, written solutions, lessons, formula sheet, mixed practice, readiness tracking — is free forever, with no trial period and no credit card. Become a Fellow ($59/quarter or $149/year per track) to unlock mock exams, flashcards with spaced repetition, performance analytics, AI essay grading, and a personalized study plan.