\chapter{Parameter estimation methods}\label{cap2}
\section{Overview of Parameter Estimation}
Accurate parameter estimation is crucial for effectively utilizing the Pearson Type III distribution in practical applications. This section provides an overview of the primary methods used for estimating the parameters of this distribution, emphasizing their theoretical foundations, advantages, and limitations.
\section{The method of moment estimation (MoM)}
The method of moments is one of the earliest and simplest techniques for parameter estimation. It equates the sample moments (e.g., mean, variance) to the theoretical moments of the distribution and solves for the parameters.
Raw moments are the averages of powers of the observed data points. The $k$-th order raw moment ($m_k$) is calculated as
\begin{equation}
m_k = \frac{1}{n} \sum_{i=1}^{n} X_i^k
\end{equation}
where $X_i$ represents the observed data points and $n$ is the sample size.\\
\vspace{0.1cm}
Central moments measure the variability of the data points around their sample mean. The $k$-th order central moment ($t_k$) is given by
\begin{equation}
t_k = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^k
\end{equation}
where $\bar{X}$ represents the sample mean.\\
\vspace{0.1cm}
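As a minimal numerical illustration of these two definitions (the sample values below are arbitrary placeholders), the following Python sketch computes the first four raw and central moments:
\begin{verbatim}
import numpy as np

# Arbitrary sample used only to illustrate the two formulas above.
x = np.array([2.3, 1.7, 3.1, 2.8, 2.2, 4.0, 1.9, 2.6])

# k-th raw moment: m_k = (1/n) * sum_i x_i^k
raw_moments = {k: np.mean(x ** k) for k in range(1, 5)}

# k-th central moment: t_k = (1/n) * sum_i (x_i - xbar)^k
xbar = np.mean(x)
central_moments = {k: np.mean((x - xbar) ** k) for k in range(1, 5)}

print(raw_moments)
print(central_moments)
\end{verbatim}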
In the method of moments, we equate the sample moments (raw or central) to their corresponding population moments and solve the resulting equations for the parameters of interest. This yields parameter estimates based solely on the sample data.
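For the Pearson Type III distribution the first three moments suffice. The sketch below is a hedged illustration that assumes the shifted-gamma parameterization with location $a$, scale $b>0$ and shape $\alpha$ (so that the mean is $a+\alpha b$, the variance is $\alpha b^2$ and the skewness is $2/\sqrt{\alpha}$) and positively skewed data; the function name \texttt{pearson3\_mom} is ours, not a library routine.
\begin{verbatim}
import numpy as np

def pearson3_mom(x):
    """Method-of-moments estimates (a, b, alpha) for a shifted-gamma
    (Pearson Type III) model, assuming positive sample skewness."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    var = x.var()                     # second central moment t_2
    t3 = np.mean((x - mean) ** 3)     # third central moment t_3
    skew = t3 / var ** 1.5            # sample skewness
    alpha = 4.0 / skew ** 2           # skewness = 2 / sqrt(alpha)
    b = np.sqrt(var / alpha)          # variance = alpha * b^2
    a = mean - alpha * b              # mean = a + alpha * b
    return a, b, alpha
\end{verbatim}
For example, \texttt{pearson3\_mom(np.random.gamma(3.0, 2.0, size=2000))} should return values close to $(0, 2, 3)$.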
\section{The maximum likelihood estimation method (ML)}
Maximum likelihood estimation (MLE) is widely used for its efficiency and robustness: it estimates the parameters of a statistical model by choosing the values that maximize the likelihood function, i.e.\ the probability of observing the given data. Suppose we have a random sample $X_1, X_2, \ldots, X_n$ from a probability distribution with probability density function (pdf) or probability mass function (pmf) $f(x; \theta)$, where $\theta$ represents the parameter(s) to be estimated.
The likelihood function $ L(\theta) $ is defined as the joint probability density or mass function of the observed data, considered as a function of the parameter(s) $ \theta $:
\begin{equation}
L(\theta) = \prod_{i=1}^{n} f(x_i; \theta)
\end{equation}
The goal of MLE is to find the parameter value(s) $\hat{\theta}$ that maximize the likelihood function or, equivalently, the log-likelihood function $\ell(\theta)$:
\begin{equation}
\ell(\theta) = \log L(\theta) = \sum_{i=1}^{n} \log f(x_i; \theta)
\end{equation}
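Working on the log scale is also important numerically: for even moderate $n$ the product of densities underflows in floating-point arithmetic, while the sum of log-densities remains representable. A small sketch (using a standard normal model purely for illustration):
\begin{verbatim}
import numpy as np
from scipy import stats

x = np.random.normal(size=1000)       # illustrative sample

# The product of 1000 densities underflows to 0.0 in double precision,
likelihood = np.prod(stats.norm.pdf(x))

# while the corresponding log-likelihood is perfectly representable.
log_likelihood = np.sum(stats.norm.logpdf(x))

print(likelihood, log_likelihood)
\end{verbatim}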
To find $\hat{\theta}$, we differentiate the log-likelihood function with respect to $\theta$, set the derivative(s) equal to zero, and solve the resulting equation(s):
\begin{equation}
\frac{\partial \ell(\theta)}{\partial \theta} = 0
\end{equation}
If the log-likelihood function is concave, the critical point(s) obtained by solving the above equation correspond to the maximum likelihood estimator(s) $\hat{\theta}$. In practice it is often more convenient to minimize the negative log-likelihood $-\ell(\theta)$, which is equivalent to maximizing $\ell(\theta)$.
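As a simple closed-form illustration (for the exponential distribution rather than the Pearson Type III), take $f(x;\lambda)=\lambda e^{-\lambda x}$ for $x>0$. Then
\begin{equation}
\ell(\lambda) = n\log\lambda - \lambda\sum_{i=1}^{n} x_i, \qquad
\frac{\partial \ell(\lambda)}{\partial \lambda} = \frac{n}{\lambda} - \sum_{i=1}^{n} x_i = 0
\;\Longrightarrow\; \hat{\lambda} = \frac{n}{\sum_{i=1}^{n} x_i} = \frac{1}{\bar{X}},
\end{equation}
and the second derivative $-n/\lambda^2 < 0$ confirms that this critical point is indeed a maximum.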
The properties of the MLE, including consistency, asymptotic normality, and asymptotic efficiency, make it one of the most widely used methods for parameter estimation in statistics.\\
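For the Pearson Type III distribution the score equations have no convenient closed form, so $-\ell(\theta)$ is typically minimized numerically. The following is a minimal sketch, not a definitive implementation: it assumes the same shifted-gamma parameterization as in the method-of-moments sketch above, reuses \texttt{pearson3\_mom} for starting values, and uses a log transform to keep $b$ and $\alpha$ positive.
\begin{verbatim}
import numpy as np
from scipy import stats, optimize

def pearson3_mle(x):
    """Numerical MLE of (a, b, alpha) for a shifted-gamma (Pearson
    Type III) model via minimizing the negative log-likelihood."""
    x = np.asarray(x, dtype=float)

    def neg_loglik(theta):
        a, log_b, log_alpha = theta
        b, alpha = np.exp(log_b), np.exp(log_alpha)
        if np.any(x <= a):             # density is zero at or below a
            return np.inf
        return -np.sum(stats.gamma.logpdf(x, alpha, loc=a, scale=b))

    a0, b0, alpha0 = pearson3_mom(x)   # moment estimates as start
    res = optimize.minimize(neg_loglik,
                            [a0, np.log(b0), np.log(alpha0)],
                            method="Nelder-Mead")
    a, log_b, log_alpha = res.x
    return a, np.exp(log_b), np.exp(log_alpha)
\end{verbatim}
Note that for the three-parameter model the likelihood can be badly behaved when the shape parameter is small, so reasonable starting values (here the moment estimates) matter.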
\section{The Fisher Minimum $\chi^2$ Estimation}
The Fisher minimum chi-square method estimates parameters by minimizing the chi-square statistic computed from observed and expected frequencies, balancing goodness of fit against parameter stability. The RP-based minimum chi-square method is an extension of the equiprobable minimum chi-square method in which the residuals are evaluated at representative points; it is particularly advantageous for small sample sizes and highly skewed data distributions.
The steps of Pearson-Fisher minimum chi-square estimation are:
\noindent1. Partition the range of the data into $k$ classes and record the observed frequencies.\\
2. Express the expected frequency of each class as a function of the unknown parameters.\\
3. Form the chi-square statistic and minimize it to obtain the parameter estimates.\\
4. Assess the model fit using the resulting minimum chi-square statistic.\\
5. Compare the minimum chi-square statistic with the appropriate critical values.\\
6. Draw conclusions about the goodness of fit of the model.\\
We first present the basic formulation of equiprobable Fisher minimum $\chi^2$ estimation.
Suppose we have a set of observed data $x_i$, $i=1,2,...,n$, and the parameter to be estimated is $\theta$. Let $f(x;\theta)$ be the probability density function (PDF) of the data, and $F(x;\theta)$ be the cumulative distribution function (CDF). For a frequency histogram divided into $k$ intervals, let $O_i$ be the observed frequency in the $i$th interval, and $E_i$ be the expected frequency in the $i$th interval. Then the expression for the chi-square statistic is:
\begin{equation}
\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}
\end{equation}
where the expected frequency $E_i$ can be obtained by integration:
\begin{equation}
E_i = n [F(b_i;\theta) - F(a_i;\theta)]
\end{equation}
where $[a_i, b_i]$ are the boundaries of the $i$th interval.
The core idea of Equiprobable Fisher Minimum $\chi^2$ Estimation is to estimate the parameter $\theta$ by minimizing this chi-square statistic. Optimization algorithms (such as gradient descent, Newton's method, etc.) are usually used to find the parameter value $\theta$ that minimizes $\chi^2$.
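A hedged sketch of this procedure is given below. It again assumes the shifted-gamma parameterization and reuses \texttt{pearson3\_mom} for starting values; the class boundaries are fixed at empirical quantiles of the data (so the observed counts are roughly equal), the expected frequencies are computed from the model CDF exactly as in the two formulas above, and a derivative-free optimizer is used for simplicity.
\begin{verbatim}
import numpy as np
from scipy import stats, optimize

def pearson3_min_chi2(x, k=10):
    """Minimum chi-square estimates (a, b, alpha) for a shifted-gamma
    (Pearson Type III) model with k classes fixed at data quantiles."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    # Class boundaries: empirical quantiles of the data, so that the
    # observed frequencies O_i are (roughly) equal across classes.
    edges = np.quantile(x, np.linspace(0.0, 1.0, k + 1))
    O, _ = np.histogram(x, bins=edges)        # observed frequencies O_i

    def chi2(theta):
        a, log_b, log_alpha = theta
        b, alpha = np.exp(log_b), np.exp(log_alpha)
        cdf = stats.gamma.cdf(edges, alpha, loc=a, scale=b)
        E = n * np.diff(cdf)                  # E_i = n [F(b_i) - F(a_i)]
        if np.any(E <= 0):                    # guard against empty classes
            return np.inf
        return np.sum((O - E) ** 2 / E)

    a0, b0, alpha0 = pearson3_mom(x)          # moment estimates as start
    res = optimize.minimize(chi2,
                            [a0, np.log(b0), np.log(alpha0)],
                            method="Nelder-Mead")
    a, log_b, log_alpha = res.x
    return a, np.exp(log_b), np.exp(log_alpha)
\end{verbatim}
When the parameters are estimated in this way, the minimized statistic is asymptotically $\chi^2$ distributed with $k-p-1$ degrees of freedom, where $p$ is the number of estimated parameters; this is the reference distribution used in steps 5 and 6 above.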