Science Journal of Applied Mathematics and Statistics
Volume 4, Issue 5, October 2016, Pages: 229-235

Minimax Estimation of the Parameter of the Эрланга Distribution Under Different Loss Functions

Lanping Li

Department of Basic Subjects, Hunan University of Finance and Economics, Changsha, China

To cite this article:

Lanping Li. Minimax Estimation of the Parameter of the Эрланга Distribution Under Different Loss Functions. Science Journal of Applied Mathematics and Statistics. Vol. 4, No. 5, 2016, pp. 229-235. doi: 10.11648/j.sjams.20160405.16

Received: August 31, 2016; Accepted: September 12, 2016; Published: October 8, 2016


Abstract: The aim of this article is to study the estimation of the parameter of the Эрланга distribution based on complete samples. The Bayes estimators of the parameter are obtained under three different loss functions, namely the weighted squared error loss, squared log error loss and entropy loss functions, using the conjugate inverse Gamma prior distribution. The minimax estimators of the parameter are then derived by using Lehmann's theorem. Finally, the performances of these estimators are compared in terms of their risks under the squared error loss function.

Keywords: Bayes Estimator, Minimax Estimator, Squared Log Error Loss Function, Entropy Loss Function


1. Introduction

In reliability and supportability data analysis, the most commonly used distributions are the exponential, normal and Weibull distributions. In some practical applications, however, such as modeling repair times and guarantee delay times, these distributions do not fit the data well. The Эрланга distribution was therefore proposed as a suitable alternative [1].

Suppose that the repair time obeys the Эрланга distribution with the following probability density function (pdf) and distribution function, respectively:

(1)

(2)

Here, the parameter is unknown. It is easy to see that the mean of the distribution equals this parameter, which is therefore often referred to as the mean time to repair the equipment.
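
Since the displayed forms of (1) and (2) are not reproduced here, the short Python sketch below fixes one concrete possibility for the later numerical illustrations: the two-stage Erlang form f(t; theta) = 4t/theta^2 * exp(-2t/theta) for t > 0, whose mean equals theta. This specific form is an assumption made only for these illustrative sketches.

    import numpy as np

    def erlang_pdf(t, theta):
        # Assumed two-stage Erlang density: f(t; theta) = 4*t/theta**2 * exp(-2*t/theta), t > 0
        return 4.0 * t / theta**2 * np.exp(-2.0 * t / theta)

    def erlang_cdf(t, theta):
        # Corresponding distribution function: F(t; theta) = 1 - (1 + 2*t/theta) * exp(-2*t/theta)
        return 1.0 - (1.0 + 2.0 * t / theta) * np.exp(-2.0 * t / theta)

    # Sanity check: under this form the mean equals theta (Gamma with shape 2 and scale theta/2).
    theta = 3.0
    sample = np.random.default_rng(0).gamma(shape=2.0, scale=theta / 2.0, size=100_000)
    print(sample.mean())  # close to 3.0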

Lv et al. [1] studied characteristic parameters of the Эрланга distribution, such as the mean, variance and median, and also derived the maximum likelihood estimator. Pan et al. [2] studied interval estimation and hypothesis testing for the Эрланга distribution based on small samples, and also discussed the difference between the exponential and Эрланга distributions. Long [3] studied the estimation of the parameter of the Эрланга distribution based on missing data. Yu et al. [4] used the Эрланга distribution to fit the degree of battlefield injury, established a simulation model, and then proposed a new method for the production and distribution of battlefield injuries at the campaign level. Long [5] studied Bayes estimation for the Эрланга distribution under type-II censored samples on the basis of conjugate, Jeffreys and non-informative prior distributions.

Minimax estimation was introduced by Abraham Wald in 1950, and the minimax approach has since received considerable attention and been applied in many areas [6-9]; it is an important topic in statistical inference. Under quadratic and MLINEX loss functions, references [10-13] studied the minimax estimation of the Weibull, Pareto, Rayleigh and Minimax distributions, respectively. Rasheed and Al-Shareefi [14] discussed minimax estimation of the scale parameter of the Laplace distribution under the squared-log error loss function. Li [15] studied minimax estimation of the parameter of the exponential distribution based on record values. Li [16] obtained minimax estimators of the parameter of the Maxwell distribution under different loss functions.

The purpose of this paper is to study maximum likelihood estimation (MLE) and Bayes estimation of the parameter of the Эрланга distribution. Further, by using Lehmann's theorem, we derive minimax estimators under three loss functions, namely the weighted squared error loss, squared log error loss and entropy loss functions.

2. Maximum Likelihood Estimation

Let a sample be drawn from the Эрланга distribution with pdf (1), with corresponding observations. For a given sample observation, the likelihood function of the parameter is obtained as follows:

(3)

That is

(4)

Here  is the observation of .

Then the log-likelihood function is

By solving the log-likelihood equation

,

the maximum likelihood estimator of  can be easily derived as follows:

(5)

By Eq. (1), we can easily show that the statistic is a random variable following a Gamma distribution, with the following probability density function:

(6)
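
As an illustration, the following sketch checks numerically that maximizing the log-likelihood reproduces the closed-form estimator (5). It again assumes the two-stage Erlang density introduced above, under which the log-likelihood is n log 4 + sum(log t_i) - 2n log theta - 2 sum(t_i)/theta and the MLE is the sample mean; both closed forms are tied to that assumption.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    theta_true, n = 2.5, 30
    t = rng.gamma(shape=2.0, scale=theta_true / 2.0, size=n)  # assumed Erlang(2) sample

    def neg_log_lik(theta):
        # log-likelihood: n*log(4) + sum(log t) - 2*n*log(theta) - 2*sum(t)/theta; return its negative
        return -(n * np.log(4.0) + np.log(t).sum() - 2 * n * np.log(theta) - 2 * t.sum() / theta)

    numeric_mle = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded").x
    print(numeric_mle, t.mean())  # the two agree: the MLE equals the sample mean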

3. Bayesian Estimation

In Bayesian statistical analysis, the loss function plays an important role in estimation and testing problems. Many loss functions have been proposed in Bayesian analysis; the squared error loss function is the most common one, and it is symmetric. In many practical problems, especially the estimation of reliability and failure rates, a symmetric loss may not be suitable, because overestimation is considered to cause a greater loss than underestimation [17]. Several asymmetric loss functions have therefore been developed. For example, Zellner [18] proposed the LINEX loss for Bayes estimation, Brown [19] put forward the squared log error loss function for estimating an unknown parameter, and Dey et al. [20] proposed the entropy loss function for Bayesian analysis.

In this paper, we discuss the Bayes estimation of the unknown parameter of the Эрланга distribution under the following loss functions:

(i) Weighted squared error loss function

(7)

Under the weighted squared error loss function (7), the Bayes estimator of  is

(8)

(ii) Squared log error loss function

The squared log error loss function is an asymmetric loss function, first proposed by Brown [19] for estimating a scale parameter. This loss function can also be found in Kiapour and Nematollahi [21] in the following form:

(9)

Obviously, the loss tends to infinity as the ratio of the estimate to the true parameter tends to zero or to infinity. The loss function (9) is not always convex; it is convex when this ratio does not exceed e and concave otherwise. However, the posterior risk under this loss function has a minimum, and we call its minimizer the Bayes estimator under the squared log error loss function. That is

(10)

(iii) Entropy loss function

In many practical situations, it appears more realistic to express the loss in terms of the ratio of the estimate to the true parameter. For this case, Dey et al. [20] pointed out a useful asymmetric loss function, the entropy loss function:

(11)

Its minimum occurs when the estimate equals the true parameter. This loss function has also been used by Singh et al. [22] and Nematollahi and Motamed-Shariati [23]. The Bayes estimator under the entropy loss (11) is obtained by

(12)
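
For reference, the three loss functions can be written as short functions. The weighted squared error form with weight 1/theta^2, shown first, is an assumption, since expression (7) is not reproduced above; the other two follow the forms given by Brown [19] and Dey et al. [20].

    import numpy as np

    def weighted_se_loss(est, theta):
        # Assumed weighted squared error loss: (est - theta)**2 / theta**2
        return (est - theta) ** 2 / theta ** 2

    def squared_log_loss(est, theta):
        # Squared log error loss: (log est - log theta)**2
        return (np.log(est) - np.log(theta)) ** 2

    def entropy_loss(est, theta):
        # Entropy loss: est/theta - log(est/theta) - 1
        return est / theta - np.log(est / theta) - 1.0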

In this section, we estimate the unknown parameter on the basis of the three loss functions mentioned above. We further assume that some prior knowledge about the parameter is available to the investigator from past experience with the Эрланга model. This prior knowledge can often be summarized by a so-called prior density on the parameter space. In the following discussion, we assume Jeffrey's non-informative quasi-prior density, defined as

(13)

Hence,  leads to a diffuse prior and  to a non-informative prior.

Let a sample be drawn from the Эрланга distribution with pdf (1), with corresponding observations. Combining the likelihood function (3) with the prior density (13), the posterior probability density of the parameter can be derived using Bayes' theorem as follows:

(14)
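
As a numerical sanity check of (13)-(14): if the quasi-prior is proportional to 1/theta^d and the likelihood has the form assumed in the sketches of Section 2, the posterior in (14) is an inverse Gamma density with shape 2n + d - 1 and scale 2*sum(t_i). The sketch below, based on that assumption, verifies that this density integrates to one and reports its mean.

    import numpy as np
    from scipy.stats import invgamma
    from scipy.integrate import quad

    n, d, T = 15, 1.0, 40.0                # sample size, prior exponent, observed total sum(t_i)
    a, b = 2 * n + d - 1, 2 * T            # assumed posterior shape and scale

    posterior_pdf = lambda theta: invgamma.pdf(theta, a, scale=b)
    print(quad(posterior_pdf, 0.0, np.inf)[0])       # approximately 1.0
    print(invgamma.mean(a, scale=b), b / (a - 1.0))  # posterior mean, written two ways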

Theorem 1. Let a sample be drawn from the Эрланга distribution with probability density function (1), and let the corresponding observations and the statistic be defined as above.

Then

(i) Under the weighted squared error loss function (7), the Bayes estimator is

(15)

(ii) The Bayes estimator under the squared log error loss function (9) is

(16)

(iii) The Bayes estimator under the entropy loss function (11) is

(17)

Proof. (i) From Equation (14), it follows immediately that the posterior distribution of the parameter is a Gamma distribution.

That is

,

Then

(18)

Thus, the Bayes estimator under the weighted squared error loss function (7) is derived as

For case (ii): by using (14),

where ψ(x) = Γ'(x)/Γ(x) is the digamma function.

Then the Bayes estimator under the squared log error loss function (9) comes out to be

(iii) By Eqs. (12) and (17), the Bayes estimator under the entropy loss function (11) is given by
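
A compact numerical version of the three estimators of Theorem 1, under the same assumptions as in the earlier sketches (inverse Gamma posterior with shape 2n + d - 1 and scale 2*sum(t_i), and weight 1/theta^2 in the weighted squared error loss); the closed forms used here follow from those assumptions rather than being read off (15)-(17):

    import numpy as np
    from scipy.special import digamma

    def bayes_estimators(t, d=1.0):
        # t: observed sample; d: exponent of the quasi-prior proportional to theta**(-d).
        n, T = len(t), float(np.sum(t))
        a, b = 2 * n + d - 1, 2 * T          # assumed inverse Gamma posterior parameters
        est_weighted = b / (a + 1)           # weighted squared error loss (assumed weight 1/theta**2)
        est_sqlog = b * np.exp(-digamma(a))  # squared log error loss: exp(E[log theta | data])
        est_entropy = b / a                  # entropy loss: 1 / E[1/theta | data]
        return est_weighted, est_sqlog, est_entropy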

4. Minimax Estimation for the Эрланга Distribution

This section derives the minimax estimators for the Эрланга distribution by using Lehmann's theorem, whose application depends on the specific prior distribution and loss function of a Bayesian method. Lehmann's theorem is stated as follows:

Lemma 1. Consider a family of distribution functions and a class of estimators of the parameter. Suppose that one of these estimators is a Bayes estimator derived on the basis of a prior distribution on the parameter space. If the risk function of this Bayes estimator is constant on the parameter space, then it is a minimax estimator of the parameter.

Theorem 2. Let a sample be drawn from the Эрланга distribution with pdf (1), with corresponding observations, and let the statistic be defined as above. Then

(i) Under the weighted squared error loss function (7), the Bayes estimator (15) is the minimax estimator of the parameter.

(ii) Under the squared log error loss function, the Bayes estimator (16) is the minimax estimator of the parameter.

(iii) Under the entropy loss function, the Bayes estimator (17) is the minimax estimator of the parameter.

Proof. To prove these results using Lehmann's theorem, we need to calculate the risk functions of the Bayes estimators and show that they are constant.

For case (i), the risk function of the Bayes estimator under the weighted squared error loss function (7) can be derived as follows:

From equation (6), we obtain the required moments of the statistic, and then we have

Consequently,

Then, for this Bayes estimator, the risk function is constant in the parameter. So, according to Lemma 1, it is the minimax estimator of the parameter under the weighted squared error loss function (7).

For case (ii), the risk function of the Bayes estimator is

Using the above, we can easily get the result

Then

Let , then we can prove that .

The derivative of  is

Then

From the above results, we obtain the fact that

Further, we have

 

Then this risk is also constant with respect to the parameter. So, according to Lemma 1, the Bayes estimator (16) is a minimax estimator of the parameter under the squared log error loss function.

For case (iii), the risk function of the Bayes estimator can be obtained as follows:

This is also constant with respect to the parameter. So, according to Lemma 1, the Bayes estimator (17) is a minimax estimator of the parameter under the entropy loss function.
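
The constancy of the risk functions can also be checked by simulation. The sketch below relies on the assumed two-stage Erlang density and writes the entropy-loss Bayes estimator as 2*sum(t_i)/(2n + d - 1), which is an assumption consistent with the earlier sketches; the printed risk estimates agree up to Monte Carlo error, illustrating the constant-risk property required by Lemma 1.

    import numpy as np

    rng = np.random.default_rng(2)
    n, d, reps = 20, 1.0, 200_000
    for theta in (0.5, 1.0, 5.0, 20.0):
        t = rng.gamma(shape=2.0, scale=theta / 2.0, size=(reps, n))  # assumed Erlang(2) samples
        est = 2.0 * t.sum(axis=1) / (2 * n + d - 1)                  # assumed entropy-loss estimator
        loss = est / theta - np.log(est / theta) - 1.0
        print(theta, loss.mean())  # approximately the same value for every theta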

5. Performances of Bayes Estimators

To illustrate the performance of these Bayes estimators, the squared error loss function is used to compare them. We write the risk functions of the estimators relative to the squared error loss; they can be easily derived as follows:

,

The ratios of these risk functions are plotted in Figs. 1-4 for different sample sizes (n = 10, 20, 30, 50).

Figure 1. Performance of estimators with n=10.

Figure 2. Performance of estimators with n=20.

Figure 3. Performance of estimators with n=30.

Figure 4. Performance of estimators with n=50.

From Figures 1 to 4, we can see that none of these estimators is uniformly better than the others. In practice, we therefore recommend selecting the estimator according to the value of the prior parameter d when the quasi-prior is assumed as the prior distribution.
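
A simulation along the lines of Figs. 1-4 can be sketched as follows. It uses the same assumptions as the earlier sketches (two-stage Erlang data, inverse Gamma posterior with prior exponent d) and reports the squared error risks of the three estimators as ratios to the risk of the maximum likelihood estimator; taking the MLE risk as the reference is an assumption made for this illustration only.

    import numpy as np
    from scipy.special import digamma

    rng = np.random.default_rng(3)
    theta, n, d, reps = 1.0, 10, 1.0, 100_000
    t = rng.gamma(shape=2.0, scale=theta / 2.0, size=(reps, n))  # assumed Erlang(2) samples
    T = t.sum(axis=1)
    a, b = 2 * n + d - 1, 2 * T
    estimators = {
        "MLE": T / n,
        "weighted SE": b / (a + 1),
        "squared log": b * np.exp(-digamma(a)),
        "entropy": b / a,
    }
    risk_mle = np.mean((estimators["MLE"] - theta) ** 2)
    for name, est in estimators.items():
        print(name, np.mean((est - theta) ** 2) / risk_mle)  # squared error risk ratio to the MLE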

6. Conclusion

This paper derived Bayes estimators of the parameter of the Эрланга distribution under the weighted squared error, squared log error and entropy loss functions. Monte Carlo simulations show that the risk functions of these estimators, evaluated under the squared error loss function, all decrease as the sample size n increases. The risk functions also become closer to each other when the sample size n is large, e.g., n > 50.

Acknowledgement

This study is partially supported by the Natural Science Foundation of Hunan Province (No. 2015JJ3030) and the Foundation of the Hunan Educational Committee (No. 15C0228). The author also gratefully acknowledges the helpful comments and suggestions of the reviewers, which have improved the presentation.


References

  1. Lv H. Q., Gao L. H. and Chen C. L., 2002. Эрланга distribution and its application in supportability data analysis. Journal of Academy of Armored Force Engineering, 16(3): 48-52.
  2. Pan G. T., Wang B. H., Chen C. L., Huang Y. B. and Dang M. T., 2009. The research of interval estimation and hypothetical test of small sample of Эрланга distribution. Application of Statistics and Management, 28(3): 468-472.
  3. Long B., 2013. The estimations of parameter from Эрланга distribution under missing data samples. Journal of Jiangxi Normal University (Natural Science), 37(1): 16-19.
  4. Yu C. M., Chi Y. H., Zhao Z. W., and Song J. F., 2008. Maintenance-decision-oriented modeling and emulating of battlefield injury in campaign macrocosm. Journal of System Simulation, 20(20): 5669-5671.
  5. Long B., 2015. Bayesian estimation of parameter on Эрланга distribution under different prior distribution. Mathematics in Practice & Theory, (4): 186-192.
  6. Jiao J., Venkat K., Han Y. and Weissman T., 2015. Minimax estimation of functionals of discrete distributions. IEEE Transactions on Information Theory, 61(5): 2835-2885.
  7. Gao C., Ma Z., Ren Z., Zhou H. H, 2014. Minimax estimation in sparse canonical correlation analysis. Annals of Statistics, 43(5):905-912.
  8. Kogan M. M., 2014. LMI-based minimax estimation and filtering under unknown covariances. International Journal of Control, 87(6):1216-1226.
  9. Tchrakian T. T. and Zhuk S., 2015. A macroscopic traffic data-assimilation framework based on the Fourier–Galerkin method and minimax estimation. IEEE Transactions on Intelligent Transportation Systems, 16(1): 452-464.
  10. Roy, M. K., Podder C. K. and Bhuiyan K. J., 2002. Minimax estimation of the scale parameter of the Weibull distribution for quadratic and MLINEX loss functions, Jahangirnagar University Journal of Science, 25: 277-285.
  11. Podder, C. K., Roy M. K., Bhuiyan K. J. and Karim A., 2004. Minimax estimation of the parameter of the Pareto distribution for quadratic and MLINEX loss functions, Pak. J. Statist., 20(1): 137-149.
  12. Dey, S., 2008. Minimax estimation of the parameter of the Rayleigh distribution under quadratic loss function, Data Science Journal, 7(1): 23-30
  13. Shadrokh, A. and Pazira H., 2010. Minimax estimation on the Minimax distribution, International Journal of Statistics and Systems, 5(2): 99-118.
  14. Rasheed H. A., Al-Shareefi E. F, 2015. Minimax estimation of the scale parameter of Laplace distribution under squared-log error loss function. Mathematical Theory & Modeling, 5(1):183-193.
  15. Li L. P., 2014. Minimax estimation of the parameter of exponential distribution based on record values. International Journal of Information Technology & Computer Science, 6(6):47-53.
  16. Li L. P., 2016, Minimax estimation of the parameter of Maxwell distribution under different loss functions, American Journal of Theoretical and Applied Statistics, 5(4): 202-207.
  17. Li X., Shi Y., Wei J. and Chai J., 2007. Empirical Bayes estimators of reliability performances using LINEX loss under progressively Type-II censored samples. Mathematics & Computers in Simulation, 73(5): 320-326.
  18. Zellner, A., 1986. Bayesian estimation and prediction using asymmetric loss function. Journal of American statistical Association, 81(394): 446-451.
  19. Brown L., 1968. Inadmissibility of the usual estimators of scale parameters in problems with unknown location and scale parameters. Annals of Mathematical Statistics, 39(1):29-48.
  20. Dey, D. K., Ghosh M. and Srinivasan C., 1987. Simultaneous estimation of parameters under entropy loss, J. Statist. Plan. and Infer., 15(3):347-363.
  21. Kiapour A. and Nematollahi N., 2011. Robust Bayesian prediction and estimation under a squared log error loss function. Statistics & Probability Letters, 81(11): 1717-1724.
  22. Singh S. K., Singh U. and Kumar D., 2011. Bayesian estimation of the exponentiated Gamma parameter and reliability function under asymmetric loss function. REVSTAT, 9(3): 247-260.
  23. Nematollahi N., Motamed-Shariati F., 2009. Estimation of the scale parameter of the selected Gamma population under the entropy loss function. Communication in Statistics- Theory and Methods, 38(7):208-221.
