Similar Documents
20 similar documents found.
1.
Estimation of the mean of an exponential distribution based on record data has been treated by Samaniego and Whitaker [F.J. Samaniego and L.R. Whitaker, On estimating population characteristics from record breaking observations I. Parametric results, Naval Res. Logist. Quart. 33 (1986), pp. 531–543] and Doostparast [M. Doostparast, A note on estimation based on record data, Metrika 69 (2009), pp. 69–80]. When a random sample Y1, …, Yn is examined sequentially and successive minimum values are recorded, Samaniego and Whitaker [F.J. Samaniego and L.R. Whitaker, On estimating population characteristics from record breaking observations I. Parametric results, Naval Res. Logist. Quart. 33 (1986), pp. 531–543] obtained a maximum likelihood estimator of the mean of the population and showed its convergence in probability. We establish here its convergence in mean square error, which is stronger than convergence in probability. Next, we discuss the optimal sample size for estimating the mean based on a criterion involving a cost function as well as the Fisher information based on records arising from a random sample. Finally, a comparison between complete data and record data is carried out and some special cases are discussed in detail.
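As a small illustration of the record-extraction step described above, the following Python sketch (an illustrative toy, not the Samaniego–Whitaker estimator itself) scans a simulated exponential sample sequentially and collects the lower record values together with the times at which they occur; these are the raw ingredients of any record-based likelihood.

```python
import numpy as np

def lower_records(sample):
    """Return the lower record values and their (1-based) occurrence times
    when the sample is examined sequentially."""
    records, times = [], []
    current_min = np.inf
    for i, y in enumerate(sample, start=1):
        if y < current_min:          # a new lower record is observed
            current_min = y
            records.append(y)
            times.append(i)
    return np.array(records), np.array(times)

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=50)   # hypothetical Exp(mean = 2) sample
r, t = lower_records(y)
print("record values:", np.round(r, 3))
print("record times:", t)
```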

2.
The record scheme is a method for reducing the total time on test of an experiment. In this scheme, items are sequentially observed and only values smaller than all previous ones are recorded. In situations where the experiments are time-consuming and items may be lost during the experiment, the record scheme dominates the usual random sample scheme [M. Doostparast and N. Balakrishnan, Optimal sample size for record data and associated cost analysis for exponential distribution, J. Statist. Comput. Simul. 80(12) (2010), pp. 1389–1401]. Estimation of the mean of an exponential distribution based on record data has been treated by Samaniego and Whitaker [On estimating population characteristics from record breaking observations I. Parametric results, Naval Res. Logist. Q. 33 (1986), pp. 531–543] and Doostparast [A note on estimation based on record data, Metrika 69 (2009), pp. 69–80]. The lognormal distribution is used in a wide range of applications when the multiplicative scale is appropriate and the log-transformation removes the skew and brings about symmetry of the data distribution [N.T. Longford, Inference with the lognormal distribution, J. Statist. Plann. Inference 139 (2009), pp. 2329–2340]. In this paper, point estimates as well as confidence intervals for the unknown parameters are obtained. The problem is also addressed from the Bayesian point of view. To assess the performance of the estimators obtained, a simulation study is conducted. For illustrative purposes, a real data set, due to Lawless [Statistical Models and Methods for Lifetime Data, 2nd ed., John Wiley & Sons, New York, 2003], is analysed using the procedures obtained.
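To illustrate the log-transformation point made in the abstract, here is a minimal complete-sample sketch (not the record-based or Bayesian procedures of the paper): after taking logs of hypothetical lognormal data, a standard t-interval gives a confidence interval for the log-scale mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=1.0, sigma=0.5, size=30)   # hypothetical lognormal data

logx = np.log(x)                  # the log-transformation removes the skew
n, m, s = logx.size, logx.mean(), logx.std(ddof=1)
tq = stats.t.ppf(0.975, df=n - 1)
ci = (m - tq * s / np.sqrt(n), m + tq * s / np.sqrt(n))
print("95% CI for the log-scale mean:", np.round(ci, 3))
```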

3.
Doostparast and Balakrishnan (Pareto record-based analysis, Statistics, under review) recently developed optimal confidence intervals as well as uniformly most powerful tests for one- and two-sided hypotheses concerning the shape and scale parameters of the two-parameter Pareto distribution based on record data. In this paper, on the basis of record values and inter-record times from the two-parameter Pareto distribution, maximum-likelihood and Bayes estimators as well as credible regions are developed for the two parameters of the Pareto distribution. For illustrative purposes, a data set on annual wages of a sample of production-line workers in a large industrial firm is analysed using the proposed procedures.
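For orientation, the following sketch fits a two-parameter Pareto by maximum likelihood to a hypothetical complete sample using scipy; the paper's estimators, which are based on record values and inter-record times rather than a complete sample, have a different form.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# hypothetical complete sample from a two-parameter Pareto (shape = 3, scale = 1.5)
x = stats.pareto.rvs(b=3.0, scale=1.5, size=200, random_state=rng)

# ordinary-sample MLE; fixing loc = 0 keeps the classical two-parameter form
shape_hat, loc, scale_hat = stats.pareto.fit(x, floc=0)
print("shape MLE:", round(shape_hat, 3), " scale MLE:", round(scale_hat, 3))
```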

4.
There are a number of situations in which the experimental data observed are record statistics. In this paper, optimal confidence intervals as well as uniformly most powerful (MP) tests for one-sided alternatives are developed. Since a uniformly MP test for a two-sided alternative does not exist, generalized likelihood ratio and uniformly unbiased and invariant tests are derived for the two parameters of the exponential distribution based on record data. For illustrative purposes, a data set on the times between consecutive telephone calls to a company's switchboard is analysed using the proposed procedures. Finally, some open problems in this direction are pointed out.
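The flavour of a generalized likelihood ratio test for the exponential model can be conveyed with a complete-sample analogue (an illustrative assumption; the paper's tests are built on record data): for i.i.d. exponential observations, −2 log Λ for H0: mean = θ0 has the closed form 2n[x̄/θ0 − 1 − log(x̄/θ0)].

```python
import numpy as np
from scipy import stats

def glr_exponential_mean(x, theta0):
    """Generalized likelihood ratio test of H0: mean = theta0 for an i.i.d.
    exponential sample (complete data), with the chi-square(1) approximation."""
    n, xbar = x.size, x.mean()
    lam = 2.0 * n * (xbar / theta0 - 1.0 - np.log(xbar / theta0))  # -2 log LR
    pval = stats.chi2.sf(lam, df=1)
    return lam, pval

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=40)
print(glr_exponential_mean(x, theta0=1.5))
```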

5.
Epstein [Truncated life tests in the exponential case, Ann. Math. Statist. 25 (1954), pp. 555–564] introduced a hybrid censoring scheme (called Type-I hybrid censoring) and Chen and Bhattacharyya [Exact confidence bounds for an exponential parameter under hybrid censoring, Comm. Statist. Theory Methods 17 (1988), pp. 1857–1870] derived the exact distribution of the maximum-likelihood estimator (MLE) of the mean of a scaled exponential distribution based on a Type-I hybrid censored sample. Childs et al. [Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution, Ann. Inst. Statist. Math. 55 (2003), pp. 319–330] provided an alternate simpler expression for this distribution, and also developed analogous results for another hybrid censoring scheme (called Type-II hybrid censoring). The purpose of this paper is to derive the exact bivariate distribution of the MLE of the parameter vector of a two-parameter exponential model based on hybrid censored samples. The marginal distributions are derived and exact confidence bounds for the parameters are obtained. The results are also used to derive the exact distribution of the MLE of the pth quantile, as well as the corresponding confidence bounds. These exact confidence intervals are then compared with parametric bootstrap confidence intervals in terms of coverage probabilities. Finally, we present some numerical examples to illustrate the methods of inference developed here.
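The following sketch simulates a Type-I hybrid censored sample (termination at min(X_(r), T)) from hypothetical exponential lifetimes and computes the usual MLE of the mean, total time on test divided by the number of observed failures; the exact distribution theory and confidence bounds discussed above are not reproduced here.

```python
import numpy as np

def type1_hybrid_mle(lifetimes, r, T):
    """Simulate Type-I hybrid censoring (stop at min(X_(r), T)) for a set of
    exponential lifetimes and return the MLE of the mean, which is defined
    only when at least one failure is observed."""
    x = np.sort(lifetimes)
    n = x.size
    t_star = min(x[r - 1], T)          # termination time
    observed = x[x <= t_star]          # failures seen before termination
    d = observed.size
    if d == 0:
        return None                    # MLE does not exist without failures
    total_time_on_test = observed.sum() + (n - d) * t_star
    return total_time_on_test / d

rng = np.random.default_rng(4)
lifetimes = rng.exponential(scale=10.0, size=25)   # hypothetical Exp(mean = 10)
print(type1_hybrid_mle(lifetimes, r=15, T=12.0))
```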

6.
This article considers the maximum likelihood and Bayes estimation of the stress–strength reliability based on two-parameter generalized exponential records. Here, we extend the results of Baklizi [Computational Statistics and Data Analysis 52 (2008), 3468–3473] to accommodate a wide variety of real datasets. We also consider the estimation of R when the common shape parameter is known. The results for the exponential distribution with different scale parameters can be obtained as a special case.
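A quick way to see what is being estimated is a Monte Carlo check of the stress–strength reliability R = P(Y < X) in the exponential special case, where R has the closed form μX/(μX + μY); the parameter values below are hypothetical, and the paper's generalized exponential MLE and Bayes estimators are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(5)
mu_x, mu_y = 3.0, 2.0                       # hypothetical strength/stress means
x = rng.exponential(scale=mu_x, size=5000)  # strength X
y = rng.exponential(scale=mu_y, size=5000)  # stress Y

r_empirical = np.mean(y < x)                # nonparametric estimate of P(Y < X)
r_closed_form = mu_x / (mu_x + mu_y)        # exact value for exponentials
print(round(r_empirical, 3), round(r_closed_form, 3))
```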

7.
The importance of the normal distribution for fitting continuous data is well known. However, in many practical situations the data distribution departs from normality. For example, the sample skewness and the sample kurtosis may be far from 0 and 3, respectively, the values expected under normality. So, it is important to have formal tests of normality against any alternative. D'Agostino et al. [A suggestion for using powerful and informative tests of normality, Am. Statist. 44 (1990), pp. 316–321] review four procedures, Z²(g₁), Z²(g₂), D and K², for testing departure from normality. The first two of these procedures are tests of normality against departures due to skewness and kurtosis, respectively. The other two tests are omnibus tests. An alternative to the normal distribution is the class of skew-normal distributions (see [A. Azzalini, A class of distributions which includes the normal ones, Scand. J. Statist. 12 (1985), pp. 171–178]). In this paper, we obtain a score test (W) and a likelihood ratio test (LR) of goodness of fit of the normal regression model against the skew-normal family of regression models. It turns out that the score test is based on the sample skewness and has a very simple form. The performance of these six procedures, in terms of size and power, is compared using simulations. The level properties of the three statistics LR, W and Z²(g₁) are similar and close to the nominal level for moderate to large sample sizes. Their power properties are also similar for small departures from normality due to skewness (γ₁ ≤ 0.4). Of these, the score test statistic has a very simple form and is computationally much simpler than the other two statistics. The LR statistic, in general, has the highest power, although it is computationally more complex as it requires estimates of the parameters under the normal model as well as under the skew-normal model. So, the score test may be used to test for normality against small departures from normality due to skewness. Otherwise, the likelihood ratio statistic LR should be used, as it detects general departures from normality (due to both skewness and kurtosis) with, in general, the largest power.
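The four D'Agostino-type procedures reviewed above have standard scipy implementations, as the sketch below shows; the score and likelihood ratio tests against the skew-normal family derived in the paper are not part of scipy and are not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = stats.skewnorm.rvs(a=4.0, size=100, random_state=rng)  # skewed alternative

z_g1 = stats.skewtest(x)       # test based on the sample skewness
z_g2 = stats.kurtosistest(x)   # test based on the sample kurtosis
k2 = stats.normaltest(x)       # D'Agostino-Pearson omnibus K^2 test
print(z_g1, z_g2, k2, sep="\n")
```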

8.
A test for the equality of two or more two-parameter exponential distributions is suggested. It is developed on an intuitive basis and is obtained by combining two independent tests by the Fisher method (1950, pp. 99–101). The test is simple to apply and is asymptotically optimal in the sense of Bahadur efficiency (1960). A numerical example is discussed to illustrate its application in a real-world situation. Monte Carlo simulation is used to calculate its power, which is compared with that of the test suggested by Singh and Narayan (1983). The suggested test is often found to be more powerful.
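Fisher's method for combining two independent tests, on which the suggested test is built, reduces to comparing −2 Σ log p_i with a chi-square distribution on 2k degrees of freedom; a minimal sketch with hypothetical p-values follows.

```python
import numpy as np
from scipy import stats

# p-values from two independent component tests (hypothetical numbers)
p1, p2 = 0.08, 0.12

# Fisher's method: -2 * sum(log p_i) ~ chi-square with 2k degrees of freedom
stat, p_combined = stats.combine_pvalues([p1, p2], method="fisher")
print(round(stat, 3), round(p_combined, 3))

# the same quantity computed by hand, compared with chi-square(4)
stat_by_hand = -2.0 * (np.log(p1) + np.log(p2))
print(round(stat_by_hand, 3), round(stats.chi2.sf(stat_by_hand, df=4), 3))
```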

9.
This paper deals with a study of different types of tests for the two-sided c-sample scale problem. We consider the classical parametric test of Bartlett [M.S. Bartlett, Properties of sufficiency and statistical tests, Proc. R. Soc. Lond. Ser. A 160 (1937), pp. 268–282], several nonparametric tests, especially the test of Fligner and Killeen [M.A. Fligner and T.J. Killeen, Distribution-free two-sample tests for scale, J. Amer. Statist. Assoc. 71 (1976), pp. 210–213], the test of Levene [H. Levene, Robust tests for equality of variances, in Contributions to Probability and Statistics, I. Olkin, ed., Stanford University Press, Palo Alto, 1960, pp. 278–292] and a robust version of it introduced by Brown and Forsythe [M.B. Brown and A.B. Forsythe, Robust tests for the equality of variances, J. Amer. Statist. Assoc. 69 (1974), pp. 364–367], as well as two adaptive tests proposed by Büning [H. Büning, Adaptive tests for the c-sample location problem – the case of two-sided alternatives, Comm. Statist. Theory Methods 25 (1996), pp. 1569–1582] and Büning [H. Büning, An adaptive test for the two-sample scale problem, Nr. 2003/10, Diskussionsbeiträge des Fachbereichs Wirtschaftswissenschaft der Freien Universität Berlin, Volkswirtschaftliche Reihe, 2003], which are based on the principle of Hogg [R.V. Hogg, Adaptive robust procedures: A partial review and some suggestions for future applications and theory, J. Amer. Statist. Assoc. 69 (1974), pp. 909–927]. For all the tests we also use bootstrap sampling strategies. We compare all the tests via Monte Carlo methods by investigating their level α and power β for distributions with different degrees of tailweight and skewness and for various sample sizes. It turns out that the test of Fligner and Killeen in combination with the bootstrap is the best among all tests considered.
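Several of the tests listed above have standard scipy implementations; the sketch below applies the Bartlett test, the Brown–Forsythe (median-centred Levene) test and the Fligner–Killeen test to three hypothetical groups. The adaptive tests of Büning and the bootstrap versions studied in the paper are not included.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
g1 = rng.normal(0.0, 1.0, size=30)
g2 = rng.normal(0.0, 1.5, size=30)
g3 = rng.standard_t(df=3, size=30)      # heavy-tailed group

print(stats.bartlett(g1, g2, g3))                    # classical parametric test
print(stats.levene(g1, g2, g3, center="median"))     # Brown-Forsythe variant
print(stats.fligner(g1, g2, g3))                     # Fligner-Killeen test
```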

10.
In this paper, within the framework of a Bayesian model, we consider the problem of sequentially estimating the intensity parameter of a homogeneous Poisson process with a linear exponential (LINEX) loss function and a fixed cost per unit time. An asymptotically pointwise optimal (APO) rule is proposed. It is shown to be asymptotically optimal for arbitrary priors and asymptotically non-deficient for conjugate priors, in the sense of Bickel and Yahav [Asymptotically pointwise optimal procedures in sequential analysis, in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, University of California Press, Berkeley, CA, 1967, pp. 401–413; Asymptotically optimal Bayes and minimax procedures in sequential estimation, Ann. Math. Statist. 39 (1968), pp. 442–456] and Woodroofe [A.P.O. rules are asymptotically non-deficient for estimation with squared error loss, Z. Wahrsch. verw. Gebiete 58 (1981), pp. 331–341], respectively. The proposed APO rule is illustrated using a real data set.
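For the conjugate case, the LINEX Bayes estimate of a Poisson intensity has a simple closed form via the Gamma moment generating function: with posterior Gamma(a, b) in shape–rate form and loss exp(c(d − λ)) − c(d − λ) − 1, the estimate is (a/c) log(1 + c/b). The sketch below uses a hypothetical prior and data; the APO stopping rule itself is not implemented.

```python
import numpy as np

def linex_bayes_poisson_rate(a0, b0, n_events, t, c):
    """Posterior Gamma(shape=a0+n_events, rate=b0+t) for a Poisson intensity
    and its Bayes estimate under LINEX loss exp(c*(d-lam)) - c*(d-lam) - 1,
    namely d* = -(1/c) * log E[exp(-c*lam) | data]."""
    a, b = a0 + n_events, b0 + t
    return (a / c) * np.log(1.0 + c / b)   # closed form via the Gamma mgf

# hypothetical prior Gamma(2, 1), 13 events observed over 5 time units, c = 0.5
print(round(linex_bayes_poisson_rate(2.0, 1.0, 13, 5.0, 0.5), 3))
```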

11.
The generalized Rayleigh (GR) distribution [V.G. Vodă, Inferential procedures on a generalized Rayleigh variate, I, Appl. Math. 21 (1976), pp. 395–412; V.G. Vodă, Inferential procedures on a generalized Rayleigh variate, II, Appl. Math. 21 (1976), pp. 413–419] has been applied in several areas such as health, agriculture, biology and other sciences. For the first time, we propose the Kumaraswamy GR (KwGR) distribution for analysing lifetime data. The new density function can be expressed as a mixture of GR density functions. Explicit formulae are derived for some of its statistical quantities. The density function of the order statistics can also be expressed as a mixture of GR density functions. We also propose a linear log-KwGR regression model for analysing data with support on the real line, extending some known regression models. The parameters are estimated by maximum likelihood. The importance of the new models is illustrated with two real data sets.
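The Kumaraswamy-G construction underlying the KwGR model is F(x) = 1 − (1 − G(x)^a)^b for a baseline cdf G. The sketch below implements this generic transform, using scipy's ordinary Rayleigh distribution as a stand-in baseline (an assumption; the paper's baseline is the two-parameter GR distribution, which is not available in scipy).

```python
import numpy as np
from scipy import stats

def kw_cdf(x, a, b, base=stats.rayleigh):
    """Kumaraswamy-G cdf: F(x) = 1 - (1 - G(x)**a)**b for a baseline G."""
    G = base.cdf(x)
    return 1.0 - (1.0 - G**a) ** b

def kw_pdf(x, a, b, base=stats.rayleigh):
    """Kumaraswamy-G pdf: a*b*g(x)*G(x)**(a-1)*(1 - G(x)**a)**(b-1)."""
    G, g = base.cdf(x), base.pdf(x)
    return a * b * g * G ** (a - 1.0) * (1.0 - G**a) ** (b - 1.0)

x = np.linspace(0.01, 5.0, 5)
print(np.round(kw_cdf(x, a=2.0, b=1.5), 4))
print(np.round(kw_pdf(x, a=2.0, b=1.5), 4))
```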

12.
In this paper, we revisit the alternative outlier model of Thompson [A note on restricted maximum likelihood estimation with an alternative outlier model, J. Roy. Stat. Soc. Ser. B 47 (1985), pp. 53–55] for detecting outliers in the linear model. Gumedze et al. [A variance shift model for detection of outliers in the linear mixed model, Comput. Statist. Data Anal. 54 (2010), pp. 2128–2144] called this model the variance shift outlier model (VSOM). The basic idea behind the VSOM is to detect observations with inflated variance and isolate them for further investigation. The VSOM is appealing because it downweights an outlier in the analysis, with the weighting determined automatically as part of the estimation procedure. We set up the VSOM as a linear mixed model and then use the likelihood ratio test (LRT) statistic as an objective measure for determining whether the weighting is required, i.e. whether the observation is an outlier. We also derive one-step updates of the variance parameter estimates based on the observed, expected and average information matrices to obtain one-step LRT statistics, which usually require less computation. Both the fully iterated and one-step LRTs are functions of the squared standardized residuals from the null model and can therefore be computed directly without the need to fit the VSOM. We investigate the properties of the likelihood ratio tests and compare them. An extension of the model to detect a group of outliers is also given. We illustrate the proposed methodology using simulated datasets and a real dataset.

13.
Three test statistics for a change-point in a linear model, variants of those considered by Andrews and Ploberger [Optimal tests when a nuisance parameter is present only under the alternative. Econometrica. 1994;62:1383–1414], are studied: the sup-likelihood ratio (LR) statistic, a weighted average of the exponential of LR statistics, and a weighted average of LR statistics. Critical values for the statistics with time-trend regressors, obtained via simulation, are found to vary considerably, depending on conditions on the error terms. The performance of the bootstrap in approximating p-values of the distributions is assessed in a simulation study. A sample approximation to asymptotic analytical expressions extending those of Kim and Siegmund [The likelihood ratio test for a change-point in simple linear regression. Biometrika. 1989;76:409–423] in the case of the sup-LR test is also assessed. The approximations and the bootstrap are applied to the Quandt data [The estimation of a parameter of a linear regression system obeying two separate regimes. J Amer Statist Assoc. 1958;53:873–880] and to real data concerning a change-point in oxygen uptake during incremental exercise testing; the bootstrap gives reasonable results.
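A stripped-down version of the sup-LR statistic, here for a single mean shift in an i.i.d. Gaussian sequence rather than the full linear model with time-trend regressors, looks as follows; critical values would still have to come from simulation or the bootstrap, as the abstract emphasises.

```python
import numpy as np

def sup_lr_mean_shift(y, trim=5):
    """Sup-LR scan for a single mean shift in an i.i.d. Gaussian sequence:
    for each admissible break point k, -2 log LR = n * log(RSS0 / RSS(k));
    the statistic is the maximum over k."""
    y = np.asarray(y, dtype=float)
    n = y.size
    rss0 = np.sum((y - y.mean()) ** 2)
    stats_k = []
    for k in range(trim, n - trim):
        r1 = np.sum((y[:k] - y[:k].mean()) ** 2)
        r2 = np.sum((y[k:] - y[k:].mean()) ** 2)
        stats_k.append(n * np.log(rss0 / (r1 + r2)))
    k_hat = trim + int(np.argmax(stats_k))
    return max(stats_k), k_hat

rng = np.random.default_rng(8)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.0, 1, 40)])  # break at 60
print(sup_lr_mean_shift(y))
```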

14.
We deal with a general class of extreme-value regression models introduced by Barreto-Souza and Vasconcellos [Bias and skewness in a general extreme-value regression model, Comput. Statist. Data Anal. 55 (2011), pp. 1379–1393]. Our goal is to derive an adjusted likelihood ratio statistic that is approximately distributed as χ² with a high degree of accuracy. Although the adjusted statistic requires more computational effort than its unadjusted counterpart, it is shown that the adjustment term has a simple compact form that can be easily implemented in standard statistical software. Further, we compare the finite-sample performance of the three classical tests (likelihood ratio, Wald, and score), the gradient test recently proposed by Terrell [The gradient statistic, Comput. Sci. Stat. 34 (2002), pp. 206–215], and the adjusted likelihood ratio test obtained in this article. Our simulations favour the latter. Applications of our results are presented.
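As a toy illustration of the four test statistics being compared (likelihood ratio, Wald, score and gradient), the sketch below computes them for H0: mean = θ0 in an i.i.d. exponential sample; this is only a stand-in for the extreme-value regression setting of the paper, and the adjustment term is not implemented.

```python
import numpy as np
from scipy import stats

def four_tests_exponential_mean(x, theta0):
    """LR, Wald, score and gradient statistics for H0: mean = theta0 in an
    i.i.d. exponential sample; each is referred to chi-square(1).
    Note: with this parametrization U(theta0) = I(theta0)*(thetahat - theta0),
    so the score and gradient statistics coincide."""
    n, xbar = x.size, x.mean()
    lr = 2 * n * (xbar / theta0 - 1 - np.log(xbar / theta0))
    wald = n * (xbar - theta0) ** 2 / xbar**2
    score = n * (xbar - theta0) ** 2 / theta0**2
    u = n * (xbar - theta0) / theta0**2        # score function at theta0
    gradient = u * (xbar - theta0)             # Terrell's gradient statistic
    pvals = stats.chi2.sf([lr, wald, score, gradient], df=1)
    return dict(zip(["LR", "Wald", "score", "gradient"], pvals))

rng = np.random.default_rng(9)
print(four_tests_exponential_mean(rng.exponential(scale=1.0, size=30), 1.3))
```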

15.
This article deals with testing inference in the class of beta regression models with varying dispersion. We focus on inference in small samples. We perform a numerical analysis in order to evaluate the sizes and powers of different tests. We consider the likelihood ratio test, two adjusted likelihood ratio tests proposed by Ferrari and Pinheiro [Improved likelihood inference in beta regression, J. Stat. Comput. Simul. 81 (2011), pp. 431–443], the score test, the Wald test and bootstrap versions of the likelihood ratio, score and Wald tests. We perform tests on the parameters that index the mean submodel and also on the parameters in the linear predictor of the precision submodel. Overall, the numerical evidence favours the bootstrap tests. It is also shown that the score test is considerably less size-distorted than the likelihood ratio and Wald tests. An application that uses real (not simulated) data is presented and discussed.

16.
The class of inflated beta regression models generalizes that of beta regressions [S.L.P. Ferrari and F. Cribari-Neto, Beta regression for modelling rates and proportions, J. Appl. Stat. 31 (2004), pp. 799–815] by incorporating a discrete component that allows practitioners to model data on rates and proportions with observations that equal an interval limit. For instance, one can model responses that assume values in (0, 1]. The likelihood ratio test tends to be quite oversized (liberal, anticonservative) in inflated beta regressions estimated with a small number of observations. Indeed, our numerical results show that its null rejection rate can be almost twice the nominal level. It is thus important to develop alternative testing strategies. This paper develops small-sample adjustments to the likelihood ratio and signed likelihood ratio test statistics in inflated beta regression models. The adjustments do not require orthogonality between the parameters of interest and the nuisance parameters and are fairly simple since they only require first- and second-order log-likelihood cumulants. Simulation results show that the modified likelihood ratio tests deliver much more accurate inference in small samples. An empirical application is presented and discussed.

17.
This paper proposes various double unit root tests for cross-sectionally dependent panel data. The cross-sectional correlation is handled by the projection method [P.C.B. Phillips and D. Sul, Dynamic panel estimation and homogeneity testing under cross section dependence, Econom. J. 6 (2003), pp. 217–259; H.R. Moon and B. Perron, Testing for a unit root in panels with dynamic factors, J. Econom. 122 (2004), pp. 81–126] or the subtraction method [J. Bai and S. Ng, A PANIC attack on unit roots and cointegration, Econometrica 72 (2004), pp. 1127–1177]. Pooling or averaging is applied to combine results from different panel units. Also, the autoregressive parameters are estimated by ordinary least squares [D.P. Hasza and W.A. Fuller, Estimation for autoregressive processes with unit roots, Ann. Stat. 7 (1979), pp. 1106–1120] or by symmetric estimation [D.L. Sen and D.A. Dickey, Symmetric test for second differencing in univariate time series, J. Bus. Econ. Stat. 5 (1987), pp. 463–473], and the mean functions are adjusted by ordinary or recursive mean adjustment. Combining the different methods for defactoring to eliminate the cross-sectional dependency, integrating results from panel units, estimating the parameters, and adjusting the mean functions yields various tests for double unit roots in panel data. Simple asymptotic distributions of the proposed test statistics are derived, which can be used to find critical values of the test statistics.

We perform a Monte Carlo experiment to compare the performance of these tests and to suggest optimal tests for a given panel data set. Application of the proposed tests to real data, the yearly export panel data sets of several Latin American countries for the past 50 years, illustrates the usefulness of the proposed tests for panel data, in that they reveal stronger evidence of double unit roots than the componentwise double unit root tests of Hasza and Fuller [Estimation for autoregressive processes with unit roots, Ann. Stat. 7 (1979), pp. 1106–1120] or Sen and Dickey [Symmetric test for second differencing in univariate time series, J. Bus. Econ. Stat. 5 (1987), pp. 463–473].


18.
Tests for the equality of variances are of interest in many areas such as quality control, agricultural production systems, experimental education, pharmacology and biology, as well as being a preliminary to the analysis of variance, dose–response modelling or discriminant analysis. The literature is vast. Traditional non-parametric tests are due to Mood, Miller and Ansari–Bradley. A test which usually stands out in terms of power and robustness against non-normality is W50, the Brown and Forsythe [Robust tests for the equality of variances, J. Am. Stat. Assoc. 69 (1974), pp. 364–367] modification of the Levene test [Robust tests for equality of variances, in Contributions to Probability and Statistics, I. Olkin, ed., Stanford University Press, Stanford, 1960, pp. 278–292]. This paper deals with the two-sample scale problem and in particular with Levene-type tests. We consider 10 Levene-type tests: the W50, M50 and L50 tests [G. Pan, On a Levene type test for equality of two variances, J. Stat. Comput. Simul. 63 (1999), pp. 59–71], the R-test [R.G. O'Brien, A general ANOVA method for robust tests of additive models for variances, J. Am. Stat. Assoc. 74 (1979), pp. 877–880], as well as the bootstrap and permutation versions of the W50, L50 and R tests. We also consider the F-test, the modified Fligner and Killeen [Distribution-free two-sample tests for scale, J. Am. Stat. Assoc. 71 (1976), pp. 210–213] test, an adaptive test due to Hall and Padmanabhan [Adaptive inference for the two-sample scale problem, Technometrics 23 (1997), pp. 351–361] and the two tests due to Shoemaker [Tests for differences in dispersion based on quantiles, Am. Stat. 49(2) (1995), pp. 179–182; Interquantile tests for dispersion in skewed distributions, Commun. Stat. Simul. Comput. 28 (1999), pp. 189–205]. The aim is to identify the most effective methods for detecting scale differences. Our study differs from previous ones in that it focuses on resampling versions of the Levene-type tests, and many of the tests considered here have never before been proposed and/or compared. The computationally simplest test found to be robust is W50. Higher power, while preserving robustness, is achieved by the resampling versions of the Levene-type tests, namely the permutation R-test (recommended for normal and light-tailed distributions) and the bootstrap L50 test (recommended for heavy-tailed and skewed distributions). Among the non-Levene-type tests, the best one is the adaptive test due to Hall and Padmanabhan.
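A resampling version of a Levene-type test is easy to sketch: the snippet below computes the two-sample Brown–Forsythe W50 statistic and a permutation p-value by re-randomising the pooled observations (the exact resampling schemes and the other tests compared in the paper may differ in detail).

```python
import numpy as np
from scipy import stats

def w50_stat(x, y):
    """Brown-Forsythe W50 statistic: one-way ANOVA F on absolute deviations
    from the group medians (two-sample case)."""
    dx = np.abs(x - np.median(x))
    dy = np.abs(y - np.median(y))
    return stats.f_oneway(dx, dy).statistic

def permutation_w50(x, y, n_perm=2000, seed=0):
    """Permutation p-value for the two-sample W50 scale test."""
    rng = np.random.default_rng(seed)
    obs = w50_stat(x, y)
    pooled, n = np.concatenate([x, y]), x.size
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += w50_stat(perm[:n], perm[n:]) >= obs
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(10)
a = rng.normal(0, 1, 25)
b = rng.normal(0, 2, 25)
print(permutation_w50(a, b))
```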

19.
A statistical model is said to be an order-restricted statistical model when its parameter takes its values in a closed convex cone C of the Euclidean space. In recent years, order-restricted likelihood ratio tests and maximum likelihood estimators have been criticized on the grounds that they may violate a cone order monotonicity (COM) property, and hence reverse the cone order induced by C. The authors argue here that these reversals occur only in the case that C is an obtuse cone, and that in this case COM is an inappropriate requirement for likelihood-based estimates and tests. They conclude that these procedures thus remain perfectly reasonable procedures for order-restricted inference.

20.
Jibo Wu & Hu Yang, Statistics 47(3) (2013), pp. 535–545
This paper deals with parameter estimation in the linear regression model, and an almost unbiased two-parameter estimator is introduced. The performance of this new estimator relative to the ordinary least-squares estimator and the two-parameter estimator [M.R. Özkale and S. Kaçiranlar, The restricted and unrestricted two-parameter estimator, Comm. Statist. Theory Methods 36 (2007), pp. 2707–2725], in terms of the scalar mean-squared error criterion, is investigated and a simulation study is conducted.
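A scalar mean-squared error comparison of this kind is usually done by Monte Carlo; the sketch below compares OLS with an ordinary ridge estimator under a collinear design as a stand-in, since the paper's almost unbiased two-parameter estimator has its own specific form that is not reproduced here.

```python
import numpy as np

def scalar_mse_comparison(n=50, p=4, k=1.0, rho=0.9, n_rep=2000, seed=0):
    """Monte Carlo scalar MSE, E||betahat - beta||^2, for OLS versus a ridge
    estimator (X'X + kI)^{-1} X'y under a collinear design."""
    rng = np.random.default_rng(seed)
    beta = np.ones(p)
    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)   # fixed design
    XtX = X.T @ X
    mse_ols = mse_ridge = 0.0
    for _ in range(n_rep):
        y = X @ beta + rng.normal(0.0, 1.0, size=n)
        b_ols = np.linalg.solve(XtX, X.T @ y)
        b_ridge = np.linalg.solve(XtX + k * np.eye(p), X.T @ y)
        mse_ols += np.sum((b_ols - beta) ** 2)
        mse_ridge += np.sum((b_ridge - beta) ** 2)
    return mse_ols / n_rep, mse_ridge / n_rep

print(scalar_mse_comparison())
```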
