Similar Literature
20 similar documents found.
1.
In this paper we consider conditional inference procedures for the Pareto and power function distributions. We develop procedures for obtaining confidence intervals for the location and scale parameters as well as upper and lower n probability tolerance intervals for a proportion g, given a Type-II right censored sample from the corresponding distribution. The intervals are exact, and are obtained by conditioning on the observed values of the ancillary statistics. Since, for each distribution, the procedures assume that a shape parameter x is known, a sensitivity analysis is also carried out to see how the procedures are affected by changes in x.

2.
A goodness-of-fit statistic Z is defined in terms of the spacings generated by the order statistics of a complete or a censored sample from a distribution of the type (1/σ)f((x − μ)/σ), with μ and σ unknown. The distribution of Z is studied, mostly through Monte Carlo methods. The power properties of Z for testing Exponential, Uniform, Normal, Gamma and Logistic distributions are discussed; Z is shown to be more powerful than the Smith & Bain (1976) correlation statistic, except for testing the symmetric distributions (Uniform, Normal and Logistic) against symmetric alternatives. The statistic Z is generalized to test goodness-of-fit from k ≥ 2 independent complete or censored samples.

3.
Selection from k independent populations of the t (< k) populations with the smallest scale parameters has been considered under the Indifference Zone approach by Bechhofer & Sobel (1954). The same problem has been considered under the Subset Selection approach by Gupta & Sobel (1962a) for the normal variances case and by Carroll, Gupta & Huang (1975) for the more general case of stochastically increasing distributions. This paper uses the Subset Selection approach to place confidence bounds on the probability of selecting all "good" populations, or only "good" populations, for the case of scale parameters, where a "good" population is defined to have one of the t smallest scale parameters. This is an extension of the location parameter results obtained by Bofinger & Mengersen (1986). Special results are obtained for the case of selecting normal populations based on variances, and the necessary tables are presented.

4.
When a new observation is to be classified into one of several multivariate normal populations with different means and the same covariance matrix, by Rao's method of scoring, the chance of misclassification is expressed as a multiple integral. This paper gives a practical method of obtaining reasonable approximations to this integral by using tables prepared by Gibbons, Olkin & Sobel (1977) for a different task.

5.
Replacing f(x)/F(x) by α + β(x − θ)/σ in the maximum likelihood equations ∂L/∂θ = 0 and ∂L/∂σ = 0 calculated from a censored sample, a pair of estimators θe and σe is obtained. The variances and covariances of these estimators are calculated and compared with the corresponding values for the best linear unbiased (BLU) estimators.

6.
Kumar and Patel (1971) considered the problem of testing the equality of location parameters of two exponential distributions on the basis of samples censored from above, when the scale parameters are the same and unknown. The test proposed by them is shown to be biased for n1 ≠ n2, while for n1 = n2 the test possesses the property of monotonicity and is equivalent to the likelihood ratio test considered by Epstein and Tsao (1953) and Dubey (1963a, 1963b). Epstein and Tsao state that the test is unbiased. We note that when the scale parameters of k exponential distributions are unknown, the problem of testing the equality of location parameters reduces to that of testing the equality of parameters in k rectangular populations, for which a test and its power function were given by Khatri (1960, 1965); Jaiswal (1969) considered similar problems in his thesis. Here we extend the problem of testing the equality of location parameters of k exponential distributions on the basis of samples censored from above when the scale parameters are equal and unknown, and we establish the likelihood ratio test (LRT) and the union-intersection test (UIT) procedures. Using results derived by Jaiswal (1969), we obtain the power function of the LRT and, for k = 2, show that the test possesses the property of monotonicity. The power function of the UIT is also given.

7.
Three tests are considered concerning the common mean of two normal populations: (1) an F test based on a sample from one population, (2) a proposed test based on the sum of the F statistics from independent samples from the two populations, and (3) a test based on the maximum of the F statistics from the two independent samples. A condition under which test (2) is locally more powerful than test (1) is given. Since the test statistic in test (2) does not follow a standard distribution, a formula for approximating the observed significance level is provided. A simulation study is used to compare the power of these tests.
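Since the statistic in test (2), the sum of two independent F variates, has no standard distribution, its observed significance level can also be approximated by straightforward simulation. The sketch below is a generic Monte Carlo version; the function name and degrees of freedom are illustrative assumptions, not the paper's formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_f_pvalue(t_obs, d1a, d2a, d1b, d2b, n_sim=100_000):
    """Monte Carlo approximation to Pr[F1 + F2 >= t_obs] under H0,
    where F1 ~ F(d1a, d2a) and F2 ~ F(d1b, d2b) are independent."""
    f1 = rng.f(d1a, d2a, size=n_sim)
    f2 = rng.f(d1b, d2b, size=n_sim)
    return float(np.mean(f1 + f2 >= t_obs))

# Illustrative call: observed sum 8.5 with (1, 10) and (1, 12) d.f.
p = sum_f_pvalue(8.5, 1, 10, 1, 12)
```

With 100,000 replicates the Monte Carlo standard error of the estimated significance level is below 0.002.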

8.
The author proposes the best shrinkage predictor, at a preassigned dominance level, for a future order statistic of an exponential distribution, assuming a prior estimate of the scale parameter is distributed over an interval according to an arbitrary distribution with known mean. Based on a Type II censored sample from this distribution, we predict a future order statistic in another independent sample from the same distribution. The predictor is constructed by incorporating a preliminary confidence interval for the scale parameter and a class of shrinkage predictors constructed here. It considerably improves on classical predictors for all values of the scale parameter within its dominance interval, which contains the confidence interval of a preassigned level.

9.
For estimating the common mean of a bivariate normal distribution, Krishnamoorthy & Rohatgi (1989) proposed some estimators which dominate the maximum likelihood estimator in a large region of the parameter space. We consider some modifications of these estimators and study their risk performance.

10.
Isometry is the independence of a size variable G(X) and shape. This note characterizes the power-product G(X) = γ ∏_i X_i^{b_i} as the only continuous size variable for which isometry with respect to G can continue to hold under all unequal changes of scale of the variables Xi, assuming some conditions on the ranges of the Xi.

11.
The paper analyses the distribution of times from HIV seroconversion to the first AIDS-defining illness for a subcohort of the Western Australian HIV Cohort Study for whom the seroconversion date is known to fall within a calendar-time window. The analysis is based on a generalised gamma model for the incubation times and a piecewise-constant distribution for the conditional times of seroconversion given the seroconversion windows. This allows flexible hazard shapes and also allows comparison of the goodness of fit of the gamma and Weibull distributions, which are often used for modelling incubation times. Computational issues are discussed. In these data, neither age at seroconversion, nor calendar time of seroconversion, nor the identification of a seroconversion illness appears to affect incubation distributions. The Weibull distribution appears to provide a reasonable fit. The distribution of times from seroconversion to an HIV-related death is also briefly considered.

12.
In this note we examine the problem of estimating the mean of a Poisson distribution when a nuisance parameter is present. Using a condition of Cox (1958) on ancillarity in the presence of a nuisance parameter, we argue that inference about the parameter of interest should be based on the conditional distribution given the appropriate ancillary statistics. A small simulation study compares the performance of the conditional likelihood approach with that of the standard likelihood approach.
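As a hedged illustration of the general idea (not the paper's exact model), consider the textbook two-Poisson setup: with X ~ Poisson(λ) and Y ~ Poisson(ψλ), conditioning on the total X + Y yields a binomial likelihood for ψ that is free of the nuisance rate λ:

```python
import math

def conditional_loglik(psi, x, y):
    """Conditional log-likelihood of psi given the total n = x + y:
    X | X + Y = n ~ Binomial(n, 1 / (1 + psi)), free of lambda."""
    n = x + y
    p = 1.0 / (1.0 + psi)
    return (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(y + 1)
            + x * math.log(p) + y * math.log(1.0 - p))

# The conditional MLE has the closed form psi_hat = y / x (for x > 0).
x_obs, y_obs = 12, 30
psi_hat = y_obs / x_obs
```

Maximizing the binomial likelihood in p gives p̂ = x/n, so 1/(1 + ψ̂) = x/n and ψ̂ = y/x.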

13.
This paper investigates the predictive mean squared error performance of a modified double k-class estimator by incorporating the Stein variance estimator. Recent studies show that the performance of the Stein rule estimator can be improved by using the Stein variance estimator. However, as we demonstrate below, this conclusion does not hold in general for all members of the double k-class estimators. On the other hand, an estimator is found to have smaller predictive mean squared error than the Stein variance-Stein rule estimator, over quite large parts of the parameter space.

14.
We re-examine the criteria of "hyper-admissibility" and "necessary bestness" for the choice of estimator, from the point of view of their relevance to the design of actual surveys. Both criteria give rise to a unique choice of estimator (viz. the Horvitz-Thompson estimator) whatever the character under investigation or the sample design. However, we show here that the "principal hyper-surfaces" (or "domains") of dimension one, which are practically uninteresting, play the key role in arriving at this unique choice. A variance estimator v1 (due to Horvitz and Thompson), which "often" takes negative values, is shown to be uniquely hyper-admissible in a wide class of unbiased estimators of the variance of the Horvitz-Thompson estimator. Extensive empirical evidence on the superiority of the Sen-Yates-Grundy variance estimator v2 over v1 is presented.
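The two variance estimators under discussion have standard textbook forms and can be sketched directly for a fixed-size design with first-order inclusion probabilities πi and joint inclusion probabilities πij; the implementation below is an illustrative sketch, not the authors' code:

```python
import numpy as np

def ht_variance_estimators(y, pi, pij):
    """Two unbiased estimators of the variance of the Horvitz-Thompson
    total, from sample values y, first-order inclusion probabilities pi,
    and joint inclusion probabilities pij.
    Returns (v1, v2): the Horvitz-Thompson form, which can be negative,
    and the Sen-Yates-Grundy form for fixed-size designs."""
    y = np.asarray(y, dtype=float)
    pi = np.asarray(pi, dtype=float)
    n = len(y)
    v1 = float(np.sum((1.0 - pi) / pi**2 * y**2))
    v2 = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            v1 += (pij[i][j] - pi[i] * pi[j]) / (pi[i] * pi[j] * pij[i][j]) * y[i] * y[j]
            if i < j:
                v2 += (pi[i] * pi[j] - pij[i][j]) / pij[i][j] * (y[i] / pi[i] - y[j] / pi[j]) ** 2
    return v1, v2

# Tiny n = 2 sample with illustrative inclusion probabilities.
v1, v2 = ht_variance_estimators([3.0, 1.0], [0.5, 0.5],
                                [[0.5, 0.2], [0.2, 0.5]])
```

Because v2 is a sum of squares with nonnegative coefficients whenever πiπj ≥ πij, it cannot go negative for such designs, whereas v1 can.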

15.
We propose a method for simultaneously estimating a density function and its derivatives based on a recursive reduction of bias in the usual one-step estimator—in effect, a jackknife. The procedure is computationally simple and requires only the inversion of a triangular matrix with easily-calculated elements.
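The authors' recursion is specific to their paper, but its first step can be illustrated with the classical generalized-jackknife device of combining two bandwidths so that the O(h²) bias terms of a kernel density estimate cancel; the combination rule and all constants below are illustrative assumptions, not the authors' scheme:

```python
import numpy as np

rng = np.random.default_rng(3)

def kde(x, data, h):
    """Gaussian kernel density estimate evaluated at the points x."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def jackknife_kde(x, data, h, c=2.0):
    """One generalized-jackknife step: combine bandwidths h and c*h so
    the leading O(h^2) bias terms of the two estimates cancel."""
    r = 1.0 / c**2
    return (kde(x, data, h) - r * kde(x, data, c * h)) / (1.0 - r)

data = rng.standard_normal(4000)
est = jackknife_kde(np.array([0.0]), data, 0.3)[0]   # target: phi(0) ~ 0.3989
```

With c = 2 the combination is (4/3)·f̂_h − (1/3)·f̂_2h, which removes the leading bias at the cost of a modest variance increase.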

16.
Consider a discrete-time Markov chain X(n) defined on {0, 1, …} and let P be the transition probability matrix governing X(n). This paper shows that, if a transformed matrix of P is totally positive of order 2, then p0j(n) and pi0(n) are unimodal with respect to n, where pij(n) = Pr[X(n) = j | X(0) = i]. Furthermore, the modes of p0j(n) and pi0(n) are non-increasing in j and i, respectively, when additionally P itself is totally positive of order 2. These results are transferred to a class of semi-Markov processes via a uniformization.
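The unimodality claim is easy to probe numerically: compute pij(n) by matrix powers and scan the sequence. The birth-death chain below is an illustrative example (not from the paper) of the kind of structured matrix to which such total-positivity results apply:

```python
import numpy as np

# A simple birth-death chain on {0, 1, 2, 3} (illustrative values).
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.5, 0.5]])

def p0j_sequence(P, j, n_max):
    """p_{0j}(n) = Pr[X(n) = j | X(0) = 0] for n = 1, ..., n_max."""
    out, Pn = [], np.eye(P.shape[0])
    for _ in range(n_max):
        Pn = Pn @ P
        out.append(Pn[0, j])
    return out

def is_unimodal(seq, tol=1e-9):
    """True if seq rises (weakly) to a single peak and then falls."""
    falling = False
    for d in np.diff(seq):
        if d < -tol:
            falling = True
        elif d > tol and falling:
            return False
    return True

seq = p0j_sequence(P, 3, 40)
```

Here p03(n) climbs monotonically toward its stationary value, a degenerate (single-run) case of unimodality; chains started away from the boundary typically show a rise and fall.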

17.
We consider two related aspects of the study of old‐age mortality. One is the estimation of a parameterized hazard function from grouped data, and the other is its possible deceleration at extreme old age owing to heterogeneity described by a mixture of distinct sub‐populations. The first is treated by half of a logistic transform, which is known to be free of discretization bias at older ages, and also preserves the increasing slope of the log hazard in the Gompertz case. It is assumed that data are available in the form published by official statistical agencies, that is, as aggregated frequencies in discrete time. Local polynomial modelling and weighted least squares are applied to cause‐of‐death mortality counts. The second, related, problem is to discover what conditions are necessary for population mortality to exhibit deceleration for a mixture of Gompertz sub‐populations. The general problem remains open but, in the case of three groups, we demonstrate that heterogeneity may be such that it is possible for a population to show decelerating mortality and then return to a Gompertz‐like increase at a later age. This implies that there are situations, depending on the extent of heterogeneity, in which there is at least one age interval in which the hazard function decreases before increasing again.
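The deceleration mechanism can be illustrated numerically even with two Gompertz sub-populations sharing a slope b: as the frailer group is selectively removed, the population hazard flattens, can decrease, and later returns to Gompertz-like growth. All parameter values below are illustrative assumptions, not estimates from the paper's data (the paper's conditions concern three groups):

```python
import numpy as np

def gompertz_hazard(x, a, b):
    return a * np.exp(b * x)

def gompertz_surv(x, a, b):
    # Gompertz survival: S(x) = exp(-(a/b) * (e^{bx} - 1))
    return np.exp(-(a / b) * (np.exp(b * x) - 1.0))

def mixture_hazard(x, weights, levels, b=0.1):
    """Population hazard of a mixture of Gompertz sub-populations that
    share slope b but differ in baseline level a_i (illustrative)."""
    num = sum(w * gompertz_hazard(x, a, b) * gompertz_surv(x, a, b)
              for w, a in zip(weights, levels))
    den = sum(w * gompertz_surv(x, a, b) for w, a in zip(weights, levels))
    return num / den

x = np.linspace(0.0, 120.0, 1201)
h = mixture_hazard(x, [0.7, 0.3], [1e-4, 5e-3])
d = np.diff(h)   # rises, then falls as the frail group dies out, then rises
```

The non-monotone sign pattern of `d` is exactly the "decelerate, decrease, then return to Gompertz-like increase" behaviour described above.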

18.
Cross-validation, as a popular tool for choosing a smoothing parameter, is generalized to the case of dependent observations. A general version of the ‘deletion theorem’ for representation and simplified calculation of cross-validatory criteria is given. Finally cross-validation is discussed in terms of penalized likelihoods as a method for model choice analogous to the Akaike information criterion.
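For independent observations, the basic cross-validatory criterion that a deletion theorem simplifies can be sketched directly; the leave-one-out score below for a Nadaraya-Watson smoother is a standard illustration, and the kernel, data and bandwidth grid are assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def loo_cv_score(x, y, h):
    """Leave-one-out cross-validation score for a Nadaraya-Watson
    smoother with a Gaussian kernel and bandwidth h."""
    n = len(x)
    err = 0.0
    for i in range(n):
        w = np.exp(-0.5 * ((x[i] - x) / h) ** 2)
        w[i] = 0.0                      # delete observation i
        err += (y[i] - np.sum(w * y) / np.sum(w)) ** 2
    return err / n

x = np.sort(rng.uniform(0.0, 1.0, 80))
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, 80)
grid = [0.01, 0.03, 0.1, 0.3, 1.0]
scores = [loo_cv_score(x, y, h) for h in grid]
best_h = grid[int(np.argmin(scores))]
```

For dependent observations the delete-one predictor above is no longer appropriate, which is precisely the gap the generalization addresses.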

19.
This paper considers a linear regression model with regression parameter vector β. The parameter of interest is θ = aᵀβ, where a is specified. When, as a first step, a data-based variable selection procedure (e.g. minimum Akaike information criterion) is used to select a model, it is common statistical practice to then carry out inference about θ, using the same data, based on the (false) assumption that the selected model had been provided a priori. The paper considers a confidence interval for θ with nominal coverage 1 − α constructed on this (false) assumption, and calls this the naive 1 − α confidence interval. The minimum coverage probability of this confidence interval can be calculated for simple variable selection procedures involving only a single variable. However, the variable selection procedures used in practice are typically much more complicated. For the real-life data presented in this paper, there are 20 variables, each of which is to be either included or not, leading to 2^20 different models. The coverage probability at any given value of the parameters provides an upper bound on the minimum coverage probability of the naive confidence interval. This paper derives a new Monte Carlo simulation estimator of the coverage probability, which uses conditioning for variance reduction. For these real-life data, the gain in efficiency due to conditioning ranged from 2 to 6. The paper also presents a simple one-dimensional search strategy for parameter values at which the coverage probability is relatively small. For these real-life data, this search leads to parameter values for which the coverage probability of the naive 0.95 confidence interval is 0.79 for variable selection using the Akaike information criterion and 0.70 for variable selection using the Bayes information criterion, showing that these confidence intervals are completely inadequate.
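A toy version of the naive-interval coverage simulation, without the paper's conditioning-based variance reduction, can be sketched with just two candidate regressors; the design, sample size and known-variance simplification are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def naive_coverage(beta2, n_rep=2000, n=30, rho=0.8):
    """Monte Carlo coverage of the naive 95% CI for beta1 after an
    AIC-style choice between the models {x1} and {x1, x2}; the error
    variance is treated as known (= 1) for simplicity."""
    z = rng.standard_normal((n, 2))
    X = np.column_stack([z[:, 0], rho * z[:, 0] + np.sqrt(1.0 - rho**2) * z[:, 1]])
    beta = np.array([1.0, beta2])
    cover = 0
    for _ in range(n_rep):
        y = X @ beta + rng.standard_normal(n)
        best_aic, ci = np.inf, None
        for cols in ([0], [0, 1]):
            Xs = X[:, cols]
            bhat = np.linalg.solve(Xs.T @ Xs, Xs.T @ y)
            rss = float(np.sum((y - Xs @ bhat) ** 2))
            aic = rss + 2.0 * len(cols)        # AIC with sigma^2 = 1 known
            if aic < best_aic:
                best_aic = aic
                se = float(np.sqrt(np.linalg.inv(Xs.T @ Xs)[0, 0]))
                ci = (bhat[0] - 1.96 * se, bhat[0] + 1.96 * se)
        cover += ci[0] <= 1.0 <= ci[1]
    return cover / n_rep

cov_null = naive_coverage(0.0)       # selection usually picks a valid model
cov_marginal = naive_coverage(0.35)  # bias from wrongly dropping x2
```

The coverage dip occurs at moderate beta2, where the second regressor is often, but not always, dropped; this is the phenomenon the paper's conditioned estimator measures efficiently.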

20.
We consider a linear regression model in which the parameter of interest is a specified linear combination of the components of the regression parameter vector. We suppose that, as a first step, a data-based model selection procedure (e.g. preliminary hypothesis tests or minimizing the Akaike information criterion, AIC) is used to select a model. It is common statistical practice to then construct a confidence interval for the parameter of interest based on the assumption that the selected model had been given to us a priori. This assumption is false, and it can lead to a confidence interval with poor coverage properties. We provide an easily computed finite-sample upper bound (calculated by repeated numerical evaluation of a double integral) on the minimum coverage probability of this confidence interval. This bound applies for model selection by any of the following methods: minimum AIC, minimum Bayesian information criterion (BIC), maximum adjusted R^2, minimum Mallows' Cp and t-tests. The importance of this upper bound is that it delineates general categories of design matrices and model selection procedures for which the confidence interval has poor coverage properties. The upper bound is shown to be a finite-sample analogue of an earlier large-sample upper bound due to Kabaila and Leeb.
