Similar literature
 20 similar records retrieved (search time: 31 ms)
1.
The non-central chi-squared distribution plays a vital role in statistical testing procedures. Estimation of the non-centrality parameter provides valuable information for the power calculation of the associated test. We are interested in the statistical inference properties of the non-centrality parameter estimate based on one observation (usually a summary statistic) from a truncated chi-squared distribution. This work is motivated by the application of the flexible two-stage design in case–control studies, where the sample size needed for the second stage of a two-stage study can be determined adaptively from the results of the first stage. We first study the moment estimate for the truncated distribution and establish its existence, uniqueness, inadmissibility and convergence properties. We then define a new class of estimates that includes the moment estimate as a special case. Among this class of estimates, we recommend one member that outperforms the moment estimate in a wide range of scenarios. We also present two methods for constructing confidence intervals. Simulation studies are conducted to evaluate the performance of the proposed point and interval estimates.
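The abstract states the moment estimate only in words. As a rough, hedged sketch of the idea (the threshold value, function names and search bounds below are our own illustration, not taken from the paper), one can solve for the non-centrality at which the mean of the left-truncated non-central chi-squared distribution equals the single observed statistic:

```python
# Minimal sketch (not the authors' exact estimator): choose lambda so that
# E[X | X > c] for X ~ non-central chi-squared(df, lambda) matches the one
# observed statistic x_obs; the truncated mean is computed numerically.
import numpy as np
from scipy import stats, integrate, optimize

def truncated_ncx2_mean(lam, df, c):
    """E[X | X > c] for X ~ non-central chi-squared(df, lam)."""
    tail = stats.ncx2.sf(c, df, lam)
    upper = stats.ncx2.ppf(1 - 1e-10, df, lam)          # finite integration limit
    num, _ = integrate.quad(lambda x: x * stats.ncx2.pdf(x, df, lam), c, upper)
    return num / tail

def moment_estimate(x_obs, df, c, lam_max=200.0):
    """Moment-type estimate of lambda from one truncated observation."""
    f = lambda lam: truncated_ncx2_mean(lam, df, c) - x_obs
    if f(1e-6) >= 0:          # observation below the truncated central mean
        return 0.0
    return optimize.brentq(f, 1e-6, lam_max)

# Example: a first-stage chi-squared statistic of 12.3 with 1 df,
# observed only because it exceeded the threshold c = 3.84.
print(moment_estimate(12.3, df=1, c=3.84))
```

The clipping at zero is a common convention for moment-type estimates of a non-negative parameter and may differ in detail from the paper's treatment.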

2.
In this paper, hypothesis testing and interval estimation for intraclass correlation coefficients are considered in a two-way random effects model with interaction. Two particular intraclass correlation coefficients are described in a reliability study. The tests and confidence intervals for the intraclass correlation coefficients are developed when the data are unbalanced. One approach is based on the generalized p-value and generalized confidence interval; the other is based on the modified large-sample idea. These two approaches simplify to the ones in Gilder et al. [2007. Confidence intervals on intraclass correlation coefficients in a balanced two-factor random design. J. Statist. Plann. Inference 137, 1199–1212] when the data are balanced. Furthermore, some statistical properties of the generalized confidence intervals are investigated. Finally, simulation results comparing the performance of the modified large-sample approach with that of the generalized approach are reported. The simulation results indicate that the modified large-sample approach performs better than the generalized approach in terms of the coverage probability and expected length of the confidence interval.

3.
Confidence intervals for parameters that can be arbitrarily close to being unidentified are unbounded with positive probability [e.g. Dufour, J.-M., 1997. Some impossibility theorems in econometrics with applications to instrumental variables and dynamic models. Econometrica 65, 1365–1388; Pfanzagl, J., 1998. The nonexistence of confidence sets for discontinuous functionals. Journal of Statistical Planning and Inference 75, 9–20], and the asymptotic risks of their estimators are unbounded [Pötscher, B.M., 2002. Lower risk bounds and properties of confidence sets for ill-posed estimation problems with applications to spectral density and persistence estimation, unit roots, and estimation of long memory parameters. Econometrica 70, 1035–1065]. We extend these “impossibility results” and show that all tests of size α concerning parameters that can be arbitrarily close to being unidentified have power that can be as small as α for any sample size, even if the null and the alternative hypotheses are not adjacent. The results are proved for a very general framework that contains commonly used models.

4.
A modified large-sample (MLS) approach and a generalized confidence interval (GCI) approach are proposed for constructing confidence intervals for intraclass correlation coefficients. Two particular intraclass correlation coefficients are considered in a reliability study. Both subjects and raters are assumed to be random effects in a balanced two-factor design, which includes subject-by-rater interaction. Computer simulation is used to compare the coverage probabilities of the proposed MLS approach (GiTTCH) and the GCI approach with the Leiva and Graybill [1986. Confidence intervals for variance components in the balanced two-way model with interaction. Comm. Statist. Simulation Comput. 15, 301–322] method. The competing approaches are illustrated with data from a gauge repeatability and reproducibility study. The GiTTCH method maintains at least the stated confidence level for interrater reliability. For intrarater reliability, the coverage is accurate in several circumstances but can be liberal in others. The GCI approach provides reasonable coverage for lower confidence bounds on interrater reliability, but its corresponding upper bounds are too liberal. Regarding intrarater reliability, the GCI approach is not recommended because the lower bound coverage is liberal. Comparing the overall performance of the three methods across a wide array of scenarios, the proposed modified large-sample approach (GiTTCH) provides the most accurate coverage for both interrater and intrarater reliability.

5.
With reference to the problem of interval estimation of a population mean under model uncertainty, we compare approaches based on robust and empirical statistics via expected lengths of the associated confidence intervals. An explicit expression for confidence intervals arising from a general class of robust statistics is worked out, and this is employed to obtain a higher-order asymptotic formula for the expected lengths of such intervals. Comparative theoretical results, as well as a simulation study, are then presented.
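The paper's higher-order expansion is not reproduced in the abstract. As a purely illustrative simulation of our own (a Huber-type M-estimator interval versus the usual t-interval under a contaminated normal model, with all tuning constants chosen by us), one can compare average interval lengths directly:

```python
# Toy comparison of expected confidence-interval lengths: Wald interval from
# a Huber location M-estimate (MAD scale, sandwich variance) vs. t-interval.
import numpy as np
from scipy import stats

def huber_interval(x, k=1.345, level=0.95, n_iter=30):
    """Wald-type interval for a Huber location M-estimate."""
    scale = stats.median_abs_deviation(x, scale="normal")
    theta = np.median(x)
    for _ in range(n_iter):                       # iteratively reweighted mean
        r = (x - theta) / scale
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))
        theta = np.sum(w * x) / np.sum(w)
    r = (x - theta) / scale
    psi = np.clip(r, -k, k)
    dpsi = (np.abs(r) <= k).astype(float)
    avar = scale**2 * np.mean(psi**2) / np.mean(dpsi)**2   # sandwich variance
    half = stats.norm.ppf(0.5 + level / 2) * np.sqrt(avar / len(x))
    return theta - half, theta + half

rng = np.random.default_rng(0)
n, reps = 50, 1000
len_hub, len_t = [], []
for _ in range(reps):
    x = np.where(rng.random(n) < 0.1, rng.normal(0, 5, n), rng.normal(0, 1, n))
    lo, hi = huber_interval(x)
    len_hub.append(hi - lo)
    lo, hi = stats.t.interval(0.95, n - 1, loc=x.mean(), scale=stats.sem(x))
    len_t.append(hi - lo)
print(np.mean(len_hub), np.mean(len_t))   # average lengths, robust vs. empirical
```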

6.
We propose optimal procedures for partitioning k multivariate normal populations into two disjoint subsets with respect to a given standard vector. A population is classified as good or bad according to whether its Mahalanobis distance to the known standard vector is small or large. Partitioning the k multivariate normal populations is reduced to partitioning k non-central chi-square or non-central F distributions with respect to the corresponding non-centrality parameters, depending on whether the covariance matrices are known or unknown. The minimum required sample size for each population is determined to ensure that the probability of a correct decision attains a specified level. An example is given to illustrate our procedures.
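For orientation only, here is a minimal sketch (our own toy cut-off and data, not the paper's optimal procedure or sample-size rule) of classifying populations by the Mahalanobis distance of their sample means from a known standard vector with known covariance:

```python
# Toy sketch: each population is "good" if the squared Mahalanobis distance
# of its sample mean from the standard vector mu0 is at most d0**2, else "bad".
import numpy as np

def partition_populations(sample_means, mu0, sigma, d0):
    sigma_inv = np.linalg.inv(sigma)
    good, bad = [], []
    for i, m in enumerate(sample_means):
        diff = np.asarray(m) - mu0
        d2 = float(diff @ sigma_inv @ diff)      # squared Mahalanobis distance
        (good if d2 <= d0**2 else bad).append(i)
    return good, bad

mu0 = np.zeros(2)
sigma = np.eye(2)
means = [np.array([0.1, -0.2]), np.array([2.0, 1.5]), np.array([0.3, 0.4])]
print(partition_populations(means, mu0, sigma, d0=1.0))   # ([0, 2], [1])
```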

7.
A survey of research by Emanuel Parzen on how quantile functions provide elegant and applicable formulas that unify many statistical methods, especially frequentist and Bayesian confidence intervals and prediction distributions. Section 0: In honor of Ted Anderson's 90th birthday; Section 1: Quantile functions, endpoints of prediction intervals; Section 2: Extreme value limit distributions; Sections 3, 4: Confidence and prediction endpoint function: Uniform(0, θ), exponential; Sections 5, 6: Confidence quantile and Bayesian inference for normal parameters μ, σ; Section 7: Two independent samples confidence quantiles; Section 8: Confidence quantiles for proportions, Wilson's formula. We propose ways that Bayesians and frequentists can be friends!
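Section 8 mentions Wilson's formula for proportions. The standard Wilson score interval (textbook form, not tied to the survey's confidence-quantile notation) can be coded directly:

```python
# Wilson score confidence interval for a binomial proportion p,
# given x successes out of n trials.
import math
from scipy.stats import norm

def wilson_interval(x, n, level=0.95):
    z = norm.ppf(0.5 + level / 2)
    phat = x / n
    denom = 1 + z**2 / n
    centre = (phat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

print(wilson_interval(8, 20))   # e.g. 8 successes in 20 trials
```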

8.
We consider a general class of mixed models, where the individual parameter vector is composed of a linear function of the population parameter vector plus an individual random effects vector. The linear function can vary for the different individuals. We show that the search for optimal designs for the estimation of the population parameter vector can be restricted to the class of group-wise identical designs, i.e., for each of the groups defined by the different linear functions, only one individual elementary design has to be optimized. A way to apply the result to non-linear mixed models is described.

9.
Recently Jammalamadaka and Mangalam [2003. Non-parametric estimation for middle censored data. J. Nonparametric Statist. 15, 253–265] introduced a general censoring scheme called the “middle-censoring” scheme in a non-parametric setup. In this paper we consider this middle-censoring scheme when the lifetime distribution of the items is exponential and the censoring mechanism is independent and non-informative. In this setup, we derive the maximum likelihood estimator and study its consistency and asymptotic normality properties. We also derive the Bayes estimate of the exponential parameter under a gamma prior. Since a theoretical construction of the credible interval becomes quite difficult, we propose and implement a Gibbs sampling technique to construct the credible intervals. Monte Carlo simulations are performed to evaluate the small-sample behavior of the techniques proposed. A real data set is analyzed to illustrate the practical application of the proposed methods.
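A hedged sketch of a sampler of the kind described, with our own prior values, toy data and implementation choices: exponential(λ) lifetimes, some observations middle-censored to an interval (L, R), and a Gamma(a, b) prior on λ, alternating between imputing the censored lifetimes and a conjugate gamma update.

```python
# Gibbs sketch for middle-censored exponential data with a Gamma(a, b) prior.
import numpy as np

rng = np.random.default_rng(1)

def rtrunc_exp(lam, lo, hi, rng):
    """Draw from an Exponential(lam) truncated to the interval (lo, hi)."""
    u = rng.random()
    p_lo, p_hi = np.exp(-lam * lo), np.exp(-lam * hi)
    return -np.log(p_lo - u * (p_lo - p_hi)) / lam

def gibbs_middle_censored(exact, intervals, a=1.0, b=1.0, n_iter=5000, burn=1000):
    lam, draws = 1.0, []
    for it in range(n_iter):
        # impute the middle-censored lifetimes given the current lambda
        imputed = [rtrunc_exp(lam, lo, hi, rng) for lo, hi in intervals]
        total = np.sum(exact) + np.sum(imputed)
        n = len(exact) + len(intervals)
        lam = rng.gamma(a + n, 1.0 / (b + total))   # conjugate gamma update
        if it >= burn:
            draws.append(lam)
    return np.array(draws)

exact = np.array([0.3, 1.2, 0.7, 2.5, 0.9])       # fully observed lifetimes
intervals = [(0.5, 2.0), (1.0, 3.0)]              # middle-censored observations
draws = gibbs_middle_censored(exact, intervals)
print(np.mean(draws), np.percentile(draws, [2.5, 97.5]))  # mean, 95% credible interval
```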

10.
It is shown that Strawderman's [1974. Minimax estimation of powers of the variance of a normal population under squared error loss. Ann. Statist. 2, 190–198] technique for estimating the variance of a normal distribution can be extended to estimating a general scale parameter in the presence of a nuisance parameter. Employing standard monotone likelihood ratio-type conditions, a new class of improved estimators for this scale parameter is derived under quadratic loss. By imposing an additional condition, a broader class of improved estimators is obtained. The dominating procedures are analogous in form to those in Strawderman [1974]. Application of the general results to the exponential distribution yields new sufficient conditions, other than those of Brewster and Zidek [1974. Improving on equivariant estimators. Ann. Statist. 2, 21–38] and Kubokawa [1994. A unified approach to improving equivariant estimators. Ann. Statist. 22, 290–299], for improving the best affine equivariant estimator of the scale parameter. A class of estimators satisfying the new conditions is constructed. The results shed new light on Strawderman's [1974] technique.

11.
This article investigates the large-sample interval mapping method for genetic trait loci (GTL) in a finite non-linear regression mixture model. The general model includes most commonly used kernel functions, such as exponential family mixture, logistic regression mixture and generalized linear mixture models, as special cases. Populations derived from either the backcross or the intercross design are considered. In particular, unlike existing results in the literature on finite mixture models, the large-sample results presented in this paper do not require a boundedness condition on the parameter space. Therefore, the large-sample theory presented in this article possesses general applicability to the interval mapping method of GTL in genetic research. The limiting null distribution of the likelihood ratio test statistic can easily be used to determine the threshold values or p-values required in interval mapping. The limiting distribution is proved to be free of the parameter values of the null model and free of the choice of kernel function. Extension to multiple-marker interval GTL detection is also discussed. Simulation results show favorable performance of the asymptotic procedure when sample sizes are moderate.

12.
The estimation problem of a permutation parameter on the basis of a random sample of increasing size is considered. A necessary and sufficient condition for the existence of an estimator that is asymptotically fully efficient for two different distribution families is derived. We also study the application of this result to cyclic groups of order two and three.

13.
Lu Lin 《Statistical Papers》2004,45(4):529-544
The quasi-score function, as defined by Wedderburn (1974), McCullagh (1983) and others, is a linear function of the observations. The generalized quasi-score function introduced in this paper is a linear function of unbiased basis functions, which need not themselves be linear in the observations and can easily be constructed from the interpretation of the parameters, such as the mean or the median. The generalized quasi-likelihood estimate obtained from such a generalized quasi-score function is consistent and asymptotically normally distributed. As a result, the optimum generalized quasi-score is obtained and a method for constructing the optimum unbiased basis function is introduced. In order to construct the potential function, a conservative generalized estimating function is defined; because it is conservative, the potential function for the projected score has many of the properties of a log-likelihood function. Finally, some examples are given to illustrate the theoretical results. This work was supported by NNSF project (10371059) of China and the Youth Teacher Foundation of Nankai University.
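To make the notion of unbiased basis functions concrete, here is a toy example of our own (not from the paper): for a distribution symmetric about θ, both x − θ and the bounded odd function tanh(x − θ) have zero mean at the true θ, so any linear combination of their sums gives an unbiased estimating equation.

```python
# Toy generalized estimating equation built from two unbiased basis functions
# for a location parameter of a symmetric distribution:
# h1(x, theta) = x - theta (mean-type), h2(x, theta) = tanh(x - theta).
import numpy as np
from scipy.optimize import brentq

def generalized_score(theta, x, a=1.0, b=1.0):
    return a * np.sum(x - theta) + b * np.sum(np.tanh(x - theta))

rng = np.random.default_rng(5)
x = rng.standard_t(df=3, size=200) + 1.5    # heavy-tailed, symmetric about 1.5
theta_hat = brentq(lambda t: generalized_score(t, x), x.min(), x.max())
print(theta_hat)
```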

14.
We discuss the general form of a first-order correction to the maximum likelihood estimator, expressed in terms of the gradient of a function, for example the logarithm of a prior density. In terms of Kullback–Leibler divergence, the correction gives an asymptotic improvement over maximum likelihood under rather general conditions. The theory is illustrated for Bayes estimators with conjugate priors. The optimal choice of hyper-parameter to improve the maximum likelihood estimator is discussed. The results based on Kullback–Leibler risk are extended to a wide class of risk functions.
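A small numerical illustration of a correction of this general form (our own example: a normal mean with known variance and a conjugate normal prior; the correction used is the MLE plus the inverse Fisher information times the gradient of the log prior, which may differ in detail from the paper's formulation):

```python
# Gradient-corrected MLE for a normal mean, compared with the exact posterior mean.
import numpy as np

def corrected_mle(x, mu0=0.0, tau2=1.0, sigma2=1.0):
    n = len(x)
    mle = np.mean(x)                      # MLE of the mean
    fisher = n / sigma2                   # Fisher information for the mean
    grad_log_prior = -(mle - mu0) / tau2  # d/dtheta log N(theta; mu0, tau2)
    return mle + grad_log_prior / fisher

rng = np.random.default_rng(2)
x = rng.normal(0.4, 1.0, size=20)
post_mean = np.sum(x) / (len(x) + 1.0)    # exact posterior mean for mu0=0, tau2=sigma2=1
print(np.mean(x), corrected_mle(x), post_mean)
```

For moderate n the corrected value is close to the exact posterior mean, which is the kind of first-order agreement the abstract alludes to.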

15.
Ranked set sampling (RSS) was first proposed by McIntyre [1952. A method for unbiased selective sampling, using ranked sets. Australian J. Agricultural Res. 3, 385–390] as an effective way to estimate the unknown population mean. Chuiv and Sinha [1998. On some aspects of ranked set sampling in parametric estimation. In: Balakrishnan, N., Rao, C.R. (Eds.), Handbook of Statistics, vol. 17. Elsevier, Amsterdam, pp. 337–377] and Chen et al. [2004. Ranked Set Sampling—Theory and Application. Lecture Notes in Statistics, vol. 176. Springer, New York] have provided excellent surveys of RSS and various inferential results based on RSS. In this paper, we use the idea of order statistics from independent and non-identically distributed (INID) random variables to propose ordered ranked set sampling (ORSS) and then develop optimal linear inference based on ORSS. We determine the best linear unbiased estimators based on ORSS (BLUE-ORSS) and show that they are more efficient than BLUE-RSS for the two-parameter exponential, normal and logistic distributions. Although this is not the case for the one-parameter exponential distribution, the relative efficiency of the BLUE-ORSS (to BLUE-RSS) is very close to 1. Furthermore, we compare both BLUE-ORSS and BLUE-RSS with the BLUE based on order statistics from a simple random sample (BLUE-OS). We show that BLUE-ORSS is uniformly better than BLUE-OS, while BLUE-RSS is not as efficient as BLUE-OS for small sample sizes (n < 5).
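The abstract builds on ranked set sampling. As background only, a small simulation of our own (the plain RSS sample mean versus a simple random sample mean, not the BLUE-ORSS of the paper) illustrates the efficiency gain from ranking:

```python
# Compare the variance of the RSS mean with the SRS mean of the same total size.
import numpy as np

rng = np.random.default_rng(3)

def rss_sample(draw, m, r, rng):
    """Ranked set sample: in each of r cycles, rank m sets of size m and keep
    the i-th order statistic from the i-th set (ranking on observed values)."""
    sample = []
    for _ in range(r):
        for i in range(m):
            s = np.sort(draw(m, rng))
            sample.append(s[i])
    return np.array(sample)

draw = lambda k, rng: rng.exponential(scale=2.0, size=k)
reps, m, r = 5000, 4, 5
rss_means = [rss_sample(draw, m, r, rng).mean() for _ in range(reps)]
srs_means = [draw(m * r, rng).mean() for _ in range(reps)]
print(np.var(rss_means), np.var(srs_means))   # RSS mean typically has smaller variance
```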

16.
R. Van de Ven  N. C. Weber 《Statistics》2013,47(3-4):345-352
Upper and lower bounds are obtained for the mean of the negative binomial distribution. These bounds are simple functions of a percentile determined by the shape parameter. The result is then used to obtain a robust estimate of the mean when the shape parameter is known.
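The paper's specific bounds are not given in the abstract, so we do not reproduce them. As a loosely related illustration of a percentile-based robust estimate with the shape parameter known (our own construction, not the authors' method), one can match the sample median to the theoretical negative binomial median:

```python
# Robust mean estimate for a negative binomial with known shape r,
# obtained by matching the sample median to the theoretical median.
import numpy as np
from scipy.stats import nbinom

def percentile_match_mean(x, r, mu_grid=None):
    if mu_grid is None:
        mu_grid = np.linspace(0.05, 5 * np.mean(x) + 5, 2000)
    sample_med = np.median(x)
    p = r / (r + mu_grid)                      # scipy parameterisation: mean = r(1-p)/p
    theo_med = nbinom.ppf(0.5, r, p)
    return mu_grid[np.argmin(np.abs(theo_med - sample_med))]

rng = np.random.default_rng(4)
r, mu = 3.0, 4.0
x = nbinom.rvs(r, r / (r + mu), size=100, random_state=rng)
print(np.mean(x), percentile_match_mean(x, r))
```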

17.
General linear models with a common design matrix and with various structures of the variance–covariance matrix are considered. We say that a model is perfect for a linearly estimable parametric function, or that the function is perfect in the model, if the best linear unbiased estimator of the function exists. All perfect models for a given function and all perfect functions in a given model are characterized.

18.
We consider a multivariate linear model for multivariate controlled calibration, and construct some conservative confidence regions, which are nonempty and invariant under nonsingular transformations. The computation of our confidence region is easier than with some of the existing procedures. We illustrate the results using two examples. The simulation results show the closeness of the coverage probability of our confidence regions to the assumed confidence level.

19.
In a basic multiple decrement model, empirical occurrence-exposure rates are defined for each of k risks to which a cohort from an animal or human population is exposed over a time interval. These rates are viewed as the evolution of a stochastic process. Some asymptotic properties of this process are considered. Weak convergence of the process and its uniform strong convergence are shown under mild conditions.

20.
When analyzing incomplete longitudinal clinical trial data, it is often inappropriate to assume that missingness occurs at random, especially in cases where visits are entirely missed. We present a framework that simultaneously models multivariate incomplete longitudinal data and a non-ignorable missingness mechanism using a Bayesian approach. A criterion measure is presented for comparing models. We demonstrate the feasibility of the methodology through reanalysis of two of the longitudinal measures from a clinical trial of penicillamine treatment for scleroderma patients. We compare the results for univariate and bivariate, ignorable and non-ignorable missingness models.
