Similar Literature
20 similar documents found.
1.
The Hartley-Rao-Cochran sampling design is an unequal probability sampling design that can be used to select samples from finite populations. We propose to adapt the empirical likelihood approach to the Hartley-Rao-Cochran sampling design. The proposed approach intrinsically incorporates sampling weights and auxiliary information, allows for large sampling fractions, and can be used to construct confidence intervals. In a simulation study, we show that the coverage may be better for the empirical likelihood confidence interval than for standard confidence intervals based on variance estimates. The proposed approach is simple to implement and less computationally intensive than the bootstrap. The proposed confidence interval does not rely on re-sampling, linearization, variance estimation, design effects or joint inclusion probabilities.

2.
The Cox (1972) regression model is extended to include discrete and mixed continuous/discrete failure time data by retaining the multiplicative hazard rate form of the absolutely continuous model. Application of martingale arguments to the regression parameter estimating function shows the Breslow (1974) estimator to be consistent and asymptotically Gaussian under this model. A computationally convenient estimator of the variance of the score function can be developed, again using martingale arguments. This estimator reduces to the usual hypergeometric form in the special case of testing equality of several survival curves, and it leads more generally to a convenient consistent variance estimator for the regression parameter. A small simulation study examines the regression parameter estimator and its variance estimator under the discrete Cox model special case, and an application to a bladder cancer recurrence dataset is provided.
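As a minimal illustration of the special case mentioned above, the sketch below computes the two-sample score statistic for equality of survival curves with the usual hypergeometric variance. The data, variable names and parameters are hypothetical; this is the textbook log-rank form, not a reproduction of the paper's discrete Cox estimator.

```python
import numpy as np

def two_sample_logrank(time, event, group):
    """Score statistic (observed minus expected, hypergeometric variance)
    for testing equality of two survival curves."""
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):          # distinct failure times
        at_risk = time >= t
        n = at_risk.sum()                          # total number at risk
        n1 = (at_risk & (group == 1)).sum()        # at risk in group 1
        d = (event[time == t] == 1).sum()          # failures at time t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp**2 / var                  # approx. chi-square(1) under H0

# toy example with simulated exponential survival and censoring times
rng = np.random.default_rng(0)
grp = np.repeat([0, 1], 100)
t = rng.exponential(scale=np.where(grp == 1, 1.5, 1.0))
c = rng.exponential(scale=2.0, size=200)
print(two_sample_logrank(np.minimum(t, c), (t <= c).astype(int), grp))
```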

3.
In case–control studies the Cochran–Armitage trend test is powerful for detecting an association between a genetic risk marker and a disease of interest. To apply this test, a score must be assigned to the genotypes based on the genetic model. When the underlying genetic model is unknown, the trend test statistic is quite sensitive to the choice of score. In this paper, we study the asymptotic properties of the robust suptest statistic, defined as the supremum of the Cochran–Armitage trend test over all scores between 0 and 1. Through numerical studies we show that the small- to moderate-sample performance of the suptest is reasonable in terms of type I error control, and we compare the empirical power of the suptest with that of three individual Cochran–Armitage trend tests and of the maximum of the three Cochran–Armitage trend tests. The suptest is applied to rheumatoid arthritis data from a genome-wide association study.
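A minimal sketch of the statistic described above: the Cochran–Armitage trend statistic for genotype scores (0, s, 1), maximized over a grid of scores s in [0, 1]. The genotype counts are hypothetical, and the grid maximum is only an approximation to the supremum; critical values for the sup statistic require its asymptotic distribution (or permutation), which is not reproduced here.

```python
import numpy as np

def catt(cases, controls, score):
    """Cochran-Armitage trend statistic Z for genotype scores (0, score, 1)."""
    cases, controls = np.asarray(cases, float), np.asarray(controls, float)
    t = np.array([0.0, score, 1.0])
    n_col = cases + controls                      # genotype column totals
    R, S, N = cases.sum(), controls.sum(), (cases + controls).sum()
    T = np.sum(t * (cases * S - controls * R))
    var = (R * S / N) * (np.sum(t**2 * n_col * (N - n_col))
                         - 2 * sum(t[i] * t[j] * n_col[i] * n_col[j]
                                   for i in range(3) for j in range(i + 1, 3)))
    return T / np.sqrt(var)

def sup_catt(cases, controls, grid=np.linspace(0.0, 1.0, 101)):
    """Supremum of |Z(score)| over scores in [0, 1], approximated on a grid."""
    return max(abs(catt(cases, controls, s)) for s in grid)

# hypothetical genotype counts (aa, Aa, AA) for cases and controls
print(sup_catt(cases=[120, 230, 150], controls=[180, 240, 80]))
```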

4.
In this article, we propose a version of a kernel density estimator that reduces the mean squared error of the existing kernel density estimator by combining bias reduction and variance reduction techniques. Its theoretical properties are investigated, and a Monte Carlo simulation study supporting the theoretical results on the proposed estimator is given.

5.
We revisit the well-known Behrens–Fisher problem and apply a newly developed 'Computational Approach Test' (CAT) to test the equality of two population means, where the populations are assumed to be normal with unknown and possibly unequal variances. An advantage of the CAT is that it does not require explicit knowledge of the sampling distribution of the test statistic. The CAT is then compared with three widely accepted tests, namely the Welch–Satterthwaite test (WST), the Cochran–Cox test (CCT) and the 'Generalized p-value' test (GPT), as well as with a recently suggested test based on the jackknife procedure, the Singh–Saxena–Srivastava test (SSST). Further, the robustness of these five tests to model misspecification is studied when the data actually come from t-distributions but are wrongly treated as normal. Our detailed study, based on a comprehensive simulation, indicates some interesting results, including that the GPT is quite conservative and that the SSST does not perform as well as claimed in the literature. To the best of our knowledge, the trends observed in our study have not been reported earlier in the existing literature.
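The CAT itself is specific to this paper, so the sketch below only checks, by simulation, the empirical type I error of the Welch–Satterthwaite test (one of the comparators) under equal means and unequal variances. Sample sizes, variances and the replication count are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n1, n2, reps, alpha = 15, 25, 10_000, 0.05
rejections = 0
for _ in range(reps):
    # H0 is true: equal means, unequal variances (the Behrens-Fisher setting)
    x = rng.normal(loc=0.0, scale=1.0, size=n1)
    y = rng.normal(loc=0.0, scale=3.0, size=n2)
    _, p = stats.ttest_ind(x, y, equal_var=False)   # Welch-Satterthwaite test
    rejections += (p < alpha)
print("empirical type I error:", rejections / reps)  # should be close to 0.05
```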

6.
We apply geometric programming, developed by Duffin, Peterson and Zener (1967), to the optimal allocation of stratified samples with several variance constraints arising from several estimates of deficiency rates in the quality control of administrative decisions. We also develop a method for imposing constraints on sample sizes to equalize workloads over time, as required by the practicalities of clerical work for quality control.

We allocate samples by an extension of the work of Neyman (1934), following the exposition of Cochran (1977). Davis and Schwartz (1987) developed methods for multiconstraint Neyman allocation by geometric programming for integrated sampling. They also applied geometric programming to Neyman allocation of a sample for estimating college enrollments from Cornell (1947) and Cochran (1977). This paper continues the application of geometric programming to Neyman allocation with multiple constraints on variances and workloads and with minimal sampling costs.
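For orientation, the sketch below implements only the classical single-constraint Neyman allocation that this line of work extends; the multiconstraint version posed as a geometric program is not reproduced. Stratum sizes and standard deviations are hypothetical.

```python
import numpy as np

def neyman_allocation(N_h, S_h, n_total):
    """Classical single-constraint Neyman allocation: n_h proportional to N_h * S_h."""
    N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
    weights = N_h * S_h
    return n_total * weights / weights.sum()

# hypothetical strata: population sizes and within-stratum standard deviations
N_h = [5000, 2000, 800]
S_h = [1.2, 2.5, 4.0]
print(np.round(neyman_allocation(N_h, S_h, n_total=600), 1))
```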

7.
We analyse a naive method that uses the sample mean and sample variance to test the convergence of a simulation. We find this method is valid for independently and identically distributed samples, as well as for correlated samples whose correlation dies out over long periods. Our simulation results on approximating the bankruptcy probability (BP) show that the naive method compares well with the Half-Width, Geweke and CUSUM methods in terms of accuracy and time cost. There is clear evidence of variance reduction from tail-distribution sampling for all convergence test methods when the true BP is very low.
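A minimal sketch of a naive mean/variance convergence check of the half-width type: declare convergence once the Monte Carlo standard error (from the sample variance) is small relative to the running mean. The tolerance, the z-value and the stand-in estimand are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def naive_converged(samples, rel_tol=0.05, z=1.96):
    """Naive check: CI half-width for the mean below rel_tol * |mean|."""
    samples = np.asarray(samples, float)
    mean = samples.mean()
    half_width = z * samples.std(ddof=1) / np.sqrt(samples.size)
    return half_width < rel_tol * abs(mean)

# illustrative use: crude Monte Carlo estimate of a small tail probability
rng = np.random.default_rng(1)
draws = (rng.normal(size=200_000) > 3.0).astype(float)   # stand-in for a BP estimate
print(draws.mean(), naive_converged(draws))
```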

8.
We propose new control variates for variance reduction in the estimation of mean values using the Metropolis–Hastings algorithm. Traditionally, states that are rejected in the Metropolis–Hastings algorithm are simply ignored, which intuitively seems to be a waste of information. We present a setting for the construction of zero-mean control variates for general target and proposal distributions and develop ideas for the standard Metropolis–Hastings and reversible jump algorithms. We give results for three simulation examples. We obtain the best results for variates that are functions of the current state x and the proposal y, but we also consider variates that are, in addition, functions of the Metropolis–Hastings acceptance/rejection decision. The variance reduction achieved varies depending on the target distribution and the proposal mechanism used. In simulation experiments, we typically achieve relative variance reductions between 15% and 35%.
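The paper's construction uses the rejected proposals; the simpler sketch below shows only generic control-variate variance reduction on Metropolis–Hastings output: estimating E[exp(X)] under a standard normal target with the zero-mean variate h(x) = x, whose target mean is known to be 0. The target, the functional, the proposal scale and the regression coefficient (only approximately optimal for correlated MCMC output) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x**2          # standard normal target, up to a constant

# random-walk Metropolis-Hastings
n_iter, x = 50_000, 0.0
chain = np.empty(n_iter)
for i in range(n_iter):
    y = x + rng.normal(scale=2.0)           # symmetric proposal
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        x = y                                # accept, otherwise keep current state
    chain[i] = x

f = np.exp(chain)                            # estimate E[exp(X)], true value sqrt(e)
h = chain                                    # control variate with known mean 0
C = np.cov(f, h)
beta = C[0, 1] / C[1, 1]                     # estimated regression coefficient
print("plain MH estimate:", f.mean())
print("control-variate estimate:", (f - beta * h).mean())
```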

9.
In this paper, we propose a novel variance reduction approach for additive functionals of Markov chains based on minimizing an estimate of the asymptotic variance of these functionals over suitable classes of control variates. A distinctive feature of the proposed approach is its ability to significantly reduce the overall finite-sample variance. This feature is demonstrated theoretically by means of a non-asymptotic analysis of the variance-reduced functional, as well as by a thorough simulation study. In particular, we apply our method to various MCMC Bayesian estimation problems, where it compares favorably with existing variance reduction approaches.

10.
11.
Most multivariate statistical techniques rely on the assumption of multivariate normality. The effects of nonnormality on multivariate tests are commonly assumed to be negligible when variance–covariance matrices and sample sizes are equal. Therefore, in practice, investigators usually do not attempt to assess multivariate normality. In this simulation study, the effects of skewed and leptokurtic multivariate data on the Type I error and power of Hotelling's T² were examined by manipulating the distribution, sample size, and variance–covariance matrix. The empirical Type I error rate and power of Hotelling's T² were calculated before and after applying a generalized Box–Cox transformation. The findings demonstrate that even when variance–covariance matrices and sample sizes are equal, small to moderate changes in power can still be observed.
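For reference, the sketch below computes the one-sample Hotelling's T² statistic, whose robustness the study examines, together with its exact F-based p-value under multivariate normality. The simulated data and dimensions are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def hotelling_t2_one_sample(X, mu0):
    """One-sample Hotelling's T^2 test of H0: mean vector equals mu0."""
    X = np.asarray(X, float)
    n, p = X.shape
    diff = X.mean(axis=0) - np.asarray(mu0, float)
    S = np.cov(X, rowvar=False)                 # sample covariance matrix
    t2 = n * diff @ np.linalg.solve(S, diff)
    f_stat = (n - p) / (p * (n - 1)) * t2       # exact F transform under normality
    return t2, stats.f.sf(f_stat, p, n - p)

rng = np.random.default_rng(3)
X = rng.multivariate_normal(mean=[0, 0, 0], cov=np.eye(3), size=40)
print(hotelling_t2_one_sample(X, mu0=[0, 0, 0]))
```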

12.
In many applications, a single Box–Cox transformation cannot simultaneously produce normality, constancy of variance and linearity of the systematic effects. In this paper, by establishing a heterogeneous linear regression model for the Box–Cox transformed response, we propose a hybrid strategy in which variable selection is employed to reduce the dimension of the explanatory variables in the joint mean and variance models, and a Box–Cox transformation is applied to remedy the response. We propose a unified procedure that can simultaneously select significant variables in the joint mean and variance models of the Box–Cox transformation, which provides a useful extension of ordinary normal linear regression models. With an appropriate choice of the tuning parameters, we establish the consistency of this procedure and the oracle property of the resulting estimators. Moreover, we also consider the maximum profile likelihood estimator of the Box–Cox transformation parameter. Simulation studies and a real example are used to illustrate the application of the proposed methods.
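A minimal sketch of the profile-likelihood estimation of the Box–Cox parameter, the ingredient the above procedure builds on; the joint mean-variance variable selection step is not reproduced. The simulated response is an illustrative assumption, and scipy.stats.boxcox chooses lambda by maximizing the Box–Cox log-likelihood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.lognormal(mean=0.5, sigma=0.6, size=300)   # positive, right-skewed response

# lambda is chosen by maximizing the Box-Cox (profile) log-likelihood
y_transformed, lam_hat = stats.boxcox(x)
print("estimated Box-Cox lambda:", lam_hat)         # near 0 for lognormal data
```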

13.
We develop a variance reduction method for the seemingly unrelated (SUR) kernel estimator of Wang (2003). We show that the quadratic interpolation method introduced in Cheng et al. (2007) works for the SUR kernel estimator. For a given point of estimation, Cheng et al. (2007) define a variance reduced local linear estimate as a linear combination of classical estimates at three nearby points. We develop an analogous variance reduction method for SUR kernel estimators in clustered/longitudinal models and perform simulation studies which demonstrate the efficacy of our variance reduction method in finite sample settings.

14.
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

15.
The gamma frailty model is a natural extension of the Cox proportional hazards model in survival analysis. Because the frailties are unobserved, an EM approach is often used for estimation. Such an approach is shown to lead to finite-sample underestimation of the frailty variance, with the corresponding regression parameters also being underestimated as a result. For the univariate case, we investigate the source of the bias with simulation studies and a complete enumeration. The rank-based EM approach, we note, only identifies frailty through the order in which failures occur; additional frailty that is evident in the survival times is ignored, and as a result the frailty variance is underestimated. An adaptation of the standard EM approach is suggested, whereby the nonparametric Breslow estimate is replaced by a local likelihood formulation for the baseline hazard, which allows the survival times themselves to enter the model. Simulations demonstrate that this approach substantially reduces the bias, even at small sample sizes. The method developed is applied to survival data from the North West Regional Leukaemia Register.

16.
Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known and challenging problem. Naive Monte Carlo simulation is the standard technique for estimating this type of probability, but it is computationally expensive, especially when dealing with rare events. An alternative is to use variance reduction techniques, which are known for achieving the same accuracy with fewer computations. Most of these methods have so far been proposed for specific settings in which the RVs belong to particular classes of distributions. In this paper, we propose a generalization of the well-known hazard rate twisting importance-sampling approach that has the advantage of being logarithmically efficient for arbitrary sums of RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. This feature is rarely satisfied by variance reduction algorithms, whose performance has often been proven only under restrictive assumptions. The method also shows good efficiency, illustrated by selected simulation results comparing its performance with that of some existing techniques.
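For context, the sketch below shows classical exponential twisting importance sampling for the tail probability of a sum of i.i.d. exponential RVs, a textbook relative of the hazard-rate twisting scheme above rather than the paper's generalized method. The twisting parameter is set so that the twisted mean of the sum equals the threshold; the sample size, rate and threshold are illustrative assumptions.

```python
import numpy as np

def twisted_is_tail_prob(n, lam, gamma, n_sim=100_000, seed=0):
    """Estimate P(X_1 + ... + X_n > gamma) for i.i.d. Exp(lam) via exponential twisting."""
    rng = np.random.default_rng(seed)
    theta = lam - n / gamma                        # twisted mean of the sum equals gamma
    assert 0 < theta < lam, "threshold must exceed the mean n/lam"
    # under the twisted measure the summands are Exp(lam - theta)
    S = rng.exponential(scale=1.0 / (lam - theta), size=(n_sim, n)).sum(axis=1)
    lr = (lam / (lam - theta)) ** n * np.exp(-theta * S)   # likelihood ratio
    return np.mean((S > gamma) * lr)

# rare event: a sum of 5 unit-rate exponentials (mean 5) exceeding 30
print(twisted_is_tail_prob(n=5, lam=1.0, gamma=30.0))
```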

17.
We study the effect of additive and multiplicative Berkson measurement error in the Cox proportional hazards model. By plotting the true and the observed survivor functions and the true and the observed hazard functions as functions of the exposure, one can assess the effect of this type of error on the estimation of the slope parameter corresponding to the variable measured with error. As an example, we analyze the measurement error in the German Uranium Miners Cohort Study, both with graphical methods and with a simulation study. We do not see a substantial bias in the presence of small measurement error or in the rare-disease case. Even the effect of a Berkson measurement error with high variance, which is not unrealistic in our example, is a negligible attenuation of the observed effect. However, this effect is more pronounced for multiplicative measurement error.

18.
In an attempt to provide a statistical tool for disease screening and prediction, we propose a semiparametric approach to the analysis of the Cox proportional hazards cure model in situations where the observations on the event time are subject to right censoring and some covariates are missing not at random. To facilitate the methodological development, we begin with semiparametric maximum likelihood estimation (SPMLE) assuming that the (conditional) distribution of the missing covariates is known. A variant of the EM algorithm is used to compute the estimator. We then adapt the SPMLE to a more practical situation where the distribution is unknown and there is a consistent estimator based on available information. We establish the consistency and weak convergence of the resulting pseudo-SPMLE, and identify a suitable variance estimator. The application of our inference procedure to disease screening and prediction is illustrated via empirical studies. The proposed approach is used to analyze the tuberculosis screening study data that motivated this research. Its finite-sample performance is examined by simulation.

19.
20.
Some variance reduction techniques utilizing the total hazard are developed to estimate the average run lengths of cumulative sum (CUSUM) charts through simulation when the process follows a general probability distribution. In particular, we propose the hazard estimator and the cycle estimator. Simulation results are shown for the exponential case, and these estimators are compared with the raw simulation estimator. Applicability to multivariate CUSUM schemes is briefly discussed in the conclusion.
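A minimal sketch of the raw simulation estimator mentioned above, the benchmark against which the hazard and cycle estimators are compared (those estimators are not reproduced): it estimates the average run length of a one-sided CUSUM chart for exponential observations. The reference value, decision limit and scale are illustrative assumptions.

```python
import numpy as np

def raw_arl(k=0.5, h=3.0, scale=1.0, n_runs=2000, seed=0):
    """Raw simulation estimate of the ARL of a one-sided CUSUM for Exp(scale) data."""
    rng = np.random.default_rng(seed)
    run_lengths = []
    for _ in range(n_runs):
        s, t = 0.0, 0
        while s <= h:
            t += 1
            s = max(0.0, s + rng.exponential(scale=scale) - k)   # CUSUM recursion
        run_lengths.append(t)
    rl = np.array(run_lengths)
    return rl.mean(), rl.std(ddof=1) / np.sqrt(n_runs)           # ARL and its std. error

print(raw_arl())
```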
