Similar Articles
Found 20 similar articles (search took 31 ms)
1.
In this article, an unbalanced one-way random effects model is considered for the log-transformed shift-long exposure measurements. An exact test and confidence interval for the proportion of workers whose mean exposure exceeds the occupational exposure limit are developed based on the concepts of the generalized p-value and generalized confidence interval. Simulation results comparing the performance of the proposed test with that of the existing method are reported. The simulation results indicate that the proposed method offers appreciable gains in size and power.
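As a hypothetical illustration of the quantity being estimated (a point estimate only, not the generalized-confidence-interval procedure the abstract develops), under a lognormal exposure model the exceedance fraction follows directly from the mean and standard deviation of the log-transformed measurements:

```python
from math import erf, log, sqrt

def exceedance_fraction(mu_log, sigma_log, oel):
    """Point estimate of the fraction of workers whose lognormal exposure
    exceeds the occupational exposure limit (OEL), given the mean and
    standard deviation of the log-transformed measurements."""
    z = (log(oel) - mu_log) / sigma_log
    # standard normal survival function 1 - Phi(z), computed via erf
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# If log-exposures have mean 0 and sd 1 and the OEL is 1 (log OEL = 0),
# exactly half the workers exceed the limit by symmetry.
print(exceedance_fraction(0.0, 1.0, 1.0))  # 0.5
```

The function name and arguments are illustrative; the paper's contribution is exact inference for this fraction under an unbalanced random effects model, which this sketch does not attempt.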

2.
Cross-classified data are often obtained in controlled experimental situations and in epidemiologic studies. As an example of the latter, occupational health studies sometimes require personal exposure measurements on a random sample of workers from one or more job groups, in one or more plant locations, on several different sampling dates. Because the marginal distributions of exposure data from such studies are generally right-skewed and well-approximated as lognormal, researchers in this area often consider the use of ANOVA models after a logarithmic transformation. While it is then of interest to estimate original-scale population parameters (e.g., the overall mean and variance), standard candidates such as maximum likelihood estimators (MLEs) can be unstable and highly biased. Uniformly minimum variance unbiased (UMVU) estimators offer a viable alternative, and are adaptable to sampling schemes that are typical of experimental or epidemiologic studies. In this paper, we provide UMVU estimators for the mean and variance under two random effects ANOVA models for log-transformed data. We illustrate substantial mean squared error gains relative to the MLE when estimating the mean under a one-way classification. We illustrate that the results can readily be extended to encompass a useful class of purely random effects models, provided that the study data are balanced.
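For context on why the MLE can misbehave here, a minimal single-sample sketch (not the random-effects UMVU estimators the paper derives) of the original-scale lognormal mean exp(mu + sigma^2/2):

```python
import math
import statistics

def lognormal_mean_mle(x):
    """ML estimate of the original-scale mean exp(mu + sigma^2 / 2) for a
    single lognormal sample. The MLE plugs in the n-denominator variance
    and is biased upward in small samples, which motivates UMVU
    alternatives in the random-effects setting."""
    logs = [math.log(v) for v in x]
    mu_hat = statistics.fmean(logs)
    s2_hat = statistics.pvariance(logs)  # MLE: variance with n denominator
    return math.exp(mu_hat + s2_hat / 2.0)

print(lognormal_mean_mle([1.0, 1.0, 1.0]))  # 1.0 (all logs are zero)
```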

3.
A simple approach for analyzing longitudinally measured biomarkers is to calculate summary measures such as the area under the curve (AUC) for each individual and then compare the mean AUC between treatment groups using methods such as the t test. This two-step approach is difficult to implement when there are missing data since the AUC cannot be directly calculated for individuals with missing measurements. Simple methods for dealing with missing data include the complete case analysis and imputation. A recent study showed that the estimated mean AUC difference between treatment groups based on the linear mixed model (LMM), rather than on individually calculated AUCs by simple imputation, has negligible bias when data are missing at random and only small bias when missingness is not at random. However, this model assumes the outcome to be normally distributed, which is often violated in biomarker data. In this paper, we propose to use a LMM on log-transformed biomarkers, based on which statistical inference for the ratio, rather than difference, of AUC between treatment groups is provided. The proposed method can not only handle the potential baseline imbalance in a randomized trial but also circumvent the estimation of the nuisance variance parameters in the log-normal model. The proposed model is applied to a recently completed large randomized trial studying the effect of nicotine reduction on biomarker exposure of smokers.
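The per-subject summary measure in the two-step approach can be sketched as a trapezoidal AUC (a complete-case sketch; the paper's contribution is the LMM on log-transformed values, which avoids this per-subject calculation under missing data):

```python
def trapezoid_auc(times, values):
    """Area under a biomarker trajectory by the trapezoidal rule; this is
    the individual summary measure compared between groups in the simple
    two-step approach, and it requires complete measurements."""
    return sum((t1 - t0) * (v0 + v1) / 2.0
               for (t0, v0), (t1, v1) in zip(zip(times, values),
                                             zip(times[1:], values[1:])))

# Triangle-shaped trajectory: two trapezoids of area 3 each.
print(trapezoid_auc([0.0, 1.0, 2.0], [2.0, 4.0, 2.0]))  # 6.0
```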

4.
Quantile regression (QR) is a popular approach to estimate functional relations between variables for all portions of a probability distribution. Parameter estimation in QR with missing data is one of the most challenging issues in statistics. Regression quantiles can be substantially biased when observations are subject to missingness. We study several inverse probability weighting (IPW) estimators for parameters in QR when covariates or responses are missing not at random. Maximum likelihood and semiparametric likelihood methods are employed to estimate the respondent probability function. To achieve nice efficiency properties, we develop an empirical likelihood (EL) approach to QR with the auxiliary information from the calibration constraints. The proposed methods are less sensitive to misspecified missing mechanisms. Asymptotic properties of the proposed IPW estimators are shown under general settings. The efficiency gain of the EL-based IPW estimator is quantified theoretically. Simulation studies and a data set on the work limitation of injured workers from Canada are used to illustrate our proposed methodologies.
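The core IPW idea can be shown on the simplest target, a mean (a hedged sketch; the paper applies the same inverse-probability weighting inside quantile regression estimating equations, with the probabilities themselves estimated):

```python
def ipw_mean(y, observed, pi):
    """Horvitz-Thompson-style IPW estimate of E[Y] under missingness:
    each observed value is weighted by the inverse of its (estimated)
    response probability pi_i, so observed units stand in for similar
    missing ones."""
    n = len(y)
    return sum(yi / p for yi, obs, p in zip(y, observed, pi) if obs) / n

# A unit observed with probability 0.5 counts for itself and one
# similar missing unit: 2.0 / 0.5 / 2 = 2.0.
print(ipw_mean([2.0, 4.0], [True, False], [0.5, 0.8]))  # 2.0
```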

5.
Control charts are widely used for monitoring quality characteristics of high-yield processes. In such processes, where a large number of zero observations exists in count data, the zero-inflated binomial (ZIB) models are more appropriate than the ordinary binomial models. In ZIB models, random shocks occur with probability θ, and upon the occurrence of a random shock, the number of non-conforming items in a sample of size n follows the binomial distribution with proportion p. In the present article, we study in more detail the exponentially weighted moving average control chart based on the ZIB distribution (ZIB-EWMA) and we also propose a new control chart based on the double exponentially weighted moving average statistic for monitoring ZIB data (ZIB-DEWMA). The two control charts are studied for detecting upward shifts in θ or p individually, as well as in both parameters simultaneously. Through a simulation study, we compare the performance of the proposed chart with the ZIB-Shewhart, ZIB-EWMA and ZIB-CUSUM charts. Finally, an illustrative example is also presented to display the practical application of the ZIB charts.

6.
In this paper we review some results that have been derived on record values for some well-known probability density functions. Based on m records from Kumaraswamy’s distribution, we obtain estimators for the two parameters and for the future sth record value. These estimates are derived using the maximum likelihood and Bayesian approaches. In the Bayesian approach, the two parameters are assumed to be random variables, and estimators for the parameters and for the future sth record value are obtained, given m observed past record values, using the well-known squared error loss (SEL) function and a linear exponential (LINEX) loss function. The findings are illustrated with actual and computer-generated data.
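For readers unfamiliar with record values, the sequence of upper records referred to above is just the running maxima of the sample (a definitional sketch; the estimation theory for Kumaraswamy's distribution, whose CDF is F(x) = 1 − (1 − x^a)^b on (0, 1), is the paper's subject):

```python
def upper_records(seq):
    """Upper record values of a sequence: the first observation and every
    later observation strictly exceeding all that came before it."""
    records, best = [], None
    for x in seq:
        if best is None or x > best:
            records.append(x)
            best = x
    return records

print(upper_records([3, 1, 4, 1, 5, 9, 2, 6]))  # [3, 4, 5, 9]
```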

7.
The problem of making statistical inference about θ = P(X > Y) has been extensively investigated in the literature using simple random sampling (SRS) data. This problem arises naturally in the area of reliability for a system with strength X and stress Y. In this study, we consider making statistical inference about θ using ranked set sampling (RSS) data. Several estimators are proposed to estimate θ using RSS. The properties of these estimators are investigated and compared with known estimators based on simple random sample (SRS) data. The proposed estimators based on RSS dominate those based on SRS. A motivating example using a real data set is given to illustrate the computation of the newly suggested estimators.
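The SRS baseline against which the RSS estimators are compared can be sketched as the usual pairwise-comparison (Mann-Whitney form) estimator of θ (the RSS versions, which reweight within ranked sets, are not reproduced here):

```python
def prob_x_gt_y(xs, ys):
    """Nonparametric SRS estimate of theta = P(X > Y): the fraction of
    all (x, y) pairs, across the two independent samples, with x > y."""
    wins = sum(x > y for x in xs for y in ys)
    return wins / (len(xs) * len(ys))

# Pairs: (2,1) win, (2,4) loss, (3,1) win, (3,4) loss -> 2 of 4.
print(prob_x_gt_y([2, 3], [1, 4]))  # 0.5
```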

8.
In this paper, a simulation study is conducted to systematically investigate the impact of different types of missing data on six different statistical analyses: four different likelihood‐based linear mixed effects models and analysis of covariance (ANCOVA) using two different data sets, in non‐inferiority trial settings for the analysis of longitudinal continuous data. ANCOVA is valid when the missing data are completely at random. Likelihood‐based linear mixed effects model approaches are valid when the missing data are at random. The pattern‐mixture model (PMM) was developed to accommodate a non‐random missingness mechanism. Our simulations suggest that two linear mixed effects models, using an unstructured covariance matrix for within‐subject correlation with no random effects or a first‐order autoregressive covariance matrix for within‐subject correlation with random coefficient effects, provide good control of the type I error (T1E) rate when the missing data are completely at random or at random. ANCOVA using the last‐observation‐carried‐forward imputed data set is the worst method in terms of bias and T1E rate. The PMM does not show much improvement in controlling the T1E rate compared with the other linear mixed effects models when the missing data are not at random, and is markedly inferior when the missing data are at random. Copyright © 2009 John Wiley & Sons, Ltd.

9.
This study focuses on the estimation of the population mean of a sensitive variable in stratified random sampling based on the randomized response technique (RRT) when the observations are contaminated by measurement errors (ME). A generalized estimator of the population mean is proposed by using additively scrambled responses for the sensitive variable. The expressions for the bias and mean square error (MSE) of the proposed estimator are derived. The performance of the proposed estimator is evaluated both theoretically and empirically. Results are also applied to a real data set.
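The additive scrambling device behind the RRT can be sketched in its simplest, single-stratum, error-free form (an assumption-laden toy: the paper's generalized estimator additionally handles stratification and measurement error, which this does not):

```python
import random

def additive_rrt_mean(true_values, rng, scramble_sd=1.0):
    """Additive scrambling RRT: each respondent reports y + s, with s
    drawn from a known mean-zero distribution, so individual answers are
    masked while the sample mean of the reports stays unbiased for the
    population mean of the sensitive variable."""
    reports = [y + rng.gauss(0.0, scramble_sd) for y in true_values]
    return sum(reports) / len(reports)

rng = random.Random(0)
# With zero scrambling noise the estimator reduces to the plain mean.
print(additive_rrt_mean([10.0, 12.0, 14.0], rng, scramble_sd=0.0))  # 12.0
```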

10.
In the current paper, the estimation of the shape and location parameters α and c, respectively, of the Pareto distribution will be considered in cases when c is known and when both are unknown. Simple random sampling (SRS) and ranked set sampling (RSS) will be used, and several traditional and ad hoc estimators will be considered. In addition, estimators of α when c is known will be considered using an RSS version based on the order statistic that maximizes the Fisher information for a fixed set size. These estimators will be compared in terms of their biases and mean square errors. The estimators based on RSS can be real competitors to those based on SRS.
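The SRS baseline for the known-c case is the classical shape MLE (a sketch of the standard formula only; the RSS-based estimators the abstract studies modify the sampling design, not this formula):

```python
import math

def pareto_shape_mle(x, c):
    """MLE of the Pareto shape alpha under SRS when the location c is
    known: alpha_hat = n / sum(log(x_i / c))."""
    return len(x) / sum(math.log(v / c) for v in x)

# Two observations at e with c = 1: alpha_hat = 2 / (1 + 1) = 1.0.
print(pareto_shape_mle([math.e, math.e], 1.0))  # 1.0
```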

11.
In this paper, the hypothesis testing and interval estimation for the intraclass correlation coefficients are considered in a two-way random effects model with interaction. Two particular intraclass correlation coefficients are described in a reliability study. The tests and confidence intervals for the intraclass correlation coefficients are developed when the data are unbalanced. One approach is based on the generalized p-value and generalized confidence interval, the other is based on the modified large-sample idea. These two approaches simplify to the ones in Gilder et al. [2007. Confidence intervals on intraclass correlation coefficients in a balanced two-factor random design. J. Statist. Plann. Inference 137, 1199–1212] when the data are balanced. Furthermore, some statistical properties of the generalized confidence intervals are investigated. Finally, some simulation results to compare the performance of the modified large-sample approach with that of the generalized approach are reported. The simulation results indicate that the modified large-sample approach performs better than the generalized approach in the coverage probability and expected length of the confidence interval.

12.
Several distribution-free bounds on expected values of L-statistics based on the sample of possibly dependent and nonidentically distributed random variables are given in the case when the sample size is a random variable, possibly dependent on the observations, with values in the set {1,2,…}. Some bounds extend the results of Papadatos (2001a) to the case of random sample size. The others provide new evaluations even if the sample size is nonrandom. Some applications of the presented bounds are also indicated.

13.
This paper considers a panel data model containing random effects. Exploiting the relationship between the asymmetric Laplace distribution and quantile regression, we construct a Bayesian hierarchical quantile regression model. By decomposing the asymmetric Laplace distribution, we obtain point and interval estimates of the model parameters under a Gibbs sampling algorithm. Simulation results show that, in panel data models with random effects, and especially when the errors are non-normal, the proposed method outperforms the traditional mean-regression approach. Finally, the new method is applied to panel data on regional economic output and employment in China, yielding information useful for macroeconomic regulation.
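The link between the asymmetric Laplace likelihood and quantile regression runs through the check (pinball) loss, sketched below (a definitional illustration; the hierarchical Gibbs sampler itself is not reproduced):

```python
def check_loss(residuals, tau):
    """Koenker-Bassett check (pinball) loss rho_tau(r) = r * (tau - I(r < 0)).
    Minimizing its sum over residuals y_i - x_i'beta gives the tau-th
    regression quantile; maximizing an asymmetric Laplace likelihood is
    equivalent, which is what the Bayesian formulation exploits."""
    return sum(r * (tau - (r < 0)) for r in residuals)

# At tau = 0.5 the check loss is half the absolute-error (L1) loss.
print(check_loss([1.0, -1.0], 0.5))  # 0.5 + 0.5 = 1.0
```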

14.
New measures of skewness for real-valued random variables are proposed. The measures are based on a functional representation of real-valued random variables. Specifically, the expected value of the transformed random variable can be used to characterize the distribution of the original variable. Firstly, estimators of the proposed skewness measures are analyzed. Secondly, asymptotic tests for symmetry are developed. The tests are consistent for both discrete and continuous distributions. Bootstrap versions improving the empirical results for moderate and small samples are provided. Some simulations illustrate the performance of the tests in comparison to other methods. The results show that our procedures are competitive and have some practical advantages.

15.
As a flexible alternative to the Cox model, the accelerated failure time (AFT) model assumes that the event time of interest depends on the covariates through a regression function. The AFT model with non‐parametric covariate effects is investigated, when variable selection is desired along with estimation. Formulated in the framework of the smoothing spline analysis of variance model, the proposed method based on the Stute estimate (Stute, 1993 [Consistent estimation under random censorship when covariables are present, J. Multivariate Anal. 45, 89–103]) can achieve a sparse representation of the functional decomposition, by utilizing a reproducing kernel Hilbert norm penalty. Computational algorithms and theoretical properties of the proposed method are investigated. The finite sample size performance of the proposed approach is assessed via simulation studies. The primary biliary cirrhosis data is analyzed for demonstration.

16.
A two-step estimation approach is proposed for the fixed-effect parameters, random effects and their variance σ2 of a Poisson mixed model. In the first step, it is proposed to construct a small-σ2-based approximate likelihood function of the data and utilize this function to estimate the fixed-effect parameters and σ2. In the second step, the random effects are estimated by minimizing their posterior mean squared error. Methods of Waclawiw and Liang (1993), based on so-called Stein-type estimating functions, and of Breslow and Clayton (1993), based on penalized quasi-likelihood, are compared with the proposed likelihood method. The results of a simulation study on the performance of all three approaches are reported.

17.
Censored recurrent event data frequently arise in biomedical studies. Often, the events are not homogeneous, and may be categorized. We propose semiparametric regression methods for analysing multiple-category recurrent event data and consider the setting where event times are always known, but the information used to categorize events may be missing. Application of existing methods after censoring events of unknown category (i.e. 'complete-case' methods) produces consistent estimators only when event types are missing completely at random, an assumption which will frequently fail in practice. We propose methods, based on weighted estimating equations, which are applicable when event category missingness is missing at random. Parameter estimators are shown to be consistent and asymptotically normal. Finite sample properties are examined through simulations and the proposed methods are applied to an end-stage renal disease data set obtained from a national organ failure registry.

18.

Nonstandard mixtures are those that result from a mixture of a discrete and a continuous random variable. They arise in practice, for example, in medical studies of exposure. Here, a random variable that models exposure might have a discrete mass point at no exposure, but otherwise may be continuous. In this article we explore estimating the distribution function associated with such a random variable from a nonparametric viewpoint. We assume that the locations of the discrete mass points are known so that we will be able to apply a classical nonparametric smoothing approach to the problem. The proposed estimator is a mixture of an empirical distribution function and a kernel estimate of a distribution function. A simple theoretical argument reveals that existing bandwidth selection algorithms can be applied to the smooth component of this estimator as well. The proposed approach is applied to two example data sets.
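A mixture of an empirical distribution function and a kernel CDF estimate, as described above, might be sketched as follows for a single known mass point (the bandwidth, the Gaussian kernel, and the default h are assumptions for illustration; the article's bandwidth-selection discussion is not reproduced):

```python
from math import erf, sqrt

def smooth_cdf(t, sample, h):
    """Gaussian-kernel estimate of a continuous CDF evaluated at t."""
    return sum(0.5 * (1.0 + erf((t - x) / (h * sqrt(2.0))))
               for x in sample) / len(sample)

def mixture_cdf(t, data, atom, h=0.5):
    """CDF estimate for a discrete-continuous mixture with one known mass
    point `atom` (e.g., zero exposure): the empirical mass at the atom
    plus a kernel CDF for the continuous part, mixed by the observed
    proportions."""
    n = len(data)
    p0 = sum(x == atom for x in data) / n
    cont = [x for x in data if x != atom]
    step = p0 if t >= atom else 0.0
    smooth = (1.0 - p0) * smooth_cdf(t, cont, h) if cont else 0.0
    return step + smooth

data = [0.0, 0.0, 1.5, 2.0, 2.5]  # mass point at zero exposure
print(mixture_cdf(10.0, data, atom=0.0))  # ~1.0 far in the right tail
```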

19.
The objective of this paper is to present a method which can accommodate certain types of missing data by using the quasi-likelihood function for the complete data. This method can be useful when we can make first and second moment assumptions only; in addition, it can be helpful when the EM algorithm applied to the actual likelihood becomes overly complicated. First we derive a loss function for the observed data using an exponential family density which has the same mean and variance structure of the complete data. This loss function is the counterpart of the quasi-deviance for the observed data. Then the loss function is minimized using the EM algorithm. The use of the EM algorithm guarantees a decrease in the loss function at every iteration. When the observed data can be expressed as a deterministic linear transformation of the complete data, or when data are missing completely at random, the proposed method yields consistent estimators. Examples are given for overdispersed polytomous data, linear random effects models, and linear regression with missing covariates. Simulation results for the linear regression model with missing covariates show that the proposed estimates are more efficient than estimates based on completely observed units, even when outcomes are bimodal or skewed.

20.
The problem of nonparametric estimation of the spectral density function of a partially observed homogeneous random field is addressed. In particular, a class of estimators with favorable asymptotic performance (bias, variance, rate of convergence) is proposed. The proposed estimators are actually shown to be √N-consistent if the autocovariance function of the random field is supported on a compact set, and close to √N-consistent if the autocovariance function decays to zero sufficiently fast for increasing lags.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号