Similar Documents
20 similar documents found.
1.
Parameter dependency within data sets in simulation studies is common, especially in models such as continuous-time Markov chains (CTMCs). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: (1) to develop a multivariate approach for assessing accuracy and precision in simulation studies, and (2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
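A minimal Python sketch of the distinction this abstract draws between component-wise and joint coverage: component-wise coverage checks each Wald interval separately, while joint coverage checks whether an ellipsoidal Wald region covers the whole parameter vector at once. The function name and the Wald-region construction are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def coverage_summary(estimates, covariances, theta_true, level=0.95):
    """Component-wise and joint (Wald-ellipsoid) coverage over simulation replicates.

    estimates   : (R, p) array of parameter estimates, one row per replicate
    covariances : (R, p, p) array of estimated covariance matrices
    theta_true  : (p,) true parameter vector used to generate the data
    """
    R, p = estimates.shape
    z = stats.norm.ppf(0.5 + level / 2.0)
    chi2_crit = stats.chi2.ppf(level, df=p)

    se = np.sqrt(np.array([np.diag(V) for V in covariances]))      # (R, p) standard errors
    componentwise = np.abs(estimates - theta_true) <= z * se        # each Wald CI covers its component?

    diffs = estimates - theta_true
    mahal = np.array([d @ np.linalg.solve(V, d) for d, V in zip(diffs, covariances)])
    joint = mahal <= chi2_crit                                       # ellipsoid covers the full vector?

    return componentwise.mean(axis=0), joint.mean()
```

Averaging the indicators over replicates gives the empirical component-wise coverage for each parameter and a single joint coverage rate, which is how the two kinds of inference can be compared in a simulation study.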

2.
The coefficient of variation (CV) can be used as an index of reliability of measurement. The lognormal distribution has been applied to fit data in many fields. We developed approximate interval estimation of the ratio of two coefficients of variation (CVs) for lognormal distributions using the Wald-type, Fieller-type, and log methods, as well as the method of variance estimates recovery (MOVER). The simulation studies show that the empirical coverage rates of the methods are satisfactorily close to the nominal coverage rate for medium sample sizes.
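One plausible Wald-type construction for the ratio of lognormal CVs, sketched in Python: it uses CV = sqrt(exp(σ²) − 1) and a delta-method variance on the log-ratio scale. This is a hedged illustration; the paper's Wald-, Fieller-, log-, and MOVER-based intervals may differ in detail.

```python
import numpy as np
from scipy import stats

def cv_ratio_wald_ci(x, y, level=0.95):
    """Wald-type CI for the ratio CV(x)/CV(y), assuming both samples are lognormal.

    For lognormal data CV = sqrt(exp(sigma^2) - 1), with sigma^2 the variance of the
    log-scale observations; the interval is built on the log-ratio via the delta method.
    """
    def log_cv_and_var(sample):
        logs = np.log(np.asarray(sample, float))
        n = logs.size
        s2 = logs.var(ddof=1)                        # log-scale variance estimate
        log_cv = 0.5 * np.log(np.expm1(s2))          # log CV = 0.5 * log(exp(s2) - 1)
        deriv = 0.5 * np.exp(s2) / np.expm1(s2)      # d(log CV) / d(s2)
        var_s2 = 2.0 * s2**2 / (n - 1)               # approximate variance of the sample variance
        return log_cv, deriv**2 * var_s2

    lx, vx = log_cv_and_var(x)
    ly, vy = log_cv_and_var(y)
    z = stats.norm.ppf(0.5 + level / 2.0)
    diff, se = lx - ly, np.sqrt(vx + vy)
    return np.exp(diff - z * se), np.exp(diff + z * se)
```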

3.
This paper investigates the general linear regression model Y = Xβ + e, assuming the dependent variable is observed as a scrambled response using Eichhorn & Hayre's (1983) approach to collecting sensitive personal information. The estimates of the parameters in the model remain unbiased, but the variances of the estimates increase due to scrambling. The Wald test of the null hypothesis H0: β = β0 against the alternative hypothesis Ha: β ≠ β0 is also investigated. Parameter estimates obtained from scrambled responses are compared to those from conventional or direct-question surveys, using simulation. The coverage by nominal 95% confidence intervals is also reported.
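The following simulation sketch illustrates the multiplicative scrambling idea attributed to Eichhorn & Hayre: the reported response is the true response times a scrambling variable with known mean, and dividing by that known mean keeps ordinary least squares unbiased while inflating its variance. The design constants (sample size, coefficients, gamma scrambler) are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 500, np.array([2.0, 1.5, -0.8])               # illustrative true coefficients

X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ beta + rng.normal(size=n)                        # direct (unscrambled) response

# Multiplicative scrambling: the respondent reports y * s, where s is drawn from a
# known distribution; here s ~ Gamma(shape=10, scale=0.1), so E[s] = 1 and Var[s] = 0.1.
s = rng.gamma(shape=10.0, scale=0.1, size=n)
e_s = 10.0 * 0.1                                         # known mean of the scrambling variable
z = y * s / e_s                                          # unscramble by the known E[s]

beta_direct, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_scrambled, *_ = np.linalg.lstsq(X, z, rcond=None)
print("direct    :", beta_direct)                        # both fits are unbiased ...
print("scrambled :", beta_scrambled)                     # ... but the scrambled fit is noisier
```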

4.
Leave-one-out and 632 bootstrap are popular data-based methods of estimating the true error rate of a classification rule, but practical applications almost exclusively quote only point estimates. Interval estimation would provide better assessment of the future performance of the rule, but little has been published on this topic. We first review general-purpose jackknife and bootstrap methodology that can be used in conjunction with leave-one-out estimates to provide prediction intervals for true error rates of classification rules. Monte Carlo simulation is then used to investigate coverage rates of the resulting intervals for normal data, but the results are disappointing; standard intervals show considerable overinclusion, intervals based on Edgeworth approximations or random weighting do not perform well, and while a bootstrap approach provides intervals with coverage rates closer to the nominal ones there is still marked underinclusion. We then turn to intervals constructed from 632 bootstrap estimates, and show that much better results are obtained. Although there is now some overinclusion, particularly for large training samples, the actual coverage rates are sufficiently close to the nominal rates for the method to be recommended. An application to real data illustrates the considerable variability that can arise in practical estimation of error rates.
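A compact sketch of the 632 bootstrap point estimate discussed above (an interval would wrap a further bootstrap around it). The nearest-centroid classifier is a stand-in chosen only to keep the example self-contained; it is not the rule studied in the paper.

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, X_test):
    """Tiny nearest-centroid classifier, used only to keep the sketch self-contained."""
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    d = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def error_632(X, y, n_boot=200, rng=None):
    """Efron's 632 bootstrap estimate of the true error rate of the rule above."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    apparent = (nearest_centroid_predict(X, y, X) != y).mean()    # resubstitution (apparent) error
    oob_errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                               # bootstrap sample indices
        oob = np.setdiff1d(np.arange(n), idx)                     # out-of-bag observations
        if oob.size == 0 or np.unique(y[idx]).size < 2:
            continue
        pred = nearest_centroid_predict(X[idx], y[idx], X[oob])
        oob_errors.append((pred != y[oob]).mean())
    eps0 = np.mean(oob_errors)                                    # leave-one-out bootstrap error
    return 0.368 * apparent + 0.632 * eps0                        # the 632 combination
```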

5.
In this article, we present the performance of the maximum likelihood estimates of the Burr XII parameters for constant-stress partially accelerated life tests under multiple censored data. Two maximum likelihood estimation methods are considered. One method is based on the observed-data likelihood function, and the maximum likelihood estimates are obtained by using the quasi-Newton algorithm. The other method is based on the complete-data likelihood function, and the maximum likelihood estimates are derived by using the expectation-maximization (EM) algorithm. The variance–covariance matrices are derived to construct the confidence intervals of the parameters. The performance of these two algorithms is compared with each other by a simulation study. The simulation results show that the maximum likelihood estimation via the EM algorithm outperforms the quasi-Newton algorithm in terms of the absolute relative bias, the bias, the root mean square error and the coverage rate. Finally, a numerical example is given to illustrate the performance of the proposed methods.

6.
The lognormal distribution is currently used extensively to describe the distribution of positive random variables. This is especially the case with data pertaining to occupational health and other biological data. One particular application is statistical inference with regard to the mean of the data. Other authors, namely Zou et al. (2009), have proposed procedures involving the so-called “method of variance estimates recovery” (MOVER), while an alternative approach based on simulation is the so-called generalized confidence interval, discussed by Krishnamoorthy and Mathew (2003). In this paper we compare the performance of the MOVER-based confidence interval estimates and the generalized confidence interval procedure with the coverage of credibility intervals obtained using Bayesian methodology under a variety of prior distributions, in order to assess the appropriateness of each. An extensive simulation study is conducted to evaluate the coverage accuracy and interval width of the proposed methods. For the Bayesian approach both the equal-tail and highest posterior density (HPD) credibility intervals are presented. Various prior distributions (Independence Jeffreys' prior, Jeffreys'-Rule prior, namely, the square root of the determinant of the Fisher Information matrix, reference and probability-matching priors) are evaluated and compared to determine which give the best coverage with the most efficient interval width. The simulation studies show that the constructed Bayesian confidence intervals have satisfactory coverage probabilities and in some cases outperform the MOVER and generalized confidence interval results. The Bayesian inference procedures (hypothesis tests and confidence intervals) are also extended to the difference between two lognormal means as well as to the case of zero-valued observations and confidence intervals for the lognormal variance. In the last section of this paper the bivariate lognormal distribution is discussed and Bayesian confidence intervals are obtained for the difference between two correlated lognormal means as well as for the ratio of lognormal variances, using nine different priors.
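For readers unfamiliar with the generalized confidence interval of Krishnamoorthy and Mathew (2003) referenced above, here is a short Monte Carlo sketch of the generalized pivotal quantity for the lognormal mean exp(μ + σ²/2); the number of pivot draws is an arbitrary choice.

```python
import numpy as np

def lognormal_mean_gci(x, level=0.95, n_draws=100_000, rng=None):
    """Generalized confidence interval for the lognormal mean exp(mu + sigma^2/2),
    following the generalized pivotal quantity construction of
    Krishnamoorthy and Mathew (2003)."""
    rng = rng or np.random.default_rng(0)
    logs = np.log(np.asarray(x, float))
    n, ybar, s2 = logs.size, logs.mean(), logs.var(ddof=1)

    z = rng.standard_normal(n_draws)
    u = rng.chisquare(n - 1, n_draws)
    gpq_sigma2 = (n - 1) * s2 / u                      # pivotal quantity for sigma^2
    gpq_mu = ybar - z * np.sqrt(gpq_sigma2 / n)        # pivotal quantity for mu
    gpq_mean = np.exp(gpq_mu + gpq_sigma2 / 2.0)       # pivotal quantity for the lognormal mean

    alpha = 1.0 - level
    return np.quantile(gpq_mean, [alpha / 2, 1 - alpha / 2])
```

The interval is simply the empirical quantiles of the simulated pivotal quantity, which is what makes the procedure attractive compared with closed-form approximations.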

7.
Bootstrap in functional linear regression
We consider the functional linear model with scalar response and functional explanatory variable. One of the most popular methodologies for estimating the model parameter is based on functional principal components analysis (FPCA). In recent literature, weak convergence for a wide class of FPCA-type estimates has been proved, and consequently asymptotic confidence sets can be built. In this paper, we propose an alternative approach for obtaining pointwise confidence intervals by means of a bootstrap procedure, for which we establish asymptotic validity. In addition, a simulation study allows us to compare the practical behaviour of asymptotic and bootstrap confidence intervals in terms of coverage rates for different sample sizes.

8.
For constructing simultaneous confidence intervals for ratios of means of lognormal distributions, two approaches using a two-step method of variance estimates recovery are proposed. The first approach uses fiducial generalized confidence intervals (FGCIs) in the first step followed by the method of variance estimates recovery (MOVER) in the second step (FGCIs–MOVER). The second approach uses MOVER in the first and second steps (MOVER–MOVER). The performance of the proposed approaches is compared with that of simultaneous fiducial generalized confidence intervals (SFGCIs). Monte Carlo simulation is used to evaluate the performance of these approaches in terms of coverage probability, average interval width, and time consumption.

9.
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.

10.
We propose a multiple imputation method based on principal component analysis (PCA) to deal with incomplete continuous data. To reflect the uncertainty of the parameters from one imputation to the next, we use a Bayesian treatment of the PCA model. Using a simulation study and real data sets, the method is compared to two classical approaches: multiple imputation based on joint modelling and on fully conditional modelling. Unlike the other approaches, the proposed method can be easily used on data sets where the number of individuals is less than the number of variables and when the variables are highly correlated. In addition, it provides unbiased point estimates of quantities of interest, such as an expectation, a regression coefficient or a correlation coefficient, with a smaller mean squared error. Furthermore, the widths of the confidence intervals built for the quantities of interest are often smaller while ensuring valid coverage.

11.
This article proposes maximum likelihood estimation based on the bare bones particle swarm optimization (BBPSO) algorithm for the parameters of the Weibull distribution with censored data, which is widely used in lifetime data analysis. This approach can produce more accurate parameter estimates for the Weibull distribution. Additionally, confidence intervals for the estimators are obtained. The simulation results show that the BBPSO algorithm outperforms the Newton–Raphson method in most cases in terms of bias, root mean square error, and coverage rate. Two examples are used to demonstrate the performance of the proposed approach. The results show that the maximum likelihood estimates via the BBPSO algorithm perform well for estimating the Weibull parameters with censored data.
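A minimal sketch of the bare bones PSO idea applied to a right-censored Weibull log-likelihood: particles carry (shape, scale) pairs and are resampled around the midpoint of their personal best and the global best. The initialization range, particle count, and iteration budget are illustrative assumptions rather than the article's settings.

```python
import numpy as np

def censored_weibull_loglik(params, t, delta):
    """Log-likelihood of right-censored Weibull data.
    params = (shape k, scale lam); delta = 1 for observed failures, 0 for censored."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return -np.inf
    z = t / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # density contribution (failures)
    log_s = -(z**k)                                          # survival contribution (censored)
    return np.sum(delta * log_f + (1 - delta) * log_s)

def bbpso_weibull(t, delta, n_particles=30, n_iter=200, rng=None):
    """Bare bones PSO: each particle is resampled from a Gaussian centred midway between
    its personal best and the global best, with spread |pbest - gbest| (no velocities)."""
    rng = rng or np.random.default_rng(0)
    pos = rng.uniform(0.1, 5.0, size=(n_particles, 2))       # initial (shape, scale) guesses
    pbest = pos.copy()
    pbest_val = np.array([censored_weibull_loglik(p, t, delta) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()

    for _ in range(n_iter):
        mu = (pbest + gbest) / 2.0
        sigma = np.abs(pbest - gbest) + 1e-12
        pos = rng.normal(mu, sigma)
        val = np.array([censored_weibull_loglik(p, t, delta) for p in pos])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest   # (shape, scale) approximate maximum likelihood estimate
```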

12.
In reliability and survival analysis, several failure modes may cause a system to fail. It is usually assumed that these failure modes are independent of each other, though this assumption does not always hold. This paper considers dependent competing risks from the Marshall-Olkin bivariate Weibull distribution under a Type-I progressive interval censoring scheme. We derive the likelihood function, the maximum likelihood estimates, the 95% bootstrap confidence intervals, and the 95% coverage percentages of the parameters when the shape parameter is known, and apply the EM algorithm when the shape parameter is unknown. A Monte Carlo simulation is given to illustrate the theoretical analysis and the behaviour of the parameter estimates under different sample sizes. Finally, a data set is analyzed for illustrative purposes.

13.
This article examines confidence intervals for the single coefficient of variation and the difference of coefficients of variation in two-parameter exponential distributions, using the method of variance estimates recovery (MOVER), the generalized confidence interval (GCI), and the asymptotic confidence interval (ACI). In simulation, the results indicate that coverage probabilities of the GCI maintain the nominal level in general. The MOVER performs well in terms of coverage probability when data only consist of positive values, but it has a wider expected length. The coverage probabilities of the ACI satisfy the target for large sample sizes. We also illustrate our confidence intervals using a real-world example in the area of medical science.

14.
Asymptotic variance plays an important role in inference using interval estimates of attributable risk. This paper compares the asymptotic variances of the attributable risk estimate obtained using the delta method and the Fisher information matrix for a 2×2 case–control study, owing to its practicality in applications. The expressions of these two asymptotic variance estimates are shown to be equivalent. Because the asymptotic variance usually underestimates the standard error, the bootstrap standard error has also been utilized in constructing the interval estimates of attributable risk and compared with those using asymptotic estimates. A simulation study shows that the bootstrap interval estimate performs well in terms of coverage probability and confidence length. An exact test procedure for testing independence between the risk factor and the disease outcome using attributable risk is proposed and is justified for use, with real-life examples, in a small-sample situation where inference using asymptotic variance may not be valid.
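A hedged sketch of a percentile bootstrap interval for the attributable risk computed from a 2×2 case-control table, resampling cases and controls separately to respect the design; the delta-method and exact-test details of the paper are not reproduced here.

```python
import numpy as np

def attributable_risk(a, b, c, d):
    """Attributable risk from a case-control 2x2 table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    return (a * d - b * c) / (d * (a + b))

def ar_bootstrap_ci(a, b, c, d, n_boot=5000, level=0.95, rng=None):
    """Percentile bootstrap CI for attributable risk; cases and controls are resampled
    separately because the case-control design fixes those two margins."""
    rng = rng or np.random.default_rng(0)
    n_case, n_ctrl = a + b, c + d
    boot = []
    for _ in range(n_boot):
        a_b = rng.binomial(n_case, a / n_case)        # resampled number of exposed cases
        c_b = rng.binomial(n_ctrl, c / n_ctrl)        # resampled number of exposed controls
        d_b = n_ctrl - c_b
        if d_b == 0:
            continue                                  # skip degenerate resamples
        boot.append(attributable_risk(a_b, n_case - a_b, c_b, d_b))
    alpha = 1.0 - level
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])
```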

15.
"This article presents estimates of net coverage of the national population in the 1990 [U.S.] census, based on the method of demographic analysis. The general techniques of demographic analysis as an analytic tool for coverage measurement are discussed, including use of the demographic accounting equation, data components, and strengths and limitations of the method. Patterns of coverage displayed by the 1990 estimates are described, along with similarities or differences from comparable demographic estimates for previous censuses....A final section presents the results of the first statistical assessment of the uncertainty in the demographic coverage estimates for 1990." Comments by Clifford C. Clogg and Christine L. Himes (pp. 1,072-4) and Jeffrey S. Passel (pp. 1,074-7) and a rejoinder by the authors (pp. 1,077-9) are included.  相似文献   

16.
"Net undercount rates in the U.S. decennial census have been steadily declining over the last several censuses. Differential undercounts among race groups and geographic areas, however, appear to persist. In the following, we examine and compare several methodologies for providing small area estimates of census coverage by constructing artificial populations. Measures of performance are also introduced to assess the various small area estimates. Synthetic estimation in combination with regression modelling provide the best results over the methods considered. Sampling error effects are also simulated. The results form the basis for determining coverage evaluation survey small area estimates of the 1900 decennial census."  相似文献   

17.
This paper compares the ordinary unweighted average, weighted average, and maximum likelihood methods for estimating a common bioactivity from multiple parallel line bioassays. Some of these or similar methods are also used in meta-analysis. Based on a simulation study, these methods are assessed by comparing coverage probabilities of the true relative bioactivity and the length of the confidence intervals computed for these methods. The ordinary unweighted average method outperforms all statistical methods by consistently giving the best coverage probability but with somewhat wider confidence intervals. The weighted average methods give good coverage and smaller confidence intervals when combining homogeneous bioactivities. For heterogeneous bioactivities, these methods work well when a liberal significance level for testing homogeneity of bioactivities is used. The maximum likelihood methods give good coverage when homogeneous bioactivities are considered. Overall, the preferred methods are the ordinary unweighted average and two weighted average methods that were specifically developed for bioassays.
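As a rough illustration of the contrast between an ordinary unweighted average and an inverse-variance weighted average of log relative potencies, here is a generic combination sketch; the bioassay-specific weighted methods evaluated in the paper are more elaborate than this, so treat the formulas below as a simplified stand-in.

```python
import numpy as np
from scipy import stats

def combine_potencies(estimates, variances, level=0.95):
    """Unweighted and inverse-variance weighted combination of log relative potency
    estimates from several parallel-line bioassays (generic meta-analysis-style sketch)."""
    est, var = np.asarray(estimates, float), np.asarray(variances, float)
    k = est.size
    z = stats.norm.ppf(0.5 + level / 2.0)

    # Ordinary unweighted average: standard error from the between-assay spread.
    m_u = est.mean()
    se_u = est.std(ddof=1) / np.sqrt(k)

    # Inverse-variance weighted average (assumes homogeneous bioactivities).
    w = 1.0 / var
    m_w = np.sum(w * est) / w.sum()
    se_w = np.sqrt(1.0 / w.sum())

    return {"unweighted": (m_u, m_u - z * se_u, m_u + z * se_u),
            "weighted":   (m_w, m_w - z * se_w, m_w + z * se_w)}
```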

18.
We respond to criticism leveled at bootstrap confidence intervals for the correlation coefficient by recent authors by arguing that in the correlation coefficient case, non-standard methods should be employed. We propose two such methods. The first is a bootstrap coverage correction algorithm using iterated bootstrap techniques (Hall, 1986; Beran, 1987a; Hall and Martin, 1988) applied to ordinary percentile-method intervals (Efron, 1979), giving intervals with high coverage accuracy and stable lengths and endpoints. The simulation study carried out for this method gives results for sample sizes 8, 10, and 12 in three parent populations. The second technique involves the construction of percentile-t bootstrap confidence intervals for a transformed correlation coefficient, followed by an inversion of the transformation, to obtain “transformed percentile-t” intervals for the correlation coefficient. In particular, Fisher's z-transformation is used, and nonparametric delta method and jackknife variance estimates are used to Studentize the transformed correlation coefficient, with the jackknife-Studentized transformed percentile-t interval yielding the better coverage accuracy, in general. Percentile-t intervals constructed without first using the transformation perform very poorly, having large expected lengths and erratically fluctuating endpoints. The simulation study illustrating this technique gives results for sample sizes 10, 15 and 20 in four parent populations. Our techniques provide confidence intervals for the correlation coefficient which have good coverage accuracy (unlike ordinary percentile intervals), and stable lengths and endpoints (unlike ordinary percentile-t intervals).
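A sketch of the "transformed percentile-t" idea described above: Fisher's z-transformation is applied to the correlation, the transformed statistic is Studentized with a jackknife standard error inside each bootstrap resample, and the resulting interval is inverted back with tanh. The guard clauses and resample count are pragmatic assumptions for small samples, not the paper's exact settings.

```python
import numpy as np

def fisher_z(r):
    return np.arctanh(r)

def jackknife_se_z(x, y):
    """Jackknife standard error of the Fisher z-transformed correlation."""
    n = len(x)
    z_i = np.array([fisher_z(np.corrcoef(np.delete(x, i), np.delete(y, i))[0, 1])
                    for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((z_i - z_i.mean()) ** 2))

def transformed_percentile_t_ci(x, y, n_boot=2000, level=0.95, rng=None):
    """Percentile-t interval for Fisher's z, inverted back to the correlation scale."""
    rng = rng or np.random.default_rng(0)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    z_hat = fisher_z(np.corrcoef(x, y)[0, 1])
    se_hat = jackknife_se_z(x, y)

    t_stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, yb = x[idx], y[idx]
        zb = fisher_z(np.corrcoef(xb, yb)[0, 1])
        se_b = jackknife_se_z(xb, yb)
        if not np.isfinite(zb) or not np.isfinite(se_b) or se_b == 0:
            continue                                  # drop degenerate resamples
        t_stats.append((zb - z_hat) / se_b)

    alpha = 1.0 - level
    t_lo, t_hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    return np.tanh(z_hat - t_hi * se_hat), np.tanh(z_hat - t_lo * se_hat)
```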

19.
We consider the problem of robust M-estimation of a vector of regression parameters, when the errors are dependent. We assume a weakly stationary, but otherwise quite general dependence structure. Our model allows for the representation of the correlations of any time series of finite length. We first construct initial estimates of the regression, scale, and autocorrelation parameters. The initial autocorrelation estimates are used to transform the model to one of approximate independence. In this transformed model, final one-step M-estimates are calculated. Under appropriate assumptions, the regression estimates so obtained are asymptotically normal, with a variance-covariance structure identical to that in the case in which the autocorrelations are known a priori. The results of a simulation study are given. Two versions of our estimator are compared with the L1-estimator and several Huber-type M-estimators. In terms of bias and mean squared error, the estimators are generally very close. In terms of the coverage probabilities of confidence intervals, our estimators appear to be quite superior to both the L1-estimator and the other estimators. The simulations also indicate that the approach to normality is quite fast.
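The following is a simplified sketch of the transform-then-M-estimate strategy: initial residuals yield an autocorrelation estimate, the model is prewhitened toward approximate independence, and a Huber M-estimator is fit to the transformed model. An AR(1) structure and full IRLS iteration are simplifying assumptions; the paper handles a general weakly stationary structure and uses a one-step estimate.

```python
import numpy as np

def huber_psi(u, c=1.345):
    """Huber's psi function."""
    return np.clip(u, -c, c)

def m_estimate_ar1(X, y, n_iter=20):
    """Two-stage robust fit: (1) OLS residuals give a lag-1 autocorrelation estimate,
    (2) the model is prewhitened and refit by iteratively reweighted Huber M-estimation."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)               # initial OLS estimate
    res = y - X @ beta
    rho = np.sum(res[1:] * res[:-1]) / np.sum(res[:-1] ** 2)   # lag-1 autocorrelation

    Xt = X[1:] - rho * X[:-1]                                   # prewhitening transform to
    yt = y[1:] - rho * y[:-1]                                   # approximate independence

    beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
    for _ in range(n_iter):
        r = yt - Xt @ beta
        s = np.median(np.abs(r)) / 0.6745                       # robust scale (MAD)
        u = r / s
        w = np.ones_like(u)
        nz = u != 0
        w[nz] = huber_psi(u[nz]) / u[nz]                        # Huber weights psi(u)/u
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(Xt * sw[:, None], yt * sw, rcond=None)
    return beta, rho
```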

20.
We recently proposed a representation of the bivariate survivor function as a mapping of the hazard function for truncated failure time variates. The representation led to a class of estimators that includes van der Laan’s repaired nonparametric maximum likelihood estimator (NPMLE) as an important special case. We proposed a Greenwood-like variance estimator for the repaired NPMLE but found somewhat poor agreement between the empirical variance estimates and these analytic estimates for the sample sizes and bandwidths considered in our simulation study. The simulation results also confirmed those of others in showing slightly inferior performance for the repaired NPMLE compared to other competing estimators as well as a sensitivity to bandwidth choice in moderate sized samples. Despite its attractive asymptotic properties, the repaired NPMLE has drawbacks that hinder its practical application. This paper presents a modification of the repaired NPMLE that improves its performance in moderate sized samples and renders it less sensitive to the choice of bandwidth. Along with this modified estimator, more extensive simulation studies of the repaired NPMLE and Greenwood-like variance estimates are presented. The methods are then applied to a real data example. This revised version was published online in September 2005 with a correction to the second author's name.
