Similar Documents
20 similar documents found
1.
An analysis of inter-rater agreement is presented. We study the problem with several raters using a Bayesian model based on the Dirichlet distribution. Inter-rater agreement, including global and partial agreement, is studied by determining the joint posterior distribution of the raters. Posterior distributions are computed with a direct resampling technique. Our method is illustrated with an example involving four residents who are diagnosing 12 psychiatric patients suspected of having a thought disorder. First, total agreement among the four raters is examined with a Bayesian testing technique, using both analytical and resampling methods. Then, partial agreement is examined by determining the posterior probability of certain orderings among the rater means. The power of resampling is revealed by its ability to compute the complex multiple integrals that represent various posterior probabilities of agreement and disagreement among several raters.
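As a rough illustration of the direct-resampling idea, the sketch below draws Dirichlet posteriors for each rater's category probabilities and estimates posterior probabilities of global and partial agreement by Monte Carlo. The counts, category scores, and tolerance are hypothetical stand-ins, not the study's data.

```python
# A minimal sketch of Bayesian inter-rater agreement via direct resampling.
import numpy as np

rng = np.random.default_rng(0)

# Each row: one rater's counts over 3 ordinal severity categories (hypothetical).
counts = np.array([[5, 4, 3],
                   [6, 3, 3],
                   [4, 5, 3],
                   [3, 5, 4]])
scores = np.array([1.0, 2.0, 3.0])   # numeric scores attached to categories
n_draws = 100_000

# Posterior for each rater's category probabilities: Dirichlet(1 + counts),
# i.e. a uniform prior.  Draw jointly and compute each rater's mean score.
theta = np.stack([rng.dirichlet(1 + c, size=n_draws) for c in counts], axis=1)
means = theta @ scores                      # shape (n_draws, 4 raters)

# "Global agreement": all four posterior mean scores within a tolerance band.
tol = 0.1
global_agree = (means.max(axis=1) - means.min(axis=1) < tol).mean()

# "Partial agreement": posterior probability of an ordering among rater means.
ordering = ((means[:, 0] <= means[:, 1]) & (means[:, 1] <= means[:, 2])).mean()

print(f"P(global agreement within {tol}) ~ {global_agree:.3f}")
print(f"P(mu1 <= mu2 <= mu3 | data)   ~ {ordering:.3f}")
```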

2.
In the regression model with censored data, it is not straightforward to estimate the covariances of the regression estimators, since their asymptotic covariances may involve the unknown error density function and its derivative. In this article, a resampling method for making inferences on the parameter, based on some estimating functions, is discussed for the censored regression model. The inference procedures are associated with a weight function. To find the best weight functions for the proposed procedures, extensive simulations are performed. The validity of the approximation to the distribution of the estimator by a resampling technique is also examined visually. Implementation of the procedures is discussed and illustrated in a real data example.

3.
Semiparametric accelerated failure time (AFT) models directly relate expected failure times to covariates and are a useful alternative to models that work on the hazard function or the survival function. For case-cohort data, much less development has been done with AFT models. In addition to covariates being missing for controls outside the subcohort, the challenges of AFT model inference with a full cohort remain: the regression parameter estimator is hard to compute because the most widely used rank-based estimating equations are not smooth, its variance depends on the unspecified error distribution, and most methods rely on a computationally intensive bootstrap to estimate it. We propose fast rank-based inference procedures for AFT models, applying recent methodological advances to the context of case-cohort data. Parameters are estimated with an induced smoothing approach that smooths the estimating functions and facilitates the numerical solution. Variance estimators are obtained through efficient resampling methods for nonsmooth estimating functions that avoid a full-blown bootstrap. Simulation studies suggest that the recommended procedure provides fast and valid inferences among several competing procedures. Application to a tumor study demonstrates the utility of the proposed method in routine data analysis.

4.
In recent years, different approaches for the analysis of time-to-event data in the presence of competing risks, i.e. when subjects can fail from one of two or more mutually exclusive types of event, have been introduced. These approaches, presented in the statistical literature, focus either on cause-specific or on subdistribution hazard rates. Many of the newer approaches use complicated weighting techniques or resampling methods that do not allow an analytical evaluation. Simulation studies therefore often replace analytical comparisons, since they can be performed more easily and allow investigation of non-standard scenarios. For adequate simulation studies, the generation of appropriate random numbers is essential. We present an approach to generate competing risks data following flexible, prespecified subdistribution hazards. Event times and types are simulated using possibly time-dependent cause-specific hazards, chosen so that the generated data follow the desired subdistribution hazards or hazard ratios, respectively.
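A minimal sketch of the general recipe in its simplest special case: constant cause-specific hazards, under which event times and types are easy to generate. The paper's construction targets flexible, possibly time-dependent subdistribution hazards; the hazard and censoring values here are illustrative assumptions.

```python
# Competing risks generation via constant cause-specific hazards (special case).
import numpy as np

rng = np.random.default_rng(1)

def simulate_competing_risks(n, lam1=0.5, lam2=0.3, cens_rate=0.2):
    """Event times from total hazard lam1+lam2; cause 1 with prob lam1/(lam1+lam2)."""
    total = lam1 + lam2
    t_event = rng.exponential(1.0 / total, size=n)
    cause = np.where(rng.random(n) < lam1 / total, 1, 2)
    t_cens = rng.exponential(1.0 / cens_rate, size=n)
    time = np.minimum(t_event, t_cens)
    status = np.where(t_event <= t_cens, cause, 0)   # 0 = censored
    return time, status

time, status = simulate_competing_risks(1000)
print({k: int((status == k).sum()) for k in (0, 1, 2)})
```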

5.
Importance resampling is an approach that uses exponential tilting to reduce the resampling necessary for the construction of nonparametric bootstrap confidence intervals. The properties of bootstrap importance confidence intervals are well established when the statistic is a smooth function of means and when there is no censoring. However, in the framework of survival or time-to-event data, the asymptotic properties of importance resampling have not been rigorously studied, mainly because of the unduly complicated theory incurred when data are censored. This paper uses extensive simulation to show that, for parameter estimates arising from fitting Cox proportional hazards models, importance bootstrap confidence intervals can be constructed if the importance resampling probabilities of the records for the n individuals in the study are determined by the empirical influence function for the parameter of interest. Our results show that, compared to uniform resampling, importance resampling improves the relative mean-squared-error (MSE) efficiency by a factor of nine (for n = 200). The efficiency increases significantly with sample size, is mildly associated with the amount of censoring, but decreases slightly as the number of bootstrap resamples increases. The extra CPU time required for calculating importance resamples is negligible compared to the large improvement in MSE efficiency. The method is illustrated through an application to data on chronic lymphocytic leukemia, which highlights that the bootstrap confidence interval is the preferred alternative to large-sample inferences when the distribution of a specific covariate deviates from normality. Our results imply that, because of its computational efficiency, importance resampling is recommended whenever bootstrap methodology is implemented in a survival framework. Its use is particularly important when complex covariates are involved or the survival problem to be solved is part of a larger problem; for instance, when determining confidence bounds for models linking survival time with clusters identified in gene expression microarray data.
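The core mechanics can be sketched with a toy statistic. Below, the sample mean stands in for a Cox coefficient: resampling probabilities are exponentially tilted along the empirical influence values, and each bootstrap estimate is reweighted by its likelihood ratio against uniform resampling. The tilting constant and tail point are illustrative choices, not the paper's settings.

```python
# A minimal sketch of importance bootstrap via exponential tilting.
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(1.0, size=200)
n = len(x)

# Empirical influence values of the mean (stand-in for the Cox-model case).
infl = x - x.mean()

# Exponential tilting: push resampling probability toward the lower tail.
# lam is illustrative; ideally it is chosen so the tilted mean sits near t.
lam = -30.0
p = np.exp(lam * infl / n)
p /= p.sum()

t = x.mean() - 2 * x.std(ddof=1) / np.sqrt(n)   # tail point: ~2 SE below mean
B, est = 2000, 0.0
for _ in range(B):
    idx = rng.choice(n, size=n, p=p)
    counts = np.bincount(idx, minlength=n)
    w = np.exp(-(counts * np.log(n * p)).sum())  # LR of uniform vs tilted draw
    est += w * (x[idx].mean() <= t)
print(f"importance-bootstrap estimate of P(mean* <= t): {est / B:.4f}")
print("(roughly 0.023 under the normal approximation to the bootstrap law)")
```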

6.
Using a direct resampling process, a Bayesian approach is developed for the analysis of the shift-point problem. In many problems it is straightforward to isolate the marginal posterior distribution of the shift-point parameter and the conditional distribution of some of the parameters given the shift point and the other remaining parameters. When this is possible, a direct sampling approach is easily implemented, whereby standard random number generators can be used to generate samples from the joint posterior distribution of all the parameters in the model. This technique is illustrated with examples involving one shift for Poisson processes and regression models.
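For the one-shift Poisson case, the direct sampling scheme is easy to sketch: the marginal posterior of the shift point is available in closed form under gamma-Poisson conjugacy, and the two rates are then drawn conditionally. The simulated data and Gamma(a, b) hyperparameters below are illustrative assumptions.

```python
# Direct sampling for a one-shift Poisson model: draw k from its marginal
# posterior, then the two rates from their conditional gamma posteriors.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
y = np.concatenate([rng.poisson(4.0, 30), rng.poisson(1.5, 20)])  # shift at 30
n, a, b = len(y), 1.0, 1.0

# log p(k | y) up to a constant, for k = 1, ..., n-1
S = np.cumsum(y)
ks = np.arange(1, n)
lp = (gammaln(a + S[ks - 1]) - (a + S[ks - 1]) * np.log(b + ks)
      + gammaln(a + S[-1] - S[ks - 1])
      - (a + S[-1] - S[ks - 1]) * np.log(b + n - ks))
post_k = np.exp(lp - lp.max())
post_k /= post_k.sum()

# Direct sampling from the joint posterior of (k, lambda1, lambda2).
draws = 10_000
k = rng.choice(ks, size=draws, p=post_k)
lam1 = rng.gamma(a + S[k - 1], 1.0 / (b + k))
lam2 = rng.gamma(a + S[-1] - S[k - 1], 1.0 / (b + n - k))
print("posterior mode of k:", ks[post_k.argmax()])
print("E[lambda1|y] ~ %.2f, E[lambda2|y] ~ %.2f" % (lam1.mean(), lam2.mean()))
```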

7.
Quasi-random sequences are known to give efficient numerical integration rules in many Bayesian statistical problems where the posterior distribution can be transformed into periodic functions on the n-dimensional hypercube. From this idea we develop a quasi-random approach to the generation of resamples used for Monte Carlo approximations to bootstrap estimates of bias, variance and distribution functions. We demonstrate a major difference between quasi-random bootstrap resamples, which are generated by deterministic algorithms and have no true randomness, and the usual pseudo-random bootstrap resamples generated by the classical bootstrap approach. Various quasi-random approaches are considered and are shown via a simulation study to result in approximants that are competitive in terms of efficiency when compared with other bootstrap Monte Carlo procedures such as balanced and antithetic resampling.
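A minimal sketch of the idea: each point of a Sobol' sequence in [0,1)^n is mapped to a bootstrap resample of the n observations, and the resulting resamples drive the usual Monte Carlo approximation (here, to the bootstrap variance of the mean). This uses SciPy's quasi-Monte Carlo module; the data and sizes are illustrative.

```python
# Quasi-random vs pseudo-random bootstrap resampling for a variance estimate.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(4)
x = rng.normal(size=50)
n = len(x)

# Quasi-random resamples from a scrambled Sobol' sequence (2^10 resamples).
u = qmc.Sobol(d=n, scramble=True, seed=4).random_base2(m=10)
idx_qmc = np.minimum((u * n).astype(int), n - 1)
var_qmc = x[idx_qmc].mean(axis=1).var()

# Classical pseudo-random bootstrap with the same number of resamples.
idx_prn = rng.integers(0, n, size=idx_qmc.shape)
var_prn = x[idx_prn].mean(axis=1).var()

print(f"bootstrap variance of the mean: qmc {var_qmc:.5f}, prn {var_prn:.5f}")
print(f"theoretical s^2/n:              {x.var(ddof=1) / n:.5f}")
```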

8.
In this paper, we develop Bayes factor based testing procedures for the presence of a correlation or a partial correlation. The proposed Bayesian tests are obtained by restricting the class of alternative hypotheses to maximize the probability of rejecting the null hypothesis when the Bayes factor is larger than a specified threshold. It turns out that the tests depend simply on the frequentist t-statistics with the associated critical values, and can thus be easily calculated with a spreadsheet in Excel, in fact by just adding one more step after one has performed the frequentist correlation tests. In addition, they are able to yield a decision identical to the frequentist paradigm, provided that the evidence threshold of the Bayesian tests is determined by the significance level of the frequentist paradigm. We illustrate the performance of the proposed procedures through simulated and real-data examples.
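The frequentist ingredients these tests build on are simple to sketch: the t-statistic of a sample correlation and its critical value. The paper's Bayes-factor calibration of the threshold is not reproduced below; the rejection rule shown is the ordinary level-alpha t-test, which the abstract says the Bayesian test can be made to match.

```python
# The t-statistic of a sample correlation and the usual two-sided test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
z = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=60)
x, y = z[:, 0], z[:, 1]
n = len(x)

r = np.corrcoef(x, y)[0, 1]
t = r * np.sqrt((n - 2) / (1 - r**2))          # t-statistic of the correlation
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)   # two-sided critical value

print(f"r = {r:.3f}, t = {t:.3f}, reject H0: rho = 0? {abs(t) > t_crit}")
```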

9.
In statistical practice, inferences on standardized regression coefficients are often required, but they are complicated by the fact that the standardized coefficients are nonlinear functions of the parameters, so standard textbook results are simply wrong. Within the frequentist domain, asymptotic delta methods can be used to construct confidence intervals for the standardized coefficients with proper coverage probabilities. Alternatively, Bayesian methods solve similar and other inferential problems by simulating from the posterior distribution of the coefficients. In this paper, we present Bayesian procedures that provide comprehensive solutions for inferences on the standardized coefficients. Simple computing algorithms are developed to generate posterior samples with no autocorrelation, based on both noninformative improper and informative proper prior distributions. Simulation studies show that Bayesian credible intervals constructed by our approaches have comparable and even better statistical properties than their frequentist counterparts, particularly in the presence of collinearity. In addition, our approaches solve some meaningful inferential problems that are difficult if not impossible from the frequentist standpoint, including identifying joint rankings of multiple standardized coefficients and making optimal decisions concerning their sizes and comparisons. We illustrate applications of our approaches through examples and make sample R functions available for implementing our proposed methods.
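Under the noninformative improper prior p(beta, sigma^2) proportional to 1/sigma^2, exact posterior draws with no autocorrelation are straightforward, and standardized coefficients are just rescaled draws. The sketch below (simulated data, hypothetical sizes) illustrates credible intervals and a joint-ranking probability; it is one plausible reading of the approach, not the paper's exact algorithm.

```python
# Exact posterior sampling for standardized regression coefficients under
# the noninformative prior: sigma^2 | y is scaled inverse-chi-square and
# beta | sigma^2, y is normal, so the draws are independent.
import numpy as np

rng = np.random.default_rng(6)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

Xd = np.column_stack([np.ones(n), X])            # add intercept
XtX_inv = np.linalg.inv(Xd.T @ Xd)
beta_hat = XtX_inv @ Xd.T @ y
sse = ((y - Xd @ beta_hat) ** 2).sum()

draws = 10_000
sigma2 = sse / rng.chisquare(n - p - 1, size=draws)
L = np.linalg.cholesky(XtX_inv)
beta = beta_hat + np.sqrt(sigma2)[:, None] * (rng.normal(size=(draws, p + 1)) @ L.T)

# Standardized coefficients: beta_j * sd(x_j) / sd(y), per posterior draw.
std_beta = beta[:, 1:] * X.std(axis=0, ddof=1) / y.std(ddof=1)
ci = np.percentile(std_beta, [2.5, 97.5], axis=0)
print("95% credible intervals:\n", ci.T.round(3))
# Joint-ranking inference, e.g. P(|b1| > |b2| > |b3| | data):
order = (np.abs(std_beta[:, 0]) > np.abs(std_beta[:, 1])) & \
        (np.abs(std_beta[:, 1]) > np.abs(std_beta[:, 2]))
print("P(|b1|>|b2|>|b3| | data) ~", order.mean().round(3))
```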

10.
Recent research has made clear that missing values in datasets are inevitable. Imputation is one of several methods introduced to overcome this issue: imputation techniques address missing data by permanently replacing missing values with reasonable estimates. These procedures have many benefits relative to their drawbacks, but how they behave is not always transparent, which creates mistrust in the analytical results. One approach to evaluating the outcome of the imputation process is to estimate the uncertainty in the imputed data. Nonparametric methods are appropriate for estimating this uncertainty when the data do not follow any particular distribution. This paper presents a nonparametric method, based on the Wilcoxon test statistic, for estimating and testing the significance of imputation uncertainty; it can be employed to assess the precision of the values created by imputation methods, to judge the suitability of the imputation process for a dataset, and to evaluate the influence of competing imputation methods applied to the same dataset. The proposed approach is compared with other nonparametric resampling methods, including the bootstrap and the jackknife, for estimating uncertainty in data imputed under the Bayesian bootstrap imputation method. The ideas supporting the proposed method are clarified in detail, and a simulation study illustrates how the approach works in practical situations.
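As a rough stand-in, since the abstract does not spell out the exact Wilcoxon-based statistic: the sketch below performs a Bayesian bootstrap imputation and then compares imputed and observed values with a rank-sum test. All settings are illustrative assumptions.

```python
# Bayesian bootstrap imputation followed by a rank-based comparison of the
# imputed and observed values (a stand-in for the paper's exact statistic).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
x = rng.normal(10, 2, size=200)
miss = rng.random(200) < 0.25          # 25% missing completely at random
obs = x[~miss]

# Bayesian bootstrap imputation: Dirichlet(1,...,1) weights over observed donors.
w = rng.dirichlet(np.ones(len(obs)))
imputed = rng.choice(obs, size=int(miss.sum()), p=w)

stat, pval = mannwhitneyu(imputed, obs, alternative="two-sided")
print(f"rank-sum comparison of imputed vs observed: p = {pval:.3f}")
```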

11.
In many applications, the parameters of interest are estimated by solving non-smooth estimating functions with U-statistic structure. Because the asymptotic covariance matrix of the estimator generally involves the underlying density function, resampling methods are often used to bypass the difficulty of non-parametric density estimation. Despite its simplicity, the resulting covariance matrix estimator depends on the nature of the resampling, and the method can be time-consuming when the number of replications is large. Furthermore, the inferences are based on the normal approximation, which may not be accurate for practical sample sizes. In this paper, we propose a jackknife empirical likelihood-based inferential procedure for non-smooth estimating functions. Standard chi-square distributions are used to calculate the p-value and to construct confidence intervals. Extensive simulation studies and two real examples are provided to illustrate its practical utilities.

12.
Progressively censored data from a classical Pareto distribution are used to make inferences about its shape and precision parameters and the reliability function. An approximation due to Tierney and Kadane (1986) is used for obtaining the Bayes estimates. Bayesian prediction of further observations from this distribution is also considered. For the Bayesian approach, conjugate priors for either the one- or the two-parameter case are considered. To illustrate the given procedures, a numerical example and a simulation study are given.
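The Tierney-Kadane approximation itself is easy to sketch in a conjugate toy example where the exact posterior mean is available for comparison; the Pareto/progressive-censoring setting of the paper is not reproduced here.

```python
# Tierney-Kadane (1986) approximation to E[g(theta) | x] as a ratio of two
# Laplace approximations, checked against an exact conjugate answer.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(12)
x = rng.exponential(1 / 2.0, size=30)          # true rate 2
a, b, n, s = 2.0, 1.0, len(x), x.sum()

def ell(t):    # log posterior kernel: Gamma(a, b) prior, exponential likelihood
    return (a - 1 + n) * np.log(t) - (b + s) * t

def ell_star(t):                                # add log g(theta), g = identity
    return ell(t) + np.log(t)

def laplace_piece(f):
    opt = minimize_scalar(lambda t: -f(t), bounds=(1e-6, 50), method="bounded")
    t_hat, h = opt.x, 1e-4
    curv = (f(t_hat + h) - 2 * f(t_hat) + f(t_hat - h)) / h**2  # 2nd derivative
    return f(t_hat), np.sqrt(-1.0 / curv)

f0, s0 = laplace_piece(ell)
f1, s1 = laplace_piece(ell_star)
tk = (s1 / s0) * np.exp(f1 - f0)                # Tierney-Kadane estimate

print(f"Tierney-Kadane E[theta|x] ~ {tk:.4f}")
print(f"exact gamma posterior mean: {(a + n) / (b + s):.4f}")
```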

13.
We consider the on-line Bayesian analysis of data by using a hidden Markov model, where inference is tractable conditional on the history of the state of the hidden component. A new particle filter algorithm is introduced and shown to produce promising results when analysing data of this type. The algorithm is similar to the mixture Kalman filter but uses a different resampling algorithm. We prove that this resampling algorithm is computationally efficient and optimal, among unbiased resampling algorithms, in terms of minimizing a squared error loss function. In a practical example, that of estimating break points from well-log data, our new particle filter outperforms two other particle filters, one of which is the mixture Kalman filter, by between one and two orders of magnitude.
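For orientation, here is systematic resampling, a standard unbiased resampling step for particle filters. It is a common baseline, not the paper's optimal algorithm, which additionally minimizes a squared error loss among unbiased schemes.

```python
# Systematic resampling: unbiased (E[count_i] = n * w_i) and O(n).
import numpy as np

def systematic_resample(weights, rng):
    """Return indices of resampled particles."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)

rng = np.random.default_rng(8)
w = rng.dirichlet(np.ones(10))
print("weights:", w.round(3))
print("resampled indices:", systematic_resample(w, rng))
```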

14.
Standard methods for analyzing binomial regression data rely on asymptotic inferences. Bayesian methods can be performed using simple computations, and they apply for any sample size. We provide a relatively complete discussion of Bayesian inferences for binomial regression with emphasis on inferences for the probability of “success.” Furthermore, we illustrate diagnostic tools, perform model selection among nonnested models, and examine the sensitivity of the Bayesian methods.
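A minimal sketch of Bayesian logistic (binomial) regression via random-walk Metropolis, with posterior inference for the success probability at a covariate value. The priors, data, and step size are illustrative assumptions; the paper's exact computations may differ.

```python
# Random-walk Metropolis for a two-parameter logistic regression.
import numpy as np

rng = np.random.default_rng(13)
x = rng.normal(size=80)
beta_true = np.array([-0.5, 1.2])
prob = 1 / (1 + np.exp(-(beta_true[0] + beta_true[1] * x)))
y = rng.binomial(1, prob)

def log_post(b):
    eta = b[0] + b[1] * x
    # Bernoulli log-likelihood + N(0, 10^2) priors on both coefficients.
    return (y * eta - np.log1p(np.exp(eta))).sum() - (b @ b) / (2 * 10**2)

draws, b = [], np.zeros(2)
lp = log_post(b)
for _ in range(20_000):
    prop = b + 0.3 * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        b, lp = prop, lp_prop
    draws.append(b)
draws = np.array(draws)[5000:]          # drop burn-in

# Posterior for the success probability at x = 1.
p_at_1 = 1 / (1 + np.exp(-(draws[:, 0] + draws[:, 1])))
print("posterior mean of P(success | x=1):", p_at_1.mean().round(3))
print("95% interval:", np.percentile(p_at_1, [2.5, 97.5]).round(3))
```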

15.
Although bootstrapping has become widely used in statistical analysis, there has been little reported concerning bootstrapped Bayesian analyses, especially when there is proper prior information concerning the parameter of interest. In this paper, we first propose an operationally implementable definition of a Bayesian bootstrap. Thereafter, in simulated studies of the estimation of means and variances, this Bayesian bootstrap is compared to various parametric procedures. It turns out that little information is lost in using the Bayesian bootstrap even when the sampling distribution is known. On the other hand, the parametric procedures are at times very sensitive to incorrectly specified sampling distributions, implying that the Bayesian bootstrap is a very robust procedure for determining the posterior distribution of the parameter.
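For concreteness, here is the classical Rubin (1981) Bayesian bootstrap for the mean and variance; whether it coincides with the operational definition proposed in the paper, which also accommodates proper prior information, is not determined by the abstract.

```python
# Rubin's Bayesian bootstrap: posterior draws of a functional are weighted
# statistics with flat Dirichlet(1, ..., 1) weights over the observed data.
import numpy as np

rng = np.random.default_rng(9)
x = rng.normal(5, 2, size=40)

draws = 10_000
w = rng.dirichlet(np.ones(len(x)), size=draws)   # flat Dirichlet weights
post_mean = w @ x                                 # posterior draws of the mean
post_var = (w * (x - (w @ x)[:, None]) ** 2).sum(axis=1)  # and of the variance

print("posterior mean of mu:", post_mean.mean().round(3))
print("95% interval for mu:", np.percentile(post_mean, [2.5, 97.5]).round(3))
print("posterior mean of sigma^2:", post_var.mean().round(3))
```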

16.
This paper is concerned with Bayesian estimation and prediction in the context of start-up demonstration tests in which rejection of a unit is possible when a pre-specified number of failures is observed prior to obtaining the number of consecutive successes required for acceptance of the unit. A method for implementing Bayesian inference on the probability of success is developed for use when the test result of each start-up is not reported or even recorded, and only the number of trials until termination of the testing is available. Some errors in the related literature on the Bayesian analysis of start-up demonstration tests are corrected. The method developed in this paper is a Markov chain Monte Carlo (MCMC) method incorporating data augmentation, and it additionally enables Bayesian posterior inference on the number of failures given the number of start-up trials until termination to be made, along with Bayesian predictive inferences on the number of start-up trials and the number of failures until termination for any future run of the start-up demonstration test. An illustrative example is also included.
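The test mechanics are easy to simulate forward: accept after c consecutive successes, reject once d failures accumulate. The sketch below generates the predictive quantities of interest given p; the paper's MCMC with data augmentation for the posterior of p is not reproduced.

```python
# Forward simulation of a start-up demonstration test.
import numpy as np

rng = np.random.default_rng(10)

def run_test(p, c=5, d=3):
    """Return (n_trials, n_failures, accepted) for one demonstration test."""
    consec = failures = trials = 0
    while True:
        trials += 1
        if rng.random() < p:          # successful start-up
            consec += 1
            if consec == c:
                return trials, failures, True
        else:
            consec = 0
            failures += 1
            if failures == d:
                return trials, failures, False

sims = [run_test(0.8) for _ in range(10_000)]
trials, fails, acc = map(np.array, zip(*sims))
print(f"P(accept) ~ {acc.mean():.3f}, E[trials] ~ {trials.mean():.2f}")
```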

17.
Multivariate failure time data arise when each study subject can potentially experience several types of failures or recurrences of a certain phenomenon, or when failure times are sampled in clusters. We formulate the marginal distributions of such multivariate data with semiparametric accelerated failure time models (i.e. linear regression models for log-transformed failure times with arbitrary error distributions) while leaving the dependence structures for related failure times completely unspecified. We develop rank-based monotone estimating functions for the regression parameters of these marginal models based on right-censored observations. The estimating equations can be easily solved via linear programming. The resultant estimators are consistent and asymptotically normal. The limiting covariance matrices can be readily estimated by a novel resampling approach, which does not involve non-parametric density estimation or evaluation of numerical derivatives. The proposed estimators represent consistent roots of the potentially non-monotone estimating equations based on weighted log-rank statistics. Simulation studies show that the new inference procedures perform well in small samples. Illustrations with real medical data are provided.

18.
Inferences are made concerning population proportions when data are not missing at random. Both one-sample and two-sample situations are considered, with examples from clinical trials. The one-sample situation involves response-related incomplete data in a study conducted to make inferences about a proportion. The two-sample problem involves comparing two treatments in clinical trials when there are dropouts due to both the treatment and the response to the treatment. Bayes procedures are used for estimating the parameters of interest and testing the hypotheses of interest in these two situations. An ad hoc approach to the classical inference is presented for each of the two situations and compared with the Bayesian approach discussed. To illustrate the theory developed, data from clinical trials of severe head trauma patients at the Medical College of Virginia Head Injury Center from 1984 to 1987 are considered.

19.
In this article, we extended the widely used Bland-Altman graphical technique of comparing two measurements in clinical studies to include an analytical approach using a linear mixed model. The proposed statistical inferences can be conducted easily by commercially available statistical software such as SAS. The linear mixed model approach was illustrated using a real example in a clinical nursing study of oxygen saturation measurements, when functional oxygen saturation was compared against fractional oxy-hemoglobin.
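A minimal sketch of the mixed-model extension using Python's statsmodels instead of SAS: repeated per-subject differences between two methods, a random subject intercept, and limits of agreement from the fitted variance components. The simulated data are illustrative, not the oxygen-saturation data from the study.

```python
# Bland-Altman agreement via a random-intercept linear mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
subjects, reps = 30, 4
sid = np.repeat(np.arange(subjects), reps)
bias, subj_eff = 1.5, rng.normal(0, 1.0, subjects)[sid]
diff = bias + subj_eff + rng.normal(0, 0.8, subjects * reps)
df = pd.DataFrame({"subject": sid, "diff": diff})

# Random-intercept model: diff_ij = mu + b_i + e_ij
fit = smf.mixedlm("diff ~ 1", df, groups="subject").fit()
mu = fit.params["Intercept"]
sd_total = np.sqrt(float(fit.cov_re.iloc[0, 0]) + fit.scale)

print(f"mean bias: {mu:.2f}")
print(f"95% limits of agreement: [{mu - 1.96 * sd_total:.2f}, "
      f"{mu + 1.96 * sd_total:.2f}]")
```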

20.
Checking the compatibility of two given conditional distributions and identifying the corresponding unique compatible marginal distributions are important problems in mathematical statistics, especially in Bayesian inference. In this article, we develop a unified method to check compatibility and uniqueness for two finite discrete conditional distributions. By formulating the compatibility problem as a system of linear equations subject to constraints, it can be reduced to a quadratic optimization problem with box constraints. We also extend the proposed method from two-dimensional cases to higher-dimensional cases. Finally, we show that our method can be easily applied to checking compatibility and uniqueness for a regression function and a conditional distribution. Several numerical examples are used to illustrate the proposed method. Some comparisons with existing methods are also presented.
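A minimal sketch of the reduction: the joint pmf pi must satisfy linear equations pi_ij = a_ij * (column sum of pi) and pi_ij = b_ij * (row sum of pi), plus total mass one and nonnegativity; solving by bounded least squares gives a (near-)zero residual exactly when the conditionals are compatible. The 2x2 conditionals below are constructed from a known joint for illustration; uniqueness checking is not shown.

```python
# Compatibility check for two discrete conditionals via bounded least squares.
import numpy as np
from scipy.optimize import lsq_linear

# A[i, j] = P(X=i | Y=j) (columns sum to 1); B[i, j] = P(Y=j | X=i) (rows sum to 1).
A = np.array([[0.4, 0.2],
              [0.6, 0.8]])
B = np.array([[2/3, 1/3],
              [3/7, 4/7]])
m, n = A.shape
N = m * n

rows = []
for i in range(m):
    for j in range(n):
        e = np.zeros(N); e[i * n + j] = 1.0
        col = np.zeros(N); col[j::n] = A[i, j]               # a_ij * sum_k pi_kj
        rows.append(e - col)
        row = np.zeros(N); row[i * n:(i + 1) * n] = B[i, j]  # b_ij * sum_k pi_ik
        rows.append(e - row)
rows.append(np.ones(N))                                      # total mass = 1
M, rhs = np.vstack(rows), np.r_[np.zeros(2 * N), 1.0]

res = lsq_linear(M, rhs, bounds=(0, 1))
pi = res.x.reshape(m, n)
print("residual norm:", np.linalg.norm(M @ res.x - rhs).round(6))
print("recovered joint:\n", pi.round(4))
```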
