Similar Literature
A total of 20 similar records were found (search time: 31 ms)
1.
We consider point and interval estimation of the unknown parameters of a generalized inverted exponential distribution in the presence of hybrid censoring. The maximum likelihood estimates are obtained using the EM algorithm. We then compute the Fisher information matrix using the missing value principle. Bayes estimates are derived under squared error and general entropy loss functions. Furthermore, approximate Bayes estimates are obtained using the Tierney and Kadane method as well as an importance sampling approach. Asymptotic and highest posterior density intervals are also constructed. The proposed estimates are compared numerically using Monte Carlo simulations, and a real data set is analyzed for illustrative purposes.
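
As a rough, self-contained illustration of the likelihood machinery involved (a complete-sample fit by direct numerical optimization, not the paper's EM algorithm or hybrid-censoring setup), the sketch below estimates the generalized inverted exponential parameters, assuming density f(x; α, λ) = (αλ/x²) e^(−λ/x) (1 − e^(−λ/x))^(α−1); the data and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood of the generalized inverted exponential
    distribution (shape alpha, scale lam) for a complete sample x."""
    alpha, lam = params
    if alpha <= 0 or lam <= 0:
        return np.inf
    z = np.exp(-lam / x)                      # e^{-lambda/x}
    ll = (np.log(alpha) + np.log(lam) - 2 * np.log(x)
          - lam / x + (alpha - 1) * np.log1p(-z))
    return -ll.sum()

rng = np.random.default_rng(0)
# Hypothetical data via inverse-transform sampling from F(x) = 1 - (1 - e^{-lam/x})^alpha
alpha_true, lam_true = 1.5, 2.0
u = rng.uniform(size=200)
x = -lam_true / np.log(1 - (1 - u) ** (1 / alpha_true))

fit = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print("MLE (alpha, lambda):", fit.x)
```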

2.
The problems of constructing tolerance intervals (TIs) in a random effects model and in a mixed linear model are considered. Methods based on the generalized variable (GV) approach and on the modified large sample (MLS) procedure are evaluated with respect to coverage probabilities and expected width in various setups using Monte Carlo simulation. Our comparison studies indicate that the TIs based on the MLS procedure are comparable to or better than those based on the GV approach. As the MLS TIs are in closed form, they are easier to compute than those based on the GV approach. TIs for a two-way nested model are also derived using the MLS method, and their merits are evaluated using simulation. The procedures are illustrated using a practical example.
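
The kind of coverage-probability study described above can be imitated in a much simpler setting. The sketch below is a hypothetical one-sample normal illustration (not the paper's random-effects MLS or GV procedures): it checks by Monte Carlo that a two-sided (p, γ) tolerance interval built with Howe's approximate factor contains at least a proportion p of the population with roughly the nominal confidence γ.

```python
import numpy as np
from scipy.stats import norm, chi2

def howe_factor(n, p=0.90, gamma=0.95):
    """Howe's approximate two-sided tolerance factor for a normal sample."""
    z = norm.ppf((1 + p) / 2)
    return z * np.sqrt((n - 1) * (1 + 1 / n) / chi2.ppf(1 - gamma, n - 1))

def coverage(n=20, p=0.90, gamma=0.95, mu=0.0, sigma=1.0, reps=20000, seed=1):
    rng = np.random.default_rng(seed)
    k = howe_factor(n, p, gamma)
    hits = 0
    for _ in range(reps):
        x = rng.normal(mu, sigma, n)
        lo, hi = x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)
        content = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
        hits += content >= p       # interval captured at least 100p% of the population
    return hits / reps

print("estimated confidence:", coverage())   # should be close to the nominal 0.95
```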

3.
Disease prediction based on longitudinal data can be done using various modeling approaches. Alternative approaches are compared using data from a longitudinal study to predict the onset of disease. The data are modeled using linear mixed-effects models. Posterior probabilities of group membership are computed starting with the first observation and sequentially adding observations until the subject is classified as developing the disease or until the last measurement is used. Individuals are classified by computing posterior probabilities using the marginal distributions of the mixed-effects models, the conditional distributions (conditional on the group-specific random effects), and the distributions of the random effects.
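
A minimal sketch of the sequential classification idea, assuming two groups with known, independent normal measurement distributions rather than the fitted mixed-effects marginal or conditional distributions of the study; all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def sequential_posteriors(y, means, sds, priors=(0.5, 0.5)):
    """Posterior probability of group 1 after each new measurement,
    assuming independent normal measurements within each group."""
    log_post = np.log(np.asarray(priors, dtype=float))
    probs = []
    for obs in y:
        # add the log-likelihood contribution of the new observation
        log_post = log_post + np.array(
            [norm.logpdf(obs, m, s) for m, s in zip(means, sds)])
        p = np.exp(log_post - log_post.max())
        probs.append((p / p.sum())[0])
    return probs

# Hypothetical longitudinal measurements and group-specific distributions
y = [1.1, 1.4, 1.9, 2.3]
print(sequential_posteriors(y, means=(1.0, 2.0), sds=(0.5, 0.5)))
```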

4.
Ranked-set sampling (RSS) and judgment post-stratification (JPS) use ranking information to obtain more efficient inference than is possible using simple random sampling. Both methods were developed with subjective, judgment-based rankings in mind, but the idea of ranking using a covariate has received a lot of attention. We provide evidence here that when rankings are done using a covariate, the standard RSS and JPS mean estimators no longer make efficient use of the available information. We first show that when rankings are done using a covariate, the standard nonparametric mean estimators in JPS and unbalanced RSS are inadmissible under squared error loss. We then show that when rankings are done using a covariate, nonparametric regression techniques yield mean estimators that tend to be significantly more efficient than the standard RSS and JPS mean estimators. We conclude that the standard estimators are best reserved for settings where only subjective, judgment-based rankings are available.
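
A small simulation sketch of the standard balanced RSS mean estimator when ranking is done on a correlated covariate, compared with SRS of the same size; the set size, number of cycles and correlation are hypothetical, and the nonparametric-regression estimators recommended above are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)

def rss_mean(set_size=3, cycles=10, rho=0.8):
    """Balanced ranked-set sample mean with ranking done on a covariate X
    correlated with the response Y (one measured unit per ranked set)."""
    ys = []
    for _ in range(cycles):
        for rank in range(set_size):
            x = rng.normal(size=set_size)
            y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=set_size)
            ys.append(y[np.argsort(x)[rank]])   # measure the unit ranked `rank` by X
    return np.mean(ys)

n = 3 * 10
rss = [rss_mean() for _ in range(2000)]
srs = [rng.normal(size=n).mean() for _ in range(2000)]
print("var(RSS) / var(SRS):", np.var(rss) / np.var(srs))   # typically well below 1
```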

5.
In this paper, the Vasicek [A test for normality based on sample entropy. J R Stat Soc Ser B. 1976;38:54–59] entropy estimator is modified using the paired ranked set sampling (PRSS) method. Also, two goodness-of-fit tests using PRSS are suggested for the inverse Gaussian and Laplace distributions. The new suggested entropy estimator and goodness-of-fit tests using PRSS are compared with their counterparts using simple random sampling (SRS) via Monte Carlo simulations. The critical values of the suggested tests are obtained, and the powers of the tests based on several alternative hypotheses using SRS and PRSS are calculated. It turns out that the proposed PRSS entropy estimator is more efficient than its SRS counterpart in terms of root mean square error. Also, the proposed PRSS goodness-of-fit tests have higher powers than their counterparts using SRS for all alternatives considered in this study.
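
For reference, Vasicek's m-spacing estimator of entropy on a simple random sample is H_{m,n} = (1/n) Σ log{ n (X_(i+m) − X_(i−m)) / (2m) }, with order statistics clamped at the sample extremes. The sketch below implements only this SRS baseline (the PRSS modification proposed in the paper would apply the same spacing formula to paired ranked-set measurements); the sample is hypothetical.

```python
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek (1976) m-spacing estimator of differential entropy
    from a simple random sample x."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(n)
    upper = x[np.minimum(i + m, n - 1)]   # X_(i+m), clamped at X_(n)
    lower = x[np.maximum(i - m, 0)]       # X_(i-m), clamped at X_(1)
    return np.mean(np.log(n * (upper - lower) / (2 * m)))

rng = np.random.default_rng(3)
sample = rng.normal(size=100)
print(vasicek_entropy(sample, m=5))   # true N(0,1) entropy is about 1.419
```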

6.
In this paper, we consider estimation of the unknown parameters of an inverted exponentiated Rayleigh distribution under Type II progressively censored samples. Estimation of the reliability and hazard functions is also considered. Maximum likelihood estimators are obtained using the Expectation–Maximization (EM) algorithm. Further, we obtain the expected Fisher information matrix using the missing value principle. Bayes estimators are derived under squared error and linex loss functions. We use the Lindley and Tierney–Kadane methods to compute these estimates. In addition, Bayes estimators are computed using an importance sampling scheme as well. Samples generated from this scheme are further utilized for constructing highest posterior density intervals for the unknown parameters. For comparison purposes, asymptotic intervals are also obtained. A numerical comparison of the proposed estimators is made using simulations, and observations are given. A real-life data set is analyzed for illustrative purposes.

7.
The problem of estimating the parameters of two-parameter inverse Weibull distributions is considered. We establish existence and uniqueness of the maximum likelihood estimators of the scale and shape parameters. We derive Bayes estimators of the parameters under the entropy loss function. A hierarchical Bayes estimator, an equivariant estimator, and a class of minimax estimators are derived when the shape parameter is known. Ordered Bayes estimators using information about a second population are also derived. We investigate the reliability of a multi-component stress-strength model using classical and Bayesian approaches. Risk comparison of the classical and Bayes estimators is done using Monte Carlo simulations. Applications of the proposed estimators are shown using real data sets.
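
A minimal classical counterpart to the estimators above: maximum likelihood fitting of the two-parameter inverse Weibull (Fréchet) with SciPy's built-in invweibull distribution, holding the location at zero. The simulated data are hypothetical, and none of the Bayes, equivariant or minimax estimators of the paper are reproduced here.

```python
import numpy as np
from scipy.stats import invweibull

rng = np.random.default_rng(4)
# Hypothetical data from an inverse Weibull with shape c = 2 and scale 3
data = invweibull.rvs(c=2.0, scale=3.0, size=200, random_state=rng)

# Two-parameter MLE: fix the location at zero and estimate shape and scale
c_hat, loc, scale_hat = invweibull.fit(data, floc=0)
print("shape:", c_hat, "scale:", scale_hat)
```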

8.
There are various techniques for dealing with incomplete data; some are highly computationally intensive and others less so, while all may be comparable in their efficiencies. In spite of these developments, analysis using only the complete data subset is performed when using popular statistical software. In an attempt to demonstrate the efficiencies and advantages of using all available data, we compared several approaches that are relatively simple but efficient alternatives to those using the complete data subset for analyzing repeated measures data with missing values, under the assumption of a multivariate normal distribution of the data. We also assumed that the missing values occur in a monotonic pattern and completely at random. The incomplete data procedure is demonstrated to be more powerful than the procedure using the complete data subset, generally when the within-subject correlation becomes large. One other principal finding is that even with small sample data, for which various covariance models may be indistinguishable, the empirical size and power are shown to be sensitive to misspecified assumptions about the covariance structure. Overall, the testing procedures that do not assume any particular covariance structure are shown to be more robust in keeping the empirical size at the nominal level than those assuming a special structure.

9.
The maximum likelihood estimates (MLEs) of the parameters of a two-parameter lognormal distribution with left truncation and right censoring are developed through the Expectation Maximization (EM) algorithm. For comparative purposes, the MLEs are also obtained by the Newton–Raphson method. The asymptotic variance-covariance matrix of the MLEs is obtained by using the missing information principle, under the EM framework. Then, using asymptotic normality of the MLEs, asymptotic confidence intervals for the parameters are constructed. Asymptotic confidence intervals are also obtained using the estimated variance of the MLEs from the observed information matrix, and by using a parametric bootstrap technique. The different confidence intervals are then compared in terms of coverage probabilities through a Monte Carlo simulation study. A prediction problem concerning the future lifetime of a right censored unit is also considered. A numerical example is given to illustrate all the inferential methods developed here.
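
A sketch of the direct-maximization route (in the spirit of the Newton–Raphson comparison above, not the EM algorithm): the left-truncated, right-censored lognormal log-likelihood uses f(t)/S(τ) for observed lifetimes and S(t)/S(τ) for censored ones. The truncation point, censoring mechanism and data are hypothetical.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import minimize

def neg_loglik(params, t, delta, tau):
    """Left truncation at tau, right censoring indicated by delta == 0.
    Lognormal parametrized by (mu, log sigma) of the underlying normal."""
    mu, log_sigma = params
    dist = lognorm(s=np.exp(log_sigma), scale=np.exp(mu))
    log_s_tau = dist.logsf(tau)                    # log S(tau), truncation correction
    ll = np.where(delta == 1,
                  dist.logpdf(t) - log_s_tau,      # observed lifetimes
                  dist.logsf(t) - log_s_tau)       # right-censored lifetimes
    return -ll.sum()

rng = np.random.default_rng(5)
tau = 0.5
x = lognorm(s=0.8, scale=np.exp(0.3)).rvs(2000, random_state=rng)
x = x[x > tau][:300]                               # keep only units observed past tau
c = rng.uniform(tau, 6.0, size=x.size)             # hypothetical censoring times
t, delta = np.minimum(x, c), (x <= c).astype(int)

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(t, delta, tau), method="Nelder-Mead")
print("mu, sigma:", fit.x[0], np.exp(fit.x[1]))
```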

10.
The spatial clustering of points from two or more classes (or species) has important implications in many fields and may cause segregation or association, which are two major types of spatial patterns between the classes. These patterns can be studied using a nearest neighbour contingency table (NNCT), which is constructed using the frequencies of nearest neighbour types. Three new multivariate clustering tests are proposed based on NNCTs, using the appropriate sampling distribution of the cell counts in an NNCT. The null patterns considered are random labelling (RL) and complete spatial randomness (CSR) of points from two or more classes. The finite sample performance of these tests is compared with other tests in terms of empirical size and power. It is demonstrated that the newly proposed NNCT tests perform relatively well compared with their competitors, and the tests are illustrated using two example data sets.
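
A minimal sketch of building an NNCT with a k-d tree, followed by an ordinary chi-square test of independence as a rough stand-in, not the new NNCT tests proposed in the paper; the point pattern is simulated under CSR.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import chi2_contingency

rng = np.random.default_rng(6)
# Hypothetical CSR pattern: 60 points of class A and 40 of class B on the unit square
pts = rng.uniform(size=(100, 2))
labels = np.array(["A"] * 60 + ["B"] * 40)

# Nearest neighbour of each point (k=2 because the nearest hit is the point itself)
_, idx = cKDTree(pts).query(pts, k=2)
nn_labels = labels[idx[:, 1]]

classes = ["A", "B"]
nnct = np.array([[np.sum((labels == a) & (nn_labels == b)) for b in classes]
                 for a in classes])
print("NNCT (rows = base class, cols = NN class):\n", nnct)
print("chi-square test of independence:", chi2_contingency(nnct)[:2])
```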

11.
This paper concerns the geometric treatment of graphical models using Bayes linear methods. We introduce Bayes linear separation as a second order generalised conditional independence relation, and Bayes linear graphical models are constructed using this property. A system of interpretive and diagnostic shadings is given, which summarises the analysis over the associated moral graph. Principles of local computation are outlined for the graphical models, and an algorithm for implementing such computation over the junction tree is described. The approach is illustrated with two examples. The first concerns sales forecasting using a multivariate dynamic linear model. The second concerns inference for the error variance matrices of the model for sales, and illustrates the generality of our geometric approach by treating the matrices directly as random objects. The examples are implemented using a freely available set of object-oriented programming tools for Bayes linear local computation and graphical diagnostic display.
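
The basic Bayes linear update behind such models is the adjusted expectation E_D(B) = E(B) + Cov(B, D) Var(D)^{-1} (D − E(D)) and adjusted variance Var_D(B) = Var(B) − Cov(B, D) Var(D)^{-1} Cov(D, B). The sketch below applies these formulas to a hypothetical second-order specification; it does not implement the graphical-model machinery, shadings or junction-tree computation.

```python
import numpy as np

def bayes_linear_adjust(EB, VB, ED, VD, C, d):
    """Bayes linear adjustment of beliefs about B by data D = d.
    EB, ED: prior expectations; VB, VD: prior variances; C = Cov(B, D)."""
    VD_inv = np.linalg.inv(VD)
    adj_E = EB + C @ VD_inv @ (d - ED)            # adjusted expectation E_D(B)
    adj_V = VB - C @ VD_inv @ C.T                 # adjusted variance Var_D(B)
    return adj_E, adj_V

# Hypothetical second-order prior specification
EB, VB = np.array([10.0, 5.0]), np.array([[4.0, 1.0], [1.0, 2.0]])
ED, VD = np.array([9.0]), np.array([[3.0]])
C = np.array([[1.5], [0.5]])                      # Cov(B, D)
print(bayes_linear_adjust(EB, VB, ED, VD, C, d=np.array([11.0])))
```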

12.
In the case that vectors X and Y have a joint multivariate normal distribution, tolerance regions are found for the best linear predictor of Y using X when samples are used to estimate the regression coefficients. Tolerance regions are also found for Y. In addition, simultaneous tolerance intervals for all linear functions of Y, or of the best linear predictor of Y using X, are found.

13.
Estimators using multiplicative tuning parameters for maximum likelihood estimators in cross-validation are called cross-data estimators in this paper. Single-sample versions of the cross-data estimators have been called predictive estimators in the literature; they are given by maximizing the expected log-likelihood, where the two-fold expectations are taken over the distributions of future and current data using maximum likelihood estimators based on current data. An asymptotic equivalence of the cross-data and predictive estimators is shown, which guarantees the optimality of the predictive estimator when an unknown population parameter vector is replaced by its sample counterpart. Examples using typical statistical distributions are given.

14.
Often, categorical ordinal data are clustered using a well-defined similarity measure for this kind of data and then a clustering algorithm not specifically developed for them. The aim of this article is to introduce a new clustering method designed specifically for ordinal data. Objects are grouped using a multinomial model, a cluster tree and a pruning strategy. Two types of pruning are analyzed through simulations. The proposed method overcomes two typical problems of cluster analysis: the choice of the number of groups and scale invariance.

15.
Nonlinear regression-adjusted control variables are investigated for improving variance reduction in statistical and system simulations. To this end, simple control variables are piecewise sectioned and then transformed using linear and nonlinear transformations. Optimal parameters of these transformations are selected using linear or nonlinear least-squares regression algorithms. As an example, piecewise power-transformed variables are used in the estimation of the mean of the two-variable Anderson-Darling goodness-of-fit statistic W_2^2. Substantial variance reduction over straightforward controls is obtained. These parametric transformations are compared against optimal, additive nonparametric transformations obtained by using the ACE algorithm and are shown, in comparison to the results from ACE, to be nearly optimal.
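
For orientation, the sketch below shows the plain linear regression-adjusted control-variate estimator that the piecewise nonlinear transformations above generalize: Ȳ − b̂(X̄ − μ_X), with b̂ the least-squares slope and μ_X known. The target E[e^U] with a uniform control is a textbook example, not the Anderson–Darling setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
u = rng.uniform(size=n)          # control variate with known mean 0.5
y = np.exp(u)                    # target: E[exp(U)] = e - 1 ≈ 1.71828

b_hat = np.cov(y, u)[0, 1] / np.var(u, ddof=1)   # least-squares regression coefficient
cv_estimate = y.mean() - b_hat * (u.mean() - 0.5)

print("crude MC:       ", y.mean())
print("control variate:", cv_estimate)
print("variance factor:", 1 - np.corrcoef(y, u)[0, 1] ** 2)  # approx. residual variance fraction
```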

16.
Exact expressions, in the form of infinite series expansions, are given for the first and second moments of two well known generalized ridge estimators. These series expansions are then evaluated using recursive formulas and computations are verified using approximations. Results are presented for the relative mean square error and bias of these estimators as well as their relative efficiency with respect to least squares.
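
A simulation-based (rather than exact series-expansion) check of the bias and relative mean square error of the ordinary ridge estimator β̂(k) = (X'X + kI)^{-1} X'y against least squares; the design, coefficients and ridge constant are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, k, sigma = 50, 4, 1.0, 2.0
beta = np.array([1.0, 0.5, -0.5, 0.25])

# Collinear design matrix, held fixed across replications
X = rng.normal(size=(n, p))
X[:, 1] = 0.95 * X[:, 0] + 0.05 * X[:, 1]
XtX = X.T @ X

ols, ridge = [], []
for _ in range(5000):
    y = X @ beta + sigma * rng.normal(size=n)
    ols.append(np.linalg.solve(XtX, X.T @ y))
    ridge.append(np.linalg.solve(XtX + k * np.eye(p), X.T @ y))

def mse(est):
    return np.mean(np.sum((np.array(est) - beta) ** 2, axis=1))

print("bias (ridge):", np.mean(ridge, axis=0) - beta)
print("relative MSE (ridge / OLS):", mse(ridge) / mse(ols))
```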

17.
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or the count model assume that the data are missing at random, which cannot be tested using the observed data alone. Thus, sensitivity analyses are recommended. We implement control-based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimation using a bootstrap procedure. We apply the method to data from a sitagliptin study.
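
For context, the conventional pooling step in a multiple-imputation analysis follows Rubin's rules, combining the m completed-data estimates through the within- and between-imputation variances (the study above instead corrects the variance by a bootstrap). A minimal sketch with hypothetical numbers:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine m multiply-imputed estimates with Rubin's rules:
    total variance = within + (1 + 1/m) * between."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    q_bar = q.mean()
    within = u.mean()
    between = q.var(ddof=1)
    total = within + (1 + 1 / m) * between
    return q_bar, total

# Hypothetical treatment-effect estimates from m = 5 imputed data sets
print(rubin_combine([0.42, 0.38, 0.47, 0.40, 0.44],
                    [0.010, 0.012, 0.011, 0.009, 0.013]))
```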

18.
We consider estimation of the unknown parameters of a Burr XII distribution based on progressively Type I hybrid censored data. The maximum likelihood estimates are obtained using an expectation-maximization algorithm. Asymptotic interval estimates are constructed from the Fisher information matrix. We obtain Bayes estimates under the squared error loss function using the Lindley method and the Metropolis–Hastings algorithm. Predictive estimates of the censored observations are obtained and the corresponding prediction intervals are also constructed. We compare the performance of the different methods using simulations. Two real datasets are analyzed for illustrative purposes.
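
A minimal sketch of the Metropolis–Hastings component: a random-walk sampler for the two Burr XII shape parameters under flat priors and a complete (uncensored) hypothetical sample, rather than the progressively Type I hybrid censored likelihood of the paper.

```python
import numpy as np
from scipy.stats import burr12

rng = np.random.default_rng(9)
data = burr12.rvs(c=2.0, d=1.5, size=150, random_state=rng)   # hypothetical complete sample

def log_post(c, d):
    """Log-posterior for Burr XII shape parameters under flat priors on (0, inf)."""
    if c <= 0 or d <= 0:
        return -np.inf
    return burr12.logpdf(data, c, d).sum()

cur = np.array([1.0, 1.0])
lp = log_post(*cur)
chain = []
for _ in range(5000):
    prop = cur + 0.1 * rng.normal(size=2)        # symmetric random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis acceptance step
        cur, lp = prop, lp_prop
    chain.append(cur.copy())

chain = np.array(chain)[1000:]                   # discard burn-in
print("posterior means (c, d):", chain.mean(axis=0))
```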

19.
Randomized response is an interview technique designed to eliminate response bias when sensitive questions are asked. In this paper, we present a logistic regression model on randomized response data when the covariates on some subjects are missing at random. In particular, we propose Horvitz and Thompson (1952)-type weighted estimators by using different estimates of the selection probabilities. We present large sample theory for the proposed estimators and show that they are more efficient than the estimator using the true selection probabilities. Simulation results support the theoretical analysis. We also illustrate the approach using data from a survey of cable TV.
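
A minimal sketch of the Horvitz–Thompson-type weighting idea in an ordinary (non-randomized-response) logistic regression: estimate the selection probabilities from the fully observed variables, then fit a weighted logistic regression on the complete cases with weights 1/π̂. The data, models and use of scikit-learn are hypothetical stand-ins for the paper's estimators.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)
n = 2000
z = rng.normal(size=n)                              # always-observed covariate
x = 0.5 * z + rng.normal(size=n)                    # covariate that may be missing
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x - 0.5 * z))))

# Missingness of x depends only on observed (z, y): missing at random
p_obs = 1 / (1 + np.exp(-(0.5 + 0.8 * z + 0.5 * y)))
r = rng.binomial(1, p_obs)                          # r = 1 means x is observed

# Step 1: estimate selection probabilities from fully observed variables
sel = LogisticRegression().fit(np.column_stack([z, y]), r)
pi_hat = sel.predict_proba(np.column_stack([z, y]))[:, 1]

# Step 2: weighted logistic regression on complete cases, weights = 1 / pi_hat
cc = r == 1
ipw = LogisticRegression(C=1e6).fit(np.column_stack([x, z])[cc], y[cc],
                                    sample_weight=1 / pi_hat[cc])
print("IPW coefficients (x, z):", ipw.coef_.ravel(), "intercept:", ipw.intercept_)
```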

20.
In an attempt to produce more realistic stress–strength models, this article considers the estimation of stress–strength reliability in a multi-component system with non-identical component strengths, based on upper record values from the family of Kumaraswamy generalized distributions. The maximum likelihood estimator of the reliability, its asymptotic distribution and asymptotic confidence intervals are constructed. Bayes estimates under the symmetric squared error loss function using conjugate prior distributions are computed, and the corresponding highest probability density credible intervals are also constructed. In the Bayesian estimation, the Lindley approximation and the Markov Chain Monte Carlo method are employed due to the lack of explicit forms. For the first time using records, the uniformly minimum variance unbiased estimator and the closed form of the Bayes estimator using conjugate and non-informative priors are derived for a common and known shape parameter of the distributions of the stress and strength variates. Comparisons of the performance of the estimators are carried out using Monte Carlo simulations, the mean squared error, bias and coverage probabilities. Finally, a demonstration is presented on how the proposed model may be utilized in materials science and engineering with the analysis of high-strength steel fatigue life data.
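
A Monte Carlo sanity check of the s-out-of-k stress–strength reliability R_{s,k} = P(at least s of k strengths exceed the stress), sampling Kumaraswamy variates by inverse CDF from F(x) = 1 − (1 − x^a)^b; the parameter values are hypothetical and no record-value data are involved.

```python
import numpy as np

rng = np.random.default_rng(11)

def rkumaraswamy(a, b, size):
    """Inverse-CDF sampling from Kumaraswamy(a, b): F(x) = 1 - (1 - x**a)**b on (0, 1)."""
    u = rng.uniform(size=size)
    return (1 - (1 - u) ** (1 / b)) ** (1 / a)

def reliability_s_k(s, k, a_y, b_y, a_x, b_x, reps=200_000):
    """Monte Carlo estimate of R_{s,k} = P(at least s of k strengths exceed the stress)."""
    strengths = rkumaraswamy(a_y, b_y, size=(reps, k))
    stress = rkumaraswamy(a_x, b_x, size=(reps, 1))
    return np.mean((strengths > stress).sum(axis=1) >= s)

# Hypothetical 2-out-of-3 system: strength ~ Kumaraswamy(2, 3), stress ~ Kumaraswamy(2, 1)
print("R_{2,3} ≈", reliability_s_k(s=2, k=3, a_y=2, b_y=3, a_x=2, b_x=1))
```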
