Similar Documents
20 similar records found (search time: 31 ms)
1.
The problems of constructing tolerance intervals (TIs) in a random-effects model and in a mixed linear model are considered. Methods based on the generalized variable (GV) approach and on the modified large sample (MLS) procedure are evaluated with respect to coverage probability and expected width in various setups using Monte Carlo simulation. Our comparison studies indicate that the TIs based on the MLS procedure are comparable to or better than those based on the GV approach. As the MLS TIs are in closed form, they are also easier to compute than those based on the GV approach. TIs for a two-way nested model are also derived using the MLS method, and their merits are evaluated by simulation. The procedures are illustrated with a practical example.
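As an illustrative aside, tolerance-interval coverage of the kind evaluated in the abstract above can be checked by Monte Carlo simulation. The sketch below uses the much simpler i.i.d. normal case with Howe's approximate two-sided factor, not the random-effects or mixed models of the paper; all function names are hypothetical.

```python
import numpy as np
from scipy import stats

def tolerance_factor(n, p=0.90, alpha=0.05):
    # Howe's approximation to the two-sided normal tolerance factor:
    # interval mean +/- k*s should contain proportion p of the population
    # with confidence 1 - alpha.
    z = stats.norm.ppf((1 + p) / 2)
    chi2 = stats.chi2.ppf(alpha, n - 1)          # lower alpha quantile
    return z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)

def coverage(n=20, p=0.90, alpha=0.05, reps=2000, seed=1):
    # Monte Carlo estimate of the coverage probability: the fraction of
    # simulated samples whose TI actually captures >= p of the population.
    rng = np.random.default_rng(seed)
    k = tolerance_factor(n, p, alpha)
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        s = x.std(ddof=1)
        lo, hi = x.mean() - k * s, x.mean() + k * s
        content = stats.norm.cdf(hi) - stats.norm.cdf(lo)
        hits += content >= p
    return hits / reps
```

For a well-calibrated (here slightly conservative) factor, the estimated coverage should sit near the nominal 1 - alpha = 0.95.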

2.
This work is concerned with the Bayesian prediction of the number of components that will fail in a future time interval when the failure times are Weibull distributed. Both the 1-sample and the 2-sample prediction problems are dealt with, and choices of prior densities on the distribution parameters are discussed that are relatively easy to work with and allow different degrees of knowledge about the failure mechanism to be incorporated into the predictive procedure. Useful relations between the predictive distribution of the number of future failures and the predictive distribution of the future failure times are derived. Numerical examples are also given.

3.
Inferences for survival curves based on right censored data are studied for situations in which it is believed that the treatments have survival times at least as large as the control or at least as small as the control. Testing homogeneity with the appropriate order restricted alternative and testing the order restriction as the null hypothesis are considered. Under a proportional hazards model, the ordering on the survival curves corresponds to an ordering on the regression coefficients. Approximate likelihood methods, which are obtained by applying order restricted procedures to the estimates of the regression coefficients, and ordered analogues to the log rank test, which are based on the score statistics, are considered. Mau's (1988) test, which does not require proportional hazards, is extended to this ordering on the survival curves. Using Monte Carlo techniques, the type I error rates are found to be close to the nominal level and the powers of these tests are compared. Other order restrictions on the survival curves are discussed briefly.

4.
Inferences for survival curves based on right censored continuous or grouped data are studied. Testing homogeneity against an order restricted alternative and testing the order restriction as the null hypothesis are considered. Under a proportional hazards model, the ordering on the survival curves corresponds to an ordering on the regression coefficients. Approximate likelihood methods are obtained by applying order restricted procedures to the estimates of the regression coefficients. Ordered analogues of the log rank test, which are based on the score statistics, are also considered. Chi-bar-squared distributions, which have been studied extensively, are shown to provide reasonable approximations to the null distributions of these test statistics. Using Monte Carlo techniques, the powers of these two types of tests are compared with those available in the literature.

5.
The parameters of Downton's bivariate exponential distribution are estimated based on a ranked set sample. Parametric and nonparametric methods are considered. The suggested estimators are compared to the corresponding ones based on simple random sampling. It turns out that some of the suggested estimators are significantly more efficient than the ones based on simple random sampling.
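The efficiency gain from ranked set sampling (RSS) over simple random sampling that the abstract above reports can be illustrated with a minimal simulation. The sketch below estimates a univariate normal mean (not the bivariate exponential of the paper) and assumes perfect judgment ranking; names are hypothetical.

```python
import numpy as np

def rss_sample(rng, k, cycles):
    # One ranked set sample: in each cycle, for rank i = 1..k, draw a set of
    # k units, rank them, and measure only the i-th order statistic.
    out = []
    for _ in range(cycles):
        for i in range(k):
            s = np.sort(rng.normal(size=k))
            out.append(s[i])
    return np.array(out)

def relative_efficiency(k=3, cycles=10, reps=4000, seed=0):
    # Compare the variance of the sample mean under SRS and RSS
    # at the same total sample size n = k * cycles.
    rng = np.random.default_rng(seed)
    n = k * cycles
    srs = np.array([rng.normal(size=n).mean() for _ in range(reps)])
    rss = np.array([rss_sample(rng, k, cycles).mean() for _ in range(reps)])
    return srs.var() / rss.var()   # > 1 means RSS is more efficient
```

For normal data with set size k = 3, the relative efficiency is known to be roughly 1.9, so the simulated ratio should land well above 1.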

6.
Estimating the parameter of a Dirichlet distribution is an interesting question, since this distribution arises in many situations in applied probability. Classical procedures are based on a sample from the Dirichlet distribution. In this paper we exhibit five different estimators from only one observation. They are based either on residual allocation model decompositions or on sampling properties of Dirichlet distributions. Two approaches are investigated: the first uses fragment sizes and the second uses size-biased permutations of a partition. Numerical computations based on simulations are supplied. The estimators are finally used to estimate birth probabilities per month.
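For contrast with the single-observation estimators of the paper above, the "classical procedure based on a sample" it mentions can be sketched as a method-of-moments estimator. The sketch below handles only the symmetric Dirichlet(a, ..., a) case, using Var(X_i) = (1/k)(1 - 1/k)/(k*a + 1); the setup is hypothetical.

```python
import numpy as np

def moment_estimate_symmetric(samples):
    # Method-of-moments estimate of the concentration a of a symmetric
    # Dirichlet(a, ..., a), inverting Var(X_i) = (1/k)(1 - 1/k) / (k*a + 1).
    k = samples.shape[1]
    v = samples.var(axis=0, ddof=1).mean()   # average component variance
    return ((1 / k) * (1 - 1 / k) / v - 1) / k

rng = np.random.default_rng(2)
a_true, k = 2.0, 4
x = rng.dirichlet([a_true] * k, size=5000)   # a classical i.i.d. sample
a_hat = moment_estimate_symmetric(x)
```

With 5000 observations the moment estimate recovers the true concentration closely; the interest of the paper is precisely that such samples are often unavailable and a single observation must suffice.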

7.
We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties of the target point process are studied. Simulation and inference procedures are discussed when a realization of the target point process is observed, depending on whether the thinned points are observed or not. The paper extends previous work by Dietrich Stoyan on interrupted point processes.  相似文献   

8.
Various procedures, mainly graphical, are presented for analyzing large sets of ranking data in which the permutations are not equally likely. One method is based on box plots; the others are motivated by a model originally proposed by Mallows. The model is characterised by two parameters corresponding to location and dispersion. Graphical methods based on Q-Q plots are also discussed for comparing two groups of judges. The proposed methods are illustrated on an empirical data set.
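The Mallows model mentioned above, with its location (central ranking) and dispersion (theta) parameters, can be written down directly for small numbers of items. The sketch below uses the Kendall pairwise-disagreement distance and brute-force normalization; it is an illustration, not the article's graphical procedures.

```python
import itertools
import math

def kendall_distance(pi, sigma):
    # Number of item pairs on which the two rankings disagree,
    # where pi[i] is the rank of item i under ranking pi.
    n = len(pi)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (pi[i] - pi[j]) * (sigma[i] - sigma[j]) < 0)

def mallows_pmf(theta, center, n):
    # Mallows model: P(pi) proportional to exp(-theta * d(pi, center)).
    # 'center' is the location parameter, 'theta' the dispersion.
    perms = list(itertools.permutations(range(n)))
    w = [math.exp(-theta * kendall_distance(p, center)) for p in perms]
    z = sum(w)
    return {p: wi / z for p, wi in zip(perms, w)}
```

The central ranking always receives the largest probability, and larger theta concentrates the distribution more tightly around it.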

9.
In accelerated life testing (ALT), products are exposed to stress levels higher than those at normal use in order to obtain information in a timely manner. Past work on planning ALT predominantly assumes a single cause of failure. This article presents methods for planning ALT in the presence of k competing risks. Expressions for computing the Fisher information matrix are presented when the risks are independently lognormally distributed. Optimal test plans are obtained under criteria based on determinants and maximum likelihood estimation. The proposed method is demonstrated on ALT of motor insulation.

10.
Various methods to control the influence of a covariate on a response variable are compared. These methods are ANOVA, with or without homogeneity of variances (HOV) of the errors, and Kruskal–Wallis (K–W) tests on (covariate-adjusted) residuals, and analysis of covariance (ANCOVA). Covariate-adjusted residuals are obtained from the overall regression line fit to the entire data set, ignoring the treatment levels or factors. It is demonstrated that the methods based on covariate-adjusted residuals are only appropriate when the regression lines are parallel and the covariate means are equal for all treatments. Empirical size and power of the methods are compared by extensive Monte Carlo simulations. We manipulated conditions such as normality and HOV of the errors, sample size, and clustering of the covariates. The parametric methods on residuals and ANCOVA exhibited similar size and power when the error terms have symmetric distributions with variances having the same functional form for each treatment and the covariates have uniform distributions over the same interval for each treatment. In such cases, the parametric tests have higher power than the K–W test on residuals. When the error terms have asymmetric distributions, or variances that are heterogeneous with different functional forms across treatments, the tests are liberal, with the K–W test having higher power than the others. The methods on covariate-adjusted residuals are severely affected by clustering of the covariates relative to the treatment factors when the covariate means differ greatly across treatments. For clustered data, the ANCOVA method exhibits the appropriate level; however, such clustering might suggest dependence between the covariates and the treatment factors, which makes ANCOVA less reliable as well.
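The covariate-adjusted-residuals approach compared above is easy to sketch: fit one overall regression line ignoring treatments, then test the residuals across treatment groups. The simulated data below satisfy the conditions the abstract identifies as appropriate (parallel lines, equal covariate means); all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Three treatments with parallel regression lines, equal covariate
# distributions, and a shift of +2 in the third treatment.
n, slope_true = 30, 1.5
x = [rng.uniform(0, 10, n) for _ in range(3)]
effects = [0.0, 0.0, 2.0]
y = [e + slope_true * xi + rng.normal(0, 1, n) for e, xi in zip(effects, x)]

# Overall regression fit to the entire data set, ignoring treatments,
# as in the covariate-adjusted-residuals approach.
all_x, all_y = np.concatenate(x), np.concatenate(y)
b1, b0 = np.polyfit(all_x, all_y, 1)
resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

f_stat, p_anova = stats.f_oneway(*resid)   # ANOVA on adjusted residuals
h_stat, p_kw = stats.kruskal(*resid)       # Kruskal–Wallis on adjusted residuals
```

With a treatment shift of two error standard deviations and 30 observations per group, both tests on the residuals detect the effect decisively.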

11.
There are generally two frameworks for inference from complex samples: traditional inference based on randomization theory, and model-based inference. Traditional sampling theory rests on randomization: the population values are treated as fixed, the only randomness lies in the selection of the sample, and inference about the population depends on the sampling design. This approach yields robust estimators in large samples but breaks down in small samples, with missing data, and in similar situations. Model-based inference instead regards the population as a random sample drawn from a superpopulation model, so inference about the population depends on the model that is built; under a non-ignorable sampling design, however, the estimators are biased. Building on an analysis of these two approaches, this paper proposes design-assisted model inference and argues that it is of substantial practical value in complex sampling.

12.
13.
Methods for interval estimation and hypothesis testing about the ratio of two independent inverse Gaussian (IG) means, based on the generalized variable approach, are proposed. As assessed by simulation, the coverage probabilities of the proposed approach are found to be very close to the nominal level even for small samples. The proposed approaches are conceptually simple and easy to use. Similar procedures are developed for constructing confidence intervals and testing hypotheses about the difference between two independent IG means. Monte Carlo comparison studies show that the results based on the generalized variable approach are as good as those based on the modified likelihood ratio test. The methods are illustrated using two examples.

14.
In forensic science, in order to determine whether sets of traces come from the same source, it is widely advocated to evaluate the evidential value of the similarity of the traces by likelihood ratios (LRs). If traces are expressed by measurements following a two-level model with random effects and known variances, closed-form LR formulas are available under normality, or kernel density distributions, on the effects. In practice, however, the known variances are replaced by estimators, which leads to uncertainty in the resulting LRs that is hard to quantify. This is analyzed here in an approach in which both effects and variances are random, following standard prior distributions on univariate data, leading to posterior LRs. For non-informative and conjugate priors, closed-form LR formulas are obtained that are interesting in structure and generalize a known result for fixed variance. A semi-conjugate prior on the model seems usable in many applications. It is described how to obtain credible intervals using Markov chain Monte Carlo and regular simulation, and an example is given for the comparison of XTC tablets based on MDMA content. In this way, uncertainty in LR estimation is expressed more clearly, which makes the evidential value more transparent in a judicial context.

15.
魏浩, 刘吟. 《统计研究》 (Statistical Research), 2011, 28(8): 34-42
Using data for 125 countries worldwide, this paper empirically analyzes, at four levels (all countries, developed countries, developing countries, and Asian developing countries), the magnitude and direction of the effect of import and export trade on within-country income inequality, and its standing among all factors affecting inequality. The results show: (1) For all countries, the coefficients of imports and exports on within-country income inequality are small and insignificant; the degree of financial development and higher education are the important determinants. (2) For developed countries, imports and exports are the main determinants: their coefficients are relatively large and significant, with rising imports tending to widen the income gap and rising exports tending to narrow it. Economic freedom is also a major determinant of within-country inequality in developed countries. (3) For developing countries as a whole, the coefficients of imports and exports on within-country inequality are small and insignificant; only foreign direct investment and basic education are significant. (4) For Asian developing countries, import and export trade is an important determinant of within-country income inequality: imports tend to narrow the income gap while exports widen it, and compared with developing countries as a whole, the effect of trade on within-country inequality in Asian developing countries is larger and more significant.

16.
Exact methods for constructing two-sided tolerance intervals (TIs), and tolerance intervals that control percentages in both tails, for a location-scale family of distributions are proposed. The proposed methods are illustrated by constructing TIs for the normal, logistic, and Laplace (double exponential) distributions based on type II singly censored samples. Factors for constructing one-sided and two-sided TIs for a logistic distribution are tabulated for the case of uncensored samples. Factors for constructing TIs based on censored samples for all three distributions are also tabulated. The factors in all cases are estimated by Monte Carlo simulation. An adjustment to the tolerance factors based on type II censored samples is proposed so that they can be used to find approximate TIs based on type I censored samples. Coverage studies of the approximate TIs based on type I censored samples indicate that the approximation is satisfactory as long as the proportion of censored observations is no more than 0.70. The methods are illustrated with some practical examples.

17.
The purpose of this article is to investigate hypothesis testing in functional comparative calibration models. Wald-type statistics are considered, which are asymptotically distributed according to the chi-square distribution. The statistics are based on maximum likelihood, corrected score, and method-of-moments estimators of the model parameters, which are shown to be consistent and asymptotically normally distributed. Results of analytical and simulation studies indicate that the Wald statistics based on the method-of-moments estimators and the corrected score estimators are, as expected, less efficient than the Wald-type statistic based on the maximum likelihood estimators for small n. Wald statistics based on the moment estimators are simpler to compute than the other Wald statistics, and their performance improves significantly as n increases. Comparisons with an alternative F statistic proposed in the literature are also reported.

18.
It is often of interest in survival analysis to test whether the distribution of lifetimes from which the sample under study was derived is the same as a reference distribution. The latter can be specified on the basis of previous studies or on subject matter considerations. In this paper several tests are developed for the above hypothesis, suitable for right-censored observations. The tests are based on modifications of Moses' one-sample limits of some classical two-sample rank tests. The asymptotic distributions of the test statistics are derived, consistency is established for alternatives which are stochastically ordered with respect to the null, and Pitman asymptotic efficiencies are calculated relative to competing tests. Simulated power comparisons are reported. An example is given with data on the survival times of lung cancer patients.
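The abstract above builds its tests from Moses' one-sample limits of two-sample rank tests. A simpler, well-known point of comparison for the same problem (right-censored data versus a fully specified reference distribution) is the one-sample log-rank test, sketched below; the test compares observed deaths with those expected under the reference cumulative hazard.

```python
import numpy as np
from scipy import stats

def one_sample_logrank(times, events, ref_cum_hazard):
    # O = observed number of deaths; E = expected number under the reference
    # distribution, where each subject contributes H(t_i), the reference
    # cumulative hazard at its observed or censored time.
    O = float(np.sum(events))
    E = float(np.sum([ref_cum_hazard(t) for t in times]))
    z = (O - E) / np.sqrt(E)                 # approximately N(0, 1) under H0
    return z, 2 * stats.norm.sf(abs(z))
```

For example, with two uncensored deaths at times 1 and 2 tested against a unit exponential reference (H(t) = t), O = 2 and E = 3, giving z = -1/sqrt(3) and a clearly non-significant p-value.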

19.
Improved unbiased estimators in adaptive cluster sampling
Summary.  The usual design-unbiased estimators in adaptive cluster sampling are easy to compute but are not functions of the minimal sufficient statistic and hence can be improved. Improved unbiased estimators obtained by conditioning on sufficient statistics—not necessarily minimal—are described. First, estimators that are as easy to compute as the usual design-unbiased estimators are given. Estimators obtained by conditioning on the minimal sufficient statistic which are more difficult to compute are also discussed. Estimators are compared in examples.

20.
Low dose risk estimation via simultaneous statistical inferences
Summary.  The paper develops and studies simultaneous confidence bounds that are useful for making low dose inferences in quantitative risk analysis. Application is intended for risk assessment studies where human, animal or ecological data are used to set safe low dose levels of a toxic agent, but where study information is limited to high dose levels of the agent. Methods are derived for estimating simultaneous, one-sided, upper confidence limits on risk for end points measured on a continuous scale. From the simultaneous confidence bounds, lower confidence limits on the dose that is associated with a particular risk (often referred to as a bench-mark dose) are calculated. An important feature of the simultaneous construction is that any inferences that are based on inverting the simultaneous confidence bounds apply automatically to inverse bounds on the bench-mark dose.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号