Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
There are many situations in which an observation is retained only if it is a record value, including industrial quality-control experiments, destructive stress testing, meteorology, hydrology, seismology, athletic events and mining. When the number of records is fixed in advance, the data are referred to as inversely sampled record-breaking data. In this paper, we study the problem of constructing nonparametric confidence intervals for quantiles and quantile intervals of the parent distribution based on record data. For a single record-breaking sample, the confidence coefficient of a confidence interval for the pth quantile cannot exceed p on the basis of upper records, or 1 − p on the basis of lower records; hence, replication is required. We therefore develop a procedure based on k independent record-breaking samples. Various cases are studied; in each case the optimal k and the exact nonparametric confidence intervals are obtained, and exact expressions for the confidence coefficients of these intervals are derived. Finally, the results are illustrated by numerical computations.
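As a rough illustration of inverse record sampling, the sketch below draws observations from a stream until a prescribed number of upper records is observed, repeated over k independent samples. The stream distribution and the candidate interval shown are assumptions for demonstration only, not the paper's optimal construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def upper_records(stream, m):
    """Inverse sampling: draw observations one at a time, retaining only
    upper records, until m records have been observed."""
    records, current_max = [], -np.inf
    while len(records) < m:
        x = next(stream)
        if x > current_max:
            records.append(x)
            current_max = x
    return np.array(records)

def exp_stream(rng, scale=1.0):        # illustrative parent distribution
    while True:
        yield rng.exponential(scale)

# k independent record-breaking samples, each run until m upper records.
k, m = 5, 4
samples = [upper_records(exp_stream(rng), m) for _ in range(k)]

# One natural nonparametric interval for an upper quantile uses order
# statistics of the m-th (largest) records across the k samples.
tops = np.sort([s[-1] for s in samples])
print("m-th upper records across samples:", np.round(tops, 3))
print("candidate interval for an upper quantile:", (tops[0], tops[-1]))
```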

3.
The standard hypothesis-testing procedure in meta-analysis (or multi-center clinical trials) in the absence of treatment-by-center interaction relies on approximating the null distribution of the standard test statistic by a standard normal distribution. For relatively small sample sizes, various authors have shown that the standard procedure has poor control of the type I error probability, leading to overly liberal decisions. In this article, two test procedures are proposed that rely on the t-distribution as the reference distribution. A simulation study indicates that the proposed procedures attain significance levels closer to the nominal level than the standard procedure.
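A minimal sketch of the contrast between the two reference distributions: the usual inverse-variance pooled statistic is referred once to the normal and once to a t-distribution with k − 1 degrees of freedom. The choice of degrees of freedom is an assumption for illustration; the paper's procedures may define the statistic and its reference differently.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated multi-center trial: per-center effect estimates and standard
# errors, with no treatment-by-center interaction and H0: effect = 0.
k = 6
ses = rng.uniform(0.2, 0.5, size=k)        # per-center standard errors
effects = rng.normal(0.0, ses)             # estimates generated under H0

w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)   # inverse-variance pooled effect
z = pooled * np.sqrt(np.sum(w))

# Standard procedure: normal reference distribution.
p_normal = 2 * stats.norm.sf(abs(z))
# t-based alternative: same statistic, t reference with k - 1 df (assumed).
p_t = 2 * stats.t.sf(abs(z), df=k - 1)
print(f"z = {z:.3f}, normal p = {p_normal:.4f}, t({k-1}) p = {p_t:.4f}")
```

For small k the t reference is noticeably less liberal, which is the direction of the correction the article reports.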

4.
Likelihood-ratio tests (LRTs) are often used for inference on one or more logistic regression coefficients. Conventionally, for given parameters of interest, the nuisance parameters of the likelihood function are replaced by their maximum likelihood estimates; the resulting function, called the profile likelihood, is used for LRT-based inference. In small samples, the LRT based on the profile likelihood does not follow the χ2 distribution. Several corrections have been proposed to improve the LRT for small-sample data. Additionally, complete or quasi-complete separation is a common geometric feature of small-sample binary data. In this article, for small-sample binary data, we derive explicitly the correction factors of the LRT for models with and without separation, and propose an algorithm to construct confidence intervals. We investigate the performance of different LRT corrections, and the corresponding confidence intervals, through simulations. Based on the simulation results, we propose an empirical rule of thumb on the use of these methods. Our simulation findings are also supported by real-world data.
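The following sketch shows the uncorrected profile-likelihood LRT for a single logistic slope (nuisance parameter: the intercept), the quantity the paper's correction factors rescale. The simulated data and the test of slope = 0 are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 30                                  # deliberately small sample
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.3 + 0.8 * x))))
X = np.column_stack([np.ones(n), x])

def negloglik(beta, X, y):
    eta = X @ beta
    # negative logistic log-likelihood, written stably
    return np.sum(np.logaddexp(0, eta)) - np.sum(y * eta)

# Full fit, then profile fit with the slope fixed at 0.
full = minimize(negloglik, np.zeros(2), args=(X, y), method="BFGS")
prof = minimize(lambda b: negloglik(np.array([b[0], 0.0]), X, y),
                np.zeros(1), method="BFGS")

lrt = max(0.0, 2 * (prof.fun - full.fun))
print(f"profile LRT = {lrt:.3f}, chi2(1) p = {chi2.sf(lrt, 1):.4f}")
# In small samples this chi2 reference is inaccurate; the paper's
# correction factors rescale the statistic before comparison.
```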

5.
Jörg Polzehl, Statistics, 2013, 47(1): 139-149
A method of constructing jackknife confidence regions for a function of the structural parameter of a nonlinear model is investigated. The method applies to nonlinear regression models as well as to models with errors in the variables. Its properties are discussed in comparison with traditional methods, supported by a simulation study.
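A minimal sketch of the delete-one jackknife for a function of nonlinear regression parameters. The exponential-decay model, the function g, and the normal-theory interval are assumptions chosen for brevity, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def model(x, a, b):
    return a * np.exp(-b * x)

n = 40
x = np.linspace(0, 4, n)
y = model(x, 2.0, 0.7) + rng.normal(0, 0.1, n)

def g(theta):                  # function of the structural parameter
    a, b = theta
    return a / b               # e.g. the area under the curve on [0, inf)

theta_hat, _ = curve_fit(model, x, y, p0=[1, 1])

# Delete-one jackknife pseudo-values for g(theta).
pseudo = []
for i in range(n):
    mask = np.arange(n) != i
    th_i, _ = curve_fit(model, x[mask], y[mask], p0=theta_hat)
    pseudo.append(n * g(theta_hat) - (n - 1) * g(th_i))
pseudo = np.array(pseudo)

est = pseudo.mean()
se = pseudo.std(ddof=1) / np.sqrt(n)
print(f"jackknife 95% CI for g(theta): ({est - 1.96*se:.3f}, {est + 1.96*se:.3f})")
```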

6.
This article deals with the bootstrap as an alternative method for constructing confidence intervals for the hyperparameters of structural models. The bootstrap procedure considered is the classical nonparametric bootstrap applied to the residuals of the fitted model, following a well-known approach. The performance of this procedure is assessed empirically through Monte Carlo simulations implemented in Ox. Asymptotic and percentile-bootstrap confidence intervals for the hyperparameters are built and compared by means of their coverage percentages. The results are similar, but the bootstrap procedure is better for small sample sizes. The methods are applied to a real time series, and confidence intervals are built for the hyperparameters.
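A sketch of the residual-bootstrap idea, using an AR(1) model as a simple stand-in: fit the model, resample centered residuals, rebuild the series recursively, refit, and take percentile limits. The paper applies the same scheme to the residuals of a fitted structural (state space) model rather than an AR(1).

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_ar1(y):
    """Least-squares fit of y_t = phi * y_{t-1} + e_t; returns phi, residuals."""
    y0, y1 = y[:-1], y[1:]
    phi = np.dot(y0, y1) / np.dot(y0, y0)
    return phi, y1 - phi * y0

n, phi_true = 80, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

phi_hat, resid = fit_ar1(y)
resid = resid - resid.mean()

B, boot = 999, []
for _ in range(B):
    e_star = rng.choice(resid, size=n, replace=True)  # resample residuals
    y_star = np.zeros(n)
    for t in range(1, n):                             # rebuild the series
        y_star[t] = phi_hat * y_star[t - 1] + e_star[t]
    boot.append(fit_ar1(y_star)[0])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"phi_hat = {phi_hat:.3f}, 95% percentile bootstrap CI = ({lo:.3f}, {hi:.3f})")
```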

7.
Point and interval estimators for small domains based exclusively on current, domain-specific sample observations are generally ineffective because of inadequate sample sizes. Borrowing strength from sample values for analogous domains, and simultaneously from all relevant past and auxiliary data, is therefore useful in deriving improved small-domain statistics. Postulating for simplicity a linear regression model with a single covariate, a zero intercept and a time-specific, domain-invariant slope, we start with "synthetic" generalized regression predictors for the domain totals; these borrow strength across domains only. For further improvement, a simple autoregressive model is postulated for the slope parameters. Employing Kalman filtering, the previous predictors are revised to borrow supplementary strength across time. Since drastic simplifying assumptions are needed in such predictions, the efficacy of the procedure is examined through an empirical exercise using live data as well as simulations. The numerical findings are encouraging.
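A toy sketch of the two-stage idea: a cross-sectional "synthetic" slope estimate per period (borrowing across domains), then a scalar Kalman filter that smooths the slope over time. The random-walk slope dynamics, the known noise variances, and all numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy setup: D domains over T periods, y_dt = b_t * x_dt + noise, with a
# time-specific, domain-invariant slope b_t following a random walk.
D, T = 8, 20
x = rng.uniform(1, 5, size=(D, T))
b_true = 1.0 + np.cumsum(rng.normal(0, 0.05, T))
y = b_true * x + rng.normal(0, 0.3, size=(D, T))

# Stage 1: synthetic cross-sectional slope per period (across domains only).
b_hat = (x * y).sum(axis=0) / (x * x).sum(axis=0)
var_b = 0.3**2 / (x * x).sum(axis=0)       # sampling variance of b_hat

# Stage 2: scalar Kalman filter, b_hat_t = b_t + noise, b_t = b_{t-1} + w_t,
# borrowing supplementary strength across time.
q = 0.05**2                                 # state innovation variance (assumed)
b_f, P = np.zeros(T), np.zeros(T)
b_prev, P_prev = 0.0, 1e6                   # diffuse initialization
for t in range(T):
    b_pred, P_pred = b_prev, P_prev + q     # predict
    K = P_pred / (P_pred + var_b[t])        # Kalman gain
    b_f[t] = b_pred + K * (b_hat[t] - b_pred)
    P[t] = (1 - K) * P_pred
    b_prev, P_prev = b_f[t], P[t]

# The revised predictor of each domain total uses the filtered slope.
totals = b_f * x.sum(axis=0)
print("filtered slopes (first 5):", np.round(b_f[:5], 3))
```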

8.
The Kruskal–Wallis test is a popular nonparametric test for comparing k independent samples. In this article we propose a new algorithm to compute the exact null distribution of the Kruskal–Wallis test. Generating the exact null distribution is needed to compare several approximation methods. The 5% cut-off points of the exact null distribution, which StatXact cannot produce, are obtained as by-products. We also investigate graphically why the exact and approximate distributions differ, and hope this will be a useful tutorial tool for teaching the Kruskal–Wallis test in undergraduate courses.
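For small designs the exact null distribution can be obtained by brute-force enumeration of all equally likely rank assignments, as sketched below for group sizes (3, 3, 3); the paper's algorithm is more efficient than this direct enumeration. The comparison against the χ²(2) approximation mirrors the article's point.

```python
import numpy as np
from itertools import combinations
from scipy.stats import chi2

def kw_stat(groups, N):
    """Kruskal-Wallis H from the rank sets of each group (no ties)."""
    h = 12.0 / (N * (N + 1)) * sum(sum(g) ** 2 / len(g) for g in groups)
    return h - 3 * (N + 1)

def exact_null(n1, n2, n3):
    """Enumerate all assignments of ranks 1..N to three groups."""
    N = n1 + n2 + n3
    ranks = set(range(1, N + 1))
    hs = []
    for g1 in combinations(sorted(ranks), n1):
        rest = ranks - set(g1)
        for g2 in combinations(sorted(rest), n2):
            g3 = rest - set(g2)
            hs.append(kw_stat([g1, g2, g3], N))
    return np.array(hs)

hs = exact_null(3, 3, 3)          # 9!/(3!3!3!) = 1680 equally likely splits
print(f"exact P(H >= 5.6) = {np.mean(hs >= 5.6):.4f}")
print(f"chi2(2) approx    = {chi2.sf(5.6, 2):.4f}")
print(f"exact 5% cut-off ~ {np.quantile(hs, 0.95):.3f}")
```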

9.
Mixture experiments are commonly encountered in many fields, including the chemical, pharmaceutical and consumer-product industries. Owing to their wide application, mixture experiments, a special topic within response surface methodology, have received greater attention in both model building and design than other experimental studies. In this paper, new approaches are suggested for model building and selection in the analysis of mixture-experiment data, using a special generalized linear model: the logistic regression model proposed by Chen et al. [7]. In general, the special mixture models, which do not have a constant term, are highly affected by collinearity. For this reason, to alleviate the undesired effects of collinearity in the analysis of mixture experiments with logistic regression, a new mixture model is defined with an alternative ratio variable. A deviance analysis table is given for standard mixture polynomial models defined by transformations and for special mixture models used as linear predictors. The effects of the components on the response in the restricted experimental region are described using an alternative representation of Cox's direction approach. In addition, odds ratios and their confidence intervals are obtained according to the chosen reference and control groups. To compare the suggested models, model selection criteria and graphical displays of the odds ratios and their confidence intervals are used. The advantage of the suggested approaches is illustrated on a tumor incidence data set.
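A small sketch of the general idea: because mixture proportions sum to one, the components are collinear, and a ratio of components can serve as a predictor in a binomial GLM. The particular ratio r = x1/(x2 + x3) below is a hypothetical choice for illustration; the paper defines its own ratio transformation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Toy three-component mixture design: x1 + x2 + x3 = 1 by construction.
n = 60
x = rng.dirichlet([2, 2, 2], size=n)

# Hypothetical ratio variable (assumption for illustration).
r = x[:, 0] / (x[:, 1] + x[:, 2])
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.5 * r))))

# Binomial GLM with the ratio variable; the deviance plays the role the
# residual sum of squares plays in ordinary regression.
X = sm.add_constant(r)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(f"deviance = {fit.deviance:.3f}")

# Odds ratio and 95% CI for a unit increase in the ratio variable.
b, se = fit.params[1], fit.bse[1]
print(f"OR = {np.exp(b):.3f}, "
      f"95% CI = ({np.exp(b - 1.96*se):.3f}, {np.exp(b + 1.96*se):.3f})")
```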

10.
Point and interval estimators for the scale parameter of the component lifetime distribution of a k-component parallel system are obtained when the component lifetimes are assumed to be independent and identically exponentially distributed. We prove that the maximum likelihood estimator of the scale parameter based on progressively Type-II censored system lifetimes is unique and can be obtained by a fixed-point iteration procedure. In particular, we illustrate that the Newton–Raphson method does not converge for any initial value. Furthermore, exact confidence intervals are constructed via a transformation using normalized spacings, and other component lifetime distributions, including the Weibull distribution, are discussed.
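For a flavor of the likelihood involved, the sketch below writes down the score equation for a complete sample of parallel-system lifetimes (max of k exponentials) and solves it with a bracketing root finder. The paper works with progressively Type-II censored data and a specific fixed-point scheme; complete data and brentq keep this sketch short and robust.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)

# k-component parallel system with i.i.d. Exp(theta) components: the system
# lifetime is the max of k exponentials, with pdf
#   f(t) = (k/theta) e^{-t/theta} (1 - e^{-t/theta})^{k-1}.
k, theta_true, n = 3, 2.0, 50
t = rng.exponential(theta_true, size=(n, k)).max(axis=1)  # complete sample

def score(theta):
    """dl/dtheta for the complete-sample log-likelihood."""
    e = np.exp(-t / theta)
    return (-n / theta + t.sum() / theta**2
            - (k - 1) * np.sum(t * e / (1 - e)) / theta**2)

# The score is positive for small theta and negative for large theta,
# so the root (the unique MLE) can be bracketed.
theta_mle = brentq(score, 0.05 * t.mean(), 20 * t.mean())
print(f"MLE of theta: {theta_mle:.4f} (true {theta_true})")
```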

11.
This paper treats the problem of estimating the Mahalanobis distance for the purpose of detecting outliers in high-dimensional data. Three ridge-type estimators are proposed, and risk functions for deciding an appropriate value of the ridge coefficient are developed. It is argued that one of the ridge estimators has particularly tractable properties, which is demonstrated through outlier analyses of real and simulated data.
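A minimal sketch of a ridge-type Mahalanobis distance: the sample covariance is regularized as S + λI before inversion, which is the shared idea behind ridge-type estimators of this kind. The fixed λ below is an assumption; the paper chooses the ridge coefficient by minimizing risk functions.

```python
import numpy as np

rng = np.random.default_rng(8)

# High-dimensional setting: p comparable to n, so the sample covariance S
# is ill-conditioned and plain Mahalanobis distances break down.
n, p = 50, 40
X = rng.normal(size=(n, p))
X[0] += 3.0                                  # plant one outlier

mu = X.mean(axis=0)
S = np.cov(X, rowvar=False)

def ridge_mahalanobis(X, mu, S, lam):
    """Squared distances using (S + lam * I)^{-1}; lam > 0 restores
    invertibility (choosing lam via risk functions is omitted here)."""
    Sinv = np.linalg.inv(S + lam * np.eye(S.shape[0]))
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, Sinv, d)

d2 = ridge_mahalanobis(X, mu, S, lam=0.5)
top = sorted(enumerate(np.round(d2, 1)), key=lambda t: -t[1])[:3]
print("largest distances (index, value):", top)
```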

12.
13.
The main objective of this work is to evaluate the performance of confidence intervals, built using the deviance statistic, for the hyperparameters of state space models. The first procedure is a marginal approximation to confidence regions based on the likelihood-ratio test, and the second is based on the signed root deviance profile. These methods are computationally efficient and are not affected by problems such as interval limits falling outside the parameter space, which can occur when the focus is on the error variances. The procedures are compared with the usual approaches in the literature, including the method based on the asymptotic distribution of the maximum likelihood estimator as well as bootstrap confidence intervals. The comparison is performed via a Monte Carlo study, in order to establish empirically the advantages and disadvantages of each method. The results show that the methods based on the deviance statistic possess better coverage rates than the asymptotic and bootstrap procedures.
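The sketch below inverts a signed root deviance profile for a variance parameter, using an i.i.d. Gaussian variance as a stand-in for a state space hyperparameter (an assumption made for brevity; the mechanics of profiling and inverting the deviance are the same). Note that both endpoints stay positive by construction, unlike a Wald interval.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(9)
n = 25
x = rng.normal(0, 1.5, n)
s2_hat = np.mean((x - x.mean())**2)          # MLE of the variance

def loglik(s2):
    return -0.5 * n * np.log(2 * np.pi * s2) \
           - np.sum((x - x.mean())**2) / (2 * s2)

def signed_root(s2):
    """Signed root of the deviance profile, r(s2)."""
    dev = 2 * (loglik(s2_hat) - loglik(s2))
    return np.sign(s2_hat - s2) * np.sqrt(max(dev, 0.0))

# 95% CI: solve r(s2) = +/- z on each side of the MLE.
z = norm.ppf(0.975)
lo = brentq(lambda s2: signed_root(s2) - z, 1e-6, s2_hat)
hi = brentq(lambda s2: signed_root(s2) + z, s2_hat, 50 * s2_hat)
print(f"s2_hat = {s2_hat:.3f}, 95% deviance CI = ({lo:.3f}, {hi:.3f})")
```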

14.
Logistic regression using conditional maximum likelihood estimation has recently gained widespread use. Many applications of logistic regression involve independent variables that are collinear. It is shown that collinearity among the independent variables seriously affects the conditional maximum likelihood estimator: the variance of this estimator is inflated in much the same way that collinearity inflates the variance of the least squares estimator in multiple regression. Drawing on the similarities between multiple and logistic regression, several alternative estimators that reduce the effect of collinearity and are easy to obtain in practice are suggested and compared in a simulation study.
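A quick demonstration of the variance inflation and of how a ridge (L2) penalty, one shrinkage estimator of the kind such comparisons consider, stabilizes the fit. The simulated design and penalty strength are assumptions; the paper's specific alternative estimators may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)

# Nearly collinear predictors inflate the variance of the ML estimator.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.05, n)             # almost a copy of x1
X = np.column_stack([x1, x2])
y = rng.binomial(1, 1 / (1 + np.exp(-(x1 - x2))))

def coef_sd(C, B=200):
    """SD of fitted coefficients across bootstrap refits (smaller C =
    stronger ridge shrinkage in scikit-learn's parameterization)."""
    coefs = []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        m = LogisticRegression(penalty="l2", C=C, max_iter=2000)
        coefs.append(m.fit(X[idx], y[idx]).coef_[0])
    return np.std(coefs, axis=0)

print("coef SDs, near-ML (C=1e6):", np.round(coef_sd(1e6), 2))
print("coef SDs, ridge   (C=1.0):", np.round(coef_sd(1.0), 2))
```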

15.
Maximum likelihood estimation of the parameters of the Poisson binomial distribution, based on a sample with both exact and grouped observations, is considered via the EM algorithm (Dempster et al., 1977). The results of Louis (1982) are used to obtain the observed information matrix and to accelerate the convergence of the EM algorithm substantially. Maximum likelihood estimation from samples consisting entirely of complete (Sprott, 1958) or grouped observations is treated as a special case of this estimation problem. A brief account is given of the implementation of the EM algorithm when the sampling distribution is the Neyman Type A, since the latter is a limiting form of the Poisson binomial. Numerical examples based on real data are included.
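The sketch below runs EM on a mix of exact and grouped counts for a plain Poisson model, a deliberate simplification of the paper's Poisson binomial setting: the E-step replaces each grouped observation by its conditional expectation given its cell, and the M-step is the complete-data MLE.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(11)

# Stand-in model: Poisson(lam); the paper treats the Poisson binomial,
# but the E-step logic for grouped data has the same shape.
lam_true = 3.0
exact = rng.poisson(lam_true, 80)                   # fully observed counts
grouped_raw = rng.poisson(lam_true, 40)
cells = [(0, 2), (3, 5), (6, 30)]                   # grouping cells
grouped = [next(c for c in cells if c[0] <= g <= c[1])
           for g in grouped_raw]                    # only the cell is seen

def cond_mean(lam, a, b):
    """E[X | a <= X <= b] for X ~ Poisson(lam) -- the E-step."""
    ks = np.arange(a, b + 1)
    pk = poisson.pmf(ks, lam)
    return np.sum(ks * pk) / np.sum(pk)

lam = 1.0
for _ in range(100):                                # EM iterations
    filled = [cond_mean(lam, a, b) for (a, b) in grouped]
    lam_new = (exact.sum() + np.sum(filled)) / (len(exact) + len(grouped))
    if abs(lam_new - lam) < 1e-9:
        break
    lam = lam_new
print(f"EM estimate of lambda: {lam:.4f} (true {lam_true})")
```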

16.
Median survival times and their associated confidence intervals are often used to summarize the survival outcome of a group of patients in clinical trials with failure-time endpoints. Although there is an extensive literature on this topic for the case in which the patients come from a homogeneous population, few papers have dealt with the case in which covariates are present, as in the proportional hazards model. In this paper we propose a new approach to this problem and demonstrate its advantages over existing methods, not only for the proportional hazards model but also for the widely studied cases where covariates are absent and where there is no censoring. As an illustration, we apply it to the Stanford Heart Transplant data. Asymptotic theory and simulation studies show that the proposed method indeed yields confidence intervals and bands with accurate coverage errors.
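For the homogeneous, no-covariate baseline case, the sketch below estimates the median survival from a Kaplan–Meier curve and attaches a simple bootstrap interval. The bootstrap interval and simulated data are assumptions for illustration; the paper's proposed intervals are constructed differently and also cover the proportional hazards case.

```python
import numpy as np

rng = np.random.default_rng(12)

def km_median(time, event):
    """Kaplan-Meier median: smallest t with S(t) <= 0.5 (no tied times)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n, s = len(time), 1.0
    for i in range(n):
        if event[i]:
            s *= 1 - 1 / (n - i)     # at-risk set shrinks by one per step
        if s <= 0.5:
            return time[i]
    return np.nan                    # median not reached

# Simulated censored survival data (homogeneous group).
n = 120
t_event = rng.exponential(10, n)
t_cens = rng.exponential(25, n)
time = np.minimum(t_event, t_cens)
event = t_event <= t_cens

med = km_median(time, event)
boot = []
for _ in range(500):                 # simple bootstrap CI for the median
    idx = rng.integers(0, n, n)
    boot.append(km_median(time[idx], event[idx]))
lo, hi = np.nanpercentile(boot, [2.5, 97.5])
print(f"median survival ~ {med:.2f}, bootstrap 95% CI ~ ({lo:.2f}, {hi:.2f})")
```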

17.
Based on the tampered failure rate model under adaptive Type-II progressively hybrid censored data, we discuss in this paper the maximum likelihood estimators of the unknown parameters and acceleration factors in general step-stress accelerated life tests. We also construct an exact and unique confidence interval for the extended Weibull shape parameter. In the numerical analysis, we describe the simulation procedures for obtaining adaptive Type-II progressively hybrid censored data in step-stress accelerated life tests and present experimental data to illustrate the performance of the estimators.
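As a simplified illustration of an exact interval for a Weibull shape parameter, the sketch below uses the classical pivotal quantity shape_hat/shape for complete samples, whose distribution is parameter-free and can be tabulated by Monte Carlo. This is a stand-in: the paper derives its exact interval in the far more involved step-stress, adaptive Type-II progressively hybrid censored setting.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

n, shape_true = 30, 1.8
data = stats.weibull_min.rvs(shape_true, scale=2.0, size=n, random_state=rng)
shape_hat, _, _ = stats.weibull_min.fit(data, floc=0)

# Monte Carlo distribution of the pivot shape_hat / shape, simulated
# under shape = 1 (the pivot's law does not depend on the parameters).
piv = []
for _ in range(1000):
    sim = stats.weibull_min.rvs(1.0, size=n, random_state=rng)
    b, _, _ = stats.weibull_min.fit(sim, floc=0)
    piv.append(b)
q_lo, q_hi = np.percentile(piv, [2.5, 97.5])
print(f"shape_hat = {shape_hat:.3f}, "
      f"95% CI = ({shape_hat / q_hi:.3f}, {shape_hat / q_lo:.3f})")
```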

18.
Longitudinal studies suffer from patient dropout. The dropout process may be informative if there is an association between dropout patterns and the rate of change in the response over time. Multiple patterns are plausible in that different causes of dropout may contribute to different patterns. These multiple patterns can be dichotomized into two groups: quantitative and qualitative interaction. Quantitative interaction indicates that each of the multiple sources biases the estimate of the rate of change in the same direction, although with differing magnitudes. Alternatively, qualitative interaction results in the multiple sources biasing the estimate of the rate of change in opposing directions. Qualitative interaction is of special concern, since it is less likely to be detected by conventional methods and can lead to highly misleading slope estimates. We explore a test for qualitative interaction based on simultaneous confidence intervals. The test accommodates the realistic situation where reasons for dropout are not fully understood, or even entirely unknown, and allows for an additional level of clustering among participating subjects. We apply these methods to a study of tumor growth rates in mice as well as a longitudinal study of rates of change in cognitive functioning for Alzheimer's patients.
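A bare-bones sketch of flagging qualitative interaction from simultaneous intervals: if some pattern-specific slope is reliably negative while another is reliably positive, the biases pull in opposite directions. The Bonferroni construction and the numbers below are assumptions for illustration; the paper's intervals and clustering adjustment are more refined.

```python
import numpy as np
from scipy.stats import norm

# Pattern-specific slope estimates (rate of change) and standard errors,
# one per dropout pattern; the numbers are purely illustrative.
slopes = np.array([-0.9, -0.4, 0.6])
ses = np.array([0.25, 0.30, 0.28])
m = len(slopes)

# Simultaneous 95% CIs via a Bonferroni adjustment (one simple choice).
z = norm.ppf(1 - 0.05 / (2 * m))
lo, hi = slopes - z * ses, slopes + z * ses
for j in range(m):
    print(f"pattern {j}: ({lo[j]:+.2f}, {hi[j]:+.2f})")

# Qualitative interaction: intervals force slopes of opposite signs.
if (hi < 0).any() and (lo > 0).any():
    print("evidence of qualitative interaction")
else:
    print("no evidence of qualitative interaction (at this level)")
```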

19.
The authors develop empirical likelihood (EL) based methods of inference for a common mean using data from several independent but nonhomogeneous populations. For point estimation, they propose a maximum empirical likelihood (MEL) estimator and show that it is n-consistent and asymptotically optimal. For confidence intervals, they consider two EL-based methods and show that both intervals have approximately correct coverage probabilities in large samples. Finite-sample performances of the MEL estimator and the EL-based confidence intervals are evaluated through a simulation study. The results indicate that, overall, the MEL estimator and the weighted EL confidence interval are superior alternatives to the existing methods.
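The sketch below implements Owen-style empirical likelihood for a mean with a single sample, inverting −2 log R(μ) against a χ²(1) cut-off; the several-population, common-mean version the authors study builds on the same profiling mechanics. The single-sample setting and normal test data are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(14)
x = rng.normal(5.0, 2.0, 40)       # one sample; the paper combines several

def neg2logR(mu):
    """-2 log empirical likelihood ratio for the mean mu."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf               # mu outside the convex hull of the data
    # Solve sum z_i / (1 + lam z_i) = 0 for the Lagrange multiplier lam,
    # keeping all weights positive: lam in (-1/z_max, -1/z_min).
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log(1 + lam * z))

# Invert the EL ratio test: 95% CI = { mu : -2 log R(mu) <= chi2_1(0.95) }.
cut = chi2.ppf(0.95, 1)
mu_hat = x.mean()                   # single-sample MEL estimate of the mean
lo_ci = brentq(lambda m: neg2logR(m) - cut, x.min() + 1e-6, mu_hat)
hi_ci = brentq(lambda m: neg2logR(m) - cut, mu_hat, x.max() - 1e-6)
print(f"point estimate = {mu_hat:.3f}, 95% EL CI = ({lo_ci:.3f}, {hi_ci:.3f})")
```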

20.
A two-sided sequential confidence interval with prescribed width and confidence coefficient is suggested for the number of equally probable cells in a given multinomial population. We establish large-sample properties of the fixed-width confidence interval procedure using a normal approximation, and some comparisons are made. In addition, a simulation study is carried out to investigate the finite-sample behaviour of the suggested sequential interval estimation procedure.
