Similar Articles
20 similar articles found.
1.
The inflated beta regression model enables the modelling of responses in the intervals (0, 1], [0, 1), or [0, 1]. In this model, hypothesis testing is often performed with the likelihood ratio statistic, whose critical values are obtained from asymptotic approximations that may lead to size distortions in small samples. This article therefore proposes a bootstrap Bartlett correction to the likelihood ratio statistic in the inflated beta regression model. The proposed adjustment requires only a simple Monte Carlo simulation. Through extensive Monte Carlo simulations, the finite-sample performance (size and power) of the corrected test is compared with that of the usual likelihood ratio test and the Skovgaard adjustment already proposed in the literature. The numerical results show that inference based on the proposed correction is much more reliable than that based on the usual likelihood ratio statistic or the Skovgaard adjustment. An application to real data is also presented.
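As a rough illustration of the bootstrap Bartlett idea behind such a correction, the sketch below rescales a likelihood ratio statistic by the mean of bootstrap replicates generated under the null. It uses a simple normal-mean test rather than the inflated beta model of the paper, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def lr_stat_mean_zero(x):
    # LR statistic for H0: mu = 0 vs H1: mu free, normal data, unknown variance
    n = len(x)
    s2_h1 = np.var(x)            # MLE of the variance under H1
    s2_h0 = np.mean(x ** 2)      # MLE of the variance under H0 (mu = 0)
    return n * (np.log(s2_h0) - np.log(s2_h1))

def bartlett_corrected_lr(x, n_boot=500):
    # Bootstrap Bartlett correction: rescale the observed LR statistic so its
    # mean matches the chi-squared degrees of freedom, estimating the mean
    # by resampling under the null hypothesis
    lr = lr_stat_mean_zero(x)
    n = len(x)
    x0 = x - x.mean()            # centre the data so H0 holds in the resamples
    boot = np.array([lr_stat_mean_zero(rng.choice(x0, n, replace=True))
                     for _ in range(n_boot)])
    df = 1                       # one restriction imposed by H0
    return lr * df / boot.mean()

x = rng.normal(loc=0.0, scale=1.0, size=20)
lr_corr = bartlett_corrected_lr(x)
```

The corrected statistic is then compared with the usual chi-squared critical value; only the scaling factor comes from simulation.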

2.
The class of beta regression models proposed by Ferrari and Cribari-Neto [Beta regression for modelling rates and proportions, Journal of Applied Statistics 31 (2004), pp. 799–815] is useful for modelling data that assume values in the standard unit interval (0, 1). The dependent variable is related to a linear predictor, which includes regressors and unknown parameters, through a link function. The model is also indexed by a precision parameter, which is typically taken to be constant across observations. Some authors, however, have used variable dispersion beta regression models, i.e., models that include a regression submodel for the precision parameter. In this paper, we show how to perform testing inference on the parameters that index the mean submodel without having to model the data precision. This strategy is useful because dispersion effects are typically harder to model than mean effects. The proposed inference procedure is accurate even under variable dispersion. We present the results of extensive Monte Carlo simulations in which our testing strategy is contrasted with one in which the practitioner models the underlying dispersion and then performs testing inference. An empirical application that uses real (not simulated) data is also presented and discussed.

3.
The Lorenz curve describes the wealth proportion for an income-ordered population. In this paper, we introduce a kernel smoothing estimator for the Lorenz curve and propose a smoothed jackknife empirical likelihood method for constructing confidence intervals of Lorenz ordinates. Extensive simulation studies are conducted to evaluate the finite-sample performance of the proposed methods. A real dataset of Georgia professors' incomes is used to illustrate the proposed methods.
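A minimal sketch of kernel smoothing applied to the empirical Lorenz curve (a generic smoothing device, not necessarily the exact estimator of the paper; the bandwidth and function names are illustrative):

```python
import numpy as np

def lorenz_ordinate(sorted_income, p):
    # Empirical Lorenz ordinate: income share held by the poorest fraction p
    n = len(sorted_income)
    k = int(np.floor(p * n))
    return sorted_income[:k].sum() / sorted_income.sum()

def smoothed_lorenz(income, p, h=0.05, grid=400):
    # Kernel-smoothed ordinate: Gaussian-weighted average of the empirical
    # curve around p (bandwidth h chosen purely for illustration)
    x = np.sort(np.asarray(income, dtype=float))
    ps = np.linspace(0.001, 0.999, grid)
    w = np.exp(-0.5 * ((ps - p) / h) ** 2)
    vals = np.array([lorenz_ordinate(x, q) for q in ps])
    return float((w * vals).sum() / w.sum())

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10.0, sigma=0.5, size=500)
l_half = smoothed_lorenz(income, 0.5)   # income share of the poorest half
```

Since the Lorenz curve lies below the diagonal for unequal incomes, the smoothed ordinate at p = 0.5 falls strictly between 0 and 0.5 here.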

4.
In this paper, we consider the problem of constructing nonparametric confidence intervals for the mean of a positively skewed distribution. We suggest calibrated, smoothed bootstrap upper and lower percentile confidence intervals. On the theoretical side, we show that the proposed one-sided confidence intervals have coverage probability α + O(n^(−3/2)), an improvement upon the traditional bootstrap confidence intervals in terms of coverage probability. A smoothed version of the approach is also considered for constructing a two-sided confidence interval, and its theoretical properties are likewise studied. A simulation study illustrates the performance of our confidence interval methods, and we then apply the methods to a real data set.

5.
In this study, we propose a median control chart. To determine the control limits, we consider using an estimate of the variance of the sample median, and we also consider applying bootstrap methods. We illustrate the proposed median control chart with an example and compare the bootstrap methods in a simulation study. Finally, as concluding remarks, we discuss some features peculiar to the median control chart.
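The bootstrap route to control limits for a median chart can be sketched as follows, assuming in-control reference data and percentile limits (an illustrative scheme, not the authors' exact procedure; names and the default false-alarm rate are assumptions):

```python
import numpy as np

def bootstrap_median_limits(reference, n_sub, alpha=0.0027, n_boot=2000, seed=1):
    # Control limits for the subgroup median: resample subgroups of size n_sub
    # from in-control reference data, compute their medians, and take the
    # alpha/2 and 1 - alpha/2 percentiles as the lower and upper limits
    rng = np.random.default_rng(seed)
    meds = np.array([np.median(rng.choice(reference, n_sub, replace=True))
                     for _ in range(n_boot)])
    lcl = np.quantile(meds, alpha / 2)
    ucl = np.quantile(meds, 1 - alpha / 2)
    return lcl, ucl

rng = np.random.default_rng(7)
reference = rng.normal(50, 2, size=200)   # in-control process measurements
lcl, ucl = bootstrap_median_limits(reference, n_sub=5)
```

Future subgroup medians falling outside (lcl, ucl) would then signal a possible shift in the process.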

6.
Given a pair of sample estimators of two independent proportions, bootstrap methods are a common strategy for deriving the associated confidence interval for the relative risk. We develop a new smooth bootstrap procedure that generates pseudo-samples from a continuous quantile function. Under a variety of settings, our simulation studies show that our method performs as well as or better than asymptotic-theory-based and existing bootstrap methods, particularly for heavily unbalanced data, in terms of coverage probability and power. We illustrate our procedure on several published data sets.
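For comparison, a plain percentile bootstrap for the relative risk of two independent proportions might look like the sketch below; the 0.5-count adjustment is a simple smoothing device standing in for the paper's continuous quantile-function scheme, and the counts are made up:

```python
import numpy as np

def rr_bootstrap_ci(x1, n1, x2, n2, level=0.95, n_boot=4000, seed=3):
    # Percentile bootstrap CI for the relative risk p1 / p2.  Adding 0.5 to
    # each count keeps resampled risks away from zero -- a crude smoothing
    # device, not the smooth bootstrap of the paper.
    rng = np.random.default_rng(seed)
    p1, p2 = (x1 + 0.5) / (n1 + 1), (x2 + 0.5) / (n2 + 1)
    b1 = rng.binomial(n1, p1, n_boot)      # resampled event counts, group 1
    b2 = rng.binomial(n2, p2, n_boot)      # resampled event counts, group 2
    rr = ((b1 + 0.5) / (n1 + 1)) / ((b2 + 0.5) / (n2 + 1))
    lo, hi = np.quantile(rr, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Illustrative data: 12/100 events in group 1 versus 30/100 in group 2
lo, hi = rr_bootstrap_ci(x1=12, n1=100, x2=30, n2=100)
```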

7.
The sampling distribution of Kendall's partial rank correlation coefficient, τxy·z, is not known for N > 4, where N is the number of subjects. Moran (1951) used a direct combinatorial method to obtain the distribution of τxy·z for N = 4; however, ten minor computational errors in his Table 2 apparently resulted in erroneous entries in his frequency table. Since the practical limits of the direct combinatorial approach are reached once N > 4, the first main objective of this paper was to obtain the exact distribution of τxy·z for N = 5, 6, and 7 using an electronic computer. The second was to use the Monte Carlo method to obtain reliable estimates of the quantiles of τxy·z for N = 8, 9, ..., 30.

8.
In vitro dissolution similarity has been suggested as a surrogate for assessing equivalence between the pre-change and post-change formulations in post-approval changes of a drug. The difference factor f1, based on the absolute mean difference, has been proposed as a criterion for evaluating similarity between dissolution profiles. Statistical properties of a consistent estimator, including its density function, bias, and asymptotic distribution, are investigated. Owing to the complexity of the distribution of the estimator, we suggest using confidence intervals obtained by the bootstrap method to evaluate dissolution similarity. A simulation was conducted to examine the size and power of the proposed CI procedure. Comparisons with other criteria, such as the similarity factor, are also provided, and numerical examples illustrate the proposed CI procedure.
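The difference factor and a percentile-bootstrap interval for it can be sketched as follows, using the standard definition f1 = 100 · Σ|R_t − T_t| / Σ R_t and purely illustrative dissolution data (the resampling scheme and names are assumptions, not the paper's exact procedure):

```python
import numpy as np

def f1_factor(ref, test):
    # Difference factor f1: cumulative absolute difference between the mean
    # reference and test dissolution profiles, as a percentage
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return 100 * np.abs(ref - test).sum() / ref.sum()

def f1_bootstrap_ci(ref_units, test_units, level=0.90, n_boot=2000, seed=5):
    # Percentile bootstrap CI for f1, resampling dosage units within groups
    rng = np.random.default_rng(seed)
    ref_units, test_units = np.asarray(ref_units), np.asarray(test_units)
    stats = []
    for _ in range(n_boot):
        r = ref_units[rng.integers(0, len(ref_units), len(ref_units))]
        t = test_units[rng.integers(0, len(test_units), len(test_units))]
        stats.append(f1_factor(r.mean(axis=0), t.mean(axis=0)))
    return np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])

# 6 dosage units x 4 time points per formulation (illustrative numbers)
ref = np.array([[25, 45, 70, 90]] * 6) + np.random.default_rng(1).normal(0, 2, (6, 4))
test = np.array([[22, 42, 68, 88]] * 6) + np.random.default_rng(2).normal(0, 2, (6, 4))
lo, hi = f1_bootstrap_ci(ref, test)
```

Similarity would then be judged by whether the whole interval falls below a pre-specified f1 threshold.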

9.
Recently, the methods used to estimate monotonic regression (MR) models have been substantially improved, and some algorithms can now produce high-accuracy monotonic fits to multivariate datasets containing over a million observations. Nevertheless, the computational burden can be prohibitively large for resampling techniques in which numerous datasets are processed independently of each other. Here, we present efficient algorithms for estimation of confidence limits in large-scale settings that take into account the similarity of the bootstrap or jackknifed datasets to which MR models are fitted. In addition, we introduce modifications that substantially improve the accuracy of MR solutions for binary response variables. The performance of our algorithms is illustrated using data on death in coronary heart disease for a large population. This example also illustrates that MR can be a valuable complement to logistic regression.

10.
Stute (1993, Consistent estimation under random censorship when covariables are present. Journal of Multivariate Analysis 45, 89–103) proposed a new method to estimate regression models with a censored response variable using least squares and showed the consistency and asymptotic normality of his estimator. This article proposes a new bootstrap-based methodology that improves the performance of the asymptotic interval estimation for small sample sizes. We compare the behaviour of Stute's asymptotic confidence interval with that of several confidence intervals based on bootstrap resampling techniques. To build these confidence intervals, we propose a new bootstrap resampling method adapted to censored regression models. We use simulations to study the improvement in performance that the proposed bootstrap-based confidence intervals show over the asymptotic proposal. Simulation results indicate that, for the new proposals, coverage percentages are closer to the nominal values and, in addition, the intervals are narrower.

11.
In this paper, under a Type-I progressive hybrid censoring scheme, we obtain the maximum likelihood estimator of the unknown parameter when the parent distribution belongs to the proportional hazard rate family. We derive the conditional probability density function of the maximum likelihood estimator using the moment-generating-function technique. The exact confidence interval is obtained and compared, via a Monte Carlo simulation study, for the Burr Type XII distribution. Finally, we obtain the Bayes and posterior regret gamma minimax estimates of the parameter under a precautionary loss function with precautionary index k = 2 and compare their behaviour via a Monte Carlo simulation study.

12.
In many studies a large number of variables is measured, and identifying the variables that influence an outcome is an important task. Several procedures are available for variable selection. However, focusing on a single model neglects the fact that other, equally appropriate models usually exist. Bayesian and frequentist model averaging approaches have been proposed to improve the development of a predictor. With a larger number of variables (say, more than ten) the resulting class of models can be very large. For Bayesian model averaging, Occam's window is a popular approach to reducing the model space. As this approach may not eliminate any variables, a variable screening step was proposed for a frequentist model averaging procedure: based on the results of selected models in bootstrap samples, variables are eliminated before deriving the model averaging predictor. Backward elimination can be used as a simple alternative screening procedure. Through two examples and by means of simulation we investigate some properties of the screening step. In the simulation study we consider situations with 15 and 25 variables, respectively, of which seven influence the outcome. The screening step eliminates most of the non-influential variables, but also some variables with a weak effect. Variable screening leads to more applicable models without eliminating models that are more strongly supported by the data. Furthermore, we give recommendations for the important parameters of the screening step.

13.
A regression estimator using two prior values of the population mean (μx) of an auxiliary variable (x) is proposed, applied after a preliminary test of the closeness of these prior values to the true value μx. The proposed preliminary test regression estimator is found to be more efficient, in general, than the usual regression estimator in which prior values are used in place of μx without a preliminary test of significance. The efficiency of the proposed estimator over the usual regression estimator has also been computed for different values of Δ0, Δ1, n, and ρ, showing considerable gains in precision.

14.
Bootstrap in functional linear regression
We consider the functional linear model with scalar response and functional explanatory variable. One of the most popular methodologies for estimating the model parameter is based on functional principal components analysis (FPCA). In the recent literature, weak convergence has been proved for a wide class of FPCA-type estimates, so asymptotic confidence sets can be built. In this paper, we propose an alternative approach for obtaining pointwise confidence intervals by means of a bootstrap procedure, for which we establish asymptotic validity. In addition, a simulation study allows us to compare the practical behaviour of asymptotic and bootstrap confidence intervals in terms of coverage rates for different sample sizes.

15.
In this article, we consider a two-phase tandem queueing model with a second optional service. In this model, service is delivered in two phases. The first phase is essential for all customers; after its completion, a customer receives the second phase of service with probability α, or leaves the system with probability 1 − α. There are two heterogeneous servers that work independently, one providing the first phase of service and the other the second. Our main purpose is to estimate the parameters of the model, the traffic intensity, and the mean system size, in the steady state, via maximum likelihood and Bayesian methods. Furthermore, we find asymptotic confidence intervals for the mean system size. Finally, by a simulation study, we compute the confidence levels and mean lengths of the asymptotic confidence intervals for the mean system size at a nominal level of 0.95.

16.
We propose a universal robust likelihood that is able to accommodate correlated binary data without any information about the underlying joint distributions. This likelihood function is asymptotically valid for the regression parameter for any underlying correlation configurations, including varying under- or over-dispersion situations, which undermines one of the regularity conditions ensuring the validity of crucial large sample theories. This robust likelihood procedure can be easily implemented by using any statistical software that provides naïve and sandwich covariance matrices for regression parameter estimates. Simulations and real data analyses are used to demonstrate the efficacy of this parametric robust method.

17.
It is well known that when a ranked set sampling (RSS) scheme is employed to estimate the mean of a population, it is more efficient than simple random sampling (SRS) with the same sample size. One can use an RSS analogue of the SRS regression estimator to estimate the population mean of Y using its concomitant variable X when they are linearly related. Unfortunately, the variance of this estimator cannot be evaluated unless the distribution of X is known. We investigate the use of resampling methods to establish confidence intervals for the regression estimation of the population mean. Simulation studies show that the proposed methods perform well in a variety of situations when the assumption of linearity holds, and decently well under mild non-linearity.

18.
A challenging problem in the analysis of high-dimensional data is variable selection. In this study, we describe a bootstrap-based technique for selecting predictors in partial least-squares regression (PLSR) and principal component regression (PCR) in high-dimensional data. Using bootstrap-based significance tests of the regression coefficients, a subset of the original variables can be selected for inclusion in the regression, yielding a more parsimonious model with smaller prediction errors. We compare the bootstrap approach with several other variable selection approaches (jack-knife and sparse-formulation-based methods) for PCR and PLSR on simulated and real data.

19.
20.
Confidence intervals obtained by bootstrap methods and by normal approximation are compared, based on output data from terminating and steady-state simulations. Bootstrap intervals are as good as or better than normal-approximation intervals in actual coverage probability. Furthermore, bootstrap methods capture the skewness in the distribution of the outputs and are therefore more desirable than the normal approximation.
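The contrast between the two interval types can be sketched on skewed simulation output, assuming a simple percentile bootstrap (illustrative only; the data and names are made up):

```python
import numpy as np

def normal_ci(x):
    # Classical normal-approximation 95% CI for the mean
    z = 1.96  # two-sided 95% quantile of N(0, 1)
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
    return m - z * se, m + z * se

def percentile_boot_ci(x, level=0.95, n_boot=4000, seed=11):
    # Percentile bootstrap CI: resample means and take empirical quantiles,
    # so the interval can follow any skewness in the output distribution
    rng = np.random.default_rng(seed)
    means = rng.choice(x, (n_boot, len(x)), replace=True).mean(axis=1)
    return tuple(np.quantile(means, [(1 - level) / 2, (1 + level) / 2]))

# Skewed "simulation output", e.g. waiting times from a queueing run
x = np.random.default_rng(9).exponential(scale=2.0, size=40)
n_lo, n_hi = normal_ci(x)
b_lo, b_hi = percentile_boot_ci(x)
```

Unlike the normal interval, which is forced to be symmetric about the sample mean, the bootstrap interval is typically asymmetric for such skewed output.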

