Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
M. Tez, Statistical Papers, 1991, 32(1): 253–260
The maximization-and-minimization procedure for constructing confidence bands for the general nonlinear regression model is explained. This procedure is then used to construct a conservative confidence band for the Michaelis-Menten kinetic model of enzyme kinetics.
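The max-min idea can be sketched numerically: at each substrate concentration the band endpoints are the extremes of the fitted Michaelis-Menten curve over parameter vectors in a joint confidence region. The following Python sketch uses synthetic data and an asymptotic confidence ellipsoid; it illustrates the principle only, not the paper's exact conservative construction.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

def mm(S, Vmax, Km):
    # Michaelis-Menten velocity as a function of substrate concentration
    return Vmax * S / (Km + S)

rng = np.random.default_rng(0)
S = np.linspace(0.1, 5, 30)
v = mm(S, 2.0, 0.5) + rng.normal(0, 0.05, S.size)   # synthetic kinetics data

theta_hat, cov = curve_fit(mm, S, v, p0=[1.0, 1.0])

# Max-min band: at each S, take the min/max of the model over parameter
# vectors on the boundary of an approximate 95% confidence ellipsoid.
L = np.linalg.cholesky(cov)
radius = np.sqrt(chi2.ppf(0.95, df=2))
z = rng.normal(size=(2000, 2))
z = z / np.linalg.norm(z, axis=1, keepdims=True)    # points on the unit sphere
thetas = theta_hat + radius * z @ L.T               # ellipsoid boundary points

grid = np.linspace(0.1, 5, 50)
curves = np.array([mm(grid, *t) for t in thetas])
lower, upper = curves.min(axis=0), curves.max(axis=0)
```

Because the model is monotone along any chord of the ellipsoid here, the extremes are attained on the boundary, which is why only boundary points are sampled.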

2.
The construction of confidence sets for the parameters of a flexible simple linear regression model for interval-valued random sets is addressed. For that purpose, the asymptotic distribution of the least-squares estimators is analyzed. A simulation study is conducted to investigate the performance of those confidence sets. In particular, the empirical coverages are examined for various interval linear models. The applicability of the procedure is illustrated by means of a real-life case study.

3.
The results of analyzing experimental data with a parametric model may depend heavily on the chosen regression and variance functions, and also on any preliminary transformation of the variables. In this paper we propose and discuss a comprehensive procedure that simultaneously selects parametric regression and variance models from a relatively rich model class, together with Box-Cox variable transformations, by minimizing a cross-validation criterion. It is essential here to modify the standard cross-validation criterion to suit each of the following objectives: (1) estimation of the unknown regression function, (2) prediction of future values of the response variable, (3) calibration, or (4) estimation of some parameter with a particular meaning in the corresponding field of application. This criterion-oriented combination of procedures (which, when applied at all, are usually applied independently or sequentially) is expected to yield more accurate results. We show how the accuracy of the parameter estimators can be assessed by a "moment-oriented bootstrap procedure", an essential modification of the "wild bootstrap" of Härdle and Mammen that uses more accurate variance estimates. This new procedure, and its refinement by a bootstrap-based pivot (the "double bootstrap"), is also used to construct confidence, prediction and calibration intervals. Programs written in S-Plus that implement our strategy for nonlinear regression modelling and parameter estimation are described as well. The performance of the selected model is discussed, and the behaviour of the procedures is illustrated, e.g., by an application to a radioimmunological assay.
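The wild-bootstrap idea referenced above, resampling residuals in a way that respects heteroscedastic variances, can be sketched for a plain linear fit. This is the generic Rademacher-weight version, not the authors' moment-oriented refinement:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.3 * (1 + x))    # heteroscedastic errors

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Wild bootstrap: perturb each residual by a random sign (Rademacher
# weight), which preserves each observation's own error variance.
B = 500
boot = np.empty((B, 2))
for b in range(B):
    y_star = X @ beta_hat + resid * rng.choice([-1.0, 1.0], size=n)
    boot[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

se = boot.std(axis=0)    # bootstrap standard errors of intercept and slope
```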

4.
Volume 3 of Analysis of Messy Data by Milliken & Johnson (2002) provides detailed recommendations about sequential model development for the analysis of covariance. In his review of this volume, Koehler (2002) asks whether users should be concerned about the effect of this sequential model development on the coverage probabilities of confidence intervals for comparing treatments. We present a general methodology for the examination of these coverage probabilities in the context of the two‐stage model selection procedure that uses two F tests and is proposed in Chapter 2 of Milliken & Johnson (2002). We apply this methodology to an illustrative example from this volume and show that these coverage probabilities are typically very far below nominal. Our conclusion is that users should be very concerned about the coverage probabilities of confidence intervals for comparing treatments constructed after this two‐stage model selection procedure.

5.
By combining progressive hybrid censoring with the step-stress partially accelerated lifetime test, we propose an adaptive step-stress partially accelerated lifetime test, which allows the number of step-stress levels to change randomly according to the pre-fixed censoring number and time points, greatly reducing the time and economic cost of the test. Based on the Lindley-distributed tampered failure rate (TFR) model with masked system lifetime data, the BFGS method is introduced into the expectation-maximization (EM) algorithm to obtain the maximum likelihood estimates (MLEs), overcoming the intractable maximization in the M-step. Asymptotic confidence intervals for the components' distribution parameters are also derived via the missing information principle. For comparison, Bayesian estimates and highest probability density (HPD) credible intervals are obtained using adaptive rejection sampling. Furthermore, the reliability of the system and its components is estimated at a specified time under both usual and severe operating conditions. Finally, a numerical simulation example illustrates the performance of the proposed method.
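The pattern of delegating an awkward M-step to a quasi-Newton routine can be sketched on a toy problem. The sketch below is a two-component normal mixture with unit variances, purely illustrative and not the Lindley TFR model above, with `scipy.optimize.minimize(method="BFGS")` performing the M-step:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 150)])

def em_bfgs(x, iters=50):
    mu = np.array([x.min(), x.max()])    # crude starting values
    pi = 0.5
    for _ in range(iters):
        # E-step: posterior probability that each point came from component 2
        d1 = np.exp(-0.5 * (x - mu[0]) ** 2)
        d2 = np.exp(-0.5 * (x - mu[1]) ** 2)
        w = pi * d2 / ((1 - pi) * d1 + pi * d2)
        pi = w.mean()

        # M-step: minimize the negative expected complete-data
        # log-likelihood in the means via BFGS instead of a closed form.
        def q(m):
            return np.sum((1 - w) * (x - m[0]) ** 2 + w * (x - m[1]) ** 2)
        mu = minimize(q, mu, method="BFGS").x
    return mu, pi

mu, pi = em_bfgs(x)
```

Here the M-step actually has a closed form (weighted means), which makes it easy to verify that the quasi-Newton step lands in the right place; in the masked-data TFR setting no such closed form exists, which is the motivation for BFGS.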

6.
In applications using a simple regression model with a balanced two-fold nested error structure, interest focuses on inferences concerning the regression coefficient. This article derives exact and approximate confidence intervals on the regression coefficient in the simple regression model with a balanced two-fold nested error structure. Eleven methods are considered for constructing the confidence intervals on the regression coefficient. Computer simulation is performed to compare the proposed confidence intervals, and recommendations are provided for selecting an appropriate method.

7.
The family of power series cure rate models provides a flexible framework for modeling survival data from populations with a cure fraction. In this work, we present a simplified estimation procedure for the maximum likelihood (ML) approach. ML estimates are obtained via the expectation-maximization (EM) algorithm, where the expectation step involves computing the expected number of concurrent causes for each individual. A key advantage is that the maximization step decomposes into separate maximizations of two lower-dimensional functions, of the regression and survival distribution parameters respectively. Two simulation studies are performed: the first investigates the accuracy of the estimation procedure for different numbers of covariates, and the second compares our proposal with direct maximization of the observed log-likelihood function. Finally, we illustrate the technique on a dataset of survival times for patients with malignant melanoma.

8.
Variable selection in regression analysis is important because it can simplify the model and enhance predictability. After variable selection, however, the resulting working model may be biased when it does not contain all of the significant variables. As a result, the commonly used parameter estimators are either inconsistent or require estimating a high-dimensional nuisance parameter under very strong assumptions for consistency, and the corresponding confidence region is invalid when the bias is relatively large. In this paper we introduce a simulation-based procedure that reformulates a new model so as to reduce the bias of the working model, with no need to estimate a high-dimensional nuisance parameter. The resulting estimators of the parameters in the working model are asymptotically normally distributed whether the bias is small or large. Furthermore, combined with the empirical likelihood, we build simulation-based confidence regions for the parameters in the working model. The newly proposed estimators and confidence regions outperform existing ones in the sense of consistency.

9.
The sample coordination problem involves maximizing or minimizing the overlap of sampling units in different or repeated surveys. Several optimal techniques using transportation theory, controlled rounding, and controlled selection have been suggested in the literature to solve the sample coordination problem. In this article, using multiple objective programming, we propose a method for sample coordination that facilitates variance estimation with the Horvitz–Thompson estimator. The proposed procedure can be applied to any two sample surveys having identical universe and stratification. Some examples demonstrate the utility of the proposed procedure.

10.
In many applications of linear regression models, randomness due to model selection is commonly ignored in post-model selection inference. In order to account for the model selection uncertainty, least-squares frequentist model averaging has been proposed recently. We show that the confidence interval from model averaging is asymptotically equivalent to the confidence interval from the full model. The finite-sample confidence intervals based on approximations to the asymptotic distributions are also equivalent if the parameter of interest is a linear function of the regression coefficients. Furthermore, we demonstrate that this equivalence also holds for prediction intervals constructed in the same fashion.

11.
This paper develops a recursive expectation–maximization (REM) algorithm for estimating a mixture autoregression (MAR) with an independent and identically distributed regime transition process. The proposed method, useful for long time series as well as for data arriving in real time, follows a recursive prediction error-type scheme. Based on a slightly modified version of the expectation–maximization (EM) equations for an MAR model, the REM algorithm consists of two steps at each iteration: an expectation step, in which the current unobserved regime transition is estimated from new data using the previous recursive estimates, and a minimization step, in which the MAR parameter estimates are recursively updated along a minimization direction. Implementation details of the REM algorithm are given and its finite-sample performance is demonstrated via simulation experiments. In particular, EM and REM provide roughly similar estimates, especially for moderate and long time series.

12.
In this paper we develop a regression model for survival data in the presence of long-term survivors, based on a defective version of the generalized Gompertz distribution introduced by El-Gohary et al. [The generalized Gompertz distribution. Appl Math Model. 2013;37:13–24]. This model includes as a special case the Gompertz cure rate model proposed by Gieser et al. [Modelling cure rates using the Gompertz model with covariate information. Stat Med. 1998;17:831–839]. An expectation-maximization algorithm is then developed for determining the maximum likelihood estimates (MLEs) of the model parameters. In addition, we discuss the construction of confidence intervals for the parameters using the asymptotic distributions of the MLEs and the parametric bootstrap method, and assess their performance through a Monte Carlo simulation study. Finally, the proposed methodology is applied to a database on uterine cervical cancer.

13.
In this article we deal with simultaneous two-sided tolerance intervals for a univariate linear regression model with independent normally distributed errors. We present a method for determining the intervals using the general confidence-set approach (GCSA), i.e. the intervals are constructed from a specified confidence set for the unknown parameters of the model. The confidence set used in the new method is formed from a suggested hypothesis test about all parameters of the model. In a preliminary numerical comparison of the existing GCSA-based methods, the simultaneous two-sided tolerance intervals determined by the presented method are found to be efficient and fast to compute.

14.
In this paper, we consider a new mixture of varying coefficient models, in which each mixture component follows a varying coefficient model and the mixing proportions and dispersion parameters are also allowed to be unknown smooth functions. We systematically study the identifiability, estimation and inference for the new mixture model. The proposed new mixture model is rather general, encompassing many mixture models as its special cases such as mixtures of linear regression models, mixtures of generalized linear models, mixtures of partially linear models and mixtures of generalized additive models, some of which are new mixture models by themselves and have not been investigated before. The new mixture of varying coefficient model is shown to be identifiable under mild conditions. We develop a local likelihood procedure and a modified expectation–maximization algorithm for the estimation of the unknown non‐parametric functions. Asymptotic normality is established for the proposed estimator. A generalized likelihood ratio test is further developed for testing whether some of the unknown functions are constants. We derive the asymptotic distribution of the proposed generalized likelihood ratio test statistics and prove that the Wilks phenomenon holds. The proposed methodology is illustrated by Monte Carlo simulations and an analysis of a CO2‐GDP data set.

15.
We present a random coefficient regression model in which a response is linearly related to explanatory variables through random coefficients following a Dirichlet distribution. These coefficients can be interpreted as weights because they are nonnegative and sum to one. The proposed estimation procedure combines iteratively reweighted least squares with maximization of an approximate likelihood function. We also present a diagnostic tool based on a residual Q–Q plot and two procedures for estimating individual weights. The model is used to construct an index measuring the quality of the railroad system in Spain.
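The weight interpretation can be seen by simulating from such a model: Dirichlet coefficients make each response a convex combination of that observation's covariate values. A minimal sketch, with illustrative (not the paper's) concentration parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 3
X = rng.uniform(1, 2, (n, p))            # explanatory variables

# Random coefficients drawn from a Dirichlet distribution: nonnegative
# and summing to one, so each response is a convex combination (weighted
# average) of that row's covariate values.
alpha = np.array([2.0, 5.0, 3.0])        # hypothetical concentration parameters
W = rng.dirichlet(alpha, size=n)         # one weight vector per observation
y = np.sum(W * X, axis=1)
```

Because the weights are a simplex-valued random vector, every simulated response necessarily lies between the row-wise minimum and maximum of the covariates, which is what gives the coefficients their interpretation as weights.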

16.
Count data often contain many zeros. In parametric regression analysis of zero-inflated count data, the effect of a covariate of interest is typically modelled via a linear predictor. This approach imposes a restrictive, and potentially questionable, functional form on the relation between the independent and dependent variables. To address the noted restrictions, a flexible parametric procedure is employed to model the covariate effect as a linear combination of fixed-knot cubic basis splines or B-splines. The semiparametric zero-inflated Poisson regression model is fitted by maximizing the likelihood function through an expectation–maximization algorithm. The smooth estimate of the functional form of the covariate effect can enhance modelling flexibility. Within this modelling framework, a log-likelihood ratio test is used to assess the adequacy of the covariate function. Simulation results show that the proposed test has excellent power in detecting the lack of fit of a linear predictor. A real-life data set is used to illustrate the practicality of the methodology.
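A fixed-knot cubic B-spline expansion of a covariate effect can be plugged into an ordinary Poisson regression as a simplified sketch of the idea (no zero-inflation component, knots chosen arbitrarily, fit by plain iteratively reweighted least squares rather than the paper's EM algorithm):

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 200)
y = rng.poisson(np.exp(np.sin(2 * np.pi * x)))   # counts with a smooth log-rate

k = 3                                            # cubic splines
knots = np.r_[[0.0] * (k + 1), [0.25, 0.5, 0.75], [1.0] * (k + 1)]
n_basis = len(knots) - k - 1

# Design matrix: one column per cubic B-spline basis function
B = np.column_stack([
    BSpline(knots, np.eye(n_basis)[j], k)(x) for j in range(n_basis)
])

# Fit the Poisson log-linear model on the basis by IRLS
beta = np.zeros(n_basis)
for _ in range(30):
    eta = B @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu                      # working response
    beta = np.linalg.solve(B.T @ (mu[:, None] * B), B.T @ (mu * z))

f_hat = B @ beta                                 # estimated log-rate curve
```

The columns of `B` form a partition of unity on [0, 1], so the spline expansion nests the constant effect; a linear predictor, by contrast, would be unable to track the sinusoidal log-rate used here.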

17.
The residual standard deviation of a general linear model provides information about predictive accuracy that is not revealed by the multiple correlation or regression coefficients. The classic confidence interval for a residual standard deviation is hypersensitive to minor violations of the normality assumption and its robustness does not improve with increasing sample size. An approximate confidence interval for the residual standard deviation is proposed and shown to be robust to moderate violations of the normality assumption with robustness to extreme non-normality that improves with increasing sample size.
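The classic normal-theory interval that the paper criticizes can be written down directly. A minimal sketch for a linear model fit follows; the paper's robust alternative would, roughly speaking, adjust for non-normality rather than rely on these chi-squared quantiles, and is not shown:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(0, 2.0, n)

df = n - p
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
s2 = resid @ resid / df                          # unbiased residual variance

# Classic normal-theory 95% interval for the residual SD sigma; as the
# paper notes, this interval is fragile under non-normal errors and its
# fragility does not vanish as n grows.
lo = np.sqrt(df * s2 / chi2.ppf(0.975, df))
hi = np.sqrt(df * s2 / chi2.ppf(0.025, df))
```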

18.
In this paper, we investigate empirical likelihood (EL) inference via weighted composite quantile regression for nonlinear models. Under regularity conditions, we establish that the proposed empirical log-likelihood ratio is asymptotically chi-squared, from which confidence intervals for the regression coefficients are constructed. The proposed method avoids estimating the unknown error density function involved in the asymptotic covariance matrix of the estimators. Simulations suggest that the proposed EL procedure is more efficient and robust, and a real data analysis illustrates its performance.

19.
Inference for a scalar parameter in the presence of nuisance parameters requires high-dimensional integration of the joint density of the pivotal quantities. Recent developments in asymptotic methods provide accurate approximations for significance levels, and thus confidence intervals, for a scalar component parameter. In this paper, a simple, efficient and accurate numerical procedure is first developed for the location model and is then extended to the location-scale model and the linear regression model. The procedure requires only a fine tabulation of the parameter and the observed log-likelihood function, which can be the full, marginal or conditional observed log-likelihood, as input; the output is the corresponding significance function. Numerical results show that this approximation is not only simple but also very accurate: it outperforms the usual approximations based on the signed likelihood ratio statistic, the maximum likelihood estimate and the score statistic.
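The input-output contract described above, tabulate the observed log-likelihood over a parameter grid and return a significance function, can be sketched with the first-order signed likelihood root (the paper's higher-order refinement is omitted). An exponential-rate example:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = rng.exponential(scale=2.0, size=40)
n, s = x.size, x.sum()

lam_hat = n / s                                  # MLE of the rate

def loglik(lam):
    # Observed log-likelihood of an exponential sample with rate lam
    return n * np.log(lam) - lam * s

# Tabulate the log-likelihood on a fine grid and convert it to a
# significance function via the signed likelihood root r(lam).
lam_grid = np.linspace(0.1, 1.5, 400)
r = np.sign(lam_hat - lam_grid) * np.sqrt(
    2.0 * (loglik(lam_hat) - loglik(lam_grid))
)
signif = norm.cdf(r)                             # significance function
```

Reading off the parameter values where `signif` crosses 0.975 and 0.025 gives an approximate 95% confidence interval for the rate, which is exactly how a tabulated significance function is used.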

20.
Dynamic regression models (also known as distributed lag models) are widely used in engineering for quality control and in economics for forecasting. In this article I propose a procedure for specifying such models in practice. The proposed procedure requires no prewhitening and can directly handle nonstationary series. Furthermore, the procedure cross-validates prior beliefs about causal relationships between variables against empirical findings to ensure the suitability of the model structure. An illustrative example is given.
