Similar documents
Found 20 similar documents (search time: 15 ms)
1.
In this paper, we propose a class of general partially linear varying-coefficient transformation models for ranking data. In these models, the functional coefficients are viewed as nuisance parameters and approximated by a B-spline smoothing technique. The B-spline coefficients and regression parameters are estimated by a rank-based maximum marginal likelihood method. A three-stage Markov chain Monte Carlo stochastic approximation algorithm based on ranking data is used to compute the estimates and the corresponding variances for all the B-spline coefficients and regression parameters. Three simulation studies and an application to Hong Kong horse racing data illustrate that the proposed procedure is accurate, stable and practical.
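The B-spline approximation step can be sketched numerically. Below is a minimal, self-contained Cox-de Boor evaluation of a B-spline basis (not the paper's rank-based marginal likelihood or MCMC stochastic approximation machinery); the knot vector and the coefficient vector `gamma` are illustrative assumptions:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate B-spline basis functions at interior points x via Cox-de Boor."""
    x = np.asarray(x, float)
    # Degree-0 indicator functions on each knot interval
    B = np.array([(x >= knots[i]) & (x < knots[i + 1])
                  for i in range(len(knots) - 1)], float).T
    for d in range(1, degree + 1):
        nxt = np.zeros((len(x), B.shape[1] - 1))
        for i in range(B.shape[1] - 1):
            den1 = knots[i + d] - knots[i]
            if den1 > 0:
                nxt[:, i] += (x - knots[i]) / den1 * B[:, i]
            den2 = knots[i + d + 1] - knots[i + 1]
            if den2 > 0:
                nxt[:, i] += (knots[i + d + 1] - x) / den2 * B[:, i + 1]
        B = nxt
    return B

# Clamped cubic knots on [0, 1] with three interior knots -> 7 basis functions
knots = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], float)
u = np.linspace(0.05, 0.95, 19)
Bmat = bspline_basis(u, knots, degree=3)

# A smooth coefficient function is approximated as the combination Bmat @ gamma
gamma = np.ones(Bmat.shape[1])
```

A varying coefficient function is then represented as `bspline_basis(u, knots, 3) @ gamma`, reducing the functional estimation problem to estimating the finite-dimensional vector `gamma`.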

2.
Bayesian analyses frequently employ two-stage hierarchical models involving two variance parameters: one controlling measurement error and the other controlling the degree of smoothing implied by the model's higher level. These analyses can be hampered by poorly identified variances, which may lead to difficulty in computing and in choosing reference priors for these parameters. In this paper, we introduce the class of two-variance hierarchical linear models and characterize the aspects of these models that lead to well-identified or poorly identified variances. These ideas are illustrated with a spatial analysis of a periodontal data set and examined in some generality for specific two-variance models, including the conditionally autoregressive (CAR) and one-way random effect models. We also connect this theory with other constrained regression methods and suggest a diagnostic that can be used to search for missing spatially varying fixed effects in the CAR model.

3.
The mixed random effects model is commonly used in longitudinal data analysis within either a frequentist or a Bayesian framework. Here we consider a case in which we have prior knowledge about some of the parameters but no such information about the rest. We therefore take a hybrid approach to the random-effects model: the parameters with prior information are estimated via a Bayesian procedure, and the remaining parameters by frequentist maximum likelihood estimation (MLE), simultaneously on the same model. In practice, partial prior information is often available for covariates such as age and gender; using it yields more accurate estimates in the mixed random-effects model. A series of simulation studies was performed to compare the results with the commonly used random-effects model with and without partial prior information. The estimates from the hybrid method (HYB) and from MLE were very close to each other, but the estimated θ values under HYB were closer to the true θ values and showed smaller variances than those from MLE without prior information. Relative to the true θ values, the mean squared errors were much smaller for HYB than for MLE. This advantage of HYB is most pronounced in longitudinal data with a small sample size. The HYB and MLE methods are applied to real longitudinal data for illustration.

4.
In a panel data model with fixed individual effects, a number of alternative transformations are available to eliminate these effects such that the slope parameters can be estimated from ordinary least squares on transformed data. In this note we show that each transformation leads to algebraically the same estimator if the transformed data are used efficiently (i.e. if GLS is applied). If OLS is used, however, differences may occur and the routinely computed variances, even after degrees of freedom correction, are incorrect. In addition, it may matter whether “redundant” observations are used or not.
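As a sketch of two such transformations, a minimal simulation (with an assumed true slope of 2.0 and hypothetical data) shows that within-group demeaning and first differencing both eliminate the fixed effects, while the resulting OLS estimates need not coincide exactly, in line with the note's point:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, T, beta = 5, 8, 2.0                  # assumed true slope 2.0
alpha = rng.normal(size=n_units)              # fixed individual effects
x = rng.normal(size=(n_units, T))
y = alpha[:, None] + beta * x + 0.1 * rng.normal(size=(n_units, T))

# Within transformation: subtract each unit's time average
xw = x - x.mean(axis=1, keepdims=True)
yw = y - y.mean(axis=1, keepdims=True)
beta_within = (xw * yw).sum() / (xw ** 2).sum()

# First differencing: another transformation that removes alpha
xd, yd = np.diff(x, axis=1), np.diff(y, axis=1)
beta_fd = (xd * yd).sum() / (xd ** 2).sum()
```

Both estimates recover the slope, but only a GLS treatment of the transformed data would make them algebraically identical.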

5.
In this article, we propose a beta regression model with multiplicative log-normal measurement errors. Three estimation methods are presented, namely, naive, calibration regression, and pseudo likelihood. The nuisance parameters are estimated from a system of estimation equations using replicated data and these estimates are used to propose a pseudo likelihood function. A simulation study was performed to assess some properties of the proposed methods. Results from an example with a real dataset, including diagnostic tools, are also reported.

6.
Analysis of categorical data by linear models is extended to data obtained by stratified random sampling. It is shown that, asymptotically, proportional allocation reduces the variances of estimators relative to those obtained by simple random sampling. The difference between the asymptotic covariance matrices of estimated parameters obtained by simple random sampling and by stratified random sampling with proportional allocation is shown to be positive definite under fairly non-restrictive conditions, when an asymptotically efficient method of estimation is used. Data from a major community study of mental health are used to illustrate application of the technique.
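A minimal numerical illustration of the variance comparison, with an assumed two-stratum population, shows why proportional allocation helps when the strata means differ (the finite population correction is ignored here for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical population with two strata whose means differ
strata = [rng.normal(0.0, 1.0, 600), rng.normal(3.0, 1.0, 400)]
pop = np.concatenate(strata)
N, n = len(pop), 100

# Approximate variance of the sample mean under simple random sampling
var_srs = pop.var(ddof=1) / n

# Under proportional allocation only within-stratum variation contributes:
# Var = (1/n) * sum_h W_h * S_h^2
var_prop = sum((len(s) / N) * s.var(ddof=1) for s in strata) / n
```

The between-stratum component of the population variance drops out under proportional allocation, which is the source of the asymptotic gain described above.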

7.
Regression analysis is one of the most commonly used techniques in statistics. When the dimension of independent variables is high, it is difficult to conduct efficient non-parametric analysis straightforwardly from the data. As an important alternative to the additive and other non-parametric models, varying-coefficient models can reduce the modelling bias and avoid the "curse of dimensionality" significantly. In addition, the coefficient functions can easily be estimated via a simple local regression. Based on local polynomial techniques, we provide the asymptotic distribution for the maximum of the normalized deviations of the estimated coefficient functions away from the true coefficient functions. Using this result and the pre-asymptotic substitution idea for estimating biases and variances, simultaneous confidence bands for the underlying coefficient functions are constructed. An important question in the varying coefficient models is whether an estimated coefficient function is statistically significantly different from zero or a constant. Based on newly derived asymptotic theory, a formal procedure is proposed for testing whether a particular parametric form fits a given data set. Simulated and real-data examples are used to illustrate our techniques.

8.
The aim of this study is to apply the Bayesian method of identifying optimal experimental designs to a toxicokinetic-toxicodynamic model that describes the response of aquatic organisms to time dependent concentrations of toxicants. As for experimental designs, we restrict ourselves to pulses and constant concentrations. A design of an experiment is called optimal within this set of designs if it maximizes the expected gain of knowledge about the parameters. Focus is on parameters that are associated with the auxiliary damage variable of the model that can only be inferred indirectly from survival time series data. Gain of knowledge through an experiment is quantified both with the ratio of posterior to prior variances of individual parameters and with the entropy of the posterior distribution relative to the prior on the whole parameter space. The numerical methods developed to calculate expected gain of knowledge are expected to be useful beyond this case study, in particular for multinomially distributed data such as survival time series data.

9.
Communications in Statistics - Theory and Methods, 2012, 41(13-14): 2437-2444
We propose a new approach to estimating the parameters of the Cox proportional hazards model in the presence of collinearity. Generally, a maximum partial likelihood estimator is used to estimate the parameters of the Cox proportional hazards model. However, maximum partial likelihood estimators can be seriously affected by collinearity, which inflates the variances of the parameter estimates.

In this study, we develop a Liu-type estimator for the Cox proportional hazards model parameters and compare it with a ridge regression estimator based on the scalar mean squared error (MSE). Finally, we evaluate its performance through a simulation study.
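As a hedged sketch of the idea in the simpler linear-model setting (not the Cox partial likelihood), a Liu-type estimator can be written as a function of the OLS fit; at d = 1 it reduces to OLS, and smaller d shrinks the coefficients, which is what combats collinearity-inflated variances:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 3
X = rng.normal(size=(n, p))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=n)   # near-collinear regressor
y = X @ np.array([1.0, 2.0, 0.5]) + rng.normal(size=n)

XtX = X.T @ X
beta_ols = np.linalg.solve(XtX, X.T @ y)

def liu(d):
    # Liu-type estimator: (X'X + I)^(-1) (X'X + d I) beta_ols, 0 <= d <= 1
    I = np.eye(p)
    return np.linalg.solve(XtX + I, (XtX + d * I) @ beta_ols)
```

At d = 0 the estimator coincides with ridge regression at penalty 1, so the Liu parameter d interpolates between ridge-style shrinkage and OLS.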

10.
The mixed effects model, in its various forms, is a common model in applied statistics. A useful strategy for fitting this model implements EM-type algorithms by treating the random effects as missing data. Such implementations, however, can be painfully slow when the variances of the random effects are small relative to the residual variance. In this paper, we apply the 'working parameter' approach to derive alternative EM-type implementations for fitting mixed effects models, which we show empirically can be hundreds of times faster than the common EM-type implementations. In our limited simulations, they also compare well with the routines in S-PLUS® and Stata® in terms of both speed and reliability. The central idea of the working parameter approach is to search for efficient data augmentation schemes for implementing the EM algorithm by minimizing the augmented information over the working parameter, and in the mixed effects setting this leads to a transfer of the mixed effects variances into the regression slope parameters. We also describe a variation for computing the restricted maximum likelihood estimate and an adaptive algorithm that takes advantage of both the standard and the alternative EM-type implementations.
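A minimal version of the common EM-type implementation for a random-intercept model (treating the random effects as missing data, as described above) can be sketched as follows; the working-parameter accelerations proposed in the paper are not shown, and all data here are simulated with assumed variance components:

```python
import numpy as np

rng = np.random.default_rng(3)
m, T = 200, 5                         # m subjects, T observations each
b = rng.normal(0.0, 1.5, m)           # random intercepts, true variance 2.25
y = 2.0 + b[:, None] + rng.normal(0.0, 1.0, (m, T))   # residual variance 1.0

mu, vb, ve = 0.0, 1.0, 1.0            # starting values
for _ in range(200):
    # E-step: posterior mean and variance of each random intercept
    shrink = T * vb / (T * vb + ve)
    Eb = shrink * (y.mean(axis=1) - mu)
    Vb = vb * ve / (T * vb + ve)
    # M-step: update mean and the two variance components
    mu = (y - Eb[:, None]).mean()
    vb = (Eb ** 2).mean() + Vb
    ve = ((y - mu - Eb[:, None]) ** 2).mean() + Vb
```

When `vb` is small relative to `ve`, the shrinkage factor stays near zero and the iteration crawls, which is exactly the slow regime the working-parameter approach targets.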

11.
We propose using the weighted likelihood method to fit a general relative risk regression model for the current status data with missing data as arise, for example, in case-cohort studies. The missingness probability is either known or can be reasonably estimated. Asymptotic properties of the weighted likelihood estimators are established. For the case of using estimated weights, we construct a general theorem that guarantees the asymptotic normality of the M-estimator of a finite dimensional parameter in a class of semiparametric models, where the infinite dimensional parameter is allowed to converge at a slower than parametric rate, and some other parameters in the objective function are estimated a priori. The weighted bootstrap method is employed to estimate the variances. Simulations show that the proposed method works well for finite sample sizes. A motivating example of the case-cohort study from an HIV vaccine trial is used to demonstrate the proposed method. The Canadian Journal of Statistics 39: 557–577; 2011. © 2011 Statistical Society of Canada
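The weighting idea can be illustrated in a much simpler setting: estimating a mean when the observation probabilities are known. The sketch below is a toy under assumed missingness, not the paper's semiparametric estimator, but it shows why inverse-probability weights remove the bias of a complete-case analysis:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                        # variable of interest, true mean 0
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # known observation probability
obs = rng.uniform(size=n) < p_obs             # which units are observed

naive = x[obs].mean()                         # biased: large x over-represented
w = 1.0 / p_obs[obs]                          # inverse-probability weights
ipw = (w * x[obs]).sum() / w.sum()            # weighted (Hajek-type) estimate
```

The naive complete-case mean is pulled upward because units with larger x are more likely to be observed; reweighting by 1/p restores (approximate) unbiasedness.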

12.
Empirical Bayes methods are used to estimate cell probabilities under a multiplicative-interaction model for a two-way contingency table. The methods assign uniform and normal priors with unknown variances to the main effects and the separable scores. A priori, the analysis assumes exchangeability of sets of parameters. The unknown variance components are estimated empirically from the data via the EM algorithm as discussed by Laird (1978) and Dempster, Laird and Rubin (1977). An example is included.

13.
The article considers a new approach to small area estimation based on joint modelling of means and variances. Model parameters are estimated via the expectation-maximization algorithm. The conditional mean squared error is used to evaluate the prediction error, and analytical expressions are obtained for the conditional mean squared error and its estimator. Our approximations are second-order correct, an unwritten standard in the small area literature. Simulation studies indicate that the proposed method outperforms existing methods in terms of prediction errors and their estimated values.

14.
In testing, item response theory models are widely used in order to estimate item parameters and individual abilities. However, even unidimensional models require a considerable sample size so that all parameters can be estimated precisely. The introduction of empirical prior information about candidates and items might reduce the number of candidates needed for parameter estimation. Using data for IQ measurement, this work shows how empirical information about items can be used effectively for item calibration and in adaptive testing. First, we propose multivariate regression trees to predict the item parameters based on a set of covariates related to the item-solving process. Afterwards, we compare the item parameter estimation when tree-fitted values are included in the estimation or when they are ignored. Model estimation is fully Bayesian, and is conducted via Markov chain Monte Carlo methods. The results are two-fold: (a) in item calibration, it is shown that the introduction of prior information is effective with short test lengths and small sample sizes and (b) in adaptive testing, it is demonstrated that the use of the tree-fitted values instead of the estimated parameters leads to a moderate increase in the test length, but provides a considerable saving of resources.

15.
Methods for the simultaneous analysis of the relationships of binary variables for efficacy and toxicity to dosage of an experimental drug are developed. Properties of two models of 'within-dose' dependence of efficacy and toxicity in parallel designs - one a bivariate analogue of the familiar univariate logistic model, and the other an adaptation of a general model developed by D.R. Cox - are explored. The cell probabilities predicted by these models are often quite similar to those predicted by a model of independence of efficacy and toxicity, but large discrepancies can occur when there is approximate equality of the median effective and median toxic doses. Asymptotic variances of estimates of parameters involved in assessing correlation are large when there is little or no dependence in the data, but parameters can be estimated with good precision in at least some cases of moderate to strong dependence between efficacy and toxicity.

16.
Incomplete growth curve data often result from missing or mistimed observations in a repeated measures design. Virtually all methods of analysis rely on the dispersion matrix estimates. A Monte Carlo simulation was used to compare three methods of estimation of dispersion matrices for incomplete growth curve data. The three methods were: 1) maximum likelihood estimation with a smoothing algorithm, which finds the closest positive semidefinite estimate of the pairwise estimated dispersion matrix; 2) a mixed effects model using the EM (expectation-maximization) algorithm; and 3) a mixed effects model with the scoring algorithm. The simulation included 5 dispersion structures, 20 or 40 subjects with 4 or 8 observations per subject and 10 or 30% missing data. In all the simulations, the smoothing algorithm was the poorest estimator of the dispersion matrix. In most cases, there were no significant differences between the scoring and EM algorithms. The EM algorithm tended to be better than the scoring algorithm when the variances of the random effects were close to zero, especially for the simulations with 4 observations per subject and two random effects.

17.
It is well known that ignoring heteroscedasticity in regression analysis adversely affects the efficiency of estimation and renders the usual procedure for constructing prediction intervals inappropriate. In some applications, such as off-line quality control, knowledge of the variance function is also of considerable interest in its own right. Thus the modeling of variance constitutes an important part of regression analysis. A common practice in modeling variance is to assume that a certain function of the variance can be closely approximated by a function of a known parametric form. The logarithm link function is often used even if it does not fit the observed variation satisfactorily, as other alternatives may yield negative estimated variances. In this paper we propose a rich class of link functions for more flexible variance modeling which alleviates the major difficulty of negative variances. We suggest also an alternative analysis for heteroscedastic regression models that exploits the principle of "separation" discussed in Box (Signal-to-Noise Ratios, Performance Criteria and Transformation. Technometrics 1988, 30, 1-31). The proposed method does not require any distributional assumptions once an appropriate link function for modeling variance has been chosen. Unlike the analysis in Box (1988), the estimated variances and their associated asymptotic variances are found in the original metric (although a transformation has been applied to achieve separation on a different scale), making interpretation of the results considerably easier.
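One standard way to fit a log-link variance model, sketched below with simulated residuals and illustrative coefficients, regresses log squared residuals on the covariate and corrects the intercept by E[log chi-squared(1)] = digamma(1/2) + log 2; the exponentiated fit can never be negative, which is the difficulty the article's richer link class also addresses:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(5)
n = 2000
z = rng.uniform(-1.0, 1.0, n)
g0, g1 = 0.2, 1.0                      # illustrative true log-variance coefficients
sd = np.exp(0.5 * (g0 + g1 * z))
r = rng.normal(0.0, sd)                # residuals from a hypothetical mean fit

# E[log r^2 | z] = g0 + g1*z + E[log chi2_1]; correct the intercept for the bias
bias = digamma(0.5) + np.log(2.0)
slope, intercept = np.polyfit(z, np.log(r ** 2), 1)
g1_hat = slope
g0_hat = intercept - bias

fitted_var = np.exp(g0_hat + g1_hat * z)   # positive by construction
```

Exponentiating the linear predictor guarantees positive variance estimates, at the cost of forcing a specific multiplicative form; that trade-off is what motivates more flexible link classes.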

18.
In estimating the proportion ‘cured’ after adjuvant treatment, a population of cancer patients can be assumed to be a mixture of two Gompertz subpopulations, those who will die of other causes with no evidence of disease relapse and those who will die of their primary cancer. Estimates of the parameters of the component dying of other causes can be obtained from census data, whereas maximum likelihood estimates for the proportion cured and for the parameters of the component of patients dying of cancer can be obtained from follow-up data.

This paper examines, through simulation of follow-up data, the feasibility of maximum likelihood estimation of a mixture of two Gompertz distributions when censoring occurs. The means, variances and mean squared errors of the maximum likelihood estimates, together with the estimated asymptotic variance-covariance matrix, are obtained from the simulated samples. The relationships of these variances to sample size, proportion censored, mixing proportion and population parameters are considered.

Moderate sample sizes typical of cooperative trials yield clinically acceptable estimates. Both increasing the sample size and decreasing the proportion of censored data decrease the variances and covariances of the unknown parameters. Useful results can be obtained with data that are as much as 50% censored; moreover, if the sample size is sufficiently large, survival data that are as much as 70% censored can yield satisfactory results.
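For concreteness, the building blocks of such a mixture can be sketched as follows; the Gompertz parameters and mixing proportion below are illustrative assumptions, not estimates from the paper:

```python
import numpy as np

def gompertz_surv(t, b, c):
    # Gompertz survival function: S(t) = exp(-(b/c) * (exp(c*t) - 1))
    return np.exp(-(b / c) * np.expm1(c * t))

def mixture_surv(t, p_other, other=(0.01, 0.09), cancer=(0.20, 0.15)):
    # Population survival as a mix of an other-causes and a cancer component
    return (p_other * gompertz_surv(t, *other)
            + (1 - p_other) * gompertz_surv(t, *cancer))

t = np.linspace(0.0, 20.0, 50)
s = mixture_surv(t, 0.4)       # hypothetical 40% other-causes component
```

Maximum likelihood would fit the cancer-component parameters and the mixing proportion to censored follow-up data, with the other-causes parameters taken from census information as described above.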

19.
Quantile regression methods have been widely used in many research areas in recent years. However, conventional estimation methods for quantile regression models do not guarantee that the estimated quantile curves will be non-crossing. While there are various methods in the literature to deal with this problem, many of them force the model parameters to lie within a subset of the parameter space for the required monotonicity to be satisfied, and different methods may use different subspaces. This paper establishes a relationship between the monotonicity of the estimated conditional quantiles and the comonotonicity of the model parameters. We develop a novel quasi-Bayesian method for parameter estimation that can be used with both time series and independent statistical data. Simulation studies and an application to real financial returns show that the proposed method has the potential to be very useful in practice.
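The pinball (check) loss that defines conditional quantiles, and a simple post-hoc rearrangement fix for crossing, can be sketched as follows; this is a location-only toy, not the paper's quasi-Bayesian estimator:

```python
import numpy as np

def pinball(u, tau):
    # Check (pinball) loss whose minimizer is the tau-th quantile
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(6)
y = rng.normal(size=2001)

# Grid search: the pinball-loss minimizer matches the sample quantile
grid = np.linspace(-3.0, 3.0, 6001)
q25 = grid[np.argmin([pinball(y - g, 0.25).sum() for g in grid])]

# A simple post-hoc fix for crossing: sort the estimates across tau levels
crossed = np.array([1.1, 0.9, 1.4])   # hypothetical estimates at tau = .25, .5, .75
noncrossing = np.sort(crossed)
```

Sorting (monotone rearrangement) restores non-crossing after the fact; methods like the paper's instead constrain the parameters so the fitted quantile curves are monotone in tau by construction.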

20.
For small area estimation of area-level data, the Fay–Herriot model is extensively used as a model-based method. In the Fay–Herriot model, it is conventionally assumed that the sampling variances are known, whereas estimators of sampling variances are used in practice. Thus, the settings of knowing sampling variances are unrealistic, and several methods are proposed to overcome this problem. In this paper, we assume the situation where the direct estimators of the sampling variances are available as well as the sample means. Using this information, we propose a Bayesian yet objective method producing shrinkage estimation of both means and variances in the Fay–Herriot model. We consider the hierarchical structure for the sampling variances, and we set uniform prior on model parameters to keep objectivity of the proposed model. For validity of the posterior inference, we show under mild conditions that the posterior distribution is proper and has finite variances. We investigate the numerical performance through simulation and empirical studies.
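The classical Fay–Herriot shrinkage step can be sketched as follows, with an intercept-only synthetic part and an assumed known model variance A; note this toy keeps the known-sampling-variance assumption that the paper's method is designed to relax:

```python
import numpy as np

rng = np.random.default_rng(7)
m = 12
A = 1.0                                    # assumed known model (between-area) variance
D = rng.uniform(0.5, 2.0, m)               # sampling variances, treated as known
theta = rng.normal(5.0, np.sqrt(A), m)     # true area means
y = theta + rng.normal(0.0, np.sqrt(D))    # direct survey estimates

gamma = A / (A + D)                        # shrinkage weights in (0, 1)
synth = y.mean()                           # intercept-only stand-in for x_i' beta
theta_fh = gamma * y + (1 - gamma) * synth # shrink noisier areas harder
```

Areas with larger sampling variance D get smaller weight gamma and are pulled more strongly toward the synthetic estimate, which is the basic shrinkage mechanism the Bayesian extension applies to both means and variances.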


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号