Similar Documents
20 similar documents found.
1.
Parameter Orthogonality and Bias Adjustment for Estimating Functions
Abstract.  We consider an extended notion of parameter orthogonality for estimating functions, called nuisance parameter insensitivity, which allows a unified treatment of nuisance parameters for a wide range of methods, including Liang and Zeger's generalized estimating equations. Nuisance parameter insensitivity has several important properties in common with conventional parameter orthogonality, such as the nuisance parameter causing no loss of efficiency for estimating the interest parameter, and a simplified estimation algorithm. We also consider bias adjustment for profile estimating functions, and apply the results to restricted maximum likelihood estimation of dispersion parameters in generalized estimating equations.
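As a point of reference, here is a sketch of the standard definitions involved (our summary, not the paper's statement; regularity conditions omitted). Partition the unbiased estimating function as g = (g_ψ, g_λ) for interest parameter ψ and nuisance parameter λ; insensitivity asks that the ψ-component be unaffected, in expectation, by the nuisance parameter:

```latex
% Sketch of the standard definitions; regularity conditions omitted.
E_\theta\!\left[\frac{\partial g_\psi}{\partial \lambda^{\top}}\right] = 0
\qquad \text{(nuisance parameter insensitivity of } g_\psi \text{)},
% which, when g is the likelihood score, reduces to conventional
% parameter orthogonality:
i_{\psi\lambda}
  = E_\theta\!\left[-\,\frac{\partial^2 \ell}{\partial \psi \,\partial \lambda^{\top}}\right]
  = 0 .
```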

2.
Summary.  The paper proposes an estimation approach for panel models with mixed continuous and ordered categorical outcomes based on generalized estimating equations for the mean and pseudoscore equations for the covariance parameters. A numerical study suggests that efficiency can be gained in the mean parameter estimators by using individual covariance matrices in the estimating equations for the mean parameters. The approach is applied to estimate the returns to occupational qualification in terms of income and perceived job security in a 9-year period based on the German Socio-Economic Panel. To compensate for missing data, a combined multiple imputation–weighting approach is adopted.

3.
Summary.  The paper develops a Bayesian hierarchical model for estimating the catch at age of cod landed in Norway. The model includes covariate effects such as season and gear, and can also account for the within-boat correlation. The hierarchical structure allows us to account properly for the uncertainty in the estimates.

4.
We obtain an estimator of the rth central moment of a distribution, which is unbiased for all distributions for which the first r moments exist. We do this by finding the kernel which allows the rth central moment to be written as a regular statistical functional. The U-statistic associated with this kernel is the unique symmetric unbiased estimator of the rth central moment, and, for each distribution, it has minimum variance among all estimators which are unbiased for all these distributions.
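A minimal sketch of the r = 2 and r = 3 cases (our illustration, not the paper's code): the degree-2 kernel h(x1, x2) = (x1 − x2)²/2 has expectation μ2, so its U-statistic is exactly the unbiased sample variance; for r = 3 the U-statistic also has a classical closed form.

```python
# Sketch only: brute-force U-statistic for r = 2, plus the closed form for r = 3.
import itertools
import numpy as np

def u_statistic(x, kernel, degree):
    """Average a symmetric kernel over all size-`degree` subsets of the sample."""
    vals = [kernel(*c) for c in itertools.combinations(x, degree)]
    return np.mean(vals)

x = np.random.default_rng(0).gamma(2.0, size=30)

# r = 2: the U-statistic of h(a, b) = (a - b)^2 / 2 equals the unbiased variance.
var_u = u_statistic(x, lambda a, b: 0.5 * (a - b) ** 2, degree=2)
assert np.isclose(var_u, np.var(x, ddof=1))

# r = 3: closed form of the unique symmetric unbiased estimator (h-statistic),
# using E[m3] = (n - 1)(n - 2) mu_3 / n^2 for the sample third central moment m3.
n = len(x)
m3 = np.mean((x - x.mean()) ** 3)
h3 = n ** 2 * m3 / ((n - 1) * (n - 2))
print(var_u, h3)
```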

5.
We propose an orthogonal locally ancillary estimating function that provides first-order bias correction of inferences. It requires specification of only the first two moments of the observations when applied to the analysis of stratified clustered (continuous or binary) data, with the parameters of interest appearing in both the first and second joint moments of the dependent data. Simulation results confirm that the estimators obtained using the proposed method are substantially improved over those using regular profile estimating functions.

6.
Summary.  We introduce a new method for generating optimal split-plot designs. These designs are optimal in the sense that they are efficient for estimating the fixed effects of the statistical model that is appropriate given the split-plot design structure. One advantage of the method is that it does not require the prior specification of a candidate set. This makes the production of split-plot designs computationally feasible in situations where the candidate set is too large to be tractable. The method allows for flexible choice of the sample size and supports inclusion of both continuous and categorical factors. The model can be any linear regression model and may include arbitrary polynomial terms in the continuous factors and interaction terms of any order. We demonstrate the usefulness of this flexibility with a 100-run polypropylene experiment involving 11 factors where we found a design that is substantially more efficient than designs that are produced by using other approaches.
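To indicate what "efficient for estimating the fixed effects" means computationally, here is a hedged sketch (ours, with the variance ratio `eta` assumed known) of the D-criterion that such search algorithms maximize over candidate designs:

```python
# Sketch of the split-plot D-criterion: with whole-plot random effects, the
# fixed effects are estimated under V = I + eta * Z Z' (eta = variance ratio),
# and a design is D-optimal if it maximizes det(X' V^{-1} X).
import numpy as np

def d_criterion(X, whole_plot, eta=1.0):
    """log det of the information matrix X' V^{-1} X for a split-plot design.

    X          : n x p model matrix
    whole_plot : length-n array of whole-plot labels
    eta        : variance ratio sigma_gamma^2 / sigma_eps^2 (assumed known)
    """
    n = len(whole_plot)
    labels = np.asarray(whole_plot)
    Z = (labels[:, None] == np.unique(labels)[None, :]).astype(float)
    V = np.eye(n) + eta * Z @ Z.T
    M = X.T @ np.linalg.solve(V, X)
    return np.linalg.slogdet(M)[1]
```

A coordinate-exchange search would then compare this criterion across candidate designs without ever enumerating a full candidate set.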

7.
Summary.  We introduce a flexible marginal modelling approach for statistical inference for clustered and longitudinal data under minimal assumptions. This estimated estimating equations approach is semiparametric and the proposed models are fitted by quasi-likelihood regression, where the unknown marginal means are a function of the fixed effects linear predictor with unknown smooth link, and variance–covariance is an unknown smooth function of the marginal means. We propose to estimate the nonparametric link and variance–covariance functions via smoothing methods, whereas the regression parameters are obtained via the estimated estimating equations. These are score equations that contain nonparametric function estimates. The proposed estimated estimating equations approach is motivated by its flexibility and easy implementation. Moreover, if data follow a generalized linear mixed model, with either a specified or an unspecified distribution of random effects and link function, the model proposed emerges as the corresponding marginal (population-average) version and can be used to obtain inference for the fixed effects in the underlying generalized linear mixed model, without the need to specify any other components of this generalized linear mixed model. Among marginal models, the estimated estimating equations approach provides a flexible alternative to modelling with generalized estimating equations. Applications of estimated estimating equations include diagnostics and link selection. The asymptotic distribution of the proposed estimators for the model parameters is derived, enabling statistical inference. Practical illustrations include Poisson modelling of repeated epileptic seizure counts and simulations for clustered binomial responses.

8.
In this article, we investigate estimating moments, up to fourth order, in linear mixed models. For this estimation, we assume only the existence of the moments. The resulting estimators of the model parameters and of the third and fourth moments of the errors and random effects are proved to be consistent or asymptotically normal. The estimation provides a basis for further statistical inference, such as confidence region construction and hypothesis testing for the parameters of interest. Moreover, the method extends readily to higher moments. A simulation study examines the performance of the estimation method.

9.
There has recently been growing interest in modeling and estimating alternative continuous time multivariate stochastic volatility models. We propose a continuous time fractionally integrated Wishart stochastic volatility (FIWSV) process, and derive the conditional Laplace transform of the FIWSV model in order to obtain a closed form expression of moments. A two-step procedure is used, namely estimating the parameter of fractional integration via the local Whittle estimator in the first step, and estimating the remaining parameters via the generalized method of moments in the second step. Monte Carlo results for the procedure show reasonable performance in finite samples. The empirical results for the S&P 500 and FTSE 100 indexes show that the data favor the new FIWSV process over the one-factor and two-factor models of the Wishart autoregressive process for the covariance structure.
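A sketch of the first step (our generic implementation of Robinson's local Whittle objective for a univariate series; the bandwidth choice and the application to the paper's multivariate setting are assumptions on our part):

```python
# Local Whittle estimation of the fractional integration parameter d (sketch).
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m=None):
    """Local Whittle estimate of d from the first m periodogram ordinates."""
    x = np.asarray(x, float)
    n = len(x)
    m = m or int(n ** 0.65)                      # a common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n    # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)

    def R(d):  # Robinson's concentrated objective
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x
```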

10.
In this paper, statistical inference for the size-biased Weibull distribution is developed in two different cases. In the first case, where the size r of the bias is considered known, it is proven that the maximum-likelihood estimators (MLEs) always exist. In the second case, where the size r is treated as an unknown parameter, the estimating equations for the MLEs are presented and the Fisher information matrix is derived. Estimation by the method of moments can be used in cases where the MLEs do not exist. The advantage of treating r as an unknown parameter is that it allows tests for the existence of size bias in the sample. Finally, a program in Mathematica is provided that produces all the statistical results from the procedures developed in the paper.
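For the first case (size r known), a hedged sketch of direct maximum likelihood, using the standard size-biased density f_r(x) = x^r f(x) / E[X^r] with E[X^r] = λ^r Γ(1 + r/k) for a Weibull(k, λ); the paper's own estimating equations are not reproduced here:

```python
# Sketch: numerical MLE for the size-biased Weibull with known size r.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(params, x, r):
    k, lam = np.exp(params)  # optimize on the log scale to keep k, lam > 0
    ll = (r * np.log(x)                                # size-biasing weight
          + np.log(k) - k * np.log(lam)
          + (k - 1) * np.log(x) - (x / lam) ** k       # Weibull log-density
          - r * np.log(lam) - gammaln(1 + r / k))      # minus log E[X^r]
    return -np.sum(ll)

# Illustrative input only: plain (not size-biased) Weibull draws; a real check
# would sample from the size-biased density itself.
x = np.random.default_rng(1).weibull(2.0, size=200) * 3.0
fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(x, 1))
print(np.exp(fit.x))  # (k_hat, lam_hat)
```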

11.
For binary experimental data, we discuss randomization-based inferential procedures that do not need to invoke any modelling assumptions. In addition to the classical method of moments, we introduce model-free likelihood and Bayesian methods based solely on the physical randomization, without any hypothetical super-population assumptions about the potential outcomes. These estimators have some properties superior to moment-based ones, such as yielding estimates only within the region of feasible support. Because the causal model is not identified, we also propose a sensitivity analysis approach that characterizes the impact of the association between the potential outcomes on statistical inference.
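For reference, the moment-based baseline the paper starts from (a sketch with hypothetical variable names): the difference in sample proportions, with Neyman's conservative variance estimate justified by the physical randomization alone.

```python
# Sketch: Neyman's method-of-moments analysis for a binary outcome experiment.
import numpy as np

def neyman(y_treat, y_control):
    """Difference in proportions and its conservative randomization-based SE."""
    p1, p0 = np.mean(y_treat), np.mean(y_control)
    tau_hat = p1 - p0                                     # moment estimate of the ATE
    se = np.sqrt(p1 * (1 - p1) / (len(y_treat) - 1)       # s^2 / n for binary data
                 + p0 * (1 - p0) / (len(y_control) - 1))  # conservative variance
    return tau_hat, se
```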

12.
Scientific experiments commonly result in clustered discrete and continuous data. Existing methods for analyzing such data include the use of quasi-likelihood procedures and generalized estimating equations to estimate marginal mean response parameters. In applications to areas such as developmental toxicity studies, where discrete and continuous measurements are recorded on each fetus, or clinical ophthalmologic trials, where different types of observations are made on each eye, the assumption that data within cluster are exchangeable is often very reasonable. We use this assumption to formulate fully parametric regression models for clusters of bivariate data with binary and continuous components. The regression models proposed have marginal interpretations and reproducible model structures. Tractable expressions for likelihood equations are derived and iterative schemes are given for computing efficient estimates (MLEs) of the marginal mean, correlations, variances and higher moments. We demonstrate the use of the 'exchangeable' procedure with an application to a developmental toxicity study involving fetal weight and malformation data.

13.
Abstract.  Multivariate failure time data arise when each study subject can potentially experience several types of failures or recurrences of a certain phenomenon, or when failure times are sampled in clusters. We formulate the marginal distributions of such multivariate data with semiparametric accelerated failure time models (i.e. linear regression models for log-transformed failure times with arbitrary error distributions) while leaving the dependence structures for related failure times completely unspecified. We develop rank-based monotone estimating functions for the regression parameters of these marginal models based on right-censored observations. The estimating equations can be easily solved via linear programming. The resultant estimators are consistent and asymptotically normal. The limiting covariance matrices can be readily estimated by a novel resampling approach, which does not involve non-parametric density estimation or evaluation of numerical derivatives. The proposed estimators represent consistent roots to the potentially non-monotone estimating equations based on weighted log-rank statistics. Simulation studies show that the new inference procedures perform well in small samples. Illustrations with real medical data are provided.
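The Gehan-weighted member of this family is worth sketching (our illustration, not the authors' code): its monotone estimating function is the gradient of a convex loss, which the paper solves exactly via linear programming; for brevity, the sketch below minimizes the same loss with a generic optimizer instead.

```python
# Sketch: the convex Gehan loss whose minimizer solves the rank-based
# estimating equations for an accelerated failure time model.
import numpy as np
from scipy.optimize import minimize

def gehan_loss(beta, logt, delta, X):
    """Sum over pairs of delta_i * max(e_j - e_i, 0), with e_i = log t_i - x_i' beta.

    logt  : log observed times (possibly censored)
    delta : censoring indicators (1 = observed failure)
    X     : n x p covariate matrix
    """
    e = logt - X @ beta
    diff = e[None, :] - e[:, None]          # diff[i, j] = e_j - e_i
    return np.sum(delta[:, None] * np.maximum(diff, 0.0))

# usage (hypothetical inputs): beta_hat = minimize(
#     gehan_loss, x0, args=(logt, delta, X), method="Nelder-Mead").x
```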

14.
Hierarchical models are widely used to characterize the performance of individual healthcare providers. However, little attention has been devoted to system-wide performance evaluations, the goals of which include identifying extreme (e.g., top 10%) provider performance and developing statistical benchmarks to define high-quality care. Obtaining optimal estimates of these quantities requires estimating the empirical distribution function (EDF) of provider-specific parameters that generate the dataset under consideration. However, the difficulty of obtaining uncertainty bounds for a square-error loss minimizing EDF estimate has hindered its use in system-wide performance evaluations. We therefore develop and study a percentile-based EDF estimate for univariate provider-specific parameters. We compute order statistics of samples drawn from the posterior distribution of provider-specific parameters to obtain relevant uncertainty assessments of an EDF estimate and its features, such as thresholds and percentiles. We apply our method to data from the Medicare End Stage Renal Disease (ESRD) Program, a health insurance program for people with irreversible kidney failure. We highlight the risk of misclassifying providers as exceptionally good or poor performers when uncertainty in statistical benchmark estimates is ignored. Given the high stakes of performance evaluations, statistical benchmarks should be accompanied by precision estimates.
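A minimal sketch of the percentile-based EDF idea (the function and names are ours): sort each posterior draw across providers to obtain order statistics, then summarize them across draws to estimate the EDF and attach uncertainty to its percentiles.

```python
# Sketch: EDF percentiles of provider-specific parameters from posterior draws.
import numpy as np

def edf_percentiles(draws, q=(0.1, 0.5, 0.9)):
    """draws: S x J matrix of posterior samples for J providers.

    Returns point estimates and 95% intervals for selected EDF percentiles.
    """
    order = np.sort(draws, axis=1)                   # order statistics per draw
    idx = [int(p * (draws.shape[1] - 1)) for p in q]
    samples = order[:, idx]                          # posterior of each percentile
    point = samples.mean(axis=0)
    lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
    return point, lo, hi
```

The same posterior order statistics yield uncertainty assessments for thresholds, such as the benchmark defining a "top 10%" provider.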

15.
Summary.  In process characterization, the quality of the information obtained depends directly on the quality of the process model. The current quality revolution provides a strong stimulus for rethinking and re-evaluating many statistical ideas, among them the role of theoretic knowledge and data in statistical inference and several issues in theoretic–empirical modelling. With this concern, the paper takes a broad, pragmatic view of statistical inference that includes all aspects of model formulation. The estimation of model parameters traditionally assumes that a model has a prespecified known form and takes no account of possible uncertainty regarding model structure. In practice, however, model structural uncertainty is a fact of life and is likely to be more serious than other sources of uncertainty, which have received far more attention. This is true whether the model is specified on subject-matter grounds or formulated, fitted and checked on the same data set in an iterative, interactive way. For that reason, novel modelling techniques have been developed for reducing model uncertainty. Using available knowledge for theoretic model elaboration, these techniques approximate the exact but unknown process model simultaneously by accessible theoretic and polynomial empirical functions. The paper examines the effects of uncertainty for hybrid theoretic–empirical models, and additive and multiplicative methods of model formulation are developed to reduce that uncertainty. The techniques have been successfully applied to refine a steady-flow model for an air gauge sensor. Validation of the elaborated models reveals that the multiplicative modelling approach attains a satisfactory model with small discrepancy from the empirical evidence.

16.
Summary.  We consider the problem of obtaining population-based inference in the presence of missing data and outliers in the context of estimating the prevalence of obesity and body mass index measures from the 'Healthy for life' study. Identifying multiple outliers in a multivariate setting is problematic because of problems such as masking, in which groups of outliers inflate the covariance matrix in a fashion that prevents their identification when included, and swamping, in which outliers skew covariances in a fashion that makes non-outlying observations appear to be outliers. We develop a latent class model that assumes that each observation belongs to one of K unobserved latent classes, with each latent class having a distinct covariance matrix. We consider the latent class covariance matrix with the largest determinant to form an 'outlier class'. By separating the covariance matrix for the outliers from the covariance matrices for the remainder of the data, we avoid the problems of masking and swamping. As did Ghosh-Dastidar and Schafer, we use a multiple-imputation approach, which allows us simultaneously to conduct inference after removing cases that appear to be outliers and to promulgate uncertainty in the outlier status through the model inference. We extend the work of Ghosh-Dastidar and Schafer by embedding the outlier class in a larger mixture model, consider penalized likelihood and posterior predictive distributions to assess model choice and model fit, and develop the model in a fashion to account for the complex sample design. We also consider the repeated sampling properties of the multiple imputation removal of outliers.

17.
ABSTRACT

We introduce a new methodology for estimating the parameters of a two-sided jump model, which decomposes the daily stock return evolution into (unobservable) positive and negative jumps and Brownian noise. The parameters of interest are the jump beta coefficients, which measure the influence of the market jumps on the stock returns and are latent components. We first use the Variance Gamma (VG) distribution, which is frequently used in modelling financial time series, to reveal the hidden distributions of the market jumps. Our method then estimates the parameters of the model from the central moments of the stock returns. It is proved that the proposed method always provides a solution in terms of the jump beta coefficients, yielding a semi-parametric fit to the empirical data. The methodology itself serves as a criterion to test the fit of any set of parameters to the empirical returns. The analysis is applied to NASDAQ and Google returns during the 2006–2008 period.
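A hedged illustration of the moment idea only (not the authors' estimator, and the parameterization is our assumption): using the mixture representation X = θG + σ√G·Z with G ~ Gamma(shape 1/ν, scale ν) and Z ~ N(0, 1), simulated central moments of a VG model can be matched to the empirical central moments of the returns.

```python
# Sketch: simulated moment matching for a Variance Gamma model.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Common random numbers keep the simulated objective deterministic.
U = np.random.default_rng(2).uniform(size=(2, 100_000))

def vg_central_moments(theta, sigma, nu):
    g = stats.gamma.ppf(U[0], 1.0 / nu, scale=nu)   # E[G] = 1, Var(G) = nu
    z = stats.norm.ppf(U[1])
    x = theta * g + sigma * np.sqrt(g) * z          # VG mixture representation
    xc = x - x.mean()
    return np.array([np.mean(xc ** k) for k in (2, 3, 4)])

def fit_vg(returns):
    rc = returns - returns.mean()
    target = np.array([np.mean(rc ** k) for k in (2, 3, 4)])

    def obj(p):  # log scale keeps sigma, nu > 0; moments scaled to be comparable
        m = vg_central_moments(p[0], np.exp(p[1]), np.exp(p[2]))
        return np.sum((m - target) ** 2 / (target ** 2 + 1e-12))

    p = minimize(obj, x0=[0.0, np.log(returns.std()), 0.0],
                 method="Nelder-Mead").x
    return p[0], np.exp(p[1]), np.exp(p[2])         # (theta, sigma, nu)
```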

18.
The late-2000s financial crisis stressed the need to understand the world financial system as a network of countries, where cross-border financial linkages play a fundamental role in the spread of systemic risks. Financial network models, which take into account the complex interrelationships between countries, seem to be an appropriate tool in this context. To improve the statistical performance of financial network models, we propose to generate them by means of multivariate graphical models. We then introduce Bayesian graphical models, which can take model uncertainty into account, and dynamic Bayesian graphical models, which provide a convenient framework to model temporal cross-border data, decomposing the model into autoregressive and contemporaneous networks. The article shows how the application of the proposed models to the Bank for International Settlements locational banking statistics allows the identification of four distinct groups of countries that can be considered central in systemic risk contagion.

19.
In this study, we present different estimation procedures for the parameters of the Poisson–exponential (PE) distribution, such as maximum likelihood, method of moments, modified moments, ordinary and weighted least squares, percentile, maximum product of spacings, Cramér–von Mises and Anderson–Darling maximum goodness-of-fit estimators, and compare them using extensive numerical simulations. We show that the Anderson–Darling estimator is the most efficient for estimating the parameters of the distribution. The proposed methodology is also illustrated on three real data sets, concerning the minimum, average and maximum flows during October at the São Carlos River in Brazil, demonstrating that the PE distribution is a simple alternative for hydrological applications.
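A generic sketch of the winning procedure (ours, not the paper's code): minimize the Anderson–Darling statistic in the parameters. The PE CDF below follows the usual zero-truncated-Poisson maximum-of-exponentials construction and should be checked against the paper's parameterization before use.

```python
# Sketch: Anderson-Darling maximum goodness-of-fit estimation for the PE model.
import numpy as np
from scipy.optimize import minimize

def pe_cdf(x, theta, lam):
    """Assumed Poisson-exponential CDF (max of a zero-truncated Poisson number
    of i.i.d. exponential lifetimes)."""
    return (np.exp(-theta * np.exp(-lam * x)) - np.exp(-theta)) / (1 - np.exp(-theta))

def ad_statistic(params, x):
    theta, lam = np.exp(params)                  # keep parameters positive
    u = np.sort(pe_cdf(x, theta, lam))
    u = np.clip(u, 1e-12, 1 - 1e-12)             # numerical safety for the logs
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

# usage (x = observed flows): theta_hat, lam_hat = np.exp(
#     minimize(ad_statistic, np.log([1.0, 1.0]), args=(x,),
#              method="Nelder-Mead").x)
```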

20.
