Similar Documents
20 similar documents found (search time: 358 ms)
1.
The problem of comparing the linear calibration equations of several measuring methods, each designed to measure the same characteristic on a common group of individuals, is discussed. We consider the factor analysis version of the model and propose to estimate the model parameters using the EM algorithm. The equations that define the 'M' step are simple to implement and computationally inexpensive, requiring no additional maximization procedures. The derivation of the complete-data log-likelihood function makes it possible to obtain the expected and observed information matrices for any number p (> 3) of instruments in closed form, upon which large-sample inference on the parameters can be based. Re-analysis of two actual data sets is presented.
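As an illustration of the closed-form E- and M-step structure this abstract describes, the following is a minimal sketch for a one-factor measurement model x_ij = alpha_i + beta_i*xi_j + e_ij with xi_j ~ N(0, 1); the paper's model additionally carries the latent mean/variance and identifiability constraints (e.g. alpha_1 = 0, beta_1 = 1), which are omitted here, so this is an assumption-laden simplification, not the paper's estimator.

```python
# Minimal EM sketch for a one-factor calibration model (simplified; see lead-in).
import numpy as np

def em_factor_calibration(X, n_iter=200):
    """X is a p x n matrix: p instruments measuring n individuals."""
    p, n = X.shape
    alpha, beta, psi = X.mean(axis=1), np.ones(p), X.var(axis=1)
    for _ in range(n_iter):
        # E-step: posterior of xi_j | x_j is normal with common variance v.
        v = 1.0 / (1.0 + np.sum(beta**2 / psi))
        m = v * (beta / psi) @ (X - alpha[:, None])   # posterior means (length n)
        s = v + m**2                                  # posterior second moments
        # M-step: closed-form updates, no inner maximization needed.
        mbar, xbar = m.mean(), X.mean(axis=1)
        beta = ((X @ m) / n - xbar * mbar) / (s.mean() - mbar**2)
        alpha = xbar - beta * mbar
        resid = X - alpha[:, None] - np.outer(beta, m)
        psi = (resid**2).mean(axis=1) + beta**2 * v
    return alpha, beta, psi

# Example: three instruments measuring one latent characteristic.
rng = np.random.default_rng(0)
xi = rng.normal(size=500)
X = np.array([0.0 + 1.0*xi, 1.0 + 1.5*xi, -0.5 + 0.8*xi]) \
    + rng.normal(scale=0.3, size=(3, 500))
print(em_factor_calibration(X))
```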

2.
Inference in the presence of nuisance parameters is often carried out by using the χ2-approximation to the profile likelihood ratio statistic. However, in small samples, the accuracy of such procedures may be poor, in part because the profile likelihood does not behave like a true likelihood; in particular, its profile score bias and information bias do not vanish. To account better for nuisance parameters, various researchers have suggested that inference be based on an additively adjusted version of the profile likelihood function. Each of these adjustments to the profile likelihood generally has the effect of reducing the bias of the associated profile score statistic. However, these adjustments are not applicable outside the specific parametric framework for which they were developed. In particular, it is often difficult or even impossible to apply them where the parameter about which inference is desired is multidimensional. In this paper, we propose a new adjustment function which leads to an adjusted profile likelihood having reduced score and information biases and is readily applicable to a general parametric framework, including the case of vector-valued parameters of interest. Examples are given to examine the performance of the new adjusted profile likelihood in small samples, and also to compare its performance with other adjusted profile likelihoods.
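The score bias being corrected here can be made concrete in the textbook special case (not the paper's new adjustment): for a normal sample with interest parameter σ² and nuisance mean μ, the classical additive adjustment coincides with REML and removes the O(1) profile score bias. The sketch below checks this numerically.

```python
# Profile vs. adjusted profile likelihood for sigma^2 in N(mu, s2); mu is nuisance.
import numpy as np

def profile_loglik(s2, y):
    n, ss = len(y), np.sum((y - y.mean())**2)
    return -0.5*n*np.log(s2) - ss/(2*s2)

def adjusted_loglik(s2, y):
    # Cox-Reid-type additive adjustment: subtract half the log information for mu;
    # for this model the result is the REML log-likelihood.
    return profile_loglik(s2, y) - 0.5*np.log(len(y) / s2)

rng = np.random.default_rng(1)
n, true_s2 = 5, 4.0
score = lambda f, y, h=1e-5: (f(true_s2+h, y) - f(true_s2-h, y)) / (2*h)
samples = [rng.normal(0.0, np.sqrt(true_s2), n) for _ in range(20000)]
print("mean profile score :", np.mean([score(profile_loglik, y) for y in samples]))
print("mean adjusted score:", np.mean([score(adjusted_loglik, y) for y in samples]))
# The profile score has mean ~ -1/(2*s2) = -0.125 here; the adjusted score ~ 0.
```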

3.
In this paper we consider semiparametric inference methods for the time scale parameters in general time scale models (Oakes, 1995; Duchesne and Lawless, 2000). We use the results of Robins and Tsiatis (1992) and Lin and Ying (1995) to derive a rank-based estimator that is more efficient and robust than the traditional minimum coefficient of variation (min CV) estimator of Kordonsky and Gerstbakh (1993) for many underlying models. Moreover, our estimator can readily handle censored samples, which is not the case with the min CV method.

4.
We develop a general approach to estimation and inference for income distributions using grouped or aggregate data that are typically available in the form of population shares and class mean incomes, with unknown group bounds. We derive generic moment conditions and an optimal weight matrix that can be used for generalized method-of-moments (GMM) estimation of any parametric income distribution. Our derivation of the weight matrix and its inverse allows us to express the seemingly complex GMM objective function in a relatively simple form that facilitates estimation. We show that our proposed approach, which incorporates information on class means as well as population proportions, is more efficient than maximum likelihood estimation of the multinomial distribution, which uses only population proportions. In contrast to the earlier work of Chotikapanich, Griffiths, and Rao, and Chotikapanich, Griffiths, Rao, and Valencia, which did not specify a formal GMM framework, did not provide methodology for obtaining standard errors, and restricted the analysis to the beta-2 distribution, we provide standard errors for estimated parameters and relevant functions of them, such as inequality and poverty measures, and we provide methodology for all distributions. A test statistic for testing the adequacy of a distribution is proposed. Using eight countries/regions for the year 2005, we show how the methodology can be applied to estimate the parameters of the generalized beta distribution of the second kind (GB2), and its special-case distributions, the beta-2, Singh–Maddala, Dagum, generalized gamma, and lognormal distributions. We test the adequacy of each distribution and compare predicted and actual income shares, where the number of groups used for prediction can differ from the number used in estimation. Estimates and standard errors for inequality and poverty measures are provided. Supplementary materials for this article are available online.
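A hedged sketch of the idea for the lognormal special case: with unknown group bounds, the observed cumulative population shares pin the bounds at model quantiles, and the remaining moment conditions match the class means. The identity weight matrix and the toy data below are our illustrative assumptions; the paper derives the optimal weight matrix and covers the full GB2 family, standard errors, and tests.

```python
# GMM-flavoured fit of a lognormal to grouped shares and class means (sketch).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

shares = np.array([0.2, 0.2, 0.2, 0.2, 0.2])               # population shares
class_means = np.array([400., 800., 1200., 1800., 3500.])  # observed class means
cum = np.concatenate(([0.0], np.cumsum(shares)))            # 0 = c_0 < ... < c_G = 1

def model_class_means(mu, sigma):
    # Conditional means of LN(mu, sigma^2) between its cum[g-1] and cum[g] quantiles:
    # E[X | a < X <= b] = exp(mu + sigma^2/2) * diff(Phi(Phi^{-1}(c) - sigma)) / share.
    z = norm.ppf(cum)                        # +/- inf endpoints handled by scipy
    return np.exp(mu + 0.5*sigma**2) * np.diff(norm.cdf(z - sigma)) / shares

def objective(theta):
    mu, log_sigma = theta
    g = model_class_means(mu, np.exp(log_sigma)) - class_means
    return g @ g                             # identity-weighted GMM objective

res = minimize(objective, x0=[np.log(class_means.mean()), np.log(0.7)],
               method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat, model_class_means(mu_hat, sigma_hat))
```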

5.
The aim of this paper is to propose offline methods for detecting change in the coefficients of a multinomial logistic regression model for categorical time series. The alternative to the null hypothesis of stationarity can be either that the null simply does not hold, or that there is a temporary change in the sequence. We use the efficient score vector of the partial likelihood function. This has several advantages. First, the alternative value of the parameter does not have to be estimated; hence, we have a procedure with a simple structure and only one parameter estimation, using all available observations. This is in contrast with generalized likelihood ratio-based change point tests. The efficient score vector is used in various ways. As a vector, its components correspond to the different components of the multinomial logistic regression model's parameter vector. Using its quadratic form, a test can be defined in which the presence of a change in any or all parameters is tested for. If there are too many parameters, one can test for any subset while treating the rest as nuisance parameters. Our motivating example is a DNA sequence of four categories, and our test result shows that in the published data the distribution of the four categories is not stationary.
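To convey the flavour of a score-based change test, here is a stripped-down stand-in (our construction, not the paper's partial-likelihood statistic): a CUSUM of score contributions for a change in the category probabilities of an i.i.d. multinomial sequence, with the maximal quadratic form calibrated by simulation.

```python
# Illustrative score-type CUSUM for a change in multinomial category probabilities.
import numpy as np

def cusum_stat(y, m):
    n = len(y)
    p = np.bincount(y, minlength=m) / n
    E = np.eye(m)[y][:, :m-1] - p[:m-1]                  # score contributions
    S = np.cumsum(E, axis=0)                             # cumulative scores
    V = np.diag(p[:m-1]) - np.outer(p[:m-1], p[:m-1])    # null covariance (m-1 cats)
    Vinv = np.linalg.inv(V)
    k = np.arange(1, n)
    q = np.einsum('ij,jk,ik->i', S[:-1], Vinv, S[:-1])   # quadratic forms at each k
    return np.max(q * n / (k * (n - k)))                 # Brownian-bridge scaling

rng = np.random.default_rng(2)
m, n = 4, 400
y = np.concatenate([rng.choice(m, n//2, p=[.25, .25, .25, .25]),
                    rng.choice(m, n//2, p=[.40, .30, .20, .10])])
obs = cusum_stat(y, m)
null = [cusum_stat(rng.choice(m, n, p=[.25]*4), m) for _ in range(500)]
print("statistic:", obs, " simulated p-value:", np.mean(np.array(null) >= obs))
```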

6.
There are many models that require the estimation of a set of ordered parameters. For example, multivariate analysis of variance is often formulated as testing for the equality of the parameters versus an ordered alternative. This problem, referred to as isotonic inference, constrained inference, or isotonic regression, has led to the development of general solutions that are often not easy to apply in special models. In this expository paper, we study the special case of a separable convex quadratic programming problem for which the optimality conditions lead to a readily solved linear complementarity problem in the Lagrange multipliers, and subsequently to an equivalent linear programming problem, whose solution can be used to recover the solution of the original isotonic problem. The method can be applied to estimating ordered correlations, ordered binomial probabilities, ordered Poisson parameters, ordered exponential scale parameters, or ordered risk differences.
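The simplest instance of this separable quadratic program, weighted least squares under a total order, is solved by the classical pool-adjacent-violators algorithm (PAVA); the paper's LP route covers more general separable convex objectives. A self-contained PAVA sketch:

```python
# Classical PAVA: isotonic (non-decreasing) weighted least-squares fit.
def pava(y, w=None):
    """Minimize sum w_i (x_i - y_i)^2 subject to x_1 <= x_2 <= ... <= x_n."""
    w = [1.0] * len(y) if w is None else list(w)
    means, weights, counts = [], [], []
    for yi, wi in zip(y, w):
        means.append(float(yi)); weights.append(float(wi)); counts.append(1)
        # Pool adjacent blocks while the order constraint is violated.
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, c2 = means.pop(), weights.pop(), counts.pop()
            m1, w1, c1 = means.pop(), weights.pop(), counts.pop()
            means.append((w1*m1 + w2*m2) / (w1 + w2))
            weights.append(w1 + w2); counts.append(c1 + c2)
    fit = []
    for m, c in zip(means, counts):
        fit.extend([m] * c)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0, 3.5]))   # -> [1.0, 2.5, 2.5, 3.75, 3.75]
```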

7.
The score function is associated with some optimality features in statistical inference. This review article looks at the central role of the score in testing and estimation. The maximization of power in testing and the quest for efficiency in estimation both lead to the score as a guiding principle. In hypothesis testing, the locally most powerful test statistic is the score test or a transformation of it. In estimation, the optimal estimating function is the score. The same link can be made in the case of nuisance parameters: the optimal test function should have maximum correlation with the score of the parameter of primary interest. We complement this result by showing that the same criterion should be satisfied in the estimation problem as well.
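A minimal concrete instance of the locally most powerful (score) test discussed here: testing H0: λ = λ0 for an i.i.d. Poisson sample, where the score at λ0 and the Fisher information yield the familiar statistic U²/I, referred to a χ²₁ distribution.

```python
# Score test for the mean of an i.i.d. Poisson sample.
import numpy as np
from scipy.stats import chi2

def poisson_score_test(y, lam0):
    n = len(y)
    U = (y.sum() - n * lam0) / lam0      # score: sum(y_i/lam0 - 1)
    I = n / lam0                         # Fisher information at lam0
    stat = U**2 / I                      # = n*(ybar - lam0)^2 / lam0
    return stat, chi2.sf(stat, df=1)

y = np.array([3, 5, 2, 4, 6, 3, 4, 5, 2, 4])
print(poisson_score_test(y, lam0=3.0))
```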

8.
Random coefficient regression models have been used to describe repeated measures on members of a sample of n individuals. Previous researchers have proposed methods of estimating the mean parameters of such models. Their methods require that each individual be observed under the same settings of the independent variables or, less stringently, that the number of observations, r, on each individual be the same. Under the latter restriction, estimators of the mean regression parameters exist which are consistent as both r→∞ and n→∞ and efficient as r→∞, and large-sample (r large) tests of the mean parameters are available. These results are easily extended to the case where not all individuals are observed an equal number of times, provided limits are taken as min(r)→∞. Existing methods of inference, however, are not justified by the current literature when n is large and r is small, as is the case in many biomedical applications. The primary contribution of the current paper is a derivation of the asymptotic properties of modifications of existing estimators as n alone tends to infinity, r fixed. From these properties it is shown that existing methods of inference, which are currently justified only when min(r) is large, are also justifiable when n is large and min(r) is small. A secondary contribution is the definition of a positive definite estimator of the covariance matrix for the random coefficients in these models. Use of this estimator avoids computational problems that can otherwise arise.
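A sketch of the n-large/r-small setting: per-individual OLS fits, their average as the mean-parameter estimate, and the sample covariance of the individual estimates, which is positive definite whenever n exceeds the coefficient dimension. This simple PD estimator (which overestimates the true coefficient covariance by the average sampling variance) is our illustrative stand-in, not the estimator the paper defines; the classical moment estimator, which subtracts the average sampling covariance, can fail to be positive definite.

```python
# Per-individual OLS in a random coefficient regression; n large, r small.
import numpy as np

rng = np.random.default_rng(3)
n, r = 200, 4                                    # many individuals, few obs each
mean_beta = np.array([1.0, 2.0])
Sigma = np.array([[0.5, 0.1], [0.1, 0.3]])       # covariance of random coefficients
betas = []
for _ in range(n):
    b = rng.multivariate_normal(mean_beta, Sigma)
    X = np.column_stack([np.ones(r), rng.normal(size=r)])
    y = X @ b + rng.normal(scale=0.5, size=r)
    betas.append(np.linalg.lstsq(X, y, rcond=None)[0])   # individual OLS fit
betas = np.array(betas)
print("mean estimate:", betas.mean(axis=0))
print("PD covariance estimate:\n", np.cov(betas, rowvar=False))
```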

9.
In this paper, we study the change-point inference problem motivated by genomic data collected for the purpose of monitoring DNA copy number changes. DNA copy number changes, or copy number variations (CNVs), correspond to chromosomal aberrations and signify abnormality of a cell. Cancer development and other related diseases are usually associated with DNA copy number changes on the genome. Such data contain inherent random noise; there is therefore a need for an appropriate statistical model for identifying statistically significant DNA copy number changes. This type of statistical inference is evidently crucial in cancer research, clinical diagnostic applications, and other related genomic research. For the high-throughput genomic data resulting from DNA copy number experiments, a mean and variance change point model (MVCM) for detecting the CNVs is appropriate. We propose a Bayesian approach to the MVCM for the case of a single change, and a sliding window to search for all CNVs on a given chromosome. We carry out simulation studies to evaluate the estimate of the locus of the DNA copy number change using the derived posterior probability. These simulation results show that the approach is suitable for identifying copy number changes. The approach is also illustrated on several chromosomes from nine fibroblast cancer cell lines (array-based comparative genomic hybridization data). All DNA copy number aberrations that had been identified and verified by karyotyping are detected by our approach on these cell lines.
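A stripped-down version of the one-change analysis: for a Gaussian sequence with a possible simultaneous mean and variance change at locus k, score each k by the profile log-likelihood of the two segments and normalize to a pseudo-posterior under a uniform prior on k. The priors and the profile-likelihood surrogate are our simplifying assumptions; the paper works with the derived posterior and scans a chromosome with a sliding window.

```python
# Pseudo-posterior over the locus of a single mean-and-variance change.
import numpy as np

def change_point_posterior(y, min_seg=3):
    n = len(y)
    ks = np.arange(min_seg, n - min_seg + 1)
    ll = np.array([
        -0.5 * k * np.log(y[:k].var()) - 0.5 * (n - k) * np.log(y[k:].var())
        for k in ks])                                  # profile log-likelihoods
    post = np.exp(ll - ll.max())
    return ks, post / post.sum()

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(0.8, 2.0, 40)])
ks, post = change_point_posterior(y)
print("posterior mode at k =", ks[np.argmax(post)])    # true change at k = 60
```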

10.
This article studies the probabilistic structure and asymptotic inference of first-order periodic generalized autoregressive conditional heteroscedasticity (PGARCH(1, 1)) models, in which the parameters of the volatility process are allowed to switch between different regimes. First, we establish necessary and sufficient conditions for a PGARCH(1, 1) process to have a unique stationary solution (in the periodic sense) and for the existence of moments of any order. Second, using the representation of the squared PGARCH(1, 1) model as a PARMA(1, 1) model, we consider Yule-Walker type estimators for the parameters of the PGARCH(1, 1) model and derive their consistency and asymptotic normality. The estimator can be surprisingly efficient for quite small numbers of autocorrelations and, in some cases, can be more efficient than the least squares estimator (LSE). We use a residual bootstrap to define bootstrap estimators for the Yule-Walker estimates and prove the consistency of this bootstrap method. A set of numerical experiments illustrates the practical relevance of our theoretical results.
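The flavour of the approach in the non-periodic special case: a squared GARCH(1,1) process follows an ARMA(1,1) with AR coefficient α+β, so its autocorrelations satisfy ρ(k) = (α+β)ρ(k−1) for k ≥ 2, and a Yule-Walker-type ratio of two sample autocorrelations already estimates the persistence. This toy estimator is our illustration only; the paper develops the full PARMA-based estimator, its asymptotics, and the residual bootstrap for the periodic case.

```python
# Moment (Yule-Walker-type) estimate of GARCH(1,1) persistence from squared returns.
import numpy as np

def acf(x, k):
    x = x - x.mean()
    return (x[:-k] * x[k:]).sum() / (x * x).sum()

rng = np.random.default_rng(5)
omega, alpha, beta = 0.1, 0.1, 0.8        # fourth moment exists for these values
n = 50000
y, s2 = np.empty(n), omega / (1 - alpha - beta)
for t in range(n):
    y[t] = np.sqrt(s2) * rng.normal()
    s2 = omega + alpha * y[t]**2 + beta * s2
y2 = y**2
print("alpha+beta =", alpha + beta,
      " rough estimate:", acf(y2, 2) / acf(y2, 1))
```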

11.
We consider the development of Bayesian nonparametric methods for product partition models such as hidden Markov models and change point models. Our approach uses a mixture of Dirichlet processes (MDP) model for the unknown sampling distribution (likelihood) of the observations arising in each state, and a computationally efficient data augmentation scheme to aid inference. The method uses novel MCMC methodology which combines recent retrospective sampling methods with the use of slice sampler variables. The methodology is computationally efficient, both in terms of MCMC mixing properties and in robustness to the length of the time series being investigated. Moreover, the method is easy to implement, requiring little or no user interaction. We apply our methodology to the analysis of genomic copy number variation.

12.
In segmentation problems, inference on change-point position and model selection are two difficult issues due to the discrete nature of change-points. In a Bayesian context, we derive exact, explicit and tractable formulae for the posterior distribution of variables such as the number of change-points or their positions. We also demonstrate that several classical Bayesian model selection criteria can be computed exactly. All these results are based on an efficient strategy to explore the whole segmentation space, which is very large. We illustrate our methodology on both simulated data and a comparative genomic hybridization profile.
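A small self-contained instance of the exact-exploration idea, under illustrative assumptions (Gaussian segments with known noise variance, a N(0, v0) prior on each segment mean, uniform priors on K and on segmentations given K; these choices are ours, not the paper's): the sum of marginal likelihoods over all segmentations with exactly k segments is computed by dynamic programming, giving an exact posterior for the number of segments.

```python
# Exact posterior over the number of segments via dynamic programming.
import numpy as np
from scipy.special import logsumexp, gammaln

def log_binom(a, b):
    return gammaln(a + 1) - gammaln(b + 1) - gammaln(a - b + 1)

def posterior_K(y, Kmax, sigma2=1.0, v0=10.0):
    n = len(y)
    # Log marginal likelihood of every candidate segment y[i:j].
    logA = np.full((n + 1, n + 1), -np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            seg = y[i:j]; m = j - i; s = seg.sum()
            logA[i, j] = (-0.5 * m * np.log(2 * np.pi * sigma2)
                          - 0.5 * np.log(1 + m * v0 / sigma2)
                          - 0.5 / sigma2 * ((seg**2).sum()
                                            - v0 * s**2 / (sigma2 + m * v0)))
    # logQ[k, j]: log sum over all splits of y[:j] into exactly k segments.
    logQ = np.full((Kmax + 1, n + 1), -np.inf); logQ[0, 0] = 0.0
    for k in range(1, Kmax + 1):
        for j in range(k, n + 1):
            logQ[k, j] = logsumexp(logQ[k - 1, :j] + logA[:j, j])
    # Uniform prior on K and on the C(n-1, k-1) segmentations given K.
    log_ev = np.array([logQ[k, n] - log_binom(n - 1, k - 1)
                       for k in range(1, Kmax + 1)])
    return np.exp(log_ev - logsumexp(log_ev))

rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(0, 1, 40), rng.normal(3, 1, 30),
                    rng.normal(-2, 1, 30)])
print(posterior_K(y, Kmax=5))   # mass should concentrate on K = 3 segments
```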

13.
In the problem of parametric statistical inference with a finite parameter space, we propose some simple rules for defining posterior upper and lower probabilities directly from the observed likelihood function, without using any prior information. The rules satisfy the likelihood principle and a basic consistency principle ('avoiding sure loss'), they produce vacuous inferences when the likelihood function is constant, and they have other symmetry, monotonicity and continuity properties. One of the rules also satisfies fundamental frequentist principles. The rules can be used to eliminate nuisance parameters, to interpret the likelihood function, and to use it in making decisions. To compare the rules, they are applied to the problem of sampling from a finite population. Our results indicate that there are objective statistical methods which can reconcile three general approaches to statistical inference: likelihood inference, coherent inference and frequentist inference.
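One concrete rule of this flavour (a sketch, not necessarily the paper's preferred rule): take the normalized likelihood R(θ) = L(θ)/max L on the finite parameter space and set the upper probability of A to maxθ∈A R(θ) and the lower probability to 1 minus the upper probability of the complement. Note the tie-in with the abstract: a constant likelihood yields the vacuous interval [0, 1].

```python
# Likelihood-based upper/lower probabilities on a finite parameter space.
import numpy as np
from scipy.stats import binom

thetas = np.linspace(0.05, 0.95, 19)        # finite parameter space
n, k = 20, 14                               # observed: 14 successes in 20 trials
R = binom.pmf(k, n, thetas)
R = R / R.max()                             # normalized (relative) likelihood

A = thetas >= 0.5                           # hypothesis: theta >= 0.5
upper = R[A].max()
lower = 1.0 - R[~A].max()
print(f"lower = {lower:.3f}, upper = {upper:.3f}")
```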

14.
David R. Bickel, Statistics (2018), 52(3): 552–570
Learning from model diagnostics that a prior distribution must be replaced by one that conflicts less with the data raises the question of which prior should instead be used for inference and decision. The same problem arises when a decision maker learns that one or more reliable experts express unexpected beliefs. In both cases, coherence of the solution would be guaranteed by applying Bayes's theorem to a distribution of prior distributions that effectively assigns the initial prior distribution a probability arbitrarily close to 1. The new distribution for inference would then be the distribution of priors conditional on the insight that the prior distribution lies in a closed convex set that does not contain the initial prior. A readily available distribution of priors needed for such conditioning is the law of the empirical distribution of a sufficiently large number of independent parameter values drawn from the initial prior. According to the Gibbs conditioning principle from the theory of large deviations, the resulting new prior distribution minimizes the entropy relative to the initial prior. While minimizing relative entropy accommodates the necessity of going beyond the initial prior without departing from it any more than the insight demands, the large-deviation derivation also ensures the advantages of Bayesian coherence. This approach is generalized to uncertain insights by allowing the closed convex set of priors to be random.
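A sketch of the entropy-minimizing update for a discrete initial prior and the insight "E[θ] ≥ c" (a closed convex set the initial prior violates): the I-projection is an exponential tilting qᵢ·exp(λθᵢ), with λ ≥ 0 chosen so the constraint just binds. The grid, prior, and constraint below are illustrative assumptions.

```python
# Minimum-relative-entropy update of a discrete prior via exponential tilting.
import numpy as np
from scipy.optimize import brentq

theta = np.linspace(0, 1, 101)                    # parameter grid
q = np.exp(-0.5 * ((theta - 0.3) / 0.1)**2)       # initial prior ~ N(0.3, 0.1^2)
q /= q.sum()
c = 0.45                                          # insight: E[theta] >= c

def tilted_mean(lam):
    p = q * np.exp(lam * theta)
    p /= p.sum()
    return p @ theta

lam = brentq(lambda l: tilted_mean(l) - c, 0.0, 200.0)
p = q * np.exp(lam * theta); p /= p.sum()
kl = np.sum(p * np.log(p / q))                    # entropy relative to initial prior
print(f"lambda = {lam:.3f}, new mean = {p @ theta:.3f}, KL(p||q) = {kl:.4f}")
```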

15.
There is no established procedure for testing for trend with nominal outcomes that would provide both a global hypothesis test and outcome-specific inference. We derive a simple formula for such a test using a weighted sum of Cochran–Armitage test statistics evaluating the trend in each outcome separately. The test is shown to be equivalent to the score test for multinomial logistic regression; however, the new formulation enables the derivation of a sample size formula and multiplicity-adjusted inference for individual outcomes. The proposed methods are implemented in the R package multiCA.
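A sketch of the building block: the standard Cochran–Armitage trend z-statistic computed for each outcome category (one-vs-rest) of a nominal-by-ordered-group table. The paper's global test combines these with specific weights (equivalently, the multinomial-logistic score test) and is implemented, with multiplicity-adjusted per-outcome inference, in multiCA; the table below is made-up example data.

```python
# Per-outcome Cochran-Armitage trend statistics for a nominal outcome table.
import numpy as np

def cochran_armitage_z(r, n, t):
    """Trend z-statistic: r successes out of n trials at ordered scores t."""
    N, pbar = n.sum(), r.sum() / n.sum()
    num = np.sum(t * (r - n * pbar))
    var = pbar * (1 - pbar) * (np.sum(n * t**2) - np.sum(n * t)**2 / N)
    return num / np.sqrt(var)

# rows = outcome categories, columns = ordered groups (e.g. doses)
counts = np.array([[20, 15, 10, 5],
                   [10, 12, 14, 16],
                   [ 5,  8, 11, 14]])
t = np.arange(counts.shape[1], dtype=float)     # group scores
n = counts.sum(axis=0)
for j, row in enumerate(counts):
    print(f"outcome {j}: z = {cochran_armitage_z(row, n, t):+.3f}")
```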

16.
We introduce a flexible marginal modelling approach for statistical inference for clustered and longitudinal data under minimal assumptions. This estimated estimating equations approach is semiparametric, and the proposed models are fitted by quasi-likelihood regression, where the unknown marginal means are a function of the fixed-effects linear predictor with unknown smooth link, and the variance–covariance is an unknown smooth function of the marginal means. We propose to estimate the nonparametric link and variance–covariance functions via smoothing methods, whereas the regression parameters are obtained via the estimated estimating equations. These are score equations that contain nonparametric function estimates. The proposed estimated estimating equations approach is motivated by its flexibility and easy implementation. Moreover, if the data follow a generalized linear mixed model, with either a specified or an unspecified distribution of random effects and link function, the proposed model emerges as the corresponding marginal (population-average) version and can be used to obtain inference for the fixed effects in the underlying generalized linear mixed model, without the need to specify any other components of this generalized linear mixed model. Among marginal models, the estimated estimating equations approach provides a flexible alternative to modelling with generalized estimating equations. Applications of estimated estimating equations include diagnostics and link selection. The asymptotic distribution of the proposed estimators for the model parameters is derived, enabling statistical inference. Practical illustrations include Poisson modelling of repeated epileptic seizure counts and simulations for clustered binomial responses.

17.
Generalized linear latent variable models (GLLVMs), as defined by Bartholomew and Knott, enable modelling of relationships between manifest and latent variables. They extend structural equation modelling techniques, which are powerful tools in the social sciences. However, because of the complexity of the log-likelihood function of a GLLVM, an approximation such as numerical integration must be used for inference. This can drastically limit the number of variables in the model and can lead to biased estimators. We propose a new estimator for the parameters of a GLLVM, based on a Laplace approximation to the likelihood function, which can be computed even for models with a large number of variables. The new estimator can be viewed as an M-estimator, leading to readily available asymptotic properties and correct inference. A simulation study shows its excellent finite-sample properties, in particular when compared with a well-established approach such as LISREL. A real data example on the measurement of wealth for the computation of multidimensional inequality is analysed to highlight the importance of the methodology.
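The core computation in miniature (our simplified scalar example, not the paper's GLLVM estimator): the marginal log-likelihood contribution of one cluster in a random-intercept logistic model, with the integral over the latent intercept replaced by a Laplace approximation, compared against numerical quadrature. A GLLVM integrates a vector of latent variables in the same way.

```python
# Laplace approximation to a marginal log-likelihood with one latent variable.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def laplace_cluster_loglik(y, eta0, tau):
    """log integral of prod_i p(y_i | eta0 + b) * N(b; 0, tau^2) db."""
    def h(b):  # log integrand, without the Gaussian normalizing constant
        eta = eta0 + b
        return np.sum(y * eta - np.logaddexp(0, eta)) - 0.5 * b**2 / tau**2
    bhat = minimize_scalar(lambda b: -h(b)).x
    p = 1 / (1 + np.exp(-(eta0 + bhat)))
    h2 = -len(y) * p * (1 - p) - 1 / tau**2          # analytic h''(bhat)
    return h(bhat) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(-h2) \
           - 0.5 * np.log(2 * np.pi * tau**2)

y = np.array([1, 0, 1, 1, 0, 1])                     # one cluster's responses
eta0, tau = 0.3, 1.0                                  # fixed-effect part, RE sd
exact = np.log(quad(lambda b: np.exp(
    np.sum(y * (eta0 + b) - np.logaddexp(0, eta0 + b)))
    * np.exp(-0.5 * b**2 / tau**2) / np.sqrt(2 * np.pi * tau**2),
    -10, 10)[0])
print("Laplace:", laplace_cluster_loglik(y, eta0, tau), " quadrature:", exact)
```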

18.
Approximate Bayesian Inference for Survival Models
Bayesian analysis of time-to-event data, usually called survival analysis, has received increasing attention in recent years. In Cox-type models it allows the use of information from the full likelihood instead of the partial likelihood, so that the baseline hazard function and the model parameters can be jointly estimated. In general, Bayesian methods permit full and exact posterior inference for any parameter or predictive quantity of interest. On the other hand, Bayesian inference often relies on Markov chain Monte Carlo (MCMC) techniques which, from the user's point of view, may appear slow at delivering answers. In this article, we show how a new inferential tool named integrated nested Laplace approximations can be adapted and applied to many survival models, making Bayesian analysis both fast and accurate without having to rely on MCMC-based inference.

19.
This paper examines modeling and inference questions for experiments in which different subsets of a set of k possibly dependent components are tested in r different environments. In each environment, the failure times of the set of components on test are assumed to be governed by a particular type of multivariate exponential (MVE) distribution. For any given component tested in several environments, it is assumed that its marginal failure rate varies from one environment to another via a change of scale between the environments, resulting in a joint MVE model which links in a natural way the applicable MVE distributions describing component behavior in each fixed environment. This study thus extends the work of Proschan and Sullo (1976) to multiple environments and the work of Kvam and Samaniego (1993) to dependent data. The problem of estimating model parameters via the method of maximum likelihood is examined in detail. First, necessary and sufficient conditions for the identifiability of the model parameters are established. We then treat the derivation of the MLE via a numerically augmented application of the EM algorithm. The feasibility of the estimation method is demonstrated in an example in which the likelihood ratio test of the hypothesis of equal component failure rates within any given environment is carried out.

20.
This paper addresses the problem of inference for the antedependence model (Gabriel, 1961, 1962). Antedependence can be formulated as an autoregressive process of general order which is non-stationary in time. Its primary application is in the analysis of repeated measurements data, that is, data consisting of independent replicates of relatively short time series. Our focus is on testing a general linear hypothesis in the context of a multivariate regression model with multivariate normal antedependent errors. Although the relevant likelihood ratio statistic was first presented by Gabriel (1961), the distribution of this statistic has not yet been derived. We present this derivation and show how this result leads to a simple correction factor that improves the χ2 approximation of the likelihood ratio statistic.
