Similar Documents
 20 similar documents found (search time: 46 ms)
1.
The author introduces robust techniques for estimation, inference and variable selection in the analysis of longitudinal data. She first addresses the problem of the robust estimation of the regression and nuisance parameters, for which she derives the asymptotic distribution. She uses weighted estimating equations to build robust quasi-likelihood functions. These functions are then used to construct a class of test statistics for variable selection. She derives the limiting distribution of these tests and shows its robustness properties in terms of stability of the asymptotic level and power under contamination. An application to a real data set allows her to illustrate the benefits of a robust analysis.
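A rough sketch of the downweighting idea behind robust weighted estimating equations, in the simplest (independent-errors linear regression) case. This is Huber-weighted iteratively reweighted least squares, not the author's longitudinal estimator; the function names and the tuning constant c = 1.345 are illustrative.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber weight function: 1 inside [-c, c], c/|r| outside."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / a)

def robust_ols(X, y, n_iter=50, tol=1e-8):
    """Iteratively reweighted least squares with Huber weights.

    A minimal illustration of downweighting outlying residuals; the
    weighted estimating equations of the paper are more general
    (correlated longitudinal responses, nuisance parameters), which
    this sketch does not cover.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        w = huber_weights(r / s)
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted normal equations
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

Contaminating a few responses barely moves the Huber fit, which is the "stability under contamination" property the abstract refers to.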

2.
The paper considers the problem of identifying spatial clustering, for instance of one group of individuals in relation to the spatial distribution of another. First, some of the literature is reviewed, some operational problems of practical investigations are discussed and a data set is introduced that involves a group of laryngeal cancer patients and a group of lung cancer patients in south Lancashire. Two techniques, an integrated squared difference statistic and a two-dimensional version of the scan statistic, are then outlined, some of their properties are discussed and they are applied to the data set. The final section takes stock of the data analysis and the characteristics of the techniques.
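A toy version of a two-dimensional scan-type statistic, to make the idea concrete: slide a disc of fixed radius over the case locations and record the largest proportion of cases among all points in the disc. This is a simplified cousin of the scan statistic discussed in the paper (which involves a proper significance assessment); the radius and the max-over-cases search are illustrative choices.

```python
import numpy as np

def scan_statistic(cases, controls, radius):
    """Two-dimensional scan-type statistic: centre a disc of fixed
    radius at every case location and return the largest proportion
    of cases among all points falling inside the disc. Values near
    the overall case proportion suggest no clustering; values near
    1 suggest a local excess of cases relative to controls."""
    pts = np.vstack([cases, controls])
    labels = np.r_[np.ones(len(cases)), np.zeros(len(controls))]
    best = 0.0
    for c in cases:
        inside = np.linalg.norm(pts - c, axis=1) <= radius
        best = max(best, float(labels[inside].mean()))
    return best
```

With cases tightly clustered and controls spread uniformly, the statistic is far above the overall case proportion.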

3.
Exact tests for the equality of several linear models are developed using permutation techniques. Two cases of the linear model, characterized by either stochastic or nonstochastic predictors, are considered: the linear regression model (LRM) and the general linear model (GLM). A general class of test statistics using the volume of simplexes as the basic unit of analysis is proposed for this problem. The resulting class of statistics is shown to be a natural generalization of the multi-response permutation procedure (MRPP) test statistics, which have been shown to comprise many of the statistics used in both parametric and nonparametric analysis of the standard g-sample problem. In the LRM case, exact moments of all orders are derived for the permutation distribution of any test statistic in the general class. Moment-based approximation of significance levels is shown to be computationally feasible in the simple LRM.
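A minimal permutation test of the equality of several linear models, as a stand-in for the simplex-volume statistics of the paper: the statistic here is the total within-group residual sum of squares from separate fits, and group labels are permuted under the null of a common model. The statistic choice is an assumption for illustration, not the paper's.

```python
import numpy as np

def fit_rss(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def perm_test_equal_models(X, y, groups, n_perm=199, seed=0):
    """Permutation test of H0: the same linear model holds in every group.

    Statistic: total within-group RSS from separate fits (small values
    indicate the groups are better described by different models).
    The p-value is the proportion of label permutations giving a
    statistic at least as extreme (i.e. as small) as the observed one.
    """
    rng = np.random.default_rng(seed)
    labels = np.unique(groups)

    def stat(g):
        return sum(fit_rss(X[g == k], y[g == k]) for k in labels)

    obs = stat(groups)
    count = 1  # include the observed arrangement
    for _ in range(n_perm):
        if stat(rng.permutation(groups)) <= obs:
            count += 1
    return obs, count / (n_perm + 1)
```

When the groups follow genuinely different regressions, separate fits have far smaller RSS than any permuted arrangement, so the p-value is small.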

4.
Uncertainty and sensitivity analyses for systems that involve both stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty are discussed. In such analyses, the dependent variable is usually a complementary cumulative distribution function (CCDF) that arises from stochastic uncertainty; uncertainty analysis involves the determination of a distribution of CCDFs that results from subjective uncertainty, and sensitivity analysis involves the determination of the effects of subjective uncertainty in individual variables on this distribution of CCDFs. Uncertainty analysis is presented as an integration problem involving probability spaces for stochastic and subjective uncertainty. Approximation procedures for the underlying integrals are described that provide an assessment of the effects of stochastic uncertainty, an assessment of the effects of subjective uncertainty, and a basis for performing sensitivity studies. Extensive use is made of Latin hypercube sampling, importance sampling and regression-based sensitivity analysis techniques. The underlying ideas, which are initially presented in an abstract form, are central to the design and performance of real analyses. To emphasize the connection between concept and computational practice, these ideas are illustrated with an analysis involving the MACCS reactor accident consequence model, a performance assessment for the Waste Isolation Pilot Plant, and a probabilistic risk assessment for a nuclear power station.
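As a concrete example of the Latin hypercube sampling used throughout such analyses, here is a minimal sketch using SciPy's quasi-Monte Carlo module: each margin of the unit hypercube is divided into n equal strata with exactly one point per stratum.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample: n points in d dimensions, each margin
# stratified into n equal bins with exactly one point per bin.
sampler = qmc.LatinHypercube(d=2, seed=42)
sample = sampler.random(n=100)  # values in [0, 1)^2

# Verify the stratification property on each coordinate: the bin
# indices floor(n * u) form a permutation of 0, ..., n-1.
bins = np.floor(100 * sample).astype(int)
stratified = all(len(np.unique(bins[:, j])) == 100 for j in range(2))
```

In an epistemic uncertainty analysis, each row of `sample` would be mapped through the inverse CDFs of the subjective distributions to give one sampled set of input values.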

5.
New techniques for the analysis of stochastic volatility models in which the logarithm of conditional variance follows an autoregressive model are developed. A cyclic Metropolis algorithm is used to construct a Markov-chain simulation tool. Simulations from this Markov chain converge in distribution to draws from the posterior distribution, enabling exact finite-sample inference. The exact solution to the filtering/smoothing problem of inferring about the unobserved variance states is a by-product of our Markov-chain method. In addition, multistep-ahead predictive densities can be constructed that reflect both inherent model variability and parameter uncertainty. We illustrate our method by analyzing both daily and weekly data on stock returns and exchange rates. Sampling experiments are conducted to compare the performance of Bayes estimators to method of moments and quasi-maximum likelihood estimators proposed in the literature. In both parameter estimation and filtering, the Bayes estimators outperform these other approaches.
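A minimal sketch of the accept/reject step at the heart of such a sampler. The paper's cyclic Metropolis algorithm updates each latent log-volatility in turn; the sketch below shows only a generic random-walk Metropolis update on a one-dimensional log posterior, with illustrative step size and draw counts.

```python
import numpy as np

def metropolis(log_post, x0, n_draws=20000, step=2.4, seed=1):
    """Random-walk Metropolis sampler for a 1-D log posterior.

    Proposes x' = x + step * N(0, 1) and accepts with probability
    min(1, post(x') / post(x)); the resulting chain converges in
    distribution to draws from the target posterior.
    """
    rng = np.random.default_rng(seed)
    x = x0
    lp = log_post(x)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        draws[i] = x
    return draws
```

Targeting a standard normal, the post-burn-in draws reproduce its mean and standard deviation, which is the convergence-in-distribution property the abstract relies on.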

6.
Summary.  The problem motivating the paper is the determination of sample size in clinical trials under normal likelihoods and at the substantive testing stage of a financial audit where normality is not an appropriate assumption. A combination of analytical and simulation-based techniques within the Bayesian framework is proposed. The framework accommodates two different prior distributions: one is the general purpose fitting prior distribution that is used in Bayesian analysis and the other is the expert subjective prior distribution, the sampling prior which is believed to generate the parameter values which in turn generate the data. We obtain many theoretical results and one key result is that typical non-informative prior distributions lead to very small sample sizes. In contrast, a very informative prior distribution may either lead to a very small or a very large sample size depending on the location of the centre of the prior distribution and the hypothesized value of the parameter. The methods that are developed are quite general and can be applied to other sample size determination problems. Some numerical illustrations which bring out many other aspects of the optimum sample size are given.
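A simulation sketch of the two-prior idea in a conjugate normal setting: the sampling prior generates the parameter, data are simulated at a candidate sample size, and the posterior (under the fitting prior) is checked against a success criterion. All numerical values here are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def assurance(n, theta0=0.0, sampling_mean=0.5, sampling_sd=0.1,
              fit_prior_sd=10.0, sigma=1.0, n_sim=4000, seed=0):
    """Monte Carlo estimate of the probability that a study of size n
    yields posterior Pr(theta > theta0) >= 0.95.

    Normal data with known sigma; conjugate normal fitting prior
    N(0, fit_prior_sd^2); the 'sampling prior' N(sampling_mean,
    sampling_sd^2) generates theta, as in the two-prior framework."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(sampling_mean, sampling_sd, n_sim)
    xbar = rng.normal(theta, sigma / np.sqrt(n))
    # conjugate posterior: precision = n/sigma^2 + 1/fit_prior_sd^2
    post_prec = n / sigma**2 + 1 / fit_prior_sd**2
    post_mean = (n / sigma**2) * xbar / post_prec
    post_sd = 1 / np.sqrt(post_prec)
    prob_h1 = norm.sf(theta0, loc=post_mean, scale=post_sd)
    return float(np.mean(prob_h1 >= 0.95))
```

Sweeping n and taking the smallest value whose assurance exceeds a target gives the sample size; the abstract's point is that this answer is sensitive to how informative the fitting prior is.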

7.
Methods for assessing the variability of an estimated contour of a density are discussed. A new method called the coverage plot is proposed. Sectioning and bootstrap techniques are compared for a particular problem which arises in Monte Carlo simulation approaches to estimating the spatial distribution of risk in the operation of weapons firing ranges. It is found that, for computational reasons, the sectioning procedure outperforms the bootstrap for this problem. The roles of bias and sample size are also seen in the examples shown.
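The two variability-assessment schemes being compared can be sketched for a generic estimator. Sectioning splits the sample into independent sections and uses the spread of the section estimates; the bootstrap resamples the full data. The section count and bootstrap size below are illustrative.

```python
import numpy as np

def sectioning_se(data, estimator, n_sections=10):
    """Sectioning estimate of the standard error of `estimator`:
    split the sample into equal sections, apply the estimator to
    each, and treat the section estimates as iid replicates of the
    full-sample estimate (hence the 1/sqrt(n_sections) factor)."""
    sections = np.array_split(data, n_sections)
    est = np.array([estimator(s) for s in sections])
    return est.std(ddof=1) / np.sqrt(n_sections)

def bootstrap_se(data, estimator, n_boot=300, seed=0):
    """Nonparametric bootstrap standard error, for comparison.
    Requires n_boot full-size refits, which is the computational
    burden that makes sectioning attractive in the paper's problem."""
    rng = np.random.default_rng(seed)
    est = np.array([estimator(rng.choice(data, size=len(data)))
                    for _ in range(n_boot)])
    return est.std(ddof=1)
```

For the sample mean of n = 10,000 standard normals, both should recover the true standard error 1/100.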

8.
Bayesian analysis of outlier problems using the Gibbs sampler   (Total citations: 6; self-citations: 0; cited by others: 6)
We consider the Bayesian analysis of outlier models. We show that the Gibbs sampler brings considerable conceptual and computational simplicity to the problem of calculating posterior marginals. Although other techniques for finding posterior marginals are available, the Gibbs sampling approach is notable for its ease of implementation. Allowing the probability of an outlier to be unknown introduces an extra parameter into the model but this turns out to involve only minor modification to the algorithm. We illustrate these ideas using a contaminated Gaussian distribution, a t-distribution, a contaminated binomial model and logistic regression.
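A minimal sketch of the contaminated-Gaussian case, assuming a fixed outlier probability and a known variance inflation (the paper also treats the unknown-probability case): the sampler alternates between the outlier indicators and the mean.

```python
import numpy as np

def gibbs_outlier_mean(y, eps=0.05, k2=9.0, n_iter=2000, seed=0):
    """Gibbs sampler for the mean of a contaminated Gaussian model:
    y_i ~ (1 - eps) N(mu, 1) + eps N(mu, k2), flat prior on mu.

    Alternates between the outlier indicators z_i and mu; flagged
    observations enter the mu update with precision 1/k2, so they
    are automatically downweighted. A simplified version of the
    scheme in the paper (eps and k2 held fixed)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    mu = y.mean()
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # z_i | mu : Bernoulli with odds from the two component densities
        d0 = (1 - eps) * np.exp(-0.5 * (y - mu) ** 2)
        d1 = eps * np.exp(-0.5 * (y - mu) ** 2 / k2) / np.sqrt(k2)
        z = rng.uniform(size=n) < d1 / (d0 + d1)
        # mu | z : normal, precision = sum of observation precisions
        prec = np.where(z, 1 / k2, 1.0)
        mu = rng.normal((prec * y).sum() / prec.sum(),
                        1 / np.sqrt(prec.sum()))
        draws[t] = mu
    return draws
```

With a few gross outliers injected, the posterior mean of mu stays near the clean-data value, which is the point of the outlier model.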

9.
Summary.  We suggest two new methods, which are applicable to both deconvolution and regression with errors in explanatory variables, for nonparametric inference. The two approaches involve kernel or orthogonal series methods. They are based on defining a low order approximation to the problem at hand, and proceed by constructing relatively accurate estimators of that quantity rather than attempting to estimate the true target functions consistently. Of course, both techniques could be employed to construct consistent estimators, but in many contexts of importance (e.g. those where the errors are Gaussian) consistency is, from a practical viewpoint, an unattainable goal. We rephrase the problem in a form where an explicit, interpretable, low order approximation is available. The information that we require about the error distribution (the error-in-variables distribution, in the case of regression) is only in the form of low order moments and so is readily obtainable by a rudimentary analysis of indirect measurements of errors, e.g. through repeated measurements. In particular, we do not need to estimate a function, such as a characteristic function, which expresses detailed properties of the error distribution. This feature of our methods, coupled with the fact that all our estimators are explicitly defined in terms of readily computable averages, means that the methods are particularly economical in computing time.

10.
Both knowledge-based systems and statistical models are typically concerned with making predictions about future observables. Here we focus on assessment of predictive performance and provide two techniques for improving the predictive performance of Bayesian graphical models. First, we present Bayesian model averaging, a technique for accounting for model uncertainty. Second, we describe a technique for eliciting a prior distribution for competing models from domain experts. We explore the predictive performance of both techniques in the context of a urological diagnostic problem.
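A common large-sample shortcut for Bayesian model averaging, sketched here for illustration (the paper works with graphical models and elicited priors, not this BIC approximation): approximate posterior model probabilities from BIC values and average the model predictions with those weights.

```python
import numpy as np

def bic_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    w_k proportional to exp(-BIC_k / 2), assuming equal prior model
    probabilities. Subtracting the minimum BIC avoids underflow."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-(b - b.min()) / 2)
    return w / w.sum()

def model_average(predictions, bics):
    """BMA prediction: weight each model's prediction vector by its
    approximate posterior probability and sum."""
    w = bic_weights(bics)
    return np.tensordot(w, np.asarray(predictions, dtype=float), axes=1)
```

Accounting for model uncertainty this way typically improves predictive performance over committing to the single best-scoring model, which is the abstract's first point.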

11.
In regression models having symmetric errors, exact distribution-free inference about individual parameters may be carried out by grouping observations, eliminating unwanted parameters within groups, and applying distribution-free techniques for the symmetric location parameter problem. In models whose errors have identical but not necessarily symmetric distributions, symmetry may be obtained by taking differences between pairs of observations. Both grouping and differencing involve potential efficiency loss. The choice of an optimal scheme to minimize efficiency loss is expressible as a multi-assignment type of problem, whose solutions, exact and approximate, are discussed.

12.
Efficient industrial experiments for reliability analysis of manufactured goods may consist of subjecting the units to higher stress levels than those of the usual working conditions. This results in the so-called "accelerated life tests" where, for each pre-fixed stress level, the experiment ends after the failure of a certain pre-fixed proportion of units or after a certain test time is reached. The aim of this paper is to determine estimates of the mean lifetime of the units under usual working conditions from censored failure data obtained under stress conditions. This problem is approached through generalized linear modelling and related inferential techniques, considering a Weibull failure distribution and a log-linear stress-response relationship. The general framework includes, as particular cases, the Inverse Power Law model, the Eyring model, the Arrhenius model and the generalized Eyring model. In order to illustrate the proposed methodology, a numerical example is provided.
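A generic maximum-likelihood sketch of the same ingredients (Weibull lifetimes, log-linear stress-response, right censoring), fitted by direct optimization rather than the paper's GLM formulation; the function name, starting values and simulated design are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_weibull_alt(time, event, stress):
    """Maximum likelihood for a Weibull accelerated life test with a
    log-linear stress-response: scale(s) = exp(b0 + b1 * s), common
    shape c. `event` is 1 for observed failures, 0 for right-censored
    units, whose contribution is the log survival function."""
    def nll(par):
        b0, b1, log_c = par
        c = np.exp(log_c)
        scale = np.exp(b0 + b1 * stress)
        z = (time / scale) ** c
        logf = (np.log(c) - np.log(scale)
                + (c - 1) * np.log(time / scale) - z)   # failure density
        return -np.where(event == 1, logf, -z).sum()    # -z = log survival
    x0 = np.array([np.log(time.mean()), 0.0, 0.0])
    res = minimize(nll, x0, method="Nelder-Mead",
                   options={"maxiter": 5000})
    b0, b1, log_c = res.x
    return b0, b1, np.exp(log_c)
```

The fitted b0, b1 then give the mean lifetime under working-condition stress via the Weibull mean, exp(b0 + b1*s) * Gamma(1 + 1/c).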

13.
An essential ingredient of any time series analysis is the estimation of the model parameters and the forecasting of future observations. This investigation takes a Bayesian approach to the analysis of time series by making inferences of the model parameters from the posterior distribution and forecasting from the predictive distribution.

The foundation of the approach is to approximate the conditional likelihood by a normal-gamma distribution on the parameter space. The techniques are illustrated with many examples of ARMA processes.

14.
The authors present theoretical results that show how one can simulate a mixture distribution whose components live in subspaces of different dimension by reformulating the problem in such a way that observations may be drawn from an auxiliary continuous distribution on the largest subspace and then transformed in an appropriate fashion. Motivated by the importance of enlarging the set of available Markov chain Monte Carlo (MCMC) techniques, the authors show how their results can be fruitfully employed in problems such as model selection (or averaging) of nested models, or regeneration of Markov chains for evaluating standard deviations of estimated expectations derived from MCMC simulations.

15.
In this work we study robustness in Bayesian models through a generalization of the Normal distribution. We develop appropriate techniques for handling this distribution in Bayesian inference, and we propose two approaches for deciding, in applications, whether the usual Normal model should be replaced by this generalization. First, we pose the dilemma as a model rejection problem, using diagnostic measures. In the second approach we evaluate the model's predictive efficiency. We illustrate these perspectives with a simulation study, a nonlinear model and a longitudinal data model.
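The generalization in question is presumably the exponential power (generalized normal) family; assuming so, SciPy's `gennorm` makes the robustness motivation visible: the density is proportional to exp(-|x|^beta), beta = 2 is Gaussian-shaped and smaller beta gives heavier tails, so extreme observations are less surprising and exert less influence on the posterior.

```python
from scipy.stats import gennorm

# Generalized normal (exponential power) density: f(x) ~ exp(-|x|^beta).
# beta = 2 is Gaussian-shaped, beta = 1 is Laplace; smaller beta gives
# heavier tails, the feature exploited for robust Bayesian modelling.
beta_values = [1.0, 1.5, 2.0]
tails = [float(gennorm.sf(3.0, b)) for b in beta_values]  # Pr(X > 3)
```

Comparing posterior fits under beta = 2 versus a heavier-tailed beta is one concrete way to pose the model-replacement dilemma the abstract describes.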

16.
The complex Bingham distribution is relevant for the shape analysis of landmark data in two dimensions. In this paper it is shown that the problem of simulating from this distribution reduces to simulation from a truncated multivariate exponential distribution. Several simulation methods are described and their efficiencies are compared.
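The building block of that reduction, simulation from a truncated exponential, can be done exactly by inverting the truncated CDF. This sketch shows the one-dimensional inversion only, not the full complex Bingham algorithm.

```python
import numpy as np

def rtrunc_exp(rate, upper, size, seed=0):
    """Draw from an Exponential(rate) truncated to [0, upper] by
    inverting the truncated CDF F(x) = (1 - e^{-rate x}) / (1 - e^{-rate upper}):
    x = -log(1 - u (1 - e^{-rate upper})) / rate for u ~ Uniform(0, 1)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    return -np.log1p(-u * (1 - np.exp(-rate * upper))) / rate
```

All draws land in [0, upper] and the sample mean matches the truncated-exponential mean 1/rate - upper e^{-rate upper}/(1 - e^{-rate upper}).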

17.
The theory of bounds testing, with some discussion   (Total citations: 4; self-citations: 0; cited by others: 4)
Cointegration techniques for testing long-run relationships between economic variables require the variables to be integrated of the same order, which unavoidably involves a degree of pretesting, and pretesting adds uncertainty to the analysis of long-run relationships. When the order of integration of the variables cannot be determined, bounds testing offers a new method for directly testing the long-run relationship between one variable and a set of explanatory variables. After introducing the basic VAR model and hypotheses of the bounds testing approach, together with its key statistics, the Wald and t statistics, and their asymptotic distributions, the paper discusses several issues that arise in both the theory and the practical application of bounds testing, and finally illustrates its use with an empirical example.
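The Wald/F statistic at the centre of the bounds test can be sketched with plain least squares: regress the first difference of y on the lagged levels and lagged differences, and test the joint null that both lagged-level coefficients are zero. The one-lag specification below is an illustrative simplification; the Pesaran-Shin-Smith critical value bounds it is compared against are not reproduced here.

```python
import numpy as np

def bounds_f_stat(y, x):
    """F statistic for an ARDL bounds test with one lag of differences:
    regress dy_t on a constant, lagged levels y_{t-1}, x_{t-1}, and
    lagged differences dy_{t-1}, dx_{t-1}; test the joint null that
    both lagged-level coefficients are zero via the RSS comparison."""
    dy, dx = np.diff(y), np.diff(x)
    resp = dy[1:]
    Z_full = np.column_stack([np.ones(len(resp)),
                              y[1:-1], x[1:-1],    # lagged levels
                              dy[:-1], dx[:-1]])   # lagged differences
    Z_null = Z_full[:, [0, 3, 4]]                  # drop the level terms

    def rss(Z):
        b, *_ = np.linalg.lstsq(Z, resp, rcond=None)
        r = resp - Z @ b
        return float(r @ r)

    q, n, k = 2, len(resp), Z_full.shape[1]
    return ((rss(Z_null) - rss(Z_full)) / q) / (rss(Z_full) / (n - k))
```

For a cointegrated pair (y tracking a random walk x), the lagged-level terms carry strong error-correction information and the statistic is far above the upper critical bound.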

18.
We consider the problem of estimating the shape parameter of a Pareto distribution with unknown scale under an arbitrary strictly bowl-shaped loss function. Classes of estimators improving upon the minimum risk equivariant estimator are derived by adopting the techniques of Stein, Brown, and Kubokawa. The classes of estimators are shown to include some known procedures, such as Stein-type and Brewster and Zidek-type estimators from the literature. We also provide risk plots of the proposed estimators for illustration purposes.
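The baseline that such improved estimators start from can be sketched directly: with unknown scale, estimate the scale by the sample minimum and the shape from the log-spacings. This is the standard maximum-likelihood-type estimator, not the paper's improved Stein or Brewster-Zidek procedures.

```python
import numpy as np

def pareto_shape_mle(x):
    """Baseline estimate of the Pareto shape alpha with unknown scale:
    scale_hat = min(x), alpha_hat = n / sum(log(x / scale_hat)).
    The paper's estimators improve on equivariant estimators of this
    general form under bowl-shaped losses; this is just the baseline."""
    x = np.asarray(x, dtype=float)
    scale_hat = float(x.min())
    alpha_hat = len(x) / np.log(x / scale_hat).sum()
    return alpha_hat, scale_hat
```

On simulated Pareto data the pair (alpha_hat, scale_hat) recovers the true parameters closely.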

19.
Over the last 25 years, increasing attention has been given to the problem of analysing data arising from circular distributions. The most important circular distribution, introduced by von Mises (1918), takes the form:

f(\theta) = \frac{1}{2\pi I_0(k)} \exp\{k \cos(\theta - u_0)\}, \qquad 0 \le \theta < 2\pi,

where I_0(k) is a modified Bessel function, u_0 is the mean direction and k is the concentration parameter of the distribution. Watson & Williams (1956) laid the foundation of analysis of variance type techniques for the two-dimensional case of circular data using the von Mises distribution. Stephens (1962a, b, 1969, 1972), Upton (1974) and Stephens (1982) made further improvements to Watson & Williams' work. In this paper the authors will discuss the pitfalls of the methods adopted by Stephens (1982) and present a unified analysis of variance type approach for circular data.
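The von Mises density above is available directly in SciPy; the snippet below evaluates it with illustrative parameters and confirms numerically that it integrates to one over the circle and peaks at the mean direction.

```python
import numpy as np
from scipy.stats import vonmises

# Von Mises density f(theta) = exp(k cos(theta - u0)) / (2 pi I0(k)),
# evaluated with mean direction u0 = 0 and concentration k = 2.
k, u0 = 2.0, 0.0
theta = np.linspace(-np.pi, np.pi, 2001)
density = vonmises.pdf(theta, k, loc=u0)

# Riemann-sum check that the density integrates to one over the circle.
dx = theta[1] - theta[0]
total = float(density.sum() * dx)
```

Larger k concentrates the mass around u0; k near 0 approaches the uniform distribution on the circle.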


20.
Quantile regression (QR) is becoming increasingly popular due to its relevance in many scientific investigations. There is a large body of work on linear and nonlinear QR models. Specifically, nonparametric estimation of the conditional quantiles has received particular attention, due to its model flexibility. However, nonparametric QR techniques are limited in the number of covariates. Dimension reduction offers a solution to this problem by considering low-dimensional smoothing without specifying any parametric or nonparametric regression relation. The existing dimension reduction techniques focus on the entire conditional distribution. We, on the other hand, turn our attention to dimension reduction techniques for conditional quantiles and introduce a new method for reducing the dimension of the predictor X. The novelty of this paper is threefold. We start by considering a single index quantile regression model, which assumes that the conditional quantile depends on X through a single linear combination of the predictors, then extend to a multi-index quantile regression model, and finally, generalize the proposed methodology to any statistical functional of the conditional distribution. The performance of the methodology is demonstrated through simulation examples and real data applications. Our results suggest that this method has a good finite sample performance and often outperforms the existing methods.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号