Similar Documents
20 similar documents found (search time: 24 ms)
1.
Multilevel models have been widely applied to analyze data sets that present some hierarchical structure. In this paper we propose a generalization of the normal multilevel models, named elliptical multilevel models. This proposal suggests the use of distributions in the elliptical class, which comprises all symmetric continuous distributions and includes the normal distribution as a particular case. Elliptical distributions may have lighter or heavier tails than the normal ones. When normal error models are confronted with outlying observations, heavy-tailed error models may be applied to accommodate such observations. In particular, we discuss aspects of the elliptical multilevel models such as maximum likelihood estimation and residual analysis for assessing the fit and the model assumptions. Finally, two motivating examples previously analyzed under normal multilevel models are reanalyzed under Student-t and power exponential multilevel models. Comparisons with the normal multilevel model are performed by using residual analysis.
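The heavy-tail mechanism invoked here can be illustrated with a short sketch (illustrative only, not the authors' estimation procedure): a Student-t log-density penalizes an outlying standardized residual far less than the normal log-density, which is why heavy-tailed error models can accommodate outliers without distorting the rest of the fit.

```python
import math

def normal_logpdf(x):
    # log density of the standard normal distribution
    return -0.5 * (x * x + math.log(2 * math.pi))

def student_t_logpdf(x, nu):
    # log density of the standard Student-t with nu degrees of freedom
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi))
    return c - (nu + 1) / 2 * math.log1p(x * x / nu)

# An outlying standardized residual is far less "surprising" under the
# heavy-tailed model, so it pulls the fitted parameters around less.
for x in (0.0, 2.0, 6.0):
    print(x, round(normal_logpdf(x), 2), round(student_t_logpdf(x, 4), 2))
```

At a residual of 6 standard deviations, the normal log-density is far more negative than the t with 4 degrees of freedom, so the normal fit must move to avoid that penalty while the t fit need not.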

2.
There are various techniques for dealing with incomplete data; some are highly computationally intensive and others less so, while all may be comparable in efficiency. Despite these developments, popular statistical software often analyzes only the complete-data subset. To demonstrate the efficiency and advantages of using all available data, we compared several approaches that are relatively simple but efficient alternatives to complete-case analysis for repeated measures data with missing values, under the assumption of a multivariate normal distribution of the data. We also assumed that the missing values occur in a monotonic pattern and completely at random. The incomplete-data procedure is shown to be more powerful than the complete-case procedure, generally when the within-subject correlation is large. Another principal finding is that even with small samples, for which various covariance models may be indistinguishable, the empirical size and power are sensitive to misspecified assumptions about the covariance structure. Overall, testing procedures that do not assume any particular covariance structure are more robust in keeping the empirical size at the nominal level than those assuming a special structure.
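A minimal sketch of the contrast between complete-case and available-data (incomplete-data) analysis under a monotone missing pattern; the data and helper functions below are hypothetical illustrations, not the paper's procedures:

```python
# Toy repeated-measures data with a monotone missing pattern
# (None marks a missing visit; once missing, all later visits are missing).
subjects = [
    [5.1, 5.8, 6.2],
    [4.9, 5.5, None],   # dropped out after visit 2
    [5.3, None, None],  # dropped out after visit 1
    [5.0, 5.6, 6.1],
]

def complete_case_mean(data, visit):
    # uses only subjects observed at every visit
    rows = [r for r in data if all(v is not None for v in r)]
    return sum(r[visit] for r in rows) / len(rows)

def available_case_mean(data, visit):
    # uses every observed value at the given visit
    vals = [r[visit] for r in data if r[visit] is not None]
    return sum(vals) / len(vals)

print(complete_case_mean(subjects, 0))   # discards 2 of 4 subjects
print(available_case_mean(subjects, 0))  # uses all 4 observed values
```

The available-data estimate uses twice as many observations at the first visit, which is the source of the power gain the abstract reports.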

3.
Investigations of multivariate populations are common in applied research, and the two-way crossed factorial design is frequently used at the exploratory phase of industrial applications. When assumptions such as multivariate normality and covariance homogeneity are violated, the conventional wisdom is to resort to nonparametric tests for hypothesis testing. In this paper we compare the performances, and in particular the power, of some nonparametric and semi-parametric methods developed in recent years. Specifically, we examine resampling methods and robust versions of classical multivariate analysis of variance (MANOVA) tests. In a simulation study, we generate data sets with different configurations of factor effects, number of replicates, number of response variables under the null hypothesis, and number of response variables under the alternative hypothesis. The objective is to provide practical advice and guidance to practitioners regarding the sensitivity of the tests in the various configurations, the tradeoff between power and type I error, the strategic impact of increasing the number of response variables, and the favourable performance of one test when the alternative is sparse. A real case study from an industrial engineering experiment in thermoformed packaging production is used to compare and illustrate the application of the various methods.
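A toy resampling test in the spirit of the methods compared here (a generic permutation test on a multivariate location statistic, not any of the paper's specific procedures) requires neither normality nor covariance homogeneity:

```python
import random

# Two groups of bivariate responses; permutation test on the squared
# difference of group mean vectors. Data values are illustrative.
g1 = [(5.1, 3.4), (4.8, 3.1), (5.4, 3.6), (5.0, 3.2)]
g2 = [(6.0, 2.8), (6.3, 2.5), (5.9, 2.9), (6.1, 2.6)]

def stat(a, b):
    # squared Euclidean distance between the two group mean vectors
    d = [sum(x[j] for x in a) / len(a) - sum(x[j] for x in b) / len(b)
         for j in range(2)]
    return sum(v * v for v in d)

obs = stat(g1, g2)
pooled = g1 + g2
rng = random.Random(0)
count = 0
n_perm = 2000
for _ in range(n_perm):
    rng.shuffle(pooled)  # relabel group membership at random
    if stat(pooled[:4], pooled[4:]) >= obs:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(round(p_value, 3))
```

With clearly separated groups the permutation p-value is small, and its validity rests only on exchangeability under the null.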

4.
Penalized likelihood inference in extreme value analyses
Models for extreme values are usually based on detailed asymptotic arguments, for which strong ergodic assumptions such as stationarity, or prescribed perturbations from stationarity, are required. In most applications of extreme value modelling such assumptions are not satisfied, but the type of departure from stationarity is either unknown or complex, making asymptotic calculations infeasible. This has led to various approaches in which standard extreme value models are used as building blocks for the conditional or local behaviour of processes, with more general statistical techniques being used at the modelling stage to handle the non-stationarity. This paper presents another approach in this direction, based on penalized likelihood. The approach has several advantages: the method has a simple interpretation; computations for estimation are relatively straightforward using standard algorithms; and a simple reinterpretation of the model enables broader inferences, such as confidence intervals, to be obtained using MCMC methodology. Methodological details together with applications to both athletics and environmental data are given.
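A sketch of the penalized-likelihood idea under assumed details (a Gumbel likelihood with a time-varying location path and a second-difference roughness penalty; the paper's exact model and penalty are not specified in this abstract): the penalty trades fidelity to the data against smoothness of the non-stationary component.

```python
import math

def gumbel_nll(x, mu, sigma=1.0):
    # negative log-likelihood of observations x under Gumbel(mu_t, sigma)
    nll = 0.0
    for xt, mt in zip(x, mu):
        z = (xt - mt) / sigma
        nll += math.log(sigma) + z + math.exp(-z)
    return nll

def penalized_nll(x, mu, lam):
    # add a second-difference roughness penalty on the location path mu_t
    rough = sum((mu[t + 1] - 2 * mu[t] + mu[t - 1]) ** 2
                for t in range(1, len(mu) - 1))
    return gumbel_nll(x, mu) + lam * rough

x = [3.1, 3.4, 3.0, 3.6, 3.3]          # annual maxima, illustrative
smooth = [3.0, 3.1, 3.2, 3.3, 3.4]     # smooth trend in location
wiggly = [3.1, 3.4, 3.0, 3.6, 3.3]     # interpolates the data exactly
print(penalized_nll(x, smooth, lam=10.0) < penalized_nll(x, wiggly, lam=10.0))
```

With a nontrivial penalty weight, the smooth location path wins even though the wiggly one fits the data perfectly, which is the essence of penalized-likelihood estimation of non-stationary extremes.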

5.
Multi-stage time-evolving models are common statistical models for biological systems, especially insect populations. In stage-duration distribution models, parameter estimation typically relies on the Laplace transform method. This method involves assumptions such as known constant shapes, known constant rates, or the same overall hazard rate for all stages, which are strong and restrictive. The main aim of this paper is to weaken these assumptions by using a Bayesian approach. In particular, a Metropolis-Hastings algorithm based on deterministic transformations is used to estimate parameters. We use two models: one with no hazard rates, and one with stage-wise constant hazard rates. The methods are validated in simulation studies followed by a case study of cattle parasites. The results show that the proposed methods estimate the parameters comparably well relative to the Laplace transform methods, without the restrictive assumptions.
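For readers unfamiliar with the sampler family, here is a generic random-walk Metropolis-Hastings sketch (not the deterministic-transformation variant the paper develops), targeting a Gamma density as a hypothetical stand-in posterior for a stage hazard rate:

```python
import math, random

def log_target(r):
    # unnormalized Gamma(shape=3, rate=2) log-density, an assumed
    # stand-in posterior for a stage hazard rate
    if r <= 0:
        return -math.inf
    return 2 * math.log(r) - 2 * r

rng = random.Random(42)
r, samples = 1.0, []
for i in range(20000):
    prop = r + rng.gauss(0, 0.8)          # symmetric random-walk proposal
    accept_prob = math.exp(min(0.0, log_target(prop) - log_target(r)))
    if rng.random() < accept_prob:
        r = prop                           # accept the move
    if i >= 2000:                          # discard burn-in
        samples.append(r)
mean = sum(samples) / len(samples)
print(round(mean, 2))                      # Gamma(3, 2) has mean 3/2
```

The chain's sample mean should settle near the true posterior mean of 1.5; swapping in a hazard-rate likelihood for `log_target` gives the generic Bayesian alternative to Laplace-transform estimation.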

6.
We consider the role of global robustness measures in Bayes linear analysis. We suggest two such measures, one for expectation comparisons and one for variance comparisons. Geometric interpretations of the measures are presented. The approach is illustrated by considering the robustness of certain multiplicative models to assumptions of independence, with particular application to a problem arising in an asset management model for water resources.

7.
The need to use rigorous, transparent, clearly interpretable, and scientifically justified methodology for preventing and dealing with missing data in clinical trials has been a focus of much attention from regulators, practitioners, and academicians in recent years. New guidelines and recommendations emphasize the importance of minimizing the amount of missing data and of carefully selecting primary analysis methods on the basis of assumptions about the missingness mechanism suitable for the study at hand, as well as the need to stress-test the results of the primary analysis under different sets of assumptions through a range of sensitivity analyses. Some methods that could be effectively used for dealing with missing data have not yet gained widespread usage, partly because of their underlying complexity and partly because of the lack of relatively easy approaches to their implementation. In this paper, we explore several strategies for handling missing data on the basis of pattern mixture models that embody clear and realistic clinical assumptions. Pattern mixture models provide a statistically reasonable yet transparent framework for translating clinical assumptions into statistical analyses. Implementation details for some specific strategies are provided in an Appendix (available online as Supporting Information), whereas the general principles of the approach discussed in this paper can be used to implement various other analyses with different sets of assumptions regarding missing data. Copyright © 2013 John Wiley & Sons, Ltd.
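A toy delta-adjustment sketch of the pattern-mixture idea (the data, helper function, and delta values are illustrative, not the paper's specific strategies): dropouts are imputed at the completers' mean shifted by a clinically chosen delta, and varying delta traces out how the estimate depends on the missingness assumption.

```python
# Completers' outcomes; two subjects dropped out. Delta = 0 recovers a
# MAR-style analysis; negative delta assumes dropouts fared worse.
observed = [2.1, 1.8, 2.4, 2.0]
n_dropouts = 2

def pattern_mixture_mean(observed, n_dropouts, delta):
    # impute each dropout at the completers' mean plus delta,
    # then average over the full sample
    mar_imputation = sum(observed) / len(observed)
    total = sum(observed) + n_dropouts * (mar_imputation + delta)
    return total / (len(observed) + n_dropouts)

for delta in (0.0, -0.5, -1.0):
    print(delta, round(pattern_mixture_mean(observed, n_dropouts, delta), 3))
```

Reporting the estimate across a clinically plausible range of delta is exactly the kind of transparent sensitivity analysis the guidelines call for.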

8.
Hierarchical models are popular in many fields of applied statistics, including small area estimation. One well-known model in this field is the Fay–Herriot model, in which the unobservable parameters are assumed to be Gaussian. In hierarchical models, assumptions about unobservable quantities are difficult to check. For a special case of the Fay–Herriot model, Sinharay and Stern [2003. Posterior predictive model checking in hierarchical models. J. Statist. Plann. Inference 111, 209–221] showed that violations of the assumptions about the random effects are difficult to detect using posterior predictive checks. In the present paper we consider two extensions of the Fay–Herriot model in which the random effects are assumed to follow either an exponential power (EP) distribution or a skewed EP distribution. We aim to explore the robustness of the Fay–Herriot model for the estimation of individual area means as well as of the empirical distribution function of their 'ensemble'. Our findings, based on a simulation experiment, are largely consistent with those of Sinharay and Stern as far as the efficient estimation of individual small area parameters is concerned. However, when estimating the empirical distribution function of the 'ensemble' of small area parameters, results are more sensitive to failures of the distributional assumptions.
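For reference, the empirical-Bayes predictor under the standard Gaussian Fay–Herriot model (the baseline whose robustness the paper probes) is a precision-weighted compromise between the direct survey estimate and a synthetic regression prediction; the numbers below are illustrative:

```python
def fay_herriot_eb(direct, synthetic, A, D):
    # A = model (random-effect) variance, D = sampling variance of the
    # direct estimate; gamma is the shrinkage weight on the direct value
    gamma = A / (A + D)
    return gamma * direct + (1 - gamma) * synthetic

# A noisy area (large D) is shrunk hard toward the synthetic value;
# a precisely measured area keeps most of its direct estimate.
print(fay_herriot_eb(direct=10.0, synthetic=8.0, A=1.0, D=4.0))
print(fay_herriot_eb(direct=10.0, synthetic=8.0, A=1.0, D=0.25))
```

The shrinkage weight is where the Gaussian assumption on the random effects enters, which is why misspecifying that distribution can distort the ensemble of area estimates.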

9.
The two-way two-level crossed factorial design is commonly used by practitioners at the exploratory phase of industrial experiments. The F-test in the usual linear model for analysis of variance (ANOVA) is a key instrument for assessing the impact of each factor and of their interactions on the response variable. However, if assumptions such as normally distributed and homoscedastic errors are violated, the conventional wisdom is to resort to nonparametric tests. Nonparametric methods, rank-based as well as permutation-based, have been the subject of recent investigations aimed at making them effective for testing the hypotheses of interest and at improving their performance in small-sample situations. In this study, we assess the performance of several nonparametric methods and, more importantly, compare their power. Specifically, we examine three permutation methods (Constrained Synchronized Permutations, Unconstrained Synchronized Permutations and the Wald-Type Permutation Test), a rank-based method (the Aligned Rank Transform) and a parametric method (the ANOVA-Type Test). In the simulations, we generate datasets with different configurations of the error distribution, variance, factor effects and number of replicates. The objective is to provide practical advice and guidance to practitioners regarding the sensitivity of the tests in the various configurations, the conditions under which some tests cannot be used, the tradeoff between power and type I error, and the bias in the power for one main factor due to the presence of an effect of the other factor. A dataset from an industrial engineering experiment on thermoformed packaging production illustrates the application of the various methods of analysis, taking into account the power of the test suggested by the objective of the experiment.

10.
If a model is fitted to empirical data, bias can arise from terms that are not incorporated in the model assumptions. As a consequence, the commonly used optimality criteria based on the generalized variance of the estimator of the model parameters may not lead to efficient designs for the statistical analysis. In this note some general aspects of all-bias designs, introduced in this context by Box and Draper (1959), are presented. Using an interesting correspondence between the points of all-bias designs and the knots of quadrature formulas, we establish sufficient conditions under which a given design is an all-bias design. The results are illustrated in the special case of spline regression models. In particular, our results generalize recent findings of Woods and Lewis (2006).

11.
The present work focuses on extensions of the posterior predictive p-value (ppp-value) to models with hierarchical structure, designed for testing assumptions made about underlying processes. The ppp-values are popular as tools for model criticism, yet their lack of a common interpretation limits their practical use. We discuss different extensions of ppp-values to hierarchical models, allowing for discrepancy measures that can be used for checking properties of the model at all stages. Through analytical derivations and simulation studies on simple models, we show that, like the standard ppp-values, these extensions are typically far from uniformly distributed under the model assumptions and can give poor power in a hypothesis testing framework. We propose a calibration of the p-values, making the resulting calibrated p-values uniformly distributed under the model conditions. Illustrations are given through a real example of multinomial regression applied to the age distributions of fish.
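A minimal ppp-value sketch under assumed details (normal data with known unit variance, a flat prior on the mean, and the sample maximum as discrepancy); the paper's hierarchical extensions and calibration go well beyond this:

```python
import random

# Posterior predictive p-value: the fraction of replicated datasets
# whose discrepancy T (here the sample maximum) reaches the observed
# one, averaging over posterior draws of the mean. Data are illustrative.
rng = random.Random(7)
y = [0.3, -0.5, 1.1, 0.2, -0.8, 0.6, 4.5, -0.1]  # contains one outlier
n = len(y)
ybar = sum(y) / n
t_obs = max(y)

exceed = 0
draws = 4000
for _ in range(draws):
    mu = rng.gauss(ybar, 1 / n ** 0.5)        # posterior draw (flat prior)
    rep = [rng.gauss(mu, 1) for _ in range(n)]
    if max(rep) >= t_obs:
        exceed += 1
ppp = exceed / draws
print(round(ppp, 3))  # a small value flags the outlier as a model failure
```

As the abstract notes, such p-values are generally not uniform under the model, which motivates the proposed calibration.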

12.
It is very well known that analyses for missing data depend on untestable assumptions, so in such settings sensitivity analyses are often sensible. One such class of analyses assesses the dependence of conclusions on an explicit missing value mechanism. Inevitably, there is an association between such dependence and the actual (but unknown) distribution of the missing data. Within a particular parametric framework for dropout, this paper presents an approach that reduces (but never removes) the impact of incorrect assumptions on the form of this association. It is shown how these models can be formulated and fitted relatively simply using hierarchical likelihood. The methods are applied directly to an example involving mastitis in dairy cattle, and an extensive simulation study is described to show their properties.

13.
A variety of statistical regression models have been proposed for the comparison of ROC curves for different markers across covariate groups. Pepe developed parametric models for the ROC curve that induce a semiparametric model for the marker distributions, relaxing the strong assumptions of fully parametric models. We investigate the analysis of the power ROC curve using these ROC-GLM models compared to the parametric exponential model and the estimating equations derived from the usual partial likelihood methods in time-to-event analyses. In exploring robustness to violations of distributional assumptions, we find that the ROC-GLM provides an extra measure of robustness.
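As background to ROC modelling (the nonparametric baseline, not the ROC-GLM itself), the empirical area under the ROC curve can be computed from the Mann-Whitney identity: the probability that a randomly chosen diseased marker value exceeds a randomly chosen healthy one. Marker values below are hypothetical.

```python
# Illustrative marker values for the two groups
healthy = [0.2, 0.5, 0.9, 1.1, 1.4]
diseased = [1.0, 1.6, 1.9, 2.3]

def empirical_auc(neg, pos):
    # count pairs where the diseased value exceeds the healthy one,
    # with ties counted as half a win (Mann-Whitney statistic)
    wins = sum((x < y) + 0.5 * (x == y) for x in neg for y in pos)
    return wins / (len(neg) * len(pos))

print(empirical_auc(healthy, diseased))  # 18 of 20 pairs ordered correctly
```

Regression models such as the ROC-GLM then let this discriminatory ability depend smoothly on covariates rather than being estimated separately per group.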

14.
Several models are available for longitudinal data with nonrandom missingness; the selection model of Diggle and Kenward is one of them. Many authors have noted that this model depends on modelling assumptions, such as the response distribution, that cannot be tested from the observed data, so a sensitivity analysis of the study's conclusions with respect to such assumptions is needed. The stochastic EM algorithm is proposed and developed to handle continuous longitudinal data with nonrandom intermittent missing values when the responses have a non-normal distribution. This is a step toward investigating the sensitivity of the parameter estimates to a change in the response distribution. The proposed technique is applied to real data from the International Breast Cancer Study Group.

15.
We examine three pattern-mixture models for making inference about parameters of the distribution of an outcome of interest Y that is to be measured at the end of a longitudinal study when this outcome is missing for some subjects. We show that these pattern-mixture models also have an interpretation as selection models. Because these models make unverifiable assumptions, we recommend that inference about the distribution of Y be repeated under a range of plausible assumptions. We argue that, of the three models considered, only one admits a parameterization that facilitates the examination of departures from the assumption of sequential ignorability. The three models are nonparametric in the sense that they do not impose restrictions on the class of observed data distributions. Owing to the curse of dimensionality, the assumptions that are encoded in these models are sufficient for identification but not for inference. We describe additional flexible and easily interpretable assumptions under which it is possible to construct estimators that are well behaved with moderate sample sizes. These assumptions define semiparametric models for the distribution of the observed data. We describe a class of estimators which, up to asymptotic equivalence, comprises all the consistent and asymptotically normal estimators of the parameters of interest under the postulated semiparametric models. We illustrate our methods with the analysis of data from a randomized clinical trial of contracepting women.

16.
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. There are therefore advantages in using methods based on the Poisson distribution or its extensions, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and nonparametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
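The overdispersion point can be made concrete: a negative binomial model with mean mu and dispersion parameter k has variance mu + mu^2/k, which exceeds the Poisson's variance of mu, and treatment effects are read as rate ratios on the mean. The numbers below are illustrative, not trial results.

```python
def nb_variance(mu, k):
    # variance of a negative binomial with mean mu and dispersion k;
    # as k grows without bound this collapses to the Poisson variance mu
    return mu + mu * mu / k

# Hypothetical mean weekly incontinence-episode rates by arm
mu_placebo, mu_active = 4.0, 2.5
rate_ratio = mu_active / mu_placebo  # 0.625, i.e. a 37.5% rate reduction
print(rate_ratio, nb_variance(mu_placebo, k=2.0))
```

The extra mu^2/k term is exactly what the random-effect (gamma frailty) construction mentioned in the abstract contributes.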

17.
Measuring the efficiency of public services: the limits of analysis
Policy makers are increasingly seeking to develop overall measures of the efficiency of public service organizations. To that end, 'off-the-shelf' statistical tools such as data envelopment analysis and stochastic frontier analysis have been advocated for measuring organizational efficiency. The analytical sophistication of such methods has reached an advanced stage of development. We discuss the context within which such models are deployed, their underlying assumptions and their usefulness for a regulator of public services. Four specific model-building issues are discussed: the weights that are attached to public service outputs; the specification of the statistical model; the treatment of environmental influences on performance; and the treatment of dynamic effects. The paper concludes with recommendations for policy makers and researchers on the development and use of efficiency measurement techniques.

18.
19.
This paper discusses methods for clustering a continuous covariate in a survival analysis model. The advantages of using a categorical covariate defined by discretizing a continuous covariate (via clustering) are (i) enhanced interpretability of the covariate's impact on survival and (ii) relaxation of model assumptions usually required for survival models, such as the proportional hazards assumption. Simulations and an example are provided to illustrate the methods.
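One simple way to discretize a continuous covariate via clustering (the abstract does not commit to a particular algorithm; this is an assumed illustration) is 1-D k-means, after which the resulting groups enter the survival model as a factor. The ages below are hypothetical.

```python
def kmeans_1d(values, centers, iters=20):
    # Lloyd's algorithm in one dimension: assign each value to its
    # nearest center, then move each center to its group mean
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

ages = [34, 36, 39, 41, 62, 64, 67, 70]
print(kmeans_1d(ages, centers=[40.0, 60.0]))  # two age clusters emerge
```

Each patient is then coded by cluster membership ("younger" vs "older" here), giving the interpretable categorical covariate the paper argues for.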

20.
The sensitivity of the power in analysis of variance to departures from the built-in assumptions, other than normality of errors, is discussed in Kanji (1975), who considered the general linear hypothesis model to obtain the power values.
Kanji (1976a) discussed a particular case of this situation, the fixed-effect two-way classification model. In this paper another particular case, the fixed-effect one-way classification, is discussed; the main purpose is to investigate whether it shows a different picture from the two-way classification, especially for unequal replication. The results so obtained are presented in Tables 1A, 1B and 1C. They indicate that the power value is greatly affected by inequality of error variances and unequal group sizes.
