Similar Articles
Found 20 similar articles (search time: 812 ms)
1.
2.
3.
Generalized cross‐validation (GCV) is frequently applied to select the bandwidth when kernel methods are used to estimate non‐parametric mixed‐effect models, in which non‐parametric mean functions model covariate effects and additive random effects account for overdispersion and correlation. The optimality of GCV in this setting, however, has not yet been explored. In this article, we construct a kernel estimator of the non‐parametric mean function. An equivalence between the kernel estimator and a weighted least‐squares‐type estimator is provided, and the optimality of the GCV‐based bandwidth is investigated. The theoretical derivations also show that kernel‐based and spline‐based GCV give very similar asymptotic results, which provides a solid basis for using kernel estimation in mixed‐effect models. Simulation studies are undertaken to investigate the empirical performance of GCV, and a real data example is analysed for illustration.
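The GCV criterion studied in the abstract can be illustrated in the simplest kernel-regression setting. A minimal sketch, assuming a Nadaraya-Watson smoother with a Gaussian kernel and independent data (the paper's mixed-effect setting adds random effects not modelled here):

```python
import numpy as np

def gcv_bandwidth(x, y, bandwidths):
    """Select a kernel bandwidth by generalized cross-validation (GCV).

    Illustrative sketch only: a Nadaraya-Watson smoother for independent
    data, not the mixed-effect estimator of the paper.
    """
    n = len(x)
    best_h, best_score = None, np.inf
    for h in bandwidths:
        # Gaussian kernel smoother matrix S, so that S @ y = fitted values
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        S = K / K.sum(axis=1, keepdims=True)
        resid = y - S @ y
        # GCV(h) = (RSS / n) / (1 - tr(S) / n)^2
        score = (resid @ resid / n) / (1 - np.trace(S) / n) ** 2
        if score < best_score:
            best_h, best_score = h, score
    return best_h, best_score
```

The denominator penalizes small bandwidths, whose smoother matrices have large trace (effective degrees of freedom), balancing fit against roughness.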

4.
We introduce two types of graphical log‐linear models: label‐ and level‐invariant models for triangle‐free graphs. These models generalise symmetry concepts in graphical log‐linear models and provide a tool with which to model symmetry in the discrete case. A label‐invariant model is category‐invariant and is preserved after permuting some of the vertices according to transformations that maintain the graph, whereas a level‐invariant model equates expected frequencies according to a given set of permutations. These new models can both be seen as instances of a new type of graphical log‐linear model termed the restricted graphical log‐linear model, or RGLL, in which equality restrictions on subsets of main effects and first‐order interactions are imposed. Their likelihood equations and graphical representation can be obtained from those derived for the RGLL models.

5.
The authors propose graphical and numerical methods for checking the adequacy of the logistic regression model for matched case‐control data. Their approach is based on the cumulative sum of residuals over the covariate or linear predictor. Under the assumed model, the cumulative residual process converges weakly to a centered Gaussian limit whose distribution can be approximated via computer simulation. The observed cumulative residual pattern can then be compared both visually and analytically to a certain number of simulated realizations of the approximate limiting process under the null hypothesis. The proposed techniques allow one to check the functional form of each covariate, the logistic link function as well as the overall model adequacy. The authors assess the performance of the proposed methods through simulation studies and illustrate them using data from a cardiovascular study.
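The comparison of an observed cumulative residual process against simulated realizations can be sketched with a simple Gaussian-multiplier scheme. This is a simplified illustration of the general idea, not the paper's matched case-control version, which additionally accounts for the variability of the estimated regression parameters:

```python
import numpy as np

def cumulative_residual_test(x, resid, n_sim=1000, seed=0):
    """Sup-statistic of the cumulative residual process over a covariate,
    with its null distribution approximated by Gaussian-multiplier
    realizations (simplified sketch; parameter-estimation effects are
    ignored here)."""
    order = np.argsort(x)
    e = resid[order]
    n = len(e)
    W = np.cumsum(e) / np.sqrt(n)            # observed cumulative process
    obs = np.abs(W).max()
    rng = np.random.default_rng(seed)
    sims = np.empty(n_sim)
    for b in range(n_sim):
        g = rng.standard_normal(n)           # multipliers perturb residuals
        sims[b] = np.abs(np.cumsum(e * g) / np.sqrt(n)).max()
    pvalue = (sims >= obs).mean()
    return obs, pvalue
```

A large observed sup-statistic relative to the simulated realizations signals a misspecified functional form for the covariate.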

6.
In survey sampling, policymaking regarding the allocation of resources to subgroups (called small areas) or the determination of subgroups with specific properties in a population should be based on reliable estimates. Information, however, is often collected at a different scale than that of these subgroups; hence, the estimation can only be obtained on finer scale data. Parametric mixed models are commonly used in small‐area estimation. The relationship between predictors and response, however, may not be linear in some real situations. Recently, small‐area estimation using a generalised linear mixed model (GLMM) with a penalised spline (P‐spline) regression model, for the fixed part of the model, has been proposed to analyse cross‐sectional responses, both normal and non‐normal. However, there are many situations in which the responses in small areas are serially dependent over time. Such a situation is exemplified by a data set on the annual number of visits to physicians by patients seeking treatment for asthma, in different areas of Manitoba, Canada. In cases where covariates that can possibly predict physician visits by asthma patients (e.g. age and genetic and environmental factors) may not have a linear relationship with the response, new models for analysing such data sets are required. In the current work, using both time‐series and cross‐sectional data methods, we propose P‐spline regression models for small‐area estimation under GLMMs. Our proposed model covers both normal and non‐normal responses. In particular, the empirical best predictors of small‐area parameters and their corresponding prediction intervals are studied with the maximum likelihood estimation approach being used to estimate the model parameters. The performance of the proposed approach is evaluated using some simulations and also by analysing two real data sets (precipitation and asthma).
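The P-spline component that the abstract embeds in a GLMM has a well-known mixed-model representation. A minimal sketch for a Gaussian response, using a hypothetical truncated-line basis with a ridge penalty on the knot coefficients (the paper's GLMM machinery and small-area structure are omitted):

```python
import numpy as np

def pspline_fit(x, y, n_knots=10, lam=1.0):
    """Penalized (P-)spline fit via a truncated-line basis:
    f(x) = b0 + b1*x + sum_k u_k * (x - kappa_k)_+, with a ridge penalty
    lam on the u_k -- the mixed-model form that lets P-splines slot into
    GLMMs. Gaussian response only; an illustrative sketch."""
    # interior quantile knots
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)
    C = np.column_stack([np.ones_like(x), x, Z])
    # penalize only the truncated-line coefficients u_k
    D = np.diag([0.0, 0.0] + [lam] * n_knots)
    coef = np.linalg.solve(C.T @ C + D, C.T @ y)
    return coef, C @ coef
```

Treating the u_k as random effects with variance sigma^2/lam is exactly what makes the P-spline compatible with mixed-model estimation.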

7.
Over the past years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time‐to‐event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time‐to‐event outcomes could be extended in a clinically meaningful and interpretable way to stress‐test the assumption of ignorable censoring. We focus on a ‘tipping point’ approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi‐parametric, and non‐parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time‐to‐event data and conclude that the method based on piecewise exponential imputation model of survival has some advantages over other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
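The core step of such a sensitivity analysis is imputing post-censoring event times under a hazard inflated by a sensitivity parameter. A hedged sketch assuming a single-rate exponential model (the piecewise-exponential model favoured in the article would replace the single rate by interval-specific rates):

```python
import numpy as np

def impute_censored_exponential(times, censored, rate, delta, rng):
    """Impute event times beyond censoring from an exponential model whose
    hazard is inflated by the sensitivity parameter delta; delta = 1
    recovers the ignorable-censoring assumption. Illustrative sketch of
    one imputation draw, not the full multiple-imputation procedure."""
    t = times.copy()
    n_cens = censored.sum()
    # memoryless property: residual time beyond censoring ~ Exp(delta * rate)
    t[censored] = times[censored] + rng.exponential(1.0 / (delta * rate), n_cens)
    return t
```

A tipping-point analysis would repeat this over a grid of delta values, re-estimating the treatment effect each time, until the favorable conclusion is nullified.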

8.
The analysis of time‐to‐event data typically makes the censoring at random assumption, i.e., that—conditional on covariates in the model—the distribution of event times is the same, whether they are observed or unobserved (i.e., right censored). When patients who remain in follow‐up stay on their assigned treatment, then analysis under this assumption broadly addresses the de jure, or “while on treatment strategy,” estimand. In such cases, we may well wish to explore the robustness of our inference to more pragmatic, de facto or “treatment policy strategy,” assumptions about the behaviour of patients post‐censoring. This is particularly the case when censoring occurs because patients change, or revert, to the usual (i.e., reference) standard of care. Recent work has shown how such questions can be addressed for trials with continuous outcome data and longitudinal follow‐up, using reference‐based multiple imputation. For example, patients in the active arm may have their missing data imputed assuming they reverted to the control (i.e., reference) intervention on withdrawal. Reference‐based imputation has two advantages: (a) it avoids the user specifying numerous parameters describing the distribution of patients' postwithdrawal data and (b) it is, to a good approximation, information anchored, so that the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. In this article, we build on recent work in the survival context, proposing a class of reference‐based assumptions appropriate for time‐to‐event data. We report a simulation study exploring the extent to which the multiple imputation estimator (using Rubin's variance formula) is information anchored in this setting and then illustrate the approach by reanalysing data from a randomized trial, which compared medical therapy with angioplasty for patients presenting with angina.
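The "revert to reference" idea can be sketched in miniature: censored subjects in the active arm have residual event times drawn from a model fitted to the control arm. This sketch assumes a constant reference hazard estimated by the exponential MLE; the article's class of assumptions is more general:

```python
import numpy as np

def jump_to_reference_impute(time, event, arm, rng):
    """Reference-based imputation sketch for survival data: censored
    subjects in the active arm (arm == 1) get residual event times drawn
    from an exponential distribution fitted to the control (reference)
    arm. Single imputation draw, constant-hazard assumption."""
    # exponential MLE of the control-arm hazard: events / total follow-up
    ref_hazard = event[arm == 0].sum() / time[arm == 0].sum()
    t = time.copy()
    todo = (arm == 1) & (event == 0)           # censored, active arm
    t[todo] = time[todo] + rng.exponential(1.0 / ref_hazard, todo.sum())
    return t
```

Repeating the draw M times and combining analyses with Rubin's rules yields the multiple-imputation estimator whose information anchoring the abstract investigates.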

9.
The authors present an improved ranked set two‐sample Mann‐Whitney‐Wilcoxon test for a location shift between samples from two distributions F and G. They define a function that measures the amount of information provided by each observation from the two samples, given the actual joint ranking of all the units in a set. This information function is used as a guide for improving the Pitman efficacy of the Mann‐Whitney‐Wilcoxon test. When the underlying distributions are symmetric, observations at their mode(s) must be quantified in order to gain efficiency. Analogous results are provided for asymmetric distributions.
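For reference, the plain two-sample Mann-Whitney-Wilcoxon statistic that the ranked-set version builds on counts concordant pairs across samples. The article's ranked-set modification is not reproduced here:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Two-sample Mann-Whitney-Wilcoxon statistic
    U = #{(i, j): x_i < y_j}, with ties counted as 1/2.
    Large or small U relative to its null mean len(x)*len(y)/2
    indicates a location shift between F and G."""
    diff = y[None, :] - x[:, None]
    return np.sum(diff > 0) + 0.5 * np.sum(diff == 0)
```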

10.
The author extends to the Bayesian nonparametric context the multinomial goodness‐of‐fit tests due to Cressie & Read (1984). Her approach is suitable when the model of interest is a discrete distribution. She provides an explicit form for the tests, which are based on power‐divergence measures between a prior Dirichlet process that is highly concentrated around the model of interest and the corresponding posterior Dirichlet process. In addition to providing interesting special cases and useful approximations, she discusses calibration and the choice of test through examples.
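The Cressie-Read power-divergence family underlying these tests is easy to state for two probability vectors; the article applies such measures to prior and posterior Dirichlet processes, which is not shown here:

```python
import numpy as np

def power_divergence(p, q, lam):
    """Cressie-Read power divergence between probability vectors p and q:
    2 / (lam * (lam + 1)) * sum p * ((p / q)**lam - 1).
    lam = 1 gives the Pearson chi-square form; lam -> 0 the
    likelihood-ratio form (handled as a limit)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if abs(lam) < 1e-12:
        return 2.0 * np.sum(p * np.log(p / q))   # limiting LR form
    return 2.0 / (lam * (lam + 1.0)) * np.sum(p * ((p / q) ** lam - 1.0))
```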

11.
In recent years, many vaccines have been developed for the prevention of a variety of diseases. Although the primary objective of vaccination is to prevent disease, vaccination can also reduce the severity of disease in those individuals who develop breakthrough disease. Observations of apparent mitigation of breakthrough disease in vaccine recipients have been reported for a number of vaccine‐preventable diseases such as Herpes Zoster, Influenza, Rotavirus, and Pertussis. The burden‐of‐illness (BOI) score was developed to incorporate the incidence of disease as well as the severity and duration of disease. A severity‐of‐illness score S > 0 is assigned to individuals who develop disease and a score of 0 is assigned to uninfected individuals. In this article, we derive the vaccine efficacy statistic (which is the standard statistic for presenting efficacy outcomes in vaccine clinical trials) based on BOI scores, and we extend the method to adjust for baseline covariates. Also, we illustrate it with data from a clinical trial in which the efficacy of a Herpes Zoster vaccine was evaluated.
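In the usual VE = 1 − ratio form, a BOI-based efficacy statistic compares mean severity scores across arms, with uninfected subjects contributing S = 0. A minimal unadjusted sketch (the article's covariate adjustment is not shown):

```python
import numpy as np

def ve_boi(scores_vaccine, scores_placebo):
    """Burden-of-illness vaccine efficacy: one minus the ratio of mean
    severity scores between arms. Uninfected subjects carry score 0, so
    the ratio reflects incidence, severity, and duration jointly."""
    return 1.0 - np.mean(scores_vaccine) / np.mean(scores_placebo)
```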

12.
We propose a non‐parametric change‐point test for long‐range dependent data, which is based on the Wilcoxon two‐sample test. We derive the asymptotic distribution of the test statistic under the null hypothesis that no change occurred. In a simulation study, we compare the power of our test with the power of a test which is based on differences of means. The results of the simulation study show that in the case of Gaussian data, our test has only slightly smaller power than the ‘difference‐of‐means’ test. For heavy‐tailed data, our test outperforms the ‘difference‐of‐means’ test.
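The Wilcoxon-type change-point statistic can be sketched directly: for each candidate split point k, count concordant pairs across the split and centre at the no-change mean. The normalization appropriate under long-range dependence, which is the article's focus, is omitted:

```python
import numpy as np

def wilcoxon_changepoint_stat(x):
    """Wilcoxon-type change-point statistic: for each split k, the
    two-sample Wilcoxon count W_k = #{i <= k < j : x_i <= x_j}, centred
    at its no-change mean k*(n-k)/2, maximized over k. A large value
    suggests a location change somewhere in the series."""
    n = len(x)
    best = 0.0
    for k in range(1, n):
        w = np.sum(x[:k, None] <= x[None, k:])
        best = max(best, abs(w - k * (n - k) / 2.0))
    return best
```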

13.
Likelihood‐based inference with missing data is challenging because the observed log likelihood is often an (intractable) integration over the missing data distribution, which also depends on the unknown parameter. Approximating the integral by Monte Carlo sampling does not necessarily lead to a valid likelihood over the entire parameter space because the Monte Carlo samples are generated from a distribution with a fixed parameter value. We consider approximating the observed log likelihood based on importance sampling. In the proposed method, the dependency of the integral on the parameter is properly reflected through fractional weights. We discuss constructing a confidence interval using the profile likelihood ratio test. A Newton–Raphson algorithm is employed to find the interval end points. Two limited simulation studies show the advantage of the Wilks inference over the Wald inference in terms of power, parameter space conformity and computational efficiency. A real data example on salamander mating shows that our method also works well with high‐dimensional missing data.
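The fractional-weight idea can be shown on a toy latent-variable model: draws generated once at a fixed parameter value theta0 are reweighted by the density ratio f(z; theta)/f(z; theta0), so one Monte Carlo sample yields a valid approximate likelihood over the whole parameter space. A sketch under the assumption z ~ N(theta, 1) and y | z ~ N(z, 1) (a toy, not the paper's general setup):

```python
import numpy as np

def approx_loglik(theta, y, z_draws, theta0):
    """Importance-sampling approximation of the observed log likelihood
    l(theta) = sum_i log ∫ f(y_i | z) f(z; theta) dz, using draws
    z_draws ~ f(z; theta0). The ratio f(z; theta) / f(z; theta0) supplies
    the fractional weights carrying the theta-dependence."""
    def phi(u, m):                              # N(m, 1) density
        return np.exp(-0.5 * (u - m) ** 2) / np.sqrt(2 * np.pi)
    ll = 0.0
    for yi in y:
        w = phi(z_draws, theta) / phi(z_draws, theta0)   # fractional weights
        ll += np.log(np.mean(phi(yi, z_draws) * w))
    return ll
```

Because the same draws serve every theta, the approximate log likelihood is a smooth function of theta and can be profiled for Wilks-type intervals.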

14.
Statistical analyses of crossover clinical trials have mainly focused on assessing the treatment effect, carryover effect, and period effect. When a treatment‐by‐period interaction is plausible, it is important to test such interaction first before making inferences on differences among individual treatments. Considerably less attention has been paid to the treatment‐by‐period interaction, which has historically been aliased with the carryover effect in two‐period or three‐period designs. In this article, using data from a newly developed four‐period crossover design, we propose a statistical method to compare the effects of two active drugs with respect to two response variables. We study estimation and hypothesis testing considering the treatment‐by‐period interaction. Constrained least squares is used to estimate the treatment effect, period effect, and treatment‐by‐period interaction. For hypothesis testing, we extend a general multivariate method for analyzing the crossover design with multiple responses. Results from simulation studies have shown that this method performs very well. We also illustrate how to apply our method to a real data problem.

15.
The author discusses integer‐valued designs for wavelet estimation of nonparametric response curves in the possible presence of heteroscedastic noise using a modified wavelet version of the Gasser‐Müller kernel estimator or weighted least squares estimation. The designs are constructed using a minimax treatment and the simulated annealing algorithm. The author presents designs for three case studies in experiments for investigating Gompertz's theory on mortality rates, nitrite utilization in bush beans and the impact of crash helmets in motorcycle accidents.

16.
Goodness‐of‐fit tests are proposed for the skew‐normal law in arbitrary dimension. In the bivariate case, the proposed tests utilize the fact that the moment‐generating function of the skew‐normal variable is quite simple and satisfies a first‐order partial differential equation. This differential equation is estimated from the sample, and the test statistic is constructed as an L2‐type distance measure incorporating this estimate. An extension of the procedure to dimensions greater than two is suggested, and an effective bootstrap procedure is used to study the behaviour of the new method with real and simulated data.

17.
We propose an extension of graphical log‐linear models to allow for symmetry constraints on some interaction parameters that represent homologous factors. The conditional independence structure of such quasi‐symmetric (QS) graphical models is described by an undirected graph with coloured edges, in which a particular colour corresponds to a set of equality constraints on a set of parameters. Unlike standard QS models, the proposed models apply with contingency tables for which only some variables or sets of the variables have the same categories. We study the graphical properties of such models, including conditions for decomposition of model parameters and of maximum likelihood estimates.

18.
Higher‐order crossover designs have drawn considerable attention in clinical trials, because of their ability to test direct treatment effects in the presence of carry‐over effects. The important question, when applying higher‐order crossover designs in practice, is how to choose a design with both statistical and cost efficiencies from various alternatives. In this paper, we propose a general cost function and compare five statistically optimal or near‐optimal designs with this cost function for a two‐treatment study under different carry‐over models. Based on our study, to achieve both statistical and cost efficiencies, a four‐period, four‐sequence crossover design is generally recommended under the simple carry‐over or no carry‐over models, and a three‐period, two‐sequence crossover design is generally recommended under the steady‐state carry‐over models. Copyright © 2005 John Wiley & Sons, Ltd.

19.
20.
We propose an ℓ1‐penalized estimation procedure for high‐dimensional linear mixed‐effects models. The models are useful whenever there is a grouping structure among high‐dimensional observations, that is, for clustered data. We prove a consistency and an oracle optimality result, and we develop an algorithm with provable numerical convergence. Furthermore, we demonstrate the performance of the method on simulated data and on a real high‐dimensional data set.
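One ingredient of ℓ1-penalized mixed-model estimation can be sketched in isolation: with the random-effect covariance V treated as known, whitening by V^{-1/2} reduces the fixed-effects step to an ordinary lasso, solved here by proximal gradient descent (ISTA). This is a hedged sketch of that single step; the article's method also estimates the variance components and comes with a provably convergent algorithm:

```python
import numpy as np

def l1_mixed_lasso(X, y, V, lam, n_iter=500):
    """l1-penalized fixed effects in a linear mixed model with known
    marginal covariance V: whiten by V^{-1/2}, then minimize
    0.5 * ||yt - Xt b||^2 + lam * ||b||_1 by ISTA (proximal gradient)."""
    # V^{-1/2} via eigendecomposition (V symmetric positive definite)
    w, U = np.linalg.eigh(V)
    Vmh = U @ np.diag(w ** -0.5) @ U.T
    Xt, yt = Vmh @ X, Vmh @ y
    L = np.linalg.norm(Xt, 2) ** 2              # Lipschitz constant of gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = Xt.T @ (Xt @ beta - yt)
        z = beta - grad / L
        # soft-thresholding: proximal operator of the l1 penalty
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta
```

Alternating this step with variance-component updates gives the flavour of the full procedure for clustered high-dimensional data.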

