Similar articles (20 results)
1.
Generalised estimating equations (GEE) for regression problems with vector‐valued responses are examined. When the response vectors are of mixed type (e.g. continuous–binary response pairs), the GEE approach is a semiparametric alternative to full‐likelihood copula methods, and is closely related to Prentice & Zhao's mean‐covariance estimation equations approach. When the response vectors are of the same type (e.g. measurements on left and right eyes), the GEE approach can be viewed as a ‘plug‐in’ to existing methods, such as the vglm function from the state‐of‐the‐art VGAM package in R. In either scenario, the GEE approach offers asymptotically correct inferences on model parameters regardless of whether the working variance–covariance model is correctly or incorrectly specified. The finite‐sample performance of the method is assessed using simulation studies based on a burn injury dataset and a sorbinil eye trial dataset. The method is applied to data analysis examples using the same two datasets, as well as to a trivariate binary dataset on three plant species in the Hunua ranges of Auckland.
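
For orientation, the estimating equation being solved and the robust ‘sandwich’ covariance estimator behind the claim of asymptotically correct inference take the standard GEE form (our notation, not specific to the mixed‐response setting of this paper):

```latex
% Generic GEE score equation and sandwich variance (standard form)
\sum_{i=1}^{n} D_i^{\top} V_i^{-1}\{y_i - \mu_i(\beta)\} = 0,
\qquad D_i = \frac{\partial \mu_i}{\partial \beta^{\top}},
\qquad
\widehat{\operatorname{Var}}(\hat\beta) = B^{-1} M B^{-1},
\quad
B = \sum_{i} D_i^{\top} V_i^{-1} D_i,
\quad
M = \sum_{i} D_i^{\top} V_i^{-1} (y_i - \hat\mu_i)(y_i - \hat\mu_i)^{\top} V_i^{-1} D_i .
```

Here V_i is the working variance–covariance matrix; consistency of the estimator and validity of the sandwich variance do not require V_i to be correctly specified.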

2.
This article provides alternative circular smoothing methods for nonparametric estimation of periodic functions. By treating the data as ‘circular’, we avoid the ‘boundary issue’ that arises in nonparametric estimation when the data are treated as ‘linear’. By redefining the distance metric and the signed distance, we modify many estimators used in situations involving periodic patterns. From the perspective of nonparametric estimation of periodic functions, we present examples of nonparametric estimation of (1) a periodic function, (2) multiple periodic functions, (3) an evolving function, (4) a periodically varying-coefficient model and (5) a generalized linear model with a periodically varying coefficient. From the perspective of circular statistics, we provide alternative approaches to calculating the weighted average and evaluating ‘linear/circular–linear/circular’ association and regression. Simulation studies and an empirical study of an electricity price index illustrate and compare our methods with other methods in the literature.
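
As a sketch of the ‘circular’ treatment (illustrative only; these are not the estimators proposed in the paper, and all function and variable names below are our own), a Nadaraya–Watson smoother with a von Mises kernel weights observations by their angular distance, so points just above zero and just below the period are neighbours and the boundary issue disappears:

```python
import numpy as np

def circular_nw_smoother(theta_grid, theta_obs, y_obs, kappa=20.0):
    """Nadaraya-Watson smoother on the circle using a von Mises kernel.

    theta_grid, theta_obs: angles in radians (period 2*pi).
    kappa: concentration parameter (larger = less smoothing).
    Illustrative sketch only -- not the estimators proposed in the paper.
    """
    # The kernel weight depends only on cos(theta - theta_i), i.e. on the
    # circular distance, so the usual boundary problem of 'linear'
    # smoothing does not arise.
    w = np.exp(kappa * np.cos(theta_grid[:, None] - theta_obs[None, :]))
    return (w @ y_obs) / w.sum(axis=1)

# Toy periodic signal observed with noise.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
y = np.sin(theta) + rng.normal(scale=0.3, size=theta.size)
grid = np.linspace(0.0, 2.0 * np.pi, 100)
fhat = circular_nw_smoother(grid, theta, y)
```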

3.
A note on the correlation structure of transformed Gaussian random fields
Transformed Gaussian random fields can be used to model continuous time series and spatial data when the Gaussian assumption is not appropriate. The main features of these random fields are specified in a transformed scale, while for modelling and parameter interpretation it is useful to establish connections between these features and those of the random field in the original scale. This paper provides evidence that for many ‘normalizing’ transformations the correlation function of a transformed Gaussian random field is not very dependent on the transformation that is used. Hence many commonly used transformations of correlated data have little effect on the original correlation structure. The property is shown to hold for some kinds of transformed Gaussian random fields, and a statistical explanation based on the concept of parameter orthogonality is provided. The property is also illustrated using two spatial datasets and several ‘normalizing’ transformations. Some consequences of this property for modelling and inference are also discussed.
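
A standard closed-form case (not taken from the paper) illustrates the claim for the exponential transformation: if Z_1 and Z_2 are jointly Gaussian with common variance sigma^2 and correlation rho, then

```latex
\operatorname{Corr}\!\left(e^{Z_1}, e^{Z_2}\right)
  = \frac{e^{\rho\sigma^{2}} - 1}{e^{\sigma^{2}} - 1}
  \;\approx\; \rho \quad \text{for small } \sigma^{2},
```

so the correlation after transformation stays close to the Gaussian-scale correlation unless sigma^2 is large.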

4.
Models of infectious disease over contact networks offer a versatile means of capturing heterogeneity in populations during an epidemic. Highly connected individuals tend to be infected at a higher rate early during an outbreak than those with fewer connections. A powerful approach based on the probability generating function of the individual degree distribution exists for modelling the mean field dynamics of outbreaks in such a population. We develop the same idea in a stochastic context, by proposing a comprehensive model for 1‐week‐ahead incidence counts. Our focus is inferring contact network (and other epidemic) parameters for some common degree distributions, in the case when the network is non‐homogeneous ‘at random’. Our model is initially set within a susceptible–infectious–removed framework, then extended to the susceptible–infectious–removed–susceptible scenario, and we apply this methodology to influenza A data.
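
For reference, the probability-generating-function machinery referred to here is the standard configuration-model formulation (our notation; the paper's stochastic weekly-incidence model is built on top of this mean-field description):

```latex
G_0(x) = \sum_{k} p_k x^{k}, \qquad
G_1(x) = \frac{G_0'(x)}{G_0'(1)}, \qquad
R_0 = T\, G_1'(1) = T\,\frac{\langle k(k-1)\rangle}{\langle k \rangle},
```

where p_k is the degree distribution, G_1 generates the excess degree of a randomly reached neighbour, and T is the per-contact transmissibility.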

5.
In real‐data analysis, deciding the best subset of variables in regression models is an important problem. Akaike's information criterion (AIC) is often used for variable selection in many fields. When the sample size is not large, the AIC has a non‐negligible bias that detrimentally affects variable selection. The present paper considers a bias correction of the AIC for selecting variables in the generalized linear model (GLM). The GLM can express a number of statistical models by changing the distribution and the link function, such as the normal linear regression model, the logistic regression model and the probit model, which are commonly used in many applied fields. In the present study, we obtain a simple expression for a bias‐corrected AIC (corrected AIC, or CAIC) in GLMs. Furthermore, we provide R code based on our formula. A numerical study reveals that the CAIC performs better than the AIC for variable selection.
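
For intuition only (the paper's GLM correction is derived separately and is not reproduced here), recall the AIC and its classic small-sample correction for the normal linear regression model with p estimated parameters and sample size n:

```latex
\mathrm{AIC} = -2\,\ell(\hat\theta) + 2p, \qquad
\mathrm{AIC}_c = \mathrm{AIC} + \frac{2p(p+1)}{n-p-1}.
```

The extra term vanishes as n grows but matters when n/p is small, which is exactly the regime in which the uncorrected AIC tends to select overly large models.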

6.
The broken stick model is a model of the abundance of species in a habitat, and it has been widely extended. In this paper, we present results from exploratory data analysis of this model. To obtain some of the statistics, we formulate the broken stick model as a probability distribution, and we provide an expression for the cumulative distribution function, which is needed to obtain the exploratory results. The inequalities we present are useful in ecological studies that apply broken stick models. These results are also useful for testing the goodness of fit of the broken stick model as an alternative to the chi‐squared test, which has often been the main test used. Therefore, these results may be used in several alternative and complementary ways for testing the goodness of fit of the broken stick model.
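
Two standard facts about the broken stick model (our notation; the paper's distribution-function formulation may differ in form): breaking a unit ‘stick’ at S − 1 uniform points gives piece lengths whose marginal law is Beta(1, S − 1), and the expected relative abundance of the i‐th most abundant of S species is the classic MacArthur expression:

```latex
\Pr(\text{piece length} > x) = (1 - x)^{S-1}, \qquad
\mathbb{E}\bigl(p_{(i)}\bigr) = \frac{1}{S}\sum_{k=i}^{S}\frac{1}{k}, \quad i = 1,\dots,S .
```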

7.
Cui, Ruifei; Groot, Perry; Heskes, Tom. Statistics and Computing 2019, 29(2): 311–333.

We consider the problem of causal structure learning from data with missing values, assumed to be drawn from a Gaussian copula model. First, we extend the ‘Rank PC’ algorithm, designed for Gaussian copula models with purely continuous data (so-called nonparanormal models), to incomplete data by applying rank correlation to pairwise complete observations and replacing the sample size with an effective sample size in the conditional independence tests to account for the information loss from missing values. When the data are missing completely at random (MCAR), we provide an error bound on the accuracy of ‘Rank PC’ and show its high-dimensional consistency. However, when the data are missing at random (MAR), ‘Rank PC’ fails dramatically. Therefore, we propose a Gibbs sampling procedure to draw correlation matrix samples from mixed data that still works correctly under MAR. These samples are translated into an average correlation matrix and an effective sample size, resulting in the ‘Copula PC’ algorithm for incomplete data. A simulation study shows that (1) ‘Copula PC’ estimates a more accurate correlation matrix and causal structure than ‘Rank PC’ under MCAR and, even more so, under MAR, and (2) using the effective sample size significantly improves the performance of both ‘Rank PC’ and ‘Copula PC’. We illustrate our methods on two real-world datasets: riboflavin production data and chronic fatigue syndrome data.
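
For concreteness, the rank-correlation step and the conditional-independence test into which the effective sample size enters are of the standard nonparanormal/PC form (our notation; thresholds and details follow the paper):

```latex
\hat\rho_{ij} = \sin\!\Big(\frac{\pi}{2}\,\hat\tau_{ij}\Big), \qquad
z_{ij\mid S} = \tfrac{1}{2}\log\frac{1+\hat r_{ij\mid S}}{1-\hat r_{ij\mid S}}, \qquad
\text{reject } X_i \perp X_j \mid X_S \ \text{ if } \
\sqrt{\hat n_{\mathrm{eff}} - |S| - 3}\;\bigl|z_{ij\mid S}\bigr| > \Phi^{-1}(1-\alpha/2),
```

where the Kendall correlations are computed from pairwise complete observations and the effective sample size replaces the nominal one to reflect the information lost to missingness.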


8.
In proteomics, the identification of proteins from complex mixtures extracted from biological samples is an important problem. Among the experimental technologies, mass spectrometry (MS) is the most popular. Protein identification from MS data typically relies on a ‘two-step’ procedure: peptides are identified first, and proteins are then identified in a separate step. In this setup, the interdependence of peptides and proteins is neglected, resulting in relatively inaccurate protein identification. In this article, we propose a Markov chain Monte Carlo based Bayesian hierarchical model, the first of its kind in protein identification, which integrates the two steps and performs joint analysis of proteins and peptides using posterior probabilities. We relax the assumption of independence among proteins by placing clustering group priors on them, based on the assumption that proteins sharing the same biological pathway are likely to be present or absent together and are therefore correlated. Because the complete conditionals of the proposed joint model are tractable, we propose and implement a Gibbs sampling scheme for full posterior inference that provides estimates and statistical uncertainties for all relevant parameters. The model has better operating characteristics than two existing ‘one-step’ procedures across a range of simulation settings as well as on two well-studied datasets.

9.
We consider the situation that repair times of several identically structured technical systems are observed. As an example of such data we discuss the Boeing air conditioner data, consisting of successive failures of the air conditioning system of each member of a fleet of Boeing jet airplanes. The repairing process is assumed to be performed according to a minimal‐repair strategy. This reflects the idea that only those operations are accomplished that are absolutely necessary to restart the system after a failure. The ‘after‐repair‐state’ of the system is the same as it was shortly before the failure. Clearly, the observed repair times contain valuable information about the repair times of an identically structured system put into operation in the future. Thus, for statistical analysis and prediction, it is certainly favourable to take into account all repair times from each system. The resulting pooled sample is used to construct nonparametric prediction intervals for repair times of a future minimal‐repair system. To illustrate our results we apply them to the above‐mentioned data set. As expected, the maximum coverage probabilities of prediction intervals based on two samples exceed those based on one sample. We show that the relative gain for a two‐sample prediction over a one‐sample prediction can be substantial. One of the advantages of the present approach is that it allows nonparametric prediction intervals to be constructed directly. This provides a beneficial alternative to existing nonparametric methods for minimal‐repair systems that construct prediction intervals via the asymptotic distribution of quantile estimators. Moreover, the prediction intervals presented here are exact regardless of the sample size.
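
The kind of distribution-free result on which such intervals rest (a standard fact, not the paper's two-sample construction): if X_(1) <= ... <= X_(n) are the order statistics of an i.i.d. continuous sample and X_{n+1} is a future independent observation from the same distribution, then

```latex
\Pr\bigl(X_{(i)} < X_{n+1} \le X_{(j)}\bigr) = \frac{j - i}{n + 1}, \qquad 1 \le i < j \le n,
```

so the interval between two order statistics has exact coverage (j − i)/(n + 1) for any sample size; the paper extends this idea to pooled samples from several minimal‐repair systems.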

10.
In the past, many clinical trials have withdrawn subjects from the study when they prematurely stopped their randomised treatment and have therefore only collected ‘on‐treatment’ data. Thus, analyses addressing a treatment policy estimand have been restricted to imputing missing data under assumptions drawn from these data only. Many confirmatory trials are now continuing to collect data from subjects in a study even after they have prematurely discontinued study treatment, as this event is irrelevant for the purposes of a treatment policy estimand. However, despite efforts to keep subjects in a trial, some will still choose to withdraw. Recent publications for sensitivity analyses of recurrent event data have focused on the reference‐based imputation methods commonly applied to continuous outcomes, where imputation for the missing data for one treatment arm is based on the observed outcomes in another arm. However, the existence of data from subjects who have prematurely discontinued treatment but remained in the study has now raised the opportunity to use this ‘off‐treatment’ data to impute the missing data for subjects who withdraw, potentially allowing more plausible assumptions for the missing post‐study‐withdrawal data than reference‐based approaches. In this paper, we introduce a new imputation method for recurrent event data in which the missing post‐study‐withdrawal event rate for a particular subject is assumed to reflect that observed from subjects during the off‐treatment period. The method is illustrated in a trial in chronic obstructive pulmonary disease (COPD) where the primary endpoint was the rate of exacerbations, analysed using a negative binomial model.
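
A minimal sketch of the imputation idea, assuming one already has an off‐treatment event rate and a negative binomial dispersion from a fitted model (function and parameter names below are hypothetical, not taken from the paper, which imputes within a full model fit rather than from fixed plug‐in values):

```python
import numpy as np

def impute_post_withdrawal_counts(rate_off, dispersion, t_missing,
                                  n_imputations=100, seed=1):
    """Draw imputed exacerbation counts for an unobserved post-withdrawal period.

    rate_off   : assumed off-treatment event rate (events per unit time)
    dispersion : negative binomial shape parameter k
    t_missing  : length of the unobserved follow-up
    Hypothetical sketch of the idea only.
    """
    rng = np.random.default_rng(seed)
    mu = rate_off * t_missing            # expected number of events while unobserved
    p = dispersion / (dispersion + mu)   # NB success probability, so the mean is mu
    return rng.negative_binomial(dispersion, p, size=n_imputations)

draws = impute_post_withdrawal_counts(rate_off=1.2, dispersion=0.8, t_missing=0.5)
```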

11.
The analysis of clinical trials aiming to show symptomatic benefits is often complicated by the ethical requirement for rescue medication when the disease state of patients worsens. In type 2 diabetes trials, patients receive glucose‐lowering rescue medications continuously for the remaining trial duration if one of several markers of glycemic control exceeds pre‐specified thresholds. This may mask differences in glycemic values between treatment groups, because it will occur more frequently in less effective treatment groups. Traditionally, the last pre‐rescue medication value was carried forward and analyzed as the end‐of‐trial value. The deficits of such simplistic single imputation approaches are increasingly recognized by regulatory authorities and trialists. We discuss alternative approaches and evaluate them through a simulation study. When the estimand of interest is the effect attributable to the treatments initially assigned at randomization, our recommendation for estimation and hypothesis testing is to treat data after meeting rescue criteria as deterministically ‘missing’ at random, because initiation of rescue medication is determined by observed in‐trial values. An appropriate imputation of values after meeting rescue criteria is then possible either directly through multiple imputation or implicitly with a repeated measures model. Crucially, one needs to jointly impute or model all markers of glycemic control that can lead to the initiation of rescue medication. An alternative, for hypothesis testing only, is to use rank tests with outcomes from patients requiring rescue medication ranked worst and non‐rescued patients ranked according to their final visit values. However, an appropriate ranking of unobserved values may be controversial.

12.
Approximate Bayesian computation (ABC) is an approach to sampling from an approximate posterior distribution in the presence of a computationally intractable likelihood function. A common implementation is based on simulating model, parameter and dataset triples from the prior, and then accepting as samples from the approximate posterior those model and parameter pairs for which the corresponding dataset, or a summary of that dataset, is ‘close’ to the observed data. Closeness is typically determined through a distance measure and a kernel scale parameter. Appropriate choice of that parameter is important in producing a good quality approximation. This paper proposes diagnostic tools for the choice of the kernel scale parameter based on assessing the coverage property, which asserts that credible intervals have the correct coverage levels in appropriately designed simulation settings. We provide theoretical results on coverage for both model and parameter inference, and adapt these into diagnostics for the ABC context. We re‐analyse a study on human demographic history to determine whether the adopted posterior approximation was appropriate. Code implementing the proposed methodology is freely available in the R package abctools.
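
A minimal rejection-ABC sketch showing where the kernel scale parameter h enters (all names are ours; the paper's diagnostics then check whether credible intervals built from the accepted draws attain their nominal coverage):

```python
import numpy as np

def abc_rejection(obs_summary, prior_sampler, simulator, summary, h,
                  n_draws=10_000, seed=0):
    """Basic rejection ABC: keep parameter draws whose simulated summary lies
    within distance h (the kernel scale parameter) of the observed summary.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        y_sim = simulator(theta, rng)
        if np.linalg.norm(summary(y_sim) - obs_summary) <= h:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer a normal mean, summarising each dataset by its sample mean.
y_obs = np.random.default_rng(42).normal(loc=1.0, size=50)
posterior_draws = abc_rejection(
    obs_summary=y_obs.mean(),
    prior_sampler=lambda rng: rng.normal(0.0, 5.0),
    simulator=lambda theta, rng: rng.normal(theta, 1.0, size=50),
    summary=lambda y: y.mean(),
    h=0.1,
)
```

A too-large h inflates the accepted set and distorts the posterior, while a too-small h leaves few accepted draws; the coverage diagnostic gives a principled way to judge the chosen value.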

13.
Time series within fields such as finance and economics are often modelled using long memory processes. Alternative studies on the same data can suggest that series may actually contain a ‘changepoint’ (a point within the time series where the data generating process has changed). These models have been shown to have elements of similarity, such as within their spectrum. Without prior knowledge, this leads to an ambiguity between the two models, meaning it is difficult to assess which is most appropriate. We demonstrate that considering this problem in a time-varying environment using the time-varying spectrum removes this ambiguity. Using the wavelet spectrum, we then use a classification approach to determine the most appropriate model (long memory or changepoint). Simulation results are presented across a number of models, followed by an application to stock cross-correlations and US inflation. The results indicate that the proposed classification outperforms an existing hypothesis testing approach on a number of models and performs comparably across the others.
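
The spectral similarity referred to here is that both a stationary long-memory process and a short-memory series with a level shift concentrate power at low frequencies; for reference, the ARFIMA(0, d, 0) spectral density (standard form, our notation) is

```latex
f(\lambda) = \frac{\sigma^2}{2\pi}\,\bigl(2\sin(\lambda/2)\bigr)^{-2d}
\;\sim\; \frac{\sigma^2}{2\pi}\,\lambda^{-2d}
\quad \text{as } \lambda \to 0^{+}, \qquad 0 < d < \tfrac12 ,
```

which diverges at frequency zero in much the same way as the pseudo-spectrum induced by a changepoint.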

14.
15.
The development of models and methods for cure rate estimation has recently burgeoned into an important subfield of survival analysis. Much of the literature focuses on the standard mixture model. Recently, process-based models have been suggested. We focus on several models based on first passage times of Wiener processes. Whitmore and others have studied these models in a variety of contexts. Lee and Whitmore (Stat Sci 21(4):501–513, 2006) give a comprehensive review of a variety of first hitting time models and briefly discuss their potential as cure rate models. In this paper, we study the Wiener process with negative drift as a possible cure rate model, but the resulting defective inverse Gaussian model is found to provide a poor fit in some cases. Several possible modifications are then suggested which improve on the defective inverse Gaussian. These modifications include: the inverse Gaussian cure rate mixture model; a mixture of two inverse Gaussian models; incorporation of heterogeneity in the drift parameter; and the addition of a second absorbing barrier to the Wiener process, representing an immunity threshold. This class of process-based models is a useful alternative to the standard mixture model, providing an improved fit for many of the datasets that we have studied. Implementation of this class of models is facilitated using expectation-maximization (EM) algorithms and variants thereof, including the gradient EM algorithm. Parameter estimates for each of these EM algorithms are given, and the proposed models are applied to both real and simulated data, where they perform well.
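
The defective inverse Gaussian arises from a standard first-passage fact (our notation, stated here for intuition): for a Wiener process with drift mu < 0 and variance sigma^2 started at zero, the probability of ever crossing a threshold a > 0 is strictly less than one,

```latex
\Pr(T_a < \infty) = \exp\!\Big(\frac{2\mu a}{\sigma^{2}}\Big) = e^{-2|\mu| a/\sigma^{2}},
\qquad
\pi_{\text{cure}} = 1 - e^{-2|\mu| a/\sigma^{2}},
```

so the ‘cured’ fraction corresponds to sample paths that never reach the threshold; the modifications listed above address the cases where this single defective distribution fits poorly.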

16.
The two-parameter weighted Lindley distribution is useful for modeling survival data, but its maximum likelihood estimators (MLEs) are biased in finite samples. This motivates us to construct nearly unbiased estimators for the unknown parameters. We adopt a "corrective" approach to derive modified MLEs that are bias-free to second order. We also consider an alternative bias-correction mechanism based on Efron's bootstrap resampling. Monte Carlo simulations are conducted to compare the performance of the proposed estimators with two previous methods in the literature. The numerical evidence shows that the bias-corrected estimators are extremely accurate even for very small sample sizes and are superior to the previous estimators in terms of bias and root mean squared error. Finally, applications to two real datasets are presented for illustrative purposes.
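
The bootstrap alternative mentioned here is the usual Efron-type correction (sketched in our notation): with B resamples,

```latex
\widehat{\mathrm{bias}} = \frac{1}{B}\sum_{b=1}^{B}\hat\theta^{*}_{(b)} - \hat\theta,
\qquad
\tilde\theta_{\mathrm{boot}} = \hat\theta - \widehat{\mathrm{bias}}
  = 2\hat\theta - \frac{1}{B}\sum_{b=1}^{B}\hat\theta^{*}_{(b)},
```

where each bootstrap estimate is the MLE recomputed on a resampled dataset; the "corrective" approach instead subtracts an analytical second-order bias term.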

17.
Finite mixtures of multivariate skew t (MST) distributions have proven to be useful in modelling heterogeneous data with asymmetric and heavy tail behaviour. Recently, they have been exploited as an effective tool for modelling flow cytometric data. A number of algorithms for the computation of the maximum likelihood (ML) estimates for the model parameters of mixtures of MST distributions have been put forward in recent years. These implementations use various characterizations of the MST distribution, which are similar but not identical. While exact implementation of the expectation-maximization (EM) algorithm can be achieved for ‘restricted’ characterizations of the component skew t-distributions, Monte Carlo (MC) methods have been used to fit the ‘unrestricted’ models. In this paper, we review several recent fitting algorithms for finite mixtures of multivariate skew t-distributions, at the same time clarifying some of the connections between the various existing proposals. In particular, recent results have shown that the EM algorithm can be implemented exactly for faster computation of ML estimates for mixtures with unrestricted MST components. The gain in computational time is effected by noting that the semi-infinite integrals on the E-step of the EM algorithm can be put in the form of moments of the truncated multivariate non-central t-distribution, similar to the restricted case, which subsequently can be expressed in terms of the non-truncated form of the central t-distribution function for which fast algorithms are available. We present comparisons to illustrate the relative performance of the restricted and unrestricted models, and demonstrate the usefulness of the recently proposed methodology for the unrestricted MST mixture, by some applications to three real datasets.

18.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate ‘sieve’ and applying the general theory of posterior contraction rates. We apply this general result on several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B‐spline basis expansion established in this paper allows a canonical choice of prior on coefficients in a random series and allows a simple computational approach without using Markov chain Monte Carlo methods. A simulation study is conducted to show that the accuracy of the Bayesian estimators based on the random series prior and the Gaussian process prior are comparable. We apply the method on Tecator data using functional regression models.
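
The prior being described has the generic finite-random-series form (our notation; the specific basis and hyperpriors vary by application):

```latex
f(x) = \sum_{j=1}^{N} \theta_j\,\psi_j(x), \qquad
N \sim \pi_N, \qquad
\theta_1,\dots,\theta_N \mid N \ \sim\ \text{i.i.d. } \pi_\theta ,
```

where the psi_j form a fixed basis (e.g. B‐splines), pi_N is a prior on the number of terms and pi_theta a prior on the coefficients; adaptation over smoothness levels comes from letting the posterior choose N.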

19.
A recent analysis of R&D productivity suggests that there are grounds for ‘cautious optimism’ that the industry ‘turned the corner’ in 2008 and is ‘on the comeback trail’. We believe that this analysis is flawed and most probably wrong. We present an alternative analysis of the same data to suggest that the industry is not yet ‘out of the woods’ and that many of the systemic issues affecting pharmaceutical R&D productivity are still being resolved.

20.
In this ‘Big Data’ era, statisticians inevitably encounter data generated from various disciplines. In particular, advances in bio‐technology have enabled scientists to produce enormous datasets in various biological experiments. In the last two decades, we have seen high‐throughput microarray data resulting from various genomic studies. More recently, next generation sequencing (NGS) technology has been playing an important role in the study of genomic features, resulting in vast amounts of NGS data. One frequent application of NGS technology is the study of DNA copy number variants (CNVs). The resulting NGS read count data are then used by researchers to formulate scientific approaches to accurately detect CNVs. Computational and statistical approaches to the detection of CNVs using NGS data are, however, very limited at present. In this review paper, we focus on read‐depth analysis in CNV detection and give a brief summary of the statistical analysis methods currently used in searching for CNVs using NGS data. In addition, based on the review, we discuss the challenges we face and future research directions. The ultimate goal of this review paper is to give a timely exposition of the surveyed statistical methods to researchers in related fields.
