Similar Articles
20 similar articles found.
1.
2.
Statistical procedures for the detection of a change in the dependence structure of a series of multivariate observations are studied in this work. The test statistics that are proposed are $L_1$, $L_2$, and $L_\infty$ distances computed from vectors of differences of Kendall's tau; two multivariate extensions of Kendall's measure of association are used. Since the distributions of these statistics under the null hypothesis of no change depend on the unknown underlying copula of the vectors, a procedure based on the multiplier central limit theorem is used for the computation of p-values; the method is shown to be valid both asymptotically and for moderate sample sizes. Alternative versions of the tests that take into account possible breakpoints in the marginal distributions are also investigated. Monte Carlo simulations show that the tests are powerful under many change-point scenarios. In addition, two estimators of the time of change are proposed and their efficiency is carefully studied. The methodologies are illustrated on simulated series from the Canadian Regional Climate Model. The Canadian Journal of Statistics 41: 65–82; 2013 © 2012 Statistical Society of Canada
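The following is a minimal illustrative sketch, not the authors' implementation: it computes an $L_\infty$-type change-point statistic from differences of pairwise Kendall's tau between the two segments defined by each candidate split, and approximates the p-value by permuting the time order of the observations instead of the multiplier central limit theorem used in the paper. The function names, CUSUM-style weighting, and minimum segment length are illustrative assumptions.

```python
# A rough sketch (assumed names and tuning constants); the paper uses a
# multiplier bootstrap rather than the permutation scheme shown here.
import numpy as np
from scipy.stats import kendalltau

def tau_vector(x):
    """Pairwise Kendall's tau for all column pairs of an (n, d) sample."""
    d = x.shape[1]
    taus = []
    for i in range(d):
        for j in range(i + 1, d):
            tau, _ = kendalltau(x[:, i], x[:, j])
            taus.append(tau)
    return np.array(taus)

def linf_tau_statistic(x, min_seg=20):
    """Max over splits k of a weighted sup-norm of tau(1..k) - tau(k+1..n)."""
    n = x.shape[0]
    best = 0.0
    for k in range(min_seg, n - min_seg):
        diff = tau_vector(x[:k]) - tau_vector(x[k:])
        weight = k * (n - k) / n ** 1.5      # CUSUM-type weighting
        best = max(best, weight * np.max(np.abs(diff)))
    return best

def permutation_pvalue(x, n_perm=200, seed=0):
    """Permuting rows destroys any change point while keeping the copula."""
    rng = np.random.default_rng(seed)
    observed = linf_tau_statistic(x)
    null = [linf_tau_statistic(rng.permutation(x)) for _ in range(n_perm)]
    return float(np.mean([s >= observed for s in null]))
```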

3.
In many experiments, several measurements on the same variable are taken over time, a geographic region, or some other index set. It is often of interest to know if there has been a change over the index set in the parameters of the distribution of the variable. Frequently, the data consist of a sequence of correlated random variables, and there may also be several experimental units under observation, each providing a sequence of data. A problem in ascertaining the boundaries between the layers in geological sedimentary beds is used to introduce the model and then to illustrate the proposed methodology. It is assumed that, conditional on the change point, the data from each sequence arise from an autoregressive process that undergoes a change in one or more of its parameters. Unconditionally, the model then becomes a mixture of nonstationary autoregressive processes. Maximum-likelihood methods are used, and results of simulations to evaluate the performance of these estimators under practical conditions are given.
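As a rough companion to this abstract, the sketch below profiles the two-segment log-likelihood of a single change point in an AR(1) process; the per-segment estimates are crude moment-based plug-ins rather than the full maximum-likelihood machinery described in the paper, and all names and defaults are hypothetical.

```python
# A minimal sketch (assumptions, not the paper's code): locating a single
# change point in an AR(1) series by profiling a Gaussian conditional
# log-likelihood over candidate split times.
import numpy as np

def ar1_loglik(y, mu, phi, sigma2):
    """Gaussian conditional log-likelihood of an AR(1) segment around mean mu."""
    e = (y[1:] - mu) - phi * (y[:-1] - mu)
    return -0.5 * len(e) * np.log(2 * np.pi * sigma2) - 0.5 * np.sum(e ** 2) / sigma2

def fit_segment(y):
    """Crude moment-based estimates (mu, phi, sigma2) for one segment."""
    mu = y.mean()
    c = y - mu
    phi = np.sum(c[1:] * c[:-1]) / np.sum(c[:-1] ** 2)
    e = c[1:] - phi * c[:-1]
    return mu, phi, np.var(e)

def profile_changepoint(y, min_seg=10):
    """Return the split k maximizing the two-segment log-likelihood."""
    best_k, best_ll = None, -np.inf
    for k in range(min_seg, len(y) - min_seg):
        ll = ar1_loglik(y[:k], *fit_segment(y[:k])) + ar1_loglik(y[k:], *fit_segment(y[k:]))
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k, best_ll
```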

4.
Estimators derived from the expectation-maximization (EM) algorithm are not robust since they are based on the maximization of the likelihood function. We propose an iterative proximal-point algorithm based on the EM algorithm to minimize a divergence criterion between a mixture model and the unknown distribution that generates the data. At each iteration, the algorithm estimates the proportions and the parameters of the mixture components in two separate steps. The resulting estimators are generally robust against outliers and misspecification of the model. Convergence properties of our algorithm are studied. Convergence of the proposed algorithm is discussed on a two-component Weibull mixture, which entails a condition on the initialization of the EM algorithm for the latter to converge. Simulations on Gaussian and Weibull mixture models using different statistical divergences are provided to confirm the validity of our work and the robustness of the resulting estimators against outliers in comparison to the EM algorithm. An application to a dataset of velocities of galaxies is also presented. The Canadian Journal of Statistics 47: 392–408; 2019 © 2019 Statistical Society of Canada
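For context, the sketch below implements the classical (non-robust) EM algorithm for a two-component Gaussian mixture, the baseline to which the paper's divergence-minimizing proximal-point algorithm is proposed as a robust alternative; it is not the authors' algorithm, and the initialization is an arbitrary illustrative choice.

```python
# A minimal EM baseline for a two-component Gaussian mixture; the paper
# replaces the likelihood criterion by a divergence between the mixture
# and the data-generating distribution to gain robustness to outliers.
import numpy as np
from scipy.stats import norm

def em_gaussian_mixture(x, n_iter=200):
    # crude initialization (illustrative only)
    pi = 0.5
    mu1, mu2 = np.percentile(x, 25), np.percentile(x, 75)
    s1, s2 = x.std(), x.std()
    for _ in range(n_iter):
        # E-step: posterior probability of component 1 for each observation
        d1 = pi * norm.pdf(x, mu1, s1)
        d2 = (1 - pi) * norm.pdf(x, mu2, s2)
        w = d1 / (d1 + d2)
        # M-step: weighted maximum-likelihood updates
        pi = w.mean()
        mu1, mu2 = np.average(x, weights=w), np.average(x, weights=1 - w)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=w))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - w))
    return pi, (mu1, s1), (mu2, s2)
```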

5.
The authors analyze the L1 performance of wavelet density estimators. They prove that under mild conditions on the family of wavelets, such estimates are universally consistent in the L1 sense.
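As a concrete illustration of the kind of estimator being analyzed, here is a minimal linear wavelet density estimate built from Haar scaling functions; the wavelet family, the support [0, 1], and the resolution level are assumptions for illustration only and are not taken from the paper.

```python
# A simple linear (projection) wavelet density estimator with Haar scaling
# functions at resolution level j, for data assumed to lie in [0, 1].
import numpy as np

def haar_density_estimate(x, j, grid):
    """f_hat(t) = sum_k c_{j,k} * phi_{j,k}(t), with c_{j,k} the sample mean
    of phi_{j,k}(X_i); with Haar wavelets this reduces to a 2^j-bin histogram."""
    def phi_jk(t, k):
        # phi_{j,k}(t) = 2^{j/2} * 1[k <= 2^j t < k + 1]
        return 2 ** (j / 2) * ((2 ** j * t >= k) & (2 ** j * t < k + 1))
    est = np.zeros_like(grid, dtype=float)
    for k in range(2 ** j):
        c_jk = phi_jk(x, k).mean()
        est += c_jk * phi_jk(grid, k)
    return est
```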

6.
This paper is concerned with the analysis of a time series comprising the eruption inter-arrival times of the Old Faithful geyser in 2009. The series is much longer than other well-documented ones and thus gives a more comprehensive insight into the dynamics of the geyser. Basic hidden Markov models with gamma state-dependent distributions and several extensions are implemented. In order to better capture the stochastic dynamics exhibited by Old Faithful, the different non-standard models under consideration seek to increase the flexibility of the basic models in various ways: (i) by allowing non-geometric distributions for the times spent in the different states; (ii) by increasing the memory of the underlying Markov chain, with or without assuming additional structure implied by mixture transition distribution models; and (iii) by incorporating feedback from the observation process on the latent process. In each case it is shown how the likelihood can be formulated as a matrix product which can be conveniently maximized numerically.
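The matrix-product form of the likelihood mentioned in the abstract can be sketched as follows for a basic gamma-emission hidden Markov model; the parameter values in the usage comment are placeholders, not estimates from the Old Faithful data.

```python
# Matrix-product (forward) evaluation of a hidden Markov model likelihood
# with gamma state-dependent distributions, with scaling to avoid underflow.
import numpy as np
from scipy.stats import gamma

def hmm_loglik(y, delta, Gamma, shapes, rates):
    """log L = log( delta' P(y_1) Gamma P(y_2) ... Gamma P(y_T) 1 ),
    where P(y) is the diagonal matrix of state-wise gamma densities."""
    alpha = delta * gamma.pdf(y[0], a=shapes, scale=1.0 / rates)
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(y)):
        alpha = (alpha @ Gamma) * gamma.pdf(y[t], a=shapes, scale=1.0 / rates)
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Illustrative two-state usage (placeholder numbers only):
# delta = np.array([0.5, 0.5]); Gamma = np.array([[0.1, 0.9], [0.8, 0.2]])
# hmm_loglik(y, delta, Gamma, shapes=np.array([30.0, 80.0]), rates=np.array([0.5, 0.9]))
```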

7.
Recent work on point processes includes studying posterior convergence rates for estimating a continuous intensity function. We study the problem of estimating the intensity function of an inhomogeneous Poisson process with a change-point using non-parametric Bayesian methods. In this article, convergence rates for estimating the intensity function and the change-point are derived for the more general case of a piecewise continuous intensity function. A Markov chain Monte Carlo (MCMC) algorithm is proposed to obtain estimates of the intensity function and the change-point; the method is illustrated using simulation studies and applications. The Canadian Journal of Statistics 47: 604–618; 2019 © 2019 Statistical Society of Canada
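A heavily simplified sketch of the idea, under the assumption of a piecewise-constant (rather than piecewise-continuous, non-parametric) intensity with a single change point and conjugate Gamma priors: the sampler alternates conjugate draws for the two rates with a random-walk Metropolis step for the change point. This stands in for, and is much simpler than, the MCMC algorithm proposed in the paper; all names and tuning choices are illustrative.

```python
# Gibbs/Metropolis sampler for a one-change-point, piecewise-constant
# Poisson process intensity on [0, T] with Gamma(a, b) priors on the rates.
import numpy as np

def changepoint_sampler(events, T, n_iter=5000, a=1.0, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    events = np.sort(np.asarray(events))
    tau = T / 2
    draws = []
    for _ in range(n_iter):
        n1 = np.searchsorted(events, tau)                  # events before tau
        n2 = len(events) - n1                              # events after tau
        lam1 = rng.gamma(a + n1, 1.0 / (b + tau))          # conjugate rate updates
        lam2 = rng.gamma(a + n2, 1.0 / (b + (T - tau)))
        # random-walk Metropolis step for the change point tau
        def loglik(t):
            k1 = np.searchsorted(events, t)
            return (k1 * np.log(lam1) - lam1 * t
                    + (len(events) - k1) * np.log(lam2) - lam2 * (T - t))
        tau_new = tau + rng.normal(0, T / 50)
        if 0 < tau_new < T and np.log(rng.uniform()) < loglik(tau_new) - loglik(tau):
            tau = tau_new
        draws.append((tau, lam1, lam2))
    return np.array(draws)
```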

8.
In this paper, we study the asymptotic distributions of the MLE and the UMVUE of a parametric function h(θ1, θ2) when sampling from a biparametric uniform distribution U(θ1, θ2). We obtain both limiting distributions as a convolution of exponential distributions, and we observe that the limiting distribution of the UMVUE is a shift of the limiting distribution of the MLE.
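A small simulation, not taken from the paper, that illustrates the flavour of the result for the upper endpoint: it compares the scaled errors of the MLE X_(n) and the UMVUE (n X_(n) − X_(1))/(n − 1) of θ2 under U(θ1, θ2); the particular constants are illustrative assumptions.

```python
# Compare n*(estimator - theta2) for the MLE and the UMVUE of the upper
# endpoint of a U(theta1, theta2) sample (illustrative values only).
import numpy as np

rng = np.random.default_rng(1)
theta1, theta2, n, reps = 2.0, 5.0, 500, 20000
x = rng.uniform(theta1, theta2, size=(reps, n))
xmin, xmax = x.min(axis=1), x.max(axis=1)

mle = xmax                                   # MLE of theta2
umvue = (n * xmax - xmin) / (n - 1)          # UMVUE of theta2

# Both scaled errors have exponential-type limits; for theta2 the UMVUE
# limit is a location shift of the MLE limit, in line with the abstract.
print(np.mean(n * (mle - theta2)), np.mean(n * (umvue - theta2)))
```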

9.
Consider a non-parametric regression model Y = m(X) + ϵ, where m is an unknown regression function, Y is a real-valued response variable, X is a real covariate, and ϵ is the error term. In this article, we extend the usual tests for homoscedasticity by developing consistent tests for independence between X and ϵ. Further, we investigate the local power of the proposed tests using Le Cam's contiguous alternatives. An asymptotic power study under local alternatives, along with an extensive finite-sample simulation study, shows that the performance of the new tests is competitive with existing ones. Furthermore, the practicality of the new tests is shown using two real data sets.
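A minimal sketch of the general idea (not the paper's test statistics): estimate m with a simple kernel smoother, form residuals, and test independence between X and the residuals with a permutation distance-covariance test; the smoother, bandwidth handling, and choice of test are all stand-in assumptions.

```python
# Residual-based independence check for Y = m(X) + eps using a
# Nadaraya-Watson smoother and a permutation distance-covariance test.
import numpy as np

def nadaraya_watson(x, y, grid, h):
    """Gaussian-kernel regression estimate of m evaluated on 'grid'."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def distance_covariance(u, v):
    """Sample (V-statistic) distance covariance between 1-D samples u and v."""
    def centered(a):
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    A, B = centered(u), centered(v)
    return np.sqrt(max(np.mean(A * B), 0.0))

def independence_pvalue(x, y, h, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    resid = y - nadaraya_watson(x, y, x, h)      # residuals at the design points
    obs = distance_covariance(x, resid)
    null = [distance_covariance(x, rng.permutation(resid)) for _ in range(n_perm)]
    return float(np.mean([s >= obs for s in null]))
```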

10.
The exclusion restriction is usually assumed for identifying causal effects in true or only natural randomized experiments with noncompliance. It requires that the assignment to treatment does not have a direct causal effect on the outcome. Despite its importance, the restriction can often be unrealistic, especially in situations of natural experiments. It is shown that, without the exclusion restriction, the parametric model is identified if the outcome distributions of various compliance statuses are in the same parametric class and that class is a linearly independent set over the field of real numbers. However, the relaxation of the exclusion restriction yields a parametric model that is characterized by the presence of mixtures of distributions. This scenario complicates the likelihood-based estimation procedures because it implies more than one maximum likelihood point. A two-step estimation procedure based on detecting the root that is closest to the method of moments estimate of the parameter vector is then proposed and analyzed in detail, under normally distributed outcomes. An economic example with real data concerning returns to schooling concludes the paper.

11.
Clinical work is characterised by frequent interjection of external prompts that cause clinicians to switch from a primary task to deal with an incoming secondary task, a phenomenon associated with negative effects in experimental studies. This is an important yet underexplored aspect of work in safety-critical settings in general, since an increase in task length due to task-switching implies reduced efficiency, while decreased length suggests hastening to compensate for the increased workload brought by the unexpected secondary tasks, which is a potential safety issue. In such observational settings, longer tasks are naturally more likely to have one or more task-switching events: a form of length bias. To assess the effect of task-switching on task completion time, it is necessary to estimate counterfactual task lengths had the tasks not experienced any task-switching, while also accounting for length bias. This is a problem that appears simple at first, but has several counterintuitive considerations resulting in a uniquely constrained solution space. We review the only existing method, which is based on the assumption that task-switches occur according to a homogeneous Poisson process. We propose significant extensions to flexibly incorporate heterogeneity that is more representative of task-switching in real-world contexts. The techniques are applied to observations of emergency physicians' workflow in two hospital settings.

12.
In this paper we develop a non-conventional statistical test for the change-point in a mean model by making use of an almost-sure (a.s.) convergence (or strong convergence) result that we obtain for the difference between the sums of squared residuals under the null and alternative hypotheses. We prove that both types of error probabilities of the new test converge to zero almost surely as the sample size goes to infinity. This result does not hold for any conventional statistical test where the type I error probability, i.e. the significance level or the size, is prescribed at a low but non-zero level (e.g. 0.05). The test developed is easy to use in practice, and is ready to be generalised to other change-point models provided that the relevant almost-sure convergence results are available. We also provide a simulation study in the paper to compare the new and conventional tests under different data scenarios. The results obtained are consistent with our asymptotic study. In addition, we provide least-squares estimators of the parameters used in the change-point test together with their almost-sure convergence properties.
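The central quantity in the abstract, the difference between the residual sum of squares under "no change" and under the best single change point in a mean model, can be computed as in the sketch below; the test itself and its almost-sure thresholding are not reproduced here, and the minimum segment length is an arbitrary assumption.

```python
# Difference of residual sums of squares: single mean (H0) versus the best
# two-segment fit (H1), together with the least-squares change-point estimate.
import numpy as np

def ssr(y):
    return np.sum((y - y.mean()) ** 2)

def ssr_difference(y, min_seg=5):
    """Return SSR(H0) - min_k SSR(H1, k) and the minimizing split k."""
    n = len(y)
    splits = list(range(min_seg, n - min_seg))
    ssr1 = [ssr(y[:k]) + ssr(y[k:]) for k in splits]
    k_hat = splits[int(np.argmin(ssr1))]
    return ssr(y) - min(ssr1), k_hat
```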

13.
We discuss a class of difference-based estimators for the autocovariance in nonparametric regression when the signal is discontinuous and the errors form a stationary m-dependent process. These estimators circumvent the particularly challenging task of pre-estimating such an unknown regression function. We provide finite-sample expressions of their mean squared errors for piecewise constant signals and Gaussian errors. Based on this, we derive bias-optimized estimates that do not depend on the unknown autocovariance structure. Notably, for positively correlated errors, the part of the variance of our estimators that depends on the signal is minimal as well. Further, we provide sufficient conditions for consistency; this result is extended to piecewise Hölder regression with non-Gaussian errors. We combine our bias-optimized autocovariance estimates with a projection-based approach and derive covariance matrix estimates, a method that is of independent interest. An R package, several simulations and an application to biophysical measurements complement this paper.
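The following is a basic difference-based construction, not the paper's bias-optimized estimator: taking differences at a lag L cancels a piecewise-constant signal except near the jump points, and choosing L > m + h removes the nuisance covariance terms so that the average product of differences estimates 2γ(h); the choice L = m + h + 1 is an illustrative assumption.

```python
# Difference-based lag-h autocovariance estimate for m-dependent stationary
# errors under a (piecewise) constant signal.
import numpy as np

def diff_autocovariance(y, h, m):
    L = m + h + 1                          # ensures gamma(L - h) = gamma(L + h) = 0
    d1 = y[L:] - y[:-L]                    # d1_i = y_{i+L} - y_i
    d2 = y[h + L:] - y[h:-L]               # d2_i = y_{i+h+L} - y_{i+h}
    d1 = d1[:len(d2)]
    # E[d1_i * d2_i] = 2 * gamma(h) where the signal is locally constant
    return 0.5 * np.mean(d1 * d2)
```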

14.
To analyze interactions in marked spatiotemporal point processes (MSTPPs), we introduce marked second-order reduced moment measures and K-functions for inhomogeneous second-order intensity-reweighted stationary MSTPPs. These summary statistics, which allow us to quantify dependence between different mark-based classifications of points, depend on the specific mark space and mark reference measure chosen. Unbiased and consistent minus-sampling estimators are derived for all statistics considered, and a test for random labeling is indicated. In addition, we treat Voronoi intensity estimators for MSTPPs. These new statistics are finally employed to analyze an Andaman Sea earthquake data set.

15.
Over the past years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time-to-event outcomes could be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a 'tipping point' approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi-parametric, and non-parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on a piecewise exponential imputation model of survival has some advantages over other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
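A minimal sketch of a tipping-point search under strong simplifying assumptions: censored subjects in the experimental arm are imputed from a single exponential distribution whose hazard is inflated by a sensitivity parameter delta (rather than the paper's piecewise exponential model), the multiple imputations are summarized by a crude median p-value instead of Rubin's rules, and the log-rank test is taken from the lifelines package; all function names and thresholds are illustrative.

```python
# Tipping-point search: inflate the post-censoring hazard in the experimental
# arm until the log-rank comparison is no longer significant.
import numpy as np
from lifelines.statistics import logrank_test

def impute_arm(time, event, rate, delta, horizon, rng):
    """Replace censored times by draws from Exp(rate * delta), re-censored at 'horizon'."""
    t, e = time.copy().astype(float), event.copy().astype(int)
    cens = ~event.astype(bool)
    extra = rng.exponential(1.0 / (rate * delta), size=cens.sum())
    t[cens] = np.minimum(time[cens] + extra, horizon)
    e[cens] = (time[cens] + extra <= horizon)
    return t, e

def tipping_point(tA, eA, tB, eB, horizon, deltas, n_imp=20, seed=0):
    rng = np.random.default_rng(seed)
    rateA = eA.sum() / tA.sum()                 # crude exponential hazard estimate, arm A
    for delta in deltas:                        # increasingly unfavourable imputation for arm A
        pvals = []
        for _ in range(n_imp):
            tAi, eAi = impute_arm(tA, eA, rateA, delta, horizon, rng)
            res = logrank_test(tAi, tB, event_observed_A=eAi, event_observed_B=eB)
            pvals.append(res.p_value)
        if np.median(pvals) > 0.05:             # significance lost: report this delta
            return delta
    return None
```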

16.
This study demonstrates the decomposition of seasonality and long-term trend in seismological data observed at irregular time intervals. The decomposition was applied to the estimation of earthquake detection capability using cubic B-splines and a Bayesian approach, which is similar to the seasonal adjustment model frequently used to analyse economic time-series data. We employed numerical simulation to verify the method and then applied it to real earthquake datasets obtained in and around the northern Honshu island, Japan. With this approach, we obtained the seasonality of the detection capability related to the annual variation of wind speed and the long-term trend corresponding to the recent improvement of the seismic network in the studied region.
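As a simplified stand-in for the cubic B-spline / Bayesian decomposition described here, the sketch below fits an annual harmonic seasonal component plus a low-order polynomial trend to irregularly sampled data by ordinary least squares; the basis choices and degrees are assumptions for illustration only.

```python
# Trend + seasonality decomposition for irregularly sampled observations
# (time measured in years) via least squares on a polynomial-plus-harmonic basis.
import numpy as np

def decompose(t_years, y, trend_degree=3, n_harmonics=2):
    cols = [t_years ** p for p in range(trend_degree + 1)]          # trend basis
    for k in range(1, n_harmonics + 1):                             # annual seasonal basis
        cols += [np.sin(2 * np.pi * k * t_years), np.cos(2 * np.pi * k * t_years)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    trend = X[:, :trend_degree + 1] @ beta[:trend_degree + 1]
    seasonal = X[:, trend_degree + 1:] @ beta[trend_degree + 1:]
    return trend, seasonal, y - trend - seasonal                    # trend, season, residual
```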

17.
Semiparametric maximum likelihood estimators have recently been proposed for a class of two-phase, outcome-dependent sampling models. All of them were "restricted" maximum likelihood estimators, in the sense that the maximization is carried out only over distributions concentrated on the observed values of the covariate vectors. In this paper, the authors give conditions for consistency of these restricted maximum likelihood estimators. They also consider the corresponding unrestricted maximization problems, in which the "absolute" maximum likelihood estimators may then have support on additional points in the covariate space. Their main consistency result also covers these unrestricted maximum likelihood estimators, when they exist for all sample sizes.

18.
We establish a central limit theorem for multivariate summary statistics of nonstationary α-mixing spatial point processes and a subsampling estimator of the covariance matrix of such statistics. The central limit theorem is crucial for establishing asymptotic properties of estimators in statistics for spatial point processes. The covariance matrix subsampling estimator is flexible and model free. It is needed, for example, to construct confidence intervals and ellipsoids based on asymptotic normality of estimators. We also provide a simulation study investigating an application of our results to estimating functions.

19.
Hypothermia, which is induced by reducing core body temperature, is a therapeutic tool used to prevent brain damage resulting from physical trauma. However, all physiological systems begin to slow down under hypothermia, and this can result in an increased risk of mortality. Therefore, quantification of the transition of core body temperature to early hypothermia is of great clinical interest. Conceptually, core body temperature may exhibit either a gradual or an abrupt transition. Bent-cable regression is an appealing statistical tool to model such data due to the model's flexibility and readily interpretable regression coefficients. It offers a more flexible alternative to the low-order polynomial models (for gradual transition) and piecewise linear changepoint models (for abrupt change) that have traditionally been used for such data. We consider a rat model to quantify the temporal trend of core body temperature, primarily to address the question: What is the critical time point associated with a breakdown in the compensatory mechanisms following the start of hypothermia therapy? To this end, we develop a Bayesian modelling framework for bent-cable regression of longitudinal data to simultaneously account for gradual and abrupt transitions. Our analysis reveals that: (i) about 39% of rats exhibit a gradual transition in core body temperature; (ii) the critical time point is approximately the same regardless of transition type; and (iii) both transition types show a significant increase of core body temperature followed by a significant decrease.
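For reference, the bent-cable mean function joins a linear incoming phase and a linear outgoing phase through a quadratic bend of half-width γ centred at the critical time τ; the sketch below fits it by ordinary nonlinear least squares with scipy, which is a simplification of the Bayesian longitudinal model developed in the paper, and the starting values are left to the user.

```python
# Bent-cable mean function and a frequentist least-squares fit.
import numpy as np
from scipy.optimize import curve_fit

def bent_cable(t, b0, b1, b2, tau, gamma):
    """b0 + b1*t + b2*q(t), with q a quadratic bend on (tau-gamma, tau+gamma]
    and a linear continuation for t > tau + gamma."""
    q = np.where((t > tau - gamma) & (t <= tau + gamma),
                 (t - tau + gamma) ** 2 / (4 * gamma), 0.0)
    q = np.where(t > tau + gamma, t - tau, q)
    return b0 + b1 * t + b2 * q

def fit_bent_cable(t, y, p0):
    """p0 = initial guess (b0, b1, b2, tau, gamma); gamma > 0 required."""
    params, cov = curve_fit(bent_cable, t, y, p0=p0, maxfev=20000)
    return params, cov
```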

20.
Missing data in clinical trials is a well-known problem, and the classical statistical methods used can be overly simple. This case study shows how well-established missing data theory can be applied to efficacy data collected in a long-term open-label trial with a discontinuation rate of almost 50%. Satisfaction with treatment in chronically constipated patients was the efficacy measure assessed at baseline and every 3 months postbaseline. The improvement in treatment satisfaction from baseline was originally analyzed with a paired t-test ignoring missing data and discarding the correlation structure of the longitudinal data. As the original analysis started from missing completely at random assumptions regarding the missing data process, the satisfaction data were re-examined, and several missing at random (MAR) and missing not at random (MNAR) techniques resulted in adjusted estimates of the improvement in satisfaction over 12 months. Throughout the different sensitivity analyses, the effect sizes remained significant and clinically relevant. Thus, even for an open-label trial design, sensitivity analysis, with different assumptions for the nature of dropouts (MAR or MNAR) and with different classes of models (selection, pattern-mixture, or multiple imputation models), has been found useful and provides evidence towards the robustness of the original analyses; additional sensitivity analyses could be undertaken to further qualify robustness. Copyright © 2012 John Wiley & Sons, Ltd.
