Similar Articles
1.
Motivated by time series of atmospheric concentrations of certain pollutants, the authors develop bent‐cable regression for autocorrelated errors. Bent‐cable regression extends the popular piecewise linear (broken‐stick) model, allowing for a smooth change region of any non‐negative width. Here the authors consider autoregressive noise added to a bent‐cable mean structure, with unknown regression and time series parameters. They develop asymptotic theory for conditional least‐squares estimation in a triangular array framework, wherein each segment of the bent cable contains an increasing number of observations while the autoregressive order remains constant as the sample size grows. They explore the theory in a simulation study, develop implementation details, apply the methodology to the motivating pollutant dataset, and provide a scientific interpretation of the bent‐cable change point not discussed previously. The Canadian Journal of Statistics 38: 386–407; 2010 © 2010 Statistical Society of Canada
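A minimal sketch of the kind of model described above: a bent‐cable mean function with an AR(1) error process, fitted by conditional least squares. The mean parameterization, parameter values, and fitting details below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def bent_cable_mean(t, b0, b1, b2, tau, gamma):
    """Bent-cable mean: incoming slope b1, slope change b2,
    quadratic bend of half-width gamma centred at tau."""
    q = np.where(t < tau - gamma, 0.0,
         np.where(t > tau + gamma, t - tau,
                  (t - tau + gamma) ** 2 / (4.0 * gamma)))
    return b0 + b1 * t + b2 * q

# --- simulate data: bent-cable mean plus AR(1) noise (illustrative values) ---
n = 300
t = np.arange(n, dtype=float)
true = dict(b0=10.0, b1=-0.02, b2=0.05, tau=150.0, gamma=30.0, phi=0.5)
eps = rng.normal(scale=0.5, size=n)
e = np.zeros(n)
for i in range(1, n):
    e[i] = true["phi"] * e[i - 1] + eps[i]
y = bent_cable_mean(t, true["b0"], true["b1"], true["b2"],
                    true["tau"], true["gamma"]) + e

# --- conditional least squares: whiten the mean residuals with the AR(1) filter ---
def cls_residuals(theta):
    b0, b1, b2, tau, gamma, phi = theta
    r = y - bent_cable_mean(t, b0, b1, b2, tau, gamma)
    return r[1:] - phi * r[:-1]        # residuals conditional on the first observation

start = [y.mean(), 0.0, 0.0, n / 2.0, 20.0, 0.0]
fit = least_squares(cls_residuals, start,
                    bounds=([-np.inf] * 3 + [1.0, 1e-3, -0.99],
                            [np.inf] * 3 + [n - 1.0, n / 2.0, 0.99]))
print("estimates (b0, b1, b2, tau, gamma, phi):", np.round(fit.x, 3))
```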

2.
The authors propose a profile likelihood approach to linear clustering, which explores potential linear clusters in a data set. For each linear cluster, an errors‐in‐variables model is assumed. The optimization of the derived profile likelihood can be achieved by an EM algorithm. Its asymptotic properties and its relationships with several existing clustering methods are discussed. Methods to determine the number of components in a data set are adapted to this linear clustering setting. Several simulated and real data sets are analyzed for comparison and illustration purposes. The Canadian Journal of Statistics 38: 716–737; 2010 © 2010 Statistical Society of Canada

3.
This paper studies generalized linear mixed models (GLMMs) for the analysis of geographic and temporal variability of disease rates. This class of models adopts spatially correlated random effects and random temporal components. Spatio‐temporal models that use conditional autoregressive smoothing across the spatial dimension and autoregressive smoothing over the temporal dimension are developed. The model also accommodates the interaction between space and time. However, the effect of seasonal factors has not been previously addressed, and in some applications (e.g., health conditions) these effects may not be negligible. The authors incorporate the seasonal effects of month and possibly year as part of the proposed model and estimate the model parameters through generalized estimating equations. The model provides smoothed maps of disease risk and eliminates the instability of estimates in low‐population areas while maintaining geographic resolution. The authors illustrate the approach using a monthly data set of the number of asthma presentations made by children to Emergency Departments (EDs) in the province of Alberta, Canada, during the period 2001–2004. The Canadian Journal of Statistics 38: 698–715; 2010 © 2010 Statistical Society of Canada

4.
Longitudinal surveys have emerged in recent years as an important data collection tool for population studies where the primary interest is to examine population changes over time at the individual level. Longitudinal data are often analyzed through the generalized estimating equations (GEE) approach. The vast majority of the existing literature on the GEE method, however, is developed under non‐survey settings and is inappropriate for data collected through complex sampling designs. In this paper the authors develop a pseudo‐GEE approach for the analysis of survey data. They show that survey weights must, and can, be appropriately accounted for in the GEE method under a joint randomization framework. The consistency of the resulting pseudo‐GEE estimators is established under the proposed framework. Linearization variance estimators are developed for the pseudo‐GEE estimators when the finite population sampling fractions are small or negligible, a scenario that often holds for large‐scale surveys. Finite sample performances of the proposed estimators are investigated through an extensive simulation study using data from the National Longitudinal Survey of Children and Youth. The results show that the pseudo‐GEE estimators and the linearization variance estimators perform well under several sampling designs and for both continuous and binary responses. The Canadian Journal of Statistics 38: 540–554; 2010 © 2010 Statistical Society of Canada
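To make the weighting idea concrete, here is a minimal sketch of survey‐weighted estimating equations for a binary response with an independence working correlation, solved by Newton–Raphson. The simulated design, the weights, and all parameter values are illustrative assumptions; the paper's pseudo‐GEE additionally handles general working correlations and develops linearization variance estimators under the joint design‐model framework, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- illustrative clustered sample: m subjects, r repeated measures each ---
m, r, p = 200, 4, 2
X = rng.normal(size=(m, r, p))
beta_true = np.array([0.8, -0.5])
b = rng.normal(scale=0.7, size=(m, 1))           # subject-level heterogeneity
prob = 1.0 / (1.0 + np.exp(-(X @ beta_true + b)))
Y = rng.binomial(1, prob)
w = rng.uniform(1.0, 5.0, size=m)                # survey weights (e.g. inverse inclusion probabilities)

def weighted_gee_logit(X, Y, w, tol=1e-8, max_iter=50):
    """Solve sum_i w_i * sum_j x_ij (y_ij - mu_ij) = 0 for a logit mean model
    with an independence working correlation, by Newton-Raphson."""
    beta = np.zeros(X.shape[-1])
    for _ in range(max_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        # weighted score and expected information, summed over subjects and occasions
        score = np.einsum("i,ijk,ij->k", w, X, Y - mu)
        v = mu * (1.0 - mu)
        info = np.einsum("i,ij,ijk,ijl->kl", w, v, X, X)
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

print("weighted estimate:  ", np.round(weighted_gee_logit(X, Y, w), 3))
print("unweighted estimate:", np.round(weighted_gee_logit(X, Y, np.ones(m)), 3))
```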

5.
We use a two‐state Markov regime‐switching model to explain the behaviour of WTI crude‐oil spot prices from January 1986 to February 2012, and we investigate estimation based on both the composite likelihood and the full likelihood. We find that the composite‐likelihood approach better captures the general structural changes in world oil prices. The two‐state Markov regime‐switching model based on the composite‐likelihood approach closely depicts the cycles of the two postulated states, fall and rise. These two states persist on average for 8 and 15 months, respectively, which matches the cycles observed during the period. According to the fitted model, drops in oil prices are more volatile than rises. We believe that this information can be useful for financial officers working in related areas. The model based on the full‐likelihood approach is less satisfactory; we attribute its failure to the fact that the two‐state Markov regime‐switching model is too rigid and overly simplistic. In comparison, the composite likelihood requires only that the model correctly specify the joint distribution of two adjacent price changes, so model violations in other respects do not invalidate the results. The Canadian Journal of Statistics 41: 353–367; 2013 © 2013 Statistical Society of Canada
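A minimal sketch of the pairwise composite‐likelihood idea for a two‐state Gaussian regime‐switching model: the composite likelihood is built only from the joint density of adjacent observations, with the chain assumed to be in equilibrium. All data are simulated and all parameter values are illustrative assumptions; this is not the authors' code or the WTI data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit
from scipy.stats import norm

rng = np.random.default_rng(2)

# --- simulate a two-state Markov regime-switching series (illustrative parameters) ---
p00, p11 = 0.88, 0.93          # staying probabilities; mean sojourns are 1/(1-p00), 1/(1-p11)
mu = np.array([-1.0, 0.6])     # state means ("fall", "rise")
sd = np.array([2.0, 1.0])      # state standard deviations
n = 2000
s = np.zeros(n, dtype=int)
for t in range(1, n):
    stay = p00 if s[t - 1] == 0 else p11
    s[t] = s[t - 1] if rng.random() < stay else 1 - s[t - 1]
y = rng.normal(mu[s], sd[s])

def neg_composite_loglik(theta, y):
    """Negative pairwise composite log-likelihood: sum_t log f(y_t, y_{t+1}),
    where f mixes adjacent states under the stationary distribution."""
    p00, p11 = expit(theta[0]), expit(theta[1])
    mu0, mu1 = theta[2], theta[3]
    sd0, sd1 = np.exp(theta[4]), np.exp(theta[5])
    P = np.array([[p00, 1 - p00], [1 - p11, p11]])
    pi0 = (1 - p11) / ((1 - p00) + (1 - p11))        # stationary distribution
    pi = np.array([pi0, 1 - pi0])
    d = np.column_stack([norm.pdf(y, mu0, sd0), norm.pdf(y, mu1, sd1)])   # (n, 2)
    # joint density of (y_t, y_{t+1}) = sum_{i,j} pi_i P_ij d_t[i] d_{t+1}[j]
    pair = np.einsum("ti,ij,tj->t", d[:-1] * pi, P, d[1:])
    return -np.sum(np.log(pair))

start = np.array([logit(0.8), logit(0.8), np.quantile(y, 0.25), np.quantile(y, 0.75),
                  np.log(y.std()), np.log(y.std())])
fit = minimize(neg_composite_loglik, start, args=(y,), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
p00_hat, p11_hat = expit(fit.x[0]), expit(fit.x[1])
print("estimated staying probabilities:", round(p00_hat, 3), round(p11_hat, 3))
print("implied mean sojourn lengths:   ", round(1 / (1 - p00_hat), 1), round(1 / (1 - p11_hat), 1))
```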

6.
Most long memory estimators for stationary fractionally integrated time series models are known to experience non‐negligible bias in small and finite samples. Simple moment estimators are also vulnerable to such bias, but can easily be corrected. In this article, the authors propose bias reduction methods for a moment estimator based on the lag‐one sample autocorrelation. In order to reduce the bias of the moment estimator, the authors explicitly obtain the exact bias of the lag‐one sample autocorrelation up to order $n^{-1}$. An example where the exact first‐order bias can be noticeably more accurate than its asymptotic counterpart, even for large samples, is presented. The authors show via a simulation study that the proposed methods are promising and effective in reducing the bias of the moment estimator with minimal variance inflation. The proposed methods are applied to the Northern Hemisphere data. The Canadian Journal of Statistics 37: 476–493; 2009 © 2009 Statistical Society of Canada
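As a concrete reference point, the sketch below simulates fractionally integrated noise and computes the uncorrected lag‐one‐autocorrelation moment estimator of the memory parameter, using the FI($d$) relation $\rho(1) = d/(1-d)$. The simulation scheme and parameter values are illustrative assumptions, and the paper's exact order-$n^{-1}$ bias‐correction terms are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_fi(n, d, trunc=2000):
    """Simulate fractionally integrated noise FI(d) via a truncated MA(inf) filter:
    psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j."""
    psi = np.empty(trunc)
    psi[0] = 1.0
    for j in range(1, trunc):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    eps = rng.normal(size=n + trunc - 1)
    # keep only the n points where the whole filter overlaps the innovations
    return np.convolve(eps, psi, mode="valid")

def moment_estimator(x):
    """Moment estimator of d from the lag-one sample autocorrelation,
    using rho(1) = d / (1 - d), i.e. d = rho(1) / (1 + rho(1))."""
    xc = x - x.mean()
    r1 = np.sum(xc[:-1] * xc[1:]) / np.sum(xc ** 2)
    return r1 / (1.0 + r1)

d_true = 0.3
for n in (100, 500, 5000):
    d_hats = [moment_estimator(simulate_fi(n, d_true)) for _ in range(50)]
    print(f"n={n:5d}  mean d_hat = {np.mean(d_hats):.3f}  (true d = {d_true})")
```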

7.
In recent years analyses of dependence structures using copulas have become more popular than the standard correlation analysis. Starting from Aas et al. (2009), regular vine pair‐copula constructions (PCCs) are considered the most flexible class of multivariate copulas. PCCs are involved objects, but (conditional) independence present in the data can simplify and reduce them significantly. In this paper the authors detect (conditional) independence in a particular vine PCC model based on bivariate t copulas by deriving and implementing a reversible jump Markov chain Monte Carlo algorithm. However, the methodology is general and can be extended to any regular vine PCC and to all known bivariate copula families. The proposed approach considers model selection and estimation problems for PCCs simultaneously. The effectiveness of the developed algorithm is shown in simulations and its usefulness is illustrated in two real data applications. The Canadian Journal of Statistics 39: 239–258; 2011 © 2011 Statistical Society of Canada

8.
The authors propose a robust transformation linear mixed‐effects model for longitudinal continuous proportional data when some of the subjects exhibit outlying trajectories over time. The analysis becomes troublesome when including or excluding such subjects leads to different statistical conclusions. To robustify the longitudinal analysis based on the mixed‐effects model, the authors utilize the multivariate t distribution for the random effects and/or error terms. Estimation and inference in the proposed model are established and illustrated with a real data example from an ophthalmology study. Simulation studies show a substantial robustness gain by the proposed model in comparison with the mixed‐effects model based on Aitchison's logit‐normal approach. As a result, the data analysis benefits from the robustness of reaching consistent conclusions in the presence of influential outliers. The Canadian Journal of Statistics © 2009 Statistical Society of Canada

9.
The authors propose a two‐state continuous‐time semi‐Markov model for an unobservable alternating binary process. A second process, observed at discrete time points, may misclassify the true state of the process of interest. To estimate the model's parameters, the authors propose a minimum Pearson chi‐square type estimating approach based on approximated joint probabilities when the true process is in equilibrium. Three consecutive observations are required to provide sufficient degrees of freedom for estimation. The methodology is demonstrated on parasitic infection data with exponential and gamma sojourn time distributions.

10.
The authors derive closed‐form expressions for the full, profile, conditional, and modified profile likelihood functions for a class of random growth parameter models they develop, as well as for Garcia's additive model. These expressions facilitate the determination of parameter estimates for both types of models. The profile, conditional, and modified profile likelihood functions are maximized over a few parameters to yield a complete set of parameter estimates. In developing their random growth parameter models, the authors specify the drift and diffusion coefficients of the growth parameter process in a natural way that gives interpretive meaning to these coefficients while yielding highly tractable models. They fit several of their random growth parameter models and Garcia's additive model to stock market data, and discuss the results. The Canadian Journal of Statistics 38: 474–487; 2010 © 2010 Statistical Society of Canada

11.
In this article the author investigates empirical‐likelihood‐based inference for the parameters of the varying‐coefficient single‐index model (VCSIM). Unlike in the usual cases, without bias correction the asymptotic distribution of the empirical likelihood ratio cannot achieve the standard chi‐squared distribution. To this end, a bias‐corrected empirical likelihood method is employed to construct confidence regions (intervals) for the regression parameters. Compared with regions based on the normal approximation, these have two advantages: (1) they do not impose prior constraints on the shape of the regions; and (2) they do not require the construction of a pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracy and the average areas/lengths of the confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada

12.
The evaluation of new processor designs is an important issue in electrical and computer engineering. Architects use simulations to evaluate designs and to understand trade‐offs and interactions among design parameters. However, due to the lengthy simulation time and limited resources, it is often practically impossible to simulate a full factorial design space; effective sampling methods and predictive models are required. In this paper, the authors propose an automated performance prediction approach which employs an adaptive sampling scheme that works interactively with the predictive model to select samples for simulation. These samples are then used to build Bayesian additive regression trees, which in turn are used to predict the whole design space. Both real data analysis and simulation studies show that the method is effective: although it samples very few design points, it generates highly accurate predictions at the unsampled points. Furthermore, the proposed model provides quantitative interpretation tools with which investigators can efficiently tune design parameters in order to improve processor performance. The Canadian Journal of Statistics 38: 136–152; 2010 © 2010 Statistical Society of Canada

13.
Autoregressive models with switching regimes are a frequently used class of nonlinear time series models, popular in finance, engineering, and other fields. We consider linear switching autoregressions in which the intercept and variance may switch simultaneously, while the autoregressive parameters are structural and hence the same in all states, and we propose quasi‐likelihood‐based tests for a regime switch in this class of models. Our motivation comes from financial time series, where one expects states with high volatility and low mean together with states with low volatility and higher mean. We investigate the performance of our tests in a simulation study, and give an application to a series of IBM monthly stock returns. The Canadian Journal of Statistics 40: 427–446; 2012 © 2012 Statistical Society of Canada

14.
To enhance modeling flexibility, the authors propose a nonparametric hazard regression model, for which ordinary and weighted least squares estimation and inference procedures are studied. The proposed model does not assume any parametric specification of the covariate effects, which makes it suitable for exploring nonlinear interactions between covariates, time, and some exposure variable. The authors propose local ordinary and weighted least squares estimators for the varying‐coefficient functions and establish the corresponding asymptotic normality properties. Simulation studies are conducted to empirically examine the finite‐sample performance of the new methods, and a real data example from a recent breast cancer study is used as an illustration. The Canadian Journal of Statistics 37: 659–674; 2009 © 2009 Statistical Society of Canada

15.
This paper is concerned with undoing aliasing effects, which arise from discretely sampling a continuous‐time stochastic process. Such effects are manifested in the frequency‐domain relationships between the sampled and original processes. The authors describe a general technique to undo aliasing effects, given two processes, one being a time‐delayed version of the other. The technique is based on the observations that certain phase information between the two processes is unaffected by sampling, is completely determined by the (known) time delay, and contains sufficient information to undo aliasing effects. The authors illustrate their technique with a simulation example. The theoretical model is motivated by the helioseismological problem of determining modes of solar pressure waves. The authors apply their technique to solar radio data, and conclude that certain low‐frequency modes known in the helioseismology literature are likely the result of aliasing effects. The Canadian Journal of Statistics 38: 116–135; 2010 © 2010 Statistical Society of Canada
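A toy numerical illustration of the key observation, under simplifying assumptions (a single pure sinusoid, a known delay, and an exact FFT bin): the phase difference between a signal and its delayed copy at the aliased frequency is determined by the true, possibly super‐Nyquist, frequency, so the known delay can be used to distinguish the true frequency from its alias. This is only a sketch of the phenomenon, not the authors' method.

```python
import numpy as np

fs = 10.0            # sampling rate (Hz); Nyquist = 5 Hz
f_true = 13.0        # true frequency, above Nyquist -> aliases to 3 Hz at fs = 10 Hz
delay = 0.01         # known time delay between the two processes (seconds)
n = 100              # 10 s of data; 3 Hz falls on an exact FFT bin (no leakage)

t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_true * t)             # original process, sampled
y = np.cos(2 * np.pi * f_true * (t - delay))   # delayed copy, sampled

X, Y = np.fft.rfft(x), np.fft.rfft(y)
freqs = np.fft.rfftfreq(n, d=1 / fs)
k = np.argmax(np.abs(X))                       # dominant (aliased) bin
f_alias = freqs[k]                             # 3.0 Hz, not 13 Hz

# The phase difference at the aliased bin is -2*pi*f_true*delay (mod 2*pi):
# it is set by the TRUE frequency even though the bin sits at the alias.
phase_meas = np.angle(Y[k] * np.conj(X[k]))

# Candidate true frequencies that alias to f_alias: f_alias + m*fs, m = 0, 1, 2, ...
# (mirror-image candidates m*fs - f_alias, whose phase appears conjugated, are omitted for brevity)
candidates = f_alias + fs * np.arange(4)
predicted = np.angle(np.exp(-2j * np.pi * candidates * delay))
best = candidates[np.argmin(np.abs(np.angle(np.exp(1j * (predicted - phase_meas)))))]

print(f"aliased frequency seen in the spectrum:   {f_alias:.1f} Hz")
print(f"measured phase difference at that bin:    {phase_meas:+.3f} rad")
print(f"frequency recovered from the known delay: {best:.1f} Hz (true {f_true:.1f} Hz)")
```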

16.
We study estimation and feature selection problems in mixture‐of‐experts models. An $l_2$-penalized maximum likelihood estimator is proposed as an alternative to the ordinary maximum likelihood estimator. The estimator is particularly advantageous when fitting a mixture‐of‐experts model to data with many correlated features. It is shown that the proposed estimator is root-$n$ consistent, and simulations show its superior finite sample behaviour compared to that of the maximum likelihood estimator. For feature selection, two extra penalty functions are applied to the $l_2$-penalized log‐likelihood function. The proposed feature selection method is computationally much more efficient than the popular all‐subset selection methods. Theoretically it is shown that the method is consistent in feature selection, and simulations support our theoretical results. A real‐data example is presented to demonstrate the method. The Canadian Journal of Statistics 38: 519–539; 2010 © 2010 Statistical Society of Canada

17.
General autoregressive moving average (ARMA) models extend the traditional ARMA models by removing the assumptions of causality and invertibility. In contrast to the Gaussian setting, these assumptions are not required for identifiability of the model parameters under a non‐Gaussian setting. We study M‐estimation for general ARMA processes with infinite variance, where the distribution of the innovations is in the domain of attraction of a non‐Gaussian stable law. Following the approach taken by Davis et al. (1992) and Davis (1996), we derive a functional limit theorem for random processes based on the objective function, and establish asymptotic properties of the M‐estimator. We also consider bootstrapping the M‐estimator and extend the results of Davis & Wu (1997) to the present setting so that statistical inference is readily implemented. Simulation studies are conducted to evaluate the finite sample performance of the M‐estimation and bootstrap procedures. An empirical example of a financial time series is also provided.

18.
The semi‐Markov process often provides a better framework than the classical Markov process for the analysis of events with multiple states. The purpose of this paper is twofold. First, we show that in the presence of right censoring, when the right end‐point of the support of the censoring time is strictly less than the right end‐point of the support of the semi‐Markov kernel, the transition probability of the semi‐Markov process is nonidentifiable, and the estimators proposed in the literature are inconsistent in general. We derive the set of all attainable values for the transition probability based on the censored data, and we propose a nonparametric inference procedure for the transition probability using this set. Second, the conventional approach to constructing confidence bands is not applicable for the semi‐Markov kernel and the sojourn time distribution. We propose new perturbation resampling methods to construct these confidence bands. Different weights and transformations are explored in the construction. We use simulation to examine our proposals and illustrate them with hospitalization data from a recent cancer survivor study. The Canadian Journal of Statistics 41: 237–256; 2013 © 2013 Statistical Society of Canada

19.
For randomly censored data, the authors propose a general class of semiparametric median residual life models. They incorporate covariates in a generalized linear form while leaving the baseline median residual life function completely unspecified. Despite the non‐identifiability of the survival function for a given median residual life function, a simple and natural procedure is proposed to estimate the regression parameters and the baseline median residual life function. The authors derive the asymptotic properties for the estimators, and demonstrate the numerical performance of the proposed method through simulation studies. The median residual life model can be easily generalized to model other quantiles, and the estimation method can also be applied to the mean residual life model. The Canadian Journal of Statistics 38: 665–679; 2010 © 2010 Statistical Society of Canada

20.
In survey sampling, policymaking regarding the allocation of resources to subgroups (called small areas), or the determination of subgroups with specific properties in a population, should be based on reliable estimates. Information, however, is often collected at a different scale than that of these subgroups; hence, estimation can only be carried out using finer‐scale data. Parametric mixed models are commonly used in small‐area estimation, but the relationship between predictors and response may not be linear in some real situations. Recently, small‐area estimation using a generalised linear mixed model (GLMM) with a penalised spline (P‐spline) regression model for the fixed part of the model has been proposed to analyse cross‐sectional responses, both normal and non‐normal. However, there are many situations in which the responses in small areas are serially dependent over time. Such a situation is exemplified by a data set on the annual number of visits to physicians by patients seeking treatment for asthma in different areas of Manitoba, Canada. In cases where the covariates that can possibly predict physician visits by asthma patients (e.g. age and genetic and environmental factors) may not have a linear relationship with the response, new models for analysing such data sets are required. In the current work, using both time‐series and cross‐sectional data methods, we propose P‐spline regression models for small‐area estimation under GLMMs. Our proposed model covers both normal and non‐normal responses. In particular, the empirical best predictors of small‐area parameters and their corresponding prediction intervals are studied, with maximum likelihood estimation used to estimate the model parameters. The performance of the proposed approach is evaluated using some simulations and also by analysing two real data sets (precipitation and asthma).
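A minimal sketch of the P‐spline ingredient used for the fixed part of such models: a B‐spline basis with a second‐order difference penalty, fitted here by penalized least squares to simulated Gaussian data. All data and tuning values are illustrative assumptions; embedding the basis in a GLMM with small‐area random effects and non‐normal responses, as proposed above, is not reproduced.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(4)

def bspline_basis(x, a, b, nseg=20, deg=3):
    """Evaluate a cubic B-spline basis on equally spaced knots over [a, b]."""
    h = (b - a) / nseg
    knots = np.linspace(a - deg * h, b + deg * h, nseg + 2 * deg + 1)
    nb = nseg + deg                                    # number of basis functions
    return np.column_stack(
        [BSpline(knots, np.eye(nb)[j], deg)(x) for j in range(nb)])

# --- illustrative nonlinear signal observed with Gaussian noise ---
n = 300
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + 0.5 * x ** 2 + rng.normal(scale=0.3, size=n)

B = bspline_basis(x, 0.0, 1.0)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)           # second-order difference penalty
lam = 1.0                                              # smoothing parameter, fixed here
                                                       # (in practice chosen by REML/CV)
# penalized least squares: minimize ||y - B a||^2 + lam * ||D a||^2
a_hat = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
fitted = B @ a_hat

print("residual std of the penalized fit:", round(np.std(y - fitted), 3))
```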
