Similar Documents
20 similar documents retrieved.
1.
Space–time correlation modelling is one of the crucial steps of traditional structural analysis, since space–time models are used for prediction purposes. A comparative study among some classes of space–time covariance functions is proposed. The relevance of choosing a suitable model by taking into account the characteristic behaviour of the models is demonstrated using a space–time data set of daily ozone averages, and the flexibility of the product-sum model is also highlighted through simulated data sets.
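A minimal sketch of the product-sum covariance class discussed above, C(h, u) = k1·Cs(h)·Ct(u) + k2·Cs(h) + k3·Ct(u); the exponential marginal covariances and the weights k1, k2, k3 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def exp_cov(d, sill=1.0, rng=1.0):
    """Exponential covariance as a function of distance (or time lag)."""
    return sill * np.exp(-3.0 * np.abs(d) / rng)

def product_sum_cov(h, u, k1=0.5, k2=0.25, k3=0.25):
    """Product-sum space-time covariance:
    C(h, u) = k1*Cs(h)*Ct(u) + k2*Cs(h) + k3*Ct(u)."""
    return k1 * exp_cov(h) * exp_cov(u) + k2 * exp_cov(h) + k3 * exp_cov(u)

h = np.linspace(0, 2, 5)   # spatial lags
u = np.linspace(0, 2, 5)   # temporal lags
H, U = np.meshgrid(h, u)
print(product_sum_cov(H, U))
```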

2.
Forecasting in economic data analysis is dominated by linear prediction methods, where predicted values are calculated from a fitted linear regression model. For multiple predictor variables, multivariate nonparametric models have been proposed in the literature. However, empirical studies indicate that the prediction performance of multi-dimensional nonparametric models may be unsatisfactory. We propose a new semiparametric model average prediction (SMAP) approach for analysing panel data and investigate its prediction performance with numerical examples. Estimation of each individual covariate effect requires only univariate smoothing and thus may be more stable than previous multivariate smoothing approaches. The estimation of the optimal weight parameters incorporates the longitudinal correlation, and the asymptotic properties of the estimators are carefully studied in this paper.
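The idea of combining univariate fits with estimated weights can be caricatured as follows; the cubic-polynomial stand-ins for the univariate smoothers and the non-negative least-squares weighting are simplifying assumptions, not the paper's estimator (which also accounts for longitudinal correlation):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.uniform(-1, 1, size=(n, p))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.2, n)

# Step 1: univariate fits, one smoother per covariate
# (cubic polynomial fits stand in for kernel smoothers here).
preds = np.column_stack([
    np.polyval(np.polyfit(X[:, j], y, 3), X[:, j]) for j in range(p)
])

# Step 2: non-negative weights for the model average, rescaled to sum to 1.
w, _ = nnls(preds, y)
w = w / w.sum()
y_hat = preds @ w
print("weights:", np.round(w, 3), " RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```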

3.
In the Bayesian approach, the Behrens–Fisher problem has been posed as one of estimation for the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given, perhaps because the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes, it poses difficulties for testing problems. This paper generates sensible intrinsic and fractional prior distributions for the Behrens–Fisher testing problem from the improper priors commonly used for estimation, allowing us to compute the Bayes factor comparing the null and alternative hypotheses. This default model-selection procedure is compared with a frequentist test and the Bayesian information criterion. We find a discrepancy: the frequentist test and the Bayesian information criterion reject the null hypothesis for data for which the Bayes factor under intrinsic or fractional priors does not.
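For intuition, a hedged sketch contrasting the two sides of such a discrepancy: Welch's test for the frequentist side and the Schwarz (BIC) approximation as a crude stand-in for a Bayes factor; the intrinsic and fractional priors of the paper are not reproduced here, and the simulated data are illustrative assumptions:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 30)
y = rng.normal(0.4, 2.0, 25)

# Frequentist side: Welch's t-test (unequal variances).
t, pval = stats.ttest_ind(x, y, equal_var=False)

def negll_h0(th):          # H0: common mean, unequal variances
    mu, ls1, ls2 = th
    return -(stats.norm.logpdf(x, mu, np.exp(ls1)).sum()
             + stats.norm.logpdf(y, mu, np.exp(ls2)).sum())

res0 = minimize(negll_h0, [0.0, 0.0, 0.0])
ll0 = -res0.fun
ll1 = (stats.norm.logpdf(x, x.mean(), x.std()).sum()
       + stats.norm.logpdf(y, y.mean(), y.std()).sum())
n_tot = x.size + y.size
bic0 = 3 * np.log(n_tot) - 2 * ll0
bic1 = 4 * np.log(n_tot) - 2 * ll1
bf01 = np.exp((bic1 - bic0) / 2)   # Schwarz approximation to the Bayes factor
print(f"Welch p = {pval:.3f},  approximate BF01 = {bf01:.3f}")
```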

4.
In this paper, we propose a method to assess influence in skew-Birnbaum–Saunders regression models, which extend the usual Birnbaum–Saunders (BS) regression model by means of the skew-normal distribution. An interesting characteristic of the new regression model is its capacity to predict extreme percentiles, which is not possible with the BS model. In addition, since the observed likelihood function associated with the new regression model is more complex than that of the usual model, we facilitate parameter estimation using a type-EM algorithm. Moreover, we employ influence diagnostic tools that take this algorithm into account. Finally, a numerical illustration includes a brief simulation study and an analysis of real data to demonstrate the proposed methodology.
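As background to the claim about extreme percentiles, a small sketch of the ordinary BS quantile function t_p = β(αz_p/2 + √((αz_p/2)² + 1))², cross-checked against scipy's fatiguelife distribution (scipy's name for the BS model); the skew-BS extension itself is not implemented here, and the parameter values are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def bs_quantile(p, alpha, beta):
    """Percentile of the Birnbaum-Saunders(alpha, beta) distribution."""
    z = stats.norm.ppf(p)
    return beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1)) ** 2

alpha, beta = 0.5, 2.0
p = 0.99
print(bs_quantile(p, alpha, beta))
# cross-check with scipy's fatiguelife (Birnbaum-Saunders) distribution
print(stats.fatiguelife.ppf(p, c=alpha, scale=beta))
```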

5.
Dynamic programming (DP) is a fast, elegant method for solving many one-dimensional optimisation problems, but unfortunately most problems in image analysis, such as restoration and warping, are two-dimensional. We consider three generalisations of DP. The first is iterated dynamic programming (IDP), in which DP is used recursively to solve each of a sequence of one-dimensional problems in turn, yielding a local optimum. The second algorithm is an empirical stochastic optimiser, implemented by adding progressively less noise to IDP. The final approach replaces DP by a more computationally intensive forward–backward Gibbs sampler with a simulated annealing cooling schedule. Results are compared with the existing pixel-by-pixel methods of iterated conditional modes (ICM) and simulated annealing in two applications: restoring a synthetic aperture radar (SAR) image, and warping a pulsed-field electrophoresis gel into alignment with a reference image. We find that IDP and its stochastic variant outperform the remaining algorithms.
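A minimal sketch of the one-dimensional DP building block that IDP applies recursively, here denoising a single row over a discrete set of grey levels with an absolute-difference smoothness penalty (the discretisation and penalty are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def dp_denoise_row(y, levels, lam=1.0):
    """Exact 1-D dynamic program: minimise sum_i (x_i - y_i)^2
    + lam * sum_i |x_i - x_{i-1}| over a discrete set of levels."""
    K, n = len(levels), len(y)
    cost = np.empty((n, K)); back = np.zeros((n, K), dtype=int)
    cost[0] = (levels - y[0]) ** 2
    pair = lam * np.abs(levels[:, None] - levels[None, :])  # transition costs
    for i in range(1, n):
        total = cost[i - 1][:, None] + pair        # prev level -> new level
        back[i] = np.argmin(total, axis=0)
        cost[i] = total[back[i], np.arange(K)] + (levels - y[i]) ** 2
    x = np.empty(n, dtype=int)
    x[-1] = np.argmin(cost[-1])
    for i in range(n - 1, 0, -1):                  # backtrack
        x[i - 1] = back[i, x[i]]
    return levels[x]

rng = np.random.default_rng(2)
truth = np.repeat([0.0, 1.0, 0.0], 30)
noisy = truth + rng.normal(0, 0.3, truth.size)
print(dp_denoise_row(noisy, levels=np.linspace(0, 1, 11), lam=0.5)[:10])
```

IDP would alternate such one-dimensional sweeps over the rows and columns of an image until no further improvement is found.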

6.
In this article we introduce a general approach to dynamic path analysis, an extension of classical path analysis to the situation where variables may be time-dependent and where the outcome of main interest is a stochastic process. In particular we focus on the survival and event history analysis setting, where the main outcome is a counting process. Our approach is especially fruitful for analyzing event history data with internal time-dependent covariates, where an ordinary regression analysis may fail. It enables us to describe how the effect of a fixed covariate operates partly directly and partly indirectly through internal time-dependent covariates. For the sequence of event times, we define a sequence of path analysis models: at each event time, ordinary linear regression is used to estimate the relation between the covariates, while the additive hazard model is used for the regression of the counting process on the covariates. The methodology is illustrated using data from a randomized trial on survival for patients with liver cirrhosis.
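A one-event-time caricature of the two regressions described above: OLS for the internal covariate given the fixed covariate, and an Aalen-type least-squares regression of the counting-process increment on both; the simulated data and coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)                    # fixed covariate
m = 0.8 * x + rng.normal(size=n)          # internal time-dependent covariate
# event indicator over a short interval; hazard depends on x and m
haz = 0.05 + 0.02 * x + 0.03 * m
dN = rng.uniform(size=n) < np.clip(haz, 0, 1)

# Path model at one event time:
# (1) OLS of the internal covariate on the fixed covariate,
theta = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0]
# (2) Aalen-type least-squares regression of the counting-process
#     increment on both covariates.
Z = np.column_stack([np.ones(n), x, m])
beta = np.linalg.lstsq(Z, dN.astype(float), rcond=None)[0]

direct = beta[1]                 # x -> dN
indirect = theta[1] * beta[2]    # x -> m -> dN
print(f"direct {direct:.4f}, indirect {indirect:.4f}, total {direct + indirect:.4f}")
```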

7.
A pioneer first enters the market as a monopolist and later faces competition when a similar product is brought to the market by a competitor. Wang and Xie (2012) suggested decomposing the pioneer's survival into "monopoly" and "competition" durations and estimating the pioneer's two survivals individually, together with the competitor's survival, via regression analysis. In their article, several regression analyses were performed to study the effect of order of entry on the pioneer and the later entrant under different market conditions. Using the same datasets from their study, our main interest is to investigate the interdependence between two competitive firms and to study whether the market pioneer and the later entrant can benefit from the competition. The major contribution of this article is that the interdependence between two competitive firms is expressed explicitly and three survival durations can be estimated in one model. The proposed method relates the survival times of the two competitive firms to the pioneer's monopoly time and observable covariates via a proportional hazards model, and incorporates frailty variables to capture the interdependence in the competition. The article demonstrates this new formulation of interdependence between competitive firms with data analyses in the newspaper and high-technology industries.
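A small simulation sketch of how a shared frailty induces the interdependence described above; the gamma frailty and baseline hazards are illustrative assumptions, not estimates from the newspaper or high-technology data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
z = rng.gamma(shape=2.0, scale=0.5, size=n)     # shared frailty per market
# conditional on z, firm lifetimes are exponential with proportional hazards
pioneer = rng.exponential(1.0 / (0.10 * z))
entrant = rng.exponential(1.0 / (0.15 * z))

# the shared frailty induces positive dependence between the two lifetimes
print("correlation:", np.corrcoef(pioneer, entrant)[0, 1])
```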

8.
In many fields of empirical research one is faced with observations arising from a functional process. In such cases, classical multivariate methods are often not feasible or appropriate for exploring the data at hand, and functional data analysis prevails. In this paper we present a method for joint modeling of mean and variance in longitudinal data using penalized splines. Unlike previous approaches, we model both components simultaneously via rich spline bases. Estimation as well as smoothing parameter selection is carried out in a mixed model framework. The resulting smooth covariance structures are then used to perform principal component analysis. We illustrate our approach with several simulations and an application to financial interest data.
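A minimal penalized-spline sketch under simplifying assumptions (a truncated-line basis and a fixed ridge penalty rather than the paper's rich bases with mixed-model smoothing parameter selection); the variance part is sketched by a second smooth of the log squared residuals:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, 200)

knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones_like(t), t] +
                    [np.maximum(t - k, 0.0) for k in knots])  # truncated lines

lam = 1.0
D = np.diag([0.0, 0.0] + [1.0] * len(knots))  # penalise only the knot terms
coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
fit = B @ coef                                # smooth mean

# variance part: a second penalised smooth, here of log squared residuals
r2 = np.log((y - fit) ** 2 + 1e-8)
coef_v = np.linalg.solve(B.T @ B + lam * D, B.T @ r2)
print("mean fit (first 5):", np.round(fit[:5], 3))
print("variance smooth coefficients (first 3):", np.round(coef_v[:3], 3))
```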

9.
In this paper we review results that have been derived on record values for some well-known probability density functions and, based on m records from Kumaraswamy's distribution, we obtain estimators for the two parameters and for the future sth record value. These estimates are derived using maximum likelihood and Bayesian approaches. In the Bayesian approach, the two parameters are treated as random variables, and estimators for the parameters and for the future sth record value, given m observed past record values, are obtained under the well-known squared error loss (SEL) function and a linear exponential (LINEX) loss function. The findings are illustrated with real and computer-generated data.
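A sketch of the maximum likelihood step for upper records, using the standard record-value likelihood L = f(r_m) ∏_{i<m} f(r_i)/(1 − F(r_i)) with the Kumaraswamy density f(x) = a b x^{a−1}(1 − x^a)^{b−1}; the toy record sequence is an assumption for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, r):
    """Likelihood of the first m upper record values r_1 < ... < r_m
    from Kumaraswamy(a, b): L = f(r_m) * prod_{i<m} f(r_i)/(1 - F(r_i))."""
    a, b = np.exp(theta)                       # keep parameters positive
    logf = (np.log(a) + np.log(b) + (a - 1) * np.log(r)
            + (b - 1) * np.log1p(-r ** a))
    logS = b * np.log1p(-r ** a)               # log(1 - F)
    return -(logf.sum() - logS[:-1].sum())

records = np.array([0.31, 0.52, 0.67, 0.81, 0.90])   # toy record sequence
res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(records,))
print("MLEs (a, b):", np.round(np.exp(res.x), 3))
```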

10.
Compositional tables represent a continuous counterpart to the well-known contingency tables. Their cells contain quantitatively expressed relative contributions to a whole; they carry exclusively relative information and are commonly represented as proportions or percentages. The resulting factors, corresponding to the rows and columns of the table, can be inspected similarly to contingency tables, e.g. for mutually independent behaviour. The nature of compositional tables requires a specific geometrical treatment, represented by the Aitchison geometry on the simplex. The properties of the Aitchison geometry allow a decomposition of the original table into its independent and interactive parts. Moreover, the specific case of 2×2 compositional tables allows the construction of easily interpretable orthonormal coordinates (resulting from the isometric logratio transformation) for the original table and its decompositions. Consequently, for a sample of compositional tables, both explorative statistical analysis, like graphical inspection of the independent and interactive parts, and statistical inference (odds-ratio-like testing of independence) can be performed. The theoretical advancements of the presented approach are demonstrated using two economic applications.
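A sketch of the 2×2 decomposition: the odds-ratio-like coordinate of the interactive part and the independent part built from geometric marginals. Normalisation conventions vary across the literature; the closure used here is one common choice, and the example table is made up:

```python
import numpy as np

x = np.array([[0.30, 0.20],
              [0.15, 0.35]])        # a 2x2 compositional table (sums to 1)

# odds-ratio-like logratio coordinate of the interaction part;
# it is zero exactly when rows and columns behave independently
z_int = 0.5 * np.log(x[0, 0] * x[1, 1] / (x[0, 1] * x[1, 0]))

# independent part: outer product of geometric row and column marginals
g_row = np.sqrt(x.prod(axis=1))     # geometric means over rows
g_col = np.sqrt(x.prod(axis=0))     # geometric means over columns
ind = np.outer(g_row, g_col)
ind = ind / ind.sum()               # closure back to the simplex

int_part = x / ind                  # interactive part (elementwise ratio)
int_part = int_part / int_part.sum()

print("interaction coordinate:", round(z_int, 4))
print("independent part:\n", np.round(ind, 4))
```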

11.
12.
Time-varying coefficient models with autoregressive and moving-average–generalized autoregressive conditional heteroscedasticity (ARMA–GARCH) structure are proposed for examining the time-varying effects of risk factors in longitudinal studies. Compared with existing models in the literature, the proposed models give explicit patterns for the time-varying coefficients. Maximum likelihood and marginal likelihood (based on a Laplace approximation) are used to estimate the parameters of the proposed models. Simulation studies are conducted to evaluate the performance of these two estimation methods, measured in terms of the Kullback–Leibler divergence and the root mean square error. The marginal likelihood approach yields the more accurate parameter estimates, although it is more computationally intensive. The proposed models are applied to the Framingham Heart Study to investigate the time-varying effects of covariates on coronary heart disease incidence. The Bayesian information criterion is used to specify the time series structures of the coefficients of the risk factors.
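A generic sketch of the Laplace approximation used for the marginal likelihood, on a deliberately simple Gaussian model rather than the ARMA–GARCH coefficient structure of the paper; the prior and data are illustrative assumptions:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(6)
y = rng.normal(1.0, 1.0, 50)

def neg_log_post(mu):
    # Gaussian likelihood with known variance, N(0, 10^2) prior on mu
    return -(stats.norm.logpdf(y, mu, 1.0).sum()
             + stats.norm.logpdf(mu, 0.0, 10.0))

res = minimize(lambda m: neg_log_post(m[0]), x0=[0.0])
mode = res.x[0]
h = 1e-4                                        # numerical second derivative
hess = (neg_log_post(mode + h) - 2 * neg_log_post(mode)
        + neg_log_post(mode - h)) / h ** 2
# Laplace: log m(y) ~ log p(y, mode) + (d/2) log(2*pi) - (1/2) log|H|, d = 1
log_ml = -res.fun + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(hess)
print("Laplace log marginal likelihood:", round(log_ml, 3))
```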

13.
We consider Bayesian analysis of a class of multiple changepoint models. While there are a variety of efficient ways to analyse these models if the parameters associated with each segment are independent, there are few general approaches for models where the parameters are dependent. Under the assumption that the dependence is Markov, we propose an efficient online algorithm for sampling from an approximation to the posterior distribution of the number and positions of the changepoints. In a simulation study, we show that the error introduced by the approximation is negligible. We illustrate the power of our approach by fitting piecewise polynomial models to data, under a model which allows for either continuity or discontinuity of the underlying curve at each changepoint. The method is competitive with, or outperforms, other methods for inferring curves from noisy data, and it uniquely allows inference on the locations of discontinuities in the underlying curve.
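For intuition, a sketch of exact posterior computation in the simplest single-changepoint, independent-segment case (known unit variance, Gaussian prior on each segment mean); the paper's online algorithm handles multiple changepoints with Markov-dependent segment parameters, which this toy omits:

```python
import numpy as np

rng = np.random.default_rng(7)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 40)])
n = y.size

def seg_log_marg(seg, tau2=100.0):
    """Integrated likelihood of one Gaussian segment: y_i ~ N(mu, 1),
    mu ~ N(0, tau2), with mu integrated out in closed form."""
    m = seg.size
    s = seg.sum()
    v = 1.0 / (m + 1.0 / tau2)
    return (-0.5 * m * np.log(2 * np.pi) - 0.5 * (seg ** 2).sum()
            + 0.5 * s ** 2 * v + 0.5 * np.log(v) - 0.5 * np.log(tau2))

# posterior over the changepoint location under a uniform prior
logp = np.array([seg_log_marg(y[:k]) + seg_log_marg(y[k:])
                 for k in range(5, n - 5)])
post = np.exp(logp - logp.max())
post /= post.sum()
print("posterior mode of changepoint:", 5 + int(post.argmax()))
```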

14.
In recent years there has been significant development of procedures for inferring the extremal model that most conveniently describes the distribution function of the underlying population from a data set. The problem of choosing one of the three extremal types, giving preference to the Gumbel model under the null hypothesis, has frequently received the general designation of statistical choice of extremal models and has been handled under different set-ups by numerous authors. Recently, a test procedure proposed by Hasofer and Wang (1992) gave rise to a comparison with some other related approaches. This topic, together with some suggestions for applicability to real data, is the theme of the present paper.
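A hedged sketch of one common form of statistical choice of extremal models: a likelihood-ratio test of the Gumbel model (shape fixed at zero) within the GEV family; this is not the Hasofer–Wang statistic itself, and the simulated sample is an illustrative assumption. Note that scipy's shape parameter c is the negative of the usual ξ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
data = stats.genextreme.rvs(c=-0.2, loc=10, scale=2, size=200,
                            random_state=rng)   # Frechet-type sample

# full GEV fit (three parameters) vs Gumbel fit (shape fixed at zero)
c, loc1, sc1 = stats.genextreme.fit(data)
loc0, sc0 = stats.gumbel_r.fit(data)

ll1 = stats.genextreme.logpdf(data, c, loc1, sc1).sum()
ll0 = stats.gumbel_r.logpdf(data, loc0, sc0).sum()
lr = 2 * (ll1 - ll0)                 # 1 df: the shape parameter
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.4f}")
```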

15.
Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set into perspective. The maximum likelihood estimator of the mixing distribution is constructed for any number of components, up to the global nonparametric maximum likelihood bound, using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special-purpose software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems arising when using the mixture model-based estimators are also highlighted.
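A short sketch of the classical Chao and Zelterman estimators mentioned above, computed from frequency-of-frequency counts; the counts are made-up illustrative data, and CAMCR itself is not reproduced:

```python
import numpy as np

# frequency counts: f[k] = number of units observed exactly k times
f = {1: 61, 2: 26, 3: 14, 4: 9, 5: 4}
n_obs = sum(f.values())

chao = n_obs + f[1] ** 2 / (2 * f[2])          # Chao's lower-bound estimator
lam = 2 * f[2] / f[1]                          # Zelterman's Poisson rate
zelterman = n_obs / (1 - np.exp(-lam))         # truncated-Poisson correction
print(f"Chao: {chao:.1f}   Zelterman: {zelterman:.1f}")
```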

16.
This article derives the asymptotic properties of rank-based tests for the covariate effects in rank repeated-measures analysis of covariance (ANCOVA) models (Fan and Zhang, 2017, Communications in Statistics – Theory and Methods 46: 1158–83) employing generalized estimating equation (GEE) techniques. One application of interest for the proposed tests is checking the validity of the assumption of homogeneous covariate effects across the levels of the factors. The performance of the proposed tests has been confirmed by simulation studies and illustrated using the well-known seizure count data. While the article mainly focuses on interaction tests, the scope of the proposed tests includes testing any contrast of the covariate effects, such as the null of no overall covariate effect.
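A rough sketch of the rank-then-GEE recipe, using statsmodels with an exchangeable working correlation; ranking the response and reading the interaction row as a homogeneity check is a simplification of the article's asymptotic theory, and the simulated panel is an illustrative assumption:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import rankdata

rng = np.random.default_rng(9)
n_sub, n_rep = 40, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_sub), n_rep),
    "group": np.repeat(rng.integers(0, 2, n_sub), n_rep),
    "x": rng.normal(size=n_sub * n_rep),
})
df["y"] = 0.5 * df["x"] + 0.3 * df["group"] + rng.normal(size=len(df))

df["ry"] = rankdata(df["y"])                # overall ranks of the response
model = sm.GEE.from_formula("ry ~ group * x", groups="id", data=df,
                            cov_struct=sm.cov_struct.Exchangeable(),
                            family=sm.families.Gaussian())
res = model.fit()
print(res.summary().tables[1])   # group:x row probes homogeneity of the slope
```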

17.
Sodium cromoglicate (SCG) has been available since around 1970 for the treatment of asthma and other allergic disorders in both adults and children, and it has been approved for use around the world. Over the period of its development, a number of different formulations were introduced. In 1999, a systematic review of SCG use in childhood asthma was carried out and reported initially as a poster. Further systematic reviews and papers followed from the same authors, and finally a Cochrane Collaboration review was published in 2003. All concluded that SCG was ineffective in paediatric asthma. Both the British Thoracic Society guidelines for the treatment of paediatric asthma and the WHO Model List of Essential Drugs now reflect these conclusions. This paper looks carefully at the conclusions of these systematic reviews and raises concerns about the interpretation of the results. The reviews failed to take adequate account of changes over time in both the formulations used and the age groups examined, and also failed to take adequate note of the totality of information available over all end-points. One primary end-point was based on only four of the 24 studies included in the review. Far from demonstrating no effect, a considerable body of evidence favours SCG over placebo, and the drug appears to be effective, particularly in older children. This article replaces a previously published version. DOI: 10.1002/pst.258.

18.
In recent research, Elliott et al. (1996; Econometrica 64: 813–836) have shown that local-to-unity detrending via generalized least squares (GLS) substantially increases the power of the Dickey–Fuller (1979) unit root test. In this paper the relationship between the extent of detrending undertaken, determined by the detrending parameter ᾱ, and the power of the resulting GLS-based Dickey–Fuller (DF-GLS) test is examined. Using Monte Carlo simulation it is shown that the values of ᾱ suggested by Elliott et al. (1996) on the basis of a limiting power function seldom maximize the power of the DF-GLS test for the finite samples encountered in applied research. This result holds for the DF-GLS test including either an intercept or an intercept and a trend term. An empirical examination of the order of integration of the UK household savings ratio illustrates these findings, with the unit root hypothesis rejected using values of ᾱ other than those proposed by Elliott et al. (1996).
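A minimal numpy sketch of the DF-GLS construction in the intercept-only case: quasi-difference with ᾱ = 1 + c̄/T, GLS-detrend, then run the Dickey–Fuller regression on the detrended series. Using c̄ = −7 (the Elliott et al. default for this case, the very choice the paper argues is seldom power-maximising in finite samples) is an illustrative assumption:

```python
import numpy as np

def dfgls_stat(y, cbar=-7.0):
    """DF-GLS with an intercept: quasi-difference with alpha = 1 + cbar/T,
    GLS-detrend, then the Dickey-Fuller regression on the residuals."""
    T = y.size
    a = 1.0 + cbar / T
    yq = np.concatenate([[y[0]], y[1:] - a * y[:-1]])      # quasi-differenced y
    zq = np.concatenate([[1.0], np.full(T - 1, 1.0 - a)])  # q.d. intercept
    delta = (zq @ yq) / (zq @ zq)                          # GLS coefficient
    yd = y - delta                                         # detrended series
    dy, ylag = np.diff(yd), yd[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)                      # DF regression
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (dy.size - 1) / (ylag @ ylag))
    return rho / se                                        # DF-GLS t-statistic

rng = np.random.default_rng(10)
y = np.cumsum(rng.normal(size=250))          # a random walk: unit root
print("DF-GLS tau:", round(dfgls_stat(y), 3))
```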

19.
Singular spectrum analysis (SSA) is an increasingly popular and widely adopted filtering and forecasting technique currently exploited in a variety of fields. Given its increasing application and superior performance in comparison to other methods, it is pertinent to study and distinguish between the two forecasting variations of SSA, referred to as Vector SSA (SSA-V) and Recurrent SSA (SSA-R). The general notion is that SSA-V is more robust and provides better forecasts than SSA-R, especially for time series which are non-stationary and asymmetric, or affected by unit root problems, outliers or structural breaks. However, there is currently no empirical evidence supporting these notions or suggesting that SSA-V is better than SSA-R. In this paper, we evaluate the out-of-sample forecasting capabilities of the optimised SSA-V and SSA-R forecasting algorithms via a simulation study and an application to 100 real data sets with varying structures, to provide a statistically reliable answer to the question of which SSA algorithm is best for forecasting at both short- and long-run horizons, based on several important criteria.
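A compact sketch of basic SSA with recurrent (SSA-R) forecasting: embed, truncate the SVD, Hankelise, then iterate the linear recurrence built from the leading left singular vectors. The window length L, rank r and the noise-free test series are illustrative assumptions; the optimised SSA-V variant is not implemented here:

```python
import numpy as np

def ssa_r_forecast(y, L=20, r=3, steps=12):
    """Basic SSA with recurrent (SSA-R) forecasting."""
    N = y.size
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r approximation

    # diagonal averaging (Hankelisation) back to a series
    rec = np.zeros(N); cnt = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]; cnt[j:j + L] += 1
    rec /= cnt

    # linear recurrence coefficients from the leading left singular vectors
    P = U[:, :r]
    pi = P[-1]                                # last components
    nu2 = pi @ pi
    R = (P[:-1] @ pi) / (1 - nu2)             # coefficients for lags L-1..1

    out = list(rec)
    for _ in range(steps):
        out.append(R @ np.array(out[-(L - 1):]))
    return np.array(out[N:])

t = np.arange(200)
y = np.sin(2 * np.pi * t / 24) + 0.01 * t
print(np.round(ssa_r_forecast(y), 3)[:6])
```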

20.
The purpose of this paper is to develop a Bayesian analysis of right-censored survival data when immune or cured individuals may be present in the population from which the data are taken. In our approach the number of competing causes of the event of interest follows the Conway–Maxwell–Poisson distribution, which generalizes the Poisson distribution. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the proposed model. Model selection is also discussed, and the approach is illustrated with a real data set.
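A sketch of the Conway–Maxwell–Poisson pmf with a truncated normalising constant, the distributional building block named above (ν = 1 recovers the Poisson case); the MCMC machinery of the paper is not reproduced, and the parameter values are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def cmp_pmf(k, lam, nu, kmax=200):
    """Conway-Maxwell-Poisson pmf, normalising constant truncated at kmax.
    nu = 1 recovers the Poisson distribution."""
    ks = np.arange(kmax + 1)
    log_fact = np.array([np.sum(np.log(np.arange(1, j + 1))) for j in ks])
    logw = ks * np.log(lam) - nu * log_fact    # log of lam^k / (k!)^nu
    logw -= logw.max()                         # stabilise before exponentiating
    w = np.exp(logw)
    return w[k] / w.sum()

# check: nu = 1 matches Poisson(2)
print(cmp_pmf(3, lam=2.0, nu=1.0), stats.poisson.pmf(3, 2.0))
```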
