Similar Literature
20 similar documents retrieved.
1.
In a smoothing spline model with unknown change-points, the choice of the smoothing parameter strongly influences the estimation of the change-point locations and of the function at the change-points. In a tumor biology example, where change-points in blood flow in response to treatment were of interest, choosing the smoothing parameter by minimizing generalized cross-validation (GCV) gave unsatisfactory estimates of the change-points. We propose a new method, aGCV, that re-weights the residual sum of squares and generalized degrees of freedom terms from GCV. The weight is chosen to maximize the decrease in the generalized degrees of freedom as a function of the weight value, while simultaneously minimizing aGCV as a function of the smoothing parameter and the change-points. Simulation studies suggest that, compared with GCV, the aGCV method yields improved estimates of the change-point and of the value of the function at the change-point.
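
As a point of reference for the criterion being modified, the sketch below computes ordinary GCV for a simple discrete (Whittaker-type) penalized smoother, where the generalized degrees of freedom are the trace of the smoother matrix. The weight `w` is only a schematic placeholder for the aGCV-style re-weighting described above, not the authors' exact construction.

```python
# Minimal sketch: GCV for a discrete (Whittaker-type) penalized smoother.
# The smoother matrix is S(lam) = (I + lam * D'D)^{-1}, where D is the
# second-difference operator; trace(S) plays the role of the generalized
# degrees of freedom mentioned in the abstract.
import numpy as np

def second_diff(n):
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def gcv(y, lam, w=1.0):
    # w = 1 gives ordinary GCV; w != 1 is only a schematic stand-in for an
    # aGCV-style re-weighting of the degrees-of-freedom term (assumption).
    n = len(y)
    D = second_diff(n)
    S = np.linalg.inv(np.eye(n) + lam * D.T @ D)
    fit = S @ y
    rss = np.sum((y - fit) ** 2)
    gdf = np.trace(S)
    return n * rss / (n - w * gdf) ** 2

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.where(x < 0.5, np.sin(2 * np.pi * x), np.sin(2 * np.pi * x) + 1.5)
y += rng.normal(0, 0.3, size=len(x))          # mean function with a jump at x = 0.5

lams = np.logspace(-2, 4, 30)
best = min(lams, key=lambda lam: gcv(y, lam))
print("smoothing parameter chosen by GCV:", best)
```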

2.
Empirical likelihood ratio-based semiparametric tests of change-points with epidemic alternatives are constructed and are proved to have the same limiting null distributions as some well-known tests. The maximum empirical likelihood estimates of the change-points and of the epidemic duration are shown to be consistent. Data-based model tests are also provided. The method is applied to stock market price data and to the Nile river data.
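
The empirical likelihood construction itself is not reproduced here; as a rough illustration of the epidemic-alternative idea, the following sketch scans all candidate onset/offset pairs for a mean shift on an interval using a simple Gaussian two-sample statistic. Both the statistic and the names are illustrative assumptions, not the paper's semiparametric test.

```python
# Illustrative scan over epidemic segments (k1, k2]: the mean is mu0 outside
# the segment and shifted inside it.  For each candidate pair we compare the
# segment mean with the outside mean; the test statistic is the maximum over
# all pairs, and the maximizing (k1, k2) estimate onset and duration.
import numpy as np

def epidemic_scan(x, min_len=5):
    n = len(x)
    best = (-np.inf, None, None)
    for k1 in range(0, n - min_len):
        for k2 in range(k1 + min_len, n):
            inside = x[k1:k2]
            outside = np.concatenate([x[:k1], x[k2:]])
            if len(outside) < 2:
                continue
            m, no = len(inside), len(outside)
            pooled = np.sqrt(x.var(ddof=1) * (1.0 / m + 1.0 / no))
            stat = abs(inside.mean() - outside.mean()) / pooled
            if stat > best[0]:
                best = (stat, k1, k2)
    return best

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 120)
x[40:70] += 1.2                    # epidemic segment of elevated mean
stat, k1, k2 = epidemic_scan(x)
print(f"max statistic {stat:.2f} at estimated epidemic segment ({k1}, {k2})")
```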

3.
We propose an extension of parametric product partition models (PPMs). We call our proposal nonparametric product partition models because we associate a random measure, rather than a parametric kernel, with each set within a random partition. Our methodology does not impose any specific form on the marginal distribution of the observations, allowing us to detect shifts in behaviour even when dealing with heavy-tailed or skewed distributions. We propose a suitable loss function and find the partition of the data having minimum expected loss. We then apply our nonparametric procedure to multiple change-point analysis and compare it with PPMs and with other methodologies that have recently appeared in the literature. In the context of missing data, we also exploit the product partition structure to estimate the distribution function of each missing value, allowing us to detect change points using the loss function mentioned above. Finally, we present applications to financial as well as genetic data.

4.
In dynamic financial markets, changes in asset prices are often driven by random events that produce abrupt shifts. The random time at which such an event occurs is called a change point. When such events occur, governments may need to strengthen macro-level controls in order to mitigate losses, so an effective statistical model for the change-point problem is needed. This paper proposes a semiparametric model for detecting change points. We derive maximum likelihood estimators of the change points and use a log-likelihood ratio statistic to test for multiple change points, obtaining some asymptotic results. Simulation experiments indicate that the proposed change-point estimator is more efficient than its nonparametric counterpart, and a real-data application illustrates the use of the model.
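
As a concrete, fully parametric analogue of the log-likelihood-ratio approach described above, the sketch below profiles a Gaussian likelihood over a single candidate change point in the mean; the paper's semiparametric model and multiple-change-point test are more general.

```python
# Gaussian profile log-likelihood-ratio scan for a single mean change point.
# For each candidate k, compare one common mean (H0) with separate means
# before and after k (H1); 2 * (loglik_H1 - loglik_H0) is the LR statistic.
import numpy as np

def lr_change_point(x):
    n = len(x)
    def neg2loglik(segs):
        # variance profiled out: -2 loglik = n * log(RSS / n) up to a constant
        rss = sum(((s - s.mean()) ** 2).sum() for s in segs)
        return n * np.log(rss / n)
    null = neg2loglik([x])
    stats = {k: null - neg2loglik([x[:k], x[k:]]) for k in range(5, n - 5)}
    k_hat = max(stats, key=stats.get)
    return k_hat, stats[k_hat]

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 80), rng.normal(0.9, 1, 70)])
k_hat, lr = lr_change_point(x)
print(f"estimated change point at {k_hat}, LR statistic {lr:.2f}")
```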

5.
In this work, we present a computational method to approximate the locations of change-points in a temporal series of independent and normally distributed observations with equal mean and two possible variance values. This type of series arises in the study of electrical signals associated with rhythmic activity patterns of nerves and muscles of animals, in which the change-points represent the moments when the electrical activity passes from a phase of silence to one of activity, or vice versa. We test the hypothesis that there is no change-point in the series against the alternative that there exists at least one change-point, using the corresponding likelihood ratio as the test statistic; a computational implementation of the quadratic penalization technique is employed to approximate the log-likelihood ratio associated with the two hypotheses. When the null hypothesis is rejected, the method provides estimates of the locations of the change-points. Moreover, the proposed method applies post-processing to avoid generating unrealistically short periods of silence or activity. The method is applied to the determination of change-points in both experimental and synthetic data sets; in both cases the results are more than satisfactory.
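
A minimal single-change-point version of the likelihood ratio described above is sketched below: one shared mean, two variance regimes. The quadratic-penalization machinery and the post-processing step are omitted, and the shared mean is simply estimated by the overall average, which is an assumption of this sketch.

```python
# Likelihood-ratio scan for one change point in the variance of normal data
# that share a common mean: sigma1 before k, sigma2 from k on.
import numpy as np

def variance_change_scan(x, min_seg=10):
    n = len(x)
    mu = x.mean()                       # shared mean, estimated once (assumption)
    d2 = (x - mu) ** 2
    s0 = d2.mean()
    best = (-np.inf, None)
    for k in range(min_seg, n - min_seg):
        s1, s2 = d2[:k].mean(), d2[k:].mean()
        # -2 log Lambda with the mean held fixed at mu
        stat = n * np.log(s0) - k * np.log(s1) - (n - k) * np.log(s2)
        if stat > best[0]:
            best = (stat, k)
    return best

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 0.5, 100),   # "silence": low variance
                    rng.normal(0, 2.0, 80)])   # "activity": high variance
stat, k_hat = variance_change_scan(x)
print(f"LR statistic {stat:.1f}, estimated change point {k_hat}")
```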

6.
7.
This paper studies the asymptotic properties of a smoothed least absolute deviations (LAD) estimator in a nonlinear parametric model with multiple change-points occurring at unknown times, with independent and identically distributed errors. The model is nonlinear in the sense that, between two successive change-points, the regression function is nonlinear in the parameters. Monte Carlo simulations show that its performance is competitive with that of the LAD estimator and that it is more efficient than the least squares estimator, particularly in the presence of outliers. If the number of change-points is unknown, an estimation criterion for this number is proposed. The appeal of this method is that the objective function is approximated by a differentiable function and that, when the model contains outliers, it correctly detects the locations of the change-points.
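
The key device is replacing the non-differentiable absolute value in the LAD criterion with a smooth surrogate such as sqrt(u^2 + eps), so that standard gradient-based optimizers can be used between change points. The sketch below fits a single nonlinear (exponential) segment this way; the model and data are illustrative assumptions, not the paper's example.

```python
# Smoothed least absolute deviations for a nonlinear regression segment:
# minimize sum_i sqrt((y_i - f(x_i, theta))^2 + eps), a differentiable
# surrogate of the LAD objective that remains robust to outliers.
import numpy as np
from scipy.optimize import minimize

def f(x, theta):
    a, b = theta
    return a * np.exp(b * x)

def smoothed_lad(theta, x, y, eps=1e-6):
    r = y - f(x, theta)
    return np.sum(np.sqrt(r ** 2 + eps))

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)
y = 2.0 * np.exp(1.5 * x) + rng.normal(0, 0.2, size=x.size)
y[::25] += 5.0                                # a few gross outliers

fit_lad = minimize(smoothed_lad, x0=[1.0, 1.0], args=(x, y))
fit_ls = minimize(lambda t: np.sum((y - f(x, t)) ** 2), x0=[1.0, 1.0])
print("smoothed LAD estimate:", fit_lad.x)    # close to (2.0, 1.5)
print("least squares estimate:", fit_ls.x)    # pulled by the outliers
```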

8.
Skip-lot sampling plans are widely used in industry to reduce sampling costs and inspection effort when products have a good quality history, and they are particularly useful for minimizing inspection costs in costly or destructive testing. In this article, a new system of skip-lot sampling plans, designated the SkSP-2-R plan, is developed by incorporating a resampling procedure into the skip-lot sampling plan of type SkSP-2. A Markov chain formulation and a derivation of performance measures for the new plan are presented. The properties and advantages of the SkSP-2-R plan are studied with the single sampling plan as the reference plan. The response-to-change characteristics of the SkSP-2-R plan are also investigated, based on the average run length.
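
For orientation, the sketch below evaluates the operating characteristic commonly quoted for the classical SkSP-2 system with a single sampling plan (n, c) as reference plan; the additional resampling states of the SkSP-2-R Markov chain are not reproduced here, so this is only the baseline that the new plan extends.

```python
# Operating characteristic usually given for the classical SkSP-2 skip-lot
# system with a single sampling plan (n, c) as reference plan.  P is the lot
# acceptance probability of the reference plan, i the clearance number and
# f the fraction of lots inspected during the skipping state.
from math import comb

def single_plan_pa(p, n, c):
    # acceptance probability of the (n, c) single sampling reference plan
    return sum(comb(n, d) * p ** d * (1 - p) ** (n - d) for d in range(c + 1))

def sksp2_pa(p, n, c, i, f):
    P = single_plan_pa(p, n, c)
    return (f * P + (1 - f) * P ** i) / (f + (1 - f) * P ** i)

for p in (0.005, 0.01, 0.02, 0.05):
    print(f"p = {p:.3f}:  reference Pa = {single_plan_pa(p, 125, 2):.3f}, "
          f"SkSP-2 Pa = {sksp2_pa(p, 125, 2, i=4, f=1/3):.3f}")
```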

9.
The model chi-square used in linear structural equation modeling compares the fitted covariance matrix of a target model to an unstructured covariance matrix to assess global fit. For models with nonlinear terms, i.e., interaction or quadratic terms, this comparison is problematic because these models are not nested within the saturated model represented by the unstructured covariance matrix. We propose a novel measure that quantifies the heteroscedasticity of residuals in structural equation models. It is based on a comparison of the likelihood of the residuals under the assumption of heteroscedasticity with the likelihood under the assumption of homoscedasticity. The measure is designed to respond to omitted nonlinear terms in the structural part of the model that result in heteroscedastic residual scores. In a small Monte Carlo study, we demonstrate that the measure detects omitted nonlinear terms reliably when a linear model is incorrectly analyzed and the omitted terms account for substantial nonlinear effects. The results also indicate that the measure does not respond when the correct model or an overparameterized model is used.
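
The likelihood comparison can be illustrated outside the structural equation setting: the sketch below fits an ordinary regression, then compares the residual log-likelihood under homoscedasticity with that under a heteroscedastic model in which the log-variance depends linearly on the fitted value. Both the variance model and the regression example are assumptions of this illustration, not the authors' exact construction.

```python
# Compare residual log-likelihoods under homoscedastic and heteroscedastic
# normal models; a large gap suggests heteroscedastic residuals, e.g. from an
# omitted interaction term.  Illustrative variance model:
#   log sigma_i^2 = a + b * fitted_i
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.5 * x2 + 0.4 * x1 * x2 + rng.normal(0, 1, n)  # true model has x1*x2

X = np.column_stack([np.ones(n), x1, x2])        # linear model, interaction omitted
beta = np.linalg.lstsq(X, y, rcond=None)[0]
fitted, resid = X @ beta, y - X @ beta

# homoscedastic log-likelihood of the residuals
ll_hom = norm.logpdf(resid, scale=resid.std()).sum()

# heteroscedastic log-likelihood with log sigma_i^2 = a + b * fitted_i
def neg_ll_het(params):
    a, b = params
    sigma = np.exp(0.5 * (a + b * fitted))
    return -norm.logpdf(resid, scale=sigma).sum()

fit = minimize(neg_ll_het, x0=[np.log(resid.var()), 0.0], method="Nelder-Mead")
ll_het = -fit.fun
print(f"log-likelihood gap (heteroscedastic - homoscedastic): {ll_het - ll_hom:.2f}")
```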

10.
In this paper, we are concerned with designing surveys for detecting patches of an exploitable resource, such as a shellfish stock. We take as our objective that, if there are one or more circular patches above some specified size, then the probability of detecting at least one patch should be high. We show how the required sampling intensity can be modified in the light of information about the likely number and size of patches. The results are applied to two surveys for detecting exploitable patches of cockles (Cerastoderma edule).
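
The central design quantity is the probability that a systematic sampling grid hits a circular patch of a given size. The Monte Carlo sketch below estimates that probability for a square grid of stations with spacing d (the grid layout and parameter values are assumptions for illustration); inverting it gives the spacing needed for a target detection probability.

```python
# Monte Carlo probability that a circular patch of radius r is detected by a
# square sampling grid with station spacing d: drop the patch centre uniformly
# inside one grid cell and check whether any station lies within radius r.
import numpy as np

def detection_prob(r, d, n_sim=200_000, seed=0):
    rng = np.random.default_rng(seed)
    centre = rng.uniform(0.0, d, size=(n_sim, 2))        # patch centre within a cell
    # the nearest stations are the four corners of that cell
    corners = np.array([[0, 0], [0, d], [d, 0], [d, d]], dtype=float)
    dists = np.linalg.norm(centre[:, None, :] - corners[None, :, :], axis=2)
    return np.mean(dists.min(axis=1) <= r)

d = 100.0                                                # station spacing (m), assumed
for r in (20.0, 40.0, 60.0, 80.0):
    print(f"patch radius {r:5.1f} m: detection probability "
          f"{detection_prob(r, d):.3f}")
```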

11.

12.
13.
In this article, scan statistics for detecting a local change in variance in two-dimensional normal data are discussed. When the precise size of the rectangular window in which a local change in variance has occurred is unknown, multiple and variable window scan statistics are proposed. A simulation study is presented to evaluate the performance of the scan statistics investigated in this article via a comparison of power. A method for estimating the rectangular region where the change in variance has occurred, and the size of the change in variance, is also discussed.
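
A scan statistic of this kind slides rectangular windows across the data field and, for each window, compares the variance inside with the variance outside via a normal likelihood ratio, taking the maximum over windows and window sizes. A minimal multiple-window sketch, assuming zero-mean data, is given below.

```python
# Variable-window scan for a local change in variance in a 2D normal field.
# For each rectangular window the statistic is -2 log of the likelihood ratio
# of "one common variance" against "one variance inside, another outside"
# (zero mean assumed for simplicity); the scan statistic is the maximum.
import numpy as np

def scan_variance(field, window_sizes):
    n = field.size
    sq = field ** 2
    total = sq.sum()
    best = (-np.inf, None, None)
    for (h, w) in window_sizes:
        for i in range(field.shape[0] - h + 1):
            for j in range(field.shape[1] - w + 1):
                m = h * w
                s_in = sq[i:i + h, j:j + w].sum() / m
                s_out = (total - s_in * m) / (n - m)
                s_all = total / n
                stat = n * np.log(s_all) - m * np.log(s_in) - (n - m) * np.log(s_out)
                if stat > best[0]:
                    best = (stat, (i, j), (h, w))
    return best

rng = np.random.default_rng(6)
field = rng.normal(0, 1, size=(40, 40))
field[10:18, 22:32] *= 2.5            # local increase in standard deviation
stat, corner, size = scan_variance(field, window_sizes=[(6, 6), (8, 10), (10, 12)])
print(f"max scan statistic {stat:.1f} at corner {corner}, window {size}")
```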

14.
In many conventional scientific investigations with high or ultra-high dimensional feature spaces, the relevant features, though sparse, are large in number compared with classical statistical problems, and the magnitude of their effects tapers off. It is therefore reasonable to model the number of relevant features as a sequence that diverges as the sample size increases. In this paper, we investigate the properties of the extended Bayes information criterion (EBIC) (Chen and Chen, 2008) for feature selection in linear regression models with a diverging number of relevant features in high or ultra-high dimensional feature spaces. The selection consistency of the EBIC in this situation is established. The application of EBIC to feature selection is considered in a SCAD-cum-EBIC procedure. Simulation studies demonstrate the performance of the SCAD-cum-EBIC procedure in finite-sample cases.
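
For a Gaussian linear model, the EBIC of Chen and Chen (2008) adds to the ordinary BIC a term 2γ log C(p, k) that accounts for the number of candidate models of size k. The sketch below evaluates the criterion over small subsets by exhaustive search, which is only for illustration; the SCAD-cum-EBIC procedure in the paper generates candidate models differently.

```python
# Extended BIC (Chen & Chen, 2008) for Gaussian linear regression:
#   EBIC_gamma(S) = n * log(RSS_S / n) + |S| * log(n) + 2 * gamma * log(C(p, |S|))
# Larger gamma penalizes model size more heavily in high-dimensional settings.
import numpy as np
from math import comb, log
from itertools import combinations

def ebic(y, X, subset, gamma=1.0):
    n, p = X.shape
    if subset:
        Xs = X[:, list(subset)]
        beta = np.linalg.lstsq(Xs, y, rcond=None)[0]
        rss = np.sum((y - Xs @ beta) ** 2)
    else:
        rss = np.sum(y ** 2)
    k = len(subset)
    return n * log(rss / n) + k * log(n) + 2 * gamma * log(comb(p, k))

rng = np.random.default_rng(7)
n, p = 100, 30
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(size=n)   # true support {0, 3}

candidates = [c for k in range(1, 4) for c in combinations(range(p), k)]
best = min(candidates, key=lambda s: ebic(y, X, s, gamma=1.0))
print("subset selected by EBIC:", best)
```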

15.
When studying associations between a functional covariate and a scalar response using a functional linear model (FLM), scientific knowledge may indicate possible monotonicity of the unknown parameter curve. In this context, we propose an F-type test of monotonicity based on a full versus reduced nested model structure, where the reduced model, with a monotonically constrained parameter curve, is nested within an unconstrained FLM. For estimation under the unconstrained FLM, we consider two approaches: penalised least squares and linear mixed-effects model estimation. We use a smooth-then-monotonise approach to estimate the reduced model within the null space of monotone parameter curves. A bootstrap procedure is used to simulate the null distribution of the test statistic. We present a simulation study of the power of the proposed test, and illustrate the test using data from a head and neck cancer study.

16.
In this paper, a variables tightened-normal-tightened (TNT) two-plan sampling system based on the widely used capability index Cpk is developed for product acceptance determination when the quality characteristic of the product has two-sided specification limits and follows a normal distribution. The operating procedure and operating characteristic (OC) function of the variables TNT two-plan sampling system, together with the conditions for solving the plan parameters, are provided. The behavior of the OC curves of the variables TNT sampling system under various parameters is also studied and compared with the variables single tightened inspection plan and the single normal inspection plan.
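
The reference plan of such a system accepts a lot when the sample estimate of Cpk exceeds a critical value. The sketch below estimates the acceptance probability of that single Cpk-based plan by simulation for a few process settings (sample size, critical value, and specification limits are assumed values); the tightened/normal switching rules of the TNT system are not implemented here.

```python
# Simulated acceptance probability of a variables single sampling plan based
# on Cpk: draw a sample of size n_s, compute the sample index Cpk_hat, and
# accept the lot when Cpk_hat >= ka.
import numpy as np

def pa_cpk_plan(mu, sigma, lsl, usl, n_s, ka, n_sim=50_000, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.normal(mu, sigma, size=(n_sim, n_s))
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)
    cpk_hat = np.minimum(usl - xbar, xbar - lsl) / (3 * s)
    return np.mean(cpk_hat >= ka)

lsl, usl = 0.0, 12.0                                     # assumed specification limits
for sigma in (1.0, 1.2, 1.5, 2.0):
    true_cpk = min(usl - 6.0, 6.0 - lsl) / (3 * sigma)   # process centred at 6
    pa = pa_cpk_plan(mu=6.0, sigma=sigma, lsl=lsl, usl=usl, n_s=50, ka=1.33)
    print(f"true Cpk = {true_cpk:.2f}: acceptance probability = {pa:.3f}")
```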

17.
Collings and Margolin (1985) developed a locally most powerful unbiased test for detecting negative binomial departures from a Poisson model when the variance is a quadratic function of the mean. Kim and Park (1992) developed a locally most powerful unbiased test for the case where the variance is a linear function of the mean. A different mean-variance structure of the negative binomial thus yields a different locally optimal test statistic.

In this paper, Collings and Margolin's and Kim and Park's results are unified and extended by developing a test for overdispersion in the Poisson model against the Katz family of distributions. Our setup has two extensions: first, the Katz family of distributions is employed as an extension of the negative binomial distribution; second, the mean-variance structure of the mixed Poisson model is given by σ² = μ + cμ^r for arbitrary but fixed r. We derive a local score test for testing H0: c = 0. The superiority of the new test is demonstrated through asymptotic relative efficiency as well as a simulation study.
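
For the quadratic case r = 2 (variance μ + cμ²) with i.i.d. counts, a score-type statistic for H0: c = 0 has a simple closed form; a sketch in that spirit is given below. The general-r statistic and the Katz-family derivation are not reproduced, so this should be read as an illustrative special case rather than the authors' test.

```python
# Score-type test for overdispersion in i.i.d. Poisson counts against a
# variance of the form mu + c * mu^2 (the r = 2 case):
#   T = sum((y - ybar)^2 - y) / (ybar * sqrt(2 * n)),  T ~ N(0, 1) under H0.
import numpy as np
from scipy.stats import norm

def overdispersion_score(y):
    y = np.asarray(y, dtype=float)
    n, ybar = len(y), y.mean()
    T = np.sum((y - ybar) ** 2 - y) / (ybar * np.sqrt(2 * n))
    return T, 1 - norm.cdf(T)          # one-sided p-value (c > 0)

rng = np.random.default_rng(8)
y_pois = rng.poisson(4.0, size=200)                      # equidispersed
y_nb = rng.negative_binomial(5, 5 / 9, size=200)         # mean 4, overdispersed

for name, y in [("Poisson", y_pois), ("negative binomial", y_nb)]:
    T, pval = overdispersion_score(y)
    print(f"{name:18s}: T = {T:5.2f}, p = {pval:.4f}")
```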

18.
The aim of this work is to develop a test to distinguish between heavy- and super-heavy-tailed probability distributions. These classes of distributions are relevant in areas such as telecommunications and insurance risk, among others. By heavy-tailed distributions we mean probability distributions with polynomially decreasing upper tails (regularly varying tails). The term super-heavy is reserved for right tails decreasing to zero at a slower rate, such as logarithmic rates or slower (slowly varying tails). Simulations are presented for several models, and an application to telecommunications data is provided.
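
One standard building block for examining Pareto-type versus slower-than-Pareto tails is the Hill estimator: it stabilizes for regularly varying (heavy) tails and keeps drifting for slowly varying (super-heavy) tails. The sketch below illustrates this contrast on simulated Pareto and log-Pareto data; it is only a diagnostic, not the test developed in the paper.

```python
# Hill estimator of the tail index as a simple diagnostic: for a regularly
# varying (heavy) tail the estimate stabilizes near 1/alpha, whereas for a
# slowly varying (super-heavy) tail, e.g. log-Pareto data, it keeps growing
# as more upper order statistics are included.
import numpy as np

def hill(x, k):
    xs = np.sort(x)[::-1]                       # descending order statistics
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

rng = np.random.default_rng(10)
n = 20_000
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
pareto = (1 - u1) ** (-1 / 2.0)                 # Pareto tail, alpha = 2 (heavy)
log_pareto = np.exp((1 - u2) ** (-1 / 4.0))     # P(X > x) = (log x)^(-4) (super-heavy)

for k in (100, 500, 2000, 5000):
    print(f"k = {k:5d}: Hill (Pareto) = {hill(pareto, k):.2f},  "
          f"Hill (log-Pareto) = {hill(log_pareto, k):.2f}")
```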

19.
This note examines the effect of equicorrelation among the observations on Grubbs' (1950) procedure for detecting an outlier, which was derived for a sample of n independent observations. It is shown that the procedure is robust; in fact, the significance level remains unchanged.
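
Grubbs' statistic is the maximum absolute studentized deviation from the sample mean. The sketch below computes it together with a Monte Carlo critical value under independent normal observations; the note's point, that the same significance level applies under equicorrelation, is not something this sketch attempts to verify.

```python
# Grubbs' outlier statistic G = max_i |x_i - xbar| / s, with a Monte Carlo
# null distribution computed under independent normal observations.
import numpy as np

def grubbs_statistic(x):
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def grubbs_critical(n, alpha=0.05, n_sim=100_000, seed=0):
    rng = np.random.default_rng(seed)
    sims = rng.normal(size=(n_sim, n))
    g = np.max(np.abs(sims - sims.mean(axis=1, keepdims=True)), axis=1) \
        / sims.std(axis=1, ddof=1)
    return np.quantile(g, 1 - alpha)

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 13.5])  # last value suspect
g = grubbs_statistic(x)
crit = grubbs_critical(len(x))
print(f"G = {g:.2f}, 5% critical value = {crit:.2f}, "
      f"outlier declared: {g > crit}")
```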

20.
In analyzing data from unreplicated factorial designs, the half-normal probability plot is commonly used to screen for the 'vital few' effects. Recently, many formal methods have been proposed to overcome the subjectivity of this plot. Lawson (1998) (hereafter denoted LGB) suggested a hybrid method based on the half-normal probability plot, which is a blend of the Lenth (1989) and Loh (1992) methods. The method consists of fitting a simple least squares line to the inliers, which are determined by the Lenth method; effects exceeding the prediction limits based on the fitted line are candidates for the vital few effects. To improve the accuracy of partitioning the effects into inliers and outliers, we propose a modified LGB method (hereafter denoted the Mod_LGB method), in which more outliers can be classified by using both Carling's modification of the box plot (Carling, 2000) and the Lenth method. If no outlier exists, or if there is a wide range in the inliers as determined by the Lenth method, more outliers can be found by the Carling method. A simulation study is conducted on unreplicated 2^4 designs with the number of active effects ranging from 1 to 6 to compare the efficiency of the Lenth method, the original LGB method, and the proposed modified version of the LGB method.
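
Both the LGB method and the proposed modification start from Lenth's (1989) pseudo standard error (PSE) of the effect estimates to separate inliers from potentially active effects. A minimal sketch of that Lenth step for an unreplicated 2^4 design is shown below; the Carling box-plot modification and the fitted-line prediction limits of the LGB procedure are not implemented.

```python
# Lenth's (1989) pseudo standard error for effects from an unreplicated 2^4
# design:  s0 = 1.5 * median(|c|),  PSE = 1.5 * median(|c_j| : |c_j| < 2.5*s0),
# margin of error ME = t_{0.975, m/3} * PSE; effects exceeding ME are flagged.
import numpy as np
from scipy.stats import t

def lenth(effects, alpha=0.05):
    c = np.abs(np.asarray(effects, dtype=float))
    s0 = 1.5 * np.median(c)
    pse = 1.5 * np.median(c[c < 2.5 * s0])
    d = len(c) / 3.0
    me = t.ppf(1 - alpha / 2, d) * pse
    return pse, me, np.where(c > me)[0]

rng = np.random.default_rng(9)
effects = rng.normal(0, 1, 15)          # 15 effect estimates from a 2^4 design
effects[[0, 3, 7]] += [8.0, -6.0, 5.0]  # three active effects
pse, me, active = lenth(effects)
print(f"PSE = {pse:.2f}, margin of error = {me:.2f}, active effects: {active}")
```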
