Similar Literature
20 similar documents were retrieved.
1.
Statistics practitioners often ignore the underlying assumptions when analyzing real data and employ the Nonlinear Least Squares (NLLS) method to estimate the parameters of a nonlinear model. Reliable inferences about the parameters of a model require that the underlying assumptions, especially the assumption that the errors are independent, are satisfied. In real situations, however, we may encounter dependent error terms, which are prone to produce autocorrelated errors. A two-stage estimator (CTS) has been developed to remedy this problem. Nevertheless, it is now evident that the presence of outliers has an undue effect on least squares estimates. We expect that the CTS is also easily affected by outliers, since it is based on the least squares estimator, which is not robust. In this article, we propose a Robust Two-Stage (RTS) procedure for estimating the nonlinear regression parameters when autocorrelated errors occur together with outliers. A numerical example and a simulation study show that the RTS is more efficient than the NLLS and CTS methods.
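The two-stage idea can be sketched in a few lines: fit the model by NLLS, estimate the autocorrelation of the errors from the residuals, transform the model to whiten the errors, and refit with a robust loss. The sketch below is a hypothetical illustration assuming AR(1) errors and a Huber loss; the article's actual RTS estimator may use a different robust criterion and transformation.

```python
# A minimal sketch of a robust two-stage (RTS-style) fit for a nonlinear model with
# AR(1) errors. The model, the AR(1) correction, and the Huber loss are illustrative
# assumptions; the article's exact procedure may differ.
import numpy as np
from scipy.optimize import least_squares

def model(beta, x):
    # hypothetical exponential-growth model y = b0 * exp(b1 * x)
    return beta[0] * np.exp(beta[1] * x)

def rts_fit(x, y, beta0):
    # Stage 1: ordinary NLLS fit and residuals
    fit1 = least_squares(lambda b: y - model(b, x), beta0)
    resid = y - model(fit1.x, x)

    # Estimate the AR(1) coefficient of the errors (here by simple lag-1 regression;
    # a robust autocorrelation estimate could be substituted)
    rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)

    # Stage 2: Cochrane-Orcutt-type transformation of the residual function,
    # refitted with a Huber loss to downweight outliers
    def transformed_resid(b):
        e = y - model(b, x)
        return e[1:] - rho * e[:-1]

    fit2 = least_squares(transformed_resid, fit1.x, loss="huber", f_scale=1.0)
    return fit2.x, rho

rng = np.random.default_rng(0)
x = np.linspace(0, 2, 100)
e = np.zeros(100)
for t in range(1, 100):                      # AR(1) errors
    e[t] = 0.6 * e[t - 1] + rng.normal(scale=0.1)
y = model([2.0, 1.5], x) + e
y[::25] += 3.0                               # a few gross outliers
print(rts_fit(x, y, beta0=[1.0, 1.0]))
```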

2.
This article considers the two-piece normal-Laplace (TPNL) distribution, a split skew distribution consisting of a normal part and a Laplace part. The distribution is indexed by three parameters representing location, scale, and shape. As illustrated with several examples, the TPNL family of distributions provides a useful alternative to other families of asymmetric distributions on the real line. However, because the likelihood function is not well behaved, the standard theory of maximum-likelihood (ML) estimation does not apply to the TPNL family. In particular, the likelihood function can have multiple local maxima. We provide a procedure for computing ML estimators and prove consistency and asymptotic normality of the ML estimators using nonstandard methods.
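To make the shape of such a split distribution concrete, the sketch below writes down one possible TPNL-style density (a normal kernel on one side of the location, a Laplace kernel on the other, joined with a common normalizing constant) and fits it by maximum likelihood with multiple random starting points to cope with local maxima. The parameterization and the multi-start scheme are assumptions for illustration only, not the article's exact family or algorithm.

```python
# A minimal sketch of a two-piece normal-Laplace (TPNL) density and a multi-start
# ML fit. The parameterization below is an illustrative assumption.
import numpy as np
from scipy.optimize import minimize

def tpnl_logpdf(x, mu, sigma, kappa):
    # Piecewise log-density: normal kernel for x < mu, Laplace kernel for x >= mu,
    # with a common normalizing constant so the two pieces integrate to 1.
    z = (x - mu) / sigma
    left = -0.5 * z ** 2                       # normal kernel
    right = -kappa * z                         # Laplace kernel
    kernel = np.where(x < mu, left, right)
    const = sigma * (np.sqrt(np.pi / 2) + 1.0 / kappa)   # integral of the kernel
    return kernel - np.log(const)

def fit_tpnl(x, n_starts=20, seed=0):
    # Multi-start optimization to cope with multiple local maxima of the likelihood.
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        start = [np.quantile(x, rng.uniform(0.2, 0.8)),   # location
                 x.std() * rng.uniform(0.5, 2.0),         # scale
                 rng.uniform(0.3, 3.0)]                   # shape
        res = minimize(lambda p: -np.sum(tpnl_logpdf(x, p[0], abs(p[1]) + 1e-8,
                                                     abs(p[2]) + 1e-8)),
                       start, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    mu, sigma, kappa = best.x
    return mu, abs(sigma), abs(kappa)

x = np.random.default_rng(1).normal(size=500)
print(fit_tpnl(x))
```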

3.
Two-phase study designs can reduce the cost and other practical burdens associated with large-scale epidemiologic studies by limiting ascertainment of expensive covariates to a smaller but informative sub-sample (phase-II) of the main study (phase-I). During the analysis of such studies, however, subjects who are selected at phase-I but not at phase-II remain informative, as they may have partial covariate information. A variety of semi-parametric methods now exist for incorporating such data from phase-I subjects when the covariate information can be summarized into a finite number of strata. In this article, we consider extending the pseudo-score approach proposed by Chatterjee et al. (J Am Stat Assoc 98:158–168, 2003) using a kernel smoothing approach to incorporate information on continuous phase-I covariates. Practical issues and algorithms for implementing the methods using existing software are discussed. A sandwich-type variance estimator based on the influence function representation of the pseudo-score function is proposed. The finite sample performance of the methods is studied using simulated data. The advantage of the proposed smoothing approach over alternative methods that use discretized phase-I covariate information is illustrated using two-phase data simulated within the National Wilms Tumor Study (NWTS).

4.
In this article, we propose a nonparametric procedure to estimate the integrated volatility of an Itô semimartingale in the presence of jumps and microstructure noise. The estimator is based on a combination of the preaveraging method and the threshold technique, which serve to remove microstructure noise and jumps, respectively. The estimator is shown to work for both finite and infinite activity jumps. Furthermore, asymptotic properties of the proposed estimator, such as consistency and a central limit theorem, are established. Simulation results are given to evaluate the performance of the proposed method in comparison with other alternative methods.
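A rough sketch of how preaveraging and thresholding combine is given below: noisy returns are averaged over overlapping windows with a triangular weight, unusually large preaveraged returns (likely jump-affected) are discarded, and a correction for the remaining noise bias is subtracted. The window rule, threshold, and bias correction are simplifications of the standard preaveraging recipe and are not the article's exact estimator.

```python
# A minimal sketch of a preaveraged, thresholded estimator of integrated volatility.
# Constants and the threshold rule are illustrative assumptions.
import numpy as np

def preaveraged_truncated_iv(prices, theta=0.5, c_thresh=4.0):
    logp = np.log(prices)
    dx = np.diff(logp)                       # noisy log-returns
    n = dx.size
    kn = int(np.ceil(theta * np.sqrt(n)))    # preaveraging window

    g = np.minimum(np.arange(1, kn) / kn, 1 - np.arange(1, kn) / kn)  # weight g(x) = x ∧ (1-x)
    psi2 = np.sum(g ** 2)                                             # sum of squared weights
    dg2 = np.sum(np.diff(np.concatenate(([0.0], g, [0.0]))) ** 2)     # sum of squared weight increments

    # preaveraged returns over overlapping windows
    bars = np.array([np.sum(g * dx[i:i + kn - 1]) for i in range(n - kn + 2)])

    # crude threshold to discard jump-affected windows
    u = c_thresh * np.std(bars)
    kept = bars[np.abs(bars) <= u]

    # noise variance estimate; each preaveraged return carries roughly
    # omega^2 * dg2 of pure-noise variance, which is subtracted as a bias correction
    omega2 = np.sum(dx ** 2) / (2 * n)
    iv_hat = (np.sum(kept ** 2) - kept.size * omega2 * dg2) / psi2
    return max(iv_hat, 0.0)

# toy example: Brownian path + i.i.d. noise + one jump
rng = np.random.default_rng(0)
n = 23400
ret = rng.normal(scale=np.sqrt(0.04 / n), size=n)
ret[n // 2] += 0.05                          # jump
p = 100 * np.exp(np.cumsum(ret)) * np.exp(rng.normal(scale=5e-4, size=n))
print(preaveraged_truncated_iv(p))           # target integrated variance is about 0.04
```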

5.
This article considers likelihood methods for estimating the causal effect of treatment assignment in a two-armed randomized trial assuming all-or-none treatment noncompliance and allowing for subsequent nonresponse. We first derive the observed data likelihood function as a closed-form expression in the parameters given the observed data, where both the response and the compliance state are treated as variables with missing values. We then describe an iterative procedure which maximizes the observed data likelihood function directly to compute the maximum likelihood estimator (MLE) of the causal effect of treatment assignment. Closed-form expressions at each iterative step are provided. Finally, we compare the MLE with an alternative estimator in which the probability distribution of the compliance state is estimated independently of the response and its missingness mechanism. Our work indicates that direct maximum likelihood inference is straightforward for this problem. Extensive simulation studies are provided to examine the finite sample performance of the proposed methods.

6.
This article develops critical values to test the null hypothesis of a unit root against the alternative of stationarity with asymmetric adjustment. Specific attention is paid to threshold and momentum threshold autoregressive processes. The standard Dickey–Fuller tests emerge as a special case. Within a reasonable range of adjustment parameters, the power of the new tests is shown to be greater than that of the corresponding Dickey–Fuller test. The use of the tests is illustrated using the term structure of interest rates. It is shown that the movements toward the long-run equilibrium relationship are best estimated as an asymmetric process.
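The threshold test can be illustrated with a short simulation: the first difference of the series is regressed on the lagged level separately for observations above and below the threshold, and the joint F statistic for no adjustment in either regime is compared with critical values obtained by simulating random walks under the null. The zero threshold, lag choice, and simulation design below are illustrative assumptions, not the article's tabulated critical values.

```python
# A minimal sketch of a threshold-autoregressive (TAR-style) unit-root test with
# asymmetric adjustment and Monte-Carlo critical values.
import numpy as np

def tar_f_stat(y):
    dy = np.diff(y)
    ylag = y[:-1]
    ind = (ylag >= 0).astype(float)                      # regime indicator (threshold = 0)
    X = np.column_stack([ind * ylag, (1 - ind) * ylag])
    beta, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (dy.size - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    # F statistic for H0: both adjustment coefficients are zero (unit root)
    return float(beta @ np.linalg.inv(cov) @ beta / X.shape[1])

def simulated_critical_value(n, alpha=0.05, reps=2000, seed=0):
    # null distribution: pure random walk, no asymmetric adjustment
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(reps):
        rw = np.cumsum(rng.normal(size=n))
        stats.append(tar_f_stat(rw - rw.mean()))
    return float(np.quantile(stats, 1 - alpha))

rng = np.random.default_rng(1)
n = 300
y = np.zeros(n)
for t in range(1, n):                                    # asymmetric mean reversion
    rho = -0.25 if y[t - 1] >= 0 else -0.05
    y[t] = y[t - 1] + rho * y[t - 1] + rng.normal()
print(tar_f_stat(y - y.mean()), simulated_critical_value(n))
```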

7.
A test procedure for testing homogeneity of location parameters against a simple ordered alternative is proposed for k (k ≥ 2) members of the two-parameter exponential distribution under unbalanced data and heteroscedasticity of the scale parameters. The relevant one-sided and two-sided simultaneous confidence intervals (SCIs) for all k(k − 1)/2 ordered pairwise differences of the location parameters are also proposed. A simulation-based study revealed that the proposed procedure is better than a recently proposed procedure in terms of power, coverage probability, and average volume of the SCIs. The implementation of the proposed procedure is demonstrated through real-life data.

8.
In this article, we discuss the estimation of stochastic volatility (SV) using generalized empirical likelihood/minimum contrast methods based on moment condition models. We show via Monte Carlo simulations that the proposed methods have superior or equivalent performance to the other alternative methods, and, additionally, they offer robustness properties in the presence of heavy-tailed distributions and outliers.

9.
In the nonparametric setting, the standard bootstrap method is based on the empirical distribution function of a random sample. The author proposes, by means of the empirical likelihood technique, an alternative bootstrap procedure under a nonparametric model in which one has some auxiliary information about the population distribution. By proving the almost sure weak convergence of the modified bootstrapped empirical process, the validity of the proposed bootstrap procedure is established. This new result is used to obtain bootstrap confidence bands for the population distribution function and to perform the bootstrap Kolmogorov test in the presence of auxiliary information. Other applications include bootstrapping means and variances with auxiliary information. Three simulation studies are presented to demonstrate the performance of the proposed bootstrap procedure for small samples.
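The idea can be sketched as follows: the auxiliary information (here, a known population mean of an auxiliary variable) is imposed through empirical likelihood weights, and the bootstrap then resamples observations with those weights rather than uniformly. The single mean constraint and the Newton solver in the sketch are assumptions for illustration.

```python
# A minimal sketch of bootstrapping with auxiliary information via empirical
# likelihood weights.
import numpy as np

def el_weights(g):
    # g: n-vector of constraint values g(x_i) with population mean zero.
    # Weights p_i = 1 / (n * (1 + lam * g_i)) with lam chosen so sum(p_i * g_i) = 0.
    n = g.size
    lam = 0.0
    for _ in range(50):                          # Newton iterations for the multiplier
        denom = 1.0 + lam * g
        f = np.sum(g / denom)
        fprime = -np.sum(g ** 2 / denom ** 2)
        step = f / fprime
        lam -= step
        if abs(step) < 1e-12:
            break
    p = 1.0 / (n * (1.0 + lam * g))
    return p / p.sum()

def el_bootstrap_mean(y, x, mu_x, n_boot=2000, seed=0):
    # y: variable of interest; x: auxiliary variable with known mean mu_x.
    rng = np.random.default_rng(seed)
    p = el_weights(x - mu_x)
    idx = rng.choice(y.size, size=(n_boot, y.size), p=p)
    return y[idx].mean(axis=1)                   # bootstrap distribution of the mean

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, size=200)
y = 2.0 * x + rng.normal(size=200)
boot = el_bootstrap_mean(y, x, mu_x=5.0)
print(np.quantile(boot, [0.025, 0.975]))         # EL-weighted bootstrap interval for E[Y]
```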

10.
It is known that several widely used structural change tests have non-monotonic power because the long-run variance is poorly estimated under the alternative hypothesis. In this paper, we propose a modified long-run variance estimator to alleviate this problem. We theoretically show that the tests with our long-run variance estimator are consistent against large multiple structural changes. Simulation results show that the proposed test performs well in finite samples.
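For reference, a standard Bartlett-kernel long-run variance estimator, the quantity whose behavior under the alternative drives the non-monotonic power problem, can be computed as below. The paper's modification of this estimator is not reproduced here, and the bandwidth rule shown is only a common default.

```python
# A minimal sketch of a Bartlett-kernel (Newey-West-type) long-run variance estimator.
import numpy as np

def bartlett_lrv(u, bandwidth=None):
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    n = u.size
    if bandwidth is None:
        bandwidth = int(np.floor(4 * (n / 100) ** (2 / 9)))   # common rule of thumb
    gamma0 = u @ u / n
    lrv = gamma0
    for k in range(1, bandwidth + 1):
        gamma_k = u[k:] @ u[:-k] / n
        lrv += 2 * (1 - k / (bandwidth + 1)) * gamma_k        # Bartlett weights
    return lrv

rng = np.random.default_rng(0)
e = np.zeros(500)
for t in range(1, 500):                                       # AR(1) with rho = 0.5
    e[t] = 0.5 * e[t - 1] + rng.normal()
print(bartlett_lrv(e))   # should be near 1 / (1 - 0.5)^2 = 4 in large samples
```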

11.
In this paper, we consider the inferential procedures for the generalized inverted exponential distribution under progressive first failure censoring. The exact confidence interval for the scale parameter is derived. The generalized confidence intervals (GCIs) for the shape parameter and some commonly used reliability metrics such as the quantile and the reliability function are explored. Then the proposed procedure is extended to the prediction interval for the future measurement. The GCIs for the reliability of the stress-strength model are discussed under both equal scale and unequal scale scenarios. Extensive simulations are used to demonstrate the performance of the proposed GCIs and prediction interval. Finally, an example is used to illustrate the proposed methods.

12.
In a discrete-part manufacturing process, the noise is often described by an IMA(1,1) process and the pure unit delay transfer function is used as the feedback controller to adjust it. The optimal controller for this process is the well-known minimum mean square error (MMSE) controller. The starting level of the IMA(1,1) model is assumed to be on target when the process starts. Because this assumption is impractical, we adopt a starting offset. Since the starting offset is not observable, the MMSE controller does not exist. An alternative to the MMSE controller is the minimum asymptotic mean square error controller, which makes the long-run mean square error minimum. Another concern in this article is the instability of the controller, which may produce high adjustment costs and/or may exceed the physical bounds of the process adjustment. These practical barriers will prevent the controller from adjusting the process properly. To avoid this dilemma, a resetting design is proposed. That is, the resetting procedure adjusts the process according to the controller while the adjustment remains within the reset limit, and resets the process otherwise. The total cost for the manufacturing process is affected by the off-target cost, the adjustment cost, and the reset cost. Proper values for the reset limit are selected to minimize the average cost per reset interval (ACR) under various process parameters and cost parameters. A time non-homogeneous Markov chain approach is used for calculating the ACR. The effect of adopting the starting offset is also studied here.
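A toy simulation of the setting is sketched below: an IMA(1,1) disturbance starting off target is compensated by an integral-type controller, and the accumulated adjustment is reset to zero whenever it exceeds a reset limit. The controller gain, the reset rule, and the reported quantities are illustrative assumptions rather than the article's ACR optimization or Markov chain calculation.

```python
# A minimal sketch of process adjustment against an IMA(1,1) disturbance with a
# reset limit on the accumulated adjustment.
import numpy as np

def simulate_adjusted_process(n=2000, theta=0.6, start_offset=2.0,
                              reset_limit=5.0, seed=0):
    rng = np.random.default_rng(seed)
    a_prev = 0.0
    noise = start_offset                 # IMA(1,1) disturbance, off target at start
    adjustment = 0.0                     # accumulated controller action
    deviations, resets = [], 0
    for _ in range(n):
        a = rng.normal()
        noise += a - theta * a_prev      # IMA(1,1): N_t = N_{t-1} + a_t - theta * a_{t-1}
        a_prev = a
        observed = noise + adjustment    # deviation actually seen at the output
        deviations.append(observed)
        # integral-type controller: counteract a fraction (1 - theta) of the deviation
        adjustment -= (1 - theta) * observed
        if abs(adjustment) > reset_limit:
            adjustment = 0.0             # reset the adjustment mechanism
            resets += 1
    deviations = np.array(deviations)
    return deviations @ deviations / n, resets   # mean squared deviation, reset count

print(simulate_adjusted_process())
```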

13.
Density-based clustering methods hinge on the idea of associating groups with the connected components of the level sets of the density underlying the data, to be estimated by a nonparametric method. These methods enjoy some desirable properties and generally good performance, but they involve a non-trivial computational effort, required for the identification of the connected regions. In a previous work, the use of a spatial tessellation such as the Delaunay triangulation was proposed, because it suitably generalizes the univariate procedure for detecting the connected components. However, its computational complexity grows exponentially with the dimensionality of the data, making the triangulation unfeasible for high dimensions. Our aim is to overcome the limitations of the Delaunay triangulation. We discuss the use of an alternative procedure for identifying the connected regions associated with the level sets of the density. By measuring the extent of possible valleys of the density along the segment connecting pairs of observations, the proposed procedure shifts the formulation from a space of arbitrary dimension to a univariate one, yielding benefits in both computation and visualization.
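The segment-based idea can be sketched directly: estimate the density, join two observations whenever the density along the segment between them never drops too far below the density at the endpoints, and take the connected components of the resulting graph as clusters. The kernel density estimator, the valley tolerance, and the grid along each segment below are illustrative choices, not the article's exact procedure.

```python
# A minimal sketch of clustering by checking density valleys along connecting segments.
import numpy as np
from scipy.stats import gaussian_kde

def valley_clusters(X, tol=0.8, n_grid=20):
    n = X.shape[0]
    kde = gaussian_kde(X.T)
    f = kde(X.T)                                  # density at the observations
    parent = list(range(n))                       # union-find for connected components

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    t = np.linspace(0.0, 1.0, n_grid)[:, None]
    for i in range(n):
        for j in range(i + 1, n):
            segment = (1 - t) * X[i] + t * X[j]   # points along the connecting segment
            min_density = kde(segment.T).min()
            # no deep valley between i and j -> same high-density region
            if min_density >= tol * min(f[i], f[j]):
                parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(n)])
    _, labels = np.unique(labels, return_inverse=True)
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.3, size=(60, 2)),
               rng.normal([3, 3], 0.3, size=(60, 2))])
print(np.bincount(valley_clusters(X)))            # expect two groups of roughly 60
```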

14.
Bootstrap methods for estimating the long-run covariance of stationary functional time series are considered. We introduce a versatile bootstrap method that relies on functional principal component analysis, where the principal component scores can be bootstrapped by maximum entropy. Two other bootstrap methods resample error functions after the dependence structure has been modeled linearly by a sieve method or nonlinearly by functional kernel regression. Through a series of Monte Carlo simulations, we evaluate and compare the finite-sample performance of these three bootstrap methods for estimating the long-run covariance of a functional time series. Using the intraday particulate matter (PM10) dataset from Graz, the proposed bootstrap methods provide a way of constructing the distribution of the estimated long-run covariance for functional time series.

15.
The article considers the problem of choosing between two (possibly) nonlinear models that have been fitted to the same data using M-estimation methods. An asymptotically normally distributed test statistic that takes into account the fact that the models are fitted robustly is given. The new procedure is compared with other test statistics using a Monte Carlo study. We found that the presence of a competitive model in either the null or the alternative hypothesis affects the distributional properties of the tests, and that when the data contain outlying observations the new procedure has a significantly higher power than the rest of the tests.

16.
This article extends the spatial panel data regression with fixed-effects to the case where the regression function is partially linear and some regressors may be endogenous or predetermined. Under the assumption that the spatial weighting matrix is strictly exogenous, we propose a sieve two stage least squares (S2SLS) regression. Under some sufficient conditions, we show that the proposed estimator for the finite dimensional parameter is root-N consistent and asymptotically normally distributed and that the proposed estimator for the unknown function is consistent and also asymptotically normally distributed but at a rate slower than root-N. Consistent estimators for the asymptotic variances of the proposed estimators are provided. A small scale simulation study is conducted, and the simulation results show that the proposed procedure has good finite sample performance.
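A stripped-down version of a sieve 2SLS fit, ignoring the spatial lag and fixed effects, is sketched below for a partially linear model with one endogenous regressor: the unknown function is replaced by a polynomial sieve, and the resulting linear model is estimated by 2SLS using the instrument together with the sieve terms. The data-generating process, the polynomial sieve, and the instrument set are illustrative assumptions.

```python
# A minimal sketch of a sieve two-stage least squares (2SLS) fit for
# y = x * beta + g(w) + error, with x endogenous and z an instrument.
import numpy as np

def sieve_2sls(y, x, w, z, sieve_order=4):
    # Sieve basis for the nonparametric part g(w)
    B = np.column_stack([w ** k for k in range(sieve_order + 1)])
    X = np.column_stack([x, B])                   # regressors: endogenous x + basis
    Z = np.column_stack([z, B])                   # instruments: z + basis (exogenous)
    # 2SLS: coef = (X' Pz X)^{-1} X' Pz y with Pz the projection onto Z
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    XtPz = X.T @ Z @ ZtZ_inv @ Z.T
    coef = np.linalg.solve(XtPz @ X, XtPz @ y)
    beta_hat = coef[0]                            # finite-dimensional parameter
    g_hat = lambda w_new: np.column_stack(
        [w_new ** k for k in range(sieve_order + 1)]) @ coef[1:]
    return beta_hat, g_hat

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)        # endogenous: correlated with u
w = rng.uniform(-1, 1, size=n)
y = 2.0 * x + np.sin(2 * w) + u                   # true beta = 2, g(w) = sin(2w)
beta_hat, g_hat = sieve_2sls(y, x, w, z)
print(beta_hat)                                    # should be close to 2
```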

17.
This article considers monitoring for a variance change in nonparametric regression models. First, the local linear estimator of the regression function is given. A moving squared cumulative sum procedure is then proposed based on the residuals of this estimator, and the asymptotic behavior of the statistic under the null and the alternative hypotheses is obtained. Simulations and an application support our procedure.
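The monitoring scheme can be sketched as follows: estimate the regression function by a local linear smoother, form squared residuals, and track a moving standardized sum that compares the recent average squared residual with a historical level. The Gaussian kernel, bandwidth, window length, and exact form of the statistic below are illustrative assumptions, not the article's statistic.

```python
# A minimal sketch of monitoring for a variance change after a local linear fit.
import numpy as np

def local_linear(x, y, x0, h):
    # Local linear estimate of m(x0) with a Gaussian kernel and bandwidth h
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    Xw = X * w[:, None]
    beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta[0]

def moving_square_cusum(x, y, h=0.1, window=50, train=200):
    m_hat = np.array([local_linear(x, y, xi, h) for xi in x])
    e2 = (y - m_hat) ** 2                          # squared residuals
    sigma2_hist = e2[:train].mean()                # historical (in-control) level
    scale = e2[:train].std()
    stats = []
    for t in range(window, e2.size + 1):
        recent = e2[t - window:t].mean()
        # standardized gap between recent and historical mean squared residuals
        stats.append(np.sqrt(window) * (recent - sigma2_hist) / scale)
    return np.array(stats)

rng = np.random.default_rng(0)
n = 400
x = np.sort(rng.uniform(0, 1, size=n))
sigma = np.where(np.arange(n) < 300, 0.1, 0.3)     # variance change after t = 300
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)
stat = moving_square_cusum(x, y)
print(stat.argmax() + 50, stat.max())              # statistic should peak after the change
```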

18.
The classical autocorrelation function might not be very informative when measuring dependence in binary time series. Recently, alternative tools, namely the autopersistence functions (APF) and their sample counterparts, the autopersistence graphs (APG), have been proposed for the analysis of dependent dichotomous variables. In this article, we summarize properties of the autopersistence functions for general binary series as well as for some important particular cases. We suggest a normalized version of the APF which might be more convenient for practical use. The asymptotic properties of the autopersistence graphs are investigated, and their consistency and asymptotic normality are discussed. The theoretical results are illustrated by a simulation study.
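The sample autopersistence functions are simple conditional relative frequencies and can be computed in a few lines, as in the sketch below; the normalized version of the APF proposed in the article is not reproduced here.

```python
# A minimal sketch of sample autopersistence functions for a binary time series:
# the lag-k probability that the series equals 1 given that it equaled 1 (or 0)
# k steps earlier, estimated by sample proportions.
import numpy as np

def autopersistence(x, max_lag=10):
    x = np.asarray(x)
    apf1, apf0 = [], []
    for k in range(1, max_lag + 1):
        past, future = x[:-k], x[k:]
        apf1.append(future[past == 1].mean())   # P_hat(X_{t+k}=1 | X_t = 1)
        apf0.append(future[past == 0].mean())   # P_hat(X_{t+k}=1 | X_t = 0)
    return np.array(apf1), np.array(apf0)

# toy example: a two-state Markov chain, which is persistent at short lags
rng = np.random.default_rng(0)
x = np.zeros(5000, dtype=int)
for t in range(1, x.size):
    stay = 0.9 if x[t - 1] == 1 else 0.8        # P(stay in current state)
    x[t] = x[t - 1] if rng.uniform() < stay else 1 - x[t - 1]
print(autopersistence(x, max_lag=5))
```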

19.
This article is devoted to the development of a product of spacings estimator for a progressive hybrid Type-I censoring scheme with binomial removals. The experimental units are assumed to follow the inverse Lindley distribution. We propose a Bayes estimator of the associated scale parameter based on the product of spacings function and compare it with that obtained under the usual Bayesian estimation procedure. The estimators are obtained under the squared error loss function, along with corresponding highest posterior density (HPD) intervals evaluated using the Markov chain Monte Carlo technique. The classical product of spacings estimator has also been derived and compared with the maximum likelihood estimator, together with 95% average asymptotic confidence intervals. The applicability of the proposed methods is demonstrated by analysing real data on guinea pigs infected with tuberculosis under the considered censoring scheme.
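The product of spacings criterion itself is easy to state: for ordered observations, the parameter maximizes the sum of logarithms of successive CDF increments. The sketch below illustrates this for the inverse Lindley distribution on a complete (uncensored) sample; the progressive hybrid censoring with binomial removals and the Bayesian treatment in the article are not reproduced, and the CDF used is the standard inverse Lindley form F(x) = (1 + theta / ((1 + theta) x)) exp(-theta / x).

```python
# A minimal sketch of the classical maximum product of spacings (MPS) estimator
# for the inverse Lindley distribution, complete-sample case only.
import numpy as np
from scipy.optimize import minimize_scalar

def inv_lindley_cdf(x, theta):
    return (1.0 + theta / ((1.0 + theta) * x)) * np.exp(-theta / x)

def mps_estimate(x):
    xs = np.sort(x)
    def neg_log_spacings(theta):
        u = inv_lindley_cdf(xs, theta)
        spacings = np.diff(np.concatenate(([0.0], u, [1.0])))   # D_1, ..., D_{n+1}
        spacings = np.clip(spacings, 1e-300, None)              # guard against ties
        return -np.sum(np.log(spacings))
    res = minimize_scalar(neg_log_spacings, bounds=(1e-3, 50.0), method="bounded")
    return res.x

# toy data drawn as reciprocals of Lindley variates (a Lindley variable is a mixture
# of Exp(theta) and Gamma(2, theta) with weights theta/(1+theta) and 1/(1+theta))
rng = np.random.default_rng(0)
theta_true = 1.5
comp = rng.uniform(size=1000) < theta_true / (1 + theta_true)
lindley = np.where(comp, rng.exponential(1 / theta_true, 1000),
                   rng.gamma(2.0, 1 / theta_true, 1000))
x = 1.0 / lindley
print(mps_estimate(x))          # should be near theta_true = 1.5
```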

20.
It has been suggested that existing estimates of the long-run impact of a surprise move in income may have a substantial upward bias due to the presence of a trend break in postwar U.S. gross national product data. This article shows that the statistical evidence does not warrant abandoning the no-trend-break null hypothesis. A key part of the argument is that conventionally computed p values overstate the likelihood of the trend-break alternative hypothesis, because they do not take into account that, in practice, the break date is chosen based on a pretest examination of the data.
