Similar Articles
20 similar articles found.
1.
A nonparametric method is considered which yields smoothed estimates of the response probabilities when the response variable is categorical. The method is based on Lauder's (1983) direct kernel estimates, which are extended to allow for ordinal kernels. Thus one can make use of the ordinal scale of the response variable. A class of predictive loss functions is introduced on which the cross-validatory choice of smoothing parameters is based. Plots of the smoothed response probabilities may be used to uncover the form of covariate effects.
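
A minimal numpy sketch of a kernel-smoothed estimate of category probabilities at a covariate value x0 (a plain Nadaraya-Watson form with a Gaussian kernel, not Lauder's ordinal-kernel construction); the function name, the fixed bandwidth h, and the toy data are illustrative assumptions.

```python
import numpy as np

def smoothed_response_probs(x, y, x0, h, n_categories):
    """Kernel-weighted estimate of P(Y = j | X = x0) for each category j.

    x : (n,) covariate values; y : (n,) integer labels in {0, ..., n_categories-1}.
    A Gaussian kernel with bandwidth h weights observations near x0.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    w = w / w.sum()                                 # normalise to sum to one
    return np.array([w[y == j].sum() for j in range(n_categories)])

# toy usage: smoothed probabilities over a grid of covariate values
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = (rng.uniform(size=200) < x).astype(int)         # true P(Y=1 | x) = x
grid = np.linspace(0.1, 0.9, 5)
print([smoothed_response_probs(x, y, g, h=0.1, n_categories=2) for g in grid])
```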

2.
Consider the problem of estimating the mean of a p (≥3)-variate multi-normal distribution with identity variance-covariance matrix and with unweighted sum of squared error loss. A class of minimax, noncomparable (i.e. no estimate in the class dominates any other estimate in the class) estimates is proposed; the class contains rules dominating the simple James-Stein estimates. The estimates are essentially smoothed versions of the scaled, truncated James-Stein estimates studied by Efron and Morris. Explicit and analytically tractable expressions for their risks are obtained and are used to give guidelines for selecting estimates within the class.
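
For reference, a small numpy sketch of the plain and truncated (positive-part) James-Stein rules that the proposed class smooths; this is standard textbook material, not the smoothed estimators introduced in the paper.

```python
import numpy as np

def james_stein(x):
    """Plain James-Stein estimate of a p-variate normal mean (identity covariance)."""
    p = x.size
    return (1.0 - (p - 2) / np.dot(x, x)) * x

def truncated_james_stein(x):
    """Positive-part (truncated) James-Stein rule, which dominates the plain rule."""
    p = x.size
    return max(0.0, 1.0 - (p - 2) / np.dot(x, x)) * x

x = np.array([1.2, -0.5, 0.3, 2.1, -1.7])
print(james_stein(x), truncated_james_stein(x))
```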

3.
We consider asymmetric kernel estimates based on grouped data. We propose an iterated scheme for constructing such an estimator and apply an iterated smoothed bootstrap approach for bandwidth selection. We compare our approach with competing methods in estimating actuarial loss models using both simulations and data studies. The simulation results show that with this new method, the estimated density from grouped data matches the true density more closely than with competing approaches.

4.
In linear quantile regression, the regression coefficients for different quantiles are typically estimated separately. Efforts to improve the efficiency of estimators are often based on assumptions of commonality among the slope coefficients. We propose instead a two-stage procedure whereby the regression coefficients are first estimated separately and then smoothed over quantile level. Due to the strong correlation between coefficient estimates at nearby quantile levels, existing bandwidth selectors will pick bandwidths that are too small. To remedy this, we use 10-fold cross-validation to determine a common bandwidth inflation factor for smoothing the intercept as well as slope estimates. Simulation results suggest that the proposed method is effective in pooling information across quantile levels, resulting in estimates that are typically more efficient than the separately obtained estimates and the interquantile shrinkage estimates derived using a fused penalty function. The usefulness of the proposed method is demonstrated in a real data example.
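
A small numpy sketch of the second-stage idea, assuming coefficient estimates have already been obtained separately on a grid of quantile levels: each coefficient path is kernel-smoothed over the quantile level. The Gaussian kernel, the grid, and the bandwidth (a base value times an inflation factor, which the paper chooses by 10-fold cross-validation) are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def smooth_over_tau(taus, beta_hat, h):
    """Nadaraya-Watson smoothing of each coefficient path over the quantile level.

    taus     : (m,) grid of quantile levels
    beta_hat : (m, p) separately estimated coefficients, one row per quantile level
    h        : bandwidth on the tau scale (base bandwidth times an inflation factor)
    """
    w = np.exp(-0.5 * ((taus[:, None] - taus[None, :]) / h) ** 2)  # (m, m) kernel weights
    w = w / w.sum(axis=1, keepdims=True)
    return w @ beta_hat                                            # smoothed coefficient paths

taus = np.linspace(0.1, 0.9, 17)
beta_hat = np.column_stack([1 + taus + 0.1 * np.random.randn(17),  # noisy intercept path
                            2 - taus + 0.1 * np.random.randn(17)]) # noisy slope path
print(smooth_over_tau(taus, beta_hat, h=0.15))
```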

5.
This paper extends the univariate time series smoothing approach provided by penalized least squares to a multivariate setting, thus allowing for joint estimation of several time series trends. The theoretical results are valid for the general multivariate case, but particular emphasis is placed on the bivariate situation from an applied point of view. The proposal is based on a vector signal-plus-noise representation of the observed data that requires the first two sample moments and specifying only one smoothing constant. A measure of the amount of smoothness of an estimated trend is introduced so that an analyst can set in advance a desired percentage of smoothness to be achieved by the trend estimate. The required smoothing constant is determined by the chosen percentage of smoothness. Closed form expressions for the smoothed estimated vector and its variance-covariance matrix are derived from a straightforward application of generalized least squares, thus providing best linear unbiased estimates for the trends. A detailed algorithm applicable for estimating bivariate time series trends is also presented and justified. The theoretical results are supported by a simulation study and two real applications. One corresponds to Mexican and US macroeconomic data within the context of business cycle analysis, and the other one to environmental data pertaining to a monitored site in Scotland.
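
A compact numpy sketch of the univariate penalized-least-squares (Hodrick-Prescott-type) trend that the paper extends to the multivariate case: the trend solves (I + λK'K)τ = y, where K is the second-difference matrix. Here the smoothing constant λ is set directly, whereas the paper derives it from a target percentage of smoothness.

```python
import numpy as np

def pls_trend(y, lam):
    """Penalized least squares trend: minimizes ||y - tau||^2 + lam * ||K tau||^2."""
    n = y.size
    K = np.diff(np.eye(n), n=2, axis=0)                    # (n-2, n) second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * K.T @ K, y)   # closed-form GLS-type solution

t = np.arange(100)
y = np.sin(2 * np.pi * t / 50) + 0.3 * np.random.randn(100)
trend = pls_trend(y, lam=1600.0)
```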

6.
This paper studies nonparametric kernel type (smoothed) estimation of quantiles for long memory stationary sequences. The uniform strong consistency and asymptotic normality of the estimates with rates are established. Finite sample behaviors are investigated in a small Monte Carlo simulation study.
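
A minimal numpy/scipy sketch of a classical kernel-type smoothed sample quantile, in which order statistics are weighted by a Gaussian kernel on the probability scale; it illustrates the kind of smoothed quantile estimate studied, not the long-memory asymptotics. The bandwidth value is an illustrative assumption.

```python
import numpy as np
from scipy.stats import norm

def kernel_quantile(x, p, h):
    """Kernel-smoothed p-quantile: order statistics weighted by a Gaussian kernel on [0, 1]."""
    xs = np.sort(x)
    n = xs.size
    grid = np.arange(n + 1) / n                           # 0, 1/n, ..., 1
    w = norm.cdf((grid[1:] - p) / h) - norm.cdf((grid[:-1] - p) / h)
    w = w / w.sum()                                       # renormalise the truncated weights
    return np.dot(w, xs)

x = np.random.standard_normal(500)
print(kernel_quantile(x, p=0.5, h=0.05), np.quantile(x, 0.5))
```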

7.
It is widely accepted that some financial data exhibit long memory or long-range dependence, and that the observed data usually contain noise. In the continuous-time setting, the fractional Brownian motion B_H and its extensions are an important class of models for characterizing the long memory or short memory of data, and the Hurst parameter H is an index describing the degree of dependence. In this article, we estimate the Hurst parameter of a discretely sampled fractional integral process corrupted by noise. We use the preaverage method to diminish the impact of noise, employ the filter method to exclude the strong dependence and obtain the smoothed data, and estimate the Hurst parameter from the smoothed data. The asymptotic properties of the estimator, such as consistency and asymptotic normality, are established. Simulations are conducted to evaluate the performance of the estimator. Supplementary materials for this article are available online.

8.
In this paper, we propose a smoothed Q-learning algorithm for estimating optimal dynamic treatment regimes. In contrast to the Q-learning algorithm in which nonregular inference is involved, we show that, under assumptions adopted in this paper, the proposed smoothed Q-learning estimator is asymptotically normally distributed even when the Q-learning estimator is not and its asymptotic variance can be consistently estimated. As a result, inference based on the smoothed Q-learning estimator is standard. We derive the optimal smoothing parameter and propose a data-driven method for estimating it. The finite sample properties of the smoothed Q-learning estimator are studied and compared with several existing estimators including the Q-learning estimator via an extensive simulation study. We illustrate the new method by analyzing data from the Clinical Antipsychotic Trials of Intervention Effectiveness–Alzheimer's Disease (CATIE-AD) study.

9.
In this article, variance stabilizing filters are discussed. A new filter with nice properties is proposed which makes use of moving averages and moving standard deviations, the latter smoothed with the Hodrick-Prescott filter. This filter is compared to a GARCH-type filter. An ARIMA model is estimated for the filtered GDP series, and the parameter estimates are used in forecasting the unfiltered series. These forecasts compare well with those of ARIMA, ARFIMA, and GARCH models based on the unfiltered data. The filter does not color white noise.
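
A rough numpy sketch of the kind of filter described: center the series with a moving average, compute a moving standard deviation, smooth that volatility path with a Hodrick-Prescott-type penalty, and divide. The window length, the HP smoothing constant, and the function names are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def hp_smooth(x, lam):
    """Hodrick-Prescott smoothing: solve (I + lam * K'K) s = x with second-difference K."""
    n = x.size
    K = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.solve(np.eye(n) + lam * K.T @ K, x)

def variance_stabilize(y, window=8, lam=1600.0):
    """Moving-average centering divided by an HP-smoothed moving standard deviation."""
    pad = window // 2
    ypad = np.pad(y, pad, mode="edge")
    ma = np.convolve(ypad, np.ones(window) / window, mode="same")[pad:-pad]
    sd = np.array([ypad[i:i + window].std() for i in range(y.size)])
    sd_smooth = np.maximum(hp_smooth(sd, lam), 1e-8)      # keep the denominator positive
    return (y - ma) / sd_smooth

y = np.cumsum(np.random.standard_normal(200)) * np.linspace(0.5, 2.0, 200)
print(variance_stabilize(y)[:5])
```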

10.
Seongyoung Kim, Statistics, 2015, 49(6): 1189-1203
For categorical data exhibiting nonignorable non-responses, it is well known that maximum likelihood (ML) estimates with a boundary solution are implausible and do not provide a perfect fit to the observed data even for saturated models. We provide the conditions under which ML estimates for the generalized linear model (GLM) with the usual log/logit link function have a boundary solution. These conditions introduce a new GLM with appropriately defined power link functions where its ML estimates resolve the problems arising from a boundary solution and offer useful statistics for identifying the non-response mechanism. This model is applied to a real dataset and compared with Bayesian models.

11.
The problem of selecting the bandwidth for optimal kernel density estimation at a point is considered. A class of local bandwidth selectors which minimize smoothed bootstrap estimates of mean-squared error in density estimation is introduced. It is proved that the bandwidth selectors in the class achieve optimal relative rates of convergence, dependent upon the local smoothness of the target density. Practical implementation of the bandwidth selection methodology is discussed. The use of Gaussian-based kernels to facilitate computation of the smoothed bootstrap estimate of mean-squared error is proposed. The performance of the bandwidth selectors is investigated empirically.
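
A brute-force numpy/scipy sketch of the underlying idea: resample from a pilot Gaussian-kernel density estimate (a smoothed bootstrap) and pick the local bandwidth that minimizes the Monte Carlo estimate of the mean-squared error of the density estimate at the point x0. The pilot bandwidth, the bandwidth grid, and the number of replicates are illustrative; the paper's selectors rely on analytic Gaussian-based computations rather than simulation.

```python
import numpy as np
from scipy.stats import norm

def kde_at(x0, data, h):
    """Gaussian kernel density estimate evaluated at the single point x0."""
    return norm.pdf((x0 - data) / h).mean() / h

def smoothed_bootstrap_bandwidth(data, x0, h_grid, g, B=200, rng=None):
    """Pick h minimizing a smoothed-bootstrap estimate of the MSE of the KDE at x0.

    g is the pilot bandwidth: bootstrap values are data[i*] + g * Z with Z ~ N(0, 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = data.size
    target = kde_at(x0, data, g)                       # pilot density value at x0
    mse = np.zeros(len(h_grid))
    for _ in range(B):
        xstar = rng.choice(data, n) + g * rng.standard_normal(n)
        mse += np.array([(kde_at(x0, xstar, h) - target) ** 2 for h in h_grid])
    return h_grid[int(np.argmin(mse / B))]

data = np.random.standard_normal(300)
print(smoothed_bootstrap_bandwidth(data, x0=0.0, h_grid=np.linspace(0.1, 1.0, 10), g=0.4))
```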

12.
In this article, we study a goodness-of-fit (GOF) test in the presence of length-biased sampling. For this purpose, we introduce a smoothed estimator of the distribution function (d.f.) and investigate its asymptotic behavior, such as uniform consistency and asymptotic normality. Based on this estimator, we define a one-sample Kolmogorov-type GOF test for length-biased data. We conduct Monte Carlo simulations to evaluate the performance of the proposed test statistic and compare it with the one-sample Kolmogorov-type GOF test obtained from the non-smoothed estimator of the d.f.

13.
We compare the accuracy of five approaches for contour detection in speckled imagery. Some of these methods take advantage of the statistical properties of speckled data, and all of them employ active contours using B-spline curves. Images obtained with coherent illumination are affected by a noise called speckle, which is inherent to the imaging process. These data have been statistically modeled by a multiplicative model using the G0 distribution, under which regions with different degrees of roughness can be characterized by the value of a parameter. We use this information to find boundaries between regions with different textures. We propose and compare five strategies for boundary detection: three based on the data (maximum discontinuity on raw data, fractal dimension and maximum likelihood) and two based on estimates of the roughness parameter (maximum discontinuity and anisotropic smoothed roughness estimates). In order to compare these strategies, a Monte Carlo experiment was performed to assess the accuracy of fitting a curve to a region. The probability of finding the correct edge with less than a specified error is estimated and used to compare the techniques. The two best procedures are then compared in terms of their computational cost and, finally, we show that the maximum likelihood approach on the raw data using the G0 law is the best technique.

14.
A likelihood-based approach to obtaining non-parametric estimates of the failure time distribution is developed for the copula based model of Wang et al. (Lifetime Data Anal 18:434–445, 2012) for current status data under dependent observation. Maximization of the likelihood involves a generalized pool-adjacent-violators algorithm. The estimator coincides with the standard non-parametric maximum likelihood estimate under an independence model. Confidence intervals for the estimator are constructed based on a smoothed bootstrap. It is also shown that the non-parametric failure distribution is only identifiable if the copula linking the observation and failure time distributions is fully specified. The method is illustrated on a previously analyzed tumorigenicity dataset.
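
Since the likelihood maximization reduces to a pool-adjacent-violators step, here is a short self-contained numpy sketch of the basic weighted PAVA for nondecreasing isotonic regression; the generalized version used in the paper works on the copula-weighted likelihood, which is not reproduced here.

```python
import numpy as np

def pava(y, w=None):
    """Weighted pool-adjacent-violators algorithm for nondecreasing isotonic regression."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, sizes = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); sizes.append(1)
        # pool adjacent blocks while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            s, wt = sizes[-2] + sizes[-1], wts[-2] + wts[-1]
            vals[-2:] = [v]; wts[-2:] = [wt]; sizes[-2:] = [s]
    return np.repeat(vals, sizes)

print(pava([1.0, 3.0, 2.0, 4.0, 3.5]))   # -> [1.  2.5  2.5  3.75  3.75]
```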

15.
This paper studies smoothed quantile linear regression models with response data missing at random. Three smoothed quantile empirical likelihood ratios are proposed first and shown to be asymptotically Chi-squared. Then, the confidence intervals for the regression coefficients are constructed without estimating the asymptotic covariance. Furthermore, a class of estimators for the regression parameter is presented and its asymptotic distribution is derived. Simulation studies are conducted to assess the finite sample performance. Finally, a real-world data set is analyzed to illustrate the effectiveness of the proposed methods.

16.
This paper considers model averaging for the ordered probit and nested logit models, which are widely used in empirical research. Within the frameworks of these models, we examine a range of model averaging methods, including the jackknife method, which is proved to have an optimal asymptotic property in this paper. We conduct a large-scale simulation study to examine the behaviour of these model averaging estimators in finite samples, and draw comparisons with model selection estimators. Our results show that while neither averaging nor selection is a consistently better strategy, model selection results in the poorest estimates far more frequently than averaging, and more often than not, averaging yields superior estimates. Among the averaging methods considered, the one based on a smoothed version of the Bayesian Information criterion frequently produces the most accurate estimates. In three real data applications, we demonstrate the usefulness of model averaging in mitigating problems associated with the ‘replication crisis’ that commonly arises with model selection.
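
A tiny numpy sketch of model-averaging weights based on a smoothed information criterion, using the common smoothed-BIC convention w_m ∝ exp(-BIC_m/2); how each candidate ordered-probit or nested-logit model's BIC is computed is left to the user, and this scaling is the usual textbook form rather than the paper's exact definition.

```python
import numpy as np

def smoothed_bic_weights(bic):
    """Model-averaging weights proportional to exp(-BIC_m / 2)."""
    bic = np.asarray(bic, dtype=float)
    z = np.exp(-(bic - bic.min()) / 2.0)      # subtract the minimum for numerical stability
    return z / z.sum()

def average_estimates(estimates, weights):
    """Weighted average of candidate-model estimates of a common quantity."""
    return np.dot(weights, estimates)

bic = [310.2, 308.7, 315.9]                   # hypothetical BIC values for three candidate models
w = smoothed_bic_weights(bic)
print(w, average_estimates(np.array([0.42, 0.38, 0.55]), w))
```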

17.
In this paper, we investigate the progress of score difference (between home and away teams) in professional basketball games employing functional data analysis (FDA). The observed score difference is viewed as the realization of the latent intensity process, which is assumed to be continuous. There are two major advantages of modeling the latent score difference intensity process using FDA: (1) it allows for arbitrary dependent structure among score change increments. This removes potential model mis-specifications and accommodates momentum which is often observed in sports games. (2) further statistical inferences using FDA estimates will not suffer from inconsistency due to the issue of having a continuous model yet discretely sampled data. Based on the FDA estimates, we define and numerically characterize momentum in basketball games and demonstrate its importance in predicting game outcomes.

18.
The standard bootstrap and two commonly used types of smoothed bootstrap are investigated. The saddlepoint approximations are used to evaluate the accuracy of the three bootstrap estimates of the density of a sample mean. The optimal choice for the smoothing parameter is obtained when smoothing is useful in reducing the mean squared error.
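
A short numpy sketch contrasting the standard bootstrap with a smoothed bootstrap of the sample mean: the smoothed version adds Gaussian kernel noise with smoothing parameter h to each resampled value. The choice of h is the quantity whose optimum the paper studies; the fixed value below is only for illustration.

```python
import numpy as np

def bootstrap_means(data, B=2000, h=0.0, rng=None):
    """Bootstrap distribution of the sample mean; h > 0 gives the smoothed bootstrap."""
    rng = np.random.default_rng() if rng is None else rng
    n = data.size
    idx = rng.integers(0, n, size=(B, n))
    samples = data[idx] + h * rng.standard_normal((B, n))   # h = 0 recovers the standard bootstrap
    return samples.mean(axis=1)

data = np.random.standard_normal(50)
print(bootstrap_means(data, h=0.0).std(), bootstrap_means(data, h=0.3).std())
```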

19.
In 2008, Marsan and Lengliné presented a nonparametric way to estimate the triggering function of a Hawkes process. Their method requires an iterative and computationally intensive procedure which ultimately produces only approximate maximum likelihood estimates (MLEs) whose asymptotic properties are poorly understood. Here, we note a mathematical curiosity that allows one to compute, directly and extremely rapidly, exact MLEs of the nonparametric triggering function. The method here requires that the number q of intervals on which the nonparametric estimate is sought equals the number n of observed points. The resulting estimates have very high variance but may be smoothed to form more stable estimates. The performance and computational efficiency of the proposed method is verified in two disparate, highly challenging simulation scenarios: first to estimate the triggering functions, with simulation-based 95% confidence bands, for earthquakes and their aftershocks in Loma Prieta, California, and second, to characterise triggering in confirmed cases of plague in the United States over the last century. In both cases, the proposed estimator can be used to describe the rate of contagion of the processes in detail, and the computational efficiency of the estimator facilitates the construction of simulation-based confidence intervals.  相似文献   
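
For context, a small numpy sketch of the log-likelihood of a Hawkes process whose triggering function is piecewise constant on given bins, i.e. the objective whose MLE the paper computes exactly. This is a direct evaluation only; the fast exact-MLE construction with q = n intervals described in the abstract is not reproduced, and the bin layout is an illustrative assumption.

```python
import numpy as np

def hawkes_loglik(times, T, mu, bin_edges, heights):
    """Log-likelihood of a Hawkes process with a piecewise-constant triggering function.

    times     : sorted event times in [0, T]
    bin_edges : (q+1,) edges of the triggering-function bins, starting at 0
    heights   : (q,) triggering-function value on each bin
    """
    times = np.asarray(times, dtype=float)
    bin_edges = np.asarray(bin_edges, dtype=float)
    heights = np.asarray(heights, dtype=float)
    ll = 0.0
    for i, t in enumerate(times):
        lags = t - times[:i]                                    # lags to all earlier events
        k = np.searchsorted(bin_edges, lags, side="right") - 1  # bin index of each lag
        inside = (k >= 0) & (k < heights.size)
        ll += np.log(mu + heights[k[inside]].sum())             # conditional intensity at t
    # compensator: mu*T plus the integral of g over (0, T - t_j) for every event j
    for t in times:
        upper = max(T - t, 0.0)
        lengths = np.clip(np.minimum(bin_edges[1:], upper) - bin_edges[:-1], 0.0, None)
        ll -= heights @ lengths
    ll -= mu * T
    return ll

print(hawkes_loglik([0.5, 1.2, 1.4, 3.0], T=5.0, mu=0.4,
                    bin_edges=[0.0, 0.5, 1.0, 2.0], heights=[0.6, 0.3, 0.1]))
```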

20.
In some applications it is cost efficient to sample data in two or more stages. In the first stage a simple random sample is drawn and then stratified according to some easily measured attribute. In each subsequent stage a random subset of previously selected units is sampled for more detailed and costly observation, with a unit's sampling probability determined by its attributes as observed in the previous stages. This paper describes multistage sampling designs and estimating equations based on the resulting data. Maximum likelihood estimates (MLEs) and their asymptotic variances are given for designs using parametric models. Horvitz–Thompson estimates are introduced as alternatives to MLEs, their asymptotic distributions are derived and their strengths and weaknesses are evaluated. The designs and the estimates are illustrated with data on corn production.
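
A minimal numpy sketch of the Horvitz-Thompson idea used as the alternative to maximum likelihood: each fully observed unit is weighted by the inverse of its overall selection probability across the stages. The two-stage probabilities below are illustrative assumptions, not the corn-production design.

```python
import numpy as np

def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimate of a population total from the final-stage sample."""
    return np.sum(y / pi)

# toy two-stage example: stage-1 inclusion probability times stage-2 subsampling probability
y = np.array([12.0, 7.5, 30.2, 18.4])             # values observed at the final stage
pi_stage1 = np.array([0.5, 0.5, 0.2, 0.2])        # first-stage (stratum-based) probabilities
pi_stage2 = np.array([0.8, 0.4, 0.8, 0.4])        # second-stage subsampling probabilities
print(horvitz_thompson_total(y, pi_stage1 * pi_stage2))
```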
