Similar articles
 20 similar articles found (search time: 15 ms)
1.
Härdle & Marron (1990) treated the problem of semiparametric comparison of nonparametric regression curves by proposing a kernel-based estimator derived by minimizing a version of weighted integrated squared error. The resulting estimators of the unknown transformation parameters are root-n consistent, which prompts a consideration of issues of optimality. We show that when the unknown mean function is periodic, an optimal nonparametric estimator may be motivated by an elegantly simple argument based on maximum likelihood estimation in a parametric model with normal errors. Strikingly, the asymptotic variance of an optimal estimator of θ does not depend at all on the manner of estimating the error variances, provided they are estimated root-n consistently. The optimal kernel-based estimator derived via these considerations is asymptotically equivalent to a periodic version of that suggested by Härdle & Marron, and so the latter technique is in fact optimal in this sense. We discuss the implications of these conclusions for the aperiodic case.

2.
This article considers partially linear single-index models with errors in all variables. By combining the approach of Liang, Härdle, and Carroll (1999), local linear regression, and the simulation-extrapolation (SIMEX) technique (Cook and Stefanski 1994), we propose an efficient methodology for estimating the model. Under certain conditions the asymptotic properties of the proposed estimators are obtained. Simulation experiments and an application are conducted to illustrate the proposed method.
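The SIMEX idea used above can be sketched as follows; this is a minimal illustration on a toy linear model with a known measurement-error variance, not the paper's partially linear single-index estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: true regression y = 2*x + e, but x is observed with
# measurement error of known standard deviation sigma_u (hypothetical setup).
n, beta, sigma_u = 500, 2.0, 0.8
x = rng.normal(size=n)
y = beta * x + rng.normal(scale=0.5, size=n)
w = x + rng.normal(scale=sigma_u, size=n)   # error-prone observation of x

def ols_slope(w, y):
    # Naive least-squares slope; attenuated under measurement error.
    return np.cov(w, y, bias=True)[0, 1] / np.var(w)

# SIMEX: add extra noise of variance lam * sigma_u^2, refit, then
# extrapolate the slope back to lam = -1 (no measurement error).
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
slopes = []
for lam in lams:
    est = [ols_slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
           for _ in range(B)]
    slopes.append(np.mean(est))

# Quadratic extrapolation in lam, evaluated at lam = -1.
coef = np.polyfit(lams, slopes, deg=2)
simex_slope = np.polyval(coef, -1.0)
print(f"naive slope: {ols_slope(w, y):.3f}, SIMEX slope: {simex_slope:.3f}")
```

The quadratic extrapolant is the conventional choice; it removes most, though not all, of the attenuation bias of the naive fit.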

3.
This paper deals with optimal window-width choice in nonparametric lag-window or spectral-window estimation of the spectral density of a stationary zero-mean process. Several approaches are reviewed: cross-validation-based methods as described by Hurvich (1985), Beltrão and Bloomfield (1987) and Hurvich and Beltrão (1990); an iterative procedure developed by Bühlmann (1996); and a bootstrap approach followed by Franke and Härdle (1992). These methods are compared in terms of the mean square error, the mean square percentage error, and a third measure of the distance between the true spectral density and its estimate. The comparison is based on a simulation study, the simulated processes being in the class of ARMA(5,5) processes. On the basis of the simulation evidence we suggest using a slightly modified version of Bühlmann's (1996) iterative method. This paper also makes a minor correction to the bootstrap criterion of Franke and Härdle (1992).
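A basic lag-window spectral estimator, whose truncation point M plays the role of the window width discussed above, can be sketched as follows; the Bartlett window and the AR(1) test process are illustrative choices, not the paper's:

```python
import numpy as np

def lag_window_spectrum(x, M, freqs):
    """Lag-window spectral density estimate with a Bartlett window.

    M is the truncation point (window width), the quantity whose
    data-driven choice the surveyed methods address."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Sample autocovariances gamma(0), ..., gamma(M).
    gammas = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(M + 1)])
    w = 1.0 - np.arange(M + 1) / M          # Bartlett (triangular) weights
    est = []
    for om in freqs:
        s = gammas[0] + 2.0 * np.sum(
            w[1:] * gammas[1:] * np.cos(np.arange(1, M + 1) * om))
        est.append(s / (2.0 * np.pi))
    return np.array(est)

rng = np.random.default_rng(5)
# AR(1) series with phi = 0.5; its true spectrum is
# sigma^2 / (2*pi*(1 - 2*phi*cos(w) + phi^2)).
n, phi = 2000, 0.5
e = rng.normal(size=n)
xser = np.empty(n)
xser[0] = e[0]
for t in range(1, n):
    xser[t] = phi * xser[t - 1] + e[t]

freqs = np.linspace(0.1, np.pi, 50)
fhat = lag_window_spectrum(xser, M=30, freqs=freqs)
ftrue = 1.0 / (2 * np.pi * (1 - 2 * phi * np.cos(freqs) + phi**2))
print(f"max abs error: {np.max(np.abs(fhat - ftrue)):.3f}")
```

Choosing M too small oversmooths (bias), too large undersmooths (variance); that trade-off is exactly what the cross-validation, iterative and bootstrap criteria try to balance.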

4.
For a nonparametric regression model y = m(x) + e with n independent observations, we analyze a robust method of finding the root of m(x), based on M-estimation as first discussed by Härdle & Gasser (1984). It is shown here that the robustness properties (minimaxity and breakdown function) of such an estimate are quite analogous to those of an M-estimator in the simple location model, but the rate of convergence is somewhat limited due to the nonparametric nature of the problem.

5.
The results of analyzing experimental data using a parametric model may depend heavily on the chosen regression and variance functions and, moreover, on a possibly underlying preliminary transformation of the variables. In this paper we propose and discuss a comprehensive procedure consisting of the simultaneous selection of parametric regression and variance models from a relatively rich model class together with Box-Cox variable transformations, by minimization of a cross-validation criterion. For this it is essential to introduce modifications of the standard cross-validation criterion adapted to each of the following objectives: (1) estimation of the unknown regression function, (2) prediction of future values of the response variable, (3) calibration, or (4) estimation of some parameter with a specific meaning in the corresponding field of application. Our idea of a criterion-oriented combination of procedures (which, if applied at all, are usually applied independently or sequentially) is expected to lead to more accurate results. We show how the accuracy of the parameter estimators can be assessed by a "moment-oriented bootstrap procedure", which is an essential modification of the "wild bootstrap" of Härdle and Mammen through the use of more accurate variance estimates. This new procedure and its refinement by a bootstrap-based pivot ("double bootstrap") are also used for the construction of confidence, prediction and calibration intervals. Programs written in S-Plus which implement our strategy for nonlinear regression modelling and parameter estimation are described as well. The performance of the selected model is discussed, and the behaviour of the procedures is illustrated, e.g., by an application in radioimmunological assay.
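The wild bootstrap referred to above can be sketched in its standard Härdle–Mammen form (not the paper's moment-oriented modification) on a toy heteroscedastic regression:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy heteroscedastic regression (a hypothetical example, not the
# paper's radioimmunoassay setting): error variance grows with x.
n = 200
x = np.linspace(0, 1, n)
y = 1.0 + 3.0 * x + rng.normal(scale=0.2 + 0.5 * x)

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

# Wild bootstrap: multiply each residual by an independent weight with
# mean 0 and variance 1, preserving each point's own variance; here
# the classical two-point Mammen distribution.
a = (1 - np.sqrt(5)) / 2
b = (1 + np.sqrt(5)) / 2
p = (np.sqrt(5) + 1) / (2 * np.sqrt(5))  # P(weight = a)

B = 500
boot_slopes = np.empty(B)
for i in range(B):
    v = np.where(rng.random(n) < p, a, b)
    y_star = X @ beta_hat + resid * v
    boot_slopes[i] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

se_slope = boot_slopes.std(ddof=1)
print(f"slope: {beta_hat[1]:.3f}, wild-bootstrap s.e.: {se_slope:.3f}")
```

Because each residual is perturbed individually, the resampled errors inherit the local variance structure, which is what makes the wild bootstrap valid under heteroscedasticity.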

6.
This paper presents the problem of prediction of a domain total based on the general linear model. In many methods presented in the survey sampling literature (e.g. Cassel, Särndal & Wretman, 1977 [Foundations of inference in survey sampling. New York: John Wiley & Sons]; Valliant, Dorfman & Royall, 2000 [Finite population sampling and inference: a prediction approach. New York: John Wiley & Sons]; Rao, 2003 [Small area estimation. New York: John Wiley & Sons]) a common assumption is that for each element of the population the domain to which it belongs is known. This assumption is especially important when a superpopulation model with auxiliary variables is considered. In this paper a method is proposed for prediction of the domain total when it is not known whether a unit belongs to a given domain, or when this information is available only for the sampled elements of the population.

7.
A model is introduced here for multivariate failure time data arising from heterogeneous populations. In particular, we consider a situation in which the failure times of individual subjects are often temporally clustered, so that many failures occur during a relatively short age interval. The clustering is modelled by assuming that the subjects can be divided into ‘internally homogeneous’ latent classes, each such class being described by a time-dependent frailty profile function. As an example, we reanalyse the dental caries data presented earlier in Härkänen et al. [Scand. J. Statist. 27 (2000) 577], as it turned out that our earlier model could not adequately describe the observed clustering.

8.
In this article, we develop a nonparametric estimator of the Hölder constant of a density function. We conduct a simulation study to evaluate the performance of the proposal and construct smooth bootstrap confidence intervals. We also briefly review the impossibility of deciding whether a density function is Hölder.

9.
Saunders & Eccleston (1992) and Saunders, Eccleston & Spessa (1992) developed an approach to the design of factorial experiments on continuous processes that allows for the correlation present in such processes. Their methods concentrated on identifying the order of application of treatments in such experiments, assuming that the spacing between experiments is constant. On a continuous process, there is no necessity to maintain equally spaced sampling times. This paper gives an algorithm for choosing the optimal sampling times for a factorial experiment aimed at estimating a particular parameter or set of parameters. It is shown that in practical situations the optimal sampling times give considerable improvements in the accuracy of the parameter estimates.

10.
Modelling and simulation are buzz words in clinical drug development. But is clinical trial simulation (CTS) really a revolutionary technique? There is not much more to CTS than applying standard methods of modelling, statistics and decision theory. However, doing this in a systematic way can mean a significant improvement in pharmaceutical research. This paper describes in simple examples how modelling could be used in clinical development. Four steps are identified: gathering relevant information about a drug and the disease; building a mathematical model; predicting the results of potential future trials; and optimizing clinical trials and the entire clinical programme. We discuss these steps and give a number of examples of model components, demonstrating that relatively unsophisticated models may also prove useful. We stress that modelling and simulation are decision tools and point out the benefits of integrating them with decision analysis. Copyright © 2005 John Wiley & Sons, Ltd.

11.
Under the hypothesis of white noise, the authors derive the explicit form of the asymptotic representation of linear rank statistics resulting from Hájek's (1968) celebrated projection lemma for linear rank statistics in the so-called approximate score case. This representation, based on Bernstein polynomials, is better, in the quadratic mean sense, than the traditional one due to Hájek (1961, 1962). The polynomial representation allows for a new derivation of classical asymptotic results (asymptotic normality, Berry–Esseen bounds). Moreover, a simulation study shows that the quality of the polynomial approximation to the exact finite-sample distributions of rank statistics is sizeably better than that resulting from the traditional approach.

12.
Efron and Petrosian (1999) formulated the problem of double truncation and proposed nonparametric methods for testing and estimation. An alternative estimation method, utilizing the inverse-probability-weighting (IPW) technique, was proposed by Shen (2010a). One aim of this paper is to assess the computational complexity of the existing estimation methods. Through a simulation study, we find that the two estimation methods have the same level of computational efficiency. The other aim is to study the noniterative IPW estimator under the condition that the truncation variables are independent. Both the IPW estimator and the corresponding interval estimation prove satisfactory in the simulation study.

13.
This article considers the problem of a mean change-point in heavy-tailed dependent observations. A method of change-point estimation based on truncating the initial process is proposed, which weakens the effect of outliers. In the infinite-variance case, we obtain a generalized Hájek–Rényi-type inequality. Consistency and the rate of convergence of the estimated change-point are also established. The results of a simulation study support the validity of our method.
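A minimal illustration of the truncate-then-locate idea: damp the heavy tails by truncation, then apply a generic CUSUM-type statistic. The threshold, shift size and t-distributed noise are hypothetical choices, and the statistic is a standard CUSUM, not necessarily the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(6)

# Heavy-tailed series with a mean shift at observation 120 (hypothetical).
n, k0 = 300, 120
xs = rng.standard_t(df=1.5, size=n)          # infinite-variance noise
xs[k0:] += 3.0

# Truncate at a threshold to damp outliers, then locate the change
# with a CUSUM-type statistic on the truncated series.
tau = 10.0
z = np.clip(xs, -tau, tau)
k = np.arange(1, n)
cusum = np.abs(np.cumsum(z)[:-1] - k / n * z.sum())
k_hat = int(np.argmax(cusum)) + 1
print(f"estimated change-point: {k_hat}")
```

Without the truncation step, a single extreme draw from the t(1.5) noise can dominate the partial sums and pull the argmax far from the true change.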

14.
The generalized lambda distribution (GLD) is a flexible distribution that can represent a wide variety of distributional shapes. This property of the GLD has made it very popular in simulation input modeling in recent years, and several fitting methods for estimating the parameters of the GLD have been proposed. Nevertheless, there appears to be a lack of insight into the performance of these fitting methods in estimating the parameters of the GLD for a variety of distributional shapes and input data. Our primary goal in this article is to compare the goodness of fit of the popular fitting methods for the GLD introduced in Freimer et al. (1988), i.e., the Freimer–Mudholkar–Kollia–Lin (FMKL) GLD, and to provide guidelines to the simulation practitioner about when to use each method. We further describe the use of a genetic algorithm for the FMKL GLD, and investigate the performance of the suggested methods in modeling the daily exchange rates of eight currencies.
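For reference, the FMKL GLD is defined through its quantile function, which makes inverse-transform sampling immediate; the parameter values below are illustrative, not fitted to any data:

```python
import numpy as np

def fmkl_quantile(u, lam1, lam2, lam3, lam4):
    """FMKL GLD quantile function:
    Q(u) = lam1 + [ (u^lam3 - 1)/lam3 - ((1-u)^lam4 - 1)/lam4 ] / lam2."""
    return lam1 + ((u**lam3 - 1) / lam3 - ((1 - u)**lam4 - 1) / lam4) / lam2

rng = np.random.default_rng(2)

# Inverse-transform sampling: a GLD variate is Q(U) with U ~ Uniform(0,1).
lam = (0.0, 1.0, 0.13, 0.13)   # lam3 = lam4 gives a symmetric shape
u = rng.random(100_000)
sample = fmkl_quantile(u, *lam)

print(f"mean={sample.mean():.3f}, median={np.median(sample):.3f}")
```

Because the distribution is specified only through Q(u), most fitting methods (moments, percentiles, starship, or the genetic algorithm discussed above) work directly on this quantile function rather than on a density.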

15.
Model selection in quantile regression models
Lasso methods are regularisation and shrinkage methods widely used for subset selection and estimation in regression problems. From a Bayesian perspective, the Lasso-type estimate can be viewed as a Bayesian posterior mode when independent Laplace prior distributions are specified for the coefficients of the independent variables (Park and Casella, 2008). A scale mixture of normal priors can also provide an adaptive regularisation method and represents an alternative to the Bayesian Lasso-type model. In this paper, we assign a normal prior with mean zero and unknown variance to each quantile-regression coefficient. A simple Markov chain Monte Carlo-based computation technique is then developed for quantile regression (QReg) models, including continuous, binary and left-censored outcomes. Based on the proposed prior, we propose a criterion for model selection in QReg models. The proposed criterion can be applied to classical least squares, classical QReg, classical Tobit QReg and many others; for example, it can be applied to rq(), lm() and crq(), which are available in the R package Brq. Through simulation studies and the analysis of a prostate cancer data set, we assess the performance of the proposed methods. The simulation studies and the prostate cancer data analysis confirm that our methods perform well compared with other approaches.

16.
For fixed-size sampling designs with high entropy, it is well known that the variance of the Horvitz–Thompson estimator can be approximated by the Hájek formula. The appeal of this asymptotic variance approximation is that it involves only the first-order inclusion probabilities of the statistical units. We extend this variance formula to the case where the variable under study is functional, and we prove, under general conditions on the regularity of the individual trajectories and on the sampling design, that we can obtain a uniformly convergent estimator of the variance function of the Horvitz–Thompson estimator of the mean function. Rates of convergence to the true variance function are given for rejective sampling. We deduce, under conditions on the entropy of the sampling design, that it is possible to build confidence bands with asymptotically the desired coverage via simulation of Gaussian processes whose variance function is given by the Hájek formula. Finally, the accuracy of the proposed variance estimator is evaluated on samples of electricity consumption data measured every half-hour over a period of one week.
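The Hájek variance approximation mentioned above can be sketched for a scalar study variable; the population below is a toy construction, and the fixed-size design is approximated by Poisson sampling conditioned on the sample size (a standard stand-in for rejective sampling):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population; inclusion probabilities proportional to a
# positive auxiliary variable x, with expected sample size n.
N, n = 200, 40
x = np.clip(rng.gamma(2.0, 2.0, size=N) + 0.5, 0.5, 10.0)
y = 3.0 * x + rng.normal(scale=1.0, size=N)
pi = n * x / x.sum()                      # all well below 1 here

# Hajek approximation to Var of the Horvitz-Thompson total estimator:
#   V ~ sum_i pi_i (1 - pi_i) (y_i / pi_i - G)^2,
#   G = sum_i pi_i (1 - pi_i) (y_i / pi_i) / sum_i pi_i (1 - pi_i).
d = pi * (1 - pi)
G = np.sum(d * y / pi) / d.sum()
hajek_var = np.sum(d * (y / pi - G) ** 2)

# Monte Carlo check: Poisson draws kept only when the realized sample
# size equals n (approximately a rejective design).
draws = rng.random((30_000, N)) < pi
fixed = draws[draws.sum(axis=1) == n]
ht = (fixed * (y / pi)).sum(axis=1)       # HT estimates of the total
print(f"Hajek approx: {hajek_var:.1f}, Monte Carlo: {ht.var(ddof=1):.1f}")
```

The approximation needs only the first-order probabilities pi, which is exactly the practical advantage the abstract highlights.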

17.
Constructing confidence intervals (CIs) for a binomial proportion and for the difference between two binomial proportions is a fundamental and well-studied problem in the analysis of binary data. In this note, we propose a new bootstrap procedure to estimate the CIs by resampling from a newly developed smooth quantile function for discrete data (Newcombe, 1998). We perform a variety of simulation studies to illustrate the strong performance of our approach. The coverage probabilities of our CIs in the one-sample setting are superior or comparable to those of other well-known approaches. The true utility of our new approach is in the two-sample setting. For the difference of two proportions, our smooth bootstrap CIs provide better coverage probabilities almost uniformly over the interval (−1, 1), particularly in the tail region, compared with the other published methods included in our simulation. We illustrate our methodology via applications to several binary data sets.
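As a simpler baseline than the smooth-quantile resampling proposed above, a plain parametric (binomial) percentile bootstrap for the difference of two proportions looks like this; the counts are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical binary data: successes out of n trials in two groups.
x1, n1 = 34, 50
x2, n2 = 19, 50

# Percentile bootstrap: resample counts from Binomial(n, p_hat) in each
# group and take empirical quantiles of the difference of proportions.
B = 10_000
p1_star = rng.binomial(n1, x1 / n1, size=B) / n1
p2_star = rng.binomial(n2, x2 / n2, size=B) / n2
lo, hi = np.percentile(p1_star - p2_star, [2.5, 97.5])
print(f"95% CI for p1 - p2: ({lo:.3f}, {hi:.3f})")
```

With discrete data this naive resampling can be lumpy for small n, which is precisely the problem smooth quantile functions are designed to mitigate.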

18.
Repeated measurement designs are widely used in medicine, pharmacology, animal sciences, and psychology. In this paper the constructions of Iqbal and Tahir (2009) and Iqbal, Tahir, and Ghazali (2010) are generalized to obtain circular balanced and circular strongly balanced repeated measurements designs for three periods through the method of cyclic shifts.

19.
Motivated by some recent improvements for mean estimation in finite sampling theory, we propose, in a design-based approach, a new class of ratio-type estimators. The class is initially discussed under the assumption that the study variable has a nonsensitive nature, meaning that it deals with topics that do not cause embarrassment when respondents are questioned directly about them. Under this standard setting, some estimators belonging to the class are presented and the bias, mean square error and minimum mean square error are determined up to the first order of approximation. The class is subsequently extended to the case where the study variable refers to sensitive issues which produce measurement errors due to nonresponse and/or untruthful reporting. These errors may be reduced by enhancing respondent cooperation through scrambled response methods that mask the true value of the sensitive variable. Hence, four methods (namely the additive, multiplicative, mixed and combined additive–multiplicative methods) are discussed for the purposes of the article. Finally, a simulation study is carried out to assess the performance of the proposed class by comparing a number of competing estimators, in both the sensitive and the nonsensitive setting.

20.
In this article, we consider sample points (t_j, s_j) consisting of a value s_j = f(t_j) at abscissa (time or location) t_j. We apply wavelet decomposition using shifts and dilations of the basic Haar transform and obtain an algorithm for analyzing a signal or function f. We apply this algorithm in practice to approximate a function in a numerical example. Relationships between the wavelet coefficients and the asymptotic distribution of the wavelet coefficients are investigated. Finally, we illustrate the results on simulated data using MATLAB and R software.
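The shift-and-dilation structure of the Haar decomposition described above can be sketched with unnormalized averaging/differencing filters; the signal is a toy example:

```python
import numpy as np

def haar_step(s):
    """One level of the Haar transform: pairwise averages and differences."""
    s = np.asarray(s, dtype=float)
    avg = (s[0::2] + s[1::2]) / 2.0
    diff = (s[0::2] - s[1::2]) / 2.0
    return avg, diff

def haar_decompose(s):
    """Full Haar decomposition of a signal of length 2^k."""
    coeffs = []
    approx = np.asarray(s, dtype=float)
    while len(approx) > 1:
        approx, detail = haar_step(approx)
        coeffs.append(detail)
    return approx, coeffs[::-1]   # coarsest approximation + detail levels

def haar_reconstruct(approx, coeffs):
    """Invert the decomposition: a = avg + diff, b = avg - diff."""
    s = np.asarray(approx, dtype=float)
    for detail in coeffs:
        up = np.empty(2 * len(s))
        up[0::2] = s + detail
        up[1::2] = s - detail
        s = up
    return s

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx, details = haar_decompose(signal)
back = haar_reconstruct(approx, details)
print("coarse approximation:", approx, "perfect reconstruction:",
      np.allclose(back, signal))
```

Dropping small detail coefficients before reconstructing gives the usual Haar-based approximation of f, which is the practical use the abstract refers to.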
