Similar Literature
20 similar documents found.
1.
We propose correcting for non-compliance in randomized trials by estimating the parameters of a class of semiparametric failure time models, the rank-preserving structural failure time models, using a class of rank estimators. These models are the structural or strong version of the “accelerated failure time model with time-dependent covariates” of Cox and Oakes (1984). In this paper we develop a large-sample theory for these estimators, derive the optimal estimator within this class, and briefly consider the construction of “partially adaptive” estimators whose efficiency may approach that of the optimal estimator. We show that in the absence of censoring the optimal estimator attains the semiparametric efficiency bound for the model.

2.
The problem of choosing optimal levels of the acceleration variable for accelerated testing is an important issue in reliability analysis. Most recommendations have focused on minimizing the variance of an estimator of a particular characteristic, such as a percentile, for a specific parametric model. In this paper, a general approach based on “locally penalized” D-optimality (LPD-optimality) is proposed, which simultaneously minimizes the variances of the model parameter estimators. Application of the method is illustrated for inverse Gaussian accelerated test models fitted to carbon fiber tensile strength data, where the fiber length is the “acceleration variable”.
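As a point of reference for what such criteria optimize, the sketch below runs a plain D-optimality grid search for a simple two-parameter log-linear accelerated life model, E[log T] = b0 + b1·s with stress s scaled to [0, 1]. It is a minimal illustration of the classical criterion that the LPD approach extends, not the authors' locally penalized procedure; the model and grid are assumptions.

```python
# Sketch: classical D-optimality for a two-parameter accelerated life model,
# E[log T] = b0 + b1 * s, stress s scaled to [0, 1].  This is the baseline
# criterion that "locally penalized" D-optimality extends, not the LPD
# procedure itself.
import itertools
import numpy as np

def information_matrix(levels, weights):
    """Fisher information X'WX for the straight-line model at a given design."""
    X = np.column_stack([np.ones(len(levels)), np.asarray(levels)])
    return X.T @ np.diag(weights) @ X

grid = np.linspace(0.0, 1.0, 21)            # candidate stress levels
best = max(
    itertools.combinations(grid, 2),        # all two-point designs
    key=lambda p: np.linalg.det(information_matrix(p, [0.5, 0.5])),
)
print("D-optimal two-point design (equal weights):", best)   # -> (0.0, 1.0)
```

For this linear model the determinant reduces to (s2 − s1)²/4, so the search recovers the textbook answer of placing the two stress levels at the extremes; penalization would modify this to balance the variances of the individual parameter estimators.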

3.
Semiparametric maximum likelihood estimators have recently been proposed for a class of two‐phase, outcome‐dependent sampling models. All of them were “restricted” maximum likelihood estimators, in the sense that the maximization is carried out only over distributions concentrated on the observed values of the covariate vectors. In this paper, the authors give conditions for consistency of these restricted maximum likelihood estimators. They also consider the corresponding unrestricted maximization problems, in which the “absolute” maximum likelihood estimators may then have support on additional points in the covariate space. Their main consistency result also covers these unrestricted maximum likelihood estimators, when they exist for all sample sizes.

4.
Nearest Neighbor Adjusted Best Linear Unbiased Prediction
Statistical inference for linear models has classically focused on either estimation or hypothesis testing of linear combinations of fixed effects or of variance components for random effects. A third form of inference—prediction of linear combinations of fixed and random effects—has important advantages over conventional estimators in many applications. None of these approaches will result in accurate inference if the data contain strong, unaccounted-for local gradients, such as spatial trends in field-plot data. Nearest neighbor methods to adjust for such trends have been widely discussed in the recent literature. So far, however, these methods have been developed exclusively for classical estimation and hypothesis testing. In this article a method of obtaining nearest neighbor adjusted (NNA) predictors, along the lines of “best linear unbiased prediction,” or BLUP, is developed. A simulation study comparing “NNABLUP” to conventional NNA methods and to non-NNA alternatives suggests considerable potential for improved efficiency.
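To make the prediction step concrete, the sketch below computes BLUPs in a bare-bones random-intercept model with known variance components; the nearest-neighbor adjustment itself is omitted, and all dimensions and parameter values are illustrative assumptions.

```python
# Minimal BLUP sketch for a random-intercept model y = X*beta + Z*u + e,
# with u ~ N(0, G) and e ~ N(0, R), variance components taken as known.
# The article's nearest-neighbor adjustment would enter through a model for
# local trend, which is omitted here.
import numpy as np

rng = np.random.default_rng(0)
n_blocks, m = 4, 5                                 # 4 blocks, 5 plots each
X = np.column_stack([np.ones(n_blocks * m), rng.normal(size=n_blocks * m)])
Z = np.kron(np.eye(n_blocks), np.ones((m, 1)))     # block indicator matrix
G = 0.5 * np.eye(n_blocks)                         # var(u) = 0.5
R = 1.0 * np.eye(n_blocks * m)                     # var(e) = 1.0
u_true = rng.normal(scale=np.sqrt(0.5), size=n_blocks)
y = X @ np.array([1.0, 2.0]) + Z @ u_true + rng.normal(size=n_blocks * m)

V = Z @ G @ Z.T + R
Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)   # GLS / BLUE
u_blup = G @ Z.T @ Vinv @ (y - X @ beta_hat)                 # BLUP of u
print("BLUE of beta:", beta_hat, " BLUP of u:", u_blup)
```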

5.
In extreme value theory, the shape second-order parameter governs the speed of convergence of linearly normalized maxima toward their limiting law. Adequate estimation of this parameter is vital for improving the estimation of the extreme value index, the primary parameter in statistics of extremes. In this article, we consider a recent class of semi-parametric estimators of the shape second-order parameter for heavy right-tailed models. These estimators, based on the largest order statistics, depend on a real tuning parameter, which makes them highly flexible and possibly unbiased for several underlying models. We are interested in the adaptive choice of this tuning parameter and of the number of top order statistics used in the estimation procedure. The performance of the methodology for the adaptive choice of parameters is evaluated through a Monte Carlo simulation study.
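To make the role of the number of top order statistics k concrete, the following sketch computes the standard first-order Hill estimator of the extreme value index across several choices of k; the second-order parameter controls how fast the bias visible here decays. This is background machinery, not the article's second-order estimator, and the simulated model is an assumption.

```python
# Sketch: the Hill estimator of the extreme value index as a function of k,
# the number of top order statistics.  Its bias, whose decay rate the
# second-order shape parameter governs, is why the adaptive choice of k
# matters.
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.pareto(2.0, size=2000) + 1.0)   # Pareto tail, true index 0.5
logs = np.log(x)

def hill(logs_sorted, k):
    """Hill estimator from the k largest order statistics."""
    return logs_sorted[-k:].mean() - logs_sorted[-k - 1]

for k in (50, 200, 800):
    print(f"k={k:3d}  Hill estimate = {hill(logs, k):.3f}")   # near 0.5
```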

6.
Circular covariance matrices play an important role in modeling phenomena in numerous epidemiological, communications, and physical contexts. In this article, we propose a parsimonious, autoregressive type of circular covariance structure for modeling correlations between the “siblings” of a “family”. This structure, similar to the AR(1) structure used in time series models, involves only two parameters. We derive the maximum likelihood estimators of these parameters and discuss testing hypotheses about the autoregressive parameter. Estimation of the “parent-sib” correlation, namely the interclass correlation, is also considered, as is estimation of the parameters when there are unequal numbers of siblings in different families.
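A minimal sketch of one natural two-parameter structure consistent with this description: correlation decays geometrically in the circular distance min(|i−j|, p−|i−j|). The exact parameterization the authors use may differ in detail.

```python
# Sketch of a two-parameter, AR(1)-type circular covariance matrix for p
# "siblings" arranged on a circle: correlation decays with circular distance.
# One natural parameterization consistent with the abstract, not necessarily
# the authors' exact structure.
import numpy as np

def circular_ar1(p, sigma2, rho):
    """Covariance with entries sigma2 * rho**d(i,j), d = circular distance."""
    idx = np.arange(p)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, p - d)
    return sigma2 * rho ** d

S = circular_ar1(p=6, sigma2=2.0, rho=0.4)
print(np.round(S, 3))        # circulant: each row is a rotation of the first
print("eigenvalues:", np.round(np.linalg.eigvalsh(S), 3))   # all positive here
```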

7.
In this article, we develop new bootstrap-based inference for noncausal autoregressions with heavy-tailed innovations. This class of models is widely used for modeling bubbles and explosive dynamics in economic and financial time series. In the noncausal, heavy-tailed framework, a major drawback of asymptotic inference is that it is not feasible in practice, as the relevant limiting distributions depend crucially on the (unknown) decay rate of the tails of the innovation distribution. In addition, even in the unrealistic case where the tail behavior is known, asymptotic inference may suffer from small-sample issues. To overcome these difficulties, we propose bootstrap inference procedures using parameter estimates obtained with the null hypothesis imposed (the so-called restricted bootstrap). We discuss three choices of bootstrap innovations: the wild bootstrap, based on Rademacher errors; the permutation bootstrap; and a combination of the two (the “permutation wild bootstrap”). Crucially, implementation of these bootstraps does not require any a priori knowledge about the distribution of the innovations, such as the tail index or the convergence rates of the estimators. We establish sufficient conditions ensuring that, under the null hypothesis, the bootstrap statistics consistently estimate particular conditional distributions of the original statistics. In particular, we show that validity of the permutation bootstrap holds without any restrictions on the innovation distribution, while the permutation wild and the standard wild bootstraps require further assumptions, such as symmetry of the innovation distribution. Extensive Monte Carlo simulations show that the finite-sample performance of the proposed bootstrap tests is exceptionally good, both in terms of size and in terms of empirical rejection probabilities under the alternative hypothesis. We conclude by applying the proposed bootstrap inference to Bitcoin/USD exchange rates and to crude oil price data. We find that noncausal models with heavy-tailed innovations are indeed able to fit the data, even in periods of bubble dynamics. Supplementary materials for this article are available online.
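The following toy sketch illustrates the restricted wild bootstrap idea with Rademacher signs in the simplest possible setting, testing rho = 0 in a causal AR(1) with heavy-tailed errors. It is a stripped-down stand-in for the article's procedure for noncausal models; the statistic, model, and sample sizes are assumptions.

```python
# Toy "restricted" wild bootstrap with Rademacher signs: test H0: rho = 0 in
# y_t = rho * y_{t-1} + e_t.  Residuals are computed with the null imposed
# (here simply e_t = y_t), then resampled with random signs.
import numpy as np

rng = np.random.default_rng(2)
y = rng.standard_t(df=2, size=300)             # heavy-tailed data under H0

def ols_rho(series):
    """OLS estimate of rho in an AR(1) without intercept."""
    return series[:-1] @ series[1:] / (series[:-1] @ series[:-1])

stat = abs(ols_rho(y))
e_restricted = y.copy()                        # residuals with rho = 0 imposed
B, hits = 999, 0
for _ in range(B):
    e_star = e_restricted * rng.choice([-1.0, 1.0], size=len(y))  # Rademacher
    hits += abs(ols_rho(e_star)) >= stat       # y* = e* since rho = 0 under H0
print("bootstrap p-value:", (1 + hits) / (1 + B))
```

Note that nothing about the tail index enters the procedure, which is the practical appeal the abstract emphasizes.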

8.
In his 1999 article with Breusch, Qian, and Wyhowski in the Journal of Econometrics, Peter Schmidt introduced the concept of “redundant” moment conditions. Such conditions arise when estimation is based on moment conditions that are valid and can be divided into two subsets: one that identifies the parameters and another that provides no further information. Their framework highlights an important concept in the moment-based estimation literature, namely, that not all valid moment conditions need be informative about the parameters of interest. In this article, we demonstrate the empirical relevance of the concept in the context of the impact of government health expenditure on health outcomes in England. Using a simulation study calibrated to these data, we perform a comparative study of the finite-sample performance of inference procedures based on Generalized Method of Moments (GMM) and info-metric (IM) estimators. The results indicate that the properties of GMM procedures deteriorate as the number of redundant moment conditions increases. In contrast, the IM methods provide reliable point estimators, but the performance of the associated inference techniques based on first-order asymptotic theory, such as confidence intervals and overidentifying restriction tests, deteriorates as the number of redundant moment conditions increases. For IM methods, however, bootstrap procedures are shown to provide reliable inferences; we illustrate such methods in the analysis of the impact of government health expenditure on health outcomes in England.
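A toy version of a redundant moment condition: estimating a mean by two-step GMM from E[x − μ] = 0 together with E[z(x − μ)] = 0 for a z independent of x, which is valid but identifies nothing extra. This illustrates the concept only and is unrelated to the article's health-expenditure application; all names and values are assumptions.

```python
# Tiny GMM illustration of a redundant moment condition: the second moment,
# E[z(x - mu)] = 0 with z independent of x, is valid but uninformative.
# Two-step GMM in closed form, since both moments are linear in mu.
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = 2.0 + rng.normal(size=n)          # true mu = 2
z = rng.normal(size=n)                # independent of x: redundant instrument

def gmm_mu(weight):
    """Minimize (a - mu*b)' W (a - mu*b), where gbar(mu) = a - mu*b."""
    a = np.array([x.mean(), (z * x).mean()])
    b = np.array([1.0, z.mean()])
    return (b @ weight @ a) / (b @ weight @ b)

mu1 = gmm_mu(np.eye(2))                           # step 1: identity weight
g = np.column_stack([x - mu1, z * (x - mu1)])     # moment contributions
S = np.cov(g, rowvar=False)                       # variance of the moments
mu2 = gmm_mu(np.linalg.inv(S))                    # step 2: efficient weight
print("one-step:", round(mu1, 4), " two-step:", round(mu2, 4))
```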

9.
The area under the receiver operating characteristic (ROC) curve is a popular summary index that measures the accuracy of a continuous-scale diagnostic test. Under certain conditions on the estimators of the distribution functions, we prove a theorem on the strong consistency of nonparametric “plugin” estimators of the area under the ROC curve. Based on this theorem, we construct some new consistent “plugin” estimators. The performance of the nonparametric estimators considered is illustrated numerically, and the estimators are compared in terms of bias, variance, and mean squared error.
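The basic nonparametric plug-in estimator in this setting is the familiar Mann-Whitney form of the empirical AUC, sketched below; the article's new estimators refine this construction. The simulated scores are an assumption.

```python
# The basic nonparametric "plugin" estimator of the area under the ROC curve
# reduces to the Mann-Whitney statistic: the proportion of (diseased,
# healthy) pairs the test ranks correctly, with ties counted as one half.
import numpy as np

def auc_plugin(diseased, healthy):
    """Empirical AUC = P(X > Y) + 0.5 * P(X = Y) over all pairs."""
    x = np.asarray(diseased)[:, None]
    y = np.asarray(healthy)[None, :]
    return ((x > y) + 0.5 * (x == y)).mean()

rng = np.random.default_rng(4)
print(auc_plugin(rng.normal(1.0, 1.0, 200),     # diseased scores, shifted up
                 rng.normal(0.0, 1.0, 200)))    # healthy scores; AUC ~ 0.76
```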

10.
Bandwidth plays an important role in determining the performance of nonparametric estimators such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large-sample theory for the proposed bandwidth estimator and for Bayesian estimators of the unknown parameters of the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators of the bandwidth and of the error-density parameters have satisfactory finite-sample performance, and (ii) our Bayesian approach estimates the bandwidths better than the normal reference rule and cross-validation. Moreover, we apply the proposed Bayesian bandwidth estimation method to time-varying coefficient models that explain Okun’s law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.
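For context, the local constant (Nadaraya-Watson) estimator itself, and the effect of the bandwidth h on the fit, can be sketched in a few lines; the Bayesian sampler for h is beyond this sketch, and the data-generating process is an assumption.

```python
# The local constant (Nadaraya-Watson) estimator whose bandwidth the article
# selects by Bayesian methods: a kernel-weighted average of y near x0.
import numpy as np

def local_constant(x0, x, y, h):
    """Gaussian-kernel-weighted average of y with bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return (w * y).sum() / w.sum()

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 2 * np.pi, 300))
y = np.sin(x) + rng.normal(scale=0.3, size=300)
for h in (0.05, 0.3, 2.0):                     # under-, well-, over-smoothed
    fit = np.array([local_constant(t, x, y, h) for t in x])
    print(f"h={h:4.2f}  mean squared residual = {np.mean((y - fit) ** 2):.3f}")
# Residual MSE alone favors tiny h (overfitting), which is exactly why a
# principled bandwidth selector is needed.
```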

11.
Comment     
We propose a sequential test for predictive ability, for recursively assessing whether some economic variables have explanatory content for another variable. In the forecasting literature it is common to assess predictive ability with “one-shot” tests at each estimation period. We show that this practice leads to size distortions, selects overfitted models, provides spurious evidence of in-sample predictive ability, and may lower the forecast accuracy of the model selected by the test. The usefulness of the proposed test is shown in well-known empirical applications: the real-time predictive content of money for output, and the selection between linear and nonlinear models.

12.
In many areas of application, mixed linear models serve as a popular tool for analyzing highly complex data sets. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood, (RE)ML, are commonly pursued. However, it is well known that these fully efficient estimators are extremely sensitive to small deviations from the hypothesized normality of the random components, as well as to other violations of distributional assumptions. In this article, we propose a new class of robust, efficient estimators for inference in mixed linear models. The new three-step estimation procedure provides truncated generalized least squares and variance component estimators with hard-rejection weights adaptively computed from the data. More specifically, our data re-weighting mechanism first detects and removes within-subject outliers, then identifies and discards between-subject outliers, and finally employs maximum likelihood procedures on the “clean” data. Theoretical efficiency and robustness properties of this approach are established.
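A stripped-down sketch of the three-step logic follows, with OLS standing in for the article's generalized least squares and ML machinery and a simple MAD rule providing the hard-rejection weights; the cutoffs, model, and contamination are illustrative assumptions.

```python
# Stripped-down three-step idea: (1) flag within-subject outliers by a robust
# MAD rule on residuals, (2) flag whole subjects whose median absolute
# residual is extreme, (3) refit on the "clean" data.
import numpy as np

rng = np.random.default_rng(6)
n_sub, m = 20, 10
subj = np.repeat(np.arange(n_sub), m)
x = rng.normal(size=n_sub * m)
y = 1.0 + 2.0 * x + rng.normal(size=n_sub * m)
y[::37] += 15.0                                  # sprinkle gross outliers

def fit(xv, yv):
    X = np.column_stack([np.ones(len(xv)), xv])
    return np.linalg.lstsq(X, yv, rcond=None)[0]

b0 = fit(x, y)                                   # step 0: initial fit
r = y - (b0[0] + b0[1] * x)
mad = 1.4826 * np.median(np.abs(r - np.median(r)))
keep = np.abs(r - np.median(r)) < 3 * mad        # step 1: observation level
subj_med = np.array([np.median(np.abs(r[subj == s])) for s in range(n_sub)])
subj_mad = 1.4826 * np.median(np.abs(subj_med - np.median(subj_med)))
bad = subj_med > np.median(subj_med) + 3 * subj_mad   # step 2: subject level
keep &= ~np.isin(subj, np.where(bad)[0])
print("clean-data fit:", fit(x[keep], y[keep]))       # step 3: refit
```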

13.
In this article, we consider the problem of estimating certain “parameters” of a mixture of probability measures. We show that a single sample is typically suitable for estimating the component measures, but not for estimating the mixing measure, especially when consistency is required. To obtain consistent estimators of the mixing measure, several samples of increasing size are needed in general.

14.
The study of the dependence between two medical diagnostic tests is an important issue in health research, since it can modify the diagnosis and, therefore, the decision regarding a therapeutic treatment for an individual. In many practical situations, the diagnostic procedure includes two tests with outcomes on a continuous scale; for final classification, there is usually an additional “gold standard” or reference test. With binary test responses, independence between tests or a joint binary dependence structure is usually assumed. In this article, we present a simulation study of two dependent dichotomized tests under two copula-based dependence structures, in the presence or absence of verification bias. We compare the test parameter estimators obtained under copula dependence with those obtained assuming binary dependence or independent tests.
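A minimal sketch of the simulation idea with one dependence structure, a Gaussian copula: draw dependent uniform scores, dichotomize at fixed cutoffs, and compare the joint positive rate with what independence predicts. The cutoffs and copula parameter are assumptions, and verification bias is omitted.

```python
# Sketch: two continuous test scores whose dependence comes from a Gaussian
# copula, dichotomized at fixed cutoffs.  Under dependence, the joint
# positive rate differs from the product that independence would predict.
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(7)
rho = 0.6                                           # copula dependence
mv = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
u = norm.cdf(mv.rvs(size=20000, random_state=rng))  # Gaussian copula on [0,1]^2
t1_pos = u[:, 0] > 0.7                              # dichotomized test 1
t2_pos = u[:, 1] > 0.8                              # dichotomized test 2
print("joint positive rate  :", (t1_pos & t2_pos).mean())
print("independence predicts:", t1_pos.mean() * t2_pos.mean())
```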

15.
This paper develops alternatives to maximum likelihood estimators (MLEs) for logistic regression models and compares the mean squared error (MSE) of the estimators. The MLE of the vector of underlying success probabilities has low MSE only when the true probabilities are extreme (i.e., near 0 or 1). Extreme probabilities correspond to logistic regression parameter vectors that are large in norm. A competing “restricted” MLE and an empirical version of it are suggested as estimators with better performance than the MLE for central probabilities. An approximate EM algorithm for estimating the restriction is described. As with normal-theory ridge estimators, the proposed estimators are shown to be formally derivable by Bayes and empirical Bayes arguments. The small-sample operating characteristics of the proposed estimators are compared with those of the MLE via a simulation study; both the estimation of individual probabilities and of logistic parameters are considered.
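As the Bayes connection suggests, the restricted MLE shrinks the parameter vector much as a ridge penalty does. As a stand-in, the sketch below contrasts the plain logistic MLE with an L2-penalized fit (scikit-learn's C is the inverse penalty strength); it is not the authors' EM-based estimator, and the data and penalty level are assumptions.

```python
# Stand-in comparison: plain logistic MLE vs. an L2-penalized (ridge-like)
# fit, which shrinks coefficients toward zero as the "restricted" MLE does.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n, p = 60, 4                                  # small n: the MLE is unstable
X = rng.normal(size=(n, p))
beta = np.array([0.5, -0.5, 0.25, 0.0])       # central probabilities
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

# penalty=None requires scikit-learn >= 1.2
mle = LogisticRegression(penalty=None, max_iter=1000).fit(X, y)
ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
print("MLE coef  :", np.round(mle.coef_[0], 2))
print("ridge coef:", np.round(ridge.coef_[0], 2))   # shrunk toward zero
```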

16.
The minimum disparity estimators proposed by Lindsay (1994) for discrete models form an attractive subclass of minimum distance estimators, achieving robustness without sacrificing first-order efficiency at the model. Similarly, disparity test statistics are useful robust alternatives to the likelihood ratio test for testing hypotheses in parametric models; they are asymptotically equivalent to the likelihood ratio test statistics under the null hypothesis and under contiguous alternatives. Despite these asymptotic optimality properties, the small-sample performance of many minimum disparity estimators and disparity tests can be considerably worse than that of the maximum likelihood estimator and the likelihood ratio test, respectively. In this paper we focus on the class of blended weight Hellinger distances, a general subfamily of disparities, study the effects of combining two different distances within this class to generate the family of “combined” blended weight Hellinger distances, and identify the members of this family that generally perform well. More generally, we investigate the class of “combined and penalized” blended weight Hellinger distances; the penalty is based on reweighting the empty cells, following Harris and Basu (1994). It is shown that some members of the combined and penalized family have rather attractive properties.
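A minimal minimum-disparity sketch for count data follows, under one common parameterization of the blended weight Hellinger distance (an assumption; the blending the article studies, and its combined and penalized variants, may differ): BWHD_a(d, f) = Σ_x (d(x) − f(x))² / (2(a·√d(x) + (1−a)·√f(x))²), with a = 1/2 recovering twice the squared Hellinger distance.

```python
# Minimum-disparity sketch: fit a Poisson mean by minimizing a blended weight
# Hellinger distance between empirical frequencies d(x) and model pmf f(x).
# The support is truncated at the sample maximum for simplicity.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(9)
sample = rng.poisson(3.0, size=500)
xs = np.arange(sample.max() + 1)
d = np.bincount(sample) / len(sample)            # empirical frequencies

def bwhd(lam, a=0.5):
    f = poisson.pmf(xs, lam)
    return np.sum((d - f) ** 2
                  / (2 * (a * np.sqrt(d) + (1 - a) * np.sqrt(f)) ** 2))

res = minimize_scalar(bwhd, bounds=(0.1, 10.0), method="bounded")
print("minimum BWHD estimate of lambda:", round(res.x, 3))
```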

17.
The paper develops a systematic estimation and inference procedure for quantile regression models in which a common threshold effect may exist across different quantile indices. We first propose a sup-Wald test for the existence of a threshold effect, and then study the asymptotic properties of the estimators in a threshold quantile regression model under the shrinking-threshold-effect framework. We consider several tests for the presence of a common threshold value across different quantile indices and obtain their limiting distributions. We apply our methodology to study the pricing strategy for reputation, using a data set from Taobao.com. In our economic model, an online seller maximizes the sum of the profit from current sales and the possible future gain from a targeted higher reputation level. We show that the model can predict a jump in optimal pricing behavior, which we term the “reputation effect”. The threshold quantile regression model allows us to identify and explore the reputation effect and its heterogeneity in the data. In our application, we find both reputation effects and common thresholds across a range of quantile indices in sellers’ pricing strategies.
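A toy profile search conveys the estimation step: fit a quantile regression with a threshold-indicated slope shift over a grid of candidate thresholds and keep the minimizer of the check loss. The sup-Wald and common-threshold tests are beyond this sketch, and the data-generating design is an assumption.

```python
# Toy profile search for a threshold in quantile regression:
# Q_y(tau | x, q) = b0 + b1*x + delta * x * 1{q > gamma}, estimated over a
# grid of candidate gammas by minimizing the quantile check loss.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 500
x, q = rng.normal(size=n), rng.uniform(0, 1, n)     # q: threshold variable
y = 1.0 + x + 1.5 * x * (q > 0.6) + rng.normal(size=n)

tau = 0.5
def check_loss(gamma):
    X = sm.add_constant(np.column_stack([x, x * (q > gamma)]))
    u = y - sm.QuantReg(y, X).fit(q=tau).fittedvalues
    return np.sum(u * (tau - (u < 0)))              # quantile check function

grid = np.quantile(q, np.linspace(0.15, 0.85, 29))
gamma_hat = min(grid, key=check_loss)
print("estimated threshold:", round(gamma_hat, 3))  # near the true 0.6
```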

18.
Experiments in various countries with “last week” and “last month” reference periods for reporting households’ food consumption have generally found that “week”-based estimates are higher. In India, the National Sample Survey (NSS) has consistently found that “week”-based estimates are higher than “month”-based estimates for a majority of food item groups. But why are “week”-based estimates higher than “month”-based estimates? It has long been believed that the reason must be recall lapse, inherent in a reporting period as long as a month. But is household consumption of a habitually consumed item “recalled” in the same way as that of an item consumed infrequently? And why does memory lapse not cause over-reporting (over-assessment) as often as under-reporting? In this paper, we provide an alternative hypothesis involving a “quantity floor effect” in reporting behavior, under which the “week” period may cause over-reporting for many items. We design a test to detect the effect postulated by this hypothesis and carry it out on NSS 68th round HCES data. The test results strongly suggest that our hypothesis explains the difference between “week”-based and “month”-based estimates better than the recall-lapse theory.

19.
In this study, we measure asymmetric negative tail dependence and discuss its statistical properties. In a simulation study, we show the reliability of nonparametric estimators of the tail copula for measuring not only the usual positive lower- and upper-tail dependence, but also negative “lower–upper” and “upper–lower” tail dependence. The use of this new framework is illustrated in an application to financial data, where we detect asymmetric negative tail dependence between stock and volatility indices. Many parametric copula models commonly used in finance fail to capture this characteristic.
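A basic rank-based estimator of negative “lower–upper” tail dependence, in the spirit of the tail-copula estimators studied here, is sketched below; the simulated dependence structure is an assumption.

```python
# Rank-based sketch of negative "lower-upper" tail dependence: the empirical
# probability that Y falls in its upper tail given X falls in its lower tail,
# lambda_LU(k) = (1/k) * #{i : rank(X_i) <= k and rank(Y_i) > n - k}.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(11)
n = 5000
x = rng.normal(size=n)
y = -x + 0.5 * rng.normal(size=n)               # negatively dependent pair

def lambda_lu(x, y, k):
    rx, ry = rankdata(x), rankdata(y)
    return np.mean(ry[rx <= k] > len(x) - k)    # fraction of hits among k

for k in (50, 100, 250):
    print(f"k={k:3d}  lambda_LU = {lambda_lu(x, y, k):.3f}")
```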

20.
Statistical distributions are very useful for describing and predicting real-world phenomena, and in many applied areas there is a clear need for extended forms of the well-known distributions. The new distributions are generally more flexible for modeling real data that exhibit a high degree of skewness and kurtosis, so choosing the statistical distribution best suited to the data is very important.

In this article, we propose an extended generalized Gompertz (EGGo) family of distributions. Certain statistical properties of the EGGo family, including distribution shapes, hazard function, skewness, limit behavior, moments, and order statistics, are discussed. The flexibility of this family is assessed by applying it to real data sets and comparing the fits with competing distributions. The maximum likelihood equations for estimating the parameters from real data are given. The performance of several types of estimators, namely maximum likelihood, least squares, weighted least squares, Cramér-von Mises, Anderson-Darling, and right-tailed Anderson-Darling estimators, is discussed. A likelihood ratio test is derived to assess whether the EGGo distribution fits a data set better than its nested submodels. We use R for the simulations, applications, and validity checks of the model.
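As an illustration of comparing estimator types, the sketch below fits scipy's plain Gompertz distribution, a stand-in for EGGo, whose cdf is not in scipy, by maximum likelihood and by least squares on the empirical CDF; all parameter settings are assumptions.

```python
# Sketch comparing two of the estimator types the article studies: maximum
# likelihood and (ordinary) least squares on the ECDF, using scipy's plain
# Gompertz distribution as a stand-in for the EGGo family.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gompertz

rng = np.random.default_rng(12)
data = np.sort(gompertz.rvs(c=0.5, scale=2.0, size=400, random_state=rng))

# MLE via scipy's built-in fitter (location fixed at 0).
c_mle, _, scale_mle = gompertz.fit(data, floc=0)

# Least squares: match the model CDF to the ECDF at the order statistics.
ecdf = (np.arange(1, len(data) + 1) - 0.5) / len(data)
def lsq(theta):
    c, scale = theta
    return np.sum((gompertz.cdf(data, c, scale=scale) - ecdf) ** 2)

c_ls, scale_ls = minimize(lsq, x0=[1.0, 1.0],
                          bounds=[(1e-3, None), (1e-3, None)]).x
print(f"MLE: c={c_mle:.3f} scale={scale_mle:.3f}   "
      f"LSE: c={c_ls:.3f} scale={scale_ls:.3f}")
```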
