Similar Documents

20 similar documents found (search time: 31 ms)
1.
Abstract

This article develops a method to estimate search frictions as well as preference parameters in differentiated product markets. Search costs are nonparametrically identified, which means our method can be used to estimate search costs in differentiated product markets that lack a suitable search cost shifter. We apply our model to the U.S. Medigap insurance market. We find that search costs are substantial: the estimated median cost of searching for an insurer is $30. Using the estimated parameters we find that eliminating search costs could result in price decreases of as much as $71 (or 4.7%), along with increases in average consumer welfare of up to $374.

2.
ABSTRACT

We present a new estimator of extreme quantiles dedicated to Weibull tail distributions. This estimator is based on a consistent estimator of the Weibull tail coefficient, defined as the regular variation coefficient of the inverse cumulative hazard function. We give conditions under which the extreme quantile estimator is weakly consistent and derive its asymptotic distribution. Its asymptotic and finite-sample performance is compared with that of classical estimators.
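The abstract does not reproduce the estimator's formula, but the idea can be sketched. Assuming, as a simplification, a log-spacings estimator of the Weibull tail coefficient built from the k largest order statistics, an extreme quantile can be extrapolated from an intermediate one. The function name and tuning choices below are illustrative, not the article's:

```python
import numpy as np

def weibull_tail_quantile(x, k, p):
    """Log-spacings estimator of the Weibull tail coefficient theta from
    the k largest order statistics, extrapolated to the extreme
    p-quantile (a sketch; the article's estimator may differ)."""
    x = np.sort(np.asarray(x))
    n = len(x)
    anchor = x[n - k - 1]                  # the (n - k)-th order statistic
    i = np.arange(1, k + 1)
    # log-excesses over the anchor against log-log spacings:
    # for Weibull-tail laws, log X_(n-i+1) ~ theta * log log(n/i)
    num = np.sum(np.log(x[n - k:]) - np.log(anchor))
    den = np.sum(np.log(np.log(n / i)) - np.log(np.log(n / k)))
    theta = num / den
    # since q(1 - p) ~ (log 1/p)^theta, extrapolate from the anchor
    q_hat = anchor * (np.log(1.0 / p) / np.log(n / k)) ** theta
    return theta, q_hat

rng = np.random.default_rng(0)
sample = rng.weibull(2.0, size=20000)      # true tail coefficient 1/2
theta_hat, q_hat = weibull_tail_quantile(sample, k=200, p=1e-4)
```

For a Weibull sample with shape 2 the tail coefficient is 1/2, and the extrapolated 10⁻⁴-quantile should land near (log 10⁴)^0.5 ≈ 3.03.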

3.

We consider a sieve bootstrap procedure to quantify the estimation uncertainty of long-memory parameters in stationary functional time series. We use a semiparametric local Whittle estimator to estimate the long-memory parameter. In the local Whittle estimator, the discrete Fourier transform and the periodogram are constructed from the first set of principal component scores obtained via functional principal component analysis. The sieve bootstrap procedure uses a general vector autoregressive representation of the estimated principal component scores and generates bootstrap replicates that adequately mimic the dependence structure of the underlying stationary process. For each bootstrap replicate we compute the estimated first set of principal component scores and then apply the semiparametric local Whittle estimator to estimate the memory parameter. Taking quantiles of the estimated memory parameters across bootstrap replicates yields nonparametric confidence intervals for the long-memory parameter. As measured by the difference between empirical and nominal coverage probabilities at three significance levels, we demonstrate the advantage of the sieve bootstrap over asymptotic confidence intervals based on normality.


4.
ABSTRACT

The standard kernel estimator of copula densities suffers from boundary biases and inconsistency due to unbounded densities. Transforming the domain of estimation into an unbounded one remedies both problems, but also introduces an unbounded multiplier that may produce erratic boundary behaviors in the final density estimate. We propose an improved transformation-kernel estimator that employs a smooth tapering device to counter the undesirable influence of the multiplier. We establish the theoretical properties of the new estimator and its automatic higher-order improvement under Gaussian copulas. We present two practical methods of smoothing parameter selection. Extensive Monte Carlo simulations demonstrate the competence of the proposed estimator in terms of global and tail performance. Two real-world examples are provided. Supplementary materials for this article are available online.
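For context, here is a minimal sketch of the baseline transformation-kernel estimator that the article improves upon: probit-transform the copula sample to the plane, apply a product-Gaussian KDE there, and transform back. The unbounded multiplier mentioned in the abstract is the normal-density factor in the last line. Names and the bandwidth are illustrative:

```python
import numpy as np
from scipy import stats

def transformation_kernel_copula(u, v, eval_u, eval_v, bw=0.25):
    """Baseline transformation-kernel copula density estimator (the
    estimator the article's tapering device improves upon)."""
    s = stats.norm.ppf(u)                # transformed sample on R^2
    t = stats.norm.ppf(v)
    x = stats.norm.ppf(eval_u)           # transformed evaluation points
    y = stats.norm.ppf(eval_v)
    # product-Gaussian kernel density estimate on the transformed scale
    kx = stats.norm.pdf((x[:, None] - s[None, :]) / bw) / bw
    ky = stats.norm.pdf((y[:, None] - t[None, :]) / bw) / bw
    g = (kx * ky).mean(axis=1)
    # back-transform: divide by the (unbounded) normal-density multiplier
    return g / (stats.norm.pdf(x) * stats.norm.pdf(y))

rng = np.random.default_rng(1)
n = 5000
z = rng.standard_normal((n, 2))
u = stats.norm.cdf(z[:, 0])              # independence copula sample
v = stats.norm.cdf(z[:, 1])
c_hat = transformation_kernel_copula(u, v, np.array([0.5]), np.array([0.5]))
```

For the independence copula the true density is 1 everywhere, so the interior estimate at (0.5, 0.5) should be close to 1; the boundary misbehavior the abstract describes only shows up as u or v approaches 0 or 1.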

5.
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member’s injury to induce variation in an individual’s own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from −0.76 to −1.49, which are an order of magnitude larger than previous estimates. Supplementary materials for this article are available online.

6.
This article provides a method to estimate search costs in a differentiated product environment in which consumers are uncertain about the utility distribution. Consumers learn about the utility distribution by Bayesian updating their Dirichlet process prior beliefs. The model provides expressions for bounds on the search costs that can rationalize observed search and purchasing behavior. Using individual-specific data on web browsing and purchasing behavior for MP3 players sold online we show how to use these bounds to estimate search costs as well as the parameters of the utility distribution. Our estimates indicate that search costs are sizable. We show that ignoring consumer learning while searching can lead to severely biased search cost and elasticity estimates.

7.
For manifest variables with additive noise and for a given number of latent variables with an assumed distribution, we propose to nonparametrically estimate the association between latent and manifest variables. Our estimation is a two-step procedure: first it employs standard factor analysis to estimate the latent variables as theoretical quantiles of the assumed distribution; second, it employs the additive models’ backfitting procedure to estimate the monotone nonlinear associations between latent and manifest variables. The estimated fit may suggest a different latent distribution or point to nonlinear associations. We show on simulated data how, based on mean squared errors, the nonparametric estimation improves on factor analysis. We then employ the new estimator on real data to illustrate its use for exploratory data analysis.

8.
ABSTRACT

We propose a new estimator for the spot covariance matrix of a multi-dimensional continuous semimartingale log asset price process, which is subject to noise and nonsynchronous observations. The estimator is constructed based on a local average of block-wise parametric spectral covariance estimates. The latter originate from a local method of moments (LMM) recently introduced by Bibinger et al. We prove consistency and a point-wise stable central limit theorem for the proposed spot covariance estimator in a very general setup with stochastic volatility, leverage effects, and general noise distributions. Moreover, we extend the LMM estimator to be robust against autocorrelated noise and propose a method to adaptively infer the autocorrelations from the data. Based on simulations we provide empirical guidance on the effective implementation of the estimator and apply it to high-frequency data of a cross-section of Nasdaq blue chip stocks. Employing the estimator to estimate spot covariances, correlations, and volatilities in normal but also unusual periods yields novel insights into intraday covariance and correlation dynamics. We show that intraday (co-)variations (i) follow underlying periodicity patterns, (ii) reveal substantial intraday variability associated with (co-)variation risk, and (iii) can increase strongly and nearly instantaneously if new information arrives. Supplementary materials for this article are available online.

9.
Abstract

Nonparametric regression is a standard statistical tool with increased importance in the Big Data era. Boundary points pose additional difficulties but local polynomial regression can be used to alleviate them. Local linear regression, for example, is easy to implement and performs quite well both at interior and boundary points. Estimating the conditional distribution function and/or the quantile function at a given regressor point is immediate via standard kernel methods but problems ensue if local linear methods are to be used. In particular, the distribution function estimator is not guaranteed to be monotone increasing, and the quantile curves can “cross.” In the article at hand, a simple method of correcting the local linear distribution estimator for monotonicity is proposed, and its good performance is demonstrated via simulations and real data examples. Supplementary materials for this article are available online.
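The abstract does not spell out the correction. One simple device with the stated effect is monotone rearrangement: sort the estimated CDF values over the y-grid. The sketch below pairs it with a local linear conditional CDF estimator; this is an illustration of the problem and one fix, not necessarily the article's own method:

```python
import numpy as np

def local_linear_cdf(x, y, x0, y_grid, h):
    """Local linear estimate of F(y | x0): for each grid value, regress
    the indicators 1{Y_i <= y} on X_i with a Gaussian kernel at x0.
    The fitted intercept is the CDF estimate (not forced monotone)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    out = np.empty(len(y_grid))
    for j, yg in enumerate(y_grid):
        z = (y <= yg).astype(float)
        beta = np.linalg.solve((X.T * w) @ X, (X.T * w) @ z)
        out[j] = beta[0]                     # intercept = F-hat(yg | x0)
    return out

def monotonize(F):
    """Monotone rearrangement: sorting the values over the y-grid gives
    a non-decreasing curve, clipped to [0, 1]."""
    return np.clip(np.sort(F), 0.0, 1.0)

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 800)
y = x + rng.standard_normal(800)             # F(y | 0) = Phi(y)
grid = np.linspace(-3, 3, 41)
F_raw = local_linear_cdf(x, y, x0=0.0, y_grid=grid, h=0.3)
F_mono = monotonize(F_raw)
```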

10.
ABSTRACT

A quantile autoregressive model is a useful extension of classical autoregressive models, as it can capture the influences of conditioning variables on the location, scale, and shape of the response distribution. However, at the extreme tails, the standard quantile autoregression estimator is often unstable due to data sparsity. In this article, assuming quantile autoregressive models, we develop a new estimator for extreme conditional quantiles of time series data based on extreme value theory. We build the connection between the second-order conditions for the autoregression coefficients and for the conditional quantile functions, and establish the asymptotic properties of the proposed estimator. The finite sample performance of the proposed method is illustrated through a simulation study and an analysis of U.S. retail gasoline prices.

11.
王亚峰. 《统计研究》2012, 29(2): 88–93
This paper develops a two-step semiparametric estimator for sample selection models. In the first step, the discrete choice probabilities are estimated based on a log-Euclidean measure of distribution discrepancy; in the second step, the nonparametric sieve method is used to estimate a partially linear model containing both parametric and nonparametric components, yielding estimates of the model parameters. Compared with existing semiparametric estimators in the literature, this estimator is simpler to compute and imposes a lighter computational burden. We establish the consistency and asymptotic normality of the estimator and provide a formula for its asymptotic variance. Monte Carlo simulation results agree with our theoretical conclusions.

12.
This article is an empirical application of the search model with an unknown distribution, as introduced by Rothschild in 1974. For searchers who hold Dirichlet priors, we develop a novel characterization of optimal search behavior. Our solution delivers easily computable formulas for the ex-ante purchase probabilities as outcomes of search, as required by discrete-choice-based estimation. Using our method, we investigate the consequences of consumer learning for the properties of search-generated demand. Holding search costs constant, the search model with a known distribution predicts larger price elasticities, mainly for the lower-priced products. We estimate a search model with Dirichlet priors on a dataset of prices and market shares of S&P 500 mutual funds. We find that the assumption of no uncertainty in consumer priors leads to substantial biases in search cost estimates.

13.
There are a large number of different definitions used for sample quantiles in statistical computer packages. Often within the same package one definition will be used to compute a quantile explicitly, while other definitions may be used when producing a boxplot, a probability plot, or a QQ plot. We compare the most commonly implemented sample quantile definitions by writing them in a common notation and investigating their motivation and some of their properties. We argue that there is a need to adopt a standard definition for sample quantiles so that the same answers are produced by different packages and within each package. We conclude by recommending that the median-unbiased estimator be used because it has most of the desirable properties of a quantile estimator and can be defined independently of the underlying distribution.
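The disagreement between definitions is easy to demonstrate. NumPy (1.22 or later) exposes the competing definitions through the `method` argument of `np.quantile`; `"median_unbiased"` corresponds to the median-unbiased definition recommended in the abstract:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])

# The common default ("linear") interpolates at h = (n - 1) p + 1,
# which for n = 9 and p = 0.25 lands exactly on the 3rd order statistic.
q_linear = np.quantile(data, 0.25, method="linear")

# The median-unbiased definition interpolates at h = (n + 1/3) p + 1/3,
# giving 8/3 here -- a different answer from the same data.
q_medunb = np.quantile(data, 0.25, method="median_unbiased")
```

The two calls return 3.0 and 8/3 respectively, illustrating why a single package can print inconsistent quantiles across its plotting and summary routines.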

14.
Control charts are used to detect changes in a process. Once a change is detected, knowledge of the change point would simplify the search for and identification of the special cause. Consequently, having an estimate of the process change point following a control chart signal would be useful to process analysts. Change-point methods for the uncorrelated process have been studied extensively in the literature; however, less attention has been given to change-point methods for autocorrelated processes. Autocorrelation is common in practice and is often modeled via the class of autoregressive moving average (ARMA) models. In this article, a maximum likelihood estimator for the time of step change in the mean of covariance-stationary processes that fall within the general ARMA framework is developed. The estimator is intended to be used as an “add-on” following a signal from a phase II control chart. Considering first-order pure and mixed ARMA processes, Monte Carlo simulation is used to evaluate the performance of the proposed change-point estimator across a range of step change magnitudes following a genuine signal from a control chart. Results indicate that the estimator provides process analysts with an accurate and useful estimate of the last sample obtained from the unchanged process. Additionally, results indicate that if a change-point estimator designed for the uncorrelated process is applied to an autocorrelated process, the performance of the estimator can suffer dramatically.
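A minimal version of such an estimator can be sketched for an AR(1) process with known autoregressive parameter and known in-control mean of zero: whiten the series, profile out the shift size by least squares for each candidate change point, and take the candidate minimizing the residual sum of squares (equivalent to maximizing the Gaussian likelihood). This simplifies the general ARMA setting of the article to its smallest working example:

```python
import numpy as np

def change_point_mle(x, phi):
    """MLE of the last in-control sample index for a step change in the
    mean of an AR(1) process with known phi and in-control mean 0."""
    n = len(x)
    t = np.arange(1, n)                  # indices of the whitened series
    w = x[1:] - phi * x[:-1]             # whitened observations
    best_tau, best_sse = None, np.inf
    for tau in range(1, n - 1):          # candidate last in-control index
        # shape the mean shift takes after whitening
        step = (t > tau).astype(float) - phi * (t - 1 > tau).astype(float)
        delta = step @ w / (step @ step)          # profiled shift size
        sse = np.sum((w - delta * step) ** 2)
        if sse < best_sse:
            best_tau, best_sse = tau, sse
    return best_tau

rng = np.random.default_rng(3)
phi, n, delta = 0.5, 300, 3.0
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for s in range(1, n):
    x[s] = phi * x[s - 1] + e[s]
x[150:] += delta                         # step change: last good index 149
tau_hat = change_point_mle(x, phi)
```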

15.
Xue H, Miao H, Wu H. Annals of Statistics 2010, 38(4): 2351–2387
This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^(−1/(p∧4)), the numerical error is negligible compared to the measurement error. This result provides theoretical guidance for selecting the step size in numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of the constant parameters is asymptotically normal with the same asymptotic covariance as in the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter attains the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics.
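The estimation loop (solve the ODE numerically at each trial parameter value, then minimize squared residuals) can be sketched for a toy constant-coefficient ODE. The article's setting is far more general (time-varying coefficients via B-splines, formal error analysis), so this only illustrates the numerical-solution-based NLS idea:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def fit_ode_parameter(t_obs, y_obs, theta0):
    """Numerical-solution-based NLS for the toy ODE dx/dt = -theta * x,
    x(0) = 1: solve the ODE with an explicit Runge-Kutta scheme (RK45)
    at each trial theta and minimize the squared distance to the data."""
    def residuals(theta):
        sol = solve_ivp(lambda t, x: -theta[0] * x, (0.0, t_obs[-1]),
                        [1.0], t_eval=t_obs, rtol=1e-8, atol=1e-10)
        return sol.y[0] - y_obs
    return least_squares(residuals, x0=[theta0]).x[0]

rng = np.random.default_rng(4)
theta_true = 0.8
t_obs = np.linspace(0.0, 5.0, 50)
# noisy observations of the true solution exp(-theta * t)
y_obs = np.exp(-theta_true * t_obs) + 0.02 * rng.standard_normal(50)
theta_hat = fit_ode_parameter(t_obs, y_obs, theta0=0.3)
```

The abstract's step-size condition says how accurate the inner ODE solve must be: here the adaptive RK45 tolerances play the role of the step size, keeping the numerical error well below the measurement noise.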

16.
Value at Risk (VaR) forecasts can be produced from conditional autoregressive VaR models, estimated using quantile regression. Quantile modeling avoids a distributional assumption, and allows the dynamics of the quantiles to differ for each probability level. However, by focusing on a quantile, these models provide no information regarding expected shortfall (ES), which is the expectation of the exceedances beyond the quantile. We introduce a method for predicting ES corresponding to VaR forecasts produced by quantile regression models. It is well known that quantile regression is equivalent to maximum likelihood based on an asymmetric Laplace (AL) density. We allow the density's scale to be time-varying, and show that it can be used to estimate conditional ES. This enables a joint model of conditional VaR and ES to be estimated by maximizing an AL log-likelihood. Although this estimation framework uses an AL density, it does not rely on an assumption for the returns distribution. We also use the AL log-likelihood for forecast evaluation, and show that it is strictly consistent for the joint evaluation of VaR and ES. Empirical illustration is provided using stock index data. Supplementary materials for this article are available online.
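A static (non-dynamic) toy version of the AL-likelihood idea can be sketched: treat the location as the α-quantile, treat the scale σ as free, maximize the AL log-likelihood, and read off the implied ES, which for an AL density equals q − σ/(1 − α). The article's estimator is dynamic, with time-varying quantile and scale; this sketch only shows the mechanics of estimating VaR and ES from one likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def var_es_al(returns, alpha=0.05):
    """Joint VaR/ES by maximizing an asymmetric Laplace log-likelihood
    (static toy version of the idea; the article's model is dynamic)."""
    r = np.asarray(returns)

    def neg_loglik(par):
        q, log_sigma = par
        sigma = np.exp(log_sigma)
        u = r - q
        rho = u * (alpha - (u < 0))          # check (pinball) loss
        # AL log-density up to the constant log(alpha * (1 - alpha))
        return np.sum(np.log(sigma) + rho / sigma)

    q0 = np.quantile(r, alpha)               # sensible starting values
    u0 = r - q0
    s0 = np.mean(u0 * (alpha - (u0 < 0)))
    res = minimize(neg_loglik, x0=[q0, np.log(s0)], method="Nelder-Mead")
    q, sigma = res.x[0], np.exp(res.x[1])
    es = q - sigma / (1.0 - alpha)           # AL-implied expected shortfall
    return q, es

rng = np.random.default_rng(5)
r = rng.standard_normal(20000)
var_hat, es_hat = var_es_al(r, alpha=0.05)
```

Note that the AL-implied ES of this static fit need not match the true tail expectation under misspecification; the article's time-varying scale is what ties the likelihood to conditional ES without a distributional assumption on returns.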

17.
ABSTRACT

Nonstandard mixtures are those that result from a mixture of a discrete and a continuous random variable. They arise in practice, for example, in medical studies of exposure. Here, a random variable that models exposure might have a discrete mass point at no exposure, but otherwise may be continuous. In this article we explore estimating the distribution function associated with such a random variable from a nonparametric viewpoint. We assume that the locations of the discrete mass points are known so that we will be able to apply a classical nonparametric smoothing approach to the problem. The proposed estimator is a mixture of an empirical distribution function and a kernel estimate of a distribution function. A simple theoretical argument reveals that existing bandwidth selection algorithms can be applied to the smooth component of this estimator as well. The proposed approach is applied to two example sets of data.
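A sketch of such a mixture estimator, assuming a single known atom and a Gaussian integrated kernel for the continuous component (the abstract does not fix these choices, and the bandwidth below is set by hand rather than by a selection algorithm):

```python
import numpy as np
from scipy import stats

def nonstandard_mixture_cdf(x, mass_points, eval_pts, h):
    """CDF estimator for a discrete/continuous mixture with known atom
    locations: an empirical step component at the atoms plus a smoothed
    (integrated-kernel) CDF estimate for the continuous part."""
    x = np.asarray(x)
    at_atom = np.isin(x, mass_points)
    p_atom = at_atom.mean()                    # total mass at known atoms
    xc = x[~at_atom]                           # continuous observations
    F = np.zeros(len(eval_pts))
    for m in np.atleast_1d(mass_points):       # empirical step component
        F += (x == m).mean() * (eval_pts >= m)
    # smooth component: average of Gaussian CDFs centered at the data
    F += (1 - p_atom) * stats.norm.cdf(
        (eval_pts[:, None] - xc[None, :]) / h).mean(axis=1)
    return F

rng = np.random.default_rng(6)
n = 4000
exposed = rng.random(n) < 0.7                  # ~30% have zero exposure
x = np.where(exposed, rng.lognormal(0.0, 0.5, n), 0.0)
F_hat = nonstandard_mixture_cdf(x, mass_points=[0.0],
                                eval_pts=np.array([0.0, 1.0]), h=0.1)
```

In the exposure example, F_hat(0) recovers the atom mass (about 0.3) and F_hat(1) adds the continuous mass below 1 (the lognormal median), about 0.3 + 0.7 × 0.5 = 0.65.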

18.
An empirical distribution function estimator for the difference of order statistics from two independent populations can be used for inference between quantiles from these populations. The inferential properties of the approach are evaluated in a simulation study where different sample sizes, theoretical distributions, and quantiles are studied. Small to moderate sample sizes, tail quantiles, and quantiles which do not coincide with the expectation of an order statistic are identified as problematic for appropriate Type I error control.

19.
In biostatistical applications interest often focuses on the estimation of the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed monitoring time C, then the data conform to the well-understood singly-censored current status model, also known as interval-censored data, case I. Additional covariates can be used to allow for dependent censoring and to improve estimation of the marginal distribution of T. Assuming a wrong model for the conditional distribution of T given the covariates will lead to an inconsistent estimator of the marginal distribution. On the other hand, the nonparametric maximum likelihood estimator (NPMLE) of F_T requires splitting the sample into several subsamples corresponding to particular values of the covariates, computing the NPMLE for every subsample, and then averaging. With even a few continuous covariates the performance of the resulting estimator is typically miserable. In van der Laan and Robins (1996) a locally efficient one-step estimator is proposed for smooth functionals of the distribution of T, assuming nothing about the conditional distribution of T given the covariates, but assuming a model for censoring given the covariates. The estimators are asymptotically linear if the censoring mechanism is estimated correctly. The estimator also uses an estimator of the conditional distribution of T given the covariates; if this estimate is consistent, then the estimator is efficient, and if it is inconsistent, the estimator is still consistent and asymptotically normal. In this paper we show that the estimators can also be used to estimate the distribution function in a locally optimal way. Moreover, we show that the proposed estimator can be used to estimate the distribution based on interval-censored data (T is now known to lie between two observed points) in the presence of covariates.
The resulting estimator also has a known influence curve, so that asymptotic confidence intervals are directly available. In particular, one can apply our proposal to interval-censored data without covariates. In Geskus (1992) the information bound for interval-censored data with two uniformly distributed monitoring times, at the uniform distribution for T, has been computed. We show that the relative efficiency of our proposal with respect to this optimal bound equals 0.994, which is also reflected in finite-sample simulations. Finally, the good practical performance of the estimator is shown in a simulation study. This revised version was published online in July 2006 with corrections to the Cover Date.

20.
Abstract

In survival or reliability data analysis, it is often useful to estimate quantiles of the lifetime distribution, such as the median time to failure. Different nonparametric methods can construct confidence intervals for quantiles of the lifetime distribution, some of which are implemented in commonly used statistical software packages. Here we investigate the performance of different interval estimation procedures under a variety of settings with different censoring schemes. Our main objectives in this paper are to (i) evaluate the performance of confidence intervals based on the transformation approach commonly used in statistical software, (ii) introduce a new density-estimation-based approach to obtain confidence intervals for survival quantiles, and (iii) compare it with the transformation approach. We provide a comprehensive comparative study and offer some useful practical recommendations based on our results. Some numerical examples are presented to illustrate the methodologies developed.
