Similar Documents
20 similar documents retrieved (search time: 156 ms)
1.
ABSTRACT

It has been shown that equilibrium restrictions in a search model can be used to identify quantiles of the search cost distribution from observed prices alone. These quantiles can be difficult to estimate in practice. This article uses a minimum distance approach to estimate them that is easy to compute. A version of our estimator is the solution to a nonlinear least-squares problem that can be programmed straightforwardly in software such as Stata. We show that our estimator is consistent and asymptotically normal, and that its distribution can be consistently estimated by a bootstrap. Our estimator can also be used to estimate the cost distribution nonparametrically on a larger support when prices from heterogeneous markets are available. We propose a two-step sieve estimator for that case: the first step estimates quantiles from each market, which are then used in the second step as generated variables to perform nonparametric sieve estimation. We derive the uniform rate of convergence of the sieve estimator, which can be used to quantify the errors incurred from interpolating data across markets. To illustrate, we use online bookmaking odds for English football league matches (as prices) and find evidence suggesting that consumer search costs have fallen following a change in British law that allows gambling operators to advertise more widely. Supplementary materials for this article are available online.
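As a rough, hedged illustration of the minimum distance idea (not the article's equilibrium-based estimator), the sketch below matches empirical price quantiles to the quantiles of an assumed parametric family by nonlinear least squares with SciPy; the lognormal family, the quantile levels, and the synthetic data are all assumptions made for the example.

```python
# Minimal sketch of a minimum-distance (nonlinear least-squares) quantile estimator.
# The lognormal "model quantile" function is a stand-in, NOT the equilibrium mapping
# used in the article; it only illustrates the computational approach.
import numpy as np
from scipy import stats
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
prices = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # synthetic "observed prices"

levels = np.array([0.1, 0.25, 0.5, 0.75, 0.9])          # quantile levels to match
empirical_q = np.quantile(prices, levels)

def residuals(theta):
    mu, log_sigma = theta
    model_q = stats.lognorm.ppf(levels, s=np.exp(log_sigma), scale=np.exp(mu))
    return model_q - empirical_q

fit = least_squares(residuals, x0=np.array([0.0, 0.0]))
print("estimated (mu, sigma):", fit.x[0], np.exp(fit.x[1]))
```

A nonparametric bootstrap over `prices` (re-running the fit on resampled data) would give the kind of standard errors the abstract refers to.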

2.
This article provides a method to estimate search costs in a differentiated product environment in which consumers are uncertain about the utility distribution. Consumers learn about the utility distribution through Bayesian updating of their Dirichlet process prior beliefs. The model provides expressions for bounds on the search costs that can rationalize observed search and purchasing behavior. Using individual-specific data on web browsing and purchasing behavior for MP3 players sold online, we show how to use these bounds to estimate search costs as well as the parameters of the utility distribution. Our estimates indicate that search costs are sizable. We show that ignoring consumer learning while searching can lead to severely biased search cost and elasticity estimates.

3.
Measuring dependence in multivariate time series is tantamount to modeling its dynamic structure in space and time. In risk management, the nonnormal behavior of most financial time series calls for non-Gaussian dependence structures, so correctly modeling non-Gaussian dependence is a key issue in the analysis of multivariate time series. In this article we use copula functions with adaptively estimated time-varying parameters to model the distribution of returns. We then apply these copulae to the estimation of the Value-at-Risk of portfolios and show that they outperform the RiskMetrics approach.
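As a hedged sketch of the copula idea (a static Gaussian copula rather than the adaptively estimated time-varying copulas in the article), the code below couples empirical marginals with a Gaussian copula fitted to normal scores and reads off portfolio Value-at-Risk from simulated returns; the two-asset synthetic data and the 99% level are assumptions for the example.

```python
# Simplified static Gaussian-copula VaR sketch (the article uses time-varying copulas).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic two-asset return history (stand-in for real data).
returns = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=1000) * 0.01
n, d = returns.shape

# 1. Fit the copula: transform to normal scores via ranks and estimate their correlation.
u = stats.rankdata(returns, axis=0) / (n + 1.0)          # pseudo-observations in (0, 1)
z = stats.norm.ppf(u)
corr = np.corrcoef(z, rowvar=False)

# 2. Simulate from the Gaussian copula and map back through empirical marginal quantiles.
sims = stats.norm.cdf(rng.multivariate_normal(np.zeros(d), corr, size=100_000))
sim_returns = np.column_stack([np.quantile(returns[:, j], sims[:, j]) for j in range(d)])

# 3. Equally weighted portfolio VaR at the 99% level.
port = sim_returns.mean(axis=1)
var_99 = -np.quantile(port, 0.01)
print(f"99% one-period VaR (fraction of portfolio value): {var_99:.4f}")
```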

4.
Using survey data, we characterize directly the impact of expected business conditions on expected excess stock returns. Expected business conditions consistently affect expected excess returns in a countercyclical fashion. Moreover, inclusion of expected business conditions in otherwise-standard predictive return regressions substantially reduces the explanatory power of the conventional financial predictors, including the dividend yield, default premium, and term premium, while simultaneously increasing R². Expected business conditions retain predictive power even when the key nonfinancial predictor, the generalized consumption/wealth ratio, is included. We argue that time-varying expected business conditions likely capture time-varying risk, whereas time-varying consumption/wealth may capture time-varying risk aversion.

5.
This article is an empirical application of the search model with an unknown distribution, as introduced by Rothschild in 1974. For searchers who hold Dirichlet priors, we develop a novel characterization of optimal search behavior. Our solution delivers easily computable formulas for the ex ante purchase probabilities as outcomes of search, as required by discrete-choice-based estimation. Using our method, we investigate the consequences of consumer learning for the properties of search-generated demand. Holding search costs constant, the search model with a known distribution predicts larger price elasticities, mainly for the lower-priced products. We estimate a search model with Dirichlet priors on a dataset of prices and market shares of S&P 500 mutual funds. We find that assuming no uncertainty in consumer priors leads to substantial biases in search cost estimates.

6.
ABSTRACT

Economic statistical designs aim to minimize the cost of process monitoring for a specific scenario, that is, for a given set of estimated process and cost parameters. In practice, however, the process may be affected by more than one scenario, which can lead to severe cost penalties if the wrong design is used. Here, we investigate the robust economic statistical design (RESD) of the T² chart in an attempt to reduce these cost penalties when there are multiple scenarios. We employ a genetic algorithm (GA) to minimize the total expected monitoring cost across all distinct scenarios. We illustrate the effectiveness of the method with two numerical examples. Simulation studies indicate that robust economic statistical designs should be encouraged in practice.
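The sketch below is a toy illustration of the robust-design idea only: a small real-coded genetic algorithm searches chart parameters (sample size n, sampling interval h, limit width k) to minimize an expected cost averaged over several scenarios. The cost function, the scenarios, and the parameter ranges are crude placeholders, not the economic model used in the article.

```python
# Toy genetic algorithm for a robust economic-statistical design: minimize the
# expected monitoring cost over several scenarios. The cost function is a crude
# placeholder, NOT the economic cost model used in the article.
import numpy as np

rng = np.random.default_rng(2)

# Each scenario: (shift size in sigma units, weight/probability).
scenarios = [(0.5, 0.3), (1.0, 0.5), (2.0, 0.2)]

def cost(design, shift):
    n, h, k = design                       # sample size (treated as continuous), interval, limit width
    # Placeholder cost: sampling cost + penalty for slow detection of the shift.
    arl1 = 1.0 / max(1e-6, 1 - 0.5 ** (shift * np.sqrt(n) / k))   # ad hoc detection proxy
    return 5.0 * n / h + 20.0 * arl1 * h

def expected_cost(design):
    return sum(w * cost(design, shift) for shift, w in scenarios)

# Real-coded GA: tournament selection, blend crossover, Gaussian mutation.
lo, hi = np.array([2.0, 0.5, 1.0]), np.array([15.0, 8.0, 5.0])
pop = rng.uniform(lo, hi, size=(40, 3))
for _ in range(100):
    fitness = np.array([expected_cost(ind) for ind in pop])
    parents = pop[np.array([min(rng.integers(0, 40, 2), key=lambda i: fitness[i])
                            for _ in range(40)])]
    alpha = rng.uniform(size=(40, 1))
    children = alpha * parents + (1 - alpha) * parents[rng.permutation(40)]
    children += rng.normal(0, 0.1, children.shape)          # mutation
    pop = np.clip(children, lo, hi)

best = min(pop, key=expected_cost)
print("best (n, h, k):", np.round(best, 2), " expected cost:", round(expected_cost(best), 2))
```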

7.
Abstract

In this paper, we propose a hybrid method to estimate the baseline hazard of the Cox proportional hazards model. The proposed method combines the nonparametric Kaplan–Meier estimate of the survival function with the partial-likelihood estimate of the regression component of the Cox model to estimate a parametric baseline hazard function. We compare the baseline hazard estimated by the hybrid method with that estimated by the Cox model, measuring the performance of each method by the estimated parameters of the baseline distribution and by the goodness of fit of the model. Both real data and Monte Carlo simulation studies are used for the comparison. The results show that the proposed hybrid method provides a better estimate of the baseline hazard than the Cox model.
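This is not the authors' hybrid estimator, but a minimal sketch of its main ingredients: a Kaplan–Meier survival estimate computed from scratch, followed by fitting a Weibull cumulative hazard to −log Ŝ(t) as an assumed parametric baseline form; the synthetic data, the Weibull choice, and the least-squares fit on the log–log scale are assumptions for illustration.

```python
# Sketch: Kaplan–Meier survival estimate, then a Weibull baseline fitted to the
# implied cumulative hazard. Illustrates the ingredients, not the article's
# exact hybrid procedure.
import numpy as np

rng = np.random.default_rng(3)
t_true = rng.weibull(1.5, 300) * 10.0            # synthetic event times
c = rng.uniform(0, 15, 300)                      # censoring times
time = np.minimum(t_true, c)
event = (t_true <= c).astype(int)

# Kaplan–Meier estimate over the ordered observation times (ties are negligible here).
order = np.argsort(time)
time, event = time[order], event[order]
n = len(time)
at_risk = n - np.arange(n)
surv = np.cumprod(1.0 - event / at_risk)

# Fit a Weibull cumulative hazard H(t) = (t / lam)^k to -log S_KM(t)
# via ordinary least squares on log H = k*log t - k*log lam.
mask = (event == 1) & (surv > 0) & (surv < 1)
log_H = np.log(-np.log(surv[mask]))
log_t = np.log(time[mask])
k_hat, intercept = np.polyfit(log_t, log_H, 1)
lam_hat = np.exp(-intercept / k_hat)
print(f"fitted Weibull baseline: shape={k_hat:.2f}, scale={lam_hat:.2f}")
```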

8.
ABSTRACT

We propose an extension of parametric product partition models (PPMs). We call our proposal nonparametric product partition models because we associate a random measure, instead of a parametric kernel, with each set within a random partition. Our methodology does not impose any specific form on the marginal distribution of the observations, allowing us to detect shifts of behaviour even when dealing with heavy-tailed or skewed distributions. We propose a suitable loss function and find the partition of the data having minimum expected loss. We then apply our nonparametric procedure to multiple change-point analysis and compare it with parametric PPMs and with other methodologies that have recently appeared in the literature. In the context of missing data, we also exploit the product partition structure to estimate the distribution function of each missing value, which allows us to detect change points using the loss function mentioned above. Finally, we present applications to financial as well as genetic data.

9.
10.
Abstract

In this article, we propose a penalized local log-likelihood method that locally selects the number of components in nonparametric finite mixture of regression models via proportion shrinkage. Mean functions and variance functions are estimated simultaneously. We show that the number of components can be estimated consistently, and we further establish the asymptotic normality of the functional estimates. A modified EM algorithm is used to estimate the unknown functions. Simulations are conducted to demonstrate the performance of the proposed method, and we illustrate the method with an empirical analysis of the United States housing price index data.
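The code below is a hedged sketch of the underlying EM machinery for a plain two-component parametric mixture of linear regressions; the article's method is nonparametric and locally penalizes the proportions, which this toy version does not implement, and the simulated data and fixed K = 2 are assumptions for the example.

```python
# EM for a two-component parametric mixture of linear regressions (a simplified
# stand-in for the article's penalized, nonparametric version).
import numpy as np

rng = np.random.default_rng(4)
n = 400
x = rng.uniform(0, 1, n)
z = rng.random(n) < 0.4                                   # latent component labels
y = np.where(z, 1.0 + 3.0 * x, 4.0 - 2.0 * x) + rng.normal(0, 0.3, n)
X = np.column_stack([np.ones(n), x])

K = 2
beta = rng.normal(size=(K, 2))
sigma = np.ones(K)
props = np.full(K, 1.0 / K)

for _ in range(200):
    # E-step: posterior component probabilities (responsibilities).
    dens = np.stack([props[k] / (sigma[k] * np.sqrt(2 * np.pi)) *
                     np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k]) ** 2)
                     for k in range(K)], axis=1)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component, then variances and proportions.
    for k in range(K):
        w = r[:, k]
        Xw = X * w[:, None]
        beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
    props = r.mean(axis=0)

print("mixing proportions:", np.round(props, 2))
print("regression coefficients:", np.round(beta, 2))
```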

11.
It is well known that Yates' algorithm can be used to estimate the effects in a factorial design. We develop a modification of this algorithm, the modified Yates' algorithm, together with its inverse. We show that the intermediate steps in our algorithm have a direct interpretation as estimated level-specific mean values and effects. We also show how Yates' algorithm, or our modified version, can be used to construct the blocks in a 2^k factorial design and to generate the layout sheet and the confounding pattern of a 2^(k−p) fractional factorial design. In a final example we bring all these methods together by generating and analysing a 2^(6−2) design in 2 blocks.
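For concreteness, here is a minimal implementation of the classical (unmodified) Yates' algorithm for a 2^k full factorial with responses in standard order; it shows the repeated sum-and-difference column operation, but it does not reproduce the article's modified algorithm or its inverse, and the example responses are made up.

```python
# Classical Yates' algorithm for a 2^k full factorial with responses in standard
# (Yates) order: k passes of pairwise sums followed by pairwise differences.
import numpy as np

def yates(y):
    y = np.asarray(y, dtype=float)
    n = y.size
    k = int(np.log2(n))
    assert 2 ** k == n, "response vector length must be a power of 2"
    for _ in range(k):
        pairs = y.reshape(-1, 2)
        y = np.concatenate([pairs.sum(axis=1), pairs[:, 1] - pairs[:, 0]])
    effects = y / (n / 2)          # contrasts divided by 2^(k-1) give the effects
    effects[0] = y[0] / n          # first entry is the grand mean
    return effects

# 2^3 example in standard order: (1), a, b, ab, c, ac, bc, abc
responses = [60, 72, 54, 68, 52, 83, 45, 80]
labels = ["mean", "A", "B", "AB", "C", "AC", "BC", "ABC"]
for lab, eff in zip(labels, yates(responses)):
    print(f"{lab:>4}: {eff:7.3f}")
```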

12.
We consider the problem of estimating R = P(Y &lt; X) when X and Y are independent Burr Type X random variables. We assume that the sample from each population contains one spurious observation. Bayes estimates are derived for the exchangeable and identifiable cases. A Monte Carlo simulation is carried out to compare the bias and the expected loss of the estimates of R.
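The quantity R = P(Y &lt; X) in the abstract can be simulated directly. The sketch below checks a Monte Carlo estimate against the closed form R = θ_x/(θ_x + θ_y), which holds for Burr Type X (generalized Rayleigh) variables with a common scale; the spurious-observation and Bayesian aspects of the article are not modeled, and the parameter values are arbitrary.

```python
# Monte Carlo estimate of R = P(Y < X) for independent Burr Type X variables with
# CDF F(x) = (1 - exp(-x^2))^theta, checked against the closed form
# R = theta_x / (theta_x + theta_y) that holds under a common scale.
import numpy as np

rng = np.random.default_rng(5)

def rburr_x(theta, size, rng):
    u = rng.uniform(size=size)
    return np.sqrt(-np.log(1.0 - u ** (1.0 / theta)))    # inverse-CDF sampling

theta_x, theta_y = 2.0, 1.0
x = rburr_x(theta_x, 200_000, rng)
y = rburr_x(theta_y, 200_000, rng)

r_mc = np.mean(y < x)
r_exact = theta_x / (theta_x + theta_y)
print(f"Monte Carlo R = {r_mc:.4f},  closed form R = {r_exact:.4f}")
```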

13.
Recently, a shift-independent information measure known as the generalized cumulative entropy of order n (GCEn) was proposed by Kayal (2016). In this communication, we propose a shift-dependent version of GCEn. Various properties, including the effect of transformations and bounds, are discussed. Several relationships between the shift-dependent GCEn and some well-known reliability measures are studied, and a few characterization results are obtained. We derive an estimator of the proposed measure via an empirical distribution function approach. Large-sample properties of the estimator are studied when independent observations are drawn from a Weibull distribution.

14.
In this article, we propose two control charts for monitoring the covariance matrix of bivariate normal processes: the 'VMAX Group Runs' (VMAX-GR) and 'VMAX Modified Group Runs' (VMAX-MGR) charts. The proposed charts detect a process change faster and have better diagnostic features. It is verified that the VMAX-GR and VMAX-MGR charts give a significant reduction in the out-of-control Average Run Length (ARL), in the zero state as well as in the steady state, compared with the synthetic control chart based on the VMAX statistic and with the generalized variance |S| chart.

15.
ABSTRACT

We propose a semiparametric approach to estimate the existence and location of a statistical change-point in a nonlinear multivariate time series contaminated with an additive noise component. In particular, we consider a p-dimensional stochastic process of independent multivariate normal observations whose mean function varies smoothly except at a single change-point. Our approach involves conducting a Bayesian analysis on the empirical detail coefficients of the original time series after a wavelet transform. If the mean function of the time series can be expressed as a multivariate step function, our Bayesian-wavelet method performs comparably with classical parametric methods such as maximum likelihood estimation. The advantage of our multivariate change-point method is that it applies to a much larger class of mean functions requiring only general smoothness conditions.
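As a much-simplified, non-Bayesian and univariate illustration of why wavelet detail coefficients are useful here: for a mean function that is a step, the finest-scale Haar detail coefficient straddling the jump dominates. The sketch uses the PyWavelets package on synthetic data and is not the article's p-dimensional Bayesian procedure.

```python
# Simplified (non-Bayesian, univariate) illustration: a mean shift shows up as a
# large finest-scale Haar detail coefficient near the change point.
import numpy as np
import pywt

rng = np.random.default_rng(6)
n, cp = 512, 301
signal = np.concatenate([np.zeros(cp), np.full(n - cp, 3.0)]) + rng.normal(0, 0.3, n)

# Single-level Haar transform; detail coefficients pick up local differences, so the
# sample pair straddling the jump carries the largest absolute coefficient.
cA, cD = pywt.dwt(signal, "haar")
est = 2 * int(np.argmax(np.abs(cD)))         # each coefficient covers two samples
print(f"true change point: {cp}, estimated (to within one pair): {est}")
```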

16.
In this article we consider a control chart based on the sample variances of two quality characteristics. The points plotted on the chart correspond to the maximum of these two statistics. The main reason to consider the proposed chart instead of the generalized variance |S| chart is its better diagnostic feature: with the new chart it is easier to relate an out-of-control signal to the variables whose parameters have moved away from their in-control values. We study the efficiency of the control chart for different shifts in the covariance matrix, obtaining the average run length (ARL), which measures the effectiveness of a control chart in detecting process shifts. The proposed chart always detects process disturbances faster than the generalized variance |S| chart. The same holds when the sample size is variable, except in a few cases in which the sample size switches between small and very large.
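A small Monte Carlo sketch of the chart logic follows: each sample's plotted statistic is the maximum of the two sample variances, and the ARL is the average number of samples until that maximum exceeds a control limit. The limit below is set from the chi-square distribution assuming independent, unit-variance characteristics, which is a simplification of the bivariate setting studied in the article; the target in-control ARL and shift size are also assumptions.

```python
# Monte Carlo sketch of a VMAX-type chart: plot max(S1^2, S2^2) per sample and
# signal when it exceeds a control limit. The limit assumes independent,
# unit-variance characteristics (a simplification of the bivariate setting).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 5                                         # subgroup size
arl0_target = 200.0
p_each = np.sqrt(1.0 - 1.0 / arl0_target)     # per-variable non-signal probability
cl = stats.chi2.ppf(p_each, n - 1) / (n - 1)  # control limit for S^2 when sigma^2 = 1

def run_length(shift1=1.0, shift2=1.0, max_len=100_000):
    """Samples until max(S1^2, S2^2) exceeds cl; shifts multiply the std devs."""
    for t in range(1, max_len + 1):
        x1 = rng.normal(0, shift1, n)
        x2 = rng.normal(0, shift2, n)
        if max(x1.var(ddof=1), x2.var(ddof=1)) > cl:
            return t
    return max_len

reps = 2000
arl_in = np.mean([run_length() for _ in range(reps)])
arl_out = np.mean([run_length(shift1=1.5) for _ in range(reps)])
print(f"in-control ARL ~ {arl_in:.0f}, ARL after a 1.5x sigma shift ~ {arl_out:.1f}")
```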

17.
Abstract

This article proposes a new method for estimating heterogeneous externalities in policy analysis when social interactions take the linear-in-means form. We establish that the parameters of interest can be identified and consistently estimated using specific functions of the share of the eligible population. We also study the finite sample performance of the proposed estimators using Monte Carlo simulations. The method is illustrated using data on the PROGRESA program. We find that more than 50% of the effects of the program on schooling attendance are due to externalities, which are heterogeneous within and between poor and nonpoor households.

18.
A vector of k positive coordinates lies in the k-dimensional simplex when the sum of all coordinates in the vector is constrained to equal 1. Sampling distributions efficiently on the simplex can be difficult because of this constraint. This paper introduces a transformed logit-scale proposal for Markov chain Monte Carlo that naturally adjusts the step size based on the position in the simplex. This enables efficient sampling on the simplex even when the simplex is high dimensional and/or includes coordinates of differing orders of magnitude. Implementation of this method is demonstrated with the SALTSampler R package, and comparisons are made to simpler sampling schemes to illustrate the improvement in performance this method provides. A simulation of a typical calibration problem also demonstrates the utility of this method.
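The SALTSampler package implements the logit-scale proposal described in the abstract; as a simplified stand-in, the sketch below runs a random-walk Metropolis sampler on additive log-ratio coordinates of the simplex, including the Jacobian term (the product of all coordinates) so that the chain targets a Dirichlet distribution. It does not reproduce the package's position-dependent step-size adjustment, and the target parameters and step size are assumptions.

```python
# Random-walk Metropolis on additive log-ratio (ALR) coordinates of the simplex,
# targeting a Dirichlet distribution. A simplified illustration of transform-based
# simplex sampling; it omits SALTSampler's position-dependent step-size tuning.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(8)
alpha = np.array([5.0, 2.0, 0.5, 0.1])          # Dirichlet target (note differing scales)
k = len(alpha)

def to_simplex(y):
    """Inverse ALR transform: y in R^(k-1) -> x on the k-part simplex."""
    e = np.exp(np.append(y, 0.0))
    return e / e.sum()

def log_target(y):
    """Dirichlet log-density of x(y) plus the log-Jacobian of the ALR transform."""
    x = to_simplex(y)
    log_dir = np.sum((alpha - 1) * np.log(x)) + gammaln(alpha.sum()) - gammaln(alpha).sum()
    return log_dir + np.sum(np.log(x))          # |det dx/dy| = prod_i x_i

y = np.zeros(k - 1)
samples, accepted = [], 0
for it in range(20_000):
    prop = y + rng.normal(0, 0.6, k - 1)
    if np.log(rng.uniform()) < log_target(prop) - log_target(y):
        y, accepted = prop, accepted + 1
    samples.append(to_simplex(y))

samples = np.array(samples[5_000:])             # drop burn-in
print("acceptance rate:", round(accepted / 20_000, 2))
print("sample means:   ", np.round(samples.mean(axis=0), 3))
print("Dirichlet means:", np.round(alpha / alpha.sum(), 3))
```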

19.
Stochastic Models, 2013, 29(3): 469–496
We consider a single-commodity, discrete-time, multiperiod (s, S)-policy inventory model with backlog. The cost function may contain holding, shortage, and fixed ordering costs; holding and shortage costs may be nonlinear. We show that the resulting inventory process is quasi-regenerative, i.e., admits a cycle decomposition, and we indicate how to estimate the performance by Monte Carlo simulation. By using a conditioning method, the push-out technique, and the change-of-measure method, estimates of the whole response surface (i.e., the steady-state performance as a function of the parameters s and S) and its derivatives can be obtained. Estimates of the optimal (s, S) policy can then be calculated by numerical optimization.
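A bare-bones Monte Carlo sketch of the periodic-review (s, S) model with backlogging follows: order up to S whenever the inventory level falls below s, accumulate fixed ordering, holding, and shortage costs, and estimate the long-run average cost per period. The Poisson demand, zero lead time, and cost rates are illustrative assumptions; the article's derivative and response-surface estimation machinery (push-out, change of measure) is not shown.

```python
# Bare-bones simulation of a periodic-review (s, S) inventory model with backlog.
# Demand distribution and cost rates are illustrative; the article's gradient and
# response-surface estimation techniques are not reproduced here.
import numpy as np

rng = np.random.default_rng(9)

def average_cost(s, S, periods=50_000, fixed=32.0, hold=1.0, short=5.0, lam=10.0):
    inv = S                       # starting inventory level (backlog is negative)
    total = 0.0
    for _ in range(periods):
        if inv < s:               # review: order up to S (delivery assumed immediate)
            total += fixed
            inv = S
        inv -= rng.poisson(lam)   # demand for the period
        total += hold * max(inv, 0) + short * max(-inv, 0)
    return total / periods

for s, S in [(5, 40), (10, 50), (20, 60)]:
    print(f"(s={s:2d}, S={S:2d}): average cost per period ~ {average_cost(s, S):.2f}")
```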

20.
ABSTRACT

Recently, researchers have tried to design the T² chart economically so as to achieve the minimum possible quality cost; however, when the T² chart is designed, it is important to consider multiple scenarios. This research presents robust economic designs of the T² chart for situations with more than one scenario. An illustrative example is used to demonstrate the effect of the model parameters on the optimal designs, and a genetic algorithm is employed to obtain them. Simulation studies show that the robust economic designs of the T² chart are more effective in practice than the traditional economic design.
