Similar Documents
20 similar documents found
1.
马键  王美今 《统计研究》2010,27(6):87-94
The rapid recent development of empirical game research has provided effective tools for analyzing strategic interaction in markets and for conducting policy analysis and counterfactual experiments. This paper proposes a two-stage method for estimating incomplete-information games with continuous strategies that can accommodate the influence of private information. In the first stage, the players' strategy and expected-payoff functions are estimated by nonparametric quantile regression; in the second stage, a simulated minimum-distance estimator is constructed from the Bayes-Nash equilibrium inequalities, yielding estimates of the structural parameters. Numerical simulations show that the method has good small-sample performance. Compared with the nested fixed-point approach in the existing literature, the method does not require computing the equilibrium, which greatly reduces the computational burden and mitigates the interference of multiple equilibria. The method can be used to estimate games with discrete as well as continuous state spaces.
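
A minimal sketch of the two-stage idea in Python, assuming an invented linear data-generating process with private shocks: a nonparametric quantile regression (here gradient-boosted pinball loss) in the first stage, and a simulated minimum-distance criterion matching the first-stage quantiles in the second. None of the names or settings come from the paper.

```python
# Two-stage estimation sketch: (1) nonparametric quantile regression of
# actions on the public state, (2) simulated minimum distance for the
# structural parameter. The DGP, private-information scale, and moment
# choices are illustrative assumptions, not the paper's model.
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n, theta_true, sigma = 2000, 1.5, 0.1
x = rng.uniform(0, 1, n)                        # public state
a = theta_true * x + rng.normal(0, sigma, n)    # observed continuous actions

# Stage 1: nonparametric conditional quantiles of the action given the state.
taus = [0.25, 0.50, 0.75]
qhat = {t: GradientBoostingRegressor(loss="quantile", alpha=t, max_depth=2)
           .fit(x[:, None], a).predict(x[:, None]) for t in taus}

# Stage 2: simulated minimum distance -- pick theta so model-implied
# quantiles match the first-stage estimates (fixed simulation draws).
sim_eps = rng.normal(0, sigma, (n, 200))

def smd(theta):
    sim = theta * x[:, None] + sim_eps
    return sum(np.mean((np.quantile(sim, t, axis=1) - qhat[t]) ** 2)
               for t in taus)

res = minimize_scalar(smd, bounds=(0.1, 5.0), method="bounded")
print(f"theta_hat = {res.x:.3f} (true value {theta_true})")
```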

2.
潘凌云  董竹 《统计研究》2021,38(7):100-111
As downward pressure on the economy keeps mounting, how to stabilize employment has become a question of wide concern. Treating the wage tax-deduction policy introduced in 2008 as a quasi-natural experiment that captures exogenous variation in firms' tax burden, and using data on A-share listed companies over 2000-2018 as the research sample, this paper applies a difference-in-differences design to examine the causal relationship between tax incentives and firms' hiring behavior. The empirical results show that the wage tax-deduction policy brought additional tax relief and thereby stimulated firms' demand for labor; this conclusion remains robust after the common-trend, anticipation-effect, policy-overlap and reverse-causality concerns are effectively ruled out. Heterogeneity tests show that the policy's employment-incentive effect is more pronounced for private firms and for firms with weaker cost-shifting ability, higher labor intensity, or locations in more market-oriented regions, implying that the more sensitive a firm is to changes in labor costs, the stronger the policy's effect; the policy also significantly improved firms' operating performance. The findings help to clarify the micro-level mechanisms through which taxation affects firm behavior and carry important implications for governments seeking to expand employment through tax policy.
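
A minimal difference-in-differences sketch, assuming simulated firm-year data and invented variable names (`treat`, `post`, `log_emp`); it shows the two-way fixed-effects regression with firm-clustered standard errors that such a design typically runs, not the paper's actual specification.

```python
# Difference-in-differences sketch on a simulated firm-year panel;
# the DGP, effect size, and variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
firms, years = 500, range(2004, 2013)
df = pd.DataFrame([(f, y) for f in range(firms) for y in years],
                  columns=["firm", "year"])
df["treat"] = (df["firm"] < 250).astype(int)      # exposed to the 2008 reform
df["post"] = (df["year"] >= 2008).astype(int)
effect = 0.10                                      # assumed hiring response
df["log_emp"] = (0.02 * (df["year"] - 2004) + 0.3 * df["treat"]
                 + effect * df["treat"] * df["post"]
                 + rng.normal(0, 0.2, len(df)))

# Two-way fixed effects: firm and year dummies absorb levels and trends,
# so only the interaction (the DiD estimate) enters directly.
fit = smf.ols("log_emp ~ treat:post + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(fit.params["treat:post"], fit.bse["treat:post"])
```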

3.
The Birnbaum-Saunders distribution is a fatigue life distribution that was derived from a model assuming that failure is due to the development and growth of a dominant crack. This distribution has been shown to be applicable not only in fatigue analysis but also in other areas of engineering science. Because of its increasing use, it would be desirable to obtain expressions for the expected value of different powers of a Birnbaum-Saunders random variable.

In this article, the moment-generating function for the sinh-normal distribution is derived. It is shown that this moment-generating function can be used to obtain both integer and fractional moments for the Birnbaum-Saunders distribution. Thus it is now possible to obtain an expression for the expected value of the square root of a Birnbaum-Saunders random variable. A general expression for integer noncentral moments of the Birnbaum-Saunders distribution is derived using the moment-generating function of the sinh-normal distribution. Also included is an approximation of the moment-generating function that can be used for small values of the shape parameter.
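
The link the article exploits can be sketched as follows (standard parametrization; the article's notation may differ).

```latex
If $T \sim \mathrm{BS}(\alpha,\beta)$, then
\[
  Z = \frac{1}{\alpha}\left(\sqrt{T/\beta}-\sqrt{\beta/T}\right) \sim N(0,1),
\]
so $Y=\log T$ is sinh-normal, with
\[
  Z = \frac{2}{\alpha}\,\sinh\!\left(\frac{Y-\log\beta}{2}\right) \sim N(0,1),
\]
and every integer or fractional moment of $T$ is a value of the sinh-normal
moment-generating function:
\[
  E\bigl[T^{\,r}\bigr] = E\bigl[e^{\,rY}\bigr] = M_{Y}(r),
  \qquad \text{e.g. } E\bigl[\sqrt{T}\bigr] = M_{Y}\!\left(\tfrac12\right).
\]
```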

4.
Determination of preventive maintenance is an important issue for systems under degradation. A typical maintenance policy calls for complete preventive repair actions at pre-scheduled times and minimal repair actions whenever a failure occurs. Under minimal repair, failures are modeled according to a nonhomogeneous Poisson process. A perfect preventive maintenance restores the system to the as-good-as-new condition. The motivation for this article was a maintenance data set related to power switch disconnectors. Two different types of failures could be observed for these systems according to their causes, the major difference between the failure types being their costs. Assuming that the system will be in operation for an infinite time, we find the expected cost per unit of time for each preventive maintenance policy and hence obtain the optimal strategy as a function of the process intensities. Assuming a parametric form for the intensity function, large-sample estimates for the optimal maintenance check points are obtained and discussed.
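
A worked special case, assuming a single failure type with a power-law (Weibull-type) NHPP intensity rather than the article's two-type setting:

```latex
\[
  C(\tau)=\frac{c_p+c_m\int_0^{\tau}\lambda(t)\,dt}{\tau},
  \qquad
  \lambda(t)=\frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1},\ \beta>1,
\]
\[
  C'(\tau^{*})=0
  \;\Longrightarrow\;
  \tau^{*}=\eta\left(\frac{c_p}{(\beta-1)\,c_m}\right)^{1/\beta}.
\]
```

With two failure types of intensities $\lambda_j(t)$ and minimal-repair costs $c_j$, as in the disconnector application, the repair term becomes $\sum_{j=1}^{2} c_j \int_0^{\tau}\lambda_j(t)\,dt$ and the same optimization applies.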

5.
An LSTAR model is fitted to China's GDP growth rate with anticipated money shocks and unanticipated positive and negative money shocks, lagged three periods, as the transition variables; the model fits well and is used to analyze the nonlinear and asymmetric effects of the different types of money shocks on output. The estimated thresholds for anticipated shocks, unanticipated positive shocks, and unanticipated negative shocks are 20.03%, 2%, and 1.58% respectively, showing that the different shock types affect output with different asymmetries, that the transition intervals between the strong and weak regimes differ, and that the threshold for negative shocks is smaller than that for positive shocks. The results indicate that China's monetary policy exhibits pronounced nonlinearity and asymmetry, and that contractionary monetary policy is more effective than expansionary policy.
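
For reference, the generic two-regime LSTAR form (textbook symbols, not necessarily the paper's):

```latex
\[
  y_t = \phi' x_t\,\bigl(1-G(s_t;\gamma,c)\bigr) + \theta' x_t\,G(s_t;\gamma,c) + \varepsilon_t,
  \qquad
  G(s_t;\gamma,c) = \frac{1}{1+\exp\{-\gamma\,(s_t-c)\}},
\]
```

where $s_t$ is the transition variable (here a money shock lagged three periods), $c$ the threshold reported above, and $\gamma$ the smoothness of the regime switch.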

6.
In the statistical and economics literature on lotteries, the problem of designing attractive games has been studied using models in which sales are a function of the structure of prizes, with the prize structure recently proxied by the moments of the prize distribution. Such modelling is a vital input into the process of designing appealing new lottery games that can generate large revenues for good causes. We show how conscious selection, the process by which lottery players choose numbers non-randomly, complicates the multivariate distribution of prize winners by introducing massive overdispersion in the numbers of winners and large correlations between the numbers of different types of prize winner. Although it is possible to reach a qualitative understanding of the data intuitively, an a priori model does not fit well. We therefore construct an empirical model of the joint distribution of prize winners and use it to calculate the moments of ticket value as a function of sales. The new model gives much higher estimates of the ticket-value moments, particularly skewness, than previously obtained. Our results have consequences for policy decisions regarding game design. A spin-off result is that, on the basis of the fitted model, lottery players may increase the expected value of their ticket by strategically choosing numbers that are less popular with other players.
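
A toy Python simulation of why conscious selection overdisperses winner counts; the popularity weights and sizes are invented purely for illustration.

```python
# Toy illustration of conscious selection: heterogeneous combination
# popularity makes the number of prize winners overdispersed relative
# to uniformly random ticket choice. All weights are invented.
import numpy as np

rng = np.random.default_rng(2)
n_combos, n_tickets, n_draws = 10_000, 1_000_000, 500

# Random play: every combination equally popular.
p_uniform = np.full(n_combos, 1 / n_combos)
# Conscious selection: a few "popular" combinations (birthdays, patterns).
w = rng.lognormal(0, 1.5, n_combos)
p_popular = w / w.sum()

for name, p in [("random play", p_uniform), ("conscious selection", p_popular)]:
    # tickets sold on each combination (held fixed across draws for simplicity)
    tickets = rng.multinomial(n_tickets, p)
    # each draw picks one winning combination uniformly at random
    winners = tickets[rng.integers(0, n_combos, n_draws)]
    print(f"{name}: mean winners {winners.mean():.1f}, "
          f"variance {winners.var():.1f}")
```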

7.
Many clinical research studies evaluate a time‐to‐event outcome, illustrate survival functions, and conventionally report estimated hazard ratios to express the magnitude of the treatment effect when comparing groups. However, the hazard ratio may not be straightforward to interpret clinically and statistically when the proportional hazards assumption is invalid. In some recent papers published in clinical journals, the restricted mean survival time (RMST), or τ-year mean survival time, is discussed as an alternative summary measure for the time‐to‐event outcome. The RMST is defined as the expected value of the time to event limited to a specific time point, corresponding to the area under the survival curve up to that time point. This article summarizes the information needed to conduct a statistical analysis using the RMST, including its definition and statistical properties, adjusted analysis methods, sample size calculation, the information fraction for the RMST difference, and its clinical and statistical meaning and interpretation. Additionally, we discuss how to set the specific time point defining the RMST from two main points of view. We also provide SAS code developed to determine the sample size required to detect an expected RMST difference with appropriate power, and to reconstruct individual survival data in order to estimate an RMST reference value from a reported survival curve.
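
A self-contained numerical sketch of the RMST definition on simulated data (this is not the SAS code the article provides):

```python
# Sketch: Kaplan-Meier curve and restricted mean survival time (RMST),
# i.e. the area under S(t) up to a chosen tau. Simulated data, ties
# ignored for simplicity.
import numpy as np

rng = np.random.default_rng(3)
n, tau = 200, 24.0
t_event = rng.exponential(scale=30.0, size=n)
t_cens = rng.uniform(0, 60, size=n)
time = np.minimum(t_event, t_cens)
event = t_event <= t_cens

# Kaplan-Meier: S(t_i) = product over event times <= t_i of (1 - d_i/n_i).
order = np.argsort(time)
t_sorted, e_sorted = time[order], event[order]
at_risk = n - np.arange(n)
km = np.cumprod(1.0 - e_sorted / at_risk)

# RMST(tau): integrate the right-continuous step function S(t) on [0, tau].
mask = t_sorted <= tau
knots = np.concatenate(([0.0], t_sorted[mask], [tau]))
s_left = np.concatenate(([1.0], km[mask]))   # S on [knot_k, knot_{k+1})
rmst = np.sum(np.diff(knots) * s_left)
print(f"RMST up to tau = {tau}: {rmst:.2f}")
```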

8.
The term “intercurrent events” has recently been used to describe events in clinical trials that may complicate the definition and calculation of the treatment effect estimand. This paper focuses on the use of an attributable estimand to address intercurrent events. Those events that are considered to be adversely related to randomized treatment (e.g., discontinuation due to adverse events or lack of efficacy) are considered attributable and handled with a composite estimand strategy, while a hypothetical estimand strategy is used for intercurrent events not considered to be related to randomized treatment (e.g., unrelated adverse events). We explore several options for how to implement this approach and compare them to hypothetical “efficacy” and treatment policy estimand strategies through a series of simulation studies whose design is inspired by recent trials in chronic obstructive pulmonary disease (COPD), and we illustrate through an analysis of a recently completed COPD trial.
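
A schematic of how the attributable-estimand split might be coded on a simulated binary endpoint; the column names, event categories, and handling rules are invented for illustration, and a real hypothetical-strategy analysis would use modelling or imputation rather than the shortcut shown.

```python
# Sketch of an "attributable" estimand analysis: attributable intercurrent
# events are handled with a composite strategy (counted as failures);
# unrelated ones with a hypothetical strategy (here, crudely, the observed
# outcome is kept as if the event had not occurred). Invented data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "arm": rng.integers(0, 2, n),                     # 0 control, 1 treated
    "responder": rng.integers(0, 2, n).astype(bool),  # outcome if completed
    "ice": rng.choice(["none", "attributable", "unrelated"],
                      n, p=[0.7, 0.2, 0.1]),
})

# Composite strategy for attributable events: treat as non-response.
df["y"] = np.where(df["ice"] == "attributable", False, df["responder"])
# Hypothetical strategy for unrelated events: keep the outcome that would
# have been observed (a real analysis would impute it from a model).

print(df.groupby("arm")["y"].mean())   # response rate per arm
```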

9.
In this paper we consider the situation where we know the sum of n independent observations from the same probability distribution. We investigate how to empirically determine the marginal probability distributions of the different order statistics conditional upon knowing the sum. This research was motivated by explorations in process improvement where we know the total expected value or variance of a key measure of an n-step process and would like to estimate the proportion of the expected value or variance that is contributed by the most important step (i.e. the single step having the largest expected value or variance), the two most important steps, etc. Both graphical and tabular results are presented for exponential, gamma and normal distributions.
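
For the exponential case the conditioning is easy to simulate, using the standard fact that i.i.d. exponentials given their sum S are distributed as S times a flat Dirichlet vector; the sketch below estimates the expected share contributed by the largest steps (illustrative numbers, not the paper's tables).

```python
# Monte Carlo sketch for the exponential case: n i.i.d. exponentials
# conditioned on their sum S are distributed as S * Dirichlet(1,...,1),
# so conditional order statistics are easy to simulate.
import numpy as np

rng = np.random.default_rng(5)
n, S, reps = 5, 100.0, 100_000

parts = S * rng.dirichlet(np.ones(n), size=reps)   # each row sums to S
order_stats = np.sort(parts, axis=1)[:, ::-1]      # descending order

# Expected share of the total contributed by the largest step, the two
# largest steps, etc. -- the process-improvement question in the paper.
shares = order_stats.cumsum(axis=1).mean(axis=0) / S
for k, s in enumerate(shares, start=1):
    print(f"top {k} step(s): {s:.1%} of the total on average")
```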

10.
Likelihood is widely used in statistical applications, both for the full parameter by obvious direct calculation and for component interest parameters by recent asymptotic theory. Often, however, we want more detailed information concerning an inference procedure, such as, say, the distribution function of a measure of departure, which would then permit power calculations or a detailed display of p-values over a range of parameter values. We investigate how such distribution-function approximations can be obtained from minimal information concerning the likelihood function, a minimum that is available in many applications. The resulting expressions clearly indicate the source of the various likelihood ingredients, and they provide a basis for understanding how non-normality of the likelihood function affects related p-values. Moreover, they provide the basis for removing a computational singularity that arises near the maximum likelihood value when recently developed significance-function formulas are used.
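
One standard route to such approximations, sketched generically (the paper's exact ingredients are not reproduced here), is the signed likelihood root and its higher-order adjustment:

```latex
\[
  r(\psi) = \operatorname{sgn}(\hat\psi-\psi)
            \sqrt{2\bigl\{\ell(\hat\theta)-\ell(\hat\theta_\psi)\bigr\}},
  \qquad
  r^{*}(\psi) = r + \frac{1}{r}\,\log\frac{q(\psi)}{r(\psi)},
\]
```

where $\ell$ is the log-likelihood, $\hat\theta_\psi$ the constrained maximizer, and $q$ a likelihood-based correction term; $p(\psi) \approx \Phi(r^{*})$ then approximates the distribution function, and the $\log(q/r)/r$ term is the source of the removable singularity near the maximum likelihood value, where $r$ and $q$ both tend to zero.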

11.
Consider a linear regression model with independent normally distributed errors. Suppose that the scalar parameter of interest is a specified linear combination of the components of the regression parameter vector. Also suppose that we have uncertain prior information that a parameter vector, consisting of specified distinct linear combinations of these components, takes a given value. Part of our evaluation of a frequentist confidence interval for the parameter of interest is the scaled expected length, defined to be the expected length of this confidence interval divided by the expected length of the standard confidence interval for this parameter, with the same confidence coefficient. We say that a confidence interval for the parameter of interest utilizes this uncertain prior information if (a) the scaled expected length of this interval is substantially less than one when the prior information is correct, (b) the maximum value of the scaled expected length is not too large and (c) this confidence interval reverts to the standard confidence interval, with the same confidence coefficient, when the data happen to strongly contradict the prior information. We present a new confidence interval for a scalar parameter of interest, with specified confidence coefficient, that utilizes this uncertain prior information. A factorial experiment with one replicate is used to illustrate the application of this new confidence interval.
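
In symbols, writing $I$ for the new interval and $I_{\mathrm{std}}$ for the standard one with the same confidence coefficient:

```latex
\[
  \mathrm{SEL}(\theta)
  = \frac{E_\theta\bigl[\operatorname{length}(I)\bigr]}
         {E_\theta\bigl[\operatorname{length}(I_{\mathrm{std}})\bigr]},
\]
```

so the three requirements above read: (a) $\mathrm{SEL} \ll 1$ when the prior information is correct, (b) $\sup_\theta \mathrm{SEL}(\theta)$ not too large, and (c) $I \to I_{\mathrm{std}}$ when the data strongly contradict the prior.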

12.
The main purposes of this paper are to derive Bayesian acceptance sampling plans regarding the number of defects per unit of product, and to illustrate how to apply the methodology to the paper pulp industry. The sampling plans are obtained following an economic criterion: minimize the expected total cost of quality. It has been assumed that the number of defects per unit of product follows a Poisson distribution with process average λ, whose prior information is described either by a gamma or by a non-informative distribution. The expected total cost of quality is composed of three independent components: inspection, acceptance and rejection. Both quadratic and step-loss functions have been used to quantify the cost incurred by accepting a lot containing units with defects. Combining the prior information on λ with the loss functions, four different sampling plans are obtained. When the quadratic loss function is used, an analytical relation between the optimum settings of the sample size and the acceptance number is derived. The robustness analysis indicates that the sampling plans obtained are robust with respect to the prior distribution of the process average as well as to misspecification of its mean and variance.
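
A Monte Carlo sketch of the economic design problem, assuming a gamma prior, a quadratic acceptance loss, and invented cost constants; the paper derives analytical relations, whereas this simply searches a small (n, c) grid.

```python
# Monte Carlo sketch of a Bayesian acceptance sampling plan for defects
# per unit: lambda ~ Gamma(a, b) prior, defect counts Poisson(lambda),
# accept the lot if n inspected units show at most c defects in total.
# Costs (inspection, quadratic acceptance loss, rejection) are invented.
import numpy as np

rng = np.random.default_rng(6)
a, b = 2.0, 4.0                  # prior: E[lambda] = a/b = 0.5 defects/unit
c_insp, k_acc, c_rej = 1.0, 120.0, 60.0
reps = 200_000

lam = rng.gamma(a, 1 / b, reps)  # draws of the process average

def expected_total_cost(n, c):
    defects = rng.poisson(lam * n)            # total defects in the sample
    accept = defects <= c
    # quadratic loss for accepting a lot produced at level lambda
    cost = n * c_insp + np.where(accept, k_acc * lam**2, c_rej)
    return cost.mean()

plans = [(n, c) for n in range(1, 15) for c in range(0, 8)]
best = min(plans, key=lambda nc: expected_total_cost(*nc))
print("optimal (n, c):", best)
```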

13.
Tests of fiscal-deficit sustainability typically use linear cointegration techniques to verify whether the intertemporal budget constraint holds, but such tests rest on the theory that fiscal-policy effects are linear. In reality, fiscal policy has both Keynesian and non-Keynesian effects: its effects are nonlinear, and the adjustment process of fiscal revenue and expenditure is nonlinear and asymmetric, so traditional linear cointegration techniques can hardly describe the deficit-sustainability process. This paper studies a model for revealing the nonlinear adjustment of non-stationary time series, the two-regime threshold cointegration model, examines its parameter estimation and test statistics in depth, and computes the critical values and p-values of the test statistics by bootstrap simulation. Applying the model, we show that the adjustment of China's fiscal revenue and expenditure is a nonlinear process and confirm that China's fiscal deficit is sustainable, although the scale of the deficit should not be expanded further.
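
The two-regime threshold cointegration model can be sketched generically as follows (a textbook form, not the paper's exact specification):

```latex
\[
  \Delta u_t =
  \begin{cases}
    \rho_1\, u_{t-1} + \varepsilon_t, & u_{t-1} \le \gamma,\\
    \rho_2\, u_{t-1} + \varepsilon_t, & u_{t-1} > \gamma,
  \end{cases}
\]
```

where $u_t$ is the equilibrium error from the revenue-expenditure relation and $\gamma$ the threshold; bootstrap critical values are needed partly because $\gamma$ is not identified under the null of no cointegration.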

14.
Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index. We establish the oracle properties of the resulting estimator of coefficient function. Furthermore, we formally investigate a Bayesian information criterion (BIC)-type uniform tuning parameter selector and show that it can ensure consistent model selection. Our numerical studies confirm the theoretical findings and illustrate an application of the new variable selection procedure.
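
A rough sketch of one quantile level at a time, assuming scikit-learn's L1-penalized QuantileRegressor with adaptive weights applied via feature rescaling; the DGP and penalty constants are invented, and the paper's continuum-of-quantiles estimator and BIC-type uniform tuning are not reproduced.

```python
# Adaptively weighted L1-penalized quantile regression across a grid of
# quantile levels, via scikit-learn's QuantileRegressor (pinball loss +
# L1 penalty). Adaptive-lasso weights are applied through feature
# rescaling; all settings are illustrative.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(7)
n, p = 500, 8
X = rng.uniform(0, 1, (n, p))
# X0 shifts the location; X1 scales the noise, so its quantile
# coefficient varies with (and changes sign across) the quantile level.
y = 1.0 + 2.0 * X[:, 0] + (0.5 + X[:, 1]) * rng.normal(size=n)

for tau in np.linspace(0.1, 0.9, 9):
    init = QuantileRegressor(quantile=tau, alpha=1e-6).fit(X, y)
    w = 1.0 / (np.abs(init.coef_) + 1e-8)         # adaptive weights
    fit = QuantileRegressor(quantile=tau, alpha=0.05).fit(X / w, y)
    print(round(tau, 1), np.round(fit.coef_ / w, 2))  # undo the rescaling
```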

15.
This article proposes an adaptive sequential preventive maintenance (PM) policy in which an improvement factor is newly introduced to measure the PM effect at each PM. In this model the PM actions are conducted at different time intervals, so an adaptive method is needed to determine the optimal PM times minimizing the expected cost rate per unit time. At each PM, the hazard rate is reduced by an amount governed by the improvement factor, which depends on the number of PMs preceding the current one. We derive mathematical formulas to evaluate the expected cost rate per unit time by incorporating the PM cost, repair cost, and replacement cost. Assuming that the failure times follow a Weibull distribution, we propose an optimal sequential PM policy by minimizing the expected cost rate. Furthermore, we consider Bayesian aspects of the sequential PM policy to discuss its optimality. The effects of some parameters and of the functional forms of the improvement factor on the optimal PM policy are measured numerically by sensitivity analysis, and some numerical examples are presented for illustrative purposes.
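
One concrete way to write such a model (a sketch under my own choice of improvement-factor form; the article studies several functional forms):

```latex
\[
  h_0(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1},
  \qquad
  h_k(t) = h_{k-1}(t) - \delta_k\, h_{k-1}(t_k), \quad t \ge t_k,
\]
\[
  C(t_1,\dots,t_{N+1})
  = \frac{N\,c_{p} + c_{m}\sum_{k=0}^{N}\int_{t_k}^{t_{k+1}} h_k(t)\,dt + c_{r}}
         {t_{N+1}},
\]
```

with $t_0 = 0$: the $k$-th PM lowers the hazard by a fraction $\delta_k \in (0,1]$ of its pre-PM level (nonnegative for $\beta > 1$, since the Weibull hazard is increasing), and the check points $t_1 < \cdots < t_{N+1}$ (replacement at $t_{N+1}$, minimal repairs at cost $c_m$ in between) are chosen to minimize the expected cost rate $C$.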

16.
The draft addendum to the ICH E9 regulatory guideline asks for explicit definition of the treatment effect to be estimated in clinical trials. The draft guideline also introduces the concept of intercurrent events to describe events that occur post‐randomisation that may affect efficacy assessment. Composite estimands allow incorporation of intercurrent events in the definition of the endpoint. A common example of an intercurrent event is discontinuation of randomised treatment and use of a composite strategy would assess treatment effect based on a variable that combines the outcome variable of interest with discontinuation of randomised treatment. Use of a composite estimand may avoid the need for imputation which would be required by a treatment policy estimand. The draft guideline gives the example of a binary approach for specifying a composite estimand. When the variable is measured on a non‐binary scale, other options are available where the intercurrent event is given an extreme unfavourable value, for example comparison of median values or analysis based on categories of response. This paper reviews approaches to deriving a composite estimand and contrasts the use of this estimand to the treatment policy estimand. The benefits of using each strategy are discussed and examples of the use of the different approaches are given for a clinical trial in nasal polyposis and a steroid reduction trial in severe asthma.
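
A toy numeric version of the extreme-unfavourable-value option with a median comparison (invented data and rules, one of the non-binary options the paper reviews):

```python
# Non-binary composite strategy sketch: patients with the intercurrent
# event (here, discontinuation) receive an extreme unfavourable value,
# then arms are compared on medians. Invented data.
import numpy as np

rng = np.random.default_rng(8)
n = 150
change = {"drug": rng.normal(-1.0, 1.0, n), "placebo": rng.normal(-0.3, 1.0, n)}
disc = {"drug": rng.random(n) < 0.10, "placebo": rng.random(n) < 0.25}

worst = max(change["drug"].max(), change["placebo"].max()) + 1.0
for arm in change:
    y = np.where(disc[arm], worst, change[arm])   # penalize discontinuation
    print(arm, "median composite outcome:", np.round(np.median(y), 2))
```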

17.
A Bayesian least squares approach is taken here to estimate certain parameters in generalized linear models for dichotomous response data. The method requires that only first and second moments of the probability distribution representing prior information be specified. Examples are presented to illustrate situations having direct estimates as well as those which require approximate or iterative solutions.

18.
牛晓健  陶川 《统计研究》2011,28(4):11-16
Greater openness of an economy challenges the effectiveness of its monetary policy; through what mechanism does this influence arise, and how is it transmitted? Taking the bank-lending channel of the credit view of monetary policy as its theoretical basis, this paper quantitatively analyzes how funds outstanding for foreign exchange have affected the central bank's policy control since the 2005 reform of the RMB exchange-rate formation mechanism. Using time-series methods, the paper constructs an SVAR model, identifies the structural shock of funds outstanding for foreign exchange through restrictions, and quantifies how this shock affects monetary-policy control along the transmission link from money to credit. Impulse-response functions and variance decompositions both show that increases in funds outstanding for foreign exchange have a long-run expansionary effect on base money, broad money, and financial-institution lending, and that a large part of the elasticity of the loan balance of financial institutions with respect to the base-money balance is induced by funds outstanding for foreign exchange. This indicates that, because of the impossible trinity, the independence of China's monetary policy under open conditions is substantially constrained and its effectiveness is affected by funds outstanding for foreign exchange arising from external imbalance. Monetary policy should therefore pay close attention to the expansionary effect of funds outstanding for foreign exchange, and effective measures should be taken to improve policy effectiveness.
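
A sketch of the general VAR workflow behind such an exercise, assuming simulated series and a recursive (Cholesky) identification rather than the paper's SVAR restriction scheme:

```python
# VAR workflow sketch: estimate a VAR on (FX funds, base money, credit),
# then inspect orthogonalized impulse responses and forecast-error
# variance decompositions. Simulated data; recursive identification in
# the listed ordering, not the paper's structural restrictions.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(9)
T = 200
fx = np.cumsum(rng.normal(0.5, 1.0, T))               # funds outstanding for FX
m0 = 0.6 * fx + np.cumsum(rng.normal(0, 1.0, T))      # base money
credit = 0.8 * m0 + np.cumsum(rng.normal(0, 1.0, T))  # bank lending
data = pd.DataFrame({"fx": fx, "m0": m0, "credit": credit}).diff().dropna()

res = VAR(data).fit(maxlags=4, ic="aic")
irf = res.irf(24)        # orthogonalized IRFs; irf.plot() draws them
res.fevd(24).summary()   # variance decomposition table
```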

19.
The method of tail functions is applied to confidence estimation of the exponential mean in the presence of prior information. It is shown how the “ordinary” confidence interval can be generalized using a class of tail functions and then engineered for optimality, in the sense of minimizing prior expected length over that class, whilst preserving frequentist coverage. It is also shown how to derive the globally optimal interval, and how to improve on this using tail functions when criteria other than length are taken into consideration. Probabilities of false coverage are reported for some of the intervals under study, and the theory is illustrated by application to confidence estimation of a reliability coefficient based on some survival data.
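
A rough illustration of the underlying idea, assuming i.i.d. exponential data so that 2S/θ ~ χ²(2n): coverage is exact for any split of α between the tails, so the split can be chosen to minimize prior expected length (invented prior settings; the paper's tail-function class is more general than this simple split).

```python
# Toy version of "engineering" a confidence interval for an exponential
# mean theta: with S = sum(X_i), 2*S/theta ~ chi2(2n), so the interval
# [2S/q_upper, 2S/q_lower] has exact 1-alpha coverage for ANY split
# alpha1 + alpha2 = alpha. Choose the split minimizing prior expected
# length. Prior settings are invented.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

n, alpha = 10, 0.05
prior_mean_theta = 2.0                  # assumed prior mean of theta
E_S = n * prior_mean_theta              # prior expected value of S

def prior_expected_length(alpha1):
    q_lo = chi2.ppf(alpha1, 2 * n)
    q_hi = chi2.ppf(1 - (alpha - alpha1), 2 * n)
    return 2 * E_S * (1 / q_lo - 1 / q_hi)

res = minimize_scalar(prior_expected_length,
                      bounds=(1e-4, alpha - 1e-4), method="bounded")
print(f"optimal tail split: alpha1 = {res.x:.4f}, alpha2 = {alpha - res.x:.4f}")
```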

20.
When the results of biological experiments are tested for a possible difference between treatment and control groups, the inference is only valid if based upon a model that fits the experimental results satisfactorily. In dominant-lethal testing, foetal death has previously been assumed to follow a variety of models, including Poisson, binomial, beta-binomial and various mixture models; however, discriminating between models has always been a particularly difficult problem. In this paper, we consider data from six separate dominant-lethal assay experiments and discriminate between the competing models which could be used to describe them. We adopt a Bayesian approach and illustrate how a variety of different models may be considered, using Markov chain Monte Carlo (MCMC) simulation techniques and comparing the results with the corresponding maximum likelihood analyses. We present an auxiliary variable method for determining the probability that any particular data cell is assigned to a given component in a mixture, and we illustrate the value of this approach. Finally, we show how the Bayesian approach provides a natural and unique perspective on the model selection problem via reversible-jump MCMC, and illustrate how probabilities associated with each of the different models may be calculated for each data set. In terms of estimation, we show how, by averaging over the different models, we obtain reliable and robust inference for any statistic of interest.
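
A small sketch of the model-discrimination step by maximum likelihood and AIC on simulated litters (the paper's Bayesian RJMCMC treatment is far more elaborate):

```python
# Discriminating between binomial and beta-binomial models for
# foetal-death counts by maximum likelihood / AIC. Simulated litters.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom, binom

rng = np.random.default_rng(10)
litters = rng.integers(8, 14, 100)                         # implants per litter
deaths = betabinom.rvs(litters, 2.0, 8.0, random_state=1)  # overdispersed data

def nll_binom(p):
    return -binom.logpmf(deaths, litters, p[0]).sum()

def nll_bb(params):
    a, b = np.exp(params)                                  # keep a, b positive
    return -betabinom.logpmf(deaths, litters, a, b).sum()

fit1 = minimize(nll_binom, [0.2], bounds=[(1e-4, 1 - 1e-4)])
fit2 = minimize(nll_bb, np.log([1.0, 4.0]))
aic1 = 2 * 1 + 2 * fit1.fun
aic2 = 2 * 2 + 2 * fit2.fun
print(f"AIC binomial {aic1:.1f}  vs  beta-binomial {aic2:.1f}")
```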
