Similar Articles
1.
This paper considers tests of the parameter on an endogenous variable in an instrumental variables regression model. The focus is on determining tests that have some optimal power properties. We start by considering a model with normally distributed errors and known error covariance matrix. We consider tests that are similar and satisfy a natural rotational invariance condition. We determine a two-sided power envelope for invariant similar tests. This allows us to assess and compare the power properties of tests such as the conditional likelihood ratio (CLR), the Lagrange multiplier, and the Anderson–Rubin tests. We find that the CLR test is quite close to being uniformly most powerful invariant among a class of two-sided tests. The finite-sample results of the paper are extended to the case of unknown error covariance matrix and possibly nonnormal errors via weak instrument asymptotics. Strong instrument asymptotic results also are provided because we seek tests that perform well under both weak and strong instruments.
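For readers who want to see what one of the weak-instrument-robust tests compared here actually computes, the sketch below implements the Anderson–Rubin statistic for a single endogenous regressor. The simulated data and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, x, Z, beta0):
    """Anderson-Rubin test of H0: beta = beta0 in y = x*beta + u,
    with instrument matrix Z for the endogenous regressor x.
    Under H0 (normal, homoskedastic errors) the statistic is F(k, n-k)
    regardless of how weak the instruments are."""
    n, k = Z.shape
    r = y - x * beta0                        # structural residual under the null
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instruments
    explained = r @ Pz @ r
    residual = r @ r - explained
    ar = (explained / k) / (residual / (n - k))
    pval = 1.0 - stats.f.cdf(ar, k, n - k)
    return ar, pval

# Illustrative simulated data with weak instruments (assumed setup)
rng = np.random.default_rng(0)
n = 200
Z = rng.normal(size=(n, 3))
pi = np.array([0.1, 0.05, 0.0])              # small first-stage coefficients
v = rng.normal(size=n)
x = Z @ pi + v
u = 0.8 * v + rng.normal(size=n)             # endogeneity: u correlated with v
y = 1.0 * x + u
print(anderson_rubin_test(y, x, Z, beta0=1.0))
```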

2.
This paper surveys the application of stochastic dominance to decision making under uncertainty. The first part presents the relevant definitions and some properties of distributions satisfying one of the stochastic dominance conditions. These properties include restrictions on moments, an invariance property, and properties of random variables related by an exact formula. The second part contains some applications of the stochastic dominance method, especially the problem of selecting optimal portfolios. Most of the results in this section deal with conditions that make diversification an optimal strategy.
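The abstract gives no algorithm, so the following is only a hedged sketch of how first- and second-order stochastic dominance can be checked between two empirical distributions; the return samples are invented for illustration.

```python
import numpy as np

def dominance(a, b):
    """Empirical check of whether prospect `a` dominates prospect `b`.
    FSD: F_a(x) <= F_b(x) for all x.
    SSD: the running integral of F_a never exceeds that of F_b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.union1d(a, b)
    Fa = np.searchsorted(a, grid, side="right") / a.size
    Fb = np.searchsorted(b, grid, side="right") / b.size
    fsd = bool(np.all(Fa <= Fb) and np.any(Fa < Fb))
    dx = np.diff(grid, prepend=grid[0])            # first increment is zero
    ssd = bool(np.all(np.cumsum(Fa * dx) <= np.cumsum(Fb * dx))
               and not np.allclose(Fa, Fb))
    return fsd, ssd

rng = np.random.default_rng(1)
returns_a = rng.normal(0.08, 0.10, 5000)   # assumed higher-mean portfolio
returns_b = rng.normal(0.05, 0.10, 5000)
# In the population A dominates B; finite-sample noise in the tails can
# occasionally flip the strict FSD check.
print(dominance(returns_a, returns_b))
```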

3.
Using the intuition that financial markets transfer risks in business time, “market microstructure invariance” is defined as the hypotheses that the distributions of risk transfers (“bets”) and transaction costs are constant across assets when measured per unit of business time. The invariance hypotheses imply that bet size and transaction costs have specific, empirically testable relationships to observable dollar volume and volatility. Portfolio transitions can be viewed as natural experiments for measuring transaction costs, and individual orders can be treated as proxies for bets. Empirical tests based on a data set of 400,000+ portfolio transition orders support the invariance hypotheses. The constants calibrated from structural estimation imply specific predictions for the arrival rate of bets (“market velocity”), the distribution of bet sizes, and transaction costs.  相似文献   
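As a minimal sketch of the kind of scaling rule the invariance hypotheses imply: with trading activity defined as W = volatility x price x share volume, bet arrival rates are usually reported to scale roughly with the 2/3 power of W. The 2/3 exponent and the benchmark constants below are assumptions recalled from the invariance literature, not quantities stated in this abstract.

```python
# Minimal sketch of the assumed invariance scaling rule:
# trading activity W = sigma * P * V; bets arrive at a rate roughly
# proportional to W**(2/3), so typical bet size as a fraction of volume
# shrinks roughly like W**(-2/3).

def invariance_scaling(price, daily_volume_shares, daily_volatility,
                       benchmark_W=1.0e6, benchmark_rate=85.0):
    """Return (trading activity W, implied bet arrival rate per day).
    benchmark_W and benchmark_rate are illustrative placeholders."""
    W = daily_volatility * price * daily_volume_shares
    bet_rate = benchmark_rate * (W / benchmark_W) ** (2.0 / 3.0)
    return W, bet_rate

print(invariance_scaling(price=40.0, daily_volume_shares=1.0e6,
                         daily_volatility=0.02))
```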

4.
This paper develops a general method for constructing exactly similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and known reduced-form covariance matrix. These tests are shown to be similar under weak-instrument asymptotics when the reduced-form covariance matrix is estimated and the errors are non-normal. The conditional test based on the likelihood ratio statistic is particularly simple and has good power properties. Like the score test, it is optimal under the usual local-to-null asymptotics, but it has better power when identification is weak.

5.
This paper develops a new concept of separability with overlapping groups: latent separability. This is shown to provide a useful empirical and theoretical framework for investigating the grouping of goods and prices. It is a generalization of weak separability in which goods are allowed to enter more than one group and where the composition of groups is identified by the choice of group-specific exclusive goods. Latent separability is shown to be equivalent to weak separability in latent rather than purchased goods and provides a relationship between separability and household production theory. For the popular class of linear, almost ideal and translog demand models and their generalizations, we provide a method for choosing the number of homothetic separable groups. A detailed method for exploring the composition of the separable groups is also presented. These methods are applied to a long time series of British individual household data on the consumption of twenty-two nondurable and service goods.

6.
This one-year follow-up study among 1,421 male nurses from seven European countries tested the validity of the Effort-Reward Imbalance (ERI) model in predicting prospective vital exhaustion and work-home interference. We hypothesised that effort and lack of reward would have both main and interactive effects on future outcomes. Results of structural equation modelling (SEM) showed that effort was positively related to exhaustion and work-home interference, both simultaneously and over time. Lack of reward predicted increased exhaustion at follow-up, but effort-reward imbalance did not influence the outcomes. Additionally, Time 1 exhaustion predicted increased work-home interference and exhaustion at follow-up. These results do not support the ERI model, which postulates a primacy of effort-reward imbalance over main effects. Instead, the findings are in line with dual path models of job stress and work-home interference. Multi-group SEM showed partial cross-cultural metric invariance for the ERI measure of effort, but the ERI measure of rewards showed no metric measurement invariance, indicating its meaning is qualitatively different across cultures. Nevertheless, the main conclusions were markedly similar for each national sub-sample. We discuss the theoretical and practical implications of our study.

7.
This paper provides a first order asymptotic theory for generalized method of moments (GMM) estimators when the number of moment conditions is allowed to increase with the sample size and the moment conditions may be weak. Examples in which these asymptotics are relevant include instrumental variable (IV) estimation with many (possibly weak or uninformed) instruments and some panel data models that cover moderate time spans and have correspondingly large numbers of instruments. Under certain regularity conditions, the GMM estimators are shown to converge in probability but not necessarily to the true parameter, and conditions for consistent GMM estimation are given. A general framework for the GMM limit distribution theory is developed based on epiconvergence methods. Some illustrations are provided, including consistent GMM estimation of a panel model with time varying individual effects, consistent limited information maximum likelihood estimation as a continuously updated GMM estimator, and consistent IV structural estimation using large numbers of weak or irrelevant instruments. Some simulations are reported.

8.
Incidents can be defined as low-probability, high-consequence events and lesser events of the same type. Lack of data on extremely large incidents makes it difficult to determine distributions of incident size that reflect such disasters, even though they represent the great majority of total losses. If the form of the incident size distribution can be determined, then predictive Bayesian methods can be used to assess incident risks from limited available information. Moreover, incident size distributions have generally been observed to have scale invariant, or power law, distributions over broad ranges. Scale invariance in the distributions of sizes of outcomes of complex dynamical systems has been explained based on mechanistic models of natural and built systems, such as models of self-organized criticality. In this article, scale invariance is shown to result also as the maximum Shannon entropy distribution of incident sizes arising as the product of arbitrary functions of cause sizes. Entropy is shown by simulation and derivation to be maximized as a result of dependence, diversity, abundance, and entropy of multiplicative cause sizes. The result represents an information-theoretic explanation of invariance, parallel to those of mechanistic models. For example, distributions of incident size resulting from 30 partially dependent causes are shown to be scale invariant over several orders of magnitude. Empirical validation of power law distributions of incident size is reviewed, and the Pareto (power law) distribution is validated against oil spill, hurricane, and insurance data. The applicability of the Pareto distribution, in particular, for assessment of total losses over a planning period is discussed. Results justify the use of an analytical, predictive Bayesian version of the Pareto distribution, derived previously, to assess incident risk from available data.
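To make the Pareto fitting step concrete, a power-law tail exponent can be estimated from incident sizes by maximum likelihood (the Hill estimator) in a few lines; the incident sizes below are simulated placeholders, not data from the article.

```python
import numpy as np

def fit_pareto_tail(sizes, x_min):
    """Maximum-likelihood (Hill) estimate of the Pareto tail exponent alpha
    in P(X > x) = (x_min / x)**alpha, using observations at or above x_min."""
    tail = np.asarray(sizes, dtype=float)
    tail = tail[tail >= x_min]
    alpha = tail.size / np.sum(np.log(tail / x_min))
    return alpha, tail.size

# Simulated incident sizes with a scale-invariant tail (illustrative only):
# (Lomax + 1) * x_min is Pareto with exponent a and minimum x_min.
rng = np.random.default_rng(2)
sizes = (rng.pareto(a=1.2, size=10_000) + 1.0) * 10.0
alpha_hat, n_tail = fit_pareto_tail(sizes, x_min=10.0)
print(f"estimated tail exponent: {alpha_hat:.2f} from {n_tail} incidents")
```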

9.
Omega, 2005, 33(2): 107-118
Organizations increasingly evaluate information technology in terms of its impact on the individual and his/her work. Academics have shown increased attention to developing measures of technology success in terms of impact on work. One of these efforts produced an instrument that measures how extensively information technology applications impact task productivity, task innovation, customer satisfaction, and management control. These constructs reflect perceived usefulness of information technology application for work. This paper reports on confirmatory analysis and factorial invariance tests of the impact of information technology instrument. The recommended four-factor instrument contains 12 items, seems to be factorially invariant across two samples from the US and Mexico and across two managerial levels, and has high reliability and validity.

10.
This 5-year follow-up study investigated the structure and the factorial invariance of the 13-item sense of coherence (SOC) scale (Antonovsky, 1987a) in two employment groups (unemployment/lay-off experiences vs. continuous full-time employment) and across two measurement times. In addition, the stability of SOC between these two employment groups was compared. The postal questionnaire data was collected twice, in 1992 and in 1997. The participants were Finnish technical designers (N=352) aged between 25 and 40 years in 1992. A total of 51% of the investigated participants had been employed full-time during the 5-year follow-up period and 49% had been unemployed and/or laid off for a total period of at least one month during the follow-up. The confirmatory factor analysis indicated that the SOC scale measured one general second-order SOC factor consisting of three first-order factors of meaningfulness, comprehensibility, and manageability. The results also indicated that the scale was best used as an 11-item measure. The factorial invariance of the scale across time and across the two employment groups was supported by the data. Unexpectedly, the stability of SOC did not differ between the two employment groups. However, those participants who had experienced unemployment and/or been laid off during the follow-up period had a weaker SOC at both measurement times than those who had been employed throughout the follow-up.

11.
A Test of Weak-Form Efficiency in Oil Markets Based on the Generalized Spectrum
An understanding of market efficiency is fundamental to market analysis: testing the efficiency of oil markets not only provides theoretical support for oil price forecasting, but also offers a basis for comparing the informational efficiency of different oil markets. Using daily data from January 2001 to July 2008, this paper applies the generalized spectral method to test the weak-form efficient market hypothesis for the world's major oil markets. This method accounts for the stylized facts of high-frequency time series, can detect both linear and nonlinear serial dependence, allows for conditional heteroskedasticity of unknown form, and can test all lag orders. The results show that the European and American oil markets have attained weak-form efficiency, while the OPEC and Chinese domestic oil markets have not. The paper analyzes the causes in terms of market trading mechanisms and the structure of market participants.
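The generalized spectral test itself is involved; as a much simpler stand-in (plainly not the method used in the paper), the sketch below runs a Ljung-Box test on daily log returns, which checks only linear serial dependence and so covers just one part of what the generalized spectrum examines. The price series is a simulated placeholder for actual Brent/WTI quotes.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

# Placeholder price series; in practice load daily oil price quotes.
rng = np.random.default_rng(3)
prices = 60.0 * np.exp(np.cumsum(rng.normal(0.0, 0.02, 2000)))
returns = np.diff(np.log(prices))

# Under weak-form efficiency, returns should show no linear autocorrelation.
lb = acorr_ljungbox(returns, lags=[5, 10, 20], return_df=True)
print(lb)   # small p-values would count against weak-form efficiency
```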

12.
For financial markets with two forms of friction (bid-ask spreads and transaction fees), this paper uses optimization tools such as convex analysis and nonlinear programming to give an essential characterization of weak no-arbitrage, together with a series of results on state prices and weak no-arbitrage, generalizing many known results from the earlier literature.

13.
Several linear programming methods have been suggested as discrimination procedures. A least absolute deviations regression procedure is developed here which is simpler to use and does not suffer from any lack of invariance. A simulation study shows it to be at least as effective as any of the methods previously discussed for normal and heavy-tailed distributions.
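A hedged sketch of the idea rather than the paper's exact procedure: least absolute deviations regression is median (0.5-quantile) regression, so a simple two-class discriminant can be built by coding class labels as +1/-1, fitting LAD, and classifying by the sign of the fitted value. Data and variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
# Two groups with shifted means; heavy-tailed noise motivates LAD over OLS.
X = np.vstack([rng.standard_t(df=3, size=(n, 2)) + [0.0, 0.0],
               rng.standard_t(df=3, size=(n, 2)) + [2.0, 1.0]])
y = np.concatenate([-np.ones(n), np.ones(n)])    # class codes -1 / +1

Xc = sm.add_constant(X)
lad = sm.QuantReg(y, Xc).fit(q=0.5)              # LAD = median regression
pred = np.sign(Xc @ lad.params)                  # classify by sign of the fit
print("training classification rate:", np.mean(pred == y))
```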

14.
This paper studies the behavior, under local misspecification, of several confidence sets (CSs) commonly used in the literature on inference in moment (in)equality models. We propose the amount of asymptotic confidence size distortion as a criterion to choose among competing inference methods. This criterion is then applied to compare across test statistics and critical values employed in the construction of CSs. We find two important results under weak assumptions. First, we show that CSs based on subsampling and generalized moment selection (Andrews and Soares (2010)) suffer from the same degree of asymptotic confidence size distortion, despite the fact that asymptotically the latter can lead to CSs with strictly smaller expected volume under correct model specification. Second, we show that the asymptotic confidence size of CSs based on the quasi-likelihood ratio test statistic can be an arbitrarily small fraction of the asymptotic confidence size of CSs based on the modified method of moments test statistic.

15.
As key components of Davis's technology acceptance model (TAM), the perceived usefulness and perceived ease-of-use instruments are widely accepted among the MIS research community as tools for evaluating information system applications and predicting usage. Despite this wide acceptance, a series of incremental cross-validation studies have produced conflicting and equivocal results that do not provide guidance for researchers or practitioners who might use the TAM for decision making. Using a sample of 902 “initial exposure” responses, this research conducts: (1) a confirmatory factor analysis to assess the validity and reliability of the original instruments proposed by Davis, and (2) a multigroup invariance analysis to assess the equivalence of these instruments across subgroups based on type of application, experience with computing, and gender. In contrast to the mixed results of prior cross-validation efforts, the results of this confirmatory study provide strong support for the validity and reliability of Davis's six-item perceived usefulness and six-item ease-of-use instruments. The multigroup invariance analysis suggests the usefulness and ease-of-use instruments have invariant true scores across most, but not all, subgroups. With notable exceptions for word processing applications and users with no prior computing experience, this research provides evidence that the item-factor loadings (true scores) are invariant across spreadsheet, database, and graphic applications. The implications of the results for managerial decision making are discussed.

16.
The axiom of weak disposability is frequently imposed in data envelopment analysis (DEA) models involving undesirable outputs such as pollution. This paper sheds new light on the economic interpretation of weak disposability by developing dual formulations of the weakly disposable DEA technology. We find that the economic implications of weak disposability for the multiplier DEA problem are two-fold: (1) the shadow prices of the undesirable outputs can be positive or negative, and (2) the economic loss of the benchmark cannot exceed the sunk cost of the inputs. We interpret the second implication as a limited liability condition. The dual formulations developed in this paper enable one to estimate shadow prices of the undesirable outputs using the weakly disposable technology. The insights gained are illustrated by a numerical example and an empirical application to US power plants.

17.
This paper derives asymptotic power envelopes for tests of the unit root hypothesis in a zero-mean AR(1) model. The power envelopes are derived using the limits of experiments approach and are semiparametric in the sense that the underlying error distribution is treated as an unknown infinite-dimensional nuisance parameter. Adaptation is shown to be possible when the error distribution is known to be symmetric and to be impossible when the error distribution is unrestricted. In the latter case, two conceptually distinct approaches to nuisance parameter elimination are employed in the derivation of the semiparametric power bounds. One of these bounds, derived under an invariance restriction, is shown by example to be sharp, while the other, derived under a similarity restriction, is conjectured not to be globally attainable.

18.
The basic purpose of probabilistic risk analysis is to make inferences about the probabilities of various postulated events, with an account of all relevant information such as prior knowledge and operating experience with the specific system under study, as well as experience with other similar systems. Estimation of the failure rate of a Poisson-type system leads to an especially simple Bayesian solution in closed form if the prior probability implied by the invariance properties of the problem is properly taken into account. This basic simplicity persists if a more realistic prior, representing order of magnitude knowledge of the rate parameter, is employed instead. Moreover, the more realistic prior allows direct incorporation of experience gained from other similar systems, without need to postulate a statistical model for an underlying ensemble. The analytic formalism is applied to actual nuclear reactor data.
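The closed-form solution alluded to here is, in the standard conjugate setting, the gamma-Poisson update: with a gamma prior on the failure rate, observing n failures over exposure time T yields a gamma posterior. The sketch below shows that update with placeholder reactor-style numbers; the prior parameters and observed counts are assumptions, not the article's data.

```python
from scipy import stats

# Prior encoding rough order-of-magnitude knowledge of the failure rate
# (illustrative values): gamma with shape a0 and rate b0.
a0, b0 = 0.5, 1000.0             # prior mean a0/b0 = 5e-4 failures per hour

# Observed operating experience (placeholder): n failures over T hours.
n_failures, T_hours = 2, 40_000.0

# Conjugate update: posterior is gamma(a0 + n, rate b0 + T).
a1, b1 = a0 + n_failures, b0 + T_hours
posterior = stats.gamma(a=a1, scale=1.0 / b1)

print("posterior mean rate:", posterior.mean())
print("90% credible interval:", posterior.ppf([0.05, 0.95]))
```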

19.
Patrick Rivett, Omega, 1977, 5(4): 367-379
This paper develops and expands the use of multidimensional scaling techniques (MDSCAL) as applied in the two separate fields of psychological testing and archaeology to the problem of multiple criteria decision making. Other work by the author published elsewhere shows that it is feasible to use MDSCAL for drawing maps of separate policies using very weak input information from which deductions as to most preferred and least preferred policies may be drawn. An application of this method is made to show its use, and a comparison is made with the utility approach. The final, and main, part of the paper examines the robustness of the method for both deterministic and probabilistic input criteria. In this examination it is seen that the mapping method performs very well in picking up extremes of preference even under severe tests of robustness.
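For orientation, a modern equivalent of the MDSCAL mapping step can be run with scikit-learn: given a matrix of pairwise dissimilarities between policies, nonmetric MDS places the policies in two dimensions. The policy names and dissimilarity values below are invented for illustration.

```python
import numpy as np
from sklearn.manifold import MDS

policies = ["A", "B", "C", "D"]
# Symmetric dissimilarity matrix between policies (illustrative values).
D = np.array([[0.0, 2.0, 5.0, 9.0],
              [2.0, 0.0, 4.0, 8.0],
              [5.0, 4.0, 0.0, 3.0],
              [9.0, 8.0, 3.0, 0.0]])

# Nonmetric MDS on the precomputed dissimilarities, mapped to 2 dimensions.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(D)
for name, (x, y) in zip(policies, coords):
    print(f"policy {name}: ({x:+.2f}, {y:+.2f})")
```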
