Similar Documents
20 similar documents found (search time: 15 ms)
1.
It is well known that the finite-sample properties of tests of hypotheses on the co-integrating vectors in vector autoregressive models can be quite poor, and that current solutions based on Bartlett-type corrections or bootstrap based on unrestricted parameter estimators are unsatisfactory, particularly in those cases where asymptotic χ2 tests also fail most severely. In this paper, we solve this inference problem by showing the novel result that a bootstrap test where the null hypothesis is imposed on the bootstrap sample is asymptotically valid. That is, not only does it have asymptotically correct size, but, in contrast to what is claimed in existing literature, it is consistent under the alternative. Compared to the theory for bootstrap tests on the co-integration rank (Cavaliere, Rahbek, and Taylor, 2012), establishing the validity of the bootstrap in the framework of hypotheses on the co-integrating vectors requires new theoretical developments, including the introduction of multivariate Ornstein–Uhlenbeck processes with random (reduced rank) drift parameters. Finally, as documented by Monte Carlo simulations, the bootstrap test outperforms existing methods.
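The key idea, imposing the null hypothesis on the bootstrap sample rather than resampling around the unrestricted estimate, can be illustrated outside the cointegration setting. Below is a minimal sketch for testing a mean with a null-recentered nonparametric bootstrap; the i.i.d. setting and the function name are illustrative assumptions, not the paper's VAR procedure:

```python
import numpy as np

def bootstrap_test_null_imposed(x, mu0, n_boot=999, seed=0):
    """Bootstrap p-value for H0: mean = mu0, with the null imposed on the
    bootstrap sample by recentering the data at mu0 before resampling."""
    rng = np.random.default_rng(seed)
    n = len(x)
    t_obs = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    # Impose H0: shift the data so the resampled population has mean mu0.
    x_null = x - x.mean() + mu0
    exceed = 0
    for _ in range(n_boot):
        xb = rng.choice(x_null, size=n, replace=True)
        tb = (xb.mean() - mu0) / (xb.std(ddof=1) / np.sqrt(n))
        if abs(tb) >= abs(t_obs):
            exceed += 1
    return (1 + exceed) / (1 + n_boot)
```

Because the bootstrap distribution is generated under the null even when the data violate it, the observed statistic drifts away from the bootstrap distribution under the alternative, which is what delivers consistency of the test.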

2.
With the cointegration formulation of economic long-run relations, the test for cointegrating rank has become a useful econometric tool. The limit distribution of the test is often a poor approximation to the finite sample distribution, and it is therefore relevant to derive an approximation to the expectation of the likelihood ratio test for cointegration in the vector autoregressive model in order to improve the finite sample properties. The correction factor depends on moments of functions of the random walk, which are tabulated by simulation, and functions of the parameters, which are estimated. From this approximation we propose a correction factor with the purpose of improving the small sample performance of the test. The correction is found explicitly in a number of simple models and its usefulness is illustrated by some simulation experiments.

3.
In this paper a bootstrap algorithm for a reduced rank vector autoregressive model with a restricted linear trend and independent, identically distributed errors is analyzed. For testing the cointegration rank, the asymptotic distribution under the hypothesis is the same as for the usual likelihood ratio test, so that the bootstrap is consistent. It is furthermore shown that a bootstrap procedure for determining the rank is asymptotically consistent in the sense that the probability of choosing a rank smaller than the true one converges to zero.

4.
This note studies some seemingly anomalous results that arise in possibly misspecified, reduced-rank linear asset-pricing models estimated by the continuously updated generalized method of moments. When a spurious factor (that is, a factor that is uncorrelated with the returns on the test assets) is present, the test for correct model specification has asymptotic power that is equal to the nominal size. In other words, applied researchers will erroneously conclude that the model is correctly specified even when the degree of misspecification is arbitrarily large. The rejection probability of the test for overidentifying restrictions typically decreases further in underidentified models where the dimension of the null space is larger than 1.

5.
We study three contractual arrangements—co-development, licensing, and co-development with opt-out options—for the joint development of new products between a small and financially constrained innovator firm and a large technology company, as in the case of a biotech innovator and a major pharma company. We formulate our arguments in the context of a two-stage model, characterized by technical risk and stochastically changing cost and revenue projections. The model captures the main disadvantages of traditional co-development and licensing arrangements: in co-development the small firm runs a risk of running out of capital as future costs rise, while licensing for milestone and royalty (M&R) payments, which eliminates the latter risk, introduces inefficiency, as profitable projects might be abandoned. Counter to intuition, we show that the biotech's payoff in a licensing contract is not monotonically increasing in the M&R terms. We also show that an option clause in a co-development contract that gives the small firm the right but not the obligation to opt out of co-development and into a pre-agreed licensing arrangement avoids the problems associated with fully committed co-development or licensing: the probability that the small firm runs out of capital is greatly reduced or completely eliminated, and profitable projects are never abandoned.

6.
A challenge with multiple chemical risk assessment is the need to consider the joint behavior of chemicals in mixtures. To address this need, pharmacologists and toxicologists have developed methods over the years to evaluate and test chemical interaction. In practice, however, testing of chemical interaction more often comprises ad hoc binary combinations and rarely examines higher-order combinations. One explanation for this practice is the belief that there are simply too many possible combinations of chemicals to consider. Indeed, under stochastic conditions the possible number of chemical combinations scales geometrically as the pool of chemicals increases. However, the occurrence of chemicals in the environment is determined by factors, economic in part, which favor some chemicals over others. We investigate methods from the field of biogeography, originally developed to study avian species co-occurrence patterns, and adapt these approaches to examine chemical co-occurrence. These methods were applied to a national survey of pesticide residues in 168 child care centers from across the country. Our findings show that pesticide co-occurrence in the child care centers was not random but highly structured, leading to the co-occurrence of specific pesticide combinations. Thus, ecological studies of species co-occurrence parallel the issue of chemical co-occurrence at specific locations. Both are driven by processes that introduce structure in the pattern of co-occurrence. We conclude that the biogeographical tools used to determine when this structure occurs in ecological studies are relevant to evaluations of pesticide mixtures for exposure and risk assessment.
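The biogeographic null-model machinery referred to above can be sketched compactly: compute a checkerboard (C-score) statistic for the observed presence-absence matrix and compare it to randomized matrices. The simple within-row shuffle null and the function names below are illustrative assumptions; published analyses typically use more constrained randomizations (e.g., fixed row and column totals):

```python
import numpy as np
from itertools import combinations

def c_score(m):
    """Mean checkerboard score over all pairs of rows of a binary
    presence-absence matrix (rows: species/chemicals, cols: sites)."""
    scores = []
    for i, j in combinations(range(m.shape[0]), 2):
        ri, rj = m[i].sum(), m[j].sum()
        s = int((m[i] & m[j]).sum())  # number of shared sites
        scores.append((ri - s) * (rj - s))
    return float(np.mean(scores))

def cooccurrence_test(m, n_perm=499, seed=0):
    """One-sided p-value for segregation: observed C-score against a null
    in which each row's presences are shuffled independently across sites."""
    rng = np.random.default_rng(seed)
    obs = c_score(m)
    null = np.array([c_score(np.array([rng.permutation(row) for row in m]))
                     for _ in range(n_perm)])
    # Higher C-score than the null indicates structured (segregated) co-occurrence.
    p = (1 + np.sum(null >= obs)) / (1 + n_perm)
    return obs, p
```

A small p-value indicates that the observed pattern of co-occurrence is more structured than expected by chance, which is the sense in which the survey data above were "not random but highly structured."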

7.
The manufacturing complexity of many high-tech products results in a substantial variation in the quality of the units produced. After manufacturing, the units are classified into vertically differentiated products. These products are typically obtained in uncontrollable fractions, leading to mismatches between their demand and supply. We focus on product stockouts due to the supply–demand mismatches. Existing literature suggests that when faced with product stockouts, firms should satisfy all unmet demand of a low-end product by downgrading excess units of a high-end product (downward substitution). However, this policy may be suboptimal if it is likely that low-end customers will substitute with a higher quality product and pay the higher price (upward substitution). In this study, we investigate whether and how much downward substitution firms should perform. We also investigate whether and how much low-end inventory firms should withhold to strategically divert some low-end demand to the high-end product. We first establish the existence of regions of co-production technology and willingness of customers to substitute upward where firms adopt different substitution/withholding strategies. Then, we develop a managerial framework to determine the optimal selling strategy during the life cycle of technology products as profit margins shrink, manufacturing technology improves, and more capacity becomes available. Consistent trends exist for exogenous and endogenous prices.

8.
This paper investigates asymptotic properties of the maximum likelihood estimator and the quasi-maximum likelihood estimator for the spatial autoregressive model. The rates of convergence of those estimators may depend on some general features of the spatial weights matrix of the model. It is important to distinguish between different spatial scenarios. Under the scenario that each unit will be influenced by only a few neighboring units, the estimators may have √n-rate of convergence and be asymptotically normal. When each unit can be influenced by many neighbors, irregularity of the information matrix may occur and various components of the estimators may have different rates of convergence.

9.
In a call center, staffing decisions must be made before the call arrival rate is known with certainty. Once the arrival rate becomes known, the call center may be over-staffed, in which case staff are being paid to be idle, or under-staffed, in which case many callers hang up in the face of long wait times. Firms that have chosen to keep their call center operations in-house can mitigate this problem by co-sourcing; that is, by sometimes outsourcing calls. Then, the required staffing N depends on how the firm chooses which calls to outsource in real time, after the arrival rate is realized and the call center operates as an M/M/N+M queue with an outsourcing option. Our objective is to find a joint policy for staffing and call outsourcing that minimizes the long-run average cost of this two-stage stochastic program when there is a linear staffing cost per unit time and linear costs associated with abandonments and outsourcing. We propose a policy that uses a square-root safety staffing rule, and outsources calls in accordance with a threshold rule that characterizes when the system is “too crowded.” Analytically, we establish that our proposed policy is asymptotically optimal, as the mean arrival rate becomes large, when the level of uncertainty in the arrival rate is of the same order as the inherent system fluctuations in the number of waiting customers for a known arrival rate. Through an extensive numerical study, we establish that our policy is extremely robust. In particular, our policy performs remarkably well over a wide range of parameters, far beyond the regime where it is proved to be asymptotically optimal.
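The square-root safety staffing rule mentioned above sets capacity at the offered load plus a safety buffer proportional to its square root. A minimal sketch follows; the service-grade parameter beta and the function name are illustrative assumptions, and the paper's full policy couples this rule with an outsourcing threshold that is not derived here:

```python
import math

def square_root_staffing(arrival_rate, service_rate, beta):
    """Staff N = ceil(R + beta * sqrt(R)) agents, where R = lambda/mu is the
    offered load and beta trades off staffing cost against congestion."""
    offered_load = arrival_rate / service_rate
    return math.ceil(offered_load + beta * math.sqrt(offered_load))
```

For example, with an offered load of 100 agents' worth of work, beta = 2 staffs 120 agents; a larger beta buys shorter waits and fewer abandonments at a higher staffing cost.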

10.
The current growth of the service sector in global economies is unparalleled in human history, in both the scale and the speed of labor migration. Even large manufacturing firms are seeing dramatic shifts in the percentage of revenue derived from services. The need for service innovations to fuel further economic growth and to raise the quality and productivity levels of services has never been greater. Services are moving to center stage in the global arena, especially knowledge-intensive business services aimed at business performance transformation. One challenge to systematic service innovation is the interdisciplinary nature of service, integrating technology, business, social, and client (demand) innovations. This paper describes the emergence of service science, a new interdisciplinary area of study that aims to address the challenge of becoming more systematic about innovating in service.

11.
This paper examines the problem of testing and confidence set construction for one-dimensional functions of the coefficients in autoregressive (AR(p)) models with potentially persistent time series. The primary example concerns inference on impulse responses. A new asymptotic framework is suggested and some new theoretical properties of known procedures are demonstrated. I show that the likelihood ratio (LR) and LR± statistics for a linear hypothesis in an AR(p) can be uniformly approximated by a weighted average of local-to-unity and normal distributions. The corresponding weights depend on the weight placed on the largest root in the null hypothesis. The suggested approximation is uniform over the set of all linear hypotheses. The same family of distributions approximates the LR and LR± statistics for tests about impulse responses, and the approximation is uniform over the horizon of the impulse response. I establish the size properties of tests about impulse responses proposed by Inoue and Kilian (2002) and Gospodinov (2004), and theoretically explain some of the empirical findings of Pesavento and Rossi (2007). An adaptation of the grid bootstrap for impulse response functions is suggested and its properties are examined.

12.
We study a supply chain in which a consumer goods manufacturer sells its product through a retailer. The retailer undertakes promotional expenditures, such as advertising, to increase sales and to compete against other retailer(s). The manufacturer supports the retailer’s promotional expenditure through a cooperative advertising program by reimbursing a portion (called the subsidy rate) of the retailer’s promotional expenditure. To determine the subsidy rate, we formulate a Stackelberg differential game between the manufacturer and the retailer, and a Nash differential subgame between the retailer and the competing retailer(s). We derive the optimal feedback promotional expenditures of the retailers and the optimal feedback subsidy rate of the manufacturer, and show how they are influenced by market parameters. An important finding is that the manufacturer should support its retailer only when a subsidy threshold is crossed. The impact of competition on this threshold is nonmonotone. Specifically, the manufacturer offers more support when its retailer competes with one other retailer but its support starts decreasing with the presence of additional retailers. In the case where the manufacturer sells through all retailers, we show under certain assumptions that it should support only one dominant retailer. We also describe how we can incorporate retail price competition into the model.

13.
Much recent attention in industrial practice has been centered on the question of which activities a manufacturing firm should perform itself and for which it should rely on outside suppliers. This issue, generally labeled the “make-buy” decision, has received substantial theoretical and empirical attention. In this paper, we broaden the scope of the make-buy decision to include product design decisions as well as production decisions. First, we examine independently the decisions of whether to internalize design and production, and then we consider how design and production organizational decisions are interdependent. The specific research questions we address are: (1) How can design and production sourcing decisions be described in richer terms than “make” and “buy”? (2) Do existing theories of vertical integration apply to product design activities as well as production decisions? (3) What is the relationship between the organization of design and the organization of production? (4) What organizational forms for design and production are seen in practice? After developing theoretical arguments and a conceptual framework, we explore these ideas empirically through an analysis of design and production sourcing decisions for bicycle frames in the U.S. mountain bicycle industry.

14.
The asymptotic refinements attributable to the block bootstrap for time series are not as large as those of the nonparametric iid bootstrap or the parametric bootstrap. One reason is that the independence between the blocks in the block bootstrap sample does not mimic the dependence structure of the original sample. This is the join-point problem. In this paper, we propose a method of solving this problem. The idea is not to alter the block bootstrap. Instead, we alter the original sample statistics to which the block bootstrap is applied. We introduce block statistics that possess join-point features that are similar to those of the block bootstrap versions of these statistics. We refer to the application of the block bootstrap to block statistics as the block–block bootstrap. The asymptotic refinements of the block–block bootstrap are shown to be greater than those obtained with the block bootstrap and close to those obtained with the nonparametric iid bootstrap and parametric bootstrap.
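For reference, the plain overlapping moving-block bootstrap that this abstract takes as its starting point can be sketched as follows; the choice of block length is glossed over and the function name is ours, not the paper's:

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """One moving-block bootstrap resample: concatenate randomly chosen
    overlapping blocks of length block_len, then truncate to the original
    series length. Dependence is preserved within blocks but broken at the
    join points between blocks, which is the problem the abstract addresses."""
    n = len(x)
    n_blocks = -(-n // block_len)  # ceil(n / block_len)
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    resample = np.concatenate([x[s:s + block_len] for s in starts])
    return resample[:n]
```

The independence between consecutive blocks in the resample is exactly the join-point mismatch that motivates applying the bootstrap to block statistics instead.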

15.
This paper establishes the higher-order equivalence of the k-step bootstrap, introduced recently by Davidson and MacKinnon (1999), and the standard bootstrap. The k-step bootstrap is a computationally very attractive alternative to the standard bootstrap for statistics based on nonlinear extremum estimators, such as generalized method of moments and maximum likelihood estimators. The paper also extends results of Hall and Horowitz (1996) to provide new results regarding the higher-order improvements of the standard bootstrap and the k-step bootstrap for extremum estimators (compared to procedures based on first-order asymptotics). The results of the paper apply to Newton-Raphson (NR), default NR, line-search NR, and Gauss-Newton k-step bootstrap procedures. The results apply to the nonparametric iid bootstrap and nonoverlapping and overlapping block bootstraps. The results cover symmetric and equal-tailed two-sided t tests and confidence intervals, one-sided t tests and confidence intervals, Wald tests and confidence regions, and J tests of over-identifying restrictions.

16.
This paper develops a generalization of the widely used difference-in-differences method for evaluating the effects of policy changes. We propose a model that allows the control and treatment groups to have different average benefits from the treatment. The assumptions of the proposed model are invariant to the scaling of the outcome. We provide conditions under which the model is nonparametrically identified and propose an estimator that can be applied using either repeated cross section or panel data. Our approach provides an estimate of the entire counterfactual distribution of outcomes that would have been experienced by the treatment group in the absence of the treatment and likewise for the untreated group in the presence of the treatment. Thus, it enables the evaluation of policy interventions according to criteria such as a mean–variance trade-off. We also propose methods for inference, showing that our estimator for the average treatment effect is root-N consistent and asymptotically normal. We consider extensions to allow for covariates, discrete dependent variables, and multiple groups and time periods.
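The counterfactual-distribution construction can be sketched for the simplest two-group, two-period case: each treated pre-period outcome is assigned its quantile rank in the control group's pre-period distribution and then read off the control group's post-period distribution. A rough empirical-quantile implementation follows; the function and variable names are ours, and covariates, ties, and support conditions are ignored:

```python
import numpy as np

def cic_att(y00, y01, y10, y11):
    """Changes-in-changes style ATT estimate. y00/y01: control outcomes in
    periods 0 and 1; y10/y11: treated outcomes in periods 0 and 1. Each
    treated pre-period outcome is mapped through the control group's
    period-0 -> period-1 quantile transformation to form the counterfactual."""
    def ecdf(sample, y):
        # Empirical CDF of `sample` evaluated at each point of `y`.
        return np.searchsorted(np.sort(sample), y, side='right') / len(sample)
    ranks = np.clip(ecdf(y00, y10), 0.0, 1.0)
    counterfactual = np.quantile(y01, ranks)
    return y11.mean() - counterfactual.mean()
```

When the control group's distribution shifts by a constant and the treated group experiences the same shift plus a treatment effect, the estimate recovers that effect up to empirical-quantile approximation error.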

17.
This paper uses “revealed probability trade-offs” to provide a natural foundation for probability weighting in the famous von Neumann and Morgenstern axiomatic set-up for expected utility. In particular, it shows that a rank-dependent preference functional is obtained in this set-up when the independence axiom is weakened to stochastic dominance and a probability trade-off consistency condition. In contrast with the existing axiomatizations of rank-dependent utility, the resulting axioms allow for complete flexibility regarding the outcome space. Consequently, a parameter-free test/elicitation of rank-dependent utility becomes possible. The probability-oriented approach of this paper also provides theoretical foundations for probabilistic attitudes towards risk. It is shown that the preference conditions that characterize the shape of the probability weighting function can be derived from simple probability trade-off conditions.

18.
This paper establishes that instruments enable the identification of nonparametric regression models in the presence of measurement error by providing a closed form solution for the regression function in terms of Fourier transforms of conditional expectations of observable variables. For parametrically specified regression functions, we propose a root-n consistent and asymptotically normal estimator that takes the familiar form of a generalized method of moments estimator with a plugged-in nonparametric kernel density estimate. Both the identification and the estimation methodologies rely on Fourier analysis and on the theory of generalized functions. The finite-sample properties of the estimator are investigated through Monte Carlo simulations.

19.
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.
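For the standard-error case, the later steps of such a rule can be sketched as follows. This is a simplified reading, not the paper's exact algorithm: after a pilot run of bootstrap replications, the excess kurtosis of the bootstrap distribution is estimated and plugged into a bound of the form B ≈ 10^4 · z² · ω / pdb², where pdb is the tolerated percentage deviation, z sets the confidence level (1.96 for roughly 95%), and ω = (γ + 2)/4 is assumed for standard errors; the constant ω and the function name are assumptions to verify against the paper:

```python
import numpy as np

def choose_B_for_se(pilot_boot_stats, pdb=10.0, z=1.96):
    """Given a pilot set of bootstrap statistics, return the number of
    bootstrap repetitions B needed so that the bootstrap standard error
    deviates from its ideal (B = infinity) value by at most pdb percent,
    with approximate confidence determined by z (assumed formula)."""
    s = np.asarray(pilot_boot_stats, dtype=float)
    m, sd = s.mean(), s.std(ddof=1)
    gamma = np.mean(((s - m) / sd) ** 4) - 3.0  # excess kurtosis estimate
    omega = max((gamma + 2.0) / 4.0, 0.0)       # assumed constant for the SE case
    B = int(np.ceil(1e4 * z ** 2 * omega / pdb ** 2))
    return max(B, len(s))                       # never fewer than the pilot run
```

Tighter accuracy targets (smaller pdb) increase the required B quadratically, which is the practical content of a stopping rule of this kind.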

20.
We develop a framework to assess how successfully standard time series models explain low-frequency variability of a data series. The low-frequency information is extracted by computing a finite number of weighted averages of the original data, where the weights are low-frequency trigonometric series. The properties of these weighted averages are then compared to the asymptotic implications of a number of common time series models. We apply the framework to twenty U.S. macroeconomic and financial time series using frequencies lower than the business cycle.
