Similar Documents
20 similar documents found.
1.
This paper presents a new test for fractionally integrated (FI) processes. In particular, we propose a testing procedure in the time domain that extends the well-known Dickey–Fuller approach, originally designed for the I(1) versus I(0) case, to the more general setup of FI(d_0) versus FI(d_1), with d_1 < d_0. When d_0 = 1, the proposed test statistics are based on the OLS estimator, or its t-ratio, of the coefficient on Δ^{d_1} y_{t−1} in a regression of Δy_t on Δ^{d_1} y_{t−1} and, possibly, some lags of Δy_t. When d_1 is not taken to be known a priori, a pre-estimation of d_1 is needed to implement the test. We show that any T^{1/2}-consistent estimator of d_1 ∈ [0, 1) suffices to make the test feasible while achieving asymptotic normality. Monte Carlo simulations support the analytical results derived in the paper and show that the proposed tests fare very well, both in terms of power and size, when compared with others available in the literature. The paper ends with two empirical applications.
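A minimal sketch of the fractional Dickey–Fuller regression described above, under simplifying assumptions: the fractional difference is built from expanding binomial weights, d_1 would in practice be replaced by a T^{1/2}-consistent pre-estimate, and lag augmentation is omitted. Function names and the simulated example are illustrative, not the authors' code.

```python
import numpy as np

def frac_diff(x, d):
    """(1 - L)^d x_t via expanding binomial weights: pi_0 = 1, pi_j = pi_{j-1}*(j-1-d)/j."""
    n = len(x)
    w = np.ones(n)
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return np.array([w[: t + 1] @ x[t::-1] for t in range(n)])

def fdf_test(y, d1):
    """OLS coefficient and t-ratio of Delta^{d1} y_{t-1} in a regression of
    Delta y_t on Delta^{d1} y_{t-1} (null: FI(1); no lag augmentation)."""
    dy = np.diff(y)
    z = frac_diff(y, d1)[:-1]          # Delta^{d1} y_{t-1}, aligned with dy
    phi = (z @ dy) / (z @ z)
    resid = dy - phi * z
    s2 = (resid @ resid) / (len(dy) - 1)
    t_ratio = phi / np.sqrt(s2 / (z @ z))
    return phi, t_ratio

# Under the FI(1) null (a random walk) the t-ratio is approximately standard normal
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))
print(fdf_test(y, d1=0.4))
```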

2.
There is considerable debate as to the most appropriate metric for characterizing the mortality impacts of air pollution. Life expectancy has been advocated as an informative measure. Although the life-table calculus is relatively straightforward, it becomes increasingly cumbersome when repeated over large numbers of geographic areas and for multiple causes of death. Two simplifying assumptions were evaluated: linearity of the relation between excess rate ratio and change in life expectancy, and additivity of cause-specific life-table calculations. We employed excess rate ratios linking PM2.5 and mortality from cerebrovascular disease, chronic obstructive pulmonary disease, ischemic heart disease, and lung cancer derived from a meta-analysis of worldwide cohort studies. As a sensitivity analysis, we employed an integrated exposure-response function based on the observed risk of PM2.5 over a wide range of concentrations from ambient exposure, indoor exposure, second-hand smoke, and personal smoking. Impacts were estimated in relation to a change in PM2.5 from the 19.5 μg/m³ estimated for Toronto to an estimated natural background concentration of 1.8 μg/m³. Estimated changes in life expectancy varied linearly with excess rate ratios, but at higher values the relationship was more accurately represented as a nonlinear function. Changes in life expectancy attributed to specific causes of death were additive, with a maximum error of 10%. Results were sensitive to assumptions about the air pollution concentration below which effects on mortality were not quantified. We have demonstrated valid approximations comprising expression of change in life expectancy as a function of excess mortality and summation across multiple causes of death.
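As a rough illustration of the life-table calculus being approximated, the sketch below uses illustrative single-year rates and hypothetical function names (not the authors' data or code): it computes period life expectancy and the change produced when a cause-specific rate is inflated by an excess rate ratio; summing such changes across causes is the additivity assumption evaluated in the paper.

```python
import numpy as np

def life_expectancy(mx):
    """Period life expectancy at birth from single-year mortality rates mx,
    using q_x = m_x / (1 + 0.5*m_x) and averaging survivors over each year."""
    qx = mx / (1 + 0.5 * mx)
    lx = np.concatenate([[1.0], np.cumprod(1 - qx)])
    Lx = 0.5 * (lx[:-1] + lx[1:])          # person-years lived in each age interval
    return Lx.sum()

def le_loss(mx_all, mx_cause, excess_rate_ratio):
    """Years of life expectancy lost when one cause-specific rate is scaled up by
    an excess rate ratio (e.g., from a PM2.5 concentration-response function)."""
    return life_expectancy(mx_all) - life_expectancy(mx_all + mx_cause * excess_rate_ratio)

# Toy rates: all-cause mortality rising with age; one cause contributing 20% of deaths
ages = np.arange(0, 100)
mx_all = 0.0002 * np.exp(0.085 * ages)
mx_cause = 0.2 * mx_all
print(f"baseline e0 = {life_expectancy(mx_all):.2f} years")
print(f"LE loss for ERR = 0.10: {le_loss(mx_all, mx_cause, 0.10):.3f} years")
```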

3.
We provide a tractable characterization of the sharp identification region of the parameter vector θ in a broad class of incomplete econometric models. Models in this class have set-valued predictions that yield a convex set of conditional or unconditional moments for the observable model variables. In short, we call these models with convex moment predictions. Examples include static, simultaneous-move finite games of complete and incomplete information in the presence of multiple equilibria; best linear predictors with interval outcome and covariate data; and random utility models of multinomial choice in the presence of interval regressor data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set. The sharp identification region of θ, denoted Θ_I, can then be obtained as the set of minimizers of the distance from a properly specified vector of moments of random variables to this Aumann expectation. Algorithms in convex programming can be exploited to efficiently verify whether a candidate θ is in Θ_I. We use examples analyzed in the literature to illustrate the gains in identification and computational tractability afforded by our method.

4.
There is evidence that people do not fully take into account how other people's actions depend on these other people's information. This paper defines and applies a new equilibrium concept in games with private information, cursed equilibrium, which assumes that each player correctly predicts the distribution of other players' actions, but underestimates the degree to which these actions are correlated with other players' information. We apply the concept to common-values auctions, where cursed equilibrium captures the widely observed phenomenon of the winner's curse, and to bilateral trade, where cursedness predicts trade in adverse-selection settings for which conventional analysis predicts no trade. We also apply cursed equilibrium to voting and signalling models. We test a single-parameter variant of our model that embeds Bayesian Nash equilibrium as a special case and find that parameter values that correspond to cursedness fit a broad range of experimental datasets better than the parameter value that corresponds to Bayesian Nash equilibrium.
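The bilateral-trade prediction can be illustrated with a stylized "acquiring a company" example (our own illustration under assumed parameters, not taken from the paper): the seller's value v is uniform on [0, 100], the buyer's value is 1.5v, and the seller accepts a bid b iff v ≤ b. A χ-cursed buyer discounts, with weight χ, the information that acceptance conveys about v.

```python
import numpy as np

bids = np.linspace(0.0, 100.0, 1001)

for chi in (0.0, 0.5, 1.0):
    p_accept = bids / 100.0                       # P(v <= b) for v ~ U[0, 100]
    value_given_accept = 0.75 * bids              # E[1.5 v | v <= b]
    value_unconditional = 75.0                    # E[1.5 v]
    # A chi-cursed buyer blends the correct conditional value with the
    # unconditional one, underweighting the adverse-selection effect:
    perceived_value = chi * value_unconditional + (1 - chi) * value_given_accept
    perceived_payoff = p_accept * (perceived_value - bids)
    b_star = bids[np.argmax(perceived_payoff)]
    print(f"chi = {chi:3.1f}: optimal bid = {b_star:5.1f}")
# chi = 0 (Bayesian Nash): bid 0, no trade; chi = 1 (fully cursed): bid 37.5, trade occurs.
```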

5.
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³, due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In the case studies evaluated here, the MCC method led to a 10% higher estimate of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern.

6.
In this paper we show sufficient conditions for the decomposition of the complete bipartite graphs K_{2m,2n} and K_{2n+1,2n+1} − F into cycles of two different lengths, 4 and 2t, t > 2, where F is a 1-factor of K_{2n+1,2n+1}. We then prove that the results hold for t = 5 and 6. Dedicated to Frank K. Hwang on the occasion of his 65th birthday.

7.
We prove a Folk Theorem for asynchronously repeated games in which the set of players who can move in period t, denoted by I_t, is a random variable whose distribution is a function of the past action choices of the players and the past realizations of I_τ, τ = 1, 2, …, t−1. We impose a condition, the finite periods of inaction (FPI) condition, which requires that the number of periods in which every player has at least one opportunity to move is bounded. Given the FPI condition together with the standard nonequivalent utilities (NEU) condition, we show that every feasible and strictly individually rational payoff vector can be supported as a subgame perfect equilibrium outcome of an asynchronously repeated game.

8.
Pesticide risk assessment for food products involves combining information from consumption and concentration data sets to estimate a distribution for the pesticide intake in a human population. Using this distribution one can obtain probabilities of individuals exceeding specified levels of pesticide intake. In this article, we present a probabilistic, Bayesian approach to modeling the daily consumption of the pesticide Iprodione through multiple food products. Modeling data on food consumption and pesticide concentration poses a variety of problems, such as the large proportions of consumptions and concentrations that are recorded as zero, and correlation between the consumptions of different foods. We consider daily food consumption data from the Netherlands National Food Consumption Survey and concentration data collected by the Netherlands Ministry of Agriculture. We develop a multivariate latent-Gaussian model for the consumption data that allows for correlated intakes between products. For the concentration data, we propose a univariate latent-t model. We then combine predicted consumptions and concentrations from these models to obtain a distribution for individual daily Iprodione exposure. The latent-variable models allow for both skewness and large numbers of zeros in the consumption and concentration data. The use of a probabilistic approach is intended to yield more robust estimates of high percentiles of the exposure distribution than an empirical approach. Bayesian inference is used to facilitate the treatment of data with a complex structure.
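A minimal forward simulation in the spirit of the model structure described above, with illustrative parameters, units, and thresholds throughout (not the authors' fitted model): correlated latent Gaussians generate zero-inflated, right-skewed consumptions across products, a heavy-tailed latent-t-style draw generates concentrations with many zeros, and the two are combined into a daily exposure distribution whose upper percentiles can be read off.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_foods = 50_000, 3

# Correlated latent Gaussians -> zero-inflated, right-skewed daily consumptions
mu = np.array([-0.5, 0.0, 0.3])
corr = np.array([[1.0, 0.4, 0.2],
                 [0.4, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])
z = rng.multivariate_normal(mu, corr, size=n_people)
thresholds = np.array([0.0, 0.5, 1.0])            # higher threshold -> more zero consumptions
consumption = np.where(z > thresholds, np.exp(z), 0.0)        # kg/day (illustrative)

# Heavy-tailed (Student-t on the log scale) concentrations, zero when no residue is detected
p_detect = np.array([0.2, 0.1, 0.3])
detected = rng.random((n_people, n_foods)) < p_detect
conc = np.where(detected, np.exp(0.5 * rng.standard_t(5, (n_people, n_foods)) - 1.0), 0.0)

exposure = (consumption * conc).sum(axis=1)       # daily intake, mg/day (illustrative)
print("P(intake > 0):", (exposure > 0).mean())
print("99.9th percentile of daily intake:", round(np.quantile(exposure, 0.999), 3))
```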

9.
Finding the anti-block vital edge of a shortest path between two nodes
Let P_G(s,t) denote a shortest path between two nodes s and t in an undirected graph G with nonnegative edge weights. A detour at a node u ∈ P_G(s,t) = (s, …, u, v, …, t) is defined as a shortest path P_{G−e}(u,t) from u to t which does not make use of (u,v). In this paper, we focus on the problem of finding an edge e = (u,v) ∈ P_G(s,t) whose removal produces a detour at node u such that the ratio of the length of P_{G−e}(u,t) to the length of P_G(u,t) is maximum. We define such an edge as an anti-block vital edge (AVE for short), and show that this problem can be solved in O(mn) time, where n and m denote the number of nodes and edges in the graph, respectively. Some applications of the AVE for two special traffic networks are shown. This research is supported by NSF of China under Grants 70471035, 70525004, 701210001 and 60736027, and PSF of China under Grant 20060401003.
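A brute-force restatement of the AVE definition using NetworkX (hypothetical helper for illustration only; it recomputes a shortest path per candidate edge rather than implementing the paper's O(mn) algorithm):

```python
import networkx as nx

def anti_block_vital_edge(G, s, t):
    """Return the edge (u, v) on one shortest s-t path that maximizes
    len(P_{G-e}(u, t)) / len(P_G(u, t)), together with that ratio."""
    path = nx.shortest_path(G, s, t, weight="weight")
    best_edge, best_ratio = None, -1.0
    for u, v in zip(path[:-1], path[1:]):
        base = nx.shortest_path_length(G, u, t, weight="weight")
        w = G[u][v]["weight"]
        G.remove_edge(u, v)
        try:
            detour = nx.shortest_path_length(G, u, t, weight="weight")
            ratio = detour / base if base > 0 else float("inf")
            if ratio > best_ratio:
                best_edge, best_ratio = (u, v), ratio
        except nx.NetworkXNoPath:
            pass                                   # removing (u, v) disconnects u from t
        finally:
            G.add_edge(u, v, weight=w)
    return best_edge, best_ratio

G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "b", 1), ("b", "t", 1),
                           ("a", "c", 2), ("c", "t", 2), ("s", "b", 5)])
print(anti_block_vital_edge(G, "s", "t"))
```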

10.
Finding an anti-risk path between two nodes in undirected graphs
Given a weighted graph G = (V,E) with a source s and a destination t, a traveler has to go from s to t. However, some of the edges may be blocked at certain times, and the traveler only observes this upon reaching an adjacent site of the blocked edge. Let ℘ = {P_G(s,t)} be the set of all paths from s to t. The risk of a path is defined as the longest travel length under the assumption that any edge of the path may be blocked. This paper proposes the anti-risk path problem of finding a path P_G(s,t) in ℘ that has minimum risk. We show that this problem can be solved in O(mn + n² log n) time, assuming that at most one edge may be blocked, where n and m denote the number of vertices and edges in G, respectively. This research is supported by NSF of China under Grants 70525004, 60736027, 70121001 and Postdoctoral Science Foundation of China under Grant 20060401003.
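For intuition, a hedged NetworkX sketch (hypothetical helpers; exhaustive search over simple paths, nothing like the paper's O(mn + n² log n) algorithm) that computes the risk of a given path under the single-blocked-edge assumption and then picks a minimum-risk s-t path in a tiny graph:

```python
import networkx as nx

def path_risk(G, path):
    """Worst-case travel length when at most one edge of `path` is blocked and the
    traveler reroutes optimally upon reaching the blocked edge."""
    risk = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))  # nothing blocked
    walked, t = 0.0, path[-1]
    for u, v in zip(path, path[1:]):
        w = G[u][v]["weight"]
        G.remove_edge(u, v)
        try:
            risk = max(risk, walked + nx.shortest_path_length(G, u, t, weight="weight"))
        except nx.NetworkXNoPath:
            risk = float("inf")                    # blockage strands the traveler
        finally:
            G.add_edge(u, v, weight=w)
        walked += w
    return risk

def anti_risk_path(G, s, t):
    return min(nx.all_simple_paths(G, s, t), key=lambda p: path_risk(G, p))

G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "t", 1), ("s", "b", 2),
                           ("b", "t", 2), ("a", "b", 1)])
print(anti_risk_path(G, "s", "t"))
```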

11.
We empirically investigate how time reductions in particular product development stages impact market value. Using longitudinal project data from 107 firms, we compare stage times prior to and following investments in new product development process changes. Our analysis reveals a predominance of focus on time reduction in the late stages of product development. We also find support for the existence of an inverted-U relationship between market performance and time reductions for some of these stages: beta testing and technical implementation. Therefore, while time reductions can improve time to market, we observe a clear limit to the benefits associated with stage time reductions at particular stages. We also investigate the role of strategic contextual factors such as the extent to which a firm's patented innovations rely upon a variety, as opposed to a limited range, of diverse technology classes. The extent of this technology-span impacts optimal stage time reductions. We perform an in-depth post hoc analysis with a small set of firms to uncover how they should invest in stage time reduction given our empirical results. The post hoc analysis highlights that some firms are likely overinvesting in stage time reductions and destroying market value.

12.
Physiological daily inhalation rates reported in our previous study for normal-weight subjects 2.6–96 years old were compared to inhalation data determined in free-living overweight/obese individuals (n = 661) aged 5–96 years. Inhalation rates were also calculated in normal-weight (n = 408), overweight (n = 225), and obese classes 1, 2, and 3 adults (n = 134) aged 20–96 years. These inhalation values were based on published indirect calorimetry measurements (n = 1,069) and disappearance rates of oral doses of water isotopes (i.e., ²H₂O and H₂¹⁸O) monitored by gas isotope ratio mass spectrometry, usually in urine samples, for an aggregate period of over 16,000 days. Ventilatory equivalents for overweight/obese subjects at rest and during their aggregate daytime activities (28.99 ± 6.03 L to 34.82 ± 8.22 L of air inhaled/L of oxygen consumed; mean ± SD) were determined and used for calculations of inhalation rates. The interindividual variability factor, calculated as the ratio of the highest 99th percentile to the lowest 1st percentile of daily inhalation rates, is higher for absolute data expressed in m³/day (26.7) compared to data in m³/kg-day (12.2) and m³/m²-day (5.9). Higher absolute rates generally found in overweight/obese individuals compared to their normal-weight counterparts suggest higher intakes of air pollutants (in μg/day) for the former compared to the latter under identical exposure concentrations and conditions. The highest absolute mean (24.57 m³/day) and 99th percentile (55.55 m³/day) values were found in obese class 2 adults. They inhale on average 8.21 m³ more air per day than normal-weight adults.

13.
This paper provides computationally intensive, yet feasible methods for inference in a very general class of partially identified econometric models. Let P denote the distribution of the observed data. The class of models we consider is defined by a population objective function Q(θ, P) for θ ∈ Θ. The point of departure from the classical extremum estimation framework is that it is not assumed that Q(θ, P) has a unique minimizer in the parameter space Θ. The goal may be either to draw inferences about some unknown point in the set of minimizers of the population objective function or to draw inferences about the set of minimizers itself. In this paper, the object of interest is Θ_0(P) = argmin_{θ∈Θ} Q(θ, P), and so we seek random sets that contain this set with at least some prespecified probability asymptotically. We also consider situations where the object of interest is the image of Θ_0(P) under a known function. Random sets that satisfy the desired coverage property are constructed under weak assumptions. Conditions are provided under which the confidence regions are asymptotically valid not only pointwise in P, but also uniformly in P. We illustrate the use of our methods with an empirical study of the impact of top-coding outcomes on inferences about the parameters of a linear regression. Finally, a modest simulation study sheds some light on the finite-sample behavior of our procedure.

14.
Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to different values of the individual effects. Fixed effects, random effects, and Bayesian approaches all fall into this category. We provide a characterization of the class of weights (or priors) that produce estimators that are first-order unbiased. We show that such bias-reducing weights will depend on the data in general unless an orthogonal reparameterization or an essentially equivalent condition is available. Two intuitively appealing weighting schemes are discussed. We argue that asymptotically valid confidence intervals can be read from the posterior distribution of the common parameters when N and T grow at the same rate. Next, we show that random effects estimators are not bias reducing in general and we discuss important exceptions. Moreover, the bias depends on the Kullback–Leibler distance between the population distribution of the effects and its best approximation in the random effects family. Finally, we show that, in general, standard random effects estimation of marginal effects is inconsistent for large T, whereas the posterior mean of the marginal effect is large-T consistent, and we provide conditions for bias reduction. Some examples and Monte Carlo experiments illustrate the results.

15.
The availability of high frequency financial data has generated a series of estimators based on intra-day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two-scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as estimators of leverage effects, high frequency betas, and semivariance.

16.
We develop results for the use of Lasso and post-Lasso methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post-Lasso in the first stage is root-n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic “beta-min” conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso-based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post-Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non-Gaussian, heteroscedastic disturbances that uses a data-weighted ℓ1-penalty function. By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for the resulting Lasso and post-Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data-driven method for choosing the penalty level that must be specified in obtaining Lasso and post-Lasso estimates and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances.
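A hedged simulation sketch of the two-stage idea (scikit-learn's cross-validated Lasso penalty stands in for the paper's data-driven plug-in penalty, and the post-Lasso refit is omitted; the design and names below are our own): with p much larger than the number of relevant instruments, the fitted first stage serves as the estimated optimal instrument in the second stage.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p, alpha_true = 200, 400, 1.0

# Approximately sparse first stage: only the first 5 of p instruments matter
Z = rng.standard_normal((n, p))
v = rng.standard_normal(n)
u = 0.6 * v + rng.standard_normal(n)          # structural error correlated with v
d = Z[:, :5] @ np.full(5, 1.0) + v            # endogenous regressor
y = alpha_true * d + u

# First stage: Lasso prediction of d from the instruments
dhat = LassoCV(cv=5).fit(Z, d).predict(Z)

# Second stage: IV using the fitted values as the instrument for d
alpha_hat = (dhat @ y) / (dhat @ d)
resid = y - alpha_hat * d
se = np.sqrt(np.sum(dhat**2 * resid**2)) / abs(dhat @ d)   # heteroscedasticity-robust
print(f"alpha_hat = {alpha_hat:.3f}  (robust se = {se:.3f})")
```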

17.
Using many moment conditions can improve efficiency but makes the usual generalized method of moments (GMM) inferences inaccurate. Two-step GMM is biased. Generalized empirical likelihood (GEL) has smaller bias, but the usual standard errors are too small in instrumental variable settings. In this paper we give a new variance estimator for GEL that addresses this problem. It is consistent under the usual asymptotics and, under many weak moment asymptotics, is larger than usual and is consistent. We also show that the Kleibergen (2005) Lagrange multiplier and conditional likelihood ratio statistics are valid under many weak moments. In addition, we introduce a jackknife GMM estimator, but find that GEL is asymptotically more efficient under many weak moments. In Monte Carlo examples we find that t-statistics based on the new variance estimator have nearly correct size in a wide range of cases.

18.
This paper establishes the higher-order equivalence of the k-step bootstrap, introduced recently by Davidson and MacKinnon (1999), and the standard bootstrap. The k-step bootstrap is a computationally very attractive alternative to the standard bootstrap for statistics based on nonlinear extremum estimators, such as generalized method of moments and maximum likelihood estimators. The paper also extends results of Hall and Horowitz (1996) to provide new results regarding the higher-order improvements of the standard bootstrap and the k-step bootstrap for extremum estimators (compared to procedures based on first-order asymptotics). The results of the paper apply to Newton-Raphson (NR), default NR, line-search NR, and Gauss-Newton k-step bootstrap procedures. The results apply to the nonparametric iid bootstrap and nonoverlapping and overlapping block bootstraps. The results cover symmetric and equal-tailed two-sided t tests and confidence intervals, one-sided t tests and confidence intervals, Wald tests and confidence regions, and J tests of over-identifying restrictions.
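To make the computational appeal concrete, here is a small hedged sketch (logit MLE as the extremum estimator; the helper names and simulated design are our own): each nonparametric iid bootstrap replication starts from the full-sample estimate and takes only k Newton-Raphson steps instead of iterating to convergence.

```python
import numpy as np

def newton_logit(theta, X, y, k):
    """k Newton-Raphson steps on the logit log-likelihood, starting from theta."""
    for _ in range(k):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        score = X.T @ (y - p)
        hess = -(X * (p * (1 - p))[:, None]).T @ X
        theta = theta - np.linalg.solve(hess, score)
    return theta

rng = np.random.default_rng(0)
n, B, k = 300, 499, 2
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
theta0 = np.array([0.3, 1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ theta0))).astype(float)

theta_hat = newton_logit(np.zeros(2), X, y, 25)      # full-sample MLE (fully iterated)

boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, n)                      # nonparametric iid bootstrap draw
    boot[b] = newton_logit(theta_hat, X[idx], y[idx], k)   # only k Newton steps

print("k-step bootstrap standard errors:", boot.std(axis=0, ddof=1))
```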

19.
Maximizing Profits of Routing in WDM Networks
Let G = (V, E) be a ring (or chain) network representing an optical wavelength division multiplexing (WDM) network with k channels, where each edge e_j has an integer capacity c_j. A request (s_i, t_i) is a pair of nodes in G. Given m requests (s_i, t_i), i = 1, 2, ..., m, each with a profit value p_i, we would like to design/route a k-colorable set of paths for some (possibly not all) of the m requests such that each edge e_j in G is used at most c_j times and the total profit of the set of designed paths is maximized. Here two paths cannot have the same color (channel) if they share some common edge(s). This problem arises in optical communication networks. In this paper, we present a polynomial-time algorithm to solve the problem when G is a chain. When G is a ring, however, the optimization problem is NP-hard (Wan and Liu, 1998); we present a 2-approximation algorithm based on our solution to the chain network. Similar results are obtained for bidirected chains and bidirected rings.
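A brute-force restatement of the chain case for tiny instances (a hypothetical helper; the paper's algorithm is polynomial, this sketch is exponential): on a chain every request must use the unique interval of edges between its endpoints, and because interval graphs are perfect a selection is k-colorable iff no edge carries more than k paths, so feasibility reduces to load_j ≤ min(c_j, k) on every edge.

```python
from itertools import combinations

def max_profit_chain(n_nodes, capacities, k, requests):
    """requests: list of (s, t, profit) with 0 <= s, t < n_nodes on a chain whose
    edges are j = 0..n_nodes-2; returns the best profit and the chosen requests."""
    best = (0, ())
    for r in range(1, len(requests) + 1):
        for subset in combinations(range(len(requests)), r):
            load = [0] * (n_nodes - 1)
            for i in subset:
                s, t, _ = requests[i]
                for j in range(min(s, t), max(s, t)):   # edges on the unique s-t path
                    load[j] += 1
            if all(load[j] <= min(capacities[j], k) for j in range(n_nodes - 1)):
                profit = sum(requests[i][2] for i in subset)
                best = max(best, (profit, subset))
    return best

# Example: 5-node chain (edges 0..3), every edge capacity 2, k = 2 channels
print(max_profit_chain(5, [2, 2, 2, 2], 2,
                       [(0, 2, 5), (1, 4, 7), (2, 4, 3), (0, 4, 6)]))
```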

20.
We consider bootstrap unit root tests based on finite-order autoregressive integrated models driven by iid innovations, with or without deterministic time trends. A general methodology is developed to approximate asymptotic distributions for models driven by integrated time series, and is used to obtain asymptotic expansions for the Dickey–Fuller unit root tests. The second-order terms in their expansions are of stochastic orders O_p(n^{−1/4}) and O_p(n^{−1/2}), and involve functionals of Brownian motions and normal random variates. The asymptotic expansions for the bootstrap tests are also derived and compared with those of the Dickey–Fuller tests. We show in particular that the bootstrap offers asymptotic refinements for the Dickey–Fuller tests, i.e., it corrects their second-order errors. More precisely, it is shown that the critical values obtained by bootstrap resampling are correct up to the second-order terms, and the errors in rejection probabilities are of order o(n^{−1/2}) if the tests are based upon the bootstrap critical values. Through simulations, we investigate how effective the bootstrap correction is in small samples.
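A hedged sketch of the resampling scheme being analyzed (a residual/sieve bootstrap of an ADF regression without deterministic terms; the function names, lag choice, and burn-in are our own illustrative assumptions): bootstrap samples are generated under the unit-root null with the estimated lag dynamics, and the α-quantile of the bootstrap DF statistics replaces the asymptotic critical value.

```python
import numpy as np

def adf_stat(y, p=1):
    """t-statistic on rho in  dy_t = rho*y_{t-1} + sum_j gamma_j*dy_{t-j} + e_t."""
    dy = np.diff(y)
    n1 = len(dy)
    X = np.column_stack([y[p:n1]] + [dy[p - j:n1 - j] for j in range(1, p + 1)])
    Y = dy[p:]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    s2 = resid @ resid / (len(Y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se, beta[1:], resid

def bootstrap_unit_root_test(y, p=1, B=999, alpha=0.05, seed=0):
    """Resample centered ADF residuals, rebuild dy* from the estimated AR(p) lag
    dynamics with rho = 0 (the unit-root null), integrate, and take the
    alpha-quantile of the bootstrap DF statistics as the critical value."""
    rng = np.random.default_rng(seed)
    stat, gamma, resid = adf_stat(y, p)
    resid = resid - resid.mean()
    burn, stats = 50, np.empty(B)
    for b in range(B):
        e = rng.choice(resid, size=len(y) + burn, replace=True)
        dstar = np.zeros(len(e))
        for t in range(p, len(e)):
            dstar[t] = gamma @ dstar[t - p:t][::-1] + e[t]
        stats[b] = adf_stat(np.cumsum(dstar[burn:]), p)[0]
    return stat, np.quantile(stats, alpha)      # reject the unit root if stat < critical value

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(200))          # a random walk: the null is true
print(bootstrap_unit_root_test(y))
```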
