Similar Documents
A total of 20 similar documents were retrieved.
1.
We devise a convenient way to estimate stochastic volatility and its volatility. Our method is applicable to both cross-sectional and time series data, and to both high-frequency and low-frequency data. Moreover, when applied to cross-sectional data (a collection of risky assets, i.e., a portfolio), the method offers a substantial simplification: estimating the volatility of the portfolio does not require estimating a volatility matrix (the volatilities of the individual assets in the portfolio and their correlations). Furthermore, there is no need to generate volatility data.

2.
One way to obtain panel-like information on household wealth is to ask households about changes in their asset holdings. Yet the reliability of retrospective data is unclear, considering the potential for recall error. This article examines the reliability of retrospective reporting using data from the 1983–1989 Survey of Consumer Finances. We find substantial inconsistencies between reported net investments in assets and measured changes in holdings. Inconsistencies are less severe for salient transactions such as home sales and more severe for aggregated items such as financial assets.

3.
It is shown that members of a class of two-level nonorthogonal resolution IV designs with n factors are strongly resolvable search designs when k, the maximum number of two-factor interactions thought possible, equals one; weakly resolvable when k = 2, except when the number of factors is 6; and may not be weakly resolvable when k ≥ 3.

4.
This paper proposes a number of procedures for developing new biased estimators of the seemingly unrelated regression (SUR) parameters when the explanatory variables are affected by multicollinearity. Several ridge parameters are proposed and then compared in terms of the trace mean squared error (TMSE) and proportion-of-replication (PR) criteria. The PR criterion is the proportion of replications (out of 1,000) for which the SUR version of the generalized least squares (SGLS) estimator has a smaller TMSE than the others. The study was performed using Monte Carlo simulations in which the number of equations in the system, the number of observations, the correlation among equations, and the correlation between explanatory variables were varied. For each model, we performed 1,000 replications. Our results show that, under certain conditions, some of the proposed SUR ridge parameters (R_Sgeom, R_Skmed, R_Sqarith, and R_Sqmax) performed well when compared, in terms of the TMSE and PR criteria, with other proposed and popular existing ridge parameters. In large samples, and when the collinearity between the explanatory variables is not high, the unbiased SUR estimator (SGLS) performed better than the ridge-based estimators.

5.
The proportional odds model (POM) is commonly used in regression analysis to predict the outcome for an ordinal response variable. The maximum likelihood estimation (MLE) approach is typically used to obtain the parameter estimates. The likelihood estimates do not exist when the number of parameters, p, is greater than the number of observations, n. The MLE also does not exist if there are no overlapping observations in the data (complete separation). When the number of parameters is less than the sample size but p approaches n, the likelihood estimates may not exist, and when they do exist they may have very large standard errors. An estimation method is proposed to address the last two issues, i.e. complete separation and the case where p approaches n, but not the case where p > n. The proposed method does not use a penalty term but instead uses pseudo-observations to regularize the observed responses by downgrading their effect so that they become close to the underlying probabilities. The estimates can be computed easily with any commonly used statistical package that supports fitting POMs with weights. The estimates are compared with the MLE in a simulation study and in an application to real data.

6.
We develop a finite-sample procedure to test the mean-variance efficiency and spanning hypotheses, without imposing any parametric assumptions on the distribution of model disturbances. In so doing, we provide an exact distribution-free method to test uniform linear restrictions in multivariate linear regression models. The framework allows for unknown forms of nonnormalities as well as time-varying conditional variances and covariances among the model disturbances. We derive exact bounds on the null distribution of joint F statistics to deal with the presence of nuisance parameters, and we show how to implement the resulting generalized nonparametric bounds tests with Monte Carlo resampling techniques. In sharp contrast to the usual tests that are not even computable when the number of test assets is too large, the power of the proposed test procedure potentially increases along both the time and cross-sectional dimensions.

7.
The paper is concerned with an acceptance sampling problem under destructive inspections for one-shot systems. The systems may fail at random times while they are operating (the systems are considered to be operating from the start of storage), and these failures can only be identified by inspection. Thus, n samples are randomly selected from the N one-shot systems for periodic destructive inspection. After storage time T, the N systems are replaced if the number of working systems is less than a pre-specified threshold k. The primary purpose of this study is to determine the optimal number of samples n* extracted from the N systems for destructive inspection and the optimal acceptance number k* in the sample, under a constraint on the system interval availability, so as to minimize the expected cost rate. Numerical experiments are used to investigate the effect of the sampling-inspection parameters on the optimal solutions.

8.
This article considers fixed effects (FE) estimation for linear panel data models under possible model misspecification when both the number of individuals, n, and the number of time periods, T, are large. We first clarify the probability limit of the FE estimator and argue that this probability limit can be regarded as a pseudo-true parameter. We then establish the asymptotic distributional properties of the FE estimator around the pseudo-true parameter when n and T jointly go to infinity. Notably, we show that the FE estimator suffers from the incidental parameters bias, whose leading order is O(T^(-1)), and even after the incidental parameters bias is completely removed, the rate of convergence of the FE estimator depends on the degree of model misspecification and is either (nT)^(-1/2) or n^(-1/2). Second, we establish asymptotically valid inference on the (pseudo-true) parameter. Specifically, we derive the asymptotic properties of the clustered covariance matrix (CCM) estimator and the cross-section bootstrap, and show that they are robust to model misspecification. This establishes a rigorous theoretical ground for the use of the CCM estimator and the cross-section bootstrap when model misspecification and the incidental parameters bias (in the coefficient estimate) are present. We conduct Monte Carlo simulations to evaluate the finite sample performance of the estimators and inference methods, together with a simple application to unemployment dynamics in the U.S.
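The cross-section bootstrap referred to here resamples whole individuals so that within-individual dependence is preserved. The sketch below is a minimal illustration of that idea for a one-regressor fixed-effects model, not the authors' procedure; all data, sample sizes, and variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fe_estimate(y, x):
    """Within (fixed-effects) estimator for y[i, t] = a_i + b * x[i, t] + e[i, t]."""
    yd = y - y.mean(axis=1, keepdims=True)   # demean within each individual
    xd = x - x.mean(axis=1, keepdims=True)
    return (xd * yd).sum() / (xd ** 2).sum()

# Illustrative panel: n individuals, T periods (all values assumed, not from the paper).
n, T, b_true = 200, 10, 0.5
alpha = rng.normal(size=(n, 1))
x = rng.normal(size=(n, T)) + alpha          # regressor correlated with the fixed effect
y = alpha + b_true * x + rng.normal(size=(n, T))

b_hat = fe_estimate(y, x)

# Cross-section bootstrap: resample whole individuals (rows) with replacement,
# so each draw keeps an individual's time series intact.
boot = []
for _ in range(999):
    idx = rng.integers(0, n, size=n)
    boot.append(fe_estimate(y[idx], x[idx]))

print(f"FE estimate {b_hat:.3f}, cross-section bootstrap s.e. {np.std(boot, ddof=1):.3f}")
```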

9.
To bootstrap a regression problem, pairs of response and explanatory variables or residuals can be resampled, according to whether we believe that the explanatory variables are random or fixed. In the latter case, different residuals have been proposed in the literature, including the ordinary residuals (Efron 1979), standardized residuals (Bickel & Freedman 1983) and Studentized residuals (Weber 1984). Freedman (1981) has shown that the bootstrap from ordinary residuals is asymptotically valid when the number of cases increases and the number of variables is fixed. Bickel & Freedman (1983) have shown the asymptotic validity for ordinary residuals when the number of variables and the number of cases both increase, provided that the ratio of the two converges to zero at an appropriate rate. In this paper, the authors introduce the use of BLUS (Best Linear Unbiased with Scalar covariance matrix) residuals in bootstrapping regression models. The main advantage of the BLUS residuals, introduced in Theil (1965), is that they are uncorrelated. The main disadvantage is that only n − p residuals can be computed for a regression problem with n cases and p variables. The asymptotic results of Freedman (1981) and Bickel & Freedman (1983) for the ordinary (and standardized) residuals are generalized to the BLUS residuals. A small simulation study shows that even though only n − p residuals are available, in small samples bootstrapping BLUS residuals can be as good as, and sometimes better than, bootstrapping from standardized or Studentized residuals.
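Theil's BLUS transformation is not reproduced here; the sketch below only shows the generic fixed-design residual bootstrap (with ordinary residuals, in the spirit of Efron 1979) that the BLUS variant modifies by resampling from the n − p uncorrelated BLUS residuals instead. The design matrix, sample sizes, and coefficients are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative fixed-design regression (values assumed for the sketch).
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat                      # ordinary residuals (n of them)
resid = resid - resid.mean()                  # centre before resampling

# Fixed-X residual bootstrap: resample residuals, rebuild responses, refit.
boot = np.empty((2000, p))
for b in range(2000):
    y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]

print("bootstrap s.e. of coefficients:", boot.std(axis=0, ddof=1))
```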

10.
RATES OF CONVERGENCE IN SEMI-PARAMETRIC MODELLING OF LONGITUDINAL DATA
We consider the problem of semi-parametric regression modelling when the data consist of a collection of short time series for which measurements within series are correlated. The objective is to estimate a regression function of the form E[Y(t) | x] = x′β + μ(t), where μ(·) is an arbitrary, smooth function of time t, and x is a vector of explanatory variables which may or may not vary with t. For the non-parametric part of the estimation we use a kernel estimator with fixed bandwidth h. When h is chosen without reference to the data, we give exact expressions for the bias and variance of the estimators for β and μ(t), and an asymptotic analysis of the case in which the number of series tends to infinity whilst the number of measurements per series is held fixed. We also report the results of a small-scale simulation study to indicate the extent to which the theoretical results continue to hold when h is chosen by a data-based cross-validation method.
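As a rough illustration of the model form E[Y(t) | x] = x′β + μ(t), the sketch below alternates an OLS update for β with a fixed-bandwidth Nadaraya–Watson smoother for μ(t). It is only one simple way to fit a partially linear model, it ignores the within-series correlation that the paper analyses, and the data, bandwidth, and function names are assumptions of the example rather than the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

def nw_smooth(t_grid, t_obs, r, h):
    """Nadaraya-Watson estimate of E[r | t] on t_grid, Gaussian kernel with fixed bandwidth h."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

# Illustrative data: m short series of length T (values assumed).
m, T, h = 100, 5, 0.15
t = np.tile(np.linspace(0, 1, T), m)
x = rng.normal(size=t.size)
mu = np.sin(2 * np.pi * t)                 # smooth time effect
y = 1.5 * x + mu + rng.normal(0, 0.5, size=t.size)

beta = 0.0
for _ in range(20):                        # simple backfitting between beta and mu
    mu_hat = nw_smooth(t, t, y - beta * x, h)
    beta = np.sum(x * (y - mu_hat)) / np.sum(x ** 2)

print(f"estimated beta = {beta:.3f} (true value 1.5)")
```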

11.
This article presents the results of a simulation study of variable selection in a multiple regression context that evaluates the frequency of selecting noise variables and the bias of the adjusted R² of the selected variables when some of the candidate variables are authentic. It is demonstrated that for most samples a large percentage of the selected variables is noise, particularly when the number of candidate variables is large relative to the number of observations. The adjusted R² of the selected variables is highly inflated.
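The paper's exact selection procedure and design are not given in the abstract; the sketch below is an assumed setup that merely illustrates the phenomenon: with 3 authentic and 37 noise candidates and n = 50, a crude marginal-t screening routinely selects noise variables and the post-selection adjusted R² is inflated.

```python
import numpy as np

rng = np.random.default_rng(3)

def adj_r2(y, X):
    """Adjusted R^2 of an OLS fit with intercept."""
    n, k = X.shape
    Z = np.column_stack([np.ones(n), X])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r2 = 1 - resid.var() / y.var()
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

n, p_true, p_noise, reps = 50, 3, 37, 500
noise_share, adj = [], []
for _ in range(reps):
    X = rng.normal(size=(n, p_true + p_noise))
    y = X[:, :p_true].sum(axis=1) + rng.normal(size=n)   # only the first 3 columns matter
    # crude "selection": keep columns whose marginal correlation t statistic exceeds 2
    sel = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)) > 2:
            sel.append(j)
    if sel:
        noise_share.append(np.mean([j >= p_true for j in sel]))
        adj.append(adj_r2(y, X[:, sel]))

print(f"average share of selected variables that are noise: {np.mean(noise_share):.2f}")
print(f"average adjusted R^2 after selection: {np.mean(adj):.2f}")
```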

12.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter—the probability of success p—is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee a good coverage probability when p is close to 0 or 1.

It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than other procedures, and it is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1,021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities but need a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
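The article's new procedures are not specified in the abstract, but the Wilson interval itself is standard, so the coverage oscillation described above can be checked directly. The sketch computes the exact coverage probability of the 95% Wilson interval for a few illustrative values of n at a fixed p; the chosen n and p are arbitrary.

```python
import numpy as np
from scipy.stats import binom, norm

def wilson_interval(x, n, conf=0.95):
    """Wilson score interval for a binomial proportion."""
    z = norm.ppf(0.5 + conf / 2)
    p_hat = x / n
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def exact_coverage(n, p, conf=0.95):
    """Exact coverage: sum binomial mass over outcomes whose interval contains p."""
    x = np.arange(n + 1)
    lo, hi = wilson_interval(x, n, conf)
    return binom.pmf(x, n, p)[(lo <= p) & (p <= hi)].sum()

# Coverage oscillates with n for a fixed p, as described above.
for n in (20, 21, 22, 50, 100):
    print(n, round(exact_coverage(n, 0.05), 4))
```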

13.
This paper is concerned with the pricing of American options by simulation methods. In the traditional methods, in order to determine when to exercise, we have to store the simulated asset prices at all time steps on all paths. If N time steps and M paths are used, then the storage requirement is O(MN). In this paper, we present a simulation method for pricing American options in which the storage requirement grows only like O(M). The only additional computational cost is that we have to generate each random number twice instead of once. For machines with limited memory, we can now use a larger N to improve the accuracy in pricing the options.
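The paper's algorithm is not described in enough detail to reproduce here; the sketch below only illustrates the storage-for-recomputation trade-off it relies on: if each path can be regenerated from a stored seed, only O(M) per-path state is kept and intermediate prices are recomputed (each random number is drawn twice) instead of stored. The GBM parameters and path counts are arbitrary.

```python
import numpy as np

# Illustrative GBM parameters (assumed, not from the paper).
M, N = 4, 5                      # paths, time steps
S0, r, sigma, dt = 100.0, 0.03, 0.2, 1.0 / N

def simulate_path(seed):
    """Regenerate one whole price path from its seed; nothing else is stored."""
    z = np.random.default_rng(seed).standard_normal(N)
    return S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z))

seeds = np.arange(M)             # O(M) storage: one seed per path

# First pass: use terminal prices (e.g. to build an exercise rule).
terminal = np.array([simulate_path(s)[-1] for s in seeds])

# Second pass: regenerate exactly the same paths whenever intermediate prices are
# needed, instead of having stored the full M x N price matrix.
assert np.allclose([simulate_path(s)[-1] for s in seeds], terminal)
print(terminal)
```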

14.
Optimal block designs in small blocks are explored under the A-, E- and D-criteria when the treatments have a natural ordering and interest lies in comparing consecutive pairs of treatments. We first formulate the problem via approximate theory, which leads to a convenient multiplicative algorithm for obtaining A-optimal design measures. This, in turn, yields highly efficient exact designs under the A-criterion, even when the number of blocks is rather small. Moreover, our approach allows nesting of such efficient exact designs, which is an advantage when the resources for the experiment become available in several stages. Illustrative examples are given and tables of A-optimal design measures are provided. Approximate theory is also seen to yield analytical results on E- and D-optimal design measures.

15.
The study of r-out-of-n systems is of utmost importance in reliability theory. In this note, we study closure of different partial orders under the formation of r-out-of-N and (N − s)-out-of-N systems when the number of components N, forming the system, is a random variable having support {k, k + 1,…}, where k is a fixed positive integer, r ∈ {1,…, k} and s ∈ {0, 1,…, k − 1}. This generalizes quite a few results already known in the literature. We also study the closure of different partial orders when two systems are formed out of different random numbers of components.

16.
We consider an exact factor model with integrated factors and propose an LM-type test for unit roots in the idiosyncratic component. We show that, for a fixed number of panel individuals (N) and when the number of time points (T) tends to infinity, the limiting distribution of the LM-type statistic is a weighted sum of independent Chi-square variables with one degree of freedom, and when T tends to infinity followed by N tending to infinity, the limiting distribution is standard normal. The results should contribute to the challenging task of deriving likelihood-based unit-root tests in dynamic factor models.

17.
Approximations to the distribution of a discrete random variable originating from the classical occupancy problem are explored. The random variable X of interest is defined as the number of the N elements that are selected by or assigned to at least one of the K individuals, when each of the N elements is equally likely to be chosen by or assigned to any of the K individuals. Assuming that N represents the number of cells and each of the K individuals is placed in exactly one of the cells, X can also be defined as the number of cells occupied by the K individuals. In the literature, various asymptotic results for the distributions of X and (N − X) are given; however, no guidelines are specified with respect to their utilization. In this article, these approximations are explored for various values of K and N, and rules of thumb are given for their appropriate use.
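The article's specific approximations are not reproduced here; the sketch below simply simulates X (the number of occupied cells when K individuals each pick one of N cells uniformly) and checks the simulation against the exact mean E[X] = N(1 − (1 − 1/N)^K). The values of N and K are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_occupied(N, K, reps=20_000):
    """Number of distinct cells occupied when K individuals each pick one of N cells uniformly."""
    picks = rng.integers(0, N, size=(reps, K))
    return np.array([np.unique(row).size for row in picks])

N, K = 20, 15
X = simulate_occupied(N, K)
exact_mean = N * (1 - (1 - 1 / N) ** K)   # each cell is empty with probability (1 - 1/N)^K
print(f"simulated E[X] = {X.mean():.3f}, exact E[X] = {exact_mean:.3f}")
```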

18.
In this paper, variance balanced ternary designs are constructed in unequal block sizes for situations where suitable BIB designs do not exist for a given number of treatments because of the constraints bk = vr and λ(v − 1) = r(k − 1).

19.
Three new test statistics are introduced for correlated categorical data in stratified R×C tables. They are similar in form to the standard generalized Cochran-Mantel-Haenszel statistics but modified to handle correlated outcomes. Two of these statistics are asymptotically valid in both many-strata (sparse data) and large-strata limiting models. The third one is designed specifically for the many-strata case but is valid even with a small number of strata. This latter statistic is also appropriate when strata are assumed to be random.

20.
Several replacement policies in the literature have modeled the whole life cycle or operating interval of an operating unit as finite rather than infinite, as is assumed in the traditional approach. However, it is more natural to consider the case in which the finite life cycle is a fluctuating parameter that can be used to estimate replacement times, and this case is taken up in this article. For this, we first formulate a general model in which the unit is replaced at random age U, at random time Y for the first working number, at random life cycle S, or at failure X, whichever occurs first. Two models included in the general model are then optimized: replacement at age T, obtained when the variable U has a degenerate distribution, and replacement at working number N, where the replacement time is the sum of N copies of the variable Y. We obtain the total expected cost until replacement and the expected replacement cost rate for each model. Optimal age T, optimal working number N, and the optimal pair (T, N) are discussed analytically and computed numerically.
