Similar documents
1.
The non-central chi-squared distribution plays a vital role in statistical testing procedures. Estimation of the non-centrality parameter provides valuable information for the power calculation of the associated test. We are interested in the statistical inference properties of the non-centrality parameter estimate based on one observation (usually a summary statistic) from a truncated chi-squared distribution. This work is motivated by the application of the flexible two-stage design in case–control studies, where the sample size needed for the second stage of a two-stage study can be determined adaptively from the results of the first stage. We first study the moment estimate for the truncated distribution and prove its existence and uniqueness, as well as its inadmissibility and convergence properties. We then define a new class of estimates that includes the moment estimate as a special case. Among this class, we recommend one member that outperforms the moment estimate in a wide range of scenarios. We also present two methods for constructing confidence intervals. Simulation studies are conducted to evaluate the performance of the proposed point and interval estimates.
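A minimal numerical sketch of the moment idea for the truncated setting (not the authors' estimator class or their interval methods): solve E_lambda[X | X > c] = x for lambda, where X is non-central chi-squared with known degrees of freedom and c is the truncation point. The degrees of freedom, truncation point and observed value below are illustrative assumptions.

```python
import numpy as np
from scipy import stats, integrate, optimize

def truncated_mean(lam, df, c):
    """E[X | X > c] for X ~ (non-central) chi-squared with df degrees of freedom."""
    dist = stats.chi2(df) if lam == 0 else stats.ncx2(df, lam)
    num, _ = integrate.quad(lambda x: x * dist.pdf(x), c, np.inf)
    return num / dist.sf(c)

def moment_estimate(x_obs, df, c):
    """Moment estimate of lambda from one observation x_obs of the truncated distribution."""
    if x_obs <= truncated_mean(0.0, df, c):
        return 0.0                                   # estimate pinned at the boundary
    hi = 1.0
    while truncated_mean(hi, df, c) < x_obs:         # expand the search bracket
        hi *= 2.0
    return optimize.brentq(lambda lam: truncated_mean(lam, df, c) - x_obs, 0.0, hi)

print(moment_estimate(x_obs=18.0, df=4, c=9.49))     # hypothetical numbers
```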

2.
Kurt Brännäs & Jörgen Hellström 《Econometric Reviews》2001,20(4):425-443
The integer-valued AR(1) model is generalized to encompass some of the more likely features of economic time series of count data. The generalizations come at the price of losing exact distributional properties. For most specifications the first- and second-order conditional and unconditional moments can be obtained. Hence estimation, testing and forecasting are feasible and can be based on least squares or GMM techniques. An illustration based on the number of plants within an industrial sector is considered.
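As a rough illustration of the kind of count model involved, the sketch below simulates a basic INAR(1) series (binomial thinning plus Poisson arrivals) and estimates it by conditional least squares, one of the moment-based approaches the abstract points to. The parameter values and Poisson innovation are illustrative assumptions, not the paper's generalized specification or its plant-count data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_inar1(alpha, mu, n):
    """y_t = alpha o y_{t-1} + eps_t, binomial thinning plus Poisson(mu) arrivals."""
    y = np.zeros(n, dtype=int)
    for t in range(1, n):
        survivors = rng.binomial(y[t - 1], alpha)   # alpha o y_{t-1}
        y[t] = survivors + rng.poisson(mu)
    return y

y = simulate_inar1(alpha=0.6, mu=2.0, n=500)

# Conditional least squares: E[y_t | y_{t-1}] = alpha * y_{t-1} + mu,
# so regress y_t on (1, y_{t-1}).
X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
mu_hat, alpha_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(alpha_hat, mu_hat)
```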

3.
The concept of fractional cointegration, whereby deviations from an equilibrium relationship follow a fractionally integrated process, has attracted some attention of late. The extended concept allows cointegration to be associated with mean reversion in the error, rather than requiring the more stringent condition of stationarity. This paper presents a Bayesian method for conducting inference about fractional cointegration. The method is based on an approximation of the exact likelihood, with a Jeffreys prior being used to offset identification problems. Numerical results are produced via a combination of Markov chain Monte Carlo algorithms. The procedure is applied to several purchasing power parity relations, with substantial evidence found in favor of parity reversion.
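A minimal sketch of the fractional integration idea behind fractional cointegration (not the paper's Bayesian MCMC procedure): the fractional difference filter (1-L)^d is applied through its binomial-expansion weights, and an I(d) error is simulated by inverse filtering. The value d = 0.4 is an illustrative assumption.

```python
import numpy as np

def frac_diff_weights(d, n):
    """Binomial-expansion weights of (1-L)^d: w_0 = 1, w_k = w_{k-1} * (k-1-d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply the truncated filter (1-L)^d to a series x."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[: t + 1][::-1] @ x[: t + 1] for t in range(len(x))])

rng = np.random.default_rng(1)
eps = rng.standard_normal(500)
u = frac_diff(eps, -0.4)        # simulate an I(0.4) error by inverse filtering
back = frac_diff(u, 0.4)        # differencing with d = 0.4 recovers eps (up to rounding)
print(np.allclose(back, eps))
```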

4.
Summary The paper first provides a short review of the most common microeconometric models, including logit, probit, discrete choice, duration models, models for count data and Tobit-type models. In the second part we consider the situation that the micro data have undergone some anonymization procedure, which has become an important issue since otherwise confidentiality would not be guaranteed. We briefly describe the most important approaches to data protection, which can also be seen as deliberately introducing measurement error. We also consider the possibility of correcting the estimation procedure to take the anonymization procedure into account. We illustrate this for the case of binary data which are anonymized by ‘post-randomization’ and used in a probit model. We show the effect of ‘naive’ estimation, i.e. disregarding the anonymization procedure. We also show that a ‘corrected’ estimate is available which is satisfactory in statistical terms. This remains true if the parameters of the anonymization procedure have to be estimated as well. Research in this paper is related to the project “Faktische Anonymisierung wirtschaftsstatistischer Einzeldaten” financed by the German Ministry of Research and Technology.
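A minimal sketch of post-randomization on a binary variable and a moment-type correction of the induced misclassification bias. The transition probabilities are illustrative assumptions, and the paper's corrected probit estimator is more involved than this simple mean correction.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
y = rng.binomial(1, 0.3, n)                      # true binary data, P(y = 1) = 0.3

p11, p01 = 0.85, 0.10                            # P(obs=1 | true=1), P(obs=1 | true=0)
obs = rng.binomial(1, np.where(y == 1, p11, p01))  # anonymized (post-randomized) data

naive = obs.mean()                               # 'naive' estimate ignores the anonymization
corrected = (obs.mean() - p01) / (p11 - p01)     # invert E[obs] = p11*pi + p01*(1-pi)
print(y.mean(), naive, corrected)
```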

5.
Summary The need to evaluate the performance of active labour market policies is not questioned any longer. Even though OECD countries spend significant shares of national resources on these measures, unemployment rates remain high or even increase. We focus on microeconometric evaluation, which has to solve the fundamental evaluation problem and overcome the possible occurrence of selection bias. When using non-experimental data, several evaluation approaches are available. The aim of this paper is to review the most relevant estimators and to discuss their identifying assumptions and their advantages and disadvantages. We present estimators based on some form of exogeneity (selection on observables) as well as estimators where selection might also occur on unobservable characteristics. Since the possible occurrence of effect heterogeneity has become a major topic in evaluation research in recent years, we also assess the ability of each estimator to deal with it. Additionally, we discuss some recent extensions of the static evaluation framework that allow for dynamic treatment evaluation. The authors thank Stephan L. Thomsen, Christopher Zeiss and one anonymous referee for valuable comments. The usual disclaimer applies.
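As a concrete example of a "selection on observables" estimator of the kind surveyed, the sketch below implements nearest-neighbour propensity-score matching for the average treatment effect on the treated on simulated data. The data-generating process, the logit propensity model and the matching rule are illustrative assumptions, not recommendations from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
x = rng.standard_normal((n, 2))
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))     # selection on observables
d = rng.binomial(1, p_treat)
y = 1.0 * d + x @ np.array([1.0, 0.5]) + rng.standard_normal(n)  # true ATT = 1

# Step 1: estimate the propensity score P(D = 1 | X).
ps = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]

# Step 2: for every treated unit, find the control with the closest score.
treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]
matches = controls[np.abs(ps[controls][None, :] - ps[treated][:, None]).argmin(axis=1)]

att = (y[treated] - y[matches]).mean()
print(att)   # should be close to 1 in this simulated example
```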

6.
In this paper, we consider the prediction problem in the multiple linear regression model in which the number of predictor variables, p, is extremely large compared to the number of available observations, n. The least-squares predictor based on a generalized inverse is not efficient. We propose six empirical Bayes estimators of the regression parameters. Three of them are shown to have uniformly lower prediction error than the least-squares predictor when the vector of regressor variables is assumed to be random with mean vector zero and covariance matrix (1/n)X^tX, where X^t = (x1, …, xn) is the p×n matrix of observations on the regressor vector centered at their sample means. For the other estimators, we use simulation to show their superiority over the least-squares predictor.
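A minimal sketch of the p ≫ n prediction problem: the least-squares predictor based on a generalized inverse versus a ridge-type shrinkage predictor, used here as an illustrative stand-in for shrinkage-style alternatives. It is not one of the paper's six empirical Bayes estimators, and the simulated design and penalty value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 500                                # far more predictors than observations
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:10] = 1.0           # sparse 'true' coefficients
y = X @ beta + rng.standard_normal(n)

X_c = X - X.mean(axis=0)                      # centre regressors as in the abstract
y_c = y - y.mean()

b_pinv = np.linalg.pinv(X_c) @ y_c            # generalized-inverse least squares
b_ridge = np.linalg.solve(X_c.T @ X_c + 10.0 * np.eye(p), X_c.T @ y_c)

# Out-of-sample prediction error on fresh data from the same design.
X_new = rng.standard_normal((200, p))
y_new = X_new @ beta + rng.standard_normal(200)
for name, b in [("pinv", b_pinv), ("ridge", b_ridge)]:
    pred = (X_new - X.mean(axis=0)) @ b + y.mean()
    print(name, np.mean((y_new - pred) ** 2))
```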

7.
Summary Quantile regression methods are emerging as a popular technique in econometrics and biometrics for exploring the distribution of duration data. This paper discusses quantile regression for duration analysis, allowing for a flexible specification of the functional relationship and of the error distribution. Censored quantile regression addresses the issue of right censoring of the response variable, which is common in duration analysis. We compare quantile regression to standard duration models. Quantile regression does not impose a proportional effect of the covariates on the hazard over the duration time. However, the method cannot take account of time-varying covariates, and it has not so far been extended to allow for unobserved heterogeneity and competing risks. We also discuss how hazard rates can be estimated using quantile regression methods. This paper benefited from the helpful comments of an anonymous referee. Due to space constraints, we had to omit the details of the empirical application; these can be found in the long version of this paper, Fitzenberger and Wilke (2005). We gratefully acknowledge financial support from the German Research Foundation (DFG) through the research project ‘Microeconometric modelling of unemployment durations under consideration of the macroeconomic situation’. Thanks are due to Xuan Zhang for excellent research assistance. All errors are our sole responsibility.
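A minimal sketch of quantile regression on simulated (log) durations, illustrating that covariate effects may differ across quantiles rather than acting proportionally on the hazard. Right censoring and the Powell-type censored estimator discussed in the paper are not handled here, and the data-generating process is an illustrative assumption.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
x = rng.binomial(1, 0.5, n)                         # e.g. a treatment indicator
# Heteroscedastic log-durations: the covariate shifts scale, not only location.
log_dur = 1.0 + 0.3 * x + (1.0 + 0.5 * x) * rng.standard_normal(n)

X = sm.add_constant(x)
for tau in (0.25, 0.5, 0.75):
    fit = sm.QuantReg(log_dur, X).fit(q=tau)
    print(tau, fit.params)   # the covariate effect differs across quantiles
```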

8.
Shiue and Bain proposed an approximate F statistic for testing the equality of two gamma distribution scale parameters in the presence of a common and unknown shape parameter. By generalizing Shiue and Bain's statistic we develop a new statistic for testing the equality of L ≥ 2 gamma distribution scale parameters. We derive the distribution of the new statistic, ESP, for L = 2 and equal sample sizes. For other situations the distribution of ESP is not known, and the test based on the ESP statistic has to be performed using simulated critical values. We also derive a C(α) statistic CML and develop a likelihood ratio statistic LR, two modified likelihood ratio statistics M and MLB, and a quadratic statistic Q. The distribution of each of the statistics CML, LR, M, MLB and Q is asymptotically chi-square with L - 1 degrees of freedom. We then conduct a Monte Carlo simulation study to compare the performance of the statistics ESP, LR, M, MLB, CML and Q in terms of size and power. The statistics LR, M, MLB and Q are in general liberal and do not show a power advantage over the other statistics. The statistic CML, based on its asymptotic chi-square distribution, in general holds the nominal level well. It is most powerful or nearly most powerful in most situations and is simple to use. Hence, we recommend the statistic CML for general use. For better power the statistic ESP, based on its empirical distribution, is recommended for the special situation in which there is evidence in the data that λ1 < … < λL and n1 < … < nL, where λ1, …, λL are the scale parameters and n1, …, nL are the sample sizes.
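A minimal sketch of the likelihood-ratio comparison (the LR statistic described in the abstract): maximize a gamma log-likelihood with a common shape and either separate or common scales, and refer twice the difference to chi-square with L - 1 degrees of freedom. The group sizes and parameter values are illustrative assumptions, and the abstract itself recommends the C(α) statistic CML or simulated critical values rather than this asymptotic LR test.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)
samples = [stats.gamma.rvs(a=2.0, scale=s, size=40, random_state=rng)
           for s in (1.0, 1.0, 1.5)]          # L = 3 groups

def negll_alt(params):
    """Common shape, separate scales (parameters on the log scale)."""
    shape = np.exp(params[0])
    scales = np.exp(params[1:])
    return -sum(stats.gamma.logpdf(x, a=shape, scale=sc).sum()
                for x, sc in zip(samples, scales))

def negll_null(params):
    """Common shape, common scale."""
    shape, scale = np.exp(params)
    return -sum(stats.gamma.logpdf(x, a=shape, scale=scale).sum() for x in samples)

alt = optimize.minimize(negll_alt, np.zeros(1 + len(samples)), method="Nelder-Mead")
null = optimize.minimize(negll_null, np.zeros(2), method="Nelder-Mead")

lr = 2 * (null.fun - alt.fun)
pval = stats.chi2.sf(lr, df=len(samples) - 1)   # asymptotic chi-square, L - 1 df
print(lr, pval)
```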

9.
《Econometric Reviews》2013,32(4):385-424
This paper introduces nonlinear dynamic factor models for various applications related to risk analysis. Traditional factor models represent the dynamics of processes driven by movements of latent variables, called the factors. Our approach extends this setup by introducing factors defined as random dynamic parameters and stochastic autocorrelated simulators. This class of factor models can represent processes with time varying conditional mean, variance, skewness and excess kurtosis. Applications discussed in the paper include dynamic risk analysis, such as risk in price variations (models with stochastic mean and volatility), extreme risks (models with stochastic tails), risk on asset liquidity (stochastic volatility duration models), and moral hazard in insurance analysis.

We propose estimation procedures for models in which the marginal density of the series and the factor dynamics are parameterized by distinct subsets of parameters. Such a partitioning of the parameter vector, found in many applications, considerably simplifies statistical inference. We develop a two-stage Maximum Likelihood method, called the Finite Memory Maximum Likelihood, which is easy to implement in the presence of multiple factors. We also discuss simulation-based estimation, testing, prediction and filtering.
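A minimal sketch of one building block of such models: returns driven by a latent autocorrelated log-volatility factor, i.e. a time-varying conditional variance. The AR(1) volatility specification and parameter values are illustrative assumptions and do not cover the stochastic mean, skewness or tail features discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
T, phi, sigma_h = 1000, 0.95, 0.2

h = np.zeros(T)                                     # latent log-volatility factor
for t in range(1, T):
    h[t] = phi * h[t - 1] + sigma_h * rng.standard_normal()

returns = np.exp(h / 2) * rng.standard_normal(T)    # conditional variance exp(h_t)
print(returns.std(), np.abs(returns).mean())
```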

10.
11.
The Shapiro–Wilk statistic and its modified versions are widely used test statistics for normality. They are based on regression and correlation. The statistics for complete data can easily be generalized to censored data. In this paper, the distribution theory for the modified Shapiro–Wilk statistic is investigated when it is generalized to Type II right censored data. As a result, it is shown that the limiting distribution of the statistic can be represented as the integral of a Brownian bridge. A power comparison with another procedure is also performed.
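For complete samples the regression-and-correlation statistic is available directly in SciPy; the Type II right-censored generalization studied in the paper requires a custom statistic built from the uncensored order statistics only. A quick complete-data illustration (sample sizes are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
w_norm, p_norm = stats.shapiro(rng.standard_normal(100))   # normal sample
w_exp, p_exp = stats.shapiro(rng.exponential(size=100))    # clearly non-normal sample
print(p_norm, p_exp)   # normality is not rejected / is clearly rejected
```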

12.
We investigate the influence of residual serial correlation and of the time dimension on statistical inference for a unit root in dynamic longitudinal data, known as panel data in econometrics. To this end, we introduce two test statistics based on method of moments estimators. The first is based on the generalized method of moments estimators, while the second is based on the instrumental variables estimator. Analytical results for the Instrumental Variables (IV) based test in a simplified setting show that (i) large time dimension panel unit root tests will suffer from serious size distortions in finite samples, even for samples that would normally be considered large in practice, and (ii) negative serial correlation in the error terms of the panel reduces the power of the unit root tests, possibly up to a point where the test becomes biased. However, near the unit root the test is shown to have power against a wide range of alternatives. These findings are confirmed in a more general set-up through a series of Monte Carlo experiments.
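A minimal sketch of the instrumental-variables idea for a dynamic panel: estimate the autoregressive parameter in first differences, instrumenting the lagged difference with the second lag in levels (in the spirit of Anderson–Hsiao). This is a simplified stand-in for the paper's IV-based unit root test statistic, and the panel dimensions, rho = 0.9 and error structure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
N, T, rho = 200, 10, 0.9

alpha = rng.standard_normal(N)                      # individual effects
y = np.zeros((N, T))
y[:, 0] = alpha + rng.standard_normal(N)
for t in range(1, T):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.standard_normal(N)

dy = np.diff(y, axis=1)                             # Delta y_{it}, shape (N, T-1)
num = den = 0.0
for t in range(2, T):                               # usable periods
    z = y[:, t - 2]                                 # instrument: y_{i,t-2}
    num += np.sum(z * dy[:, t - 1])                 # z * Delta y_{it}
    den += np.sum(z * dy[:, t - 2])                 # z * Delta y_{i,t-1}
rho_iv = num / den
print(rho_iv)                                       # close to rho = 0.9
```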

13.
For given positive integers v, b, and k (all of them ≥2) a block design is a k × b array of the variety labels 1,…,v with blocks as columns. For the usual one-way heterogeneity model in standard form, the problem of finding a D-optimal block design for estimating the variety contrasts is studied when no balanced block design (BBD) exists. The paper presents solutions to this problem for v≤6. The results on D-optimality are derived from a graph-theoretic context. Block designs can be considered as multigraphs, and a block design is D-optimal iff its multigraph has greatest complexity (=number of spanning trees).
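A minimal sketch of the graph-theoretic criterion: map a block design to its multigraph (one edge for every pair of varieties occurring together in a block) and count spanning trees with the matrix-tree theorem, i.e. any cofactor of the Laplacian. The two v = 4, b = 4, k = 3 designs compared below are illustrative examples only, not results from the paper.

```python
import numpy as np
from itertools import combinations

def complexity(blocks, v):
    """Number of spanning trees of the multigraph induced by the blocks."""
    L = np.zeros((v, v))
    for block in blocks:
        for i, j in combinations(block, 2):          # every pair within a block
            L[i, i] += 1; L[j, j] += 1
            L[i, j] -= 1; L[j, i] -= 1
    return int(round(np.linalg.det(L[1:, 1:])))      # Kirchhoff: delete one row/column

# Two designs with v = 4 varieties, b = 4 blocks of size k = 3 (0-based labels).
design_a = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]      # the balanced design
design_b = [(0, 1, 2), (0, 1, 2), (0, 1, 3), (1, 2, 3)]      # a less balanced design
print(complexity(design_a, 4), complexity(design_b, 4))      # 128 versus a smaller count
```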

14.
This paper investigates the predictive mean squared error performance of a modified double k-class estimator by incorporating the Stein variance estimator. Recent studies show that the performance of the Stein rule estimator can be improved by using the Stein variance estimator. However, as we demonstrate below, this conclusion does not hold in general for all members of the double k-class estimators. On the other hand, an estimator is found to have smaller predictive mean squared error than the Stein variance-Stein rule estimator, over quite large parts of the parameter space.
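A minimal sketch contrasting OLS with a simple Stein-rule shrinkage of the regression coefficients, which is one special case within the double k-class family. The shrinkage constant, design and error variance are illustrative assumptions; this is not the paper's modified double k-class estimator with the Stein variance estimator.

```python
import numpy as np

rng = np.random.default_rng(10)
n, p = 40, 8
X = rng.standard_normal((n, p))
beta = np.full(p, 0.3)
y = X @ beta + rng.standard_normal(n)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b_ols
s2 = resid @ resid / (n - p)                        # usual variance estimate

# Stein-rule: shrink OLS towards zero by a data-dependent factor.
c = (p - 2) * s2 / (b_ols @ X.T @ X @ b_ols)
b_stein = (1 - c) * b_ols
print(np.linalg.norm(b_ols - beta), np.linalg.norm(b_stein - beta))
```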

15.
In this paper, we consider the classification of high-dimensional vectors based on a small number of training samples from each class. The proposed method follows the Bayesian paradigm, and it is based on a small vector which can be viewed as the regression of the new observation on the space spanned by the training samples. The classification method provides posterior probabilities that the new vector belongs to each of the classes, hence it adapts naturally to any number of classes. Furthermore, we show a direct similarity between the proposed method and the multicategory linear support vector machine introduced in Lee et al. [2004. Multicategory support vector machines: theory and applications to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association 99 (465), 67–81]. We compare the performance of the technique proposed in this paper with the SVM classifier using real-life military and microarray datasets. The study shows that the misclassification errors of both methods are very similar, and that the posterior probabilities assigned to each class are fairly accurate.  相似文献   

16.
Summary Chained volume aggregates differ from conventional fixed-price aggregates in that the quantities of the components are valued at prices of the previous year rather than at constant prices of a base period. However, the resulting non-additivity does not prevent the calculation of real (volume) shares, contrary to a recent claim by Grömling (2005). This note shows how real shares and growth contributions of chained volume series can still be calculated in a simple and consistent way.
This note reflects my personal opinion, which does not necessarily correspond to the position of the Deutsche Bundesbank. I thank Malte Knüppel and an anonymous referee for exceptionally helpful suggestions.
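A minimal numerical sketch of the note's point: with previous-year prices, component contributions to a chain link of volume growth still add up to the aggregate growth rate, even though chained levels are not additive. The prices and quantities are illustrative assumptions.

```python
import numpy as np

p_prev = np.array([2.0, 5.0])      # prices of two components in year t-1
q_prev = np.array([10.0, 4.0])     # quantities in year t-1
q_curr = np.array([11.0, 4.4])     # quantities in year t

vol_prev = p_prev @ q_prev                   # value of year t-1 at its own prices
vol_curr = p_prev @ q_curr                   # year t volume at previous-year prices
growth = vol_curr / vol_prev - 1.0           # aggregate volume growth (chain link)

shares = p_prev * q_prev / vol_prev                      # real shares in year t-1
contributions = p_prev * (q_curr - q_prev) / vol_prev    # growth contributions
print(growth, contributions, contributions.sum())        # contributions add up to growth
```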

17.
Summary This paper presents a selective survey of panel data methods, with a focus on new developments. In particular, linear multilevel models and specific nonlinear, nonparametric and semiparametric models are at the center of the survey. In contrast to linear models, there are no unified methods for nonlinear approaches. In that case, conditional maximum likelihood methods dominate for fixed effects models. Under random effects assumptions it is sometimes possible to employ conventional maximum likelihood methods, using Gaussian quadrature to reduce a T-dimensional integral. Alternatives are generalized method of moments and simulated estimators. If the nonlinear function is not exactly known, nonparametric or semiparametric methods should be preferred. Helpful comments and suggestions from an unknown referee are gratefully acknowledged.
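A minimal sketch of the Gaussian-quadrature device mentioned for random effects models: a random intercept is integrated out of a panel logit likelihood with Gauss–Hermite nodes. The data-generating values, the logit link and the number of quadrature nodes are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize

rng = np.random.default_rng(11)
N, T, beta, sigma_a = 200, 5, 1.0, 1.0
x = rng.standard_normal((N, T))
a = sigma_a * rng.standard_normal(N)                       # random intercepts
y = (rng.random((N, T)) < 1 / (1 + np.exp(-(a[:, None] + beta * x)))).astype(float)

nodes, weights = hermgauss(15)                             # Gauss-Hermite rule

def loglik(params):
    b, s = params[0], np.exp(params[1])
    ll = 0.0
    for i in range(N):
        # Integrate over a_i ~ N(0, s^2) via the substitution a = sqrt(2) * s * node.
        eta = np.sqrt(2) * s * nodes[:, None] + b * x[i]   # shape (nodes, T)
        p = 1 / (1 + np.exp(-eta))
        contrib = np.prod(np.where(y[i] == 1, p, 1 - p), axis=1)
        ll += np.log(weights @ contrib / np.sqrt(np.pi))
    return ll

fit = minimize(lambda th: -loglik(th), x0=np.array([0.5, 0.0]), method="Nelder-Mead")
print(fit.x[0], np.exp(fit.x[1]))   # estimates of beta and sigma_a
```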

18.
徐凤  黎实 《统计研究》2014,31(9):91-98
For the fixed effects model, this paper proposes a new poolability test based on the Lagrange multiplier (LM) principle. Unlike existing LM-type poolability tests, the test statistic is constructed from the LM statistics of the individual cross-section units. Theoretical analysis shows that the proposed method is asymptotically normal, is robust to heteroscedasticity and non-normality of the disturbances, and is asymptotically equivalent to the PY test (Pesaran & Yamagata, 2008). Monte Carlo experiments show that, compared with the PY test and two other LM-type poolability tests, the proposed method has good size properties and superior power for different magnitudes of ….

19.
In this article a general result is derived that, along with a functional central limit theorem for a sequence of statistics, can be employed in developing a nonparametric repeated significance test with adaptive target sample size. This method is used in deriving a repeated significance test with adaptive target sample size for the shift model. The repeated significance test is based on a functional central limit theorem for a sequence of partial sums of truncated observations. Based on numerical results presented in this article one can conclude that this nonparametric sequential test performs quite well.

20.
Summary Nonparametric models have become more and more popular over the last two decades. One reason for their popularity is software availability, which makes it easy to fit smooth but otherwise unspecified functions to data. A benefit of these models is that the functional shape of a regression function is not prespecified in advance but determined by the data. Clearly this allows for more insight, which can be interpreted on a subject-matter level. This paper gives an overview of available fitting routines, commonly called smoothing procedures. Moreover, a number of extensions to classical scatterplot smoothing are discussed, with examples supporting the advantages of the routines.
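A minimal sketch of one classical smoothing procedure of the kind surveyed: a Nadaraya–Watson kernel regression estimate with a Gaussian kernel. The simulated data and the bandwidth are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(12)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)

def kernel_smooth(x0, x, y, h):
    """Nadaraya-Watson estimate at the points x0 with Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

grid = np.linspace(0, 1, 50)
fitted = kernel_smooth(grid, x, y, h=0.05)
print(np.round(fitted[:5], 3))   # the data, not a prespecified form, shape the fit
```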

