1.
In this paper, we study, by Monte Carlo simulation, the effect of the order p of the “Zhurbenko–Kolmogorov” taper on the asymptotic properties of semiparametric estimators. We show that p = [d + 1/2] + 1 gives the smallest variances and mean squared errors. These properties also depend on the truncation parameter m. Moreover, we study the impact of short-memory components on the bias and variances of these estimators. We finally carry out an empirical application using four monthly, seasonally adjusted, log-transformed Consumer Price Index series.
2.
Summary  The likelihood function plays a very important role in the development of both the theory and practice of statistics. It is somewhat surprising to realize that no general rigorous definition of a likelihood function seems ever to have been given. Through a series of examples it is argued that no such definition is possible, illustrating the difficulties and ambiguities encountered especially in situations involving “random variables” and “parameters” which are not of primary interest. The fundamental role of such auxiliary quantities (unfairly called “nuisance”) is highlighted, and a very simple function is argued to convey all the information provided by the observations.
3.
P. Battipaglia, Statistical Methods and Applications, 1996, 5(2): 179–202
Summary The evaluation of the performance of seasonal adjustment procedures is an issue of practical importance in view of the unobservable
nature of the components. Looking at just one indicator when judging the overall quality of a procedure may be misleading,
even though this is common practice when many series are involved.
The main purpose of this paper is to compare the information content of different synthetic indicators with reference to the
X-11-ARIMA procedure.
Sixty-six different types of monthly seasonal series are generated and the seasonal component then extracted by carrying out
X-11-ARIMA with standard options. The correlation between the pseudo-true error for each series and various synthetic indicators
allows us to compare the latter's reliability, under both the hypotheses of minimum and maximum variance of the pseudo-true
seasonal component.
We show that the overall quality index Q, the indicator most commonly adopted by users of X-11-ARIMA, is always outperformed by simpler diagnostics based on the stability of the estimates.
In particular, the “sliding-spans” indicator, proposed by Findley et al. (1990) and included in the diagnostics of the new X-12 procedure, shows a much stronger correlation with the pseudo-true error in the seasonal adjustment.
We also show that the total forecasting errors in the one-year-ahead extrapolation of the seasonal component have good informative power and perform almost as well as the “sliding-spans” indicator.
4.
Revankar (1974, p. 190, equation (4.4)) obtains a result for the covariance matrices of the “Aitken” estimators of the regression-coefficient matrices of two SUR models. The present note supplies a simpler derivation of this result, obtained by using a known result in multivariate statistical analysis; see, e.g., Sarkar (1981, p. 560, Theorem 3.1).
5.
Because of their multimodality, mixture posterior distributions are difficult to sample with standard Markov chain Monte Carlo
(MCMC) methods. We propose a strategy to enhance the sampling of MCMC in this context, using a biasing procedure which originates
from computational Statistical Physics. The principle is first to choose a “reaction coordinate”, that is, a “direction” in
which the target distribution is multimodal. In a second step, the marginal log-density of the reaction coordinate with respect
to the posterior distribution is estimated; minus this quantity is called “free energy” in the computational Statistical Physics
literature. To this end, we use adaptive biasing Markov chain algorithms which adapt their targeted invariant distribution
on the fly, in order to overcome sampling barriers along the chosen reaction coordinate. Finally, we perform an importance
sampling step in order to remove the bias and recover the true posterior. The efficiency factor of the importance sampling
step can easily be estimated a priori once the bias is known, and appears to be rather large for the test cases we considered.
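A minimal one-dimensional sketch of this strategy (the bimodal target, grid, proposal, and step sizes below are invented for illustration, and the adaptive rule is a crude Wang–Landau-style penalty rather than the authors' algorithm): the reaction coordinate is binned, the bias is adapted on the fly, and a final importance-sampling pass reweights by exp(−bias) to recover the true posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bimodal "posterior": equal mixture of N(-3, 1) and N(3, 1).
def log_post(x):
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

# Reaction coordinate xi(x) = x, discretised on a grid of bins.
edges = np.linspace(-8.0, 8.0, 33)
def bin_of(x):
    return np.clip(np.searchsorted(edges, x) - 1, 0, len(edges) - 2)

def mh_step(x, bias):
    # One random-walk Metropolis step targeting post(x) * exp(bias[bin(x)]).
    prop = x + rng.normal(0.0, 1.0)
    if -8.0 < prop < 8.0:
        loga = (log_post(prop) + bias[bin_of(prop)]
                - log_post(x) - bias[bin_of(x)])
        if np.log(rng.random()) < loga:
            return prop
    return x

# 1) Adapt the bias on the fly: penalise visited bins so the chain is
#    pushed over the barrier; -bias then approximates the free energy.
bias = np.zeros(len(edges) - 1)
x, step = -3.0, 0.5
for _ in range(50_000):
    x = mh_step(x, bias)
    bias[bin_of(x)] -= step
    step *= 0.9999                  # diminishing adaptation

# 2) Sample with the bias frozen, then importance-reweight by exp(-bias)
#    to remove the bias and recover the true posterior.
xs = np.empty(50_000)
for i in range(len(xs)):
    x = mh_step(x, bias)
    xs[i] = x
b = bias[bin_of(xs)]
w = np.exp(-(b - b.min()))          # shifted for numerical stability
post_mean = np.average(xs, weights=w)        # near 0 by symmetry
mass_right = np.average(xs > 0, weights=w)   # near 0.5 by symmetry
```

With the bias switched off, the same random-walk chain tends to stay in the mode where it starts; the flattened landscape is what lets it cross, and the reweighting removes the flattening afterwards.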
6.
Estimating the distance between two points is of fundamental concern. This paper investigates some statistical properties of three estimators of the distance between two points on a plane. We give the results of several theoretical comparisons of the estimators' performance for large sample sizes, together with a simulation-based comparison for small sample sizes. These comparisons suggest that the estimator of choice is not the most “natural” estimator in this situation. Although the discussion is set in the framework of the plane, the results are readily extended to higher-dimensional spaces.
7.
“Science looks to Statistics for an objective measure of the strength of evidence in a given body of observations. The Law
of Likelihood explains that the strength of statistical evidence for one hypothesis over another is measured by their likelihood
ratio” (Blume, 2002). In this paper, we compare probabilities of weak and strong misleading evidence based on record data
with those based on the same number of iid observations from the original distribution. We also use a criterion defined as a combination of the probabilities of weak and strong misleading evidence to make this comparison, and we give numerical results from a simulation study.
8.
In an earlier contribution to this journal, Kauermann and Weihs (Adv. Stat. Anal. 91(4):344, 2007) addressed the lack of procedural understanding in statistical consulting: “Even though there seems to be a consensus that statistical consulting should be well structured and target-orientated, the range of activity and the process itself seem to be less well-understood.” While this issue appears to be rather new to statistical consultants, other consulting disciplines, management consultants in particular, long ago arrived at a viable approach that divides the typical consulting process into seven successive steps. Using this model as a frame allows for reflecting on the approaches to statistical consulting suggested by the authors published in AStA volume 91, number 4, and for adding value to statistical consulting in general.
9.
F. Pokropp, Statistical Papers, 1992, 33(1): 367–370
Consider the nonparametric two-sample problem with ties. It is shown that the conditional variance (given the vector of lengths of ties) of the Wilcoxon statistic is, with “natural” ranks, at most as large as with “mid-ranks” (also called “mean” or “average” ranks).
10.
The subject of the present study is to analyze how accurately an elaborated price jump detection methodology by Barndorff-Nielsen
and Shephard (J. Financ. Econom. 2:1–37, 2004a; 4:1–30, 2006) applies to financial time series characterized by less frequent trading. In this context, it is of primary interest to understand
the impact of infrequent trading on two test statistics designed to disentangle the contribution of price jumps to realized variance. In a simulation study, evidence is found that infrequent trading induces a sizable distortion of the test statistics towards overrejection. A new empirical investigation using high-frequency information on the most heavily traded electricity forward contract of the Nord Pool Energy Exchange corroborates the simulation evidence. In line with the theory, a “zero-return-adjusted estimation” is introduced to reduce the bias in the test statistics, as illustrated in both the simulation study and the empirical application.
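A hedged sketch of the mechanism (all numbers invented; this uses the relative jump measure (RV − BPV)/RV rather than the full Barndorff-Nielsen and Shephard test statistic with its quarticity scaling, and it emulates infrequent trading simply by zeroing out returns):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated intraday log-returns: diffusion plus one genuine price jump.
n, sigma = 780, 0.001
r = rng.normal(0.0, sigma, n)
r[390] += 0.02                        # the jump

# Emulate infrequent trading: a stale interval records a zero return.
stale = rng.random(n) < 0.6
stale[390] = False                    # keep the jump observable
r_obs = np.where(stale, 0.0, r)

def relative_jump(r):
    # Share of realized variance attributed to jumps: (RV - BPV) / RV,
    # with BPV the bipower variation of adjacent absolute returns.
    rv = np.sum(r ** 2)
    bpv = (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    return (rv - bpv) / rv

rj_distorted = relative_jump(r_obs)             # zeros deflate BPV
rj_adjusted = relative_jump(r_obs[r_obs != 0])  # drop zero returns first
```

Interspersed zeros kill many of the adjacent-return products in the bipower variation while leaving realized variance intact, so the raw measure overstates the jump share; dropping the zero returns before computing the measure is the adjustment in the spirit of the abstract.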
11.
Andrea Guizzardi, Statistical Methods and Applications, 2006, 15(2): 229–242
In comparisons of municipalities' balance-sheet figures, per-resident ratios lose some of their relevance as indicators of municipal expenditure or taxation behaviour. At this level of spatial disaggregation, the resident population is generally not in a direct relationship with balance-sheet figures; the problem is especially evident for municipalities with significant flows of non-residents demanding, and paying for, services. The paper provides a frame of reference within which the measurement of municipal behaviour is conditioned on the approximation of the unobserved “real” size of each individual municipality. The joint use of several types of information about the structure of municipalities yields composite indicators of greater statistical relevance than the usual indicators based on the residential population alone.
12.
Georgios Tsiotas, Statistical Methods and Applications, 2009, 18(4): 555–583
Stochastic Volatility models have been considered a genuine alternative to conditional variance models, assuming that volatility follows a process different from the observed one. However, issues like the unobservable nature of volatility and the creation of “rich” dynamics give rise to the use of non-linear transformations for the volatility process. The Box–Cox transformation and its Yeo–Johnson variation, by nesting both the linear and the non-linear case, can be considered natural functions with which to specify non-linear Stochastic Volatility models. In this framework, a fully Bayesian approach is used for parameter and log-volatility estimation. The new models are then investigated for their within-sample and out-of-sample performance against alternative Stochastic Volatility models using real financial data series.
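For reference, the two nesting transformations can be written down directly (standard definitions, applied here to a generic array rather than to a volatility process):

```python
import numpy as np

def box_cox(x, lam):
    # Box-Cox transform: nests log (lam -> 0) and the linear case (lam = 1,
    # up to a shift). Defined for positive x only.
    x = np.asarray(x, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def yeo_johnson(x, lam):
    # Yeo-Johnson variant: defined for all real x, not just positive values.
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    if abs(lam) < 1e-12:
        out[pos] = np.log1p(x[pos])
    else:
        out[pos] = ((1.0 + x[pos]) ** lam - 1.0) / lam
    if abs(lam - 2.0) < 1e-12:
        out[~pos] = -np.log1p(-x[~pos])
    else:
        out[~pos] = -((1.0 - x[~pos]) ** (2.0 - lam) - 1.0) / (2.0 - lam)
    return out
```

At lam = 1 both maps are (up to a shift) the identity, and as lam goes to 0 the Box–Cox transform becomes the logarithm, which is how the linear and log-volatility specifications arise as nested special cases.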
13.
Gerd Ronning, Allgemeines Statistisches Archiv, 2006, 90(1): 153–166
Summary  The paper first provides a short review of the most common microeconometric models, including logit, probit, discrete choice, duration models, models for count data, and Tobit-type models. In the second part we consider the situation in which the micro data have undergone some anonymization procedure, which has become an important issue since otherwise confidentiality would not be guaranteed. We briefly describe the most important approaches to data protection, which can also be seen as creating measurement errors on purpose. We also consider the possibility of correcting the estimation procedure to take the anonymization procedure into account. We illustrate this for the case of binary data which are anonymized by ‘post-randomization’ and used in a probit model. We show the effect of ‘naive’ estimation, i.e. estimation that disregards the anonymization procedure. We also show that a ‘corrected’ estimate is available which is satisfactory in statistical terms; this remains true if the parameters of the anonymization procedure have to be estimated as well.
Research in this paper is related to the project “Faktische Anonymisierung wirtschaftsstatistischer Einzeldaten”, financed by the German Ministry of Research and Technology.
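The naive-versus-corrected effect is easy to reproduce for a simple proportion (the transition probabilities p11 and p01 below are invented; the paper's correction operates inside a probit model, which this sketch does not attempt):

```python
import numpy as np

rng = np.random.default_rng(3)

# True binary microdata with P(y = 1) = 0.30.
n, p_true = 100_000, 0.30
y = rng.random(n) < p_true

# Post-randomization (PRAM) with known transition probabilities:
# P(report 1 | y = 1) = p11,  P(report 1 | y = 0) = p01.
p11, p01 = 0.85, 0.10
u = rng.random(n)
y_star = np.where(y, u < p11, u < p01)

# 'Naive' estimation disregards the anonymization and is biased:
# E[naive] = p_true * p11 + (1 - p_true) * p01 = 0.325, not 0.30.
naive = y_star.mean()

# Moment correction that takes the PRAM mechanism into account.
corrected = (naive - p01) / (p11 - p01)
```

The correction simply inverts the known misclassification equation; in the probit setting of the paper the same idea enters through the likelihood rather than a closed-form inversion.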
14.
The paper analyses the biasing effect of anonymising micro data by multiplicative stochastic noise on the within estimation of a linear panel model. In short panels, additional bias results from serially correlated regressors.
Results in this paper are related to the project “Firms’ Panel Data and Factual Anonymisation”, financed by the Federal Ministry of Education and Research. We would like to thank the anonymous referees for helpful comments.
15.
This paper compares the performance of “aggregate” and “disaggregate” predictors in forecasting contemporaneously aggregated
vector MA(1) processes. The necessary and sufficient condition for the equality of mean squared errors associated with the
two competing predictors is provided in the bivariate MA(1) case. Furthermore, it is argued that the condition of equality
of predictors as stated by Lütkepohl (Forecasting aggregated vector ARMA processes, Springer, Berlin, 1987) is only sufficient
(not necessary) for the equality of mean squared errors. Finally, it is shown that equal forecasting accuracy for the two predictors can be achieved under specific assumptions on the parameters of the vector MA(1) structure.
16.
In principal component analysis (PCA), it is crucial to know how many principal components (PCs) should be retained in order
to account for most of the data variability. A class of “objective” rules for finding this quantity is the class of cross-validation
(CV) methods. In this work we compare three CV techniques, showing how the performance of these methods depends on the covariance matrix structure. Finally, we propose a rule for choosing the “best” CV method and give an application to real data.
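One common CV flavour, a Gabriel-style leave-one-entry-out PRESS, can be sketched as follows (an illustration of the general idea under invented data, not necessarily one of the three techniques compared in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def press_by_k(X, kmax):
    # Cross-validated PRESS for k = 1..kmax principal components.
    # Each entry X[i, j] is predicted from the OTHER entries of row i,
    # using loadings fitted without row i, so large k is not rewarded
    # automatically the way naive row reconstruction would reward it.
    n, p = X.shape
    press = np.zeros(kmax)
    for i in range(n):
        train = np.delete(np.arange(n), i)
        mu = X[train].mean(axis=0)
        _, _, vt = np.linalg.svd(X[train] - mu, full_matrices=False)
        z = X[i] - mu
        for k in range(1, kmax + 1):
            V = vt[:k].T                       # p x k loadings
            for j in range(p):
                Vmj = np.delete(V, j, axis=0)  # loadings without row j
                t = np.linalg.pinv(Vmj) @ np.delete(z, j)
                press[k - 1] += (z[j] - V[j] @ t) ** 2
    return press

# Rank-2 data: two latent components observed in six variables.
scores = rng.standard_normal((50, 2)) @ np.diag([5.0, 3.0])
loadings = rng.standard_normal((2, 6))
X = scores @ loadings
press = press_by_k(X, kmax=4)   # retain the k that minimises PRESS
```

On this noiseless rank-2 example the PRESS for k = 2 is essentially zero while k = 1 leaves the whole second component unexplained; with noisy data one would pick the k at which PRESS stops decreasing.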
17.
Rosa Arboretti Giancristofaro, Stefano Bonnini, Statistical Methods and Applications, 2009, 18(2): 221–236
In several sciences, especially when dealing with performance evaluation, complex testing problems may arise due in particular
to the presence of multidimensional categorical data. In such cases the application of nonparametric methods can represent
a reasonable approach. In this paper, we consider the problem of testing whether a “treatment” is stochastically larger than
a “control” when univariate and multivariate ordinal categorical data are present. We propose a solution based on the nonparametric
combination of dependent permutation tests (Pesarin in Multivariate permutation test with application to biostatistics. Wiley,
Chichester, 2001), on variable transformation, and on tests on moments. The solution requires the transformation of categorical
response variables into numeric variables and the breaking up of the original problem’s hypotheses into partial sub-hypotheses
regarding the moments of the transformed variables. This type of problem is considered almost impossible to analyze within the likelihood-ratio testing framework, especially in the multivariate case (Wang in J Am Stat Assoc 91:1676–1683, 1996). A comparative simulation study is also presented, along with an application example.
18.
We consider linear combinations of “natural” timescales and choose the “best” one, namely the one that provides the minimum coefficient of variation of the lifetime. Our time scale is in fact a generalized Miner time scale, since the latter is based on an appropriate weighting of the times spent at low and high load levels. The suggested modus operandi for finding the “best” time scale has many features in common with the approach suggested by Farewell and Cox (1979) and Oakes (1995), which is devoted to multiple time scales in survival analysis.
This revised version was published online in July 2006 with corrections to the Cover Date.
19.
Noteworthy connections among conglomerability, countable additivity, and coherence are discussed in detail, reaching the conclusion that nonconglomerable conditional probabilities need not be dismissed and can play a significant role in statistical inference.
This is an extended and updated version of a contributed paper presented at the International Conference on “Information Processing and Management of Uncertainty in Knowledge-Based Systems”, IPMU 2004, Perugia, Italy.
20.
This editorial introduces the Special Issue “Advances in Structural Equation Modeling” which provides a snapshot of the different
research activities performed by members of the working group “Structural Equation Modeling”. More specifically, this issue
contains a selection of papers presented at the 2009 annual meeting in Berlin at Humboldt University.