Similar Articles
20 similar articles found (search time: 31 ms)
1.
Efficient, accurate, and fast Markov chain Monte Carlo estimation methods based on the Implicit approach are proposed. In this article, we introduce the notion of the Implicit method for estimating the parameters of stochastic volatility models.

Implicit estimation offers a substantial computational advantage for learning from observations without prior knowledge, and thus provides a good alternative to classical Bayesian inference when priors are unavailable.

Both the Implicit and Bayesian approaches are illustrated using simulated data and applied to daily stock return data on the CAC40 index.
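The Bayesian side of such a comparison can be illustrated with a minimal random-walk Metropolis sampler. This is a deliberately reduced stand-in, not the Implicit method and not a full stochastic volatility model: it assumes a constant-volatility Gaussian returns model with a flat prior on log-volatility, and all settings below (step size, iteration count, synthetic data) are illustrative.

```python
import math, random

def metropolis_sigma(returns, n_iter=5000, step=0.1, seed=4):
    """Random-walk Metropolis on log(sigma) for returns ~ N(0, sigma^2),
    with a flat prior on log(sigma). Returns post-burn-in draws of sigma."""
    rng = random.Random(seed)
    n, ss = len(returns), sum(r * r for r in returns)

    def log_lik(log_s):
        s2 = math.exp(2.0 * log_s)
        return -0.5 * n * math.log(2.0 * math.pi * s2) - 0.5 * ss / s2

    cur, cur_ll, draws = 0.0, log_lik(0.0), []
    for _ in range(n_iter):
        prop = cur + rng.gauss(0.0, step)      # propose a nearby log-sigma
        prop_ll = log_lik(prop)
        if math.log(rng.random()) < prop_ll - cur_ll:  # Metropolis accept step
            cur, cur_ll = prop, prop_ll
        draws.append(math.exp(cur))
    return draws[n_iter // 2:]                 # discard first half as burn-in

# simulated "daily returns" with true sigma = 0.02
data_rng = random.Random(0)
returns = [data_rng.gauss(0.0, 0.02) for _ in range(500)]
posterior = metropolis_sigma(returns)
sigma_hat = sum(posterior) / len(posterior)    # posterior mean of sigma
```

With 500 simulated observations the posterior mean lands close to the true volatility of 0.02, which is the kind of sanity check a simulated-data illustration provides.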


2.
In this study an attempt is made to assess statistically the validity of two theories as to the origin of comets. This subject still leads to great controversy amongst astronomers but recently two main schools of thought have developed.

These are that comets are of

(i) planetary origin,

(ii) interstellar origin.

Many theories have been expounded within each school of thought, but at present one theory in each is generally accepted. This paper sets out to identify the statistical implications of each theory and to evaluate the theories in terms of those implications.


3.
In this paper, we study, by means of randomized sampling, the long-run stability of some open Markov population fed with time-dependent Poisson inputs. We show that state probabilities within transient states converge—even when the overall expected population dimension increases without bound—under general conditions on the transition matrix and input intensities.

Following the convergence results, we obtain maximum likelihood (ML) estimators for a particular sequence of input intensities, in which the sequence of new arrivals is modeled by a sigmoidal function. These estimators allow the evolution of the relative population structure in the transient states to be forecast by confidence intervals.

Applying these results to a consumption credit portfolio, we estimate the implicit default rate.


4.
ARIMA(p, d, q) models were fitted to the areal annual rainfall of two homogeneous regions in East Africa, with rainfall records extending over the period 1922–80. The areal estimates of the regional rainfall were derived from the time series of the first eigenvector, which was significantly dominant in each of the two regions and accounted for about 80% of the total rainfall variance in each region.

The class of ARIMA(p, d, q) models that best fitted the areal indices of relative wetness/dryness were the ARMA(3, 1) models. Tests of forecasting skill, however, indicated low skill in the forecasts given by these models; in all cases the models accounted for less than 50% of the total variance.

Spectral analysis of the index time series indicated dominant quasi-periodic fluctuations at around 2.2–2.8 years, 3–3.7 years, 5–6 years and 10–13 years. These spectral bands, however, accounted for a very low proportion of the total rainfall variance.
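The spectral analysis step can be sketched with a raw periodogram computed by direct discrete Fourier transform. The series below is synthetic, with a planted 8-year cycle, not the East African rainfall indices; it simply shows how a dominant quasi-periodicity is read off the periodogram peak.

```python
import math, random

def periodogram(x):
    """Raw periodogram I(f_j) = |DFT(x)|^2 / n at Fourier frequencies j/n."""
    n = len(x)
    mean = sum(x) / n
    centered = [v - mean for v in x]       # remove the mean (zero frequency)
    spec = []
    for j in range(1, n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * j * t / n)
                 for t, v in enumerate(centered))
        im = sum(v * math.sin(2 * math.pi * j * t / n)
                 for t, v in enumerate(centered))
        spec.append((j / n, (re * re + im * im) / n))
    return spec

# synthetic annual index with a planted 8-year cycle plus noise
rng = random.Random(1)
n = 64
series = [math.sin(2 * math.pi * t / 8) + 0.3 * rng.gauss(0, 1)
          for t in range(n)]
freq, power = max(periodogram(series), key=lambda fp: fp[1])
period = 1.0 / freq                        # dominant period in "years"
```

The peak frequency recovers the planted 8-year cycle; on real rainfall indices the same calculation would flag the 2.2–2.8, 3–3.7, 5–6 and 10–13 year bands reported above.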


5.
Four procedures are suggested for estimating the parameter ‘a’ in the Pauling equation:

e^{-X/a} + e^{-Y/a} = 1.

The procedures are: using the mean of the individual solutions; least squares with Y as the subject of the equation; least squares with X as the subject of the equation; and maximum likelihood under a statistical model. To compare these estimates we use Efron's (1979) bootstrap technique, since distributional results are not available. The example also illustrates the role of the bootstrap in statistical inference.
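One of the four procedures, least squares with Y as the subject, combined with Efron's bootstrap for a standard error, can be sketched as follows. The synthetic data, the grid-search fit and all settings are illustrative assumptions, not the paper's exact implementation.

```python
import math, random

def predict_y(x, a):
    # Y as the subject of the Pauling relation: Y = -a * ln(1 - exp(-X/a))
    return -a * math.log(1.0 - math.exp(-x / a))

def fit_a(pairs, lo=0.05, hi=5.0, steps=400):
    # least squares with Y the subject, via a crude one-dimensional grid search
    def rss(a):
        return sum((y - predict_y(x, a)) ** 2 for x, y in pairs)
    grid = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return min(grid, key=rss)

def bootstrap_se(pairs, n_boot=200, seed=2):
    # Efron's bootstrap: resample (X, Y) pairs with replacement and refit 'a'
    rng = random.Random(seed)
    est = [fit_a([rng.choice(pairs) for _ in pairs]) for _ in range(n_boot)]
    mean = sum(est) / n_boot
    return math.sqrt(sum((e - mean) ** 2 for e in est) / (n_boot - 1))

# synthetic data generated from the Pauling relation with a = 1 plus noise
rng = random.Random(0)
pairs = [(x, predict_y(x, 1.0) + rng.gauss(0.0, 0.02))
         for x in (rng.uniform(0.2, 3.0) for _ in range(30))]
a_hat = fit_a(pairs)
se_hat = bootstrap_se(pairs)
```

The bootstrap standard error is exactly the kind of distributional summary that is unavailable analytically here, which is why the paper reaches for Efron's technique.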


6.
The 1978 European Community Typology for Agricultural Holdings is described in this paper and contrasted with a data-based, polythetic multivariate classification derived from cluster analysis.

The need to reduce the size of the variable set employed in an optimisation-partition method of clustering suggested the value of principal components and factor analysis for identifying the major ‘source’ dimensions against which to measure differences and similarities between farms.

The Euclidean cluster analysis incorporating the reduced dimensions quickly converged to a stable solution and was little influenced by the initial number or nature of ‘seeding’ partitions of the data.

The assignment of non-sampled observations from the population to cluster classes was completed using classification functions.

The final scheme, based on a sample of over 2,000 observations, was found to be interpretable and meaningful in terms of agricultural structure and practice, and much superior in explanatory power to a version of the principal-activity typology.


7.
Structural breaks in the level as well as in the volatility are often exhibited in economic time series. In this paper, we propose new unit root tests for a time series with multiple shifts in its level and the corresponding volatility. The proposed tests are Lagrange multiplier-type tests based on the residuals' marginal likelihood, which is free from the nuisance mean parameters. The limiting null distributions of the proposed tests are χ² distributions, affected not by the size and location of the breaks but only by their number.

We set the structural breaks under both the null and the alternative hypotheses to relieve a possible vagueness in interpreting test results in empirical work: the null hypothesis implies a unit root process with level shifts, and the alternative a stationary process with level shifts. Monte Carlo simulation shows that our tests are locally more powerful than the OLSE-based tests, and that their power, over a fixed time span, remains stable regardless of the number of breaks. In our application, we employ the data analyzed by Perron (1990), and some of our results differ from Perron's.


8.
Permutation tests for symmetry are suggested using data that are subject to right censoring. Such tests are directly relevant to the assumptions that underlie the generalized Wilcoxon test since the symmetric logistic distribution for log-errors has been used to motivate Wilcoxon scores in the censored accelerated failure time model. Its principal competitor is the log-rank (LGR) test motivated by an extreme value error distribution that is positively skewed. The proposed one-sided tests for symmetry against the alternative of positive skewness are directly relevant to the choice between usage of these two tests.

The permutation tests use statistics from the weighted LGR class normally used for two-sample comparisons. From this class, the test using LGR weights (all weights equal) showed the greatest discriminatory power in simulations comparing logistic errors with extreme-value errors.

In the test construction, a median estimate, obtained by inverting the Kaplan–Meier estimator, is used to divide the data into a “control” group to its left and a “treatment” group to its right. As an unavoidable consequence of testing symmetry, censored observations in the control group become uninformative in this two-sample test. Thus, early heavy censoring can reduce the effective sample size of the control group and diminish the power to discriminate symmetry in the population distribution.
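The basic logic of a permutation test of symmetry against positive skewness can be sketched without the censoring machinery. The version below is an uncensored signed-rank sign-flip test about the sample median, not the censored weighted log-rank construction of the article; the data and settings are illustrative.

```python
import random, statistics

def symmetry_test(data, n_perm=2000, seed=3):
    """One-sided permutation test of symmetry about the median against
    positive skewness: under symmetry the signs of the centered values
    are exchangeable, so we flip them at random on the signed ranks."""
    rng = random.Random(seed)
    med = statistics.median(data)
    # pair each absolute deviation with its sign, sorted by magnitude
    centered = sorted((abs(x - med), 1.0 if x > med else -1.0) for x in data)
    ranks_signs = [(rank + 1, sign) for rank, (_, sign) in enumerate(centered)]
    observed = sum(rank * sign for rank, sign in ranks_signs)  # > 0 if right-skewed
    exceed = 0
    for _ in range(n_perm):
        flipped = sum(rank * rng.choice((-1.0, 1.0)) for rank, _ in ranks_signs)
        exceed += flipped >= observed
    return (exceed + 1) / (n_perm + 1)     # one-sided permutation p-value

rng = random.Random(0)
skewed = [rng.expovariate(1.0) for _ in range(100)]        # right-skewed sample
symmetric = [rng.uniform(-1.0, 1.0) for _ in range(100)]   # symmetric sample
p_skewed, p_symmetric = symmetry_test(skewed), symmetry_test(symmetric)
```

The skewed sample is rejected while the symmetric one is not; the article's contribution is making this comparison valid when the observations are right-censored.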


9.
This paper analyses direct and indirect forms of dependence in the probability of scoring in a handball match, taking into account the mutual influence of the two playing teams. Departures from identical distribution (i.d.) and from stationarity, which are commonly observed in sport games, are studied through the specification of time-varying parameters.

The model accounts for the binary character of the dependent variable and for unobserved heterogeneity. The parameter dynamics are specified by a first-order autoregressive process.

Data from the Handball World Championships 2001–2005 show that the dynamics of handball violate both independence and i.d., in some cases exhibiting non-stationary behaviour.


10.
Among other types of non-sampling error, non-response error (NRE) is an inherent component of any sample survey and should be given close attention during the design and execution stages. With increasing awareness of these errors, there is therefore a need to develop suitable techniques for controlling them.

This article proposes two families of estimators for the population mean in the presence of non-response and discusses their properties under a model-based approach, namely the polynomial regression model. The families include some existing estimators as special cases. Efficiency comparisons, along with the robustness of the estimators under model misspecification, are discussed empirically.


11.
In this study, some new unbiased estimators based on order statistics are proposed for the scale parameter in a family of scale distributions. The new estimators are suitable for complete (uncensored) and symmetric doubly Type-II censored samples, and can be adapted to Type-II right- or left-censored samples. In addition, unbiased estimators of the standard deviations of the proposed estimators are given. Moreover, unlike BLU estimators based on order statistics, the expectations and variance–covariances of the relevant order statistics are not required to compute the new estimators.

Simulation studies are conducted to compare the performance of the new estimators with their BLU counterparts for small sample sizes. The results show that most of the proposed estimators perform almost as well as the corresponding BLU estimators, and some are better in certain cases. Furthermore, a real data set is used to illustrate the new estimators, with results paralleling those of the BLUE methods.


12.
Tree algorithms are a well-known class of random access algorithms with a provable maximum stable throughput under the infinite population model (as opposed to ALOHA or the binary exponential backoff algorithm). In this article, we propose a tree algorithm for opportunistic spectrum usage in cognitive radio networks. A channel in such a network is shared among so-called primary and secondary users, where the secondary users are allowed to use the channel only if there is no primary user activity. The tree algorithm designed in this article can be used by the secondary users to share the channel capacity left by the primary users.

We analyze the maximum stable throughput and mean packet delay of the secondary users by developing a tree-structured quasi-birth–death Markov chain, under the assumption that the primary user activity can be modeled by a finite-state Markov chain and that packet lengths follow a discrete phase-type distribution.

Numerical experiments provide insight into the effect of various system parameters and indicate that the proposed algorithm makes good use of the bandwidth left by the primary users.
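The splitting rule at the heart of tree algorithms can be sketched with the classical fair binary tree algorithm (without the primary-user and phase-type machinery of this article). On a collision, each involved packet flips a fair coin to join one of two subgroups, and the two subgroups are resolved recursively.

```python
import random

def resolution_slots(n, rng):
    """Slots needed to resolve a collision among n packets under the basic
    binary tree algorithm: on a collision, each packet independently joins
    the left or right subgroup with probability 1/2."""
    if n <= 1:
        return 1                      # idle slot (n == 0) or a success (n == 1)
    left = sum(rng.random() < 0.5 for _ in range(n))
    return 1 + resolution_slots(left, rng) + resolution_slots(n - left, rng)

rng = random.Random(5)
reps = 20000
mean_slots = sum(resolution_slots(2, rng) for _ in range(reps)) / reps
# for n = 2 the expected collision-resolution interval is exactly 5 slots
```

The simulated mean matches the known expected value of 5 slots for a two-packet collision; the provable maximum stable throughput mentioned above follows from this kind of recursive resolution analysis.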


13.
This paper considers constant partially accelerated life tests for series-system products, where a dependent Marshall–Olkin (M-O) bivariate exponential distribution is assumed for the components.

Based on progressively Type-II censored and masked data, the maximum likelihood estimates of the parameters and acceleration factors are obtained using a decomposition approach. The method can also be applied to the Bayes estimates, which are too complex to obtain in the usual way. Finally, a Monte Carlo simulation study verifies the accuracy of the methods under different masking probabilities and censoring schemes.


14.
This article provides a procedure for detecting and identifying outliers in the spectral domain, where the Whittle maximum likelihood estimator of the panel data model proposed by Chen [W.D. Chen, Testing for spurious regression in a panel data model with the individual number and time length growing, J. Appl. Stat. 33(88) (2006b), pp. 759–772] is implemented. We extend the approach of Chang and co-workers [I. Chang, G.C. Tiao, and C. Chen, Estimation of time series parameters in the presence of outliers, Technometrics 30(2) (1988), pp. 193–204] to the spectral domain, where the Whittle approach allows the type of outliers to be detected and identified quickly. A fixed-effects panel data model is used, in which the remainder disturbance is assumed to follow a fractional autoregressive integrated moving-average (ARFIMA) process, and the likelihood ratio criterion is obtained directly through the modified inverse Fourier transform. This saves considerable time, especially when the estimated model involves a huge data set.

Through Monte Carlo experiments, the consistency of the estimator is examined by growing the individual number N and the time length T, with the long-memory remainder disturbances contaminated by two types of outliers: additive outliers and innovation outliers. The power tests show that the estimators are quite successful and powerful.

In the empirical study, we apply the model to Taiwan's computer motherboard industry, using weekly data on nine well-known companies from 1 January 2000 to 31 October 2006. The proposed model has a smaller mean square error and shows more distinctive aggregative properties than the raw data model does.


15.
Over the last 25 years, increasing attention has been given to the problem of analysing data arising from circular distributions. The most important circular distribution, introduced by von Mises (1918), takes the form:

f(θ) = exp{k cos(θ − u0)} / (2π Io(k)),  0 ≤ θ < 2π,

where Io(k) is a modified Bessel function, u0 is the mean direction and k is the concentration parameter of the distribution. Watson & Williams (1956) laid the foundation of analysis-of-variance-type techniques for the two-dimensional case of circular data using the von Mises distribution. Stephens (1962a,b, 1969, 1972), Upton (1974) and Stephens (1982) made further improvements to Watson & Williams' work. In this paper the authors discuss the pitfalls of the methods adopted by Stephens (1982) and present a unified analysis-of-variance-type approach for circular data.
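The density above can be evaluated directly, computing Io(k) from its power series; the sketch below (illustrative parameter values) checks numerically that the density integrates to one over the circle.

```python
import math

def bessel_i0(k):
    # modified Bessel function of order zero via its power series:
    # I0(k) = sum_m ( (k^2/4)^m / (m!)^2 )
    return sum((0.25 * k * k) ** m / math.factorial(m) ** 2 for m in range(40))

def von_mises_pdf(theta, mu0, k):
    # f(theta) = exp(k * cos(theta - mu0)) / (2 * pi * I0(k)),  0 <= theta < 2*pi
    return math.exp(k * math.cos(theta - mu0)) / (2.0 * math.pi * bessel_i0(k))

# sanity check: the density integrates to one over the circle (midpoint rule)
n, mu0, k = 10000, 1.0, 2.5
total = sum(von_mises_pdf((i + 0.5) * 2 * math.pi / n, mu0, k) * 2 * math.pi / n
            for i in range(n))
```

Because the integrand is smooth and periodic, the midpoint rule is extremely accurate here, so the numerical integral recovers 1 essentially to machine precision.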


16.
17.
Measures of the spread of data for random sums arise frequently in many problems and have a wide range of applications in real life, such as in insurance (e.g., the total claim size in a portfolio). The exact distribution of a random sum is extremely difficult to determine, and the normal approximation usually performs very badly for these complex distributions. A better method of approximating a random-sum distribution is the saddlepoint approximation.

Saddlepoint approximations are powerful tools for providing accurate expressions for distribution functions that are not known in closed form. This method not only yields an accurate approximation near the center of the distribution but also controls the relative error in the far tail of the distribution.

In this article, we discuss approximations to the complex random-sum Poisson–Erlang random variable, which has a continuous distribution, and the random-sum Poisson–negative binomial random variable, which has a discrete distribution. We show that the saddlepoint approximation method is quick, dependable, stable, and accurate enough for general statistical inference, and is applicable without deep knowledge of probability theory. Numerical examples of the saddlepoint approximation applied to continuous and discrete random-sum Poisson distributions are presented.
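For the continuous Poisson–Erlang case, a saddlepoint tail approximation can be sketched from the cumulant generating function K(s) = λ((β/(β−s))^k − 1), using the Lugannani–Rice formula and checked against Monte Carlo. The parameter values below are illustrative, not taken from the article.

```python
import math, random

# compound Poisson-Erlang: S = X1 + ... + XN, N ~ Poisson(lam),
# Xi ~ Erlang(shape k, rate beta); CGF K(s) = lam * ((beta/(beta-s))^k - 1)
LAM, K_SHAPE, BETA = 5.0, 2, 1.0

def K(s):  return LAM * ((BETA / (BETA - s)) ** K_SHAPE - 1.0)
def K1(s): return LAM * K_SHAPE * BETA ** K_SHAPE / (BETA - s) ** (K_SHAPE + 1)
def K2(s): return LAM * K_SHAPE * (K_SHAPE + 1) * BETA ** K_SHAPE \
                  / (BETA - s) ** (K_SHAPE + 2)

def saddlepoint_tail(x):
    """Lugannani-Rice approximation to P(S > x)."""
    lo, hi = -50.0, BETA - 1e-9           # solve K'(s_hat) = x by bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if K1(mid) < x: lo = mid
        else:           hi = mid
    s = 0.5 * (lo + hi)
    w = math.copysign(math.sqrt(2.0 * (s * x - K(s))), s)
    u = s * math.sqrt(K2(s))
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return 1.0 - Phi(w) + phi(w) * (1.0 / u - 1.0 / w)

def mc_tail(x, reps=20000, seed=7):
    """Monte Carlo estimate of P(S > x) for comparison."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        n, p, limit = 0, 1.0, math.exp(-LAM)   # Poisson draw (Knuth's method)
        while True:
            p *= rng.random()
            if p <= limit:
                break
            n += 1
        # Erlang(k, 1) is a sum of k unit exponentials
        s = sum(-math.log(rng.random()) for _ in range(K_SHAPE * n))
        hits += s > x
    return hits / reps

sp = saddlepoint_tail(15.0)
mc = mc_tail(15.0)
```

With λ = 5 and Erlang(2, 1) summands, the mean of S is 10, and the saddlepoint tail probability at x = 15 agrees closely with the simulation, illustrating the tail accuracy claimed above.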


18.
The multivariate extremal index function is a measure of clustering among the extreme values of a multivariate stationary sequence. In this article, we introduce a measure of the degree of clustering of upcrossings in a multivariate stationary sequence, called the multivariate upcrossings index, a multivariate generalization of the upcrossings index. We derive the main properties of this function, namely its relations with the multivariate extremal index and with the clustering of upcrossings.

Under general local and asymptotic dependence restrictions on the sequence or on its marginals, we compute the multivariate upcrossings index from the marginal upcrossings indices and from the joint distribution of a finite number of variables. A couple of illustrative examples are presented.


19.
Asymptotically valid inference in linear regression models is easily achieved under mild conditions using the well-known Eicker–White heteroskedasticity-robust covariance matrix estimator or one of its variants. In finite samples, however, such inference can suffer from substantial size distortion. Indeed, it is well established in the literature that the finite-sample accuracy of a test may depend on which variant of the Eicker–White estimator is used, on the underlying data generating process (DGP), and on the desired level of the test.

This paper develops a new variant of the Eicker–White estimator that explicitly aims to minimize the finite-sample null error in rejection probability (ERP) of the test. This is made possible by selecting, through a numerical algorithm based on the wild bootstrap, the transformation of the squared residuals that yields the smallest possible ERP. Monte Carlo evidence indicates that the new procedure achieves a level of robustness to the DGP, sample size and nominal testing level unequaled by any other asymptotic test based on an Eicker–White estimator.
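The wild bootstrap ingredient can be sketched in its simplest form: a test of zero slope in a single-regressor model with heteroskedastic errors, using restricted residuals and Rademacher weights. This is a minimal, non-studentized sketch, not the paper's ERP-minimizing algorithm, and the data-generating settings are illustrative.

```python
import random

def wild_bootstrap_pvalue(x, y, n_boot=999, seed=6):
    """Wild bootstrap test of H0: slope = 0 in y = a + b*x + e.
    Bootstrap samples are built from null-restricted residuals multiplied
    by independent Rademacher (+/-1) weights, which preserves each
    observation's error variance under heteroskedasticity."""
    rng = random.Random(seed)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    resid = [yi - my for yi in y]          # residuals under the null (slope = 0)
    exceed = 0
    for _ in range(n_boot):
        ystar = [my + r * rng.choice((-1.0, 1.0)) for r in resid]
        mys = sum(ystar) / n
        bstar = sum((xi - mx) * (yi - mys) for xi, yi in zip(x, ystar)) / sxx
        exceed += abs(bstar) >= abs(slope)
    return (exceed + 1) / (n_boot + 1)

rng = random.Random(0)
x = [rng.uniform(0.0, 1.0) for _ in range(60)]
# heteroskedastic errors whose standard deviation grows with x
y = [2.0 + 1.5 * xi + rng.gauss(0.0, 0.2 + xi) for xi in x]
p_value = wild_bootstrap_pvalue(x, y)
```

With a true slope of 1.5 the bootstrap p-value is small despite the heteroskedasticity; the paper's contribution is to tune the residual transformation inside such a scheme so that the test's null rejection probability is as close to nominal as possible.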


20.
In the manufacturing industry, the lifetime performance index CL is used to evaluate larger-the-better quality characteristics of products, quickly showing whether lifetime performance meets the desired level. In this article, we first obtain the maximum likelihood estimator (MLE) of CL with two unknown parameters in the Lomax distribution, on the basis of a progressively Type-I interval-censored sample. Using this MLE, asymptotic confidence intervals for CL are derived via the delta method. The MLE of CL is then used to establish a hypothesis-testing procedure under a given lower specification limit L; we also give a testing procedure for the case in which the scale parameter of the Lomax distribution is known. Finally, we illustrate the proposed procedures with a real example. The testing algorithms presented in this paper are efficient and easy to implement.

