Similar Literature
20 similar documents found.
1.
Four procedures are suggested for estimating the parameter ‘a’ in the Pauling equation:

$$e^{-X/a} + e^{-Y/a} = 1.$$

The procedures are: using the mean of individual solutions, least squares with Y as the subject of the equation, least squares with X as the subject of the equation, and maximum likelihood using a statistical model. To compare these estimates, we use Efron's (1979) bootstrap technique, since distributional results are not available. This example also illustrates the role of the bootstrap in statistical inference.
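For illustration, a minimal sketch of one of these procedures (least squares with Y as the subject of the equation) combined with a nonparametric bootstrap of the estimate of a. The (X, Y) values, the number of resamples, and the search interval for a are illustrative assumptions, not data or settings from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical (X, Y) data roughly consistent with e^{-X/a} + e^{-Y/a} = 1.
X = np.array([0.10, 0.25, 0.40, 0.60, 0.85, 1.10])
Y = np.array([0.49, 0.24, 0.13, 0.07, 0.03, 0.02])

def fit_a(x, y):
    """Least squares with Y as the subject: Y = -a * log(1 - exp(-X / a))."""
    def sse(a):
        pred = -a * np.log(1.0 - np.exp(-x / a))
        return np.sum((y - pred) ** 2)
    return minimize_scalar(sse, bounds=(1e-3, 10.0), method="bounded").x

a_hat = fit_a(X, Y)

# Efron (1979) nonparametric bootstrap: resample (X, Y) pairs with replacement.
rng = np.random.default_rng(0)
boot = []
for _ in range(999):
    idx = rng.integers(0, len(X), len(X))
    boot.append(fit_a(X[idx], Y[idx]))
boot = np.array(boot)

print(f"a_hat = {a_hat:.3f}, bootstrap s.e. = {boot.std(ddof=1):.3f}")
```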


2.
3.
Asymptotically valid inference in linear regression models is easily achieved under mild conditions using the well-known Eicker–White heteroskedasticity-robust covariance matrix estimator or one of its variants. In finite samples, however, such inference can suffer from substantial size distortion. Indeed, it is well established in the literature that the finite-sample accuracy of a test may depend on which variant of the Eicker–White estimator is used, on the underlying data generating process (DGP) and on the desired level of the test.

This paper develops a new variant of the Eicker–White estimator which explicitly aims to minimize the finite-sample null error in rejection probability (ERP) of the test. This is made possible by selecting the transformation of the squared residuals which results in the smallest possible ERP through a numerical algorithm based on the wild bootstrap. Monte Carlo evidence indicates that this new procedure achieves a level of robustness to the DGP, sample size and nominal testing level unequaled by any other Eicker–White estimator based asymptotic test.
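As a rough companion to the description above, a minimal sketch of a wild-bootstrap test of a single regression coefficient using one fixed HC-type transformation of the squared residuals (HC3). The heteroskedastic DGP, the HC3 choice, and the Rademacher multipliers are illustrative assumptions; the paper's contribution is precisely to select the transformation numerically rather than fix it in advance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
x = np.column_stack([np.ones(n), rng.normal(size=n)])
y = x @ np.array([1.0, 0.0]) + np.abs(x[:, 1]) * rng.normal(size=n)  # H0: slope = 0

def hc_tstat(y, x, j=1):
    """t-statistic for beta_j = 0 with an HC3-type covariance estimator."""
    b, *_ = np.linalg.lstsq(x, y, rcond=None)
    u = y - x @ b
    xtx_inv = np.linalg.inv(x.T @ x)
    h = np.sum(x * (x @ xtx_inv), axis=1)          # leverage values
    omega = (u / (1.0 - h)) ** 2                   # HC3 transformation of u_i^2
    cov = xtx_inv @ (x.T * omega) @ x @ xtx_inv
    return b[j] / np.sqrt(cov[j, j])

# Restricted (null) fit drives the wild-bootstrap DGP.
b0, *_ = np.linalg.lstsq(x[:, :1], y, rcond=None)
u0 = y - x[:, :1] @ b0
t_obs = hc_tstat(y, x)

B = 499
t_star = np.empty(B)
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)            # Rademacher wild multipliers
    y_star = x[:, :1] @ b0 + u0 * v                # impose H0 in the bootstrap world
    t_star[b] = hc_tstat(y_star, x)

p_value = np.mean(np.abs(t_star) >= np.abs(t_obs))
print(f"t = {t_obs:.2f}, wild-bootstrap p-value = {p_value:.3f}")
```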


4.
We consider wavelet-based nonlinear estimators, which are constructed by thresholding the empirical wavelet coefficients, for mean regression functions with strong mixing errors and investigate their asymptotic rates of convergence. We show that these estimators achieve nearly optimal convergence rates, within a logarithmic term, over a large range of Besov function classes $B^s_{p,q}$. The theory is illustrated with some numerical examples.

A new ingredient in our development is a Bernstein-type exponential inequality for a sequence of random variables that have a certain mixing structure and are not necessarily bounded or sub-Gaussian. This moderate-deviation inequality may be of independent interest.
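A minimal sketch of wavelet thresholding for nonparametric regression using PyWavelets; the Daubechies-4 wavelet, the universal threshold, and the AR(1) errors standing in for strongly mixing noise are illustrative assumptions, not the estimator analysed in the paper.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
n = 512
t = np.linspace(0, 1, n)
m = np.sin(4 * np.pi * t) + 0.5 * np.sign(t - 0.5)     # true regression function

# AR(1) errors as a simple stand-in for strongly mixing noise.
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.5 * eps[i - 1] + 0.3 * rng.normal()
y = m + eps

# Discrete wavelet transform, hard-threshold the detail coefficients, invert.
coeffs = pywt.wavedec(y, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # noise scale from finest level
thresh = sigma * np.sqrt(2 * np.log(n))                # universal threshold
coeffs[1:] = [pywt.threshold(c, thresh, mode="hard") for c in coeffs[1:]]
m_hat = pywt.waverec(coeffs, "db4")[:n]

print(f"RMSE of wavelet estimator: {np.sqrt(np.mean((m_hat - m) ** 2)):.3f}")
```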


5.
This article provides a procedure for the detection and identification of outliers in the spectral domain, where the Whittle maximum likelihood estimator of the panel data model proposed by Chen [W.D. Chen, Testing for spurious regression in a panel data model with the individual number and time length growing, J. Appl. Stat. 33(88) (2006b), pp. 759–772] is implemented. We extend the approach of Chang and co-workers [I. Chang, G.C. Tiao, and C. Chen, Estimation of time series parameters in the presence of outliers, Technometrics 30(2) (1988), pp. 193–204] to the spectral domain, and through the Whittle approach we can quickly detect and identify the type of outliers. A fixed-effects panel data model is used, in which the remainder disturbance is assumed to be a fractional autoregressive integrated moving-average (ARFIMA) process, and the likelihood ratio criterion is obtained directly through the modified inverse Fourier transform. This saves considerable time, especially when the estimated model involves a huge data set.

Through Monte Carlo experiments, the consistency of the estimator is examined as the individual number N and the time length T grow, with the long-memory remainder disturbances contaminated by two types of outliers: additive outliers and innovation outliers. The power tests show that the estimators are quite successful and powerful.

In the empirical study, we apply the model to Taiwan's computer motherboard industry, using weekly data on nine well-known companies from 1 January 2000 to 31 October 2006. The proposed model has a smaller mean square error and shows more distinctive aggressive properties than the raw-data model does.
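For intuition about Whittle estimation in the spectral domain, a minimal sketch that estimates the long-memory parameter d of an ARFIMA(0, d, 0) series from its periodogram. The simulated series, the truncated MA(∞) representation, and the bounded search interval are illustrative assumptions; the paper's panel-data Whittle estimator and its outlier-detection step are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.signal import lfilter

rng = np.random.default_rng(3)

# Simulate an ARFIMA(0, d, 0) series via a truncated MA(inf) representation.
n, d_true = 2000, 0.30
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k
x = lfilter(psi, [1.0], rng.normal(size=2 * n))[n:]

# Periodogram at the Fourier frequencies (frequency zero excluded).
freqs = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
dft = np.fft.fft(x - x.mean())
pgram = np.abs(dft[1:len(freqs) + 1]) ** 2 / (2 * np.pi * n)

def neg_whittle(d):
    """Negative Whittle log-likelihood with the innovation variance profiled out."""
    g = np.abs(2 * np.sin(freqs / 2)) ** (-2 * d)   # ARFIMA(0,d,0) spectral shape
    scale = np.mean(pgram / g)
    return np.sum(np.log(scale * g) + pgram / (scale * g))

d_hat = minimize_scalar(neg_whittle, bounds=(-0.49, 0.49), method="bounded").x
print(f"true d = {d_true}, Whittle estimate = {d_hat:.3f}")
```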


6.
We consider the problem of estimating and testing a general linear hypothesis in a general multivariate linear model, the so-called Growth Curve model, when the p × N observation matrix is normally distributed.

The maximum likelihood estimator (MLE) for the mean is a weighted estimator involving the inverse of the sample covariance matrix, which is unstable when p is large and close to N and singular when p is larger than N. We modify the MLE to an unweighted estimator and propose new tests, which we compare with the previous likelihood ratio test (LRT) based on the weighted estimator, i.e. the MLE. We show that these new tests based on the unweighted estimator perform better than the LRT based on the MLE.
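For orientation, a sketch of the Growth Curve (Potthoff–Roy) model and of the two mean estimators contrasted above, written in standard notation that may differ from the authors' (this is an assumption, not quoted from the paper):

```latex
% Growth Curve model: X is the p x N observation matrix, A (p x q) the
% within-individual design, B (q x k) the unknown mean parameters,
% C (k x N) the between-individual design.
X = ABC + E, \qquad \operatorname{vec}(E) \sim N_{pN}\!\left(0,\ \Sigma \otimes I_N\right)

% Weighted MLE for B (uses S^{-1}, hence unstable when p is close to N):
\widehat{B}_{\mathrm{MLE}} = (A' S^{-1} A)^{-1} A' S^{-1} X C' (C C')^{-1},
\qquad S = X\left(I_N - C'(CC')^{-1}C\right)X'

% Unweighted modification (no S^{-1}; remains well defined for p > N):
\widehat{B}_{U} = (A' A)^{-1} A' X C' (C C')^{-1}
```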


7.
In this study an attempt is made to assess statistically the validity of two theories as to the origin of comets. This subject still leads to great controversy amongst astronomers but recently two main schools of thought have developed.

These are that comets are of

(i) planetary origin,

(ii) interstellar origin.

Many theories have been expounded within each school of thought, but at the present time one theory in each is generally accepted. This paper sets out to identify the statistical implications of each theory and to evaluate each theory in terms of those implications.


8.
9.
The 1978 European Community Typology for Agricultural Holdings is described in this paper and contrasted with a data-based, polythetic multivariate classification derived from cluster analysis.

The requirement to reduce the size of the variable set employed in an optimisation-partition method of clustering suggested the value of principal components and factor analysis for the identification of major ‘source’ dimensions against which to measure farm differences and similarities.

The Euclidean cluster analysis incorporating the reduced dimensions quickly converged to a stable solution and was little influenced by the initial number or nature of ‘seeding’ partitions of the data.

The assignment of non-sampled observations from the population to cluster classes was completed using classification functions.

The final scheme, based on a sample of over 2,000 observations, was found to be both interpretable and meaningful in terms of agricultural structure and practice, and much superior in explanatory power to a version of the principal activity typology.
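A minimal sketch of the general pipeline described above: dimension reduction to a few 'source' dimensions, a Euclidean partition-type cluster analysis, and assignment of non-sampled observations to the resulting classes. The simulated farm variables, the number of components, and the number of clusters are illustrative assumptions, and nearest-centroid assignment stands in for the paper's classification functions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Hypothetical farm-structure variables (hectares, livestock units, labour, ...).
sample = rng.normal(size=(2000, 12))          # surveyed holdings
population = rng.normal(size=(10000, 12))     # non-sampled holdings to be assigned

# 'Source' dimensions via principal components, then Euclidean k-means clustering.
reducer = make_pipeline(StandardScaler(), PCA(n_components=4))
Z = reducer.fit_transform(sample)
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(Z)

# Assign non-sampled observations to the nearest cluster centroid,
# playing the role of the classification functions mentioned above.
labels_new = kmeans.predict(reducer.transform(population))
print(np.bincount(labels_new))
```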


10.
The C statistic, also known as the Cash statistic, is often used in astronomy for the analysis of low-count Poisson data. The main advantage of this statistic, compared to the more commonly used χ2 statistic, is its applicability without the need to combine data points. This feature has made the C statistic a very useful method for analyzing Poisson data that have small (or even null) counts in each resolution element. One of the challenges of the C statistic is that its probability distribution, under the null hypothesis that the data follow a parent model, is not known exactly. This paper presents an effort towards improving our understanding of the C statistic by studying (a) the distribution of the C statistic for a fully specified model, (b) the distribution of Cmin resulting from a maximum-likelihood fit to a simple one-parameter constant model, i.e. a model that represents the sample mean of N Poisson measurements, and (c) the distribution of the associated ΔC statistic that is used for parameter estimation. The results confirm the expectation that, in the high-count limit, both the C statistic and Cmin have the same mean and variance as a χ2 statistic with the same number of degrees of freedom. It is also found that, in the low-count regime, the expectations of the C statistic and Cmin can be substantially lower than for a χ2 distribution. The paper makes use of recent X-ray observations of the astronomical source PG 1116+215 to illustrate the application of the C statistic to Poisson data.
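A minimal sketch of the modified Cash statistic in the form commonly used for Poisson counts, C = 2 Σ_i [m_i − n_i + n_i ln(n_i/m_i)] (with the logarithmic term taken as zero in empty bins), applied to the one-parameter constant model mentioned above. The simulated counts are illustrative, not the PG 1116+215 data, and the exact convention for C may differ from the paper's.

```python
import numpy as np

def cash_stat(counts, model):
    """Modified Cash statistic C = 2 * sum(m - n + n*ln(n/m));
    the n*ln(n/m) term is taken as zero for bins with zero counts."""
    counts = np.asarray(counts, dtype=float)
    model = np.asarray(model, dtype=float)
    term = np.zeros_like(model)
    pos = counts > 0
    term[pos] = counts[pos] * np.log(counts[pos] / model[pos])
    return 2.0 * np.sum(model - counts + term)

# Low-count example: N Poisson measurements fitted by their sample mean
# (the one-parameter constant model considered in the abstract).
rng = np.random.default_rng(5)
n_bins, mu_true = 100, 0.8
counts = rng.poisson(mu_true, size=n_bins)
c_min = cash_stat(counts, np.full(n_bins, counts.mean()))
print(f"C_min = {c_min:.1f} for {n_bins} bins (chi2 expectation ~ {n_bins - 1})")
```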

11.
12.
Methods: Based on the index S (S = SEN × SPE, the product of sensitivity and specificity), the new weighted product index is defined as $S_w = \mathrm{SEN}^{2w} \times \mathrm{SPE}^{2(1-w)}$, where $0 \le w \le 1$. $S_w$ is developed as a new tool for selecting the optimal cut point in ROC analysis and is compared with two other commonly used criteria.

Results: Comparing the optimal cut points for the three criteria, the range of the optimal cut point is widest for the maximized weighted Youden index criterion, narrowest for the weighted closest-to-(0,1) criterion, and intermediate for the weighted product index $S_w$ criterion.
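A minimal sketch of selecting a cut point by maximising $S_w = \mathrm{SEN}^{2w}\,\mathrm{SPE}^{2(1-w)}$ over candidate thresholds; the simulated biomarker values and the weight w = 0.6 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical biomarker: diseased subjects score higher on average.
scores = np.concatenate([rng.normal(0.0, 1.0, 500),    # healthy
                         rng.normal(1.2, 1.0, 300)])   # diseased
status = np.concatenate([np.zeros(500), np.ones(300)]) # 1 = diseased

def s_w(cut, w):
    sen = np.mean(scores[status == 1] > cut)    # sensitivity at this cut point
    spe = np.mean(scores[status == 0] <= cut)   # specificity at this cut point
    return sen ** (2 * w) * spe ** (2 * (1 - w))

w = 0.6
cuts = np.unique(scores)
best_cut = cuts[np.argmax([s_w(c, w) for c in cuts])]
print(f"optimal cut point maximising S_w (w = {w}): {best_cut:.3f}")
```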


13.
We define a new family of stochastic processes called Markov modulated Brownian motions with a sticky boundary at zero. Intuitively, each process is a regulated Markov-modulated Brownian motion whose boundary behavior is modified to slow down at level zero.

To determine the stationary distribution of a sticky MMBM, we follow a Markov-regenerative approach similar to the one developed with great success in the context of quasi-birth-and-death processes and fluid queues. Our analysis also relies on recent work showing that Markov-modulated Brownian motions arise as limits of a parametrized family of fluid queues.


14.
Efficient, accurate, and fast Markov chain Monte Carlo estimation methods based on the Implicit approach are proposed. In this article, we introduce the notion of the Implicit method for estimating the parameters of stochastic volatility models.

Implicit estimation offers a substantial computational advantage for learning from observations without prior knowledge and thus provides a good alternative to classical Bayesian inference when priors are missing.

Both the Implicit and the Bayesian approaches are illustrated using simulated data and are applied to daily stock return data on the CAC40 index.


15.
Stock & Watson (1999) consider the relative quality of different univariate forecasting techniques. This paper extends their study of forecasting practice, comparing the forecasting performance of two popular model selection procedures, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). This paper considers several topics: how AIC and BIC choose lags in autoregressive models on actual series, how models so selected forecast relative to an AR(4) model, the effect of using a maximum lag on model selection, and the forecasting performance of combining AR(4), AIC, and BIC models with equal weights.
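A minimal sketch of AIC/BIC lag selection for an autoregression, with both criteria computed from OLS fits over a common estimation sample; the simulated AR(2) series, the maximum lag of 12, and the particular per-observation Gaussian-likelihood form of the criteria are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate an AR(2) series.
n = 400
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def ic_for_lag(y, p, max_lag):
    """AIC and BIC of an AR(p) model fitted by OLS on a common sample."""
    Y = y[max_lag:]
    X = np.column_stack([np.ones(len(Y))] +
                        [y[max_lag - k:-k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma2 = np.mean((Y - X @ beta) ** 2)
    t_eff, k = len(Y), p + 1
    aic = np.log(sigma2) + 2 * k / t_eff
    bic = np.log(sigma2) + np.log(t_eff) * k / t_eff
    return aic, bic

max_lag = 12
aics, bics = zip(*(ic_for_lag(y, p, max_lag) for p in range(1, max_lag + 1)))
print("AIC picks lag", 1 + int(np.argmin(aics)),
      "| BIC picks lag", 1 + int(np.argmin(bics)))
```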

16.
This paper analyses direct and indirect forms of dependence in the probability of scoring in a handball match, taking into account the mutual influence of both playing teams. Non-identical distribution (i.d.) and non-stationarity, which are commonly observed in sport games, are studied through the specification of time-varying parameters.

The model accounts for the binary character of the dependent variable, and for unobserved heterogeneity. The parameter dynamics is specified by a first-order auto-regressive process.

Data from the Handball World Championships 2001–2005 show that the dynamics of handball violate both independence and i.d., in some cases exhibiting non-stationary behaviour.


17.
In this article, we consider the problem of testing (a) sphericity and (b) an intraclass covariance structure under a growth curve model. The maximum likelihood estimator (MLE) for the mean in a growth curve model is a weighted estimator involving the inverse of the sample covariance matrix, which is unstable when p is large and close to N and singular when p is larger than N. The MLE for the covariance matrix is based on the MLE for the mean, which can be very poor for p close to N. For both structures (a) and (b), we modify the MLE for the mean to an unweighted estimator and, based on this estimator, we propose a new estimator for the covariance matrix. This new estimator leads to new tests for (a) and (b). We also propose two other tests for each structure, which are based only on the sample covariance matrix.

To compare the performance of all four tests, we compute, for each structure (a) and (b), the attained significance level and the empirical power. We show that one of the tests based on the sample covariance matrix performs better than the likelihood ratio test based on the MLE.


18.
Tree algorithms are a well-known class of random access algorithms with a provable maximum stable throughput under the infinite population model (as opposed to ALOHA or the binary exponential backoff algorithm). In this article, we propose a tree algorithm for opportunistic spectrum usage in cognitive radio networks. A channel in such a network is shared among so-called primary and secondary users, where the secondary users are allowed to use the channel only if there is no primary user activity. The tree algorithm designed in this article can be used by the secondary users to share the channel capacity left by the primary users.

We analyze the maximum stable throughput and the mean packet delay of the secondary users by developing a tree-structured quasi-birth-death Markov chain, under the assumption that the primary user activity can be modeled by means of a finite-state Markov chain and that packet lengths follow a discrete phase-type distribution.

Numerical experiments provide insight into the effect of various system parameters and indicate that the proposed algorithm is able to make good use of the bandwidth left by the primary users.
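A minimal sketch of the classical binary-splitting tree algorithm for collision resolution on a shared channel, which is the family of algorithms the article builds on; primary-user activity, the cognitive-radio setting, and phase-type packet lengths are not modelled here, and the fair-coin splitting rule is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(8)

def resolve_collision(n_colliding):
    """Slots needed by the basic binary-splitting tree algorithm to resolve a
    collision among n_colliding stations (fair-coin splitting, no channel errors)."""
    slots, stack = 0, [n_colliding]
    while stack:
        group = stack.pop()
        slots += 1
        if group <= 1:
            continue                         # idle or successful slot
        left = rng.binomial(group, 0.5)      # stations that flip 'transmit first'
        stack.append(group - left)           # resolved after the left subtree
        stack.append(left)                   # resolved next
    return slots

# Average number of slots to resolve collisions of various sizes.
for k in (2, 4, 8, 16):
    avg = np.mean([resolve_collision(k) for _ in range(5000)])
    print(f"{k:2d} colliding packets -> {avg:.2f} slots on average")
```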


19.
20.
In this paper we study, by means of randomized sampling, the long-run stability of an open Markov population fed with time-dependent Poisson inputs. We show that, under general conditions on the transition matrix and the input intensities, the state probabilities within the transient states converge even when the overall expected population dimension increases without bound.

Following the convergence results, we obtain ML estimators for a particular sequence of input intensities, where the sequence of new arrivals is modeled by a sigmoidal function. These estimators allow for the forecast, by confidence intervals, of the evolution of the relative population structure in the transient states.

Applying these results to the study of a consumption credit portfolio, we estimate the implicit default rate.
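A minimal sketch of the kind of open Markov population described above: Poisson arrivals with a sigmoidal (logistic) intensity enter a set of transient states and then move according to a sub-stochastic transition matrix, the complement of each row sum being the probability of leaving the system. The transition matrix, the intensity parameters, the entry state, and the initial population are illustrative assumptions; the ML estimation and confidence intervals of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

# Sub-stochastic transition matrix among 3 transient states (e.g. credit grades);
# 1 - row sum is the probability of leaving the population in that period.
P = np.array([[0.70, 0.20, 0.05],
              [0.10, 0.70, 0.15],
              [0.00, 0.10, 0.80]])
exit_prob = 1.0 - P.sum(axis=1)

def intensity(t, a=500.0, b=1.0, c=20.0):
    """Sigmoidal (logistic) Poisson arrival intensity for period t."""
    return a / (1.0 + np.exp(-b * (t - c)))

T = 60
counts = np.array([200.0, 100.0, 50.0])   # illustrative initial population
structure = []
for t in range(T):
    # Existing members move to another transient state or leave the system.
    new_counts = np.zeros(3)
    for i in range(3):
        moves = rng.multinomial(int(counts[i]), np.append(P[i], exit_prob[i]))
        new_counts += moves[:3]
    # New arrivals enter state 0 with a time-dependent Poisson intensity.
    new_counts[0] += rng.poisson(intensity(t))
    counts = new_counts
    structure.append(counts / counts.sum())  # relative structure of transient states

print("relative structure in the last period:", np.round(structure[-1], 3))
```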

