The need to reduce the number of variables used in an optimisation-partition clustering method suggested principal components and factor analysis as tools for identifying major ‘source’ dimensions against which to measure farm differences and similarities.
The Euclidean cluster analysis incorporating the reduced dimensions quickly converged to a stable solution and was little influenced by the initial number or nature of ‘seeding’ partitions of the data.
The assignment of non-sampled observations from the population to cluster classes was completed using classification functions.
The final scheme, based on a sample of over 2,000 observations, proved interpretable, meaningful in terms of agricultural structure and practice, and much superior in explanatory power to a version of the principal-activity typology.
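As a rough sketch of this two-stage approach, with synthetic data standing in for the farm survey (the variable set, the number of retained components and the number of clusters below are all assumptions, not the paper's choices), the reduce-then-cluster idea looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the farm survey: 200 observations, 10 correlated variables.
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 10)) + 0.1 * rng.normal(size=(200, 10))

# Principal components via SVD of the standardized data (the dimension-reduction step).
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:3].T          # keep the first 3 'source' dimensions

# Euclidean k-means on the reduced scores (the optimisation-partition step).
def kmeans(pts, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centres = pts[r.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centre, then recompute centres
        labels = np.argmin(((pts[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([pts[labels == j].mean(0) if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return labels, centres

labels, centres = kmeans(scores, k=4)
```

Non-sampled observations would then be assigned to the fitted clusters by classification functions, e.g. nearest-centre allocation in the reduced space.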
These hold that comets are of
(i) planetary origin, or
(ii) interstellar origin.
Many theories have been expounded within each school of thought, but at present one theory in each is generally accepted. This paper sets out to identify the statistical implications of each theory and to evaluate each in terms of those implications.
Implicit estimation offers a substantial computational advantage for learning from observations without prior knowledge, and thus provides a good alternative to classical Bayesian inference when priors are unavailable.
Both the implicit and the Bayesian approaches are illustrated using simulated data and are applied to analyze daily stock-return data on the CAC40 index.
It is widely recognized that internal loss data alone do not suffice to provide an accurate capital charge in financial risk management, especially for high-severity, low-frequency events. Financial institutions typically use external loss data to augment the available evidence and thereby provide more accurate risk estimates. Rigorous statistical treatment is required to make internal and external data comparable and to ensure that merging the two databases leads to unbiased estimates.
The goal of this paper is to propose a correct statistical treatment to make the external and internal data comparable and, therefore, mergeable. Such a methodology augments internal losses with relevant, rather than redundant, external loss data.
e^{-X/a} + e^{-Y/a} = 1.
The procedures are: using the mean of individual solutions; least squares with Y as the subject of the equation; least squares with X as the subject of the equation; and maximum likelihood using a statistical model. To compare these estimates, we use Efron's (1979) bootstrap technique, since distributional results are not available. This example also illustrates the role of the bootstrap in statistical inference.
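A minimal sketch of two of these steps, the 'mean of individual solutions' estimator and its bootstrap standard error, on simulated data (the true a, the sample size and the noise level below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = 2.0

# Simulated points near the curve e^{-X/a} + e^{-Y/a} = 1 (illustrative data,
# not the data set used in the paper).
x = rng.uniform(0.5, 5.0, size=40)
y = -a_true * np.log(1.0 - np.exp(-x / a_true)) + rng.normal(0, 0.02, size=40)

def solve_a(xi, yi, lo=1e-3, hi=100.0):
    """Solve e^{-x/a} + e^{-y/a} = 1 for a by bisection (g is increasing in a)."""
    g = lambda a: np.exp(-xi / a) + np.exp(-yi / a) - 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def estimate(xs, ys):
    # The 'mean of individual solutions' procedure: solve for a at each point
    # separately, then average.
    return np.mean([solve_a(xi, yi) for xi, yi in zip(xs, ys)])

a_hat = estimate(x, y)

# Efron's bootstrap: resample (x, y) pairs with replacement and re-estimate,
# approximating the sampling distribution of the estimator.
boot = []
for _ in range(200):
    idx = rng.integers(0, len(x), len(x))
    boot.append(estimate(x[idx], y[idx]))
se = np.std(boot)
```

The bootstrap standard error `se` is what replaces the missing distributional theory when comparing the four procedures.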
The time-domain methods that are implemented in the TRAMO–SEATS and the STAMP programs are compared. An abbreviated time-domain method of seasonal adjustment that is implemented in the IDEOLOG program is also presented. Finite-sample versions of the Wiener–Kolmogorov filter are described that can be used to implement the methods in a common way.
The frequency-domain method, which is also implemented in the IDEOLOG program, employs an ideal frequency selective filter that depends on identifying the ordinates of the Fourier transform of a detrended data sequence that should lie in the pass band of the filter and those that should lie in its stop band. Filters of this nature can be used both for extracting a low-frequency cyclical component of the data and for extracting the seasonal component.
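The ideal frequency-selective idea can be sketched on a synthetic monthly series (this illustrates the general principle only, not the IDEOLOG implementation; the series, pass bands and cut-offs are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120                                   # e.g. 10 years of monthly data
t = np.arange(n)

# Synthetic detrended series: slow cycle + seasonal component + noise.
cycle = np.sin(2 * np.pi * t / 60)
seasonal = 0.5 * np.sin(2 * np.pi * t / 12)
y = cycle + seasonal + 0.1 * rng.normal(size=n)

def ideal_filter(x, keep):
    """Zero every Fourier ordinate outside `keep` (a boolean pass-band mask
    over the frequencies returned by np.fft.fftfreq), then invert."""
    F = np.fft.fft(x)
    F[~keep] = 0.0
    return np.fft.ifft(F).real

freqs = np.abs(np.fft.fftfreq(n))         # cycles per observation

# Low-frequency cyclical component: pass band = periods longer than 24 months.
lowpass = ideal_filter(y, freqs < 1 / 24)

# Seasonal component: pass band = ordinates at the seasonal frequency 1/12.
seasonal_band = ideal_filter(y, np.abs(freqs - 1 / 12) < 1 / n)
```

The same mask-then-invert mechanism serves both extractions; only the pass band changes.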
The model accounts for the binary character of the dependent variable and for unobserved heterogeneity. The parameter dynamics are specified by a first-order autoregressive process.
Data from the Handball World Championships 2001–2005 show that the dynamics of handball violate both the independence and the identical-distribution assumptions, in some cases exhibiting non-stationary behaviour.
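A minimal simulation of this model class, with an assumed logistic link and assumed AR(1) parameters (none of which are taken from the paper), shows how a latent autoregressive parameter induces serial dependence in a binary series:

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent parameter following a first-order autoregressive process; the
# logistic link and the values of phi and sigma are illustrative assumptions.
phi, sigma, T = 0.9, 0.3, 2000
theta = np.zeros(T)
for t in range(1, T):
    theta[t] = phi * theta[t - 1] + sigma * rng.normal()

p = 1.0 / (1.0 + np.exp(-theta))           # unobserved time-varying probability
y = (rng.uniform(size=T) < p).astype(int)  # observed binary outcomes

# Under i.i.d. sampling the lag-1 autocorrelation of y would be near zero;
# the latent AR(1) induces positive serial dependence, which is the kind of
# departure from independence that the tests described above detect.
r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
```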
Following the convergence results, we obtain ML estimators for a particular sequence of input intensities, in which the sequence of new arrivals is modeled by a sigmoidal function. These estimators allow the evolution of the relative population structure in the transient states to be forecast by confidence intervals.
Applying these results to the study of a consumption credit portfolio, we estimate the implicit default rate.
To determine the stationary distribution of a sticky MMBM, we follow a Markov-regenerative approach similar to the one developed with great success in the context of quasi-birth-and-death processes and fluid queues. Our analysis also relies on recent work showing that Markov-modulated Brownian motions arise as limits of a parametrized family of fluid queues.
The maximum likelihood estimator (MLE) for the mean is a weighted estimator involving the inverse of the sample covariance matrix, which is unstable when p is close to N and singular when p exceeds N. We modify the MLE to an unweighted estimator and propose new tests, which we compare with the previous likelihood ratio test (LRT) based on the weighted estimator, i.e., the MLE. We show that the new tests based on the unweighted estimator perform better than the LRT based on the MLE.
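The contrast between the weighted and unweighted statistics can be sketched on simulated data (the dimensions, the Hotelling-type form of the weighted statistic and the identity-weighted form below are illustrative assumptions, not the paper's exact test statistics):

```python
import numpy as np

rng = np.random.default_rng(4)
N, p = 30, 25                      # p close to N: the unstable regime

# i.i.d. rows with a common mean vector mu (illustrative data).
mu = np.full(p, 0.5)
X = mu + rng.normal(size=(N, p))

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)        # p x p sample covariance

# Weighted (MLE-type) statistic uses S^{-1}; the unweighted version drops it.
weighted = N * xbar @ np.linalg.solve(S, xbar)   # Hotelling-type T^2
unweighted = N * xbar @ xbar                     # identity weighting

# The condition number of S shows why the weighted form is fragile when p ~ N
# (and S^{-1} does not exist at all once p > N).
cond = np.linalg.cond(S)
```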
We analyze the maximum stable throughput and mean packet delay of the secondary users by developing a tree-structured quasi-birth–death Markov chain, under the assumptions that the primary-user activity can be modeled by a finite-state Markov chain and that packet lengths follow a discrete phase-type distribution.
Numerical experiments provide insight into the effect of various system parameters and indicate that the proposed algorithm makes good use of the bandwidth left by the primary users.
f(θ) = exp{κ cos(θ − μ₀)} / (2π I₀(κ)),  −π ≤ θ < π,
where I₀(κ) is a modified Bessel function, μ₀ is the mean direction and κ is the concentration parameter of the distribution. Watson & Williams (1956) laid the foundation of analysis-of-variance-type techniques for the two-dimensional case of circular data using the von Mises distribution. Stephens (1962a,b, 1969, 1972), Upton (1974) and Stephens (1982) made further improvements to Watson & Williams' work. In this paper the authors discuss the pitfalls of the methods adopted by Stephens (1982) and present a unified analysis-of-variance-type approach for circular data.
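For reference, the von Mises density above can be evaluated directly (a sketch; the parameter values are arbitrary, and np.i0 is NumPy's modified Bessel function I₀):

```python
import numpy as np

def vonmises_pdf(theta, mu0, kappa):
    """von Mises density: exp(kappa*cos(theta - mu0)) / (2*pi*I0(kappa))."""
    return np.exp(kappa * np.cos(theta - mu0)) / (2 * np.pi * np.i0(kappa))

theta = np.linspace(-np.pi, np.pi, 10001)
pdf = vonmises_pdf(theta, mu0=0.0, kappa=2.0)

# Sanity check: the density integrates to 1 over the circle and peaks at mu0.
area = pdf[:-1].sum() * (theta[1] - theta[0])
```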
A new ingredient in our development is a Bernstein-type exponential inequality for a sequence of random variables that have a certain mixing structure and are not necessarily bounded or sub-Gaussian. This moderate deviation inequality may be of independent interest.
AMS (MOS) Subject Classifications: 62M10, 62H30
Throughout, Markov chain Monte Carlo algorithms are used to perform the Bayesian calculations. 相似文献
Saddlepoint approximations are powerful tools for providing accurate expressions for distribution functions that are not known in closed form. This method not only yields an accurate approximation near the center of the distribution but also controls the relative error in the far tail of the distribution.
In this article, we discuss approximations to the unknown complex random-sum Poisson–Erlang random variable, which has a continuous distribution, and the random-sum Poisson-negative binomial random variable, which has a discrete distribution. We show that the saddlepoint approximation method is not only quick, dependable, stable, and accurate enough for general statistical inference but is also applicable without deep knowledge of probability theory. Numerical examples of application of the saddlepoint approximation method to continuous and discrete random-sum Poisson distributions are presented.
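The mechanics of the saddlepoint approximation can be sketched on a plain Poisson variable rather than the compound random-sum cases treated in the paper (the rate below is arbitrary):

```python
from math import exp, factorial, log, pi, sqrt

lam = 4.0  # Poisson rate (illustrative)

# CGF of Poisson(lam): K(s) = lam*(e^s - 1). The saddlepoint s_hat solves
# K'(s) = x, giving s_hat = log(x/lam) and K''(s_hat) = x; the saddlepoint
# density approximation is exp(K(s_hat) - s_hat*x) / sqrt(2*pi*K''(s_hat)).
def saddlepoint_pmf(x, lam):
    s_hat = log(x / lam)
    K = lam * (exp(s_hat) - 1.0)
    return exp(K - s_hat * x) / sqrt(2 * pi * x)

# Compare with the exact pmf: the relative error stays controlled into the tail.
exact = [exp(-lam) * lam ** x / factorial(x) for x in range(1, 15)]
approx = [saddlepoint_pmf(x, lam) for x in range(1, 15)]
rel_err = [abs(a - e) / e for a, e in zip(approx, exact)]
```

For the Poisson case the approximation reduces to Stirling's formula for x!, which is why the relative error shrinks as x grows, illustrating the tail-accuracy property described above.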
Assuming the land use is known, that is to say the proportion of each theme within each mixed pixel, we propose to address the downscaling issue through the generalization of varying-time regression models for longitudinal and/or functional data by introducing random individual effects. The estimators are built by expanding the mixed-pixel trajectories in B-spline functions and maximizing the log-likelihood with a backfitting-ECME algorithm. A BLUP formula then yields the ‘best possible’ estimates of the local temporal responses of each crop from the observed mixed-pixel trajectories. We show that this model has many potential applications in remote sensing; an interesting one consists of coupling high and low spatial resolution images in order to perform temporal interpolation of high-spatial-resolution images (20 m), increasing knowledge of particular crops in very precise locations.
The unmixing and temporal high-resolution interpolation approaches are illustrated on remote-sensing data acquired over south-western France during 2002.
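Assuming the land-use proportions are known, the basic linear-unmixing step can be sketched as an ordinary least-squares problem (synthetic proportions and pure temporal responses; the B-spline expansion, random effects and BLUP machinery of the paper are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)
n_pixels, n_dates, n_crops = 100, 12, 3

# Known land-use proportions of each crop within each mixed pixel (rows sum to 1).
A = rng.dirichlet(np.ones(n_crops), size=n_pixels)

# Unobserved pure temporal response of each crop (e.g. a monthly reflectance profile).
pure = np.vstack([np.sin(np.linspace(0, np.pi, n_dates) + k) for k in range(n_crops)])

# Observed mixed-pixel trajectories: proportion-weighted mixture plus noise.
Y = A @ pure + 0.05 * rng.normal(size=(n_pixels, n_dates))

# Linear unmixing: recover the pure crop responses by least squares, date by date.
pure_hat, *_ = np.linalg.lstsq(A, Y, rcond=None)
```

With many pixels per crop, the least-squares recovery of each pure temporal response is accurate even under moderate observation noise, which is what makes the coupling of low-resolution trajectories with known proportions informative.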