Similar Articles
Found 20 similar articles (search time: 375 ms)
1.
This paper analyses direct and indirect forms of dependence in the probability of scoring in a handball match, taking into account the mutual influence of the two playing teams. Non-identical distribution (i.d.) and non-stationarity, which are commonly observed in sports games, are studied through the specification of time-varying parameters.

The model accounts for the binary character of the dependent variable and for unobserved heterogeneity. The parameter dynamics are specified by a first-order autoregressive process.

Data from the Handball World Championships 2001–2005 show that the dynamics of handball violate both independence and i.d., in some cases exhibiting non-stationary behaviour.


2.
This paper considers constant-stress partially accelerated life tests for series-system products, where a dependent Marshall-Olkin (M-O) bivariate exponential distribution is assumed for the components.

Based on progressively type-II censored and masked data, the maximum likelihood estimates of the parameters and acceleration factors are obtained using a decomposition approach. The method can also be applied to the Bayes estimates, which are too complex to obtain in the usual way. Finally, a Monte Carlo simulation study is carried out to verify the accuracy of the methods under different masking probabilities and censoring schemes.


3.
We consider the problem of estimating and testing a general linear hypothesis in a general multivariate linear model, the so-called Growth Curve model, when the p × N observation matrix is normally distributed.

The maximum likelihood estimator (MLE) for the mean is a weighted estimator involving the inverse of the sample covariance matrix, which is unstable for large p close to N and singular for p larger than N. We modify the MLE to an unweighted estimator and propose new tests, which we compare with the previous likelihood ratio test (LRT) based on the weighted estimator, i.e., the MLE. We show that the new tests based on the unweighted estimator perform better than the LRT based on the MLE.


4.
In this study an attempt is made to assess statistically the validity of two theories of the origin of comets. The subject still provokes great controversy amongst astronomers, but recently two main schools of thought have developed.

These are that comets are of

(i) planetary origin,

(ii) interstellar origin.

Many theories have been advanced within each school of thought, but at present one theory in each is generally accepted. This paper sets out to identify the statistical implications of each theory and to evaluate each theory in terms of those implications.


5.
We define a new family of stochastic processes called Markov-modulated Brownian motions (MMBMs) with a sticky boundary at zero. Intuitively, each process is a regulated Markov-modulated Brownian motion whose boundary behaviour is modified to slow down at level zero.

To determine the stationary distribution of a sticky MMBM, we follow a Markov-regenerative approach similar to the one developed with great success in the context of quasi-birth-and-death processes and fluid queues. Our analysis also relies on recent work showing that Markov-modulated Brownian motions arise as limits of a parametrized family of fluid queues.


6.
Discrete-time models are used in ecology to describe the dynamics of an age-structured population. They can be introduced from a deterministic or a stochastic viewpoint. We analyze a stochastic model in which the dynamics of the population are described by means of a projection matrix. In this statistical model, fertility rates and survival rates are unknown parameters, estimated using a Bayesian approach and also data cloning, a simulation-based method especially useful with complex hierarchical models.

Both methodologies are applied to real data on the population of Steller sea lions along the Alaska coast from 1978 to 2004. The estimates obtained from these methods agree well with the non-missing actual values.
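As a minimal sketch of the projection-matrix setup (the fertility and survival rates below are made up for illustration, not the Steller sea lion estimates), an age-structured population can be projected forward with a Leslie matrix:

```python
import numpy as np

# Hypothetical 3-age-class Leslie projection matrix:
# first row holds fertility rates, the sub-diagonal holds survival rates.
fertility = [0.0, 1.5, 1.2]
survival = [0.6, 0.8]

A = np.zeros((3, 3))
A[0, :] = fertility
A[1, 0] = survival[0]
A[2, 1] = survival[1]

n = np.array([100.0, 50.0, 20.0])  # initial abundances by age class
for _ in range(50):                # project the population 50 time steps
    n = A @ n

# The long-run growth rate equals the dominant eigenvalue of A.
lam = max(np.linalg.eigvals(A).real)
```

In the Bayesian or data-cloning analyses described above, the entries of A would be treated as unknown parameters rather than fixed constants.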


7.
Measures of the spread of data for random sums arise frequently in many problems and have a wide range of applications in real life, such as in insurance (e.g., the total claim size in a portfolio). The exact distribution of a random sum is extremely difficult to determine, and the normal approximation usually performs very badly for these complex distributions. A better approach to approximating a random-sum distribution is to use saddlepoint approximations.

Saddlepoint approximations are powerful tools for providing accurate expressions for distribution functions that are not known in closed form. This method not only yields an accurate approximation near the center of the distribution but also controls the relative error in the far tail of the distribution.

In this article, we discuss approximations to the unknown complex random-sum Poisson–Erlang random variable, which has a continuous distribution, and the random-sum Poisson-negative binomial random variable, which has a discrete distribution. We show that the saddlepoint approximation method is not only quick, dependable, stable, and accurate enough for general statistical inference but is also applicable without deep knowledge of probability theory. Numerical examples of application of the saddlepoint approximation method to continuous and discrete random-sum Poisson distributions are presented.
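The Poisson–Erlang case can be sketched numerically. Assuming N ~ Poisson(λ) claims with Erlang (gamma) sizes, the compound sum S has cumulant generating function K(t) = λ(M(t) − 1), and the Lugannani-Rice form of the saddlepoint approximation gives the tail probability P(S > x). All parameter values below are illustrative, not from the article:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Hypothetical parameters: N ~ Poisson(lam_), claim sizes ~ Gamma(alpha, rate beta).
lam_, alpha, beta = 10.0, 2.0, 1.0

def K(t):   # cumulant generating function of the compound sum, valid for t < beta
    return lam_ * ((1 - t / beta) ** (-alpha) - 1)

def K1(t):  # first derivative K'(t)
    return lam_ * alpha / beta * (1 - t / beta) ** (-(alpha + 1))

def K2(t):  # second derivative K''(t)
    return lam_ * alpha * (alpha + 1) / beta ** 2 * (1 - t / beta) ** (-(alpha + 2))

def sf_saddlepoint(x):
    """Lugannani-Rice approximation to P(S > x), for x away from the mean K'(0)."""
    s = brentq(lambda t: K1(t) - x, -50.0, beta - 1e-8)  # saddlepoint: K'(s) = x
    w = np.sign(s) * np.sqrt(2 * (s * x - K(s)))
    u = s * np.sqrt(K2(s))
    return norm.sf(w) + norm.pdf(w) * (1 / u - 1 / w)
```

Here the mean of S is K'(0) = 20, so tail probabilities such as P(S > 30) can be read off directly; the formula is singular exactly at the mean, where a separate limiting expression is used in practice.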


8.
Among the various types of non-sampling errors, non-response error (NRE) is an inherent component of any sample survey and should be given close attention during the design and execution stages. With increasing awareness of such errors, there is therefore a need to develop suitable techniques for controlling them.

This article proposes two families of estimators for the population mean in the presence of non-response and discusses their properties under a model-based approach, namely the polynomial regression model. The families include some existing estimators. Efficiency comparisons, along with the robustness of the estimators under model misspecification, are discussed empirically.


9.
The 1978 European Community Typology for Agricultural Holdings is described in this paper and contrasted with a data-based, polythetic multivariate classification derived from cluster analysis.

The need to reduce the size of the variable set used in an optimisation-partition method of clustering suggested the value of principal components and factor analysis for identifying the major ‘source’ dimensions against which to measure farm differences and similarities.

The Euclidean cluster analysis incorporating the reduced dimensions quickly converged to a stable solution and was little influenced by the initial number or nature of ‘seeding’ partitions of the data.

The assignment of non-sampled observations from the population to cluster classes was completed using classification functions.

The final scheme, based on a sample of over 2,000 observations, was found to be interpretable and meaningful in terms of agricultural structure and practice, and much superior in explanatory power to a version of the principal-activity typology.
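A compressed sketch of this pipeline (principal components for dimension reduction, then an optimisation-partition clustering) might look as follows; the two-group toy data below stand in for the real farm survey variables:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Toy stand-in for farm survey variables: two well-separated groups of holdings.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 6)), rng.normal(4, 1, (100, 6))])

# Principal components via SVD: reduce the variable set before clustering.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T            # keep the first two 'source' dimensions

# Optimisation-partition clustering (k-means) on the reduced dimensions.
centroids, labels = kmeans2(scores, 2, minit='++', seed=3)
```

Assignment of non-sampled observations would then use classification functions estimated from the resulting clusters, as in the paper.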


10.
Over the last 25 years, increasing attention has been given to the problem of analysing data arising from circular distributions. The most important circular distribution is that introduced by Von Mises (1918), which takes the form:

f(θ) = exp{k cos(θ − u0)} / (2π I0(k)),   0 ≤ θ < 2π,

where I0(k) is a modified Bessel function, u0 is the mean direction and k is the concentration parameter of the distribution. Watson & Williams (1956) laid the foundation of analysis-of-variance-type techniques for the two-dimensional case of circular data using the Von Mises distribution. Stephens (1962a,b, 1969, 1972), Upton (1974) and Stephens (1982) made further improvements to Watson & Williams’ work. In this paper the authors discuss the pitfalls of the methods adopted by Stephens (1982) and present a unified analysis-of-variance-type approach for circular data.
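For illustration, the mean direction of a Von Mises sample can be estimated by the circular (resultant) mean, which is also its maximum likelihood estimator; the parameter values here are arbitrary:

```python
import numpy as np
from scipy.stats import vonmises

# Arbitrary illustrative values: mean direction u0, concentration k.
u0, k = 1.0, 4.0
theta = vonmises.rvs(k, loc=u0, size=5000, random_state=0)

# Circular mean: direction of the resultant vector of the sample.
C, S = np.cos(theta).sum(), np.sin(theta).sum()
u0_hat = np.arctan2(S, C)

# Mean resultant length; values near 1 indicate high concentration.
Rbar = np.hypot(C, S) / theta.size
```

Analysis-of-variance-type tests for circular data are built from such resultant lengths computed within and across samples.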


11.
Alternative methods of trend extraction and of seasonal adjustment are described that operate in the time domain and in the frequency domain.

The time-domain methods that are implemented in the TRAMO–SEATS and the STAMP programs are compared. An abbreviated time-domain method of seasonal adjustment that is implemented in the IDEOLOG program is also presented. Finite-sample versions of the Wiener–Kolmogorov filter are described that can be used to implement the methods in a common way.

The frequency-domain method, which is also implemented in the IDEOLOG program, employs an ideal frequency selective filter that depends on identifying the ordinates of the Fourier transform of a detrended data sequence that should lie in the pass band of the filter and those that should lie in its stop band. Filters of this nature can be used both for extracting a low-frequency cyclical component of the data and for extracting the seasonal component.
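A bare-bones version of such an ideal frequency-selective filter (not the IDEOLOG implementation, only the underlying idea) zeroes the Fourier ordinates in the stop band and inverts the transform; the synthetic monthly series below is illustrative:

```python
import numpy as np

# Synthetic monthly series: low-frequency cycle + seasonal pattern + noise.
rng = np.random.default_rng(1)
t = np.arange(240)                        # 20 years of monthly data
cycle = np.sin(2 * np.pi * t / 60)        # cyclical component, period 5 years
seasonal = 0.5 * np.cos(2 * np.pi * t / 12)
y = cycle + seasonal + 0.1 * rng.standard_normal(240)

def ideal_lowpass(x, cutoff):
    """Zero all Fourier ordinates whose frequency exceeds `cutoff` (cycles/sample)."""
    f = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size)
    f[freqs > cutoff] = 0.0
    return np.fft.irfft(f, n=x.size)

trend = ideal_lowpass(y, cutoff=1 / 24)   # pass band: periods longer than 2 years
```

Extracting the seasonal component works the same way, with the pass band placed around the seasonal frequencies instead.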


12.
This article provides a procedure for detecting and identifying outliers in the spectral domain, where the Whittle maximum likelihood estimator of the panel data model proposed by Chen [W.D. Chen, Testing for spurious regression in a panel data model with the individual number and time length growing, J. Appl. Stat. 33(88) (2006b), pp. 759–772] is implemented. We extend the approach of Chang and co-workers [I. Chang, G.C. Tiao, and C. Chen, Estimation of time series parameters in the presence of outliers, Technometrics 30(2) (1988), pp. 193–204] to the spectral domain, and through the Whittle approach we can quickly detect and identify the type of outliers. A fixed-effects panel data model is used, in which the remainder disturbance is assumed to follow a fractional autoregressive integrated moving-average (ARFIMA) process, and the likelihood-ratio criterion is obtained directly through the modified inverse Fourier transform. This saves much time, especially when the model is estimated on a huge data-set.

Through Monte Carlo experiments, the consistency of the estimator is examined as the individual number N and the time length T grow, with the long-memory remainder disturbances contaminated by two types of outliers: additive outliers and innovation outliers. The power tests show that the estimators are quite successful and powerful.

In the empirical study, we apply the model to Taiwan's computer-motherboard industry, using weekly data on nine well-known companies from 1 January 2000 to 31 October 2006. The proposed model has a smaller mean square error and shows more distinctive aggressive properties than the raw-data model.


13.
We consider wavelet-based nonlinear estimators, constructed by thresholding the empirical wavelet coefficients, for mean regression functions with strong mixing errors, and investigate their asymptotic rates of convergence. We show that these estimators achieve nearly optimal convergence rates, within a logarithmic term, over a large range of Besov function classes B^s_{p,q}. The theory is illustrated with some numerical examples.

A new ingredient in our development is a Bernstein-type exponential inequality for a sequence of random variables with a certain mixing structure that are not necessarily bounded or sub-Gaussian. This moderate-deviation inequality may be of independent interest.
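The thresholding step itself is simple; a minimal soft-threshold rule for empirical wavelet coefficients (the toy coefficient vector is illustrative) is:

```python
import numpy as np

def soft_threshold(w, t):
    """Soft-threshold wavelet coefficients: shrink each one toward zero by t,
    setting coefficients with |w| <= t exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Toy empirical coefficients: two 'signal' spikes among small noise terms.
w = np.array([5.0, -0.2, 0.1, -4.0, 0.3])
w_hat = soft_threshold(w, t=0.5)   # noise terms are zeroed, spikes shrunk
```

In the estimators studied above, the threshold t is calibrated to the noise level and sample size so that the stated Besov-class convergence rates are attained.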


14.
In this article, we give a brief overview of the different functional forms of the generalized Poisson distribution (GPD) and the various methods of parameter estimation found in the literature. We compare moment estimation (ME) and maximum likelihood estimation (MLE) of the parameters of the GPD through a simulation study in terms of bias, MSE and covariance. To simulate random numbers from the GPD, we develop a Matlab function gpoissrnd(). The simulation study leads to the important conclusion that ME performs as well as or better than MLE when the sample size is small.

Further, we fit the GPD to various datasets from the literature using both estimation methods and observe that the results do not differ significantly even when the sample size is large. Overall, we conclude that for the GPD, using ME in place of MLE leads to almost identical results. The computational simplicity of ME compared with MLE also supports its use for practitioners.
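The computational simplicity of ME is easy to see. Assuming Consul's standard moment relations for the GPD (mean = θ/(1−λ), variance = θ/(1−λ)³), the moment estimators follow in a few lines; the count data below are illustrative, not from the article:

```python
import numpy as np

def gpd_moment_estimates(x):
    """Moment estimators for the generalized Poisson distribution under
    Consul's parametrization: mean = theta/(1-lam), var = theta/(1-lam)**3."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var(ddof=1)
    lam_hat = 1.0 - np.sqrt(m / v)     # from var/mean = (1 - lam)**-2
    theta_hat = m * np.sqrt(m / v)     # theta = mean * (1 - lam)
    return theta_hat, lam_hat

# Illustrative overdispersed counts (variance exceeds the mean).
x = [0, 1, 1, 2, 3, 0, 5, 2, 1, 4, 0, 2, 6, 1, 3]
theta_hat, lam_hat = gpd_moment_estimates(x)
```

By contrast, the MLE requires numerically maximizing the GPD log-likelihood, which has no closed-form solution.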


15.
In this article, we consider the problem of testing (a) sphericity and (b) intraclass covariance structure under a growth curve model. The maximum likelihood estimator (MLE) for the mean in a growth curve model is a weighted estimator with the inverse of the sample covariance matrix which is unstable for large p close to N and singular for p larger than N. The MLE for the covariance matrix is based on the MLE for the mean, which can be very poor for p close to N. For both structures (a) and (b), we modify the MLE for the mean to an unweighted estimator and based on this estimator we propose a new estimator for the covariance matrix. This new estimator leads to new tests for (a) and (b). We also propose two other tests for each structure, which are just based on the sample covariance matrix.

To compare the performance of all four tests, we compute for each structure (a) and (b) the attained significance level and the empirical power. We show that one of the tests based on the sample covariance matrix performs better than the likelihood ratio test based on the MLE.


16.
Tree algorithms are a well-known class of random access algorithms with a provable maximum stable throughput under the infinite population model (as opposed to ALOHA or the binary exponential backoff algorithm). In this article, we propose a tree algorithm for opportunistic spectrum usage in cognitive radio networks. A channel in such a network is shared among so-called primary and secondary users, where the secondary users are allowed to use the channel only if there is no primary user activity. The tree algorithm designed in this article can be used by the secondary users to share the channel capacity left by the primary users.

We analyze the maximum stable throughput and the mean packet delay of the secondary users by developing a tree-structured quasi-birth-death Markov chain, under the assumption that the primary user activity can be modeled by a finite-state Markov chain and that packet lengths follow a discrete phase-type distribution.

Numerical experiments provide insight on the effect of various system parameters and indicate that the proposed algorithm is able to make good use of the bandwidth left by the primary users.


17.
Efficient, accurate, and fast Markov chain Monte Carlo estimation methods based on the Implicit approach are proposed. In this article, we introduce the notion of the Implicit method for parameter estimation in stochastic volatility models.

Implicit estimation offers a substantial computational advantage for learning from observations without prior knowledge, and thus provides a good alternative to classical Bayesian inference when priors are missing.

Both the Implicit and Bayesian approaches are illustrated using simulated data and applied to daily stock-return data on the CAC40 index.


18.
Latent class analysis (LCA) has important applications in the social and behavioral sciences for modeling categorical response variables, and nonresponse is typical when collecting data. In this study, nonresponse mainly comprised “contingency questions” and genuinely missing data. The primary objective of this research was to evaluate the effects of several potential factors on model selection indices in LCA with nonresponse data.

We simulated missing data with contingency questions and evaluated the accuracy rates of eight information criteria in selecting the correct models. The results showed that the main factors are the latent class proportions, the conditional probabilities, sample size, the number of items, the missing data rate, and the contingency data rate. Interactions of the conditional probabilities with class proportions, sample size, and the number of items are also significant. Our simulation results suggest that the impact of missing data and contingency questions can be mitigated by increasing the sample size or the number of items.


19.
Multivariate data are present in many research areas, and their analysis is challenging when assumptions of normality are violated and the data are discrete. Poisson counts are perhaps the most common type of discrete data, but their inflated and doubly inflated counterparts are gaining popularity (Sengupta, Chaganty, and Sabo 2015; Lee, Jung, and Jin 2009; Agarwal, Gelfand, and Citron-Pousty 2002).

Our aim is to build a tractable statistical model and use it to estimate the parameters of the multivariate doubly inflated Poisson distribution. To preserve the correlation structure, we incorporate ideas from copula distributions: a multivariate doubly inflated Poisson distribution based on a Gaussian copula is introduced. Data simulation and parameter estimation algorithms are also provided. Residual checks are carried out to assess any substantial biases, and the model dimensionality is increased to test the performance of the proposed estimation method. All results show high efficiency and promising outcomes in the modeling of discrete data, particularly doubly inflated Poisson count data, under a novel modified algorithm.
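A simulation sketch of the Gaussian-copula construction follows. All parameter values are hypothetical, and "doubly inflated" is read here as extra mass at 0 and at a second point d; the abstract does not pin down the exact parametrization:

```python
import numpy as np
from scipy.stats import norm, poisson

# Hypothetical margins: Poisson(mu) with extra mass p0 at 0 and pd_ at d,
# tied together by a Gaussian copula with correlation rho.
mu, p0, pd_, d, rho = 3.0, 0.15, 0.10, 2, 0.6

def margin_cdf(y):
    base = (1 - p0 - pd_) * poisson.cdf(y, mu)
    return base + p0 * (y >= 0) + pd_ * (y >= d)

def margin_ppf(u):
    # Smallest count y with F(y) >= u (generalized inverse of the mixture CDF).
    y = 0
    while margin_cdf(y) < u:
        y += 1
    return y

def simulate(n, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = norm.cdf(z)                      # Gaussian copula: correlated uniforms
    return np.array([[margin_ppf(a), margin_ppf(b)] for a, b in u])

sample = simulate(2000)
```

Because both margins are driven by the same correlated uniforms, the simulated counts retain positive dependence while each margin keeps its inflated masses at 0 and d.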


20.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号