Similar Articles (20 results)
1.
Yu et al. [An improved score interval with a modified midpoint for a binomial proportion. J Stat Comput Simul. 2014;84:1022–1038] propose a novel confidence interval (CI) for a binomial proportion, obtained by modifying the midpoint of the score interval. This CI is competitive with the various commonly used methods. At the same time, Martín and Álvarez [Two-tailed asymptotic inferences for a proportion. J Appl Stat. 2014;41:1516–1529] analyse the performance of 29 asymptotic two-tailed CIs for a proportion. The CI they select is based on the arcsine transformation (applied to the data increased by 0.5), although they also note the good behaviour of the classical score and Agresti and Coull methods (which may be preferred in certain circumstances). The aim of this commentary is to compare the four methods referred to above. The conclusion (for the classic error α of 5%) is that with a small sample size (n ≤ 80) the method that should be used is that of Yu et al.; for a large sample size (n ≥ 100), the four methods perform similarly, with a slight advantage for the Agresti and Coull method. In any case the Agresti and Coull method does not perform badly and tends to be conservative. The program which determines these four intervals is available at http://www.ugr.es/local/bioest/Z_LINEAR_K.EXE.
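For reference, the two classical intervals named above, the score (Wilson) interval and the Agresti and Coull interval, can be sketched in a few lines. This is a minimal illustration of the standard formulas only, not of the modified-midpoint interval of Yu et al.; the function names are ours.

```python
import math
from statistics import NormalDist

def wilson_ci(x, n, alpha=0.05):
    """Score (Wilson) confidence interval for a binomial proportion x/n."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def agresti_coull_ci(x, n, alpha=0.05):
    """Agresti-Coull interval: add z^2/2 pseudo-successes and pseudo-failures,
    then apply the simple Wald formula to the adjusted proportion."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_adj = n + z * z
    p_adj = (x + z * z / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)
```

The two intervals share the same centre, and the Agresti and Coull interval always contains the score interval, which is one reason it tends to be the more conservative of the two.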

2.
Asymptotic inferences about a linear combination of K independent binomial proportions are very frequent in applied research. Nevertheless, until quite recently research had focused almost exclusively on cases of K≤2 (particularly on one proportion and the difference of two proportions). This article focuses on cases of K>2, which have recently begun to receive more attention due to their great practical interest. In order to make this inference, there are several procedures which have not been compared: on the one hand, the score method (S0) and the adjusted Wald method proposed by Martín Andrés et al. (W3), which generalizes the method of Price and Bonett; on the other, the method of Zou et al. (N0) based on the Wilson confidence interval, which generalizes the Newcombe method. The article describes a new procedure (P0) based on the classic Peskun method, modifies the previous methods by giving them a continuity correction (methods S0c, W3c, N0c and P0c, respectively) and, finally, presents a simulation comparing the eight aforementioned procedures (selected from a total of 32 possible methods). The conclusion reached is that the S0c method is the best, although for very small samples (n_i ≤ 10 for all i) the W3 method is better. The P0 method would be optimal if one needs a method which is almost never too liberal, but this entails using a method which is too conservative and which provides excessively wide CIs. The W3 and P0 methods have the additional advantage of being very easy to apply. A free program which allows the application of the S0 and S0c methods (the most complex ones) can be obtained at http://www.ugr.es/local/bioest/Z_LINEAR_K.EXE.

3.
ARIMA (p, d, q) models were fitted to areal annual rainfall of two homogeneous regions in East Africa with rainfall records covering the period 1922–80. The areal estimates of the regional rainfall were derived from the time series of the first eigenvector, which was significantly dominant in each of the two regions. The first eigenvector accounted for about 80% of the total rainfall variance in each region.

The class of ARIMA (p, d, q) models which best fitted the areal indices of relative wetness/dryness was the ARMA (3, 1) class. Tests of forecasting skill, however, indicated low skill in the forecasts given by these models. In all cases the models accounted for less than 50% of the total variance.

Spectral analysis of the indices time series indicated dominant quasi-periodic fluctuations around 2.2–2.8 years, 3–3.7 years, 5–6 years and 10–13 years. These spectral bands, however, accounted for a very low proportion of the total rainfall variance.
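The quasi-periodic fluctuations mentioned above come from spectral analysis of the index series. A minimal raw-periodogram sketch in plain Python (using synthetic data with a built-in 3.7-year cycle, not the East African series) is:

```python
import math

def periodogram(x):
    """Raw periodogram of a demeaned series: I(f_k) = |DFT_k|^2 / n
    at the Fourier frequencies f_k = k/n, k = 1..n//2."""
    n = len(x)
    m = sum(x) / n
    x = [v - m for v in x]
    out = []
    for k in range(1, n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        out.append((k / n, (re * re + im * im) / n))
    return out

# synthetic 59-point "annual" series with a ~3.7-year cycle plus a weak nuisance term
series = [math.sin(2 * math.pi * t / 3.7) + 0.1 * math.cos(t) for t in range(59)]
freq, power = max(periodogram(series), key=lambda fp: fp[1])
period = 1 / freq  # dominant period, in years
```

In practice one would smooth the periodogram and test the spectral peaks against a red-noise background before declaring a band dominant; the sketch only locates the strongest raw peak.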


4.
5.
In this study an attempt is made to assess statistically the validity of two theories as to the origin of comets. This subject still provokes great controversy amongst astronomers, but recently two main schools of thought have developed.

These are that comets are of

(i) planetary origin,

(ii) interstellar origin.

Many theories have been expounded within each school of thought, but at the present time one theory in each is generally accepted. This paper sets out to identify the statistical implications of each theory and to evaluate each theory in terms of its implications.


6.
This paper analyses direct and indirect forms of dependence in the probability of scoring in a handball match, taking into account the mutual influence of both playing teams. Non-identical distribution (i.d.) and non-stationarity, which are commonly observed in sport games, are studied through the specification of time-varying parameters.

The model accounts for the binary character of the dependent variable, and for unobserved heterogeneity. The parameter dynamics is specified by a first-order auto-regressive process.

Data from the Handball World Championships 2001–2005 show that the dynamics of handball violate both independence and i.d., in some cases exhibiting non-stationary behaviour.


7.
The 1978 European Community Typology for Agricultural Holdings is described in this paper and contrasted with a data based, polythetic-multivariate classification based on cluster analysis.

The requirement to reduce the size of the variable set employed in an optimisation-partition method of clustering suggested the value of principal components and factor analysis for the identification of major ‘source’ dimensions against which to measure farm differences and similarities.

The Euclidean cluster analysis incorporating the reduced dimensions quickly converged to a stable solution and was little influenced by the initial number or nature of ‘seeding’ partitions of the data.

The assignment of non-sampled observations from the population to cluster classes was completed using classification functions.

The final scheme, based on a sample of over 2,000 observations, was found to be interpretable, meaningful in terms of agricultural structure and practice, and much superior in explanatory power to a version of the principal activity typology.


8.
Four procedures are suggested for estimating the parameter ‘a’ in the Pauling equation:

exp(-X/a) + exp(-Y/a) = 1.

The procedures are: the mean of individual solutions, least squares with Y the subject of the equation, least squares with X the subject of the equation, and maximum likelihood using a statistical model. In order to compare these estimates, we use Efron's (1979) bootstrap technique, since distributional results are not available. This example also illustrates the role of the bootstrap in statistical inference.
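Two of the four procedures, the mean of individual solutions and least squares with Y the subject, together with the bootstrap standard error, can be sketched as follows. This is an illustration only: the data are hypothetical (generated exactly from the equation with a = 2), and a coarse grid search stands in for a proper optimiser.

```python
import math
import random

def solve_a(x, y, lo=1e-6, hi=100.0):
    """Individual solution: the root in a of exp(-x/a) + exp(-y/a) = 1.
    For x, y > 0 the left side is increasing in a, so bisection applies."""
    g = lambda a: math.exp(-x / a) + math.exp(-y / a) - 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def ls_a(xs, ys):
    """Least squares with Y the subject: Y = -a*log(1 - exp(-X/a));
    minimise the residual sum of squares over a coarse grid (illustration only)."""
    def sse(a):
        return sum((y + a * math.log(1 - math.exp(-x / a))) ** 2
                   for x, y in zip(xs, ys))
    return min((0.01 * k for k in range(1, 1000)), key=sse)

def bootstrap_se(xs, ys, estimator, B=200, seed=0):
    """Efron (1979) bootstrap standard error: resample (x, y) pairs and re-estimate."""
    rng, n, reps = random.Random(seed), len(xs), []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(estimator([xs[i] for i in idx], [ys[i] for i in idx]))
    m = sum(reps) / B
    return math.sqrt(sum((r - m) ** 2 for r in reps) / (B - 1))

# hypothetical data generated exactly from the equation with a = 2
xs = [0.5, 1.0, 1.5, 2.0, 3.0]
ys = [-2.0 * math.log(1 - math.exp(-x / 2.0)) for x in xs]
a_hat = ls_a(xs, ys)
```

Because the pairs resampled by the bootstrap are drawn with replacement from the original data, the spread of the re-estimates stands in for the unavailable sampling distribution of the estimator of a.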


9.
Efficient, accurate and fast Markov chain Monte Carlo estimation methods based on the Implicit approach are proposed. In this article, we introduce the notion of the Implicit method for the estimation of parameters in Stochastic Volatility models.

Implicit estimation offers a substantial computational advantage for learning from observations without prior knowledge and thus provides a good alternative to classical Bayesian inference when priors are missing.

Both the Implicit and Bayesian approaches are illustrated using simulated data and applied to the analysis of daily stock returns on the CAC40 index.


10.
In this paper we consider the estimation of regression coefficients in two partitioned linear models which differ only in their covariance matrices. We refer to these as the full models and to their reduced counterparts as the small models. We give a necessary and sufficient condition for the equality between the best linear unbiased estimators (BLUEs) of X1β1 under the two full models. In particular, we consider the equality of the BLUEs under the full models assuming that they are equal under the small models.

11.
Remote sensing is a helpful tool for crop monitoring and vegetation-growth estimation at a country or regional scale. However, satellite images generally embody a compromise between the time frequency of observations and their resolution (i.e. pixel size). When high temporal resolution is required, we have to work with kilometric pixels, called mixed pixels, that represent the aggregated responses of multiple land covers. Disaggregation, or unmixing, is then necessary to downscale from the square kilometre to the local dynamics of each theme (crop, wood, meadows, etc.).

Assuming the land use is known, that is to say the proportion of each theme within each mixed pixel, we propose to address the downscaling issue through a generalization of varying-time regression models for longitudinal and/or functional data, introducing random individual effects. The estimators are built by expanding the mixed-pixel trajectories in B-spline functions and maximizing the log-likelihood with a backfitting-ECME algorithm. A BLUP formula then yields the 'best possible' estimates of the local temporal responses of each crop from observed mixed-pixel trajectories. We show that this model has many potential applications in remote sensing; an interesting one consists of coupling high and low spatial resolution images in order to perform temporal interpolation of high spatial resolution images (20 m), increasing the knowledge of particular crops in very precise locations.

The unmixing and temporal high-resolution interpolation approaches are illustrated on remote-sensing data obtained over South-Western France during the year 2002.


12.
The multivariate extremal index function is a measure of the clustering among the extreme values of a multivariate stationary sequence. In this article, we introduce a measure of the degree of clustering of upcrossings in a multivariate stationary sequence, called multivariate upcrossings index, which is a multivariate generalization of the concept of upcrossings index. We derive the main properties of this function, namely the relations with the multivariate extremal index and the clustering of upcrossings.

Imposing general local and asymptotic dependence restrictions on the sequence or on its marginals, we compute the multivariate upcrossings index from the marginal upcrossings indices and from the joint distribution of a finite number of variables. A couple of illustrative examples are presented.


13.
Reducing process variability is essential to many organisations. According to the pertinent literature, a quality system that utilizes quality techniques to reduce process variability is necessary. Quality programs that respond to measurement precision are central to quality systems, and the most common method of assessing the precision of a measurement system is repeatability and reproducibility (R&R). Few studies have investigated R&R using attribute data.

In modern manufacturing environments, automated manufacturing is becoming increasingly common; however, a measurement resolution problem exists in automatic inspection equipment, resulting in clusters and product defects. It is vital to monitor these bivariate quality characteristics effectively. This study presents a novel model for calculating R&R for bivariate attribute data. An alloy manufacturing case is used to illustrate the process and potential of the proposed model. The findings can be employed to evaluate and improve measurement systems with bivariate attribute data.


14.
Tree algorithms are a well-known class of random access algorithms with a provable maximum stable throughput under the infinite population model (as opposed to ALOHA or the binary exponential backoff algorithm). In this article, we propose a tree algorithm for opportunistic spectrum usage in cognitive radio networks. A channel in such a network is shared among so-called primary and secondary users, where the secondary users are allowed to use the channel only if there is no primary user activity. The tree algorithm designed in this article can be used by the secondary users to share the channel capacity left by the primary users.

We analyze the maximum stable throughput and mean packet delay of the secondary users by developing a tree-structured Quasi-Birth-Death Markov chain, under the assumption that the primary user activity can be modeled by a finite-state Markov chain and that packet lengths follow a discrete phase-type distribution.

Numerical experiments provide insight into the effect of various system parameters and indicate that the proposed algorithm makes good use of the bandwidth left by the primary users.


15.
In this paper we study, by means of randomized sampling, the long-run stability of an open Markov population fed by time-dependent Poisson inputs. We show that state probabilities within the transient states converge, even when the overall expected population dimension increases without bound, under general conditions on the transition matrix and input intensities.

Following the convergence results, we obtain ML estimators for a particular sequence of input intensities, where the sequence of new arrivals is modeled by a sigmoidal function. These estimators allow for the forecast, by confidence intervals, of the evolution of the relative population structure in the transient states.

Applying these results to the study of a consumer credit portfolio, we estimate the implicit default rate.


16.
When VAR models are used to predict future outcomes, the forecast error can be substantial. Through imposition of restrictions on the off-diagonal elements of the parameter matrix, however, the information in the process may be condensed to the marginal processes. In particular, if the cross-autocorrelations in the system are small and only a small sample is available, then such a restriction may reduce the forecast mean squared error considerably.

In this paper, we propose three different techniques for deciding whether to use the restricted or the unrestricted model, i.e. the full VAR(1) model or only the marginal AR(1) models. In a Monte Carlo simulation study, the three proposed tests behave quite differently depending on the parameter setting. One of them stands out, however, as the preferred choice and is shown to outperform the other estimators for a wide range of parameter settings.
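The restricted-versus-unrestricted choice above can be made concrete in the bivariate case. The sketch below (simulated data, plain OLS, with our own function names; the selection tests themselves are not reproduced) fits both the full VAR(1) coefficient matrix and the diagonal restriction, i.e. marginal AR(1) fits per series.

```python
import random

def simulate_var1(A, n, seed=42):
    """Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t, standard normal errors."""
    rng = random.Random(seed)
    y = [[0.0, 0.0]]
    for _ in range(n):
        p = y[-1]
        y.append([A[i][0] * p[0] + A[i][1] * p[1] + rng.gauss(0, 1) for i in range(2)])
    return y[1:]

def ols_var1(y):
    """Unrestricted estimator: OLS of y_t on y_{t-1}, full 2x2 coefficient matrix,
    solved equation by equation via the 2x2 normal equations."""
    X, Y = y[:-1], y[1:]
    s00 = sum(x[0] * x[0] for x in X)
    s01 = sum(x[0] * x[1] for x in X)
    s11 = sum(x[1] * x[1] for x in X)
    det = s00 * s11 - s01 * s01
    A = []
    for i in range(2):
        b0 = sum(X[t][0] * Y[t][i] for t in range(len(X)))
        b1 = sum(X[t][1] * Y[t][i] for t in range(len(X)))
        A.append([(s11 * b0 - s01 * b1) / det, (s00 * b1 - s01 * b0) / det])
    return A

def ols_marginal_ar1(y):
    """Restricted estimator: marginal AR(1) fit per series, off-diagonals fixed at 0."""
    A = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        num = sum(y[t + 1][i] * y[t][i] for t in range(len(y) - 1))
        den = sum(y[t][i] ** 2 for t in range(len(y) - 1))
        A[i][i] = num / den
    return A

# small cross-autocorrelations: the case where the restriction can pay off
A_true = [[0.6, 0.05], [0.05, 0.6]]
y = simulate_var1(A_true, 500)
A_full = ols_var1(y)
A_marg = ols_marginal_ar1(y)
```

With small off-diagonal coefficients and a short sample, the extra parameters of the full fit add estimation noise to the forecasts, which is exactly the trade-off the selection tests are designed to adjudicate.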


17.
In this paper, we discuss the implementation of fully Bayesian analysis of dynamic image sequences in the context of stochastic deformable templates for shape modelling, Markov/Gibbs random fields for modelling textures, and dynomation.

Throughout, Markov chain Monte Carlo algorithms are used to perform the Bayesian calculations.


18.
This article provides a procedure for the detection and identification of outliers in the spectral domain, where the Whittle maximum likelihood estimator of the panel data model proposed by Chen [W.D. Chen, Testing for spurious regression in a panel data model with the individual number and time length growing, J. Appl. Stat. 33(88) (2006b), pp. 759–772] is implemented. We extend the approach of Chang and co-workers [I. Chang, G.C. Tiao, and C. Chen, Estimation of time series parameters in the presence of outliers, Technometrics 30(2) (1988), pp. 193–204] to the spectral domain, and through the Whittle approach we can quickly detect and identify the type of outliers. A fixed-effects panel data model is used, in which the remainder disturbance is assumed to be a fractional autoregressive integrated moving-average (ARFIMA) process, and the likelihood ratio criterion is obtained directly through the modified inverse Fourier transform. This saves much time, especially when the estimated model uses a huge data set.

Through Monte Carlo experiments, the consistency of the estimator is examined by growing the individual number N and time length T, in which the long memory remainder disturbances are contaminated with two types of outliers: additive outlier and innovation outlier. From the power tests, we see that the estimators are quite successful and powerful.

In the empirical study, we apply the model to Taiwan's computer motherboard industry. Weekly data from 1 January 2000 to 31 October 2006 for nine familiar companies are used. The proposed model has a smaller mean square error and shows more distinctive aggressive properties than the raw data model does.


19.
Alternative methods of trend extraction and of seasonal adjustment are described that operate in the time domain and in the frequency domain.

The time-domain methods that are implemented in the TRAMO–SEATS and the STAMP programs are compared. An abbreviated time-domain method of seasonal adjustment that is implemented in the IDEOLOG program is also presented. Finite-sample versions of the Wiener–Kolmogorov filter are described that can be used to implement the methods in a common way.

The frequency-domain method, which is also implemented in the IDEOLOG program, employs an ideal frequency selective filter that depends on identifying the ordinates of the Fourier transform of a detrended data sequence that should lie in the pass band of the filter and those that should lie in its stop band. Filters of this nature can be used both for extracting a low-frequency cyclical component of the data and for extracting the seasonal component.


20.
Measures of the spread of data for random sums arise frequently in many problems and have a wide range of applications in real life, such as in insurance (e.g., the total claim size in a portfolio). The exact distribution of a random sum is extremely difficult to determine, and the normal approximation usually performs very badly for these complex distributions. A better method of approximating a random-sum distribution involves the use of saddlepoint approximations.

Saddlepoint approximations are powerful tools for providing accurate expressions for distribution functions that are not known in closed form. This method not only yields an accurate approximation near the center of the distribution but also controls the relative error in the far tail of the distribution.

In this article, we discuss approximations to the unknown complex random-sum Poisson–Erlang random variable, which has a continuous distribution, and to the random-sum Poisson-negative binomial random variable, which has a discrete distribution. We show that the saddlepoint approximation method is not only quick, dependable, stable and accurate enough for general statistical inference, but is also applicable without deep knowledge of probability theory. Numerical examples applying the saddlepoint approximation method to continuous and discrete random-sum Poisson distributions are presented.
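As a hedged sketch of the continuous case, the Lugannani–Rice saddlepoint tail approximation for a Poisson–Erlang random sum can be written from the cumulant generating function alone. The parameter values below are hypothetical, and the formula is stated for x away from the mean, where the saddlepoint is bounded away from zero.

```python
import math
from statistics import NormalDist

def compound_tail(x, lam, k, beta):
    """Lugannani-Rice approximation to P(S > x) for the compound sum
    S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i ~ Erlang(k, rate beta).
    CGF: K(s) = lam * ((1 - s/beta)^(-k) - 1), defined for s < beta."""
    K  = lambda s: lam * ((1 - s / beta) ** (-k) - 1)
    K1 = lambda s: lam * k / beta * (1 - s / beta) ** (-k - 1)
    K2 = lambda s: lam * k * (k + 1) / beta ** 2 * (1 - s / beta) ** (-k - 2)
    # solve the saddlepoint equation K'(s) = x by bisection; K' is increasing
    lo, hi = -1e3, beta - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if K1(mid) < x else (lo, mid)
    s = 0.5 * (lo + hi)
    w = math.copysign(math.sqrt(2 * (s * x - K(s))), s)
    u = s * math.sqrt(K2(s))
    nd = NormalDist()
    return 1 - nd.cdf(w) + nd.pdf(w) * (1 / u - 1 / w)
```

For example, with lam = 10 claims and Erlang(2, 1) claim sizes the mean total is 20, and the approximation gives a right-tail probability at x = 30 noticeably above the plain normal value, reflecting the right skew of the compound distribution.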

