Similar Documents
20 similar documents found.
1.
Two approximation methods are used to obtain the Bayes estimate of the renewal function of the inverse Gaussian renewal process. Both approximations use a gamma-type conditional prior for the location parameter, a non-informative marginal prior for the shape parameter, and a squared error loss function. Simulations compare the accuracy of the estimators and indicate that the Tierney and Kadane (T–K)-based estimator outperforms the Maximum Likelihood (ML)- and Lindley (L)-based estimators. Computations for the T–K-based Bayes estimate employ the generalized Newton's method as well as a recent modified Newton's method with cubic convergence to maximize the modified likelihood functions. The program is available from the author.

2.
In discriminant analysis, the dimension of the hyperplane spanned by the population mean vectors is called the dimensionality. The procedures commonly used to estimate this dimension involve testing a sequence of dimensionality hypotheses, as well as model-fitting approaches based on (consistent) Akaike's method, (modified) Mallows' method and Schwarz's method. The marginal log-likelihood (MLL) method is developed and the asymptotic distribution of the dimensionality estimated by this method for normal populations is derived. Furthermore, a modified marginal log-likelihood (MMLL) method is also considered. The MLL method is not consistent in large samples, and two modified criteria are proposed which attain asymptotic consistency. Some comments are made with regard to the robustness of the method to departures from normality. The operating characteristics of the various methods proposed are examined and compared.

3.
Exploratory methods for determining appropriate lagged variables in a vector nonlinear time series model are investigated. The first is a multivariate extension of the R statistic considered by Granger and Lin (1994), which is based on an estimate of the mutual information criterion. The second method uses Kendall's ρ and partial ρ statistics for lag determination. The methods provide nonlinear analogues of the autocorrelation and partial autocorrelation matrices for a vector time series. Simulation studies indicate that the R statistic reliably identifies appropriate lagged nonlinear moving average terms in a vector time series, while Kendall's ρ and partial ρ statistics have some power in identifying appropriate lagged nonlinear moving average and autoregressive terms, respectively, when the nonlinear relationship between lagged variables is monotonic. For illustration, the methods are applied to a set of annual temperature and tree ring measurements at Campito Mountain in California.
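As a rough illustration of the rank-correlation analogue of the autocorrelation matrix described above, the sketch below computes a lag-k matrix of Kendall's coefficients between the components of a vector series. The simulated two-dimensional series, the monotone nonlinearities and the lag range are purely illustrative assumptions and are not the Campito Mountain data.

```python
import numpy as np
from scipy.stats import kendalltau

def lagged_kendall_matrix(x, lag):
    """Rank-correlation analogue of the lag-`lag` autocorrelation matrix.

    x is a (T, d) array holding a d-dimensional series; entry (i, j) is
    Kendall's coefficient between component i at time t and component j
    at time t - lag.
    """
    T, d = x.shape
    out = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            out[i, j] = kendalltau(x[lag:, i], x[:T - lag, j])[0]
    return out

# Illustrative use on a simulated two-dimensional nonlinear series
# (not the temperature / tree-ring data analysed in the paper).
rng = np.random.default_rng(0)
T = 500
e = rng.standard_normal((T, 2))
y = np.zeros((T, 2))
for t in range(1, T):
    y[t, 0] = 0.5 * np.tanh(y[t - 1, 1]) + e[t, 0]    # monotone dependence on the other series
    y[t, 1] = 0.6 * np.arctan(y[t - 1, 0]) + e[t, 1]  # monotone nonlinear dependence

for k in (1, 2, 3):
    print("lag", k)
    print(np.round(lagged_kendall_matrix(y, k), 2))
```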

4.
We study the finite-sample properties of White's test for heteroskedasticity in stochastic regression models, where the explanatory variables are random rather than fixed. We investigate by simulation the effect of non-independence between the explanatory variables and the error term, and of heteroskedasticity, on White's test. A standard bootstrap method in a computationally convenient form is found to work well with respect to both size and power.
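A minimal sketch of one standard residual-bootstrap scheme for White's test follows; it resamples centred OLS residuals under the homoskedastic null and is a generic illustration under those assumptions, not necessarily the exact computationally convenient form studied in the paper. The statsmodels function het_white supplies the LM statistic.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

def white_stat(y, X):
    """LM statistic of White's heteroskedasticity test (X includes a constant)."""
    resid = sm.OLS(y, X).fit().resid
    return het_white(resid, X)[0]

def bootstrap_white_pvalue(y, X, n_boot=999, seed=0):
    """Bootstrap p-value: resample centred residuals under the homoskedastic null."""
    rng = np.random.default_rng(seed)
    fit = sm.OLS(y, X).fit()
    stat = het_white(fit.resid, X)[0]
    centred = fit.resid - fit.resid.mean()
    count = 0
    for _ in range(n_boot):
        y_star = fit.fittedvalues + rng.choice(centred, size=len(y), replace=True)
        if white_stat(y_star, X) >= stat:
            count += 1
    return (count + 1) / (n_boot + 1)

# Illustrative use with simulated heteroskedastic data.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + np.abs(x) * rng.standard_normal(200)   # error variance grows with |x|
print(bootstrap_white_pvalue(y, X))
```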

5.
A generalized version of the inverted exponential distribution (IED) is considered in this paper. This lifetime distribution is capable of modeling various shapes of failure rates, and hence various shapes of aging criteria. The model can be considered as another useful two-parameter generalization of the IED. Maximum likelihood and Bayes estimates for the two parameters of the generalized inverted exponential distribution (GIED) are obtained on the basis of a progressively type-II censored sample. We also show the existence, uniqueness and finiteness of the maximum likelihood estimates of the parameters of the GIED based on progressively type-II censored data. Bayesian estimates are obtained using a squared error loss function. These Bayesian estimates are evaluated by applying Lindley's approximation method and via an importance sampling technique. The importance sampling technique is used to compute the Bayes estimates and the associated credible intervals. We further consider the Bayes prediction problem based on the observed samples, and provide the appropriate predictive intervals. Monte Carlo simulations are performed to compare the performances of the proposed methods, and a data set has been analyzed for illustrative purposes.

6.
The maximum likelihood and Bayesian approaches are considered for the two-parameter generalized exponential distribution based on record values together with the numbers of trials following the record values (inter-record times). The maximum likelihood estimates are obtained under the inverse sampling and the random sampling schemes. It is shown that the maximum likelihood estimator of the shape parameter converges in mean square to the true value when the scale parameter is known. The Bayes estimates of the parameters are developed by using Lindley's approximation and Markov chain Monte Carlo methods, owing to the lack of explicit forms under the squared error and the linear-exponential loss functions. Confidence intervals for the parameters are constructed based on asymptotic and Bayesian methods. The Bayes and the maximum likelihood estimators are compared in terms of the estimated risk by Monte Carlo simulations. The comparison of the estimators based on the record values alone and on the record values with their corresponding inter-record times is also performed by Monte Carlo simulation.

7.
The efficacy and the asymptotic relative efficiency (ARE) of a weighted sum of Kendall's taus, a weighted sum of Spearman's rhos, a weighted sum of Pearson's r's, and a weighted sum of z-transformations of the Fisher–Yates correlation coefficients, in the presence of a blocking variable, are discussed. A method of selecting the weighting constants that maximize the efficacy of these four correlation coefficients is proposed. Estimates, test statistics and confidence intervals for the four weighted correlation coefficients are also developed. To compare the small-sample properties of the four tests, a simulation study is performed. The theoretical and simulated results all favor the weighted sum of the Pearson correlation coefficients with the optimal weights, as well as the weighted sum of z-transformations of the Fisher–Yates correlation coefficients with the optimal weights.
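The sketch below combines per-block Pearson correlations through Fisher's z-transformation. The inverse-variance weights n_b - 3 are an assumption made here for illustration (the usual variance of Fisher's z); they are not necessarily the optimal weights derived in the paper.

```python
import numpy as np
from scipy.stats import pearsonr, norm

def combined_fisher_z(blocks):
    """Combine per-block Pearson correlations via Fisher's z.

    blocks : list of (x, y) pairs, one per level of the blocking variable.
    Weights are the usual inverse-variance weights n_b - 3 (an assumption,
    not necessarily the optimal weights derived in the paper).
    Returns the pooled correlation, a two-sided p-value for H0: rho = 0,
    and a 95% confidence interval.
    """
    z, w = [], []
    for x, y in blocks:
        r = pearsonr(x, y)[0]
        z.append(np.arctanh(r))
        w.append(len(x) - 3)
    z, w = np.asarray(z), np.asarray(w, dtype=float)
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    p = 2 * norm.sf(abs(z_bar / se))
    lo, hi = np.tanh(z_bar - 1.96 * se), np.tanh(z_bar + 1.96 * se)
    return np.tanh(z_bar), p, (lo, hi)

# Illustrative use with two simulated blocks sharing a common correlation.
rng = np.random.default_rng(2)
def block(n, rho):
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
    return x, y

print(combined_fisher_z([block(30, 0.4), block(50, 0.4)]))
```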

8.
The maximum likelihood and maximum partial likelihood approaches to the proportional hazards model are unified. The purpose is to give a general approach to the analysis of the proportional hazards model, whether the baseline distribution is absolutely continuous, discrete, or a mixture. The advantage is that heavily tied data are analyzed with a discrete-time model, while data with no ties are analyzed with ordinary Cox regression. Data sets in between are treated by a compromise between the discrete-time model and Efron's approach to tied data in survival analysis, and the transitions between modes are automatic. A simulation study is conducted comparing the proposed approach to standard methods of handling ties. Finally, a recent suggestion that revives Breslow's approach to tied data is discussed.

9.
Several methods exist for testing interaction in unreplicated two-way layouts. Some are based on specifying a functional form for the interaction term and perform well provided that the functional form is appropriate. Other methods do not require such a functional form to be specified, but they only test for the presence of non-additivity and do not provide a suitable estimate of the error variance for a non-additive model. This paper presents a method for testing for interaction in unreplicated two-way tables that is based on testing all pairwise interaction contrasts. This method (i) is easy to implement, (ii) does not assume a functional form for the interaction term, (iii) can find a sub-table of the data which may be free from interaction and on which to base the estimate of the unknown error variance, and (iv) can be used for incomplete two-way layouts. The proposed method is illustrated using examples and its power is investigated via simulation studies. Simulation results show that the proposed method is competitive with existing methods for testing for interaction in unreplicated two-way layouts.

10.
The most common asymptotic procedure for analyzing a 2 × 2 table (under the conditioning principle) is the chi-squared test with correction for continuity (c.f.c.). According to the way this is applied, up to the present four methods have been obtained: one for one-tailed tests (Yates') and three for two-tailed tests (those of Mantel, Conover and Haber). In this paper two further methods are defined (one for each case), the six resulting methods are grouped in families, their individual behaviour is studied and the optimal one is selected. The conclusions are established on the assumption that the method studied is applied indiscriminately (without being subjected to validity conditions), taking a basis of 400,000 tables (with sample sizes n between 20 and 300 and exact P-values between 1% and 10%) and a criterion of evaluation based on the percentage of times in which the approximate P-value differs from the exact one (Fisher's exact test) by an excessive amount. The optimal c.f.c. depends on n, on E (the minimum expected quantity) and on the error α to be used, but the rule of selection is not complicated and the new methods proposed are frequently selected. The paper also studies what occurs when E ≥ 5, as well as the chi-squared statistic corrected by the factor (n − 1).

11.
Cohen's kappa coefficient is traditionally used to quantify the degree of agreement between two raters on a nominal scale. Correlated kappas occur in many settings (e.g., repeated agreement by raters on the same individuals, or concordance between diagnostic tests and a gold standard) and often need to be compared. While different techniques are now available for modeling correlated kappa coefficients, they are generally not easy to implement in practice. The present paper describes a simple alternative method based on the bootstrap for comparing correlated kappa coefficients. The method is illustrated by examples and its type I error is studied using simulations. The method is also compared with the second-order generalized estimating equations and weighted least-squares methods.
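A minimal sketch of such a bootstrap comparison is shown below, assuming two diagnostic tests scored against the same gold standard on the same subjects; subjects (not individual ratings) are resampled so that the correlation between the two kappas is preserved. The data and agreement rates are simulated for illustration only.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def bootstrap_kappa_difference(gold, rater1, rater2, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the difference between two correlated kappas.

    Both kappas are computed against the same gold standard on the same
    subjects, so subjects are resampled to preserve the correlation between
    the two coefficients.
    """
    gold, rater1, rater2 = map(np.asarray, (gold, rater1, rater2))
    rng = np.random.default_rng(seed)
    n = len(gold)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        diffs[b] = (cohen_kappa_score(gold[idx], rater1[idx])
                    - cohen_kappa_score(gold[idx], rater2[idx]))
    point = cohen_kappa_score(gold, rater1) - cohen_kappa_score(gold, rater2)
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return point, (lo, hi)

# Illustrative use with simulated binary ratings.
rng = np.random.default_rng(3)
gold = rng.integers(0, 2, 200)
test_a = np.where(rng.random(200) < 0.85, gold, 1 - gold)   # agrees with gold 85% of the time
test_b = np.where(rng.random(200) < 0.70, gold, 1 - gold)   # agrees with gold 70% of the time
print(bootstrap_kappa_difference(gold, test_a, test_b))
```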

12.
Nonparametric methods such as Theil's method and Hussain's method have been applied to simple linear regression problems for estimating the slope of the regression line. We extend these methods and propose a robust estimator of the coefficient of a first-order autoregressive process under various distributional shapes. A simulation study comparing Theil's estimator, Hussain's estimator, the least squares estimator, and the proposed estimator is also presented.
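As an illustration of carrying a nonparametric slope estimator over to autoregression, the sketch below applies the classical Theil slope (the median of all pairwise slopes) to the scatter of x_t against x_{t-1}. This is a stand-in under that assumption, not necessarily the exact robust estimator proposed in the paper.

```python
import numpy as np

def theil_ar1(x):
    """Theil-type estimate of the AR(1) coefficient.

    Applies the classical Theil slope (median of all pairwise slopes) to the
    scatter of x_t against x_{t-1}; an illustrative analogue, not the paper's
    exact proposal.
    """
    y, z = x[1:], x[:-1]          # response x_t and regressor x_{t-1}
    n = len(y)
    slopes = []
    for i in range(n):
        for j in range(i + 1, n):
            if z[j] != z[i]:
                slopes.append((y[j] - y[i]) / (z[j] - z[i]))
    return np.median(slopes)

# Illustrative use: AR(1) series with heavy-tailed (t with 2 df) innovations.
rng = np.random.default_rng(4)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.6 * x[t - 1] + rng.standard_t(2)

ls = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)   # least squares, for comparison
print("Theil-type:", theil_ar1(x), "LS:", ls)
```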

13.
Scheffé (1970) introduced a method for deriving confidence sets for directions and ratios of normals. The procedure requires use of an approximation, and Scheffé provided evidence that the method performs well for cases in which the variances of the random deviates are known. This paper extends Scheffé's numerical integrations to the case of unknown variances. Our results indicate that Scheffé's method works well when the variances are unknown.

14.
The problem posed by exact confidence intervals (CIs) which can be either all-inclusive or empty for a nonnegligible set of sample points is known to have no solution within CI theory. Confidence belts causing improper CIs can be modified by using margins of error from the renewed theory of errors initiated by J. W. Tukey—briefly described in the article—for which an extended Fraser's frequency interpretation is given. This approach is consistent with Kolmogorov's axiomatization of probability, in which a probability and an error measure obey the same axioms, although the connotation of the two words is different. An algorithm capable of producing a margin of error for any parameter derived from the five parameters of the bivariate normal distribution is provided. Margins of error correcting Fieller's CIs for a ratio of means are obtained, as are margins of error replacing Jolicoeur's CIs for the slope of the major axis. Margins of error using Dempster's conditioning that can correct optimal, but improper, CIs for the noncentrality parameter of a noncentral chi-square distribution are also given.

15.
For the hierarchical Poisson and gamma model, we calculate the Bayes posterior estimator of the parameter of the Poisson distribution under Stein's loss function, which penalizes gross overestimation and gross underestimation equally, and the corresponding posterior expected Stein's loss (PESL). We also obtain the Bayes posterior estimator of the parameter under the squared error loss and the corresponding PESL. Moreover, we obtain the empirical Bayes estimators of the parameter of the Poisson distribution with a conjugate gamma prior by two methods. The numerical simulations illustrate the two inequalities of the Bayes posterior estimators and the PESLs, show that the moment estimators and the maximum likelihood estimators (MLEs) are consistent estimators of the hyperparameters, and assess the goodness-of-fit of the model to the simulated data. The numerical results indicate that the MLEs are better than the moment estimators for estimating the hyperparameters. Finally, we use the attendance data on 314 high school juniors from two urban high schools to illustrate our theoretical studies.
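A minimal sketch of the two posterior estimators for the conjugate Poisson-gamma case follows, assuming a Gamma(a, b) prior in the shape-rate parameterization and Stein's (entropy) loss L(lambda, d) = d/lambda - log(d/lambda) - 1, for which the Bayes rule is the reciprocal of the posterior mean of 1/lambda. These are the textbook forms, not every quantity derived in the paper.

```python
import numpy as np

def poisson_gamma_bayes(x, a, b):
    """Bayes estimators of a Poisson rate under a Gamma(a, b) prior (shape a, rate b).

    The posterior is Gamma(a + sum(x), b + n).  Under squared error loss the
    Bayes estimator is the posterior mean; under Stein's (entropy) loss it is
    1 / E[1/lambda | x], which requires a + sum(x) > 1.
    """
    n, s = len(x), np.sum(x)
    alpha, beta = a + s, b + n
    d_squared_error = alpha / beta           # posterior mean
    d_stein = (alpha - 1) / beta             # reciprocal of the posterior mean of 1/lambda
    return d_stein, d_squared_error

# Illustrative use: the Stein-loss estimator never exceeds the squared-error one.
rng = np.random.default_rng(5)
x = rng.poisson(3.0, size=50)
print(poisson_gamma_bayes(x, a=2.0, b=1.0))
```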

16.
In this note it is proved that, in computing percentage points of a distribution, Halley's method yields convergent solutions under most conditions; the efficiency of the method is demonstrated with some examples.
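To illustrate, the sketch below applies Halley's method to the percentage points of the standard normal distribution; the simplified update uses phi'(x) = -x * phi(x) and is an illustrative special case, not the general treatment of the note.

```python
import numpy as np
from scipy.stats import norm

def halley_normal_quantile(p, x0=0.0, tol=1e-12, max_iter=50):
    """Standard normal percentage point by Halley's method.

    Solves g(x) = Phi(x) - p = 0.  With g' = phi(x) and g'' = -x * phi(x),
    Halley's update  x <- x - 2*g*g' / (2*g'**2 - g*g'')  simplifies to
    x <- x - 2*g / (2*phi(x) + g*x).  Cubic convergence typically needs only
    a handful of iterations from a rough starting value.
    """
    x = x0
    for _ in range(max_iter):
        g = norm.cdf(x) - p
        step = 2.0 * g / (2.0 * norm.pdf(x) + g * x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative check against scipy's own inverse CDF.
for p in (0.5, 0.9, 0.975, 0.999):
    print(p, halley_normal_quantile(p), norm.ppf(p))
```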

17.
Time to failure due to fatigue is one of the common quality characteristics in material engineering applications. In this article, acceptance sampling plans are developed for the Birnbaum–Saunders distribution percentiles when the life test is truncated at a pre-specified time. The minimum sample size necessary to ensure the specified life percentile is obtained under a given customer's risk. The operating characteristic values (and curves) of the sampling plans as well as the producer's risk are presented. An R package named spbsq is developed to implement the proposed sampling plans. Two examples with real data sets are also given as illustration.

18.
A single-outlier data set containing independent random variables is considered, such that all observations except one have the same distribution. To describe the model of interest, a location-scale family of distributions is used, and the estimation problem for the parameters is studied when the data are collected under a Type-II censoring scheme. Moreover, three different predictors are presented to predict the censored order statistics. They are also compared with respect to both the mean squared prediction error and Pitman's measure of closeness criteria. The effect of the outlier parameter as well as of the censoring rate on the performance of the proposed estimator and predictors is studied. The results of the paper are illustrated via a real data set. Finally, some conclusions are stated.

19.
Since the early 1990s, there has been increasing interest in statistical methods for detecting global spatial clustering in data sets. Tango's index is one of the most widely used spatial statistics for assessing whether spatially distributed disease rates are independent or clustered. Interestingly, this statistic can be partitioned into the sum of two terms: one term is similar to the usual chi-square statistic, being based on deviation patterns between the observed and expected values, while the other, similar to Moran's I, is able to detect the proximity of similar values. In this paper, we examine this hybrid nature of Tango's index. The goal is to evaluate the possibility of distinguishing the spatial sources of clustering: lack of fit or spatial autocorrelation. To comply with the aims of the work, a simulation study is performed, by which examples of patterns driving the goodness-of-fit and spatial autocorrelation components of the statistic are provided. As for the latter aspect, it is worth noting that inducing spatial association among count data without adding lack of fit is not an easy task; in this respect, the overlapping sums method is adopted. The main findings of the simulation experiment are illustrated and a comparison with previous research on this topic is also highlighted.
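The hybrid decomposition can be made concrete with the sketch below, which assumes the common form of Tango's statistic, C = (r - p)' A (r - p), with r the observed case proportions, p the expected proportions and a_ij = exp(-d_ij / tau); the diagonal terms give the goodness-of-fit component and the off-diagonal terms the Moran-type spatial-autocorrelation component. The grid, expected counts and tau are illustrative choices, not those of the paper.

```python
import numpy as np

def tango_components(cases, expected, coords, tau=1.0):
    """Tango-type clustering index split into its two components.

    Assumes C = (r - p)' A (r - p) with a_ij = exp(-d_ij / tau).  The i == j
    contributions behave like a goodness-of-fit term, the i != j contributions
    like a Moran-type spatial-autocorrelation term.
    """
    cases = np.asarray(cases, dtype=float)
    expected = np.asarray(expected, dtype=float)
    r = cases / cases.sum()
    p = expected / expected.sum()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.exp(-d / tau)
    diff = r - p
    total = diff @ A @ diff
    gof = np.sum(np.diag(A) * diff ** 2)      # i == j contributions
    autocorr = total - gof                    # i != j contributions
    return total, gof, autocorr

# Illustrative use on a small artificial grid of regions with no true clustering.
rng = np.random.default_rng(6)
coords = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
expected = np.full(25, 10.0)
cases = rng.poisson(expected)
print(tango_components(cases, expected, coords, tau=2.0))
```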

20.
It has long been known that, for many joint distributions, Kendall's τ and Spearman's ρ take different values, as they measure different aspects of the dependence structure. Although the classical inequalities between Kendall's τ and Spearman's ρ for pairs of random variables are known, the joint distributions which attain the bounds between Kendall's τ and Spearman's ρ are difficult to find. We use the simulated annealing method to find the bounds for ρ in terms of τ, together with the corresponding joint distributions which attain those bounds. Furthermore, using the same method, we find improved bounds between τ and ρ, which differ from those given by Durbin and Stuart.
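For a quick numerical sense of how the two coefficients differ, the sketch below computes the sample Kendall's τ and Spearman's ρ for a monotone but nonlinear dependence; it does not reproduce the simulated annealing search or the improved bounds of the paper.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def tau_and_rho(x, y):
    """Sample Kendall's tau and Spearman's rho for the same data.

    The two coefficients usually differ because they weight concordance
    patterns differently; classical population-level inequalities (such as
    Daniels' -1 <= 3*tau - 2*rho <= 1) constrain how far apart they can be.
    """
    tau = kendalltau(x, y)[0]
    rho = spearmanr(x, y)[0]
    return tau, rho, 3 * tau - 2 * rho

# Illustrative use: a monotone but nonlinear dependence.
rng = np.random.default_rng(7)
x = rng.standard_normal(1000)
y = np.exp(x) + 0.5 * rng.standard_normal(1000)
print(tau_and_rho(x, y))
```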
