Similar Literature: 20 similar documents found
1.
In this paper, an alternative model is examined for the distribution arising out of ascertainment. The weighted beta-binomial is suggested for this purpose as it incorporates the variability in the parameter ø of the weighted binomial distribution. The latter distribution has been the model considered by Rao (1965, 1985) and Kocherlakota and Kocherlakota (1990). Techniques for separate families introduced in Kocherlakota and Kocherlakota (1986) are applied to demonstrate that the weighted beta-binomial model is more appropriate in this situation.
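The basic weighting construction can be illustrated with a short sketch: under ascertainment with weight w(x) = x, the size-biased version of a base pmf f is f_w(x) = x f(x) / E[X]. The snippet below applies this to a beta-binomial base distribution; the parameter values are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.stats import betabinom

# Illustrative parameters (not from the paper): n trials, Beta(a, b) mixing.
n, a, b = 10, 2.0, 3.0

x = np.arange(n + 1)
f = betabinom.pmf(x, n, a, b)          # base beta-binomial pmf
mean = n * a / (a + b)                 # E[X] for the beta-binomial

# Size-biased (weighted, w(x) = x) beta-binomial pmf, as used for ascertainment.
f_weighted = x * f / mean

print(f_weighted.sum())                # sanity check: sums to 1
```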

2.
The generalized variance plays an important and useful role as a measure for comparing the overall variability of different populations in the biological sciences (Goodman, 1968; Kocherlakota and Kocherlakota, 1983; Sokal, 1965). Here we present simple and elegant multivariate analogues of Bartlett's and Hartley's tests of homogeneity. Large-sample distributions of the statistics are presented and the practical usefulness of the tests is demonstrated through several examples.
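As a rough illustration (a Bartlett/Box-type statistic, not necessarily the one proposed in the paper), a comparison of covariance matrices across k groups through their log generalized variances can be computed as follows; the data are simulated purely for the example.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulated example: k groups of p-variate observations (illustrative only).
k, p = 3, 2
samples = [rng.multivariate_normal(np.zeros(p), np.eye(p), size=30) for _ in range(k)]

ns = np.array([s.shape[0] for s in samples])
covs = [np.cov(s, rowvar=False) for s in samples]            # unbiased S_i
gen_vars = [np.linalg.det(S) for S in covs]                  # generalized variances |S_i|

# Pooled covariance and a Bartlett/Box-type statistic based on log generalized variances.
S_pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - k)
M = (ns.sum() - k) * np.log(np.linalg.det(S_pooled)) - sum(
    (n - 1) * np.log(gv) for n, gv in zip(ns, gen_vars)
)

df = (k - 1) * p * (p + 1) // 2
print("M =", M, "approx. p-value =", chi2.sf(M, df))         # large-sample chi-square reference
```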

3.
In this paper, we are interested in the weighted distributions of a bivariate three-parameter logarithmic series distribution studied by Kocherlakota and Kocherlakota (1990). The weighted versions of the model are derived with weight W(x, y) = x[r] y[s]. Explicit expressions for the probability mass function and probability generating functions are derived in the case r = s = 1. The marginal and conditional distributions are derived in the general case. The maximum likelihood estimation of the parameters, in both the two-parameter and three-parameter cases, is studied. A procedure for computer generation of bivariate data from a discrete distribution is described. This enables us to present two examples illustrating the methods developed for finding the maximum likelihood estimates.
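The data-generation step can be sketched generically: for any bivariate discrete distribution whose pmf can be evaluated on a truncated support, one can sample by drawing indices from the flattened probability table. The pmf used below is a simple placeholder, not the bivariate logarithmic series distribution of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_bivariate_discrete(pmf, x_max, y_max, size):
    """Sample (X, Y) pairs from a bivariate pmf evaluated on a truncated grid."""
    xs, ys = np.meshgrid(np.arange(x_max + 1), np.arange(y_max + 1), indexing="ij")
    probs = pmf(xs, ys).astype(float)
    probs /= probs.sum()                      # renormalise after truncation
    idx = rng.choice(probs.size, size=size, p=probs.ravel())
    return np.column_stack(np.unravel_index(idx, probs.shape))

# Placeholder pmf (independent geometric-like tails), purely for illustration.
toy_pmf = lambda x, y: 0.5 ** (x + 1) * 0.3 * 0.7 ** y

data = sample_bivariate_discrete(toy_pmf, x_max=30, y_max=30, size=1000)
print(data.mean(axis=0))
```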

4.
Recently, Balakrishnan and Kocherlakota (1985) have proposed robust two-sided tolerance limits based on the MML estimators. They have simulated the actual γ values attained by their procedure and by the classical procedure based on X̄ and s. However, there seems to have been an error in their simulation. Here, we present the corrected table of the simulated values of γ, which also reverses their recommendations.
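For reference, the classical two-sided normal tolerance limits X̄ ± k s can be computed with Howe's well-known approximation to the tolerance factor k; this sketch illustrates the classical procedure only, not the MML-based robust procedure discussed in the paper.

```python
import numpy as np
from scipy.stats import norm, chi2

def classical_tolerance_limits(x, coverage=0.90, confidence=0.95):
    """Two-sided normal tolerance limits X_bar +/- k*s (Howe's approximation for k)."""
    n = len(x)
    xbar, s = np.mean(x), np.std(x, ddof=1)
    z = norm.ppf((1 + coverage) / 2)
    chi2_q = chi2.ppf(1 - confidence, n - 1)           # lower chi-square quantile
    k = np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2_q)
    return xbar - k * s, xbar + k * s

rng = np.random.default_rng(2)
print(classical_tolerance_limits(rng.normal(10, 2, size=25)))
```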

5.
For testing the fit of a discrete distribution, use of the probability generating function and its empirical counterpart has been suggested in Kocherlakota and Kocherlakota (1986). In the present paper, a particular functional of the corresponding empirical probability generating function process is proposed as a measure of the discrepancy between the data and the hypothesis. The asymptotic behavior of the empirical probability generating function when a parameter is estimated is obtained. The study is exemplified for the Poisson case only, but the procedure can be extended to other discrete distributions.
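A minimal sketch of the idea for the Poisson case: compute the empirical pgf G_n(t) = (1/n) Σ t^{X_i}, compare it with the fitted Poisson pgf, and calibrate a discrepancy functional by parametric bootstrap. The sup-distance functional and the bootstrap calibration below are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def epgf_statistic(x, t_grid):
    """Sup-distance between the empirical pgf and the fitted Poisson pgf."""
    lam = x.mean()                                        # ML estimate of the Poisson mean
    emp = np.mean(np.power.outer(t_grid, x), axis=1)      # G_n(t) = (1/n) sum t^{X_i}
    fit = np.exp(lam * (t_grid - 1))                      # Poisson pgf at the estimate
    return np.sqrt(len(x)) * np.max(np.abs(emp - fit))

x = rng.poisson(2.0, size=200)
t_grid = np.linspace(0, 1, 101)
obs = epgf_statistic(x, t_grid)

# Parametric bootstrap approximation to the null distribution (estimation effect included).
boot = np.array([epgf_statistic(rng.poisson(x.mean(), size=len(x)), t_grid) for _ in range(999)])
print("statistic:", obs, "bootstrap p-value:", np.mean(boot >= obs))
```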

6.
Summary.  We propose 'Dunnett-type' test procedures to test for simple tree order restrictions on the means of p independent normal populations. The new tests are based on the estimation procedures that were introduced by Hwang and Peddada and later by Dunbar, Conaway and Peddada. The procedures proposed are also extended to test for 'two-sided' simple tree order restrictions. For non-normal data, nonparametric versions based on ranked data are also suggested. Using computer simulations, we compare the proposed test procedures with some existing test procedures in terms of size and power. Our simulation study suggests that the procedures compete well with the existing procedures for both one-sided and two-sided simple tree alternatives. In some instances, especially in the case of two-sided alternatives or for non-normally distributed data, the gains in power due to the procedures proposed can be substantial.

7.
Summary: Commonly used standard statistical procedures for means and variances (such as the t-test for means or the F-test for variances and the related confidence procedures) require observations from independent and identically normally distributed variables. These procedures are often routinely applied to financial data, such as asset or currency returns, which do not share these properties. Instead, they are nonnormal and show conditional heteroskedasticity, and hence they are dependent. We investigate the effect of conditional heteroskedasticity (as modelled by GARCH(1,1)) on the level of these tests and the coverage probability of the related confidence procedures. It can be seen that conditional heteroskedasticity has no effect on procedures for means (at least in large samples). There is, however, a strong effect of conditional heteroskedasticity on procedures for variances. These procedures should therefore not be used if conditional heteroskedasticity is prevalent in the data.
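The kind of size distortion at issue can be reproduced with a small Monte Carlo experiment: simulate GARCH(1,1) returns with a known unconditional variance and record how often the classical chi-square test for that variance rejects at the nominal 5% level. The GARCH parameters and sample sizes below are illustrative, not those of the paper.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85):
    """Simulate a GARCH(1,1) series with standard normal innovations."""
    sigma2 = omega / (1 - alpha - beta)       # start at the unconditional variance
    x = np.empty(n)
    for t in range(n):
        x[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * x[t] ** 2 + beta * sigma2
    return x

n, reps, level = 500, 2000, 0.05
true_var = 0.05 / (1 - 0.1 - 0.85)            # unconditional variance under H0
rejections = 0
for _ in range(reps):
    x = simulate_garch11(n)
    stat = (n - 1) * np.var(x, ddof=1) / true_var
    p = 2 * min(chi2.cdf(stat, n - 1), chi2.sf(stat, n - 1))   # two-sided chi-square test
    rejections += p < level

print("empirical size of the variance test:", rejections / reps)  # typically well above 0.05
```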

8.
This paper deals with the problem of selecting the best population from among k (≥ 2) two-parameter exponential populations. New selection procedures are proposed for selecting the unique best. The procedures include preliminary tests which allow the experimenter the option of not selecting if the statistical evidence is not significant. Two probabilities, the probability of making a selection and the probability of a correct selection, are controlled by these selection procedures. Comparisons between the proposed selection procedures and certain existing procedures are also made. The results show the superiority of the proposed selection procedures in terms of the required sample size.

9.
ABSTRACT

In modelling repeated count outcomes, generalized linear mixed-effects models are commonly used to account for within-cluster correlations. However, inconsistent results are frequently generated by various statistical R packages and SAS procedures, especially in the case of a moderate or strong within-cluster correlation or overdispersion. We investigated the underlying numerical approaches and statistical theories on which these packages and procedures are built. We then compared the performance of these statistical packages and procedures by simulating both Poisson-distributed and overdispersed count data. The SAS NLMIXED procedure outperformed the other procedures in all settings.
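The kind of data used in such a comparison can be generated as follows: overdispersed counts with a cluster-level random intercept, via the gamma-Poisson (negative binomial) mixture. The parameter values are illustrative assumptions, not the settings of the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_clustered_counts(n_clusters=50, cluster_size=10, beta0=1.0,
                              re_sd=0.8, nb_dispersion=2.0):
    """Overdispersed counts with a cluster-level random intercept (illustrative parameters).

    Conditional on the cluster effect b_i ~ N(0, re_sd^2), counts follow a
    negative binomial with mean exp(beta0 + b_i) and shape nb_dispersion.
    """
    rows = []
    for i in range(n_clusters):
        b_i = rng.normal(0, re_sd)
        mu = np.exp(beta0 + b_i)
        # Gamma-Poisson mixture representation of the negative binomial.
        lam = rng.gamma(shape=nb_dispersion, scale=mu / nb_dispersion, size=cluster_size)
        y = rng.poisson(lam)
        rows.append(np.column_stack([np.full(cluster_size, i), y]))
    return np.vstack(rows)

data = simulate_clustered_counts()
print("mean count:", data[:, 1].mean(), "variance:", data[:, 1].var())
```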

10.
Summary.  Selection procedures for the better component in bivariate exponential (BVE) models are proposed. In this paper, we consider only the BVE models proposed by Freund (1961), Marshall-Olkin (1967) and Block-Basu (1974). The probabilities of correct selection for the proposed procedures are compared by using normal approximations. A numerical study on the determination of the asymptotic relative efficiency (ARE) of the proposed procedures is presented.

11.
In this paper we consider conditional inference procedures for the Pareto and power function distributions. We develop procedures for obtaining confidence intervals for the location and scale parameters as well as upper and lower n probability tolerance intervals for a proportion g, given a Type-II right censored sample from the corresponding distribution. The intervals are exact, and are obtained by conditioning on the observed values of the ancillary statistics. Since, for each distribution, the procedures assume that a shape parameter x is known, a sensitivity analysis is also carried out to see how the procedures are affected by changes in x.

12.
New multiple comparison with a control (MCC) procedures are developed in repeated measures incomplete block design settings based on R-estimates. It is assumed that the errors within each subject are exchangeable random variables. The R-estimators of the treatment effects are obtained by minimizing a sum of Jaeckel (1972)-type dispersion functions. Based on the R-estimators, Dunnett-type multiple comparison procedures are developed for comparing test treatments with a control treatment. Under exchangeable errors, it is demonstrated that for Cox-type designs the new procedures are more efficient than the existing nonparametric procedures. The new MCC procedures are applied to a data set from a clinical trial of patients with reversible obstructive pulmonary disease.

13.
Many of the more useful and powerful nonparametric procedures may be presented in a unified manner by treating them as rank transformation procedures. Rank transformation procedures are ones in which the usual parametric procedure is applied to the ranks of the data instead of to the data themselves. This technique should be viewed as a useful tool for developing nonparametric procedures to solve new problems.
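A minimal sketch of the idea: rank the pooled data and then apply the ordinary two-sample t-test to the ranks, which yields a rank transformation test that behaves much like the Wilcoxon rank-sum test. The data are simulated for illustration.

```python
import numpy as np
from scipy.stats import rankdata, ttest_ind, mannwhitneyu

rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.5, 1.0, size=30)

# Rank transformation: rank the pooled sample, then apply the usual t-test to the ranks.
ranks = rankdata(np.concatenate([x, y]))
rx, ry = ranks[:len(x)], ranks[len(x):]

print("t-test on ranks:", ttest_ind(rx, ry))
print("Wilcoxon rank-sum:", mannwhitneyu(x, y, alternative="two-sided"))  # closely related test
```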

14.
In this paper, one-sided and two-sided test procedures for comparing several treatments with more than one control with respect to the scale parameter are proposed. The proposed test procedures are inverted to obtain the associated simultaneous confidence intervals. The multiple comparisons of test treatments with the best control are also developed. The computation of the critical points, required to implement the proposed procedures, is discussed by taking the normal probability model. Applications of the proposed test procedures to the two-parameter exponential probability model are also demonstrated.

15.
Methods for a sequential test of a dose-response effect in pre-clinical studies are investigated. The objective of the test procedure is to compare several dose groups with a zero-dose control. The sequential testing is conducted within a closed family of one-sided tests. The procedures investigated are based on a monotonicity assumption. These closed procedures strongly control the familywise error rate while providing information about the shape of the dose-response relationship. The performance of the sequential testing procedures is compared via a Monte Carlo simulation study. We illustrate the procedures by application to a real data set.
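One simple closed procedure of this flavour, shown only as an illustration and not necessarily one of the procedures studied, is the fixed-sequence step-down test under monotonicity: compare the highest dose with the zero-dose control first and step down, stopping at the first non-significant comparison, so the familywise error rate is strongly controlled. The group means and sizes below are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

# Simulated example: zero-dose control plus three increasing dose groups.
control = rng.normal(0.0, 1.0, size=20)
doses = [rng.normal(d, 1.0, size=20) for d in (0.2, 0.6, 1.0)]

alpha, min_effective_dose = 0.05, None
# Step down from the highest dose; stop at the first non-significant one-sided test.
for level in range(len(doses) - 1, -1, -1):
    p = ttest_ind(doses[level], control, alternative="greater").pvalue
    if p < alpha:
        min_effective_dose = level + 1      # dose 'level + 1' (1-based) still shows an effect
    else:
        break

print("lowest dose declared effective (1-based index):", min_effective_dose)
```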

16.
Some properties of control procedures with variable sampling intervals (VSI) have been investigated in recent years by Amin, Reynolds et al., and others. Such procedures have been shown to be more efficient than the corresponding fixed sampling interval (FSI) charts with respect to the Average Time to Signal (ATS) when the Average Run Length (ARL) values for both types of procedures are held equal. Frequent switching between the different sampling intervals can be a complicating factor in the application of VSI control charts. This problem is addressed in this article, and improved switching rules are presented and evaluated for Shewhart, CUSUM, and EWMA control procedures. The proposed rules considerably reduce the average number of switches between the sampling intervals and also improve the ATS properties of the control procedures when compared to the conventional variable sampling interval procedures.
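The basic VSI mechanism can be sketched for a Shewhart X̄ chart: keep a long sampling interval while the plotted statistic stays in the central region and switch to a short interval when it falls in the warning region between the warning and control limits. The switching rule, limits, and interval lengths below are conventional illustrative choices, not the improved rules of the article.

```python
import numpy as np

rng = np.random.default_rng(8)

def vsi_time_to_signal(shift=0.0, n=5, L=3.0, w=1.0, d_short=0.25, d_long=1.5):
    """Time until a conventional VSI Shewhart X-bar chart signals (illustrative rule)."""
    time, interval = 0.0, d_long
    while True:
        time += interval
        z = np.sqrt(n) * rng.normal(shift, 1.0, size=n).mean()    # standardised X-bar
        if abs(z) > L:
            return time                                           # out-of-control signal
        # Warning region (w < |z| <= L): sample again sooner; otherwise wait longer.
        interval = d_short if abs(z) > w else d_long

ats_in_control = np.mean([vsi_time_to_signal(0.0) for _ in range(2000)])
ats_shifted = np.mean([vsi_time_to_signal(0.5) for _ in range(2000)])
print("ATS in control:", ats_in_control, "ATS after a 0.5-sigma shift:", ats_shifted)
```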

17.
Commonly applied diagnostic procedures in random-coefficient (multilevel) analysis are based on an inspection of the residuals, motivated by established procedures for ordinary regression. The deficiencies of such procedures are discussed and an alternative based on simulation from the fitted model (parametric bootstrap) is proposed. Although computationally intensive, the method proposed requires little programming effort additional to implementing the model fitting procedure. It can be tailored for specific kinds of outliers. Some computationally less demanding alternatives are described.
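The general recipe can be sketched for a balanced random-intercept model: fit the model (here via simple moment estimators), compute a diagnostic statistic, simulate many data sets from the fitted model, and compare the observed statistic with its simulated distribution. This is a generic illustration of a parametric bootstrap diagnostic, not the authors' specific implementation; the model, estimators, and diagnostic are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(9)

def fit_random_intercept(y):
    """Moment (ANOVA) estimates for a balanced one-way random-intercept model y[i, j]."""
    k, m = y.shape
    msw = y.var(axis=1, ddof=1).mean()                  # within-group mean square
    msb = m * y.mean(axis=1).var(ddof=1)                # between-group mean square
    return y.mean(), msw, max((msb - msw) / m, 0.0)     # grand mean, sigma2_e, sigma2_b

def diagnostic(y):
    """Largest absolute standardised group mean, a simple outlier diagnostic."""
    mu, s2e, s2b = fit_random_intercept(y)
    se = np.sqrt(s2b + s2e / y.shape[1])
    return np.max(np.abs(y.mean(axis=1) - mu) / se)

y_obs = rng.normal(0, 1, size=(15, 8)) + rng.normal(0, 0.5, size=(15, 1))
mu, s2e, s2b = fit_random_intercept(y_obs)
obs = diagnostic(y_obs)

# Parametric bootstrap: simulate from the fitted model and recompute the diagnostic.
boot = []
for _ in range(999):
    y_sim = mu + rng.normal(0, np.sqrt(s2b), size=(15, 1)) + rng.normal(0, np.sqrt(s2e), size=(15, 8))
    boot.append(diagnostic(y_sim))
print("observed:", obs, "bootstrap p-value:", np.mean(np.array(boot) >= obs))
```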

18.
One of the standard variable selection procedures in multiple linear regression is to use a penalisation technique in least-squares (LS) analysis. In this setting, many different types of penalties have been introduced to achieve variable selection. It is well known that LS analysis is sensitive to outliers, and consequently outliers can present serious problems for the classical variable selection procedures. Since rank-based procedures have desirable robustness properties compared to LS procedures, we propose a rank-based adaptive lasso-type penalised regression estimator and a corresponding variable selection procedure for linear regression models. The proposed estimator and variable selection procedure are robust against outliers in both the response and the predictor space. Furthermore, since rank regression can yield unstable estimators in the presence of multicollinearity, we adjust the penalty term in the adaptive lasso function by incorporating the standard errors of the rank estimator, in order to provide inference that is robust against multicollinearity. The theoretical properties of the proposed procedures are established and their performance is investigated by means of simulations. Finally, the estimator and variable selection procedure are applied to the Plasma Beta-Carotene Level data set.
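A minimal sketch of the adaptive-lasso machinery, using an LS initial estimator for simplicity where the paper uses a rank-based one: the adaptive penalty is implemented by the usual reweighting trick of scaling each predictor column by the absolute value of its initial estimate. The data, penalty level, and initial estimator are assumptions of the sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(10)

# Simulated sparse regression problem (illustrative only).
n, p = 100, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Adaptive lasso via the usual reweighting trick: scale each column by |initial estimate|.
beta_init = LinearRegression().fit(X, y).coef_   # LS initial estimator (the paper uses a rank-based one)
weights = np.abs(beta_init)
lasso = Lasso(alpha=0.1).fit(X * weights, y)     # penalty level chosen arbitrarily here
beta_adaptive = lasso.coef_ * weights            # map back to the original scale

print(np.round(beta_adaptive, 3))
```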

19.
Many estimation procedures for quantitative linear models with autocorrelated errors have been proposed in the literature. A number of these procedures have been compared in various ways for different sample sizes and autocorrelation parameter values and for structured or random explanatory variables. In this paper, we revisit three situations that were considered to some extent in previous studies, by comparing ten estimation procedures: Ordinary Least Squares (OLS), Generalized Least Squares (GLS), estimated Generalized Least Squares (six procedures), Maximum Likelihood (ML), and First Differences (FD). The six estimated GLS procedures and the ML procedure differ in the way the error autocovariance matrix is estimated. The three situations can be defined as follows: Case 1, the explanatory variable x in the simple linear regression is fixed; Case 2, x is purely random; and Case 3, x is first-order autoregressive. Following a theoretical presentation, the ten estimation procedures are compared in a Monte Carlo study conducted in the time domain, where the errors are first-order autoregressive in Cases 1-3. The measure of comparison for the estimation procedures is their efficiency relative to OLS. It is evaluated as a function of the time series length and the magnitude and sign of the error autocorrelation parameter. Overall, knowledge of the model of the time series process generating the errors enhances efficiency in estimated GLS. Differences in the efficiency of estimation procedures between Case 1 and Cases 2 and 3, as well as differences in efficiency among procedures in a given situation, are observed and discussed.
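The flavour of such a comparison can be reproduced in a few lines for Case 1 (fixed x) with OLS versus GLS under a known AR(1) error covariance; the sample size, slope, and autocorrelation value are illustrative assumptions, not the settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def ar1_covariance(n, rho, sigma2=1.0):
    """Covariance matrix of a stationary AR(1) error process."""
    idx = np.arange(n)
    return sigma2 / (1 - rho**2) * rho ** np.abs(np.subtract.outer(idx, idx))

def simulate_and_estimate(n=50, rho=0.7, beta=(1.0, 0.5), reps=2000):
    x = np.linspace(0, 1, n)                     # Case 1: fixed explanatory variable
    X = np.column_stack([np.ones(n), x])
    Sigma_inv = np.linalg.inv(ar1_covariance(n, rho))
    ols_slopes, gls_slopes = [], []
    for _ in range(reps):
        # Generate stationary AR(1) errors and the response.
        e = np.empty(n)
        e[0] = rng.normal(scale=1 / np.sqrt(1 - rho**2))
        for t in range(1, n):
            e[t] = rho * e[t - 1] + rng.normal()
        y = X @ np.array(beta) + e
        ols_slopes.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
        gls = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)
        gls_slopes.append(gls[1])
    return np.var(ols_slopes) / np.var(gls_slopes)   # efficiency of GLS relative to OLS (> 1 expected)

print("Var(OLS slope) / Var(GLS slope):", simulate_and_estimate())
```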

20.
The purpose of acceptance sampling is to develop decision rules to accept or reject production lots based on sample data. When testing is destructive or expensive, dependent sampling procedures cumulate results from several preceding lots. This chaining of past lot results reduces the required size of the samples. A large part of these procedures only chain past lot results when defects are found in the current sample. However, such selective use of past lot results achieves only a limited reduction of sample sizes. In this article, a modified approach for chaining past lot results is proposed that is less selective in its use of quality history and, as a result, requires a smaller sample size than the one required for commonly used dependent sampling procedures, such as multiple dependent sampling plans and the chain sampling plans of Dodge. The proposed plans are applicable for inspection by attributes and inspection by variables. Several properties of their operating characteristic (OC) curves are derived, and search procedures are given to select such modified chain sampling plans by using the two-point method.
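For context, the operating characteristic curve of Dodge's ChSP-1 chain sampling plan (one of the benchmark procedures such proposals are compared with) has a simple closed form: a lot is accepted if the current sample of n items contains no defectives, or exactly one defective provided the preceding i samples contained none, so Pa(p) = P0 + P1 * P0^i. The values of n and i below are illustrative, and this sketch shows the classical plan rather than the modified plans of the article.

```python
import numpy as np
from scipy.stats import binom

def oc_chsp1(p, n=20, i=3):
    """Probability of acceptance of Dodge's ChSP-1 chain sampling plan."""
    p0 = binom.pmf(0, n, p)                 # no defectives in the current sample
    p1 = binom.pmf(1, n, p)                 # exactly one defective
    return p0 + p1 * p0 ** i                # accept the single defective only with a clean history

def oc_single(p, n=20, c=0):
    """Probability of acceptance of a single sampling plan with acceptance number c."""
    return binom.cdf(c, n, p)

for p in np.linspace(0.0, 0.15, 7):
    print(f"p = {p:.3f}  ChSP-1 Pa = {oc_chsp1(p):.3f}  single (c=0) Pa = {oc_single(p):.3f}")
```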
