Similar Documents
20 similar documents found (search time: 15 ms)
1.
Six procedures which convert tests of homogeneity of variance into tests for mean equality for independent groups are compared. The tests are the analysis of variance (ANOVA) and Welch F statistics. The Welch statistics are included since it was anticipated that ANOVA would not provide a robust test when samples of unequal sizes are obtained from non-normal populations. However, the Welch tests are not found to be uniformly preferable. In addition, a prior recommendation for Miller's jackknife procedure is not supported for the unequal sample size case. The data indicate that the current tests for variance heterogeneity are either sensitive to non-normality or, if robust, lacking in power. Therefore, these tests cannot be recommended for the purpose of testing the validity of the ANOVA homogeneity assumption.
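
As a concrete illustration of the two statistics being compared, here is a minimal sketch (not the paper's own code) of the classical ANOVA F test alongside Welch's heteroscedasticity-adjusted F test; the group data are made up.

```python
import numpy as np
from scipy import stats

def welch_anova(groups):
    """Welch's F test for mean equality without assuming equal variances."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                  # precision weights
    mw = np.sum(w * m) / np.sum(w)             # weighted grand mean
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    h = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) / (k ** 2 - 1) * h
    f = a / b
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * h)   # adjusted degrees of freedom
    return f, df1, df2, stats.f.sf(f, df1, df2)

# unequal variances and unequal sample sizes, the troublesome case above
rng = np.random.default_rng(0)
groups = [rng.normal(0, s, size=m) for s, m in [(1, 10), (2, 20), (4, 40)]]
print("classical ANOVA:", stats.f_oneway(*groups))
print("Welch ANOVA:   ", welch_anova(groups))
```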

2.
A random generalized solution to the Robin problem for Laplace's equation is defined in terms of the sections of the random boundary data. Existence, uniqueness, and properties are established for such a solution. Particularizations to the Dirichlet and Neumann problems, as well as a generalization to the Robin-Poisson problem, are mentioned. Applications of the results are provided.

3.
This note introduces a family of skew and symmetric distributions containing the normal family and indexed by three parameters with clear meanings. Another respect in which this family compares favourably with families like the Pearson family, the Bessel-Gram-Charlier family and the Johnson family is ease of maximum likelihood fitting. Fitting by the method of moments is also considered. Asymptotic distributions of maximum likelihood and moment estimators are worked out. A test of symmetry and normality is suggested.
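
The paper's three-parameter family is not reproduced here, but the ease-of-fitting point can be illustrated with an analogous three-parameter skew family; in this sketch the scipy skew-normal stands in for the paper's family, and the data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulate from a skew-normal with shape a, location loc, scale (made up)
data = stats.skewnorm.rvs(a=4.0, loc=1.0, scale=2.0, size=1000, random_state=rng)
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(data)  # ML estimates
print(a_hat, loc_hat, scale_hat)
# a = 0 recovers the normal family, so testing a = 0 gives a test of
# symmetry/normality analogous in spirit to the one suggested above.
```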

4.
Because the usual F test for equal means is not robust to unequal variances, Brown and Forsythe (1974a) suggest replacing F with the statistics F* or W, which are based on the Satterthwaite and Welch adjusted degrees of freedom procedures. This paper reports practical situations where both F* and W give unsatisfactory results. In particular, both F* and W may not provide adequate control over Type I errors. Moreover, for equal variances but unequal sample sizes, W should be avoided in favor of F (or F*), but for equal sample sizes and possibly unequal variances, W was the only satisfactory statistic. New results on power are included as well. The paper also considers the effect of using F* or W only after a significant test for equal variances has been obtained, and new results on the robustness of the F test are described. It is found that even for equal sample sizes as large as 50 per treatment group, there are practical situations where the F test does not provide adequate control over the probability of a Type I error.
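
A sketch of the Brown-Forsythe statistic as it is commonly written (denoted F*, with a Satterthwaite-type denominator degrees of freedom); the data are illustrative, not from the paper.

```python
import numpy as np
from scipy import stats

def brown_forsythe_fstar(groups):
    """Brown-Forsythe F* test for mean equality under unequal variances."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    N = n.sum()
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    grand = np.sum(n * m) / N
    num = np.sum(n * (m - grand) ** 2)
    c = (1 - n / N) * v                        # denominator components
    fstar = num / c.sum()
    df2 = 1.0 / np.sum((c / c.sum()) ** 2 / (n - 1))  # Satterthwaite df
    return fstar, k - 1, df2, stats.f.sf(fstar, k - 1, df2)

rng = np.random.default_rng(1)
groups = [rng.normal(0, s, size=m) for s, m in [(1, 15), (3, 30), (5, 45)]]
print(brown_forsythe_fstar(groups))
```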

5.
Errors of misclassification and their probabilities are studied for classification problems associated with univariate inverse Gaussian distributions. The effects of applying the linear discriminant function (LDF), based on normality, to inverse Gaussian populations are assessed by comparing probabilities (optimum and conditional) based on the LDF with those based on the likelihood ratio rule (LR) for the inverse Gaussian. Both theoretical and empirical results are presented.
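
A Monte Carlo sketch of the comparison described above, under simplifying assumptions: a midpoint rule with known parameters stands in for the normal-theory LDF, and all parameters are made up (note scipy parameterizes IG(mu, lam) as invgauss(mu/lam, scale=lam)).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu1, mu2, lam = 1.0, 2.0, 3.0        # hypothetical IG means, common shape
n = 200_000
x1 = stats.invgauss.rvs(mu1 / lam, scale=lam, size=n, random_state=rng)
x2 = stats.invgauss.rvs(mu2 / lam, scale=lam, size=n, random_state=rng)

# normal-theory rule (known means, equal-variance simplification):
cut = 0.5 * (mu1 + mu2)
ldf_err = 0.5 * (np.mean(x1 > cut) + np.mean(x2 <= cut))

# likelihood-ratio rule using the true inverse Gaussian densities:
def assign_to_2(x):
    return (stats.invgauss.pdf(x, mu2 / lam, scale=lam)
            > stats.invgauss.pdf(x, mu1 / lam, scale=lam))

lr_err = 0.5 * (np.mean(assign_to_2(x1)) + np.mean(~assign_to_2(x2)))
print(f"LDF error: {ldf_err:.4f}  LR error: {lr_err:.4f}")
```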

6.
In this article, a new two-step calibration technique for design weights is proposed. In the first step, the calibration weights are set proportional to the design weights in a given sample. In the second step, the constants of proportionality are determined according to the investigator's objective, such as bias reduction or minimum mean squared error. Many estimators available in the literature can be shown to be special cases of the proposed two-step calibrated estimator. A simulation study, based on a real data set, is also included at the end. A few technical issues are raised with respect to the use of the proposed calibration technique: both limitations and benefits are discussed.
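
A minimal sketch of the idea under one possible objective (all data and the known total are hypothetical): take weights proportional to the design weights and choose the constant so that the weighted auxiliary total matches its known population total, which reproduces the classical ratio estimator as a special case.

```python
import numpy as np

d = np.array([10.0, 12.0, 8.0, 15.0])   # hypothetical design weights
x = np.array([3.0, 4.0, 2.5, 5.0])      # auxiliary variable in the sample
y = np.array([7.0, 9.0, 6.0, 11.0])     # study variable in the sample
X_total = 160.0                          # known population total of x

c = X_total / np.sum(d * x)              # step 2: constant of proportionality
w = c * d                                # step 1: weights proportional to d
Y_hat = np.sum(w * y)                    # calibrated (ratio-type) estimator
print(c, Y_hat)
```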

7.
Two seemingly different approaches to simplicity in the analysis of connected block designs, and their relationship to the concepts of balance, are discussed.

8.
Uwe Küchler, Statistics, 2013, 47(2): 219-230
A common prior distribution and loss structure are set up to be appropriate for the sorting of batches using sampling inspection by variable and by attribute. Approximations to the exact optimal sampling plans are derived to gain a better understanding of Bayesian sampling plans and to compare the plans using variable sampling and attribute sampling. It is assumed that the process quality distribution is normal with a known variance.

9.
Job creation and destruction should be considered key success or failure criteria of economic policy. Both are effects of economic policy, the degree of out- and in-sourcing, and the ability to create new ideas that can be transformed into jobs. Job creation and destruction result from businesses attempting to maximize their economic outcome. One cost of this process is that employees have to move from destroyed jobs to created jobs. How this process develops probably depends on labor protection laws, habits, the educational system, and the whole unemployment-insurance (UI) system. A flexible labor market ensures that scarce labor resources are used where they are most in demand; labor turnover is thus an essential feature of a well-functioning economy. This paper uses employer-employee data from the Danish registers of persons and workplaces to show where jobs have been destroyed and where they have been created over the last couple of business cycles. Jobs are in general destroyed and created simultaneously within each industry, but at the same time a major restructuring has taken place: jobs have been lost in Textile and Clothing, Manufacturing and the other "old" industries, while jobs have been created in Trade and Service industries. Out-sourcing has been one of the causes. This restructuring has put tremendous pressure on workers and their ability to find employment in expanding sectors. The paper shows how this has been accomplished and, in particular, what has happened to the employees involved: have they become unemployed, employed in the welfare sector, or elsewhere? First draft of this paper was presented at Deutsche Statistische Woche, Frankfurt, September 2004. Thanks to two referees for instructive comments. Financial support from The Danish Social Science Research Council through CCP is acknowledged.
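
A sketch of the standard job-flow accounting that figures like these rest on (in the Davis-Haltiwanger style; the workplace-level employment data are made up): creation is the sum of employment gains at growing or entering units, destruction the sum of losses at shrinking or exiting units.

```python
import numpy as np

# employment at each workplace in two consecutive years (hypothetical)
emp_t0 = np.array([50, 20, 0, 30, 10])
emp_t1 = np.array([60, 5, 15, 30, 0])

change = emp_t1 - emp_t0
created = change[change > 0].sum()           # jobs added at growing/new units
destroyed = -change[change < 0].sum()        # jobs lost at shrinking/dying units
base = 0.5 * (emp_t0.sum() + emp_t1.sum())   # average employment as the base
print("creation rate:", created / base, "destruction rate:", destroyed / base)
```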

10.
BLUEs (best linear unbiased estimators) of estimable linear functions, and exact tests of hypotheses concerning such functions, usually do not exist in the covariance model with random factors having unknown variances. This is true even in the equal-subclass-numbers case. This paper suggests alternative methods for finding linear unbiased estimators and presents methods for computing sampling variances, which are linear functions of the unknown variances. Also, higher-level covariates are defined, and nonestimability problems resulting from association of such covariates with fixed factors are discussed.

11.
A multivariate generalized beta distribution is introduced that extends the univariate generalized beta distribution and includes many multivariate distributions, such as the multivariate beta of the first and second kind, the generalized gamma, and the Burr and Dirichlet distributions, as special and limiting cases. These interrelationships can be illustrated using a distributional family tree. The corresponding marginal distributions are univariate generalized beta distributions and their special cases. Selected expressions for the moments are reported, and an application to the joint distribution of income and wealth is presented. A simple transformation of the multivariate generalized beta distribution leads to what will be referred to as a multivariate exponential generalized beta distribution, which includes a multivariate form of the logistic and Burr distributions as special cases.

12.
Longitudinal count responses are often analyzed with a Poisson mixed model. Under overdispersion, however, these responses are better described by a negative binomial mixed model. Estimators of the corresponding parameters are usually obtained by the maximum likelihood method. To investigate the stability of these maximum likelihood estimators, we propose a methodology of sensitivity analysis using local influence. Because count responses are discrete, we cannot perturb them with the standard scheme used in local influence; instead, we consider an appropriate perturbation of the means of these responses. The proposed methodology is useful in many applications, but particularly when medical data are analyzed, because removing influential cases can change the statistical results and hence the medical decision. We study the performance of the methodology using Monte Carlo simulation and apply it to real medical data on epilepsy and headache. These numerical studies show the good performance and potential of the proposed methodology.

13.
Athletic records represent the best results in a given discipline and thus improve monotonically with time. As has already been shown, this should not be taken as an indication that athletes' capabilities keep improving. In other words, a new record is not noteworthy simply because it is a new record; it is necessary to assess by how much the record has improved. In this paper we derive formulae that can be used to show that athletic records continue to improve with time even if athletic performance remains constant. We consider two specific examples: the German championships and the world records in several athletic disciplines. The analysis shows that, for the latter, true improvements occur in 20-50% of the disciplines. The analysis is supplemented by an application of our record-estimation approach to the prediction of the maximum body length of humans for a population, or population group, of specified size.
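
The classical fact behind this argument: for an iid sequence (constant ability), the probability that attempt n sets a record is 1/n, so the expected number of records among n attempts is the harmonic number H_n = 1 + 1/2 + ... + 1/n. A quick simulation check (illustrative only, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 5000
count = 0
for _ in range(reps):
    x = rng.normal(size=n)                        # iid "performances"
    # a record occurs where x equals its own running maximum
    count += np.sum(x == np.maximum.accumulate(x))
print("mean records:", count / reps,
      "H_n:", np.sum(1.0 / np.arange(1, n + 1)))  # these should agree
```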

14.
A recent study of the prevalence of middle ear infection among Australian aborigines reports, for different age groups, the number currently infected, the number currently not infected and the number who are currently not infected but have scarring of the ear drum showing evidence of a previous infection. Incidence and recovery are modelled by a semi-Markov model in which incidence of the infection is allowed to be nonstationary.

15.
This paper uses the decomposition framework from the economics literature to examine the statistical structure of treatment effects estimated with observational data compared to those estimated from randomized studies. It begins with the estimation of treatment effects using a dummy variable in regression models and then presents the decomposition method from economics, which estimates separate regression models for the comparison groups and recovers the treatment effect using bootstrapping methods. This method shows that the overall treatment effect is a weighted average of the structural relationships of patient features with outcomes within each treatment arm and of differences in the distributions of these features across the arms. In large randomized trials, the distribution of features across arms is assumed to be very similar; importantly, randomization balances not only observed features but also unobserved ones. Applying high-dimensional balancing methods such as propensity score matching to observational data eliminates the distributional terms of the decomposition model, but unobserved features may still be unbalanced. Finally, a correction for non-random selection into the treatment groups is introduced via a switching regime model. Theoretically, the treatment effect estimates obtained from this model should be the same as those from a randomized trial; in practice, however, there are significant challenges in identifying the instrumental variables necessary for estimating such models. At a minimum, decomposition models are useful tools for understanding the relationship between treatment effects estimated from observational versus randomized data.
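
A minimal Oaxaca/Blinder-style sketch of the decomposition idea (synthetic data, not the paper's estimator): fit separate regressions in each arm and split the mean-outcome gap into a structural term (coefficient differences) and a composition term (feature-distribution differences).

```python
import numpy as np

rng = np.random.default_rng(2)
beta = np.array([0.5, 0.2])                      # common "structural" slopes

def ols(X, y):
    """OLS coefficients with an intercept prepended."""
    Z = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

# two arms whose feature distributions differ (composition effect) and whose
# intercepts differ (structural / "treatment" effect)
X1 = rng.normal(1.0, 1.0, size=(500, 2)); y1 = 1.0 + X1 @ beta + rng.normal(size=500)
X0 = rng.normal(0.0, 1.0, size=(500, 2)); y0 = 0.0 + X0 @ beta + rng.normal(size=500)

b1, b0 = ols(X1, y1), ols(X0, y0)
z1 = np.r_[1.0, X1.mean(axis=0)]                 # mean feature vector, arm 1
z0 = np.r_[1.0, X0.mean(axis=0)]                 # mean feature vector, arm 0

gap = y1.mean() - y0.mean()
structural = z1 @ (b1 - b0)                      # coefficient differences
composition = (z1 - z0) @ b0                     # distribution differences
print(gap, structural + composition)             # the two should agree
```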

16.
In this paper, we consider a system consisting of two dependent components, and we are interested in the average remaining life of the component that fails last when (i) the first failure occurs at time t and (ii) the first failure occurs after time t. For both cases, expressions are derived for the general bivariate normal distribution and for a class of bivariate exponential distributions including the bivariate exponential distribution of Arnold and Strauss, the absolutely continuous bivariate exponential distribution of Block and Basu, the bivariate exponential distribution of Raftery, Freund's bivariate exponential distribution, and Gumbel's bivariate exponential distribution.
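
A Monte Carlo sketch of case (ii) for the bivariate normal (all parameters made up): the expected additional life of the last-failing component beyond t, given that the first failure occurs after t, i.e. E[max(T1,T2) - t | min(T1,T2) > t].

```python
import numpy as np

rng = np.random.default_rng(4)
mean = [5.0, 6.0]                         # component mean lifetimes
cov = [[1.0, 0.6], [0.6, 1.5]]            # dependence between components
t = 5.0
# means are large relative to the sds, so negative draws are negligible
T = rng.multivariate_normal(mean, cov, size=500_000)
alive = T.min(axis=1) > t                 # first failure occurs after t
mrl = (T[alive].max(axis=1) - t).mean()   # E[max - t | min > t]
print(mrl)
```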

17.
This is a survey of characterizations of discrete distributions via properties of record values. Characterization results based on records and weak records are presented. Then the concepts of kth records, strong kth records, and weak kth records are recalled. Finally, characterizations of the geometric parent involving these three types of kth records are discussed.

18.
谭斌, 王菲, 《统计教育》 (Statistical Education), 2010(11): 31-39
In the 30 years since reform and opening-up, and especially since the implementation of the Western Development strategy, Xinjiang's foreign trade has grown rapidly, above all its trade with Central Asia. Using input-output analysis, this paper examines the impact of product exports on Xinjiang's economy from two angles: the rationality of the export product structure and the effect of exports on the economy. It identifies the current strengths and weaknesses of product exports across Xinjiang's industries and, based on the empirical findings, proposes measures and recommendations for improving the region's export product trade.
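
The input-output machinery referred to above reduces to the Leontief relation x = (I - A)^(-1) f: gross output x needed to support a final-demand vector f. A toy three-sector sketch (the coefficient matrix and export vector are invented, not Xinjiang data):

```python
import numpy as np

A = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.1, 0.3],
              [0.2, 0.2, 0.1]])            # hypothetical direct-requirements matrix
exports = np.array([10.0, 5.0, 8.0])       # hypothetical export final demand

L = np.linalg.inv(np.eye(3) - A)           # Leontief inverse (I - A)^-1
print("export-induced gross output by sector:", L @ exports)
```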

19.
Sequences of independent random variables are observed, and on the basis of these observations future values of the process are forecast. The Bayesian predictive density of k future observations is analyzed for several cases of normal, exponential, and binomial sequences that change exactly once. The Bayesian predictive densities turn out to be mixtures of standard probability distributions. For example, with normal sequences the Bayesian predictive density is a mixture of either normal or t-distributions, depending on whether or not the common variance is known; the mixing probabilities are the same as those occurring in the corresponding posterior distribution of the mean(s) of the sequence. The predictive mass function of the number of future successes in a changing Bernoulli sequence is computed, and point and interval predictors are illustrated.
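
A sketch of the Bernoulli case under illustrative assumptions (a uniform prior on the change position and Beta(1,1) priors on both success probabilities, which the paper need not use): the predictive mass function of k future successes comes out as a beta-binomial mixture, mixed over the posterior of the change point.

```python
import numpy as np
from scipy.special import betaln, comb

x = np.array([1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0])  # made-up 0/1 data
n = len(x)
a = b = 1.0                                          # Beta(1,1) priors

# posterior over the change position r (change occurs after observation r)
logpost = np.empty(n - 1)
for r in range(1, n):
    s1, f1 = x[:r].sum(), r - x[:r].sum()
    s2, f2 = x[r:].sum(), (n - r) - x[r:].sum()
    logpost[r - 1] = (betaln(a + s1, b + f1) + betaln(a + s2, b + f2)
                      - 2 * betaln(a, b))
post = np.exp(logpost - logpost.max())
post /= post.sum()

def beta_binom(k, m, a2, b2):
    """P(k successes in m trials) under a Beta(a2, b2) success probability."""
    return comb(m, k) * np.exp(betaln(a2 + k, b2 + m - k) - betaln(a2, b2))

m = 5                                                # forecast horizon
pred = np.zeros(m + 1)
for r in range(1, n):                                # mix over change points
    s2, f2 = x[r:].sum(), (n - r) - x[r:].sum()
    for k in range(m + 1):
        pred[k] += post[r - 1] * beta_binom(k, m, a + s2, b + f2)
print(pred, pred.sum())                              # a proper pmf: sums to 1
```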

20.
The joint asymptotic distribution of the upper and lower bounds for the Gini index derived by Gastwirth for grouped data is obtained. From it, a conservative, asymptotically distribution-free confidence interval for the population Gini index is presented. The methods also yield similar results for other indices of inequality (e.g., Theil's and Atkinson's).
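
A sketch of the grouped-data lower bound for the Gini index, i.e. the Gini of the piecewise-linear Lorenz curve through the group points; Gastwirth's upper bound additionally uses within-group income bounds and is omitted here. Group shares are hypothetical.

```python
import numpy as np

pop_share = np.array([0.2, 0.2, 0.2, 0.2, 0.2])     # hypothetical groups
inc_share = np.array([0.05, 0.10, 0.15, 0.25, 0.45])

p = np.concatenate([[0.0], np.cumsum(pop_share)])    # cumulative population
L = np.concatenate([[0.0], np.cumsum(inc_share)])    # cumulative income (Lorenz)

# Gini of the linearly interpolated Lorenz curve: a lower bound for the
# population Gini (trapezoidal rule; equals 0 when L = p)
gini_lower = 1 - np.sum(np.diff(p) * (L[1:] + L[:-1]))
print(gini_lower)
```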
