Similar Documents
20 similar documents found (search time: 15 ms)
1.
We consider the problem of testing hypotheses on the difference of the coefficients of variation from several two-armed experiments with normally distributed outcomes. In particular, we deal with testing the homogeneity of the difference of the coefficients of variation and testing the equality of the difference of the coefficients of variation to a specified value. The test statistics proposed are derived in a limiting one-way classification with fixed effects and heteroscedastic error variances, using results from analysis of variance. By way of simulation, the performance of these test statistics is compared for both testing problems considered.

2.
Summary.  The paper extends the susceptible–exposed–infective–removed model to handle heterogeneity introduced by spatially arranged populations, biologically plausible distributional assumptions and incorporation of observations from additional diagnostic tests. These extensions are motivated by a desire to analyse disease transmission experiments in a more detailed fashion than before. Such experiments are performed by veterinarians to gain knowledge about the dynamics of an infectious disease. By fitting our spatial susceptible–exposed–infective–removed model with diagnostic testing to data for a specific disease and production environment, a valuable decision support tool is obtained, e.g. when evaluating on-farm control measures. Partial observability of the epidemic process is an inherent problem when trying to estimate model parameters from experimental data. We therefore extend existing work on Markov chain Monte Carlo estimation in partially observable epidemics to the multitype epidemic set-up of our model. Throughout the paper, data from a Belgian classical swine fever virus transmission experiment are used as a motivating example.

3.
4.
This paper proposes new methodology for calculating the optimal sample size when a hypothesis test between two binomial proportions is conducted. The problem is addressed from the Bayesian point of view. Following the formulation by DasGupta and Vidakovic (1997, J. Statist. Plann. Inference 65, 335–347), the posterior risk is determined and set not to exceed a prespecified bound. A second constraint deals with the likelihood of data not satisfying the bound on the risk. The cases when the two proportions are equal to a fixed or to a random value are examined.

5.
In experiments designed to estimate a binomial parameter, sample sizes are often calculated to ensure that the point estimate will be within a desired distance from the true value with sufficiently high probability. Since exact calculations resulting from the standard formulation of this problem can be difficult, “conservative” and/or normal approximations are frequently used. In this paper, some problems with the current formulation are given, and a modified criterion that leads to some improvement is provided. A simple algorithm that calculates the exact sample sizes under the modified criterion is provided, and these sample sizes are compared to those given by the standard approximate criterion, as well as to an exact conservative Bayesian criterion.
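The exact criterion behind this kind of calculation can be sketched in a few lines: find the smallest n for which P(|p̂ − p| ≤ d) ≥ conf uniformly over a grid of p values. This is a minimal illustration, not the paper's modified criterion or its algorithm; the function names and the grid are assumptions, and binomial coverage oscillates in n, so a production version would need more care.

```python
from math import comb

def coverage(n, p, d):
    """Exact P(|X/n - p| <= d) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(k / n - p) <= d)

def exact_sample_size(d, conf, p_grid=None):
    """Smallest n whose worst-case coverage over p_grid reaches conf.
    Sketch only: coverage is not monotone in n for binomial data."""
    if p_grid is None:
        p_grid = [i / 100 for i in range(1, 100)]  # illustrative grid
    n = 1
    while min(coverage(n, p, d) for p in p_grid) < conf:
        n += 1
    return n
```

Because the grid includes p = 0.5, the returned n is guaranteed to meet the coverage target at the hardest central value.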

6.
Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating the error of prediction to the sample size, which should be of use when designing computer experiments.

Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with maximum determinant of the variance–covariance matrix of predictions.
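The one-point-at-a-time strategy can be sketched with a simple Gaussian-process surrogate. Everything here is an illustrative assumption (1-D inputs, squared-exponential kernel, fixed length-scale), not the authors' fire-model setup:

```python
import numpy as np

def pred_variance(x_train, x_cand, length=0.3, jitter=1e-8):
    """GP predictive variance at each candidate (squared-exponential kernel)."""
    def k(a, b):
        return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * length ** 2))
    K_inv = np.linalg.inv(k(x_train, x_train) + jitter * np.eye(len(x_train)))
    ks = k(x_cand, x_train)                      # (m, n) cross-covariances
    return 1.0 - np.einsum('ij,jk,ik->i', ks, K_inv, ks)

def augment(x_train, candidates, n_new):
    """Greedily add n_new runs, each at the current maximum-variance candidate."""
    x = list(x_train)
    added = []
    for _ in range(n_new):
        i = int(np.argmax(pred_variance(np.asarray(x), candidates)))
        added.append(float(candidates[i]))
        x.append(float(candidates[i]))
    return added
```

With runs at 0 and 1, the first point added is the midpoint 0.5, where the surrogate is least certain; subsequent points fill the remaining gaps.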

7.
In this article, we develop inference tools for an effect size parameter in a paired experiment. A class of estimators is defined that includes natural, shrinkage and shrinkage preliminary test estimators. The shrinkage and preliminary test methods incorporate uncertain prior information on the parameter. This information may be available in the form of a realistic guess on the basis of the experimenter’s knowledge and experience, which can be incorporated into the estimation process to increase the efficiency of the estimator. Asymptotic properties of the proposed estimators are investigated both analytically and computationally. A simulation study is also conducted to assess the performance of the estimators for moderate and large samples. For illustration purposes, the method is applied to a data set.

8.
In this paper, we establish the optimal size of the choice sets in generic choice experiments for asymmetric attributes when estimating main effects only. We give an upper bound for the determinant of the information matrix when estimating main effects and all two-factor interactions for binary attributes. We also derive the information matrix for a choice experiment in which the choice sets are of different sizes and use this to determine the optimal sizes for the choice sets.

9.
Recently, many researchers have investigated the number of replicates needed for experiments in blocks of size two. In practice, experiments in blocks of size four may be more useful than those in blocks of size two. To estimate the main effects and two-factor interactions from a two-level factorial experiment in blocks, many replicates may be needed. This article investigates designs with the least number of replicates for factorial experiments in blocks of size four. Methods to obtain such designs are presented.

10.
In this paper, we assume that lifetimes follow a two-parameter Pareto distribution and present results for progressively Type-II censored samples. We obtain maximum likelihood estimators and Bayes estimators of the unknown parameters under squared error and precautionary loss functions. Robust Bayes estimators of the unknown parameters are obtained over three different classes of priors under progressive Type-II censoring and the same loss functions. We also discuss estimation of the unknown parameters under competing-risks progressive Type-II censoring. Finally, we consider the problem of estimating the common scale parameter of two Pareto distributions when the samples are progressively Type-II censored.

11.
In many experimental situations we need to test hypotheses concerning the equality of the parameters of two or more binomial populations. Of special interest is the sample size needed to detect certain differences among the parameters, for a specified power and at a given level of significance. Al-Bayyati (1971) derived a rule of thumb for a quick calculation of the sample size needed to compare two binomial parameters. The rule is defined in terms of the difference to be detected between the two parameters.

In this paper, we introduce a generalization of Al-Bayyati's rule to several independent proportions. The generalized rule gives a conservative estimate of the sample size needed to achieve a specified power in detecting certain differences among the binomial parameters at a given level of significance. The method is illustrated with an example.
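Al-Bayyati's rule itself is not reproduced here, but the classical normal-approximation sample size it shortcuts can be sketched as follows (the function name and defaults are illustrative; this is the standard unpooled two-proportion formula, not the rule from the paper):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for a two-sided test of H0: p1 == p2, using the
    unpooled normal approximation (not Al-Bayyati's rule itself)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
                / (p1 - p2) ** 2)
```

For p1 = 0.5 and p2 = 0.6 at the defaults this gives 385 per group; as the difference to be detected grows, the required n shrinks rapidly.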

12.
An important question that arises in clinical trials is how many additional observations, if any, are required beyond those originally planned. This has been satisfactorily answered for two-treatment double-blind clinical experiments. However, one may wish to compare a new treatment with more than one competitor. This problem is addressed in this investigation for responses from arbitrary distributions in which the mean and the variance are not functionally related. First, a solution is obtained for the initial sample size given a specified level of significance and power at a specified alternative. It is then shown that when the initial sample size is large, the nominal level of significance and the power at the pre-specified alternative are fairly robust under the proposed sample size re-estimation procedure. The results are applied to the blood coagulation functionality problem considered by Kropf et al. [Multiple comparisons of treatments with stable multivariate tests in a two-stage adaptive design, including a test for non-inferiority, Biom. J. 42(8) (2000), pp. 951–965].

13.
With the advent of modern technology, manufacturing processes have become very sophisticated; a single quality characteristic can no longer reflect a product's quality. In order to establish performance measures for evaluating the capability of a multivariate manufacturing process, several new multivariate capability (NMC) indices, such as NMC_p and NMC_pm, have been developed over the past few years. However, the sample size determination for multivariate process capability indices has not been thoroughly considered in previous studies. Generally, the larger the sample size, the more accurate an estimation will be. However, too large a sample size may result in excessive costs. Hence, the trade-off between sample size and precision in estimation is a critical issue. In this paper, the lower confidence limits of the NMC_p and NMC_pm indices are used to determine the appropriate sample size. Moreover, a procedure for conducting the multivariate process capability study is provided. Finally, two numerical examples are given to demonstrate that the proper determination of sample size for multivariate process indices can achieve a good balance between sampling costs and estimation precision.

14.
This paper provides closed form expressions for the sample size for two-level factorial experiments when the response is the number of defectives. The sample sizes are obtained by approximating the two-sided test for no effect through tests for the mean of a normal distribution, and borrowing the classical sample size solution for that problem. The proposals are appraised relative to the exact sample sizes computed numerically, without appealing to any approximation to the binomial distribution, and the use of the sample size tables provided is illustrated through an example.

15.
Research on several issues in the sampling survey strategy for below-designated-size industry
Below-designated-size industry comprises two parts: below-designated-size industrial enterprises and individually operated industrial units. The goal of the sampling survey is to estimate, for each part, the total gross industrial output value, the total product sales revenue, and the number of enterprises (or of individual operators). The National Bureau of Statistics issues control figures for estimation precision, and each province (autonomous region, municipality, or separately planned city) organizes the implementation. This paper discusses several issues in the sampling strategy when these survey tasks are carried out at the provincial level.

16.
Various programs in statistical packages for analysis of variance with unequal cell sizes give different results for the same data because of non-orthogonality of the main effects and interactions. This paper explains how these programs treat the analysis of variance of unbalanced data.

17.
The underlying statistical concept that animates empirical strategies for extracting causal inferences from observational data is that observational data may be adjusted to resemble data that might have originated from a randomized experiment. This idea has driven the literature on matching methods. We explore an unmined idea for making causal inferences with observational data: that any given observational study may contain a large number of indistinguishably balanced matched designs. We demonstrate how the absence of a unique best solution presents an opportunity for greater information retrieval in causal inference, on the principle that many solutions teach us more about a given scientific hypothesis than a single study and improve our discernment with observational studies. The implementation can be achieved by integrating the statistical theories and models within a computational optimization framework that embodies the statistical foundations and reasoning.

18.
Many lifetime distribution models have successfully served as population models for risk analysis and reliability mechanisms. The Kumaraswamy distribution is one of these distributions, particularly useful for natural phenomena whose outcomes have lower and upper bounds and for bounded outcomes in biomedical and epidemiological research. This article studies point estimation and interval estimation for the Kumaraswamy distribution. The inverse estimators (IEs) for the parameters of the Kumaraswamy distribution are derived. Numerical comparisons with maximum likelihood estimation and bias-corrected methods clearly indicate that the proposed IEs are promising. Confidence intervals for the parameters and reliability characteristics of interest are constructed using pivotal or generalized pivotal quantities. The results are then extended to the stress–strength model involving two Kumaraswamy populations with different parameter values, and confidence intervals for the stress–strength reliability are constructed. Extensive simulations are used to demonstrate the performance of the confidence intervals constructed using generalized pivotal quantities.
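For context, the Kumaraswamy(a, b) CDF F(x) = 1 − (1 − x^a)^b on (0, 1) inverts in closed form, which makes simulation straightforward. The sketch below is an illustrative sampler only, not the article's inverse estimator:

```python
import random

def rkumaraswamy(a, b, n, seed=0):
    """Draw n Kumaraswamy(a, b) variates on (0, 1) by inverting
    F(x) = 1 - (1 - x**a)**b, i.e. x = (1 - (1 - u)**(1/b))**(1/a)."""
    rng = random.Random(seed)
    return [(1.0 - (1.0 - rng.random()) ** (1.0 / b)) ** (1.0 / a)
            for _ in range(n)]
```

With a = b = 1 the draws reduce to the uniform distribution, consistent with the bounded support the abstract emphasizes.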

19.
DGP identification in cointegration tests
This paper uses numerical simulation to compare the distributional characteristics of the trace statistic of the cointegration test under different DGPs, and analyses the substantial impact of DGP misspecification on cointegration test results. It then conducts an empirical analysis using data from China's money market and stock market, using a real example to reveal the various erroneous conclusions that DGP misspecification can produce. The empirical analysis further applies recursive cointegration tests and restriction-identification tests of the cointegrating relation, demonstrating from different angles the robustness and reliability of the cointegration test conclusions.

20.
The first bibliography in the area of inference based on conditional specification was published in 1977. A second bibliography is compiled here, and a combined subject index is given.
