Similar Articles
20 similar articles found (search time: 171 ms)
1.
For animal carcinogenicity studies with multiple dose groups, a positive trend test and pairwise comparisons of treated groups with control are generally performed using the Cochran-Armitage, Peto, or Poly-K tests. These tests are asymptotically normal. Exact versions of the Cochran-Armitage and Peto tests are available based on the permutation test, assuming fixed column and row totals. For the Poly-K test, the column totals depend on the mortality pattern of the animals and cannot be kept fixed over permutations of the animals. In this work a modification of the permutation test is suggested that yields an exact Poly-K test.
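The asymptotically normal trend test named above can be sketched in a few lines. This is a minimal illustration of the standard Cochran-Armitage statistic, not the abstract's exact Poly-K modification; the dose scores and tumour counts are invented for the example.

```python
import math

def cochran_armitage_z(doses, n, r):
    """Cochran-Armitage trend test statistic for proportions r[i]/n[i]
    across dose groups with scores doses[i]; asymptotically N(0,1)
    under the null hypothesis of no dose-response trend."""
    N = sum(n)
    p_bar = sum(r) / N  # pooled response rate
    t = sum(d * (ri - ni * p_bar) for d, ni, ri in zip(doses, n, r))
    var = p_bar * (1 - p_bar) * (
        sum(ni * d * d for d, ni in zip(doses, n))
        - sum(ni * d for d, ni in zip(doses, n)) ** 2 / N
    )
    return t / math.sqrt(var)

# hypothetical study: 3 dose groups of 10 animals with 1, 3, 6 tumours
z = cochran_armitage_z([0, 1, 2], [10, 10, 10], [1, 3, 6])
```

An exact version would instead enumerate (or permute) tables with the observed margins and compare each permuted statistic with the observed one, which is precisely where the Poly-K mortality adjustment complicates matters.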

2.
This paper deals with the asymptotics of a class of tests for association in 2-way contingency tables based on square forms in cell frequencies, given the total number of observations (multinomial sampling) or one set of marginal totals (stratified sampling). The case when both row and column marginal totals are fixed (hypergeometric sampling) was studied in Kulinskaya (1994). The class of tests under consideration includes a number of classical measures of association. Its two subclasses are the tests based on statistics using centralized cell frequencies (asymptotically distributed as weighted sums of central chi-squares) and those using the non-centralized cell frequencies (asymptotically normal). The parameters of the asymptotic distributions depend on the sampling model and on the true marginal probabilities. Maximum efficiency for asymptotically normal statistics is achieved under hypergeometric sampling. If the cell frequencies or the statistic as a whole are centralized using marginal proportions as estimates for marginal probabilities, the asymptotic distribution does not differ much between models and is equivalent to that under hypergeometric sampling. These findings give extra justification for the use of permutation tests for association (which are based on hypergeometric sampling). As an application, several well-known measures of association are analysed.

3.
Given a two-way contingency table in which the rows and columns both define ordinal variables, there are many ways in which the informal idea of positive association between those variables might be defined. This paper considers a variety of definitions expressed as inequality constraints on cross-product ratios. Logical relationships between the definitions are explored. Each definition can serve as a composite alternative against which the null hypothesis of no association may be tested. For a broad class of such alternatives a decomposition of the log-likelihood gives both an explicit likelihood ratio statistic and its asymptotic null hypothesis distribution. Results are derived for multinomial sampling and for fully conditional sampling with row and column totals fixed.

4.
A balanced sampling design has the interesting property that the Horvitz–Thompson estimators of the totals of a set of balancing variables are equal to the totals we want to estimate; therefore the variance of the Horvitz–Thompson estimators of the variables of interest is reduced as a function of their correlations with the balancing variables. Since it is hard to derive an analytic expression for the joint inclusion probabilities, we derive a general approximation of the variance based on a residual technique. This approximation is useful even in the particular case of unequal-probability sampling with fixed sample size. Finally, a set of numerical studies with an original methodology validates this approximation.
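The Horvitz–Thompson estimator underlying this balancing property is simple to state: each sampled value is weighted by the inverse of its inclusion probability. A minimal sketch, with made-up data and inclusion probabilities:

```python
def ht_total(y, pi):
    """Horvitz-Thompson estimator of a population total:
    sum of sampled values weighted by inverse inclusion probabilities."""
    return sum(yi / p for yi, p in zip(y, pi))

# hypothetical sample of 3 units, each included with probability 0.5
estimate = ht_total([2.0, 4.0, 6.0], [0.5, 0.5, 0.5])  # 24.0
```

In a balanced design, applying `ht_total` to each balancing variable reproduces its known population total exactly (or nearly so), which is what drives the variance reduction described above.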

5.
This paper considers a connected Markov chain for sampling 3 × 3 ×K contingency tables having fixed two‐dimensional marginal totals. Such sampling arises in performing various tests of the hypothesis of no three‐factor interactions. A Markov chain algorithm is a valuable tool for evaluating P‐values, especially for sparse datasets where large‐sample theory does not work well. To construct a connected Markov chain over high‐dimensional contingency tables with fixed marginals, algebraic algorithms have been proposed. These algorithms involve computations in polynomial rings using Gröbner bases. However, algorithms based on Gröbner bases do not incorporate symmetry among variables and are very time‐consuming when the contingency tables are large. We construct a minimal basis for a connected Markov chain over 3 × 3 ×K contingency tables. The minimal basis is unique. Some numerical examples illustrate the practicality of our algorithms.

6.
Two‐phase sampling is often used for estimating a population total or mean when the cost per unit of collecting auxiliary variables, x, is much smaller than the cost per unit of measuring a characteristic of interest, y. In the first phase, a large sample s1 is drawn according to a specific sampling design p(s1), and auxiliary data x are observed for the units i ∈ s1. Given the first‐phase sample s1, a second‐phase sample s2 is selected from s1 according to a specified sampling design p(s2 | s1), and (y, x) is observed for the units i ∈ s2. In some cases, the population totals of some components of x may also be known. Two‐phase sampling is used for stratification at the second phase or both phases and for regression estimation. Horvitz–Thompson‐type variance estimators are used for variance estimation. However, the Horvitz–Thompson (Horvitz & Thompson, J. Amer. Statist. Assoc. 1952) variance estimator in uni‐phase sampling is known to be highly unstable and may take negative values when the units are selected with unequal probabilities. On the other hand, the Sen–Yates–Grundy variance estimator is relatively stable and non‐negative for several unequal probability sampling designs with fixed sample sizes. In this paper, we extend the Sen–Yates–Grundy (Sen, J. Ind. Soc. Agric. Statist. 1953; Yates & Grundy, J. Roy. Statist. Soc. Ser. B 1953) variance estimator to two‐phase sampling, assuming fixed first‐phase sample size and fixed second‐phase sample size given the first‐phase sample. We apply the new variance estimators to two‐phase sampling designs with stratification at the second phase or both phases. We also develop Sen–Yates–Grundy‐type variance estimators of the two‐phase regression estimators that make use of the first‐phase auxiliary data and known population totals of some of the auxiliary variables.
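The classical single-phase Sen–Yates–Grundy estimator that this paper extends has a compact form for fixed-size designs. A sketch, assuming the joint inclusion probabilities `pij` are available (the worked numbers below are for simple random sampling of n = 2 from N = 3, so pi = 2/3 and pij = 1/3):

```python
def syg_variance(y, pi, pij):
    """Sen-Yates-Grundy variance estimator of the Horvitz-Thompson total
    for a fixed-size design. pi[i] are first-order and pij[i][j] are joint
    inclusion probabilities of the sampled units; non-negative whenever
    pi[i]*pi[j] >= pij[i][j] for all pairs."""
    v = 0.0
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            weight = (pi[i] * pi[j] - pij[i][j]) / pij[i][j]
            v += weight * (y[i] / pi[i] - y[j] / pi[j]) ** 2
    return v

# SRSWOR of n=2 from N=3: pi = 2/3, pij = 1/3 for distinct pairs
pij = [[2 / 3, 1 / 3], [1 / 3, 2 / 3]]
v = syg_variance([3.0, 6.0], [2 / 3, 2 / 3], pij)
```

The two-phase extension in the paper replaces these probabilities with conditional quantities given the first-phase sample; the sketch above shows only the single-phase building block.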

7.
A multiplicative seasonal forecasting model for cumulative events is presented in which, conditional on given end-of-season totals and known seasonal shape, events occurring within the season are shown to be multinomially distributed. The model uses the information contained in the arrival of new events to obtain a posterior distribution for end-of-season totals. Bayesian forecasts are obtained recursively in two stages: first, by predicting the expected number and variance of event counts in future intervals within the remaining season, and then by predicting revised means and variances for end-of-season totals based on the most recent forecast error.

8.
Assignment of individuals to the correct species or population of origin based on a comparison of allele profiles has in recent years become more accurate due to improvements in DNA marker technology. A method of assessing the error in such assignment problems is presented. The method is based on the exact hypergeometric distributions of contingency tables conditioned on marginal totals. The result is a confidence region of fixed confidence level. This confidence level is calculable exactly in principle, and estimable very accurately by simulation, without knowledge of the true population allele frequencies. Various properties of these techniques are examined through application to several examples of actual DNA marker data and through simulation studies. Methods which may reduce computation time are discussed and illustrated.
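Conditioning a 2×2 table on all of its margins yields the hypergeometric distribution used here, the same distribution that underlies Fisher's exact test. A minimal sketch of a one-sided exact tail probability for an illustrative table (not the paper's multi-allele confidence-region construction):

```python
from math import comb

def onesided_exact_p(a, b, c, d):
    """One-sided exact p-value for the 2x2 table [[a, b], [c, d]],
    conditioning on all margins: P(X >= a) under the hypergeometric
    distribution of the top-left cell."""
    r1, r2, c1, N = a + b, c + d, a + c, a + b + c + d
    denom = comb(N, c1)
    p = 0.0
    for k in range(a, min(r1, c1) + 1):
        if c1 - k <= r2:  # remaining column count must fit in row 2
            p += comb(r1, k) * comb(r2, c1 - k) / denom
    return p

p = onesided_exact_p(3, 1, 1, 3)  # 17/70 for this margin configuration
```

For multi-row/multi-column allele tables the same idea applies with the multivariate hypergeometric distribution, which is what makes exact enumeration expensive and simulation attractive, as the abstract notes.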

9.
It is shown that when a parameter lying in a sufficiently small interval is to be estimated in a family of uniform distributions, a two point prior is least favourable under squared error loss. The unique Bayes estimator with respect to this prior is minimax. The Γ-minimax estimator is derived for sets Γ of priors consisting of all priors that give fixed probabilities to two specified subintervals of the parameter space if a two point prior is least favourable in Γ.

10.
We investigate combinatorial matrix problems that are related to restricted integer partitions. They arise from Survo puzzles, where the basic task is to fill an m×n table with the integers 1, 2, …, mn, so that each number appears exactly once and the given column sums and row sums are satisfied. We present a new computational method for solving Survo puzzles with binary matrices that are recoded and combined using the Hadamard, Kronecker, and Khatri–Rao products. The idea of our method is based on using the matrix interpreter and other data analytic tools of Survo R, which represents the newest generation of the Survo computing environment, recently implemented as a multiplatform, open source R package. We illustrate our method with detailed examples.
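The basic Survo task is easy to state as a search problem. This is a naive brute-force sketch for very small tables, shown only to make the constraints concrete; it bears no resemblance to the paper's binary-matrix product method:

```python
from itertools import permutations

def solve_survo(m, n, row_sums, col_sums):
    """Brute-force Survo puzzle solver: place 1..m*n exactly once so that
    row and column sums match. Exponential in m*n; illustration only."""
    solutions = []
    for perm in permutations(range(1, m * n + 1)):
        grid = [perm[i * n:(i + 1) * n] for i in range(m)]
        rows_ok = all(sum(row) == rs for row, rs in zip(grid, row_sums))
        cols_ok = all(sum(grid[i][j] for i in range(m)) == col_sums[j]
                      for j in range(n))
        if rows_ok and cols_ok:
            solutions.append(grid)
    return solutions

sols = solve_survo(2, 2, [3, 7], [4, 6])  # this 2x2 instance is unique
```

Even this toy instance shows why clever matrix encodings matter: the naive search already visits (mn)! candidate fillings.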

11.
A procedure is proposed for the assessment of bioequivalence of variabilities between two formulations in bioavailability/bioequivalence studies. This procedure is essentially a two one-sided Pitman–Morgan tests procedure, based on the correlation between crossover differences and subject totals. The nonparametric version of the proposed test is also discussed. A dataset of AUC values from a 2×2 crossover bioequivalence trial is presented to illustrate the proposed procedures.
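The correlation at the heart of the Pitman–Morgan approach is computable in a few lines: equal variances of paired measurements imply zero correlation between their differences and sums. A minimal sketch of the single two-sided statistic (the paper's procedure applies two one-sided versions of it); the data below are invented:

```python
import math

def pitman_morgan_t(x, y):
    """Pitman-Morgan test of equal variances for paired samples:
    t-statistic (n-2 df) from the correlation between the paired
    differences x - y and sums x + y."""
    n = len(x)
    d = [xi - yi for xi, yi in zip(x, y)]
    s = [xi + yi for xi, yi in zip(x, y)]
    md, ms = sum(d) / n, sum(s) / n
    cov = sum((di - md) * (si - ms) for di, si in zip(d, s))
    vd = sum((di - md) ** 2 for di in d)
    vs = sum((si - ms) ** 2 for si in s)
    r = cov / math.sqrt(vd * vs)
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# x has visibly larger spread than y, so the statistic is positive
t = pitman_morgan_t([1, 2, 3, 4, 5], [1, 1, 2, 2, 3])
```

A positive statistic indicates the first formulation is more variable; in a bioequivalence setting the two one-sided tests would each be compared against a margin rather than against zero.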

12.
A general computer program which generates either the Wilcoxon, Kruskal-Wallis, Friedman, or extended Friedman statistic (where the numbers of cell observations nij may be any positive integer or zero) can be formulated simply by using the computational algorithm for the Benard-Van Elteren statistic. It is shown that the Benard-Van Elteren statistic can be computed using matrix algebra subroutines, including multiplication and inverse or g-inverse computational algorithms, in the case where the rank of the matrix V of the variances and covariances of the column totals is k-1. For the case where the rank of V is less than k-1, the use of the g-inverse is shown to greatly reduce the labor of calculation. In addition, the use of the Benard-Van Elteren statistic in testing against ordered alternatives is indicated.

13.
Summary.  The paper estimates an index of coincident economic indicators for the US economy by using time series with different frequencies of observation (monthly and quarterly, possibly with missing values). The model that is considered is the dynamic factor model proposed by Stock and Watson, specified in the logarithms of the original variables and at the monthly frequency, which poses a problem of temporal aggregation with a non-linear observational constraint when quarterly time series are included. Our main methodological contribution is to provide an exact solution to this problem that hinges on conditional mode estimation by iteration of the extended Kalman filtering and smoothing equations. On the empirical side, the contribution of the paper is to provide monthly estimates of quarterly indicators, among which is the gross domestic product, that are consistent with the quarterly totals. Two applications are considered: the first deals with the construction of a coincident index for the US economy, while the second does the same for the euro area.

14.
In official statistics, when a file of microdata must be delivered to external users, it is very difficult to provide them with a file in which missing values have been treated by multiple imputation. To overcome this difficulty, we propose a single-imputation method for qualitative data that respects numerous constraints. The imputation is balanced on previously estimated totals; editing rules can be respected; and the imputation is random, yet the totals are not affected by an imputation variance.

15.
The marginal totals of a contingency table can be rearranged to form a new table. If at least two of these totals include the same cell of the original table, the new table cannot be treated as an ordinary contingency table. An iterative method is proposed to calculate maximum likelihood estimators for the expected cell frequencies of the original table under the assumption that some marginal totals (or, more generally, some linear functions) of these expected frequencies satisfy a log-linear model. In some cases, a table of correlated marginal totals is treated as if it were an ordinary contingency table. The effects of ignoring the special structure of the marginal table on the distribution of the goodness-of-fit test statistics are discussed and illustrated, with special reference to stationary Markov chains.

16.
In this paper strategies based on a two-stage successive sampling scheme are proposed for estimating the ratio of the population totals of two characters on the most recent occasion. The proposed strategies are found to perform better than the standard ratio estimator when estimating the ratio of farmers' income to non-farmers' income. But when applied to the estimation of the ratio of girth to height of teak trees, a loss in efficiency is observed.

17.
This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the combination of models and the development of predictive distributions. Moving beyond conditioning on a fixed number of completely specified models, the paper introduces subjective Bayesian tools for formal comparison of these models with as yet incompletely specified models. The paper then shows how posterior simulators can facilitate communication between investigators (for example, econometricians) on the one hand and remote clients (for example, decision makers) on the other, enabling clients to vary the prior distributions and functions of interest employed by investigators. A theme of the paper is the practicality of subjective Bayesian methods. To this end, the paper describes publicly available software for Bayesian inference, model development, and communication and provides illustrations using two simple econometric models.

18.
This paper looks at various issues that are of interest to the sports gambler. First, an expression is obtained for the distribution of the final bankroll using fixed wagers with a specified initial bankroll. Second, fixed percentage wagers are considered where the Kelly method is extended to the case of simultaneous bets placed at various odds; a computational algorithm is presented to obtain the Kelly fractions. Finally, the paper considers the problem of determining whether a gambling system is profitable based on the historical results of bets placed at various odds.
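The starting point for the simultaneous-bet extension is the classical single-bet Kelly fraction, which has a closed form. A minimal sketch of that baseline only (the paper's algorithm for simultaneous bets at various odds is more involved):

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll for a single bet paying b-to-1
    with win probability p: f* = (b*p - (1 - p)) / b, floored at zero
    since a bet with negative edge should not be made."""
    return max(0.0, (b * p - (1 - p)) / b)

# even-money bet (b = 1) won with probability 0.6: stake 20% of bankroll
f = kelly_fraction(0.6, 1.0)
```

With several simultaneous bets the fractions interact through the shared bankroll, so the closed form no longer applies and a numerical optimization of expected log-growth is needed, which is what the paper's algorithm addresses.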

19.
Much of the small‐area estimation literature focuses on population totals and means. However, users of survey data are often interested in the finite‐population distribution of a survey variable and in the measures (e.g. medians, quartiles, percentiles) that characterize the shape of this distribution at the small‐area level. In this paper we propose a model‐based direct estimator (MBDE; Chandra and Chambers) of the small‐area distribution function. The MBDE is defined as a weighted sum of sample data from the area of interest, with weights derived from the calibrated spline‐based estimate of the finite‐population distribution function introduced by Harms and Duchesne, under an appropriately specified regression model with random area effects. We also discuss the mean squared error estimation of the MBDE. Monte Carlo simulations based on both simulated and real data sets show that the proposed MBDE and its associated mean squared error estimator perform well when compared with alternative estimators of the area‐specific finite‐population distribution function.

20.
For the analysis of square contingency tables with ordered categories, Tomizawa (1991) considered the diagonal uniform association symmetry (DUS) model, which has a multiplicative form for cell probabilities and has the structure of uniform association in the tables constructed using two diagonals that are equidistant from the main diagonal. This paper proposes another DUS model which has a similar multiplicative form for cumulative probabilities. The model indicates that the odds that an observation will fall in row category i or below and column category i+k or above, instead of in column category i or below and row category i+k or above, increase (decrease) exponentially as the cutpoint i increases for a fixed k. Examples are given.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号