Similar Articles
20 similar articles found (search time: 15 ms)
1.
In many epidemiological studies, disease occurrences and their rates are naturally modelled by counting processes and their intensities, allowing an analysis based on martingale methods. Applied to the Mantel–Haenszel estimator, these methods lend themselves to the analysis of general control selection sampling designs and the accommodation of time-varying exposures.
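As a minimal illustration of the estimator named above, the Mantel–Haenszel common odds ratio for a set of stratified 2×2 tables can be sketched as follows (the counts are invented for illustration and are not from the paper):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across strata.

    Each stratum is a 2x2 table given as (a, b, c, d) =
    (exposed cases, exposed controls, unexposed cases, unexposed controls).
    Estimator: sum_k(a_k*d_k/n_k) / sum_k(b_k*c_k/n_k).
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# two illustrative strata
tables = [(10, 20, 5, 40), (8, 12, 6, 30)]
mh = mantel_haenszel_or(tables)
```

The estimator pools evidence across strata without requiring a common sample size, which is why it is robust under the sparse-data asymptotics mentioned in several abstracts below.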

2.
Estimation of nonlinear functions of a multinomial parameter vector is necessary in many categorical data problems. The first- and second-order jackknife are explored for the purpose of bias reduction. The second-order jackknife of a function g(·) of a multinomial parameter is shown to be asymptotically normal if all second-order partials ∂²g(p)/∂p_i∂p_j obey a Hölder condition with exponent α > 1/2. Numerical results for the estimation of the log odds ratio in a 2×2 table demonstrate the efficiency of the jackknife method for reduction of mean squared error and the construction of approximate confidence intervals.
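The first-order jackknife mentioned above can be sketched for the log odds ratio of a 2×2 multinomial table. Since all observations in the same cell yield the same leave-one-out estimate, the delete-one average is a count-weighted sum over cells (a sketch with invented counts; every cell is assumed greater than 1 so no leave-one-out table is degenerate):

```python
import math

def log_or(counts):
    """Log odds ratio of a 2x2 table with cell counts (a, b, c, d)."""
    a, b, c, d = counts
    return math.log(a * d / (b * c))

def jackknife_log_or(counts):
    """First-order delete-one jackknife over the n multinomial observations.

    Deleting any observation from cell i gives the same leave-one-out
    value, so each cell is recomputed once and weighted by its count:
    theta_J = n*theta_hat - (n-1)*mean of leave-one-out estimates.
    """
    n = sum(counts)
    theta = log_or(counts)
    theta_bar = 0.0
    for i, ci in enumerate(counts):
        reduced = list(counts)
        reduced[i] -= 1
        theta_bar += ci * log_or(reduced)
    theta_bar /= n
    return n * theta - (n - 1) * theta_bar

counts = (10, 15, 12, 20)
est = jackknife_log_or(counts)
```

For well-filled tables the correction is small; its value lies in reducing the O(1/n) bias term that the plain plug-in estimator carries.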

3.
Frequently, contingency tables are generated in a multinomial sampling. Multinomial probabilities are then organized in a table assigning probabilities to each cell. A probability table can be viewed as an element in the simplex. The Aitchison geometry of the simplex identifies independent probability tables as a linear subspace. An important consequence is that, given a probability table, the nearest independent table is obtained by orthogonal projection onto the independent subspace. The nearest independent table is identified as that obtained by the product of geometric marginals, which do not coincide with the standard marginals, except in the independent case. The original probability table is decomposed into orthogonal tables, the independent and the interaction tables. The underlying model is log-linear, and a procedure to test independence of a contingency table, based on a multinomial simulation, is developed. Its performance is studied on an illustrative example.
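Under the Aitchison geometry described above, the nearest independent table is the closed (renormalized) product of geometric marginals. A minimal sketch, assuming all cells are strictly positive:

```python
import numpy as np

def nearest_independent(P):
    """Orthogonal projection (in the Aitchison geometry) of a probability
    table onto the independent subspace: the outer product of the
    geometric marginals, renormalized to sum to one."""
    row_gm = np.exp(np.log(P).mean(axis=1))   # geometric mean of each row
    col_gm = np.exp(np.log(P).mean(axis=0))   # geometric mean of each column
    T = np.outer(row_gm, col_gm)
    return T / T.sum()

P = np.array([[0.2, 0.1],
              [0.3, 0.4]])
Q = nearest_independent(P)
```

Note the geometric marginals generally differ from the ordinary row and column sums; only when the table is already independent do the two constructions coincide, in which case the projection returns the table unchanged.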

4.
Technical advances in many areas have produced more complicated high‐dimensional data sets than the usual high‐dimensional data matrix, such as the fMRI data collected in a period for independent trials, or expression levels of genes measured in different tissues. Multiple measurements exist for each variable in each sample unit of these data. Regarding the multiple measurements as an element in a Hilbert space, we propose Principal Component Analysis (PCA) in Hilbert space. The principal components (PCs) thus defined carry information about not only the patterns of variations in individual variables but also the relationships between variables. To extract the features with greatest contributions to the explained variations in PCs for high‐dimensional data, we also propose sparse PCA in Hilbert space by imposing a generalized elastic‐net constraint. Efficient algorithms to solve the optimization problems in our methods are provided. We also propose a criterion for selecting the tuning parameter.

5.
Trend tests in dose-response have been central problems in medicine. The likelihood ratio test is often used to test hypotheses involving a stochastic order, and stratified contingency tables are common in practice, but the distribution theory of the likelihood ratio test has not been fully developed for stratified tables with more than two stochastically ordered distributions. For c strata of m × r tables, this article introduces a model-free method for testing conditional independence against the simple stochastic order alternative and gives the asymptotic distribution of the test statistic, which is a chi-bar-squared distribution. A real data set concerning an ordered stratified table is used to show the validity of the method.

6.
In contingency table analysis, a likelihood ratio test for linear inequality constraints is discussed. The restriction considered in this article is much more general than the usual stochastic order restrictions. The asymptotic properties of the test statistic are derived. A simulation study is conducted to compare the empirical power and size of the proposed method with those of alternatives, and several real data sets illustrate the theoretical results. The idea used in this article can also be applied to testing problems with nonlinear inequality constraints.

7.
Three modified tests for homogeneity of the odds ratio for a series of 2 × 2 tables are studied when the data are clustered. In the case of clustered data, the standard tests for homogeneity of odds ratios ignore the variance inflation caused by positive correlation among responses of subjects within the same cluster, and therefore have inflated Type I error. The modified tests adjust for the variance inflation in the three existing standard tests: Breslow–Day, Tarone and the conditional score test. The degree of clustering effect is measured by the intracluster correlation coefficient, ρ. A variance correction factor derived from ρ is then applied to the variance estimator in the standard tests of homogeneity of the odds ratio. The proposed tests are an application of the variance adjustment method commonly used in correlated data analysis and are shown to maintain the nominal significance level in a simulation study. Copyright © 2004 John Wiley & Sons, Ltd.
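The variance correction described above can be sketched with the standard design-effect formula DEFF = 1 + (m − 1)ρ for clusters of average size m. Dividing a homogeneity chi-squared statistic by this factor is a simplified stand-in for the paper's adjustment of the variance estimator, not its exact derivation:

```python
def design_effect(avg_cluster_size, icc):
    """Variance inflation factor for clustered binary responses:
    DEFF = 1 + (m - 1) * rho, where rho is the intracluster
    correlation coefficient."""
    return 1 + (avg_cluster_size - 1) * icc

def adjusted_statistic(chi2, avg_cluster_size, icc):
    """Scale a homogeneity statistic by the design effect (sketch):
    equivalent to inflating the variance estimator by DEFF."""
    return chi2 / design_effect(avg_cluster_size, icc)
```

With independent subjects (m = 1, or ρ = 0) the factor is 1 and the standard tests are recovered; positive ρ shrinks the statistic and restores the nominal Type I error rate.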

8.
Some new tests of odds ratio homogeneity for fourfold tables are compared with the mixture model score test in the sparse-data case (many tables, small margins per table). Based on general empirical Bayes inequalities, the new tests have competitive power for 1:R matched designs, and superior power for more balanced designs.

9.
The Lomax (Pareto II) distribution has found wide application in a variety of fields. We analyze the second-order bias of the maximum likelihood estimators of its parameters for finite sample sizes, and show that this bias is positive. We derive an analytic bias correction which reduces the percentage bias of these estimators by one or two orders of magnitude, while simultaneously reducing relative mean squared error. Our simulations show that this performance is very similar to that of a parametric bootstrap correction based on a linear bias function. Three examples with actual data illustrate the application of our bias correction.
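The parametric bootstrap correction under a linear bias function, which the paper reports as performing similarly to its analytic correction, can be sketched as follows (uses `scipy.stats.lomax`; the sample size and number of bootstrap replicates are arbitrary choices for illustration):

```python
import numpy as np
from scipy.stats import lomax

rng = np.random.default_rng(0)

def mle(data):
    """MLE of the Lomax shape c and scale, with location fixed at 0."""
    c, loc, scale = lomax.fit(data, floc=0)
    return np.array([c, scale])

def bootstrap_bias_corrected(data, B=40):
    """Parametric bootstrap bias correction assuming a linear bias
    function: theta_corrected = 2*theta_hat - mean(theta_hat_boot)."""
    theta = mle(data)
    boot = np.array([
        mle(lomax.rvs(theta[0], scale=theta[1],
                      size=len(data), random_state=rng))
        for _ in range(B)
    ])
    return 2 * theta - boot.mean(axis=0)
```

Since the MLE bias is positive (as the paper shows), the bootstrap mean tends to exceed the original estimate, and the correction pulls the parameters downward.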

10.
Compositional tables – a continuous counterpart to contingency tables – carry relative information about relationships between row and column factors; thus, for their analysis, only ratios between cells of a table are informative. Consequently, the standard Euclidean geometry should be replaced by the Aitchison geometry on the simplex, which enables decomposition of the table into its independent and interactive parts. The aim of the paper is to find an interpretable coordinate representation for independent and interaction tables (in the sense of balances and odds ratios of cells, respectively), in which further statistical processing of compositional tables can be performed. Theoretical results are applied to real‐world problems from a health survey and in macroeconomics.

11.
The purpose of this paper is to review briefly the three main formulations of no-interaction hypotheses in contingency tables and to consider the formulation on a linear scale in some detail. More specifically, we (i) present a situation in 2×2 tables where such a formulation may be more appropriate than others, (ii) study the geometry of this problem, (iii) give contrast-type or parametric ANOVA-type formulations for general n-dimensional tables, (iv) discuss estimation and testing procedures, and (v) consider collapsibility of contingency tables in relation to the hypotheses of no interaction on a linear scale.

12.
In many case-control studies the risk factors are categorized in order to clarify the analysis and presentation of the data. However, inconsistent categorization of continuous risk factors may make interpretation difficult. This paper evaluates the effect of the categorization procedure on the odds ratio and several measures of association. Often the risk factor is dichotomized and the data linking the risk factor and the disease are presented in a 2×2 table. We show that the odds ratio obtained from the 2×2 table is usually considerably larger than the comparable statistic that would have been obtained had a large number of cutpoints been used. Also, if 2×2, 2×3, or 2×4 tables are obtained by using a few cutpoints on the risk factor, the measures of association for these tables are usually greater than the measure that would have been obtained had a large number of cutpoints been used. We propose an odds ratio measure that more closely approximates the odds ratio between the continuous risk factor and disease. A corresponding measure of association is also proposed for 2×2, 2×3, and 2×4 tables.
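The sensitivity of the 2×2 odds ratio to the choice of cutpoint can be illustrated with a small simulation (the logistic disease model and its coefficients are invented for illustration and are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)                        # continuous risk factor
p = 1 / (1 + np.exp(-(-1.0 + 0.8 * x)))       # logistic disease model
y = rng.random(n) < p                         # disease indicator

def table_or(exposed, disease):
    """Odds ratio of the 2x2 table formed by two boolean arrays."""
    a = np.sum(exposed & disease)
    b = np.sum(exposed & ~disease)
    c = np.sum(~exposed & disease)
    d = np.sum(~exposed & ~disease)
    return (a * d) / (b * c)

or_median = table_or(x > np.median(x), y)           # median cutpoint
or_quartile = table_or(x > np.quantile(x, 0.75), y)  # upper-quartile cutpoint
```

Here the dichotomized odds ratios exceed the per-standard-deviation odds ratio exp(0.8) of the underlying continuous model, and they change with the cutpoint, which is exactly the interpretive difficulty the paper addresses.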

13.
Crude and adjusted odds ratios, calculated from a collapsed 2×2 table or a stratified 2×2×K table, can be very similar or quite different when significant associations are found between each dichotomous variable and the K-level stratifying variable. It is demonstrated here that the magnitude of the difference between the logs of the two estimators can be approximated by 4 times the covariance between log linear interactions describing the associations of each of the binary variables with the stratifying variable. Two data examples illustrate the usefulness of the variability and covariability of the interactions in providing a statistical accounting for the magnitude of the difference between the logs of the crude and adjusted odds ratios. Other interpretations and applications of the variances and covariances of the log linear interactions are discussed.
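The divergence between crude and adjusted odds ratios can be seen in a toy example where every stratum has the same odds ratio yet the collapsed table does not (invented counts; the adjusted estimator shown is the Mantel–Haenszel form):

```python
def odds_ratio(a, b, c, d):
    """Cross-product odds ratio of a 2x2 table."""
    return a * d / (b * c)

# (a, b, c, d) per stratum; both strata have odds ratio 2/3
strata = [(10, 20, 30, 40), (40, 30, 20, 10)]

# crude OR from the collapsed 2x2 table (here: 50, 50, 50, 50)
collapsed = [sum(t[i] for t in strata) for i in range(4)]
crude = odds_ratio(*collapsed)

# Mantel-Haenszel adjusted OR across strata
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
adjusted = num / den
```

Collapsing yields a crude odds ratio of 1, while the common within-stratum odds ratio is 2/3; the gap is the log-scale difference that the paper approximates via covariances of log-linear interactions.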

14.
Here we consider a multinomial probit regression model where the number of variables substantially exceeds the sample size and only a subset of the available variables is associated with the response; selecting a small number of relevant variables for classification has therefore received a great deal of attention. When the number of variables is substantial, sparsity-enforcing priors for the regression coefficients are called for on grounds of predictive generalization and computational ease. In this paper, we propose a sparse Bayesian variable selection method for the multinomial probit regression model for multi-class classification. The performance of the proposed method is demonstrated on one simulated data set and three well-known gene expression profiling data sets: breast cancer, leukemia, and small round blue-cell tumors. The results show that, compared with other methods, our method is able to select the relevant variables and obtains competitive classification accuracy with a small subset of relevant genes.

15.
In this paper, we employ the generalized linear model (GLM) in the form ℓij = … to decompose the symmetry model into the class of models discussed in Tomizawa (1992). In this formulation, the random component is the observed counts fij with an underlying Poisson distribution. This approach utilizes the non-standard log-linear model, and our focus therefore relates to models that are decompositions of the complete symmetry model, that is, models implied by the symmetry model. We develop the factor and regression variables required for implementing these models in SAS PROC GENMOD and SPSS PROC GENLOG. We apply this methodology to analyse three 4×4 contingency tables, one of which is the Japanese unaided distance vision data. Results obtained in this study are consistent with those from the extensive literature on the subject. We further extend our applications to the 6×6 Brazilian social mobility data and find that both the quasi linear diagonal-parameters symmetry (QLDPS) and the quasi 2-ratios parameter symmetry (Q2RPS) models fit the Brazilian data very well, the most parsimonious being the QLDPS and the quasi-conditional symmetry (QCS) models. The SAS and SPSS programs for implementing the models discussed in this paper are presented in Appendices A, B and C.

16.
The non‐parametric generalized likelihood ratio test is a popular method of model checking for regressions. However, two issues may limit its power: an existing bias term and the curse of dimensionality. The purpose of this paper is thus twofold: a bias reduction is suggested, and a dimension reduction‐based adaptive‐to‐model enhancement is recommended to promote the power performance. The proposed test statistic still possesses the Wilks phenomenon and behaves like a test with only one covariate. Thus, it converges to its limit at a much faster rate and is much more sensitive to alternative models than the classical non‐parametric generalized likelihood ratio test. As a by‐product, we also prove that the bias‐corrected test is more efficient than the one without bias reduction in the sense that its asymptotic variance is smaller. Simulation studies and a real data analysis are conducted to evaluate the proposed tests.

17.
In a cluster randomized controlled trial (RCT), the number of randomized units is typically considerably smaller than in trials where the unit of randomization is the patient. If the number of randomized clusters is small, there is a reasonable chance of baseline imbalance between the experimental and control groups. This imbalance threatens the validity of inferences regarding post‐treatment intervention effects unless an appropriate statistical adjustment is used. Here, we consider application of the propensity score adjustment for cluster RCTs. For the purpose of illustration, we apply the propensity adjustment to a cluster RCT that evaluated an intervention to reduce suicidal ideation and depression. This approach to adjusting imbalance had considerable bearing on the interpretation of results. A simulation study demonstrates that the propensity adjustment reduced well over 90% of the bias seen in unadjusted models for the specifications examined. Copyright © 2013 John Wiley & Sons, Ltd.
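A propensity score adjustment of the kind discussed above can be sketched as inverse-probability weighting with a logistic propensity model. This is a generic sketch on simulated data, not the paper's specific implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson logistic regression with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ beta))
        W = p * (1 - p)
        H = X1.T @ (X1 * W[:, None])          # Hessian
        beta += np.linalg.solve(H, X1.T @ (y - p))
    return X1, beta

# baseline covariate x; treatment z ends up imbalanced in x by construction
n = 2000
x = rng.normal(size=n)
z = (rng.random(n) < 1 / (1 + np.exp(-0.5 * x))).astype(float)

X1, beta = fit_logistic(x[:, None], z)
ps = 1 / (1 + np.exp(-X1 @ beta))             # estimated propensity scores
w = np.where(z == 1, 1 / ps, 1 / (1 - ps))    # inverse-probability weights
```

After weighting, the covariate distributions of the two arms should be much closer than in the raw data, which is the sense in which the adjustment removes baseline imbalance.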

18.
This article is concerned with parameter estimation in the linear regression model when the regression coefficients are suspected to lie in the subspace defined by a set of equality restrictions. The objective is to introduce the preliminary test almost unbiased Liu estimators (PTAULE) based on the Wald (W), likelihood ratio (LR), and Lagrangian multiplier (LM) tests, and to compare the proposed estimators under the quadratic bias and mean square error (MSE) criteria.

19.
In this paper, we develop a methodology for the dynamic Bayesian analysis of generalized odds ratios in contingency tables. It is a standard practice to assume a normal distribution for the random effects in the dynamic system equations. Nevertheless, the normality assumption may be unrealistic in some applications and hence the validity of inferences can be dubious. Therefore, we assume a multivariate skew-normal distribution for the error terms in the system equation at each step. Moreover, we introduce a moving average approach to elicit the hyperparameters. Both simulated data and real data are analyzed to illustrate the application of this methodology.

20.
The current estimator of the degree of insect control by an insecticide in a field experiment laid out in randomized blocks is equal to one minus the cross-product ratio of a two-way table of total insect counts over blocks. Since much work has been done on estimation of the common odds ratio of a number of strata in medical studies, a series of Monte Carlo studies was performed to investigate the possible use of these estimators and their standard errors in estimating the common degree of insect control over a number of blocks. Maximum likelihood, Mantel–Haenszel, and empirical logit estimators were evaluated and compared with back-transformed means over blocks of cross-product ratios on the arithmetic, logarithmic, and arcsine scales. Maximum likelihood and Mantel–Haenszel estimators had the smallest mean squared errors, but their standard error estimates were only appropriate when sampling distributions were approximately Poisson and there was little heterogeneity among plots within blocks in the natural rates of population change.
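The estimator described in the first sentence, one minus a cross-product ratio of insect totals, can be sketched as follows. Which margins enter the cross-product is an assumption in this sketch, following the familiar Henderson–Tilton style correction for natural population change:

```python
def degree_of_control(t_pre, t_post, c_pre, c_post):
    """Degree of insect control = 1 - cross-product ratio of the 2x2
    table of insect totals (treated/control rows, pre/post columns).
    The control plots supply the natural rate of population change."""
    return 1 - (t_post * c_pre) / (t_pre * c_post)

# treated counts fall 100 -> 20 while untreated fall only 100 -> 80,
# so 75% of the treated decline is attributed to the insecticide
control = degree_of_control(100, 20, 100, 80)
```

Because the estimator is a function of an odds ratio, the stratified odds ratio machinery from the medical literature (Mantel–Haenszel, empirical logit) transfers directly, which is the connection the Monte Carlo studies exploit.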
