Found 20 similar articles (search time: 15 ms)
1.
This paper proposes an approach for detecting multiple confounders which combines the advantages of two causal models, the potential outcome model and the causal diagram. The approach need not use a complete causal diagram as long as it is known that a known covariate set Z contains the parent set of the exposure E. On the other hand, whether or not a covariate is a confounder may depend on its categorization. We introduce uniform non-confounding, which implies non-confounding in any subpopulation defined by an interval of a covariate (or any pooled level of a discrete covariate). We show that the conditions in Miettinen and Cook's criteria for non-confounding also imply uniform non-confounding. Further, we present an algorithm for deleting non-confounders from the potential confounder set Z, which extends the approach of Greenland et al. [1999a. Causal diagrams for epidemiologic research. Epidemiology 10, 37–48] by splitting Z into a series of potential confounder subsets. We also discuss conditions for the absence of confounding bias in the subpopulations of interest, where the subpopulations may be defined by non-confounders.
2.
The National Cancer Institute (NCI) suggests a sudden reduction in prostate cancer mortality rates, likely due to highly successful treatments and screening methods for early diagnosis. We are interested in understanding the impact of medical breakthroughs, treatments, or interventions on the survival experience for a population. For this purpose, estimating the underlying hazard function, with possible time change points, would be of substantial interest, as it will provide a general picture of the survival trend and when this trend is disrupted. Increasing attention has been given to testing the assumption of a constant failure rate against a failure rate that changes at a single point in time. We expand the set of alternatives to allow for the consideration of multiple change points, and propose a model selection algorithm using sequential testing for the piecewise constant hazard model. These methods are data driven and allow us to estimate not only the number of change points in the hazard function but also where those changes occur. Such an analysis allows for better understanding of how changing medical practice affects the survival experience for a patient population. We test for change points in prostate cancer mortality rates using the NCI Surveillance, Epidemiology, and End Results dataset.
3.
Vishva Manohara Danthurebandara Jie Yu Martina Vandebroek 《Journal of statistical planning and inference》2011,141(7):2276-2286
Conjoint choice experiments have become a powerful tool to explore individual preferences. The consistency of respondents' choices depends on the choice complexity. For example, it is easier to make a choice between two alternatives with few attributes than between five alternatives with several attributes. In the latter case it will be much harder to choose the preferred alternative which is reflected in a higher response error. Several authors have dealt with this choice complexity in the estimation stage but very little attention has been paid to set up designs that take this complexity into account. The core issue of this paper is to find out whether it is worthwhile to take this complexity into account in the design stage. We construct efficient semi-Bayesian D-optimal designs for the heteroscedastic conditional logit model which is used to model the across respondent variability that occurs due to the choice complexity. The degree of complexity is measured by the entropy, as suggested by Swait and Adamowicz (2001). The proposed designs are compared with a semi-Bayesian D-optimal design constructed without taking the complexity into account. The simulation study shows that it is much better to take the choice complexity into account when constructing conjoint choice experiments.
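The entropy measure of Swait and Adamowicz is simply the Shannon entropy of the logit choice probabilities for a choice set: a set whose alternatives are equally attractive has maximal entropy (hardest choice), while a dominated set has low entropy. A minimal sketch, assuming deterministic utilities are already computed:

```python
import math

def choice_entropy(utilities):
    """Shannon entropy of the logit choice probabilities for one
    choice set, used as a measure of choice complexity."""
    expu = [math.exp(v) for v in utilities]
    s = sum(expu)
    p = [e / s for e in expu]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)
```

For a set of m equally attractive alternatives the entropy is log m; it shrinks toward zero as one alternative dominates.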
4.
Duane Meeter 《Communications in Statistics - Theory and Methods》2013,42(11):4213-4223
An algorithm is developed for calculating the probability distribution of the number of matches between two specified rows of a matrix of zeroes and ones. Cases covered include row totals fixed, column totals fixed, and column totals and the two specified rows' totals fixed. The results are applied to presence-absence data on six species of ground finches on 23 Galápagos islands and two constructed examples.
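For the simplest of the cases listed (only the two row totals fixed, columns exchangeable), the number of columns where both rows have a 1 is hypergeometric, and the match count follows directly. A sketch, with the function name my own; the paper's algorithm also covers the harder fixed-column-total cases:

```python
from math import comb

def match_distribution(n, r1, r2):
    """Distribution of the number of matching columns between two 0/1
    rows of length n with row totals r1 and r2 fixed and all column
    arrangements equally likely.  A column matches when both entries
    are 0 or both are 1."""
    dist = {}
    for k in range(max(0, r1 + r2 - n), min(r1, r2) + 1):
        # k = number of columns where both rows have a 1 (hypergeometric)
        p = comb(r1, k) * comb(n - r1, r2 - k) / comb(n, r2)
        m = k + (n - r1 - r2 + k)        # 1-1 matches plus 0-0 matches
        dist[m] = dist.get(m, 0.0) + p
    return dist
```

With n = 23 islands this gives an exact null distribution against which an observed co-occurrence count can be compared.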
5.
In this paper, we establish the optimal size of the choice sets in generic choice experiments for asymmetric attributes when estimating main effects only. We give an upper bound for the determinant of the information matrix when estimating main effects and all two-factor interactions for binary attributes. We also derive the information matrix for a choice experiment in which the choice sets are of different sizes and use this to determine the optimal sizes for the choice sets.
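The information matrix being compared across choice-set sizes is the standard multinomial logit one, I(β) = Σ_s X_s'(diag(p_s) − p_s p_s')X_s, equivalently a probability-weighted covariance of the attribute rows within each set. A generic sketch (not the paper's specific asymmetric-attribute construction):

```python
import math

def mnl_information(choice_sets, beta):
    """Fisher information sum_s X_s'(diag(p_s) - p_s p_s')X_s of a
    multinomial logit.  Each choice set is a list of attribute rows,
    one per alternative; sets may have different sizes.  D-optimal
    design maximizes the determinant of this matrix."""
    k = len(beta)
    info = [[0.0] * k for _ in range(k)]
    for X in choice_sets:
        u = [math.exp(sum(b * x for b, x in zip(beta, row))) for row in X]
        s = sum(u)
        p = [v / s for v in u]                    # logit choice probabilities
        xbar = [sum(p[j] * X[j][a] for j in range(len(X))) for a in range(k)]
        for j, row in enumerate(X):               # weighted covariance form
            for a in range(k):
                for b2 in range(k):
                    info[a][b2] += p[j] * (row[a] - xbar[a]) * (row[b2] - xbar[b2])
    return info
```

Comparing det(I) for designs built from, say, pairs versus triples of alternatives is exactly the kind of calculation the optimal-size results summarize.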
6.
7.
Zerbet and Nikulin presented a new statistic Z_k for detecting outliers in the exponential distribution. They also compared this statistic with Dixon's statistic D_k. In this article, we extend this approach to the gamma distribution and compare the result with Dixon's statistic. The results show that the test based on the statistic Z_k is more powerful than the test based on Dixon's statistic.
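The power comparison behind such a conclusion follows a standard Monte Carlo recipe: calibrate the critical value under the null, then estimate the rejection rate under contamination. The definitions of Z_k and D_k are in the papers; the max-over-sum statistic below is only a stand-in to show the recipe, and all names are my own.

```python
import random

def outlier_stat(x):
    """Illustrative max-over-sum outlier statistic for positive data
    (a stand-in; Z_k and D_k have their own definitions)."""
    return max(x) / sum(x)

def mc_power(n, n_sim, shift, alpha=0.05, seed=0):
    """Monte Carlo power of the test at level alpha when one
    exponential observation is inflated by `shift`."""
    rng = random.Random(seed)
    # critical value from the simulated null distribution
    null = sorted(outlier_stat([rng.expovariate(1.0) for _ in range(n)])
                  for _ in range(n_sim))
    crit = null[int((1 - alpha) * n_sim)]
    # rejection rate under the contaminated alternative
    hits = 0
    for _ in range(n_sim):
        x = [rng.expovariate(1.0) for _ in range(n)]
        x[0] *= shift
        hits += outlier_stat(x) > crit
    return hits / n_sim
```

Running the same loop with two competing statistics on identical samples gives the kind of power ranking reported in the abstract.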
8.
Toshio Sakata 《Communications in Statistics - Theory and Methods》2013,42(3):641-655
The likelihood ratio test for testing for a change in a sequence of variances of normal populations is derived. The alternative hypothesis considered is of a one-sided nature. For the test, the conservativeness of the Sidak bound is shown, and the asymptotic version of the Sidak bound is also constructed. These bounds are compared with the Bonferroni bound and the Worsley bound using the Monte Carlo method. Finally, Hsu's stock market returns data are reanalysed using the test.
9.
Mahayaudin M. Mansor David A. Green Andrew V. Metcalfe 《The American statistician》2020,74(3):258-266
Directionality can be seen in many stationary time series from various disciplines, but it is overlooked when fitting linear models with Gaussian errors. Moreover, we cannot rely on distinguishing directionality by comparing a plot of a time series in time order with a plot in reverse time order. In general, a statistical measure is required to detect and quantify directionality. There are several quite different qualitative forms of directionality, and we distinguish: rapid rises followed by slow recessions; rapid increases and rapid decreases from the mean followed by slow recovery toward the mean; directionality above or below some threshold; and intermittent directionality. The first objective is to develop a suite of statistical measures that will detect directionality and help classify its nature. The second objective is to demonstrate the potential benefits of detecting directionality. We consider applications from business, environmental science, finance, and medicine. Time series data are collected from many processes, both natural and anthropogenic, by a wide range of organizations, and directionality can easily be monitored as part of routine analysis. We suggest that doing so may provide new insights to the processes.
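One classical member of such a suite of measures is the skewness of the lag-1 differences: a time-reversible series has symmetric differences (skewness near zero), whereas slow rises followed by rapid falls produce a few large negative differences and hence negative skewness. This is a sketch of one generic measure, not necessarily one of the paper's specific statistics:

```python
def diff_skewness(x):
    """Skewness of the lag-1 differences of a series: a simple
    directionality (time-irreversibility) measure.  Reversing the
    series flips its sign; a reversible series gives roughly zero."""
    d = [b - a for a, b in zip(x, x[1:])]
    n = len(d)
    m = sum(d) / n
    s2 = sum((di - m) ** 2 for di in d) / n
    s3 = sum((di - m) ** 3 for di in d) / n
    return s3 / s2 ** 1.5
```

A sawtooth that climbs slowly and drops sharply scores strongly negative, while its time-reversed version scores strongly positive, which is exactly the asymmetry a reversed-plot comparison fails to quantify.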
10.
11.
Review of OPTIMAL CONTROL, EXPECTATIONS AND UNCERTAINTY by Sean Holly and Andrew Hughes Hallett, reviewed by Scott David Hakala, Southern Methodist University, Dept. of Economics, Dallas, TX 75275.
12.
Why study pseudo-R2's for limited dependent variable models? After all, even in the much clearer ordinary least squares case, R2 is a poor guide to model selection, at least when used by itself, because it never decreases and typically increases whenever an independent variable is added. There are even cases where R2 will tend to one when there is no relationship among the (nonstationary) variables whatsoever (Granger and Newbold, 1974). Surely applied researchers would not want to bother with such a statistic in the limited dependent variable case, particularly when the intuitive explained-variation-to-total-variation interpretation is no longer available.
13.
In vitro dissolution similarity has been suggested as a surrogate for assessing equivalence between the pre-changed and post-changed formulations for postapproval changes of a drug. The difference factor f1, based on the absolute mean difference, has been proposed as a criterion for evaluating similarity between dissolution profiles. Statistical properties including density function, bias, and asymptotic distribution of a consistent estimator are investigated. Due to the complexity of the distribution of the estimator, we suggest the use of confidence intervals obtained from the bootstrap method for evaluation of dissolution similarity. A simulation was conducted to examine the size and power of the proposed CI procedure. Comparisons with other criteria such as the similarity factor are also provided. Numerical examples are used to illustrate the proposed CI procedure.
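The difference factor is f1 = 100 · Σ|R_t − T_t| / Σ R_t over the common sampling times, and the bootstrap CI resamples dosage units within each formulation. A sketch under those assumptions (function names and the percentile-interval choice are mine; the paper investigates the estimator's properties in more depth):

```python
import random

def f1_factor(ref, test):
    """Difference factor f1 between mean dissolution profiles
    measured at the same time points."""
    return 100.0 * sum(abs(r - t) for r, t in zip(ref, test)) / sum(ref)

def bootstrap_ci(ref_units, test_units, n_boot=2000, alpha=0.10, seed=0):
    """Percentile bootstrap CI for f1, resampling individual dosage
    units (profiles) within each formulation."""
    rng = random.Random(seed)
    def mean_profile(units):
        return [sum(col) / len(col) for col in zip(*units)]
    stats = []
    for _ in range(n_boot):
        r = [rng.choice(ref_units) for _ in ref_units]
        t = [rng.choice(test_units) for _ in test_units]
        stats.append(f1_factor(mean_profile(r), mean_profile(t)))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]
```

Similarity is then declared when the upper confidence limit falls below the chosen f1 cut-off.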
14.
Consider a sequence of independent observations which change their marginal distribution at most once somewhere in the sequence, where one is not certain whether or where the change has occurred. One would be interested in detecting the change and determining the two distributions which would describe the sequence. On the other hand, if no change has occurred, one would want to know the common distribution of the observations. This study develops a Bayesian test for detecting a switch from one linear model to another. The test is based on the marginal posterior mass function of the switch point and the posterior probability of a stable model. This test and an informal sequential procedure of Smith are illustrated with data generated from an unstable linear regression model, which changes the linear relationship between the dependent and independent variables.
15.
Chia-Shang James Chu 《Econometric Reviews》2013,32(2):241-266
This paper applies recent theories of testing for parameter constancy to the conditional variance in a GARCH model. The supremum Lagrange multiplier test for conditional Gaussian GARCH models and its robustified variants are discussed. The asymptotic null distribution of the test statistics are derived from the weak convergence of the scores, and the critical values from the hitting probability of squared Bessel process. Monte Carlo studies on the finite sample size and power performance of the supremum LM tests are conducted. Applications of these tests to S&P 500 indicate that the hypothesis of stable conditional variance parameters can be rejected.
16.
C. M. Barros G. J. A. Amaral A. D. C. Nascimento A. H. M. A. Cysneiros 《Communications in Statistics - Theory and Methods》2017,46(14):6882-6898
A method for detecting outliers in axial data has been proposed by Best and Fisher (1986). To extend that work, we propose four new methods. Two of them are suitable for outlier detection and depend on the classic geodesic distance and a modified version of this distance. The other two procedures, which are designed for influential observation detection, are based on the Kullback–Leibler and Cook's distances. Some simulation experiments are performed to compare all considered methods. Detection and error rates are used as comparison criteria. Numerical results provide evidence in favor of the KL distance.
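For axial data an angle θ is identified with θ + π, so the geodesic distance between two axes lies in [0, π/2]. A distance-based outlier score can then be the mean distance of each axis to the rest; this is one simple variant in the spirit of the detectors compared, with the scoring rule my own:

```python
import math

def axial_geodesic(theta1, theta2):
    """Geodesic distance between two axes (angles identified with
    angle + pi); result lies in [0, pi/2]."""
    d = abs(theta1 - theta2) % math.pi
    return min(d, math.pi - d)

def outlier_scores(angles):
    """Score each axis by its mean geodesic distance to the others;
    large scores flag candidate outliers."""
    n = len(angles)
    return [sum(axial_geodesic(angles[i], angles[j])
                for j in range(n) if j != i) / (n - 1)
            for i in range(n)]
```

The modified-distance and influence-based (Kullback–Leibler, Cook) procedures in the paper replace this raw distance with model-based discrepancies.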
17.
Unreplicated factorial designs pose a difficult problem in analysis because there are no degrees of freedom left to estimate the error. Daniel [Technometrics 1 (1959), pp. 311-341] proposed an ingenious graphical method that does not require σ to be estimated. Here we try to put Daniel's method into a formal framework and remove the subjectiveness it carries. A simulation study has been conducted that shows that the proposed method behaves better than Lenth's [Technometrics 31 (1989), pp. 469-473] popular method.
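The Lenth (1989) benchmark mentioned here is easy to state: a pseudo standard error (PSE) is built from the median absolute effect after trimming effects that look active, and effects are flagged by |effect|/PSE. A sketch (the critical value 2.3 is roughly Lenth's margin for 15 effects at the 5% level, stated here as an assumption):

```python
import statistics

def lenth_pse(effects):
    """Lenth's pseudo standard error for unreplicated factorial
    effect estimates."""
    abs_e = [abs(e) for e in effects]
    s0 = 1.5 * statistics.median(abs_e)
    trimmed = [a for a in abs_e if a < 2.5 * s0]   # drop likely-active effects
    return 1.5 * statistics.median(trimmed)

def active_effects(effects, t_crit=2.3):
    """Indices of effects with |t| = |effect|/PSE above the margin."""
    pse = lenth_pse(effects)
    return [i for i, e in enumerate(effects) if abs(e) / pse > t_crit]
```

Daniel's half-normal plot makes the same active/inert call graphically; the paper's contribution is formalizing that call so it no longer depends on the analyst's eye.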
18.
Detecting parameter shift in GARCH models  Cited by: 1 (self-citations: 0; citations by others: 1)
Chia-Shang James Chu 《Econometric Reviews》1995,14(2):241-266
This paper applies recent theories of testing for parameter constancy to the conditional variance in a GARCH model. The supremum Lagrange multiplier test for conditional Gaussian GARCH models and its robustified variants are discussed. The asymptotic null distribution of the test statistics are derived from the weak convergence of the scores, and the critical values from the hitting probability of squared Bessel process.
Monte Carlo studies on the finite sample size and power performance of the supremum LM tests are conducted. Applications of these tests to S&P 500 indicate that the hypothesis of stable conditional variance parameters can be rejected.
19.
Statistics are developed for predicting the effect of data transformations on the F statistic when the assumptions of homoscedasticity and normality underlying the ANOVA are not necessarily satisfied. These statistics are useful for determining whether and how to transform. They are developed by partitioning the change in the observed value of the F-statistic under the transformation into two expressions, one of which depends on the "truth" of H0 while the other does not. Using this partition, desirable properties are derived for transformations. Criteria are developed defining transformations which tend to preserve the Type I error while increasing power when needed. Using these criteria, the notion of model robustness is introduced. It is shown that the Box-Cox methodology for selecting a power transform may, under certain conditions, produce a transformation which does not permit inferences to be made about the parent population from the transformed population. An alternative approach suggested here does permit such inferences.
20.
Frank A. G. Windmeijer 《Econometric Reviews》1995,14(1):101-116
In this paper, a review is given of various goodness-of-fit measures that have been proposed for the binary choice model in the last two decades. The relative behaviour of several pseudo-R2 measures is analysed in a series of misspecified binary choice models, the misspecification being omitted variables or an included irrelevant variable. A comparison is made with the OLS-R2 of the underlying latent variable model and with the squared sample correlation coefficient of the true and predicted probabilities. Further, it is investigated how the values of the measures change with a changing frequency rate of successes.