Similar Literature
20 similar records found (search time: 15 ms)
1.
2.
A number of topics of statistical methodology in weather modification are discussed. The time sequence of unit definition, classification and randomization is shown to affect the types of units that can be used validly, and this casts doubt on the value of blocking. Re-randomization (permutation) tests are recommended as the only reliable method of confirmatory inference for weather experiments. Some aspects of such tests are examined, including a procedure for multiple comparisons. The plague of multiplicity of tests is discussed and warned against. Doubts about cumulative evaluations of "all" experiments are expressed. A case is argued for examination of some non-randomized seeding operations. Considering the dearth of randomized data, it is argued that careful evaluation of seeding operations should be undertaken.
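A re-randomization (permutation) test of the kind recommended here is straightforward to sketch. The following is a minimal Python illustration for a two-sample comparison of seeded versus unseeded units; the gamma-distributed rainfall values, sample sizes, and mean-difference statistic are hypothetical choices, not the paper's protocol.

```python
import numpy as np

def rerandomization_pvalue(seeded, unseeded, n_perm=10000, rng=None):
    """Two-sided re-randomization test for a difference in means.

    Re-assigns the 'seeded' labels at random many times and compares the
    observed mean difference with the re-randomization distribution.
    """
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([seeded, unseeded])
    n_seeded = len(seeded)
    observed = seeded.mean() - unseeded.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:n_seeded].mean() - perm[n_seeded:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one keeps the p-value valid

# Hypothetical rainfall amounts for seeded vs. unseeded storms
rng = np.random.default_rng(0)
seeded = rng.gamma(2.0, 2.5, size=25)
unseeded = rng.gamma(2.0, 2.0, size=25)
print(rerandomization_pvalue(seeded, unseeded, rng=1))
```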

3.
Data from a weather modification experiment are examined and a number of statistical analyses reported. The validity of earlier inferences is studied, as are the utilities of various statistical methods. The experiment is described. The original analysis of North American Weather Consultants, who conducted the experiment, is reviewed. Data summarization is reported. A major approach to analysis is through the use of cloud-physics covariates in regression analyses. Finally, a multivariate analysis is discussed. It appears that the covariates may have been affected by treatment (cloud seeding) and that their use is invalid, not only reducing error variances but also removing the treatment effect. Some recommendations for improved design of similar future experiments are given in a concluding section, including preliminary trial use of blocking by storms.
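The central warning, that adjusting for a covariate which is itself affected by treatment can absorb the treatment effect, is easy to demonstrate by simulation. The sketch below uses an invented data-generating model chosen only to make the point visible; nothing here reflects the actual experiment's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
treat = rng.integers(0, 2, size=n)            # 1 = seeded, 0 = not seeded
# Hypothetical cloud-physics covariate that is itself raised by seeding
covariate = 1.0 + 0.8 * treat + rng.normal(0, 1, n)
# Rainfall responds to treatment mainly *through* the covariate
rain = 2.0 + 1.5 * covariate + rng.normal(0, 1, n)

def ols_coef(X, y):
    """Ordinary least squares coefficients via numpy."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
unadjusted = ols_coef(np.column_stack([ones, treat]), rain)
adjusted = ols_coef(np.column_stack([ones, treat, covariate]), rain)
print("treatment effect, unadjusted:", round(unadjusted[1], 3))  # ~1.2
print("treatment effect, adjusted:  ", round(adjusted[1], 3))    # ~0
```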

4.
Various statistical tests have been developed for testing the equality of means in matched pairs with missing values. However, most existing methods are commonly based on certain distributional assumptions such as normality, 0-symmetry or homoscedasticity of the data. The aim of this paper is to develop a statistical test that is robust against deviations from such assumptions and also leads to valid inference in case of heteroscedasticity or skewed distributions. This is achieved by applying a clever randomization approach to handle missing data. The resulting test procedure is not only shown to be asymptotically correct but is also finitely exact if the distribution of the data is invariant with respect to the considered randomization group. Its small sample performance is further studied in an extensive simulation study and compared to existing methods. Finally, an illustrative data example is analysed.
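As a rough illustration of the randomization idea, here is a sign-flipping test applied to the complete pairs only, with NaN marking missing observations. This is a deliberately reduced sketch: the paper's actual procedure also exploits the incomplete cases and is constructed to stay valid under heteroscedasticity.

```python
import numpy as np

def signflip_test(x, y, n_perm=10000, rng=None):
    """Sign-flip randomization test for H0: equal means in matched pairs.

    Uses the complete pairs only; NaNs mark missing observations. A
    simplified stand-in for the paper's procedure, which additionally
    uses the incompletely observed cases.
    """
    rng = np.random.default_rng(rng)
    complete = ~np.isnan(x) & ~np.isnan(y)
    d = x[complete] - y[complete]
    observed = d.mean()
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=d.size)
        if abs((signs * d).mean()) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)

# Invented paired data with missing entries
x = np.array([5.1, 4.8, np.nan, 6.0, 5.5, 4.9, np.nan, 5.7])
y = np.array([4.7, np.nan, 5.2, 5.1, 5.0, 4.6, 5.3, 5.2])
print(signflip_test(x, y, rng=0))
```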

5.
This article provides some views on the statistical design and analysis of weather modification experiments. Perspectives were developed from experience with analyses of the Santa Barbara Phase I experiment, summarized in Section 2. Randomization analyses are reported and compared with previously published parametric analyses. The parametric significance levels of tests for a cloud seeding effect agree well with the significance levels of the new corresponding randomization tests. These results, along with similar results of others, suggest that parametric analyses may be used as approximations to randomization analyses in exploratory analyses or reanalyses of weather modification experimental data.
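The reported agreement between parametric and randomization significance levels can be checked on any dataset by running both analyses side by side. The sketch below does so for a two-sample t statistic on simulated lognormal "rainfall"; all numbers and distributional choices are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical rainfall for seeded vs. control storms
seeded = rng.lognormal(1.0, 0.6, size=20)
control = rng.lognormal(0.8, 0.6, size=20)

# Parametric analysis
res = stats.ttest_ind(seeded, control)
t_p, t_obs = res.pvalue, res.statistic

# Randomization analysis of the same statistic
pooled = np.concatenate([seeded, control])
n_perm, count = 5000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    t = stats.ttest_ind(perm[:20], perm[20:]).statistic
    if abs(t) >= abs(t_obs):
        count += 1
perm_p = (count + 1) / (n_perm + 1)
print(f"parametric p = {t_p:.3f}, randomization p = {perm_p:.3f}")
```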

6.
The effect of a test compound on neurogenically induced vasodilation in marmosets was studied using a non-standard experimental design with overlapping dosage groups and repeated measurements. In this study, the assumption that the data were normally distributed seemed inappropriate, so no traditional data analyses could be used. As an alternative, a new permutation trend test was designed based on the Jonckheere–Terpstra test statistic. This test protects the type I error without any further assumptions. Statistically significant differences in trend between treatment groups were detected. The effect of the compound was then shown across doses using subsequent Wilcoxon rank-sum tests against ordered alternatives. In all, the permutation test proved quite useful in this context. This nonparametric approach to the analysis may easily be adapted to other applications. Copyright © 2005 John Wiley & Sons, Ltd.
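A permutation trend test built on the Jonckheere–Terpstra statistic can be sketched as follows: group labels are permuted while the dose ordering of the groups stays fixed. The dose groups and response values are invented for illustration, and the overlapping-dose, repeated-measures structure of the actual study is not reproduced.

```python
import numpy as np

def jt_statistic(groups):
    """Jonckheere-Terpstra statistic for groups ordered low to high dose."""
    stat = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            a, b = groups[i][:, None], groups[j][None, :]
            stat += np.sum(a < b) + 0.5 * np.sum(a == b)
    return stat

def jt_permutation_test(groups, n_perm=10000, rng=None):
    """One-sided permutation p-value for an increasing trend."""
    rng = np.random.default_rng(rng)
    sizes = [g.size for g in groups]
    pooled = np.concatenate(groups)
    observed = jt_statistic(groups)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        shuffled, start = [], 0
        for s in sizes:
            shuffled.append(perm[start:start + s])
            start += s
        if jt_statistic(shuffled) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical vasodilation responses at three increasing doses
groups = [np.array([3.1, 2.8, 3.4]), np.array([3.6, 3.9, 3.2]),
          np.array([4.2, 4.8, 4.1])]
print(jt_permutation_test(groups, rng=0))
```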

7.
We consider a situation which is common in epidemiology, in which several transformations of an explanatory variable are tried in a Cox model and the most significant test is retained. The p-value should then be corrected to take account of the multiplicity of tests. The Bonferroni method is often too conservative because the tests may be highly positively correlated. We propose an asymptotically exact correction of the p-value. The method uses the fact that the tests are asymptotically normal to compute numerically the distribution of the maximum of several tests. Counting process theory is used to derive estimators of the correlations between tests. The method is illustrated by a simulation and an analysis of the relation between the concentration of aluminum in drinking water and the risk of dementia.
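Given the observed statistics and an estimate of their correlation matrix (which the paper derives via counting process theory), the corrected p-value is the tail probability of the maximum of correlated normals, which is easy to approximate by simulation. The z-scores and correlation matrix below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def corrected_pvalue(z_scores, corr, n_sim=200000, rng=None):
    """Multiplicity-corrected p-value for the maximum of correlated tests.

    z_scores : observed statistics, asymptotically N(0,1) under H0
    corr     : estimated correlation matrix between the statistics
    Simulates the joint null to approximate P(max |Z_k| >= max |z_obs|).
    """
    rng = np.random.default_rng(rng)
    z_max = np.max(np.abs(z_scores))
    draws = rng.multivariate_normal(np.zeros(len(z_scores)), corr, size=n_sim)
    return np.mean(np.max(np.abs(draws), axis=1) >= z_max)

# Three transformations of the same exposure: highly correlated tests
corr = np.array([[1.0, 0.9, 0.8],
                 [0.9, 1.0, 0.9],
                 [0.8, 0.9, 1.0]])
z = np.array([1.9, 2.4, 2.1])
print("corrected p:", corrected_pvalue(z, corr, rng=0))
print("Bonferroni :", min(1.0, len(z) * 2 * norm.sf(np.max(np.abs(z)))))
```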

8.
An algorithm is presented for computing the probability value associated with a recently developed test of statistical inference for matched pairs. The exact probability value is provided for small samples; otherwise, an approximate probability value is computed.
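The exact-versus-approximate logic can be sketched for a sign-flipping statistic on paired differences: enumerate all 2^n sign assignments when n is small, and fall back on a normal approximation otherwise. The statistic and cutoff below are illustrative assumptions, not the specific test treated in the paper.

```python
import itertools
import numpy as np
from scipy.stats import norm

def matched_pairs_pvalue(d, exact_limit=15):
    """Two-sided p-value for the sum of paired differences under sign-flipping.

    Exact enumeration of all 2^n sign assignments for small samples;
    a normal approximation otherwise.
    """
    d = np.asarray(d, dtype=float)
    n = d.size
    observed = abs(d.sum())
    if n <= exact_limit:
        count = 0
        for signs in itertools.product([-1.0, 1.0], repeat=n):
            if abs(np.dot(signs, d)) >= observed:
                count += 1
        return count / 2 ** n
    # Under H0, sum(s_i * d_i) has mean 0 and variance sum(d_i^2)
    z = observed / np.sqrt(np.sum(d ** 2))
    return 2 * norm.sf(z)

d = np.array([1.2, -0.4, 0.8, 1.5, 0.3, -0.1, 0.9, 1.1])
print(matched_pairs_pvalue(d))
```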

9.
Consistency of some nonparametric tests with real variables has been studied by several authors under the assumption that the population variance is finite and/or in the presence of some violations of the data exchangeability between samples. Since the main inferential conclusions of permutation tests concern the actual dataset, where sample sizes are held fixed, we consider the notion of consistency in the weak version (in probability). Here, we characterize weak consistency of permutation tests assuming the population mean is finite and without assuming existence of the population variance. Moreover, since permutation test statistics do not need to be standardized, we do not assume that data are homoscedastic under the alternative. Several examples of application to the most commonly used test statistics are discussed. A simulation study and some hints for robust testing procedures are also presented.
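Weak consistency without a finite variance can be observed empirically: under a Pareto(1.5) population (finite mean, infinite variance) with a fixed location shift, the rejection rate of a permutation test based on the unstandardized mean difference climbs toward one as n grows. A small, entirely illustrative simulation:

```python
import numpy as np

def perm_pvalue(x, y, n_perm=300, rng=None):
    """Two-sided permutation p-value for the (unstandardized) mean difference."""
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([x, y])
    obs = abs(x.mean() - y.mean())
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(pooled)
        if abs(p[:x.size].mean() - p[x.size:].mean()) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Pareto(1.5) has a finite mean but infinite variance
rng = np.random.default_rng(0)
for n in (20, 80, 320):
    rejections = 0
    for _ in range(100):
        x = rng.pareto(1.5, n)
        y = rng.pareto(1.5, n) + 0.5   # fixed location shift
        rejections += perm_pvalue(x, y, rng=rng) < 0.05
    print(f"n={n:4d}: rejection rate = {rejections / 100:.2f}")
```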

10.
This paper deals with nonparametric methods for combining dependent permutation or randomization tests. Particularly, they are nonparametric with respect to the underlying dependence structure. The methods are based on a without replacement resampling procedure (WRRP) conditional on the observed data, also called conditional simulation, which provides suitable estimates, as good as computing time permits, of the permutational distribution of any statistic. A class C of combining functions is characterized in such a way that all its members, under suitable and reasonable conditions, are found to be consistent and unbiased. Moreover, for some of its members, almost sure asymptotic equivalence to best tests is shown in particular cases. An example of application to a multivariate permutational t-paired test is also discussed.
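The combination idea can be sketched for a multivariate paired test: the same random sign flips are applied to all variables at once, which preserves their dependence, and the resulting partial p-values are combined with Fisher's function (one member of the class C). This is a minimal sketch on invented data, not the paper's WRRP implementation.

```python
import numpy as np

def npc_paired_test(d, n_perm=2000, rng=None):
    """Nonparametric combination (Fisher) for a multivariate paired test.

    d : (n, p) matrix of paired differences. The same sign flips are
    applied to every variable, preserving the unknown dependence
    between the p partial tests.
    """
    rng = np.random.default_rng(rng)
    n, p = d.shape
    # Row 0 holds the observed statistics, rows 1.. the resampled ones
    tstats = np.empty((n_perm + 1, p))
    tstats[0] = d.sum(axis=0)
    for b in range(1, n_perm + 1):
        signs = rng.choice([-1.0, 1.0], size=n)
        tstats[b] = signs @ d
    abs_stats = np.abs(tstats)
    # Partial p-value of every row: share of rows at least as extreme
    partial_p = np.empty_like(abs_stats)
    for k in range(p):
        col = abs_stats[:, k]
        partial_p[:, k] = (col[None, :] >= col[:, None]).mean(axis=1)
    # Fisher combining function and its resampling distribution
    psi = -2.0 * np.log(partial_p).sum(axis=1)
    return np.mean(psi >= psi[0])

# Hypothetical trivariate before/after differences with a shared shift
rng = np.random.default_rng(3)
d = rng.normal(0.4, 1.0, size=(15, 3))
print(npc_paired_test(d, rng=0))
```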

11.
The author proposes an adaptive method which produces confidence intervals that are often narrower than those obtained by the traditional procedures. The proposed method uses both a weighted least squares approach to reduce the length of the confidence interval and a permutation technique to ensure that its coverage probability is near the nominal level. The author reports simulations comparing the adaptive intervals to the traditional ones for the difference between two population means, for the slope in a simple linear regression, and for the slope in a multiple linear regression having two correlated exogenous variables. He is led to recommend adaptive intervals for sample sizes greater than 40 when the error distribution is not known to be Gaussian.

12.
13.
14.
Missing data pose a serious challenge to the integrity of randomized clinical trials, especially of treatments for prolonged illnesses such as schizophrenia, in which long-term impact assessment is of great importance, but the follow-up rates are often no more than 50%. Sensitivity analysis using Bayesian modeling for missing data offers a systematic approach to assessing the sensitivity of the inferences made on the basis of observed data. This paper uses data from an 18-month study of veterans with schizophrenia to demonstrate this approach. Data were obtained from a randomized clinical trial involving 369 patients diagnosed with schizophrenia that compared long-acting injectable risperidone with a psychiatrist's choice of oral treatment. Bayesian analysis utilizing a pattern-mixture modeling approach was used to validate the reported results by detecting bias due to non-random patterns of missing data. The analysis was applied to several outcomes including standard measures of schizophrenia symptoms, quality of life, alcohol use, and global mental status. The original study results for several measures were confirmed against a wide range of patterns of non-random missingness. Robustness of the conclusions was assessed using sensitivity parameters. The missing data in the trial likely did not threaten the validity of previously reported results. Copyright © 2014 John Wiley & Sons, Ltd.
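A crude version of this sensitivity logic is a delta-adjustment ("tipping point") sweep: impute dropouts, shift the imputed values in the active arm by increasingly unfavourable deltas, and watch when the conclusion changes. The sketch below uses single imputation and a t-test, which understates uncertainty; the paper's Bayesian pattern-mixture models are far more careful. All numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def delta_adjusted_effect(y_active, y_control, delta):
    """Treatment-effect estimate under a simple pattern-mixture shift.

    NaNs are missing outcomes. Missing values in each arm are imputed
    with the arm's observed mean; active-arm dropouts are then shifted
    by `delta` (delta < 0 assumes dropouts did worse than completers).
    """
    def fill(y, shift=0.0):
        y = y.copy()
        miss = np.isnan(y)
        y[miss] = np.nanmean(y) + shift
        return y
    a, c = fill(y_active, delta), fill(y_control)
    return a.mean() - c.mean(), stats.ttest_ind(a, c).pvalue

rng = np.random.default_rng(5)
y_active = rng.normal(1.0, 2.0, 120)
y_control = rng.normal(0.2, 2.0, 120)
y_active[rng.random(120) < 0.4] = np.nan   # heavy dropout, as in such trials
y_control[rng.random(120) < 0.4] = np.nan

for delta in (0.0, -0.5, -1.0, -1.5, -2.0):
    eff, p = delta_adjusted_effect(y_active, y_control, delta)
    print(f"delta={delta:5.1f}: effect={eff:5.2f}, p={p:.3f}")
```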

15.
A Bayesian analysis of the predictive values and related parameters of a diagnostic test is derived. In one case, the estimates are conditional on values of the prevalence of the disease; in the second case, the corresponding unconditional estimates are presented. Small-sample point estimates, posterior moments, and credibility intervals for all related parameters are obtained. Numerical methods of solution are also discussed.
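With beta priors, the two cases can be sketched by Monte Carlo: draw sensitivity and specificity from their posteriors, then compute the positive predictive value either at a fixed prevalence (conditional) or with the prevalence drawn from its own posterior (unconditional). The 2x2 counts and priors below are invented, and the paper's small-sample analytical results are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)
n_draw = 100000

# Hypothetical validation data: true/false positives and negatives
tp, fn, tn, fp = 45, 5, 90, 10

# Beta(1,1) priors give Beta posteriors for sensitivity and specificity
sens = rng.beta(1 + tp, 1 + fn, n_draw)
spec = rng.beta(1 + tn, 1 + fp, n_draw)

def ppv(sens, spec, prev):
    """Positive predictive value via Bayes' theorem."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

# Case 1: conditional on an assumed prevalence of 10%
ppv_cond = ppv(sens, spec, 0.10)
# Case 2: unconditional, with a Beta posterior for the prevalence itself
prev = rng.beta(1 + 30, 1 + 270, n_draw)   # e.g. 30 cases in 300 screened
ppv_uncond = ppv(sens, spec, prev)

for name, draws in [("conditional  ", ppv_cond), ("unconditional", ppv_uncond)]:
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{name}: mean={draws.mean():.3f}, 95% CrI=({lo:.3f}, {hi:.3f})")
```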

16.
The analysis of repeated difference tests aims both at testing for significant differences and at estimating the mean discrimination ability of the consumers. In addition to the average success probability, the proportion of consumers that may detect the difference between two products, and therefore account for any increase of this probability, is of interest. While some authors address the first two goals, for the latter only an estimator directly linked to the average probability seems to be used. However, this may lead to unreasonable results. We therefore propose a new approach based on multiple test theory. We define a suitable set of hypotheses that is closed under intersection. From this, we derive a series of hypotheses that may be tested sequentially without violating the overall significance level. By means of this procedure we may determine a minimal number of assessors that must have perceived the difference between the products at least once in a while. From this, we can find a conservative lower bound for the proportion of perceivers among the consumers. In several examples, we give some insight into the properties of this new method and show that knowledge of this lower bound might indeed be valuable for the investigator. Finally, an adaptation of this approach for similarity tests is proposed.
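The paper's closed testing procedure is not reproduced here, but the guessing-correction behind such lower bounds can be sketched: take an exact one-sided lower confidence bound for the success probability in triangle tests (guessing probability 1/3) and map it to a proportion of perceivers. Treating all trials as independent, as below, is a simplification that the paper's procedure avoids; the counts are invented.

```python
from scipy.stats import binomtest

def perceiver_lower_bound(correct, trials, p_guess=1/3, conf=0.95):
    """Conservative lower bound on the proportion of perceivers.

    Uses an exact one-sided lower confidence bound for the probability
    of a correct answer, mapped through
        p_correct = p_guess + (1 - p_guess) * p_perceive.
    Simplified stand-in for the paper's closed testing procedure, which
    additionally pins down a minimal number of assessors who perceived
    the difference at least once.
    """
    res = binomtest(correct, trials, alternative='greater')
    pc_lower = res.proportion_ci(confidence_level=conf, method='exact').low
    return max(0.0, (pc_lower - p_guess) / (1 - p_guess))

# Hypothetical data: 20 consumers x 5 triangle tests, 52 correct in total
print(perceiver_lower_bound(52, 100))
```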

17.
There has been a paradigm shift in the diagnostic conceptualization of Alzheimer's disease (AD), based on current evidence suggesting that structural and biological changes start to occur before clinical symptoms emerge. Consequently, therapeutic drug development is also shifting to treat early AD patients using biomarkers for enrichment in clinical trials. A similar paradigm shift is occurring for Parkinson's disease. In the absence of acceptable biomarkers that could be combined with a clinical endpoint to demonstrate a disease-modification (DM) effect in neurodegenerative disorders, a delayed-start design can be applied to demonstrate a lasting effect on the disease course. The delayed-start design includes two treatment periods: in period 1, patients are randomized to receive an active treatment or placebo, and in period 2, placebo patients are switched to the active treatment while patients in the active treatment arm continue the same treatment. The hypothesis is that patients who start the active treatment later will fail to catch up to the treatment benefit achieved by patients who receive the active treatment in both periods. The usual analytical approach has sought to demonstrate the divergence of slopes during period 1 and the parallelism of slopes during period 2 as the DM effect. However, due to heterogeneity in the timing and magnitude of the maximal effect among patients, nonlinear response over time could be observed within the two treatment arms in both periods. We propose an approach to evaluate the DM effect with the linearity assumption for treatment differences, but not for each arm separately.

18.
Nonstationary time series are frequently detrended in empirical investigations by regressing the series on time or a function of time. The effects of detrending on tests for causal relationships in the sense of Granger are investigated using quarterly U.S. data. The causal relationships between nominal or real GNP and M1, inferred from the Granger–Sims tests, are shown to depend very much on, among other factors, whether or not the series are detrended. Detrending tends to remove or weaken causal relationships; conversely, failure to detrend tends to introduce or enhance causal relationships. The study suggests that we need a more robust test or a better definition of causality.
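The detrending sensitivity is easy to probe with standard tools. The sketch below runs a Granger test on two simulated series that share a deterministic trend but have unrelated innovations, once on the raw series and once after linear detrending; `grangercausalitytests` from statsmodels is used, and all series are artificial rather than the GNP/M1 data.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 200
t = np.arange(n)

# Two trending series whose innovations are unrelated
y = 0.05 * t + 0.1 * rng.normal(0, 1, n).cumsum()
x = 0.05 * t + 0.1 * rng.normal(0, 1, n).cumsum()

def granger_p(y, x, lags=4):
    """p-value (SSR F-test) for 'x Granger-causes y' at the given lag."""
    data = np.column_stack([y, x])
    res = grangercausalitytests(data, maxlag=lags, verbose=False)
    return res[lags][0]['ssr_ftest'][1]

def detrend(s):
    """Residuals from regressing the series on a linear time trend."""
    return s - np.polyval(np.polyfit(t, s, 1), t)

print("raw series:", round(granger_p(y, x), 3))
print("detrended :", round(granger_p(detrend(y), detrend(x)), 3))
```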

19.
Standard serial correlation tests are derived assuming that the disturbances are homoscedastic, but this study shows that asymptotic critical values are not accurate when this assumption is violated. Asymptotic critical values for the ARCH(2)-corrected LM, BP and BL tests are valid only when the underlying ARCH process is strictly stationary, whereas Wooldridge's robust LM test has good properties overall. These tests exhibit similar behaviour even when the underlying process is GARCH(1,1). When the regressors include lagged dependent variables, the rejection frequencies under both the null and alternative hypotheses depend on the coefficients of the lagged dependent variables and the other model parameters. They appear to be robust across various disturbance distributions under the null hypothesis.
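The size distortion can be checked by simulation: generate serially uncorrelated but ARCH(1)-heteroscedastic disturbances and record how often a standard LM-type serial correlation test rejects at the 5% level. The Breusch-Godfrey test is used here as a stand-in for the specific tests studied, and the parameter values are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

def arch1_errors(n, alpha0=0.2, alpha1=0.7, rng=None):
    """Serially uncorrelated but conditionally heteroscedastic errors."""
    rng = np.random.default_rng(rng)
    e = np.zeros(n)
    for t in range(1, n):
        sigma2 = alpha0 + alpha1 * e[t - 1] ** 2
        e[t] = np.sqrt(sigma2) * rng.normal()
    return e

rng = np.random.default_rng(8)
n, reps, rejections = 200, 500, 0
x = sm.add_constant(rng.normal(size=n))
for _ in range(reps):
    y = 1.0 + 0.5 * x[:, 1] + arch1_errors(n, rng=rng)
    res = sm.OLS(y, x).fit()
    lm_pval = acorr_breusch_godfrey(res, nlags=2)[1]
    rejections += lm_pval < 0.05
# Under strong ARCH this empirical size typically drifts from 0.05
print(f"empirical size at nominal 5%: {rejections / reps:.3f}")
```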

20.

Physical phenomena are commonly modelled by time-consuming numerical simulators that are functions of many uncertain parameters, whose influences can be measured via a global sensitivity analysis. The usual variance-based indices require too many simulations, especially when the inputs are numerous. To address this limitation, we consider recent advances in dependence measures, focusing on the distance correlation and the Hilbert–Schmidt independence criterion. We study and use these indices for a screening purpose. Numerical tests reveal differences between variance-based indices and dependence measures. Two approaches are then proposed to use the latter for screening. The first uses independence tests, with existing asymptotic versions and spectral extensions; bootstrap versions are also proposed. The second considers a linear model with dependence measures, coupled to a bootstrap selection method or a Lasso penalization. Numerical experiments show their potential in the presence of many non-influential inputs and give successful results for a nuclear reliability application.
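The distance correlation and its permutation-based independence test are compact enough to sketch directly; the screening toy below flags a nonlinearly influential input and leaves an inert one unflagged. The simulator, inputs, and sample size are invented, and the HSIC, spectral, bootstrap, and Lasso variants from the paper are not covered.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two one-dimensional samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    def centered(a):
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()                      # V-statistic for dCov^2
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

def dcor_permutation_test(x, y, n_perm=1000, rng=None):
    """Permutation p-value for independence, used for input screening."""
    rng = np.random.default_rng(rng)
    obs = distance_correlation(x, y)
    hits = sum(distance_correlation(x, rng.permutation(y)) >= obs
               for _ in range(n_perm))
    return obs, (hits + 1) / (n_perm + 1)

# Screening toy: the output depends on x1 nonlinearly and not at all on x2
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = np.sin(np.pi * x1) ** 2 + 0.1 * rng.normal(size=200)
for name, x in [("x1", x1), ("x2", x2)]:
    dc, p = dcor_permutation_test(x, y, rng=rng)
    print(f"{name}: dCor={dc:.2f}, p={p:.3f}")
```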

