1.
Missing data and, more generally, imperfections in implementing a study design are an endemic problem in large-scale studies involving human subjects. We present an analysis of an experiment on the interaction between general practitioners and their patients, in which the issue of missing data is addressed by a sensitivity analysis using multiple imputation. Instead of specifying a model for missingness, we explore certain extreme ways of departing from the assumption of data missing at random and establish the largest extent of such departures that would still fail to overturn the evidence about the studied effect. An important advantage of the approach is that the algorithm intended for the complete data, which fits generalized linear models with random effects, is used without any alteration.
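One common device for exploring departures from missing-at-random is delta-adjusted imputation: impute from the observed-data distribution, then shift the imputations by a fixed delta and watch how the pooled estimate moves. The sketch below is illustrative only (the toy data, the shift values, and the simple resampling imputer are assumptions, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_adjusted_estimates(y, deltas, n_imputations=50):
    """For each MNAR shift delta, multiply-impute the missing values
    by resampling the observed values and adding delta, then pool the
    resulting mean estimates across imputations."""
    observed = y[~np.isnan(y)]
    n_missing = int(np.isnan(y).sum())
    results = {}
    for delta in deltas:
        estimates = []
        for _ in range(n_imputations):
            draws = rng.choice(observed, size=n_missing) + delta
            completed = np.concatenate([observed, draws])
            estimates.append(completed.mean())
        results[delta] = float(np.mean(estimates))
    return results

# Toy data: roughly 20% of values missing.
y = rng.normal(loc=1.0, scale=1.0, size=200)
y[rng.random(200) < 0.2] = np.nan
print(delta_adjusted_estimates(y, deltas=[0.0, -0.5, -1.0]))
```

Scanning the estimates over a grid of deltas shows how far the departure from missing-at-random must go before the evidence for the effect is overturned.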
2.
3.
In two observational studies, one investigating the effects of minimum wage laws on employment and the other the effects of exposures to lead, the sensitivity of an estimated treatment effect to hidden bias is examined. The estimate uses the combined quantile averages that were introduced in 1981 by B. M. Brown as simple, efficient, robust estimates of location admitting both exact and approximate confidence intervals and significance tests. Closely related to Gastwirth's estimate and Tukey's trimean, the combined quantile average has asymptotic efficiency for normal data that is comparable with that of a 15% trimmed mean, and higher efficiency than the trimean, but it has resistance to extreme observations or breakdown comparable with that of the trimean and better than the 15% trimmed mean. Combined quantile averages provide consistent estimates of an additive treatment effect in a matched randomized experiment. Sensitivity analyses are discussed for combined quantile averages when used in a matched observational study in which treatments are not randomly assigned. In a sensitivity analysis in an observational study, subjects are assumed to differ with respect to an unobserved covariate that was not adequately controlled by the matching, so that treatments are assigned within pairs with probabilities that are unequal and unknown. The sensitivity analysis proposed here uses significance levels, point estimates and confidence intervals based on combined quantile averages and examines how these inferences change under a range of assumptions about biases due to an unobserved covariate. The procedures are applied in the studies of minimum wage laws and exposures to lead. The first example is also used to illustrate sensitivity analysis with an instrumental variable.
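Tukey's trimean, which the abstract names as a close relative of the combined quantile average, illustrates the weighted-quantile idea behind these estimators; the sketch below shows its resistance to a gross outlier (the data are made up, and Brown's exact quantile weighting is not reproduced here):

```python
import numpy as np

def trimean(x):
    """Tukey's trimean (Q1 + 2*median + Q3) / 4: a robust location
    estimate built from three quantiles."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return (q1 + 2 * med + q3) / 4

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier
print(trimean(x))   # barely moved by the outlier
print(np.mean(x))   # dragged far away by it
```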
4.
A correspondence rule is suggested for the choice of a sampling design when prior knowledge concerning a finite population is available. Designs satisfying the correspondence rule are discussed in the case of random permutations models. A general optimality theorem is given for strategies under such models. Approximate correspondences satisfied by systematic sampling and πps sampling are also indicated.
5.
The famous theorem of Birnbaum, stating that the likelihood principle follows from the conditionality principle together with the sufficiency principle, has caused much discussion among statisticians. Briefly, many writers dislike the consequences of the likelihood principle (among other things, confidence coefficients and levels of tests are dismissed as meaningless), but at the same time they feel that both the conditionality principle and the sufficiency principle are intuitively obvious. In the present article we give examples to show that the conditionality principle should not be taken to be of universal validity, and we discuss some consequences of these examples.
6.
In this article, we develop a new method, called regenerative randomization, for the transient analysis of continuous time Markov models with absorbing states. The method has the same good properties as standard randomization: numerical stability, well-controlled computation error, and ability to specify the computation error in advance. The method has a benign behavior for large t and is significantly less costly than standard randomization for large enough models and large enough t. For a class of models, class C, including typical failure/repair reliability models with exponential failure and repair time distributions and repair in every state with failed components, stronger theoretical results are available assessing the efficiency of the method in terms of “visible” model characteristics. A large example belonging to that class is used to illustrate the performance of the method and to show that it can indeed be much faster than standard randomization.
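As a point of reference, standard randomization (also known as uniformization), the baseline method the article improves on, can be sketched as follows: discretize the continuous-time chain with P = I + Q/Λ and weight the DTMC steps by Poisson probabilities, truncating when the accumulated Poisson mass reaches the prescribed tolerance. The two-state failure/repair model below is an illustrative assumption, not the article's large example:

```python
import numpy as np
from math import exp

def standard_randomization(Q, p0, t, tol=1e-10):
    """Transient distribution p(t) = p0 exp(Qt) of a CTMC via standard
    randomization: the truncation error is bounded by the neglected
    Poisson tail, which is how the error is specified in advance."""
    Lam = max(-Q[i, i] for i in range(len(Q)))
    P = np.eye(len(Q)) + Q / Lam
    v = p0.copy()                 # p0 @ P^k, updated as k grows
    acc = np.zeros_like(p0)
    poisson = exp(-Lam * t)       # Poisson(Lam*t) pmf at k = 0
    weight_sum = poisson
    k = 0
    while weight_sum < 1.0 - tol:
        acc += poisson * v
        k += 1
        poisson *= Lam * t / k
        weight_sum += poisson
        v = v @ P
    acc += poisson * v            # last computed term
    return acc

# Two-state failure/repair model: failure rate 1, repair rate 10.
Q = np.array([[-1.0, 1.0], [10.0, -10.0]])
p0 = np.array([1.0, 0.0])
print(standard_randomization(Q, p0, t=5.0))
```

For large Λt the number of Poisson terms grows roughly linearly in t, which is the cost that the regenerative variant described in the abstract is designed to reduce.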
7.
A crisis of validity has emerged from three related crises of science, that is, the crises of statistical significance and complete randomization, of replication, and of reproducibility. Guinnessometrics takes commonplace assumptions and methods of statistical science and stands them on their head, from little p-values to unstructured Big Data. Guinnessometrics focuses instead on the substantive significance that emerges from a small series of independent and economical yet balanced and repeated experiments. Originally developed and market-tested by William S. Gosset aka “Student” in his job as Head Experimental Brewer at the Guinness Brewery in Dublin, Gosset’s economic and common sense approach to statistical inference and scientific method has been unwisely neglected. In many areas of science and life, the 10 principles of Guinnessometrics or G-values outlined here can help. Other things equal, the larger the G-values, the better the science and judgment. By now a colleague, neighbor, or YouTube junkie has probably shown you one of those wacky psychology experiments in a video involving a gorilla, testing the limits of human cognition. In one video, a person wearing a gorilla suit suddenly appears on the scene among humans, who are themselves engaged in some ordinary, mundane activity such as passing a basketball. The funny thing is, prankster researchers have discovered, when observers are asked to think about the mundane activity (such as by counting the number of observed passes of a basketball), the unexpected gorilla is frequently unseen (for discussion see Kahneman, D. (2011), Thinking, Fast and Slow, New York: Farrar, Straus and Giroux). The gorilla is invisible. People don’t see it.
8.
If a crossover design with more than two treatments is carryover balanced, then the usual randomization of experimental units and periods would destroy the neighbour structure of the design. As an alternative, Bailey [1985. Restricted randomization for neighbour-balanced designs. Statist. Decisions Suppl. 2, 237–248] considered randomization of experimental units and of treatment labels, which leaves the neighbour structure intact. She has shown that, if there are no carryover effects, this randomization validates the row–column model, provided the starting design is a generalized Latin square. We extend this result to generalized Youden designs where either the number of experimental units is a multiple of the number of treatments or the number of periods is equal to the number of treatments. For the situation when there are carryover effects we show for so-called totally balanced designs that the variance of the estimates of treatment differences does not change in the presence of carryover effects, while the estimated variance of this estimate becomes conservative.
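Bailey's restricted randomization can be sketched directly: permute the experimental units and the treatment labels while leaving the period order fixed, so every within-unit succession of treatments is preserved. The 4×4 Williams square below (rows are units, columns are periods) is an illustrative carryover-balanced starting design, not one taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_crossover(design):
    """Randomize a carryover-balanced crossover design by permuting
    units (rows) and treatment labels, but not periods (columns),
    leaving the neighbour structure intact."""
    d = np.asarray(design)
    d = d[rng.permutation(d.shape[0]), :]   # shuffle units
    relabel = rng.permutation(d.max() + 1)  # shuffle treatment labels
    return relabel[d]

# A 4x4 Williams design: every ordered pair of successive
# treatments occurs exactly once across the four units.
williams = [[0, 1, 2, 3],
            [1, 3, 0, 2],
            [3, 2, 1, 0],
            [2, 0, 3, 1]]
print(randomize_crossover(williams))
```

Because row and label permutations are bijections, the randomized design still contains each ordered successor pair exactly once, which is the balance property the abstract relies on.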
9.
We consider the problem of scheduling a set of equal-length intervals arriving online, where each interval is associated with a weight and the objective is to maximize the total weight of completed intervals. An optimal 4-competitive algorithm has long been known in the deterministic case, but the randomized case remains open. We give the first randomized algorithm for this problem, achieving a competitive ratio of 3.5822. We also prove a randomized lower bound of 4/3, which is an improvement over the previous 5/4 result. Then we show that the techniques can be carried to the deterministic multiprocessor case, giving a 3.5822-competitive 2-processor algorithm, and a 4/3 lower bound for any number of processors. We also give a lower bound of 2 for the case of two processors. A preliminary version of this paper appeared in the Proceedings of COCOON 2007, LNCS, vol. 4598, pp. 176–186. The work described in this paper was fully supported by a grant from City University of Hong Kong (SRG 7001969), and NSFC Grant No. 70525004 and 70702030.
10.

The problem of comparing several samples to decide whether the means and/or variances are significantly different is considered. It is shown that with very non-normal distributions even a very robust test to compare the means has poor properties when the distributions have different variances, and therefore a new testing scheme is proposed. This starts by using an exact randomization test for any significant difference (in means or variances) between the samples. If a non-significant result is obtained then testing stops. Otherwise, an approximate randomization test for mean differences (but allowing for variance differences) is carried out, together with a bootstrap procedure to assess whether this test is reliable. A randomization version of Levene's test is also carried out for differences in variation between samples. The five possible conclusions are then that (i) there is no evidence of any differences, (ii) evidence for mean differences only, (iii) evidence for variance differences only, (iv) evidence for mean and variance differences, or (v) evidence for some indeterminate differences. A simulation experiment to assess the properties of the proposed scheme is described. From this it is concluded that the scheme is useful as a robust, conservative method for comparing samples in cases where they may be from very non-normal distributions.
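The scheme's stages all rest on the standard randomization-test device: permute group labels and compare the observed statistic with its permutation distribution. A minimal Monte-Carlo sketch for a mean difference between two samples (the toy data, permutation count, and two-sample restriction are illustrative assumptions, not the paper's full scheme):

```python
import numpy as np

def randomization_test_means(x, y, n_perm=9999, seed=0):
    """Monte-Carlo randomization test for a mean difference: shuffle
    the pooled labels and count permutations whose |mean difference|
    is at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # include the observed split

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 30)
y = rng.normal(1.5, 1.0, 30)
print(randomization_test_means(x, y))
```

The exact first-stage test in the paper enumerates all rearrangements rather than sampling them, and its follow-up tests allow for unequal variances; both refinements build on the same permutation principle shown here.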