1.
When a candidate predictive marker is available but evidence on its predictive ability is not sufficiently reliable, all‐comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker‐stratified clinical trials based on a natural assumption about heterogeneity of treatment effects across marker‐defined subpopulations, where weak rather than strong control is permitted for multiple population tests. For phase III marker‐stratified trials, it is expected that treatment efficacy is established in a particular patient population, possibly a marker‐defined subpopulation, and that marker accuracy is assessed when the marker is used to restrict the indication or labelling of the treatment to a marker‐based subpopulation, ie, assessment of the clinical validity of the marker. In this paper, we develop statistical testing strategies based on criteria explicitly designed for marker assessment, including those examining treatment effects in marker‐negative patients. As existing and newly developed statistical testing strategies can assert treatment efficacy for either the overall patient population or the marker‐positive subpopulation, we also develop criteria for evaluating the operating characteristics of the statistical testing strategies based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations comparing the statistical testing strategies under the developed criteria are provided.
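A toy sketch may clarify how such strategies assert efficacy across populations. The fixed-sequence rule below is a generic illustration, not the strategy proposed in the paper; the function names, effect estimates, and standard errors are hypothetical, and one-sided normal-approximation tests are assumed.

```python
import math

def z_test_p(effect, se):
    # one-sided p-value for H0: effect <= 0, normal approximation
    return 0.5 * math.erfc(effect / (se * math.sqrt(2)))

def fixed_sequence_assertion(eff_pos, se_pos, eff_all, se_all, alpha=0.025):
    """Hypothetical fixed-sequence strategy: test the marker-positive
    subpopulation first at level alpha; only if it succeeds, test the
    overall population at the same level. Returns the populations for
    which treatment efficacy is asserted."""
    asserted = []
    if z_test_p(eff_pos, se_pos) < alpha:
        asserted.append("marker-positive")
        if z_test_p(eff_all, se_all) < alpha:
            asserted.append("overall")
    return asserted
```

Under such a gatekeeping order, efficacy can be asserted for the marker-positive subpopulation alone or for both populations, which is the kind of operating characteristic the evaluation criteria above would quantify.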
2.
In studies with recurrent event endpoints, misspecified assumptions of event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. The operating characteristics of these are assessed in Monte Carlo trial simulations demonstrating favourable properties with regard to type I error rate, power, and stopping time, ie, sample size.
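As a rough illustration of blinded monitoring with negative binomial counts, the sketch below pools event counts over both arms, estimates the rate and overdispersion by the method of moments, and converts them into an approximate information level for the log rate ratio. This is a simplified stand-in for the information criteria derived in the paper; the function name and the allocation-based variance approximation are assumptions.

```python
def blinded_information(counts, alloc=0.5):
    """Approximate (treatment-blinded) information for the log rate
    ratio of a negative binomial endpoint, from pooled per-patient
    event counts over a common follow-up period. Method-of-moments
    sketch, not the paper's exact criterion."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    # moment estimate of overdispersion phi in Var = mu + phi * mu^2
    phi = max((var - mean) / mean ** 2, 0.0)
    # Var(log rate ratio) ~ (1/n1 + 1/n2) * (1/mu + phi)
    var_log_rr = (1 / (n * alloc) + 1 / (n * (1 - alloc))) * (1 / mean + phi)
    return 1 / var_log_rr  # monitor until this reaches the planned target
```

In a continuous-monitoring design, follow-up would simply continue until this blinded information estimate crosses the level needed for the planned power, without ever unblinding treatment assignment.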
3.
Proportional hazards are a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning we do not observe any difference between treatment arms, and only after some unknown time point do differences between treatment arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log‐rank test allows weighting of early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington class of weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation, in terms of power and type‐I error rate, of the weighted log‐rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
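The weighted log-rank statistic itself is compact enough to sketch. The following is a didactic implementation of the Fleming and Harrington G(ρ, γ) weights, S(t−)^ρ (1 − S(t−))^γ, computed from the pooled Kaplan–Meier estimate; γ > 0 down-weights early differences, which suits delayed effects. It uses the standard hypergeometric variance and is not tied to any particular design from the article.

```python
def fh_weighted_logrank(times, events, groups, rho=0.0, gamma=1.0):
    """Fleming-Harrington G(rho, gamma) weighted log-rank z statistic.
    times: observation times; events: 1 = event, 0 = censored;
    groups: 0 = control, 1 = experimental."""
    data = sorted(zip(times, events, groups))
    risk0 = sum(1 for g in groups if g == 0)
    risk1 = len(groups) - risk0
    surv = 1.0                     # pooled Kaplan-Meier, left-continuous
    num = var = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d0 = d1 = out0 = out1 = 0  # deaths and removals at time t
        while i < len(data) and data[i][0] == t:
            _, e, g = data[i]
            if g == 0:
                out0 += 1
                d0 += e
            else:
                out1 += 1
                d1 += e
            i += 1
        n, d = risk0 + risk1, d0 + d1
        if d > 0 and n > 1:
            w = surv ** rho * (1 - surv) ** gamma
            num += w * (d1 - d * risk1 / n)
            var += w * w * d * (risk0 * risk1 / (n * n)) * (n - d) / (n - 1)
        if d > 0:
            surv *= 1 - d / n      # update pooled KM after using S(t-)
        risk0 -= out0
        risk1 -= out1
    return num / var ** 0.5 if var > 0 else 0.0
```

Setting rho = gamma = 0 recovers the ordinary log-rank test, so a single routine covers both the proportional-hazards and the delayed-effect analyses.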
4.
A Study of the Criminal Appeal System in England
孙长永 《湘潭大学学报(哲学社会科学版)》2002,26(5):33-42
England operates what is essentially a two-tier criminal appeal system: first-instance rulings of the Crown Court may be appealed, in turn, to the Court of Appeal and the House of Lords, while rulings of the magistrates' courts may be appealed either to the Crown Court or to the High Court and then to the House of Lords, though the procedural rules for the two routes are not identical. Since the 1970s, the English appeal system has gradually moved closer to the civil-law tradition, and the British government has recently proposed further expanding the prosecution's right of appeal, so major changes to the appeal system may be forthcoming.
5.
黄雄发 《中国矿业大学学报(社会科学版)》2007,9(4):119-121
Binary opposition is pervasive in literary works, and the principle of binary opposition has become an important method of literary analysis. Applying this principle to works that embody binary oppositions helps to uncover their meaning and artistic essence more fully and deeply.
6.
John D. Emerson, David C. Hoaglin, Frederick Mosteller 《Statistical Methods and Applications》1993,2(3):269-290
Summary  Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportional to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that available from weighting by ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results.
This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
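For reference, the standard DerSimonian-Laird procedure that the paper modifies can be sketched in a few lines. This is the textbook version with conditional within-study variances, not the authors' unconditional-variance modification, and the function name is ours.

```python
def dersimonian_laird_rd(studies):
    """DerSimonian-Laird random-effects pooled risk difference.
    studies: list of (events_t, n_t, events_c, n_c) tuples.
    Returns (pooled risk difference, between-study variance tau^2)."""
    rds, vs = [], []
    for et, nt, ec, nc in studies:
        pt, pc = et / nt, ec / nc
        rds.append(pt - pc)
        # conditional within-study variance of the risk difference
        vs.append(pt * (1 - pt) / nt + pc * (1 - pc) / nc)
    w = [1 / v for v in vs]
    sw = sum(w)
    fixed = sum(wi * r for wi, r in zip(w, rds)) / sw
    # Cochran's Q and the moment estimate of tau^2
    q = sum(wi * (r - fixed) ** 2 for wi, r in zip(w, rds))
    k = len(studies)
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max((q - (k - 1)) / c, 0.0) if c > 0 else 0.0
    w_re = [1 / (v + tau2) for v in vs]
    return sum(wi * r for wi, r in zip(w_re, rds)) / sum(w_re), tau2
```

Because the weights 1/(v + tau²) depend on the same estimated risk differences being pooled, they are not independent of them, which is precisely the source of bias the modified scheme addresses.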
7.
Longitudinal data often contain missing observations, and it is in general difficult to justify a particular missing‐data mechanism, random or not, since competing mechanisms may be hard to distinguish. The authors describe a likelihood‐based approach to estimating both the mean response and association parameters for longitudinal binary data with drop‐outs. They specify marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
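The observed-data likelihood idea can be illustrated with a deliberately simplified stand-in: a first-order transition model rather than the authors' marginal/dependence specification. With an ignorable drop-out mechanism, each subject contributes the product of conditional probabilities over the occasions actually observed. The names and the linear predictor here are hypothetical.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def observed_loglik(seq, beta0, beta1, alpha):
    """Observed-data log-likelihood for one subject's binary responses,
    where `seq` simply stops at drop-out. Hypothetical model:
    logit P(Y_t = 1 | Y_{t-1}) = beta0 + beta1 * t + alpha * Y_{t-1},
    with Y_{-1} taken as 0."""
    ll, prev = 0.0, 0
    for t, y in enumerate(seq):
        p = logistic(beta0 + beta1 * t + alpha * prev)
        ll += math.log(p if y == 1 else 1.0 - p)
        prev = y
    return ll
```

Maximizing the sum of such contributions over subjects yields valid estimates when drop-out depends only on what has been observed, which is the appeal of the likelihood-based route over ad hoc adjustments.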
8.
Summary  Based on 14 case studies of highly effective therapies and the reasons they succeeded less frequently than they could, we propose a variety of steps to improve the health care system of the U.S.A., whatever proposal emerges from current national debates: until innovations are shown to be safe and effective, they should not be supported; when slightly better technologies are much more expensive than other good ones, we need to consider appropriate choices carefully; simplified billing and bookkeeping would reduce our costs; when a technology is rapidly introduced, cautionary measures may be needed; tracking immunizations and repairing their omissions requires a new system; educational programs such as those seen to be effective in hypertension should be applied in other areas such as vaccination; in organ transplantation the nation should consider “presumed consent”; our payment system sometimes creates perverse incentives and therefore needs review; and the preferences of the public in the allocation of health resources need to be discovered once the public is informed about the issues.
Research supported by Andrew W. Mellon Foundation.
9.
Chris J. Lloyd 《Australian & New Zealand Journal of Statistics》2002,44(1):75-86
This paper presents a method of estimating a receiver operating characteristic (ROC) curve when the underlying diagnostic variable X is continuous and fully observed. The new method is based on modelling the probability of response given X, rather than the distribution of X given response. The method offers advantages in modelling flexibility and computational simplicity. The resulting ROC curve estimates are semi-parametric and can, in principle, take an infinite variety of shapes. Moreover, model selection can be based on standard methods within the binomial regression framework. Statistical accuracy of the curve estimate is provided by a simply implemented bootstrap approach.
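Once P(response | X) has been modelled, the ROC curve follows by thresholding the fitted values. The sketch below skips the model-fitting step and computes the empirical ROC and its trapezoidal area directly from scores, which here stand in for fitted values of P(response | X); it is a plain empirical construction, not the paper's semi-parametric estimator.

```python
def empirical_roc(scores_pos, scores_neg):
    """Empirical ROC points (FPR, TPR) from diagnostic scores of
    diseased (pos) and non-diseased (neg) subjects."""
    thresholds = sorted(set(scores_pos) | set(scores_neg), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(s >= t for s in scores_pos) / len(scores_pos)
        fpr = sum(s >= t for s in scores_neg) / len(scores_neg)
        pts.append((fpr, tpr))
    return pts

def auc(points):
    """Trapezoidal area under a list of (FPR, TPR) ROC points."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
```

A semi-parametric binomial-regression model would replace the raw scores with smoothly modelled probabilities, allowing the curve to take shapes the empirical step function cannot.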
10.
The authors consider the optimal design of sampling schedules for binary sequence data. They propose an approach which allows a variety of goals to be reflected in the utility function by including deterministic sampling cost, a term related to prediction, and, if relevant, a term related to learning about a treatment effect. To this end, they use a nonparametric probability model relying on a minimal number of assumptions. They show how their assumption of partial exchangeability for the binary sequence of data allows the sampling distribution to be written as a mixture of homogeneous Markov chains of order k. The implementation follows the approach of Quintana & Müller (2004), which uses a Dirichlet process prior for the mixture.
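The role of partial exchangeability can be made concrete: under an order-k Markov chain, the probability of a binary sequence depends only on its first k values and its transition counts, so any two sequences sharing these are equally likely under every such chain. The helper names below are ours, a minimal sketch rather than the mixture model itself.

```python
import math
from collections import Counter

def transitions(seq, k=1):
    """Order-k transition counts for a binary sequence: how often each
    length-k history is followed by each next symbol."""
    return Counter((tuple(seq[i:i + k]), seq[i + k])
                   for i in range(len(seq) - k))

def chain_loglik(seq, p, k=1):
    """Log-likelihood of seq[k:] given seq[:k] under an order-k chain
    with p[history] = P(next symbol is 1 | history)."""
    ll = 0.0
    for i in range(len(seq) - k):
        hist, nxt = tuple(seq[i:i + k]), seq[i + k]
        q = p[hist]
        ll += math.log(q if nxt == 1 else 1 - q)
    return ll
```

This sufficiency is what lets the sampling distribution be written as a mixture over chains: mixing over a prior on the transition probabilities (here, a Dirichlet process) never distinguishes sequences with identical counts.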