Full-text access type | Articles |
Paid full text | 898 |
Free | 35 |
Free (domestic) | 47 |
Subject classification | Articles |
Management | 11 |
Ethnology | 2 |
Demography | 12 |
Series & collected works | 39 |
Theory & methodology | 83 |
General | 163 |
Sociology | 166 |
Statistics | 504 |
Publication year | Articles |
2024 | 1 |
2023 | 25 |
2022 | 1 |
2021 | 23 |
2020 | 23 |
2019 | 40 |
2018 | 39 |
2017 | 49 |
2016 | 31 |
2015 | 26 |
2014 | 35 |
2013 | 266 |
2012 | 64 |
2011 | 42 |
2010 | 30 |
2009 | 30 |
2008 | 40 |
2007 | 35 |
2006 | 30 |
2005 | 24 |
2004 | 19 |
2003 | 8 |
2002 | 22 |
2001 | 17 |
2000 | 10 |
1999 | 11 |
1998 | 7 |
1997 | 6 |
1996 | 4 |
1995 | 2 |
1994 | 2 |
1993 | 1 |
1992 | 6 |
1990 | 1 |
1987 | 1 |
1985 | 1 |
1984 | 1 |
1983 | 2 |
1981 | 2 |
1980 | 1 |
1977 | 1 |
1975 | 1 |
A total of 980 results were found.
101.
Raghunath Arnab. Communications in Statistics - Theory and Methods, 2019, 48(16): 4154-4170
Gupta et al. and Huang considered optional randomized response techniques in which the probability of choosing the randomized (or direct) response is fixed for all respondents. In this paper, the assumption of a constant probability of choosing the option is relaxed by dividing respondents into two groups: one provides a direct response and the other a randomized response. Methods for estimating the population mean and variances under the modified assumption are obtained, and the relative efficiencies of the proposed techniques are compared theoretically and empirically.
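As an illustration of the two-group idea (not the paper's exact estimator), the hedged sketch below simulates a survey in which one group, chosen at random with probability p_direct, answers directly while the other reports a multiplicatively scrambled response z = y * s, with a scrambling variable of known mean; the distributions, parameters, and names are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical sensitive study variable y in a finite population (illustrative)
N = 10_000
y_pop = rng.gamma(shape=2.0, scale=5.0, size=N)

def two_group_rr_mean(sample, p_direct=0.4, theta=1.0, sigma_s=0.3, rng=rng):
    """Estimate the population mean when one group answers directly and the
    other reports a multiplicatively scrambled response z = y * s, E[s] = theta.
    (A schematic version only; the paper's estimators and variances differ.)"""
    n = len(sample)
    direct = rng.random(n) < p_direct                 # group 1: direct response
    s = rng.normal(loc=theta, scale=sigma_s, size=n)  # scrambling variable, known mean
    reported = np.where(direct, sample, sample * s)
    adjusted = np.where(direct, reported, reported / theta)  # unbias scrambled group
    return adjusted.mean()

srs = rng.choice(y_pop, size=500, replace=False)      # simple random sample
print("true population mean:", round(y_pop.mean(), 3))
print("estimated mean      :", round(two_group_rr_mean(srs), 3))
```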
102.
Simon's two-stage design is the most commonly applied multi-stage design in phase IIA clinical trials. It combines the sample sizes at the two stages so as to minimize either the expected or the maximum sample size. When uncertainty about pre-trial beliefs on the expected or desired response rate is high, a Bayesian alternative should be considered, since it handles the entire distribution of the parameter of interest in a more natural way. In this setting, a crucial issue is how to construct, from the available summaries, a distribution to use as a clinical prior in a Bayesian design. In this work, we explore Bayesian counterparts of Simon's two-stage design based on the predictive version of the single threshold design. This design requires specifying two prior distributions: the analysis prior, used to compute the posterior probabilities, and the design prior, used to obtain the prior predictive distribution. While the usual approach is to build beta priors for a conjugate analysis, we derive both the analysis and the design distributions through linear combinations of B-splines. The motivating example is the planning of a phase IIA two-stage trial of an anti-HER2 DNA vaccine in breast cancer, where initial beliefs formed from elicited experts' opinions and historical data showed a high level of uncertainty. The impact of different priors is evaluated in a sample size determination problem.
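A hedged numerical sketch of the non-conjugate part of such a design: the prior density of the response rate is built as a normalized, non-negative linear combination of cubic B-splines on [0, 1], and the posterior probability that the rate exceeds a threshold is obtained by quadrature after stage 1. The knots, coefficients, and data below are invented for illustration and do not reproduce the authors' elicited priors or decision rules.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.integrate import trapezoid

# Analysis prior for the response rate p: a non-negative linear combination of
# cubic B-splines on [0, 1] (knots and coefficients are purely illustrative).
knots = np.concatenate(([0, 0, 0, 0], np.linspace(0.1, 0.9, 5), [1, 1, 1, 1]))
coefs = np.array([0.1, 0.3, 1.0, 2.0, 1.2, 0.5, 0.2, 0.1, 0.05])  # len(knots) - k - 1
spline = BSpline(knots, coefs, k=3, extrapolate=False)

grid = np.linspace(0.0, 1.0, 2001)
prior = np.clip(np.nan_to_num(spline(grid)), 0.0, None)
prior /= trapezoid(prior, grid)                    # normalize to a density

def posterior_prob_exceeds(x, n, p0, prior=prior, grid=grid):
    """P(p > p0 | x responders out of n) under the spline-based analysis prior."""
    posterior = prior * grid**x * (1.0 - grid)**(n - x)
    posterior /= trapezoid(posterior, grid)
    keep = grid > p0
    return trapezoid(posterior[keep], grid[keep])

# Stage-1 interim: 8 responders out of 20, target response rate p0 = 0.3
print(posterior_prob_exceeds(x=8, n=20, p0=0.3))
```

A design prior of the same spline form would then be used to average such posterior quantities over the prior predictive distribution of the stage-1 data.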
103.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or to overexposure of patients. Specifying overdispersion is a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risk of an inadequate sample size, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, since these generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints; however, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
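A hedged sketch of the kind of blinded information calculation involved: from pooled (blinded) counts and exposure times, the event rate and overdispersion are estimated by the method of moments and plugged into an approximate Fisher information for the log rate ratio, and monitoring continues until a target information level is reached. The estimator, the 1:1 allocation assumption, and the data below are illustrative simplifications, not the referenced procedures.

```python
import numpy as np

def blinded_information(counts, exposure):
    """Approximate Fisher information for the log rate ratio from blinded data.

    counts   : recurrent-event counts pooled over both arms (treatment masked)
    exposure : follow-up time per patient (same order as counts)
    Assumes a negative binomial model with 1:1 allocation; the rate and the
    overdispersion are estimated from pooled data by the method of moments
    (a simplification of the likelihood-based monitoring in the literature)."""
    counts = np.asarray(counts, dtype=float)
    t = np.asarray(exposure, dtype=float)
    rate = counts.sum() / t.sum()                       # pooled event rate
    mu = rate * t                                       # expected counts per patient
    phi = max(((counts - mu) ** 2 - mu).sum() / (mu ** 2).sum(), 0.0)  # overdispersion
    n_per_arm = len(counts) / 2.0
    var_log_rr = 2.0 * (1.0 / (rate * t.mean()) + phi) / n_per_arm
    return 1.0 / var_log_rr, rate, phi

# Illustrative blinded interim look at 80 patients
rng = np.random.default_rng(7)
t = rng.uniform(0.5, 2.5, size=80)
y = rng.negative_binomial(n=2, p=2.0 / (2.0 + 1.2 * t))   # overdispersed counts
info, rate, phi = blinded_information(y, t)
print(f"information={info:.1f}, pooled rate={rate:.2f}, overdispersion={phi:.2f}")
# Continue blinded monitoring until `info` reaches the target information
# implied by the planned power, e.g. (z_{1-alpha/2} + z_{1-beta})**2 / delta**2.
```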
104.
Proportional hazards is a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes a change in the hazard ratio while the trial is ongoing: at the beginning no difference between treatment arms is observed, and only after some unknown time point do differences start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log-rank test allows weighting of early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation, in terms of power and type I error rate, of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington weights. We also give practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
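To make the weighting concrete, here is a hedged, self-contained sketch of the Fleming-Harrington G(rho, gamma) weighted log-rank statistic, with w(t) = S(t-)^rho * (1 - S(t-))^gamma computed from the pooled Kaplan-Meier curve; gamma > 0 up-weights late differences, which is where a delayed effect appears. The simulated delayed-effect data and the particular weights are illustrative, not the article's simulation settings.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Fleming-Harrington G(rho, gamma) weighted log-rank z-statistic (two arms).
    Weights w(t) = S(t-)**rho * (1 - S(t-))**gamma with S the pooled
    Kaplan-Meier estimate; rho = gamma = 0 recovers the ordinary log-rank test."""
    time, event, group = (np.asarray(a) for a in (time, event, group))
    num, den, s_prev = 0.0, 0.0, 1.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()                                  # total at risk at t
        n1 = (at_risk & (group == 1)).sum()                # at risk in arm 1
        d = ((time == t) & (event == 1)).sum()             # events at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = s_prev**rho * (1.0 - s_prev)**gamma
        num += w * (d1 - d * n1 / n)                       # observed - expected, arm 1
        if n > 1:
            den += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_prev *= 1.0 - d / n                              # update pooled Kaplan-Meier
    return num / np.sqrt(den)

# Illustrative delayed-effect data: the arms separate only after about 1 year
rng = np.random.default_rng(1)
n, delay = 300, 1.0
group = np.repeat([0, 1], n // 2)
t = rng.exponential(2.0, size=n)
late = (group == 1) & (t > delay)                          # lower hazard after the delay
t[late] = delay + rng.exponential(3.0, size=late.sum())
cens = rng.uniform(1.0, 6.0, size=n)
time, event = np.minimum(t, cens), (t <= cens).astype(int)
print("G(0,1) weighted log-rank z:", round(fh_weighted_logrank(time, event, group), 2))
```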
105.
When a candidate predictive marker is available but evidence on its predictive ability is not sufficiently reliable, all-comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker-stratified clinical trials, based on a natural assumption about heterogeneity of treatment effects across marker-defined subpopulations, where weak rather than strong control is permitted for multiple population tests. For phase III marker-stratified trials, it is expected that treatment efficacy is established in a particular patient population, possibly a marker-defined subpopulation, and that the marker's accuracy is assessed when the marker is used to restrict the indication or labelling of the treatment to a marker-based subpopulation, i.e., assessment of the clinical validity of the marker. In this paper, we develop statistical testing strategies based on criteria explicitly designed for this marker assessment, including criteria that examine treatment effects in marker-negative patients. As both existing and newly developed testing strategies can assert treatment efficacy for either the overall patient population or the marker-positive subpopulation, we also develop criteria for evaluating the operating characteristics of the testing strategies based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations comparing the testing strategies under the developed criteria are provided.
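As a hedged illustration of the kind of operating-characteristic evaluation described (not the authors' testing strategies), the sketch below simulates normally distributed test statistics for the marker-positive and marker-negative subpopulations and applies a simple fixed-sequence rule, recording how often efficacy is asserted for the marker-positive subpopulation only versus the overall population. Effect sizes, sample sizes, and the rule itself are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def operating_characteristics(delta_pos, delta_neg, n_pos, n_neg, sigma=1.0,
                              alpha=0.025, n_sim=100_000, seed=0):
    """Monte Carlo probabilities of asserting efficacy under a simple
    fixed-sequence strategy:
    (1) test the marker-positive subgroup at one-sided level alpha;
    (2) if significant, also test the marker-negative subgroup at level alpha."""
    rng = np.random.default_rng(seed)
    se_pos = sigma * np.sqrt(2.0 / n_pos)          # SE of mean difference, per subgroup
    se_neg = sigma * np.sqrt(2.0 / n_neg)
    z_pos = rng.normal(delta_pos / se_pos, 1.0, n_sim)
    z_neg = rng.normal(delta_neg / se_neg, 1.0, n_sim)
    crit = norm.ppf(1.0 - alpha)
    win_pos = z_pos > crit
    win_all = win_pos & (z_neg > crit)
    return {"marker-positive-only claim": float((win_pos & ~win_all).mean()),
            "overall-population claim": float(win_all.mean())}

# Illustrative scenario: effect only in marker-positive patients
print(operating_characteristics(delta_pos=0.4, delta_neg=0.0, n_pos=150, n_neg=250))
```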
106.
This article is devoted to the construction and asymptotic study of adaptive, group-sequential, covariate-adjusted randomized clinical trials analysed through the prism of the semiparametric methodology of targeted maximum likelihood estimation. We show how to build, as the data accrue group-sequentially, a sampling design that targets a user-supplied optimal covariate-adjusted design. We also show how to carry out sound statistical inference based on such an adaptive sampling scheme (thereby extending results previously known only in the independent and identically distributed setting) and how group-sequential testing applies on top of it. The procedure is robust, i.e., consistent even if the working model is mis-specified. A simulation study confirms the theoretical results and supports the conjecture that the procedure may also be efficient.
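For readers unfamiliar with the targeting step at the core of this methodology, here is a hedged, simplified sketch of a single targeted maximum likelihood update for the average treatment effect with a binary outcome under i.i.d. sampling; the article's adaptive, group-sequential machinery and its influence-curve-based inference go well beyond this. Model choices, variable names, and the toy data are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def tmle_ate(W, A, Y):
    """Simplified TMLE of the ATE E[Y(1) - Y(0)] for binary Y and A, covariates W."""
    # Initial outcome regression Q(A, W) and treatment mechanism g(W)
    q_fit = LogisticRegression(max_iter=1000).fit(np.column_stack([A, W]), Y)
    g_fit = LogisticRegression(max_iter=1000).fit(W, A)

    def q(a):
        p = q_fit.predict_proba(np.column_stack([np.full(len(A), a), W]))[:, 1]
        return np.clip(p, 1e-3, 1 - 1e-3)

    g1 = np.clip(g_fit.predict_proba(W)[:, 1], 0.01, 0.99)
    q_obs = np.where(A == 1, q(1), q(0))

    # Targeting step: fluctuate the initial fit along the "clever covariate"
    H = A / g1 - (1 - A) / (1 - g1)
    eps = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
                 offset=np.log(q_obs / (1 - q_obs))).fit().params[0]

    q1_star = expit(np.log(q(1) / (1 - q(1))) + eps / g1)
    q0_star = expit(np.log(q(0) / (1 - q(0))) - eps / (1 - g1))
    return float(np.mean(q1_star - q0_star))

# Toy data: one confounder, binary treatment and outcome (illustrative only)
rng = np.random.default_rng(3)
n = 2000
W = rng.normal(size=(n, 1))
A = rng.binomial(1, expit(0.5 * W[:, 0]))
Y = rng.binomial(1, expit(0.3 * A - 0.5 + 0.8 * W[:, 0]))
print("TMLE ATE estimate:", round(tmle_ate(W, A, Y), 3))
```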
107.
Journal of Statistical Computation and Simulation, 2012, 82(8): 1105-1114
The estimation of the incremental cost-effectiveness ratio (ICER) has received increasing attention recently. The ICER is the ratio of the change in costs of a therapeutic intervention to the change in its effects. Despite the intuitive interpretation of the ICER as an additional cost per additional unit of benefit, estimating the distribution of a ratio of two stochastically dependent quantities is challenging. A vast literature on statistical methods for the ICER has developed over the past two decades, but none of these methods provides an unbiased estimator. Here, to obtain an unbiased estimator of the cost-effectiveness ratio (CER), a zero intercept is assumed for the bivariate normal regression. For equal sample sizes, the Iman-Conover algorithm is applied to construct the desired variance-covariance matrix of two random bivariate samples, and estimation then follows the same approach as for the CER to obtain an unbiased estimator of the ICER. For unequal sample sizes, a bootstrapping method combined with the Iman-Conover algorithm is employed. Simulation experiments are conducted to evaluate the proposed method. The regression-type estimator performs overwhelmingly better than the sample mean estimator in terms of mean squared error in all cases.
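The regression idea can be sketched as follows, with hedged simplifications: paired per-patient incremental costs and effects rather than the two-sample setting of the article, and without the Iman-Conover rank-correlation step. The zero-intercept least-squares slope of incremental cost on incremental effect acts as the ratio estimator, and a nonparametric bootstrap supplies an interval. All data and names are illustrative.

```python
import numpy as np

def cer_regression_origin(delta_cost, delta_effect):
    """Zero-intercept least-squares slope of incremental cost on incremental
    effect: slope = sum(dE * dC) / sum(dE**2), used as a ratio estimator."""
    dC, dE = np.asarray(delta_cost, float), np.asarray(delta_effect, float)
    return float((dE @ dC) / (dE @ dE))

def bootstrap_ci(delta_cost, delta_effect, n_boot=5000, level=0.95, seed=0):
    """Percentile bootstrap interval for the regression-through-origin ratio."""
    rng = np.random.default_rng(seed)
    dC, dE = np.asarray(delta_cost, float), np.asarray(delta_effect, float)
    idx = rng.integers(0, len(dC), size=(n_boot, len(dC)))   # resample pairs
    est = (dE[idx] * dC[idx]).sum(axis=1) / (dE[idx] ** 2).sum(axis=1)
    return tuple(np.quantile(est, [(1 - level) / 2, (1 + level) / 2]))

# Toy paired data: per-patient incremental cost (currency units) and effect (QALYs)
rng = np.random.default_rng(11)
dE = rng.normal(0.10, 0.05, size=300)
dC = 20_000 * dE + rng.normal(0.0, 500.0, size=300)   # implied ratio ~ 20,000 per QALY
print("point estimate  :", round(cer_regression_origin(dC, dE), 1))
print("95% bootstrap CI:", bootstrap_ci(dC, dE))
```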
108.
Ha M. Dang, Mark D. Krailo, Todd A. Alonzo, Wendy J. Mack, John A. Kairalla. Pharmaceutical Statistics, 2023, 22(6): 1031-1045
There is considerable debate surrounding the choice of methods for estimating the information fraction for futility monitoring in a randomized non-inferiority maximum duration trial. This question was motivated by a pediatric oncology study that aimed to establish non-inferiority for two primary outcomes. While non-inferiority was determined for one outcome, the futility monitoring of the other failed to stop the trial early despite accumulating evidence of inferiority. For a one-sided trial design in which the intervention is inferior to the standard therapy, futility monitoring should provide the opportunity to terminate the trial early. Our research focuses on the Total Control Only (TCO) method, defined as the ratio of observed events to total events exclusively within the standard treatment regimen, and investigates its properties for stopping a trial early in favor of inferiority. Simulation results comparing the TCO method with two alternatives, one based on the assumption of an inferior treatment effect (TH0) and the other based on a specified hypothesis of a non-inferior treatment effect (THA), are provided under various pediatric oncology trial design settings. The TCO method is the only method that provides unbiased information fraction estimates regardless of the hypothesis assumptions, and it exhibits good power and a comparable type I error rate at each interim analysis relative to the other methods. Although none of the methods is uniformly superior on all criteria, the TCO method possesses favorable characteristics that make it a compelling choice for estimating the information fraction when the aim is to reduce cancer treatment-related adverse outcomes.
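A hedged sketch of the contrast the abstract draws (the exact TH0/THA definitions are in the paper; the "total events" comparator below is only schematic): the TCO information fraction uses the standard-therapy arm alone, so it does not depend on any assumption about the experimental arm's effect.

```python
def information_fraction_tco(observed_control_events, planned_control_events):
    """Total Control Only (TCO): events observed in the standard-therapy arm
    over the total events planned for that arm; no assumption about the
    experimental arm's (unknown) treatment effect is required."""
    return observed_control_events / planned_control_events

def information_fraction_total(observed_events_both_arms, planned_events_both_arms):
    """Schematic comparator: total observed over total planned events, which
    implicitly assumes an event split between arms (the TH0/THA methods in the
    abstract differ in which hypothesis supplies that split)."""
    return observed_events_both_arms / planned_events_both_arms

# Interim look in a 1:1 trial designed around 120 control-arm events (240 total)
print(information_fraction_tco(48, 120))      # 0.40
print(information_fraction_total(110, 240))   # ~0.46; can be biased when the
                                              # experimental arm's event rate
                                              # departs from the planning assumption
```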
109.
Virgil L. Gregory Jr. Journal of Social Service Research, 2013, 39(5): 460-469
The purposes of this review article are to orient clinical social workers to cognitive-behavioral theory, intervention, and research on bipolar disorder (BD); to identify pros and cons of applying cognitive-behavioral therapy (CBT) to social work clients with BD; and to identify specific implications for clinical social work practice. Of the 545 articles obtained via the systematic review, 18 studies were identified as potentially eligible for inclusion, and 9 of those ultimately satisfied the inclusion criteria. The results of each study were summarized by identifying statistically significant (p < .05) differences between experimental cohorts who received CBT (plus pharmacotherapy) and control cohorts who received treatment as usual. Outcomes showed the CBT cohorts having significant improvement over their respective control groups. The review's implications for clinical social workers and the need for future research are discussed.
110.
Edward Fried. Accountability in Research, 2013, 20(4): 349-375
In this article, I examine a skeptical argument against the possibility of ethically justifying risky human subject research (rHSR). That argument asserts that such research is unethical because it holds the possibility of wronging subjects who are harmed and whose consent to participate was less than fully voluntary. I conclude that the skeptical argument is not, in the end, sufficient to undermine the ethical foundation of rHSR, because it fails to take account of the special positive duty researchers owe their clients and future patients. Although the skeptical argument is defeated, it exacts certain novel concessions from the pro-research position. Of particular importance are the admissions (a) that researchers presumptively owe a fiduciary duty to research subjects, (b) that, because the most important risks of rHSR are unknown and unquantifiable, that duty must be explicitly waived by all subjects before they participate in any protocol, and (c) that such waivers must be made by individuals who satisfy objective criteria of competence for giving fully voluntary consent. The implementation of procedures responsive to these concerns might have a dampening effect on the conduct of research. However, the article concludes with a consideration of the likely benefits to researchers and society of a more cautious ethical regime.