1.
When a candidate predictive marker is available but evidence on its predictive ability is not sufficiently reliable, all-comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker-stratified clinical trials, based on a natural assumption about the heterogeneity of treatment effects across marker-defined subpopulations, where weak rather than strong control is permitted for multiple population tests. In phase III marker-stratified trials, treatment efficacy is expected to be established in a particular patient population, possibly a marker-defined subpopulation, and marker accuracy is expected to be assessed when the marker is used to restrict the indication or labelling of the treatment to a marker-based subpopulation, i.e., assessment of the clinical validity of the marker. In this paper, we develop statistical testing strategies based on criteria explicitly designated for the marker assessment, including criteria examining treatment effects in marker-negative patients. As existing and newly developed statistical testing strategies can assert treatment efficacy for either the overall patient population or the marker-positive subpopulation, we also develop criteria for evaluating the operating characteristics of the statistical testing strategies based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations comparing the statistical testing strategies under the developed criteria are provided.
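As a rough, self-contained illustration of the operating characteristics discussed above, the following Python sketch simulates a marker-stratified trial and estimates the probabilities of asserting efficacy in the overall population and in the marker-positive subpopulation under a simple Bonferroni split of the one-sided alpha. The testing strategy, effect sizes, and sample sizes are illustrative assumptions, not the procedures developed in the paper.

```python
import numpy as np
from scipy import stats

def operating_characteristics(n=200, prev=0.5, eff_pos=0.4, eff_neg=0.0,
                              alpha=0.025, n_sim=10_000, seed=1):
    """Monte Carlo probabilities of asserting efficacy in the overall
    population vs. the marker-positive subpopulation under a simple
    Bonferroni split (alpha/2 to each test) -- an illustrative stand-in
    for the paper's testing strategies, not the authors' procedure."""
    rng = np.random.default_rng(seed)
    crit = stats.norm.ppf(1 - alpha / 2)           # one-sided, alpha split in two
    hit_all = hit_pos = 0
    for _ in range(n_sim):
        marker = rng.random(n) < prev              # marker-positive indicator
        mu = np.where(marker, eff_pos, eff_neg)    # subgroup-specific effects
        trt = rng.normal(mu, 1.0)                  # treatment arm outcomes
        ctl = rng.normal(0.0, 1.0, n)              # control arm outcomes
        z_all = (trt.mean() - ctl.mean()) * np.sqrt(n / 2)
        k = marker.sum()
        z_pos = (trt[marker].mean() - ctl[marker].mean()) * np.sqrt(k / 2)
        hit_all += z_all > crit
        hit_pos += z_pos > crit
    return hit_all / n_sim, hit_pos / n_sim

# (P assert efficacy overall, P assert efficacy in marker-positive subgroup)
print(operating_characteristics())
```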
2.
The probability of illness caused by very low doses of pathogens cannot generally be tested directly, owing to the number of subjects that would be needed, yet such assessments of the illness dose response are needed to evaluate drinking water standards. A predictive Bayesian dose-response assessment method was proposed previously to assess the unconditional probability of illness from available information and to avoid the inconsistencies of confidence-based approaches. However, the method requires knowledge of the conditional dose-response form, and this form is not well established for the illness endpoint. A conditional parametric dose-response function for gastroenteric illness is proposed here, based on simple numerical models of self-organized host-pathogen systems and probabilistic arguments. In the models, illnesses terminate when the host evolves, by processes of natural selection, to a self-organized critical value of wellness. A generalized beta-Poisson illness dose-response form emerges for the population as a whole. Use of this form is demonstrated in a predictive Bayesian dose-response assessment for cryptosporidiosis. Results suggest that a maximum allowable dose of 5.0 × 10^-7 oocysts/exposure (e.g., 2.5 × 10^-7 oocysts/L of water) would correspond to the original goals of the U.S. Environmental Protection Agency Surface Water Treatment Rule, considering only primary illnesses resulting from Poisson-distributed pathogen counts. This estimate should be revised to account for non-Poisson distributions of Cryptosporidium parvum in drinking water and for total response, considering secondary illness propagation in the population.
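To fix ideas, the sketch below evaluates the standard approximate beta-Poisson dose-response form, P(illness | d) = 1 − (1 + d/β)^(−α); the paper derives a generalized variant, and the parameter values used here are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson dose-response:
    P(illness | dose d) = 1 - (1 + d / beta) ** (-alpha).
    The paper derives a *generalized* variant; this standard form is
    shown only to fix ideas. At very low doses the curve is nearly
    linear: P ~ alpha * d / beta."""
    return 1.0 - (1.0 + np.asarray(dose) / beta) ** (-alpha)

# Illustrative parameter values (NOT the paper's fitted values).
alpha, beta = 0.115, 0.176
for d in (5.0e-7, 1e-4, 1e-2, 1.0):
    print(f"dose {d:9.2e} oocysts -> P(illness) ~ {beta_poisson(d, alpha, beta):.2e}")
```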
3.
Abstract. This paper reviews some of the key statistical ideas encountered when trying to find empirical support for causal interpretations and conclusions by applying statistical methods to experimental or observational longitudinal data. In such data, a collection of individuals is typically followed over time; for each individual a sequence of covariate measurements is registered, along with values of control variables that are to be interpreted in the analysis as causes, and finally the individual outcomes or responses are reported. Particular attention is given to the potentially important problem of confounding. We provide conditions under which, at least in principle, unconfounded estimation of the causal effects can be accomplished. Our approach to causal problems is entirely probabilistic, and we apply Bayesian ideas and techniques to the corresponding statistical inference. In particular, we use the general framework of marked point processes to set up the probability models, and we consider posterior predictive distributions as the natural summary measures for assessing the causal effects. We also draw connections to relevant recent work in this area, notably to Judea Pearl's formulations based on graphical models and his calculus of so-called do-probabilities. Two examples illustrating different aspects of causal reasoning are discussed in detail.
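The back-door adjustment underlying "unconfounded estimation" can be illustrated in a few lines. The sketch below is a plain standardization (g-formula) example on simulated data, not the paper's marked-point-process or posterior-predictive machinery; all variable names and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated confounded data: Z confounds the effect of treatment A on Y.
z = rng.binomial(1, 0.5, n)                        # confounder
a = rng.binomial(1, np.where(z == 1, 0.7, 0.3))    # treatment depends on Z
y = rng.normal(1.0 * a + 2.0 * z, 1.0)             # true causal effect of A is 1.0

naive = y[a == 1].mean() - y[a == 0].mean()        # biased by confounding

# Back-door adjustment (standardization, Pearl's do-calculus):
# E[Y | do(A=a)] = sum_z E[Y | A=a, Z=z] * P(Z=z)
adj = sum((y[(a == 1) & (z == v)].mean() - y[(a == 0) & (z == v)].mean())
          * (z == v).mean() for v in (0, 1))

print(f"naive contrast:    {naive:.3f}")   # overstates the effect
print(f"adjusted contrast: {adj:.3f}")     # close to the true value 1.0
```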
4.
This article develops two block bootstrap-based panel predictability test procedures that are valid under very general conditions. Allowable features include cross-sectional dependence, heterogeneous predictive slopes, persistent predictors, and complex error dynamics, including cross-unit endogeneity. While the first procedure tests whether there is any predictability at all, the second determines, in the case of a rejection by the first, the units for which predictability holds. A weak unit root framework is adopted to allow for persistent predictors, and a novel theory is developed to establish the asymptotic validity of the proposed bootstrap. Simulations are used to evaluate the small-sample performance of the tests, and their implementation is illustrated through an empirical application to stock returns.
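The core resampling step can be sketched as follows: a moving block bootstrap that redraws time blocks jointly across all panel units, thereby preserving cross-sectional dependence. This is a generic sketch under simple assumptions, not the authors' exact algorithm or test statistic.

```python
import numpy as np

def moving_block_bootstrap(panel, block_len, rng):
    """One moving-block bootstrap draw from a (T x N) panel.
    Time blocks are resampled jointly across all N units, which preserves
    cross-sectional dependence -- the key feature the tests must
    accommodate. A generic sketch, not the authors' exact algorithm."""
    T, _ = panel.shape
    n_blocks = int(np.ceil(T / block_len))
    starts = rng.integers(0, T - block_len + 1, size=n_blocks)
    rows = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
    return panel[rows]

rng = np.random.default_rng(42)
y = rng.normal(size=(200, 10))           # toy panel: T=200 periods, N=10 units
y_star = moving_block_bootstrap(y, block_len=10, rng=rng)
print(y_star.shape)                      # (200, 10)
```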
5.
6.
The Bayesian paradigm provides an ideal platform for updating uncertainties and carrying them forward as data accumulate. Bayesian predictive power (BPP) reflects our belief in the eventual success of a clinical trial in meeting its goals. In this paper we derive mathematical expressions for the most common types of outcomes, to make BPP accessible to practitioners, to facilitate fast computation in adaptive trial design simulations that use interim futility monitoring, and to propose an organized BPP-based phase II-to-phase III design framework.
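As a concrete special case, for a normally distributed endpoint with a noninformative prior on the treatment effect, BPP at an interim analysis has the closed form Φ((z1 − z_α√t) / √(1 − t)), where z1 is the interim z-statistic and t the information fraction. The sketch below implements this textbook case; the paper's expressions cover further outcome types and priors.

```python
from math import sqrt
from scipy.stats import norm

def bayesian_predictive_power(z1, t, alpha=0.025):
    """BPP for a normal endpoint with a noninformative prior on the effect:
    PP = Phi((z1 - z_alpha * sqrt(t)) / sqrt(1 - t)),
    where z1 is the interim z-statistic and t the information fraction.
    A textbook special case shown to fix ideas, not the paper's
    general derivation."""
    z_alpha = norm.ppf(1 - alpha)
    return norm.cdf((z1 - z_alpha * sqrt(t)) / sqrt(1 - t))

# Interim z of 1.5 at half the information: modest chance of final success.
print(f"BPP = {bayesian_predictive_power(z1=1.5, t=0.5):.3f}")
# An interim futility rule might stop if BPP falls below, say, 0.10
# (threshold illustrative).
```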
7.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. Frequentist methods are considered for planning and analyzing the trial, while the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate the external information into the sample size re-estimation, we propose updating the meta-analytic-predictive prior with the results of the internal pilot study and re-estimating the sample size using an estimator from the posterior. By means of a simulation study, we compare operating characteristics such as power and the sample size distribution of the proposed procedure with those of the traditional sample size re-estimation approach, which uses the pooled variance estimator. The simulation study shows that, when no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics relative to the traditional approach. In the case of a prior-data conflict, that is, when the variance in the ongoing clinical trial differs from the prior location, the traditional sample size re-estimation procedure generally performs better, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
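A minimal sketch of the idea, under a deliberate simplification: a single inverse-gamma component stands in for the MAP prior (which in practice is a mixture), is updated conjugately with the internal pilot's sum of squares, and the posterior mean of the variance feeds a standard two-arm sample size formula. All numbers and names here are illustrative.

```python
from math import ceil
from scipy import stats

def reestimate_n(prior_a, prior_b, pilot_ss, pilot_df, delta,
                 alpha=0.025, power=0.8):
    """Sample size re-estimation with a variance prior updated by pilot data.
    Simplification: a single inverse-gamma(a, b) prior on sigma^2 stands in
    for the MAP prior (a mixture in practice). The conjugate update with the
    pilot sum of squares SS on df degrees of freedom is
    a, b -> a + df/2, b + SS/2."""
    post_a = prior_a + pilot_df / 2.0
    post_b = prior_b + pilot_ss / 2.0
    var_hat = post_b / (post_a - 1.0)          # posterior mean of sigma^2
    z_a, z_p = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
    # two-arm normal approximation: n per group to detect effect delta
    return ceil(2.0 * var_hat * (z_a + z_p) ** 2 / delta ** 2)

# Historical trials suggest sigma^2 near 4; internal pilot: 40 df, SS = 200.
print(reestimate_n(prior_a=10, prior_b=36, pilot_ss=200, pilot_df=40, delta=1.0))
```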
8.
One of the objectives of personalized medicine is to base treatment decisions on a biomarker measurement. It is therefore often of interest to evaluate how well a biomarker can predict the response to a treatment. A popular approach is to use a regression model and test for an interaction between treatment assignment and the biomarker. However, the existence of an interaction is necessary but not sufficient for a biomarker to be predictive. Hence, the use of the marker-by-treatment predictiveness curve has been recommended. Besides evaluating how well a single continuous biomarker predicts treatment response, it can also help to define an optimal threshold. This curve displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome or rely on a proportional hazards model for a time-to-event outcome have been proposed to estimate this curve. In this work, we propose extensions for censored data. They rely on a time-dependent logistic model, which we propose to estimate via inverse probability of censoring weighting. We present simulation results and three applications to prostate cancer, liver cirrhosis, and lung cancer data. These suggest that a large number of events must be observed to define a threshold with sufficient accuracy for clinical usefulness. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold depends on this time horizon as well.
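The weighting step can be sketched as follows: Kaplan-Meier estimation of the censoring survival function G, then weights 1/G(T_i) for subjects with an observed event before the horizon and 1/G(horizon) for subjects still event-free at the horizon, with subjects censored early receiving weight zero. This is a generic IPCW sketch, not the paper's full estimator; the weighted observations would then enter a time-dependent logistic regression of the binary outcome on treatment and marker quantile.

```python
import numpy as np

def ipcw_weights(time, event, horizon):
    """Inverse-probability-of-censoring weights for the binary outcome
    Y = 1{event observed by the horizon}. G is the Kaplan-Meier estimate
    of the *censoring* survival function (roles of events and censorings
    swapped). A generic sketch of the weighting step only."""
    order = np.argsort(time)
    t_sorted, cens = time[order], 1 - event[order]   # censoring indicators
    n = len(time)
    at_risk = n - np.arange(n)                       # risk set sizes, sorted
    g = np.cumprod(1.0 - cens / at_risk)             # G(t) at each sorted time

    def G(u):                                        # right-continuous lookup
        idx = np.searchsorted(t_sorted, u, side="right") - 1
        return 1.0 if idx < 0 else g[idx]

    w = np.zeros(n)
    for i in range(n):
        if event[i] == 1 and time[i] <= horizon:     # observed early event
            w[i] = 1.0 / G(time[i])
        elif time[i] > horizon:                      # known event-free at horizon
            w[i] = 1.0 / G(horizon)
        # censored before the horizon: weight stays 0
    return w

rng = np.random.default_rng(7)
t_event, t_cens = rng.exponential(5, 300), rng.exponential(8, 300)
time, event = np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)
w = ipcw_weights(time, event, horizon=3.0)
print(f"mean weight among contributors: {w[w > 0].mean():.2f}")
```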
9.
Traditional approaches to predictive modeling of large-scale collections of time-series curves require building a model for each curve individually, which makes the modeling workload enormous and is impractical in real applications. This paper proposes a new method to address this problem: curve classification modeling. The method first reduces the number of distinct model types for the curves, then classifies the curves and builds one model per class, substantially reducing the modeling workload while preserving the original information as far as possible. The paper describes the principle and computational procedure of the method, and demonstrates its practicality and effectiveness with an application to forecasting GDP curves for multiple regions.
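A minimal sketch of the cluster-then-model idea on toy data, with k-means standing in for the paper's classification procedure and a quadratic trend per class standing in for its per-class forecasting model; all data and parameter choices are invented for illustration.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Toy stand-in for "many regional GDP curves": 500 series of length 20.
rng = np.random.default_rng(0)
t = np.arange(20, dtype=float)
curves = np.stack([
    rng.uniform(0.5, 2.0) * t + rng.uniform(-1, 1) * np.sin(t / 3)
    + rng.normal(0, 0.3, t.size)
    for _ in range(500)
])

# Steps 1-2: classify the curves into a small number of groups
# (k-means here; the paper's own classification procedure may differ).
k = 5
centroids, labels = kmeans2(curves, k, seed=1, minit="++")

# Step 3: fit ONE model per class instead of one per curve -- here a
# quadratic trend on each class centroid, extrapolated 5 steps ahead.
horizon = np.arange(20, 25, dtype=float)
for c in range(k):
    coef = np.polyfit(t, centroids[c], deg=2)
    forecast = np.polyval(coef, horizon)
    print(f"class {c} ({np.sum(labels == c):3d} curves): "
          f"5-step forecast {np.round(forecast, 1)}")
```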
10.
Using a stratified-sampling questionnaire survey, 556 members of the public in Beijing and Chongqing were studied with respect to risk perception during the SARS epidemic and its socio-psychological and behavioral predictive indicators. The results show: (1) during the SARS period, the social psychology of the public in the two cities followed a broadly consistent trend; their perception of the risk event, the informational factors influencing risk perception, and the socio-psychological early-warning indicators were on the whole normal and appropriate, and the government's prevention and control measures during the survey period played an important role in stabilizing public sentiment; (2) there were also some clear differences in social psychology between the two cities, which were in different epidemic situations; these differences arose mainly from the different epidemic environments and were differences of degree rather than of kind. The findings provide policy suggestions for the effective prevention and control of sudden public health events in China and for the effective guidance of public psychology and behavior, as well as a theoretical and methodological basis for building a future early-warning system for the social psychology and behavior of the Chinese public.