131.
While body fat is the most accurate measure of obesity, its measurement requires special equipment that can be costly and time consuming to operate. Attention has thus typically focused on the easier-to-calculate body mass index (BMI). However, the ability of BMI to accurately identify obesity has been increasingly questioned. This paper focuses attention on whether more general body mass indices are appropriate measures of body fat. Using a data set of body fat, height, and weight measurements, general models are estimated which nest a wide variety of weight–height indices as special cases. In the absence of a race and gender categorisation, the conventional BMI was found to be the appropriate index with which to predict body fat. When such a categorisation was made, however, the BMI was never selected as the appropriate index. In general, predicted female body fat was some 10 kg higher than that of a male of identical build, and predicted % body fat was over 11 percentage points higher, but age effects were smaller for females. Considerable racial differences in predicted body fat were found for males, but such differences were less marked for females. The implications of this finding for interpreting recent research on the effect of obesity on health, social, and economic factors are considered.
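The family of weight–height indices nested by such models can be written as W/H^p, with the conventional BMI corresponding to p = 2. A minimal sketch in Python (the alternative exponent and the sample values below are illustrative, not taken from the paper):

```python
def weight_height_index(weight_kg: float, height_m: float, p: float = 2.0) -> float:
    """Generalised weight-height index W / H**p; p = 2 gives the conventional BMI."""
    return weight_kg / height_m ** p

# Conventional BMI for a 70 kg, 1.75 m adult.
bmi = weight_height_index(70, 1.75)            # p = 2
alt = weight_height_index(70, 1.75, p=1.7)     # an alternative, hypothetical exponent
```

Fitting p from data (rather than fixing p = 2) is what lets such models test whether BMI is the appropriate index for a given race and gender group.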
132.
Non-parametric Regression with Dependent Censored Data
Abstract. Let (X_i, Y_i), i = 1, …, n, be n replications of a random vector (X, Y), where Y is subject to random right censoring. The data (X_i, Y_i) are assumed to come from a stationary α-mixing process. We consider the problem of estimating the function m(x) = E(φ(Y) | X = x) for some known transformation φ. This problem is approached in the following way: first, we introduce a transformed variable that is not subject to censoring and satisfies the same regression relation, and then we estimate m(x) by applying local linear regression techniques. As a by-product, we obtain a general result on the uniform rate of convergence of kernel-type estimators of functionals of an unknown distribution function, under strong mixing assumptions.
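The local linear step of this approach can be sketched as a kernel-weighted least-squares fit; the Gaussian kernel and the data below are illustrative assumptions, not the authors' exact construction (which also involves the censoring-free transformed variable):

```python
import math

def local_linear(x0, xs, ys, h):
    """Local linear estimate of m(x0): fit a weighted line a + b*(x - x0)
    with Gaussian kernel weights of bandwidth h, return the intercept a."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * y * (x - x0) for wi, y, x in zip(w, ys, xs))
    # Solve the 2x2 weighted normal equations for the intercept.
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 ** 2)
```

Local linear estimators reproduce straight lines exactly, so on noiseless data generated from y = 2x the estimate at x = 0.5 equals 1 up to rounding.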
133.
The authors consider the empirical likelihood method for the regression model of mean quality-adjusted lifetime with right censoring. They show that an empirical log-likelihood ratio for the vector of regression parameters is asymptotically a weighted sum of independent chi-squared random variables. They adjust this empirical log-likelihood ratio so that the limiting distribution is a standard chi-square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform normal approximation methods in terms of coverage probability. They illustrate their methods with data from a breast cancer clinical trial.
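When a log-likelihood ratio converges to a weighted sum of independent χ² variables, one common first-order fix rescales it so its mean matches that of a standard χ². The sketch below illustrates that generic idea only; it assumes the weights are available and is not the authors' specific adjustment:

```python
def adjust_el_ratio(ratio: float, weights: list) -> float:
    """Rescale a statistic whose limit is sum(w_k * chi2_1) so that the
    adjusted statistic has the mean of a chi-square with len(weights) df."""
    return ratio * len(weights) / sum(weights)

# With weights (1.5, 1.5), an observed ratio of 6.0 is rescaled to 4.0 and
# compared against a chi-square distribution with 2 degrees of freedom.
adjusted = adjust_el_ratio(6.0, [1.5, 1.5])
```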
134.
Time-to-event data such as time to death are broadly used in medical research and drug development to understand the efficacy of a therapeutic. For time-to-event data, right censoring (data only observed up to a certain point in time) is common and easy to recognize. Methods that use right-censored data, such as the Kaplan–Meier estimator and the Cox proportional hazards model, are well established. Time-to-event data can also be left truncated, which arises when patients are excluded from the sample because their events occur before a specific milestone, potentially resulting in an immortal time bias. For example, in a study evaluating the association between biomarker status and overall survival, patients who did not live long enough to receive a genomic test were not observed in the study. Left truncation causes selection bias and often leads to an overestimate of survival time. In this tutorial, we used a nationwide electronic health record-derived de-identified database to demonstrate how to analyze left-truncated and right-censored data without bias, using example code from SAS and R.
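Handling delayed entry amounts to restricting each risk set to patients already under observation. A minimal risk-set-adjusted Kaplan–Meier sketch (the data and names are illustrative, not the tutorial's SAS/R code):

```python
def km_left_truncated(data):
    """Kaplan-Meier with delayed entry (left truncation).

    data: list of (entry, time, event) tuples, event 1 = death, 0 = censored.
    A subject is at risk at t only if entry < t <= time.
    Returns [(t, S(t))] at each distinct event time."""
    s, curve = 1.0, []
    for t in sorted({time for _, time, ev in data if ev}):
        n = sum(1 for entry, time, _ in data if entry < t <= time)   # risk set
        d = sum(1 for _, time, ev in data if ev and time == t)       # deaths at t
        s *= 1 - d / n
        curve.append((t, s))
    return curve

# Subjects 2 and 4 enter late (days 2 and 3); ignoring their entry times
# would inflate the early risk sets and bias S(t) upward.
curve = km_left_truncated([(0, 5, 1), (2, 6, 1), (0, 7, 0), (3, 8, 1)])
```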
135.
While randomized controlled trials (RCTs) are the gold standard for estimating treatment effects in medical research, there is increasing use of and interest in using real-world data for drug development. One such use case is the construction of external control arms for evaluation of efficacy in single-arm trials, particularly in cases where randomization is either infeasible or unethical. However, it is well known that treated patients in non-randomized studies may not be comparable to control patients, on either measured or unmeasured variables, and that the underlying population differences between the two groups may result in biased treatment effect estimates as well as increased variability in estimation. To address these challenges for analyses of time-to-event outcomes, we developed a meta-analytic framework that uses historical reference studies to adjust a log hazard ratio estimate in a new external control study for its additional bias and variability. The set of historical studies is formed by constructing external control arms for historical RCTs, and a meta-analysis compares the trial controls to the external control arms. Importantly, a prospective external control study can be performed independently of the meta-analysis using standard causal inference techniques for observational data. We illustrate our approach with a simulation study and an empirical example based on reference studies for advanced non-small cell lung cancer. In our empirical analysis, external control patients had lower survival than trial controls (hazard ratio: 0.907), but our methodology is able to correct for this bias. An implementation of our approach is available in the R package ecmeta.
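The core idea, shifting a new log hazard ratio by the average bias seen across historical reference studies and inflating its variance by the between-study variability, can be sketched as follows. This simplified sketch is an illustration of the principle only, not the ecmeta implementation, and all numbers are invented:

```python
import math

def adjust_log_hr(log_hr, se, reference_biases):
    """Debias a new external-control log HR estimate.

    reference_biases: log HR discrepancies (external control arm vs trial
    control arm) observed in historical reference studies."""
    k = len(reference_biases)
    mu = sum(reference_biases) / k                                   # mean bias
    tau2 = sum((b - mu) ** 2 for b in reference_biases) / (k - 1)    # between-study variance
    return log_hr - mu, math.sqrt(se ** 2 + tau2)

# Historical comparisons suggest external controls systematically shift the
# log HR, so the point estimate is corrected and its standard error widened.
est, se_adj = adjust_log_hr(-0.1, 0.2, [0.1, 0.2, 0.3])
```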
136.
In pre-clinical oncology studies, tumor-bearing animals are treated and observed over a period of time in order to measure and compare the efficacy of one or more cancer-intervention therapies along with a placebo/standard of care group. A data analysis is typically carried out by modeling and comparing tumor volumes, functions of tumor volumes, or survival. Data analysis on tumor volumes is complicated because animals under observation may be euthanized prior to the end of the study for one or more reasons, such as when an animal's tumor volume exceeds an upper threshold. In such a case, the tumor volume is missing not-at-random for the time remaining in the study. To work around the non-random missingness issue, several statistical methods have been proposed in the literature, including the rate of change in log tumor volume and partial area under the curve. In this work, an examination and comparison of the test size and statistical power of these and other popular methods for the analysis of tumor volume data is performed through realistic Monte Carlo computer simulations. The performance, advantages, and drawbacks of popular statistical methods for animal oncology studies are reported. The recommended methods are applied to a real data set.
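The rate-of-change-in-log-tumor-volume summary mentioned above reduces each animal to an ordinary least-squares slope of log volume against time, which remains computable up to the animal's last observation. A sketch (the measurement values are made up):

```python
import math

def log_volume_slope(days, volumes):
    """Per-animal growth rate: OLS slope of log(volume) regressed on day."""
    logs = [math.log(v) for v in volumes]
    n = len(days)
    mx, my = sum(days) / n, sum(logs) / n
    sxy = sum((d - mx) * (y - my) for d, y in zip(days, logs))
    sxx = sum((d - mx) ** 2 for d in days)
    return sxy / sxx

# A tumor doubling daily has slope log(2) per day on the log scale;
# group comparisons can then be run on these per-animal slopes.
rate = log_volume_slope([0, 1, 2, 3], [100, 200, 400, 800])
```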
137.
There is considerable debate surrounding the choice of methods to estimate the information fraction for futility monitoring in a randomized non-inferiority maximum duration trial. This question was motivated by a pediatric oncology study that aimed to establish non-inferiority for two primary outcomes. While non-inferiority was determined for one outcome, the futility monitoring of the other outcome failed to stop the trial early, despite accumulating evidence of inferiority. For a one-sided trial design in which the intervention is inferior to the standard therapy, futility monitoring should provide the opportunity to terminate the trial early. Our research focuses on the Total Control Only (TCO) method, which is defined as the ratio of observed events to total events exclusively within the standard treatment regimen. We investigate its properties in stopping a trial early in favor of inferiority. Simulation results comparing the TCO method with alternative methods, one based on the assumption of an inferior treatment effect (TH0) and the other based on a specified hypothesis of a non-inferior treatment effect (THA), were provided under various pediatric oncology trial design settings. The TCO method is the only method that provides unbiased information fraction estimates regardless of the hypothesis assumptions, and it exhibits good power and a comparable type I error rate at each interim analysis relative to the other methods. Although none of the methods is uniformly superior on all criteria, the TCO method possesses favorable characteristics, making it a compelling choice for estimating the information fraction when the aim is to reduce cancer treatment-related adverse outcomes.
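As defined above, the TCO information fraction depends only on the control arm. A trivial sketch contrasting it with hypothesis-based alternatives (the event counts are invented):

```python
def information_fraction_tco(control_events_observed: int,
                             control_events_total: int) -> float:
    """Total Control Only: observed events / total expected events,
    both counted exclusively in the standard-therapy arm."""
    return control_events_observed / control_events_total

# Unlike TH0/THA-style estimates, this ratio requires no assumption about the
# treatment effect, which is why it stays unbiased across hypotheses.
frac = information_fraction_tco(30, 120)
```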
138.
The socio-economic literature has focused much on how overall inequality in income distribution (frequently measured by the Gini coefficient) undermines the "trickle down" effect. In other words, the higher the inequality in the income distribution, the lower is the growth elasticity of poverty. However, with the publication of Piketty's magnum opus (2014), and a subsequent study by Chancel and Piketty (2017) of the evolution of income inequality in India since 1922, the focus has shifted to the income disparity between the richest 1% (or 0.01%) and the bottom 50%. Their central argument is that the rapid growth of income at the top end of millionaires and billionaires is a by-product of growth. The present study extends this argument by linking it to poverty indices in India. Based on the India Human Development Survey 2005–12, a nationwide panel survey, we examine the links between poverty and income inequality, especially in the upper tail relative to the bottom 50%, state affluence (measured in per capita income), and their interaction or joint effect. Another feature of our research is that we analyse their effects on the FGT class of poverty indices. The results are similar insofar as the direction of association is concerned, but the elasticities vary with the poverty index. The growth elasticities are negative and significant for all poverty indices. In all three cases, the disparity between the income share of the top 1% and the share of the bottom 50% is associated with greater poverty. These elasticities are much higher than the (absolute) income elasticities except in the case of the poverty gap. The largest increase occurs in the poverty gap squared: a 1% greater income disparity is associated with a 1.24% higher value of this index. Thus the consequences of even a small increase in income disparity are alarming for the poorest.
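The FGT class of poverty indices referenced above is FGT_alpha = (1/n) * sum over the poor of ((z - y_i)/z)^alpha, where z is the poverty line; alpha = 0, 1, 2 give the headcount ratio, the poverty gap, and the squared poverty gap. A sketch with made-up incomes:

```python
def fgt(incomes, z, alpha):
    """Foster-Greer-Thorbecke index: mean of ((z - y)/z)**alpha over the poor."""
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / len(incomes)

incomes, z = [5, 8, 12, 20], 10
headcount = fgt(incomes, z, 0)    # share of people below the poverty line
gap = fgt(incomes, z, 1)          # average depth of poverty
gap_squared = fgt(incomes, z, 2)  # severity: weights the poorest most heavily
```

The squared index reacts most strongly to changes at the very bottom of the distribution, which is why the 1.24% elasticity reported for it is the study's most alarming figure.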
139.
140.
In late-phase confirmatory clinical trials in the oncology field, time-to-event (TTE) endpoints are commonly used as primary endpoints for establishing the efficacy of investigational therapies. Among these TTE endpoints, overall survival (OS) is always considered the gold standard. However, OS data can take years to mature, and its use for measurement of efficacy can be confounded by the use of post-treatment rescue therapies or supportive care. Therefore, to accelerate the development process and better characterize the treatment effect of new investigational therapies, other TTE endpoints such as progression-free survival and event-free survival (EFS) are applied as primary efficacy endpoints in some confirmatory trials, either as a surrogate for OS or as a direct measure of clinical benefit. For evaluating novel treatments for acute myeloid leukemia, EFS has gradually been recognized as a direct measure of clinical benefit. However, the application of an EFS endpoint is still controversial, mainly due to the debate surrounding the definition of treatment failure (TF) events. In this article, we investigate the EFS endpoint with the most conservative definition for the timing of TF, which is Day 1 after randomization. Specifically, the corresponding non-proportional hazard pattern of the EFS endpoint is investigated with both analytical and numerical approaches.
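Counting treatment failures at Day 1 puts a point mass of events at the start of the EFS curve, so the hazard ratio between arms cannot stay constant over time. This can be seen numerically with an illustrative mixture model (a Day 1 failure probability followed by exponential event times; all parameters below are invented, not from the article):

```python
import math

def efs_survival(t, p_day1_fail, rate):
    """S(t) for an arm where a fraction p_day1_fail fails at Day 1 and the
    remainder have exponential event times with the given daily rate."""
    return 1.0 if t < 1 else (1 - p_day1_fail) * math.exp(-rate * t)

def discrete_hazard(surv, t):
    """One-day conditional event probability P(event in (t-1, t] | at risk)."""
    return (surv(t - 1) - surv(t)) / surv(t - 1)

control = lambda t: efs_survival(t, p_day1_fail=0.30, rate=0.10)
treated = lambda t: efs_survival(t, p_day1_fail=0.10, rate=0.08)

# The Day 1 hazard ratio is dominated by the failure mass, while later ratios
# reflect only the exponential rates, so proportional hazards fails.
hr_day1 = discrete_hazard(treated, 1) / discrete_hazard(control, 1)
hr_day5 = discrete_hazard(treated, 5) / discrete_hazard(control, 5)
```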
Copyright©北京勤云科技发展有限公司  京ICP备09084417号