121 search results in total; items 31–40 are shown below.
31.
Missing data in clinical trials is a well-known problem, and the classical statistical methods used to handle it can be overly simple. This case study shows how well-established missing data theory can be applied to efficacy data collected in a long-term open-label trial with a discontinuation rate of almost 50%. Satisfaction with treatment in chronically constipated patients was the efficacy measure, assessed at baseline and every 3 months post-baseline. The improvement in treatment satisfaction from baseline was originally analyzed with a paired t-test, ignoring missing data and discarding the correlation structure of the longitudinal data. As the original analysis started from missing-completely-at-random assumptions about the missing data process, the satisfaction data were re-examined, and several missing at random (MAR) and missing not at random (MNAR) techniques yielded adjusted estimates of the improvement in satisfaction over 12 months. Throughout the different sensitivity analyses, the effect sizes remained significant and clinically relevant. Thus, even for an open-label trial design, sensitivity analysis with different assumptions about the nature of the dropouts (MAR or MNAR) and with different classes of models (selection, pattern-mixture, or multiple imputation models) proved useful and provides evidence for the robustness of the original analyses; additional sensitivity analyses could be undertaken to further qualify that robustness. Copyright © 2012 John Wiley & Sons, Ltd.
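A minimal sketch of the kind of delta-adjustment sensitivity analysis alluded to above, in Python with simulated data (the trial data, models, and software used in the paper are not shown here; everything below is illustrative): impute the missing 12-month changes under a MAR working model, then shift the imputations by increasingly pessimistic deltas to mimic MNAR dropout and check whether the estimated improvement survives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated example, not the trial data: change in satisfaction from
# baseline at 3 months (early) and 12 months (final), with roughly 50%
# dropout that is more likely after a poor early response (MAR).
n = 200
early = rng.normal(1.0, 1.0, n)
final = 0.6 * early + rng.normal(0.8, 1.0, n)
dropout = rng.random(n) < 1 / (1 + np.exp(2 * (early - 1.0)))

# MAR working model: regress the 12-month change on the 3-month change
# in completers, and use it to impute the missing 12-month values.
b1, b0 = np.polyfit(early[~dropout], final[~dropout], 1)

def mean_ci(x):
    m, se = x.mean(), stats.sem(x)
    h = se * stats.t.ppf(0.975, len(x) - 1)
    return m, m - h, m + h

# Delta adjustment: shift the MAR imputations downward by delta to mimic
# increasingly pessimistic MNAR dropout (a simple tipping-point scan).
for delta in [0.0, -0.25, -0.5, -1.0]:
    imputed = final.copy()
    imputed[dropout] = b0 + b1 * early[dropout] + delta
    m, lo, hi = mean_ci(imputed)
    print(f"delta={delta:+.2f}: mean change {m:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A full multiple-imputation analysis would add imputation noise, repeat the imputation several times, and combine the results with Rubin's rules; the single deterministic imputation above understates the variance.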
32.
In the conventional linear mixed-effects model, four structures can be distinguished: fixed effects, random effects, measurement error, and serial correlation. The latter captures the phenomenon that the correlation structure within a subject depends on the time lag between two measurements. While the general linear mixed model is rather flexible, the need has arisen to increase its flexibility further. Building on earlier work in the area, we propose spline-based modeling of the serial correlation function, so as to allow for additional flexibility. The approach is applied to data from a pre-clinical dementia experiment that studied the eating and drinking behavior of mice.
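The abstract gives no code, but the core idea (smoothing empirical within-subject residual products against time lag to estimate the serial correlation function) can be sketched as follows; the variogram-style estimator, the smoothing parameter, and the toy data are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def serial_correlation_spline(resid, times, ids, smooth=1.0):
    """Nonparametric estimate of the serial correlation function g(u).

    Within a subject, the scaled product r_j * r_k / var(r) of two
    residuals crudely estimates the correlation at lag u = |t_j - t_k|;
    averaging these products per distinct lag and smoothing them with a
    spline yields a flexible estimate of g(u).
    """
    var = np.var(resid)
    lags, prods = [], []
    for i in np.unique(ids):
        r, t = resid[ids == i], times[ids == i]
        for j in range(len(r)):
            for k in range(j + 1, len(r)):
                lags.append(abs(t[j] - t[k]))
                prods.append(r[j] * r[k] / var)
    lags, prods = np.asarray(lags), np.asarray(prods)
    u = np.unique(lags)
    g_hat = np.array([prods[lags == v].mean() for v in u])
    return UnivariateSpline(u, g_hat, s=smooth)

# Toy usage: residuals generated with exponential serial correlation.
rng = np.random.default_rng(1)
n_sub, n_obs = 100, 6
ids = np.repeat(np.arange(n_sub), n_obs)
times = np.tile(np.arange(float(n_obs)), n_sub)
tt = np.arange(float(n_obs))
corr = np.exp(-0.7 * np.abs(np.subtract.outer(tt, tt)))
L = np.linalg.cholesky(corr)
resid = (L @ rng.normal(size=(n_obs, n_sub))).T.ravel()
g = serial_correlation_spline(resid, times, ids)
print(g([1.0, 2.0, 3.0]))   # compare with exp(-0.7 * u)
```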
33.
This paper presents an analytical framework for the effective management of projects with uncertain iterations. The framework is based upon (1) the combination of two complementary techniques, one focused on improving iterative process architectures (the Design Structure Matrix) and one focused on predicting project performance (the Graphical Evaluation and Review Technique), and (2) the introduction of an activity-set-based criticality measure. The intent of the framework is to help project managers and researchers identify and evaluate alternative process architectures, in order to determine the alternative that best balances risk against other project performance parameters, as illustrated through an example application.
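As a loose illustration only (not the paper's framework), a rework-probability DSM combined with GERT-style probabilistic branching can be simulated by Monte Carlo, and a crude activity-level criticality read off as the correlation between an activity's number of executions and total project duration; all probabilities, durations, and the rework cap below are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

# DSM of rework probabilities: dsm[i, j] = probability that completing
# activity j sends activity i (upstream, i < j) back into rework.
dsm = np.array([
    [0.0, 0.3, 0.0],
    [0.0, 0.0, 0.4],
    [0.0, 0.0, 0.0],
])
durations = np.array([5.0, 3.0, 4.0])   # nominal activity durations

def simulate_once(max_reworks=10):
    """One pass through the process with probabilistic rework loops."""
    total, counts = 0.0, np.zeros(len(durations))
    queue = [2, 1, 0]                   # pending activities; last = next
    while queue:
        j = queue.pop()
        total += durations[j]
        counts[j] += 1
        for i in range(j):              # first triggered feedback only
            if rng.random() < dsm[i, j] and counts[i] < max_reworks:
                queue.extend(range(j, i - 1, -1))   # redo i..j in order
                break
    return total, counts

runs = [simulate_once() for _ in range(5000)]
totals = np.array([r[0] for r in runs])
counts = np.array([r[1] for r in runs])

# Crude activity criticality: correlation between how often an activity
# is executed and the total project duration.
for a in range(3):
    c = np.corrcoef(counts[:, a], totals)[0, 1]
    print(f"activity {a}: mean executions {counts[:, a].mean():.2f}, "
          f"criticality {c:.2f}")
```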
34.
A common objective in longitudinal studies is the joint modelling of a longitudinal response with a time-to-event outcome. Random effects are typically used in the joint modelling framework to explain the interrelationships between these two processes. However, estimation in the presence of random effects involves intractable integrals requiring numerical integration. We propose a new computational approach for fitting such models, based on the Laplace method for integrals, which makes the consideration of high-dimensional random-effects structures feasible. Contrary to the standard Laplace approximation, our method requires far fewer repeated measurements per individual to produce reliable results.
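The computational core is the Laplace method for integrals: approximate the integral of exp{h(b)} over the random effects by a Gaussian integral around the mode of h. A self-contained sketch of the classical approximation (not the authors' refined method) follows.

```python
import numpy as np
from scipy import optimize

def laplace_integral(log_f, b0):
    """Laplace approximation to the integral of exp(log_f(b)) over R^q.

    Find the mode b_hat of log_f, then replace the integrand by a
    matching Gaussian: integral ~ exp(log_f(b_hat)) * (2*pi)^(q/2)
    * |H|^(-1/2), where H = -(Hessian of log_f at b_hat).
    """
    b_hat = optimize.minimize(lambda b: -log_f(b), b0).x
    q, eps = len(b_hat), 1e-5
    H = np.zeros((q, q))
    I = np.eye(q)
    for i in range(q):                  # central-difference Hessian
        for j in range(q):
            ei, ej = eps * I[i], eps * I[j]
            H[i, j] = (-log_f(b_hat + ei + ej) + log_f(b_hat + ei - ej)
                       + log_f(b_hat - ei + ej) - log_f(b_hat - ei - ej)) / (4 * eps**2)
    _, logdet = np.linalg.slogdet(H)
    return np.exp(log_f(b_hat) + 0.5 * q * np.log(2 * np.pi) - 0.5 * logdet)

# Sanity check: for a Gaussian kernel the approximation is exact;
# the integral of exp(-b'b/2) over R^2 is (2*pi)^(2/2) = 2*pi.
print(laplace_integral(lambda b: -0.5 * b @ b, np.zeros(2)))  # ~6.2832
```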
35.
Quantitative risk assessment involves the determination of a safe level of exposure. Recent techniques use the estimated dose-response curve to estimate such a safe dose level. Although these methods have attractive features, low-dose extrapolation is highly dependent on the model choice. Fractional polynomials, essentially a set of (generalized) linear models, are a convenient extension of classical polynomials, providing the flexibility needed to estimate the dose-response curve. Typically, one selects the best-fitting model in this set of polynomials and proceeds as if no model selection had been carried out. We show that model averaging over a set of fractional polynomials reduces bias and gives better precision in estimating a safe level of exposure (say, the benchmark dose) than an estimator from the selected best model. To estimate a lower limit of this benchmark dose, an approximation of the variance of the model-averaged estimator, as proposed by Burnham and Anderson, can be used. However, this is a conservative method, often resulting in unrealistically low safe doses. Therefore, a bootstrap-based method is proposed to estimate the variance of the model-averaged parameter more accurately.
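A sketch of the model-averaging idea with first-degree fractional polynomials, using AIC weights, a continuous toy response, and a grid search for the benchmark dose; the dose shift, the BMD definition (a 0.05 increase in mean response), and all data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy continuous dose-response data; not from the paper.
dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0])
resp = np.array([0.02, 0.05, 0.09, 0.16, 0.27, 0.45])
powers = [-1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 3.0]   # standard FP1 powers

def fp_transform(d, p, shift=0.25):
    d = np.asarray(d, float) + shift    # shift so dose 0 is usable
    return np.log(d) if p == 0.0 else d ** p

fits = []
for p in powers:
    X = np.column_stack([np.ones_like(dose), fp_transform(dose, p)])
    beta, rss, *_ = np.linalg.lstsq(X, resp, rcond=None)
    rss = float(rss[0]) if len(rss) else float(np.sum((resp - X @ beta) ** 2))
    n, k = len(dose), 3                 # 2 coefficients + error variance
    fits.append((p, beta, n * np.log(rss / n) + 2 * k))   # AIC

# Akaike weights turn the AICs into model-averaging weights.
aics = np.array([f[2] for f in fits])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()

def averaged_mean(d):
    return sum(wi * (b[0] + b[1] * fp_transform(d, p))
               for wi, (p, b, _) in zip(w, fits))

# Benchmark dose: smallest grid dose raising the averaged mean response
# by 0.05 above background (an illustrative BMD definition).
grid = np.linspace(0.0, 4.0, 4001)
excess = averaged_mean(grid) - averaged_mean(0.0)
print(f"model-averaged BMD ~ {grid[np.argmax(excess >= 0.05)]:.3f}")
```

The bootstrap variance estimate advocated in the abstract would wrap this entire fitting-and-averaging step inside a resampling loop and take the variability of the resulting BMDs.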
36.
Expert opinion plays an important role when selecting promising clusters of chemical compounds in the drug discovery process. Indeed, experts can qualitatively assess the potential of each cluster, and with appropriate statistical methods these qualitative assessments can be quantified into a success probability for each of them. One crucial element, however, is often overlooked: the procedure by which the clusters are assigned to, or selected by, the experts for evaluation. In the present work, the impact such a procedure may have on the statistical analysis and the entire evaluation process is studied. It is shown that some implementations of the selection procedure may seriously compromise the validity of the evaluation even when the rating and selection processes are independent. Consequently, the fully random allocation of clusters to experts is strongly advocated. Copyright © 2014 John Wiley & Sons, Ltd.
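The advocated fully random allocation is straightforward to implement; a sketch in which each cluster is rated by a fixed number of experts, with workloads kept balanced and ties broken at random (the balancing rule and replication level are illustrative choices, not the paper's protocol):

```python
import numpy as np

def random_allocation(n_clusters, experts, per_cluster, seed=0):
    """Fully random, load-balanced assignment of clusters to experts.

    Each cluster is rated by `per_cluster` distinct experts; among the
    experts with the lightest current workload, the raters are chosen
    at random, so no systematic link can arise between cluster
    characteristics and the experts who evaluate them.
    """
    rng = np.random.default_rng(seed)
    load = {e: 0 for e in experts}
    assignment = {}
    for c in range(n_clusters):
        order = sorted(experts, key=lambda e: (load[e], rng.random()))
        chosen = order[:per_cluster]
        for e in chosen:
            load[e] += 1
        assignment[c] = chosen
    return assignment

alloc = random_allocation(10, ["A", "B", "C", "D"], per_cluster=2)
for c, raters in alloc.items():
    print(f"cluster {c}: rated by {raters}")
```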
37.
A review of the empirical literature was conducted to establish the relation between teacher and student ethnicity and cognitive and noncognitive student outcomes. It was hypothesized that ethnic teacher–student congruence results in more favorable outcomes, especially for minority students. A total of 24 quantitative studies focusing on primary and secondary education in the United States were reviewed. The results show that there is as yet little unambiguous empirical evidence that a stronger degree of ethnic match, whether in the form of a one-to-one coupling of a teacher to students with the same ethnic background or a larger share of ethnic minority teachers at an ethnically mixed school, leads to predominantly positive results. Insofar as positive effects were found, they apply to a greater extent to subjective teacher evaluations than to objective achievement outcome measures.
38.
The paper applies classical statistical principles to yield new tools for risk assessment and makes new use of epidemiological data for human risk assessment. An extensive clinical and epidemiological study of workers engaged in the manufacturing and formulation of aldrin and dieldrin provides occupational hygiene and biological monitoring data on individual exposures over the years of employment, and thus provides unusually accurate measures of individual lifetime average daily doses. In the cancer dose-response modeling, each worker is treated as a separate experimental unit with his own unique dose. Maximum likelihood estimates of added cancer risk are calculated for multistage, multistage-Weibull, and proportional hazards models. Distributional characterizations of added cancer risk are based on bootstrap and relative likelihood techniques. The cancer mortality data on these male workers suggest that low-dose exposures to aldrin and dieldrin do not significantly increase human cancer risk and may even decrease the human hazard rate for all types of cancer combined at low doses (e.g., 1 μg/kg/day). The apparent hormetic effect in the best-fitting dose-response models for this data set is statistically significant. The decrease in cancer risk at low doses of aldrin and dieldrin is in sharp contrast to the U.S. Environmental Protection Agency's upper bound on cancer potency based on mouse liver tumors. The EPA's upper bound implies that lifetime average daily doses of 0.0000625 and 0.00625 μg/kg body weight/day would correspond to increased cancer risks of 0.000001 and 0.0001, respectively. However, the best estimate from the Pernis epidemiological data is that there is no increase in cancer risk in these workers at these doses, or even at doses as large as 2 μg/kg/day.
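As an illustration of the multistage fit underlying such analyses, a hedged sketch with invented grouped data follows. Note that the plain multistage model constrains risk to be nondecreasing in dose, so it cannot reproduce the hormetic dip the paper reports; that requires the richer multistage-Weibull or proportional hazards models mentioned above.

```python
import numpy as np
from scipy.optimize import minimize

# Invented grouped data: dose (arbitrary units), group size, tumor count.
dose = np.array([0.0, 0.5, 1.0, 2.0])
n = np.array([100, 100, 100, 100])
x = np.array([5, 6, 8, 14])

def p_tumor(d, q):
    """Two-stage multistage model: P(d) = 1 - exp(-(q0 + q1*d + q2*d^2))."""
    return 1.0 - np.exp(-(q[0] + q[1] * d + q[2] * d ** 2))

def neg_loglik(q):
    p = np.clip(p_tumor(dose, q), 1e-10, 1 - 1e-10)
    return -np.sum(x * np.log(p) + (n - x) * np.log(1 - p))

q_hat = minimize(neg_loglik, x0=[0.05, 0.01, 0.01],
                 bounds=[(0, None)] * 3, method="L-BFGS-B").x

def added_risk(d):
    """Extra risk over background: (P(d) - P(0)) / (1 - P(0))."""
    p0 = p_tumor(0.0, q_hat)
    return (p_tumor(d, q_hat) - p0) / (1 - p0)

for d in [0.0000625, 0.00625, 1.0]:
    print(f"dose {d:g}: added risk ~ {added_risk(d):.2e}")
```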
39.
Over the last decades, the evaluation of potential surrogate endpoints in clinical trials has steadily grown in importance, not only because ever more potential markers and surrogate endpoints have become available, but also because the methodology has developed considerably. While early work was devoted, to a large extent, to Gaussian, binary, and longitudinal endpoints, the case of time-to-event endpoints is in need of careful scrutiny as well, owing to the strong presence of such endpoints in oncology and beyond. While work has been done in the past, such tools were often cumbersome to use in practice, because of the need to fit copula or frailty models that were further embedded in a hierarchical or two-stage modeling approach. In this paper, we present a methodologically elegant and easy-to-use approach based on information theory. We resolve essential issues, including the quantification of “surrogacy” on the basis of such an approach. Our results are put to the test in a simulation study and are applied to data from clinical trials in oncology. The methodology has been implemented in R.
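One common information-theoretic measure of surrogacy is the likelihood reduction factor R2_h = 1 - exp(-G2/n), with G2 the likelihood-ratio statistic comparing models for the true endpoint with and without the surrogate. The paper's implementation is in R; the Python sketch below merely illustrates the quantity on a Gaussian toy example, where it reduces to the classical R-squared, and is not the authors' time-to-event machinery.

```python
import numpy as np

def r2_h(loglik_with, loglik_without, n):
    """Likelihood-reduction-based surrogacy measure R2_h = 1 - exp(-G2/n),

    where G2 = 2 * (loglik_with - loglik_without) is the likelihood-ratio
    statistic for adding the surrogate to the model of the true endpoint.
    Values near 1 indicate a strong surrogate.
    """
    g2 = 2.0 * (loglik_with - loglik_without)
    return 1.0 - np.exp(-g2 / n)

def gaussian_loglik(resid):
    n, s2 = len(resid), np.mean(resid ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

# Toy illustration: true endpoint T, candidate surrogate S (Gaussian).
rng = np.random.default_rng(3)
n = 500
S = rng.normal(size=n)
T = 0.8 * S + rng.normal(scale=0.6, size=n)

ll0 = gaussian_loglik(T - T.mean())            # model without surrogate
b1, b0 = np.polyfit(S, T, 1)                   # model with surrogate
ll1 = gaussian_loglik(T - (b0 + b1 * S))

print(f"R2_h = {r2_h(ll1, ll0, n):.3f}")       # here equals classical R2
```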
40.
Probabilistic sensitivity analysis (SA) makes it easier to incorporate background knowledge on the input variables under consideration than many other existing SA techniques. Such knowledge is incorporated by constructing a joint density function over the input domain. However, it rarely happens that the available knowledge translates directly and uniquely into such a density function. A naturally arising question is then to what extent the choice of density function determines the values of the sensitivity measures under consideration. In this paper we perform simulation studies to address this question. Our empirical analysis suggests some guidelines, but also offers words of caution to practitioners in the field of probabilistic SA.
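A small experiment in the spirit of the paper's simulation studies (the model and both input densities are invented for illustration): estimate a first-order, variance-based sensitivity index under two different input densities on the same support and observe how much the choice matters.

```python
import numpy as np

rng = np.random.default_rng(11)

def model(x1, x2):
    """Fixed deterministic model whose inputs are uncertain."""
    return x1 + 0.5 * x2 ** 2 + x1 * x2

def first_order_index(sampler, n=200_000):
    """First-order index S1 = Var(E[Y|x1]) / Var(Y) for input x1,
    estimated via the pick-and-freeze identity Cov(Y, Y') / Var(Y),
    where Y and Y' share the same x1 but use independent draws of x2."""
    x1 = sampler(n)
    y = model(x1, sampler(n))
    y_frozen = model(x1, sampler(n))    # same x1, fresh x2
    return np.cov(y, y_frozen)[0, 1] / np.var(y)

def uniform(n):
    return rng.uniform(0.0, 1.0, n)

def beta_skewed(n):                     # same [0, 1] support, new shape
    return rng.beta(0.5, 2.0, n)

print(f"S1(x1) under uniform inputs:     {first_order_index(uniform):.3f}")
print(f"S1(x1) under beta(0.5,2) inputs: {first_order_index(beta_skewed):.3f}")
```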