240 search results found (search time: 15 ms)
11.
A novel method was used to incorporate in vivo host–pathogen dynamics into a new robust outbreak model for legionellosis. Dose‐response and time‐dose‐response (TDR) models were generated for Legionella longbeachae exposure to mice via the intratracheal route using a maximum likelihood estimation approach. The best‐fit TDR model was then incorporated into two L. pneumophila outbreak models: an outbreak that occurred at a spa in Japan, and one that occurred in a Melbourne aquarium. The best‐fit TDR from the murine dosing study was the beta‐Poisson with exponential‐reciprocal dependency model, which had a minimized deviance of 32.9. This model was tested against other incubation distributions in the Japan outbreak, and performed consistently well, with reported deviances ranging from 32 to 35. In the case of the Melbourne outbreak, the exponential model with exponential dependency was tested against non‐time‐dependent distributions to explore the performance of the time‐dependent model with the lowest number of parameters. This model reported low minimized deviances around 8 for the Weibull, gamma, and lognormal exposure distribution cases. This work shows that the incorporation of a time factor into outbreak distributions provides models with acceptable fits that can provide insight into the in vivo dynamics of the host‐pathogen system.
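The single‐hit models named above can be written compactly. The sketch below shows the exponential and (approximate) beta‐Poisson dose‐response forms and the binomial deviance that maximum likelihood fitting minimizes; the parameterization is the standard N50 form of the beta‐Poisson approximation, and any parameter values supplied to these functions are illustrative, not the fitted values from the murine study.

```python
import math

def exponential_dr(dose, r):
    """Exponential single-hit model: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def beta_poisson_dr(dose, alpha, n50):
    """Approximate beta-Poisson model in its N50 parameterization:
    P(infection) = 1 - (1 + dose * (2**(1/alpha) - 1) / N50) ** (-alpha)."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

def deviance(observed_pos, totals, doses, model, *params):
    """Binomial deviance of a fitted model against dose-group data; this
    is the quantity minimized in maximum likelihood dose-response fits
    (e.g., the 32.9 reported for the murine study)."""
    dev = 0.0
    for y, n, d in zip(observed_pos, totals, doses):
        p = min(max(model(d, *params), 1e-12), 1.0 - 1e-12)
        obs = y / n
        if 0 < y:
            dev += 2.0 * y * math.log(obs / p)
        if y < n:
            dev += 2.0 * (n - y) * math.log((1.0 - obs) / (1.0 - p))
    return dev
```

By construction, the beta‐Poisson risk at `dose == n50` is exactly 50%, and a model that reproduces the observed rates exactly has zero deviance.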
12.
In recent years, seamless phase I/II clinical trials have drawn much attention, as they consider both toxicity and efficacy endpoints in finding an optimal dose (OD). Engaging an appropriate number of patients in a trial is a challenging task. This paper proposes a dynamic stopping rule to conserve resources in phase I/II trials; the rule aims to spare patients from unnecessarily toxic or subtherapeutic doses. We allow a trial to stop early when the widths of the confidence intervals for the dose-response parameters become sufficiently narrow or when the sample size reaches a predefined size, whichever comes first. A simulation study of dose-response scenarios in various settings demonstrates that the proposed stopping rule engages an appropriate number of patients. We therefore suggest its use in clinical trials.
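A minimal sketch of such a dynamic stopping rule, under a simplifying assumption: a Wald interval on the toxicity rate at the current dose stands in for the full confidence intervals on the dose-response parameters, and the threshold and maximum sample size are illustrative.

```python
import math

def wald_ci_width(successes, n, z=1.96):
    """Width of the Wald confidence interval for a binomial proportion.
    (The Wald interval degenerates to zero width at 0 or n events;
    a real trial would use a better interval or a minimum n.)"""
    if n == 0:
        return float("inf")
    p = successes / n
    return 2.0 * z * math.sqrt(p * (1.0 - p) / n)

def should_stop(successes, n, max_n, width_threshold):
    """Dynamic stopping rule: stop when the interval is narrow enough,
    or when the predefined maximum sample size is reached, whichever
    comes first."""
    return n >= max_n or wald_ci_width(successes, n) < width_threshold
```

For example, 2 toxicities in 10 patients gives an interval roughly 0.50 wide, so a 0.10 width threshold would not yet trigger a stop.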
13.
The aim of this study is to estimate the reference level of lifetime cadmium intake (LCd) as the benchmark doses (BMDs) and their 95% lower confidence limits (BMDLs) for various renal effects by applying a hybrid approach. The participants comprised 3,013 (1,362 men and 1,651 women) and 278 (129 men and 149 women) inhabitants of the Cd‐polluted and nonpolluted areas, respectively, in the environmentally exposed Kakehashi River basin. Glucose, protein, aminonitrogen, metallothionein, and β2‐microglobulin in urine were measured as indicators of renal dysfunction. The BMD and BMDL that corresponded to an additional risk of 5% were calculated with background risk at zero exposure set at 5%. The obtained BMDLs of LCd were 3.7 g (glucose), 3.2 g (protein), 3.7 g (aminonitrogen), 1.7 g (metallothionein), and 1.8 g (β2‐microglobulin) in men and 2.9 g (glucose), 2.5 g (protein), 2.0 g (aminonitrogen), 1.6 g (metallothionein), and 1.3 g (β2‐microglobulin) in women. The lowest BMDL was 1.7 g (metallothionein) and 1.3 g (β2‐microglobulin) in men and women, respectively. The lowest BMDL of LCd (1.3 g) was somewhat lower than the representative threshold LCd (2.0 g) calculated in the previous studies. The obtained BMDLs may contribute to further discussion on the health risk assessment of cadmium exposure.
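The benchmark-dose calculation itself reduces to inverting a fitted risk model. The sketch below assumes a hypothetical logistic risk function (the study's hybrid approach works on continuous urinary markers; the slope parameter here is illustrative) and solves for the dose giving a 5% additional risk over the 5% background, matching the definition used in the abstract.

```python
import math

def risk(dose, beta0, beta1):
    """Hypothetical fitted logistic risk of renal dysfunction as a
    function of lifetime cadmium intake (g). Illustrative only."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * dose)))

def benchmark_dose(bmr, beta0, beta1, hi=100.0, tol=1e-8):
    """Solve risk(BMD) - risk(0) = bmr (additional risk) by bisection;
    assumes risk is increasing in dose and the BMR is reached below hi."""
    background = risk(0.0, beta0, beta1)
    lo_d, hi_d = 0.0, hi
    while hi_d - lo_d > tol:
        mid = 0.5 * (lo_d + hi_d)
        if risk(mid, beta0, beta1) - background < bmr:
            lo_d = mid
        else:
            hi_d = mid
    return 0.5 * (lo_d + hi_d)
```

Setting `beta0 = log(0.05 / 0.95)` fixes the background risk at 5%, as in the study; the BMDL would then be obtained from the sampling distribution of the fitted parameters, which is omitted here.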
14.
The application of the exponential model is extended by the inclusion of new nonhuman primate (NHP), rabbit, and guinea pig dose‐lethality data for inhalation anthrax. Because deposition is a critical step in the initiation of inhalation anthrax, inhaled doses may not provide the most accurate cross‐species comparison. For this reason, species‐specific deposition factors were derived to translate inhaled dose to deposited dose. Four NHP, three rabbit, and two guinea pig data sets were utilized. Results from species‐specific pooling analysis suggested all four NHP data sets could be pooled into a single NHP data set, which was also true for the rabbit and guinea pig data sets. The three species‐specific pooled data sets could not be combined into a single generic mammalian data set. For inhaled dose, NHPs were the most sensitive species (relative lowest LD50) and rabbits the least. Improved inhaled LD50s proposed for use in risk assessment are 50,600, 102,600, and 70,800 inhaled spores for NHP, rabbit, and guinea pig, respectively. Lung deposition factors were estimated for each species using published deposition data from Bacillus spore exposures, particle deposition studies, and computer modeling. Deposition was estimated at 22%, 9%, and 30% of the inhaled dose for NHP, rabbit, and guinea pig, respectively. When the inhaled dose is adjusted to reflect deposited dose, the rabbit appears to be the most sensitive species and the guinea pig the least sensitive.
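Using the LD50s and deposition fractions reported above, the cross-species adjustment is a one-line calculation. The sketch reproduces the sensitivity reversal: NHPs are most sensitive on an inhaled basis, but rabbits are most sensitive once doses are expressed as deposited spores.

```python
# Inhaled LD50s (spores) and lung deposition fractions from the abstract.
inhaled_ld50 = {"NHP": 50_600, "rabbit": 102_600, "guinea pig": 70_800}
deposition = {"NHP": 0.22, "rabbit": 0.09, "guinea pig": 0.30}

# Deposited-dose LD50 = inhaled LD50 x species-specific deposition fraction.
deposited_ld50 = {s: inhaled_ld50[s] * deposition[s] for s in inhaled_ld50}

# A lower LD50 means a more sensitive species; rank from most to least.
ranking = sorted(deposited_ld50, key=deposited_ld50.get)
```

The deposited-dose LD50s come out to roughly 11,100 (NHP), 9,200 (rabbit), and 21,200 (guinea pig) spores, which is the basis for the abstract's concluding sentence.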
15.
The main purpose of dose‐escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigation in later studies. In this paper, we introduce dose‐escalation designs that incorporate both dose‐limiting toxicities (DLTs) and indicative responses of efficacy into the procedure. A flexible nonparametric model is used for modelling the continuous efficacy responses, while a logistic model is used for the binary DLTs. Escalation decisions are based on the combination of the probabilities of DLTs and expected efficacy through a gain function. On the basis of this setup, we then introduce 2 types of Bayesian adaptive dose‐escalation strategies. The first type, called “single objective,” aims to identify and recommend a single dose: either the maximum tolerated dose, the highest dose that is considered safe, or the optimal dose, a safe dose that gives the optimal benefit–risk balance. The second type, called “dual objective,” aims to jointly estimate both the maximum tolerated dose and the optimal dose accurately. The recommended doses obtained under these dose‐escalation procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate different strategies via simulations based on an example constructed from a real trial in patients with type 2 diabetes, and the use of stopping rules is assessed. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes. The dual‐objective designs give better results in terms of identifying the 2 real target doses compared to the single‐objective designs.
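A minimal sketch of dose selection through a gain function combining DLT probability and expected efficacy. The linear trade-off, the penalty weight, and the safety cutoff are all hypothetical stand-ins; the paper's gain function and safety criterion may differ.

```python
def gain(p_dlt, expected_efficacy, tox_penalty=2.0):
    """Hypothetical linear gain: efficacy benefit minus a toxicity penalty."""
    return expected_efficacy - tox_penalty * p_dlt

def optimal_dose(p_dlt_by_dose, efficacy_by_dose, max_p_dlt=0.33):
    """Optimal dose: highest gain among doses deemed safe (estimated
    DLT probability at most max_p_dlt). Returns None if no dose is safe."""
    safe = [i for i, p in enumerate(p_dlt_by_dose) if p <= max_p_dlt]
    if not safe:
        return None
    return max(safe, key=lambda i: gain(p_dlt_by_dose[i], efficacy_by_dose[i]))
```

With estimated DLT probabilities [0.05, 0.15, 0.30, 0.50] and expected efficacies [0.3, 0.6, 0.8, 0.9], the top dose is excluded as unsafe and the second dose maximizes the gain among the remainder.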
16.
The main goal of phase I cancer clinical trials is to determine the highest dose of a new therapy associated with an acceptable level of toxicity for use in a subsequent phase II trial. The continual reassessment method (CRM) [O’Quigley, J., Pepe, M., Fisher, L., 1990. Continual reassessment method: a practical design for phase I clinical trials in cancer. Biometrics 46, 33–48] and escalation with overdose control (EWOC) [Babb, J., Rogatko, A., Zacks, S., 1998. Cancer phase I clinical trials: efficient dose escalation with overdose control. Statist. Med. 17 (10), 1103–1120] are two model-based designs used for phase I cancer clinical trials. Several modifications of the original CRM and EWOC have been proposed. In this paper, we show how CRM and EWOC can be unified, present a hybrid design, and study its characteristics. The three designs (CRM, EWOC, and the hybrid design) are compared by convergence rates and overdose proportions. The simulation results show that the hybrid design generally has faster convergence rates than EWOC and smaller overdose proportions than CRM, especially when the true maximum tolerated dose (MTD) is above the mid-level of the dose range considered. The performance of the three designs is also evaluated in terms of sensitivity to outliers.
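For reference, a minimal grid-posterior sketch of the one-parameter power-model CRM that both designs build on; the skeleton, the standard normal prior, and the grid bounds are illustrative. EWOC differs in the selection step: rather than targeting the posterior mean toxicity, it selects a low posterior quantile of the MTD to control overdosing.

```python
import math

def crm_next_dose(skeleton, target, outcomes, grid_pts=200):
    """One-parameter power-model CRM: p_i(a) = skeleton[i] ** exp(a),
    with a standard normal prior on a, evaluated on a grid.
    `outcomes` is a list of (dose_index, had_toxicity) pairs.
    Returns the index of the dose whose posterior-mean toxicity
    probability is closest to `target`."""
    grid = [-3.0 + 6.0 * k / (grid_pts - 1) for k in range(grid_pts)]
    post = []
    for a in grid:
        logp = -0.5 * a * a  # standard normal prior (up to a constant)
        for i, tox in outcomes:
            p = skeleton[i] ** math.exp(a)
            logp += math.log(p) if tox else math.log(1.0 - p)
        post.append(math.exp(logp))
    total = sum(post)
    post = [w / total for w in post]
    # Posterior-mean toxicity probability at each dose level.
    est = [sum(w * (s ** math.exp(a)) for w, a in zip(post, grid))
           for s in skeleton]
    return min(range(len(skeleton)), key=lambda i: abs(est[i] - target))
```

After repeated toxicities at the lowest dose the design stays there, and after a long run of non-toxicities at the highest dose it recommends that dose, as expected.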
17.
For a dose finding study in cancer, the most successful dose (MSD), among a group of available doses, is the dose at which the overall success rate is highest. This rate is the product of the rate of seeing non-toxicities and the rate of tumor response. A successful dose finding trial in this context is one where we identify the MSD in an efficient manner. In practice we may also need to consider algorithms for identifying the MSD that can incorporate certain restrictions, the most common being to keep the estimated toxicity rate alone below some maximum rate. In this case the MSD may correspond to a different level than the unconstrained MSD and, in providing a final recommendation, it is important to underline that it is subject to the given constraint. We work with the approach described in O'Quigley et al. [Biometrics 2001; 57(4):1018-1029]. The focus of that work was dose finding in HIV, where information on both toxicity and efficacy was almost immediately available. Recent cancer studies are beginning to fall under this same heading where, as before, toxicity can be quickly evaluated and, in addition, we can rely on biological markers or other measures of tumor response. Mindful of the particular context of cancer, our purpose here is to consider the methodology developed by O'Quigley et al. and its practical implementation. We also carry out a study on the doubly under-parameterized model, developed by O'Quigley et al. but not …
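A sketch of the MSD criterion with the optional toxicity restriction described above; the probability vectors passed in are illustrative estimates, not trial data.

```python
def most_successful_dose(p_tox, p_resp, max_tox=None):
    """MSD = dose maximizing P(no toxicity) * P(tumor response),
    optionally restricted to doses whose estimated toxicity rate
    does not exceed max_tox. Returns a dose index."""
    success = [(1.0 - t) * r for t, r in zip(p_tox, p_resp)]
    candidates = [i for i in range(len(p_tox))
                  if max_tox is None or p_tox[i] <= max_tox]
    return max(candidates, key=lambda i: success[i])
```

With toxicity estimates [0.05, 0.15, 0.30, 0.50] and response estimates [0.2, 0.4, 0.6, 0.7], the unconstrained MSD is the third dose, but a 0.20 toxicity cap moves the constrained recommendation down to the second dose, illustrating why the final recommendation must be reported as subject to the constraint.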
18.
Patient heterogeneity may complicate dose‐finding in phase 1 clinical trials if the dose‐toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method on the basis of a hierarchical Bayesian dose‐toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup‐specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to 3 alternative approaches, on the basis of nonhierarchical models, that make different types of assumptions about within‐subgroup dose‐toxicity curves. The simulations show that the hierarchical model‐based method is recommended in settings where the dose‐toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.
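The borrowing-of-strength idea can be illustrated with a toy shrinkage calculation: subgroup-specific toxicity rates are pulled toward the pooled rate, with low-prevalence subgroups shrunk hardest. This is a stand-in for the full hierarchical CRM, not the paper's model, and the prior-strength constant is arbitrary.

```python
def partial_pool(events, trials, prior_strength=4.0):
    """Toy empirical-Bayes shrinkage of subgroup toxicity rates toward
    the pooled rate. Small subgroups (few trials) are shrunk hardest,
    mimicking how a hierarchical model borrows strength under
    exchangeability."""
    pooled = sum(events) / sum(trials)
    return [(e + prior_strength * pooled) / (n + prior_strength)
            for e, n in zip(events, trials)]
```

For example, a subgroup with 1 toxicity in 5 patients (raw rate 0.20) is pulled noticeably toward the pooled rate, while a subgroup with 3 in 30 barely moves.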
19.
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson‐distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional “single‐hit” dose‐response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose‐response models in terms of probability generating functions. It is shown formally that the theoretical single‐hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single‐hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single‐hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose‐response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose‐response assessment as well as practical risk characterization are discussed.
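The PGF formulation is compact enough to sketch directly: the single-hit risk is 1 − G(1 − r), where G is the probability generating function of the dose distribution and r is the per-organism infection probability. The negative binomial stands in below for a clustered (overdispersed) dose, as one special case of the stuttering Poisson family; at identical mean doses its risk is lower than the Poisson risk, consistent with the abstract's formal result.

```python
import math

def single_hit_risk_poisson(mean_dose, r):
    """P(infection) = 1 - G(1 - r), with Poisson PGF G(s) = exp(mu*(s-1)),
    which reduces to the familiar 1 - exp(-r * mu)."""
    return 1.0 - math.exp(-r * mean_dose)

def single_hit_risk_negbin(mean_dose, r, k):
    """Negative binomial dose (mean mu, dispersion k), modeling clustering:
    G(s) = (1 + mu*(1-s)/k)**(-k), so risk = 1 - (1 + r*mu/k)**(-k).
    As k -> infinity this recovers the Poisson risk."""
    return 1.0 - (1.0 + r * mean_dose / k) ** (-k)
```

For a mean dose of 100 organisms with r = 0.01, the Poisson risk is about 0.63 while a strongly clustered dose (k = 0.5) gives about 0.42, illustrating how clustering lowers the theoretical single-hit risk.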
20.
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose‐response modeling. It is a well‐known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low‐dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal‐response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap‐based confidence limits for the BMD. We explore the confidence limits’ small‐sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty.
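A minimal sketch of the nonparametric pipeline: the pool-adjacent-violators algorithm produces the isotonic fit of the dose-group response rates, and linear interpolation then locates the benchmark dose at a given extra risk. The data and BMR below are illustrative, and the paper's bootstrap-based confidence limits are omitted.

```python
def pava(y, w):
    """Pool-adjacent-violators algorithm: weighted least-squares
    monotone (nondecreasing) fit to y."""
    level, weight, count = [], [], []
    for yi, wi in zip(y, w):
        level.append(yi); weight.append(wi); count.append(1)
        while len(level) > 1 and level[-2] > level[-1]:
            l2, w2, c2 = level.pop(), weight.pop(), count.pop()
            l1, w1, c1 = level.pop(), weight.pop(), count.pop()
            level.append((l1 * w1 + l2 * w2) / (w1 + w2))
            weight.append(w1 + w2); count.append(c1 + c2)
    fit = []
    for l, c in zip(level, count):
        fit.extend([l] * c)
    return fit

def bmd_isotonic(doses, tumors, totals, bmr=0.10):
    """Nonparametric BMD: isotonic fit of the observed rates, then
    linear interpolation to the dose giving extra risk bmr over the
    fitted background. Returns None if the BMR is not reached."""
    p = pava([t / n for t, n in zip(tumors, totals)], totals)
    target = p[0] + bmr * (1.0 - p[0])  # extra-risk definition
    for i in range(1, len(doses)):
        if p[i] >= target:
            if p[i] == p[i - 1]:
                return doses[i]
            frac = (target - p[i - 1]) / (p[i] - p[i - 1])
            return doses[i - 1] + frac * (doses[i] - doses[i - 1])
    return None
```

With dose groups [0, 10, 50, 100] showing [1, 4, 10, 25] tumors out of 50 animals each, the fitted rates are already monotone and a 10% extra-risk BMD of roughly 22.7 is obtained by interpolation.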
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号