5,908 results found (search time: 15 ms)
931.
Response‐adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better-performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either not feasible or are ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: namely, type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is changes in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their slow accrual rates. We compute the type I error inflation as a function of the time trend magnitude to determine in which contexts the problem is most exacerbated. We then assess the ability of different correction methods to preserve the type I error in these contexts, as well as their performance in terms of other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare disease context for several RAR rules, differentiating between the two-armed and the multi-armed case. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to the several time trends considered.
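The drift mechanism can be illustrated with a small simulation. The sketch below is not the paper's design: it uses a hypothetical Thompson-sampling-style RAR rule with Beta(1, 1) priors, an equal-randomisation burn-in, and a naive two-proportion z-test at the end. The `drift` parameter adds the same linear improvement to both arms, so under this data-generating process any rejection is a type I error; all names and parameter values are illustrative assumptions.

```python
import random
from statistics import NormalDist

def simulate_rar_trial(n=100, p0=0.3, drift=0.4, burn_in=20, rng=None):
    """One two-armed trial under H0 (identical arms) with a linear time trend.

    Allocation: equal randomisation for `burn_in` patients, then a
    Thompson-sampling-style draw from each arm's Beta posterior.
    Returns the final (unadjusted) two-sample z statistic.
    """
    rng = rng or random.Random()
    s = [0, 0]; f = [0, 0]                       # successes / failures per arm
    for t in range(n):
        p_t = min(1.0, p0 + drift * t / n)       # patient drift hits both arms
        if t < burn_in:
            arm = t % 2
        else:
            draws = [rng.betavariate(1 + s[a], 1 + f[a]) for a in (0, 1)]
            arm = 0 if draws[0] > draws[1] else 1
        if rng.random() < p_t:
            s[arm] += 1
        else:
            f[arm] += 1
    n0, n1 = s[0] + f[0], s[1] + f[1]
    if n0 == 0 or n1 == 0:
        return 0.0
    p_pool = (s[0] + s[1]) / (n0 + n1)
    var = p_pool * (1 - p_pool) * (1 / n0 + 1 / n1)
    return (s[1] / n1 - s[0] / n0) / var ** 0.5 if var > 0 else 0.0

def type_one_error(n_sim=2000, drift=0.4, seed=1):
    """Monte Carlo rejection rate of the naive z-test under H0."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(0.975)
    rejections = sum(abs(simulate_rar_trial(drift=drift, rng=rng)) > z_crit
                     for _ in range(n_sim))
    return rejections / n_sim
```

Comparing `type_one_error(drift=0.4)` with `type_one_error(drift=0.0)` gives a feel for how an unadjusted final test can drift away from the nominal 5% level when allocation proportions change over a trial with a time trend.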
932.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter–based sample size re‐estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. Frequentist methods are used for planning and analysing the trial, while the external information on the variance is summarized by the Bayesian meta‐analytic‐predictive approach. To incorporate external information into the sample size re‐estimation, we propose updating the meta‐analytic‐predictive prior with the results of the internal pilot study and re‐estimating the sample size using an estimator from the posterior. By means of a simulation study, we compare operating characteristics such as power and the sample size distribution of the proposed procedure with those of the traditional sample size re‐estimation approach based on the pooled variance estimator. The simulation study shows that, when no prior‐data conflict is present, incorporating external information into the sample size re‐estimation improves the operating characteristics relative to the traditional approach. In the case of a prior‐data conflict, that is, when the variance of the ongoing clinical trial differs from the prior location, the traditional sample size re‐estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re‐estimation, the potential gains should therefore be balanced against the risks.
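As a toy illustration of blending prior variance information into a re-estimation step, the sketch below replaces the meta-analytic-predictive prior (a mixture fitted to historical trials) with a single conjugate inverse-gamma prior on the outcome variance. All numbers (`a0`, `b0`, the pilot sum of squares, `delta`) are made up, and the sample size formula is the usual normal approximation for a two-arm comparison of means, not the paper's procedure.

```python
import math
from statistics import NormalDist

def update_inv_gamma(a, b, pilot_ss, pilot_df):
    """Conjugate update of an inverse-gamma prior IG(a, b) on the outcome
    variance with the residual sum of squares from the internal pilot study."""
    return a + pilot_df / 2, b + pilot_ss / 2

def sample_size_per_arm(var_est, delta, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for a two-arm mean comparison."""
    z = NormalDist().inv_cdf
    n = 2 * var_est * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return math.ceil(n)

# Hypothetical prior from historical trials; pilot study with 30 d.f.
a0, b0 = 10.0, 40.0             # prior mean of sigma^2 = b/(a-1) ~ 4.44
a1, b1 = update_inv_gamma(a0, b0, pilot_ss=120.0, pilot_df=30)
post_mean_var = b1 / (a1 - 1)   # posterior-mean variance estimate
n_new = sample_size_per_arm(post_mean_var, delta=1.0)
```

The posterior-mean variance here pulls the pilot estimate (120/30 = 4) toward the prior location, which is exactly the behaviour that helps when prior and trial agree and hurts under a prior-data conflict.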
933.
Strong orthogonal arrays (SOAs) were recently introduced and studied as a class of space‐filling designs for computer experiments. An important problem that has not been addressed in the literature is that of design selection for such arrays. In this article, we conduct a systematic investigation into this problem, focusing on the most useful SOA(n, m, 4, 2+)s and SOA(n, m, 4, 2)s. We first address design selection for SOAs of strength 2+ by examining their three‐dimensional projections, presenting both theoretical and computational results. When SOAs of strength 2+ do not exist, we formulate a general framework for the selection of SOAs of strength 2 by looking at their two‐dimensional projections. The approach is fruitful: it is applicable when SOAs of strength 2+ do not exist, and it recovers them when they do. The Canadian Journal of Statistics 47: 302–314; 2019 © 2019 Statistical Society of Canada
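Strength for ordinary orthogonal arrays has a simple combinatorial check that conveys the flavour of the projection-based criteria discussed here. Note that the sketch below verifies plain strength 2 (every two-column projection balanced over all level pairs), not the finer stratification properties that distinguish SOAs of strength 2+; it is illustrative only.

```python
from itertools import combinations
from collections import Counter

def is_strength2(design, s):
    """Check that every two-column projection of `design` (rows are runs,
    entries in {0, ..., s-1}) contains each of the s*s level pairs equally
    often -- the combinatorial meaning of an orthogonal array of strength 2."""
    n = len(design)
    if n % (s * s) != 0:
        return False
    target = n // (s * s)
    cols = list(zip(*design))
    for c1, c2 in combinations(range(len(cols)), 2):
        counts = Counter(zip(cols[c1], cols[c2]))
        if len(counts) != s * s or any(v != target for v in counts.values()):
            return False
    return True

# OA(4, 3, 2, 2): a classical 2-level orthogonal array of strength 2
oa = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

Running `is_strength2(oa, 2)` confirms each pair of columns hits all four level combinations exactly once, which is the two-dimensional balance that the strength-2 framework in the article generalizes.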
934.
Simon's two‐stage design is the most commonly applied multi‐stage design in phase IIA clinical trials. It chooses the sample sizes at the two stages so as to minimize either the expected or the maximum sample size. When uncertainty about pre‐trial beliefs on the expected or desired response rate is high, a Bayesian alternative should be considered, since it allows one to deal with the entire distribution of the parameter of interest in a more natural way. In this setting, a crucial issue is how to construct a distribution from the available summaries to use as a clinical prior in a Bayesian design. In this work, we explore Bayesian counterparts of Simon's two‐stage design based on the predictive version of the single threshold design. This design requires specifying two prior distributions: the analysis prior, which is used to compute the posterior probabilities, and the design prior, which is employed to obtain the prior predictive distribution. While the usual approach is to build beta priors for carrying out a conjugate analysis, we derive both the analysis and the design distributions through linear combinations of B‐splines. The motivating example is the planning of a phase IIA two‐stage trial on an anti‐HER2 DNA vaccine in breast cancer, where initial beliefs formed from elicited experts' opinions and historical data showed a high level of uncertainty. The impact of different priors on the resulting sample size determination is evaluated.
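The operating characteristics of a given Simon design can be computed exactly from binomial probabilities. The sketch below assumes the standard decision rules (stop for futility after stage 1 if at most r1 responses are seen; reject H0 at the end if total responses exceed r) and uses the design (n1 = 10, r1 = 1, n = 29, r = 5) for p0 = 0.1 versus p1 = 0.3 purely as a worked illustration.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def simon_oc(n1, r1, n, r, p):
    """Probability of early termination (PET) and of rejecting H0 for a
    Simon two-stage design, evaluated at true response rate p."""
    pet = sum(binom_pmf(x, n1, p) for x in range(r1 + 1))
    n2 = n - n1
    reject = 0.0
    for x1 in range(r1 + 1, n1 + 1):             # continue to stage 2
        tail2 = sum(binom_pmf(x2, n2, p)
                    for x2 in range(max(0, r + 1 - x1), n2 + 1))
        reject += binom_pmf(x1, n1, p) * tail2   # need x1 + x2 > r overall
    return pet, reject
```

Evaluating `simon_oc(10, 1, 29, 5, p)` at p0 gives the early-stopping probability and type I error, and at p1 the power, which is how the frequentist benchmark for the Bayesian counterparts is obtained.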
935.
Proportional hazards is a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning no difference between treatment arms is observed, and only after some unknown time point do differences between treatment arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log‐rank test allows a weighting for early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington class of weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation, in terms of power and type I error rate, of the weighted log‐rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
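A direct implementation shows how the Fleming–Harrington G(rho, gamma) weights enter the log-rank statistic: each event time contributes its observed-minus-expected term scaled by S(t-)^rho (1 - S(t-))^gamma, where S is the pooled Kaplan–Meier estimate, so gamma > 0 up-weights late differences. This is a stdlib sketch assuming right-censored data with 0/1 group labels, not the simulation code used in the article.

```python
def fh_logrank(times, events, groups, rho=0.0, gamma=0.0):
    """Fleming-Harrington G(rho, gamma) weighted log-rank z statistic.
    rho = gamma = 0 reduces to the standard log-rank test."""
    data = sorted(zip(times, events, groups))
    n = len(data)
    at_risk = n
    at_risk1 = sum(g for _, _, g in data)   # group-1 subjects still at risk
    s_prev = 1.0                            # pooled KM estimate just before t
    num = var = 0.0
    i = 0
    while i < n:
        t = data[i][0]
        d = d1 = 0
        j = i
        while j < n and data[j][0] == t:    # events and censorings tied at t
            if data[j][1]:
                d += 1
                d1 += data[j][2]
            j += 1
        if d > 0 and at_risk > 1:
            w = s_prev ** rho * (1 - s_prev) ** gamma
            e1 = d * at_risk1 / at_risk
            v = (d * (at_risk1 / at_risk) * (1 - at_risk1 / at_risk)
                 * (at_risk - d) / (at_risk - 1))
            num += w * (d1 - e1)
            var += w * w * v
            s_prev *= 1 - d / at_risk       # KM update with left-limit weight
        at_risk1 -= sum(data[k][2] for k in range(i, j))
        at_risk -= j - i
        i = j
    return num / var ** 0.5 if var > 0 else 0.0
```

On a toy data set the unweighted call reproduces the classical log-rank z value, while `gamma=1` discounts the earliest events, mimicking the delayed-effect weighting discussed above.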
936.
In this paper, we consider the deterministic trend model in which the error process is allowed to be weakly or strongly correlated and subject to non‐stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt‐type estimator (with some initial condition) can be less efficient than ordinary least squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non‐parametrically weighted Cochrane–Orcutt‐type estimator is then proposed. The efficiency is uniform over weak or strong serial correlation and non‐stationary volatility of unknown form. The feasible estimator relies on non‐parametric estimation of the volatility function, and the asymptotic theory is provided. We use a data‐dependent smoothing bandwidth that automatically adjusts for the strength of non‐stationarity in the volatilities. The implementation requires neither pretesting the persistence of the process nor specifying the form of the non‐stationary volatility. Finite‐sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
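For reference, the classical homoskedastic Cochrane–Orcutt iteration for the trend model y_t = a + b·t + u_t with AR(1) errors can be sketched as follows. This is the textbook estimator the article takes as its starting point, not the proposed non-parametrically weighted version; it drops the first observation rather than applying a Prais–Winsten correction.

```python
def ols_trend(y):
    """OLS of y_t on (1, t); returns (intercept, slope)."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    b = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
         / sum((ti - tbar) ** 2 for ti in t))
    return ybar - b * tbar, b

def cochrane_orcutt_trend(y, n_iter=5):
    """Iterated Cochrane-Orcutt: estimate rho from OLS residuals, quasi-
    difference, re-run OLS, repeat.  Returns (a_hat, b_hat, rho_hat)."""
    a, b = ols_trend(y)
    rho = 0.0
    for _ in range(n_iter):
        u = [yi - a - b * ti for ti, yi in enumerate(y)]
        rho = (sum(u[t] * u[t - 1] for t in range(1, len(u)))
               / sum(ui ** 2 for ui in u[:-1]))
        # quasi-differenced model: y_t - rho*y_{t-1} = a(1-rho) + b*x*_t + e_t
        ystar = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
        xstar = [t - rho * (t - 1) for t in range(1, len(y))]
        m = len(ystar)
        xbar, ybar = sum(xstar) / m, sum(ystar) / m
        b = (sum((x - xbar) * (yi - ybar) for x, yi in zip(xstar, ystar))
             / sum((x - xbar) ** 2 for x in xstar))
        a = (ybar - b * xbar) / (1 - rho)   # unstable as rho -> 1
    return a, b, rho
```

The instability of the intercept recovery as rho approaches 1 hints at why, under high persistence and heteroskedasticity, this estimator can lose efficiency relative to plain OLS, which motivates the weighted alternative proposed in the paper.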
937.
Informative identification of the within‐subject correlation is essential in longitudinal studies in order to forecast the trajectory of each subject and improve the validity of inferences. In this paper, we fit this correlation structure by employing a time‐adaptive autoregressive error process. Such a process can automatically accommodate irregular and possibly subject‐specific observation times. Based on the fitted correlation structure, we propose an efficient two‐stage estimator of the unknown coefficient functions using a local polynomial approximation. This procedure does not involve within‐subject covariance matrices and hence circumvents the instability of calculating their inverses. The asymptotic normality of the resulting estimators is established. Numerical experiments were conducted to check the finite‐sample performance of our method, and an application to a set of medical data is also presented.
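The local polynomial building block can be sketched as a local linear (degree-1 weighted least squares) smoother with a Gaussian kernel. The paper's full two-stage estimator additionally exploits the fitted time-adaptive AR error structure, which is omitted here; function names and the bandwidth choice are illustrative.

```python
import math

def local_linear(x, y, x0, h):
    """Local linear estimate of m(x0) for y_i = m(x_i) + error: weighted
    least squares of y on (1, x - x0) with Gaussian kernel weights of
    bandwidth h; the fitted intercept is the estimate of m(x0)."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    sw = sum(w)
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, x))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, x))
    t0 = sum(wi * yi for wi, yi in zip(w, y))
    t1 = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, x, y))
    denom = sw * s2 - s1 ** 2
    return (s2 * t0 - s1 * t1) / denom
```

A useful sanity check is the exact-fit property: when the data lie on a straight line, the local linear smoother reproduces that line at any evaluation point, regardless of kernel or bandwidth.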
938.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage, there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. An interim look may therefore be desirable in order to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if evidence is sufficient (efficacy), or to adjust the sample size. Sequential design approaches tailored to BE studies have been proposed in previous publications. We modify the existing methods, focusing on simplified multiplicity adjustment and futility stopping, and name our method the modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the originally published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/. Copyright © 2015 John Wiley & Sons, Ltd.
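The fixed-sample analysis underlying such designs is the two one-sided tests (TOST) procedure on log-transformed pharmacokinetic data, equivalent to checking that a 90% confidence interval for the mean difference lies inside the ±log(1.25) margins. The sketch below is a deliberately simplified parallel-group version with a normal approximation to the t distribution (real BE studies are usually crossover designs analysed with exact t quantiles), shown only to make the efficacy criterion concrete.

```python
import math
from statistics import NormalDist, mean, stdev

def tost_be(log_test, log_ref, margin=math.log(1.25), alpha=0.05):
    """Average-bioequivalence TOST on log-transformed, parallel-group data.
    Returns True when the 90% CI for the mean difference lies within
    (-margin, margin).  Normal approximation: a sketch, not a regulatory
    analysis."""
    n1, n2 = len(log_test), len(log_ref)
    diff = mean(log_test) - mean(log_ref)
    s2 = (((n1 - 1) * stdev(log_test) ** 2 + (n2 - 1) * stdev(log_ref) ** 2)
          / (n1 + n2 - 2))                      # pooled variance
    se = math.sqrt(s2 * (1 / n1 + 1 / n2))
    z = NormalDist().inv_cdf(1 - alpha)
    lower, upper = diff - z * se, diff + z * se  # 90% CI (two 5% one-sided tests)
    return -margin < lower and upper < margin
```

An interim look in a sequential BE design applies essentially this check on partial data, with adjusted critical values to control the overall type I error.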
939.
940.
Abdulkadir A. Hussein, Sévérien Nkurunziza & Katrina Tomanelli. Australian & New Zealand Journal of Statistics, 2014, 56(1): 15–26
Aalen's nonparametric additive model, in which the regression coefficients are assumed to be unspecified functions of time, is a flexible alternative to Cox's proportional hazards model when the proportionality assumption is in doubt. In this paper, we incorporate a general linear hypothesis into the estimation of the time‐varying regression coefficients. We combine unrestricted least squares estimators and estimators restricted by the linear hypothesis to produce James–Stein‐type shrinkage estimators of the regression coefficients. We develop the asymptotic joint distribution of the restricted and unrestricted estimators and use it to study the relative performance of the proposed estimators via their integrated asymptotic distributional risks. We conduct Monte Carlo simulations to examine the relative performance of the estimators in terms of their integrated mean squared errors. We also compare the proposed estimators with a recently devised LASSO estimator and with ridge‐type estimators, both via simulations and via data on the survival of primary biliary cirrhosis patients.
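The shrinkage idea can be sketched in a few lines: combine the unrestricted and restricted estimates with a data-driven weight determined by the test statistic for the linear restriction. The functions below are an illustrative James–Stein-type rule and its positive-part variant for a k-dimensional parameter, not the article's exact estimators or risk expressions.

```python
def shrinkage_estimator(unrestricted, restricted, test_stat, k):
    """James-Stein-type estimator: shrink the unrestricted estimate toward
    the restricted one by the factor 1 - (k - 2)/test_stat, where test_stat
    is the statistic for testing the linear restriction and k (>= 3) its
    degrees of freedom.  Small test_stat (restriction plausible) means
    heavy shrinkage; large test_stat leaves the estimate nearly untouched."""
    c = 1.0 - (k - 2) / test_stat
    return [r + c * (u - r) for u, r in zip(unrestricted, restricted)]

def positive_part(unrestricted, restricted, test_stat, k):
    """Positive-part variant: caps the shrinkage factor at 0, so the
    estimator never over-shoots past the restricted estimate."""
    c = max(0.0, 1.0 - (k - 2) / test_stat)
    return [r + c * (u - r) for u, r in zip(unrestricted, restricted)]
```

The positive-part version is the one usually preferred in practice, since the raw rule can reverse the sign of the correction when the test statistic is very small.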