1.
Proportional hazards are a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning no difference between treatment arms is observed, and only after some unknown time point do the differences between treatment arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log‐rank test allows weighting of early, middle, and late differences through the Fleming and Harrington class of weights and is known to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington weights, together with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment‐arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation of the power and type‐I error rate of the weighted log‐rank test in a simulated scenario with fixed values of the Fleming and Harrington weights. We also give practical recommendations on which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
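As background for the abstract above, a minimal sketch of the Fleming–Harrington G(p, q) weighted log-rank statistic (the function name and data layout are ours, not the authors'); p = q = 0 recovers the standard log-rank test, while q > 0 up-weights late differences such as delayed effects:

```python
import numpy as np

def fh_weighted_logrank(time, event, group, p=0.0, q=0.0):
    """Fleming-Harrington G(p, q) weighted log-rank test for two groups.

    time  : event/censoring times
    event : 1 = event observed, 0 = censored
    group : 0/1 treatment-arm indicator
    Returns the standardized statistic Z, approximately N(0, 1) under H0.
    """
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    group = np.asarray(group, int)

    s_prev = 1.0          # left-continuous pooled Kaplan-Meier estimate
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = s_prev ** p * (1.0 - s_prev) ** q     # G(p, q) weight
        num += w * (d1 - d * n1 / n)              # observed minus expected
        if n > 1:
            var += w ** 2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_prev *= 1.0 - d / n                     # update pooled KM
    return num / np.sqrt(var)
```

For delayed effects the common choice is G(0, 1), which gives little weight to the early, indistinguishable part of the curves.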
2.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. An interim look may therefore be desirable, to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if the evidence is already sufficient (efficacy), or to adjust the sample size. Sequential design approaches specifically for BE studies have been proposed in earlier publications. We modify the existing methods, focusing on simplified multiplicity adjustment and futility stopping, and name our method the modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the originally published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/ . Copyright © 2015 John Wiley & Sons, Ltd.
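The MSDBE package's interface is not shown in the abstract, so the sketch below illustrates only the non-sequential building block: a two one-sided tests (TOST) assessment of average bioequivalence on the log scale. Normal quantiles are used for simplicity (a real small-sample analysis would use Student-t quantiles), and all names are illustrative:

```python
import math
from statistics import NormalDist

def tost_be(log_diff_mean, se, lower=math.log(0.8), upper=math.log(1.25)):
    """Two one-sided tests (TOST) for average bioequivalence on the log scale.

    log_diff_mean : estimated mean of log(test) - log(reference)
    se            : standard error of that mean
    BE is claimed at the 5% TOST level iff the 90% CI lies inside
    (log 0.8, log 1.25).  Returns (ci_lower, ci_upper, equivalent).
    """
    z = NormalDist().inv_cdf(0.95)               # one-sided 5% quantile
    lo, hi = log_diff_mean - z * se, log_diff_mean + z * se
    return lo, hi, (lo > lower) and (hi < upper)
```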
3.
If a population contains many zero values and the sample size is not very large, the traditional normal approximation‐based confidence intervals for the population mean may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors have therefore investigated the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that under a variety of hypothetical populations, these intervals often outperformed parametric likelihood intervals by having more balanced coverage rates and larger lower bounds. The authors illustrate their methodology using data from the Canadian Labour Force Survey for the year 2000.
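The coverage problem described above is easy to reproduce by simulation; the sketch below is ours (the zero-inflated exponential population and all parameter values are illustrative, not from the paper):

```python
import numpy as np

def normal_ci_coverage(p_zero=0.8, n=40, reps=2000, seed=1):
    """Monte Carlo coverage of the usual normal-approximation 95% CI for the
    mean when the population is zero-inflated: zero with probability p_zero,
    otherwise Exponential(1).  True mean is (1 - p_zero) * 1."""
    rng = np.random.default_rng(seed)
    true_mean = 1.0 - p_zero
    hits = 0
    for _ in range(reps):
        x = rng.exponential(1.0, n) * (rng.random(n) > p_zero)
        m = x.mean()
        s = x.std(ddof=1) / np.sqrt(n)
        hits += (m - 1.96 * s) <= true_mean <= (m + 1.96 * s)
    return hits / reps
```

With many zeros and a skewed nonzero part, the realized coverage falls noticeably below the nominal 95%, which is the phenomenon motivating the empirical likelihood intervals studied in the paper.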
4.
Amy H. Herring, Joseph G. Ibrahim, Stuart R. Lipsitz 《Journal of the Royal Statistical Society. Series C, Applied Statistics》2004, 53(2): 293-310
Summary. Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
5.
Merging information for semiparametric density estimation
Konstantinos Fokianos 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(4):941-958
Summary. The density ratio model specifies that the likelihood ratio of m − 1 probability density functions with respect to the m th is of known parametric form without reference to any parametric model. We study the semiparametric inference problem that is related to the density ratio model by appealing to the methodology of empirical likelihood. The combined data from all the samples leads to more efficient kernel density estimators for the unknown distributions. We adopt variants of well-established techniques to choose the smoothing parameter for the density estimators proposed.
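The full semiparametric density-ratio fit is beyond a short sketch, but the kernel-smoothing ingredient it builds on can be illustrated as follows (a plain Gaussian KDE of our own; the paper's estimator additionally reweights the pooled sample via the fitted density ratio model):

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth=None):
    """Plain Gaussian kernel density estimate evaluated on `grid`.

    If no bandwidth is given, Silverman's rule of thumb is used.  Pooling
    several samples before smoothing (the paper's idea) simply means passing
    the combined data here, with appropriate reweighting from the fitted
    density ratio model.
    """
    data = np.asarray(data, float)
    if bandwidth is None:
        bandwidth = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)
    u = (grid[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return kernels.sum(axis=1) / (len(data) * bandwidth)
```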
6.
Biao Zhang 《Australian & New Zealand Journal of Statistics》2004,46(3):407-423
Demonstrated equivalence between a categorical regression model based on case‐control data and an I‐sample semiparametric selection bias model leads to a new goodness‐of‐fit test. The proposed test statistic is an extension of an existing Kolmogorov–Smirnov‐type statistic and is the weighted average of the absolute differences between two estimated distribution functions in each response category. The paper establishes an optimal property for the maximum semiparametric likelihood estimator of the parameters in the I‐sample semiparametric selection bias model. It also presents a bootstrap procedure, some simulation results and an analysis of two real datasets.
7.
A philosophical reflection on operational theory based on complex adaptive systems
Traditional operational theories and methods can no longer cope with complex systems such as modern information-age warfare systems, which are full of "live" agents and changing factors; theoretical innovation is needed. Complex adaptive systems theory is a recent development in systems science and promises to be a breakthrough point for such innovation. Based on an analysis and comparison of combat systems, this paper argues that a combat system is in essence a complex adaptive system: each side in combat strives to win by enhancing its own adaptability and complexity while weakening those of its adversary.
8.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development involves application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques: the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices; the computation of exact posterior classification probabilities for observed data and for future cases to be classified; and posterior distributions for these probabilities that allow for assessment of second-level uncertainties in classification.
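A minimal one-dimensional sketch of the Gibbs sampling scheme described above, simplified to two components with unit variances (the paper treats the general multivariate case with unequal covariance matrices); it returns Monte Carlo estimates of the posterior classification probabilities:

```python
import numpy as np

def gibbs_mixture(x, iters=500, burn=100, seed=0):
    """Toy Gibbs sampler for a two-component normal mixture, unit variances.

    Alternates (1) sampling component labels z given parameters,
    (2) sampling component means given labels (flat prior), and
    (3) sampling the mixing weight (uniform prior -> Beta full conditional).
    Returns estimates of P(z_i = 1 | x) averaged over post-burn-in draws.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])        # crude but stable initialization
    w = 0.5
    z_accum = np.zeros(len(x))
    for it in range(iters):
        p1 = w * np.exp(-0.5 * (x - mu[1]) ** 2)
        p0 = (1 - w) * np.exp(-0.5 * (x - mu[0]) ** 2)
        z = rng.random(len(x)) < p1 / (p0 + p1)
        for k, mask in ((0, ~z), (1, z)):
            n_k = mask.sum()
            if n_k:                          # normal full conditional for mu_k
                mu[k] = rng.normal(x[mask].mean(), 1.0 / np.sqrt(n_k))
        w = rng.beta(1 + z.sum(), 1 + (~z).sum())
        if it >= burn:
            z_accum += z
    return z_accum / (iters - burn)
```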
9.
Stephen Walker 《Statistics and Computing》1995,5(4):311-315
Laud et al. (1993) describe a method for random variate generation from D-distributions. In this paper an alternative method using substitution sampling is given. An algorithm for the random variate generation from SD-distributions is also given.
10.
This paper describes the main factors affecting the characteristics of a digital beamforming (DBF) system, studies the effect of mutual coupling between array elements on the sidelobes and null depths of the adaptive pattern together with methods for its correction, and discusses techniques for correcting amplitude and phase errors in the receive channels and quadrature errors in the I/Q branches of a DBF array. Computer simulations and measurements show that correction by the described methods yields satisfactory results. In addition, to reduce the quadrature errors generated in the I/Q branches, a receiver scheme based on direct IF sampling and digitization is recommended.
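As an illustration of the I/Q quadrature-error correction mentioned above, here is a minimal Gram–Schmidt style sketch under one common error model (the article's exact model and notation are not reproduced; gain and phase are assumed already estimated by calibration):

```python
import numpy as np

def correct_iq(i_m, q_m, gain, phase):
    """Correct I/Q gain imbalance and quadrature-phase error.

    Assumed error model (one common convention):
        i_m = cos(theta),  q_m = gain * sin(theta + phase)
    First the gain imbalance is removed, then the Q branch is
    re-orthogonalized against I (Gram-Schmidt step).
    """
    i = np.asarray(i_m, float)
    q = np.asarray(q_m, float) / gain            # remove gain imbalance
    q = (q - i * np.sin(phase)) / np.cos(phase)  # remove quadrature skew
    return i, q

# Example: a distorted complex tone, then its correction
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
gain, phase = 1.1, np.deg2rad(5.0)
i_m, q_m = np.cos(theta), gain * np.sin(theta + phase)
i, q = correct_iq(i_m, q_m, gain, phase)
```

Under this model the correction is exact; in practice gain and phase must themselves be estimated, which is where the calibration techniques discussed in the paper come in.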