Full-text access type
Paid full text | 1,501 articles |
Free | 67 articles |
Free (domestic) | 3 articles |
Subject category
Management | 136 articles |
Ethnology | 4 articles |
Demography | 72 articles |
Collected works and series | 32 articles |
Theory and methodology | 21 articles |
General | 275 articles |
Sociology | 55 articles |
Statistics | 976 articles |
Publication year
2023 | 14 articles |
2022 | 10 articles |
2021 | 18 articles |
2020 | 30 articles |
2019 | 51 articles |
2018 | 60 articles |
2017 | 79 articles |
2016 | 40 articles |
2015 | 44 articles |
2014 | 68 articles |
2013 | 394 articles |
2012 | 113 articles |
2011 | 91 articles |
2010 | 56 articles |
2009 | 74 articles |
2008 | 53 articles |
2007 | 63 articles |
2006 | 42 articles |
2005 | 44 articles |
2004 | 32 articles |
2003 | 30 articles |
2002 | 23 articles |
2001 | 15 articles |
2000 | 21 articles |
1999 | 18 articles |
1998 | 14 articles |
1997 | 10 articles |
1996 | 9 articles |
1995 | 10 articles |
1994 | 6 articles |
1993 | 6 articles |
1992 | 6 articles |
1991 | 3 articles |
1990 | 3 articles |
1989 | 4 articles |
1988 | 3 articles |
1987 | 1 article |
1985 | 2 articles |
1984 | 5 articles |
1983 | 3 articles |
1982 | 1 article |
1980 | 1 article |
1978 | 1 article |
1,571 results found (search time: 15 ms)
1.
In this article, we propose a novel approach for testing the equality of two log-normal populations using a computational approach test (CAT) that does not require explicit knowledge of the sampling distribution of the test statistic. Simulation studies demonstrate that the proposed approach attains a satisfactory actual size even at small sample sizes and is overall superior to existing methods. A CAT is also proposed for testing the reliability of two log-normal populations when the means are equal. Simulations show that the actual size of this new approach is close to the nominal level and better than that of the score test. Finally, the proposed methods are illustrated with two examples.
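A CAT of this kind is essentially a parametric-bootstrap test: compute a statistic, then calibrate it by Monte Carlo simulation from a fitted null model instead of a known sampling distribution. The numpy sketch below illustrates only that general idea, using a crude plug-in null that forces a common log-normal mean; the paper's version would use its own statistic and a restricted estimate under H0.

```python
import numpy as np

def cat_test(x, y, n_sim=2000, seed=0):
    """Simulation-based (CAT-style) test that two log-normal
    populations have equal means.  Works on the log scale, where
    the log of a log-normal mean is mu + sigma^2 / 2."""
    rng = np.random.default_rng(seed)
    lx, ly = np.log(x), np.log(y)
    m1, m2 = lx.mean(), ly.mean()
    v1, v2 = lx.var(ddof=1), ly.var(ddof=1)
    # observed statistic: difference of estimated log-scale means
    t_obs = abs((m1 + v1 / 2) - (m2 + v2 / 2))
    # crude plug-in null: both samples share a common log-mean
    # (a restricted MLE would be the more principled choice)
    common = ((m1 + v1 / 2) + (m2 + v2 / 2)) / 2
    t_null = np.empty(n_sim)
    for b in range(n_sim):
        xb = rng.normal(common - v1 / 2, np.sqrt(v1), len(x))
        yb = rng.normal(common - v2 / 2, np.sqrt(v2), len(y))
        t_null[b] = abs((xb.mean() + xb.var(ddof=1) / 2)
                        - (yb.mean() + yb.var(ddof=1) / 2))
    return (t_null >= t_obs).mean()   # Monte Carlo p-value
```

The p-value is simply the fraction of simulated null statistics at least as extreme as the observed one, which is why no closed-form sampling distribution is needed.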
2.
When a candidate predictive marker is available but evidence on its predictive ability is not sufficiently reliable, all-comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker-stratified clinical trials based on a natural assumption about the heterogeneity of treatment effects across marker-defined subpopulations, where weak rather than strong control is permitted for multiple population tests. In phase III marker-stratified trials, treatment efficacy is expected to be established in a particular patient population, possibly a marker-defined subpopulation, and the marker's accuracy is assessed when the marker is used to restrict the indication or labelling of the treatment to a marker-based subpopulation, i.e., the clinical validity of the marker is assessed. In this paper, we develop statistical testing strategies based on criteria explicitly tailored to marker assessment, including criteria that examine treatment effects in marker-negative patients. As existing and newly developed statistical testing strategies can assert treatment efficacy for either the overall patient population or the marker-positive subpopulation, we also develop criteria for evaluating the operating characteristics of the testing strategies based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations comparing the statistical testing strategies under the developed criteria are provided.
3.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risk of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent event endpoints. However, the variance of the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance of the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
4.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage, there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. An interim look may therefore be desirable, to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if the evidence is sufficient (efficacy), or to adjust the sample size. Sequential design approaches specifically for BE studies have been proposed previously. We modify the existing methods, focusing on simplified multiplicity adjustment and futility stopping, and name our method the modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the original published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/ . Copyright © 2015 John Wiley & Sons, Ltd.
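For context, the fixed-sample analysis that sequential BE designs build on is the standard two one-sided tests (TOST) procedure on log-transformed outcomes. The sketch below shows plain TOST only, not the MSDBE procedure; it assumes paired log(test) − log(reference) differences and the conventional 80–125% equivalence limits.

```python
import numpy as np
from scipy import stats

def tost_be(log_diff, lower=np.log(0.8), upper=np.log(1.25), alpha=0.05):
    """Two one-sided tests (TOST) for bioequivalence on paired
    log(test) - log(reference) differences.  BE is claimed only when
    both one-sided null hypotheses are rejected at level alpha."""
    log_diff = np.asarray(log_diff, dtype=float)
    n = len(log_diff)
    mean = log_diff.mean()
    se = log_diff.std(ddof=1) / np.sqrt(n)
    t_low = (mean - lower) / se          # tests H0: true mean <= lower
    t_upp = (mean - upper) / se          # tests H0: true mean >= upper
    p_low = stats.t.sf(t_low, df=n - 1)
    p_upp = stats.t.cdf(t_upp, df=n - 1)
    p = max(p_low, p_upp)                # overall TOST p-value
    return p, p < alpha
```

Because TOST claims BE only when both one-sided tests reject, no multiplicity adjustment is needed at the fixed-sample stage; the multiplicity issues the abstract refers to arise from the repeated interim looks of a sequential design.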
5.
Merging information for semiparametric density estimation (Total citations: 1; self-citations: 0; citations by others: 1)
Konstantinos Fokianos, Journal of the Royal Statistical Society. Series B, Statistical methodology, 2004, 66(4): 941-958
Summary. The density ratio model specifies that the likelihood ratio of m − 1 probability density functions with respect to the m-th is of known parametric form, without reference to any parametric model. We study the semiparametric inference problem related to the density ratio model by appealing to the methodology of empirical likelihood. The combined data from all the samples lead to more efficient kernel density estimators for the unknown distributions. We adopt variants of well-established techniques to choose the smoothing parameter for the proposed density estimators.
6.
Peter Hall, Qiwei Yao, Journal of the Royal Statistical Society. Series B, Statistical methodology, 2003, 65(2): 425-442
Summary. We develop a general methodology for tilting time series data. Attention is focused on a large class of regression problems, where errors are expressed through autoregressive processes. The class has a range of important applications and in the context of our work may be used to illustrate the application of tilting methods to interval estimation in regression, robust statistical inference and estimation subject to constraints. The method can be viewed as 'empirical likelihood with nuisance parameters'.
7.
Tang Jun (唐钧), Journal of Guangzhou University (Social Science Edition), 2006, 5(10): 30-35
In political system reform, the number of civil servants or state cadres is not a key factor; what matters is clearly delineating the government's social positioning (including its role, powers, and responsibilities). Compared with other countries, China's citizen-to-official ratio of 197.69:1 (or 116.27:1) is not high. Future political system reform therefore need not concentrate on "downsizing"; rather, following Deng Xiaoping's plan, it should first accomplish the separation of Party and government and the devolution of power. After all, "small government" means a government whose power is limited, not one with fewer personnel, and "big society" means a society with broader rights, not one with more people.
8.
Stuart Barber, Guy P. Nason, Journal of the Royal Statistical Society. Series B, Statistical methodology, 2004, 66(4): 927-939
Summary. Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this by using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet-style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet-style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard-thresholded coefficients.
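The transform-shrink-invert recipe in the summary can be sketched with a hand-rolled real-valued Haar transform; the paper itself uses complex-valued wavelets and more sophisticated shrinkage, so this numpy sketch shows only the basic hard-thresholding idea.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar wavelet decomposition of a signal whose length is a
    power of two.  Returns the final approximation coefficient(s) and
    the detail coefficients, finest level first."""
    details = []
    a = np.asarray(x, dtype=float)
    while len(a) > 1:
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))   # detail coefficients
        a = (even + odd) / np.sqrt(2)               # approximation
    return a, details

def haar_idwt(a, details):
    """Invert haar_dwt: rebuild the signal level by level."""
    for d in reversed(details):                      # coarsest first
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def wavelet_denoise(y, thresh):
    """Basic hard-threshold wavelet shrinkage: transform, zero the
    small detail coefficients, invert."""
    a, details = haar_dwt(y)
    details = [np.where(np.abs(d) > thresh, d, 0.0) for d in details]
    return haar_idwt(a, details)
```

A common default for `thresh` is the universal threshold σ√(2 log n), with the noise level σ estimated from the finest-level detail coefficients.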
9.
D. R. Cox, Man Yu Wong, Journal of the Royal Statistical Society. Series B, Statistical methodology, 2004, 66(2): 395-400
Summary. Given a large number of test statistics, a small proportion of which represent departures from the relevant null hypothesis, a simple rule is given for choosing those statistics that are indicative of departure. It is based on fitting a mixture model to the set of test statistics by moments and then deriving an estimated likelihood ratio. Simulation suggests that the procedure has good properties when the departure from an overall null hypothesis is not too small.
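The moment-fitting rule can be sketched under the simplest mixture assumption: the statistics are drawn from (1 − p)·N(0, 1) + p·N(μ, 1) with one-sided departures, so E[z] = pμ and E[z²] = 1 + pμ². The sketch below solves those two moment equations and flags statistics by estimated likelihood ratio; the paper's exact fit and cut-off may differ, and the fit is only sensible when the sample moments satisfy m₁ > 0 and m₂ > 1.

```python
import numpy as np
from scipy.stats import norm

def select_departures(z, lr_cut=1.0):
    """Fit the mixture (1-p)*N(0,1) + p*N(mu,1) to a vector of test
    statistics by matching the first two moments, then flag statistics
    whose estimated likelihood ratio favours the departure component."""
    z = np.asarray(z, dtype=float)
    m1 = z.mean()                 # = p * mu under the mixture
    m2 = (z ** 2).mean()          # = 1 + p * mu^2 under the mixture
    mu = (m2 - 1.0) / m1          # solve the two moment equations
    p = np.clip(m1 / mu, 1e-6, 1 - 1e-6)
    # estimated likelihood ratio: departure density vs null density
    lr = p * norm.pdf(z - mu) / ((1 - p) * norm.pdf(z))
    return lr > lr_cut, p, mu
```

With unit variances the log likelihood ratio is linear in z, so the rule is equivalent to flagging all statistics above a single cut-off on the z scale.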
10.