Full-text access type
Paid full text | 3806 articles |
Free | 116 articles |
Free in China | 26 articles |
Subject classification
Management | 243 articles |
Ethnology | 15 articles |
Demography | 14 articles |
Collected works and series | 240 articles |
Theory and methodology | 100 articles |
General | 1354 articles |
Sociology | 83 articles |
Statistics | 1899 articles |
Publication year
2024 | 4 articles |
2023 | 22 articles |
2022 | 37 articles |
2021 | 35 articles |
2020 | 60 articles |
2019 | 88 articles |
2018 | 113 articles |
2017 | 198 articles |
2016 | 101 articles |
2015 | 90 articles |
2014 | 147 articles |
2013 | 687 articles |
2012 | 252 articles |
2011 | 177 articles |
2010 | 150 articles |
2009 | 163 articles |
2008 | 175 articles |
2007 | 171 articles |
2006 | 171 articles |
2005 | 201 articles |
2004 | 163 articles |
2003 | 141 articles |
2002 | 122 articles |
2001 | 104 articles |
2000 | 87 articles |
1999 | 48 articles |
1998 | 32 articles |
1997 | 33 articles |
1996 | 25 articles |
1995 | 19 articles |
1994 | 15 articles |
1993 | 21 articles |
1992 | 17 articles |
1991 | 12 articles |
1990 | 8 articles |
1989 | 5 articles |
1988 | 7 articles |
1987 | 7 articles |
1986 | 3 articles |
1985 | 5 articles |
1984 | 3 articles |
1983 | 9 articles |
1982 | 10 articles |
1981 | 2 articles |
1980 | 2 articles |
1979 | 1 article |
1978 | 3 articles |
1976 | 1 article |
1975 | 1 article |
Sort order: 3948 results found
1.
The structure of rural land property rights is an important bridge linking the distribution of land value increments to the constitutional order. Viewed from the perspective of changes in that structure, China's traditional mechanism of nationalizing land value increments in fact largely conformed to the socialist principle of state ownership of land rent and the order of shared land benefits established by the 1982 Constitution. However, because the constitutional land clauses were from the outset imbued with a bias in the distribution of land benefits, the "other half of the constitutional order", under which farmers as a whole share in land value increments, was neglected over time. Against the background of new-type urbanization, the state has reformed the rural land administration system toward "same land, same rights" and empowerment of the people; in essence this continues, rather than replaces, the constitutional order, and land expropriation compensation remains the main mechanism for distributing land value increments in China. To achieve equitable sharing of these increments, China must return to the complete order of shared land benefits under the socialist "state-collective" monism and, in line with the requirement of substantive equality, promote farmers' whole-process participation in and sharing of land benefits: by capping comprehensive zone land prices with "people-centered urbanization" as the goal, adopting "land finance" policies tilted toward rural and agricultural development, and breaking the urban-rural dual household registration system to achieve integrated urban-rural development.
2.
Abstract: Characterizations of relations via Rényi entropy of m-generalized order statistics are considered, along with examples and related stochastic orderings. Previous results for common order statistics are included.
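As a hedged illustration of the quantity the abstract is built on, here is a minimal numerical sketch of Rényi entropy for a continuous density. The function name, the midpoint-rule discretization, and the uniform sanity check are assumptions for illustration, not the paper's characterization results:

```python
import math

def renyi_entropy(pdf, a, b, alpha, n=100000):
    """Numerical Renyi entropy of order alpha for a continuous density on [a, b]:
    H_alpha = log( integral of pdf(x)**alpha dx ) / (1 - alpha), for alpha != 1,
    approximated here with a midpoint rule on n subintervals."""
    h = (b - a) / n
    integral = sum(pdf(a + (i + 0.5) * h) ** alpha for i in range(n)) * h
    return math.log(integral) / (1 - alpha)

# Sanity check: for the uniform density on [0, 2] every Renyi
# entropy equals log(2), regardless of alpha.
h2 = renyi_entropy(lambda x: 0.5, 0.0, 2.0, alpha=2.0)
```

For order statistics of a uniform sample, the density of the k-th order statistic is a Beta density, so the same routine can be pointed at that density to explore the orderings the abstract mentions.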
3.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
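To illustrate the binomial-based approach described above (treating the conformance proportion as a binomial success probability), here is a minimal sketch of the standard textbook (Wald) interval alongside the Wilson score interval, one common alternative with better coverage. The function names and the example counts are assumptions; this is not the authors' code, and the specific alternatives studied in the article may differ:

```python
import math
from statistics import NormalDist

def wald_ci(x, n, alpha=0.05):
    """Standard textbook (Wald) interval for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n, alpha=0.05):
    """Wilson score interval, whose coverage stays closer to nominal
    for proportions near 0 or 1."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# e.g. 190 conforming units out of 200 inspected
lo, hi = wilson_ci(190, 200)
```

Conformance proportions are typically close to 1, which is exactly the regime where the Wald interval's coverage degrades, consistent with the simulation findings summarized above.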
4.
Stephen J. Ruberg, Frank E. Harrell Jr., Margaret Gamalo-Siebers, Lisa LaVange, J. Jack Lee, Karen Price, The American Statistician (2019), 73(1), 319-327
Abstract: The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
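As a hedged sketch of the core idea (synthesizing evidence from earlier trials into a prior and reporting a posterior probability of a positive treatment effect), here is a minimal conjugate normal-normal update. The function name, the numbers, and the conjugate model are illustrative assumptions, not the authors' method:

```python
from statistics import NormalDist

def posterior_prob_positive(prior_mean, prior_sd, est, se):
    """Conjugate normal-normal update: combine a normal prior on the
    treatment effect (e.g. synthesized from earlier trials) with a
    Phase 3 estimate and its standard error, then return
    Pr(effect > 0 | data)."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = 1.0 / se ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * est)
    return 1.0 - NormalDist(post_mean, post_var ** 0.5).cdf(0.0)

# Illustrative numbers: earlier trials suggest effect ~ N(0.3, 0.2^2);
# the Phase 3 trial observes an estimate of 0.25 with SE 0.15.
p = posterior_prob_positive(0.3, 0.2, 0.25, 0.15)
```

The output is a direct probability statement about the treatment effect, the kind of quantity the abstract argues is more useful for decision making than a binary p < 0.05 verdict.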
5.
Strong orthogonal arrays (SOAs) were recently introduced and studied as a class of space-filling designs for computer experiments. An important problem that has not been addressed in the literature is that of design selection for such arrays. In this article, we conduct a systematic investigation into this problem, focusing on the most useful SOA(n, m, 4, 2+)s and SOA(n, m, 4, 2)s. The article first addresses the problem of design selection for SOAs of strength 2+ by examining their three-dimensional projections; both theoretical and computational results are presented. When SOAs of strength 2+ do not exist, we formulate a general framework for the selection of SOAs of strength 2 by looking at their two-dimensional projections. The approach is fruitful: it is applicable when SOAs of strength 2+ do not exist, and it gives rise to them when they do. The Canadian Journal of Statistics 47: 302-314; 2019 © 2019 Statistical Society of Canada
6.
David R. Bickel, Communications in Statistics - Theory and Methods (2020), 49(11), 2703-2712
Abstract: Confidence sets, p-values, maximum likelihood estimates, and other results of non-Bayesian statistical methods may be adjusted to favor sampling distributions that are simple compared to others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
7.
In this paper, the quantile-based flattened logistic distribution is studied. Some classical and quantile-based properties of the distribution are obtained, including closed-form expressions for the L-moments, L-moment ratios and expectations of order statistics. A quantile-based analysis using the method of matching L-moments is employed to estimate the parameters of the proposed model, and the asymptotic variance-covariance matrix of the matching L-moments estimators is derived. Finally, the proposed model is applied to simulated data as well as two real-life datasets, and the fit is compared with the logistic distribution.
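The matching-L-moments idea can be sketched for the ordinary logistic distribution, whose first two L-moments are known to be λ1 = μ (location) and λ2 = s (scale). This is an illustrative sketch under those assumptions, not the paper's flattened logistic model, and the function names are invented:

```python
def sample_l_moments(data):
    """First two sample L-moments via probability-weighted moments:
    b0 = mean, b1 = (1/n) * sum over the sorted sample of ((i-1)/(n-1)) * x_(i),
    then l1 = b0 and l2 = 2*b1 - b0."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(j * x[j] for j in range(n)) / (n * (n - 1))  # j is 0-based
    return b0, 2 * b1 - b0

def fit_logistic_lmom(data):
    """Matching L-moments for the plain logistic distribution:
    lambda1 = mu and lambda2 = s, so the estimates are simply (l1, l2)."""
    l1, l2 = sample_l_moments(data)
    return l1, l2  # (location estimate, scale estimate)
```

For the flattened logistic of the paper, the same matching step would instead equate the sample L-moments to that model's L-moment expressions, which the abstract says are available in closed form.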
8.
9.
Computing maximum likelihood estimates from Type II doubly censored exponential data
Arturo J. Fernández, José I. Bravo, Íñigo De Fuentes, Statistical Methods and Applications (2002), 11(2), 187-200
It is well known that, under Type II double censoring, the maximum likelihood (ML) estimators of the location and scale parameters, θ and δ, of a two-parameter exponential distribution are linear functions of the order statistics. In contrast, when θ is known, the ML estimator of δ does not admit a closed-form expression. It is shown, however, that the ML estimator of the scale parameter exists and is unique. Moreover, it has good large-sample properties. In addition, sharp lower and upper bounds for this estimator are provided, which can serve as starting points for iterative interpolation methods such as regula falsi. Explicit expressions for the expected Fisher information and Cramér-Rao lower bound are also derived. In the Bayesian context, assuming an inverted gamma prior on δ, the uniqueness, boundedness and asymptotics of the highest posterior density estimator of δ can be deduced in a similar way. Finally, an illustrative example is included.
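The abstract notes that the bounds can serve as starting points for iterative interpolation methods such as regula falsi. A generic sketch of that root-finder follows, applied here to an arbitrary bracketed equation rather than the paper's likelihood equation, which is not reproduced:

```python
def regula_falsi(f, lo, hi, tol=1e-10, max_iter=200):
    """Regula falsi (false position): bracketed root-finding that replaces
    the bisection midpoint with the x-intercept of the secant line through
    (lo, f(lo)) and (hi, f(hi))."""
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0:
        raise ValueError("root must be bracketed by [lo, hi]")
    mid = lo
    for _ in range(max_iter):
        mid = hi - fhi * (hi - lo) / (fhi - flo)
        fmid = f(mid)
        if abs(fmid) < tol:
            return mid
        if flo * fmid < 0:      # root lies in [lo, mid]
            hi, fhi = mid, fmid
        else:                   # root lies in [mid, hi]
            lo, flo = mid, fmid
    return mid
```

In the setting of the abstract, f would be the score equation in δ and the sharp lower and upper bounds would supply the initial bracket.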
10.
Carmen Fernández, Eduardo Ley, Mark F. J. Steel, Journal of the Royal Statistical Society, Series C (Applied Statistics) (2002), 51(3), 257-280
Summary. We model daily catches of fishing boats in the Grand Bank fishing grounds. We use data on catches per species for a number of vessels collected by the European Union in the context of the Northwest Atlantic Fisheries Organization. Many variables can be thought to influence the amount caught: a number of ship characteristics (such as the size of the ship, the fishing technique used and the mesh size of the nets) are obvious candidates, but one can also consider the season or the actual location of the catch. Our database leads to 28 possible regressors (arising from six continuous variables and four categorical variables, whose 22 levels are treated separately), resulting in a set of 177 million possible linear regression models for the log-catch. Zero observations are modelled separately through a probit model. Inference is based on Bayesian model averaging, using a Markov chain Monte Carlo approach. Particular attention is paid to the prediction of catches for single and aggregated ships.