1,561 results found; results 801–810 follow.
801.
The mixture maximum likelihood approach to clustering is used to allocate treatments from a randomized complete block design into relatively homogeneous groups. The implementation of this approach is straightforward for fixed but not random block effects. The density function in each underlying group is assumed to be normal, and clustering is performed on the basis of the estimated posterior probabilities of group membership. A test based on the log likelihood under the mixture model can be used to assess the actual number of groups present. The technique is demonstrated by using it to cluster data from a randomized complete block experiment.
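As an illustration of the approach this abstract describes, here is a minimal sketch of normal-mixture clustering on estimated posterior probabilities. It uses scikit-learn's GaussianMixture as a stand-in for the paper's mixture-ML fit, and the treatment means are hypothetical:

```python
# Minimal sketch: cluster treatment effects with a normal mixture model
# and allocate on estimated posterior probabilities of group membership.
# GaussianMixture stands in for the paper's mixture-ML implementation;
# the treatment means below are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical adjusted treatment means from a randomized complete block design
treatment_means = np.concatenate([rng.normal(10, 1, 6),
                                  rng.normal(15, 1, 6)]).reshape(-1, 1)

# Compare g = 1 vs g = 2 groups via the log likelihood under the mixture model
fits = {g: GaussianMixture(n_components=g, n_init=10).fit(treatment_means)
        for g in (1, 2)}
loglik = {g: fits[g].score(treatment_means) * len(treatment_means)
          for g in (1, 2)}
print("log-likelihoods:", loglik)

# Allocate each treatment to the group with the highest posterior probability
posteriors = fits[2].predict_proba(treatment_means)
groups = posteriors.argmax(axis=1)
print("group allocation:", groups)
```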
802.
This paper deals with the probability density functions of two correlated Pareto, two correlated t and two correlated F random variables.
803.
804.
The LM test is modified to test any value of the ratio of two variance components in a mixed effects linear model with two variance components. The test is exact, so it can be used to construct exact confidence intervals on this ratio. Exact Neyman–Pearson (NP) tests on the variance ratio are described. Their powers provide attainable upper bounds on the powers of tests on the variance ratio. Efficiencies of LM tests, which include ANOVA tests, and NP tests are compared for unbalanced, random, one-way ANOVA models. Confidence intervals corresponding to LM tests and NP tests are described.
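The test-inversion idea can be illustrated in the balanced one-way random ANOVA, where inverting the classical ANOVA F test yields an exact confidence interval for the variance ratio. A sketch under that balanced-case assumption, with simulated data (the paper's LM tests also handle the unbalanced case):

```python
# Sketch: exact confidence interval for gamma = sigma_a^2 / sigma_e^2 in
# a *balanced* one-way random ANOVA, obtained by inverting the ANOVA F
# test (a special case of the LM tests discussed above; data simulated).
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(1)
a, n = 8, 5                      # groups, replicates per group
sigma_a, sigma_e = 1.0, 1.0      # true gamma = 1
y = rng.normal(0, sigma_a, (a, 1)) + rng.normal(0, sigma_e, (a, n))

msa = n * np.var(y.mean(axis=1), ddof=1)            # between-group mean square
mse = ((y - y.mean(axis=1, keepdims=True))**2).sum() / (a * (n - 1))
F_obs = msa / mse
df1, df2, alpha = a - 1, a * (n - 1), 0.05

# Under the model, MSA/MSE ~ (1 + n*gamma) * F(df1, df2); invert the test.
lo = max(0.0, (F_obs / f.ppf(1 - alpha / 2, df1, df2) - 1) / n)
hi = max(0.0, (F_obs / f.ppf(alpha / 2, df1, df2) - 1) / n)
print(f"F = {F_obs:.2f}, exact 95% CI for gamma: ({lo:.2f}, {hi:.2f})")
```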
805.
Under stratified random sampling, we develop a kth-order bootstrap bias-corrected estimator of the number of classes θ which exist in a study region. This research extends Smith and van Belle’s (1984) first-order bootstrap bias-corrected estimator under simple random sampling. Our estimator has applicability for many settings including: estimating the number of animals when there are stratified capture periods, estimating the number of species based on stratified random sampling of subunits (say, quadrats) from the region, and estimating the number of errors/defects in a product based on observations from two or more types of inspectors. When the differences between the strata are large, utilizing stratified random sampling and our estimator often results in superior performance versus the use of simple random sampling and its bootstrap or jackknife [Burnham and Overton (1978)] estimator. The superior performance is often associated with more observed classes, and we provide insights into optimal designation of the strata and optimal allocation of sample sectors to strata.
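A minimal sketch of the first-order bootstrap bias correction in the spirit of Smith and van Belle (1984), under simple random sampling of quadrats and with simulated data (the paper's kth-order, stratified estimator generalizes this):

```python
# Sketch: first-order bootstrap bias correction for the number of classes,
# in the spirit of Smith and van Belle (1984) under simple random sampling
# of quadrats.  All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
true_classes, n_quadrats = 40, 25
# Each quadrat records which classes it contains (rarer classes less often).
detect_p = rng.uniform(0.02, 0.4, true_classes)
quadrats = rng.random((n_quadrats, true_classes)) < detect_p

s_obs = quadrats.any(axis=0).sum()          # distinct classes actually seen

B = 2000
s_star = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n_quadrats, n_quadrats)   # resample quadrats
    s_star[b] = quadrats[idx].any(axis=0).sum()

# First-order correction: theta_hat = S + (S - E*[S*])
theta_hat = s_obs + (s_obs - s_star.mean())
print(f"observed {s_obs}, bias-corrected {theta_hat:.1f}, true {true_classes}")
```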
806.
We present a sharp uniform-in-bandwidth functional limit law for the increments of the Kaplan–Meier empirical process based upon right-censored random data. We apply this result to obtain limit laws for nonparametric kernel estimators of local functionals of lifetime densities, which are uniform with respect to the choices of bandwidth and kernel. These are established in the framework of convergence in probability, and we allow the bandwidth to vary within the complete range for which the estimators are consistent. We provide explicit values for the asymptotic limiting constant for the sup-norm of the estimation random error.
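A sketch of the type of estimator this limit law covers: smoothing the Kaplan–Meier jumps with a Gaussian kernel to estimate the lifetime density under right censoring. The data are simulated, and a single fixed bandwidth is used, whereas the paper's result is uniform over a range of bandwidths and kernels:

```python
# Sketch: kernel estimator of a lifetime density under right censoring,
# built by smoothing the jumps of the Kaplan-Meier estimator with a
# Gaussian kernel (simulated data; one fixed bandwidth h for illustration).
import numpy as np

rng = np.random.default_rng(3)
n = 500
lifetimes = rng.exponential(1.0, n)          # true density: exp(1)
censoring = rng.exponential(2.0, n)
t_obs = np.minimum(lifetimes, censoring)
event = lifetimes <= censoring               # True = death observed

# Kaplan-Meier jumps at the ordered event times
order = np.argsort(t_obs)
t_sorted, e_sorted = t_obs[order], event[order]
at_risk = np.arange(n, 0, -1)
surv, jumps, jump_times = 1.0, [], []
for i in range(n):
    if e_sorted[i]:
        new_surv = surv * (1 - 1 / at_risk[i])
        jumps.append(surv - new_surv)
        jump_times.append(t_sorted[i])
        surv = new_surv
jumps, jump_times = np.array(jumps), np.array(jump_times)

# f_hat(t) = sum_i jump_i * K_h(t - t_i) with a Gaussian kernel
h = 0.2
grid = np.linspace(0.1, 3.0, 5)
kern = np.exp(-0.5 * ((grid[:, None] - jump_times[None, :]) / h) ** 2)
f_hat = (jumps[None, :] * kern).sum(axis=1) / (h * np.sqrt(2 * np.pi))
print(np.c_[grid, f_hat, np.exp(-grid)])     # estimate vs true density
```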
807.
This paper gives an elementary, unified presentation of concepts related to sufficiency and minimal sufficiency. Techniques for showing, in a particular statistical model, that a given statistic is not sufficient or that a given sufficient statistic is not minimal are discussed extensively. The applicability of these techniques is illustrated in three examples.
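A standard illustration of the non-minimality techniques such a paper discusses, worked in a few lines (the textbook Bernoulli example, not taken from the paper itself):

```latex
% Worked example: a statistic can be sufficient without being minimal.
% For an i.i.d. Bernoulli(p) sample, the factorization theorem gives
% sufficiency of the sample sum:
\[
  f(x_1,\dots,x_n \mid p)
    = p^{\sum_i x_i}(1-p)^{\,n-\sum_i x_i}
    = \underbrace{p^{t}(1-p)^{n-t}}_{g(t;\,p)} \cdot \underbrace{1}_{h(x)},
  \qquad t = \sum_{i=1}^{n} x_i .
\]
% Hence T = \sum_i X_i is sufficient.  The pair T' = (X_1, T) is also
% sufficient, since g may be written as a function of T' alone, but T'
% is not minimal: T is a function of T', while T' is not a function of
% T, so T' retains information beyond the minimal statistic T.
```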
808.
This paper introduces a novel bootstrap procedure to perform inference in a wide class of partially identified econometric models. We consider econometric models defined by finitely many weak moment inequalities (models defined by moment equalities can also be admitted by combining pairs of weak moment inequalities), which encompass many applications of economic interest. The objective of our inferential procedure is to cover the identified set with a prespecified probability. (The objective of covering each element of the identified set with a prespecified probability is dealt with in Bugni (2010a).) We compare our bootstrap procedure, a competing asymptotic approximation, and subsampling procedures in terms of the rate at which they achieve the desired coverage level, also known as the error in the coverage probability. Under certain conditions, we show that our bootstrap procedure and the asymptotic approximation have the same order of error in the coverage probability, which is smaller than that obtained by using subsampling. This implies that inference based on our bootstrap and asymptotic approximation should eventually be more precise than inference based on subsampling. A Monte Carlo study confirms this finding in a small sample simulation.
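A toy sketch of set inference under weak moment inequalities, using the textbook interval-data example and a least-favourable bootstrap critical value; this stands in for, and is much cruder than, the paper's procedure:

```python
# Toy sketch, not the paper's method: with interval data, theta is
# partially identified by the two weak moment inequalities
#   E[y_lo] - theta <= 0  and  theta - E[y_hi] <= 0,
# so the identified set is [E y_lo, E y_hi].  A least-favourable
# bootstrap (treating both inequalities as binding) gives a critical
# value; the region collects every theta the criterion fails to reject.
import numpy as np

rng = np.random.default_rng(4)
n = 300
y_lo = rng.normal(0.0, 1.0, n)
y_hi = y_lo + 1.0                       # true identified set: [0, 1]

def q_n(th, lo_bar, hi_bar):
    """Sum of squared (scaled) violations of the two inequalities."""
    m1 = np.sqrt(n) * (lo_bar - th)     # violation if positive
    m2 = np.sqrt(n) * (th - hi_bar)
    return max(m1, 0.0) ** 2 + max(m2, 0.0) ** 2

# Least-favourable bootstrap critical value from recentred moment draws.
B = 2000
stats = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    d1 = np.sqrt(n) * (y_lo[idx].mean() - y_lo.mean())
    d2 = np.sqrt(n) * (y_hi.mean() - y_hi[idx].mean())
    stats[b] = max(d1, 0.0) ** 2 + max(d2, 0.0) ** 2
c_hat = np.quantile(stats, 0.95)

theta_grid = np.linspace(-0.5, 1.5, 201)
region = [th for th in theta_grid
          if q_n(th, y_lo.mean(), y_hi.mean()) <= c_hat]
print(f"95% confidence region: [{min(region):.2f}, {max(region):.2f}]")
```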
809.
The random priority (random serial dictatorship) mechanism is a common method for assigning objects. The mechanism is easy to implement and strategy‐proof. However, this mechanism is inefficient, because all agents may be made better off by another mechanism that increases their chances of obtaining more preferred objects. This form of inefficiency is eliminated by a mechanism called probabilistic serial, but this mechanism is not strategy‐proof. Thus, which mechanism to employ in practical applications is an open question. We show that these mechanisms become equivalent when the market becomes large. More specifically, given a set of object types, the random assignments in these mechanisms converge to each other as the number of copies of each object type approaches infinity. Thus, the inefficiency of the random priority mechanism becomes small in large markets. Our result gives some rationale for the common use of the random priority mechanism in practical problems such as student placement in public schools.
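The random priority mechanism itself is straightforward to state in code: draw a uniformly random ordering of agents and let each, in turn, take the best available object on their list. A sketch with hypothetical preference lists; averaging over many draws approximates the random assignment the abstract refers to:

```python
# Sketch of the random priority (random serial dictatorship) mechanism:
# shuffle the agents, then each agent in turn takes the highest object on
# their preference list that is still available.  Preferences below are
# hypothetical; averaging the assignments over many draws approximates
# the agent-by-object probability matrix (the "random assignment").
import random
from collections import Counter

prefs = {                     # agent -> objects, most preferred first
    "a1": ["x", "y", "z"],
    "a2": ["x", "y", "z"],
    "a3": ["y", "x", "z"],
}

def random_priority(prefs, rng):
    agents = list(prefs)
    rng.shuffle(agents)
    available = set.union(*map(set, prefs.values()))
    assignment = {}
    for agent in agents:
        pick = next(obj for obj in prefs[agent] if obj in available)
        assignment[agent] = pick
        available.discard(pick)
    return assignment

rng = random.Random(5)
counts = {agent: Counter() for agent in prefs}
draws = 10_000
for _ in range(draws):
    for agent, obj in random_priority(prefs, rng).items():
        counts[agent][obj] += 1
for agent in prefs:
    print(agent, {o: counts[agent][o] / draws for o in prefs[agent]})
```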
810.
When the finite population ‘totals’ are estimated for individual areas, they do not necessarily add up to the known ‘total’ for all areas. Benchmarking (BM) is a technique used to ensure that the totals for all areas match the grand total, which can be obtained from an independent source. BM is desirable to practitioners of survey sampling. BM shifts the small-area estimators to accommodate the constraint. In doing so, it can provide increased precision to the small-area estimators of the finite population means or totals. The Scott–Smith model is used to benchmark the finite population means of small areas. This is a one-way random effects model for a superpopulation, and it is computationally convenient to use a Bayesian approach. We illustrate our method by estimating body mass index using data from the third National Health and Nutrition Examination Survey. Several properties of the benchmarked small-area estimators are obtained using a simulation study.
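The benchmarking constraint can be illustrated with simple ratio benchmarking, which rescales the area estimates so they add up to the known grand total. This is a simpler device than the paper's Bayesian Scott–Smith approach, and all figures are hypothetical:

```python
# Sketch of ratio benchmarking: rescale small-area estimates of totals so
# that they add up to a known grand total from an independent source.
# This is the simplest BM device, not the paper's Bayesian Scott-Smith
# model; the figures below are hypothetical.
import numpy as np

area_estimates = np.array([120.0, 250.0, 90.0, 310.0])  # area totals
grand_total = 800.0                 # known from an independent source

raw_sum = area_estimates.sum()      # 770: does not match the benchmark
benchmarked = area_estimates * (grand_total / raw_sum)

print("raw sum:        ", raw_sum)
print("benchmarked:    ", benchmarked.round(1))
print("benchmarked sum:", benchmarked.sum())  # exactly 800 by construction
```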