51.
A new estimator of the proportion of a potentially sensitive attribute in survey sampling is introduced by solving a linear equation. The proposed estimator is compared with the estimator of Odumade and Singh (2009) under equal protection for all respondents. Its asymptotic properties are investigated through exact numerical illustrations for different choices of the parameters. A non-randomized response approach is also suggested, and directions for further research are pointed out.
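The abstract does not give the estimator's form; as background, the classic Warner (1965) randomized-response design for estimating a sensitive proportion can be sketched as follows (the function name and all simulation parameters are illustrative assumptions, not the paper's):

```python
import random

def warner_estimate(responses, p):
    """Warner (1965) randomized-response estimator of a sensitive proportion pi.

    Each respondent answers the sensitive question with probability p and its
    complement with probability 1 - p; `responses` are the observed yes (True)
    / no (False) answers.  Undefined at p = 0.5.
    """
    if p == 0.5:
        raise ValueError("p = 0.5 makes the estimator undefined")
    lam = sum(responses) / len(responses)   # observed proportion of "yes"
    return (lam - (1 - p)) / (2 * p - 1)    # inverts lam = pi*p + (1-pi)*(1-p)

# Simulated illustration: true sensitive proportion pi = 0.3, design p = 0.7.
random.seed(1)
pi, p, n = 0.3, 0.7, 100_000
responses = []
for _ in range(n):
    sensitive = random.random() < pi        # respondent's true status
    asked_sensitive = random.random() < p   # which question the device selects
    responses.append(sensitive if asked_sensitive else not sensitive)
print(round(warner_estimate(responses, p), 3))
```

The randomizing device protects each respondent because the interviewer never learns which question was answered; the privacy level is governed by how close p is to 0.5.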
52.
Response-adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better-performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either not feasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is a change over time in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their slow accrual rates. We compute the type I error inflation as a function of the magnitude of the time trend to determine the contexts in which the problem is most exacerbated. We then assess the ability of different correction methods to preserve the type I error in these contexts, along with their performance on other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare-disease context for several RAR rules, differentiating between the two-armed and the multi-armed case. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to the time trends considered.
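The abstract does not specify the RAR rule; a minimal sketch of a generic "skew toward the better arm" allocation rule, with an assumed add-one smoothing and tuning power c, illustrates the mechanism. Under a common time trend, the arm favoured early accrues more of the late-recruited patients, which is the source of the type I error inflation discussed:

```python
def rar_allocation_prob(s_a, n_a, s_b, n_b, c=1.0):
    """Probability of allocating the next patient to arm A under a simple
    RAR rule (illustrative, not the paper's): allocation is skewed toward
    the arm with the higher smoothed estimated success rate, with the
    power c controlling how aggressive the skew is."""
    p_a = (s_a + 1) / (n_a + 2)   # add-one smoothing avoids 0/0 early on
    p_b = (s_b + 1) / (n_b + 2)
    return p_a**c / (p_a**c + p_b**c)

# With equal observed data the rule stays balanced ...
print(rar_allocation_prob(5, 10, 5, 10))
# ... and skews toward the better-performing arm as evidence accumulates.
print(round(rar_allocation_prob(8, 10, 4, 10), 3))
```

Because the allocation at any point depends on all earlier outcomes, a drift in patient characteristics is confounded with the treatment comparison unless a correction (e.g. stratifying or modelling the trend) is applied.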
53.
The prediction error for mixed models can be viewed from a conditional or a marginal perspective, depending on the research focus. We introduce a novel conditional version of the optimism theorem for mixed models, linking the conditional prediction error to covariance penalties. Several possibilities for estimating these conditional covariance penalties are introduced: bootstrap methods, cross-validation, and a direct approach called the Steinian. The behavior of the different estimation techniques is assessed in a simulation study for the binomial, t, and gamma distributions and for different kinds of prediction error. Furthermore, the impact of the estimation techniques on the prediction error is discussed in an application to undernutrition in Zambia.
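The mixed-model penalties are not spelled out in the abstract, but the flavour of a bootstrap covariance-penalty estimate can be conveyed with a fixed-effects toy stand-in: for the sample-mean predictor, the true penalty sum_i cov(yhat_i, y_i) equals n * sigma^2/n = sigma^2. All names, data, and settings below are illustrative assumptions:

```python
import random

def bootstrap_cov_penalty(y, sigma, nboot, rng):
    """Parametric-bootstrap estimate of the covariance penalty
    sum_i cov(yhat_i, y_i) for the sample-mean predictor yhat_i = ybar.
    (A toy fixed-effects stand-in for the mixed-model penalties in the
    paper; here the true penalty is exactly sigma**2.)"""
    n = len(y)
    mu_hat = sum(y) / n                          # all fitted values are ybar
    reps = []                                    # (y*, ybar*) per bootstrap rep
    for _ in range(nboot):
        y_star = [rng.gauss(mu_hat, sigma) for _ in range(n)]
        reps.append((y_star, sum(y_star) / n))
    penalty = 0.0
    for i in range(n):                           # Monte Carlo cov(yhat*_i, y*_i)
        mean_y = sum(r[0][i] for r in reps) / nboot
        mean_f = sum(r[1] for r in reps) / nboot
        penalty += sum((r[0][i] - mean_y) * (r[1] - mean_f)
                       for r in reps) / nboot
    return penalty

rng = random.Random(0)
y = [1.2, 0.8, 1.5, 0.9, 1.1, 1.0, 1.3, 0.7]
pen = bootstrap_cov_penalty(y, sigma=1.0, nboot=4000, rng=rng)
print(round(pen, 2))
```

The same template extends to the conditional mixed-model case by resampling from the fitted conditional distribution and recomputing the conditional fits.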
54.
55.
56.
Business establishment microdata typically are required to satisfy agency-specified edit rules, such as balance equations and linear inequalities. Inevitably, some establishments' reported data violate the edit rules. Statistical agencies correct faulty values using a process known as edit-imputation. Business establishment data must also be heavily redacted before being shared with the public; indeed, confidentiality concerns lead many agencies not to release establishment microdata as unrestricted-access files. When microdata must be heavily redacted, one approach is to create synthetic data, as done for the U.S. Longitudinal Business Database and the German IAB Establishment Panel. This article presents the first implementation of a fully integrated approach to edit-imputation and data synthesis. We illustrate the approach on data from the U.S. Census of Manufactures and present a variety of evaluations of the utility of the synthetic data, together with assessments of disclosure risk under several intruder attacks. We find that the synthetic data preserve important distributional features of the post-editing confidential microdata and carry low risks for the attacks considered.
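As a toy illustration of agency-specified edit rules (the field names and the two rules are hypothetical, not those of the Census of Manufactures), an edit check might look like:

```python
def edit_violations(record):
    """Return the list of edit rules a reported record violates.
    Field names and rules are hypothetical, for illustration only."""
    errors = []
    # Balance equation: the reported total must equal the sum of its parts.
    if record["total_cost"] != record["labor_cost"] + record["material_cost"]:
        errors.append("balance: total_cost != labor_cost + material_cost")
    # Linear inequality: cost components cannot be negative.
    if record["labor_cost"] < 0 or record["material_cost"] < 0:
        errors.append("inequality: negative cost component")
    return errors

clean = {"total_cost": 100, "labor_cost": 60, "material_cost": 40}
faulty = {"total_cost": 100, "labor_cost": 70, "material_cost": 40}
print(edit_violations(clean))
print(edit_violations(faulty))
```

In an integrated edit-imputation and synthesis pipeline, both the imputed confidential values and the released synthetic values are drawn only from the region of the parameter space where such checks pass.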
57.
This article discusses the minimax estimator in the partial linear model y = Zβ + f + ε under ellipsoidal restrictions on the parameter space and a quadratic loss function. The superiority of the minimax estimator over the two-step estimator is studied under the mean squared error matrix criterion.
58.
In simulation studies for discriminant analysis, misclassification errors are often computed by the Monte Carlo method: a classifier is tested on large samples generated from known populations. Although large samples are expected to behave much like the underlying distributions, they may not do so within a small interval or region and can therefore lead to unexpected results. We demonstrate with an example that the LDA misclassification error computed via the Monte Carlo method may often be smaller than the Bayes error. We give a rigorous explanation and recommend a method for properly computing misclassification errors.
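The effect is easy to probe in the simplest setting: two equally likely univariate normal classes N(0,1) and N(2,1), where the Bayes rule thresholds at the midpoint and the Bayes error is Phi(-1) ≈ 0.1587. A Monte Carlo estimate of the error of that very rule fluctuates around this value, so a finite test sample can land below the Bayes error (a minimal sketch with illustrative parameters):

```python
import math
import random

def bayes_error(delta):
    """Bayes error for two equally likely classes N(mu0,1), N(mu1,1),
    delta = |mu1 - mu0|: equals Phi(-delta/2)."""
    return 0.5 * math.erfc(delta / (2 * math.sqrt(2)))

def mc_error(threshold, mu0, mu1, n, rng):
    """Monte Carlo misclassification estimate for the rule
    'assign class 1 iff x > threshold', using n fresh draws per class."""
    wrong = 0
    for _ in range(n):
        if rng.gauss(mu0, 1) > threshold:    # class-0 point called class 1
            wrong += 1
        if rng.gauss(mu1, 1) <= threshold:   # class-1 point called class 0
            wrong += 1
    return wrong / (2 * n)

rng = random.Random(0)
print(round(bayes_error(2.0), 4))            # 0.1587
est = mc_error(1.0, 0.0, 2.0, 100_000, rng)  # threshold 1.0 IS the Bayes rule
print(round(est, 4))
```

Since `est` is an unbiased estimate of exactly the Bayes error here, roughly half of all simulation runs report a "misclassification error below the Bayes error", which is the apparent paradox the paper explains.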
59.
Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest. The standard large-sample procedure relies on the property that the distribution of the t-like quantities is close to standard normal in large samples. In this paper, we use a new procedure, based on both simulation and asymptotic theory, to determine the sample size for a test plan. Unlike the complete-data case, the t-like quantities are in general not pivotal when data are time censored. However, we show that their distribution depends only on the expected proportion failing, and we obtain the distributions by simulation for both the complete and the time-censored case when the data follow a Weibull distribution. We find that the large-sample procedure usually underestimates the sample size, even when the resulting sample size is 200 or more. The sample size given by the proposed procedure ensures the requested nominal accuracy and confidence of the estimation whether the test plan yields complete or time-censored data. Useful figures displaying the required sample size under the new procedure are also presented.
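A sketch of the simulation idea, using the exponential distribution under Type I (time) censoring as a simplified stand-in for the Weibull case in the paper (the sample size, censoring time, and the approximate standard error formula are illustrative assumptions):

```python
import math
import random

def t_like_quantities(n, theta, t_c, nsim, rng):
    """Simulate the t-like quantity (log theta_hat - log theta) / se for the
    exponential mean under Type I censoring at time t_c.  In small samples
    its distribution can be far from standard normal, which is what the
    simulation-based sample-size procedure exploits."""
    zs = []
    for _ in range(nsim):
        total_time, r = 0.0, 0
        for _ in range(n):
            t = rng.expovariate(1 / theta)
            if t <= t_c:
                total_time += t              # observed failure
                r += 1
            else:
                total_time += t_c            # censored at t_c
        if r == 0:
            continue                         # no failures: estimator undefined
        theta_hat = total_time / r           # ML estimate of the mean
        se_log = 1 / math.sqrt(r)            # approximate se of log theta_hat
        zs.append((math.log(theta_hat) - math.log(theta)) / se_log)
    return zs

rng = random.Random(0)
zs = t_like_quantities(n=30, theta=1.0, t_c=1.0, nsim=2000, rng=rng)
```

Comparing the empirical quantiles of `zs` against ±1.96 shows how far the normal approximation is off at a given n, and the required sample size is then the smallest n for which the simulated quantiles meet the requested accuracy and confidence.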
60.
Methods are proposed for combining several individual classifiers in order to develop more accurate classification rules. The proposed algorithm uses Rademacher–Walsh polynomials to combine M (≥ 2) individual classifiers in a nonlinear way. The resulting classifier is optimal in the sense that its misclassification error rate is always less than or equal to that of each constituent classifier. A number of numerical examples, based on both real and simulated data, are given; they demonstrate some new and far-reaching benefits of working with combined classifiers.
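The paper's coefficient-fitting procedure is not reproduced here, but the Rademacher–Walsh basis itself is easy to write down: all products of subsets of the ±1 classifier outputs. Majority vote over M = 3 classifiers is then the simplest such polynomial, y1 + y2 + y3. The toy predictions below are fabricated purely to illustrate that a combined rule can beat every constituent:

```python
from itertools import combinations

def walsh_features(outputs):
    """All Rademacher–Walsh monomials (products of subsets of the ±1
    classifier outputs), with the constant term first."""
    feats = [1]
    for k in range(1, len(outputs) + 1):
        for idx in combinations(range(len(outputs)), k):
            prod = 1
            for i in idx:
                prod *= outputs[i]
            feats.append(prod)
    return feats

def combine(outputs, weights):
    """Combined classifier: sign of a Walsh polynomial in the outputs."""
    score = sum(w * f for w, f in zip(weights, walsh_features(outputs)))
    return 1 if score >= 0 else -1

labels = [1, 1, 1, -1, -1, -1]                      # toy ground truth
preds = [(1, 1, -1), (1, -1, 1), (-1, 1, 1),        # (c1, c2, c3) per sample
         (-1, -1, 1), (-1, 1, -1), (1, -1, -1)]
weights = [0, 1, 1, 1, 0, 0, 0, 0]                  # majority vote: y1+y2+y3

for j in range(3):                                  # each constituent errs twice
    print(sum(1 for p, y in zip(preds, labels) if p[j] != y))
print(sum(1 for p, y in zip(preds, labels)          # combined rule errs 0 times
          if combine(p, weights) != y))
```

For M classifiers the basis has 2**M terms, so fitting the weights (rather than fixing them as above) is what gives the method its nonlinear flexibility.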