62.
Interpreting data and communicating effectively through graphs and tables are requisite skills for statisticians and non-statisticians in the pharmaceutical industry. However, the quality of visual displays of data in the medical and pharmaceutical literature and at scientific conferences is severely lacking. We describe an interactive, workshop-driven, 2-day short course that we constructed for pharmaceutical research personnel to learn these skills. The examples in the course and the workshop datasets are drawn from our professional experience, the scientific literature, and the mass media. During the course, the participants gain hands-on experience with the principles of visual and graphical perception and the design and construction of both graphic and tabular displays of quantitative and qualitative information. After completing the course, the participants are able to construct, revise, critique, and interpret graphic and tabular displays with a critical eye, according to an extensive set of guidelines. Copyright © 2013 John Wiley & Sons, Ltd.
63.
A fast and accurate method of confidence interval construction for the smoothing parameter in penalised spline and partially linear models is proposed. The method is akin to a parametric percentile bootstrap in which Monte Carlo simulation is replaced by saddlepoint approximation, and it can therefore be viewed as an approximate bootstrap. It is applicable in a quite general setting, requiring only that the underlying estimator be the root of an estimating equation that is a quadratic form in normal random variables. This is the case under a variety of optimality criteria, such as maximum likelihood (ML), restricted ML (REML), generalised cross-validation (GCV) and Akaike's information criterion (AIC). Simulation studies reveal that under the ML and REML criteria the method delivers near-exact performance with computational speeds an order of magnitude faster than existing exact methods and two orders of magnitude faster than a classical bootstrap. Perhaps most importantly, the proposed method also offers a computationally feasible alternative when no exact or asymptotic methods are known, e.g. under GCV and AIC. The methodology is illustrated with an application to the well-known fossil data, where a range of plausible smoothed values helps answer questions about the statistical significance of apparent features in the data.
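The classical parametric percentile bootstrap that this abstract uses as a speed baseline can be sketched as follows. This is a generic illustration, not the paper's setup: a GCV-selected ridge penalty stands in for the spline smoothing parameter, and the polynomial basis, penalty grid, and 200 resamples are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcv_lambda(X, y, lambdas):
    """Pick the ridge penalty minimising generalised cross-validation."""
    n = len(y)
    best, best_score = lambdas[0], np.inf
    for lam in lambdas:
        # Hat matrix of the ridge fit for this penalty
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
        resid = y - H @ y
        score = n * (resid @ resid) / (n - np.trace(H)) ** 2
        if score < best_score:
            best, best_score = lam, score
    return best

# Simulated smooth signal observed with noise
n = 100
x = np.linspace(0, 1, n)
X = np.vander(x, 6, increasing=True)            # polynomial basis (illustrative)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

lambdas = np.logspace(-6, 2, 30)
lam_hat = gcv_lambda(X, y, lambdas)

# Classical parametric percentile bootstrap for the selected penalty:
# simulate from the fitted model, re-select lambda, take percentiles.
beta = np.linalg.solve(X.T @ X + lam_hat * np.eye(X.shape[1]), X.T @ y)
fitted = X @ beta
sigma = np.std(y - fitted, ddof=X.shape[1])
boot = np.array([
    gcv_lambda(X, fitted + rng.normal(0, sigma, n), lambdas)
    for _ in range(200)
])
ci = np.percentile(boot, [2.5, 97.5])
print("lambda_hat:", lam_hat, "95% CI:", ci)
```

The saddlepoint approach in the abstract replaces the 200-fold resampling loop with an analytic approximation, which is where the two-orders-of-magnitude speedup comes from.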
64.
Tree-based methods similar to CART have recently been used for problems in which the main goal is to estimate a set of interest. The boundary of the true set is often smooth in some sense, but tree-based estimates are not smooth: they are unions of 'boxes'. We propose a general methodology for smoothing such sets that automatically allows for varying levels of smoothness along the boundary. The method is similar in spirit to support vector machines, which apply a computationally simple technique to data after a non-linear mapping to produce smooth estimates in the original space. In particular, we consider the problem of level-set estimation for regression functions and the dyadic tree-based method of Willett and Nowak [Minimax optimal level set estimation, IEEE Trans. Image Process. 16 (2007), pp. 2965–2979].
65.
We propose a semiparametric approach for the analysis of case–control genome-wide association studies. Parametric components are used to model both the conditional distribution of case status given the covariates and the distribution of genotype counts, whereas the distribution of the covariates is modelled nonparametrically. This yields a direct and joint model of case status, covariates and genotype counts, gives a better understanding of the disease mechanism, and results in more reliable conclusions. Side information, such as the disease prevalence, can be conveniently incorporated into the model by an empirical likelihood approach, leading to more efficient estimates and a more powerful test for detecting disease-associated SNPs. Profiling is used to eliminate a nuisance nonparametric component, and the resulting profile empirical likelihood estimates are shown to be consistent and asymptotically normal. For the hypothesis test of disease association, we apply the approximate Bayes factor (ABF), which is computationally simple and especially desirable in genome-wide association studies, where hundreds of thousands to a million genetic markers are tested. We treat the ABF as a hybrid Bayes factor that replaces the full data by the maximum likelihood estimates of the parameters of interest in the full model, and we derive it under a general setting. Deviation from Hardy–Weinberg equilibrium (HWE) is also taken into account, and the ABF for HWE using cases is shown to provide evidence of association between a disease and a genetic marker. Simulation studies and an application illustrate the utility of the proposed methodology.
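The abstract does not give its ABF derivation; a widely used version of the idea is Wakefield's approximation, sketched here as a point of reference rather than as this paper's formula. It compares H0 (no association) to H1 under a normal prior with variance W on the effect size; the default W = 0.04 below is an illustrative choice, not taken from the paper.

```python
import math

def approx_bayes_factor(beta_hat, se, W=0.04):
    """Wakefield-style approximate Bayes factor for H0 vs H1.

    beta_hat : estimated log odds ratio for one SNP
    se       : its standard error (V = se^2)
    W        : prior variance of the effect under H1 (assumption)

    Returns ABF = P(beta_hat | H0) / P(beta_hat | H1); values well
    below 1 indicate evidence of association.
    """
    V = se ** 2
    z = beta_hat / se
    r = W / (V + W)
    return math.exp(-z * z * r / 2) / math.sqrt(1 - r)

# A strong signal gives a tiny ABF; a null signal gives ABF > 1.
print(approx_bayes_factor(0.5, 0.1))   # |z| = 5
print(approx_bayes_factor(0.0, 0.1))   # z = 0
```

Because it needs only the per-marker estimate and standard error, this computation scales trivially to the hundreds of thousands of markers mentioned in the abstract.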
66.
We consider N independent stochastic processes (X_i(t), t ∈ [0, T_i]), i = 1, …, N, defined by a stochastic differential equation with a drift term depending on a random variable φ_i. The distribution of the random effect φ_i depends on unknown parameters, which are to be estimated from continuous observation of the processes X_i. We give an expression for the exact likelihood. When the drift term depends linearly on the random effect φ_i and φ_i has a Gaussian distribution, an explicit formula for the likelihood is obtained. We prove that the maximum likelihood estimator is consistent and asymptotically Gaussian when T_i = T for all i and N tends to infinity. We also discuss the case of discrete observations. Estimators are computed on simulated data for several models and show good performance even when the time interval of observation is not very long.
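A minimal simulation sketch of this setting, assuming the simplest drift that is linear in the random effect, dX_i(t) = φ_i dt + σ dW_i(t) with φ_i ~ N(μ, ω²). The moment-based estimates below exploit the fact that (X_i(T) − X_i(0))/T ~ N(φ_i, σ²/T); they are an illustrative stand-in for the exact maximum likelihood estimator described in the abstract, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model: dX_i(t) = phi_i dt + sigma dW_i(t),  phi_i ~ N(mu, omega^2)
N, T, n_steps = 500, 5.0, 200
mu, omega, sigma = 1.0, 0.5, 0.3
dt = T / n_steps

# Euler-Maruyama simulation of all N paths at once
phi = rng.normal(mu, omega, N)
increments = phi[:, None] * dt + sigma * np.sqrt(dt) * rng.normal(size=(N, n_steps))
X = np.concatenate([np.zeros((N, 1)), np.cumsum(increments, axis=1)], axis=1)

# Per-subject drift estimates: (X_i(T) - X_i(0)) / T is N(phi_i, sigma^2 / T)
phi_hat = X[:, -1] / T

# Method-of-moments estimates of the random-effect parameters
mu_hat = phi_hat.mean()
omega2_hat = phi_hat.var(ddof=1) - sigma ** 2 / T   # subtract within-path noise
print("mu_hat:", mu_hat, "omega^2_hat:", omega2_hat)
```

As in the abstract's asymptotic regime, accuracy here is driven by N (the number of paths) growing with T fixed.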
67.
We propose several new tests for monotonicity of regression functions based on different empirical processes of residuals and pseudo-residuals. The residuals are obtained from an unconstrained kernel regression estimator, whereas the pseudo-residuals are obtained from an increasing regression estimator. In particular, we consider a recently developed simple kernel-based estimator for increasing regression functions based on increasing rearrangements of unconstrained non-parametric estimators. The test statistics are estimated distance measures between the regression function and its increasing rearrangement. We discuss the asymptotic distributions, consistency and small-sample performance of the tests.
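The increasing-rearrangement idea can be sketched as follows: fit an unconstrained kernel estimator on an equispaced grid, sort its values to obtain the increasing rearrangement, and measure the distance between the two. Under a monotone truth the two fits nearly coincide and the statistic is close to zero. This is a simplified illustration (Nadaraya-Watson fit, L2 distance, fixed bandwidth), not the authors' exact test statistic.

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel_fit(x, y, grid, h=0.1):
    """Nadaraya-Watson kernel regression estimate evaluated on a grid."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def monotonicity_stat(x, y, n_grid=100, h=0.1):
    """L2 distance between the unconstrained fit and its increasing rearrangement."""
    grid = np.linspace(x.min(), x.max(), n_grid)
    m_hat = kernel_fit(x, y, grid, h)
    m_inc = np.sort(m_hat)        # on an equispaced grid, sorting IS the rearrangement
    return np.mean((m_hat - m_inc) ** 2)

x = rng.uniform(0, 1, 300)
# Increasing truth: statistic should be near zero
stat_mono = monotonicity_stat(x, x + rng.normal(0, 0.1, 300))
# Oscillating truth: statistic should be clearly positive
stat_nonmono = monotonicity_stat(x, np.sin(3 * np.pi * x) + rng.normal(0, 0.1, 300))
print(stat_mono, stat_nonmono)
```

In the paper, critical values for such statistics come from their asymptotic distributions; the sketch only shows why the distance separates monotone from non-monotone regressions.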
68.
The paper introduces a new method of flexible spline fitting for copula density estimation. Spline coefficients are penalised to achieve a smooth fit. To weaken the curse of dimensionality, a reduced tensor product based on so-called sparse grids (Notes Numer. Fluid Mech. Multidiscip. Des., 31, 1991, 241–251) is used instead of a full tensor spline basis. To achieve uniform margins of the copula density, linear constraints are placed on the spline coefficients, and quadratic programming is used to fit the model. Simulations and practical examples accompany the presentation.
69.
Statisticians fall far short of their potential as guides to enlightened decision making in business. Two important explanations are: (1) decision makers are often more easily convinced by concrete examples, however fragmentary and misleading, than by competent statistical analysis; (2) the effective use of statistics in decision making requires hard thinking by decision makers, thinking that cannot be delegated entirely to the statistical specialist. Modern developments in interactive statistical computing may help to reduce the force of these limitations; used properly, computing can encourage, almost force, the student or business user of statistics to think statistically.
70.
An empirical Bayes problem has an unknown prior that must be estimated from the data. The predictive recursion (PR) algorithm provides fast nonparametric estimation of mixing distributions and is ideally suited to empirical Bayes applications. This article presents a general notion of empirical Bayes asymptotic optimality, and it is shown that PR-based procedures satisfy this property under certain conditions. As an application, the problem of in-season prediction of baseball batting averages is considered; there, the PR-based empirical Bayes rule performs well in terms of prediction error and ability to capture the distribution of the latent features.
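A minimal sketch of the PR algorithm for a normal location mixture, assuming a grid approximation of the mixing density and a slowly decaying weight sequence. The weight exponent 0.67, the grid, and the two-point true mixing distribution are illustrative choices, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(3)

def norm_pdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def predictive_recursion(data, grid, sd=1.0):
    """Predictive recursion estimate of a mixing density on a grid.

    Model: x ~ integral of N(x | theta, sd^2) f(theta) dtheta.
    One pass over the data; each point slightly reweights f toward
    the theta values that explain it well.
    """
    f = np.full(len(grid), 1.0 / (grid[-1] - grid[0]))   # uniform initial guess
    dtheta = grid[1] - grid[0]
    for i, x in enumerate(data):
        w = (i + 2.0) ** (-0.67)                  # slowly decaying weight sequence
        k = norm_pdf(x, grid, sd)                 # likelihood of x at each theta
        m = np.sum(k * f) * dtheta                # current marginal density at x
        f = (1 - w) * f + w * k * f / m           # PR update
        f /= f.sum() * dtheta                     # keep f a density numerically
    return f

# True mixing distribution: point masses at theta = -2 and theta = +2
theta = rng.choice([-2.0, 2.0], size=2000)
data = theta + rng.normal(size=2000)
grid = np.linspace(-6, 6, 241)
f_hat = predictive_recursion(data, grid)
```

The single pass over the data is what makes PR fast enough for the large-scale empirical Bayes applications the abstract has in mind; the recovered f_hat is bimodal around the two true support points.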