By full-text access:
  Paid full text: 1172 articles
  Free: 28 articles
  Free (domestic): 3 articles
By subject:
  Management: 54 articles
  Ethnology: 3 articles
  Demography: 43 articles
  Collected works: 24 articles
  Theory and methodology: 8 articles
  General: 286 articles
  Sociology: 32 articles
  Statistics: 753 articles
By year:
  2023: 8   2022: 9   2021: 6   2020: 23   2019: 29   2018: 30   2017: 62   2016: 35
  2015: 20   2014: 49   2013: 281   2012: 97   2011: 39   2010: 39   2009: 42   2008: 45
  2007: 29   2006: 32   2005: 24   2004: 19   2003: 21   2002: 27   2001: 21   2000: 20
  1999: 30   1998: 19   1997: 14   1996: 12   1995: 30   1994: 17   1993: 13   1992: 16
  1991: 11   1990: 2   1989: 5   1988: 3   1987: 4   1986: 3   1985: 2   1984: 2
  1983: 2   1982: 2   1981: 2   1980: 1   1979: 2   1977: 3   1976: 1
A total of 1203 search results (search time: 46 ms).
101.
We establish weak and strong posterior consistency of Gaussian process priors studied by Lenk [1988. The logistic normal distribution for Bayesian, nonparametric, predictive densities. J. Amer. Statist. Assoc. 83 (402), 509–516] for density estimation. Weak consistency is related to the support of a Gaussian process in the sup-norm topology, which is explicitly identified for many covariance kernels. In fact, we show that this support is the space of all continuous functions when the usual covariance kernels are chosen and an appropriate prior is used on the smoothing parameters of the covariance kernel. We then show that a large class of Gaussian process priors achieve weak as well as strong posterior consistency (under some regularity conditions) at true densities that are either continuous or piecewise continuous.
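For readers unfamiliar with the prior being analyzed, the following is a minimal sketch of the logistic-normal construction behind it: a Gaussian process path is exponentiated and normalized to a density. The squared-exponential kernel, the grid approximation, and all parameter values are illustrative choices, not the paper's, and the prior on the smoothing parameters mentioned in the abstract is omitted.

```python
import numpy as np

def sample_logistic_gp_density(grid, lengthscale=0.2, variance=1.0, seed=None):
    """Draw one random density on an equispaced grid via the logistic-normal
    construction f(x) = exp(W(x)) / integral of exp(W(s)) ds, W a zero-mean GP."""
    rng = np.random.default_rng(seed)
    # Squared-exponential covariance kernel (an illustrative choice of kernel).
    diff = grid[:, None] - grid[None, :]
    K = variance * np.exp(-0.5 * (diff / lengthscale) ** 2) + 1e-8 * np.eye(len(grid))
    W = rng.multivariate_normal(np.zeros(len(grid)), K)
    unnormalized = np.exp(W)
    dx = grid[1] - grid[0]
    return unnormalized / (unnormalized.sum() * dx)   # normalize on the grid

grid = np.linspace(0.0, 1.0, 200)
f = sample_logistic_gp_density(grid, seed=0)
print(f.min() >= 0, abs(f.sum() * (grid[1] - grid[0]) - 1.0) < 1e-12)  # a valid density on the grid
```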
102.
Hartigan (1975) defines the number q of clusters in a d-variate statistical population as the number of connected components of the set {f > c}, where f denotes the underlying density function on Rd and c is a given constant. Some usual cluster algorithms treat q as an input which must be given in advance. The authors propose a method for estimating this parameter based on the computation of the number of connected components of an estimate of {f > c}. This set estimator is constructed as a union of balls with centres at an appropriate subsample, which is selected via a nonparametric density estimator of f. The asymptotic behaviour of the proposed method is analyzed. A simulation study and an example with real data are also included.
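The following is a rough Python sketch of the idea described above: evaluate a nonparametric density estimate, keep the sample points where it exceeds c, cover them with balls of a fixed radius, and count connected components of the union. The Gaussian kernel estimator, the ball radius, and the example parameters are illustrative choices; the authors' subsample selection and tuning rules are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.spatial.distance import pdist, squareform
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def estimate_num_clusters(X, c, radius):
    """Estimate Hartigan's number of clusters at level c.
    X: (n, d) sample; radius: ball radius (a tuning choice left open here)."""
    kde = gaussian_kde(X.T)                     # nonparametric estimate of f
    keep = X[kde(X.T) > c]                      # subsample lying in the estimated {f > c}
    if len(keep) == 0:
        return 0
    # Two balls of the given radius overlap iff their centres are within 2 * radius.
    D = squareform(pdist(keep))
    adjacency = csr_matrix((D <= 2 * radius).astype(int))
    n_components, _ = connected_components(adjacency, directed=False)
    return n_components

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
print(estimate_num_clusters(X, c=0.01, radius=0.6))   # two well-separated groups, typically 2
```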
103.
In this paper, we investigate the performance of different parametric and nonparametric approaches for analyzing overdispersed person-time-event rates in the clinical trial setting. We show that the likelihood-based parametric approach may not maintain the nominal size of the test for overdispersed person-time-event data. The nonparametric approaches may use as estimator either the mean of the per-subject ratios of the number of events to follow-up time, or the ratio of the mean number of events to the mean follow-up time across all subjects. Among these, the ratio of means is a consistent estimator and can be studied analytically. Asymptotic properties of all estimators were studied through numerical simulations. This research shows that the nonparametric ratio-of-means estimator is to be recommended for analyzing overdispersed person-time data. When the sample size is small, some resampling-based approaches can yield satisfactory results.
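A small sketch of the two nonparametric estimators contrasted above (the paper's simulation and testing machinery is not reproduced); the example data are made up.

```python
import numpy as np

def person_time_rate_estimators(events, follow_up):
    """Two nonparametric estimators of an event rate from per-subject data."""
    events = np.asarray(events, dtype=float)
    follow_up = np.asarray(follow_up, dtype=float)
    mean_of_ratios = np.mean(events / follow_up)        # average of per-subject rates
    ratio_of_means = events.sum() / follow_up.sum()     # total events / total person-time
    return mean_of_ratios, ratio_of_means

# Example: 5 subjects with overdispersed event counts and unequal follow-up (years).
events = [0, 3, 1, 7, 2]
follow_up = [0.5, 2.0, 1.0, 3.5, 1.5]
print(person_time_rate_estimators(events, follow_up))
```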
104.
Non-Gaussian processes of Ornstein–Uhlenbeck (OU) type offer the possibility of capturing important distributional deviations from Gaussianity and for flexible modelling of dependence structures. This paper develops this potential, drawing on and extending powerful results from probability theory for applications in statistical analysis. Their power is illustrated by a sustained application of OU processes within the context of finance and econometrics. We construct continuous time stochastic volatility models for financial assets where the volatility processes are superpositions of positive OU processes, and we study these models in relation to financial data and theory.
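As a rough illustration of the building block described above (not the authors' estimation procedure), the sketch below simulates a positive OU process driven by a compound-Poisson subordinator on a discrete time grid and forms a two-component superposition as a volatility path. The exponential jump distribution, the Euler-style discretization, and all parameter values are assumptions made for the example.

```python
import numpy as np

def simulate_positive_ou(T=1.0, n=1000, lam=5.0, jump_rate=10.0, jump_mean=0.1, seed=None):
    """Simulate d sigma2(t) = -lam * sigma2(t) dt + dz(lam * t), with z a
    compound-Poisson subordinator with exponential jumps (an illustrative choice
    of background driving Levy process)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    sigma2 = np.empty(n + 1)
    sigma2[0] = jump_rate * jump_mean          # start near the stationary mean
    for i in range(n):
        decay = sigma2[i] * np.exp(-lam * dt)
        k = rng.poisson(jump_rate * lam * dt)  # jumps of z(lam * t) in (t, t + dt]
        jumps = rng.exponential(jump_mean, k).sum() if k else 0.0
        sigma2[i + 1] = decay + jumps
    return sigma2

# Superposition of two OU processes with different decay rates, as in the
# stochastic volatility construction described above (weights are made up).
vol = 0.7 * simulate_positive_ou(lam=5.0, seed=0) + 0.3 * simulate_positive_ou(lam=0.5, seed=1)
print(vol[:5])
```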
105.
This paper introduces and analyzes the key technologies used in high-density rewritable DVD discs, including phase-change recording, methods for increasing recording density, determination of the track format, front-end (前置) technology, ZCLV rotation-control technology, and green and blue laser technology.
106.
Balb/c mice were immunized with oxidatively modified lipoprotein(a). Monoclonal antibody cell lines against human oxidatively modified low-density lipoprotein (oxLDL) were prepared using lymphocyte hybridoma technology. A double-antibody sandwich enzyme-linked immunosorbent assay (ELISA) was established with the monoclonal antibody as the coating antibody. The method was used to measure plasma oxLDL levels in 10 patients with coronary heart disease, 6 patients with other heart diseases, and 10 healthy controls. The results show that plasma oxLDL levels were significantly higher in the coronary heart disease patients than in the other heart disease group and the normal control group. The assay established in this study is of significant value for measuring plasma oxLDL in the diagnosis of coronary heart disease.
107.
For the problems of nonparametric estimation of nonincreasing and symmetric unimodal density functions with bounded supports, we determine the projections of estimates onto the convex families of possible parent densities with respect to the weighted integrated squared error. We also describe the method of approximating the analogous projections onto the respective density classes satisfying some general moment conditions. The method of projections reduces the estimation errors for all possible values of observations of a given finite sample size in a uniformly optimal way, and provides estimates sharing the properties of the parent densities.
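For the special case of nonincreasing densities, the unweighted L2 projection onto the monotone cone is an antitonic regression, which gives a concrete feel for the projection idea. The sketch below uses unit weights and a final renormalization step, and is only a simplified stand-in for the weighted-ISE projections and moment-constrained classes treated in the paper; the pilot estimate is made up.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def project_to_nonincreasing_density(grid, f_hat):
    """L2-project a pilot density estimate on an equispaced grid onto the class of
    nonincreasing functions (antitonic regression), then renormalize to integrate
    to one. A simplified stand-in for the weighted-ISE projections in the paper."""
    iso = IsotonicRegression(increasing=False)
    g = iso.fit_transform(grid, f_hat)
    g = np.clip(g, 0.0, None)            # guard against negative values (cheap, usually inactive)
    dx = grid[1] - grid[0]
    return g / (g.sum() * dx)

grid = np.linspace(0.0, 1.0, 100)
f_hat = np.exp(-3 * grid) + 0.05 * np.sin(20 * grid)   # a wiggly, non-monotone pilot estimate
print(project_to_nonincreasing_density(grid, f_hat)[:5])
```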
108.
Two families of parameter estimation procedures for the stable laws based on a variant of the characteristic function are provided. The methodology, which produces viable computational procedures for the stable laws, is generally applicable to other families of distributions across a variety of settings. Both families of procedures may be described as a modified weighted chi-squared minimization procedure, and both explicitly take account of constraints on the parameter space. Influence functions for, and efficiencies of, the estimators are given. If x1, x2, …, xn is a random sample from an unknown distribution F, a method for determining the stable law to which F is attracted is developed. Procedures for regression and autoregression with stable error structure are provided. A number of examples are given.
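A simplified relative of such characteristic-function-based fitting is sketched below: match the empirical characteristic function to a stable characteristic function by unit-weight least squares over a small grid of arguments. The parametrization, the grid, the weights, the bounds, and the optimizer are assumptions for illustration; the paper's modified weighted chi-squared procedure and its constraint handling are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import levy_stable

def stable_cf(t, alpha, beta, gamma, delta):
    """Characteristic function of a stable law, S1-type parametrization (alpha != 1)."""
    return np.exp(1j * delta * t
                  - np.abs(gamma * t) ** alpha
                  * (1 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2)))

def fit_stable_via_ecf(x, t_grid=None):
    """Fit (alpha, beta, gamma, delta) by least squares between the empirical and
    model characteristic functions over a grid of arguments (unit weights)."""
    if t_grid is None:
        t_grid = np.linspace(0.1, 1.0, 10)
    ecf = np.exp(1j * np.outer(t_grid, x)).mean(axis=1)    # empirical characteristic function

    def objective(theta):
        alpha, beta, gamma, delta = theta
        return np.sum(np.abs(ecf - stable_cf(t_grid, alpha, beta, gamma, delta)) ** 2)

    # alpha restricted to (1.1, 2) to keep this simple parametrization well behaved.
    res = minimize(objective, x0=np.array([1.5, 0.0, 1.0, 0.0]),
                   bounds=[(1.1, 2.0), (-1.0, 1.0), (1e-3, None), (None, None)])
    return res.x

x = levy_stable.rvs(1.7, 0.0, size=2000, random_state=0)
print(fit_stable_via_ecf(x))   # rough estimates of (alpha, beta, gamma, delta)
```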
109.
On a multiple choice test in which each item has r alternative options, a given number c of which are correct, various scoring models have been proposed. In one case the test-taker is allowed to choose a solution subset of any size and is graded according to how small the subset is and how many correct answers it contains. In a second case the test-taker is allowed to select only solution subsets of a prespecified maximum size and is graded as above. The first case is analogous to the situation where the test-taker is given a set of r options with each question; each question calls for a solution which consists of selecting the subset of the r responses that he/she believes to be correct. In the second case, when the prespecified solution subset is restricted to be of size at most one, the resulting scoring model corresponds to the usual model, referred to below as the standard model. The number c of correct options per item is usually known to the test-taker in this case.

Scoring models are evaluated according to how well they correctly identify the total scores of the individuals in the class of test-takers. Loss functions are constructed which penalize scoring models that produce student scores not associated with the student's true (or average) total score on the exam. Scoring models are compared on the basis of cross-validated assessments of the loss incurred by using each of the given models. It is shown that in many cases the assessed loss for scoring models which allow students to choose more than one option for each question is smaller than the assessed loss for the standard scoring model.
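The abstract does not give the grading formulas, so the sketch below uses a made-up subset-scoring rule (one point per correct option chosen, minus a fixed penalty per option chosen) alongside the standard single-response rule, purely to make the two cases concrete; it is not the paper's scoring model.

```python
def subset_score(chosen, correct, penalty=0.25):
    """A hypothetical grading rule for subset selection: one point per correct
    option chosen, minus a penalty per option chosen (illustrative only)."""
    chosen, correct = set(chosen), set(correct)
    return len(chosen & correct) - penalty * len(chosen)

def standard_score(chosen, correct):
    """Standard scoring: a single response, one point if it is correct."""
    return float(len(chosen) == 1 and next(iter(chosen)) in correct)

# Item with r = 5 options, of which c = 2 are correct.
correct = {"A", "C"}
print(subset_score({"A", "C", "D"}, correct), standard_score({"A"}, correct))
```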
110.
By means of an example, it is shown how eigenvalues and eigenvectors of variance components models can be obtained straightforwardly when balanced data are available. Simple asymptotically efficient estimators of the variance components are presented.
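The abstract does not reproduce its example; as a generic illustration of variance-component estimation from balanced data, the sketch below computes the classical ANOVA estimators for a balanced one-way random effects model. The simulated data and parameter values are made up for the example.

```python
import numpy as np

def balanced_oneway_variance_components(y):
    """ANOVA estimators for the balanced one-way random effects model
    y_ij = mu + a_i + e_ij, with y given as an (a, n) array (a groups, n obs each)."""
    a, n = y.shape
    group_means = y.mean(axis=1)
    grand_mean = y.mean()
    msa = n * np.sum((group_means - grand_mean) ** 2) / (a - 1)    # between-group mean square
    mse = np.sum((y - group_means[:, None]) ** 2) / (a * (n - 1))  # within-group mean square
    sigma2_e = mse
    sigma2_a = (msa - mse) / n
    return sigma2_a, sigma2_e

rng = np.random.default_rng(2)
a, n = 20, 10
effects = rng.normal(0.0, np.sqrt(2.0), size=(a, 1))          # true sigma2_a = 2
y = 5.0 + effects + rng.normal(0.0, 1.0, size=(a, n))         # true sigma2_e = 1
print(balanced_oneway_variance_components(y))
```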