91.
Randomized response models have been used to estimate the population proportion of a sensitive attribute. A randomizing device is typically employed to protect respondents' privacy in a survey, and an unrelated question is added to improve statistical efficiency. In this article, we propose Bayesian estimation of a rare sensitive attribute using a randomized response technique that includes a rare unrelated attribute. Two cases are considered: the proportion of the rare unrelated attribute is either known or unknown. A simulation study is conducted to assess the performance of the models using mean absolute error and coverage probability. The results show that the performance depends on the parameters and is robust to the choice of prior.
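As a minimal sketch of the design described above (not the paper's Bayesian estimator), the following Python snippet simulates the unrelated-question randomized response mechanism and applies the standard moment estimator for the known-unrelated-proportion case; all sample sizes, proportions, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rr(n, p_device, pi_sensitive, pi_unrelated):
    """Unrelated-question randomized response: with probability p_device the
    respondent answers the sensitive question, otherwise the unrelated one."""
    answers_sensitive = rng.random(n) < pi_sensitive
    answers_unrelated = rng.random(n) < pi_unrelated
    use_sensitive = rng.random(n) < p_device
    return np.where(use_sensitive, answers_sensitive, answers_unrelated)

def estimate_pi_sensitive(yes_responses, p_device, pi_unrelated):
    """Moment estimator when the unrelated proportion is known:
    P(yes) = p*pi_S + (1-p)*pi_U  =>  pi_S = (lambda_hat - (1-p)*pi_U) / p."""
    lam_hat = yes_responses.mean()
    return (lam_hat - (1 - p_device) * pi_unrelated) / p_device

yes = simulate_rr(n=5000, p_device=0.7, pi_sensitive=0.02, pi_unrelated=0.05)
print(estimate_pi_sensitive(yes, p_device=0.7, pi_unrelated=0.05))
```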
92.
In this paper, a competing risks model is considered under an adaptive type-I progressive hybrid censoring scheme (AT-I PHCS). The latent failure times follow Weibull distributions with a common shape parameter. We investigate maximum likelihood estimation of the parameters. Bayes estimates of the parameters are obtained under squared error and LINEX loss functions, assuming independent gamma priors. We apply Markov chain Monte Carlo (MCMC) techniques to carry out the Bayesian estimation procedure and, in turn, to compute credible intervals. A simulation study is carried out to evaluate the performance of the estimators.
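The abstract mentions Bayes estimates under both squared error and LINEX loss obtained from MCMC output. As a small hedged sketch of that final step only (the censoring scheme and the Weibull MCMC sampler are not reproduced), the snippet below turns a vector of posterior draws into the two point estimates; the gamma-distributed "draws" merely stand in for real MCMC output.

```python
import numpy as np

def bayes_estimates(posterior_draws, a=1.0):
    """Point estimates from posterior draws of a parameter theta:
    squared error loss -> posterior mean;
    LINEX loss with shape a -> -(1/a) * log E[exp(-a*theta)]."""
    draws = np.asarray(posterior_draws)
    se_est = draws.mean()
    linex_est = -np.log(np.mean(np.exp(-a * draws))) / a
    return se_est, linex_est

# Illustrative draws standing in for MCMC output of a Weibull scale parameter.
rng = np.random.default_rng(1)
draws = rng.gamma(shape=5.0, scale=0.4, size=10_000)
print(bayes_estimates(draws, a=0.5))
```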
93.
Tiered electricity pricing not only guides residents toward reasonable and economical electricity use, but also reduces cross-subsidies in electricity prices between users. In practice, however, it is subject to several sources of uncertainty, such as fluctuations in residential electricity demand and the ranges that define each pricing tier. Addressing fluctuations in residential demand, this paper proposes an electricity demand model for tiered pricing based on Bayesian estimation. We first specify a demand function under tiered pricing; we then carry out a Bayesian analysis of this demand function from four aspects: the statistical model, the likelihood function, the posterior distribution, and accelerated convergence. Finally, the model is estimated on electricity consumption data from 1250 households; substituting the influencing factors into the model yields each household's electricity demand and confirms the suitability of Bayesian estimation for building the demand model.
94.
95.
The use of robust measures helps to increase the precision of estimators, especially when estimating extremely skewed distributions. In this article, a generalized ratio estimator is proposed that uses robust measures of a single auxiliary variable under the adaptive cluster sampling (ACS) design. We incorporate the tri-mean (TM), mid-range (MR), and Hodges-Lehmann (HL) estimator of the auxiliary variable as robust measures, together with some conventional measures. Expressions for the bias and mean square error (MSE) of the proposed generalized ratio estimator are derived. Two numerical studies, one on an artificial clustered population and one on a real data application, examine the performance of the proposed estimator against the usual mean-per-unit estimator under simple random sampling (SRS). The simulation results show that the proposed estimators provide better estimation results than the competing estimators on both the real and the artificial populations.
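To make the robust measures concrete, here is a small Python sketch that computes the tri-mean, mid-range, and Hodges-Lehmann estimator, and plugs an auxiliary variable into a classical ratio-type estimate of a mean. The ACS design and the paper's generalized estimator are not reproduced; the data, the assumed known auxiliary population mean, and the function names are illustrative.

```python
import numpy as np
from itertools import combinations_with_replacement

def tri_mean(x):
    # TM = (Q1 + 2*Q2 + Q3) / 4
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (q1 + 2 * q2 + q3) / 4

def mid_range(x):
    # MR = (min + max) / 2
    return (np.min(x) + np.max(x)) / 2

def hodges_lehmann(x):
    # HL = median of all pairwise Walsh averages (x_i + x_j) / 2, i <= j.
    walsh = [(a + b) / 2 for a, b in combinations_with_replacement(x, 2)]
    return np.median(walsh)

def ratio_estimate(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y using auxiliary x."""
    return y_sample.mean() / x_sample.mean() * x_pop_mean

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.8, size=40)   # skewed auxiliary variable
y = 2.5 * x + rng.normal(scale=1.0, size=40)
print(tri_mean(x), mid_range(x), hodges_lehmann(x))
print(ratio_estimate(y, x, x_pop_mean=3.0))
```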
96.
In this paper, we present an algorithm for clustering based on univariate kernel density estimation, named ClusterKDE. It is an iterative procedure in which, at each step, a new cluster is obtained by minimizing a smooth kernel function. Although our applications use the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, the ClusterKDE algorithm is simple, easy to implement, well defined, and stops in a finite number of steps; that is, it always converges regardless of the initial point. We also illustrate our findings with numerical experiments obtained by implementing the algorithm in the software Matlab and applying it to practical problems. The results indicate that ClusterKDE is competitive and fast compared with the well-known Clusterdata and K-means algorithms used by Matlab for clustering data.
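For intuition about clustering from a univariate kernel density estimate, the sketch below builds a Gaussian KDE and splits the data at the valleys (local minima) of the estimated density, so the number of clusters is not fixed in advance. This is a simpler valley-splitting illustration, not the ClusterKDE iteration itself; bandwidth, grid size, and the simulated data are assumptions.

```python
import numpy as np

def gaussian_kde(x, grid, bandwidth):
    """Univariate Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - x[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(x) * bandwidth * np.sqrt(2 * np.pi))

def kde_clusters(x, bandwidth=0.3, grid_size=512):
    """Split the data at local minima (valleys) of the KDE; points between
    consecutive valleys form one cluster."""
    grid = np.linspace(x.min() - 3 * bandwidth, x.max() + 3 * bandwidth, grid_size)
    dens = gaussian_kde(x, grid, bandwidth)
    valleys = grid[1:-1][(dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])]
    return np.digitize(x, valleys)        # cluster label for every observation

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0, 0.5, 100), rng.normal(4, 0.5, 80)])
labels = kde_clusters(data)
print(np.unique(labels, return_counts=True))
```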
97.
Estimation in the multivariate context when the number of observations available is less than the number of variables is a classical theoretical problem. To ensure estimability, one has to impose certain constraints on the parameters. A method for maximum likelihood estimation under constraints is proposed to solve this problem. Even in the extreme case where only a single multivariate observation is available, this may provide a feasible solution. It simultaneously provides a simple, straightforward methodology for allowing specific structures within and between the covariance matrices of several populations. This methodology yields exact maximum likelihood estimates.
98.
99.
This paper focuses on computing the Bayesian reliability of components whose performance characteristics (degradation: fatigue and cracks) are observed during a specified period of time. Depending on the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components have different lifetimes, the rate of degradation is treated as a random variable. The time-to-failure distribution is obtained at a critical level of degradation. Exponential and power degradation models are studied, and an exponential density is assumed for the random variable representing the rate of degradation. The maximum likelihood estimator and Bayesian estimator of the parameter of the exponential density, the predictive distribution, a hierarchical Bayes approach, and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates of the parameter. Illustrations are provided using train wheel degradation data.
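The abstract's core construction is that a random degradation rate plus a critical degradation level induces a time-to-failure distribution. A minimal Monte Carlo sketch of that idea is given below, assuming an exponential degradation path D(t) = d0*exp(theta*t) and an exponentially distributed rate theta as in the abstract; the initial level d0, the rate scale, and the critical level are illustrative, and the paper's ML, hierarchical Bayes, and Gibbs machinery are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def failure_times_exponential_model(n, rate_scale, critical_level, d0=1.0):
    """Exponential degradation D(t) = d0 * exp(theta * t) with random rate
    theta ~ Exponential(rate_scale); failure when D(t) first reaches the
    critical level, so T = log(critical_level / d0) / theta."""
    theta = rng.exponential(scale=rate_scale, size=n)
    return np.log(critical_level / d0) / theta

t_fail = failure_times_exponential_model(n=10_000, rate_scale=0.05, critical_level=5.0)
# Summarize the induced time-to-failure distribution by quantiles
# (its tail is heavy, so quantiles are more informative than the mean).
print(np.percentile(t_fail, [10, 50, 90]))
```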
100.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. Frequentist methods are used for planning and analyzing the trial, while the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior with the results of the internal pilot study and to re-estimate the sample size using an estimator derived from the posterior. By means of a simulation study, we compare operating characteristics such as power and the sample size distribution of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, when no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance in the ongoing clinical trial differs from the prior location, the traditional sample size re-estimation procedure generally performs better, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
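To illustrate the re-estimation step only, the sketch below uses the standard two-arm, continuous-endpoint sample size formula and plugs in whichever variance estimate the design supplies. The MAP prior update itself is not reproduced; the two plugged-in variance values, the effect size delta, and the function name are illustrative assumptions.

```python
from scipy.stats import norm

def n_per_arm(variance, delta, alpha=0.05, power=0.8):
    """Two-arm trial, continuous endpoint: per-arm sample size
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z**2 * variance / delta**2

# Re-estimation: plug in the variance estimate the design prescribes --
# the pooled estimate from the internal pilot (traditional approach), or
# an estimator derived from the MAP prior updated with the pilot data.
pooled_var_pilot = 4.2          # illustrative internal-pilot estimate
posterior_var_estimate = 3.6    # illustrative posterior-based estimate
print(n_per_arm(pooled_var_pilot, delta=1.0))
print(n_per_arm(posterior_var_estimate, delta=1.0))
```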