991.
The reconstruction of populations by stochastic optimization solves the nontrivial problem of finding demographic flows from population registers or, where available, from vital statistics and censuses. These flows in turn allow the reconstruction of stocks (age pyramids and vital statistics). After a review of reconstruction methods, a sensitivity analysis demonstrates the robustness of the stochastic-optimization method to flawed or missing values, to the length of the reconstruction period, and to variations in the actual demographic flows.
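One way to make the link between flows and stocks concrete is the standard demographic accounting identity, quoted here as general background rather than as the paper's specific model; any reconstructed set of flows must satisfy it cohort by cohort. The notation is generic and not taken from the abstract.

```latex
% Cohort accounting identity linking flows (deaths, in- and out-migration)
% to stocks (the age pyramid); notation is generic, not from the paper.
\[
  P_{a+1}(t+1) \;=\; P_a(t) \;-\; D_a(t) \;+\; I_a(t) \;-\; E_a(t)
\]
% P_a(t): population aged a at time t; D_a(t), I_a(t), E_a(t): deaths,
% in-migrations and out-migrations of that cohort between t and t+1.
```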
992.
In this paper, locally D-optimal saturated designs for a logistic model with one and two continuous input variables are constructed by modifying the well-known Fedorov exchange algorithm. A saturated design not only ensures the minimum number of runs but also simplifies the row-exchange computation. The basic idea is to exchange a design point with a point from the design space: the algorithm performs the best row exchange between design points and points from a candidate set representing the design space. Naturally, the resulting designs depend on the candidate set. To gain precision, a candidate set with more points and low discrepancy is intuitively desirable, but it increases the computational cost. Apart from the modification of the row-exchange computation, we propose implementing the algorithm in two stages: first construct a design using a candidate set of affordable size, and then generate a new candidate set around the points of the design found in the first stage. The optimality of the constructed designs is verified using the general equivalence theorem. The algorithms for constructing the optimal designs have been implemented in R.
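As a rough illustration of the exchange idea (not the authors' R code), the sketch below searches for a locally D-optimal two-point (saturated) design for a one-variable logistic model by greedily swapping design points with candidates from a grid. The local parameter values, grid, and iteration limit are assumptions made only for this example.

```python
import numpy as np

# Assumed local parameter values for logit(p) = b0 + b1 * x (not from the paper)
b0, b1 = 0.0, 1.0

def info_matrix(design):
    """Normalized Fisher information matrix of an equally weighted design."""
    M = np.zeros((2, 2))
    for x in design:
        f = np.array([1.0, x])
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        M += p * (1 - p) * np.outer(f, f)      # logistic weight p(1-p)
    return M / len(design)

def fedorov_exchange(candidates, n_points=2, n_iter=100):
    """Greedy row-exchange search for a locally D-optimal saturated design."""
    rng = np.random.default_rng(0)
    design = list(rng.choice(candidates, size=n_points, replace=False))
    best = np.linalg.det(info_matrix(design))
    for _ in range(n_iter):
        improved = False
        for i in range(n_points):
            for c in candidates:
                trial = design.copy()
                trial[i] = c
                d = np.linalg.det(info_matrix(trial))
                if d > best:
                    design, best = trial, d
                    improved = True
        if not improved:                        # stop when no exchange helps
            break
    return design, best

# First-stage candidate set: a coarse grid over an assumed design region [-5, 5]
grid = np.linspace(-5, 5, 101)
design, det_val = fedorov_exchange(grid)
print(design, det_val)
```

A second stage, as the abstract describes, would rebuild the candidate set as a finer grid around the two points returned here and rerun the search.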
993.
Measures of statistical divergence are used to assess the mutual similarity of distributions of multiple variables through a variety of methodologies, including Shannon entropy and Csiszár divergence. Modified measures of statistical divergence are introduced in the present article. These modified measures are related to the Lin–Wong (LW) divergence applied to past lifetime data. Accordingly, the relationship between Fisher information and the LW divergence measure is explored for past lifetime data. A number of relations are proposed between assessment methods based on the Jensen–Shannon, Jeffreys, and Hellinger divergence measures, and relations between the LW measure and the Kullback–Leibler (KL) measure for past lifetime data are examined. Furthermore, the study discusses the relationship between the proposed ordering scheme and the distance interval between the LW and KL measures under certain conditions.
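For readers less familiar with the divergence measures named here, the following sketch computes the classical (unmodified) KL, Jeffreys, Jensen–Shannon, and Hellinger divergences for two discrete distributions. It does not implement the LW-based modifications for past lifetime data, and the example distributions are arbitrary.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence for discrete distributions (natural log)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def jeffreys(p, q):
    """Jeffreys divergence: the symmetrized KL divergence."""
    return kl(p, q) + kl(q, p)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: average KL to the mixture m = (p + q) / 2."""
    m = (np.asarray(p, float) + np.asarray(q, float)) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def hellinger(p, q):
    """Hellinger distance between discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

p = [0.2, 0.5, 0.3]   # arbitrary example distributions
q = [0.1, 0.4, 0.5]
print(kl(p, q), jeffreys(p, q), jensen_shannon(p, q), hellinger(p, q))
```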
994.
There are no practical and effective mechanisms to share high-dimensional data containing sensitive information in fields such as health, financial intelligence, or socioeconomics without either compromising the utility of the data or exposing private personal or secure organizational information. Excessive scrambling or encoding of the information makes it less useful for modelling or analytical processing, while insufficient preprocessing may compromise sensitive information and introduce a substantial risk of re-identification of individuals by various stratification techniques. To address this problem, we developed a novel statistical obfuscation method (DataSifter) for on-the-fly de-identification of structured and unstructured sensitive high-dimensional data, such as clinical data from electronic health records (EHR). DataSifter provides complete administrative control over the balance between the risk of data re-identification and the preservation of the data's information content. Simulation results suggest that DataSifter can provide privacy protection while maintaining data utility for different types of outcomes of interest. The application of DataSifter to a large autism dataset provides a realistic demonstration of its promise for practical applications.
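To illustrate the general trade-off the abstract describes, the toy sketch below obfuscates records by partially swapping values between statistically similar cases. This is a generic technique shown for intuition only; it is not the DataSifter algorithm, and the columns, swap fraction, and demo data are invented.

```python
import numpy as np
import pandas as pd

def swap_obfuscate(df, cols, frac=0.3, seed=0):
    """Toy obfuscation: swap a random fraction of values in `cols` between
    nearest-neighbour records (distance on the standardized columns).
    Generic illustration only -- NOT the DataSifter method."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    z = (df[cols] - df[cols].mean()) / df[cols].std()   # standardize for distances
    for i in df.index:
        if rng.random() > frac:
            continue                                    # leave most records untouched
        dists = ((z - z.loc[i]) ** 2).sum(axis=1)       # squared distance to record i
        dists.loc[i] = np.inf                           # exclude the record itself
        j = dists.idxmin()                              # most similar other record
        col = rng.choice(cols)                          # swap one randomly chosen column
        out.loc[i, col], out.loc[j, col] = df.loc[j, col], df.loc[i, col]
    return out

demo = pd.DataFrame({"age": [34, 36, 52, 49, 23],
                     "bmi": [22.1, 23.0, 30.5, 29.8, 21.4]})
print(swap_obfuscate(demo, ["age", "bmi"], frac=0.6))
```

Raising the swap fraction lowers re-identification risk but also degrades utility, which is the balance the abstract says DataSifter lets an administrator control.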
995.
The asymptotic variance of the maximum likelihood estimator is proved to decrease when the maximization is restricted to a subspace that contains the true parameter value. Maximum likelihood estimation allows a systematic fitting of covariance models to the sample, which is important in data assimilation. The hierarchical maximum likelihood approach is applied to the spectral diagonal covariance model with different parameterizations of eigenvalue decay, and to the sparse inverse covariance model with specified parameter values on different sets of nonzero entries. It is shown computationally that using smaller sets of parameters can substantially decrease the sampling noise in high dimension.
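The effect described in the last sentence is easy to reproduce in a small Monte Carlo experiment of one's own (not taken from the paper): when the true covariance is diagonal and the sample is small relative to the dimension, the covariance MLE restricted to the diagonal model has far smaller sampling error than the unrestricted sample covariance. The dimensions, sample size, and spectrum below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n, reps = 50, 20, 200
true_cov = np.diag(np.linspace(1.0, 2.0, dim))       # true covariance is diagonal

err_full, err_diag = [], []
for _ in range(reps):
    x = rng.multivariate_normal(np.zeros(dim), true_cov, size=n)
    s_full = x.T @ x / n                              # unrestricted MLE (known zero mean)
    s_diag = np.diag(np.mean(x ** 2, axis=0))         # MLE within the diagonal model
    err_full.append(np.linalg.norm(s_full - true_cov))   # Frobenius-norm error
    err_diag.append(np.linalg.norm(s_diag - true_cov))

print("mean error, unrestricted MLE :", np.mean(err_full))
print("mean error, diagonal model   :", np.mean(err_diag))
```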
996.
In recent years, calibration estimation has become an important field of research in survey sampling. This paper proposes a new calibration estimator for the population mean in the presence of two auxiliary variables in stratified sampling. The theory of the new calibration estimator is given, and optimum calibration weights are derived. A simulation study is carried out to compare the performance of the proposed calibration estimator with that of existing calibration estimators. The results reveal that the proposed calibration estimator is more efficient than the existing calibration estimators in stratified sampling.
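For context, the sketch below shows the basic calibration idea in its simplest form: adjust design weights so the weighted total of an auxiliary variable matches its known population total. It is deliberately simplified to one auxiliary variable under simple random sampling, so it is not the paper's stratified two-auxiliary estimator; the population model and sample sizes are invented.

```python
import numpy as np

# Linear (chi-square distance) calibration with one auxiliary variable.
# Weights w_i minimise sum((w_i - d_i)^2 / d_i) subject to sum(w_i * x_i) = X_total.
rng = np.random.default_rng(2)

N, n = 10_000, 200
x_pop = rng.gamma(shape=2.0, scale=5.0, size=N)        # auxiliary variable (known total)
y_pop = 3.0 + 2.0 * x_pop + rng.normal(0, 4, size=N)   # study variable
X_total = x_pop.sum()

idx = rng.choice(N, size=n, replace=False)
x, y = x_pop[idx], y_pop[idx]
d = np.full(n, N / n)                                   # design weights (SRS)

lam = (X_total - np.sum(d * x)) / np.sum(d * x ** 2)    # Lagrange multiplier
w = d * (1 + lam * x)                                   # calibrated weights

print("Horvitz-Thompson estimate of Y total:", np.sum(d * y))
print("Calibration estimate of Y total     :", np.sum(w * y))
print("True Y total                        :", y_pop.sum())
```

Because y is strongly related to x, the calibrated estimate typically sits much closer to the true total than the plain design-weighted estimate, which is the gain in efficiency the abstract refers to.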
997.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or to overexposure of patients. Specification of overdispersion is a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risk of an inadequate sample size, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints; however, the variance of the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance of the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
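To show what a single blinded review step might look like, the sketch below estimates the pooled event rate and overdispersion from blinded negative binomial counts by the method of moments and plugs them into a standard sample-size formula for the rate-ratio test. This is a simplified stand-in under assumptions of my own (1:1 allocation, equal follow-up, an assumed rate ratio), not the information criteria or continuous monitoring procedure derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def blinded_nb_sample_size(counts, t, theta, alpha=0.05, power=0.8):
    """Blinded sample-size recalculation sketch for negative binomial counts.
    counts: blinded per-patient event counts after follow-up time t;
    theta: assumed rate ratio under the alternative (assumption, not data)."""
    m, v = np.mean(counts), np.var(counts, ddof=1)
    lam_pool = m / t                               # blinded overall event rate
    k = max((v - m) / m ** 2, 0.0)                 # method-of-moments overdispersion
    lam1 = 2 * lam_pool / (1 + theta)              # split pooled rate into arm rates
    lam2 = theta * lam1
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_log_rr = (1 / (t * lam1) + k) + (1 / (t * lam2) + k)   # approx. var of log rate ratio
    n_per_arm = z ** 2 * var_log_rr / np.log(theta) ** 2
    return int(np.ceil(n_per_arm)), lam_pool, k

# Example: blinded interim counts from 60 patients followed for one year
rng = np.random.default_rng(3)
pilot = rng.negative_binomial(n=2, p=2 / (2 + 0.8), size=60)   # mean 0.8, dispersion 0.5
print(blinded_nb_sample_size(pilot, t=1.0, theta=0.6))
```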
998.
999.
In this article, we present results on the Shannon information (SI) contained in upper (lower) k-record values and the associated k-record times. We then establish an interesting relationship between the SI content of a random sample of fixed size and the SI in data consisting of sequential maxima. We also consider the information contained in k-record data from an inverse sampling plan (ISP).
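As a minimal reminder of what record data look like, the sketch below extracts the ordinary upper record values and record times (the special case k = 1) from an i.i.d. sample; the sample distribution and size are arbitrary, and the paper's information-theoretic results are not reproduced here.

```python
import numpy as np

# Ordinary upper records (k = 1): observations strictly larger than
# everything seen before, together with the times at which they occur.
rng = np.random.default_rng(4)
x = rng.exponential(size=20)

records, record_times = [], []
current_max = -np.inf
for i, v in enumerate(x, start=1):
    if v > current_max:
        records.append(v)
        record_times.append(i)
        current_max = v

print("record times :", record_times)
print("record values:", np.round(records, 3))
```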
1000.
Consider a two-by-two factorial experiment with more than one replicate. Suppose that we have uncertain prior information that the two-factor interaction is zero. We describe new simultaneous frequentist confidence intervals for the four population cell means, with simultaneous confidence coefficient 1 − α, that utilize this prior information in the following sense. These simultaneous confidence intervals define a cube with expected volume that (a) is relatively small when the two-factor interaction is zero and (b) has a maximum value that is not too large. Also, these intervals coincide with the standard simultaneous confidence intervals obtained by Tukey's method, with simultaneous confidence coefficient 1 − α, when the data strongly contradict the prior information that the two-factor interaction is zero. We illustrate the application of these new simultaneous confidence intervals to a real data set.  相似文献
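To make the object of study concrete, the sketch below computes a plain set of simultaneous confidence intervals for the four cell means of a replicated two-by-two factorial using a Bonferroni adjustment and the pooled ANOVA error variance. This baseline is neither the paper's new intervals nor Tukey's method; the factor effects, replicate count, and confidence level are invented for the example.

```python
import numpy as np
from scipy import stats

def bonferroni_cell_cis(cells, alpha=0.05):
    """Bonferroni-adjusted simultaneous CIs for the cell means of a balanced
    factorial; `cells` maps (a, b) factor levels to arrays of replicates."""
    all_obs = np.concatenate(list(cells.values()))
    k = len(cells)
    df = len(all_obs) - k
    mse = sum(((y - y.mean()) ** 2).sum() for y in cells.values()) / df  # pooled error variance
    tcrit = stats.t.ppf(1 - alpha / (2 * k), df)                         # Bonferroni critical value
    out = {}
    for key, y in cells.items():
        half = tcrit * np.sqrt(mse / len(y))
        out[key] = (y.mean() - half, y.mean() + half)
    return out

# Invented 2x2 data with 5 replicates per cell and no interaction
rng = np.random.default_rng(5)
cells = {(a, b): rng.normal(10 + a + 2 * b, 1.0, size=5)
         for a in (0, 1) for b in (0, 1)}
for key, ci in bonferroni_cell_cis(cells).items():
    print(key, np.round(ci, 2))
```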