1.
Abstract

Ratio-type mean estimators in a finite population setting depend on multiple auxiliary variables and unknown parameters. We propose a new generalized, matrix-based approach for modeling multivariate mean estimators with two auxiliary variables. Our approach naturally yields a graphical analysis for comparing mean estimators.
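The abstract does not give the estimator itself. As background, here is a minimal sketch of the classical single- and two-auxiliary-variable ratio estimators of a finite population mean, assuming the population means of the auxiliary variables are known; the fixed convex weight is illustrative and is not the paper's matrix-based generalization.

import numpy as np

def ratio_estimator(y, x, X_bar):
    # Classical ratio estimator of the population mean of y,
    # using one auxiliary variable x with known population mean X_bar.
    return y.mean() * X_bar / x.mean()

def two_auxiliary_ratio_estimator(y, x1, x2, X1_bar, X2_bar, w=0.5):
    # Convex combination of two single-auxiliary ratio estimators
    # (a simple multivariate ratio estimator; w is an illustrative weight).
    return w * ratio_estimator(y, x1, X1_bar) + (1 - w) * ratio_estimator(y, x2, X2_bar)

# Toy sample drawn from a hypothetical finite population.
rng = np.random.default_rng(0)
x1 = rng.uniform(10, 20, size=50)
x2 = rng.uniform(5, 15, size=50)
y = 2.0 * x1 + 1.5 * x2 + rng.normal(0, 2, size=50)
print(two_auxiliary_ratio_estimator(y, x1, x2, X1_bar=15.0, X2_bar=10.0))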
2.
On Optimality of Bayesian Wavelet Estimators
Abstract.  We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely the posterior mean, the posterior median and the Bayes factor, where the prior imposed on the wavelet coefficients is a mixture of a point mass at zero and a Gaussian density. We show that, in terms of the mean squared error and for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve the optimal minimax rate within any prescribed Besov space B^s_{p,q} for p ≥ 2. For 1 ≤ p < 2, the Bayes factor is still optimal for (2s+2)/(2s+1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which in this case can achieve only the best possible rates for linear estimators.
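As a sketch of the kind of rule studied, the following computes the posterior mean of a single wavelet coefficient under a prior mixing a point mass at zero with a Gaussian slab, assuming Gaussian noise with known variance; the hyperparameter values are placeholders, not the calibrated choices of the paper.

import numpy as np
from scipy.stats import norm

def posterior_mean_coefficient(d, sigma, tau, pi_slab):
    # Prior: theta ~ pi_slab * N(0, tau^2) + (1 - pi_slab) * delta_0.
    # Likelihood: d | theta ~ N(theta, sigma^2).
    m_slab = norm.pdf(d, loc=0.0, scale=np.sqrt(tau**2 + sigma**2))  # marginal of d under the slab
    m_spike = norm.pdf(d, loc=0.0, scale=sigma)                      # marginal of d under the spike
    w = pi_slab * m_slab / (pi_slab * m_slab + (1 - pi_slab) * m_spike)  # P(theta != 0 | d)
    shrink = tau**2 / (tau**2 + sigma**2)                            # conjugate Gaussian shrinkage factor
    return w * shrink * d

print(posterior_mean_coefficient(d=2.5, sigma=1.0, tau=3.0, pi_slab=0.2))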
3.
Summary  Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportional to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that available from weighting by ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results. This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
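For reference, a minimal sketch of the standard DerSimonian-Laird pooling of risk differences from 2×2 tables follows; the paper's modification replaces the conditional within-study variances with unconditional ones, and that substitution is not reproduced here.

import numpy as np

def dersimonian_laird_rd(events_t, n_t, events_c, n_c):
    # Standard DerSimonian-Laird random-effects pooling of risk differences.
    p_t, p_c = events_t / n_t, events_c / n_c
    rd = p_t - p_c                                        # per-study risk differences
    v = p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c     # conditional within-study variances
    w = 1.0 / v                                           # fixed-effect weights
    rd_fixed = np.sum(w * rd) / np.sum(w)
    q = np.sum(w * (rd - rd_fixed) ** 2)                  # Cochran's Q
    k = len(rd)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance
    w_re = 1.0 / (v + tau2)                               # random-effects weights
    rd_pooled = np.sum(w_re * rd) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return rd_pooled, se, tau2

# Illustrative counts for three hypothetical trials.
print(dersimonian_laird_rd(np.array([12, 8, 20]), np.array([100, 80, 150]),
                           np.array([6, 5, 15]), np.array([100, 80, 150])))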
4.
Longitudinal data often contain missing observations, and it is in general difficult to justify a particular missing-data mechanism, whether random or not, since competing mechanisms may be hard to distinguish. The authors describe a likelihood-based approach to estimating both the mean response and the association parameters for longitudinal binary data with drop-outs. They specify the marginal and dependence structures as regression models that link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
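The abstract does not spell out the regression structures. One common specification consistent with the description, given here only as an assumption and not necessarily the authors' parameterization, models the marginal means with a logistic link and the pairwise association through log odds ratios:

\operatorname{logit}\Pr(Y_{ij}=1 \mid x_{ij}) = x_{ij}^{\top}\beta, \qquad \log \mathrm{OR}(Y_{ij}, Y_{ik}) = z_{ijk}^{\top}\alpha,

where \beta captures covariate effects on the mean response and \alpha captures the within-subject dependence.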
5.
Using Lyapunov theory, matrix analysis and the Itô formula, combined with inequality techniques, this paper studies the mean-square exponential stability of stochastic cellular neural network systems. It derives an estimate of the second-moment Lyapunov exponent of the system's solutions and gives sufficient conditions for mean-square exponential stability.
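For context, the stability notion studied is, in its standard textbook form (which may differ in detail from the paper's statement): there exist constants C \ge 1 and \lambda > 0 such that

E\,\|x(t; x_0)\|^2 \le C\,\|x_0\|^2\, e^{-\lambda t}, \qquad t \ge 0,

for every initial state x_0, i.e. the second moment of the solution decays exponentially.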
6.
WEIGHTED SUMS OF NEGATIVELY ASSOCIATED RANDOM VARIABLES
In this paper, we establish strong laws for weighted sums of negatively associated (NA) random variables satisfying a higher-order moment condition. Some results of Bai Z.D. & Cheng P.E. (2000) [Marcinkiewicz strong laws for linear statistics. Statist. Probab. Lett. 43, 105–112] and Sung S.K. (2001) [Strong laws for weighted sums of i.i.d. random variables. Statist. Probab. Lett. 52, 413–419] are sharpened and extended from the independent identically distributed case to the NA setting. Also, one of the results of Li D.L. et al. (1995) [Complete convergence and almost sure convergence of weighted sums of random variables. J. Theoret. Probab. 8, 49–76] is complemented and extended.
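For background, the classical Marcinkiewicz-Zygmund strong law that such weighted-sum results refine reads, in the unweighted i.i.d. case: if X, X_1, X_2, \dots are i.i.d. with E|X|^p < \infty for some 0 < p < 2, then

n^{-1/p} \sum_{i=1}^{n} (X_i - c) \to 0 \quad \text{a.s.},

where c = EX for 1 \le p < 2 and c = 0 for 0 < p < 1; the paper's theorems concern weighted sums under negative association rather than independence.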
7.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often limit the number of ambient concentration measurements available for environmental risk analysis. Therefore, sample size limitations, representativeness of the data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point-source-, area-source-, and line-source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that uses the sampled concentration data to evaluate whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
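A minimal sketch of the two interval estimates discussed, a normal-theory t interval and a percentile bootstrap interval for the annual mean, is given below; the data are placeholders, not measurements from the study.

import numpy as np
from scipy import stats

def normal_theory_ci(x, level=0.95):
    # t-based confidence interval for the mean.
    n = len(x)
    se = x.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.5 + level / 2, df=n - 1)
    return x.mean() - t_crit * se, x.mean() + t_crit * se

def bootstrap_percentile_ci(x, level=0.95, n_boot=10000, seed=0):
    # Percentile bootstrap confidence interval for the mean.
    rng = np.random.default_rng(seed)
    boot_means = rng.choice(x, size=(n_boot, len(x)), replace=True).mean(axis=1)
    alpha = 1 - level
    return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])

# Illustrative right-skewed 24-hour-average concentrations (arbitrary units).
x = np.random.default_rng(1).lognormal(mean=1.0, sigma=0.6, size=30)
print(normal_theory_ci(x))
print(bootstrap_percentile_ci(x))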
8.
Finding optimal, or at least good, maintenance and repair policies is crucial in reliability engineering. Likewise, describing the life phases of human mortality is important when determining social policy or insurance premiums. In these tasks, one searches for distributions that fit the data and then makes inferences about the population(s). In the present paper, we focus on bathtub-type distributions and provide an overview of problems, methods and solutions, as well as a few open challenges, encountered in reliability engineering, survival analysis, demography and actuarial science.
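One simple construction of a bathtub-shaped hazard, given only as an illustration (the additive Weibull form) and not as a model taken from the paper, adds a decreasing and an increasing Weibull hazard:

h(t) = a\,b\,t^{b-1} + c\,d\,t^{d-1}, \qquad a, c > 0,\ 0 < b < 1 < d,

so that early failures are dominated by the first, decreasing term, wear-out by the second, increasing term, and the hazard is roughly flat in between.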
9.
The weaknesses of established model selection procedures based on hypothesis testing and similar criteria are discussed, and an alternative based on synthetic (composite) estimation is proposed. It is developed for the problem of prediction in ordinary regression, and its properties are explored by simulation for simple regression. Extensions to a general setting are described, and an example with multiple regression is analysed. Arguments are presented against using a selected model for any inference.
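As a generic illustration of compositing predictors rather than selecting one (the inverse-MSE weights below are ad hoc, not the paper's scheme), the following combines a mean-only predictor with a simple-regression predictor using leave-one-out prediction error:

import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, size=40)
y = 1.0 + 0.3 * x + rng.normal(0, 0.5, size=40)   # weak slope, so model selection is unstable

def loo_mse(predict):
    # Leave-one-out mean squared prediction error of a rule predict(x_train, y_train, x_new).
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        errs.append((y[i] - predict(x[mask], y[mask], x[i])) ** 2)
    return float(np.mean(errs))

def predict_mean(x_tr, y_tr, x_new):
    return y_tr.mean()

def predict_linreg(x_tr, y_tr, x_new):
    slope, intercept = np.polyfit(x_tr, y_tr, deg=1)
    return intercept + slope * x_new

mse_mean, mse_lin = loo_mse(predict_mean), loo_mse(predict_linreg)
w_lin = (1 / mse_lin) / (1 / mse_lin + 1 / mse_mean)   # ad hoc inverse-MSE weight

def composite_predict(x_new):
    # Synthetic (composite) prediction: a weighted average of the two fitted rules.
    return w_lin * predict_linreg(x, y, x_new) + (1 - w_lin) * predict_mean(x, y, x_new)

print(w_lin, composite_predict(0.5))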
10.
This paper proposes a multi-period mean-semi-absolute-deviation (M-SAD) portfolio optimization model with an aggregate short-selling limit, threshold constraints, and V-shaped transaction costs. The model measures asset return by the mean and risk by the semi-absolute deviation. Because of the transaction costs, the model is a dynamic optimization problem that does not satisfy the no-aftereffect (Markov) property. The paper approximates it as a general dynamic programming problem, proposes a new discrete iteration method, and proves that the algorithm converges linearly. Finally, an empirical study compares the effects of different values of the short-selling limit and the risk-aversion coefficient on the optimal portfolio strategy, verifying the effectiveness of the model and the algorithm.
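A minimal sketch of the risk measure used, the portfolio semi-absolute deviation estimated from scenario returns (single period, without the transaction costs or constraints of the model), with assumed toy data:

import numpy as np

def semi_absolute_deviation(returns, weights):
    # Mean downside deviation of the portfolio return below its expected value.
    # returns: (n_scenarios, n_assets) scenario return matrix.
    port = returns @ weights                     # portfolio return in each scenario
    return np.mean(np.maximum(port.mean() - port, 0.0))

# Toy scenarios for three assets and an equally weighted portfolio.
rng = np.random.default_rng(3)
scenarios = rng.normal(loc=[0.01, 0.008, 0.012], scale=[0.03, 0.02, 0.05], size=(500, 3))
w = np.full(3, 1 / 3)
print(semi_absolute_deviation(scenarios, w))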