By access type:
  Paid full text   5319 articles
  Free   139 articles
  Free (domestic)   20 articles
By subject:
Management   344 articles
Ethnology   1 article
Demography   40 articles
Collected works   26 articles
Theory and methodology   21 articles
General (interdisciplinary)   363 articles
Sociology   33 articles
Statistics   4650 articles
By publication year:
  2024   2 articles
  2023   23 articles
  2022   39 articles
  2021   26 articles
  2020   83 articles
  2019   197 articles
  2018   218 articles
  2017   364 articles
  2016   163 articles
  2015   110 articles
  2014   150 articles
  2013   1500 articles
  2012   508 articles
  2011   137 articles
  2010   147 articles
  2009   170 articles
  2008   169 articles
  2007   147 articles
  2006   138 articles
  2005   144 articles
  2004   118 articles
  2003   92 articles
  2002   90 articles
  2001   91 articles
  2000   89 articles
  1999   93 articles
  1998   90 articles
  1997   63 articles
  1996   38 articles
  1995   34 articles
  1994   42 articles
  1993   24 articles
  1992   35 articles
  1991   16 articles
  1990   16 articles
  1989   12 articles
  1988   19 articles
  1987   8 articles
  1986   8 articles
  1985   5 articles
  1984   15 articles
  1983   13 articles
  1982   7 articles
  1981   6 articles
  1980   1 article
  1979   6 articles
  1978   6 articles
  1977   2 articles
  1975   3 articles
  1973   1 article
A total of 5478 search results were found.
1.
Longitudinal studies are the gold standard of empirical work stress research whenever experiments are not feasible. Frequently, scales are used to assess risk factors and their consequences, and cross-lagged effects are estimated to determine possible risks. Methods to translate cross-lagged effects into risk ratios to facilitate risk assessment do not yet exist, which creates a divide between psychological and epidemiological work stress research. The aim of the present paper is to demonstrate how cross-lagged effects can be used to assess the risk ratio of different levels of psychosocial safety climate (PSC) in organisations, an important psychosocial risk for the development of depression. We used available longitudinal evidence from the Australian Workplace Barometer (N = 1905) to estimate cross-lagged effects of PSC on depression. We applied continuous time modelling to obtain time-scalable cross effects. These were further investigated in a 4-year Monte Carlo simulation, which translated them into 4-year incidence rates. Incidence rates were determined by relying on clinically relevant 2-year periods of depression. We suggest a critical value of PSC = 26 (corresponding to -1.4 SD), which is indicative of a more than 100% increase in the incidence of persistent depressive disorder over 4-year periods compared with average levels of PSC across 4 years.
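As a concrete illustration of the simulation step described above, the sketch below discretises an assumed continuous-time drift matrix with a matrix exponential, simulates a bivariate PSC-depression process, and compares 4-year incidence of a crude "persistent" episode between a low-PSC (-1.4 SD) and an average-PSC starting level. All numerical values, the threshold, and the episode definition are hypothetical; this is not the article's model, data, or estimates.

```python
# Minimal sketch (illustrative values only): translate a continuous-time cross
# effect of PSC on depression into a 4-year incidence ratio via simulation.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical drift matrix for standardized [PSC, depression]:
# diagonals = mean reversion, entry [1, 0] = cross effect of PSC on depression.
A = np.array([[-0.5,  0.0],
              [-0.3, -0.6]])
Q = np.diag([0.4, 0.4])              # diffusion (process noise) intensities, hypothetical

dt = 0.25                            # quarter-year measurement grid
Ad = expm(A * dt)                    # discrete-time autoregressive matrix over dt
# Process-noise covariance over dt: integral of e^{As} Q e^{A's} ds, done numerically.
steps = 200
Qd = sum(expm(A * s) @ Q @ expm(A * s).T * (dt / steps)
         for s in np.linspace(0.0, dt, steps))

def incidence(psc_start, n=20_000, years=4, threshold=1.0):
    """Share of simulated persons whose depression score exceeds `threshold`
    in at least two consecutive years (a crude proxy for a persistent episode)."""
    T = int(years / dt)
    x = np.column_stack([np.full(n, psc_start), np.zeros(n)])  # start: given PSC, depression 0
    above = np.zeros((n, T), dtype=bool)
    L = np.linalg.cholesky(Qd)
    for t in range(T):
        x = x @ Ad.T + rng.standard_normal((n, 2)) @ L.T
        above[:, t] = x[:, 1] > threshold
    per_year = int(1 / dt)
    yearly = above.reshape(n, years, per_year).any(axis=2)
    persistent = (yearly[:, :-1] & yearly[:, 1:]).any(axis=1)
    return persistent.mean()

low, avg = incidence(psc_start=-1.4), incidence(psc_start=0.0)
print(f"4-year incidence: low PSC {low:.3f}, average PSC {avg:.3f}, ratio {low/avg:.2f}")
```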
2.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for statistical inference in constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is obtained via the missing-information principle and is used to construct asymptotic confidence intervals. Interval estimation is also discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. Three optimality criteria are considered to determine the optimal stress level. A real data set illustrates the usefulness of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
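For readers unfamiliar with the GHN model, the sketch below fits it by direct maximum likelihood to complete (uncensored) simulated data, assuming the Cooray-Ananda parameterisation of the density; it does not reproduce the EM algorithm for progressively type-II censored samples used in the article.

```python
# Minimal sketch: complete-data ML fit of the generalized half-normal (GHN)
# distribution, f(x) = sqrt(2/pi) * (alpha/x) * (x/theta)^alpha * exp(-0.5*(x/theta)^(2*alpha)).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def ghn_neg_loglik(params, x):
    alpha, theta = params
    if alpha <= 0 or theta <= 0:
        return np.inf
    z = (x / theta) ** alpha
    ll = (0.5 * np.log(2 / np.pi) + np.log(alpha) - np.log(x)
          + alpha * np.log(x / theta) - 0.5 * z ** 2)
    return -ll.sum()

# Simulate GHN data: X = theta * |Z|**(1/alpha) with Z standard normal.
alpha_true, theta_true = 1.8, 2.0
x = theta_true * np.abs(rng.standard_normal(500)) ** (1 / alpha_true)

fit = minimize(ghn_neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print("MLE (alpha, theta):", fit.x)
```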
3.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. An interim look may therefore be desirable, to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if the evidence is already sufficient (efficacy), or to adjust the sample size. Sequential design approaches specifically for BE studies have been proposed in previous publications. We modify the existing methods, focusing on a simplified multiplicity adjustment and futility stopping, and name our method the modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the originally published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/. Copyright © 2015 John Wiley & Sons, Ltd.
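The sketch below only illustrates the underlying BE criterion, namely acceptance when the 90% confidence interval for the mean log-difference lies within 80-125%, for a hypothetical paired design with made-up numbers; it is not the MSDBE sequential procedure.

```python
# Minimal sketch of the standard bioequivalence criterion (TOST / 90% CI within 80-125%).
import numpy as np
from scipy import stats

def be_decision(log_test, log_ref):
    """Paired-design sketch: declare BE if the 90% CI for the mean difference of
    log-parameters lies within (log 0.8, log 1.25)."""
    d = np.asarray(log_test) - np.asarray(log_ref)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    tcrit = stats.t.ppf(0.95, df=n - 1)
    lo, hi = d.mean() - tcrit * se, d.mean() + tcrit * se
    return (lo > np.log(0.8)) and (hi < np.log(1.25)), (np.exp(lo), np.exp(hi))

rng = np.random.default_rng(2)
log_ref = rng.normal(np.log(100), 0.25, size=24)                  # hypothetical reference log-AUC
log_test = log_ref + rng.normal(np.log(1.02), 0.15, size=24)      # test ~2% higher exposure
print(be_decision(log_test, log_ref))
```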
4.
Risk Analysis, 2018, 38(8): 1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of the basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented by an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for evaluating uncertainty in the output of fault trees and similar multilinear models.
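A minimal sketch of the two sampling-based comparators is given below: full Monte Carlo propagation of lognormal basic-event uncertainties through a small illustrative fault tree, and a Wilks-style 95/95 upper bound from 59 runs. The tree structure, medians, and error factors are assumptions, not values from the article.

```python
# Minimal sketch: uncertainty propagation through a small fault tree with
# lognormally distributed basic-event probabilities, comparing Monte Carlo
# percentiles with a Wilks 95/95 upper bound (59 runs, take the maximum).
import numpy as np

rng = np.random.default_rng(3)

def top_event(p):
    """Example tree: TOP = (B1 AND B2) OR B3, with independent basic events."""
    b1, b2, b3 = p[..., 0], p[..., 1], p[..., 2]
    return b1 * b2 + b3 - b1 * b2 * b3

def sample_tops(n):
    # Lognormal basic-event uncertainties (medians and error factors are made up).
    medians = np.array([1e-3, 5e-3, 1e-4])
    error_factors = np.array([3.0, 3.0, 10.0])        # EF = 95th percentile / median
    sigmas = np.log(error_factors) / 1.645
    p = medians * np.exp(sigmas * rng.standard_normal((n, 3)))
    return top_event(p)

full_mc = sample_tops(100_000)
print("Monte Carlo 95th percentile:", np.percentile(full_mc, 95))

# Wilks (first order, one sided): with 59 runs, the sample maximum is a
# 95%-confidence upper bound on the 95th percentile, since 0.95**59 < 0.05.
wilks = sample_tops(59).max()
print("Wilks 95/95 upper bound   :", wilks)
```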
5.
This article proposes a new approach to analyzing multiple vector autoregressive (VAR) models, which yields a matrix autoregressive (MtAR) model based on a matrix-variate normal distribution with two covariance matrices. The MtAR model is a generalization of VAR models in which the two covariance matrices allow the extension of MtAR to a structural MtAR analysis. The proposed MtAR can also accommodate different lag orders across VAR systems, which gives the model more flexibility. Estimation results from a simulation study and an empirical macroeconomic application show favorable performance of the proposed models and method.
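The sketch below simulates the kind of model described: a matrix autoregression with matrix-variate normal errors whose covariance is the Kronecker product of a row and a column covariance matrix. The coefficient and covariance matrices are illustrative, and the article's estimation procedure is not reproduced.

```python
# Minimal sketch of a matrix autoregression Y_t = A Y_{t-1} B' + E_t with
# matrix-variate normal errors, i.e. vec(E_t) ~ N(0, C kron R) for a row
# covariance R and column covariance C.
import numpy as np

rng = np.random.default_rng(4)
m, n, T = 3, 2, 200                      # Y_t is m x n

A = 0.5 * np.eye(m)                      # left (row-wise) coefficient matrix
B = 0.6 * np.eye(n)                      # right (column-wise) coefficient matrix
R = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 1.0]])          # row covariance
C = np.array([[1.0, 0.4],
              [0.4, 1.0]])               # column covariance
LR, LC = np.linalg.cholesky(R), np.linalg.cholesky(C)

Y = np.zeros((T, m, n))
for t in range(1, T):
    E = LR @ rng.standard_normal((m, n)) @ LC.T     # matrix-variate normal draw
    Y[t] = A @ Y[t - 1] @ B.T + E

# Implied VAR form: vec(Y_t) = (B kron A) vec(Y_{t-1}) + vec(E_t);
# the process is stationary because the spectral radius is below one.
print("spectral radius:", np.max(np.abs(np.linalg.eigvals(np.kron(B, A)))))
```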
6.
In this article we extend the minimax estimation approach for general linear models to semiparametric linear models in which the parameters are simultaneously constrained by an ellipsoid and by linear restrictions. Combining sample information with the prior constraints, the minimax estimator is obtained and compared with the partial least squares estimator by theoretical and simulation methods.
7.
Based on the GSM protocol and the signal-transmission characteristics of the GSM digital mobile communication system, this paper presents a design for a soft-output MLSE receiver for GSM systems and describes the algorithms used to implement it. Tests in a real operating environment show that the implemented MLSE receiver performs well and has practical application value.
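To illustrate the maximum-likelihood sequence criterion behind an MLSE receiver, the sketch below detects a short BPSK block over a hypothetical 3-tap intersymbol-interference channel by brute-force search for the minimum-distance sequence. A real GSM receiver would use the Viterbi algorithm with soft outputs, which is not shown here.

```python
# Minimal sketch of MLSE over a short ISI channel, done by brute force over all
# candidate bit sequences (the ML criterion is minimum Euclidean distance).
import itertools
import numpy as np

rng = np.random.default_rng(5)

h = np.array([1.0, 0.5, 0.2])            # illustrative 3-tap channel impulse response
bits = rng.integers(0, 2, size=8)
symbols = 2.0 * bits - 1.0               # BPSK mapping 0 -> -1, 1 -> +1
received = np.convolve(symbols, h)[: len(symbols)] + 0.3 * rng.standard_normal(len(symbols))

def mlse(received, h, n):
    """Return the bit sequence whose noiseless channel output is closest to `received`."""
    best, best_metric = None, np.inf
    for candidate in itertools.product([0, 1], repeat=n):
        s = 2.0 * np.array(candidate) - 1.0
        metric = np.sum((received - np.convolve(s, h)[:n]) ** 2)
        if metric < best_metric:
            best, best_metric = np.array(candidate), metric
    return best

detected = mlse(received, h, len(bits))
print("bit errors:", int(np.sum(detected != bits)))
```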
8.
The SαS stable distribution is an important class of non-Gaussian distributions, and noise following such a distribution is called impulsive noise. Under impulsive noise, moments of order α and higher do not exist, so algorithms based on second-order moments and a Gaussian model degrade or even fail. This paper proposes an algorithm for estimating the characteristic parameters of linear frequency modulation (LFM) signals in impulsive noise. By analyzing the specific properties of impulsive noise, a modified low-order-moment ambiguity function is derived and combined with the Radon transform to estimate the parameters of LFM signals in impulsive noise. The algorithm works in both impulsive-noise and Gaussian-noise environments and is therefore fairly robust. Computer simulations verify its effectiveness.
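The sketch below illustrates the premise of the article: for SαS noise (generated here with scipy.stats.levy_stable, with α = 1.5 assumed), sample second moments are erratic because the population moment does not exist, while fractional lower-order moments with p < α remain stable. The modified low-order-moment ambiguity function itself is not reproduced.

```python
# Minimal sketch: second moments of SαS noise are unreliable, fractional
# lower-order moments (order p < alpha) are not.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(6)
alpha = 1.5                                   # impulsiveness: smaller alpha = heavier tails
p = 0.8                                       # fractional order, p < alpha

second_moments, flom = [], []
for _ in range(20):
    noise = levy_stable.rvs(alpha, 0.0, size=10_000, random_state=rng)
    second_moments.append(np.mean(noise ** 2))        # erratic across realizations
    flom.append(np.mean(np.abs(noise) ** p))          # stable across realizations
print("sample 2nd moments  :", np.round(second_moments[:5], 1), "...")
print("fractional p-moments:", np.round(flom[:5], 3), "...")
```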
9.
On Meta-Evaluation
This paper explains the significance of meta-evaluation and systematically discusses the principles on which meta-evaluation is based (objectivity, wholeness and comprehensiveness, integration), the procedures to be followed, and the methods to be used (meta-research methods, systems methods, evaluation methods, and modeling methods). Meta-evaluation can make a valuable contribution to improving the scientific soundness and conscientiousness of decision-making by leaders at all levels, and to promoting coordinated and sustainable development across all areas of society and between society and nature.
10.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this importance sampling parameter is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
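The sketch below implements the basic MCML idea for a toy one-way random-effects model: latent effects are drawn from their conditional distribution given the data under a reference parameter value ψ, and the likelihood ratio L(θ)/L(ψ) is approximated by averaged importance ratios over a grid of θ. The model, the choice of ψ, and the grid are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of Monte Carlo maximum likelihood (Geyer-Thompson style) for a
# toy one-way random-effects model y_ij = b_i + e_ij, b_i ~ N(0, tau^2),
# e_ij ~ N(0, 1), estimating theta = tau^2.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Simulated data: I groups of J observations, true tau = 1.5, sigma = 1.
I, J, tau_true = 50, 5, 1.5
b = rng.normal(0.0, tau_true, size=I)
y = b[:, None] + rng.standard_normal((I, J))
ybar = y.mean(axis=1)

def mcml_profile(psi_tau2, theta_grid, m=2_000):
    """Approximate log L(theta) - log L(psi) over theta_grid."""
    # Conditional of b_i | y under psi: normal with precision J/sigma^2 + 1/psi.
    prec = J + 1.0 / psi_tau2
    cond_mean, cond_sd = J * ybar / prec, np.sqrt(1.0 / prec)
    b_sim = rng.normal(cond_mean, cond_sd, size=(m, I))        # m draws of all b_i
    out = []
    for theta in theta_grid:
        # Importance ratio f_theta(y, b) / f_psi(y, b): only the prior on b differs.
        log_ratio = (norm.logpdf(b_sim, 0.0, np.sqrt(theta))
                     - norm.logpdf(b_sim, 0.0, np.sqrt(psi_tau2))).sum(axis=1)
        # Log of the Monte Carlo average of importance ratios, computed stably.
        out.append(np.logaddexp.reduce(log_ratio) - np.log(m))
    return np.array(out)

grid = np.linspace(0.5, 5.0, 46)
approx = mcml_profile(psi_tau2=2.0, theta_grid=grid)
print("MCML estimate of tau^2:", grid[np.argmax(approx)])
```

Choosing ψ far from the truth inflates the variance of the importance ratios, which is the practical face of the consistency requirement discussed in the abstract.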