A total of 3,733 results found (search time: 15 ms).
91.
Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents is not as well established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of relevant data on major accidents, since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures, based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States.
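As a rough sketch of the information-analysis idea, the example below ranks binary precursor indicators by their empirical mutual information with a major-accident indicator. The data, the binary encoding, and the precursor names in the comments are illustrative assumptions, not the authors' dataset or exact procedure.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two discrete arrays."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))
            p_x = np.mean(x == xv)
            p_y = np.mean(y == yv)
            if p_xy > 0:
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

# Hypothetical yearly indicators: rows = years, columns = precursor events
# (e.g. kicks, lost circulation, equipment failures -- names are placeholders)
rng = np.random.default_rng(0)
precursors = rng.integers(0, 2, size=(40, 3))
major = precursors[:, 0] & rng.integers(0, 2, 40)   # toy dependence on the first precursor

scores = [mutual_information(precursors[:, j], major) for j in range(precursors.shape[1])]
print("most informative precursor index:", int(np.argmax(scores)), "scores:", np.round(scores, 3))
```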
92.
Ship navigation risk arises from the interaction of many individual risk factors. Drawing on HHM-RFRM (hierarchical holographic modeling and risk filtering, ranking, and management) theory, a multidimensional risk-scenario hazard measurement model for ship navigation is constructed, and ship navigation risk management is examined from the perspective of risk-factor coupling. Bayes' theorem is incorporated into the model to filter and rank navigation risk scenarios both qualitatively and quantitatively. Finally, the feasibility of the proposed method is verified with a case study of a cargo ship engaged in commercial operations at the Port of Dalian. Traditional risk assessment methods can only evaluate the impact of a single risk factor on the system; the proposed method overcomes this limitation and provides a new perspective for ship navigation risk management.
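A minimal sketch of the Bayes'-theorem filtering step is given below: prior scenario probabilities are re-weighted by an assumed likelihood of the observed evidence and then re-ranked. The scenario names and all probability values are hypothetical placeholders, not figures from the study.

```python
import numpy as np

# Illustrative prior probabilities for coupled risk scenarios (not from the paper)
scenarios = ["human error x heavy traffic",
             "mechanical failure x poor visibility",
             "cargo shift x rough sea"]
prior = np.array([0.5, 0.3, 0.2])

# Assumed likelihood of the observed evidence (e.g. a near-miss report) under each scenario
likelihood = np.array([0.2, 0.6, 0.3])

# Bayes' theorem: posterior proportional to prior times likelihood
posterior = prior * likelihood
posterior /= posterior.sum()

# Re-rank scenarios by posterior hazard
for name, p in sorted(zip(scenarios, posterior), key=lambda t: -t[1]):
    print(f"{name}: {p:.3f}")
```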
93.
Estimation and Properties of a Time-Varying EGARCH(1,1) in Mean Model
Time-varying GARCH-M models are commonly employed in econometrics and financial economics. Yet the recursive nature of the conditional variance makes likelihood analysis of these models computationally infeasible. This article outlines the issues and suggests employing a Markov chain Monte Carlo algorithm that allows the calculation of a classical estimator via the simulated EM algorithm, or of a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying-parameter EGARCH(1,1)-M model are derived. We discuss them and apply the suggested Bayesian estimation to three major stock markets.
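To make the recursion explicit, the sketch below simulates a constant-parameter EGARCH(1,1)-in-mean process, which illustrates why the conditional variance must be built up sequentially. The paper's time-varying-parameter version and its MCMC estimation are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def simulate_egarch_m(n, mu=0.0, lam=0.05, omega=-0.1, alpha=0.15,
                      gamma=-0.05, beta=0.95, seed=1):
    """Simulate returns from an EGARCH(1,1)-in-mean model with constant parameters.

    Mean equation:     r_t = mu + lam * sigma_t**2 + sigma_t * z_t
    Variance equation: log(sigma_t**2) = omega + alpha*(|z_{t-1}| - E|z|)
                                         + gamma*z_{t-1} + beta*log(sigma_{t-1}**2)
    """
    rng = np.random.default_rng(seed)
    e_abs_z = np.sqrt(2.0 / np.pi)            # E|z| for standard normal innovations
    log_sig2 = np.empty(n)
    r = np.empty(n)
    z_prev = 0.0
    log_sig2[0] = omega / (1.0 - beta)        # start at the unconditional level
    for t in range(n):
        if t > 0:                             # the recursion the abstract refers to
            log_sig2[t] = (omega + alpha * (abs(z_prev) - e_abs_z)
                           + gamma * z_prev + beta * log_sig2[t - 1])
        sig = np.exp(0.5 * log_sig2[t])
        z = rng.standard_normal()
        r[t] = mu + lam * sig**2 + sig * z    # "in mean": the variance feeds the return
        z_prev = z
    return r, np.exp(log_sig2)

returns, variances = simulate_egarch_m(1000)
print(returns[:5], variances[:5])
```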
94.
This paper focusses on computing the Bayesian reliability of components whose performance characteristics (degradation: fatigue and cracks) are observed during a specified period of time. Depending upon the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components are supposed to have different lifetimes, the rate of degradation is assumed to be a random variable. At a critical level of degradation, the time-to-failure distribution is obtained. The exponential and power degradation models are studied, and an exponential density function is assumed for the random variable representing the rate of degradation. The maximum likelihood estimator and Bayesian estimator of the parameter of the exponential density function, the predictive distribution, a hierarchical Bayes approach, and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates of the parameter. Illustrations are provided for train wheel degradation data.
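The following sketch illustrates two ingredients of this kind of approach under simplifying assumptions: the time to failure implied by an exponential degradation path with a random rate, and a conjugate gamma update for the parameter of the exponential rate density, used here as a stand-in for one Gibbs step. The data are simulated and the prior values are arbitrary; this is not the paper's full hierarchical analysis of the train wheel data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical exponential degradation paths: D(t) = d0 * exp(theta * t), with
# unit-specific rates theta drawn from an exponential density (as in the abstract).
d0, d_crit = 1.0, 5.0
mean_rate = 0.2                                   # mean of theta, so its rate parameter is 5
theta = rng.exponential(mean_rate, size=30)

# Time to failure: first time the degradation path reaches the critical level d_crit
t_fail = np.log(d_crit / d0) / theta

# Conjugate stand-in for one Gibbs update: with theta_i ~ Exp(lambda) and a
# Gamma(a0, b0) prior on lambda, the posterior is Gamma(a0 + n, b0 + sum(theta)).
a0, b0 = 1.0, 1.0
n = len(theta)
post_a, post_b = a0 + n, b0 + theta.sum()
print("posterior mean of the rate parameter:", round(post_a / post_b, 3))
print("median simulated time to failure:", round(np.median(t_fail), 2))
```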
95.
Word clouds constitute one of the most popular statistical tools for the visual analysis of text documents because they provide users with a quick and intuitive understanding of the content. Despite their popularity for visualizing single documents, word clouds are not appropriate to compare different text documents. Independently generating word clouds for each document leads to configurations where the same word is typically located in widely different positions. This makes it very difficult to compare two or more word clouds. This paper introduces COWORDS, a new stochastic algorithm to create multiple word clouds, including one for each document. The shared words in multiple documents are placed in the same position in all clouds. Similar documents produce similar and compact clouds, making it easier to simultaneously compare and interpret several word clouds. The algorithm is based on a probability distribution in which the most probable configurations are those with a desirable visual aspect, such as a low value for the total distance between the words in all clouds. The algorithm output is a set of word clouds that are randomly selected from this probability distribution. The selection procedure uses a Markov chain Monte Carlo simulation method. We present several examples that illustrate the performance and visual results that can be obtained by our algorithm.
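The sketch below illustrates the underlying idea of sampling layouts from a probability distribution that favors compact, non-overlapping clouds, using a simple Metropolis sampler for a single cloud. It is a simplified illustration only; the actual COWORDS algorithm, its energy function, and its handling of shared words across multiple clouds are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

words = ["bayesian", "mcmc", "cloud", "text", "visual", "document"]
sizes = np.array([3.0, 2.5, 2.0, 1.5, 1.5, 1.0])   # illustrative font sizes -> bounding radii
pos = rng.uniform(-5, 5, size=(len(words), 2))     # initial random layout

def energy(p):
    """Lower energy = more compact layout with fewer overlapping words."""
    compact = np.sum(np.linalg.norm(p, axis=1) * sizes)       # pull big words toward the centre
    overlap = 0.0
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            gap = np.linalg.norm(p[i] - p[j]) - 0.5 * (sizes[i] + sizes[j])
            overlap += max(0.0, -gap) ** 2                     # penalise overlapping circles
    return compact + 50.0 * overlap

# Metropolis sampling from pi(layout) proportional to exp(-energy): the most probable
# configurations are those with a desirable visual aspect, as in the abstract.
current = energy(pos)
for _ in range(20000):
    i = rng.integers(len(words))
    proposal = pos.copy()
    proposal[i] += rng.normal(scale=0.3, size=2)
    cand = energy(proposal)
    if rng.random() < np.exp(min(0.0, current - cand)):
        pos, current = proposal, cand

print(np.round(pos, 2))
```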
96.
Information available before unblinding regarding the success of confirmatory clinical trials is highly uncertain. Current techniques that use point estimates of auxiliary parameters to estimate the expected blinded sample size (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to the changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing Simple Adjustments or the EM algorithm. By contrast with these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of the auxiliary parameters. Future decisions about additional subjects are better informed due to our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies due to their ease of implementation.
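A simplified sketch of the posterior-based idea is shown below: a posterior for the nuisance standard deviation is formed from interim data, expected power is averaged over posterior draws, and the smallest per-arm sample size reaching a target expected power is reported. The interim data, the vague gamma prior, the assumed effect size, and the use of the blinded sample mean are all illustrative simplifications, not the authors' sequential MCMC procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical interim (blinded) data used to learn the nuisance standard deviation
interim = rng.normal(loc=0.3, scale=2.2, size=120)

# Conjugate posterior for the precision with a vague Gamma(0.001, 0.001) prior,
# treating the blinded sample mean as known (a simplification)
n_i = len(interim)
a_post = 0.001 + n_i / 2.0
b_post = 0.001 + 0.5 * np.sum((interim - interim.mean()) ** 2)
sigma_draws = 1.0 / np.sqrt(rng.gamma(a_post, 1.0 / b_post, size=5000))

delta, alpha = 0.8, 0.05                     # assumed treatment effect and two-sided alpha
z_crit = stats.norm.ppf(1 - alpha / 2)

def expected_power(n_per_arm):
    """Average frequentist power over posterior draws of the nuisance SD."""
    se = sigma_draws * np.sqrt(2.0 / n_per_arm)
    return np.mean(stats.norm.cdf(delta / se - z_crit))

for n in range(50, 301, 25):
    ep = expected_power(n)
    if ep >= 0.80:
        print(f"about {n} subjects per arm give expected power {ep:.2f}")
        break
```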
97.
Early phase 2 tuberculosis (TB) trials are conducted to characterize the early bactericidal activity (EBA) of anti-TB drugs. The EBA of anti-TB drugs has conventionally been calculated as the rate of decline in colony forming unit (CFU) count during the first 14 days of treatment. The measurement of CFU count, however, is expensive and prone to contamination. As an alternative to CFU count, time to positivity (TTP), which is a potential biomarker for the long-term efficacy of anti-TB drugs, can be used to characterize EBA. The current Bayesian nonlinear mixed-effects (NLME) regression model for TTP data, however, lacks robustness to the gross outliers that are often present in the data. The conventional way of handling such outliers involves their identification by visual inspection and their subsequent exclusion from the analysis. However, this process can be questioned because of its subjective nature. For this reason, we fitted robust versions of the Bayesian nonlinear mixed-effects regression model to a wide range of TTP datasets. The performance of the explored models was assessed through model comparison statistics and a simulation study. We conclude that fitting a robust model to TTP data obviates the need for explicit identification and subsequent "deletion" of outliers but ensures that gross outliers exert no undue influence on model fits. We recommend that the current practice of fitting conventional normal theory models be abandoned in favor of fitting robust models to TTP data.
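The sketch below illustrates the robustness argument on a toy scale: the same straight-line model is fitted to TTP-like data containing one gross outlier, once with normal errors and once with heavy-tailed Student-t errors. The data, the linear mean function, and the fixed degrees of freedom are assumptions for illustration; the paper's Bayesian NLME models are not reproduced.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)

# Hypothetical TTP-like data: time to positivity (hours) rising over days on treatment,
# with one gross outlier of the kind the abstract describes.
days = np.arange(0, 15)
ttp = 100 + 8 * days + rng.normal(0, 6, size=days.size)
ttp[7] = 40   # contaminated observation

def neg_loglik(params, dist):
    """Negative log likelihood for a straight-line fit with normal or Student-t errors."""
    a, b, log_s = params
    resid = (ttp - (a + b * days)) / np.exp(log_s)
    if dist == "normal":
        return -np.sum(stats.norm.logpdf(resid) - log_s)
    return -np.sum(stats.t.logpdf(resid, df=4) - log_s)   # fixed, heavy-tailed df

start = np.array([100.0, 5.0, np.log(10.0)])
fit_norm = optimize.minimize(neg_loglik, start, args=("normal",), method="Nelder-Mead")
fit_t = optimize.minimize(neg_loglik, start, args=("t",), method="Nelder-Mead")

print("slope, normal errors :", round(fit_norm.x[1], 2))   # pulled toward the outlier
print("slope, t(4) errors   :", round(fit_t.x[1], 2))      # much less affected by it
```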
98.
A Bayesian statistical temporal-prevalence-concentration model (TPCM) was built to assess the prevalence and concentration of pathogenic Campylobacter species in batches of fresh chicken and turkey meat at retail. The data set was collected from Finnish grocery stores in all seasons of the year. Observations at low concentration levels are often censored due to the limit of determination of the microbiological methods. This model utilized the potential of Bayesian methods to borrow strength from related samples in order to perform under heavy censoring. In this extreme case the majority of the observed batch-specific concentrations were below the limit of determination. A hierarchical structure was included in the model in order to take into account the within-batch and between-batch variability, which may have a significant impact on the sample outcome depending on the sampling plan. Temporal changes in the prevalence of Campylobacter were modeled using a Markovian time series. The proposed model is adaptable to other pathogens if the same type of data set is available. The computation of the model was performed using OpenBUGS software.
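To illustrate how heavily censored concentrations can still be used, the sketch below fits a censored-normal model by maximum likelihood: detected values contribute density terms and non-detects contribute the probability mass below the limit of determination. It is a non-hierarchical, non-Bayesian simplification with simulated data, not the TPCM itself.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)

# Hypothetical within-batch log10 concentrations, heavily left-censored at the
# limit of determination (LOD), as described for the campylobacter data.
true_mu, true_sd, lod = 0.5, 1.0, 1.0            # log10 scale
conc = rng.normal(true_mu, true_sd, size=200)
observed = np.where(conc >= lod, conc, np.nan)   # NaN marks "below limit of determination"
detected = ~np.isnan(observed)

def neg_loglik(params):
    """Censored-normal likelihood: density for detects, CDF mass for non-detects."""
    mu, log_sd = params
    sd = np.exp(log_sd)
    ll_detect = stats.norm.logpdf(observed[detected], mu, sd).sum()
    ll_censor = (~detected).sum() * stats.norm.logcdf(lod, mu, sd)
    return -(ll_detect + ll_censor)

fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sd_hat = fit.x[0], np.exp(fit.x[1])
print(f"censored fraction: {np.mean(~detected):.2f}")
print(f"estimated mean {mu_hat:.2f} and SD {sd_hat:.2f} (true: {true_mu}, {true_sd})")
```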
99.
The role of information processing in understanding people's responses to risk information has recently received substantial attention. One limitation of this research concerns the unavailability of a validated questionnaire of information processing. This article presents two studies in which we describe the development and validation of the Information-Processing Questionnaire to meet that need. Study 1 describes the development and initial validation of the questionnaire. Participants were randomized to either a systematic processing or a heuristic processing condition, after which they completed a manipulation check and the initial 15-item questionnaire; they completed the questionnaire again two weeks later. The questionnaire was subjected to factor, reliability, and validity analyses at both measurement times for purposes of cross-validation of the results. A two-factor solution was observed, representing a systematic processing and a heuristic processing subscale. The resulting scale showed good reliability and validity, with the systematic condition scoring significantly higher on the systematic subscale and the heuristic processing condition scoring significantly higher on the heuristic subscale. Study 2 sought to further validate the questionnaire in a field study. Results of the second study corresponded with those of Study 1 and provided further evidence of the validity of the Information-Processing Questionnaire. The availability of this information-processing scale will be a valuable asset for future research and may provide researchers with new research opportunities.
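As a small illustration of the reliability side of such a validation, the sketch below computes Cronbach's alpha for two hypothetical subscales; the item counts, response scale, and simulated data are assumptions and do not correspond to the actual Information-Processing Questionnaire.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical 5-point Likert responses to two subscales (item counts are illustrative)
rng = np.random.default_rng(7)
trait = rng.normal(size=(200, 1))
systematic_items = np.clip(np.round(3 + trait + rng.normal(0, 0.8, size=(200, 7))), 1, 5)
heuristic_items = np.clip(np.round(3 - trait + rng.normal(0, 0.8, size=(200, 8))), 1, 5)

print("alpha, systematic subscale:", round(cronbach_alpha(systematic_items), 2))
print("alpha, heuristic subscale :", round(cronbach_alpha(heuristic_items), 2))
```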
100.
Ordinary differential equations (ODEs) are commonly used to model dynamic processes in applied sciences such as biology, engineering, and physics, among many other areas. In these models the parameters are usually unknown, and thus they are often specified artificially or empirically. Alternatively, a feasible approach is to estimate the parameters from observed data. In this study, we propose a Bayesian penalized B-spline approach to estimate the parameters and initial values of ODEs used in epidemiology. We evaluated the efficiency of the proposed method through simulations, using a Markov chain Monte Carlo algorithm for the Kermack–McKendrick model. The proposed approach is also illustrated with a real application to the transmission dynamics of the hepatitis C virus in mainland China.
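For context, the sketch below forward-simulates the Kermack–McKendrick SIR equations whose parameters and initial values the proposed approach would estimate; the transmission and recovery rates used here are illustrative, and the Bayesian penalized B-spline estimation itself is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Kermack-McKendrick SIR equations with transmission rate beta and recovery rate gamma."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

# Illustrative parameter values and initial state (fractions of the population)
beta, gamma = 0.4, 0.1
y0 = [0.99, 0.01, 0.0]
sol = solve_ivp(sir, (0, 160), y0, args=(beta, gamma), dense_output=True)

t = np.linspace(0, 160, 9)
s, i, r = sol.sol(t)
print("day   S      I      R")
for tk, sk, ik, rk in zip(t, s, i, r):
    print(f"{tk:4.0f} {sk:6.3f} {ik:6.3f} {rk:6.3f}")
```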