Subscription full text: 2,385 articles
Free access: 106 articles
Free (domestic): 9 articles
By subject: Management 266; Demography 11; Collected series 11; Theory and methodology 32; General 65; Sociology 24; Statistics 2,091
Articles by year: 2023: 35; 2022: 27; 2021: 39; 2020: 38; 2019: 95; 2018: 113; 2017: 200; 2016: 90; 2015: 84; 2014: 118; 2013: 536; 2012: 201; 2011: 85; 2010: 84; 2009: 102; 2008: 74; 2007: 78; 2006: 73; 2005: 58; 2004: 58; 2003: 38; 2002: 33; 2001: 29; 2000: 35; 1999: 26; 1998: 28; 1997: 21; 1996: 9; 1995: 8; 1994: 14; 1993: 7; 1992: 11; 1991: 13; 1990: 3; 1989: 5; 1988: 7; 1987: 3; 1986: 3; 1985: 4; 1984: 2; 1983: 3; 1982: 5; 1981: 1; 1980: 2; 1979: 1; 1975: 1
Sort order: 2,500 results in total; search took 218 ms
61.
Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents is not as well established, partly because of the complexity of such accidents and partly because of the low probabilities involved. The issue of low probabilities normally arises from the scarcity of relevant data on major accidents, since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures, based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we introduce an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident are then used to establish predictive scenarios to foresee the occurrence of the major accident. We verify the methodology using offshore blowouts in the Gulf of Mexico and then demonstrate its application to dam breaches in the United States.
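As a rough, self-contained illustration of using information analysis to rank candidate precursors (the precursor names, probabilities, and data below are hypothetical, and the paper's actual information-theoretic procedure may differ), the following Python sketch scores each binary precursor indicator by its mutual information with the major-accident indicator and picks the most informative one:

```python
import numpy as np

def mutual_info(x, y):
    """Mutual information (in nats) between two binary indicator series."""
    mi = 0.0
    for xi in (0, 1):
        for yi in (0, 1):
            pxy = np.mean((x == xi) & (y == yi))
            px, py = np.mean(x == xi), np.mean(y == yi)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
n = 2000
# hypothetical records: did the major accident occur, and did each precursor occur
major = rng.binomial(1, 0.05, n)
precursors = {
    "well_kick":   np.where(major == 1, rng.binomial(1, 0.8, n), rng.binomial(1, 0.1, n)),
    "mud_loss":    np.where(major == 1, rng.binomial(1, 0.4, n), rng.binomial(1, 0.2, n)),
    "equip_fault": rng.binomial(1, 0.3, n),   # unrelated to the major event
}

scores = {name: mutual_info(ind, major) for name, ind in precursors.items()}
best = max(scores, key=scores.get)
print(scores, "-> most informative precursor:", best)
```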
62.
Ship navigation risk arises from the interaction of multiple risk factors. Combining HHM-RFRM theory, we construct a hazard-measurement model for multidimensional ship-navigation risk scenarios and examine ship navigation risk management from the perspective of risk-factor coupling. Bayes' theorem is incorporated into this model to filter and rate ship-navigation risk scenarios both qualitatively and quantitatively. Finally, a cargo ship engaged in commercial activity at the Port of Dalian is used as a case study to verify the feasibility of the proposed method. Traditional risk assessment methods can evaluate only the impact of a single risk factor on the system; the proposed method overcomes this limitation and offers a new perspective for ship navigation risk management.
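To make the Bayes-theorem step concrete (the scenario names, priors, and likelihoods below are entirely hypothetical and only illustrate the filtering and rating idea, not the HHM-RFRM model itself), here is a minimal Python sketch that updates and re-ranks scenario probabilities given an observed condition:

```python
import numpy as np

# hypothetical navigation-risk scenarios with prior probabilities from a qualitative filter
scenarios = ["collision", "grounding", "machinery failure", "fire"]
prior = np.array([0.15, 0.10, 0.05, 0.02])

# likelihood of the observed condition (e.g., dense fog plus heavy traffic) under each scenario
likelihood = np.array([0.60, 0.40, 0.10, 0.05])

# Bayes' theorem: posterior proportional to prior times likelihood, normalized over the set
posterior = prior * likelihood
posterior /= posterior.sum()

for name, p in sorted(zip(scenarios, posterior), key=lambda s: -s[1]):
    print(f"{name:20s} posterior = {p:.3f}")
```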
63.
Estimation and Properties of a Time-Varying EGARCH(1,1) in Mean Model   (total citations: 1; self-citations: 1; citations by others: 0)
Time-varying GARCH-M models are commonly employed in econometrics and financial economics. Yet the recursive nature of the conditional variance makes likelihood analysis of these models computationally infeasible. This article outlines the issues and suggests employing a Markov chain Monte Carlo algorithm that allows the calculation of a classical estimator via the simulated EM algorithm, or a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying-parameter EGARCH(1,1)-M model are derived and discussed, and the suggested Bayesian estimation is applied to three major stock markets.
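To illustrate the recursive conditional-variance structure that makes one likelihood pass cost O(T), here is a minimal Python sketch of an EGARCH(1,1)-in-mean Gaussian log-likelihood with fixed (non-time-varying) parameters; the specific EGARCH form and parameter names are illustrative assumptions, not the authors' exact specification:

```python
import numpy as np

def egarch_m_loglik(y, mu, lam, omega, alpha, gamma, beta):
    """Gaussian log-likelihood of an illustrative EGARCH(1,1)-in-mean model:
        y_t = mu + lam * sigma_t^2 + eps_t,   eps_t = sigma_t * z_t,   z_t ~ N(0, 1)
        log sigma_t^2 = omega + alpha * (|z_{t-1}| - sqrt(2/pi)) + gamma * z_{t-1}
                        + beta * log sigma_{t-1}^2
    """
    T = len(y)
    logsig2 = np.empty(T)
    logsig2[0] = omega / (1.0 - beta)          # start at the unconditional level
    loglik = 0.0
    for t in range(T):
        sig2 = np.exp(logsig2[t])
        eps = y[t] - mu - lam * sig2           # "in mean": variance enters the mean equation
        z = eps / np.sqrt(sig2)
        loglik += -0.5 * (np.log(2 * np.pi) + logsig2[t] + z ** 2)
        if t + 1 < T:                          # recursion: one O(1) update per observation
            logsig2[t + 1] = (omega + alpha * (abs(z) - np.sqrt(2 / np.pi))
                              + gamma * z + beta * logsig2[t])
    return loglik

# toy usage on simulated white noise
rng = np.random.default_rng(0)
print(egarch_m_loglik(rng.normal(size=500), mu=0.0, lam=0.05,
                      omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95))
```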
64.
This paper focuses on computing the Bayesian reliability of components whose performance characteristics (degradation, e.g., fatigue and cracks) are observed during a specified period of time. Depending upon the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components are expected to have different lifetimes, the rate of degradation is treated as a random variable. At a critical level of degradation, the time-to-failure distribution is obtained. Exponential and power degradation models are studied, and an exponential density function is assumed for the random variable representing the rate of degradation. The maximum likelihood estimator and Bayesian estimator of the parameter of the exponential density, the predictive distribution, a hierarchical Bayes approach, and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates of the parameter. Illustrations are provided for train wheel degradation data.
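As a concrete toy version of this setup (the linear degradation path, the conjugate gamma prior, and all numbers are illustrative assumptions, not the paper's exact exponential/power formulation), the Python sketch below assumes unit-specific degradation rates drawn from an exponential density, estimates its parameter by maximum likelihood and by a conjugate Bayesian update, and derives the time-to-failure at a critical degradation level:

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate unit-specific degradation rates theta_i ~ Exponential(rate = lam_true)
lam_true, n_units, d_crit = 2.0, 30, 10.0
theta = rng.exponential(scale=1.0 / lam_true, size=n_units)   # observed (fitted) rates

# maximum likelihood estimate of lambda
lam_mle = 1.0 / theta.mean()

# Bayesian estimate: a Gamma(a, b) prior on lambda is conjugate to the exponential density
a, b = 0.5, 0.5                                   # weak prior (assumed)
a_post, b_post = a + n_units, b + theta.sum()
lam_bayes = a_post / b_post                       # posterior mean of lambda

# predictive time to failure: with a linear path D(t) = theta * t, failure when D reaches d_crit
lam_draws = rng.gamma(shape=a_post, scale=1.0 / b_post, size=5000)
theta_new = rng.exponential(scale=1.0 / lam_draws)             # a new unit's degradation rate
t_fail = d_crit / theta_new                                    # time the path hits d_crit

print(f"MLE of lambda      : {lam_mle:.3f}")
print(f"Posterior mean     : {lam_bayes:.3f}")
print(f"Median failure time: {np.median(t_fail):.2f}")
```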
65.
Information available before unblinding regarding the success of confirmatory clinical trials is highly uncertain. Current techniques that use point estimates of auxiliary parameters to estimate the expected blinded sample size (i) fail to describe the range of likely sample sizes obtained once the anticipated data are observed, and (ii) fail to adjust to a changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing Simple Adjustments or the EM algorithm. By contrast with these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of the auxiliary parameters. Future decisions about additional subjects are better informed because of our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies because of their ease of implementation.
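A very small sketch of the underlying idea (model, prior, and design inputs are all assumed, and this is not the paper's sequential algorithm): sample the nuisance variance from its posterior given blinded interim data and propagate each draw through the standard two-sample normal sample-size formula, so that the re-estimated sample size comes with a posterior interval rather than a single point:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# blinded interim data: pooled responses with treatment labels unknown (simulated here)
interim = np.concatenate([rng.normal(0.0, 1.2, 60), rng.normal(0.4, 1.2, 60)])
n_int, s2 = len(interim), interim.var(ddof=1)

# posterior of the variance under a vague inverse-gamma prior (assumed model):
# sigma^2 | data ~ Inv-Gamma(a0 + n/2, b0 + (n - 1) * s^2 / 2)
a0, b0 = 0.001, 0.001
sigma2_draws = stats.invgamma.rvs(a0 + n_int / 2,
                                  scale=b0 + (n_int - 1) * s2 / 2,
                                  size=10000, random_state=rng)

# per-arm sample size for a two-sample z-test (delta, alpha, power are design inputs)
delta, alpha, power = 0.4, 0.05, 0.80
z = stats.norm.ppf
n_draws = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma2_draws / delta ** 2

lo, hi = np.percentile(n_draws, [2.5, 97.5])
print(f"posterior median n per arm: {np.median(n_draws):.0f} (95% interval {lo:.0f}-{hi:.0f})")
```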
66.
Early phase 2 tuberculosis (TB) trials are conducted to characterize the early bactericidal activity (EBA) of anti-TB drugs. The EBA of anti-TB drugs has conventionally been calculated as the rate of decline in colony forming unit (CFU) count during the first 14 days of treatment. The measurement of CFU count, however, is expensive and prone to contamination. As an alternative to CFU count, time to positivity (TTP), which is a potential biomarker for the long-term efficacy of anti-TB drugs, can be used to characterize EBA. The current Bayesian nonlinear mixed-effects (NLME) regression model for TTP data, however, lacks robustness to the gross outliers that are often present in the data. The conventional way of handling such outliers involves identifying them by visual inspection and then excluding them from the analysis. This process can be questioned, however, because of its subjective nature. For this reason, we fitted robust versions of the Bayesian nonlinear mixed-effects regression model to a wide range of TTP datasets. The performance of the explored models was assessed through model comparison statistics and a simulation study. We conclude that fitting a robust model to TTP data obviates the need for explicit identification and subsequent "deletion" of outliers, while ensuring that gross outliers exert no undue influence on the model fits. We recommend that the current practice of fitting conventional normal-theory models be abandoned in favor of fitting robust models to TTP data.
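One common way to make a regression model robust to gross outliers, sketched below under assumed details (a simple linear trend standing in for a TTP-versus-time curve, Student-t errors with 4 degrees of freedom, maximum likelihood rather than the paper's Bayesian NLME fit), is to replace the normal residual distribution with a heavier-tailed Student-t: a single gross outlier pulls the normal-likelihood fit noticeably, while the t-likelihood fit is barely affected.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)

# simple increasing trend with one gross outlier
t = np.arange(0, 15)
y = 5.0 + 0.8 * t + rng.normal(0, 0.5, t.size)
y[10] += 25.0                                   # contaminate one observation

def neg_loglik(params, robust):
    a, b, log_s = params
    resid = y - (a + b * t)
    if robust:                                  # Student-t errors with 4 df (assumed)
        return -stats.t.logpdf(resid, df=4, scale=np.exp(log_s)).sum()
    return -stats.norm.logpdf(resid, scale=np.exp(log_s)).sum()

for robust in (False, True):
    fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], args=(robust,))
    label = "Student-t" if robust else "normal   "
    print(f"{label} likelihood: intercept = {fit.x[0]:.2f}, slope = {fit.x[1]:.2f}")
```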
67.
Mixtures of factor analyzers is a useful model-based clustering method that can avoid the curse of dimensionality in high-dimensional clustering. However, this approach is sensitive both to diverse non-normalities of the marginal variables and to outliers, which are commonly observed in multivariate experiments. We propose mixtures of Gaussian copula factor analyzers (MGCFA) for clustering high-dimensional data. This model has two advantages: (1) it allows different marginal distributions, which gives the mixture model greater fitting flexibility; (2) it avoids the curse of dimensionality by embedding the factor-analytic structure in the component-correlation matrices of the mixture distribution. An EM algorithm is developed for fitting the MGCFA model. The proposed method is free of the curse of dimensionality and allows whichever parametric marginal distribution fits the data best. It is applied to both synthetic data and a microarray gene expression dataset, and it shows better performance than several existing methods.
68.
A Bayesian statistical temporal-prevalence-concentration model (TPCM) was built to assess the prevalence and concentration of pathogenic Campylobacter species in batches of fresh chicken and turkey meat at retail. The data set was collected from Finnish grocery stores in all seasons of the year. Observations at low concentration levels are often censored due to the limit of determination of the microbiological methods. The model exploits the ability of Bayesian methods to borrow strength from related samples in order to perform under heavy censoring: in this extreme case, the majority of the observed batch-specific concentrations were below the limit of determination. A hierarchical structure was included in the model to take into account the within-batch and between-batch variability, which may have a significant impact on the sample outcome depending on the sampling plan. Temporal changes in the prevalence of Campylobacter were modeled using a Markovian time series. The proposed model is adaptable to other pathogens if the same type of data set is available. The computation of the model was performed using the OpenBUGS software.
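A bare-bones illustration of the censoring ingredient (the log-normal model form, the limit of determination, and the data are all assumed here, and this is a maximum-likelihood sketch rather than the hierarchical Bayesian TPCM): concentrations below the limit of determination contribute the cumulative probability at that limit to the likelihood, while detected values contribute the density.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)

# simulated log10 concentrations; values below the limit of determination (LOD) are censored
true_mu, true_sd, lod = 0.2, 0.8, 0.5
x = rng.normal(true_mu, true_sd, 200)
detected = x >= lod

def neg_loglik(params):
    mu, log_sd = params
    sd = np.exp(log_sd)
    ll = stats.norm.logpdf(x[detected], mu, sd).sum()           # detected values: density
    ll += np.sum(~detected) * stats.norm.logcdf(lod, mu, sd)    # censored values: P(X < LOD)
    return -ll

fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0])
print(f"estimated mean = {fit.x[0]:.2f}, sd = {np.exp(fit.x[1]):.2f}, "
      f"censored fraction = {np.mean(~detected):.2f}")
```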
69.
The class of bivariate copulas that are invariant under truncation with respect to one variable is considered. A simulation algorithm for the members of the class and a novel construction method are presented. Moreover, inspired by a stochastic interpretation of the members of such a class, a procedure is suggested for checking whether the dependence structure of a given data set is truncation invariant. The overall performance of the procedure is illustrated on both simulated and real data.
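For orientation, the Python sketch below simulates from a Clayton copula by the conditional-distribution method; the Clayton family is the textbook example of a copula whose dependence structure is preserved under truncation with respect to one variable, but the exact class and construction studied in the paper may differ, so treat this purely as a generic illustration. The crude check at the end compares a rank correlation on the full sample with the same statistic on a subsample truncated in the first variable.

```python
import numpy as np

def clayton_sample(n, theta, rng):
    """Simulate (U1, U2) from a Clayton copula via the conditional-distribution method:
    draw U1 and W ~ U(0, 1), then invert the conditional copula C_{2|1}(. | U1) at W."""
    u1 = rng.uniform(size=n)
    w = rng.uniform(size=n)
    u2 = (u1 ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u1, u2

def spearman(a, b):
    """Spearman rank correlation computed from raw ranks."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(5)
u1, u2 = clayton_sample(50_000, theta=2.0, rng=rng)

keep = u1 < 0.3                                   # truncate with respect to the first variable
print(f"Spearman rho, full sample      : {spearman(u1, u2):.3f}")
print(f"Spearman rho, truncated sample : {spearman(u1[keep], u2[keep]):.3f}")
```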
70.
Ordinary differential equations (ODEs) are widely used to model dynamic processes in applied sciences such as biology, engineering, and physics, among many other areas. In these models, the parameters are usually unknown and thus are often specified artificially or empirically. A feasible alternative is to estimate the parameters from observed data. In this study, we propose a Bayesian penalized B-spline approach to estimate the parameters and initial values of ODEs used in epidemiology. We evaluate the efficiency of the proposed method in simulations of the Kermack–McKendrick model, using a Markov chain Monte Carlo algorithm. The proposed approach is also illustrated with a real application to the transmission dynamics of the hepatitis C virus in mainland China.
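As background for the Kermack–McKendrick model mentioned above, the sketch below solves the SIR equations with scipy and recovers the transmission and recovery rates from noisy prevalence data by ordinary least squares; all numbers are made up, and this is a deliberately simple stand-in for, not an implementation of, the paper's Bayesian penalized B-spline estimator.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sir(t, y, beta, gamma):
    """Kermack–McKendrick SIR model: dS = -beta*S*I, dI = beta*S*I - gamma*I, dR = gamma*I."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

t_obs = np.linspace(0, 60, 30)
y0 = [0.99, 0.01, 0.0]
true_beta, true_gamma = 0.35, 0.10

# simulate "observed" infectious prevalence with measurement noise
rng = np.random.default_rng(6)
truth = solve_ivp(sir, (0, 60), y0, t_eval=t_obs, args=(true_beta, true_gamma)).y[1]
i_obs = truth + rng.normal(0, 0.005, truth.size)

def residuals(params):
    beta, gamma = params
    sol = solve_ivp(sir, (0, 60), y0, t_eval=t_obs, args=(beta, gamma))
    return sol.y[1] - i_obs

fit = least_squares(residuals, x0=[0.5, 0.2], bounds=([0, 0], [2, 1]))
print(f"estimated beta = {fit.x[0]:.3f}, gamma = {fit.x[1]:.3f}")
```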