1.
Longitudinal studies are the gold standard of empirical work stress research whenever experiments are not plausible. Frequently, scales are used to assess risk factors and their consequences, and cross-lagged effects are estimated to determine possible risks. Methods to translate cross-lagged effects into risk ratios to facilitate risk assessment do not yet exist, which creates a divide between psychological and epidemiological work stress research. The aim of the present paper is to demonstrate how cross-lagged effects can be used to assess the risk ratio of different levels of psychosocial safety climate (PSC) in organisations, an important psychosocial risk for the development of depression. We used available longitudinal evidence from the Australian Workplace Barometer (N = 1905) to estimate cross-lagged effects of PSC on depression. We applied continuous time modelling to obtain time-scalable cross effects. These were further investigated in a 4-year Monte Carlo simulation, which translated them into 4-year incidence rates. Incidence rates were determined by relying on clinically relevant 2-year periods of depression. We suggest a critical value of PSC = 26 (corresponding to −1.4 SD), which is indicative of a more than 100% increase in the incidence of persistent depressive disorder in 4-year periods compared to average levels of PSC across 4 years.
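As a reading aid, here is a minimal Monte Carlo sketch of the core idea: hold PSC fixed at a given standardized level, propagate a cross-lagged effect on depression over repeated waves, and compare incidence between PSC levels to obtain a risk ratio. All coefficients, the clinical threshold, and the annual-wave discretisation are illustrative placeholders, not the paper's estimates.

```python
# Hypothetical sketch: translating a cross-lagged effect into a risk ratio
# via simulation. Coefficients and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def simulate_incidence(psc_z, n=100_000, waves=4, ar=0.6, cross=-0.15):
    """Fraction of simulated persons crossing a depression threshold.

    psc_z : standardized PSC level, held fixed across waves
    ar    : autoregressive effect of depression on itself (assumed)
    cross : cross-lagged effect of PSC on depression (assumed; negative = protective)
    """
    dep = rng.normal(size=n)                  # baseline depression, z-scored
    case = np.zeros(n, dtype=bool)
    for _ in range(waves):
        dep = (ar * dep + cross * psc_z
               + rng.normal(scale=np.sqrt(1 - ar**2), size=n))
        case |= dep > 1.5                     # illustrative clinical cutoff
    return case.mean()

rate_low = simulate_incidence(psc_z=-1.4)     # PSC = 26, i.e. -1.4 SD
rate_avg = simulate_incidence(psc_z=0.0)      # average PSC
print(f"4-year incidence, low PSC:  {rate_low:.3f}")
print(f"4-year incidence, mean PSC: {rate_avg:.3f}")
print(f"risk ratio: {rate_low / rate_avg:.2f}")
```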
2.
This paper proposes a new hysteretic vector autoregressive (HVAR) model in which regime switching may be delayed whenever the hysteresis variable lies in a hysteresis zone. We incorporate an adapted multivariate Student-t distribution, obtained by amending the scale mixtures of normal distributions, which allows a different degree of freedom for each component time series. We use the proposed model to test for a causal relationship between any two target time series and, using posterior odds ratios, overcome the limitations of the classical approach to multiple testing. Both simulated and real examples illustrate the suggested methods. We apply the proposed HVAR model to investigate the causal relationship between the quarterly growth rates of gross domestic product of the United Kingdom and the United States, and we also check the pairwise lagged dependence of daily PM2.5 levels in three districts of Taipei.
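A minimal sketch of the hysteretic switching rule the abstract describes: the regime changes only when the hysteresis variable leaves the zone, and the previous regime is retained inside it (the delayed switch). The thresholds and data below are invented for illustration.

```python
# Hysteretic regime assignment: switch only outside the zone [r_lo, r_hi].
import numpy as np

def hysteretic_regimes(z, r_lo=-0.5, r_hi=0.5, start=0):
    regime, out = start, []
    for zt in z:
        if zt <= r_lo:
            regime = 0          # lower regime triggered
        elif zt >= r_hi:
            regime = 1          # upper regime triggered
        # r_lo < zt < r_hi: inside the hysteresis zone, keep the old regime
        out.append(regime)
    return np.array(out)

z = np.array([-1.0, -0.2, 0.3, 0.8, 0.4, -0.1, -0.7])
print(hysteretic_regimes(z))    # -> [0 0 0 1 1 1 0]
```

Note how the values 0.3, 0.4 and -0.1 inside the zone inherit the prevailing regime rather than forcing a switch, which is exactly the delay mechanism.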
3.
Risk Analysis, 2018, 38(8): 1576–1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method therefore appear to be attractive, practical alternatives for evaluating the uncertainty in the output of fault trees and similar multilinear models.
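The following sketch illustrates the lognormal idea in the simplest special case, an AND gate, where the top event probability is a product of independent basic-event probabilities and is therefore exactly lognormal; the article's general multilinear development is not reproduced here. All parameters are made up, and a Wilks 95%/95% bound (maximum of 59 samples, since 1 − 0.95^59 ≈ 0.9515) is shown alongside.

```python
# Closed-form lognormal percentile vs. Monte Carlo vs. Wilks bound for an
# AND gate with lognormal basic-event probabilities. Illustrative values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mus = np.array([-6.0, -5.0, -4.5])     # assumed log-means of basic events
sigmas = np.array([0.8, 0.6, 1.0])     # assumed log-sds of basic events

# Closed form: on the log scale the product is a sum of independent normals.
mu_top = mus.sum()
sigma_top = np.sqrt((sigmas**2).sum())
p95_closed = np.exp(mu_top + stats.norm.ppf(0.95) * sigma_top)

# Full Monte Carlo estimate of the 95th percentile.
samples = np.exp(rng.normal(mus, sigmas, size=(100_000, 3))).prod(axis=1)
p95_mc = np.quantile(samples, 0.95)

# Wilks: the maximum of 59 samples is a 95%/95% upper tolerance bound.
wilks = np.exp(rng.normal(mus, sigmas, size=(59, 3))).prod(axis=1).max()

print(f"closed form 95th pct: {p95_closed:.3e}")
print(f"Monte Carlo 95th pct: {p95_mc:.3e}")
print(f"Wilks upper bound:    {wilks:.3e}")
```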
4.
This article proposes a new approach to analyzing multiple vector autoregressive (VAR) models, yielding a newly constructed matrix autoregressive (MtAR) model based on a matrix-variate normal distribution with two covariance matrices. The MtAR model generalizes VAR models: the two covariance matrices allow its extension to a structural MtAR analysis. The proposed MtAR model can also accommodate different lag orders across VAR systems, which gives the model more flexibility. Estimation results from a simulation study and an empirical macroeconomic application show favorable performance of the proposed models and method.
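A hedged sketch of the error structure the abstract describes: a matrix-variate normal with a row covariance U (across VAR systems) and a column covariance V (across series), sampled via the standard identity E = M + L_U Z L_V' with L_U L_U' = U and L_V L_V' = V. The dimensions, covariance matrices, and the shared lag-1 coefficient are illustrative, not estimates from the paper.

```python
# Simulate a toy MtAR(1)-style path with matrix-variate normal errors.
import numpy as np

rng = np.random.default_rng(2)

def matrix_normal(M, U, V):
    """Draw from MN(M, U, V): row covariance U, column covariance V."""
    Lu, Lv = np.linalg.cholesky(U), np.linalg.cholesky(V)
    Z = rng.normal(size=M.shape)
    return M + Lu @ Z @ Lv.T

m, k = 3, 2                          # 3 VAR systems, 2 series each
U = 0.5 * np.eye(m) + 0.5            # dependence across systems (assumed)
V = np.array([[1.0, 0.3],            # dependence across series (assumed)
              [0.3, 1.0]])
A = 0.4 * np.eye(k)                  # shared lag-1 coefficient (assumed)

Y = np.zeros((m, k))
for _ in range(100):
    Y = Y @ A.T + matrix_normal(np.zeros((m, k)), U, V)
print(Y.round(3))
```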
5.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary reference value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous work on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when the reference value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, the reference value is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
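A toy sketch of the MCML idea on a conjugate model, y | x ~ N(x, 1) with latent x ~ N(θ, 1): the likelihood ratio L(θ)/L(ψ) is approximated by an importance-sampling average of complete-data density ratios. One common way to realise the importance ratios, used here, is to simulate the latent variable from its conditional distribution given y at the reference value ψ. The model, the data point, and ψ are illustrative, not from the paper.

```python
# MCML on a one-observation Gaussian latent variable model.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
y = 1.7            # observed data point (illustrative)
psi = 0.0          # reference parameter value
m = 50_000         # number of simulations

# Conditional of x given y at psi is N((y + psi)/2, 1/2) for this toy model.
x = rng.normal((y + psi) / 2, np.sqrt(0.5), size=m)

def log_complete(theta):
    return stats.norm.logpdf(y, x, 1) + stats.norm.logpdf(x, theta, 1)

def neg_log_lik_ratio(theta):
    w = log_complete(theta) - log_complete(psi)   # log importance ratios
    return -np.log(np.mean(np.exp(w)))            # -log L(theta)/L(psi)

theta_hat = optimize.minimize_scalar(neg_log_lik_ratio,
                                     bounds=(-5, 5), method="bounded").x
print(f"MCML estimate: {theta_hat:.3f}  (exact MLE is y = {y})")
```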
6.
Generalized method of moments (GMM) is used to develop tests for discriminating among discrete distributions in the two-parameter family of Katz distributions. Relationships involving moments are exploited to obtain identifying and over-identifying restrictions. The asymptotic relative efficiencies of tests based on GMM are analyzed using the local power approach and the approximate Bahadur efficiency. The paper also gives results of Monte Carlo experiments designed to check the validity of the theoretical findings and to shed light on the small sample properties of the proposed tests. Extensions of the results to compound Poisson alternative hypotheses are discussed.
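To make the over-identification idea concrete, here is a generic two-step GMM sketch, not the paper's specific test statistics. The Katz recurrence p_{k+1}/p_k = (a + bk)/(k+1) implies mean a/(1−b), variance a/(1−b)², and the third-central-moment identity μ₃ = σ²(2σ²/μ − 1), which holds across the family (Poisson, binomial, negative binomial). Three moment conditions with two parameters leave one over-identifying restriction for a J-test.

```python
# Two-step GMM fit and J-test for the Katz family on simulated Poisson data.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
x = rng.poisson(3.0, size=2_000)        # Poisson = Katz member with b = 0

def g(theta, x):
    """Moment conditions implied by the Katz recurrence."""
    a, b = theta
    mu = a / (1 - b)
    var = a / (1 - b) ** 2
    m3 = var * (2 * var / mu - 1)        # Katz third-central-moment identity
    d = x - mu
    return np.column_stack([d, d**2 - var, d**3 - m3])

def gmm_obj(theta, x, W):
    gbar = g(theta, x).mean(axis=0)
    return gbar @ W @ gbar

# Step 1: identity weighting, started at method-of-moments values.
b0 = 1 - x.mean() / x.var()
start = np.array([x.mean() * (1 - b0), b0])
step1 = optimize.minimize(gmm_obj, start, args=(x, np.eye(3)),
                          method="Nelder-Mead").x

# Step 2: optimal weighting and the over-identification J-test (1 df).
W = np.linalg.inv(np.cov(g(step1, x), rowvar=False))
step2 = optimize.minimize(gmm_obj, step1, args=(x, W),
                          method="Nelder-Mead").x
J = len(x) * gmm_obj(step2, x, W)
print(f"a, b = {step2.round(3)},  J = {J:.2f},  "
      f"p = {1 - stats.chi2.cdf(J, df=1):.3f}")
```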
7.
Bayesian analysis of discrete time warranty data
The analysis of warranty claim data, and their use for prediction, has been a topic of active research in recent years. Field data comprising numbers of units returned under guarantee are examined, covering both situations in which the ages of the failed units are known and in which they are not. The latter case poses particular computational problems for likelihood-based methods because of the large number of feasible failure patterns that must be included as contributions to the likelihood function. For prediction of future warranty exposure, which is of central concern to the manufacturer, the Bayesian approach is adopted. For this, Markov chain Monte Carlo methodology is developed.
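A deliberately simplified MCMC sketch in the spirit of the approach, not the paper's discrete-time model with unknown failure ages: monthly warranty returns are treated as Poisson with rate n_units · p, a Beta prior is placed on the per-unit claim probability p, a random-walk Metropolis sampler draws from the posterior, and future exposure is predicted from posterior draws. Data, prior, and tuning are illustrative only.

```python
# Random-walk Metropolis for a toy warranty-claims model, plus prediction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_units = 10_000
claims = np.array([12, 9, 15, 11, 8, 14])     # six months of returns (made up)

def log_post(p):
    if not 0 < p < 1:
        return -np.inf
    return (stats.beta.logpdf(p, 1, 99)        # prior: claims are rare
            + stats.poisson.logpmf(claims, n_units * p).sum())

p, draws = 0.001, []
for _ in range(20_000):
    prop = p + rng.normal(scale=2e-4)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(p):
        p = prop
    draws.append(p)
post = np.array(draws[5_000:])                 # drop burn-in

pred = rng.poisson(n_units * post)             # predictive claims next month
print(f"posterior mean p: {post.mean():.5f}, "
      f"95% predictive interval: {np.percentile(pred, [2.5, 97.5])}")
```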
8.
Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that data are missing at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
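To illustrate the sensitivity-analysis idea only, here is a stand-in sketch that is not the authors' likelihood-based method: a one-covariate Cox model is refit under different assumed degrees of non-ignorability, encoded as inverse-probability weights from a hypothetical selection model logit P(observed) = 1.5 + δ·x. All data are simulated, and the weighted partial likelihood is hand-rolled to keep the example self-contained.

```python
# IPW-style sensitivity analysis for a Cox model with MNAR covariates (toy).
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(6)
n = 1_000
x = rng.normal(size=n)
t = rng.exponential(1 / np.exp(0.5 * x))       # true log hazard ratio 0.5
event = rng.uniform(size=n) < 0.8              # light random censoring
obs = rng.uniform(size=n) < special.expit(1.5 + 0.8 * x)   # MNAR in truth
xo, to, eo = x[obs], t[obs], event[obs]

def neg_wpl(beta, w):
    """Weighted Cox partial log-likelihood (Breslow ties, one covariate)."""
    order = np.argsort(to)
    xs, es, ws = xo[order], eo[order], w[order]
    risk = np.cumsum((ws * np.exp(beta * xs))[::-1])[::-1]  # risk-set sums
    return -np.sum(ws[es] * (beta * xs[es] - np.log(risk[es])))

for delta in [0.0, 0.4, 0.8]:                  # assumed non-ignorability
    w = 1 / special.expit(1.5 + delta * xo)
    beta = optimize.minimize_scalar(neg_wpl, args=(w,),
                                    bounds=(-3, 3), method="bounded").x
    print(f"delta = {delta:.1f}  ->  beta_hat = {beta:.3f}")
```

Varying δ traces out how far the estimated hazard ratio moves as the assumed missingness mechanism departs from ignorability, which is the practical purpose of such a sensitivity analysis.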
9.
A test of congruence among distance matrices is described. It tests the hypothesis that several matrices, containing different types of variables about the same objects, are congruent with one another, so they can be used jointly in statistical analysis. Raw data tables are turned into similarity or distance matrices prior to testing; they can then be compared to data that naturally come in the form of distance matrices. The proposed test can be seen as a generalization of the Mantel test of matrix correspondence to any number of distance matrices. This paper shows that the new test has the correct rate of Type I error and good power. Power increases as the number of objects and the number of congruent data matrices increase; power is higher when the total number of matrices in the study is smaller. To illustrate the method, the proposed test is used to test the hypothesis that matrices representing different types of organoleptic variables (colour, nose, body, palate and finish) in single‐malt Scotch whiskies are congruent.
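A simplified permutation sketch of such a congruence test: the published statistic uses Kendall's W on ranked distances, whereas this sketch uses the mean pairwise correlation of upper-triangle distances, which conveys the same idea. The null distribution is built by independently permuting the objects of every matrix except the first; the data are random.

```python
# Permutation test of congruence among several distance matrices (toy).
import numpy as np
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)

def congruence_stat(mats):
    vecs = [squareform(m, checks=False) for m in mats]   # upper triangles
    c = np.corrcoef(vecs)
    return c[np.triu_indices_from(c, k=1)].mean()

def cadm_like_test(mats, n_perm=999):
    obs, null = congruence_stat(mats), []
    for _ in range(n_perm):
        perm = [mats[0]]
        for m in mats[1:]:                    # permute objects independently
            p = rng.permutation(len(m))
            perm.append(m[np.ix_(p, p)])
        null.append(congruence_stat(perm))
    p_val = (1 + sum(s >= obs for s in null)) / (n_perm + 1)
    return obs, p_val

def noisy_distances(base, noise=0.3):
    pts = base + noise * rng.normal(size=base.shape)
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

base = rng.normal(size=(12, 2))               # 12 objects, shared structure
mats = [noisy_distances(base) for _ in range(3)]
obs, p = cadm_like_test(mats)
print(f"congruence statistic: {obs:.3f}, permutation p-value: {p:.3f}")
```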
10.
This paper argues that Fisher's paradox can be explained away in terms of estimator choice. We use Monte Carlo experiments to analyse the small-sample properties of a large set of estimators (including virtually all available single-equation estimators) and to compute critical values based on the empirical distributions of the t-statistics, for a variety of data generation processes (DGPs), allowing for structural breaks, ARCH effects, etc. We show that precisely the estimators most commonly used in the literature, namely OLS, dynamic OLS (DOLS) and non-prewhitened FMLS, have the worst performance in small samples and produce rejections of the Fisher hypothesis. If one employs the estimators with the most desirable properties (i.e., the smallest downward bias and the minimum shift in the distribution of the associated t-statistics), or if one uses the empirical critical values, the evidence based on US data is strongly supportive of the Fisher relation, consistent with many theoretical models.
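The empirical-critical-value idea can be sketched on a classic DGP where asymptotic ±1.96 bands fail badly: two independent random walks (the spurious-regression setting, used here purely for illustration, not the paper's Fisher-equation DGPs). Simulating the DGP under the null and reading off empirical quantiles of the t-statistic shows why tabulated critical values matter.

```python
# Empirical critical values of the slope t-statistic under a null-true DGP.
import numpy as np

rng = np.random.default_rng(8)

def slope_t(y, x):
    """t-statistic of the slope in an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    s2 = e @ e / (len(y) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

n, tstats = 100, []
for _ in range(5_000):
    x = np.cumsum(rng.normal(size=n))   # independent random walks:
    y = np.cumsum(rng.normal(size=n))   # the true slope is zero
    tstats.append(slope_t(y, x))

crit = np.percentile(np.abs(tstats), 95)
print(f"empirical 5% critical value: {crit:.1f}  (asymptotic value: 1.96)")
```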