871.
Recently, the field of multiple hypothesis testing has expanded greatly, largely because of new methods developed in genomics that allow scientists to process thousands of hypothesis tests simultaneously. The frequentist approach to this problem uses testing error measures that allow the Type I error rate to be controlled at a desired level. Alternatively, in this article, a Bayesian hierarchical model based on mixture distributions and an empirical Bayes approach are proposed in order to produce a list of rejected hypotheses that will be declared significant and interesting for a more detailed posterior analysis. In particular, we develop a straightforward implementation of a Gibbs sampling scheme in which all the conditional posterior distributions are explicit. The results are compared with the frequentist False Discovery Rate (FDR) methodology. Simulation examples show that our model improves on the FDR procedure in the sense that it reduces the percentage of false negatives while keeping an acceptable percentage of false positives.
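The frequentist baseline that the abstract compares against is typically the Benjamini-Hochberg step-up procedure for FDR control. A minimal sketch (the p-values and target level below are illustrative, not from the article):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices
    of the hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # find the largest rank k with p_(k) <= (k/m) * q
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    return sorted(order[:k_max])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(pvals, q=0.05)
```

The Bayesian mixture approach the article proposes instead thresholds posterior probabilities of the null, which this sketch does not attempt to reproduce.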
872.
Use of experimental data from animal studies to estimate human risk due to long-term exposure to very low doses of chemicals in the environment poses a number of biological and statistical problems. One of the statistical problems is to extrapolate the animal dose-response relation from the high dose levels where data are available to the low doses that humans might encounter. Here, a quantal dose-response model is developed based on a multi-hit theory of toxic response. The development of the model utilizes a weighted Lagrange-Poisson distribution for the number of hits. When spontaneous background toxic response is included, the model involves three unknown parameters. The maximum likelihood estimators for these parameters are given as the solution of a nonlinear iterative algorithm. The use of this model for low-dose extrapolation is indicated. The results are applied to nine sets of toxic response data.
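The classical k-hit quantal model underlying this family assumes a response occurs once at least k Poisson-distributed "hits" accumulate at dose d, with background response entering through Abbott's formula. The article's weighted Lagrange-Poisson variant adds a weighting parameter; this sketch shows only the standard unweighted base model, with illustrative parameter values:

```python
import math

def multi_hit(d, lam, k, background=0.0):
    """P(toxic response at dose d) under a k-hit model: response occurs
    when at least k hits from a Poisson(lam * d) process accumulate.
    Background response c enters via Abbott's formula P* = c + (1 - c) P."""
    p_fewer_than_k = sum(
        math.exp(-lam * d) * (lam * d) ** j / math.factorial(j)
        for j in range(k)
    )
    p = 1.0 - p_fewer_than_k
    return background + (1.0 - background) * p
```

With k = 1 this reduces to the one-hit model P(d) = 1 - exp(-lam * d); the dose-response curve is increasing in d, which is what makes high-to-low-dose extrapolation well defined.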
873.
Techniques are provided for testing hypotheses about parameters in regression models in the situation of grouped data. A test statistic similar to the conventional F statistic is considered. A simulation study performed for a few cases shows that the proposed statistic has an approximate F distribution and is useful in applications.
874.
Many estimation procedures for quantitative linear models with autocorrelated errors have been proposed in the literature. A number of these procedures have been compared in various ways for different sample sizes and autocorrelation parameter values, and for structured or random explanatory variables. In this paper, we revisit three situations that were considered to some extent in previous studies, by comparing ten estimation procedures: Ordinary Least Squares (OLS), Generalized Least Squares (GLS), estimated Generalized Least Squares (six procedures), Maximum Likelihood (ML), and First Differences (FD). The six estimated GLS procedures and the ML procedure differ in the way the error autocovariance matrix is estimated. The three situations can be defined as follows: Case 1, the explanatory variable x in the simple linear regression is fixed; Case 2, x is purely random; and Case 3, x is first-order autoregressive. Following a theoretical presentation, the ten estimation procedures are compared in a Monte Carlo study conducted in the time domain, where the errors are first-order autoregressive in Cases 1-3. The measure of comparison for the estimation procedures is their efficiency relative to OLS. It is evaluated as a function of the time series length and the magnitude and sign of the error autocorrelation parameter. Overall, knowledge of the model of the time series process generating the errors enhances efficiency in estimated GLS. Differences in the efficiency of estimation procedures between Case 1 and Cases 2 and 3, as well as differences in efficiency among procedures in a given situation, are observed and discussed.
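The kind of comparison described here can be sketched with a small Monte Carlo in the spirit of Case 3 (autoregressive x, AR(1) errors), using GLS with a known error autocorrelation via the Prais-Winsten transformation. The sample size, replication count, and parameter values below are illustrative, and the regression is through the origin to keep the transformation exact:

```python
import math
import random

random.seed(1)

def ar1_series(n, rho):
    """Stationary AR(1) series with unit innovation variance."""
    z = [random.gauss(0.0, 1.0) / math.sqrt(1.0 - rho * rho)]
    for _ in range(n - 1):
        z.append(rho * z[-1] + random.gauss(0.0, 1.0))
    return z

def slope_ols(x, y):
    # least-squares slope, regression through the origin
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def slope_gls_ar1(x, y, rho):
    # Prais-Winsten transform with KNOWN error autocorrelation rho,
    # then OLS on the transformed series (exact GLS here)
    s = math.sqrt(1.0 - rho * rho)
    xt = [s * x[0]] + [x[t] - rho * x[t - 1] for t in range(1, len(x))]
    yt = [s * y[0]] + [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    return slope_ols(xt, yt)

beta, rho_e, rho_x, n, reps = 2.0, 0.8, 0.7, 50, 400
ols, gls = [], []
for _ in range(reps):
    x = ar1_series(n, rho_x)
    e = ar1_series(n, rho_e)
    y = [beta * xi + ei for xi, ei in zip(x, e)]
    ols.append(slope_ols(x, y))
    gls.append(slope_gls_ar1(x, y, rho_e))

def mc_var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

rel_eff = mc_var(ols) / mc_var(gls)  # efficiency of GLS relative to OLS
```

With both autocorrelations positive, GLS with the true error model is markedly more efficient than OLS (rel_eff well above 1), which is the qualitative pattern the paper examines across its ten procedures.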
875.
In this article the decision behaviour of four production schedulers in a truck manufacturing company is investigated by means of a quantitative model. The model consists of three parts: performance variables, action variables and disturbance variables. The outcomes show that there is a large difference between schedulers that apparently face the same type of decision problem. Another interesting finding is that some scheduling actions work positively in the short term, but negatively over a longer term. Other results, along with methodological issues of quantitative research, are discussed.
876.
Because understandings of the concept of general education differ, the general education models adopted by universities in mainland China also differ. This article reviews the current general education models at mainland universities and analyzes the experience gained from, and the implications of, implementing general education at these institutions.
877.
State space modelling and Bayesian analysis are both active areas of applied research in fisheries stock assessment. Combining these two methodologies facilitates the fitting of state space models that may be non-linear and have non-normal errors, and hence it is particularly useful for modelling fisheries dynamics. Here, this approach is demonstrated by fitting a non-linear surplus production model to data on South Atlantic albacore tuna (Thunnus alalunga). The state space approach allows for random variability both in the data (the measurement of relative biomass) and in the annual biomass dynamics of the tuna stock. Sampling from the joint posterior distribution of the unobservables was achieved by using Metropolis-Hastings steps within Gibbs sampling.
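Metropolis-Hastings within Gibbs is a generic device: each coordinate (or block) is updated in turn, but with a Metropolis-Hastings step when the full conditional cannot be sampled directly. The surplus production model itself is beyond a short sketch, so the toy target below is a standard bivariate normal with correlation 0.8; all tuning values are illustrative:

```python
import math
import random

random.seed(7)

def log_cond(v, other, r):
    """Log density (up to a constant) of one coordinate of a standard
    bivariate normal with correlation r, given the other coordinate."""
    mu, var = r * other, 1.0 - r * r
    return -0.5 * (v - mu) ** 2 / var

def mh_within_gibbs(n_iter, r=0.8, step=1.0):
    x, y, draws = 0.0, 0.0, []
    for _ in range(n_iter):
        # Metropolis-Hastings update of x from its full conditional given y
        prop = x + random.gauss(0.0, step)
        if math.log(random.random()) < log_cond(prop, y, r) - log_cond(x, y, r):
            x = prop
        # Metropolis-Hastings update of y from its full conditional given x
        prop = y + random.gauss(0.0, step)
        if math.log(random.random()) < log_cond(prop, x, r) - log_cond(y, x, r):
            y = prop
        draws.append((x, y))
    return draws

draws = mh_within_gibbs(20000)[2000:]  # discard burn-in
```

In the fisheries application the coordinates would be the latent annual biomass states and the model parameters, with exactly this update-one-block-at-a-time structure.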
878.
This article studies the probabilistic structure and asymptotic inference of the first-order periodic generalized autoregressive conditional heteroscedasticity (PGARCH(1, 1)) models in which the parameters in the volatility process are allowed to switch between different regimes. First, we establish necessary and sufficient conditions for a PGARCH(1, 1) process to have a unique stationary solution (in the periodic sense) and for the existence of moments of any order. Second, using the representation of the squared PGARCH(1, 1) model as a PARMA(1, 1) model, we consider Yule-Walker type estimators for the parameters of the PGARCH(1, 1) model and derive their consistency and asymptotic normality. The estimator can be surprisingly efficient for quite small numbers of autocorrelations and, in some cases, can be more efficient than the least squares estimate (LSE). We use a residual bootstrap to define bootstrap estimators for the Yule-Walker estimates and prove the consistency of this bootstrap method. A set of numerical experiments illustrates the practical relevance of our theoretical results.
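Yule-Walker estimation matches sample autocovariances to their model expressions; the article applies it to the PARMA(1, 1) representation of the squared process, while this sketch shows the same idea in its simplest form, for a plain AR(1), where the estimator is just the lag-1 sample autocorrelation (simulation parameters are illustrative):

```python
import random

random.seed(3)

def yule_walker_ar1(z):
    """Yule-Walker estimate of the AR(1) coefficient:
    the lag-1 sample autocorrelation r1 = c1 / c0."""
    n = len(z)
    m = sum(z) / n
    c0 = sum((v - m) ** 2 for v in z) / n
    c1 = sum((z[t] - m) * (z[t + 1] - m) for t in range(n - 1)) / n
    return c1 / c0

phi = 0.6
z = [0.0]
for _ in range(5000):
    z.append(phi * z[-1] + random.gauss(0.0, 1.0))
phi_hat = yule_walker_ar1(z[100:])  # drop burn-in
```

For PGARCH the same moment-matching is done season by season on the squared series, which is why only a small number of autocorrelations is needed.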
879.
This note considers a method for estimating regression parameters from data containing measurement errors, using natural estimates of the unobserved explanatory variables. It is shown that the resulting estimator is consistent not only in the usual linear regression model but also in the probit model and in regression models with censorship or truncation. However, it fails to be consistent in nonlinear regression models except in special cases.
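One concrete reading of "natural estimates of the unobserved explanatory variables" is regression calibration: replace the error-contaminated observation w by its best linear predictor E[x | w], here with a known reliability ratio (that knowledge, and all numbers below, are assumptions of the sketch, not taken from the note):

```python
import random

random.seed(5)

beta, sx2, su2, n = 1.5, 1.0, 0.5, 20000
lam = sx2 / (sx2 + su2)  # reliability ratio, assumed known here

x = [random.gauss(0.0, sx2 ** 0.5) for _ in range(n)]      # unobserved truth
w = [xi + random.gauss(0.0, su2 ** 0.5) for xi in x]       # observed with error
y = [beta * xi + random.gauss(0.0, 1.0) for xi in x]

def slope(u, v):
    mu = sum(u) / len(u)
    mv = sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / \
           sum((a - mu) ** 2 for a in u)

naive = slope(w, y)             # attenuated toward zero, approx lam * beta
xhat = [lam * wi for wi in w]   # natural estimate E[x | w] (zero-mean case)
calibrated = slope(xhat, y)     # consistent for beta in the linear model
```

The note's point is that this substitution stays consistent in probit and censored/truncated models as well, but generally breaks down in nonlinear regression.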
880.
We discuss the development of dynamic factor models for multivariate financial time series, and the incorporation of stochastic volatility components for latent factor processes. Bayesian inference and computation is developed and explored in a study of the dynamic factor structure of daily spot exchange rates for a selection of international currencies. The models are direct generalizations of univariate stochastic volatility models and represent specific varieties of models recently discussed in the growing multivariate stochastic volatility literature. We discuss model fitting based on retrospective data and sequential analysis for forward filtering and short-term forecasting. Analyses are compared with results from the much simpler method of dynamic variance-matrix discounting that, for over a decade, has been a standard approach in applied financial econometrics. We study these models in analysis, forecasting, and sequential portfolio allocation for a selected set of international exchange-rate-return time series. Our goals are to understand a range of modeling questions arising in using these factor models and to explore empirical performance in portfolio construction relative to discount approaches. We report on our experiences and conclude with comments about the practical utility of structured factor models and on future potential model extensions.
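The "much simpler" baseline mentioned here, variance-matrix discounting, reduces in the univariate case to an exponentially weighted moving average of squared returns. A minimal scalar sketch (the discount factor and inputs are illustrative; the real method operates on full variance matrices):

```python
def discounted_variance(returns, delta=0.95, v0=1.0):
    """Scalar analogue of variance-matrix discounting: at each step the
    previous variance estimate is discounted by delta and updated with
    the latest squared return (an EWMA of squared returns)."""
    v = v0
    path = []
    for r in returns:
        v = delta * v + (1.0 - delta) * r * r
        path.append(v)
    return path
```

Under constant-magnitude returns the estimate converges geometrically to the squared magnitude, and smaller delta means faster adaptation; the stochastic volatility factor models in the article replace this fixed-discount recursion with an explicit latent volatility process.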