71.
72.
Time series smoothers estimate the level of a time series at time t as its conditional expectation given present, past and future observations, with the smoothed value depending on the estimated time series model. Alternatively, local polynomial regressions on time can be used to estimate the level, with the implied smoothed value depending on the weight function and the bandwidth in the local linear least squares fit. In this article we compare the two smoothing approaches and describe their similarities. Through simulations, we assess the increase in the mean square error that results when approximating the estimated optimal time series smoother with the local regression estimate of the level.
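A minimal sketch, not taken from the article, of the two approaches being compared: the level of a simulated drifting series is estimated both by a kernel-weighted local linear least squares fit and by a crude two-sided exponential smoother standing in for a model-based time series smoother. The bandwidth `h`, the smoothing constant `lam`, and the simulated process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
level = np.cumsum(rng.normal(scale=0.1, size=n))   # slowly drifting true level
y = level + rng.normal(scale=0.5, size=n)          # observed series
t = np.arange(n)

def local_linear_level(y, t, h=10.0):
    """Local linear least squares fit of y on time with a Gaussian kernel of bandwidth h."""
    fitted = np.empty(len(y))
    for i in range(len(y)):
        w = np.exp(-0.5 * ((t - t[i]) / h) ** 2)           # kernel weights around t[i]
        X = np.column_stack([np.ones(len(t)), t - t[i]])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        fitted[i] = beta[0]                                 # intercept = level at time t[i]
    return fitted

def two_sided_ewma(y, lam=0.2):
    """Crude two-sided exponential smoother, standing in for a model-based smoother."""
    fwd = np.empty(len(y))
    bwd = np.empty(len(y))
    fwd[0], bwd[-1] = y[0], y[-1]
    for i in range(1, len(y)):
        fwd[i] = lam * y[i] + (1 - lam) * fwd[i - 1]
        bwd[-1 - i] = lam * y[-1 - i] + (1 - lam) * bwd[-i]
    return 0.5 * (fwd + bwd)

print("MSE, local linear fit    :", np.mean((local_linear_level(y, t) - level) ** 2))
print("MSE, time series smoother:", np.mean((two_sided_ewma(y) - level) ** 2))
```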
73.
We investigate the problem of testing for variance breaks when the variance structure is assumed to be smoothly time-varying under the null. Since the classical tests are designed to detect any change in the variance, they cannot distinguish between a smooth, nonconstant variance and abrupt breaks. In this paper we propose a new procedure for detecting variance breaks that accounts for smooth changes in the variance under the null. The finite sample properties of the proposed test are investigated by Monte Carlo experiments, and the theoretical results are illustrated using U.S. macroeconomic data.
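To illustrate the distinction being drawn (this sketch is mine, not the authors' test statistic), a CUSUM-of-squares type statistic computed on raw residuals reacts to a smooth variance change, while the same statistic computed after standardizing by a kernel-smoothed variance estimate is largely insensitive to it and so targets abrupt breaks only; the kernel, bandwidth, and simulated design are assumptions.

```python
import numpy as np

def kernel_smoothed_variance(e, h=25.0):
    """Kernel estimate of a slowly time-varying variance from residuals e."""
    t = np.arange(len(e))
    var = np.empty(len(e))
    for i in range(len(e)):
        w = np.exp(-0.5 * ((t - i) / h) ** 2)
        var[i] = np.sum(w * e ** 2) / np.sum(w)
    return var

def cusum_of_squares(e):
    """Maximum deviation of the scaled cumulative sum of squares from its benchmark line."""
    s = np.cumsum(e ** 2) / np.sum(e ** 2)
    k = np.arange(1, len(e) + 1) / len(e)
    return np.sqrt(len(e)) * np.max(np.abs(s - k))

rng = np.random.default_rng(1)
e = rng.normal(scale=np.linspace(1.0, 1.6, 500))   # smooth variance change, no abrupt break
print("raw residuals         :", cusum_of_squares(e))  # inflated by the smooth change
print("standardized residuals:", cusum_of_squares(e / np.sqrt(kernel_smoothed_variance(e))))
```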
74.
The results obtained in five years of forecasting with Bayesian vector autoregressions (BVAR's) demonstrate that this inexpensive, reproducible statistical technique is as accurate, on average, as those used by the best known commercial forecasting services. This article considers the problem of economic forecasting, the justification for the Bayesian approach, its implementation, and the performance of one small BVAR model over the past five years.
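A highly simplified sketch of the general idea behind such a model, assuming a Minnesota-style prior that shrinks every equation toward a random walk with a single tightness parameter `lam`; this illustrates the approach in general and is not the specification used in the article.

```python
import numpy as np

def bvar_posterior_mean(Y, lags=1, lam=0.2):
    """Equation-by-equation posterior mean of a VAR(lags) under a Minnesota-style
    normal prior centered on a random walk; lam is the overall prior tightness."""
    n, k = Y.shape
    X = np.column_stack([np.ones(n - lags)] +
                        [Y[lags - 1 - j:n - 1 - j] for j in range(lags)])
    Y_lhs = Y[lags:]
    B0 = np.zeros((X.shape[1], k))
    B0[1:1 + k, :] = np.eye(k)                   # prior mean: own first lag = 1, rest = 0
    prior_prec = np.diag([1e-4] + [1.0 / lam ** 2] * (X.shape[1] - 1))
    # ridge-type posterior mean: (X'X + P)^(-1) (X'Y + P B0)
    return np.linalg.solve(X.T @ X + prior_prec, X.T @ Y_lhs + prior_prec @ B0)

rng = np.random.default_rng(3)
Y = np.cumsum(rng.normal(size=(120, 2)), axis=0)  # two simulated random-walk series
print(bvar_posterior_mean(Y, lags=1, lam=0.2).round(2))
```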
75.
This study considers testing for a unit root in a time series characterized by a structural change in its mean level. My approach follows the “intervention analysis” of Box and Tiao (1975) in the sense that I consider the change as being exogenous and as occurring at a known date. Standard unit-root tests are shown to be biased toward nonrejection of the hypothesis of a unit root when the full sample is used. Since tests using split sample regressions usually have low power, I design test statistics that allow the presence of a change in the mean of the series under both the null and alternative hypotheses. The limiting distribution of the statistics is derived and tabulated under the null hypothesis of a unit root. My analysis is illustrated by considering the behavior of various univariate time series for which the unit-root hypothesis has been advanced in the literature. This study complements that of Perron (1989), which considered time series with trends.
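The flavor of such statistics can be sketched as follows (my own simplified specification, not the paper's exact regressions): an augmented Dickey-Fuller style regression that includes a level-shift dummy at the known, exogenous break date. The t-statistic on the lagged level must be compared with the nonstandard critical values tabulated for this case, not with the usual Dickey-Fuller tables.

```python
import numpy as np
import statsmodels.api as sm

def adf_with_level_shift(y, break_idx, n_lags=4):
    """ADF-style regression of the first difference on a constant, trend,
    level-shift dummy, the lagged level, and lagged differences; returns the
    coefficient and t-statistic on the lagged level."""
    dy = np.diff(y)
    n = len(dy)
    t = np.arange(n)
    shift = (t >= break_idx).astype(float)            # exogenous level shift at a known date
    lagged_dy = np.column_stack([dy[n_lags - j - 1:n - j - 1] for j in range(n_lags)])
    X = np.column_stack([np.ones(n - n_lags), t[n_lags:], shift[n_lags:],
                         y[n_lags:-1], lagged_dy])
    res = sm.OLS(dy[n_lags:], X).fit()
    return res.params[3], res.tvalues[3]

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=300))
y[150:] += 5.0                                        # random walk with a level shift
print(adf_with_level_shift(y, break_idx=150))
```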
76.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple lag autoregressive models with fitted drifts and time trends as well as models that allow for certain types of structural change in the deterministic components are considered. We utilize a modified information matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring at 1929.
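A generic sketch of the Laplace device, applied here to a toy AR(1) posterior under a flat prior rather than the article's modified information matrix-based prior: the posterior is approximated by a Gaussian centered at the mode, with covariance given by the curvature at the mode.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_posterior_ar1(y):
    """Gaussian (Laplace) approximation to the posterior of (rho, log sigma) in an
    AR(1) model under a flat prior: posterior mode plus curvature-based covariance."""
    def neg_log_post(params):
        rho, log_sigma = params
        sigma2 = np.exp(2.0 * log_sigma)
        e = y[1:] - rho * y[:-1]
        return 0.5 * len(e) * np.log(2.0 * np.pi * sigma2) + 0.5 * np.sum(e ** 2) / sigma2
    res = minimize(neg_log_post, x0=np.array([0.5, 0.0]), method="BFGS")
    return res.x, res.hess_inv        # mode and (approximate) posterior covariance

rng = np.random.default_rng(5)
y = np.empty(400)
y[0] = 0.0
for s in range(1, 400):
    y[s] = 0.9 * y[s - 1] + rng.normal()
mode, cov = laplace_posterior_ar1(y)
print("posterior mode (rho, log sigma):", mode.round(3))
```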
77.
Several important economic time series are recorded on a particular day every week. Seasonal adjustment of such series is difficult because the number of weeks varies between 52 and 53 and the position of the recording day changes from year to year. In addition, certain festivals, most notably Easter, take place at different times according to the year. This article presents a solution to problems of this kind by setting up a structural time series model that allows the seasonal pattern to evolve over time and enables trend extraction and seasonal adjustment to be carried out by means of state-space filtering and smoothing algorithms. The method is illustrated with a Bank of England series on the money supply.
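A sketch of how such a model can be set up in practice, assuming the `statsmodels` unobserved-components (structural time series) implementation and simulated weekly data in place of the Bank of England series: a local linear trend plus a trigonometric seasonal whose period need not be an integer number of weeks, estimated and smoothed in state-space form.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated weekly series standing in for the Bank of England money-supply data.
rng = np.random.default_rng(2)
idx = pd.date_range("2015-01-07", periods=6 * 52, freq="W-WED")
y = pd.Series(np.cumsum(rng.normal(0.1, 0.5, len(idx)))
              + 2.0 * np.sin(2.0 * np.pi * np.arange(len(idx)) / 52.18), index=idx)

# Local linear trend plus a trigonometric seasonal; because the seasonal period need
# not be an integer number of weeks, 52- and 53-week years are handled naturally, and
# the stochastic seasonal component is allowed to evolve over time.
mod = sm.tsa.UnobservedComponents(
    y,
    level="local linear trend",
    freq_seasonal=[{"period": 52.18, "harmonics": 3}],
    stochastic_freq_seasonal=[True],
)
res = mod.fit(disp=False)
trend = res.level["smoothed"]                 # smoothed (seasonally adjusted) level
seasonal = res.freq_seasonal[0]["smoothed"]   # smoothed seasonal component
print(res.summary())
```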
78.
Determining whether per capita output can be characterized by a stochastic trend is complicated by the fact that infrequent breaks in trend can bias standard unit root tests towards nonrejection of the unit root hypothesis. The bulk of the existing literature has focused on the application of unit root tests allowing for structural breaks in the trend function under the trend stationary alternative but not under the unit root null. These tests, however, provide little information regarding the existence and number of trend breaks. Moreover, these tests suffer from serious power and size distortions due to the asymmetric treatment of breaks under the null and alternative hypotheses. This article estimates the number of breaks in trend employing procedures that are robust to the unit root/stationarity properties of the data. Our analysis of the per capita gross domestic product (GDP) for Organization for Economic Cooperation and Development (OECD) countries thereby permits a robust classification of countries according to the “growth shift,” “level shift,” and “linear trend” hypotheses. In contrast to the extant literature, unit root tests conditional on the presence or absence of breaks do not provide evidence against the unit root hypothesis.
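As a toy illustration only of selecting the number of trend breaks by an information criterion (the procedures in the article are robust to the unit root/stationarity properties of the data, which this simple least squares comparison is not): fit a linear trend with zero and with one slope change, and compare BIC values.

```python
import numpy as np

def best_single_trend_break(y, trim=0.15):
    """SSR-minimizing date for a single slope change ("growth shift") in a linear trend."""
    n = len(y)
    t = np.arange(n, dtype=float)
    best_ssr, best_b = np.inf, None
    for b in range(int(trim * n), int((1.0 - trim) * n)):
        X = np.column_stack([np.ones(n), t, np.clip(t - b, 0.0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        ssr = np.sum((y - X @ beta) ** 2)
        if ssr < best_ssr:
            best_ssr, best_b = ssr, b
    return best_ssr, best_b

def bic(ssr, n, k):
    return n * np.log(ssr / n) + k * np.log(n)

rng = np.random.default_rng(6)
n = 200
t = np.arange(n, dtype=float)
y = 0.02 * t + rng.normal(scale=0.5, size=n)
y[n // 2:] += 0.05 * np.arange(n - n // 2)       # growth shift halfway through the sample

X0 = np.column_stack([np.ones(n), t])
beta0, *_ = np.linalg.lstsq(X0, y, rcond=None)
ssr0 = np.sum((y - X0 @ beta0) ** 2)
ssr1, b1 = best_single_trend_break(y)
print("BIC, linear trend:", round(bic(ssr0, n, 2), 1))
print("BIC, growth shift:", round(bic(ssr1, n, 4), 1), "with estimated break at t =", b1)
```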
79.
A study of the robustness of adapting the sample size of a phase III trial on the basis of existing phase II data is presented for the case in which the phase III effect size is lower than that of phase II. A criterion of clinical relevance for the phase II results is applied in order to decide whether to launch phase III, and the phase II data cannot be included in the phase III statistical analysis. The adaptation consists in adopting a conservative approach to sample size estimation that takes into account the variability of the phase II data. Several conservative sample size estimation strategies, Bayesian and frequentist, are compared with the calibrated optimal γ conservative strategy (COS), which is the best performer when the phase II and phase III effect sizes are equal. To evaluate the robustness of the strategies, the overall power (OP) of each strategy and the mean square error (MSE) of its sample size estimator are computed under different scenarios in the presence of the structural bias due to the lower phase III effect size. When the structural bias is fairly small (i.e., the ratio of the phase III to the phase II effect size is greater than 0.8), and when certain operating conditions for applying sample size estimation hold, COS can still provide acceptable results for planning phase III trials, even though the OP is higher in the absence of bias.

The main results concern the introduction of a correction for balancing the structural bias, which affects only the sample size estimates and not the launch probabilities. In particular, the correction is based on a postulated value of the structural bias; it is therefore more intuitive and easier to use than corrections based on modifying the Type I and/or Type II errors. Corrected conservative sample size estimation strategies are compared in the presence of a fairly small bias. When the postulated correction is right, COS provides good OP and the lowest MSE. Moreover, the OP of COS is even higher than that observed without bias, thanks to a higher launch probability and similar estimation performance; the structural bias can therefore be exploited to improve sample size estimation. When the postulated correction is smaller than necessary, COS is still the best performer and continues to work well. A larger-than-necessary correction should be avoided.
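A stylized sketch of the conservative idea described above, with all numerical choices assumed rather than taken from the study: instead of plugging the phase II point estimate of the effect size into the usual two-sample formula, a lower γ-quantile of its sampling distribution is used, so that the estimated phase III sample size allows for the variability of the phase II data.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(delta, alpha=0.025, power=0.90):
    """Standard per-arm sample size for a two-sample z-test with standardized effect delta."""
    return 2.0 * ((norm.ppf(1 - alpha) + norm.ppf(power)) / delta) ** 2

def conservative_n(delta_hat, n2_per_arm, gamma=0.7, **kw):
    """Conservative estimate: replace the phase II point estimate by the lower
    gamma-quantile of its sampling distribution (standard error sqrt(2/n2) per arm).
    gamma = 0.5 reproduces the plain plug-in estimate."""
    se = np.sqrt(2.0 / n2_per_arm)
    delta_cons = delta_hat - norm.ppf(gamma) * se
    return np.inf if delta_cons <= 0 else n_per_arm(delta_cons, **kw)

# Illustrative numbers only: phase II estimate 0.4 based on 50 patients per arm.
print(round(n_per_arm(0.4)))                  # plug-in phase III sample size per arm
print(round(conservative_n(0.4, 50, 0.7)))    # conservative (gamma = 0.7) sample size per arm
```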
80.
Latent variable modeling is commonly used in behavioral, social, and medical science research. The models used in such analyses relate all observed variables to latent common factors. In many applications the observations are highly non-normal or discrete, e.g., polytomous responses or counts. Existing approaches for non-normal observations are lacking in several respects, especially in multi-group sample situations. We propose a generalized linear model approach for multi-sample latent variable analysis that can handle a broad class of non-normal and discrete observations and that furnishes meaningful interpretation and inference in multi-group studies through maximum likelihood analysis. A Monte Carlo EM algorithm is proposed for parameter estimation, and convergence assessment and standard error estimation are addressed. Simulation studies are reported to show the usefulness of our approach. An example from a substance abuse prevention study is also presented.
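A much-simplified sketch of the Monte Carlo EM idea for a latent variable model with count data, using a single normal random intercept rather than the multi-sample factor structure proposed in the article; the simulated data and all tuning constants are assumptions.

```python
import numpy as np

def mcem_poisson_random_intercept(y, n_iter=30, m_draws=2000, seed=0):
    """Monte Carlo EM for y[i, j] ~ Poisson(exp(beta + u[i])), u[i] ~ N(0, sigma2).
    E-step: importance sampling of each group's latent effect from the current
    N(0, sigma2) prior. M-step: closed-form updates for beta and sigma2."""
    rng = np.random.default_rng(seed)
    n_groups, n_per = y.shape
    beta, sigma2 = np.log(y.mean() + 1e-8), 1.0
    for _ in range(n_iter):
        # E-step: draw latent effects and weight by the Poisson likelihood of each group.
        u = rng.normal(scale=np.sqrt(sigma2), size=(n_groups, m_draws))
        loglik = y.sum(axis=1, keepdims=True) * (beta + u) - n_per * np.exp(beta + u)
        w = np.exp(loglik - loglik.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        e_exp_u = np.sum(w * np.exp(u), axis=1)      # E[exp(u_i) | y_i]
        e_u2 = np.sum(w * u ** 2, axis=1)            # E[u_i^2 | y_i]
        # M-step: maximize the expected complete-data log likelihood.
        beta = np.log(y.sum() / (n_per * e_exp_u.sum()))
        sigma2 = e_u2.mean()
    return beta, sigma2

# Simulated example with assumed true values beta = 1.0 and sigma = 0.5.
rng = np.random.default_rng(7)
u_true = rng.normal(scale=0.5, size=100)
y = rng.poisson(np.exp(1.0 + u_true)[:, None], size=(100, 8))
print(mcem_poisson_random_intercept(y))
```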