1.
Towards Uniformly Efficient Trend Estimation Under Weak/Strong Correlation and Non-stationary Volatility
In this paper, we consider the deterministic trend model where the error process is allowed to be weakly or strongly correlated and subject to non-stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt-type estimator (with some initial condition) could be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non-parametrically weighted Cochrane–Orcutt-type estimator is then proposed. The efficiency is uniform over weak or strong serial correlation and non-stationary volatility of unknown form. The feasible estimator relies on non-parametric estimation of the volatility function, and the asymptotic theory is provided. We use the data-dependent smoothing bandwidth that can automatically adjust for the strength of non-stationarity in volatilities. The implementation does not require pretesting persistence of the process or specification of non-stationary volatility. Finite-sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
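The classical iterative Cochrane–Orcutt procedure that this abstract builds on can be sketched as follows. This is a minimal textbook version for a linear trend with AR(1) errors; the paper's non-parametrically weighted, heteroskedasticity-robust variant is considerably more involved, and the function name here is illustrative:

```python
import numpy as np

def cochrane_orcutt_trend(y, n_iter=10):
    """Minimal sketch: Cochrane-Orcutt estimation of a linear trend
    y_t = a + b*t + u_t with AR(1) errors u_t = rho*u_{t-1} + e_t."""
    n = len(y)
    t = np.arange(1, n + 1, dtype=float)
    X = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS starting values
    rho = 0.0
    for _ in range(n_iter):
        u = y - X @ beta                              # current residuals
        rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])    # AR(1) coefficient
        # quasi-difference data and regressors to whiten the errors
        ys = y[1:] - rho * y[:-1]
        Xs = X[1:] - rho * X[:-1]
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta, rho
```

The abstract's point is precisely that this quasi-differencing step can hurt efficiency when the error process is highly persistent and the volatility is non-stationary, which motivates the weighted estimator proposed there.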
2.
Jonathan H. Wright, Econometric Reviews, 2002, 21(4): 397-417
Many recent papers have used semiparametric methods, especially the log-periodogram regression, to detect and estimate long memory in the volatility of asset returns. In these papers, the volatility is proxied by measures such as squared, log-squared, and absolute returns. While the evidence for the existence of long memory is strong using any of these measures, the actual long memory parameter estimates can be sensitive to which measure is used. In Monte Carlo simulations, I find that if the data are conditionally leptokurtic, the log-periodogram regression estimator using squared returns has a large downward bias, which is avoided by using other volatility measures. In United States stock return data, I find that squared returns give much lower estimates of the long memory parameter than the alternative volatility measures, which is consistent with the simulation results. I conclude that researchers should avoid using the squared returns in the semiparametric estimation of long memory volatility dependencies.
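The log-periodogram (GPH) regression this abstract refers to admits a compact sketch: regress the log periodogram on -log(4 sin^2(lambda_j/2)) over the lowest Fourier frequencies, and read the memory parameter d off the slope. This is the standard textbook form, not the exact estimator of any cited paper, and the bandwidth choice m = n^0.5 is illustrative:

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Sketch of the GPH log-periodogram regression: the OLS slope of
    log I(lambda_j) on -log(4 sin^2(lambda_j / 2)) over the m lowest
    Fourier frequencies estimates the long memory parameter d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** power)                     # illustrative bandwidth
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n
    # periodogram at the first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    reg = -np.log(4 * np.sin(lam / 2) ** 2)
    X = np.column_stack([np.ones(m), reg])
    coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return coef[1]                          # estimate of d
```

The abstract's bias result concerns feeding squared returns (rather than log-squared or absolute returns) into `x` when the data are conditionally leptokurtic; the estimator itself is unchanged across proxies.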
3.
The problem considered is that of finding an optimum measurement schedule to estimate population parameters in a nonlinear model when the patient effects are random. The paper presents examples of the use of sensitivity functions, derived from the General Equivalence Theorem for D-optimality, in the construction of optimum population designs for such schedules. With independent observations, the theorem applies to the potential inclusion of a single observation. However, in population designs the observations are correlated and the theorem applies to the inclusion of an additional measurement schedule. In one example, three groups of patients of differing size are subject to distinct schedules. Numerical, as opposed to analytical, calculation of the sensitivity function is advocated. The required covariances of the observations are found by simulation.
4.
Econometric Reviews, 2007, 26(1): 1-24
This paper extends the current literature on variance causality by providing the coefficient restrictions that ensure variance noncausality within multivariate GARCH models with in-mean effects. Furthermore, this paper presents a new multivariate model, the exponential causality GARCH. Through the introduction of a multiplicative causality impact function, the variance causality effects become directly interpretable and can therefore be used to detect both the existence of causality and its direction; notably, the proposed model allows for both increasing and decreasing variance effects. An empirical application finds evidence of negative causality effects between the returns and volume of an Italian stock market index futures contract.
5.
Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t-distribution
Adelchi Azzalini and Antonella Capitanio, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2003, 65(2): 367-389
Summary. A fairly general procedure is studied to perturb a multivariate density satisfying a weak form of multivariate symmetry, and to generate a whole set of non-symmetric densities. The approach is sufficiently general to encompass some recent proposals in the literature, variously related to the skew normal distribution. The special case of skew elliptical densities is examined in detail, establishing connections with existing similar work. The final part of the paper specializes further to a form of multivariate skew t-density. Likelihood inference for this distribution is examined, and it is illustrated with numerical examples.
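The scalar special case of the perturbation scheme is the familiar skew-normal density, f(x) = 2 phi(x) Phi(alpha * x), where phi and Phi are the standard normal density and distribution function. The sketch below covers only this univariate case, not the multivariate skew-t treated in the paper:

```python
import math

def skew_normal_pdf(x, alpha):
    """Univariate instance of perturbation of symmetry:
    f(x) = 2 * phi(x) * Phi(alpha * x).
    alpha = 0 recovers the standard normal density."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2)))
    return 2.0 * phi * Phi
```

Two properties worth checking by hand: setting alpha = 0 collapses the perturbation factor to 1/2 and recovers phi(x), and the family satisfies the reflection identity f(x; -alpha) = f(-x; alpha), which is the "weak form of symmetry" being perturbed.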
6.
Cédric Béguin and Beat Hulliger, Journal of the Royal Statistical Society, Series A (Statistics in Society), 2004, 167(2): 275-294
Summary. As a part of the EUREDIT project new methods to detect multivariate outliers in incomplete survey data have been developed. These methods are the first to work with sampling weights and to be able to cope with missing values. Two of these methods are presented here. The epidemic algorithm simulates the propagation of a disease through a population and uses extreme infection times to find outlying observations. Transformed rank correlations are robust estimates of the centre and the scatter of the data. They use a geometric transformation that is based on the rank correlation matrix. The estimates are used to define a Mahalanobis distance that reveals outliers. The two methods are applied to a small data set and to one of the evaluation data sets of the EUREDIT project.
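The final Mahalanobis-distance step of the second method can be sketched as follows. The hard part, robustly estimating the centre and scatter via transformed rank correlations, is omitted here: this function simply flags observations whose squared distance from a given (center, scatter) pair exceeds a cutoff, e.g. a chi-square quantile:

```python
import numpy as np

def mahalanobis_outliers(X, center, scatter, cutoff):
    """Flag rows of X whose squared Mahalanobis distance from
    (center, scatter) exceeds `cutoff`. The center and scatter are
    assumed to come from a robust estimator, as in the paper."""
    diff = X - center
    inv = np.linalg.inv(scatter)
    # d2[i] = diff[i] @ inv @ diff[i] for every row i
    d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return d2 > cutoff
```

With classical (non-robust) mean and covariance plugged in, this is ordinary Mahalanobis screening; the paper's contribution is the robust, weight- and missingness-aware estimation that precedes this step.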
7.
There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher order theory. Standard asymptotic theory has an O(n^(-1/2)) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^(-1/2)). The new formula is independent of unknown parameters, is simple to calculate and user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
8.
Bayesian networks for imputation
Marco Di Zio, Mauro Scanu, Lucia Coppola, Orietta Luzi and Alessandra Ponti, Journal of the Royal Statistical Society, Series A (Statistics in Society), 2004, 167(2): 309-322
Summary. Bayesian networks are particularly useful for dealing with high dimensional statistical problems. They allow a reduction in the complexity of the phenomenon under study by representing joint relationships between a set of variables through conditional relationships between subsets of these variables. Following Thibaudeau and Winkler, we use Bayesian networks for imputing missing values. This method is introduced to deal with the problem of the consistency of imputed values: preservation of statistical relationships between variables (statistical consistency) and preservation of logical constraints in data (logical consistency). We perform some experiments on a subset of anonymous individual records from the 1991 UK population census.
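A toy illustration of the underlying idea: impute a missing categorical variable from its conditional distribution given one parent variable, estimated from the complete records. A real Bayesian network would use the full parent set, sampling weights, and edit constraints; the function name and the modal-value imputation rule are illustrative simplifications:

```python
from collections import Counter, defaultdict

def impute_from_parent(data, child, parent):
    """Toy conditional imputation: estimate P(child | parent) from rows
    where both are observed, then fill missing child values with the
    modal value of that conditional distribution."""
    cond = defaultdict(Counter)
    for row in data:
        if row[child] is not None and row[parent] is not None:
            cond[row[parent]][row[child]] += 1
    imputed = []
    for row in data:
        row = list(row)
        if row[child] is None and cond[row[parent]]:
            # most frequent child value among records sharing this parent
            row[child] = cond[row[parent]].most_common(1)[0][0]
        imputed.append(row)
    return imputed
```

Because each imputation is drawn from (here, the mode of) an estimated conditional distribution, values impossible given the parent never appear, which is the logical-consistency property the abstract emphasizes.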
9.
Cathy W. S. Chen, F. C. Liu and Mike K. P. So, Australian & New Zealand Journal of Statistics, 2008, 50(1): 29-51
To capture mean and variance asymmetries and time-varying volatility in financial time series, we generalize the threshold stochastic volatility (THSV) model and incorporate a heavy-tailed error distribution. Unlike existing stochastic volatility models, this model simultaneously accounts for uncertainty in the unobserved threshold value and in the time-delay parameter. Self-exciting and exogenous threshold variables are considered to investigate the impact of a number of market news variables on volatility changes. Adopting a Bayesian approach, we use Markov chain Monte Carlo methods to estimate all unknown parameters and latent variables. A simulation experiment demonstrates good estimation performance for reasonable sample sizes. In a study of two international financial market indices, we consider two variants of the generalized THSV model, with US market news as the threshold variable. Finally, we compare models using Bayesian forecasting in a value-at-risk (VaR) study. The results show that our proposed model can generate more accurate VaR forecasts than can standard models.
10.
China's stock market uses two trading mechanisms, call auction and continuous auction. Differences between the two mechanisms affect the volatility of stock prices, and volatility is a double-edged sword for the market, so seeking a moderate level of volatility from the perspective of the trading mechanism has become a central theoretical question. Taking the Shanghai stock market as its subject, this paper conducts an empirical study on all trading data for 2001. The results show that the variance of returns on opening prices, which are formed by call auction, is larger than the variance of returns on closing prices, which are formed by continuous auction. The reason is that, compared with continuous auction, the call auction process is less transparent and its orders cannot be revised once submitted. Policy recommendations on the trading mechanism are made on this basis.
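The paper's central comparison, the variance of call-auction (open-to-open) returns versus continuous-auction (close-to-close) returns, reduces to a simple variance ratio. This sketch assumes daily opening and closing price series and uses log returns; the function name is illustrative:

```python
import numpy as np

def variance_ratio(open_prices, close_prices):
    """Ratio of the variance of open-to-open log returns (call-auction
    prices) to the variance of close-to-close log returns
    (continuous-auction prices). A ratio above 1 indicates higher
    volatility at the open, as the paper reports for Shanghai in 2001."""
    r_open = np.diff(np.log(open_prices))
    r_close = np.diff(np.log(close_prices))
    return np.var(r_open, ddof=1) / np.var(r_close, ddof=1)
```

In practice one would follow the ratio with a formal test (e.g. an F-test or a bootstrap) before drawing the paper's conclusion; the ratio alone is only descriptive.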