Search results: 2,128 articles in total (2,040 subscription full text, 49 free, 39 domestic free), published between 1975 and 2024, with the largest single year being 2013 (456 articles).
By subject: Statistics (1,270), Management (479), Interdisciplinary (304), Theory and Methodology (32), Sociology (21), Collected Works (12), Demography (9), Ethnology (1).
1.
Abstract

Characterization results via the Rényi entropy of m-generalized order statistics are considered, along with examples and related stochastic orderings. Previous results for common order statistics are included.
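For reference, the Rényi entropy of order α of a random variable X with density f (the standard definition, not anything specific to this paper) is

$$
H_{\alpha}(X) \;=\; \frac{1}{1-\alpha}\,\log \int_{-\infty}^{\infty} f^{\alpha}(x)\,dx,
\qquad \alpha > 0,\ \alpha \neq 1,
$$

and it recovers the Shannon entropy $-\int f(x)\log f(x)\,dx$ in the limit $\alpha \to 1$.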
2.
In this paper, we consider the deterministic trend model where the error process is allowed to be weakly or strongly correlated and subject to non-stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt-type estimator (with some initial condition) could be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non-parametrically weighted Cochrane–Orcutt-type estimator is then proposed. The efficiency is uniform over weak or strong serial correlation and non-stationary volatility of unknown form. The feasible estimator relies on non-parametric estimation of the volatility function, and the asymptotic theory is provided. We use a data-dependent smoothing bandwidth that can automatically adjust for the strength of non-stationarity in volatilities. The implementation does not require pretesting the persistence of the process or specifying the form of the non-stationary volatility. Finite-sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
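For orientation, a minimal sketch of the classical iterated Cochrane–Orcutt fit of a linear trend under AR(1) errors follows. The paper's nonparametrically weighted version additionally reweights by an estimated volatility function, which is not reproduced here.

```python
import numpy as np

def cochrane_orcutt_trend(y, n_iter=20):
    """Iterated Cochrane-Orcutt fit of y_t = a + b*t + u_t with AR(1) errors.

    Minimal textbook version (drops the first observation at each
    quasi-differencing step); the paper's estimator additionally applies
    nonparametric volatility weights, which are omitted here.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n), np.arange(1.0, n + 1.0)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]         # OLS starting values
    rho = 0.0
    for _ in range(n_iter):
        u = y - X @ beta                                # current residuals
        rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])      # AR(1) coefficient
        y_star = y[1:] - rho * y[:-1]                   # quasi-differences
        X_star = X[1:] - rho * X[:-1]
        beta = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
    return beta, rho
```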
3.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary reference value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous work on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when the reference value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, the reference value is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
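A minimal sketch of the MCML idea on a toy latent variable model follows; the model and all names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Toy latent variable model (an illustrative assumption, not the paper's):
#   z_i ~ N(theta, 1) latent,  y_i | z_i ~ N(z_i, 1) observed,
# so marginally y_i ~ N(theta, 2) and z_i | y_i ~ N((y_i + theta)/2, 1/2).
rng = np.random.default_rng(0)
n, m, psi = 200, 2000, 0.0        # observations, simulations, reference value
y = rng.normal(1.5, np.sqrt(2.0), size=n)

# Simulate latent draws once, from the conditional law at the reference
# value psi (common random numbers keep the objective smooth in theta).
z = rng.normal((y + psi) / 2.0, np.sqrt(0.5), size=(m, n))

def mcml_neg_loglik(theta):
    # Log importance ratio log p_theta(y, z) - log p_psi(y, z); the
    # observation factor phi(y - z) cancels, leaving the latent part only.
    log_w = norm.logpdf(z - theta) - norm.logpdf(z - psi)
    # MCML approximation of log L(theta) - log L(psi), summed over i.
    return -np.sum(np.log(np.mean(np.exp(log_w), axis=0)))

theta_hat = minimize_scalar(mcml_neg_loglik, bounds=(-5, 5), method="bounded").x
print(theta_hat)  # should land near the true value 1.5
```

With the reference value psi held fixed far from the truth, an accurate approximation requires a very large number of simulations m, which is the phenomenon the paper quantifies.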
4.
Many recent papers have used semiparametric methods, especially the log-periodogram regression, to detect and estimate long memory in the volatility of asset returns. In these papers, the volatility is proxied by measures such as squared, log-squared, and absolute returns. While the evidence for the existence of long memory is strong using any of these measures, the actual long memory parameter estimates can be sensitive to which measure is used. In Monte Carlo simulations, I find that if the data are conditionally leptokurtic, the log-periodogram regression estimator using squared returns has a large downward bias, which is avoided by using the other volatility measures. In United States stock return data, I find that squared returns give much lower estimates of the long memory parameter than the alternative volatility measures, which is consistent with the simulation results. I conclude that researchers should avoid using squared returns in the semiparametric estimation of long memory volatility dependencies.
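A minimal sketch of the log-periodogram (GPH) regression discussed above follows; the bandwidth choice is an illustrative assumption.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak log-periodogram regression estimate of d.

    Regresses log I(lambda_j) on -2*log(2*sin(lambda_j/2)) over the first
    m Fourier frequencies; the slope is the estimate of d.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                    # a common illustrative bandwidth
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())
    I = (np.abs(dft[1 : m + 1]) ** 2) / (2.0 * np.pi * n)   # periodogram
    regressor = -2.0 * np.log(2.0 * np.sin(freqs / 2.0))
    X = np.column_stack([np.ones(m), regressor])
    beta = np.linalg.lstsq(X, np.log(I), rcond=None)[0]
    return beta[1]
```

Applying the same function to `returns**2`, `np.log(returns**2)`, and `np.abs(returns)` makes the sensitivity across volatility proxies discussed above directly visible.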
5.
The problem considered is that of finding an optimum measurement schedule to estimate population parameters in a nonlinear model when the patient effects are random. The paper presents examples of the use of sensitivity functions, derived from the General Equivalence Theorem for D-optimality, in the construction of optimum population designs for such schedules. With independent observations, the theorem applies to the potential inclusion of a single observation. However, in population designs the observations are correlated and the theorem applies to the inclusion of an additional measurement schedule. In one example, three groups of patients of differing size are subject to distinct schedules. Numerical, as opposed to analytical, calculation of the sensitivity function is advocated. The required covariances of the observations are found by simulation.
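For the simpler independent-observations case invoked above, the sensitivity function and its Equivalence Theorem bound are easy to check numerically. The sketch below uses a toy quadratic regression on [-1, 1] (an illustrative assumption, not the paper's nonlinear random-effects model): a design is D-optimal exactly when the sensitivity function never exceeds the number of parameters p.

```python
import numpy as np

# Sensitivity function d(x, xi) = f(x)' M(xi)^{-1} f(x) for a linear model;
# by the General Equivalence Theorem, xi is D-optimal iff max_x d(x, xi) <= p.
def f(x):
    return np.array([1.0, x, x * x])        # quadratic regression, p = 3

support = np.array([-1.0, 0.0, 1.0])        # candidate D-optimal design points
weights = np.array([1/3, 1/3, 1/3])

M = sum(w * np.outer(f(x), f(x)) for x, w in zip(support, weights))
Minv = np.linalg.inv(M)

grid = np.linspace(-1.0, 1.0, 201)
d = np.array([f(x) @ Minv @ f(x) for x in grid])
print(d.max())   # ~3.0 = number of parameters, confirming D-optimality
```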
6.
Abstract. This document presents a survey of the statistical and combinatorial aspects of four areas of comparative genomics: gene-order-based measures of evolutionary distances between species, construction of phylogenetic trees, detection of horizontal transfer of genes, and detection of ancient whole genome duplications.
7.
Econometric Reviews, 2007, 26(1): 1–24
This paper extends the current literature on the variance-causality topic by providing the coefficient restrictions that ensure variance noncausality within multivariate GARCH models with in-mean effects. Furthermore, this paper presents a new multivariate model, the exponential causality GARCH. Through the introduction of a multiplicative causality impact function, the variance causality effects become directly interpretable and can therefore be used to detect both the existence of causality and its direction; notably, the proposed model allows for increasing and decreasing variance effects. An empirical application provides evidence of negative causality effects between the returns and volume of an Italian stock market index futures contract.
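A toy simulation may help fix the idea of variance causality. This is a plain bivariate GARCH with an additive variance spillover term, offered purely as an illustration under assumed parameter values, not the paper's exponential causality GARCH with its multiplicative impact function.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
eps = np.zeros((T, 2))   # shocks of the two series
h = np.ones((T, 2))      # conditional variances

# One-directional variance causality: past shocks of series 1 enter the
# variance of series 2 (c > 0), but not vice versa. Parameters assumed.
omega = np.array([0.05, 0.05])
a = np.array([0.10, 0.10])
b = np.array([0.85, 0.80])
c = 0.08

for t in range(1, T):
    h[t] = omega + a * eps[t - 1] ** 2 + b * h[t - 1]
    h[t, 1] += c * eps[t - 1, 0] ** 2      # variance spillover 1 -> 2
    eps[t] = rng.standard_normal(2) * np.sqrt(h[t])
```

Setting c = 0 removes the causality, so comparing fits with and without the spillover term is the simplest way to see what a coefficient restriction for variance noncausality looks like.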
8.
Summary. A fairly general procedure is studied to perturb a multivariate density satisfying a weak form of multivariate symmetry, and to generate a whole set of non-symmetric densities. The approach is sufficiently general to encompass some recent proposals in the literature, variously related to the skew normal distribution. The special case of skew elliptical densities is examined in detail, establishing connections with existing similar work. The final part of the paper specializes further to a form of multivariate skew t-density. Likelihood inference for this distribution is examined, and it is illustrated with numerical examples.
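One well-known instance of such a perturbation scheme, stated here as general background rather than as the paper's exact formulation: if $f_0$ is a density on $\mathbb{R}^k$ symmetric about the origin, $G$ is a scalar distribution function with a symmetric density, and $w$ is an odd function ($w(-x) = -w(x)$), then

$$
f(x) \;=\; 2\, f_0(x)\, G\{w(x)\}
$$

is again a density. The skew normal distribution arises for $f_0 = \phi_k$, $G = \Phi$, and $w(x) = \alpha^{\top} x$.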
9.
Summary. As part of the EUREDIT project, new methods to detect multivariate outliers in incomplete survey data have been developed. These methods are the first to work with sampling weights and to be able to cope with missing values. Two of these methods are presented here. The epidemic algorithm simulates the propagation of a disease through a population and uses extreme infection times to find outlying observations. Transformed rank correlations are robust estimates of the centre and the scatter of the data. They use a geometric transformation that is based on the rank correlation matrix. The estimates are used to define a Mahalanobis distance that reveals outliers. The two methods are applied to a small data set and to one of the evaluation data sets of the EUREDIT project.
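An illustrative sketch of the second method's core computation follows: building a scatter matrix from transformed rank correlations and flagging large Mahalanobis distances. This is a simplified stand-in that ignores sampling weights and missing values, both of which the EUREDIT methods handle.

```python
import numpy as np
from scipy.stats import chi2, rankdata

def rank_correlation_outliers(X, alpha=0.001):
    """Flag rows of X (n x p, complete data) with large robust Mahalanobis
    distances. Simplified sketch only; not the full EUREDIT algorithm."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    ranks = np.apply_along_axis(rankdata, 0, X)
    rho = np.corrcoef(ranks, rowvar=False)       # Spearman rank correlations
    R = 2.0 * np.sin(np.pi * rho / 6.0)          # transform: consistent under normality
    med = np.median(X, axis=0)
    mad = 1.4826 * np.median(np.abs(X - med), axis=0)   # robust scales (MAD)
    S = R * np.outer(mad, mad)                   # robust scatter estimate
    diff = X - med
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff)
    return d2 > chi2.ppf(1.0 - alpha, df=p)      # flag extreme distances
```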
10.
There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
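For orientation, the first-order version of such a boundary test uses the standard log-periodogram asymptotics $\sqrt{m}(\hat d - d) \Rightarrow N(0, \pi^2/24)$, where m is the number of frequencies used; the Lieberman–Phillips higher-order refinement that sharpens the error rate to o(n^{-1/2}) is not reproduced in this sketch.

```python
import numpy as np

def long_memory_boundary_test(d_hat, m, d0=0.5):
    """First-order z-statistic for H0: d = d0, given a log-periodogram
    estimate d_hat computed from m frequencies. Sketch only; the paper's
    higher-order theory refines this O(n^{-1/2}) approximation."""
    return (d_hat - d0) * np.sqrt(24.0 * m) / np.pi
```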