2.
In this paper, we consider the deterministic trend model in which the error process is allowed to be weakly or strongly correlated and subject to non-stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt-type estimator (with some initial condition) can be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non-parametrically weighted Cochrane–Orcutt-type estimator is then proposed. Its efficiency is uniform over weak or strong serial correlation and non-stationary volatility of unknown form. The feasible estimator relies on non-parametric estimation of the volatility function, and the asymptotic theory is provided. We use a data-dependent smoothing bandwidth that automatically adjusts for the strength of non-stationarity in the volatilities. The implementation requires neither pretesting the persistence of the process nor specifying the form of the non-stationary volatility. Finite-sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
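The following is a minimal numerical sketch of the kind of estimator the abstract describes: a Cochrane–Orcutt-style quasi-differencing step followed by weighted least squares with nonparametrically estimated volatility weights. The function names, the Gaussian-kernel smoother, the fixed bandwidth, and the simulated volatility break are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=500, a=1.0, b=0.5, rho=0.7):
    """Linear trend with AR(1) errors and a volatility break at mid-sample."""
    sigma = np.where(np.arange(n) < n // 2, 1.0, 3.0)  # non-stationary volatility
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + sigma[t] * rng.standard_normal()
    t = np.arange(1, n + 1)
    return t, a + b * t + u

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def weighted_cochrane_orcutt(t, y, bandwidth=0.1):
    n = len(y)
    X = np.column_stack([np.ones(n), t])
    e = y - X @ ols(X, y)                       # OLS residuals
    rho_hat = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])  # AR(1) coefficient estimate
    # Cochrane-Orcutt step: quasi-difference y and the regressors
    yq, Xq = y[1:] - rho_hat * y[:-1], X[1:] - rho_hat * X[:-1]
    eq = yq - Xq @ ols(Xq, yq)
    # nonparametric variance estimate: Nadaraya-Watson smooth of eq**2
    s = np.arange(1, n) / n
    K = np.exp(-0.5 * ((s[:, None] - s[None, :]) / bandwidth) ** 2)
    sig2 = (K @ eq**2) / K.sum(axis=1)
    w = 1.0 / np.sqrt(sig2)                     # reweight by 1 / sigma_hat(t)
    return ols(Xq * w[:, None], yq * w)

t, y = simulate()
print("weighted CO estimate of (a, b):", weighted_cochrane_orcutt(t, y))
```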
3.
Proportional hazards is a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning no difference between treatment arms is observed, and only after some unknown time point do differences between the arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log-rank test allows a weighting of early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment-arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation, in terms of power and type I error rate, of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
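As a concrete illustration of the test this abstract builds on, here is a small self-contained sketch of the two-sample Fleming–Harrington G(p, q) weighted log-rank statistic. The function name and the toy delayed-effect data are assumptions made for the example; the paper's group sequential machinery is not reproduced.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, p=0, q=1):
    """Two-sample Fleming-Harrington G(p, q) weighted log-rank test.

    p = q = 0 recovers the standard log-rank statistic; q > 0 up-weights
    late differences, which is natural under a delayed treatment effect.
    Returns the z-statistic (standard normal under H0)."""
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    S = 1.0                       # left-continuous pooled Kaplan-Meier S(t-)
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = S**p * (1 - S)**q     # FH weight evaluated at S(t-)
        num += w * (d1 - d * n1 / n)
        if n > 1:                 # hypergeometric variance term
            var += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        S *= 1 - d / n            # update the pooled KM estimate
    return num / np.sqrt(var)

# toy data with a delayed effect: arms are identical before t = 5,
# after which the treatment arm (group 1) has a lighter hazard
rng = np.random.default_rng(1)
control = rng.exponential(10, 200)
base = rng.exponential(10, 200)
treated = np.where(base <= 5, base, 5 + rng.exponential(25, 200))
time = np.concatenate([control, treated])
event = np.ones(400, dtype=bool)
group = np.repeat([0, 1], 200)
print("FH(0,1) z =", fh_weighted_logrank(time, event, group, p=0, q=1))
```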
5.
Many recent papers have used semiparametric methods, especially the log-periodogram regression, to detect and estimate long memory in the volatility of asset returns. In these papers, volatility is proxied by measures such as squared, log-squared, and absolute returns. While the evidence for the existence of long memory is strong using any of these measures, the actual long memory parameter estimates can be sensitive to which measure is used. In Monte Carlo simulations, I find that if the data are conditionally leptokurtic, the log-periodogram regression estimator based on squared returns has a large downward bias, which is avoided by using the other volatility measures. In US stock return data, I find that squared returns give much lower estimates of the long memory parameter than the alternative volatility measures, consistent with the simulation results. I conclude that researchers should avoid using squared returns in the semiparametric estimation of long memory volatility dependencies.
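A minimal sketch of the log-periodogram (GPH) estimator the abstract refers to, applied to the three volatility proxies it compares. The i.i.d. Student-t returns below are only a placeholder data-generating process, so all three estimates should sit near zero; reproducing the downward bias discussed in the abstract would require a genuine long-memory volatility process.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak log-periodogram estimate of the memory parameter d.

    Regresses log I(lambda_j) on -log(4 sin^2(lambda_j / 2)) over the first
    m Fourier frequencies; the slope estimates d."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    m = m or int(n**0.5)                     # common bandwidth choice m = sqrt(n)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x)[1:m + 1])**2 / (2 * np.pi * n)   # periodogram
    X = np.column_stack([np.ones(m), -np.log(4 * np.sin(lam / 2)**2)])
    return np.linalg.lstsq(X, np.log(I), rcond=None)[0][1]

# leptokurtic returns; i.i.d. here, so every d_hat should be near zero
rng = np.random.default_rng(2)
r = rng.standard_t(5, 5000) * 0.01
for proxy, name in [(r**2, "squared"), (np.abs(r), "absolute"),
                    (np.log(r**2 + 1e-12), "log-squared")]:
    print(f"{name:12s} d_hat = {gph_estimate(proxy):.3f}")
```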
6.
There is an emerging consensus in empirical finance that realized volatility series typically display long-range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher-order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is free of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the long memory parameter estimates reported by Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and it generally confirms earlier findings.
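For orientation, the sketch below shows only the first-order version of the test described in the abstract: a z-test of H0: d = 0.5 using the standard asymptotic variance π²/(24m) of the log-periodogram estimator. The higher-order refinement that is the article's contribution is not reproduced here, and the inputs are hypothetical numbers, not the reported estimates.

```python
import numpy as np

def test_d_against_boundary(d_hat, m, d0=0.5):
    """First-order z-test of H0: d = d0 for a log-periodogram estimate
    based on m frequencies, using the standard asymptotic variance
    pi^2 / (24 m). The article refines exactly this normal approximation,
    improving its O(n^{-1/2}) error rate to o(n^{-1/2})."""
    se = np.sqrt(np.pi**2 / (24 * m))
    return (d_hat - d0) / se, se

# hypothetical example: an estimate d_hat = 0.40 from m = 300 frequencies
z, se = test_d_against_boundary(0.40, 300)
print(f"z = {z:.2f} (se = {se:.3f})")
```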
7.
This paper first analyses the characteristics of the logarithmic least squares ranking method and argues that it is a sound method deserving of attention. It then expounds the basic principle of the method, with a rigorous mathematical derivation of how to obtain the weighted aggregate ranking vector under group judgement. A new view on determining the weighting coefficients in the weighted aggregate ranking is proposed and illustrated with an example.
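To make the method concrete, here is a brief sketch of the logarithmic least squares (geometric-mean) priority vector and of a weighted geometric-mean aggregation across experts. The expert weights are simply supplied by the caller here, whereas choosing them is precisely the question the paper addresses; the matrices and function names are illustrative.

```python
import numpy as np

def llsm_priority(A):
    """Logarithmic least squares priority vector of a pairwise-comparison
    matrix A: w_i proportional to the geometric mean of row i, which solves
    min over w of sum_ij (ln a_ij - ln w_i + ln w_j)^2."""
    w = np.exp(np.mean(np.log(A), axis=1))
    return w / w.sum()

def group_llsm(matrices, expert_weights):
    """Weighted group aggregate: element-wise weighted geometric mean of the
    experts' matrices, then LLSM on the aggregate matrix."""
    L = sum(c * np.log(A) for c, A in zip(expert_weights, matrices))
    return llsm_priority(np.exp(L / np.sum(expert_weights)))

# two experts comparing three alternatives
A1 = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
A2 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
print(group_llsm([A1, A2], expert_weights=[0.6, 0.4]))
```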
8.
To capture mean and variance asymmetries and time-varying volatility in financial time series, we generalize the threshold stochastic volatility (THSV) model and incorporate a heavy-tailed error distribution. Unlike existing stochastic volatility models, this model simultaneously accounts for uncertainty in the unobserved threshold value and in the time-delay parameter. Self-exciting and exogenous threshold variables are considered to investigate the impact of a number of market news variables on volatility changes. Adopting a Bayesian approach, we use Markov chain Monte Carlo methods to estimate all unknown parameters and latent variables. A simulation experiment demonstrates good estimation performance for reasonable sample sizes. In a study of two international financial market indices, we consider two variants of the generalized THSV model, with US market news as the threshold variable. Finally, we compare models using Bayesian forecasting in a value-at-risk (VaR) study. The results show that our proposed model can generate more accurate VaR forecasts than can standard models.
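The sketch below simulates the basic structure being generalized: a two-regime threshold stochastic volatility process with Student-t return errors and a self-exciting threshold variable. The parameter values are arbitrary, and the paper's Bayesian MCMC estimation of the unknown threshold and time-delay parameters is not attempted here.

```python
import numpy as np

def simulate_thsv(n=1000, mu=(-0.2, 0.2), phi=(0.95, 0.90), sigma_eta=0.2,
                  threshold=0.0, delay=1, nu=5, seed=3):
    """Simulate a two-regime threshold stochastic volatility process with
    Student-t return errors. The regime at time t is set by the lagged
    return y_{t-d} relative to the threshold (the self-exciting case); the
    paper additionally treats the threshold and delay d as unknown and
    samples them by MCMC."""
    rng = np.random.default_rng(seed)
    y, h = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        s = int(y[t - delay] > threshold) if t >= delay else 0
        h[t] = mu[s] + phi[s] * (h[t - 1] - mu[s]) + sigma_eta * rng.standard_normal()
        y[t] = np.exp(h[t] / 2) * rng.standard_t(nu)   # heavy-tailed returns
    return y, h

y, h = simulate_thsv()
print("return kurtosis:", ((y - y.mean())**4).mean() / y.var()**2)
```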
9.
This paper presents a method of estimating a receiver operating characteristic (ROC) curve when the underlying diagnostic variable X is continuous and fully observed. The new method is based on modelling the probability of response given X, rather than the distribution of X given response. The method offers advantages in modelling flexibility and computational simplicity. The resulting ROC curve estimates are semi-parametric and can, in principle, take an infinite variety of shapes. Moreover, model selection can be based on standard methods within the binomial regression framework. Statistical accuracy of the curve estimate is provided by an easily implemented bootstrap approach.
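A minimal sketch of this modelling direction: fit a binary regression of response on X, read the ROC curve off the fitted risk scores, and bootstrap the area under the curve. The plain logistic fit and the percentile bootstrap are simplifying assumptions for illustration, not the paper's exact semi-parametric estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def roc_from_binary_regression(x, y):
    """ROC estimate built from a model of P(response | X) rather than from
    the distribution of X within each response group."""
    model = LogisticRegression().fit(x.reshape(-1, 1), y)
    p = model.predict_proba(x.reshape(-1, 1))[:, 1]     # fitted risk scores
    thresholds = np.sort(np.unique(p))[::-1]
    tpr = np.array([(p[y == 1] >= c).mean() for c in thresholds])
    fpr = np.array([(p[y == 0] >= c).mean() for c in thresholds])
    # anchor the curve at (0, 0) and (1, 1)
    return np.concatenate([[0.0], fpr, [1.0]]), np.concatenate([[0.0], tpr, [1.0]])

def bootstrap_auc(x, y, B=200, seed=4):
    """Percentile bootstrap interval for the area under the estimated ROC."""
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(B):
        i = rng.integers(0, len(y), len(y))
        fpr, tpr = roc_from_binary_regression(x[i], y[i])
        aucs.append(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))  # trapezoid rule
    return np.percentile(aucs, [2.5, 97.5])

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(1, 1, 150)])
y = np.repeat([0, 1], 150)
print("95% bootstrap CI for AUC:", bootstrap_auc(x, y))
```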
10.
WEIGHTED SUMS OF NEGATIVELY ASSOCIATED RANDOM VARIABLES
In this paper, we establish strong laws for weighted sums of negatively associated (NA) random variables that satisfy a higher-order moment condition. Some results of Bai, Z.D. & Cheng, P.E. (2000) [Marcinkiewicz strong laws for linear statistics. Statist. Probab. Lett. 43, 105–112] and Sung, S.K. (2001) [Strong laws for weighted sums of i.i.d. random variables. Statist. Probab. Lett. 52, 413–419] are sharpened and extended from the independent identically distributed case to the NA setting. Also, one of the results of Li, D.L. et al. (1995) [Complete convergence and almost sure convergence of weighted sums of random variables. J. Theoret. Probab. 8, 49–76] is complemented and extended.
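As a purely numerical illustration of the kind of Marcinkiewicz-type weighted strong law the abstract sharpens, the sketch below draws a negatively associated sample (sampling without replacement from a finite population is a classical NA example) and checks that the normalized weighted sum shrinks as n grows. The weight sequence and the normalization exponent are assumptions chosen for the demo, not the paper's exact conditions.

```python
import numpy as np

rng = np.random.default_rng(6)

def na_sample(n):
    """A classical negatively associated sample: draws without replacement
    from a fixed mean-zero population (permutation distributions are NA)."""
    pop = np.concatenate([np.ones(n), -np.ones(n)])
    return rng.permutation(pop)[:n]

# Marcinkiewicz-type weighted strong law: with a bounded weight sequence a_i
# and p < 2, n^{-1/p} * sum_i a_i X_i should tend to zero almost surely.
p = 1.5
for n in [10**3, 10**4, 10**5]:
    x = na_sample(n)
    a = 1 + np.sin(np.arange(n))          # bounded weight sequence
    print(n, abs(np.sum(a * x)) / n**(1 / p))
```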