By access: paid full text 236 articles; free 9 articles
By discipline: Management 5; Ethnology 3; Demography 7; Collected works 2; Theory and methodology 1; General 28; Sociology 6; Statistics 193
By year: 2022 (2); 2021 (1); 2020 (5); 2019 (9); 2018 (9); 2017 (15); 2016 (8); 2015 (5); 2014 (5); 2013 (48); 2012 (19); 2011 (9); 2010 (4); 2009 (11); 2008 (7); 2007 (8); 2006 (9); 2005 (9); 2004 (5); 2003 (2); 2002 (2); 2001 (2); 2000 (2); 1999 (9); 1998 (10); 1997 (5); 1995 (1); 1994 (3); 1993 (1); 1991 (2); 1990 (4); 1989 (2); 1988 (1); 1986 (1); 1985 (2); 1984 (3); 1983 (3); 1979 (1); 1978 (1)
Sort order: 245 results found (search time: 15 ms)
41.
42.
In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. The construction of “observable” or realized volatility series from intra-day transaction data and the use of standard time-series techniques has led to promising strategies for modeling and predicting (daily) volatility. In this article, we show that the residuals of commonly used time-series models for realized volatility and logarithmic realized variance exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance for modeling and forecasting realized volatility. In an empirical application to S&P 500 index futures, we show that allowing for time-varying volatility of realized volatility and logarithmic realized variance substantially improves the fit as well as the predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting.
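The two residual features the abstract highlights can be checked directly. The sketch below is a minimal illustration rather than the authors' model: it fits an AR(1) to log realized variance by OLS and reports the excess kurtosis and the first-order autocorrelation of the squared residuals, both of which would be near zero for i.i.d. Gaussian errors. The input `rv`, a series of daily realized variances, is a hypothetical placeholder.

```python
import numpy as np

def ar1_logrv_diagnostics(rv):
    """Fit an AR(1) to log realized variance by OLS and inspect the residuals
    for non-Gaussianity (excess kurtosis) and volatility clustering
    (autocorrelation of the squared residuals)."""
    y = np.log(np.asarray(rv, dtype=float))
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])        # intercept + lagged log-RV
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - X @ beta
    r = resid - resid.mean()
    excess_kurtosis = np.mean(r**4) / np.mean(r**2) ** 2 - 3.0   # 0 under Gaussianity
    s = r**2 - np.mean(r**2)
    acf1_sq_resid = np.sum(s[1:] * s[:-1]) / np.sum(s**2)        # clearly > 0 signals clustering
    return beta, excess_kurtosis, acf1_sq_resid
```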
43.
Estimating a curve nonparametrically from data measured with error is a difficult problem that has been studied by many authors. Constructing a consistent estimator in this context can sometimes be quite challenging, and in this paper we review some of the tools that have been developed in the literature for kernel-based approaches, founded on the Fourier transform and a more general unbiased score technique. We use those tools to rederive some of the existing nonparametric density and regression estimators for data contaminated by classical or Berkson errors, and discuss how to compute these estimators in practice. We also review some mistakes made by those working in the area, and highlight a number of problems with an existing R package, decon.
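For readers who want to compute a Fourier-type deconvolution kernel density estimator directly, the sketch below gives a minimal NumPy version. It assumes classical Gaussian error with known standard deviation `sigma_u` and uses a kernel whose characteristic function is (1 − t²)³ on [−1, 1] (one common choice); it illustrates the standard estimator and is not a substitute for the decon package or the corrections the authors discuss.

```python
import numpy as np

def deconvolution_kde(x_grid, W, h, sigma_u, n_t=401):
    """Deconvolution KDE for data W = X + U, U ~ N(0, sigma_u^2):
    f_hat(x) = (1 / (n h)) * sum_j K_U((x - W_j) / h), where
    K_U(u) = (1 / 2pi) * int exp(-i t u) phi_K(t) / phi_U(t / h) dt."""
    t = np.linspace(-1.0, 1.0, n_t)
    dt = t[1] - t[0]
    phi_K = (1.0 - t**2) ** 3                           # kernel characteristic function on [-1, 1]
    phi_U = np.exp(-0.5 * (sigma_u * t / h) ** 2)       # Gaussian error char. function at t / h
    ratio = phi_K / phi_U
    u = (np.asarray(x_grid)[:, None] - np.asarray(W)[None, :]) / h   # shape (grid, obs)
    # imaginary part vanishes by symmetry, so integrate cos(t u) * ratio over t
    K_U = (np.cos(u[..., None] * t) * ratio).sum(axis=-1) * dt / (2.0 * np.pi)
    return K_U.sum(axis=1) / (len(W) * h)
```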
44.
Constructing spatial density maps of seismic events, such as earthquake hypocentres, is complicated by the fact that events are not located precisely. In this paper, we present a method for estimating density maps from event locations that are measured with error. The estimator is based on the simulation–extrapolation (SIMEX) method of estimation and is appropriate for location errors that are either homoscedastic or heteroscedastic. A simulation study shows that the estimator outperforms the standard density estimator that ignores location errors in the data, even when the location errors are spatially dependent. We apply our method to construct an estimated density map of earthquake hypocentres using data from the Alaska earthquake catalogue.
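The simulation–extrapolation idea behind the estimator can be sketched in one dimension (the paper's application is a two-dimensional hypocentre map and its estimator also handles heteroscedastic errors; neither is attempted here). The sketch assumes homoscedastic Gaussian location error with known standard deviation `sigma_u`: add extra noise at several levels lambda, smooth naively at each level, then extrapolate the fitted trend back to lambda = −1.

```python
import numpy as np
from scipy.stats import gaussian_kde   # naive KDE that ignores measurement error

def simex_density(W, sigma_u, x_grid, lambdas=(0.5, 1.0, 1.5, 2.0), B=50, seed=0):
    """SIMEX-style density estimate on x_grid from error-contaminated locations W
    (1-D sketch, homoscedastic Gaussian error with known std sigma_u)."""
    rng = np.random.default_rng(seed)
    lam_grid = [0.0]
    est = [gaussian_kde(W)(x_grid)]                 # naive estimate at lambda = 0
    for lam in lambdas:                             # simulation step: inflate the error
        acc = np.zeros(len(x_grid))
        for _ in range(B):
            W_b = W + rng.normal(0.0, sigma_u * np.sqrt(lam), size=len(W))
            acc += gaussian_kde(W_b)(x_grid)
        lam_grid.append(lam)
        est.append(acc / B)
    lam_grid, est = np.array(lam_grid), np.array(est)
    coefs = np.polyfit(lam_grid, est, deg=2)        # quadratic extrapolant in lambda
    return coefs[0] - coefs[1] + coefs[2]           # evaluate the quadratic at lambda = -1
```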
45.
In this study, we propose a unified semiparametric approach to estimating various indices of treatment effect under the density ratio model, which connects two density functions by an exponential tilt. For each index, we construct two estimating functions based on the model and apply the generalized method of moments to improve the estimates. The estimating functions are allowed to be non-smooth in the parameters, which makes the proposed method more flexible. We establish the asymptotic properties of the proposed estimators and illustrate the application with several simulations and two real data sets.
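The exponential tilt links the two arms through log{g(x)/f(x)} = alpha + beta·x. As a point of reference only (not the authors' GMM construction, which combines two estimating functions per index), the tilt parameters can be fitted via the well-known logistic-regression representation of the two-sample density ratio model; the sketch below does so with a hand-coded likelihood, and the argument names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_exponential_tilt(x_ctrl, x_trt):
    """Fit g(x) = f(x) * exp(alpha + beta * x), where f is the density of the
    control sample and g that of the treatment sample, by regressing group
    membership on x over the pooled sample (logistic representation)."""
    x = np.concatenate([x_ctrl, x_trt])
    z = np.concatenate([np.zeros(len(x_ctrl)), np.ones(len(x_trt))])  # group indicator

    def negloglik(theta):
        a, b = theta
        eta = a + b * x
        return np.sum(np.logaddexp(0.0, eta) - z * eta)   # stable logistic -loglik

    a_star, beta = minimize(negloglik, np.zeros(2), method="BFGS").x
    alpha = a_star - np.log(len(x_trt) / len(x_ctrl))     # remove the sampling-ratio offset
    return alpha, beta
```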
46.
This paper is motivated by our attempt to answer a policy question: how is private health insurance take-up in Australia affected by the income threshold at which the Medicare Levy Surcharge (MLS) kicks in? We propose a new difference deconvolution kernel estimator for the location and size of regression discontinuities. We also propose a bootstrapping procedure for estimating a confidence interval for the estimated discontinuity. The estimator's performance is evaluated by Monte Carlo simulations before it is applied to estimating the effect of the MLS income threshold on the take-up of private health insurance in Australia, using contaminated data.
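As background for what the estimator targets, the error-free version of the problem (finding where a regression function jumps and by how much) can be sketched as a grid search over candidate thresholds lying in the interior of the data range. The deconvolution step that handles a contaminated running variable and the bootstrap confidence interval are the paper's contributions and are not reproduced here; the function and variable names are illustrative.

```python
import numpy as np

def find_jump(x, y, candidates, h):
    """Locate a regression discontinuity by comparing one-sided kernel-weighted
    means of y just left and just right of each candidate threshold c.
    Measurement error in x is ignored in this sketch."""
    def side_mean(c, right):
        d = (x - c) / h
        w = np.exp(-0.5 * d**2) * ((x >= c) if right else (x < c))
        return np.sum(w * y) / np.sum(w)
    jumps = np.array([side_mean(c, True) - side_mean(c, False) for c in candidates])
    k = int(np.argmax(np.abs(jumps)))
    return candidates[k], jumps[k]      # estimated location and size of the jump
```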
47.
We propose a new type of non-parametric density estimator suited to random variables with lower- or upper-bounded support. To illustrate the method, we focus on nonnegative random variables. The estimators are constructed using kernels which are densities of empirical means of m i.i.d. nonnegative random variables with expectation 1. The exponent m plays the role of the bandwidth. We study the pointwise mean square error and propose a pointwise adaptive estimator. The risk of the adaptive estimator satisfies an almost-oracle inequality. A noteworthy result is that the adaptive rate corresponds to the smoothness properties of the unknown density as a function on (0, +∞). The adaptive estimators are illustrated on simulated data, and we compare our approach with classical kernel estimators.
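One concrete member of this family takes the base variables to be standard exponentials: the empirical mean of m i.i.d. Exp(1) variables has a Gamma(m, 1/m) density with mean 1, so scaling it by an observation X_i gives a Gamma(shape = m, scale = X_i/m) kernel whose mean is X_i, with larger m meaning less smoothing. A minimal sketch under that choice (the paper's adaptive selection of m is not attempted):

```python
import numpy as np
from scipy.stats import gamma

def convolution_power_kde(x_grid, X, m):
    """Density estimate on [0, inf): each observation X_i contributes the density
    of X_i times the mean of m i.i.d. Exp(1) variables, i.e. a Gamma kernel with
    shape m and scale X_i / m. The exponent m acts as the bandwidth."""
    X = np.asarray(X, dtype=float)
    kernels = gamma.pdf(np.asarray(x_grid)[:, None], a=m, scale=X[None, :] / m)
    return kernels.mean(axis=1)
```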
48.
On the Agglomeration Effects of Population    Cited 4 times (0 self-citations, 4 citations by others)
This paper argues that population agglomeration yields five major effects: a deepened division of labour, learning and incentive effects, lower costs, a higher quality of life, and the advancement of civilisation. These effects promote socio-economic development and prosperity. The paper goes on to propose several measures for achieving population agglomeration, and offers the authors' views on related issues.
49.
Methods are suggested for improving the coverage accuracy of intervals for predicting future values of a random variable drawn from a sampled distribution. It is shown that the properties of solutions to such problems may be quite unexpected. For example, the bootstrap and the jackknife perform very poorly when used to calibrate coverage, although the jackknife estimator of the true coverage is virtually unbiased. A version of the smoothed bootstrap can, however, be employed for successful calibration. Interpolation among adjacent order statistics can also be an effective way of calibrating, although even there the results are unexpected. In particular, whereas the coverage error can be reduced from O(n⁻¹) to orders O(n⁻²) and O(n⁻³) (where n denotes the sample size) by interpolating among two and three order statistics respectively, the next two orders of reduction require interpolation among five and eight order statistics respectively.
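The simplest of the interpolation schemes mentioned, linear interpolation between two adjacent order statistics, is easy to state in code; the higher-order calibrations (three or more order statistics, or the smoothed bootstrap) are not attempted in this sketch.

```python
import numpy as np

def prediction_interval(sample, coverage=0.90):
    """Two-sided nonparametric prediction interval for one future observation,
    using linear interpolation between adjacent order statistics. A future draw
    falls below the r-th order statistic with probability r / (n + 1), which
    motivates targeting the 'rank' p * (n + 1)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    alpha = (1.0 - coverage) / 2.0

    def bound(p):
        r = p * (n + 1)
        lo = int(np.clip(np.floor(r), 1, n))
        hi = int(np.clip(np.ceil(r), 1, n))
        w = r - np.floor(r)
        return (1.0 - w) * x[lo - 1] + w * x[hi - 1]

    return bound(alpha), bound(1.0 - alpha)
```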
50.
This article examines two density-based value-capture mechanisms – community amenity contributions (CAC) in Vancouver, Canada, and the transfer of development rights (TDR) in New Taipei City, Taiwan – that planners use to finance public goods. To understand the differences in the design of the mechanisms, the negotiating dynamics, the actors involved, and the types of public goods financed, we propose three perspectives on development rights: absolute ownership, bundle of rights, and public asset. We find that the public asset perspective underpins Vancouver's CAC, whereas in New Taipei City's TDR, development rights are treated more as a commodity, a concept rooted in the absolute ownership perspective.