409 query results in total (search time: 671 ms)
321.
There are several ways to handle within-subject correlation in a longitudinal discrete outcome, such as mortality. The most frequently used models are of either the marginal or the random-effects type. This paper takes a random-effects-based approach. We propose a nonparametric regression model with time-varying mixed effects for longitudinal cancer mortality data. The time-varying mixed effects in the proposed model are estimated by combining kernel-smoothing techniques with a growth-curve model. As an illustration on real data, we apply the proposed method to prefecture-specific data on mortality from large-bowel cancer in Japan.
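The kernel-smoothing building block underlying such an estimator can be sketched as follows. This is a generic Nadaraya-Watson smoother over time, not the paper's full mixed-effects/growth-curve estimator; the bandwidth, kernel, and simulated yearly mortality trend are illustrative assumptions.

```python
import numpy as np

def kernel_smooth(t_grid, t_obs, y_obs, bandwidth):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel.

    Illustrative building block only: the proposed method combines
    kernel smoothing with a growth-curve model, which is not
    reproduced here.
    """
    t_grid = np.asarray(t_grid, dtype=float)
    out = np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        w = np.exp(-0.5 * ((t_obs - t) / bandwidth) ** 2)
        out[i] = np.sum(w * y_obs) / np.sum(w)
    return out

# Smooth a noisy mortality-like trend observed at yearly time points.
rng = np.random.default_rng(0)
t_obs = np.arange(1980, 2011, dtype=float)
y_obs = 0.02 * (t_obs - 1980) + rng.normal(0, 0.05, t_obs.size)
fit = kernel_smooth(t_obs, t_obs, y_obs, bandwidth=3.0)
```

A time-varying coefficient would be obtained by running such a smoother on effect estimates rather than raw observations.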
322.
Experimental studies have found that decision makers prefer spreading good and bad outcomes evenly over time. We propose, in an axiomatic framework, a new model of discount factors that captures this preference for spread. The model provides a refinement of the discounted utility model while maintaining dynamic consistency. The derived discount factors incorporate gain/loss asymmetry recursively: the difference between average future utility and current utility defines a gain or a loss, and gains are discounted more than losses. This notion of utility smoothing can induce a preference for spread: if bad outcomes are concentrated in future periods, moving one of them to today is beneficial because doing so eliminates a large loss and replaces it with a small gain.
323.
324.
Case-control data are often used in medical applications, and most studies apply parametric logistic regression to analyze such data. In this study, we investigate a semiparametric model for the analysis of case-control data that relaxes the linearity assumption on risk factors by using a partial smoothing spline model. A faster computation method for the model, extending the lower-dimensional approximation approach developed for penalized likelihood regression by Gu and Kim (Penalized likelihood regression: general formulation and efficient approximation, Canad. J. Statist. 30 (2002), 619–628), is adapted to case-control studies. Simulations were conducted to evaluate the performance of the method with selected smoothing parameters and to compare it with existing methods. The method was applied to Korean gastric cancer case-control data to estimate the nonparametric probability function of age and the regression parameters for other categorical risk factors simultaneously. The method can be used in preliminary studies on large data sets to identify whether a flexible functional form of a risk factor is needed in semiparametric logistic regression analysis.
325.
Estimation of points of rapid change in the mean function m(t) is considered under long-memory residuals, irregularly spaced time points, and smoothly changing marginal distributions obtained by local Gaussian subordination. The approach is based on kernel estimation of derivatives of the trend function. An asymptotic expression for the mean squared error is obtained. Limit theorems are derived for derivatives of m and for the time points where rapid change occurs. The results are illustrated by an application to measurements of oxygen isotopes trapped in the Greenland ice sheets during the last 20,000 years.
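The core device, kernel estimation of the trend derivative, can be sketched with a local linear smoother: a point of rapid change appears as a local maximum of the absolute estimated derivative. This is a standard construction under i.i.d. noise; the bandwidth, data, and kernel below are assumptions, and the long-memory and Gaussian-subordination aspects are not reproduced.

```python
import numpy as np

def local_linear_slope(t0, t, y, h):
    """Local linear estimate of the derivative m'(t0): fit a weighted
    straight line around t0 with Gaussian kernel weights and return
    its slope."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)
    X = np.column_stack([np.ones_like(t), t - t0])
    A = X.T @ (w[:, None] * X)
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)[1]

# Irregularly spaced time points with one steep transition near t = 0.5.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 300))
y = np.tanh(20 * (t - 0.5)) + rng.normal(0, 0.1, t.size)

# Locate the point of most rapid change as the argmax of |m'(t)|.
grid = np.linspace(0.1, 0.9, 81)
slopes = np.array([local_linear_slope(g, t, y, h=0.05) for g in grid])
change_point = grid[np.argmax(np.abs(slopes))]
```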
326.
There are several levels of sophistication when specifying the bandwidth matrix H to be used in a multivariate kernel density estimator: H may be a positive multiple of the identity matrix, a diagonal matrix with positive elements or, in its most general form, a symmetric positive-definite matrix. In this paper, the author proposes a data-based method for choosing the smoothing parametrization to be used in the kernel density estimator. The procedure is fully illustrated by a simulation study and some real data examples. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
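The three parametrizations can be made concrete with a small bivariate Gaussian kernel density evaluator; this is a generic estimator, not the author's data-based selection procedure, and the sample and bandwidth values are assumptions.

```python
import numpy as np

def kde_at_point(x, data, H):
    """Bivariate Gaussian kernel density estimate at x with bandwidth
    matrix H, which may be a multiple of the identity, diagonal, or a
    general symmetric positive-definite matrix."""
    H = np.asarray(H, dtype=float)
    Hinv = np.linalg.inv(H)
    det = np.linalg.det(H)
    d = data - np.asarray(x, dtype=float)      # (n, 2) deviations
    q = np.einsum('ni,ij,nj->n', d, Hinv, d)   # squared Mahalanobis distances
    k = np.exp(-0.5 * q) / (2 * np.pi * np.sqrt(det))
    return k.mean()

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 2))

# The three nested parametrizations of H discussed above:
H_scalar = 0.25 * np.eye(2)
H_diag = np.diag([0.2, 0.3])
H_full = np.array([[0.25, 0.10], [0.10, 0.30]])
vals = [kde_at_point([0.0, 0.0], data, H) for H in (H_scalar, H_diag, H_full)]
```

The full parametrization can orient the kernel along correlated directions, which neither the scalar nor the diagonal form allows.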
327.
In this article we consider data-sharpening methods for nonparametric regression. In particular, modifications are made to existing methods in two directions. First, we introduce a new tuning parameter to control the extent to which the data are to be sharpened, so that the amount of sharpening is adaptive and can be tuned to best suit the data at hand. We call this new parameter the sharpening parameter. Second, we develop automatic methods for jointly choosing the value of this sharpening parameter and the values of the other required smoothing parameters. These automatic parameter selection methods are shown to be asymptotically optimal in a well-defined sense. Numerical experiments were also conducted to evaluate their finite-sample performance. To the best of our knowledge, no bandwidth selection method for sharpened nonparametric regression has previously been developed in the literature.
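The tuning-parameter idea can be sketched as follows. The sharpening rule used here, moving each response by lam times its pilot residual before re-smoothing, is a simplified assumed form to illustrate how a sharpening parameter interpolates between no sharpening (lam = 0) and full sharpening; it is not the paper's construction, and all settings are illustrative.

```python
import numpy as np

def nw_fit(x_eval, x, y, h):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x[None, :] - x_eval[:, None]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def sharpened_fit(x_eval, x, y, h, lam):
    """Data-sharpened regression sketch: responses are perturbed by a
    sharpening parameter lam before re-smoothing. lam = 0 recovers the
    ordinary kernel fit."""
    pilot = nw_fit(x, x, y, h)
    y_sharp = y + lam * (y - pilot)
    return nw_fit(x_eval, x, y_sharp, h)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)
grid = np.linspace(0.1, 0.9, 50)
plain = nw_fit(grid, x, y, h=0.05)
sharp0 = sharpened_fit(grid, x, y, h=0.05, lam=0.0)   # identical to plain
sharp1 = sharpened_fit(grid, x, y, h=0.05, lam=1.0)   # sharpened fit
```

In the paper, lam and h would be chosen jointly by the automatic selection methods rather than fixed by hand.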
328.
This paper considers the problem of selecting a robust threshold for wavelet shrinkage. Previous approaches reported in the literature to handle the presence of outliers mainly focus on developing a robust procedure for a given threshold, which amounts to solving a nontrivial optimization problem. The drawback of this approach is that the selection of a robust threshold, which is crucial for the resulting fit, is ignored. This paper points out that the best fit can be achieved by robust wavelet shrinkage with a robust threshold. We propose data-driven selection methods for a robust threshold. These approaches are based on coupling classical wavelet thresholding rules with pseudo data. The concept of pseudo data has shaped the implementation of the proposed methods and yields a fast and efficient algorithm. Results from a simulation study and a real example demonstrate the promising empirical properties of the proposed approaches.
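The classical thresholding rules that the pseudo-data coupling builds on can be sketched in a few lines: soft thresholding combined with the standard MAD-based universal threshold. This is textbook wavelet shrinkage, not the robust pseudo-data procedure itself, and the example coefficients are assumptions.

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Soft-threshold detail coefficients: shrink each one toward
    zero by thr, setting small coefficients exactly to zero."""
    c = np.asarray(coeffs, dtype=float)
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def universal_threshold(detail):
    """Universal threshold sigma * sqrt(2 log n), with sigma estimated
    robustly from detail coefficients via the median absolute
    deviation (MAD)."""
    d = np.asarray(detail, dtype=float)
    sigma = np.median(np.abs(d - np.median(d))) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(d.size))

# Large coefficients are shrunk; small ones are zeroed out.
shrunk = soft_threshold([3.0, -3.0, 0.5], 1.0)   # -> [2.0, -2.0, 0.0]
```

The robust methods in the paper would replace the raw coefficients fed into such rules with pseudo data so that outliers do not inflate the threshold.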
329.
Summary. We propose an adaptive varying-coefficient spatiotemporal model for data that are observed irregularly over space and regularly in time. The model is capable of capturing possible non-linearity (both in space and in time) and non-stationarity (in space) by allowing the autoregressive coefficients to vary with both spatial location and an unknown index variable. We suggest a two-step procedure to estimate both the coefficient functions and the index variable; it is readily implemented and can be computed even for large spatiotemporal data sets. Our theoretical results indicate that, in the presence of the so-called nugget effect, the estimation errors may be reduced via spatial smoothing, the second step of the proposed estimation procedure. The simulation results reinforce this finding. As an illustration, we apply the methodology to a data set of sea level pressure in the North Sea.
330.
The bullwhip effect and the validity of the production smoothing model
This paper reviews research progress, both in China and abroad, on the bullwhip effect and the closely related question of the validity of the production smoothing model, and briefly describes part of the authors' own work in this area. The authors' work extends the international study of the bullwhip effect from two-stage models under special time series to multi-stage models under general ARIMA time series, and on this basis analyzes the general laws governing the propagation of demand information through a supply chain. These findings extend some classical conclusions of bullwhip-effect research, reveal the existence of several novel phenomena, and offer a possible resolution of the twenty-year international debate over the validity of the production smoothing model. Finally, several related research problems are pointed out, in the hope that Chinese scholars can achieve important breakthroughs in this field.
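The bullwhip effect itself can be reproduced in a few lines in the simplest textbook setting: an order-up-to policy with moving-average forecasting facing i.i.d. demand. The lead time, forecast window, and demand process below are assumptions, and the general multi-stage ARIMA analysis surveyed above goes far beyond this sketch.

```python
import numpy as np

def bullwhip_ratio(demand, lead_time=2, window=4):
    """Variance ratio Var(orders) / Var(demand) for a base-stock
    policy with moving-average demand forecasting. A ratio above 1
    is the bullwhip effect: orders are more variable than demand."""
    d = np.asarray(demand, dtype=float)
    # Moving-average forecast of demand.
    f = np.convolve(d, np.ones(window) / window, mode='valid')
    # Order-up-to: order = demand + (L + 1) * change in the forecast.
    orders = d[window:] + (lead_time + 1) * np.diff(f)
    return orders.var() / d[window:].var()

# i.i.d. demand around a mean of 100: even here, orders amplify
# demand variability because forecast revisions feed into orders.
rng = np.random.default_rng(2)
demand = 100 + rng.normal(0, 10, 5000)
ratio = bullwhip_ratio(demand)
```

Under correlated (e.g. ARIMA) demand, the amplification additionally depends on the demand parameters, which is the setting analyzed in the work reviewed above.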