1.
Abstract.  Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have not been rigorously developed or studied in the statistical literature. This paper considers moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for approximating confidence intervals in the estimation of the occurrence rate function. We show that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in implementing the bootstrap procedures, the moment method is computationally more efficient than the least squares method because it uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
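As a rough illustration of a moment-type kernel estimate of the occurrence rate, the sketch below smooths the pooled event times and divides by the number of subjects still under observation; the data layout, Gaussian kernel and function name are assumptions, not the authors' implementation, and the bootstrap bandwidth selection is omitted.

```python
import numpy as np

def rate_moment_estimate(event_times, censor_times, grid, h):
    """Moment-type kernel estimate of the occurrence rate function from
    recurrent event data with independent censoring (illustrative sketch).

    event_times  : list of 1-D arrays, recurrent event times of each subject
    censor_times : 1-D array, end-of-observation (censoring) time of each subject
    grid         : evaluation points
    h            : bandwidth
    """
    censor_times = np.asarray(censor_times, dtype=float)
    pooled = np.concatenate([np.asarray(t, dtype=float) for t in event_times])

    def kernel(u):                          # Gaussian kernel
        return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

    rate = np.empty(len(grid), dtype=float)
    for k, t in enumerate(grid):
        smoothed_events = kernel((t - pooled) / h).sum() / h   # smoothed event count at t
        at_risk = np.sum(censor_times >= t)                    # subjects still observed at t
        rate[k] = smoothed_events / max(at_risk, 1)
    return rate
```

A bootstrap bandwidth search would simply wrap a call like this, refitting on resampled subjects.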
2.
Nonparametric deconvolution problems require one to recover an unknown density when the data are contaminated with errors. Optimal global rates of convergence are found under the weighted Lp-loss (1 ≤ p ≤ ∞). It appears that the optimal rates of convergence are extremely slow for supersmooth error distributions. To resolve this difficulty, we examine how high the noise level can be for deconvolution to remain feasible, and for the deconvolution estimate to be as good as the ordinary density estimate. It is shown that if the noise level is not too high, nonparametric Gaussian deconvolution can still be practical. Several simulation studies are also presented.
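A generic Fourier-inversion deconvolution estimator of this kind can be sketched as follows, assuming Gaussian measurement error with known standard deviation sigma and a kernel whose characteristic function has compact support; the names and grid sizes are illustrative and this is not the paper's specific estimator.

```python
import numpy as np

def deconvolution_kde(y, sigma, grid, h):
    """Deconvoluting kernel density estimate for data y = x + e with
    e ~ N(0, sigma^2): invert the empirical characteristic function of y
    divided by the error characteristic function (illustrative sketch)."""
    y = np.asarray(y, dtype=float)
    grid = np.asarray(grid, dtype=float)

    # The kernel's characteristic function is supported on [-1, 1],
    # so only frequencies |t| <= 1/h contribute.
    t = np.linspace(-1.0 / h, 1.0 / h, 2001)
    dt = t[1] - t[0]

    phi_kernel = (1.0 - (t * h) ** 2) ** 3                   # kernel characteristic function
    phi_data = np.exp(1j * np.outer(t, y)).mean(axis=1)      # empirical c.f. of the noisy data
    phi_error = np.exp(-0.5 * (sigma * t) ** 2)              # Gaussian error c.f.

    integrand = phi_kernel * phi_data / phi_error
    # Inverse Fourier transform on the evaluation grid (simple Riemann sum).
    est = np.real(np.exp(-1j * np.outer(grid, t)) @ integrand) * dt / (2.0 * np.pi)
    return np.clip(est, 0.0, None)                           # trim small negative ripples
```

Dividing by the Gaussian error characteristic function amplifies the highest frequencies, which is exactly why the attainable accuracy depends on how high the noise level sigma is.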
3.
The L1- and L2-errors of the histogram estimate of a density f from a sample X1, X2, …, Xn using a cubic partition are shown to be asymptotically normal without any unnecessary conditions imposed on the density f. The asymptotic variances are shown to depend on f only through the corresponding norm of f. From this follows the asymptotic null distribution of a goodness-of-fit test based on the total variation distance, introduced by Györfi and van der Meulen (1991). This note uses the idea of partial inversion for obtaining characteristic functions of conditional distributions, which goes back at least to Bartlett (1938).
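For intuition, a one-dimensional total-variation-type statistic over an equal-width partition can be computed as below; the function name and bin width are hypothetical, and this is only an analogue of, not exactly, the statistic analysed in the paper.

```python
import numpy as np
from scipy.stats import norm

def histogram_tv_statistic(x, f0_cdf, h):
    """Sum of |empirical - hypothesized| cell probabilities over an
    equal-width partition of bin width h (a simple one-dimensional
    analogue of a total-variation goodness-of-fit statistic)."""
    x = np.asarray(x, dtype=float)
    lo = np.floor(x.min() / h) * h
    hi = np.ceil(x.max() / h) * h
    edges = np.arange(lo, hi + h / 2, h)            # cell boundaries covering the data
    counts, _ = np.histogram(x, bins=edges)
    emp_mass = counts / x.size                      # empirical probability of each cell
    f0_mass = np.diff(f0_cdf(edges))                # hypothesized probability of each cell
    tail = f0_cdf(edges[0]) + 1.0 - f0_cdf(edges[-1])   # f0 mass outside the data range
    return np.abs(emp_mass - f0_mass).sum() + tail

# Example: test a standard-normal null on a simulated sample.
rng = np.random.default_rng(1)
stat = histogram_tv_statistic(rng.normal(size=500), norm.cdf, h=0.5)
```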
4.
The author proposes some simple diagnostics for assessing the necessity of selected terms in smoothing spline ANOVA models. The elimination of practically insignificant terms generally enhances the interpretability of the estimates and sometimes may also have inferential implications. The diagnostics are derived from Kullback‐Leibler geometry and are illustrated in the settings of regression, probability density estimation, and hazard rate estimation.
5.
It is often of interest to find the maximum or near-maxima among a set of vector-valued parameters in a statistical model; in disease mapping, for example, these correspond to relative-risk "hotspots" where public-health intervention may be needed. The general problem is one of estimating nonlinear functions of the ensemble of relative risks, but biased estimates result if posterior means are simply substituted into these nonlinear functions. The authors obtain better estimates of extrema from a new weighted-ranks squared-error loss function. The derivation of these Bayes estimators assumes a hidden Markov random-field model for the relative risks, and their behaviour is illustrated with real and simulated data.
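The plug-in bias referred to above can be seen in a toy simulation: the maximum of the posterior means is systematically smaller than the posterior mean of the maximum. The posterior draws below are fabricated purely for illustration, and the weighted-ranks loss itself is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for an ensemble of 20 area-level relative risks
# (rows = posterior draws, columns = areas); purely simulated for illustration.
draws = rng.lognormal(mean=0.0, sigma=0.3, size=(5000, 20))

post_means = draws.mean(axis=0)

# Plug-in estimate: maximum of the posterior means ...
plug_in_max = post_means.max()

# ... versus the posterior mean of the maximum, which targets the extremum directly.
posterior_mean_of_max = draws.max(axis=1).mean()

print(f"max of posterior means:    {plug_in_max:.3f}")
print(f"posterior mean of the max: {posterior_mean_of_max:.3f}")
# The plug-in value is systematically smaller: averaging first shrinks the extremes,
# which is the bias the loss function described above is designed to address.
```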
6.
Abstract.  The likelihood ratio statistic for testing pointwise hypotheses about the survival time distribution in the current status model can be inverted to yield confidence intervals (CIs). One advantage of this procedure is that CIs can be formed without estimating the unknown parameters that appear in the asymptotic distribution of the maximum likelihood estimator (MLE) of the distribution function. We discuss the likelihood-ratio-based CIs for the distribution function and the quantile function, and compare these intervals with several different intervals based on the MLE. The quantiles of the limiting distribution of the MLE are estimated using various methods, including parametric fitting, kernel smoothing and subsampling techniques. Comparisons are carried out both for simulated data and for a data set on time to immunization against rubella. The comparisons indicate that the likelihood-ratio-based intervals are preferable from several perspectives.
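The MLE around which these comparisons revolve is the isotonic regression of the current-status indicators ordered by monitoring time, computable by a single pool-adjacent-violators pass; the sketch below shows only that step (function name and data layout assumed), not the likelihood-ratio inversion.

```python
import numpy as np

def current_status_mle(t, delta):
    """NPMLE of the distribution function F from current status data
    (t = monitoring times, delta = 1 if the event had already occurred):
    pool-adjacent-violators applied to delta ordered by t.
    Returns the ordered monitoring times and the fitted F at those times."""
    order = np.argsort(t)
    t = np.asarray(t, dtype=float)[order]
    delta = np.asarray(delta, dtype=float)[order]

    vals, wts = [], []                      # blocks of (mean, weight)
    for d in delta:
        vals.append(d)
        wts.append(1.0)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            vals[-2:] = [v]
            wts[-2:] = [w]

    fitted = np.repeat(vals, np.asarray(wts, dtype=int))
    return t, fitted
```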
7.
We consider the competing risks set-up. In many practical situations, the conditional probability of the cause of failure given the failure time is of direct interest. We propose to model the competing risks by the overall hazard rate and the conditional probabilities rather than the cause-specific hazards. We adopt a Bayesian smoothing approach for both quantities of interest. Illustrations are given at the end.
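As a simple frequentist analogue of the quantities involved (not the paper's Bayesian smoothing approach), the conditional probability of a given cause at time t can be estimated by a kernel-weighted proportion among the observed failures; a cause-specific hazard is then the overall hazard times this probability. The names below are illustrative.

```python
import numpy as np

def cause_probability(times, causes, grid, h, cause=1):
    """Kernel-smoothed estimate of P(cause = k | failure at time t):
    among observed failures, the weighted proportion of cause-k events near t.
    Combined with an overall hazard estimate lambda(t), this gives
    lambda_k(t) = lambda(t) * P(cause = k | T = t)."""
    times = np.asarray(times, dtype=float)
    causes = np.asarray(causes)
    grid = np.asarray(grid, dtype=float)

    out = np.empty_like(grid)
    for i, t in enumerate(grid):
        w = np.exp(-0.5 * ((t - times) / h) ** 2)   # Gaussian kernel weights
        out[i] = np.sum(w * (causes == cause)) / np.sum(w)
    return out
```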
8.
9.
Nowadays, airborne laser scanning is used in many territorial studies, providing point data that may contain strong discontinuities. Motivated by the need to interpolate such data while preserving their edges, this paper considers robust nonparametric smoothers. These estimators, when implemented with bounded loss functions, have suitable jump-preserving properties. Iterative algorithms are developed here; they are equivalent to nonlinear M-smoothers but have the advantage of resembling linear kernel regression. The selection of their coefficients is carried out by combining cross-validation and robust tuning techniques. Two real case studies and a simulation experiment confirm the validity of the method; in particular, the performance in building recognition is excellent.
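An iterative M-smoother of this kind can be sketched as iteratively reweighted Nadaraya-Watson regression with a bounded (Welsch) loss; the kernel, loss and tuning constants below are assumptions for illustration rather than the estimator actually proposed.

```python
import numpy as np

def robust_m_smoother(x, y, grid, h, g=1.0, n_iter=10):
    """Iterative robust kernel M-smoother with a bounded (Welsch) loss.
    Each pass is a weighted Nadaraya-Watson fit; points with large residuals
    are down-weighted on the next pass, so sharp jumps are not smeared out.
    Illustrative sketch only; the paper's estimator and tuning differ."""
    x, y, grid = (np.asarray(a, dtype=float) for a in (x, y, grid))

    def kern(u):                                   # Gaussian smoothing kernel
        return np.exp(-0.5 * u ** 2)

    Kx = kern((x[:, None] - x[None, :]) / h)       # kernel weights among data points
    Kg = kern((grid[:, None] - x[None, :]) / h)    # kernel weights, grid vs data points

    w = np.ones_like(y)                            # robustness weights
    for _ in range(n_iter):
        fit = (Kx * w) @ y / (Kx * w).sum(axis=1)  # fitted values at the data points
        resid = y - fit
        w = np.exp(-(resid / g) ** 2)              # bounded-loss (Welsch) reweighting

    return (Kg * w) @ y / (Kg * w).sum(axis=1)     # final fit on the evaluation grid
```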
10.
In this paper we present an approach to using historical control data to augment information from a randomized controlled clinical trial when it is not possible to continue the control regimen to obtain the most reliable and valid assessment of long-term treatment effects. Using an adjustment procedure applied to the historical control data, we investigate a method for estimating the long-term survival function of the clinical trial control group and for evaluating the long-term treatment effect. The suggested method is simple to interpret and is particularly well motivated in clinical trial settings where ethical considerations preclude long-term follow-up of placebo controls. A simulation study reveals that the bias in parameter estimates that arises under group sequential monitoring is attenuated when long-term historical control information is used in the proposed manner. Data from the first and second National Wilms' Tumor studies are used to illustrate the method.