101.
In this paper a new method, the EMS algorithm, is used to solve Wicksell's corpuscle problem: determining the distribution of sphere radii in a medium given the radii of their profiles in a random slice. The EMS algorithm combines the EM algorithm, a procedure for obtaining maximum likelihood estimates of parameters from incomplete data, with simple smoothing. The method is tested on simulated data from three different sphere-radius densities, namely a bimodal mixture of Normals, a Weibull and a Normal. The effects of varying the level of smoothing, the number of classes in which the data are binned, and the number of classes at which the estimated density is evaluated are investigated. Comparisons are made between these results and those obtained by others in this field.
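As a rough illustration of the EMS iteration described above, here is a minimal NumPy sketch. It assumes profile and sphere radii are binned on the same equally spaced grid and uses the classical size-biased Wicksell slice geometry; the function names, the smoothing weights and the fixed iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def wicksell_kernel(edges):
    """Matrix A[i, j]: expected profile-radius mass in bin i per unit
    intensity of spheres in radius bin j.  A sphere of radius R is hit by
    a random plane with probability proportional to R, and the profile
    radius r then satisfies P(r <= x | R) = 1 - sqrt(1 - (x/R)^2)."""
    mids = 0.5 * (edges[:-1] + edges[1:])
    A = np.zeros((len(mids), len(mids)))
    for j, R in enumerate(mids):
        x = np.minimum(edges, R)
        cdf = 1.0 - np.sqrt(np.maximum(1.0 - (x / R) ** 2, 0.0))
        A[:, j] = R * np.diff(cdf)        # factor R = size-biased sampling
    return A

def ems(counts, A, n_iter=200, w=(0.25, 0.5, 0.25)):
    """EMS: the EM step for the indirect Poisson model followed by a
    simple weighted moving-average smoothing (the S step)."""
    counts = np.asarray(counts, dtype=float)
    lam = np.full(A.shape[1], counts.sum() / A.shape[1])
    col = A.sum(axis=0)                   # normalising constants c_j
    for _ in range(n_iter):
        fit = A @ lam                     # expected profile counts
        lam = (lam / col) * (A.T @ (counts / np.maximum(fit, 1e-12)))
        lam = np.convolve(lam, w, mode="same")   # S step: local smoothing
    return lam
```

Given a histogram `counts` of observed profile radii over `edges`, `ems(counts, wicksell_kernel(edges))` returns the estimated sphere-radius intensities on the same bins.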
102.
Quite often we are faced with sparse observations over a finite number of cells and are interested in estimating the cell probabilities. Local polynomial smoothers and local likelihood estimators have been proposed to improve on the histogram, which would produce too many zero values. We propose a relativized local polynomial smoothing for this problem, weighting the estimation errors in small-probability cells more heavily. A simulation study of the proposed estimators shows good behaviour with respect to natural error criteria, especially when dealing with sparse observations.
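The abstract gives no formulas, but the weighting idea could look like the sketch below: a local-linear fit across cell indices in which each cell's squared error is divided by a pilot estimate of its probability, so that errors in small-probability cells weigh relatively more. The Gaussian kernel, the pilot estimate and all names are my assumptions.

```python
import numpy as np

def relativized_local_linear(counts, h=2.0):
    """Local-linear smoothing of cell relative frequencies in which the
    squared error of each cell is down-weighted by a pilot probability
    estimate, i.e. a 'relativized' weighted least-squares fit per cell."""
    counts = np.asarray(counts, dtype=float)
    k, n = len(counts), counts.sum()
    y = counts / n                         # raw relative frequencies
    pilot = np.maximum(y, 1.0 / (n * k))   # pilot probabilities, kept positive
    idx = np.arange(k)
    p_hat = np.empty(k)
    for c in idx:
        u = (idx - c) / h
        w = np.exp(-0.5 * u ** 2) / pilot  # kernel weight / pilot probability
        X = np.column_stack([np.ones(k), idx - c])
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        p_hat[c] = beta[0]                 # local-linear fit at cell c
    p_hat = np.maximum(p_hat, 0.0)
    return p_hat / p_hat.sum()             # renormalise to probabilities
```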
103.
This paper introduces two estimators for the cell probabilities of ordinal contingency tables: a boundary-corrected minimum variance kernel estimator based on a uniform kernel, and a discrete frequency polygon estimator. Simulation results show that the minimum variance boundary kernel estimator has a smaller average sum of squared error than the existing boundary kernel estimators. The discrete frequency polygon estimator is simple and easy to interpret, and it is competitive with the minimum variance boundary kernel estimator. It is proved that both estimators have an optimal rate of convergence in terms of mean sum of squared error. The estimators are also defined for high-dimensional tables.
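The abstract does not spell out the discrete frequency polygon weights, so the sketch below uses one plausible choice: each interior cell borrows a quarter of its weight from each neighbour, with the spilled weight reflected back at the two boundary cells. The paper's actual weights may differ.

```python
import numpy as np

def discrete_frequency_polygon(counts):
    """Discrete frequency-polygon smoother for ordinal cell probabilities
    with reflection at the two boundary cells; the returned vector still
    sums to one.  (Illustrative weights, not necessarily the paper's.)"""
    f = np.asarray(counts, dtype=float)
    f /= f.sum()
    p = np.empty_like(f)
    p[1:-1] = 0.25 * f[:-2] + 0.5 * f[1:-1] + 0.25 * f[2:]
    p[0] = 0.75 * f[0] + 0.25 * f[1]     # reflect the missing left neighbour
    p[-1] = 0.25 * f[-2] + 0.75 * f[-1]  # reflect the missing right neighbour
    return p
```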
104.
Given a collection of n curves that are independent realizations of a functional variable, we are interested in finding patterns in the curve data by exploring low-dimensional approximations to the curves. It is assumed that the data curves are noisy samples from the vector space span{f_1, …, f_m}, where f_1, …, f_m are unknown functions on the real interval (0, T) with square-integrable derivatives of all orders m or less, and m < n. Ramsay [Principal differential analysis: Data reduction by differential operators, J. R. Statist. Soc. Ser. B 58 (1996), pp. 495–508] first proposed the method of regularized principal differential analysis (PDA) as an alternative to principal component analysis for finding low-dimensional approximations to curves. PDA is based on the following theorem: there exists an annihilating linear differential operator (LDO) L of order m such that Lf_i = 0, i = 1, …, m [E.A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955, Theorem 6.2]. PDA specifies m, then uses the data to estimate an annihilating LDO. Smooth estimates of the coefficients of the LDO are obtained by minimizing a penalized sum of the squared norms of the residuals; in this context, the residual is the part of a data curve that is not annihilated by the LDO. PDA obtains the smooth low-dimensional approximation to the data curves by projecting onto the null space of the estimated annihilating LDO, and is thus useful whether or not the interpretation of the annihilating LDO is intuitive or obvious from the context of the data. This paper extends PDA to allow the coefficients of the LDO to depend smoothly on a single continuous covariate. The estimating equations for the coefficients allowing for a continuous covariate are derived; the penalty of Eilers and Marx [Flexible smoothing with B-splines and penalties, Statist. Sci. 11(2) (1996), pp. 89–121] is used to impose smoothness. The results of a small computer simulation study investigating the bias and variance properties of the estimator are reported.
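Setting aside the roughness penalty and the covariate extension, the pointwise core of PDA reduces to an ordinary least-squares problem at each grid point, as in the sketch below (finite-difference derivatives; all names are illustrative).

```python
import numpy as np

def pointwise_pda(curves, t, m=2):
    """Unpenalized, pointwise principal differential analysis for an
    order-m LDO  L = D^m + b_{m-1}(t) D^{m-1} + ... + b_0(t) I :
    at each t, choose the coefficients b_j(t) minimising the summed
    squared residuals sum_i (L x_i)(t)^2 over the n observed curves.
    `curves` is an (n, len(t)) array sampled on the grid `t`."""
    derivs = [np.asarray(curves, dtype=float)]
    for _ in range(m):                    # finite-difference derivatives
        derivs.append(np.gradient(derivs[-1], t, axis=1))
    betas = np.zeros((m, len(t)))
    for k in range(len(t)):
        # regress -D^m x_i(t_k) on (x_i(t_k), ..., D^{m-1} x_i(t_k))
        X = np.column_stack([derivs[j][:, k] for j in range(m)])
        y = -derivs[m][:, k]
        betas[:, k], *_ = np.linalg.lstsq(X, y, rcond=None)
    return betas                          # betas[j] multiplies D^j in L
```

The paper's estimator additionally penalizes roughness of the coefficient functions (with the Eilers and Marx B-spline penalty) and lets them depend on a covariate; this sketch only shows the unpenalized backbone.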
105.
We consider the problem of statistical inference for functional and dynamic magnetic resonance imaging (MRI). A new approach is proposed that extends the adaptive weights smoothing procedure of Polzehl and Spokoiny, originally designed for image denoising. We demonstrate how the adaptive weights smoothing method can be applied to time series of images, which typically occur in functional and dynamic MRI. It is shown how signal detection in functional MRI and the analysis of dynamic MRI can benefit from spatially adaptive smoothing. The performance of the procedure is illustrated using real and simulated data.
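A bare-bones 2-D version of the adaptive weights idea is sketched below: the bandwidth grows over iterations, but a pixel only contributes to the estimate at another pixel while their current estimates remain statistically compatible, which is what preserves activation boundaries. The kernel choices, the penalty form and all names are assumptions; for fMRI time series one would apply this to a voxelwise statistic map rather than a single image.

```python
import numpy as np

def aws_smooth(img, sigma2, h_seq=(1.0, 2.0, 4.0), lam=3.0):
    """Simplified adaptive weights smoothing: at each scale h, the weight
    of pixel j in the estimate at pixel i is a location kernel times a
    statistical penalty that vanishes when the current estimates at i and
    j differ by much more than the noise level sigma2."""
    obs = np.asarray(img, dtype=float)
    theta = obs.copy()
    ni, nj = theta.shape
    yy, xx = np.mgrid[0:ni, 0:nj]
    for h in h_seq:
        new = np.empty_like(theta)
        r = int(3 * h)                    # truncate the Gaussian window
        for i in range(ni):
            for j in range(nj):
                i0, i1 = max(i - r, 0), min(i + r + 1, ni)
                j0, j1 = max(j - r, 0), min(j + r + 1, nj)
                d2 = (yy[i0:i1, j0:j1] - i) ** 2 + (xx[i0:i1, j0:j1] - j) ** 2
                w_loc = np.exp(-0.5 * d2 / h ** 2)
                pen = (theta[i, j] - theta[i0:i1, j0:j1]) ** 2 / (lam * sigma2)
                w = w_loc * np.exp(-pen)  # adaptive weights
                new[i, j] = np.sum(w * obs[i0:i1, j0:j1]) / np.sum(w)
        theta = new
    return theta
```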
106.
Hazard rate estimation is an alternative to density estimation for positive variables that is of interest when the variables are times to event. In particular, it is shown here that hazard rate estimation is useful for seismic hazard assessment. This paper suggests a simple but flexible Bayesian method for non-parametric hazard rate estimation, based on building the prior hazard rate as the convolution mixture of a Gaussian kernel with an exponential jump-size compound Poisson process. Conditions are given for a compound Poisson process prior to be well defined and to select smooth hazard rates, an elicitation procedure is devised to assign a constant prior expected hazard rate while controlling prior variability, and a Markov chain Monte Carlo approximation of the posterior distribution is obtained. Finally, the suggested method is validated in a simulation study, and some Italian seismic event data are analysed.
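The prior itself is easy to simulate, which makes the construction concrete. The sketch below draws one hazard-rate path as described: a compound Poisson process with exponential jump sizes, convolved with a Gaussian kernel. The hyperparameter values are illustrative; the paper's elicitation procedure would choose them so that the prior expected hazard is a specified constant (here roughly rate × jump_mean away from the boundaries) with controlled prior variability.

```python
import numpy as np

def sample_prior_hazard(t, rate=5.0, jump_mean=0.2, kappa=0.5, rng=None):
    """One draw of h(t) = sum_k J_k * phi_kappa(t - tau_k), where the
    tau_k are points of a Poisson process on (0, T) with intensity `rate`,
    the jumps J_k are Exponential(mean=jump_mean), and phi_kappa is a
    Gaussian kernel with bandwidth kappa."""
    rng = np.random.default_rng(rng)
    t = np.asarray(t, dtype=float)
    T = t.max()
    n_jumps = rng.poisson(rate * T)
    taus = rng.uniform(0.0, T, size=n_jumps)       # jump locations
    jumps = rng.exponential(jump_mean, size=n_jumps)
    h = np.zeros_like(t)
    for tau, J in zip(taus, jumps):                # kernel-smoothed jumps
        h += J * np.exp(-0.5 * ((t - tau) / kappa) ** 2) \
             / (kappa * np.sqrt(2.0 * np.pi))
    return h
```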
107.
Statistical learning is emerging as a promising field in which a number of algorithms from machine learning are interpreted as statistical methods and vice versa. Owing to its good practical performance, boosting is one of the most studied machine learning techniques. We propose algorithms for multivariate density estimation and classification, generated by using traditional kernel techniques as weak learners in boosting algorithms. Our algorithms take the form of multistep estimators whose first step is a standard kernel method. Some strategies for bandwidth selection are also discussed, both for the standard kernel density classification problem and for our 'boosted' kernel methods. Extensive experiments, using real and simulated data, show an encouraging practical relevance of the findings: standard kernel methods are often outperformed by the first few boosting iterations and across a range of bandwidth values. In addition, the practical effectiveness of our classification algorithm is confirmed by a comparative study on two real datasets, the competitors being tree-based classifiers, including AdaBoost with trees.
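The abstract does not give the boosting recipe in detail, so the sketch below only shows the general shape such a 'boosted' kernel density estimator could take: a weighted kernel density estimate as the weak learner, re-weighting driven by the leave-one-out fit, and a multiplicative combination of the stages. Every detail here, including the re-weighting rule, is an assumption rather than the paper's algorithm.

```python
import numpy as np

def boosted_kde(x, grid, h=0.3, n_boost=3):
    """Boosting with a weighted KDE weak learner: at each stage, points
    where the leave-one-out fit is poor get more weight, and the stage
    densities are combined multiplicatively, then renormalised.  The grid
    is assumed uniform."""
    x, grid = np.asarray(x, float), np.asarray(grid, float)
    def K(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    D = K((x[:, None] - x[None, :]) / h) / h   # kernel matrix at the data
    np.fill_diagonal(D, 0.0)                   # -> leave-one-out fits
    G = K((grid[:, None] - x[None, :]) / h) / h
    w = np.full(len(x), 1.0 / len(x))
    log_f = np.zeros_like(grid)
    for _ in range(n_boost):
        log_f += np.log(G @ w + 1e-300)        # multiplicative combination
        f_loo = D @ w                          # leave-one-out density at x_i
        w /= np.maximum(f_loo, 1e-300)         # up-weight poorly fitted points
        w /= w.sum()
    f = np.exp(log_f - log_f.max())
    return f / (f.sum() * (grid[1] - grid[0])) # integrate to one
```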
108.
This paper introduces the alternating conditional expectation (ACE) algorithm: a non-parametric approach for estimating the transformations that lead to the maximal multiple correlation between a response and a set of independent variables in regression and correlation analysis. These transformations can give the data analyst insight into the relationships between the variables, so that these relationships can be better described and non-linear structure uncovered. Using the Bayesian information criterion (BIC), we show how to find the best closed-form approximations to the optimal ACE transformations. By means of ACE and BIC, the model fit can be considerably improved over the conventional linear model, as demonstrated on the two simulated and two real datasets in this paper.
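A compact sketch of the alternating structure is given below, with a fixed-bandwidth Nadaraya-Watson smoother standing in for the data-driven smoother used in practice (the bandwidth and names are my choices; the variables are assumed roughly standardized).

```python
import numpy as np

def smooth(z, by, h=0.3):
    """Nadaraya-Watson estimate of E[z | by] at the observed points."""
    W = np.exp(-0.5 * ((by[:, None] - by[None, :]) / h) ** 2)
    return (W @ z) / W.sum(axis=1)

def ace(y, X, n_iter=20, h=0.3):
    """Alternating conditional expectations: update each predictor
    transform phi_j as the smoothed partial residual, then the response
    transform theta as the smoothed sum of the phi_j, standardised to
    unit variance; iterate until the transforms stabilise."""
    n, p = X.shape
    theta = (y - y.mean()) / y.std()
    phis = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            partial = theta - phis.sum(axis=1) + phis[:, j]
            phis[:, j] = smooth(partial, X[:, j], h)
        theta = smooth(phis.sum(axis=1), y, h)
        theta = (theta - theta.mean()) / theta.std()
    return theta, phis
```

Plotting `theta` against `y` and each `phis[:, j]` against `X[:, j]` reveals the transformation shapes to which closed-form approximations can then be fitted and compared via BIC.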
109.
A simple algorithm for estimating the regression function over the United States is introduced. The approach allows for data obtained from a complicated sampling design, as well as for the inclusion of a few additional covariates. The regression estimates are obtained from an associated probability density estimate, namely the averaged shifted histogram. The algorithm has proven especially successful over a large mesh, say 300 by 200 nodes, in a data-rich setting, even on a 486 computer running Splus; we currently run much higher-resolution meshes on a Pentium. Commonly available alternative codes, including kriging, failed to produce useful estimates in this setting.
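In one dimension, the averaged-shifted-histogram regression idea can be sketched in a few lines: bin the responses and the counts on a fine mesh, average m shifted histograms of each (equivalently, convolve the bin totals with triangular weights), and take the ratio. The paper works over a 2-D geographic mesh with a complicated sampling design; this 1-D version, and all names in it, are illustrative only.

```python
import numpy as np

def ash_regression(x, y, lo, hi, n_bins=100, m=8):
    """Averaged-shifted-histogram regression: the ratio of the ASH of the
    per-bin sums of y to the ASH of the per-bin counts.  Averaging m
    shifted histograms equals convolving the bin totals with triangular
    weights of half-width m, which keeps the cost linear in the mesh size."""
    edges = np.linspace(lo, hi, n_bins + 1)
    num, _ = np.histogram(x, bins=edges, weights=y)  # per-bin sums of y
    den, _ = np.histogram(x, bins=edges)             # per-bin counts
    w = (m - np.abs(np.arange(-m + 1, m))) / m       # triangular ASH weights
    num_s = np.convolve(num, w, mode="same")
    den_s = np.convolve(den, w, mode="same")
    centers = 0.5 * (edges[:-1] + edges[1:])
    with np.errstate(invalid="ignore", divide="ignore"):
        fit = np.where(den_s > 0, num_s / den_s, np.nan)
    return centers, fit
```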
110.
This is an expository article. We show how the widely and successfully used Kalman filter, popular with control engineers and other scientists, can be easily understood by statisticians using a Bayesian formulation and some well-known results in multivariate statistics. We also give a simple example illustrating the use of the Kalman filter for quality control work.
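For the simplest state-space model, the Bayesian reading is just conjugate normal updating, as the sketch below shows for a local-level model one might use to track a process mean in quality control (names and priors are illustrative).

```python
import numpy as np

def kalman_local_level(y, q, r, m0=0.0, c0=1e6):
    """Kalman filter for  mu_t = mu_{t-1} + w_t,  y_t = mu_t + v_t,
    with w_t ~ N(0, q) and v_t ~ N(0, r): each step combines the normal
    prior for mu_t with the normal likelihood of y_t, so the Kalman gain
    is simply the posterior weight given to the new observation."""
    m, C = m0, c0                        # prior mean and variance for mu_0
    means, variances = [], []
    for obs in np.asarray(y, dtype=float):
        a, R = m, C + q                  # predict: prior for mu_t
        K = R / (R + r)                  # gain = prior-to-posterior weight
        m = a + K * (obs - a)            # posterior mean (update step)
        C = (1.0 - K) * R                # posterior variance
        means.append(m)
        variances.append(C)
    return np.array(means), np.array(variances)
```

For quality control, one might flag observation t whenever it falls outside the one-step predictive interval a ± 3·sqrt(R + r).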