Full text (paid): 245
Free: 6

By subject:
Management: 5
Demography: 2
Collected works: 1
General: 28
Statistics: 215

By year:
2021: 1
2020: 3
2019: 5
2018: 10
2017: 7
2016: 5
2015: 7
2014: 11
2013: 76
2012: 20
2011: 6
2010: 9
2009: 6
2008: 5
2007: 5
2006: 6
2005: 2
2004: 8
2003: 5
2002: 5
2001: 7
2000: 1
1999: 4
1998: 6
1997: 1
1996: 5
1995: 3
1994: 4
1993: 1
1992: 3
1991: 2
1990: 1
1989: 1
1988: 3
1987: 2
1986: 1
1985: 2
1978: 1
1976: 1

251 results in total; search took 62 ms.
1.
Polynomial spline regression models of low degree have proved useful in modeling responses from designed experiments in science and engineering when simple polynomial models are inadequate. When there is uncertainty about the number and location of the knots, or breakpoints, of the spline, designs that minimize the systematic errors resulting from model misspecification may be appropriate. This paper gives a method for constructing such all-bias designs for a single-variable spline when the distinct knots in the assumed and true models come from some specified set. A class of designs is defined in terms of the inter-knot intervals, and sufficient conditions are obtained for a design within this class to be all-bias under linear, quadratic and cubic spline models. An example of the construction of all-bias designs is given.
2.
The paper evaluates the accuracy of Burr approximations of critical values and p-values for tests of autocorrelation and heteroscedasticity in the linear regression model.
3.
Summary. Semiparametric mixed models are useful in biometric and econometric applications, especially for longitudinal data. Maximum penalized likelihood estimators (MPLEs) have been shown to work well by Zhang and co-workers for both linear coefficients and nonparametric functions. This paper considers the role of influence diagnostics in the MPLE by extending the case deletion and subject deletion analysis of linear models to accommodate the inclusion of a nonparametric component. We focus on influence measures for the fixed effects and provide formulae that are analogous to those for simpler models and readily computable with the MPLE algorithm. We also establish an equivalence between the case or subject deletion model and a mean shift outlier model from which we derive tests for outliers. The influence diagnostics proposed are illustrated through a longitudinal hormone study on progesterone and a simulated example.
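As background for the case-deletion diagnostics this paper generalizes, a minimal sketch of the classical linear-model version, Cook's distance, which measures how much each observation's deletion would shift the fitted coefficients. The function name and simulated data are illustrative, not from the paper.

```python
import numpy as np

def cooks_distance(X, y):
    """Case-deletion influence for ordinary least squares:
    D_i = r_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2),
    with residuals r_i, leverages h_ii, p predictors, s^2 = RSS/(n-p)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    h = np.diag(H)
    resid = y - H @ y
    s2 = resid @ resid / (n - p)
    return resid**2 * h / (p * s2 * (1 - h) ** 2)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=50)
y[0] += 10.0                                # planted outlier
D = cooks_distance(X, y)                    # case 0 should dominate
```

The mean-shift outlier test mentioned in the abstract is the formal counterpart of flagging the case with the largest such influence measure.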
4.
American options in discrete time can be priced by solving optimal stopping problems. This can be done by computing so-called continuation values, which we represent as regression functions defined recursively using the continuation values of the next time step. We use Monte Carlo simulation to generate data, and then apply smoothing spline regression estimates to estimate the continuation values from these data. All parameters of the estimate are chosen data-dependently. We present results concerning consistency and the estimates' rate of convergence.
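The backward recursion described here can be sketched with regression Monte Carlo in the style of Longstaff–Schwartz, substituting ordinary polynomial least squares for the paper's smoothing spline estimator; all parameter values below are illustrative.

```python
import numpy as np

def bermudan_put_lsm(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=20000, degree=3, seed=1):
    """Price a Bermudan put by backward induction: at each exercise date,
    the continuation value is estimated by regressing discounted future
    payoffs on the current asset price (polynomial least squares stands
    in for the smoothing-spline regression of the abstract)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # simulate geometric Brownian motion paths
    z = rng.standard_normal((n_paths, n_steps))
    steps = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)),
                               np.cumsum(steps, axis=1)]))
    payoff = np.maximum(K - S[:, -1], 0.0)   # value at maturity
    disc = np.exp(-r * dt)
    for t in range(n_steps - 1, 0, -1):
        payoff *= disc
        itm = K - S[:, t] > 0                # regress only in-the-money paths
        if itm.sum() > degree + 1:
            coef = np.polyfit(S[itm, t], payoff[itm], degree)
            cont = np.polyval(coef, S[itm, t])
            exercise = K - S[itm, t]
            payoff[itm] = np.where(exercise > cont, exercise, payoff[itm])
    return disc * payoff.mean()

price = bermudan_put_lsm()                   # roughly 4.4-4.5 for these inputs
```

The recursive structure (each step's regression target is the next step's value) is exactly the continuation-value recursion the abstract refers to.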
5.
We consider the problem of density estimation when the data arrive as a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive datasets that is based upon taking the derivative of a smooth curve fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for simultaneous estimation of multiple quantiles for massive datasets, which forms the basis of this density estimation method. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through simulation study to perform well and to have several distinct advantages over existing methods.
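The core identity behind this approach is that the density is the derivative of probability with respect to the quantile function, f(Q(p)) = dp/dq. A minimal sketch, using exact sample quantiles as a stand-in for the paper's low-storage sequential quantile estimates:

```python
import numpy as np

def density_from_quantiles(qs, probs):
    """Differentiate p with respect to the quantile curve Q(p):
    f(Q(p)) ~= dp/dq, via central differences on (prob, quantile) pairs."""
    return qs, np.gradient(probs, qs)

rng = np.random.default_rng(2)
stream = rng.normal(size=100_000)        # stands in for the data stream
probs = np.linspace(0.01, 0.99, 99)
qs = np.quantile(stream, probs)          # stands in for sequential estimates
x_grid, f_hat = density_from_quantiles(qs, probs)
# for standard normal data the peak should sit near 0 with height ~0.40
```

In the streaming setting, `np.quantile` would be replaced by the single-pass quantile tracker, so only the (prob, quantile) pairs need to be stored, not the data.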
6.
Standard algorithms for the construction of iterated bootstrap confidence intervals are computationally very demanding, requiring nested levels of bootstrap resampling. We propose an alternative approach to constructing double bootstrap confidence intervals that involves replacing the inner level of resampling by an analytical approximation. This approximation is based on saddlepoint methods and a tail probability approximation of DiCiccio and Martin (1991). Our technique significantly reduces the computational expense of iterated bootstrap calculations. A formal algorithm for the construction of our approximate iterated bootstrap confidence intervals is presented, and some crucial practical issues arising in its implementation are discussed. Our procedure is illustrated in the case of constructing confidence intervals for ratios of means using both real and simulated data. We repeat an experiment of Schenker (1985) involving the construction of bootstrap confidence intervals for a variance and demonstrate that our technique makes feasible the construction of accurate bootstrap confidence intervals in that context. Finally, we investigate the use of our technique in a more complex setting, that of constructing confidence intervals for a correlation coefficient.
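To see why the inner resampling level is so expensive, here is a brute-force double bootstrap sketch in which the inner loop calibrates the percentile interval's nominal level; the paper's contribution is replacing exactly this inner loop with a saddlepoint approximation. The calibration scheme and parameter values below are illustrative, not the paper's algorithm.

```python
import numpy as np

def double_bootstrap_ci(x, stat=np.mean, level=0.95, B=200, M=100, seed=3):
    """Iterated (double) bootstrap percentile interval: the inner level
    estimates where the observed statistic falls within each resample's
    bootstrap distribution, and those positions calibrate the quantiles
    used for the outer-level interval. Cost: B * M inner resamples."""
    rng = np.random.default_rng(seed)
    n = len(x)
    theta_hat = stat(x)
    outer = np.empty(B)
    u = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, n, replace=True)
        outer[b] = stat(xb)
        inner = np.array([stat(rng.choice(xb, n, replace=True))
                          for _ in range(M)])
        u[b] = np.mean(inner <= theta_hat)   # inner-level "p-value"
    alpha = 1.0 - level
    lo_p = np.quantile(u, alpha / 2)         # calibrated quantile levels
    hi_p = np.quantile(u, 1 - alpha / 2)
    return np.quantile(outer, lo_p), np.quantile(outer, hi_p)

rng = np.random.default_rng(0)
x = rng.exponential(size=60)
lo, hi = double_bootstrap_ci(x)
```

Even this toy run performs B × M = 20,000 inner resamples; an analytical inner-level approximation removes that factor of M entirely.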
7.
In a smoothing spline model with unknown change-points, the choice of the smoothing parameter strongly influences the estimation of the change-point locations and the function at the change-points. In a tumor biology example, where change-points in blood flow in response to treatment were of interest, choosing the smoothing parameter based on minimizing generalized cross-validation (GCV) gave unsatisfactory estimates of the change-points. We propose a new method, aGCV, that re-weights the residual sum of squares and generalized degrees of freedom terms from GCV. The weight is chosen to maximize the decrease in the generalized degrees of freedom as a function of the weight value, while simultaneously minimizing aGCV as a function of the smoothing parameter and the change-points. Compared with GCV, simulation studies suggest that the aGCV method yields improved estimates of the change-point and the value of the function at the change-point.
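For reference, a minimal sketch of the standard GCV criterion that aGCV re-weights, GCV(λ) = n·RSS / (n − tr(H))², illustrated with a Whittaker-type second-difference smoother rather than the paper's change-point spline model; all names and data are illustrative.

```python
import numpy as np

def whittaker(y, lam):
    """Penalized smoother y_hat = (I + lam * D'D)^{-1} y,
    with D the second-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

def gcv_score(y, lam):
    """GCV(lam) = n * RSS / (n - tr(H))^2, where tr(H) is the
    generalized degrees of freedom of the smoother."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)
    H = np.linalg.inv(np.eye(n) + lam * D.T @ D)
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=100)
lams = 10.0 ** np.arange(-2, 5)
best = lams[np.argmin([gcv_score(y, l) for l in lams])]
fit = whittaker(y, best)
```

aGCV modifies this criterion by weighting the RSS and tr(H) terms so that, near a change-point, the fit is not oversmoothed the way plain GCV minimization can be.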
8.
The additive Cox model is flexible and powerful for modelling dynamic changes of regression coefficients in survival analysis. This paper is concerned with feature screening for the additive Cox model with ultrahigh-dimensional covariates. The proposed screening procedure can effectively identify active predictors: with probability tending to one, the selected variable set includes the actual active predictors. To carry out the proposed procedure, we propose an effective algorithm and establish its ascent property. We further prove that the proposed procedure possesses the sure screening property. Furthermore, we examine the finite-sample performance of the proposed procedure via Monte Carlo simulations, and illustrate it with a real data example.
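The sure screening property can be illustrated with the simplest member of this family of procedures: rank covariates by a marginal utility and keep the top d. The sketch below uses absolute marginal correlation on a linear toy model as a stand-in for the paper's additive-Cox screening utility; every name and parameter is illustrative.

```python
import numpy as np

def marginal_screen(X, y, d):
    """Rank covariates by |marginal correlation| with the response and
    keep the top d -- a simplified stand-in for the sure-screening
    utility developed for the additive Cox model."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(-np.abs(corr))[:d]

rng = np.random.default_rng(5)
n, p = 200, 1000                          # ultrahigh-dimensional: p >> n
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2] + rng.normal(size=n)
selected = marginal_screen(X, y, d=20)
# sure screening: the active set {0, 1, 2} should survive the screen
```

"With probability tending to one, the selected variable set includes the actual active predictors" is precisely the event checked in the comment above, holding uniformly as n grows.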
9.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study among these confidence interval estimators through Monte Carlo simulations is presented. The performance of these confidence intervals is evaluated in terms of their coverage probabilities and expected interval widths. Our simulation studies suggest that the confidence interval estimator based on saddlepoint approximation is more appealing for large coverage levels (e.g., nominal level ≤ 1%), whereas the score confidence interval estimator is more desirable for commonly used coverage levels (e.g., nominal level > 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.
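The Monte Carlo coverage comparison described here can be sketched with the simplest baseline interval, a Wald interval under inverse binomial sampling (sample until r successes, observe x total trials, p̂ = r/x, delta-method variance p²(1−p)/r). This is a generic baseline, not one of the paper's score or saddlepoint intervals, and the parameter values are illustrative.

```python
import numpy as np

Z95 = 1.959964                            # standard normal 97.5% quantile

def nb_wald_ci(r, x):
    """Wald interval for p under negative binomial sampling:
    p_hat = r/x, Var(p_hat) ~= p^2 (1 - p) / r by the delta method."""
    p_hat = r / x
    se = np.sqrt(p_hat**2 * (1 - p_hat) / r)
    return p_hat - Z95 * se, p_hat + Z95 * se

def coverage(p=0.3, r=30, n_sim=5000, seed=6):
    """Empirical coverage probability of the interval over n_sim
    simulated inverse-binomial experiments."""
    rng = np.random.default_rng(seed)
    # total trials = r successes + a negative-binomial count of failures
    x = r + rng.negative_binomial(r, p, size=n_sim)
    lo, hi = nb_wald_ci(r, x)
    return np.mean((lo <= p) & (p <= hi))

cov = coverage()                          # empirical coverage at nominal 95%
```

Repeating this loop over a grid of (p, r) values, and recording widths alongside coverage, reproduces the structure of the comparative study in the abstract.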
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号