81.
The authors study the problem of testing whether two populations have the same law by comparing kernel estimators of the two density functions. The proposed test statistic is based on a local empirical likelihood approach. They obtain the asymptotic distribution of the test statistic and propose a bootstrap approximation to calibrate the test. A simulation study is carried out in which the proposed method is compared with two competitors, and a procedure to select the bandwidth parameter is studied. The proposed test can be extended to more than two samples and to multivariate distributions.
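As an illustrative sketch of this kind of procedure (not the authors' local empirical likelihood statistic: the integrated squared distance between the two kernel density estimates, the Gaussian kernel, the fixed bandwidth, and the pooled-resampling calibration are all assumptions made here for concreteness):

```python
import numpy as np

def kde(grid, data, h):
    """Gaussian kernel density estimate of `data` evaluated on `grid`."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

def density_distance(s1, s2, h, grid):
    """Integrated squared difference between the two kernel density estimates."""
    dx = grid[1] - grid[0]
    return np.sum((kde(grid, s1, h) - kde(grid, s2, h)) ** 2) * dx

def bootstrap_density_test(s1, s2, h=0.5, n_boot=200, seed=0):
    """Calibrate the statistic by resampling both samples from the pooled
    data, which mimics the null hypothesis of a common law."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([s1, s2])
    grid = np.linspace(pooled.min() - 3 * h, pooled.max() + 3 * h, 256)
    observed = density_distance(s1, s2, h, grid)
    exceed = 0
    for _ in range(n_boot):
        star = rng.choice(pooled, size=pooled.size, replace=True)
        if density_distance(star[:s1.size], star[s1.size:], h, grid) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_boot + 1)
```

The bootstrap p-value is the fraction of resampled distances at least as large as the observed one, with the usual +1 correction.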
82.
Emmanuel Flachaire, Econometric Reviews, 2005, 24(2): 219-241
In the presence of heteroskedasticity of unknown form, the Ordinary Least Squares parameter estimator becomes inefficient and its covariance matrix estimator inconsistent. Eicker (1963) and White (1980) were the first to propose a robust, consistent covariance matrix estimator that permits asymptotically correct inference. This estimator is widely used in practice. Cragg (1983) proposed a more efficient estimator, but concluded that tests based on it are unreliable; consequently, that estimator has not been used in practice. This article is concerned with the finite-sample properties of tests robust to heteroskedasticity of unknown form. Our results suggest that reliable and more efficient tests can be obtained with the Cragg estimators in small samples.
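The Eicker-White estimator mentioned above can be sketched as a "sandwich": bread (X'X)^(-1) around a meat term X' diag(e_i^2) X built from squared OLS residuals. A minimal version (the HC0 variant; function name and simulation are illustrative):

```python
import numpy as np

def ols_hc0(X, y):
    """OLS fit with the Eicker-White (HC0) heteroskedasticity-robust
    covariance: (X'X)^{-1} [X' diag(e_i^2) X] (X'X)^{-1}."""
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    e = y - X @ beta
    meat = (X * (e ** 2)[:, None]).T @ X  # X' diag(e_i^2) X
    return beta, bread @ meat @ bread
```

Unlike the classical s^2 (X'X)^(-1) estimator, this remains consistent when the error variance depends on the regressors.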
83.
A smoothing procedure for discrete-time failure data is proposed which allows for the inclusion of covariates. This purely nonparametric method is based on discrete or continuous kernel smoothing techniques and gives a compromise between fidelity to the data and smoothness. The method may be used as an exploratory tool to uncover the underlying structure, or as an alternative to parametric methods when prediction is the primary objective. Confidence intervals are considered, and alternative cross-validation-based choices of the smoothing parameters are investigated.
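As a rough sketch of the idea without covariates (the life-table hazard estimate, the discrete Gaussian kernel, and the bandwidth are assumptions for illustration, not the paper's method):

```python
import numpy as np

def life_table_hazard(times, events, max_t):
    """Raw discrete-time hazard: events in period t divided by the number
    still at risk at the start of period t (periods are 1, 2, ...)."""
    hazard = np.zeros(max_t)
    for t in range(max_t):
        at_risk = np.sum(times >= t + 1)
        if at_risk > 0:
            hazard[t] = np.sum((times == t + 1) & events) / at_risk
    return hazard

def smooth_hazard(raw, bandwidth=2.0):
    """Discrete kernel smoothing of the raw hazard sequence: each point is
    a normalized Gaussian-weighted average of its neighbours."""
    t = np.arange(raw.size)
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    return (w @ raw) / w.sum(axis=1)
```

A small bandwidth reproduces the raw hazards; a large one flattens them, which is the data-versus-smoothness compromise described above.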
84.
Using data from the AIDS Link to Intravenous Experiences cohort study as an example, an informative censoring model was used to characterize the repeated hospitalization process of a group of patients. Under the informative censoring assumption, the estimators of the baseline rate function and the regression parameters were shown to be related to a latent variable. Hence, it becomes impractical to directly estimate the unknown quantities in the moments of the estimators for the bandwidth selection of a smoothing estimator and for the construction of confidence intervals, which are based, respectively, on the asymptotic mean squared errors and the asymptotic distributions of the estimators. To overcome these difficulties, we develop a random weighted bootstrap procedure to select appropriate bandwidths and to construct approximate confidence intervals. Our method is simple and fast to implement from a practical point of view, and is at least as accurate as other bootstrap methods. A Monte Carlo simulation shows that the proposed method is useful. An application of the procedure is also illustrated with a recurrent event sample of intravenous drug users receiving inpatient care over time.
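The general idea of a random weighted bootstrap can be sketched as follows (the exponential weights, the example statistic, and the percentile interval are illustrative choices, not the authors' procedure for the hospitalization model):

```python
import numpy as np

def random_weighted_bootstrap(data, statistic, n_boot=500, seed=0):
    """Random weighted bootstrap: instead of resampling observations,
    perturb each one with i.i.d. positive random weights (here standard
    exponential, normalized to mean one) and recompute the statistic."""
    rng = np.random.default_rng(seed)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.exponential(1.0, size=len(data))
        reps[b] = statistic(data, w / w.mean())
    return reps

def weighted_mean(x, w):
    """Example statistic: a weighted sample mean."""
    return np.average(x, weights=w)

def percentile_ci(reps, level=0.95):
    """Percentile confidence interval from the bootstrap replicates."""
    a = 1.0 - level
    return tuple(np.quantile(reps, [a / 2, 1 - a / 2]))
```

Because no observation is ever dropped, weighted bootstraps of this kind are convenient when the estimator involves latent quantities that resampling would disturb.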
85.
In this article, we investigate the limitations of traditional quantile function estimators and introduce a new class of quantile function estimators, namely the semi-parametric tail-extrapolated quantile estimators, which have excellent performance for estimating the extreme tails with finite sample sizes. The smoothed bootstrap and direct density estimation via the characteristic function are developed for the estimation of confidence intervals. Through a comprehensive simulation study comparing the confidence interval estimates of various quantile estimators, we discuss which quantile estimator, in conjunction with which confidence interval estimation method, is preferred under different circumstances. Data examples are given to illustrate the superiority of the semi-parametric tail-extrapolated quantile estimators. The new class of quantile estimators is obtained by a slight modification of traditional quantile estimators and should therefore be specifically appealing to researchers estimating the extreme tails.
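The smoothed bootstrap mentioned above can be sketched for a plain quantile estimator (the Gaussian jitter, Silverman's rule-of-thumb bandwidth, and the percentile interval are assumptions for illustration; the tail-extrapolated estimator itself is not reproduced here):

```python
import numpy as np

def smoothed_bootstrap_quantile_ci(data, q, n_boot=1000, level=0.95, seed=0):
    """Percentile CI for the q-th quantile from a smoothed bootstrap:
    resample with replacement, then jitter each resample with Gaussian
    noise whose scale follows Silverman's rule of thumb."""
    rng = np.random.default_rng(seed)
    n = data.size
    h = 1.06 * data.std() * n ** (-0.2)  # Silverman's rule of thumb
    reps = np.empty(n_boot)
    for b in range(n_boot):
        star = rng.choice(data, size=n, replace=True)
        reps[b] = np.quantile(star + rng.normal(0.0, h, n), q)
    a = 1.0 - level
    return tuple(np.quantile(reps, [a / 2, 1 - a / 2]))
```

The jitter smooths the discrete resampling distribution, which matters most exactly where quantile estimators struggle: in the sparse tails.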
86.
Prabhanjan N. Tattar, Communications in Statistics - Theory and Methods, 2013, 42(5): 1270-1277
In the present paper we develop simulation-based bootstrap tests of hypotheses for the transition probability matrix arising in the context of a multi-state model. The bootstrap test statistic is based on Tattar and Vaman (2008), who develop a statistic for testing problems concerning the transition probability matrix of a nonhomogeneous Markov process.
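A simulation-based bootstrap test for a transition matrix can be sketched as follows for a homogeneous chain (the Frobenius-distance statistic and the parametric resampling under the hypothesized matrix are illustrative assumptions, not the statistic of Tattar and Vaman):

```python
import numpy as np

def estimate_transition_matrix(chain, n_states):
    """MLE of the transition matrix from one observed state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(chain[:-1], chain[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # unvisited states get a uniform row so the estimate stays stochastic
    return np.divide(counts, rows, out=np.full_like(counts, 1 / n_states),
                     where=rows > 0)

def simulate_chain(P, length, rng, start=0):
    """Simulate a chain of the given length from transition matrix P."""
    chain = [start]
    for _ in range(length - 1):
        chain.append(rng.choice(P.shape[0], p=P[chain[-1]]))
    return np.array(chain)

def bootstrap_matrix_test(chain, P0, n_boot=300, seed=0):
    """Test H0: P = P0 by comparing the Frobenius distance between the
    estimated and hypothesized matrices to its simulated null distribution."""
    rng = np.random.default_rng(seed)
    n_states = P0.shape[0]
    stat = np.linalg.norm(estimate_transition_matrix(chain, n_states) - P0)
    exceed = 0
    for _ in range(n_boot):
        sim = simulate_chain(P0, len(chain), rng)
        if np.linalg.norm(estimate_transition_matrix(sim, n_states) - P0) >= stat:
            exceed += 1
    return stat, (exceed + 1) / (n_boot + 1)
```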
87.
The relative performances of improved ridge estimators and an empirical Bayes estimator are studied by means of Monte Carlo simulations. The empirical Bayes method is seen to perform consistently better, in terms of smaller MSE and more accurate empirical coverage, than any of the other estimators considered here. A bootstrap method is proposed to obtain more reliable estimates of the MSE of ridge estimators. Some theorems on the bootstrap for the ridge estimators are also given and are used to provide an analytical understanding of the proposed bootstrap procedure. Empirical coverages of the ridge estimators based on the proposed procedure are generally closer to the nominal coverage than their earlier counterparts. In general, except for a few cases, these coverages are still less accurate than the empirical coverages of the empirical Bayes estimator.
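One simple way to bootstrap the MSE of a ridge estimator is a residual bootstrap around the full-sample fit (this sketch is an assumption about the general technique, not the specific procedure proposed in the article):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam * I)^{-1} X'y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

def residual_bootstrap_ridge_mse(X, y, lam, n_boot=300, seed=0):
    """Residual-bootstrap estimate of the MSE of the ridge estimator,
    measured around the full-sample ridge fit."""
    rng = np.random.default_rng(seed)
    beta_hat = ridge(X, y, lam)
    resid = y - X @ beta_hat
    resid = resid - resid.mean()  # centre the residuals
    total = 0.0
    for _ in range(n_boot):
        y_star = X @ beta_hat + rng.choice(resid, size=y.size, replace=True)
        total += np.sum((ridge(X, y_star, lam) - beta_hat) ** 2)
    return total / n_boot
```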
88.
Iliyan Georgiev, David I. Harvey, Stephen J. Leybourne, A. M. Robert Taylor, Journal of Business & Economic Statistics, 2013, 31(3): 528-541
In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that any predictability in the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, and potentially also spurious, as both the finite sample and asymptotic size of the predictability tests can be significantly inflated. In response, we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroscedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate the asymptotic validity of the proposed bootstrap test by proving that the limit distribution of the bootstrap statistic, conditional on the data, is the same as the limit null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that for strongly persistent regressors and test statistics akin to ours the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have applications beyond the present context. An illustration is given by reexamining the results relating to U.S. stock returns data in Campbell and Yogo (2006). Supplementary materials for this article are available online.
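The mechanics of a fixed regressor wild bootstrap can be sketched in a toy setting: hold the regressor fixed, impose the null, and flip residual signs with Rademacher draws (this is a generic illustration, not the authors' stationarity-based invalidity test):

```python
import numpy as np

def wild_bootstrap_pvalue(x, y, n_boot=500, seed=0):
    """Fixed regressor wild bootstrap for the slope t-statistic in
    y_t = a + b*x_t + e_t, testing H0: b = 0. The regressor x is held
    fixed; y is regenerated from restricted (intercept-only) residuals
    multiplied by Rademacher signs, which preserves heteroskedasticity."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])

    def t_slope(yv):
        beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
        resid = yv - X @ beta
        s2 = resid @ resid / (yv.size - 2)
        cov = s2 * np.linalg.inv(X.T @ X)
        return beta[1] / np.sqrt(cov[1, 1])

    t_obs = t_slope(y)
    resid0 = y - y.mean()  # residuals under the null
    exceed = 0
    for _ in range(n_boot):
        signs = rng.choice([-1.0, 1.0], size=y.size)
        if abs(t_slope(y.mean() + resid0 * signs)) >= abs(t_obs):
            exceed += 1
    return (exceed + 1) / (n_boot + 1)
```

Conditioning on the fixed regressor is precisely the point stressed in the abstract: the bootstrap mimics the null distribution of the statistic given the predictor, not an unconditional one.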
89.
Robert L. Paige, A. Alexandre Trindade, Australian & New Zealand Journal of Statistics, 2013, 55(1): 25-41
A fast and accurate method of confidence interval construction for the smoothing parameter in penalised spline and partially linear models is proposed. The method is akin to a parametric percentile bootstrap in which Monte Carlo simulation is replaced by saddlepoint approximation, and can therefore be viewed as an approximate bootstrap. It is applicable in a quite general setting, requiring only that the underlying estimator be the root of an estimating equation that is a quadratic form in normal random variables. This is the case under a variety of optimality criteria, such as those commonly denoted by maximum likelihood (ML), restricted ML (REML), generalized cross validation (GCV) and Akaike's information criterion (AIC). Simulation studies reveal that under the ML and REML criteria, the method delivers a near-exact performance with computational speeds that are an order of magnitude faster than existing exact methods, and two orders of magnitude faster than a classical bootstrap. Perhaps most importantly, the proposed method also offers a computationally feasible alternative when no known exact or asymptotic methods exist, e.g. GCV and AIC. The methodology is illustrated with an application to well-known fossil data. Giving a range of plausible smoothed values in this instance can help answer questions about the statistical significance of apparent features in the data.
90.
Jonghyeon Kim, Communications in Statistics - Theory and Methods, 2013, 42(3): 461-476
The analysis of clustered data in a longitudinal ophthalmology study is complicated by correlations between the repeatedly measured visual outcomes of paired eyes in a participant, and by missing observations due to loss of follow-up. In the present article we consider hypothesis testing problems in an ophthalmology study in which eligible eyes are randomized to two treatments (when both eyes of a participant are eligible, the paired eyes are assigned to different treatments) and vision function outcomes are repeatedly measured over time. A large-sample nonparametric test statistic and a nonparametric bootstrap analog are proposed for testing an interaction effect of two factors, and for testing an effect of an eye-specific factor within a level of the other, person-specific factor, on visual function outcomes. Both test statistics allow for missing observations, correlations between the repeatedly measured outcomes of individual eyes, and correlations between the repeatedly measured outcomes of both eyes of each participant. A simulation study shows that the proposed test statistics approximately maintain nominal significance levels and have comparable power to each other, as well as higher power than a naive test statistic that ignores the correlations between repeated bilateral measurements of both eyes in the same person. For illustration, we apply the proposed test statistics to changes in visual field defect score in the Advanced Glaucoma Intervention Study.
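The key resampling idea for paired-eye data, resampling whole persons so that correlated eyes stay together, can be sketched as follows (the mean-difference statistic and percentile interval are illustrative assumptions; the paper's statistics handle repeated measures and missingness more generally):

```python
import numpy as np

def cluster_bootstrap_ci(y, trt, pid, n_boot=500, level=0.95, seed=0):
    """Person-level (cluster) bootstrap for the treatment-control mean
    difference: whole persons are resampled with replacement so that the
    two correlated eyes of a person always stay together."""
    rng = np.random.default_rng(seed)
    persons = np.unique(pid)
    idx_of = {p: np.where(pid == p)[0] for p in persons}

    def mean_diff(idx):
        return y[idx][trt[idx] == 1].mean() - y[idx][trt[idx] == 0].mean()

    observed = mean_diff(np.arange(y.size))
    reps = []
    for _ in range(n_boot):
        chosen = rng.choice(persons, size=persons.size, replace=True)
        idx = np.concatenate([idx_of[p] for p in chosen])
        # skip the rare resample that lacks one treatment arm entirely
        if (trt[idx] == 1).any() and (trt[idx] == 0).any():
            reps.append(mean_diff(idx))
    a = 1.0 - level
    lo, hi = np.quantile(reps, [a / 2, 1 - a / 2])
    return observed, (lo, hi)
```

Resampling eyes individually would treat the two eyes of a person as independent, which is exactly the mistake the naive test in the simulation study makes.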