91.
The aim of the paper is to study the problem of estimating the quantile function of a finite population. Attention is first focused on point estimation, and asymptotic results are obtained. Confidence intervals are then constructed based on (i) the asymptotic results and (ii) a resampling technique that rescales the ‘usual’ bootstrap. Finally, a simulation study comparing asymptotic and resampling-based results is performed, together with an application to a real population.
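A minimal sketch of the interval construction in (ii), assuming a plain with-replacement percentile bootstrap; the rescaling the paper applies to the ‘usual’ bootstrap (to account for the finite-population sampling fraction) is not reproduced, and the lognormal population, sample size and B are illustrative only:

```python
import numpy as np

def boot_quantile_ci(sample, q=0.5, B=2000, alpha=0.05, seed=None):
    """Percentile-bootstrap CI for a population quantile (plain bootstrap;
    the paper's rescaled version additionally corrects for the sampling
    fraction n/N of the finite population)."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    stats = np.array([np.quantile(rng.choice(sample, n, replace=True), q)
                      for _ in range(B)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return np.quantile(sample, q), (lo, hi)

rng = np.random.default_rng(42)
population = rng.lognormal(size=10_000)                   # hypothetical finite population
sample = rng.choice(population, size=200, replace=False)  # simple random sample
est, (lo, hi) = boot_quantile_ci(sample, q=0.5, seed=1)
print(f"median estimate {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```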
92.
A bootstrap-based method is presented for constructing confidence regions (CRs) around row or column points projected onto a pair of axes from the correspondence analysis (CA) of a two-way contingency table. These regions address the specific question of the sampling variation of sample row and column profile points around the population profile points when both are projected onto the observed axes, rather than the decomposition of the χ2-test of independence or the general question of the stability of the observed CA display considered in previous work. The method therefore constructs the regions differently from earlier proposals. A simulation experiment shows that the method performs well in most of the situations in which it might be used, with a few exceptions noted. An example illustrates that the method produces conclusions consistent with those from detailed parametric modelling.
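A rough sketch of the resampling idea under stated assumptions: bootstrap tables are drawn from a multinomial with the observed cell proportions, and bootstrap row profiles are projected onto the axes of the observed CA solution; the toy table is invented, and turning the resulting point clouds into formal confidence regions (ellipses or hulls) is omitted:

```python
import numpy as np

def ca_column_axes(table):
    """Column standard coordinates (first two axes) from simple CA."""
    P = table / table.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    return Vt[:2].T / np.sqrt(c)[:, None]

def bootstrap_row_points(table, B=500, seed=None):
    """Project bootstrap row profiles onto the axes of the *observed* solution."""
    rng = np.random.default_rng(seed)
    n = int(table.sum())
    G = ca_column_axes(table)                # axes fixed at the observed CA
    pts = np.empty((B, table.shape[0], 2))
    for b in range(B):
        boot = rng.multinomial(n, (table / n).ravel()).reshape(table.shape)
        profiles = boot / boot.sum(axis=1, keepdims=True)  # assumes no empty row
        pts[b] = profiles @ G                # row principal coordinates
    return pts

table = np.array([[20, 35, 10], [15, 20, 30], [30, 10, 25]], float)
cloud = bootstrap_row_points(table, seed=0)
print(cloud.mean(axis=0))  # bootstrap mean position of each row point
```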
93.
Importance resampling is an approach that uses exponential tilting to reduce the resampling necessary for the construction of nonparametric bootstrap confidence intervals. The properties of bootstrap importance confidence intervals are well established when the statistic of interest is a smooth function of means and when there is no censoring. However, in the framework of survival or time-to-event data, the asymptotic properties of importance resampling have not been rigorously studied, mainly because of the unduly complicated theory incurred when data are censored. This paper uses extensive simulation to show that, for parameter estimates arising from fitting Cox proportional hazards models, importance bootstrap confidence intervals can be constructed if the importance resampling probabilities of the records for the n individuals in the study are determined by the empirical influence function for the parameter of interest. Our results show that, compared to uniform resampling, importance resampling improves the relative mean-squared-error (MSE) efficiency by a factor of nine (for n = 200). The efficiency increases significantly with sample size, is mildly associated with the amount of censoring, but decreases slightly as the number of bootstrap resamples increases. The extra CPU time required to calculate importance resamples is negligible compared to the large improvement in MSE efficiency. The method is illustrated through an application to data on chronic lymphocytic leukemia, which highlights that the bootstrap confidence interval is the preferred alternative to large-sample inference when the distribution of a specific covariate deviates from normality. Our results imply that, because of its computational efficiency, importance resampling is recommended whenever bootstrap methodology is implemented in a survival framework. Its use is particularly important when complex covariates are involved or the survival problem to be solved is part of a larger problem; for instance, when determining confidence bounds for models linking survival time with clusters identified in gene expression microarray data.
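A minimal sketch of importance resampling by exponential tilting, using the sample mean instead of Cox model coefficients so the empirical influence values have the closed form l_i = x_i - x̄; the tilting parameter lam, the threshold t and the exponential sample are all illustrative, not the paper's survival setup:

```python
import numpy as np

def importance_bootstrap_mean(x, lam=-2.0, B=2000, seed=None):
    """Importance resampling with exponentially tilted probabilities
    p_i proportional to exp(lam * l_i), where l_i = x_i - mean(x) are the
    empirical influence values of the sample mean (the paper determines
    the probabilities from the influence function of Cox coefficients)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    p = np.exp(lam * (x - x.mean()))
    p /= p.sum()
    stats, w = np.empty(B), np.empty(B)
    log_ratio = np.log(1.0 / n) - np.log(p)   # per-record log weight
    for b in range(B):
        idx = rng.choice(n, n, replace=True, p=p)
        stats[b] = x[idx].mean()
        # likelihood ratio of uniform vs tilted resampling for this resample
        w[b] = np.exp(np.bincount(idx, minlength=n) @ log_ratio)
    return stats, w

x = np.random.default_rng(0).exponential(size=100)
stats, w = importance_bootstrap_mean(x, seed=1)
t = np.quantile(x, 0.25)          # illustrative left-tail threshold
print((w * (stats <= t)).mean())  # estimates the uniform-bootstrap tail probability
```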
94.
Semiparametric Analysis of Truncated Data
Randomly truncated data are frequently encountered in studies where truncation arises as a result of the sampling design. In the literature, nonparametric and semiparametric methods have been proposed to estimate parameters in one-sample models. This paper considers a semiparametric model and develops an efficient method for the estimation of unknown parameters. The model assumes that K populations have a common probability distribution but are observed subject to different truncation mechanisms. Semiparametric likelihood estimation is studied, and the corresponding inferences are derived for both the parametric and nonparametric components of the model. The method can also be applied to two-sample problems to test for a difference between lifetime distributions. Simulation results and a real data analysis are presented to illustrate the methods.
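A toy illustration of a truncation-adjusted likelihood, assuming a fully parametric one-sample exponential model rather than the paper's semiparametric K-population setup: each observation enters the likelihood through f(x_i)/S(t_i), because units with x_i < t_i are never observed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def truncated_exp_mle(x, t):
    """Conditional MLE of the exponential rate under left truncation: the
    contribution of each observation is f(x_i; r) / S(t_i; r), giving the
    log-likelihood n*log(r) - r * sum(x_i - t_i)."""
    def nll(r):
        return -(len(x) * np.log(r) - r * np.sum(x - t))
    return minimize_scalar(nll, bounds=(1e-6, 100.0), method="bounded").x

rng = np.random.default_rng(1)
lifetimes = rng.exponential(scale=2.0, size=5000)  # true rate = 0.5
trunc = rng.uniform(0, 3, size=5000)               # truncation times
observed = lifetimes >= trunc                      # only these are ever seen
print(truncated_exp_mle(lifetimes[observed], trunc[observed]))  # close to 0.5
```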
95.
In this paper, the two-sample scale problem is addressed within the rank framework, which does not require specifying the underlying continuous distribution. However, since the power of a rank test depends on the underlying distribution, it is very useful for the researcher to have some information about it in order to choose the most suitable test. A two-stage adaptive design is used, in which the data from the first stage are used to compute a selector statistic that selects the test statistic for stage 2. More precisely, an adaptive scale test due to Hall and Padmanabhan and its components are considered in one-stage and in several adaptive and non-adaptive two-stage procedures. A simulation study shows that the two-stage test with the adaptive choice in the second stage and with Liptak combination, when it is not more powerful than the corresponding one-stage test, nevertheless shows quite similar power behavior. The test procedures are illustrated using two ecological applications and a clinical trial.
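A rough sketch of a two-stage adaptive procedure with Liptak combination; the Ansari-Bradley and Mood tests stand in for the Hall-Padmanabhan test and its components, and the tail-weight selector and its cutoff are invented for illustration:

```python
import numpy as np
from scipy import stats

def tailweight(x):
    """Crude selector statistic: larger values suggest heavier tails."""
    q = np.quantile(x, [0.025, 0.25, 0.75, 0.975])
    return (q[3] - q[0]) / (q[2] - q[1])

def adaptive_two_stage(x1, y1, x2, y2):
    # Stage 1: a fixed rank scale test on the first-stage data.
    p1 = stats.ansari(x1, y1).pvalue
    # The stage-1 selector chooses the stage-2 statistic (invented cutoff).
    if tailweight(np.concatenate([x1, y1])) > 2.0:
        p2 = stats.mood(x2, y2)[1]          # heavier tails -> Mood's test
    else:
        p2 = stats.ansari(x2, y2).pvalue
    # Liptak (inverse-normal) combination of the stage-wise p-values.
    z = (stats.norm.isf(p1) + stats.norm.isf(p2)) / np.sqrt(2)
    return stats.norm.sf(z)

rng = np.random.default_rng(7)
x1, x2 = rng.normal(0, 1, 40), rng.normal(0, 1, 40)   # two stages, group X
y1, y2 = rng.normal(0, 2, 40), rng.normal(0, 2, 40)   # two stages, group Y
print(adaptive_two_stage(x1, y1, x2, y2))
```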
96.
The maximum likelihood, jackknife and bootstrap estimators of linkage disequilibrium, a measure of association in population genetics, are derived and compared. It is found that for point estimation the resampling methods generate almost identical mean square errors, while the maximum likelihood estimator can have a bigger or smaller mean square error depending on the parameters of the underlying population. However, the bootstrap confidence interval is superior to the other two, as its intervals are shorter or the probability that its 95% confidence intervals include the true parameter is closer to 0.95. Although the standardised measure of linkage disequilibrium ranges from -1 to 1 regardless of marginal frequencies, it is shown that the distribution of this standardised measure is still not independent of allele frequencies under the multinomial sampling scheme.
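A minimal sketch of the standardised measure D' and its percentile bootstrap; the haplotype counts are hypothetical, and the resampling treats the four counts as a single multinomial draw, matching the multinomial sampling scheme mentioned above:

```python
import numpy as np

def d_prime(counts):
    """Standardised linkage disequilibrium D' from haplotype counts
    [n_AB, n_Ab, n_aB, n_ab]."""
    pAB, pAb, paB, pab = counts / counts.sum()
    pA, pB = pAB + pAb, pAB + paB
    D = pAB - pA * pB
    Dmax = (min(pA * (1 - pB), (1 - pA) * pB) if D > 0
            else min(pA * pB, (1 - pA) * (1 - pB)))
    return D / Dmax if Dmax > 0 else 0.0

def bootstrap_ci(counts, B=2000, alpha=0.05, seed=None):
    """Percentile bootstrap: haplotypes resampled as one multinomial draw."""
    rng = np.random.default_rng(seed)
    n, p = counts.sum(), counts / counts.sum()
    reps = np.array([d_prime(rng.multinomial(n, p)) for _ in range(B)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

counts = np.array([55, 20, 15, 10])   # hypothetical haplotype counts
print(d_prime(counts), bootstrap_ci(counts, seed=3))
```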
97.
The empirical best linear unbiased prediction approach is a popular method for the estimation of small area parameters. However, reliable estimation of the mean squared prediction error (MSPE) of the empirical best linear unbiased predictor (EBLUP) is a complicated process. In this paper we study the use of resampling methods for MSPE estimation of the EBLUP. A cross-sectional and time-series stationary small area model is used to provide estimates in small areas. Under this model, a parametric bootstrap procedure and a weighted jackknife method are introduced. A Monte Carlo simulation study is conducted to compare the performance of different resampling-based measures of uncertainty of the EBLUP with the analytical approximation. Our empirical results show that the proposed resampling-based approaches performed better than the analytical approximation in several situations, although in some cases they tend to underestimate the true MSPE of the EBLUP in a larger number of small areas.
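A rough sketch of the parametric bootstrap idea for MSPE estimation, assuming a plain area-level (Fay-Herriot type) model with known sampling variances psi rather than the paper's cross-sectional and time-series model; the moment-type fitting routine and all constants are illustrative:

```python
import numpy as np

def fit_area_model(y, X, psi, n_iter=50):
    """Crude moment-type fit of y_i = x_i'beta + v_i + e_i with
    v_i ~ N(0, sv2) and known sampling variances e_i ~ N(0, psi_i)."""
    sv2 = max(np.var(y) - psi.mean(), 0.01)
    for _ in range(n_iter):
        w = 1.0 / (sv2 + psi)
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sv2 = max(np.mean((y - X @ beta) ** 2 - psi), 0.01)
    return beta, sv2

def eblup(y, X, psi, beta, sv2):
    gamma = sv2 / (sv2 + psi)                   # shrinkage weight per area
    return gamma * y + (1 - gamma) * (X @ beta)

def pboot_mspe(y, X, psi, B=200, seed=None):
    """Parametric bootstrap MSPE of the EBLUP, area by area."""
    rng = np.random.default_rng(seed)
    beta, sv2 = fit_area_model(y, X, psi)
    m, sq_err = len(y), np.zeros(len(y))
    for _ in range(B):
        theta = X @ beta + rng.normal(0, np.sqrt(sv2), m)  # bootstrap truth
        yb = theta + rng.normal(0, np.sqrt(psi))           # bootstrap data
        bb, sb = fit_area_model(yb, X, psi)
        sq_err += (eblup(yb, X, psi, bb, sb) - theta) ** 2
    return sq_err / B

rng = np.random.default_rng(5)
m = 30
X = np.column_stack([np.ones(m), rng.normal(size=m)])
psi = rng.uniform(0.3, 1.0, m)   # known design variances
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 1, m) + rng.normal(0, np.sqrt(psi))
print(np.round(pboot_mspe(y, X, psi, seed=6)[:5], 3))
```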
98.
Markov chain Monte Carlo (MCMC) sampling is a numerically intensive simulation technique which has greatly improved the practicality of Bayesian inference and prediction. However, MCMC sampling is too slow to be of practical use in problems involving a large number of posterior (target) distributions, as in dynamic modelling and predictive model selection. Alternative simulation techniques for tracking moving target distributions, known as particle filters, which combine importance sampling, importance resampling and MCMC sampling, tend to suffer from a progressive degeneration as the target sequence evolves. We propose a new technique, based on these same simulation methodologies, which does not suffer from this progressive degeneration.
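A minimal sketch of the baseline being improved upon, a bootstrap particle filter with multinomial resampling for an invented linear-Gaussian state-space model; the progressive degeneration the paper targets shows up when such filters are run over long or complex target sequences:

```python
import numpy as np

def particle_filter(y, n_particles=1000, seed=None):
    """Bootstrap particle filter for the invented model
    x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t  (unit Gaussian noise)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, 1, n_particles)
    means = []
    for yt in y:
        x = 0.9 * x + rng.normal(0, 1, n_particles)  # propagate (importance sampling)
        logw = -0.5 * (yt - x) ** 2                  # Gaussian observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                  # filtered posterior mean
        x = x[rng.choice(n_particles, n_particles, p=w)]  # importance resampling
    return np.array(means)

rng = np.random.default_rng(2)
x_true = np.zeros(100)
for t in range(1, 100):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y = x_true + rng.normal(size=100)
print(np.round(particle_filter(y, seed=3)[:5], 2))
```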
99.
Consider testing multiple hypotheses using tests that can only be evaluated by simulation, such as permutation tests or bootstrap tests. This article introduces MMCTest, a sequential algorithm that gives, with arbitrarily high probability, the same classification as a specific multiple testing procedure applied to ideal p-values. The method can be used with a class of multiple testing procedures that includes the Benjamini-Hochberg false discovery rate procedure and the Bonferroni correction controlling the familywise error rate. One of the key features of the algorithm is that it stops sampling for all hypotheses that can already be decided as rejected or non-rejected. MMCTest can be interrupted at any stage and then returns three sets of hypotheses: the rejected, the non-rejected and the undecided. A simulation study motivated by actual biological data shows that MMCTest is usable in practice and that, despite the additional guarantee, it can be computationally more efficient than other methods.
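A rough sketch of the sequential idea under strong simplifications: each simulation draw for hypothesis i is reduced to a Bernoulli indicator with the ideal p-value as success probability (standing in for an actual permutation or bootstrap replicate), hypotheses are classified against a Bonferroni threshold via Clopper-Pearson bounds, and sampling stops for decided hypotheses; this is not the MMCTest algorithm itself:

```python
import numpy as np
from scipy import stats

def sequential_mc_testing(p_true, alpha=0.05, batch=200, rounds=100, seed=None):
    """Classify m hypotheses against the Bonferroni threshold alpha/m using
    simulated test draws, stopping early for decided hypotheses."""
    rng = np.random.default_rng(seed)
    m, thr = len(p_true), alpha / len(p_true)
    k, n = np.zeros(m, int), np.zeros(m, int)
    active = np.ones(m, bool)
    for _ in range(rounds):
        if not active.any():
            break
        idx = np.flatnonzero(active)
        k[idx] += rng.binomial(batch, p_true[idx])
        n[idx] += batch
        # 99.9% Clopper-Pearson bounds on each running p-value estimate
        lo = np.where(k[idx] == 0, 0.0,
                      stats.beta.ppf(5e-4, k[idx], n[idx] - k[idx] + 1))
        hi = np.where(k[idx] == n[idx], 1.0,
                      stats.beta.ppf(1 - 5e-4, k[idx] + 1, n[idx] - k[idx]))
        active[idx[hi < thr]] = False   # decidedly rejected
        active[idx[lo > thr]] = False   # decidedly non-rejected
    rejected = (k + 1) / (n + 1) < thr
    return rejected & ~active, active   # (rejected, still-undecided)

p = np.concatenate([np.full(5, 1e-5),
                    np.random.default_rng(0).uniform(0.1, 1.0, 95)])
rej, undecided = sequential_mc_testing(p, seed=1)
print(rej.sum(), undecided.sum())       # expect 5 rejected, few or no undecided
```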
100.
Finite mixture models arise naturally as models of unobserved population heterogeneity. It is assumed that the population consists of an unknown number k of subpopulations with parameters λ1, ..., λk receiving weights p1, ..., pk. Because of the irregularity of the parameter space, the log-likelihood-ratio statistic (LRS) does not have a χ2 limit distribution, and it is therefore difficult to use the LRS to test for the number of components. These problems are circumvented by using the nonparametric bootstrap: the mixture algorithm is applied B times to bootstrap samples drawn from the original sample with replacement, and the number of components k is obtained as the mode of the bootstrap distribution of the estimated k. This approach is presented using the Times newspaper data and investigated in a simulation study for mixtures of Poisson data.
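A minimal sketch of the bootstrap mode-of-k idea for Poisson mixtures; EM with BIC selection stands in for the paper's mixture algorithm, and the two-component sample and all tuning constants are invented:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def em_poisson_mix(x, k, n_iter=300, seed=0):
    """EM fit of a k-component Poisson mixture; returns the log-likelihood."""
    rng = np.random.default_rng(seed)
    lam = np.sort(rng.choice(x, k, replace=False) + 0.5)  # avoid zero rates
    p = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        logf = np.log(p) + x[:, None] * np.log(lam) - lam - gammaln(x[:, None] + 1)
        r = np.exp(logf - logsumexp(logf, axis=1, keepdims=True))   # E-step
        nk = r.sum(axis=0) + 1e-9                                   # M-step
        p = nk / len(x)
        lam = np.maximum((r * x[:, None]).sum(axis=0) / nk, 1e-6)
    logf = np.log(p) + x[:, None] * np.log(lam) - lam - gammaln(x[:, None] + 1)
    return logsumexp(logf, axis=1).sum()

def pick_k(x, kmax=4):
    """BIC selection of k, a stand-in for the paper's mixture algorithm."""
    bic = [-2 * em_poisson_mix(x, k) + (2 * k - 1) * np.log(len(x))
           for k in range(1, kmax + 1)]
    return int(np.argmin(bic)) + 1

def bootstrap_k(x, B=30, seed=None):
    rng = np.random.default_rng(seed)
    ks = [pick_k(rng.choice(x, len(x), replace=True)) for _ in range(B)]
    return np.bincount(ks).argmax()   # mode of the bootstrap distribution of k

rng = np.random.default_rng(4)
x = np.concatenate([rng.poisson(1, 150), rng.poisson(7, 150)])
print(bootstrap_k(x, seed=5))         # expect 2
```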