Search results: 109 records found (search time: 15 ms)
41.
The authors propose a bootstrap procedure which estimates the distribution of an estimating function by resampling its terms using bootstrap techniques. Studentized versions of this so‐called estimating function (EF) bootstrap yield methods which are invariant under reparametrizations. This approach often has substantial advantages, both in computation and accuracy, over more traditional bootstrap methods, and it applies to a wide class of practical problems where the data are independent but not necessarily identically distributed. The methods allow for simultaneous estimation of vector parameters and their components. The authors use simulations to compare the EF bootstrap with competing methods in several examples, including the common means problem and nonlinear regression. They also prove asymptotic results showing that the studentized EF bootstrap yields higher order approximations for the whole vector parameter in a wide class of problems.
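As a minimal illustrative sketch (not the authors' implementation): for the simplest estimating function ψ_i(θ) = x_i − θ, resampling the EF terms and solving the resampled estimating equation in each replicate reduces to bootstrapping the sample mean. The data-generating choices below are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)  # independent observations (illustrative)

# Point estimate: root of the estimating equation sum_i (x_i - theta) = 0
theta_hat = x.mean()

# EF bootstrap sketch: resample the terms psi_i(theta) = x_i - theta and
# solve the resampled estimating equation for theta in each replicate.
B = 2000
roots = np.empty(B)
for b in range(B):
    idx = rng.integers(0, x.size, x.size)
    roots[b] = x[idx].mean()  # root of the resampled EF

se_boot = roots.std(ddof=1)  # bootstrap standard error of theta_hat
```

In this degenerate case the EF bootstrap coincides with the ordinary bootstrap; the paper's interest lies in vector-valued, non-linear estimating equations where resampling the terms avoids refitting from scratch.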
42.
We present two new statistics for estimating the number of factors underlying a multivariate system. One of the two new methods, the original NUMFACT, has been used in high profile environmental studies. The two new methods are first explained from a geometrical viewpoint. We then present an algebraic development and asymptotic cutoff points. Next we present a simulation study showing that for skewed data the new methods are typically superior to traditional methods, and for normally distributed data the new methods are competitive with the best of the traditional methods. We finally show how the methods compare using two environmental data sets.
43.
Tests for the equality of variances are of interest in many areas such as quality control, agricultural production systems, experimental education, pharmacology, biology, as well as a preliminary to the analysis of variance, dose–response modelling or discriminant analysis. The literature is vast. Traditional non-parametric tests are due to Mood, Miller and Ansari–Bradley. A test which usually stands out in terms of power and robustness against non-normality is the W50 Brown and Forsythe [Robust tests for the equality of variances, J. Am. Stat. Assoc. 69 (1974), pp. 364–367] modification of the Levene test [Robust tests for equality of variances, in Contributions to Probability and Statistics, I. Olkin, ed., Stanford University Press, Stanford, 1960, pp. 278–292]. This paper deals with the two-sample scale problem and in particular with Levene type tests. We consider 10 Levene type tests: the W50, the M50 and L50 tests [G. Pan, On a Levene type test for equality of two variances, J. Stat. Comput. Simul. 63 (1999), pp. 59–71], the R-test [R.G. O'Brien, A general ANOVA method for robust tests of additive models for variances, J. Am. Stat. Assoc. 74 (1979), pp. 877–880], as well as the bootstrap and permutation versions of the W50, L50 and R tests. We consider also the F-test, the modified Fligner and Killeen [Distribution-free two-sample tests for scale, J. Am. Stat. Assoc. 71 (1976), pp. 210–213] test, an adaptive test due to Hall and Padmanabhan [Adaptive inference for the two-sample scale problem, Technometrics 23 (1997), pp. 351–361] and the two tests due to Shoemaker [Tests for differences in dispersion based on quantiles, Am. Stat. 49(2) (1995), pp. 179–182; Interquantile tests for dispersion in skewed distributions, Commun. Stat. Simul. Comput. 28 (1999), pp. 189–205]. The aim is to identify the effective methods for detecting scale differences. 
Our study differs from earlier ones in that it focuses on resampling versions of the Levene type tests, and several of the tests considered here have never before been proposed or compared. The computationally simplest test found to be robust is the W50. Higher power, while preserving robustness, is achieved by the resampling versions of the Levene type tests, namely the permutation R-test (recommended for normal- and light-tailed distributions) and the bootstrap L50 test (recommended for heavy-tailed and skewed distributions). Among non-Levene type tests, the best is the adaptive test due to Hall and Padmanabhan.
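As a sketch of the two ingredients named above: in SciPy, the Brown–Forsythe W50 is Levene's test computed on absolute deviations from the group medians (`center='median'`), and a permutation version can be obtained by reshuffling the pooled observations under the null of equal scale. The sample sizes, seeds, and helper name `perm_w50_pvalue` are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=60)   # two groups with equal centre,
b = rng.normal(0.0, 2.0, size=60)   # different scale (illustrative)

# W50: Levene's test on absolute deviations from the group medians
w50_stat, w50_p = levene(a, b, center='median')

def perm_w50_pvalue(a, b, n_perm=499, seed=0):
    """Permutation version of W50: reshuffle pooled data under H0 of equal scale."""
    rng = np.random.default_rng(seed)
    obs, _ = levene(a, b, center='median')
    pooled = np.concatenate([a, b])
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        s, _ = levene(perm[:a.size], perm[a.size:], center='median')
        exceed += s >= obs
    return (exceed + 1) / (n_perm + 1)

p_perm = perm_w50_pvalue(a, b)
```

Note that permuting raw pooled observations is only appropriate when the groups share a common location, as assumed here; otherwise one permutes the median-centred deviations instead.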
44.
Generalized discriminant analysis based on distances   (Total citations: 13; self-citations: 1; citations by others: 13)
This paper describes a method of generalized discriminant analysis based on a dissimilarity matrix to test for differences in a priori groups of multivariate observations. Use of classical multidimensional scaling produces a low‐dimensional representation of the data for which Euclidean distances approximate the original dissimilarities. The resulting scores are then analysed using discriminant analysis, giving tests based on the canonical correlations. The asymptotic distributions of these statistics under permutations of the observations are shown to be invariant to changes in the distributions of the original variables, unlike the distributions of the multi‐response permutation test statistics which have been considered by other workers for testing differences among groups. This canonical method is applied to multivariate fish assemblage data, with Monte Carlo simulations to make power comparisons and to compare theoretical results and empirical distributions. The paper proposes classification based on distances. Error rates are estimated using cross‐validation.
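The first step of the pipeline, classical multidimensional scaling of a dissimilarity matrix into Euclidean scores, can be sketched via double centring and an eigendecomposition. This is standard Torgerson scaling under the assumption that `D` is a plain dissimilarity matrix; the subsequent discriminant analysis on the scores is not shown.

```python
import numpy as np

def classical_mds(D, k):
    """Classical (Torgerson) scaling: map an n x n dissimilarity matrix D to
    k-dimensional scores whose Euclidean distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:k]            # keep the k largest eigenvalues
    w, V = np.clip(w[order], 0.0, None), V[:, order]
    return V * np.sqrt(w)                      # coordinate scores

# For a genuinely Euclidean distance matrix the recovery is exact (up to rotation):
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
scores = classical_mds(D, 2)
```

For non-Euclidean dissimilarities the smaller eigenvalues may be negative, which is why they are clipped before taking square roots.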
45.
When the outcome of interest is semicontinuous and collected longitudinally, efficient testing can be difficult. Daily rainfall data is an excellent example which we use to illustrate the various challenges. Even under the simplest scenario, the popular ‘two-part model’, which uses correlated random effects to account for both the semicontinuous and longitudinal characteristics of the data, often requires prohibitively intensive numerical integration and is difficult to interpret. Reducing the data to binary (truncating continuous positive values to equal one), while relatively straightforward, leads to a potentially substantial loss in power. We propose an alternative: a non-parametric rank test recently proposed for joint longitudinal survival data. We investigate the potential benefits of such a test for the analysis of semicontinuous longitudinal data with regard to power and computational feasibility.
46.
An extensive simulation study is conducted to compare the performance of balanced and antithetic resampling for the bootstrap in estimation of bias, variance, and percentiles when the statistic of interest is the median, the square root of the absolute value of the mean, or the median absolute deviation from the median. Simulation results reveal that balanced resampling provides better efficiencies in most cases; however, antithetic resampling is superior in estimating the bias of the median. We also investigate the possibility of combining an existing efficient bootstrap computation of Efron (1990) with balanced or antithetic resampling for percentile estimation. Results indicate that the combination method does indeed offer gains in performance, though the gains are much more dramatic for the bootstrap t statistic than for any of the three statistics of interest described above.
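Balanced resampling itself is simple to sketch: concatenate B copies of the index set, permute, and cut into B resamples, so every observation appears exactly B times across the whole bootstrap. The statistic (the median) and the constants below are illustrative choices, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=30)
n, B = x.size, 500

# Balanced resampling: each observation is used exactly B times overall,
# which removes one source of Monte Carlo variation from the bootstrap.
flat = rng.permutation(np.repeat(np.arange(n), B))
idx = flat.reshape(B, n)

medians = np.median(x[idx], axis=1)
bias_est = medians.mean() - np.median(x)   # bootstrap bias estimate for the median
```

Ordinary bootstrap indices (`rng.integers(0, n, (B, n))`) would use each observation only B times on average; the balanced scheme enforces the constraint exactly.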
47.
We consider the problem of estimating a collection of integrals with respect to an unknown finite measure μ from noisy observations of some of the integrals. A new method to carry out Bayesian inference for the integrals is proposed. We use a Dirichlet or Gamma process as a prior for μ, and construct an approximation to the posterior distribution of the integrals using the sampling importance resampling algorithm and samples from a new multidimensional version of a Markov chain by Feigin and Tweedie. We prove that the Markov chain is positive Harris recurrent, and that the approximating distribution converges weakly to the posterior as the sample size increases, under a mild integrability condition. Applications to polymer chemistry and mathematical finance are given.
48.
Based on recent developments in the field of operations research, we propose two adaptive resampling algorithms for estimating bootstrap distributions. One algorithm applies the principle of the recently proposed cross-entropy (CE) method for rare event simulation, and does not require calculation of the resampling probability weights via numerical optimization methods (e.g., Newton's method), whereas the other algorithm can be viewed as a multi-stage extension of the classical two-step variance minimization approach. The two algorithms can be easily used as part of a general algorithm for Monte Carlo calculation of bootstrap confidence intervals and tests, and are especially useful in estimating rare event probabilities. We analyze theoretical properties of both algorithms in an idealized setting and carry out simulation studies to demonstrate their performance. Empirical results on both one-sample and two-sample problems as well as a real survival data set show that the proposed algorithms are not only superior to traditional approaches, but may also provide more than an order of magnitude of computational efficiency gains.
49.
A three‐arm trial including an experimental treatment, an active reference treatment and a placebo is often used to assess the non‐inferiority (NI) with assay sensitivity of an experimental treatment. Various hypothesis‐test‐based approaches via a fraction or pre‐specified margin have been proposed to assess NI with assay sensitivity in a three‐arm trial. Little work has been done on confidence intervals in a three‐arm trial. This paper develops a hybrid approach to construct a simultaneous confidence interval for assessing NI and assay sensitivity in a three‐arm trial. For comparison, we present normal‐approximation‐based and bootstrap‐resampling‐based simultaneous confidence intervals. Simulation studies show that the hybrid approach with the Wilson score statistic performs better than other approaches in terms of empirical coverage probability and mesial‐non‐coverage probability. An example is used to illustrate the proposed approaches.
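For background on the statistic named above: the basic one-sample Wilson score interval for a binomial proportion is sketched below. This is only the standard single-proportion interval, not the paper's hybrid simultaneous construction for three arms.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Two-sided Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom                       # shrunk point estimate
    half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(50, 100)   # e.g. 50 responders out of 100
```

Unlike the Wald interval, the Wilson interval never escapes [0, 1] and retains good coverage near the boundaries, which is why score-type statistics are attractive building blocks for hybrid intervals.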
50.
Right‐censored and length‐biased failure time data arise in many fields including cross‐sectional prevalent cohort studies, and their analysis has recently attracted a great deal of attention. It is well‐known that for regression analysis of failure time data, two commonly used approaches are hazard‐based and quantile‐based procedures, and most of the existing methods are the hazard‐based ones. In this paper, we consider quantile regression analysis of right‐censored and length‐biased data and present a semiparametric varying‐coefficient partially linear model. For estimation of regression parameters, a three‐stage procedure that makes use of the inverse probability weighted technique is developed, and the asymptotic properties of the resulting estimators are established. In addition, the approach allows the dependence of the censoring variable on covariates, while most of the existing methods assume the independence between censoring variables and covariates. A simulation study is conducted and suggests that the proposed approach works well in practical situations. Also, an illustrative example is provided.
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号