109 query results found (search time: 15 ms)
1.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often impose limitations on the availability of a sufficient number of ambient concentration measurements for performing environmental risk analysis. Therefore, sample size limitations, representativeness of data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point source, area source, and line source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that utilizes the sampled concentration data for evaluating whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
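The bootstrap step described above can be sketched in a few lines: resample the observed concentrations with replacement, recompute the mean for each resample, and read off percentile confidence bounds. The function name, the toy "monthly readings" data, and the percentile-bootstrap variant are illustrative assumptions, not taken from the paper.

```python
import random
import statistics

def bootstrap_mean_ci(samples, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the annual mean.

    A minimal sketch of the resampling idea; other bootstrap CI
    variants (normal-theory, BCa) could be substituted.
    """
    rng = random.Random(seed)
    n = len(samples)
    boot_means = sorted(
        statistics.fmean(rng.choices(samples, k=n)) for _ in range(n_boot)
    )
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.fmean(samples), (lo, hi)

# Hypothetical example: 12 monthly readings (arbitrary units).
readings = [14.2, 9.8, 11.5, 13.0, 10.4, 12.1,
            15.3, 8.9, 11.0, 12.7, 9.5, 13.8]
mean, (lo, hi) = bootstrap_mean_ci(readings)
```

With only 12 values the interval is wide, which is exactly the sample-size concern the paper raises.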
2.
An imputation procedure is a procedure by which each missing value in a data set is replaced (imputed) by an observed value using a predetermined resampling procedure. The distribution of a statistic computed from a data set consisting of observed and imputed values, called a completed data set, is affected by the imputation procedure used. In a Monte Carlo experiment, three imputation procedures are compared with respect to the empirical behavior of the goodness-of-fit chi-square statistic computed from a completed data set. The results show that each imputation procedure affects the distribution of the goodness-of-fit chi-square statistic in a different manner. However, when the empirical behavior of the goodness-of-fit chi-square statistic is compared to its appropriate asymptotic distribution, there are no substantial differences between these imputation procedures.
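One simple resampling-based imputation procedure of the kind compared above replaces each missing entry by a value drawn with replacement from the observed entries. This "hot-deck"-style sketch is an assumption for illustration; the three procedures actually studied in the experiment may differ.

```python
import random

def impute_by_resampling(values, seed=0):
    """Replace each None by a value drawn with replacement from the
    observed (non-missing) entries.  A generic sketch, not one of the
    paper's specific procedures."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    return [v if v is not None else rng.choice(observed) for v in values]

# Categorical data with three missing cells.
data = ["A", "B", None, "A", None, "C", "B", None]
completed = impute_by_resampling(data)
```

A goodness-of-fit chi-square statistic would then be computed from the category counts of `completed`, and its distribution compared with the chi-square reference distribution.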
3.
In animal digestibility studies, the proportion of food degraded over time has usually been modeled as a normal random variable whose mean is a function of time and of three parameters: the proportion of food degraded almost instantaneously, the remaining proportion of food to be degraded, and the velocity of degradation. The estimation of these parameters has been carried out mainly from a frequentist viewpoint, using the asymptotic distribution of the maximum likelihood estimator. This may give inadmissible estimates, such as values outside the range of the parameters. This drawback cannot arise if a Bayesian approach is adopted. In this article an objective Bayesian analysis is developed and illustrated on real and simulated data.
4.
Following Yang (1988), a simple, more self-contained derivation of the asymptotic normality of the bootstrap mean is presented, as well as other asymptotic results. The derivations are appropriate for beginning graduate students in statistics, relying only on fundamental notions of probability theory and analysis.
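The result can be checked empirically in a classroom-style simulation: even for a skewed parent sample, the standardized bootstrap means sqrt(n) * (xbar* - xbar) / s are approximately standard normal. The exponential parent, sample size, and number of resamples below are arbitrary choices, not from the paper.

```python
import math
import random
import statistics

rng = random.Random(42)
n = 200
x = [rng.expovariate(1.0) for _ in range(n)]  # skewed parent sample
xbar = statistics.fmean(x)
s = statistics.stdev(x)

# Standardized bootstrap means: should look roughly N(0, 1).
z = [
    math.sqrt(n) * (statistics.fmean(rng.choices(x, k=n)) - xbar) / s
    for _ in range(3000)
]
# The empirical mean of z is close to 0 and its standard deviation
# close to 1, in line with the asymptotic normality result.
```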
5.
Using a direct resampling process, a Bayesian approach is developed for the analysis of the shift-point problem. In many problems it is straightforward to isolate the marginal posterior distribution of the shift-point parameter and the conditional distribution of some of the parameters given the shift point and the other remaining parameters. When this is possible, a direct sampling approach is easily implemented whereby standard random number generators can be used to generate samples from the joint posterior distribution of all the parameters in the model. This technique is illustrated with examples involving one shift for Poisson processes and regression models.
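For the Poisson case, the direct (non-MCMC) sampling idea can be sketched as follows: with conjugate Gamma(a, b) priors on the two rates, the marginal posterior of the shift point k is available in closed form, and given k the rates have Gamma posteriors. The priors, the toy series, and the function name are illustrative assumptions.

```python
import math
import random

def sample_poisson_shiftpoint(counts, a=1.0, b=1.0, n_draws=2000, seed=0):
    """Direct sampling from the joint posterior of one shift point k
    (first k observations have rate lam1, the rest lam2) under
    independent Gamma(a, b) priors on each rate.  A sketch of the
    direct-resampling approach for the Poisson example."""
    rng = random.Random(seed)
    n = len(counts)
    cum = [0]
    for c in counts:
        cum.append(cum[-1] + c)
    # Log marginal posterior of k = 1, ..., n-1 (conjugacy integrates
    # the two rates out analytically).
    logp = []
    for k in range(1, n):
        s1, s2 = cum[k], cum[n] - cum[k]
        logp.append(
            math.lgamma(a + s1) - (a + s1) * math.log(b + k)
            + math.lgamma(a + s2) - (a + s2) * math.log(b + n - k)
        )
    m = max(logp)
    w = [math.exp(v - m) for v in logp]
    draws = []
    for _ in range(n_draws):
        k = rng.choices(range(1, n), weights=w, k=1)[0]
        lam1 = rng.gammavariate(a + cum[k], 1.0 / (b + k))
        lam2 = rng.gammavariate(a + cum[n] - cum[k], 1.0 / (b + n - k))
        draws.append((k, lam1, lam2))
    return draws

# Toy series whose rate drops after the first 10 observations.
counts = [6, 5, 7, 6, 8, 5, 7, 6, 5, 7, 1, 2, 1, 0, 2, 1, 1, 2, 0, 1]
draws = sample_poisson_shiftpoint(counts)
```

Because k is drawn from its exact marginal and the rates from their exact conditionals, every draw is an independent sample from the joint posterior, with no Markov chain convergence issues.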
6.
Software packages usually report the results of statistical tests using p-values. Users often interpret these values by comparing them with standard thresholds, for example, 0.1, 1, and 5%, which is sometimes reinforced by a star rating (***, **, and *, respectively). We consider an arbitrary statistical test whose p-value p is not available explicitly, but can be approximated by Monte Carlo samples, for example, by bootstrap or permutation tests. The standard implementation of such tests usually draws a fixed number of samples to approximate p. However, the probability that the exact and the approximated p-value lie on different sides of a threshold (the resampling risk) can be high, particularly for p-values close to a threshold. We present a method to overcome this. We consider a finite set of user-specified intervals that cover [0, 1] and that can be overlapping. We call these p-value buckets. We present algorithms that, with arbitrarily high probability, return a p-value bucket containing p. We prove that for both a bounded resampling risk and a finite runtime, overlapping buckets need to be employed, and that our methods both bound the resampling risk and guarantee a finite runtime for such overlapping buckets. To interpret decisions with overlapping buckets, we propose an extension of the star rating system. We demonstrate that our methods are suitable for use in standard software, including for low p-value thresholds occurring in multiple testing settings, and that they can be computationally more efficient than standard implementations.
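The "standard implementation" that the bucket methods improve upon is a fixed-sample-size Monte Carlo approximation, sketched below for a permutation test of a difference in means. The data and function name are hypothetical; the bucket algorithms themselves are not reproduced here.

```python
import random
import statistics

def mc_permutation_pvalue(x, y, n_perm=999, seed=0):
    """Fixed-sample-size Monte Carlo approximation of a permutation-test
    p-value for |mean(x) - mean(y)|.  Near a threshold, the approximated
    and exact p-values can land on different sides -- the resampling risk
    discussed above."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    observed = abs(statistics.fmean(x) - statistics.fmean(y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d = abs(statistics.fmean(pooled[:len(x)])
                - statistics.fmean(pooled[len(x):]))
        if d >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one rule keeps p > 0

x = [2.1, 2.5, 1.9, 2.8, 2.3]
y = [3.0, 3.4, 2.9, 3.6, 3.1]
p = mc_permutation_pvalue(x, y)
```

Because `n_perm` is fixed in advance, the standard error of the estimate does not adapt to how close p is to a threshold, which is precisely what the sequential bucket algorithms address.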
7.
Categorical longitudinal data arise frequently in a variety of fields and are commonly fitted by generalized linear mixed models (GLMMs) and generalized estimating equations models. The cumulative logit is a useful link function for problems involving repeated ordinal responses. To check the adequacy of GLMMs with the cumulative logit link function, two goodness-of-fit tests are proposed, constructed from the unweighted sum of squared model residuals using numerical integration and the bootstrap resampling technique. The empirical type I error rates and powers of the proposed tests are examined by simulation studies. Ordinal longitudinal studies are used to illustrate the application of the two proposed tests.
8.
Nonresponse is a very common phenomenon in survey sampling. Nonignorable nonresponse – that is, a response mechanism that depends on the values of the variable having nonresponse – is the most difficult type of nonresponse to handle. This article develops a robust estimation approach to estimating equations (EEs) by incorporating the modelling of nonignorably missing data, the generalized method of moments (GMM), and the imputation of EEs via the observed data rather than the imputed missing values when some responses are subject to nonignorable missingness. Based on a particular semiparametric logistic model for nonignorable missing responses, this paper proposes modified EEs to calculate the conditional expectation under nonignorable missingness. The GMM can then be applied to infer the parameters. The advantage of this method is that it replaces non-parametric kernel smoothing with a parametric sampling importance resampling (SIR) procedure, avoiding the problems kernel smoothing faces with high-dimensional covariates. Simulations show the proposed method to be more robust than some current approaches.
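The generic SIR step mentioned above can be sketched as: draw from a tractable parametric proposal, weight each draw by the ratio of target to proposal density, then resample with replacement in proportion to the weights. This is a textbook SIR sketch under assumed toy densities, not the paper's estimator.

```python
import math
import random

def sir_resample(proposal_draws, log_target, log_proposal, n_keep, seed=0):
    """Sampling importance resampling (SIR): weight proposal draws by
    target/proposal (computed in log space for stability) and resample
    with replacement.  Normalizing constants cancel in the weights."""
    rng = random.Random(seed)
    logw = [log_target(v) - log_proposal(v) for v in proposal_draws]
    m = max(logw)
    w = [math.exp(v - m) for v in logw]
    return rng.choices(proposal_draws, weights=w, k=n_keep)

def log_t(v):  # unnormalized log-density of the N(0, 1) target
    return -0.5 * v * v

def log_q(v):  # unnormalized log-density of the N(0, 3^2) proposal
    return -0.5 * (v / 3.0) ** 2

# Toy check: a wide normal proposal targeting a standard normal.
rng = random.Random(1)
proposal = [rng.gauss(0.0, 3.0) for _ in range(20000)]
kept = sir_resample(proposal, log_t, log_q, n_keep=5000)
```

The resampled values behave approximately like draws from the target, which is why SIR can stand in for kernel smoothing when evaluating conditional expectations.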
9.
10.
This paper presents parametric bootstrap (PB) approaches for hypothesis testing and interval estimation of the fixed effects and the variance component in growth curve models with intraclass correlation structure. The PB pivot variables are proposed based on the sufficient statistics of the parameters. Some simulation results are presented to compare the performance of the proposed approaches with the generalized inferences. Our studies show that the PB approaches perform satisfactorily for various cell sizes and parameter configurations, and tend to outperform the generalized inferences with respect to coverage probabilities and powers. The PB approaches not only have almost exact coverage probabilities and Type I error rates, but also have shorter expected lengths and higher powers. Furthermore, the PB procedure can be carried out simply in a few simulation steps. Finally, the proposed approaches are illustrated by using a real data example.
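The PB pivot idea can be illustrated in a far simpler one-parameter setting than the growth curve model: build a pivot from the sufficient statistics (sample mean, sample SD) of a fitted normal model, simulate its distribution from that fitted model, and invert the quantiles into an interval. The data and function name are illustrative assumptions.

```python
import random
import statistics

def pb_mean_interval(x, n_boot=4000, alpha=0.05, seed=0):
    """Parametric-bootstrap interval for a normal mean.  The pivot
    (mb - xbar) / (sb / sqrt(n)) is resampled from the fitted model
    N(xbar, s^2), and its quantiles are inverted into bounds on the
    mean -- a minimal sketch of the PB pivot construction."""
    rng = random.Random(seed)
    n = len(x)
    xbar, s = statistics.fmean(x), statistics.stdev(x)
    pivots = []
    for _ in range(n_boot):
        xb = [rng.gauss(xbar, s) for _ in range(n)]
        mb, sb = statistics.fmean(xb), statistics.stdev(xb)
        pivots.append((mb - xbar) / (sb / n ** 0.5))
    pivots.sort()
    lo_q = pivots[int((alpha / 2) * n_boot)]
    hi_q = pivots[int((1 - alpha / 2) * n_boot) - 1]
    return xbar - hi_q * s / n ** 0.5, xbar - lo_q * s / n ** 0.5

x = [9.7, 10.4, 10.1, 9.5, 10.8, 10.2, 9.9, 10.6]
lo, hi = pb_mean_interval(x)
```

In this toy case the PB pivot reproduces the t-interval; the paper's contribution is constructing analogous pivots for the fixed effects and variance component of the growth curve model, where no such exact pivot is obvious.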
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号