Similar Documents

20 similar documents found.
1.
ABSTRACT

In this paper, we consider the problem of constructing nonparametric confidence intervals for the mean of a positively skewed distribution. We suggest calibrated, smoothed bootstrap upper and lower percentile confidence intervals. For the theoretical properties, we show that the proposed one-sided confidence intervals have coverage probability α + O(n^(−3/2)). This is an improvement upon the traditional bootstrap confidence intervals in terms of coverage probability. A smoothed version of the approach is also considered for constructing a two-sided confidence interval, and its theoretical properties are also studied. A simulation study is performed to illustrate the performance of our confidence interval methods. We then apply the methods to a real data set.
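For orientation, a minimal sketch of a plain (uncalibrated, unsmoothed) percentile bootstrap confidence interval for the mean of a skewed sample — the baseline that the calibrated, smoothed intervals above refine. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def percentile_bootstrap_ci(x, alpha=0.05, n_boot=5000, seed=None):
    """Plain percentile bootstrap CI for the mean (no calibration or smoothing)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    # Resample with replacement and record the mean of each resample.
    boot_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                           for _ in range(n_boot)])
    # The percentile interval takes the alpha/2 and 1 - alpha/2 quantiles.
    return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])

# Example with a positively skewed (lognormal) sample.
sample = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.0, size=50)
print(percentile_bootstrap_ci(sample, seed=2))
```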

2.
Abstract

In this paper, we propose maximum entropy in the mean methods for propensity score matching classification problems. We provide a new methodological approach and estimation algorithms to handle explicitly the cases in which data are available (i) in interval form, (ii) with bounded measurement or observational errors, or (iii) both as intervals and with bounded errors. We show that the entropy in the mean methods for these three cases generally outperform benchmark error-free approaches.

3.
ABSTRACT

We consider point and interval estimation of the unknown parameters of a generalized inverted exponential distribution in the presence of hybrid censoring. The maximum likelihood estimates are obtained using the EM algorithm. We then compute the Fisher information matrix using the missing value principle. Bayes estimates are derived under squared error and general entropy loss functions. Furthermore, approximate Bayes estimates are obtained using the Tierney and Kadane method as well as an importance sampling approach. Asymptotic and highest posterior density intervals are also constructed. The proposed estimates are compared numerically using Monte Carlo simulations, and a real data set is analyzed for illustrative purposes.

4.
The confidence interval is a basic type of interval estimate in statistics. When dealing with samples from a normal population with unknown mean and variance, the traditional method of constructing a t-based confidence interval for the mean is to treat the n sampled units as n groups of size one and build the interval from them. Here we propose a generalized method: we first divide the units into several equal-sized groups and then compute the confidence interval from the means of these groups. If we define “better” in terms of the expected length of the confidence interval, the first method is better, because its expected length is shorter; we prove this intuition theoretically. We also show that when the elements within each group are correlated, the first method is invalid, while the second still gives correct results in terms of coverage probability, and we illustrate this with analytical expressions. In practice, when the data set is extremely large and distributed across several data centers, the second method is a good tool for obtaining confidence intervals in both the independent and the correlated cases. Simulations and real data analyses are presented to verify our theoretical results.
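A minimal sketch contrasting the two constructions described above — a t-interval built from all n observations versus one built from the means of k equal-sized groups — under the assumption that n is divisible by k; names are illustrative.

```python
import numpy as np
from scipy import stats

def t_interval(values, alpha=0.05):
    """Standard t-based confidence interval for the mean of `values`."""
    values = np.asarray(values)
    m = values.size
    center = values.mean()
    half = stats.t.ppf(1 - alpha / 2, df=m - 1) * values.std(ddof=1) / np.sqrt(m)
    return center - half, center + half

def grouped_t_interval(x, k, alpha=0.05):
    """t-interval computed from the means of k equal-sized groups (len(x) % k == 0)."""
    group_means = np.asarray(x).reshape(k, -1).mean(axis=1)
    return t_interval(group_means, alpha)

x = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=120)
print(t_interval(x))              # method 1: all 120 observations
print(grouped_t_interval(x, 12))  # method 2: 12 group means, longer on average
```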

5.
Abstract

The standard method of obtaining a two-sided confidence interval for the Poisson mean produces an interval that is exact but can be shortened without violating the minimum coverage requirement. We classify the intervals proposed as alternatives to the standard-method interval, carrying out the classification using two desirable properties of two-sided confidence intervals.
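For reference, a minimal sketch of the standard exact (Garwood-type) two-sided interval for a Poisson mean, computed from chi-square quantiles; this is the "standard method" interval that the alternatives discussed above attempt to shorten.

```python
from scipy import stats

def exact_poisson_ci(x, alpha=0.05):
    """Standard exact two-sided confidence interval for a Poisson mean, given count x."""
    lower = 0.0 if x == 0 else stats.chi2.ppf(alpha / 2, df=2 * x) / 2
    upper = stats.chi2.ppf(1 - alpha / 2, df=2 * (x + 1)) / 2
    return lower, upper

print(exact_poisson_ci(7))  # interval for an observed count of 7
```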

6.
ABSTRACT

The correlation coefficient (CC) is a standard measure of a possible linear association between two continuous random variables, and it plays a significant role in many scientific disciplines. For a bivariate normal distribution, there are many types of confidence intervals for the CC, such as z-transformation and maximum likelihood-based intervals. However, when the underlying bivariate distribution is unknown, the construction of confidence intervals for the CC is not well developed. In this paper, we discuss various interval estimation methods for the CC. We propose a generalized confidence interval for the CC when the underlying bivariate distribution is normal, and two empirical likelihood-based intervals for the CC when the underlying bivariate distribution is unknown. We also conduct extensive simulation studies to compare the new intervals with existing intervals in terms of coverage probability and interval length. Finally, two real examples are used to demonstrate the application of the proposed methods.
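A minimal sketch of the classical Fisher z-transformation interval mentioned above, which is appropriate under bivariate normality; the generalized and empirical likelihood intervals proposed in the paper are not reproduced here.

```python
import numpy as np
from scipy import stats

def fisher_z_ci(x, y, alpha=0.05):
    """Fisher z-transformation confidence interval for the correlation coefficient."""
    r = np.corrcoef(x, y)[0, 1]
    n = len(x)
    z = np.arctanh(r)              # Fisher z-transform of the sample correlation
    se = 1.0 / np.sqrt(n - 3)      # approximate standard error on the z scale
    zcrit = stats.norm.ppf(1 - alpha / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 0.6 * x + rng.normal(scale=0.8, size=100)
print(fisher_z_ci(x, y))
```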

7.
ABSTRACT

We consider the use of modern likelihood asymptotics in the construction of confidence intervals for the parameter that determines the skewness of the distribution of the maximum/minimum of an exchangeable bivariate normal random vector. Simulation studies were conducted to investigate the accuracy of the proposed methods and to compare them to available alternatives. Accuracy is evaluated in terms of both coverage probability and expected length of the interval. We furthermore illustrate the suitability of our proposals by means of two data sets, consisting of, respectively, measurements taken on the brains of 10 monozygotic twins and measurements of the mineral content of bones in the dominant and non-dominant arms of 25 elderly women.

8.
ABSTRACT

We consider a model consisting of two fluid queues driven by the same background continuous-time Markov chain, such that the rates of change of the fluid in the second queue depend on whether the first queue is empty or not: when the first queue is nonempty, the content of the second queue increases, and when the first queue is empty, the content of the second queue decreases.

We analyze the stationary distribution of this tandem model using operator-analytic methods. The various densities (or Laplace–Stieltjes transforms thereof) and probability masses involved in this stationary distribution are expressed in terms of the stationary distribution of an embedded process. To find the latter from the (known) transition kernel, we propose a numerical procedure based on discretization and truncation. For several examples we show that the method works well, although its performance is clearly affected by the quality of these approximations, in terms of both accuracy and run time.

9.
ABSTRACT

Given a sample from a finite population, we provide a nonparametric Bayesian prediction interval for a finite population mean when a standard normality assumption may be tenuous. We do so using a Dirichlet process (DP), a nonparametric Bayesian procedure that is currently receiving much attention. An asymptotic Bayesian prediction interval is well known, but it does not incorporate all the features of the DP. We show how to compute the exact prediction interval under the full Bayesian DP model. However, under the DP, when the population size is much larger than the sample size, the computational task becomes expensive, so for simplicity one might still want useful and accurate approximations to the prediction interval. For this purpose, we provide a Bayesian procedure that approximates the distribution using the exchangeability property (correlation) of the DP together with normality. We compare the exact interval and our approximate interval with three standard intervals, namely the design-based interval under simple random sampling, an empirical Bayes interval, and a moment-based interval that uses the mean and variance under the DP; these latter three intervals do not fully utilize the posterior distribution of the finite population mean under the DP. Using several numerical examples and a simulation study, we show that our approximate Bayesian interval is a good competitor to the exact Bayesian interval for different combinations of sample sizes and population sizes.

10.
ABSTRACT

We consider asymptotic and resampling-based interval estimation procedures for the stress-strength reliability P(X < Y). We develop and study several types of intervals. Their performance is investigated using simulation techniques and compared in terms of attainment of the nominal confidence level, symmetry of lower and upper error rates, and expected length. Recommendations concerning their use are given.
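As a simple baseline, a minimal sketch that estimates P(X < Y) nonparametrically by the proportion of (x, y) pairs with x < y and attaches a plain percentile bootstrap interval; the asymptotic and resampling intervals studied in the paper are more refined. Names are illustrative.

```python
import numpy as np

def reliability_estimate(x, y):
    """Nonparametric estimate of P(X < Y): proportion of pairs (x_i, y_j) with x_i < y_j."""
    x, y = np.asarray(x), np.asarray(y)
    return np.mean(x[:, None] < y[None, :])

def reliability_bootstrap_ci(x, y, alpha=0.05, n_boot=2000, seed=None):
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    boot = [reliability_estimate(rng.choice(x, x.size, replace=True),
                                 rng.choice(y, y.size, replace=True))
            for _ in range(n_boot)]
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(3)
x = rng.normal(loc=0.0, size=40)  # stress
y = rng.normal(loc=1.0, size=40)  # strength
print(reliability_estimate(x, y), reliability_bootstrap_ci(x, y, seed=4))
```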

11.
ABSTRACT

The information theory literature contains many well-known measures of entropy suitable for entropy optimization principles, with applications in different disciplines of science and technology. The objective of this article is to develop a new generalized measure of entropy and to establish the relation between entropy and queueing theory. To this end, we make use of the maximum entropy principle, which provides the most uncertain probability distribution subject to constraints expressed by mean values.
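As a worked illustration of the maximum entropy principle invoked above (the standard textbook case, not the article's new generalized measure): maximizing Shannon entropy over distributions on the non-negative integers, subject only to a prescribed mean queue length L,

```latex
\max_{p_n \ge 0}\; -\sum_{n=0}^{\infty} p_n \log p_n
\quad\text{subject to}\quad
\sum_{n=0}^{\infty} p_n = 1, \qquad \sum_{n=0}^{\infty} n\,p_n = L,
```

yields the geometric solution p_n = (1 − ρ)ρ^n with ρ = L/(1 + L), which coincides with the stationary queue-length distribution of the M/M/1 queue. This is the classical link between maximum entropy and queueing theory that the article builds on.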

12.
Abstract

In survival or reliability data analysis, it is often useful to estimate quantiles of the lifetime distribution, such as the median time to failure. Different nonparametric methods can be used to construct confidence intervals for the quantiles of the lifetime distribution, some of which are implemented in commonly used statistical software packages. Here we investigate the performance of different interval estimation procedures under a variety of settings with different censoring schemes. Our main objectives in this paper are to (i) evaluate the performance of confidence intervals based on the transformation approach commonly used in statistical software, (ii) introduce a new density-estimation-based approach to obtain confidence intervals for survival quantiles, and (iii) compare it with the transformation approach. We provide a comprehensive comparative study and offer some practical recommendations based on our results. Numerical examples are presented to illustrate the methodologies developed.

13.
Relative potency estimation in both multiple parallel-line and slope-ratio assays involves the construction of simultaneous confidence intervals for ratios of linear combinations of general linear model parameters. The key problem is determining multiplicity-adjusted percentage points of a multivariate t-distribution whose correlation matrix R depends on the unknown relative potency parameters. Several methods have been proposed in the literature for dealing with R. In this article, we introduce a method based on an estimate of R (also called the plug-in approach) and compare it with various methods, including conservative procedures based on probability inequalities. Attention is restricted to parallel-line assays, although the theory is applicable to any ratios of coefficients in the general linear model. Extension of the plug-in method to linear mixed effects models is also discussed. The methods are compared with respect to their simultaneous coverage probabilities via Monte Carlo simulations. We also evaluate the methods in terms of confidence interval width through an application to data from a multiple parallel-line assay.

14.
Group testing has been used in many fields of study to estimate proportions. When groups are of different sizes, the derivation of exact confidence intervals is complicated by the lack of a unique ordering of the event space. An exact interval estimation method is described here, in which outcomes are ordered according to a likelihood ratio statistic. The method is compared with another exact method, in which outcomes are ordered by their associated MLE. Plots of the P-value against the proportion are useful in examining the properties of the methods. Coverage provided by the intervals is assessed using several realistic group-testing procedures. The method based on the likelihood ratio, with a mid-P correction, is shown to give very good coverage in terms of closeness to the nominal level, and is recommended for this type of problem.
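A minimal sketch of the underlying likelihood when group sizes differ: a group of size s tests positive with probability 1 − (1 − p)^s, and the MLE of the individual-level proportion p maximizes the product of these group-level probabilities. The exact interval constructions and sample-space orderings compared above are not reproduced; names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def group_testing_mle(sizes, positives):
    """MLE of the proportion p from group-testing data with unequal group sizes.

    sizes[i]     -- number of units pooled in group i
    positives[i] -- 1 if group i tested positive, 0 otherwise
    """
    sizes = np.asarray(sizes, dtype=float)
    positives = np.asarray(positives, dtype=float)

    def neg_log_lik(p):
        q = (1.0 - p) ** sizes  # probability that a group of each size tests negative
        return -np.sum(positives * np.log1p(-q) + (1.0 - positives) * np.log(q))

    res = minimize_scalar(neg_log_lik, bounds=(1e-8, 1 - 1e-8), method="bounded")
    return res.x

print(group_testing_mle(sizes=[5, 5, 10, 10, 20], positives=[0, 1, 0, 1, 1]))
```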

15.

We propose a semiparametric framework based on sliced inverse regression (SIR) to address the issue of variable selection in functional regression. SIR is an effective method for dimension reduction which computes a linear projection of the predictors onto a low-dimensional space without loss of information about the regression. To deal with the high dimensionality of the predictors, we consider penalized versions of SIR: ridge and sparse. We extend the variable selection approaches developed for multidimensional SIR to select intervals that form a partition of the definition domain of the functional predictors. Selecting entire intervals rather than separate evaluation points improves the interpretability of the estimated coefficients in the functional framework. A fully automated iterative procedure is proposed to find the critical (interpretable) intervals. The approach is shown to be efficient on simulated and real data. The method is implemented in the R package SISIR, available on CRAN at https://cran.r-project.org/package=SISIR.


16.
Abstract

The method of tail functions is applied to confidence estimation of the exponential mean in the presence of prior information. It is shown how the “ordinary” confidence interval can be generalized using a class of tail functions and then engineered for optimality, in the sense of minimizing prior expected length over that class, whilst preserving frequentist coverage. It is also shown how to derive the globally optimal interval, and how to improve on this using tail functions when criteria other than length are taken into consideration. Probabilities of false coverage are reported for some of the intervals under study, and the theory is illustrated by application to confidence estimation of a reliability coefficient based on some survival data.

17.
Approximate confidence intervals are given for the lognormal regression problem. The error in the nominal level can be reduced to O(n^(−2)), where n is the sample size. An alternative procedure is given which avoids the non-robust assumption of lognormality. This amounts to finding a confidence interval, based on M-estimates, for a general smooth function of both the parameters of the general (possibly nonlinear) regression problem and the unknown distribution function F of the residuals. The derived intervals are compared using theory, simulation, and real data sets.
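For the simplest special case (a lognormal sample with no covariates), a minimal sketch of Cox's approximate interval for the lognormal mean exp(μ + σ²/2), formed on the log scale and exponentiated; the higher-order corrections and M-estimate-based intervals developed in the paper are not shown here.

```python
import numpy as np
from scipy import stats

def cox_lognormal_mean_ci(x, alpha=0.05):
    """Cox's approximate confidence interval for the mean of a lognormal sample."""
    y = np.log(np.asarray(x))      # work on the log scale
    n = y.size
    ybar, s2 = y.mean(), y.var(ddof=1)
    point = ybar + s2 / 2          # log of the lognormal mean
    se = np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    z = stats.norm.ppf(1 - alpha / 2)
    return np.exp(point - z * se), np.exp(point + z * se)

x = np.random.default_rng(5).lognormal(mean=1.0, sigma=0.5, size=60)
print(cox_lognormal_mean_ci(x))
```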

18.
Abstract

In this paper, we present a flexible mechanism for constructing probability distributions on bounded intervals, based on the composition of a baseline cumulative distribution function and the quantile transformation from another cumulative distribution. In particular, we are interested in the (0, 1) interval. The resulting composite quantile family of probability distributions contains many models that have been proposed in the recent literature, and new probability distributions on the unit interval are introduced. The proposed methodology is illustrated with two examples analyzing a poverty dataset from Peru, from both the Bayesian and the likelihood points of view.
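A minimal sketch of the composition idea described above: given a baseline CDF F on the real line and the quantile function Q of another distribution, H(u) = F(Q(u)) defines a CDF on (0, 1). The specific families and the Peruvian poverty analysis from the paper are not reproduced; the normal/logistic pairing below is purely illustrative.

```python
import numpy as np
from scipy import stats

def composite_cdf(u, baseline=stats.norm, quantile_dist=stats.logistic):
    """CDF on (0, 1) built as baseline.cdf(quantile_dist.ppf(u))."""
    u = np.asarray(u, dtype=float)
    return baseline.cdf(quantile_dist.ppf(u))

u = np.linspace(0.01, 0.99, 5)
print(composite_cdf(u))  # increases from near 0 to near 1 across the unit interval
```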

19.
ABSTRACT

In this paper, the stress-strength reliability, R, is estimated from type II censored samples from Pareto distributions. The classical inference includes obtaining the maximum likelihood estimator, an exact confidence interval, and confidence intervals based on Wald and signed log-likelihood ratio statistics. The Bayesian inference includes obtaining the Bayes estimator, an equi-tailed credible interval, and a highest posterior density (HPD) interval under both informative and non-informative prior distributions. The Bayes estimator of R is obtained using four methods: Lindley's approximation, the Tierney–Kadane method, Monte Carlo integration, and MCMC. We also compare the proposed methods by a simulation study and provide a real example to illustrate them.

20.
Guogen Shan. Statistics, 2018, 52(5): 1086–1095.
In addition to a point estimate for the probability of response in a two-stage design (e.g. Simon's two-stage design for binary endpoints), confidence limits should be computed and reported. The current method of inverting the p-value function to compute the confidence interval does not guarantee the coverage probability in a two-stage setting. The existing exact approach to calculating one-sided limits orders the sample space by the overall number of responses; this approach can be conservative because many sample points share the same limits. We propose a new exact one-sided interval that uses the p-value to order the sample space. Exact intervals are computed using binomial distributions directly, instead of a normal approximation. Both exact intervals preserve the nominal confidence level. The proposed exact interval based on the p-value generally performs better than the other exact interval with regard to expected length and simple average length of confidence intervals.
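For a single-stage baseline, a minimal sketch of the exact (Clopper–Pearson-type) one-sided lower confidence limit for a binomial response probability, computed directly from the binomial distribution via a beta quantile; the two-stage intervals discussed above differ mainly in how the sample space is ordered.

```python
from scipy import stats

def exact_lower_limit(successes, n, alpha=0.05):
    """Exact one-sided lower confidence limit for a binomial proportion."""
    if successes == 0:
        return 0.0
    # Clopper-Pearson lower bound via the beta quantile.
    return stats.beta.ppf(alpha, successes, n - successes + 1)

print(exact_lower_limit(successes=8, n=20))  # e.g. 8 responses among 20 patients
```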

