Similar Articles
1.
The data are n independent random binomial events, each resulting in success or failure. The event outcomes are believed to be trials from a binomial distribution with success probability p, and tests of p = 1/2 are desired. However, there is the possibility that some unidentified event has a success probability different from the common value p for the other n-1 events. Then, tests of whether this common p equals 1/2 are desired. Fortunately, two-sided tests can be obtained that are simultaneously applicable for both situations. That is, the significance level for a test is the same when one event has a different probability as when all events have the same probability. These tests are the usual equal-tail tests for p = 1/2 (based on n independent trials from a binomial distribution).
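The equal-tail test the abstract refers to is the standard two-sided binomial test of p = 1/2 and can be sketched in a few lines (the robustness to one aberrant event is the paper's result, not a property of this code):

```python
from math import comb

def equal_tail_pvalue(successes: int, n: int, p: float = 0.5) -> float:
    """Equal-tail two-sided p-value: twice the smaller of the two
    binomial tail probabilities, capped at 1."""
    lower = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(successes + 1))
    upper = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(successes, n + 1))
    return min(1.0, 2 * min(lower, upper))
```

For example, 0 successes in 10 trials gives a p-value of 2/1024, while 5 successes in 10 trials gives a p-value capped at 1.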

2.
Random samples are assumed for the univariate two-sample problem. Sometimes this assumption may be violated in that an observation in one “sample”, of size m, is from a population different from that yielding the remaining m-1 observations (which are a random sample). Then, the interest is in whether this random sample of size m-1 is from the same population as the other random sample. If such a violation occurs and can be recognized, and also the non-conforming observation can be identified (without imposing conditional effects), then that observation could be removed and a two-sample test applied to the remaining samples. Unfortunately, satisfactory procedures for such a removal do not seem to exist. An alternative approach is to use two-sample tests whose significance levels remain the same when a non-conforming observation occurs, and is removed, as for the case where the samples were both truly random. The equal-tail median test is shown to have this property when the two “samples” are of the same size (and ties do not occur).
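A minimal sketch of the equal-tail median test mentioned above, assuming no ties: the count of first-sample observations above the pooled median is referred to its hypergeometric null distribution.

```python
from math import comb

def median_test_pvalue(x, y):
    """Equal-tail two-sample median test (no ties assumed)."""
    pooled = sorted(x + y)
    n, m = len(pooled), len(x)
    med = pooled[(n - 1) // 2]            # a pooled-sample median
    a = sum(v > med for v in pooled)      # total observations above it
    k = sum(v > med for v in x)           # of which, from the first sample

    def pmf(j):                           # hypergeometric null pmf
        return comb(a, j) * comb(n - a, m - j) / comb(n, m)

    lower = sum(pmf(j) for j in range(0, k + 1))
    upper = sum(pmf(j) for j in range(k, min(a, m) + 1))
    return min(1.0, 2 * min(lower, upper))
```

With fully separated samples of size 3 each, the test gives its smallest attainable two-sided p-value, 2/20 = 0.1, illustrating how coarse the null distribution is for tiny samples.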

3.
Confidence intervals are developed for the location parameter of a continuous, symmetric, unimodal distribution in the case where only a single observation from the distribution is available. These intervals are similar to those given by Abbott and Rosenblatt (1963), but shorter. The result is extended to include distributions which can be standardized to have unit scale. The procedure is exemplified for the normal distribution, and the powers of one- and two-sided significance tests are computed under normality.

4.
In many engineering problems it is necessary to draw statistical inferences on the mean of a lognormal distribution based on a complete sample of observations. Statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. In this paper, simplified conservative methods for calculating confidence intervals or hypothesis tests for the lognormal mean are presented. Here, “conservative” refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis taken over parameter values under the null hypothesis) equals the nominal level. The term “conservative” has obvious implications for confidence intervals (they are “wider” in some sense than their optimum or exact counterparts). Applying the term “conservative” to hypothesis tests should not be confusing if it is remembered that this implies that their equivalent confidence intervals are conservative. No implication of optimality is intended for these conservative procedures. It is emphasized that these are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared to the new method in terms of asymptotic relative efficiency. The new methods are also compared to the optimum methods derived by Land (1971, 1973).
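As a point of reference (not the conservative procedure of this paper), the widely used Cox approximation for a lognormal-mean confidence interval can be sketched as follows; for brevity a normal quantile stands in for the usual t quantile:

```python
from math import exp, log, sqrt
from statistics import NormalDist, mean, variance

def cox_lognormal_mean_ci(data, conf=0.95):
    """Approximate CI for the lognormal mean E[X] = exp(mu + sigma^2/2),
    via Cox's method on the log scale (illustrative sketch only)."""
    logs = [log(v) for v in data]
    n = len(logs)
    m, s2 = mean(logs), variance(logs)        # normal-theory estimates on log scale
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    half = z * sqrt(s2 / n + s2 * s2 / (2 * (n - 1)))
    centre = m + s2 / 2                       # log of the estimated lognormal mean
    return exp(centre - half), exp(centre + half)
```

The interval is built on the log scale and exponentiated, so it is always positive, unlike a naive normal interval applied directly to the raw data.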

5.
Given only a random sample of observations, the usual estimator for the population mean is the sample mean. If additional information is provided it might be possible in some situations to obtain a better estimator. The situation considered here is when the variable whose mean is sought is composed of factors that are themselves observable. In the basic case, the variable can be expressed as the product of two independent, more basic variables, but we also consider the case of more than two factors, the effect of correlation, and the presence of observation costs.
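The basic case can be illustrated with a small simulation: when Z = X·Y with the factors X and Y independent and individually observed, the product of the factor means also estimates E[Z] = E[X]E[Y] and can improve on the plain sample mean of the products. The distributions below are illustrative assumptions only:

```python
import random
from statistics import mean

random.seed(1)
xs = [random.uniform(0, 2) for _ in range(200)]   # observed factor X
ys = [random.uniform(0, 2) for _ in range(200)]   # observed factor Y, independent of X

naive = mean(x * y for x, y in zip(xs, ys))       # ordinary sample mean of Z = X*Y
factored = mean(xs) * mean(ys)                    # exploits the factor structure
# Both estimate E[X]E[Y] = 1; the factored version typically has smaller variance.
```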

6.
A General Multiconsequence Intervention Model class that describes the simultaneous occurrence of a change in the process mean and covariance structure is introduced. When the covariance change is negligible, this model class reduces to intervention models described by Box and Tiao (1975). Maximum Likelihood Estimators for the parameters of the multiconsequence model class are developed for various important modeling situations that result from different a priori information about the form of the mean shift function and the model parameters. As a consequence of these estimation results, an identification procedure for determining an appropriate dynamic mean shift form is suggested. The necessary hypothesis tests and corresponding confidence intervals are also developed.

7.
Read, Robert; Thomas, Lyn; Washburn, Alan. Statistics and Computing (2000) 10(3): 245-252
Consider the random sampling of a discrete population. The observations, as they are collected one by one, are enhanced in that the probability mass associated with each observation is also observed. The goal is to estimate the population mean. Without this extra information about probability mass, the best general purpose estimator is the arithmetic average of the observations, XBAR. The issue is whether or not the extra information can be used to improve on XBAR. This paper examines the issues and offers four new estimators, each with its own strengths and liabilities. Some comparative performances of the four with XBAR are made. The motivating application is a Monte Carlo simulation that proceeds in two stages. The first stage independently samples n characteristics to obtain a configuration of some kind, together with a configuration probability p obtained, if desired, as a product of n individual probabilities. A relatively expensive calculation then determines an output X as a function of the configuration. A random sample of X could simply be averaged to estimate the mean output, but there are possibly more efficient estimators on account of the known configuration probabilities.
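One illustrative way to exploit the observed masses (not necessarily one of the paper's four estimators) is to average only over the distinct values seen, weighting each by its known probability and renormalising by the total mass seen. The toy population below is an assumption for the sketch:

```python
import random

random.seed(7)
support = [1.0, 2.0, 5.0, 10.0]        # hypothetical discrete population
probs = [0.4, 0.3, 0.2, 0.1]
pmf = dict(zip(support, probs))
true_mean = sum(x * p for x, p in pmf.items())   # = 3.0

draws = random.choices(support, weights=probs, k=100)
seen = {x: pmf[x] for x in set(draws)}           # distinct values with their masses

xbar = sum(draws) / len(draws)                   # plain average (XBAR)
weighted = sum(x * p for x, p in seen.items()) / sum(seen.values())
# 'weighted' ignores duplicate draws entirely, using the known masses instead.
```

When every support point has been seen, the weighted estimator recovers the population mean exactly; its liability is bias when rare values are missed, which is the kind of trade-off the paper studies.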

8.
Modified maximum likelihood estimators of the parameters of a multivariate normal distribution are developed when the smallest or largest observations on one of the components are censored. These estimators are used to construct tests for means and correlation coefficients. The robustness of these tests to deviations from normality is investigated.

9.
Consider the usual linear regression model y = x'β + ε, relating a response y to a vector of predictors x. Suppose that n observations on y, together with the corresponding values of x, are available, and it is desired to construct simultaneous prediction intervals for k future values of y at values of x which cannot be ascertained beforehand. In most applications the regression model contains an intercept. This paper presents two sets of prediction intervals appropriate to this case. The proposed intervals are compared with those of Carlstein (1986), and the improvements are illustrated in the case of simple linear regression.
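For orientation, the classical Bonferroni construction of k simultaneous prediction intervals in simple linear regression is sketched below; it is a baseline, not the paper's proposed intervals, and a normal quantile is used in place of the t quantile for brevity:

```python
from math import sqrt
from statistics import NormalDist, mean

def bonferroni_prediction_intervals(x, y, x_new, conf=0.95):
    """Simultaneous prediction intervals for future responses at x_new,
    splitting the error rate across the k intervals (Bonferroni)."""
    n, k = len(x), len(x_new)
    xbar = mean(x)
    sxx = sum((v - xbar) ** 2 for v in x)
    b1 = sum((v - xbar) * w for v, w in zip(x, y)) / sxx   # slope
    b0 = mean(y) - b1 * xbar                               # intercept
    resid = [w - (b0 + b1 * v) for v, w in zip(x, y)]
    s = sqrt(sum(r * r for r in resid) / (n - 2))          # residual SD
    z = NormalDist().inv_cdf(1 - (1 - conf) / (2 * k))     # Bonferroni quantile
    intervals = []
    for x0 in x_new:
        fit = b0 + b1 * x0
        se = s * sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)
        intervals.append((fit - z * se, fit + z * se))
    return intervals
```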

10.
In a number of situations only observations that exceed or only those that fall below the current extreme value are recorded. Examples include meteorology, hydrology, athletic events and mining. Industrial stress testing is another example, in which only items that are weaker than all previously observed items are destroyed. In this paper, it is shown how record values can be used to provide distribution-free confidence intervals for population quantiles and tolerance intervals. We provide some tables that help one choose the appropriate record values and present a numerical example. Also, universal upper bounds for the expectation of the length of the confidence intervals are derived. The results may be of interest in situations where only record values are stored.
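The data structure in question is easy to extract: in the stress-testing example only lower records are observed, i.e., each item weaker than everything seen before. A minimal sketch:

```python
def lower_records(seq):
    """Successive lower record values: entries strictly below all
    earlier entries (the first observation is always a record)."""
    records, best = [], float("inf")
    for v in seq:
        if v < best:
            records.append(v)
            best = v
    return records
```

For instance, `lower_records([5, 7, 3, 4, 2, 6, 1])` returns `[5, 3, 2, 1]`; upper records are obtained symmetrically.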

11.
In sample surveys and many other areas of application, the ratio of variables is often of great importance. This often occurs when one variable is available at the population level while another variable of interest is available for sample data only. In this case, using the sample ratio, we can often gather valuable information on the variable of interest for the unsampled observations. In many other studies, the ratio itself is of interest, for example when estimating proportions from a random number of observations. In this note we compare three confidence intervals for the population ratio: a large-sample interval, a log-based version of the large-sample interval, and Fieller’s interval. This is done through data analysis and through a small simulation experiment. The Fieller method has often been proposed as a superior interval for small sample sizes. We show through a data example and simulation experiments that Fieller’s method often gives nonsensical and uninformative intervals when the observations are noisy relative to the mean of the data. The large-sample interval does not similarly suffer and thus can be a more reliable method for small and large samples.
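The large-sample (delta-method) interval for a ratio of means can be sketched as follows for two independent samples; the log-based and Fieller variants the note compares are omitted:

```python
from math import sqrt
from statistics import NormalDist, mean, variance

def ratio_ci_large_sample(y, x, conf=0.95):
    """Delta-method CI for E[Y]/E[X] from independent samples y and x
    (sketch; assumes mean(x) is well away from zero)."""
    r = mean(y) / mean(x)
    se = sqrt(variance(y) / len(y) + r * r * variance(x) / len(x)) / abs(mean(x))
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return r - z * se, r + z * se
```

The interval degenerates exactly when the denominator mean is near zero relative to its noise, which is the regime where Fieller's interval becomes unbounded or empty.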

12.
The paper develops empirical Bayes (EB) confidence intervals for population means with distributions belonging to the natural exponential family-quadratic variance function (NEF-QVF) family when the sample size for a particular population is moderate or large. The basis for such development is to find an interval centred around the posterior mean which meets the target coverage probability asymptotically, and then show that the difference between the coverage probabilities of the Bayes and EB intervals is negligible up to a certain order. The approach taken is Edgeworth expansion so that the sample sizes from the different populations need not be significantly large. The proposed intervals meet the target coverage probabilities asymptotically, and are easy to construct. We illustrate use of these intervals in the context of small area estimation both through real and simulated data. The proposed intervals are different from the bootstrap intervals. The latter can be applied quite generally, but the order of accuracy of these intervals in meeting the desired coverage probability is unknown.

13.
The comparison of two treatments with normally distributed data is considered. Inferences are based upon the difference between single potential future observations from each of the two treatments, which provides a useful and easily interpretable assessment of the difference between the two treatments. These methodologies combine information from a standard confidence interval analysis of the difference between the two treatment means with information available from standard prediction intervals of future observations. Win-probabilities, which are the probabilities that a future observation from one treatment will be superior to a future observation from the other treatment, are a special case of these methodologies. The theoretical derivation of these methodologies is based upon inferences about the non-centrality parameter of a non-central t-distribution. Equal and unequal variance situations are addressed, and extensions to groups of future observations from the two treatments are also considered. Some examples and discussions of the methodologies are presented.
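The win-probability itself has a simple plug-in estimate under normality, sketched below; the paper's contribution is the surrounding interval machinery via the non-central t-distribution, not this formula:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def win_probability(a, b):
    """Plug-in estimate of P(future observation from treatment A exceeds
    a future observation from treatment B), assuming normal responses
    with independent future draws."""
    diff = mean(a) - mean(b)
    scale = sqrt(stdev(a) ** 2 + stdev(b) ** 2)
    return NormalDist().cdf(diff / scale)
```

Identical samples give a win-probability of 1/2, and the estimate approaches 1 as the treatment means separate relative to the combined spread.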

14.
In longitudinal surveys where a number of observations have to be made on the same sampling unit at specified time intervals, it is not uncommon for observations at some time stages to be missing for some of the sampled units. In the present investigation, an estimation procedure for estimating the population total based on such incomplete data from multiple observations is suggested which makes use of all the available information and is seen to be more efficient than the one based only on completely observed units. Estimators are also proposed for two other situations: firstly, when data are collected for only a sample of time stages, and secondly, when data are observed for only one time stage per sampled unit.

15.
Exact confidence interval estimation for accelerated life regression models with censored smallest extreme value (or Weibull) data is often impractical. This paper evaluates the accuracy of approximate confidence intervals based on the asymptotic normality of the maximum likelihood estimator, the asymptotic χ² distribution of the likelihood ratio statistic, mean and variance correction to the likelihood ratio statistic, and the so-called Bartlett correction to the likelihood ratio statistic. The Monte Carlo evaluations under various degrees of time censoring show that uncorrected likelihood ratio intervals are very accurate in situations with heavy censoring. The benefits of mean and variance correction to the likelihood ratio statistic are only realized with light or no censoring. Bartlett correction tends to result in conservative intervals. Intervals based on the asymptotic normality of maximum likelihood estimators are anticonservative and should be used with much caution.

16.
In this paper, we provide a method for constructing confidence interval for accuracy in correlated observations, where one sample of patients is being rated by two or more diagnostic tests. Confidence intervals for other measures of diagnostic tests, such as sensitivity, specificity, positive predictive value, and negative predictive value, have already been developed for clustered or correlated observations using the generalized estimating equations (GEE) method. Here, we use the GEE and delta‐method to construct confidence intervals for accuracy, the proportion of patients who are correctly classified. Simulation results verify that the estimated confidence intervals exhibit consistent/appropriate coverage rates.
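For contrast with the clustered GEE/delta-method intervals of the paper, the naive Wald interval for accuracy, valid only for independent ratings, is a few lines:

```python
from math import sqrt
from statistics import NormalDist

def accuracy_wald_ci(correct, total, conf=0.95):
    """Wald CI for accuracy = correct/total, independent observations only
    (the paper's point is that correlated ratings need a GEE variance,
    which this sketch deliberately omits)."""
    p = correct / total
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    half = z * sqrt(p * (1 - p) / total)
    return max(0.0, p - half), min(1.0, p + half)
```

With clustered data this interval is typically too narrow, because positive within-cluster correlation inflates the true variance of the accuracy estimate.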

17.
This article considers the problem of choosing between two possible treatments which are each modeled with a Poisson distribution. Win-probabilities are defined as the probabilities that a single potential future observation from one of the treatments will be better than, or at least as good as, a potential future observation from the other treatment. Using historical data from the two treatments, it is shown how estimates and confidence intervals can be constructed for the win-probabilities. Extensions to situations with three or more treatments are also discussed. Some examples and illustrations are provided, and the relationship between this methodology and standard inference procedures on the Poisson parameters is discussed.
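For two Poisson treatments, the win-probability P(X ≥ Y) can be computed directly by summing the joint pmf; the plug-in sketch below is the quantity on which the article's estimates and intervals are built, not its interval method:

```python
from math import exp

def win_prob_poisson(lam_x, lam_y, kmax=100):
    """P(X >= Y) for independent X ~ Poisson(lam_x), Y ~ Poisson(lam_y),
    truncating the outer sum at kmax (ample for small rates)."""
    px = exp(-lam_x)        # P(X = 0)
    py = exp(-lam_y)        # P(Y = 0)
    cy = py                 # running P(Y <= i)
    total = px * cy
    for i in range(1, kmax + 1):
        px *= lam_x / i     # Poisson recurrence: P(X = i) from P(X = i-1)
        py *= lam_y / i
        cy += py
        total += px * cy
    return total
```

The iterative recurrence avoids large factorials; with equal rates the value exceeds 1/2 because ties count as wins under "at least as good as".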

18.
For given (small) α and β, a sequential confidence set that covers the true parameter point with probability at least 1 - α and one or more specified false parameter points with probability at most β can be generated by a family of sequential tests. Several situations are described where this approach would be a natural one. The following example is studied in some detail: obtain an upper (1 - α)-confidence interval for a normal mean μ (variance known) with β-protection at μ - δ(μ), where δ(·) is not bounded away from 0, so that a truly sequential procedure is mandatory. Some numerical results are presented for intervals generated by (1) sequential probability ratio tests (SPRTs), and (2) generalized sequential probability ratio tests (GSPRTs). These results indicate the superiority of the GSPRT-generated intervals over the SPRT-generated ones if expected sample size is taken as the performance criterion.
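A minimal Wald SPRT for a normal mean with known variance, the building block behind the SPRT-generated intervals above (parameter names are illustrative; the GSPRT variant with time-varying boundaries is not sketched):

```python
from math import log

def sprt_normal_mean(observations, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald SPRT of H0: mu = mu0 vs H1: mu = mu1 for N(mu, sigma^2) data.
    Returns ('accept H0' | 'accept H1' | 'continue', observations used)."""
    lo, hi = log(beta / (1 - alpha)), log((1 - beta) / alpha)
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # log-likelihood-ratio increment for one normal observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr <= lo:
            return "accept H0", n
        if llr >= hi:
            return "accept H1", n
    return "continue", len(observations)
```

With mu0 = 0, mu1 = 1, sigma = 1, and alpha = beta = 0.05, each observation equal to 1 contributes 0.5 to the log-likelihood ratio, so the upper boundary log(19) ≈ 2.944 is crossed at the sixth observation.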

19.
This paper discusses inference regarding the mean direction and the concentration parameters based on data from the von Mises distribution from a Bayesian point of view, when k (k < n/2) of the n observations are spurious, that is, are from a von Mises population with a shifted mean direction. The Bayesian analysis for this spuriosity case provides detection, identification, and estimation for the mean direction and the concentration parameter when spurious observations are indeed present, possibly giving rise to outliers.

20.
The among variance component in the balanced one-factor nested components-of-variance model is of interest in many fields of application. Except for an artificial method that uses a set of random numbers which is of no use in practical situations, an exact-size confidence interval on the among variance has not yet been derived. This paper provides a detailed comparison of three approximate confidence intervals which possess certain desired properties and have been shown to be the better methods among many available approximate procedures. Specifically, the minimum and the maximum of the confidence coefficients for the one- and two-sided intervals of each method are obtained. The expected lengths of the intervals are also compared.
