Similar Documents
20 similar documents retrieved.
1.
An example is given of a uniformly most accurate unbiased confidence belt which yields absurd confidence statements with 100% occurrence. In several known examples, as well as in the 100%-occurrence counterexample, an optimal confidence belt provides absurd statements because it is inclusion-inconsistent with either a null or an all-inclusive belt or both. It is concluded that confidence-theory optimality criteria alone are inadequate for practice, and that a consistency criterion is required. An approach based upon inclusion consistency of belts [C_γ(x) ⊆ C_γ′(x) for some x implies γ ≤ γ′ for the confidence coefficients] is suggested for exact interval estimation in continuous parametric models. Belt inclusion consistency, the existence of a proper-pivotal vector [a pivotal vector T(X, θ) such that the effective range of T(x, ·) is independent of x], and the existence of a confidence distribution are proven mutually equivalent. This consistent approach being restrictive, it is shown, using Neyman's anomalous 1954 example, how to determine whether any given parametric function can be estimated consistently and exactly or whether a consistent nonexact solution must be attempted.

2.
In many situations it is necessary to test the equality of the means of two normal populations when the variances are unknown and unequal. This paper studies the celebrated and controversial Behrens-Fisher problem via an adjusted likelihood-ratio test using the maximum likelihood estimates of the parameters under both the null and the alternative models. This procedure allows the significance level to be adjusted in accordance with the degrees of freedom, balancing the risk due to the bias in using the maximum likelihood estimates against the risk due to the increase of variance. A large-scale Monte Carlo investigation is carried out to show that −2 ln Λ has an empirical chi-square distribution with fractional degrees of freedom instead of a chi-square distribution with one degree of freedom. Monte Carlo power curves are also investigated under several different conditions to compare the performance of several conventional procedures with that of the proposed procedure with respect to control over Type I errors and power.
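As context for the statistic above, the sketch below (an illustration under our own assumptions, not the paper's adjusted procedure) computes the unadjusted −2 ln Λ for two normal samples with unequal variances: the MLEs under the alternative are the sample means and the /n variances, and the common mean under the null is found by fixed-point iteration. The function name behrens_fisher_lr_stat and all numeric settings are ours.

```python
import numpy as np

def behrens_fisher_lr_stat(x, y, tol=1e-10, max_iter=200):
    """-2 ln(Lambda) for H0: mu_x == mu_y with unequal, unknown variances.

    MLEs under H1 are the sample means and the /n variances; under H0 the
    common mean and the two variances are found by fixed-point iteration.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    # MLE variances under the alternative (ddof=0, i.e. divide by n)
    v1_alt, v2_alt = x.var(), y.var()
    # Fixed-point iteration for the common mean under the null
    mu = (x.mean() + y.mean()) / 2.0
    for _ in range(max_iter):
        v1 = np.mean((x - mu) ** 2)
        v2 = np.mean((y - mu) ** 2)
        mu_new = (n1 * x.mean() / v1 + n2 * y.mean() / v2) / (n1 / v1 + n2 / v2)
        if abs(mu_new - mu) < tol:
            mu = mu_new
            break
        mu = mu_new
    v1_null = np.mean((x - mu) ** 2)
    v2_null = np.mean((y - mu) ** 2)
    # Profile log-likelihood ratio for normal samples
    return n1 * np.log(v1_null / v1_alt) + n2 * np.log(v2_null / v2_alt)

rng = np.random.default_rng(1)
print(behrens_fisher_lr_stat(rng.normal(0, 1, 12), rng.normal(0, 3, 8)))
```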

3.
In this article, we propose a three-stage procedure for estimating the difference of the means of two multivariate normal populations with unknown and unequal variances, in the setting of the classical Behrens-Fisher problem. Both point and confidence region estimation are carried out, and second-order approximations are obtained in both cases.

4.
Some new algebra on pattern and transition matrices is used to determine the degrees of freedom and the parameter matrix, if the distribution of a linear sum of Wishart matrices is approximated by a single Wishart distribution. This approximation is then used to find a solution to the multivariate Behrens-Fisher problem similar to the Welch (1947) solution in the univariate case.
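The scalar analogue of this Wishart moment matching is the familiar Satterthwaite device: approximate a linear sum of chi-squares by a single scaled chi-square with the first two moments matched. A minimal sketch of that scalar case (not the paper's matrix algebra; satterthwaite_match is a hypothetical helper):

```python
import numpy as np

def satterthwaite_match(coeffs, dfs):
    """Moment-match sum_i a_i * Chi2(nu_i) by g * Chi2(h): equate the first two
    moments (the scalar analogue of matching a sum of Wishart matrices with a
    single Wishart distribution)."""
    coeffs, dfs = np.asarray(coeffs, float), np.asarray(dfs, float)
    m1 = np.sum(coeffs * dfs)            # mean of the sum
    m2 = 2 * np.sum(coeffs ** 2 * dfs)   # variance of the sum
    h = 2 * m1 ** 2 / m2                 # matched degrees of freedom
    g = m1 / h                           # matched scale
    return g, h

rng = np.random.default_rng(0)
a, nu = [0.4, 1.7], [9, 14]
g, h = satterthwaite_match(a, nu)
exact = a[0] * rng.chisquare(nu[0], 100_000) + a[1] * rng.chisquare(nu[1], 100_000)
approx = g * rng.chisquare(h, 100_000)
print(g, h)
print(exact.mean(), approx.mean(), exact.var(), approx.var())  # moments agree
```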

5.
6.
Highly skewed and non-negative data can often be modeled by the delta-lognormal distribution in fisheries research. However, the coverage probabilities of existing interval estimation procedures are less than satisfactory for small sample sizes and highly skewed data. We propose a heuristic method of estimating confidence intervals for the mean of the delta-lognormal distribution. The method uses an asymptotic generalized pivotal quantity to construct a generalized confidence interval for the mean. Simulation results show that the proposed interval estimation procedure yields satisfactory coverage probabilities, expected interval lengths and reasonable relative biases. Finally, the proposed method is applied to red cod density data for demonstration.
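One common generalized-pivotal construction for the delta-lognormal mean is sketched below under our own assumptions (chi-square/normal pivots for μ and σ² of the positive part, a Jeffreys beta pivot for the nonzero probability); the paper's asymptotic generalized pivotal quantity may differ in detail, and delta_lognormal_gci is a hypothetical helper name.

```python
import numpy as np

def delta_lognormal_gci(x, alpha=0.05, n_sim=20000, seed=0):
    """Generalized CI for the delta-lognormal mean (1 - delta) * exp(mu + sigma^2/2).

    Simulates a generalized pivotal quantity for the mean and returns a
    percentile interval of the simulated values."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    n = len(x)
    pos = x[x > 0]
    n1 = len(pos)
    ybar, s2 = np.log(pos).mean(), np.log(pos).var(ddof=1)

    U = rng.chisquare(n1 - 1, n_sim)                 # pivot for sigma^2
    Z = rng.standard_normal(n_sim)                   # pivot for mu
    R_sig2 = (n1 - 1) * s2 / U
    R_mu = ybar - Z * np.sqrt(R_sig2 / n1)
    R_p = rng.beta(n1 + 0.5, n - n1 + 0.5, n_sim)    # pivot for P(X > 0), Jeffreys
    R_mean = R_p * np.exp(R_mu + R_sig2 / 2.0)
    return np.quantile(R_mean, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(7)
data = np.where(rng.random(60) < 0.3, 0.0, rng.lognormal(1.0, 1.2, 60))
print(delta_lognormal_gci(data))
```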

7.
The Behrens–Fisher problem concerns inference for the difference between the means of two independent normal populations without the assumption of equal variances. In this article, we compare three approximate confidence intervals and a generalized confidence interval for the Behrens–Fisher problem. We also show how to obtain simultaneous confidence intervals for the three-population case (analysis of variance, ANOVA) by means of the Bonferroni correction. We conduct an extensive simulation study to evaluate these methods with respect to their type I error rate, power, expected confidence interval width, and coverage probability. Finally, the considered methods are applied to two real datasets.
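As an illustration of the simultaneous-interval idea (not necessarily the specific intervals compared in the article), the sketch below forms Welch-type confidence intervals for all pairwise mean differences and applies a Bonferroni correction so that the set attains the desired joint level; welch_ci and bonferroni_simultaneous are our own helper names.

```python
import numpy as np
from scipy import stats

def welch_ci(x, y, conf=0.95):
    """Welch-Satterthwaite CI for mu_x - mu_y without the equal-variance assumption."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    v1, v2 = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (len(x) - 1) + v2 ** 2 / (len(y) - 1))
    half = stats.t.ppf(1 - (1 - conf) / 2, df) * np.sqrt(v1 + v2)
    d = x.mean() - y.mean()
    return d - half, d + half

def bonferroni_simultaneous(groups, conf=0.95):
    """All pairwise Welch CIs at joint level `conf` via the Bonferroni correction."""
    pairs = [(i, j) for i in range(len(groups)) for j in range(i + 1, len(groups))]
    per_comparison = 1 - (1 - conf) / len(pairs)
    return {(i, j): welch_ci(groups[i], groups[j], per_comparison) for i, j in pairs}

rng = np.random.default_rng(3)
g = [rng.normal(m, s, 20) for m, s in [(0, 1), (0.8, 2), (1.5, 0.5)]]
for pair, ci in bonferroni_simultaneous(g).items():
    print(pair, ci)
```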

8.
Sequential fixed-width and risk-efficient estimation of the variance of an unspecified distribution is considered. The second-order asymptotic properties of the sequential rules are studied. Extensive simulation studies are carried out in order to study the small sample behavior of the sequential rules for some frequently used distributions.
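A generic purely sequential fixed-width rule of the kind described, sketched under our own assumptions: sampling continues until the asymptotic-normal interval for S², whose variance is estimated by (m4 − S⁴)/n, has half-width at most d. The paper's rules and their second-order corrections are not reproduced here.

```python
import numpy as np
from scipy import stats

def sequential_fixed_width_variance(sampler, d, alpha=0.05, pilot=10):
    """Keep sampling until the asymptotic CI for the variance,
    S^2 +/- z * sqrt((m4 - S^4)/n), has half-width at most d."""
    z = stats.norm.ppf(1 - alpha / 2)
    data = list(sampler(pilot))
    while True:
        x = np.asarray(data, float)
        n = len(x)
        s2 = x.var(ddof=1)
        m4 = np.mean((x - x.mean()) ** 4)
        half = z * np.sqrt(max(m4 - s2 ** 2, 0.0) / n + 1.0 / n ** 2)  # small correction term
        if half <= d:
            return n, (s2 - d, s2 + d)
        data.extend(sampler(1))

rng = np.random.default_rng(11)
n_stop, ci = sequential_fixed_width_variance(lambda k: rng.exponential(2.0, k), d=0.8)
print(n_stop, ci)
```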

9.
Practitioners of statistics are too often guilty of routinely selecting a 95% confidence level in interval estimation and ignoring the sample size and the expected size of the interval. One way to balance coverage and size is to use a loss function in a decision problem. Then either the Bayes risk or usual risk (if a pivotal quantity exists) may be minimized. It is found that some non-Bayes solutions are equivalent to Bayes results based on non-informative priors. The decision theory approach is applied to the mean and standard deviation of the univariate normal model and the mean of the multivariate normal. Tables are presented for critical values, expected size, confidence and sample size.
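To illustrate the decision-theoretic idea for the known-σ normal mean, the sketch below uses a toy loss of our own (risk = expected length + k × non-coverage probability, not the paper's tables) and shows that the risk-minimizing half-width, and hence the implied confidence level, changes with the sample size.

```python
import numpy as np
from scipy import stats, optimize

def optimal_half_width(n, sigma=1.0, k=10.0):
    """Choose c for the interval xbar +/- c*sigma/sqrt(n) by minimizing
    risk(c) = expected length + k * P(interval misses the mean)."""
    def risk(c):
        return 2 * c * sigma / np.sqrt(n) + k * 2 * stats.norm.sf(c)
    res = optimize.minimize_scalar(risk, bounds=(0.0, 10.0), method="bounded")
    c = res.x
    return c, 2 * stats.norm.cdf(c) - 1, risk(c)   # c, implied confidence, risk

for n in (10, 50, 200):
    c, conf, r = optimal_half_width(n)
    print(f"n={n:4d}  c={c:.3f}  implied confidence={conf:.3f}  risk={r:.3f}")
```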

10.
In many clinical trials and epidemiological studies, comparing the mean count response of an exposed group to a control group is often of interest. This type of data is often over-dispersed with respect to Poisson variation, and previous studies usually compared groups using confidence intervals (CIs) of the difference between the two means. However, in some situations, especially when the means are small, interval estimation of the mean ratio (MR) is preferable. Moreover, Cox and Lewis (The Statistical Analysis of Series of Events, Methuen, London, 1966) pointed out many other situations where the MR is more relevant than the difference of means. In this paper, we consider CI construction for the ratio of means between two treatments for over-dispersed Poisson data. We develop several CIs for this situation by hybridizing two separate CIs for the two individual means. Extensive simulations show that all hybrid-based CIs perform reasonably well in terms of coverage. However, the CIs based on the delta method using the logarithmic transformation perform better than the other intervals in the sense that they have slightly shorter interval lengths and show better balance of tail errors. The proposed CIs are illustrated with three real data examples.
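A hedged sketch of the delta-method interval on the log scale referred to above, using sample variances so that over-dispersion relative to Poisson is accommodated; mean_ratio_ci_log_delta is our own name, and this is not necessarily the exact hybrid construction of the paper.

```python
import numpy as np
from scipy import stats

def mean_ratio_ci_log_delta(x, y, conf=0.95):
    """CI for E[x]/E[y] from the delta method applied to log(xbar/ybar)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    # Delta-method variance of log(xbar/ybar); sample variances allow over-dispersion
    var_log = x.var(ddof=1) / (len(x) * mx ** 2) + y.var(ddof=1) / (len(y) * my ** 2)
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    log_mr = np.log(mx / my)
    return np.exp(log_mr - z * np.sqrt(var_log)), np.exp(log_mr + z * np.sqrt(var_log))

rng = np.random.default_rng(5)
exposed = rng.negative_binomial(2, 0.4, 40)   # over-dispersed counts
control = rng.negative_binomial(2, 0.5, 40)
print(mean_ratio_ci_log_delta(exposed, control))
```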

11.
This paper shows that a minimax Bayes rule and shrinkage estimators can be effectively applied to portfolio selection under the Bayesian approach. Specifically, it is shown that in some situations the portfolio selection problem can be formulated as a statistical decision problem. We then present a method for solving the resulting portfolio selection problem under the Bayesian approach.
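A minimal sketch of the general idea: shrink the estimated mean returns toward their grand mean before forming mean-variance weights. This stands in for, but is not, the paper's minimax Bayes rule; shrinkage_tangency_weights and the shrinkage weight are our own assumptions.

```python
import numpy as np

def shrinkage_tangency_weights(returns, shrink=0.5):
    """Mean-variance weights using a mean vector shrunk toward the grand mean,
    a simple stand-in for Bayes/shrinkage estimation of expected returns."""
    returns = np.asarray(returns, float)       # T x p matrix of asset returns
    mu_hat = returns.mean(axis=0)
    mu_shrunk = (1 - shrink) * mu_hat + shrink * mu_hat.mean()
    sigma = np.cov(returns, rowvar=False)
    w = np.linalg.solve(sigma, mu_shrunk)      # unnormalized tangency weights
    return w / w.sum()

rng = np.random.default_rng(2)
rets = rng.multivariate_normal([0.01, 0.02, 0.015], 0.02 * np.eye(3), size=120)
print(shrinkage_tangency_weights(rets))
```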

12.
The classic solution of the Monty Hall problem tacitly assumes that, after the candidate has made his or her first choice, the host always allows the candidate to switch doors after showing the candidate a losing door not initially chosen. In view of actual TV shows, it seems a more credible assumption that the host may or may not allow switching. Under this assumption, possible strategies for the candidate are discussed with respect to a minimax solution of the problem. In conclusion, the classic solution does not necessarily provide good guidance for a candidate on a game show. It is argued that the popularity of the problem is due to its incompleteness.
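A small simulation of the modified game, assuming for illustration that the host offers the switch with a fixed probability p_offer independent of where the prize is; the paper's minimax treatment allows a possibly adversarial host, which this sketch does not capture.

```python
import numpy as np

def play(strategy, p_offer, rng):
    """One game: the host reveals a losing, unchosen door and then offers a switch
    with probability p_offer. `strategy` is "always_switch" or "never_switch"."""
    prize = rng.integers(3)
    choice = rng.integers(3)
    if strategy == "always_switch" and rng.random() < p_offer:
        revealed = next(d for d in range(3) if d != choice and d != prize)
        choice = next(d for d in range(3) if d != choice and d != revealed)
    return choice == prize

rng = np.random.default_rng(0)
for p_offer in (1.0, 0.5, 0.0):
    for strat in ("always_switch", "never_switch"):
        wins = np.mean([play(strat, p_offer, rng) for _ in range(20000)])
        print(f"P(offer)={p_offer:.1f}  {strat:14s}  win rate ~ {wins:.3f}")
```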

13.
In this paper, we propose a method of estimation of the parameters and quantiles of the three-parameter gamma distribution based on Type-II right-censored data. In the proposed method, under mild conditions, the estimates always exist uniquely, and the estimators are consistent over the entire parameter space. Through Monte Carlo simulations, we further show that the proposed method performs well compared with another prominent method of estimation in terms of bias and root mean-squared error in small-sample situations. Finally, two real data sets are used to illustrate the proposed method.
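For contrast with the proposed method, a plain censored-likelihood maximum-likelihood baseline is sketched below (three-parameter gamma, Type-II right censoring at the largest observed order statistic); plain ML is exactly the kind of estimator whose existence and stability problems the paper's method is designed to avoid. fit_gamma3_type2 and the starting values are our own assumptions.

```python
import numpy as np
from scipy import stats, optimize

def fit_gamma3_type2(x_observed, n_total):
    """ML fit of a three-parameter (shape, location, scale) gamma to the r smallest
    observations from a sample of size n_total (Type-II right censoring)."""
    x = np.sort(np.asarray(x_observed, float))
    r, cens_at = len(x), x[-1]

    def neg_loglik(params):
        a, loc, scale = params
        if a <= 0 or scale <= 0 or loc >= x[0]:
            return np.inf
        ll = stats.gamma.logpdf(x, a, loc=loc, scale=scale).sum()
        ll += (n_total - r) * stats.gamma.logsf(cens_at, a, loc=loc, scale=scale)
        return -ll

    x0 = np.array([2.0, x[0] - 0.1, x.std()])          # rough starting values
    res = optimize.minimize(neg_loglik, x0, method="Nelder-Mead")
    return res.x  # (shape, location, scale)

rng = np.random.default_rng(9)
full = rng.gamma(3.0, 2.0, 100) + 5.0          # true shape 3, scale 2, location 5
observed = np.sort(full)[:70]                  # Type-II censoring: keep the 70 smallest
print(fit_gamma3_type2(observed, n_total=100))
```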

14.
In this article, static light scattering (SLS) measurements are processed to estimate the particle size distribution of particle systems, incorporating prior information obtained from an alternative experimental technique: scanning electron microscopy (SEM). For this purpose we propose two Bayesian schemes (one parametric and one non-parametric) to solve the stated light scattering problem, and we use the obtained results to summarize some features of the Bayesian approach within the context of inverse problems. The features presented in this article include the improvement of the results when useful prior information from an alternative experiment is considered instead of a non-informative prior, as occurs in a deterministic maximum likelihood estimation. This improvement is shown in terms of accuracy and precision of the corresponding results, and also in terms of minimizing the effect of multiple minima by including significant information in the optimization. Both Bayesian schemes are implemented using Markov chain Monte Carlo methods. They have been developed on the basis of the Metropolis–Hastings (MH) algorithm using Matlab® and are tested on simulated and experimental examples of concentrated and semi-concentrated particles. In the simulated examples, SLS measurements were generated using a rigorous model, while the inversion stage was solved using an approximate model in both schemes and also using the rigorous model in the parametric scheme. Priors from SEM micrographs were used in both simulated and experimental form, the simulated ones being obtained with a Monte Carlo routine. In addition to these features of the Bayesian approach, some other topics are discussed, such as regularization and implementation issues of the proposed schemes, among which we highlight the selection of the parameters used in the MH algorithm.
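A schematic of the parametric scheme's machinery, under our own simplifications: a lognormal particle size distribution with parameters (mu_d, sd_d), a toy forward_model standing in for the rigorous or approximate SLS model, Gaussian noise, and Gaussian SEM-based priors, explored with a random-walk Metropolis–Hastings sampler. None of the numerical settings come from the article.

```python
import numpy as np

ANGLES = np.linspace(0.2, 2.5, 40)             # scattering-angle grid (placeholder)

def forward_model(mu_d, sd_d):
    """Toy stand-in for the SLS forward model; a real application would use a
    Mie or Rayleigh-Debye computation for the assumed particle system."""
    d = np.linspace(10, 1000, 300)              # diameters, nm
    dd = d[1] - d[0]
    psd = np.exp(-0.5 * ((np.log(d) - mu_d) / sd_d) ** 2) / d
    psd /= psd.sum() * dd                       # normalize the size distribution
    kernel = d[None, :] ** 6 / (1 + (ANGLES[:, None] * d[None, :] / 500.0) ** 2) ** 2
    return (kernel * psd[None, :]).sum(axis=1) * dd

def metropolis_hastings(y_obs, noise_sd, prior_mean, prior_sd, n_iter=4000, seed=0):
    """Random-walk MH over (mu_d, sd_d) with Gaussian SEM-based priors."""
    rng = np.random.default_rng(seed)
    theta = np.array(prior_mean, float)
    def log_post(t):
        if t[1] <= 0:
            return -np.inf
        resid = y_obs - forward_model(*t)
        return -0.5 * np.sum((resid / noise_sd) ** 2) \
               - 0.5 * np.sum(((t - np.asarray(prior_mean)) / np.asarray(prior_sd)) ** 2)
    lp, chain = log_post(theta), []
    for _ in range(n_iter):
        prop = theta + rng.normal(0, [0.05, 0.02])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

true = (np.log(300.0), 0.35)
y = forward_model(*true)
y_obs = y + np.random.default_rng(1).normal(0, 0.01 * y.max(), y.size)
chain = metropolis_hastings(y_obs, 0.01 * y.max(),
                            prior_mean=[np.log(280.0), 0.4], prior_sd=[0.3, 0.15])
print(chain[2000:].mean(axis=0), "vs true", true)
```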

15.
The classical D-optimality principle in regression design may be motivated by a desire to maximize the coverage probability of a fixed-volume confidence ellipsoid on the regression parameters. When the fitted model is exactly correct, this amounts to minimizing the determinant of the covariance matrix of the estimators. We consider an analogue of this problem under the approximately linear model E[y|x] = θᵀz(x) + f(x). The nonlinear disturbance f(x) is essentially unknown, and the experimenter fits only to the linear part of the response. The resulting bias affects the coverage probability of the confidence ellipsoid on θ. We study the construction of designs which maximize the minimum coverage probability as f varies over a certain class. Explicit designs are given in the case that the fitted response surface is a plane.

16.
17.
In this paper we present a class of continuous-time processes arising from the solution of the generalized Langevin equation and show some of its properties. We define the theoretical and empirical codifference as a measure of dependence for stochastic processes. As an alternative dependence measure we also consider the spectral covariance. These dependence measures replace the autocovariance function when it is not well defined. Results for the theoretical codifference and the theoretical spectral covariance functions of the mentioned process are presented. A maximum likelihood estimation procedure is proposed to estimate the parameters of the process arising from the classical Langevin equation, i.e. the Ornstein–Uhlenbeck process, and of the so-called Cosine process. We also present a simulation study for particular processes in this class, covering their generation and the theoretical and empirical counterparts of both the codifference and the spectral covariance measures.
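A sketch of the empirical codifference for a simulated Ornstein–Uhlenbeck path, using our own discretization and parameter values; for Gaussian processes the codifference reduces to the autocovariance, which gives a convenient check against the theoretical curve.

```python
import numpy as np

def empirical_codifference(x, max_lag):
    """Sample codifference tau(h) = ln E e^{i(X_{t+h}-X_t)} - ln E e^{i X_{t+h}}
    - ln E e^{-i X_t}; for Gaussian processes it equals the autocovariance."""
    x = np.asarray(x, float)
    out = []
    for h in range(max_lag + 1):
        a, b = x[h:], x[:len(x) - h]
        tau = (np.log(np.mean(np.exp(1j * (a - b))))
               - np.log(np.mean(np.exp(1j * a)))
               - np.log(np.mean(np.exp(-1j * b))))
        out.append(tau.real)
    return np.array(out)

# Ornstein-Uhlenbeck path via exact discretization (theta=1, sigma=1, dt=0.05)
rng = np.random.default_rng(4)
theta, sigma, dt, n = 1.0, 1.0, 0.05, 100_000
phi = np.exp(-theta * dt)
innov_sd = sigma * np.sqrt((1 - phi ** 2) / (2 * theta))
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + innov_sd * rng.standard_normal()
lags = np.arange(6)
print(empirical_codifference(x, 5))
print(sigma ** 2 / (2 * theta) * np.exp(-theta * lags * dt))  # theoretical autocovariance
```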

18.
A problem in which one subpopulation is compared with several other subpopulations in terms of means, with the goal of estimating the smallest difference between the means, commonly arises in biology, medicine, and many other scientific fields. A generalization of the Strassburger–Bretz–Hochberg approach for two comparisons is presented for cases with three or more comparisons. The method allows constructing an interval estimator for the smallest mean difference which is compatible with the Min test. An application to a fluency-disorder study is illustrated. Simulations confirmed adequate coverage probability for normally distributed outcomes for a number of designs.
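A simpler compatible bound, included only for orientation: by the intersection-union argument, the minimum of the individual one-sided lower confidence bounds is itself a valid lower bound for the smallest mean difference (the article's generalization is sharper than this). min_difference_lower_bound is our own helper name.

```python
import numpy as np
from scipy import stats

def min_difference_lower_bound(control, treatments, alpha=0.05):
    """One-sided 1-alpha lower bound for min_i (mean(control) - mean(treatment_i)),
    taken as the minimum of the individual Welch lower bounds (intersection-union
    argument); compatible with the Min test."""
    control = np.asarray(control, float)
    bounds = []
    for t in treatments:
        t = np.asarray(t, float)
        v1, v2 = control.var(ddof=1) / len(control), t.var(ddof=1) / len(t)
        df = (v1 + v2) ** 2 / (v1 ** 2 / (len(control) - 1) + v2 ** 2 / (len(t) - 1))
        diff = control.mean() - t.mean()
        bounds.append(diff - stats.t.ppf(1 - alpha, df) * np.sqrt(v1 + v2))
    return min(bounds)

rng = np.random.default_rng(6)
ctrl = rng.normal(5.0, 1.0, 30)
trts = [rng.normal(m, 1.0, 30) for m in (3.5, 4.0, 4.2)]
print(min_difference_lower_bound(ctrl, trts))
```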

19.
We consider the problem of density estimation when the data arrive as a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive datasets that is based upon taking the derivative of a smooth curve fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for simultaneous estimation of multiple quantiles for massive datasets; these quantile estimates form the basis of the density estimation method. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through a simulation study to perform well and to have several distinct advantages over existing methods.
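A minimal sketch of the quantile-to-density idea, under our own assumptions: quantiles are tracked in a single pass by a simple stochastic-approximation update (not the paper's low-storage algorithm), a monotone PCHIP curve is fit to the resulting CDF points, and its derivative is used as the density estimate.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def stream_quantiles(stream, probs, step_scale=2.0):
    """Single-pass stochastic-approximation tracking of several quantiles."""
    probs = np.asarray(probs, float)
    q = None
    for n, x in enumerate(stream, start=1):
        if q is None:
            q = np.full(len(probs), float(x))
        gain = step_scale / n ** 0.75
        q += gain * (probs - (x <= q))   # push each estimate toward its target quantile
        q.sort()                         # keep the estimates ordered
    return q

def density_from_quantiles(q, probs):
    """Fit a monotone (PCHIP) curve through the estimated CDF points (q, p) and
    differentiate it to obtain a density estimate."""
    cdf = PchipInterpolator(q, probs)
    return cdf.derivative()

rng = np.random.default_rng(8)
probs = np.linspace(0.02, 0.98, 25)
q_hat = stream_quantiles(iter(rng.normal(0, 1, 100_000)), probs)
pdf = density_from_quantiles(q_hat, probs)
grid = np.linspace(-2, 2, 5)
print(np.round(pdf(grid), 3))          # compare with the N(0,1) density at these points
```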

20.
A new general model for the bio-assay problem is introduced. It is shown that when the slope of the dose-response curve at the median effective dose is known, the Robbins-Monro method yields an asymptotically optimal estimation procedure. Adaptive procedures are discussed for the case of unknown slope. Results of Monte Carlo studies are given.
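A sketch of the Robbins-Monro recursion for the median effective dose, assuming for illustration a logistic dose-response curve and using the known slope at the ED50 to set the asymptotically optimal gain; robbins_monro_ed50 and all numeric settings are our own.

```python
import numpy as np
from scipy import stats

def robbins_monro_ed50(response, x0, slope_at_ed50, n_steps=2000, seed=0):
    """Robbins-Monro recursion x_{n+1} = x_n - (c/n)(y_n - 0.5) targeting the dose
    with 50% response; c = 1/slope is the asymptotically optimal gain when the
    slope of the dose-response curve at the ED50 is known."""
    rng = np.random.default_rng(seed)
    c = 1.0 / slope_at_ed50
    x = x0
    for n in range(1, n_steps + 1):
        y = rng.random() < response(x)          # binary response at dose x
        x = x - (c / n) * (y - 0.5)
    return x

# Logistic dose-response with ED50 = 2.0 and scale 0.5 (slope at the ED50 = 1/(4*0.5) = 0.5)
response = lambda dose: stats.logistic.cdf(dose, loc=2.0, scale=0.5)
print(robbins_monro_ed50(response, x0=0.0, slope_at_ed50=0.5))
```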
