Similar Articles (20 results)
1.
The methodology for deriving the exact confidence coefficient of some confidence intervals for a binomial proportion is proposed in Wang [2007. Exact confidence coefficients of confidence intervals for a binomial proportion. Statist. Sinica 17, 361–368]. The methodology requires two conditions on the confidence intervals: the monotone boundary property and the full coverage property. In this paper, we show that for some confidence intervals of a binomial proportion, the two properties hold for any sample size. Based on the results presented in this paper, the procedure of Wang (2007) can be directly used to calculate the exact confidence coefficients of these confidence intervals for any fixed sample size.
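The quantity being computed is easy to state: the exact confidence coefficient is the infimum of the coverage probability over the parameter space. A minimal Python sketch, using the Wald interval purely as an illustrative choice and a grid search in place of Wang's endpoint-based calculation (which exploits the two properties to check only finitely many points):

```python
import math

def wald_interval(x, n, z=1.96):
    """Wald interval for a binomial proportion (illustrative choice only)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def coverage(p, n, interval=wald_interval):
    """Exact coverage probability at p for fixed sample size n."""
    total = 0.0
    for x in range(n + 1):
        lo, hi = interval(x, n)
        if lo <= p <= hi:
            total += math.comb(n, x) * p**x * (1 - p)**(n - x)
    return total

def approx_confidence_coefficient(n, grid=2000):
    """Grid approximation to inf_p coverage(p, n).  Wang (2007) shows that,
    under the monotone boundary and full coverage properties, the infimum
    is attained at interval endpoints, so only finitely many p need checking;
    the brute-force grid here is just for illustration."""
    return min(coverage(i / grid, n) for i in range(1, grid))
```

Running `approx_confidence_coefficient(10)` exposes the well-known failure of the Wald interval: its exact confidence coefficient is far below the nominal 95%, driven by p near the boundary.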

2.
This paper presents a method for constructing confidence intervals for the median of a finite population under unequal probability sampling. The model-assisted approach makes use of the L1-norm to motivate the estimating function, which is then used to develop a unified approach to inference which includes not only confidence intervals but hypothesis tests and point estimates. The approach relies on large sample theory to construct the confidence intervals. In cases when second-order inclusion probabilities are not available or easy to compute, the Hartley–Rao variance approximation is employed. Simulations show that the confidence intervals achieve the appropriate confidence level, whether or not the Hartley–Rao variance is employed.

3.
In this paper, we consider the problem wherein one desires to estimate a linear combination of binomial probabilities from k > 2 independent populations. In particular, we create a new family of asymptotic confidence intervals, extending the approach taken by Beal [1987. Asymptotic confidence intervals for the difference between two binomial parameters for use with small samples. Biometrics 43, 941–950] in the two-sample case. One of our new intervals is shown to perform very well when compared to the best available intervals documented in Price and Bonett [2004. An improved confidence interval for a linear function of binomial proportions. Comput. Statist. Data Anal. 45, 449–456]. Furthermore, our interval estimation approach is quite general and could be extended to handle more complicated parametric functions and even to other discrete probability models in stratified settings. We illustrate our new intervals using two real data examples, one from an ecology study and one from a multicenter clinical trial.
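As a baseline for what such intervals improve upon, a plain Wald-style interval for a linear combination ∑ cᵢpᵢ of independent binomial proportions is easy to write down; the paper's intervals extend Beal's approach precisely to do better than this naive construction in small samples. A sketch (not the paper's method):

```python
import math

def wald_ci_linear_combo(xs, ns, coefs, z=1.96):
    """Naive Wald-style CI for sum_i c_i * p_i from k independent binomials.
    xs = successes, ns = sample sizes, coefs = the c_i.  Baseline only;
    small-sample behavior is known to be poor."""
    est = sum(c * x / n for c, x, n in zip(coefs, xs, ns))
    # independent populations, so variances of the weighted p-hats add
    var = sum(c**2 * (x / n) * (1 - x / n) / n
              for c, x, n in zip(coefs, xs, ns))
    half = z * math.sqrt(var)
    return est - half, est + half
```

With coefs = (1, -1) this reduces to the usual Wald interval for a difference of two proportions, the two-sample case Beal's intervals were designed to improve.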

4.
Exact confidence intervals for a proportion of total variance, based on pivotal quantities, only exist for mixed linear models having two variance components. Generalized confidence intervals (GCIs) introduced by Weerahandi [1993. Generalized confidence intervals. J. Am. Statist. Assoc. 88, 899–905 (correction: 89, 726)] are based on generalized pivotal quantities (GPQs) and can be constructed for a much wider range of models. In this paper, the author investigates the coverage probabilities, as well as the utility of GCIs, for a proportion of total variance in mixed linear models having more than two variance components. Particular attention is given to the formation of GPQs and GCIs in mixed linear models having three variance components in situations where the data exhibit complete balance, partial balance, and partial imbalance. The GCI procedure is quite general and provides a useful method to construct confidence intervals in a variety of applications.

5.
In this paper, the hypothesis testing and interval estimation for the intraclass correlation coefficients are considered in a two-way random effects model with interaction. Two particular intraclass correlation coefficients are described in a reliability study. The tests and confidence intervals for the intraclass correlation coefficients are developed when the data are unbalanced. One approach is based on the generalized p-value and generalized confidence interval; the other is based on the modified large-sample idea. These two approaches simplify to the ones in Gilder et al. [2007. Confidence intervals on intraclass correlation coefficients in a balanced two-factor random design. J. Statist. Plann. Inference 137, 1199–1212] when the data are balanced. Furthermore, some statistical properties of the generalized confidence intervals are investigated. Finally, some simulation results to compare the performance of the modified large-sample approach with that of the generalized approach are reported. The simulation results indicate that the modified large-sample approach performs better than the generalized approach in both the coverage probability and the expected length of the confidence interval.

6.
In this paper, conservative simultaneous confidence intervals for multiple comparisons among mean vectors in multivariate normal distributions are considered. Some properties of the multivariate Tukey–Kramer procedure for pairwise comparisons and the conservative simultaneous confidence procedure for comparisons with a control are presented. Particularly, the upper bound for the conservativeness of the simultaneous confidence procedure for comparisons with a control is obtained. Finally, numerical results by Monte Carlo simulations and an example to illustrate the procedure are given.

7.
The "new confidence intervals" of Zhou and Qin [2004. New intervals for the difference between two independent binomial proportions. J. Statist. Plann. Inference 123, 97–115; 2005. A new confidence interval for the difference between two binomial proportions of paired data. J. Statist. Plann. Inference 128, 527–542] for the difference between two treatment proportions exhibit a severe lack of invariance, which is a compelling reason not to use them.

8.
In this article, robust estimation and prediction in multivariate autoregressive models with exogenous variables (VARX) are considered. The conditional least squares (CLS) estimators are known to be non-robust when outliers occur. To obtain robust estimators, the method introduced in Duchesne [2005. Robust and powerful serial correlation tests with new robust estimates in ARX models. J. Time Ser. Anal. 26, 49–81] and Bou Hamad and Duchesne [2005. On robust diagnostics at individual lags using RA-ARX estimators. In: Duchesne, P., Rémillard, B. (Eds.), Statistical Modeling and Analysis for Complex Data Problems. Springer, New York] is generalized to VARX models. The asymptotic distribution of the new estimators is studied, from which the asymptotic covariance matrix of the robust estimators is obtained. Classical conditional prediction intervals normally rely on estimators such as the usual non-robust CLS estimators. In the presence of outliers, such as additive outliers, these classical predictions can be severely biased. More generally, the occurrence of outliers may invalidate the usual conditional prediction intervals. Consequently, the new robust methodology is used to develop robust conditional prediction intervals which take into account parameter estimation uncertainty. In a simulation study, we investigate the finite sample properties of the robust prediction intervals under several scenarios for the occurrence of the outliers, and the new intervals are compared to non-robust intervals based on classical CLS estimators.

9.
This paper discusses a method for constructing prediction intervals for time series models with trend using the sieve bootstrap procedure. A Gasser–Müller-type kernel estimator is used for trend estimation and prediction. A boundary modification of the kernel is applied to control the edge effect and to construct the trend predictor.

10.
Confidence intervals for parameters that can be arbitrarily close to being unidentified are unbounded with positive probability [e.g. Dufour, J.-M., 1997. Some impossibility theorems in econometrics with applications to instrumental variables and dynamic models. Econometrica 65, 1365–1388; Pfanzagl, J., 1998. The nonexistence of confidence sets for discontinuous functionals. Journal of Statistical Planning and Inference 75, 9–20], and the asymptotic risks of their estimators are unbounded [Pötscher, B.M., 2002. Lower risk bounds and properties of confidence sets for ill-posed estimation problems with applications to spectral density and persistence estimation, unit roots, and estimation of long memory parameters. Econometrica 70, 1035–1065]. We extend these "impossibility results" and show that all tests of size α concerning parameters that can be arbitrarily close to being unidentified have power that can be as small as α for any sample size even if the null and the alternative hypotheses are not adjacent. The results are proved for a very general framework that contains commonly used models.

11.
In order to estimate the effective dose, such as the 0.5 quantile ED50, in a bioassay problem, various parametric and semiparametric models have been used in the literature. If the true dose–response curve deviates significantly from the model, the estimates will generally be inconsistent. One strategy is to analyze the data making only a minimal assumption on the model, namely, that the dose–response curve is non-decreasing. In the present paper we first define an empirical dose–response curve based on the estimated response probabilities by using the "pool-adjacent-violators" (PAV) algorithm, then estimate effective doses ED100p for a large range of p by taking the inverse of this empirical dose–response curve. The consistency and asymptotic distribution of these estimated effective doses are obtained. The asymptotic results can be extended to the estimated effective doses proposed by Glasbey [1987. Tolerance-distribution-free analyses of quantal dose–response data. Appl. Statist. 36 (3), 251–259] and Schmoyer [1984. Sigmoidally constrained maximum likelihood estimation in quantal bioassay. J. Amer. Statist. Assoc. 79, 448–453] under the additional assumption that the dose–response curve is symmetric or sigmoidal. We give some simulations on constructing confidence intervals using different methods.
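The two building blocks named above, the PAV algorithm and inversion of the resulting monotone curve, can be sketched in a few lines. This is the generic weighted PAV fit, not the paper's full estimation procedure (function and variable names here are illustrative):

```python
def pav(y, w=None):
    """Pool-adjacent-violators: weighted least-squares fit of a
    non-decreasing sequence to y.  Adjacent blocks that violate
    monotonicity are pooled into their weighted mean."""
    w = [1.0] * len(y) if w is None else list(w)
    blocks = []  # each block: [mean, total weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit

def effective_dose(p, doses, fit):
    """Smallest dose whose fitted response reaches p: the inverse of the
    empirical (isotonized) dose-response curve, evaluated on the design."""
    for d, f in zip(doses, fit):
        if f >= p:
            return d
    return None
```

Applying `pav` to the raw response proportions at each dose and then calling `effective_dose(0.5, ...)` gives a model-free ED50 estimate in the spirit described above.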

12.
We propose a method for saddlepoint approximating the distribution of estimators in single lag subset autoregressive models of order one. By viewing the estimator as the root of an appropriate estimating equation, the approach circumvents the difficulty inherent in more standard methods that require an explicit expression for the estimator to be available. Plots of the densities reveal that the distributions of the Burg and maximum likelihood estimators are nearly identical. We show that one possible reason for this is the fact that Burg enjoys the property of estimating equation optimality among a class of estimators expressible as a ratio of quadratic forms in normal random variables, which includes Yule–Walker and least squares. By inverting a two-sided hypothesis test, we show how small sample confidence intervals for the parameters can be constructed from the saddlepoint approximations. Simulation studies reveal that the resulting intervals generally outperform traditional ones based on asymptotics and have good robustness properties with respect to heavy-tailed and skewed innovations. The applicability of the models is illustrated by analyzing a longitudinal data set in a novel manner.

13.
Bivariate extreme value theory was used to estimate a rare event (see de Haan and de Ronde [1998. Sea and wind: multivariate extremes at work. Extremes 1, 7–45]). This procedure involves estimating a tail dependence function. There are several estimators for the tail dependence function in the literature, but their limiting distributions depend on partial derivatives of the tail dependence function. In this paper smooth estimators are proposed for estimating partial derivatives of bivariate tail dependence functions and their asymptotic distributions are derived as well. A simulation study is conducted to compare different estimators of partial derivatives in terms of both mean squared errors and coverage accuracy of confidence intervals of the bivariate tail dependence function based on these different estimators of partial derivatives.

14.
The aim of this paper is twofold. First we discuss the maximum likelihood estimators of the unknown parameters of a two-parameter Birnbaum–Saunders distribution when the data are progressively Type-II censored. The maximum likelihood estimators are obtained using the EM algorithm by exploiting the property that the Birnbaum–Saunders distribution can be expressed as an equal mixture of an inverse Gaussian distribution and its reciprocal. From the proposed EM algorithm, the observed information matrix can be obtained quite easily, which can be used to construct the asymptotic confidence intervals. We perform the analysis of two real and one simulated data sets for illustrative purposes, and the performances are quite satisfactory. We further propose the use of different criteria to compare two different sampling schemes, and then find the optimal sampling scheme for a given criterion. It is observed that finding the optimal censoring scheme is a discrete optimization problem, and a computationally intensive one. We examine one sub-optimal censoring scheme by restricting the choice of censoring schemes to one-step censoring schemes as suggested by Balakrishnan (2007), which can be obtained quite easily. We compare the performances of the sub-optimal censoring schemes with the optimal ones, and observe that the loss of information is quite insignificant.

15.
A modified large-sample (MLS) approach and a generalized confidence interval (GCI) approach are proposed for constructing confidence intervals for intraclass correlation coefficients. Two particular intraclass correlation coefficients are considered in a reliability study. Both subjects and raters are assumed to be random effects in a balanced two-factor design, which includes subject-by-rater interaction. Computer simulation is used to compare the coverage probabilities of the proposed MLS approach (GiTTCH) and GCI approaches with the Leiva and Graybill [1986. Confidence intervals for variance components in the balanced two-way model with interaction. Comm. Statist. Simulation Comput. 15, 301–322] method. The competing approaches are illustrated with data from a gauge repeatability and reproducibility study. The GiTTCH method maintains at least the stated confidence level for interrater reliability. For intrarater reliability, the coverage is accurate in several circumstances but can be liberal in some circumstances. The GCI approach provides reasonable coverage for lower confidence bounds on interrater reliability, but its corresponding upper bounds are too liberal. Regarding intrarater reliability, the GCI approach is not recommended because the lower bound coverage is liberal. Comparing the overall performance of the three methods across a wide array of scenarios, the proposed modified large-sample approach (GiTTCH) provides the most accurate coverage for both interrater and intrarater reliability.

16.
In this article, we present a novel approach to clustering finite or infinite dimensional objects observed with different uncertainty levels. The novelty lies in using confidence sets rather than point estimates to obtain cluster membership and the number of clusters based on the distance between the confidence set estimates. The minimal and maximal distances between the confidence set estimates provide confidence intervals for the true distances between objects. The upper bounds of these confidence intervals can be used to minimize the within clustering variability and the lower bounds can be used to maximize the between clustering variability. We assign objects to the same cluster based on a min–max criterion and we separate clusters based on a max–min criterion. We illustrate our technique by clustering a large number of curves and evaluate our clustering procedure with a synthetic example and with a specific application.

17.
We apply the stochastic approximation method to construct a large class of recursive kernel estimators of a probability density, including the one introduced by Hall and Patil [1994. On the efficiency of on-line density estimators. IEEE Trans. Inform. Theory 40, 1504–1512]. We study the properties of these estimators and compare them with Rosenblatt's nonrecursive estimator. It turns out that, for pointwise estimation, it is preferable to use Rosenblatt's nonrecursive kernel estimator rather than any recursive estimator. By contrast, for estimation by confidence intervals, it is better to use a recursive estimator rather than Rosenblatt's estimator.
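To make "recursive" concrete: a recursive kernel estimator updates the density estimate one observation at a time instead of re-summing over the whole sample. The sketch below implements a classical Wolverton–Wagner-type recursion with bandwidth h_n = n^(-1/5); this is only one simple member of the family, not the specific stochastic-approximation class constructed in the paper:

```python
import math

def gauss_kernel(u):
    """Standard Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

class RecursiveKDE:
    """Online kernel density estimator on a fixed evaluation grid:
    f_n(x) = ((n-1)/n) f_{n-1}(x) + K((x - X_n)/h_n) / (n h_n),
    with h_n = n^(-1/5).  A sketch of the recursive idea only."""
    def __init__(self, grid):
        self.grid = grid
        self.f = [0.0] * len(grid)
        self.n = 0

    def update(self, x_new):
        """Fold one new observation into the running estimate."""
        self.n += 1
        h = self.n ** (-0.2)
        for i, x in enumerate(self.grid):
            self.f[i] = ((self.n - 1) * self.f[i]
                         + gauss_kernel((x - x_new) / h) / h) / self.n
```

The recursion never revisits past observations, which is exactly what makes such estimators attractive for on-line settings, at the cost of the pointwise efficiency loss relative to Rosenblatt's estimator noted above.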

18.
Recently Jammalamadaka and Mangalam [2003. Non-parametric estimation for middle censored data. J. Nonparametric Statist. 15, 253–265] introduced a general censoring scheme called the "middle-censoring" scheme in a non-parametric setup. In this paper we consider this middle-censoring scheme when the lifetime distribution of the items is exponential and the censoring mechanism is independent and non-informative. In this setup, we derive the maximum likelihood estimator and study its consistency and asymptotic normality properties. We also derive the Bayes estimate of the exponential parameter under a gamma prior. Since a theoretical construction of the credible interval becomes quite difficult, we propose and implement a Gibbs sampling technique to construct the credible intervals. Monte Carlo simulations are performed to evaluate the small sample behavior of the techniques proposed. A real data set is analyzed to illustrate the practical application of the proposed methods.

19.
For a discrete time, second-order stationary process the Levinson–Durbin recursion is used to determine best fitting one-step-ahead linear autoregressive predictors of successively increasing order, best in the sense of minimizing the mean square error. Whittle [1963. On the fitting of multivariate autoregressions, and the approximate canonical factorization of a spectral density matrix. Biometrika 50, 129–134] generalized the recursion to the case of vector autoregressive processes. The recursion defines what is termed a Levinson–Durbin–Whittle sequence, and a generalized Levinson–Durbin–Whittle sequence is also defined. Generalized Levinson–Durbin–Whittle sequences are shown to satisfy summation formulas which generalize summation formulas satisfied by binomial coefficients. The formulas can be expressed in terms of the partial correlation sequence, and they assume simple forms for time-reversible processes. The results extend comparable formulas obtained in Shaman [2007. Generalized Levinson–Durbin sequences, binomial coefficients and autoregressive estimation. Working paper] for univariate processes.
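For reference, the univariate Levinson–Durbin recursion that Whittle's result generalizes can be written down directly: given autocovariances r(0), ..., r(p), it produces the best linear one-step-ahead predictors of orders 1 through p, together with the partial correlation sequence mentioned above. A minimal sketch (univariate case only, not the vector recursion of the abstract):

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion from autocovariances r[0..order].
    Returns (AR coefficients of the order-`order` predictor,
             partial correlations kappa_1..kappa_order,
             mean-square prediction errors of orders 0..order)."""
    phi = [0.0] * (order + 1)   # phi[j] = j-th coefficient, phi[0] unused
    pacf = []
    v = r[0]                    # order-0 prediction error variance
    variances = [v]
    for k in range(1, order + 1):
        # reflection (partial correlation) coefficient
        acc = r[k] - sum(phi[j] * r[k - j] for j in range(1, k))
        kappa = acc / v
        pacf.append(kappa)
        # update coefficients of the order-k predictor
        new_phi = phi[:]
        new_phi[k] = kappa
        for j in range(1, k):
            new_phi[j] = phi[j] - kappa * phi[k - j]
        phi = new_phi
        # prediction error shrinks by the factor (1 - kappa^2)
        v *= (1.0 - kappa ** 2)
        variances.append(v)
    return phi[1:order + 1], pacf, variances
```

For an AR(1) process with coefficient 0.5 and unit innovation variance, r(k) = 0.5^k / (1 - 0.25), and the recursion recovers coefficient 0.5 at order 1 with all higher partial correlations equal to zero.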

20.
This paper considers 2×2 tables arising from case–control studies in which the binary exposure may be misclassified. We found circumstances under which the inverse matrix method provides a more efficient odds ratio estimator than the naive estimator. We provide some intuition for the findings, and also give a formula for obtaining the minimum size of a validation study such that the variance of the odds ratio estimator from the inverse matrix method is smaller than that of the naive estimator, thereby ensuring an advantage for the misclassification corrected result. As a corollary of this result, we show that correcting for misclassification does not necessarily lead to a widening of the confidence intervals, but, rather, in addition to producing a consistent estimate, can also produce one that is more efficient.
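The correction idea can be illustrated with a small sketch. Here observed exposure counts are reclassified using positive and negative predictive values (as would be estimated from a validation study) before forming the odds ratio; this is an illustrative rendering of the inverse-matrix idea, and the paper's exact estimator and variance formula may differ:

```python
def inverse_matrix_or(cases, controls, ppv, npv):
    """Odds-ratio estimate after correcting observed exposure counts with
    predictive values ppv/npv (assumed known here; in practice estimated
    from a validation study).  cases / controls = (exposed, unexposed)
    observed counts.  Illustrative sketch only."""
    def correct(exposed, unexposed):
        # expected truly exposed / truly unexposed counts
        a = ppv * exposed + (1 - npv) * unexposed
        b = (1 - ppv) * exposed + npv * unexposed
        return a, b
    a1, b1 = correct(*cases)      # cases row
    a0, b0 = correct(*controls)   # controls row
    return (a1 * b0) / (b1 * a0)
```

When ppv = npv = 1 (perfect classification) this reduces to the naive cross-product odds ratio, which is a convenient sanity check.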
