Similar Documents
 20 similar documents found.
1.
In this paper, hypothesis testing and interval estimation for intraclass correlation coefficients are considered in a two-way random effects model with interaction. Two particular intraclass correlation coefficients are described in a reliability study. Tests and confidence intervals for the intraclass correlation coefficients are developed for the case of unbalanced data. One approach is based on the generalized p-value and generalized confidence interval; the other is based on the modified large-sample idea. These two approaches reduce to the ones in Gilder et al. [2007. Confidence intervals on intraclass correlation coefficients in a balanced two-factor random design. J. Statist. Plann. Inference 137, 1199–1212] when the data are balanced. Furthermore, some statistical properties of the generalized confidence intervals are investigated. Finally, simulation results comparing the performance of the modified large-sample approach with that of the generalized approach are reported. The simulation results indicate that the modified large-sample approach performs better than the generalized approach in terms of coverage probability and expected length of the confidence interval.
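As a rough illustration of the kind of coverage comparison described above, the sketch below estimates by Monte Carlo the coverage of the classical exact F-based confidence interval for the intraclass correlation in a balanced one-way random effects model. This is a deliberate simplification (no rater factor, no interaction, balanced data) and not the unbalanced two-way procedures of the abstract; the sample sizes and variance components are arbitrary choices.

```python
# Monte Carlo check of coverage for the exact F-based confidence interval on the
# intraclass correlation in a balanced ONE-WAY random effects model.
# A simplified stand-in for the two-way procedures discussed above.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(1)
k, n = 30, 4                                   # k subjects, n replicate measurements each
sigma_a2, sigma_e2 = 2.0, 1.0
rho = sigma_a2 / (sigma_a2 + sigma_e2)         # true intraclass correlation
alpha, n_sim = 0.05, 5000

fl = f.ppf(alpha / 2, k - 1, k * (n - 1))
fu = f.ppf(1 - alpha / 2, k - 1, k * (n - 1))

covered = 0
for _ in range(n_sim):
    a = rng.normal(0, np.sqrt(sigma_a2), size=(k, 1))
    y = a + rng.normal(0, np.sqrt(sigma_e2), size=(k, n))
    msa = n * np.var(y.mean(axis=1), ddof=1)                       # between-subject mean square
    mse = np.sum((y - y.mean(axis=1, keepdims=True))**2) / (k * (n - 1))
    F = msa / mse
    lo = (F / fu - 1) / (F / fu - 1 + n)                           # exact Searle-type interval
    hi = (F / fl - 1) / (F / fl - 1 + n)
    covered += lo <= rho <= hi

print(f"estimated coverage: {covered / n_sim:.3f}  (nominal {1 - alpha})")
```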

2.
We consider a multivariate linear model for multivariate controlled calibration and construct some conservative confidence regions that are nonempty and invariant under nonsingular transformations. Our confidence region is easier to compute than those of some existing procedures. We illustrate the results with two examples. The simulation results show that the coverage probability of our confidence regions is close to the nominal confidence level.

3.
A p-value is developed for testing the equivalence of the variances of a bivariate normal distribution. The unknown correlation coefficient is a nuisance parameter in the problem. If the correlation is known, the proposed p-value provides an exact test. For large samples, the p-value can be computed by replacing the unknown correlation with the sample correlation, and the resulting test is quite satisfactory. For small samples, it is proposed to compute the p-value by replacing the unknown correlation with a scalar multiple of the sample correlation. A single scalar is not satisfactory, however, so different scalars are used depending on the magnitude of the sample correlation coefficient. To implement this approach, tables are provided giving sub-intervals for the sample correlation coefficient and the scalar to be used when the sample correlation falls in a particular sub-interval. Once such tables are available, the proposed p-value is easy to compute since it has an explicit analytic expression. Numerical results on the type I error probability and power of the test are reported, and the proposed p-value test is also compared with another test based on a rejection region. The results are illustrated with two examples: one on the comparability of two measuring devices and one on the assessment of bioequivalence.
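The scalar-adjusted plug-in p-value itself is not reproduced here. For orientation, the sketch below implements the classical Pitman-Morgan test of equal variances in paired normal data, which handles the unknown correlation exactly and is a natural baseline for the same comparison; it is not the method proposed in the abstract, and the simulated example data are arbitrary.

```python
# Classical Pitman-Morgan test of equal variances in a bivariate normal sample.
# Shown only as a baseline; this is NOT the scalar-adjusted plug-in p-value above.
import numpy as np
from scipy.stats import t as t_dist

def pitman_morgan(x, y):
    """Test H0: var(X) == var(Y) for paired (bivariate normal) data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    d, s = x - y, x + y                          # var(X) == var(Y)  iff  corr(D, S) == 0
    r = np.corrcoef(d, s)[0, 1]
    t_stat = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
    p_value = 2 * t_dist.sf(abs(t_stat), df=n - 2)
    return t_stat, p_value

rng = np.random.default_rng(0)
x = rng.normal(0, 1.0, 50)
y = 0.6 * x + rng.normal(0, 1.0, 50)             # correlated with a larger variance
print(pitman_morgan(x, y))
```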

4.
The problem of finding confidence regions (CR) for a q-variate vector γ given as the solution of a linear functional relationship (LFR) Λγ = μ is investigated. Here an m-variate vector μ and an m × q matrix Λ = (Λ1, Λ2, …, Λq) are unknown population means of an m(q+1)-variate normal distribution N_{m(q+1)}(ζ, Ω ⊗ Σ), where ζ′ = (μ′, Λ1′, Λ2′, …, Λq′)′, Σ is an unknown, symmetric and positive definite m × m matrix, Ω is a known, symmetric and positive definite (q+1) × (q+1) matrix, and ⊗ denotes the Kronecker product. This problem is a generalization of the univariate special case for the ratio of normal means. A CR for γ with level of confidence 1 − α is given by a quadratic inequality, which yields the so-called ‘pseudo’ confidence regions (PCR) valid conditionally on subsets of the parameter space. Our discussion is focused on the ‘bounded pseudo’ confidence region (BPCR) given by the interior of a hyperellipsoid. The two conditions necessary for a BPCR to exist are shown to be the consistency conditions concerning the multivariate LFR. The probability that these conditions hold approaches one under reasonable circumstances in many practical situations. Hence, we may have a BPCR with confidence approximately 1 − α. Some simulation results are presented.

5.
We consider robust permutation tests for a location shift in the two-sample case based on estimating equations, comparing test statistics based on a score function and on an M-estimate. First we obtain a form for both tests so that the exact tests may be carried out using the same algorithms as used for permutation tests based on the mean. Then we obtain the Bahadur slopes of the tests based on these two statistics, giving numerical results for two cases: one equivalent to a test based on Huber scores and a particular case of this related to a median test. We show that the two tests have different Bahadur slopes, with neither exceeding the other over the whole range. Finally, we give some numerical results illustrating the robustness properties of the tests and confirming the theoretical results on Bahadur slopes.
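A minimal sketch of a two-sample permutation test in this spirit: the statistic is the first-sample sum of Huber scores of the pooled, robustly centred and scaled observations. The exact form of the estimating-equation statistics studied in the paper may differ; the Huber constant 1.345 and the simulated data are conventional, arbitrary choices.

```python
# Two-sample permutation test for a location shift using Huber scores.
# A sketch only; the paper's exact estimating-equation statistic may differ.
import numpy as np

def huber_psi(u, c=1.345):
    return np.clip(u, -c, c)

def robust_permutation_test(x, y, n_perm=9999, seed=0):
    rng = np.random.default_rng(seed)
    z = np.concatenate([np.asarray(x, float), np.asarray(y, float)])
    med = np.median(z)
    mad = np.median(np.abs(z - med)) / 0.6745          # robust scale estimate
    scores = huber_psi((z - med) / mad)
    n1 = len(x)
    obs = scores[:n1].sum()                            # observed statistic
    center = n1 * scores.mean()                        # permutation mean of the statistic
    perm = np.empty(n_perm)
    for b in range(n_perm):
        idx = rng.permutation(z.size)
        perm[b] = scores[idx[:n1]].sum()
    # two-sided permutation p-value with the usual +1 correction
    p = (1 + np.sum(np.abs(perm - center) >= np.abs(obs - center))) / (n_perm + 1)
    return obs, p

rng = np.random.default_rng(1)
x = rng.standard_t(3, size=20) + 0.8                   # heavy-tailed, shifted sample
y = rng.standard_t(3, size=25)
print(robust_permutation_test(x, y))
```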

6.
It is frequently the case that a response will be related to both a vector of finite length and a function-valued random variable as predictor variables. In this paper, we propose new estimators for the parameters of a partial functional linear model which explores the relationship between a scalar response variable and mixed-type predictors. Asymptotic properties of the proposed estimators are established and finite sample behavior is studied through a small simulation experiment.

7.
The hypothesis testing and confidence region problems are considered for the common mean vector of several multivariate normal populations when the covariance matrices are unknown and possibly unequal. A generalized confidence region is derived using the concept of the generalized p-value. The generalized confidence region is illustrated with two numerical examples. The merits of the proposed method are numerically compared with those of existing methods with respect to their expected areas or expected d-dimensional volumes and coverage probabilities under different scenarios.

8.
Scientific experiments commonly result in clustered discrete and continuous data. Existing methods for analyzing such data include the use of quasi-likelihood procedures and generalized estimating equations to estimate marginal mean response parameters. In applications to areas such as developmental toxicity studies, where discrete and continuous measurements are recorded on each fetus, or clinical ophthalmologic trials, where different types of observations are made on each eye, the assumption that data within a cluster are exchangeable is often very reasonable. We use this assumption to formulate fully parametric regression models for clusters of bivariate data with binary and continuous components. The regression models proposed have marginal interpretations and reproducible model structures. Tractable expressions for the likelihood equations are derived, and iterative schemes are given for computing efficient estimates (MLEs) of the marginal mean, correlations, variances and higher moments. We demonstrate the use of the ‘exchangeable’ procedure with an application to a developmental toxicity study involving fetal weight and malformation data.

9.
The problems of estimating the mean and an upper percentile of a lognormal population based on data with nonnegative values are considered. For estimating the mean of such a population from data that include zeros, a simple confidence interval (CI) is proposed, obtained by modifying Tian's [Inferences on the mean of zero-inflated lognormal data: the generalized variable approach. Stat Med. 2005;24:3223–3232] generalized CI. A fiducial upper confidence limit (UCL) and a closed-form approximate UCL for an upper percentile are developed. Our simulation studies indicate that the proposed methods are very satisfactory in terms of coverage probability and precision, and better than existing methods at maintaining balanced tail error rates. The proposed CI and UCL are simple and easy to calculate. All the methods considered are illustrated using samples of data involving airborne chlorine concentrations and data on diagnostic test costs.
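A rough sketch of a generalized-pivot construction for the mean of zero-inflated lognormal data, in the spirit of the intervals described above. The Beta fiducial pivot used here for the probability of a nonzero observation is an assumption of this sketch, not necessarily the generalized variable used by Tian or by the authors, and the simulated data are arbitrary.

```python
# Generalized-pivot confidence interval for the mean of zero-inflated lognormal data.
# Sketch only: the Beta(k+0.5, m+0.5) pivot for the nonzero proportion is an assumption,
# not necessarily the generalized variable used in the cited work.
import numpy as np

def zi_lognormal_mean_ci(data, alpha=0.05, B=20000, seed=0):
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    pos = data[data > 0]
    n, n1 = data.size, pos.size
    logy = np.log(pos)
    ybar, s2 = logy.mean(), logy.var(ddof=1)

    # generalized pivotal quantities for (mu, sigma^2) of the lognormal part
    U = rng.chisquare(n1 - 1, size=B)
    Z = rng.standard_normal(B)
    g_sigma2 = (n1 - 1) * s2 / U
    g_mu = ybar - Z * np.sqrt(g_sigma2 / n1)
    # fiducial pivot for the probability of a nonzero observation (Jeffreys-type, assumed)
    g_p = rng.beta(n1 + 0.5, n - n1 + 0.5, size=B)

    g_mean = g_p * np.exp(g_mu + g_sigma2 / 2)         # pivot for the overall mean
    return np.quantile(g_mean, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(3)
x = np.where(rng.random(80) < 0.7, rng.lognormal(1.0, 0.8, 80), 0.0)
print(zi_lognormal_mean_ci(x))
```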

10.
We deal with the problem of constructing a confidence band for the 100γth percentile line in the multiple linear regression model with independent, identically normally distributed errors. A method for computing the exact Scheffé-type confidence band over a restricted region of the covariate space is suggested. The confidence band depends on the estimator of the percentile line used. Confidence bands based on different estimators of the percentile line are compared with respect to their average bandwidth.

11.
In the wood industry, it is common practice to compare the same strength property for lumber of two different dimensions, grades, or species in terms of a ratio. Because United States lumber standards are given in terms of the population fifth percentile, and strength problems arise from the weaker fifth percentile rather than the stronger mean, the ratio should be expressed in terms of the fifth percentiles rather than the means of the two strength distributions. Percentiles are estimated by order statistics. Assuming small samples, this paper derives new nonparametric methods, such as a percentile sign test and a percentile Wilcoxon signed rank test, constructs confidence intervals with coverage rate 1 − αx for single percentiles, and computes confidence regions for the ratio of percentiles based on the confidence intervals for single percentiles. A small 1 − αx is enough to obtain good coverage rates for the confidence regions most of the time.
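A small sketch of the standard distribution-free confidence interval for a single percentile based on order statistics and binomial probabilities, which is the kind of single-percentile interval the ratio regions above are built from. The rank search used here is one simple choice among several, and the synthetic strength sample is purely illustrative.

```python
# Distribution-free confidence interval for a single percentile from order statistics.
# Coverage uses the binomial argument P(X_(i) <= xi_p < X_(j)) = F(j-1) - F(i-1),
# where F is the Binomial(n, p) CDF.  The rank search below is one simple choice.
import numpy as np
from scipy.stats import binom

def percentile_ci(sample, p=0.05, conf=0.95):
    x = np.sort(np.asarray(sample, float))
    n = x.size
    best = None
    for i in range(1, n + 1):                      # lower rank (1-based)
        for j in range(i + 1, n + 1):              # upper rank
            cover = binom.cdf(j - 1, n, p) - binom.cdf(i - 1, n, p)
            if cover >= conf:
                if best is None or (j - i) < (best[1] - best[0]):
                    best = (i, j, cover)
                break                              # larger j only widens the interval
    if best is None:
        raise ValueError("requested confidence level not attainable with this sample size")
    i, j, cover = best
    return x[i - 1], x[j - 1], cover

rng = np.random.default_rng(2)
strength = rng.weibull(4.0, size=60) * 50.0        # synthetic lumber-strength sample
print(percentile_ci(strength, p=0.05, conf=0.95))
```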

12.
A modified large-sample (MLS) approach and a generalized confidence interval (GCI) approach are proposed for constructing confidence intervals for intraclass correlation coefficients. Two particular intraclass correlation coefficients are considered in a reliability study. Both subjects and raters are assumed to be random effects in a balanced two-factor design, which includes subject-by-rater interaction. Computer simulation is used to compare the coverage probabilities of the proposed MLS approach (GiTTCH) and GCI approaches with the Leiva and Graybill [1986. Confidence intervals for variance components in the balanced two-way model with interaction. Comm. Statist. Simulation Comput. 15, 301–322] method. The competing approaches are illustrated with data from a gauge repeatability and reproducibility study. The GiTTCH method maintains at least the stated confidence level for interrater reliability. For intrarater reliability, the coverage is accurate in several circumstances but can be liberal in some circumstances. The GCI approach provides reasonable coverage for lower confidence bounds on interrater reliability, but its corresponding upper bounds are too liberal. Regarding intrarater reliability, the GCI approach is not recommended because the lower bound coverage is liberal. Comparing the overall performance of the three methods across a wide array of scenarios, the proposed modified large-sample approach (GiTTCH) provides the most accurate coverage for both interrater and intrarater reliability.

13.
Many of the existing methods for finding calibration intervals in simple linear regression rely on the inversion of prediction limits. In this article, we propose an alternative procedure which involves two stages. In the first stage, we find a confidence interval for the value of the explanatory variable corresponding to the given future value of the response. In the second stage, we enlarge the confidence interval found in the first stage to form a confidence interval, called a calibration interval, for the value of the explanatory variable corresponding to the theoretical mean value of the future observation. In finding the confidence interval in the first stage, we use methods based on hypothesis testing and on the percentile bootstrap. When the errors are normally distributed, the coverage probability of the resulting calibration interval based on hypothesis testing is comparable to that of the classical calibration interval. In the case of non-normal errors, the coverage probability of the calibration interval based on hypothesis testing is much closer to the target value than that of the calibration interval based on the percentile bootstrap.
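A compact sketch of a first-stage percentile-bootstrap interval: the fitted line is inverted at the observed future response, and the inversion is repeated over residual-bootstrap refits. The second-stage enlargement described in the abstract is not reproduced, and the simulated data and function name are illustrative assumptions.

```python
# First-stage percentile-bootstrap interval for the x value corresponding to an
# observed future response y0 in simple linear regression (residual bootstrap).
# The second-stage enlargement into a calibration interval is not reproduced here.
import numpy as np

def bootstrap_inverse_interval(x, y, y0, B=4000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    b, a = np.polyfit(x, y, 1)                     # slope, intercept
    fitted = a + b * x
    resid = y - fitted
    x0_boot = np.empty(B)
    for k in range(B):
        y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
        b_s, a_s = np.polyfit(x, y_star, 1)
        x0_boot[k] = (y0 - a_s) / b_s              # invert the refitted line at y0
    return np.quantile(x0_boot, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 30)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, x.size)
print(bootstrap_inverse_interval(x, y, y0=10.0))
```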

14.
Eva Fišerová, Statistics, 2013, 47(3): 241–251
We consider an unbiased estimator of a function of the mean value parameters which is not efficient. This inefficient estimator is correlated with the residual vector. Thus, if the unit dispersion is unknown, the correct confidence region for a function of the mean value parameters cannot be determined via the standard estimator of the unknown dispersion, except when the ordinary least squares (OLS) estimator is used in a model with a special covariance structure such that the OLS and generalized least squares (GLS) estimators coincide, that is, when the OLS estimator is efficient. Two different estimators of the unit dispersion, independent of the inefficient estimator, are derived in a singular linear statistical model. Their quality was verified by simulations for several types of experimental designs. The two new estimators of the unit dispersion were compared with the standard estimators based on the GLS and OLS estimators of the function of the mean value parameters. The OLS estimator was considered in an incorrect model with a different covariance matrix, such that the originally inefficient estimator becomes efficient. The numerical examples led to a slightly surprising result which seems to be due to the behaviour of the data. An example from geodetic practice is presented in the paper.

15.
In this paper, we consider the classification of high-dimensional vectors based on a small number of training samples from each class. The proposed method follows the Bayesian paradigm, and it is based on a small vector which can be viewed as the regression of the new observation on the space spanned by the training samples. The classification method provides posterior probabilities that the new vector belongs to each of the classes, hence it adapts naturally to any number of classes. Furthermore, we show a direct similarity between the proposed method and the multicategory linear support vector machine introduced in Lee et al. [2004. Multicategory support vector machines: theory and applications to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association 99 (465), 67–81]. We compare the performance of the technique proposed in this paper with the SVM classifier using real-life military and microarray datasets. The study shows that the misclassification errors of both methods are very similar, and that the posterior probabilities assigned to each class are fairly accurate.
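A bare-bones sketch of the underlying idea: classify a new high-dimensional vector by regressing it on the span of each class's training samples and choosing the class with the smallest residual. The full Bayesian posterior computation and the SVM comparison in the paper are not reproduced; the simulated classes and dimensions are arbitrary.

```python
# Classify a high-dimensional vector by regressing it on the span of each class's
# training samples (nearest-subspace rule).  A sketch of the idea only; the paper's
# Bayesian posterior computation is more elaborate.
import numpy as np

def nearest_subspace_classify(new_vec, class_samples):
    """class_samples: dict mapping label -> (n_train, dim) array of training vectors."""
    residuals = {}
    for label, samples in class_samples.items():
        A = np.asarray(samples, float).T                   # dim x n_train basis matrix
        coef, *_ = np.linalg.lstsq(A, new_vec, rcond=None)
        residuals[label] = np.linalg.norm(new_vec - A @ coef)
    # a smaller residual means the new vector lies closer to that class's training span
    return min(residuals, key=residuals.get), residuals

rng = np.random.default_rng(7)
dim, n_train = 200, 5
train = {"A": rng.normal(0.0, 1.0, size=(n_train, dim)),
         "B": rng.normal(1.0, 1.0, size=(n_train, dim))}
test_vec = rng.normal(1.0, 1.0, size=dim)                  # drawn from class "B"
print(nearest_subspace_classify(test_vec, train))
```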

16.
Suppose we observe two independent random vectors, each having a multivariate normal distribution with covariance matrix known up to an unknown scale factor σ². The first random vector has a known mean vector while the second has an unknown mean vector. Interest centers around finding confidence intervals for σ² with confidence coefficient 1 − α. Standard results show that, when we only observe the first random vector, an optimal (i.e., smallest length) confidence interval C, based on the well-known chi-squared statistic, can be constructed for σ². When we additionally observe the second random vector, the confidence interval C is no longer optimal for estimating σ². One criterion useful for detecting the non-optimality of a confidence interval C concerns whether C admits positively or negatively biased relevant subsets. This criterion has recently received a good deal of attention. It is shown here that under some conditions the confidence interval C admits positively biased relevant subsets.

Applications of this result to the construction of ‘better’ unconditional confidence intervals for σ² are presented. Some simulation results are given to indicate the typical extent of improvement attained.

17.
The results of analyzing experimental data using a parametric model may depend heavily on the chosen model for the regression and variance functions, and moreover on a possibly underlying preliminary transformation of the variables. In this paper we propose and discuss a complex procedure which consists in a simultaneous selection of parametric regression and variance models from a relatively rich model class and of Box-Cox variable transformations by minimization of a cross-validation criterion. For this it is essential to introduce modifications of the standard cross-validation criterion adapted to each of the following objectives: 1. estimation of the unknown regression function, 2. prediction of future values of the response variable, 3. calibration, or 4. estimation of some parameter with a certain meaning in the corresponding field of application. Our idea of a criterion-oriented combination of procedures (which, if applied at all, are usually applied independently or sequentially) is expected to lead to more accurate results. We show how the accuracy of the parameter estimators can be assessed by a “moment-oriented bootstrap procedure”, which is an essential modification of the “wild bootstrap” of Härdle and Mammen through the use of more accurate variance estimates. This new procedure and its refinement by a bootstrap-based pivot (“double bootstrap”) is also used for the construction of confidence, prediction and calibration intervals. Programs written in Splus which realize our strategy for nonlinear regression modelling and parameter estimation are described as well. The performance of the selected model is discussed, and the behaviour of the procedures is illustrated, e.g., by an application in radioimmunological assay.
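A toy sketch of the combined-selection idea only: a Box-Cox transformation parameter and a candidate regression model are chosen jointly by minimizing a leave-one-out cross-validation error computed on the original response scale. The variance-model selection, the adapted criteria for the four objectives, and the moment-oriented bootstrap are not reproduced; the lambda grid, candidate models, and simulated data are arbitrary choices.

```python
# Joint selection of a Box-Cox response transformation and a regression model by
# leave-one-out cross-validation on the original response scale (toy sketch).
import numpy as np

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if abs(lam) < 1e-8 else (lam * z + 1.0) ** (1.0 / lam)

def loo_cv_error(X, y, lam):
    """LOO prediction error on the original y scale for design matrix X."""
    n = len(y)
    z = boxcox(y, lam)
    errs = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[keep], z[keep], rcond=None)
        errs[i] = y[i] - inv_boxcox(X[i] @ beta, lam)
    return np.mean(errs**2)

rng = np.random.default_rng(11)
x = np.linspace(0.5, 5.0, 40)
y = np.exp(0.3 + 0.5 * x + rng.normal(0, 0.15, x.size))      # log scale is the "right" one

models = {"linear": np.column_stack([np.ones_like(x), x]),
          "quadratic": np.column_stack([np.ones_like(x), x, x**2])}
lambdas = [-0.5, 0.0, 0.5, 1.0]
scores = {(name, lam): loo_cv_error(X, y, lam)
          for name, X in models.items() for lam in lambdas}
best = min(scores, key=scores.get)
print("selected (model, lambda):", best, " CV MSE:", scores[best])
```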

18.
R. Van de Ven & N. C. Weber, Statistics, 2013, 47(3–4): 345–352
Upper and lower bounds are obtained for the mean of the negative binomial distribution. These bounds are simple functions of a percentile determined by the shape parameter. The result is then used to obtain a robust estimate of the mean when the shape parameter is known.

19.
We consider the problem of robust M-estimation of a vector of regression parameters, when the errors are dependent. We assume a weakly stationary, but otherwise quite general dependence structure. Our model allows for the representation of the correlations of any time series of finite length. We first construct initial estimates of the regression, scale, and autocorrelation parameters. The initial autocorrelation estimates are used to transform the model to one of approximate independence. In this transformed model, final one-step M-estimates are calculated. Under appropriate assumptions, the regression estimates so obtained are asymptotically normal, with a variance-covariance structure identical to that in the case in which the autocorrelations are known a priori. The results of a simulation study are given. Two versions of our estimator are compared with the L1-estimator and several Huber-type M-estimators. In terms of bias and mean squared error, the estimators are generally very close. In terms of the coverage probabilities of confidence intervals, our estimators appear to be quite superior to both the L1-estimator and the other estimators. The simulations also indicate that the approach to normality is quite fast.

20.
The mean vector associated with several independent variates from the exponential subclass of Hudson (1978) is estimated under weighted squared error loss. In particular, the formal Bayes and “Stein-like” estimators of the mean vector are given. Conditions are also given under which these estimators dominate any of the “natural estimators”. Our conditions for dominance are motivated by a result of Stein (1981), who treated the N_p(θ, I) case with p ≥ 3. Stein showed that formal Bayes estimators dominate the usual estimator if the marginal density of the data is superharmonic. Our present exponential class generalization entails an elliptic differential inequality in some natural variables. Actually, we assume that each component of the data vector has a probability density function which satisfies a certain differential equation. While the densities of Hudson (1978) are particular solutions of this equation, other solutions are not of the exponential class if certain parameters are unknown. Our approach allows for the possibility of extending the parametric Stein-theory to useful nonexponential cases, but the problem of nuisance parameters is not treated here.
