Similar Literature
20 similar documents retrieved.
1.
The problem of confidence estimation of a normal mean vector when data on different subsets of response variables are missing is considered. A simple approximate confidence region is proposed when the data matrix is of monotone pattern. Simultaneous inferential procedures based on Scheffé's method and Bonferroni's method are outlined. Further, applications of the results to a repeated measurements model are given. The results are illustrated using a practical example.

2.
Taguchi methods are currently attracting much attention, and certain cavalier interpretations of mean squares in saturated fractional designs have received criticism. After two examples illustrating the problem, some procedures are tentatively proposed for improving such analyses, but there is scope for refining these methods, and for research into the general problems of using designs with no independent estimate of experimental error.

3.
The multivariate log-normal distribution is a good candidate for describing data that are not only positive and skewed but also contain many characteristic values. In this study, we apply the generalized variable method to compare the mean vectors of two independent multivariate log-normal populations that display heteroscedasticity. Two generalized pivotal quantities are derived for constructing the generalized confidence region and for testing the difference between two mean vectors. Simulation results indicate that the proposed procedures exhibit satisfactory performance regardless of the sample sizes and heteroscedasticity. The type I error rates obtained are consistent with expectations, and the coverage probabilities are close to the nominal level when compared with the other currently available method. These features make the proposed method a worthy alternative for inferential analysis of problems involving multivariate log-normal means. The results are illustrated using three examples.

4.
Situations frequently arise in practice in which mean residual life (mrl) functions must be ordered. For example, in a clinical trial with three groups, let e1, e2, and e3 be the mrl functions for the disease groups under the standard and experimental treatments and for the disease-free group, respectively. The well-documented mrl functions e1 and e3 can be used to generate a better estimate for e2 under the mrl restriction e1 ≤ e2 ≤ e3. In this paper we propose nonparametric estimators of the mean residual life function where both upper and lower bounds are given. Small and large sample properties of the estimators are explored. A simulation study shows that the proposed estimators have uniformly smaller mean squared error than the unrestricted empirical mrl functions. The proposed estimators are illustrated using a real data set from a cancer clinical trial.
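The abstract does not give the form of the restricted estimators. Purely to fix ideas, the sketch below clips the unrestricted empirical mrl of the middle group pointwise between known lower and upper bound functions; this naive projection is not the estimator studied in the paper, and the lifetimes and bound values are invented.

```python
import numpy as np

def empirical_mrl(x, t):
    """Unrestricted empirical mean residual life at time t."""
    tail = x[x > t]
    return tail.mean() - t if tail.size else 0.0

def restricted_mrl(x, t, lower, upper):
    """Naive pointwise projection of the empirical mrl onto [lower(t), upper(t)]."""
    return float(np.clip(empirical_mrl(x, t), lower(t), upper(t)))

# toy example: exponential lifetimes for the middle group, constant hypothetical bounds
rng = np.random.default_rng(0)
x2 = rng.exponential(scale=2.0, size=50)   # group under the experimental treatment
e1 = lambda t: 1.5                          # assumed mrl bound from the standard-treatment group
e3 = lambda t: 3.0                          # assumed mrl bound from the disease-free group
print(restricted_mrl(x2, t=1.0, lower=e1, upper=e3))
```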

5.
Consider a random sample X1, X2, …, Xn from a normal population with unknown mean and standard deviation. Only the sample size, mean and range are recorded, and it is necessary to estimate the unknown population mean and standard deviation. In this paper the estimation of the mean and standard deviation is carried out from a Bayesian perspective by using a Markov chain Monte Carlo (MCMC) algorithm to simulate samples from the intractable joint posterior distribution of the mean and standard deviation. The proposed methodology is applied to simulated and real data. The real data refer to the sugar content (°Brix level) of orange juice produced in different countries.
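The abstract does not specify the sampler. As a rough stand-in for the posterior simulation, the sketch below uses a vectorized rejection (ABC-style) step that keeps parameter draws whose simulated sample mean and range come closest to the observed summaries; the priors, the retention fraction, and the summary values n, obs_mean, obs_range are all invented for illustration and are not the paper's MCMC algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, obs_mean, obs_range = 20, 11.8, 3.1      # hypothetical Brix summary statistics
m = 200_000                                  # number of candidate parameter draws

mu = rng.normal(obs_mean, 2.0, size=m)       # vague prior centred at the observed mean
sigma = rng.uniform(0.1, 5.0, size=m)        # flat prior for the standard deviation
sims = rng.normal(mu[:, None], sigma[:, None], size=(m, n))

# distance between simulated and observed summaries (sample mean and range)
dist = np.abs(sims.mean(axis=1) - obs_mean) + np.abs(np.ptp(sims, axis=1) - obs_range)
keep = dist < np.quantile(dist, 0.005)       # keep the closest 0.5% as approximate posterior draws

print(mu[keep].mean(), sigma[keep].mean())   # rough posterior means of mu and sigma
```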

6.
In the present paper, we introduce and study a class of distributions that has a linear mean residual quantile function. Various distributional properties and reliability characteristics of the class are studied. Some characterizations of the class of distributions are presented. We then present generalizations of this class of distributions using the relationship between various quantile-based reliability measures. The method of L-moments is employed to estimate the parameters of the class of distributions. Finally, we apply the proposed class of distributions to a real data set.

7.
This paper is concerned with model selection and model averaging procedures for partially linear single-index models. The profile least squares procedure is employed to estimate regression coefficients for the full model and submodels. We show that the estimators for submodels are asymptotically normal. Based on the asymptotic distribution of the estimators, we derive the focused information criterion (FIC), formulate the frequentist model average (FMA) estimators, and construct proper confidence intervals for the FMA estimators and the FIC estimator, a special case of the FMA estimators. Monte Carlo studies are performed to demonstrate the superiority of the proposed method over the full model, and over models chosen by AIC or BIC, in terms of coverage probability and mean squared error. Our approach is further applied to real data from a male fertility study to explore potential factors related to sperm concentration and to estimate the relationship between sperm concentration and monobutyl phthalate.

8.
When describing a failure time distribution, the mean residual life is sometimes preferred to the survival or hazard rate. Regression analysis making use of the mean residual life function has recently drawn a great deal of attention. In this paper, a class of mean residual life regression models is proposed for censored data, and estimation procedures and a goodness-of-fit test are developed. Both asymptotic and finite sample properties of the proposed estimators are established, and the proposed methods are applied to a cancer data set from a clinical trial.

9.
Two-stage procedures are introduced to control the width and coverage (validity) of confidence intervals for the estimation of the mean, the between-groups variance component, and certain ratios of the variance components in one-way random effects models. The procedures use the pilot sample data to estimate an “optimal” group size and then proceed to determine the number of groups by a stopping rule. Such sampling plans give rise to unbalanced data, which are consequently analyzed by the harmonic mean method. Several asymptotic results concerning the proposed procedures are given, along with simulation results to assess their performance in moderate sample size situations. The proposed procedures were found to effectively control the width and probability of coverage of the resulting confidence intervals in all cases and were also found to be robust in the presence of missing observations. From a practical point of view, the procedures are illustrated using a real data set, and it is shown that the resulting unbalanced designs tend to require smaller sample sizes than are needed in a corresponding balanced design where the group size is arbitrarily pre-specified.

10.
Highly skewed and non-negative data can often be modeled by the delta-lognormal distribution in fisheries research. However, the coverage probabilities of existing interval estimation procedures are less satisfactory for small sample sizes and highly skewed data. We propose a heuristic method for estimating confidence intervals for the mean of the delta-lognormal distribution. The method uses an asymptotic generalized pivotal quantity to construct a generalized confidence interval for the mean of the delta-lognormal distribution. Simulation results show that the proposed interval estimation procedure yields satisfactory coverage probabilities, expected interval lengths, and reasonable relative biases. Finally, the proposed method is applied to red cod density data for demonstration.
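The abstract does not spell out its pivotal quantities. The sketch below uses a textbook generalized-pivotal construction for the delta-lognormal mean (a chi-square pivot for σ², a normal pivot for μ, and a beta pivot for the zero proportion); it may well differ from the heuristic proposed in the paper, and the toy data are simulated rather than the red cod densities.

```python
import numpy as np

def delta_lognormal_gci(x, level=0.95, B=20_000, rng=None):
    """Generalized confidence interval for the delta-lognormal mean (textbook GPQ version)."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    n, n1 = x.size, int((x > 0).sum())
    n0 = n - n1                                        # number of zero observations
    y = np.log(x[x > 0])
    ybar, s2 = y.mean(), y.var(ddof=1)

    z = rng.standard_normal(B)
    u = rng.chisquare(n1 - 1, B)
    sigma2_g = (n1 - 1) * s2 / u                       # pivot for sigma^2
    mu_g = ybar - z * np.sqrt(sigma2_g / n1)           # pivot for mu
    delta_g = rng.beta(n0 + 0.5, n1 + 0.5, B)          # Jeffreys-type pivot for the zero proportion
    mean_g = (1.0 - delta_g) * np.exp(mu_g + sigma2_g / 2.0)

    a = (1.0 - level) / 2.0
    return np.quantile(mean_g, [a, 1.0 - a])

# toy "density" data: roughly 30% zeros, lognormal values otherwise
rng = np.random.default_rng(7)
x = np.where(rng.random(60) < 0.3, 0.0, rng.lognormal(mean=1.0, sigma=0.8, size=60))
print(delta_lognormal_gci(x, rng=rng))
```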

11.
The counting process with the Cox-type intensity function has been commonly used to analyse recurrent event data. This model essentially assumes that the underlying counting process is a time-transformed Poisson process and that the covariates have multiplicative effects on the mean and rate function of the counting process. Recently, Pepe and Cai, and Lawless and co-workers have proposed semiparametric procedures for making inferences about the mean and rate function of the counting process without the Poisson-type assumption. In this paper, we provide a rigorous justification of such robust procedures through modern empirical process theory. Furthermore, we present an approach to constructing simultaneous confidence bands for the mean function and describe a class of graphical and numerical techniques for checking the adequacy of the fitted mean–rate model. The advantages of the robust procedures are demonstrated through simulation studies. An illustration with multiple-infection data taken from a clinical study on chronic granulomatous disease is also provided.

12.
The first step in statistical analysis is parameter estimation. In multivariate analysis, one of the parameters of interest to be estimated is the mean vector, and it is usually assumed that the data come from a multivariate normal distribution. In this situation, the maximum likelihood estimator (MLE), that is, the sample mean vector, is the best estimator. However, when outliers exist in the data, the use of the sample mean vector will result in poor estimation, so other estimators which are robust to the existence of outliers should be used. The most popular robust multivariate estimator of the mean vector is the S-estimator, which has desirable properties. However, computing this estimator requires a robust estimate of the mean vector as a starting point. Usually the minimum volume ellipsoid (MVE) is used as a starting point in computing the S-estimator. For high-dimensional data, computing the MVE takes too much time; in some cases, this time is so large that existing computers cannot perform the computation. In addition to the computation time, for high-dimensional data sets the MVE method is not precise. In this paper, a robust starting point for the S-estimator based on robust clustering is proposed, which can be used for estimating the mean vector of high-dimensional data. The performance of the proposed estimator in the presence of outliers is studied, and the results indicate that the proposed estimator performs precisely and much better than some of the existing robust estimators for high-dimensional data.

13.
Several multivariate pairwise multiple comparison procedures are proposed as follow-ups to a significant multivariate analysis of variance. The Peritz procedure is generalized from univariate to several multivariate applications. Procedures are evaluated using overall power, any-pair power, and all-pairs power applied to mean vectors with common sample sizes of 4, 5, and 9. Monte Carlo simulation demonstrated greater power than previously proposed univariate procedures in many conditions, especially for all-pairs power. The multivariate Peritz procedure based on the Lawley–Hotelling trace was found to be the most powerful in many conditions.

14.
In this article, we propose a new class of estimators of the finite population mean that uses two auxiliary variables under two different sampling schemes, namely simple random sampling and stratified random sampling. The proposed class of estimators gives the minimum mean squared error compared with all other considered estimators. Some real data sets are used to observe the performance of the estimators. We show numerically that the proposed class of estimators performs better than all other competing estimators.
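The abstract does not give the form of the proposed class. For orientation only, the sketch below implements the classical multiple-regression estimator of the mean under simple random sampling with two auxiliary variables whose population means are known; it is the kind of benchmark such proposals are typically compared against, not the new class itself, and the population is simulated.

```python
import numpy as np

def regression_estimator(y, x1, x2, X1_bar, X2_bar):
    """Classical multiple-regression estimator of the finite population mean of y
    using two auxiliaries with known population means X1_bar and X2_bar (SRS)."""
    X = np.column_stack([x1, x2])
    # least-squares slopes of y on the centred auxiliaries
    b = np.linalg.lstsq(X - X.mean(axis=0), y - y.mean(), rcond=None)[0]
    return y.mean() + b @ np.array([X1_bar - x1.mean(), X2_bar - x2.mean()])

# toy population and a simple random sample of size 80
rng = np.random.default_rng(3)
N = 5_000
X1, X2 = rng.gamma(3, 2, N), rng.normal(50, 10, N)
Y = 2.0 + 0.8 * X1 + 0.3 * X2 + rng.normal(0, 2, N)
idx = rng.choice(N, size=80, replace=False)
print(regression_estimator(Y[idx], X1[idx], X2[idx], X1.mean(), X2.mean()), Y.mean())
```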

15.
Widely recognized in many fields, including economics, engineering, epidemiology, health sciences, technology, and wildlife management, length-biased sampling generates biased and right-censored data but often provides the best information available for statistical inference. Different from traditional right-censored data, length-biased data have unique aspects resulting from their sampling procedures. We exploit these unique aspects and propose a general imputation-based estimation method for analyzing length-biased data under a class of flexible semiparametric transformation models. We present new computational algorithms that can jointly estimate the regression coefficients and the baseline function semiparametrically. The imputation-based method under the transformation model provides an unbiased estimator regardless of whether the censoring depends on the covariates. We establish large-sample properties using empirical process methods. Simulation studies show that, for small to moderate sample sizes, the proposed procedure has smaller mean squared errors than two existing estimation procedures. Finally, we demonstrate the estimation procedure with a real data example.

16.
Recurrence data are collected to study the recurrent events in biological, physical, and other systems. Quantities of interest include the mean cumulative number of events and the mean cumulative cost of the events. The mean cumulative function (MCF) can be estimated using non-parametric (NP) methods or by fitting parametric models, and many procedures have been suggested for constructing confidence intervals (CIs) for the MCF. This paper summarizes the results of a large simulation study that was designed to compare five CI procedures for both NP and parametric estimation. When performing parametric estimation, we assume the power-law non-homogeneous Poisson process (NHPP) model. Our results include the evaluation of these procedures when they are used for window-observation recurrence data, where the recurrence histories of some systems are available only in observation windows with gaps in between.
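As a point of reference, the sketch below computes the basic non-parametric MCF (Nelson-type) estimate for the simple situation in which every system is observed from time zero until its own censoring time; the window-observation setting with gaps studied in the paper requires a more general risk-set calculation, and the toy recurrence data are invented.

```python
import numpy as np

def mcf_estimate(event_times, end_times, t_grid):
    """Non-parametric MCF estimate: at each event time s, add (events at s) / (systems observed at s).
    event_times: list of arrays, recurrence times for each system
    end_times:   array, end of observation for each system (all assumed to start at 0)
    """
    all_events = np.sort(np.concatenate(event_times))
    mcf = []
    for t in t_grid:
        total = 0.0
        for s in all_events[all_events <= t]:
            at_risk = np.sum(end_times >= s)      # systems still under observation at s
            total += 1.0 / at_risk
        mcf.append(total)
    return np.array(mcf)

# toy recurrence data for three systems observed over different lengths
events = [np.array([1.2, 3.4, 5.0]), np.array([2.1, 4.4]), np.array([0.9])]
ends = np.array([6.0, 5.0, 2.5])
print(mcf_estimate(events, ends, t_grid=np.array([1.0, 2.5, 5.0])))
```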

17.
The authors discuss the bias of the estimate of the variance of the overall effect synthesized from individual studies by using the variance-weighted method. This bias is proven to be negative. Furthermore, the conditions, the likelihood of underestimation, and the bias of this conventional estimate are studied under the assumption that the estimates of the effect follow a normal distribution with common mean. The likelihood of underestimation is very high (e.g. it is greater than 85% when the sample sizes in the two combined studies are less than 120). Alternative, less biased estimates for the cases with and without homogeneity of the variances are given in order to adjust for the sample size and the variation of the population variance. In addition, the sample-size-weighted method is suggested if the consistency of the sample variances is violated. Finally, a real example is presented to show the differences among the above three estimation methods.
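A quick Monte Carlo check makes the underestimation concrete: for two small studies with a common true mean, the average of the conventional variance-weighted variance estimate falls below the empirical variance of the pooled estimate. The study sizes and standard deviations below are arbitrary toy values, not the settings analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sizes = [15, 20]          # two small studies sharing a common true mean of 0
sigmas = [1.0, 2.0]
reps = 50_000

est, var_hat = [], []
for _ in range(reps):
    means, ses2 = [], []
    for n, sd in zip(n_sizes, sigmas):
        x = rng.normal(0.0, sd, n)
        means.append(x.mean())
        ses2.append(x.var(ddof=1) / n)    # estimated variance of each study mean
    w = 1.0 / np.array(ses2)
    est.append(np.sum(w * means) / w.sum())
    var_hat.append(1.0 / w.sum())         # conventional variance-weighted variance estimate

print("empirical variance of the pooled estimate:", np.var(est))
print("average conventional variance estimate:   ", np.mean(var_hat))
```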

18.
The Birnbaum–Saunders distribution is widely used in reliability applications to model failure times. For several samples from possibly different Birnbaum–Saunders distributions, if their means can be considered the same, it is of importance to make inference for the common mean. This paper presents procedures for interval estimation and hypothesis testing for the common mean of several Birnbaum–Saunders populations. The proposed approaches are hybrids between the generalized inference method and large-sample theory. Simulation studies are conducted to assess the performance of the proposed approaches, and the results indicate that they perform well. Finally, the proposed approaches are applied to a real example on the fatigue life of 6061-T6 aluminum coupons for illustration.

19.
In this paper, we develop non-parametric estimation of the mean residual quantile function based on right-censored data. Two non-parametric estimators, one based on the empirical quantile function and the other using the kernel smoothing method, are proposed. Asymptotic properties of the estimators are discussed. Monte Carlo simulation studies are conducted to compare the two estimators. The method is illustrated with the aid of two real data sets.
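To make the estimand concrete, the sketch below evaluates the empirical mean residual quantile function M(u) = E[X − Q(u) | X > Q(u)] for complete (uncensored) data; the censored-data estimators studied in the paper replace the empirical quantile with a Kaplan–Meier-based one. The Weibull sample is simulated purely for illustration.

```python
import numpy as np

def empirical_mrq(x, u):
    """Empirical mean residual quantile function M(u) = E[X - Q(u) | X > Q(u)],
    for complete (uncensored) data."""
    q_u = np.quantile(x, u)
    tail = x[x > q_u]
    return tail.mean() - q_u if tail.size else 0.0

rng = np.random.default_rng(2)
x = rng.weibull(1.5, size=200) * 10.0
for u in (0.25, 0.5, 0.75):
    print(u, round(empirical_mrq(x, u), 3))
```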

20.
In this paper we illustrate the usefulness of influence functions for studying the properties of various statistical estimators of mean rain rate using space-borne radar data. In Martin (1999), estimators using censoring, minimum chi-square, and least squares are compared in terms of asymptotic variance. Here, we use influence functions to consider the robustness properties of the same estimators. We also obtain formulas for the asymptotic variance of the estimators using influence functions, and thus show that they may also be used for studying relative efficiency. The least squares estimator, although less efficient, is shown to be more robust in the sense that it has the smallest gross-error sensitivity. In some cases, influence functions associated with the estimators reveal counterintuitive behaviour; for example, observations that are less than the mean rain rate may increase the estimated mean. The additional information gleaned from influence functions may be used to better understand and improve the estimation procedures themselves.

