Similar Literature
20 similar documents retrieved.
1.
The hypothesis testing and confidence region problems are considered for the common mean vector of several multivariate normal populations when the covariance matrices are unknown and possibly unequal. A generalized confidence region is derived using the generalized variable method based on the generalized p-value. The generalized confidence region is illustrated with two numerical examples. The merits of the proposed method are numerically compared with those of existing methods with respect to their expected areas (or expected d-dimensional volumes) and coverage probabilities under different scenarios.

2.
In this article, we consider the problems of constructing a confidence interval for a Weibull mean and setting prediction limits for future samples. Specifically, we construct upper prediction limits that include at least l of m samples from a Weibull distribution at each of r locations. The methods are based on the concept of the generalized variable approach. The procedures can be easily extended to type II censored samples, and they can be used to find approximate inferential procedures for type I censored samples. The proposed methods are conceptually simple and easy to use. The results are illustrated using some practical examples.

3.
The problem of interval estimation of the stress–strength reliability involving two independent Weibull distributions is considered. An interval estimation procedure based on the generalized variable (GV) approach is given when the shape parameters are unknown and arbitrary. The coverage probabilities of the GV approach are evaluated by Monte Carlo simulation. Simulation studies show that the proposed generalized variable approach is very satisfactory even for small samples. For the case of equal shape parameters, it is shown that the generalized confidence limits are exact. Some available asymptotic methods for the case of equal shape parameters are described and their coverage probabilities are evaluated using Monte Carlo simulation. Simulation studies indicate that no asymptotic approach based on the likelihood method is satisfactory even for large samples. Applicability of the GV approach to censored samples is also discussed. The results are illustrated using an example.
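As a rough orientation to the quantity being estimated (not the GV interval construction itself), the sketch below estimates the stress–strength reliability R = P(X < Y) for two independent Weibull variables by plain Monte Carlo; the shape and scale values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Weibull parameters (shape, scale) for stress X and strength Y.
shape_x, scale_x = 1.8, 2.0
shape_y, scale_y = 2.5, 3.0

n_sim = 200_000
x = scale_x * rng.weibull(shape_x, n_sim)   # numpy's weibull draws have scale 1
y = scale_y * rng.weibull(shape_y, n_sim)

# Monte Carlo estimate of the stress-strength reliability R = P(X < Y).
r_hat = np.mean(x < y)
print(f"Estimated R = P(X < Y): {r_hat:.4f}")
```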

4.
This paper discusses a new perspective on fitting spatial point process models. Specifically, the spatial point process of interest is treated as a marked point process where at each observed event x a stochastic process M(x;t), 0 < t < r, is defined. Each mark process M(x;t) is compared with its expected value, say F(t;θ), to produce a discrepancy measure at x, where θ is a set of unknown parameters. All individual discrepancy measures are combined to define an overall measure which is then minimized to estimate the unknown parameters. The proposed approach can be easily applied to data with sample sizes commonly encountered in practice. Simulations and an application to a real data example demonstrate the efficacy of the proposed approach.

5.
Various subset selection methods are based on the least squares parameter estimation method. These methods do not perform well in the presence of outliers, multicollinearity, or both. Few subset selection methods based on the M-estimator are available in the literature for outlier-contaminated data, and very few subset selection methods address the problem of multicollinearity with the ridge regression estimator. In this article, we develop a generalized version of the Sp statistic based on the jackknifed ridge M-estimator for subset selection in the presence of outliers and multicollinearity. We establish the equivalence of this statistic with the existing Cp, Sp and Rp statistics. The performance of the proposed method is illustrated through some numerical examples, and its correct model selection ability is evaluated using a simulation study.

6.
We consider methods for reducing the effect of fitting nuisance parameters on a general estimating function, when the estimating function depends not only on a vector of parameters of interest, θ, but also on a vector of nuisance parameters, λ. We propose a class of modified profile estimating functions with plug-in bias reduced by two orders. A robust version of the adjustment term does not require any information about the probability mechanism beyond that required by the original estimating function. An important application of this method is bias correction for the generalized estimating equation in analyzing stratified longitudinal data, where the stratum-specific intercepts are considered as fixed nuisance parameters, the dependence of the expected outcome on the covariates is of interest, and the intracluster correlation structure is unknown. Furthermore, when the quasi-scores for θ and λ are available, we propose an additional multiplicative adjustment term such that the modified profile estimating function is approximately information unbiased. This multiplicative adjustment term can serve as an optimal weight in the analysis of stratified studies. A brief simulation study shows that the proposed method considerably reduces the impact of the nuisance parameters.

7.
8.
A supersaturated design is a design whose run size is not enough for estimating all the main effects. It is commonly used in screening experiments, where the goal is to identify sparse and dominant active factors at low cost. In this paper, we study a variable selection method via the Dantzig selector, proposed by Candes and Tao [2007. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics 35, 2313–2351], to screen important effects. A graphical procedure and an automated procedure are suggested to accompany the method. Simulation shows that this method performs well compared to existing methods in the literature and is more efficient at estimating the model size.
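The Dantzig selector reduces to a linear program: minimize the l1-norm of the coefficients subject to a sup-norm bound on the correlation between the residuals and the design columns. Below is a minimal sketch using scipy's linear-programming solver; the constraint level delta, the synthetic design, and the chosen active factors are hypothetical illustrations, not the tuning or screening rules recommended in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, delta):
    """Solve min ||beta||_1  s.t.  ||X'(y - X beta)||_inf <= delta via a linear program."""
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    I, Z = np.eye(p), np.zeros((p, p))
    # Decision vector: [beta (p), t (p)] with |beta_j| <= t_j.
    c = np.concatenate([np.zeros(p), np.ones(p)])
    A_ub = np.block([[ I, -I],    #  beta_j - t_j <= 0
                     [-I, -I],    # -beta_j - t_j <= 0
                     [-XtX, Z],   #  X'y - X'X beta <= delta
                     [ XtX, Z]])  #  X'X beta - X'y <= delta
    b_ub = np.concatenate([np.zeros(2 * p), delta - Xty, delta + Xty])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Small synthetic screening example: 3 active factors out of 20, 14 runs.
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(14, 20))
beta_true = np.zeros(20)
beta_true[[0, 5, 11]] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(0, 0.5, 14)
print(np.round(dantzig_selector(X, y, delta=8.0), 2))
```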

9.
In the context of Bayesian statistical analysis, elicitation is the process of formulating a prior density f(·) about one or more uncertain quantities to represent a person's knowledge and beliefs. Several different methods of eliciting prior distributions for one unknown parameter have been proposed. However, there are relatively few methods for specifying a multivariate prior distribution and most are just applicable to specific classes of problems and/or based on restrictive conditions, such as independence of variables. Besides, many of these procedures require the elicitation of variances and correlations, and sometimes elicitation of hyperparameters which are difficult for experts to specify in practice. Garthwaite et al. (2005) discuss the different methods proposed in the literature and the difficulties of eliciting multivariate prior distributions. We describe a flexible method of eliciting multivariate prior distributions applicable to a wide class of practical problems. Our approach does not assume a parametric form for the unknown prior density f(·); instead we use nonparametric Bayesian inference, modelling f(·) by a Gaussian process prior distribution. The expert is then asked to specify certain summaries of his/her distribution, such as the mean, mode, marginal quantiles and a small number of joint probabilities. The analyst receives that information, treating it as a data set D with which to update his/her prior beliefs to obtain the posterior distribution for f(·). Theoretical properties of joint and marginal priors are derived and numerical illustrations to demonstrate our approach are given.

10.
In this article we study the problem of classification of three-level multivariate data, where multiple q-variate observations are measured at u sites and over p time points, under the assumption of multivariate normality. The new classification rules with certain structured and unstructured mean vectors and covariance structures are very efficient in small-sample scenarios, when the number of observations is not adequate to estimate the unknown variance–covariance matrix. These classification rules successfully model the correlation structure of successive repeated measurements over time. Computational algorithms for maximum likelihood estimates of the unknown population parameters are presented. Simulation results show that the introduction of sites in the classification rules improves their performance over the existing classification rules without the sites.

11.
Statistical approaches for addressing multiplicity in clinical trials range from the very conservative (the Bonferroni method) to the least conservative (the fixed sequence approach). Recently, several authors have proposed methods that combine the merits of the two extreme approaches. Wiens [2003. A fixed sequence Bonferroni procedure for testing multiple endpoints. Pharmaceutical Statistics 2, 211–215], for example, considered an extension of the Bonferroni approach where the type I error rate (α) is allocated among the endpoints; however, testing proceeds in a pre-determined order, allowing the type I error rate to be saved for later use as long as the null hypotheses are rejected. This leads to higher power in testing later null hypotheses. In this paper, we consider an extension of Wiens’ approach by taking into account correlations among endpoints for achieving higher flexibility in testing. We show strong control of the family-wise type I error rate for this extension and provide critical values and significance levels for testing up to three endpoints with equal correlations, and show how to calculate them for other correlation structures. We also present results of a simulation experiment comparing the power of the proposed method with those of Wiens’ and others. The results of this experiment show that the magnitude of the gain in power of the proposed method depends on the prospective ordering of testing of the endpoints, the magnitude of the treatment effects of the endpoints and the magnitude of correlation between endpoints. Finally, we consider applications of the proposed method for clinical trials with multiple time points and multiple doses, where correlations among endpoints frequently arise.
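A minimal sketch of the carry-forward rule described in the abstract (allocated α is passed on to later endpoints only while null hypotheses keep being rejected), not of the correlation-adjusted extension proposed in the paper; the α split and p-values in the example call are hypothetical.

```python
def fixed_sequence_bonferroni(p_values, alphas):
    """Wiens-type fixed-sequence Bonferroni test: the alpha allocated to an endpoint is
    carried forward to later endpoints whenever its null hypothesis is rejected."""
    assert len(p_values) == len(alphas)
    rejected, available = [], 0.0
    for p, a in zip(p_values, alphas):
        level = a + available          # allocated alpha plus alpha saved from earlier rejections
        if p <= level:
            rejected.append(True)
            available = level          # pass the full level on to the next endpoint
        else:
            rejected.append(False)
            available = 0.0            # alpha spent on a failed test is lost
    return rejected

# Hypothetical example: total alpha 0.05 split over three ordered endpoints.
print(fixed_sequence_bonferroni([0.011, 0.030, 0.20], [0.0125, 0.0125, 0.025]))
```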

12.
The kth (1 < k ≤ 2) power expectile regression (ER) can balance robustness and effectiveness between ordinary quantile regression and ER. Motivated by a longitudinal ACTG 193A data set with nonignorable dropouts, we propose a two-stage estimation procedure and statistical inference methods based on the kth power ER and empirical likelihood to accommodate both the within-subject correlations and the nonignorable dropouts. Firstly, we construct bias-corrected generalized estimating equations by combining the kth power ER and inverse probability weighting approaches. Subsequently, the generalized method of moments is utilized to estimate the parameters in the nonignorable dropout propensity based on sufficient instrumental estimating equations. Secondly, in order to incorporate the within-subject correlations under an informative working correlation structure, we borrow the idea of the quadratic inference function to obtain improved empirical likelihood procedures. The asymptotic properties of the corresponding estimators and their confidence regions are derived. The finite-sample performance of the proposed estimators is studied through simulation, and an application to the ACTG 193A data is also presented.
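For orientation only, the sketch below fits a cross-sectional kth power expectile regression by directly minimizing the asymmetric kth power loss; it ignores the within-subject correlation, dropout weighting, and empirical-likelihood machinery of the two-stage procedure, and the simulated data, asymmetry level tau and power k are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def kth_power_expectile_fit(X, y, tau=0.5, k=1.5):
    """Minimize the kth power expectile loss sum_i |tau - 1(r_i < 0)| * |r_i|^k, 1 < k <= 2."""
    def loss(beta):
        r = y - X @ beta
        w = np.where(r < 0, 1.0 - tau, tau)   # asymmetric weight on negative/positive residuals
        return np.sum(w * np.abs(r) ** k)
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares starting value
    return minimize(loss, beta0, method="BFGS").x

# Hypothetical heavy-tailed data with intercept and one covariate.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=4, size=200)
print(kth_power_expectile_fit(X, y, tau=0.75, k=1.5))
```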

13.
Optimal symmetrical fractional factorial designs with n runs and m factors of s levels each are constructed. We consider only designs such that no two factors are aliases. The minimum moment aberration criterion proposed by Xu (2003) is used to judge the optimality of the designs. The minimum moment aberration criterion is equivalent to the popular generalized minimum aberration criterion proposed by Xu and Wu (2001), but the minimum moment criterion is simpler to formulate and to employ computationally. Some optimal designs are constructed by using generalized Hadamard matrices.
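A small sketch of the power moments underlying the minimum moment aberration criterion: here K_t is taken as the average of δ_ij^t over all pairs of runs, where δ_ij counts the columns in which runs i and j coincide (normalizations vary slightly across papers). The 4-run, 3-factor two-level design is a toy example, not one of the constructed optimal designs.

```python
import numpy as np
from itertools import combinations

def power_moments(design, t_max=4):
    """Power moments K_t of a design matrix (rows = runs, columns = factors):
    K_t is the average of delta_ij^t over all pairs of runs, where delta_ij is
    the number of columns in which runs i and j take the same level."""
    D = np.asarray(design)
    n = D.shape[0]
    deltas = np.array([np.sum(D[i] == D[j]) for i, j in combinations(range(n), 2)], dtype=float)
    return [float(np.mean(deltas ** t)) for t in range(1, t_max + 1)]

# Toy 4-run, 3-factor two-level design (hypothetical illustration).
D = np.array([[0, 0, 0],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
print(power_moments(D))
```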

14.
The randomized response (RR) procedures for estimating the proportion (π) of a population belonging to a sensitive or stigmatized group ask each respondent to report a response by randomly transforming his/her true attribute into one of several response categories. In this paper, we present a common framework for discussing various RR surveys of dichotomous populations with polychotomous responses. The unified approach is focused on the substantive issues relating to respondents’ privacy and statistical efficiency and is helpful for fair comparison of various procedures. We describe a general technique for constructing unbiased estimators of π based on arbitrary RR procedures, from unbiased estimators based on an open survey with the same sampling design. The technique works well for any sampling design p(s) and also for variance estimation. We develop an approach for comparing RR procedures, taking both respondents’ protection and statistical efficiency into account. For any given RR procedure with three or more response categories, we present a method for designing an RR procedure with a binary response variable which provides the same respondents’ protection and at least as much statistical information. This result suggests that RR surveys of dichotomous populations should use only binary response variables.
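As one concrete member of the RR family, the classical Warner design admits a simple unbiased estimator of π. The sketch below covers only that textbook binary-response special case under simple random sampling, not the general framework of the paper, and the survey counts and design probability p are hypothetical.

```python
def warner_estimate(n_yes, n, p):
    """Unbiased estimate of the sensitive proportion pi under Warner's randomized
    response design: each respondent answers the sensitive statement with
    probability p and its negation with probability 1 - p (p != 0.5)."""
    lam_hat = n_yes / n                                 # observed proportion of 'yes' answers
    pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)          # inverts lambda = p*pi + (1-p)*(1-pi)
    # Unbiased variance estimator for pi_hat under binomial sampling of the responses.
    var_hat = lam_hat * (1 - lam_hat) / ((n - 1) * (2 * p - 1) ** 2)
    return pi_hat, var_hat

print(warner_estimate(n_yes=380, n=1000, p=0.7))   # hypothetical survey counts
```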

15.
We consider the estimation of smooth regression functions in a class of conditionally parametric covariate-response models. Independent and identically distributed observations are available from the distribution of (Z,X), where Z is a real-valued covariate with some unknown distribution, and the response X conditional on Z is distributed according to the density p(·,ψ(Z)), where p(·,θ) is a one-parameter exponential family. The function ψ is a smooth monotone function. Under this formulation, the regression function E(X|Z) is monotone in the covariate Z (and can be expressed as a one-to-one function of ψ); hence the term “monotone response model”. Using a penalized least squares approach that incorporates both monotonicity and smoothness, we develop a scheme for producing smooth monotone estimates of the regression function and also the function ψ across this entire class of models. Point-wise asymptotic normality of this estimator is established, with the rate of convergence depending on the smoothing parameter. This enables construction of Wald-type (point-wise) as well as pivotal confidence sets for ψ and also the regression function. The methodology is extended to the general heteroscedastic model, and its asymptotic properties are discussed.
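A heavily simplified sketch of the monotone fit only: plain isotonic least squares for E(X|Z), with the smoothness penalty of the proposed scheme omitted, using scikit-learn; the simulated monotone signal and noise level are hypothetical.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical noisy observations of a monotone regression function on [0, 1].
rng = np.random.default_rng(4)
z = np.sort(rng.uniform(size=100))
x = np.log1p(5 * z) + rng.normal(scale=0.2, size=100)   # monotone signal plus noise

# Monotone (isotonic) least-squares fit of E(X | Z = z); no smoothness penalty here.
fit = IsotonicRegression(increasing=True).fit_transform(z, x)
print(np.round(fit[:5], 3))
```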

16.
We propose a new test procedure for testing a linear hypothesis on the mean vectors of normal populations with unequal covariance matrices when the dimensionality p exceeds the sample size N, i.e. p/N → c < ∞. Our procedure is based on the Dempster trace criterion and is shown to be consistent in high dimensions.

17.
Many multiple testing procedures (MTPs) are available today, and their number is growing. Also available are many type I error rates: the family-wise error rate (FWER), the false discovery rate, the proportion of false positives, and others. Most MTPs are designed to control a specific type I error rate, and it is hard to compare different procedures. We approach the problem by studying the exact level at which threshold step-down (TSD) procedures (an important class of MTPs exemplified by the classic Holm procedure) control the generalized FWER, defined as the probability of k or more false rejections. We find that level explicitly for any TSD procedure and any k. No assumptions are made about the dependency structure of the p-values of the individual tests. We derive from our formula a criterion for unimprovability of a procedure in the class of TSD procedures controlling the generalized FWER at a given level. In turn, this criterion implies that for each k the number of such unimprovable procedures is finite and is greater than one if k > 1. Consequently, in this case the most rejective procedure in the above class does not exist.
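For concreteness, the sketch below implements the classic Holm step-down procedure (one TSD procedure) and estimates its generalized FWER, the probability of k or more false rejections, by simulation when all nulls are true and the p-values are independent; m, k, alpha and the number of replications are arbitrary illustrative choices, and this is not the exact-level formula derived in the paper.

```python
import numpy as np

def holm_rejections(p_values, alpha=0.05):
    """Classic Holm step-down procedure; returns a boolean rejection vector."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):   # compare p_(r) with alpha / (m - r + 1)
            reject[idx] = True
        else:
            break                          # step-down: stop at the first non-rejection
    return reject

# Monte Carlo estimate of the generalized FWER P(k or more false rejections)
# when all m nulls are true and the p-values are independent Uniform(0, 1).
rng = np.random.default_rng(3)
m, k, alpha, n_sim = 10, 2, 0.05, 20_000
hits = sum(holm_rejections(rng.uniform(size=m), alpha).sum() >= k for _ in range(n_sim))
print(f"Estimated P({k}+ false rejections): {hits / n_sim:.4f}")
```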

18.
This paper presents a method for constructing confidence intervals for the median of a finite population under unequal probability sampling. The model-assisted approach makes use of the L1-norm to motivate the estimating function, which is then used to develop a unified approach to inference that includes not only confidence intervals but also hypothesis tests and point estimates. The approach relies on large sample theory to construct the confidence intervals. In cases when second-order inclusion probabilities are not available or easy to compute, the Hartley–Rao variance approximation is employed. Simulations show that the confidence intervals achieve the appropriate confidence level, whether or not the Hartley–Rao variance approximation is employed.
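The estimating-function idea can be made concrete with a small point-estimation sketch: the median estimate is an approximate root of a Horvitz–Thompson-weighted estimating function, i.e. the point where the weighted empirical CDF crosses 0.5. The sample values and inclusion probabilities below are hypothetical, and no variance approximation or interval construction is attempted here.

```python
import numpy as np

def ht_weighted_median(y, incl_prob):
    """Design-based point estimate of a finite-population median: the smallest y at
    which the Horvitz-Thompson weighted empirical CDF reaches 0.5, an approximate
    root of the estimating function sum_i (1/pi_i) * (I(y_i <= t) - 0.5)."""
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(incl_prob, dtype=float)   # Horvitz-Thompson weights
    order = np.argsort(y)
    cum = np.cumsum(w[order]) / np.sum(w)          # weighted empirical CDF at the sorted values
    return y[order][np.searchsorted(cum, 0.5)]

# Hypothetical sample where larger units had higher inclusion probabilities.
y = [12.0, 5.0, 30.0, 8.0, 22.0, 15.0]
pi = [0.40, 0.10, 0.60, 0.15, 0.50, 0.30]
print(ht_weighted_median(y, pi))
```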

19.
In this article, the problem of testing the equality of coefficients of variation in a multivariate normal population is considered, and an asymptotic approach and a generalized p-value approach based on the concept of the generalized test variable are proposed. Monte Carlo simulation studies show that the proposed generalized p-value test has good empirical sizes and is better than the asymptotic approach. In addition, the problems of hypothesis testing and confidence interval construction for the common coefficient of variation of a multivariate normal population are considered, and a generalized p-value and a generalized confidence interval are proposed. Using Monte Carlo simulation, we find that the coverage probabilities and expected lengths of this generalized confidence interval are satisfactory, and the empirical sizes of the generalized p-value test are close to the nominal level. We illustrate our approaches using a real data set.

20.
In this article, hypothesis testing and interval estimation for the reliability parameter are considered in balanced and unbalanced one-way random models. The tests and confidence intervals for the reliability parameter are developed using the concepts of the generalized p-value and the generalized confidence interval. Furthermore, some simulation results are presented to compare the performance of the proposed approach with that of the existing approach. For balanced models, the simulation results indicate that the proposed approach provides satisfactory coverage probabilities and performs better than the existing approaches across a wide array of scenarios, especially for small sample sizes. For unbalanced models, the simulation results show that the two proposed approaches perform more satisfactorily than the existing approach in most cases. Finally, the proposed approaches are illustrated using two real examples.
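For the balanced case, the sketch below builds a Monte Carlo generalized confidence interval for the reliability (intraclass correlation) ρ = σ_a²/(σ_a² + σ_e²) from chi-square generalized pivotal quantities. This is one standard construction consistent with the generalized-confidence-interval idea, not necessarily the exact procedure of the paper, and the summary statistics in the example call are hypothetical.

```python
import numpy as np

def gci_reliability(ssb, sse, a, n, conf=0.95, n_sim=100_000, seed=0):
    """Generalized confidence interval for rho = sigma_a^2 / (sigma_a^2 + sigma_e^2)
    in a balanced one-way random model with a groups of size n, using chi-square
    generalized pivotal quantities; ssb and sse are the observed between- and
    within-group sums of squares."""
    rng = np.random.default_rng(seed)
    u1 = rng.chisquare(a - 1, n_sim)          # pivot for SSB / (n*sigma_a^2 + sigma_e^2)
    u2 = rng.chisquare(a * (n - 1), n_sim)    # pivot for SSE / sigma_e^2
    r_e = sse / u2                            # generalized pivot for sigma_e^2
    r_a = np.maximum(0.0, (ssb / u1 - sse / u2) / n)   # generalized pivot for sigma_a^2
    r_rho = r_a / (r_a + r_e)
    lo, hi = np.quantile(r_rho, [(1 - conf) / 2, 1 - (1 - conf) / 2])
    return lo, hi

# Hypothetical summary statistics: a = 10 groups of size n = 5.
print(gci_reliability(ssb=120.0, sse=80.0, a=10, n=5))
```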
