Similar Literature
20 similar documents found.
1.
In this paper, procedures for all pairwise comparisons of the location parameters of negative exponential populations are developed, for the cases of known and unknown common scale parameter, using large-sample distributional approximations of the relevant random variables. The small-sample performance of these procedures is then examined using Monte Carlo simulation.

2.
Nonparametric charts are useful in statistical process control when there is little or no knowledge about the underlying process distribution. Most existing approaches in the Phase I monitoring literature assume that outliers follow the same distribution as the in-control sample and differ only in location or scale parameters, so they may not be effective against distributional changes. This article develops a new procedure that integrates the classical Anderson–Darling goodness-of-fit test with the stepwise isolation method. The proposed procedure is efficient in detecting potential shifts in location, scale, or shape, and thus offers robust protection against variation in the underlying distribution. The finite-sample performance of our method is evaluated through simulations and compared with that of available outlier detection methods for Phase I monitoring.
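As a rough illustration of the idea (not the authors' calibrated procedure), the sketch below screens Phase I subgroups by repeatedly testing each subgroup against the pooled remainder with the k-sample Anderson-Darling test and isolating the most discordant one; the threshold and stopping rule are illustrative assumptions.

```python
# A minimal sketch of Anderson-Darling-based Phase I screening with
# stepwise isolation. Threshold and stopping rule are illustrative
# assumptions, not the authors' calibrated procedure.
import numpy as np
from scipy.stats import anderson_ksamp

def phase1_screen(samples, alpha=0.05, max_removals=None):
    """Iteratively isolate the subgroup most discordant (in location,
    scale, or shape) from the pooled remaining subgroups."""
    samples = [np.asarray(s, dtype=float) for s in samples]
    active = list(range(len(samples)))
    flagged = []
    if max_removals is None:
        max_removals = len(samples) // 2
    while len(flagged) < max_removals and len(active) > 2:
        results = []
        for i in active:
            rest = np.concatenate([samples[j] for j in active if j != i])
            res = anderson_ksamp([samples[i], rest])
            # significance_level is scipy's (capped) approximate p-value
            results.append((res.significance_level, i))
        pval, worst = min(results)
        if pval >= alpha:
            break  # no remaining subgroup looks out of control
        flagged.append(worst)
        active.remove(worst)
    return flagged

rng = np.random.default_rng(0)
subgroups = [rng.normal(size=20) for _ in range(10)]
subgroups[3] = rng.exponential(size=20)  # distributional change in one subgroup
print(phase1_screen(subgroups))
```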

3.
All-pairs power in a one-way ANOVA is the probability of detecting all true differences between pairs of means. Ramsey (1978) found that for normal distributions having equal variances, step-down multiple comparison procedures can have substantially more all-pairs power than single-step procedures, such as Tukey’s HSD, when equal sample sizes are randomly sampled from each group. This paper suggests a step-down procedure for the case of unequal variances and compares it to Dunnett's T3 technique. The new procedure is similar in spirit to one of the heteroscedastic procedures described by Hochberg and Tamhane (1987), but it has certain advantages that are discussed in the paper. Included are results on unequal sample sizes.
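The paper's step-down procedure is not reproduced here; as a hedged stand-in, the sketch below pairs heteroscedastic (Welch) pairwise t-tests with Holm's step-down p-value adjustment, which conveys the flavour of step-down all-pairs testing under unequal variances.

```python
# A minimal sketch: Welch pairwise t-tests combined with Holm's
# step-down adjustment. This is neither the paper's procedure nor
# Dunnett's T3; it only illustrates step-down all-pairs testing
# when variances are unequal.
from itertools import combinations
import numpy as np
from scipy.stats import ttest_ind

def stepdown_all_pairs(groups, alpha=0.05):
    pairs = list(combinations(range(len(groups)), 2))
    pvals = [ttest_ind(groups[i], groups[j], equal_var=False).pvalue
             for i, j in pairs]
    order = np.argsort(pvals)
    rejections = []
    m = len(pairs)
    for rank, idx in enumerate(order):   # Holm: most significant first
        if pvals[idx] > alpha / (m - rank):
            break                        # retain this and all later pairs
        rejections.append(pairs[idx])
    return rejections

rng = np.random.default_rng(7)
groups = [rng.normal(0, 1, 30), rng.normal(0, 3, 30), rng.normal(1.5, 1, 30)]
print(stepdown_all_pairs(groups))
```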

4.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty arises in many scientific investigations and has been studied extensively. Many selection procedures have been derived for different selection goals. However, most of these procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of the correct selection achieved for that sample. They often assign the same decision and probability of correct selection to two different sample values, one of which may seem intuitively far more conclusive than the other. The methodology of conditional inference offers an approach that achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on the subset in which the sample falls. In this paper, the partition considered is the so-called continuum partition, and the selection rules are both fixed-size and random-size subset selection rules. Under the assumption of a monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures, with regard to total expected sample size and some risk functions, are carried out by simulation.

5.
In recent years, the bootstrap method has been extended to time series analysis, where the observations are serially correlated. Contributions have focused on the autoregressive model, producing alternative resampling procedures. In contrast, apart from some empirical applications, very little attention has been paid to extending the bootstrap method to pure moving average (MA) or mixed ARMA models. In this paper, we present a new bootstrap procedure that can be applied to assess the distributional properties of moving average parameter estimates obtained by a least squares approach. We discuss the methodology and the limits of its usage. Finally, the performance of the bootstrap approach is compared with that of the competing alternative, Monte Carlo simulation.
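A minimal residual-bootstrap sketch for an MA(1) parameter is given below; it uses the statsmodels ARIMA estimator as a stand-in for the least squares fit described in the abstract, so the estimator choice is an assumption.

```python
# A minimal residual-bootstrap sketch for the MA(1) coefficient.
# The paper's procedure is based on least squares; statsmodels'
# ARIMA estimator stands in here as an illustrative assumption.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)

def simulate_ma1(theta, n, rng):
    e = rng.normal(size=n + 1)
    return e[1:] + theta * e[:-1]

x = simulate_ma1(0.6, 200, rng)
fit = ARIMA(x, order=(0, 0, 1)).fit()
theta_hat = fit.params[1]             # params: [const, ma.L1, sigma2]
resid = fit.resid - fit.resid.mean()  # centred residuals for resampling

boot = []
for _ in range(200):
    e_star = rng.choice(resid, size=len(x) + 1, replace=True)
    x_star = fit.params[0] + e_star[1:] + theta_hat * e_star[:-1]
    boot.append(ARIMA(x_star, order=(0, 0, 1)).fit().params[1])

print(theta_hat, np.percentile(boot, [2.5, 97.5]))
```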

6.
In this paper we consider conditional inference procedures for the Pareto and power function distributions. We develop procedures for obtaining confidence intervals for the location and scale parameters as well as upper and lower n probability tolerance intervals for a proportion g, given a Type-II right censored sample from the corresponding distribution. The intervals are exact, and are obtained by conditioning on the observed values of the ancillary statistics. Since, for each distribution, the procedures assume that a shape parameter x is known, a sensitivity analysis is also carried out to see how the procedures are affected by changes in x.

7.
One characterization of group sequential methods uses alpha spending functions to allocate the false positive rate throughout a study. We consider and evaluate several such spending functions, as well as the time points of the interim analyses at which they apply. In addition, we evaluate the double triangular test as an alternative procedure that allows for early termination of the trial not only because of efficacy differences between treatments but also because of a lack of such differences. We motivate and illustrate our work by reference to the analysis of survival data from a proposed oncology study. Group sequential procedures with one or two interim analyses are only slightly less powerful than fixed sample trials, but provide a strong possibility of early stopping; therefore, in all situations where they can practically be applied, we recommend their routine use in clinical trials. The double triangular test provides a suitable alternative to the group sequential procedures, which do not provide for early stopping with acceptance of the null hypothesis; again, there is only a modest loss in power relative to fixed sample tests.
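For orientation, the sketch below evaluates two classical Lan-DeMets alpha spending functions (O'Brien-Fleming-type and Pocock-type) at a set of illustrative information fractions; these are standard forms, not necessarily the exact functions compared in the paper.

```python
# Two classical Lan-DeMets alpha spending functions. The information
# fractions are illustrative, not those of the oncology study.
import numpy as np
from scipy.stats import norm

def obf_spend(t, alpha=0.05):
    # O'Brien-Fleming-type: spends very little alpha early
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

def pocock_spend(t, alpha=0.05):
    # Pocock-type: spends alpha nearly uniformly over information time
    return alpha * np.log(1.0 + (np.e - 1.0) * t)

t = np.array([0.33, 0.67, 1.0])  # information fractions at the analyses
for name, f in [("OBF", obf_spend), ("Pocock", pocock_spend)]:
    cum = f(t, alpha=0.05)
    print(name, "cumulative:", cum.round(4),
          "incremental:", np.diff(np.concatenate(([0.0], cum))).round(4))
```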

8.
We consider a life testing situation in which systems are subject to failure from independent competing risks. Following a failure, immediate (stage-1) procedures are used in an attempt to reach a definitive diagnosis. If these procedures fail to result in a diagnosis, the phenomenon is called masking. Stage-2 procedures, such as failure analysis or autopsy, provide a definitive diagnosis for a sample of the masked cases. We show how stage-1 and stage-2 information can be combined to provide statistical inference about (a) the survival functions of the individual risks, (b) the proportions of failures associated with the individual risks, and (c) the probability, for a specified masked case, that each of the masked competing risks is responsible for the failure. Our development is based on parametric distributional assumptions, and the special case in which the failure times for the competing risks have a Weibull distribution is discussed in detail.

9.
This paper sheds light on the large-sample performance of the three-stage sampling procedure as it pertains to estimating the scale parameter(s) of the Pareto distribution(s). This group sampling procedure merges the efficiency of the purely sequential procedures of Anscombe (1953) and Chow and Robbins (1965) with substantial savings in the number of sampling operations, as noted by Hall (1981). Both its simplicity and its economy provide visible advantages over one-by-one sampling as an alternative. In this paper we develop some asymptotic properties of the final-stage sample size of the triple-stage sampling procedure originated by Hall (1981). These results are used to study both the point and the interval estimation problems for the scale parameters of Pareto distributions. Since our results are asymptotic in nature, a simulation study is given to discuss the moderate-sample-size performance of the proposed procedures.

10.
Statistical models are often based on normal distributions, so procedures for testing this distributional assumption are needed. Many goodness-of-fit tests suffer from the presence of outliers, in the sense that they may reject the null hypothesis even in the case of a single extreme observation. We show a possible extension of the Shapiro-Wilk test that is not affected by this problem. The presented method is inspired by the forward search (FS), a recently proposed diagnostic tool. An application to univariate observations shows how the procedure is able to capture the structure of the data even in the presence of outliers. Other properties are also investigated.
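A forward-search-flavoured sketch is shown below: it tracks the Shapiro-Wilk W statistic as observations enter in order of robust distance from the median, which is the FS idea in miniature rather than the authors' exact extension of the test.

```python
# A minimal forward-search-flavoured sketch: watch the Shapiro-Wilk W
# statistic as observations enter closest-to-centre first. This is an
# illustration of the FS idea, not the authors' exact procedure.
import numpy as np
from scipy.stats import shapiro

def fs_shapiro_path(x, start=10):
    x = np.asarray(x, dtype=float)
    order = np.argsort(np.abs(x - np.median(x)))  # closest to centre first
    path = []
    for m in range(start, len(x) + 1):
        w, _ = shapiro(x[order[:m]])
        path.append((m, w))
    return path

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(size=95), rng.normal(10.0, 1.0, size=5)])
for m, w in fs_shapiro_path(x)[-8:]:
    print(m, round(w, 4))  # W drops sharply as the 5 outliers enter
```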

11.
The maximum likelihood estimate is considered for the intraclass correlation coefficient in a bivariate normal distribution when some observations on either of the variables are missing. The estimate is given as the solution of a polynomial equation of degree seven. An approximate confidence interval and a test procedure for the intraclass correlation are constructed based on an asymptotic variance-stabilizing transformation of the resulting estimator. The distributional results are also considered under violation of the normality assumption. A Monte Carlo study was performed to examine the finite-sample properties of the maximum likelihood estimator and to evaluate the proposed procedures for hypothesis testing and interval estimation.
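As a simplified stand-in for the variance-stabilizing route described above, the sketch below builds an approximate confidence interval from the classical Fisher z transform with the complete-data 1/(n-3) variance; the paper's transform and its variance under missingness differ.

```python
# A minimal sketch of a variance-stabilized (Fisher z) confidence
# interval for a correlation-type estimate. The 1/(n-3) variance is
# the classical complete-data approximation, used as an illustrative
# assumption; the paper's asymptotic variance under missingness differs.
import numpy as np
from scipy.stats import norm

def fisher_z_ci(r, n, level=0.95):
    z = np.arctanh(r)
    half = norm.ppf((1 + level) / 2) / np.sqrt(n - 3)
    return np.tanh(z - half), np.tanh(z + half)

print(fisher_z_ci(0.62, n=40))
```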

12.
Data on the Likert scale are ubiquitous in medical research, including randomized trials. Statistical analysis of such data may be conducted using the means of raw scores or the rank information of the scores. In the context of parallel-group randomized trials, we quantify treatment effects by the probability that a subject in the treatment group has a better score than (or a win over) a subject in the control group. Asymptotic parametric and nonparametric confidence intervals for this win probability, and associated sample size formulas, are derived for studies with only follow-up scores and for those with both baseline and follow-up measurements. We assessed the performance of both the parametric and nonparametric approaches in simulation studies based on real studies with Likert item and Likert scale data. The simulation results demonstrate that even without baseline adjustment, the parametric methods did not perform well in terms of bias, interval coverage percentage, balance of tail errors, and assurance of achieving a pre-specified precision. In contrast, the nonparametric approach performed very well for both the unadjusted and adjusted win probability. We illustrate the methods with two examples: one using Likert item data and the other using Likert scale data. We conclude that nonparametric methods are preferable for two-group randomized trials with Likert data. Illustrative SAS code for the nonparametric approach using existing procedures is provided.
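The nonparametric point estimate of the win probability can be read off the Mann-Whitney U statistic, as the sketch below shows; the bootstrap interval is an illustrative stand-in for the paper's asymptotic intervals, and the simulated Likert scores are invented for the demo.

```python
# A minimal sketch of the nonparametric win probability
# P(treatment > control) + 0.5 * P(tie), via the Mann-Whitney U
# statistic. The bootstrap interval is a stand-in for the paper's
# asymptotic interval, which is not reproduced here.
import numpy as np
from scipy.stats import mannwhitneyu

def win_probability(treat, control):
    u1 = mannwhitneyu(treat, control).statistic  # wins + 0.5*ties for `treat`
    return u1 / (len(treat) * len(control))

rng = np.random.default_rng(3)
treat = rng.integers(1, 6, size=80)    # simulated 5-point Likert scores
control = rng.integers(1, 5, size=80)

est = win_probability(treat, control)
boot = [win_probability(rng.choice(treat, len(treat)),
                        rng.choice(control, len(control)))
        for _ in range(2000)]
print(round(est, 3), np.percentile(boot, [2.5, 97.5]).round(3))
```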

13.
Various statistical tests have been developed for testing the equality of means in matched pairs with missing values. However, most existing methods are commonly based on certain distributional assumptions such as normality, 0-symmetry or homoscedasticity of the data. The aim of this paper is to develop a statistical test that is robust against deviations from such assumptions and also leads to valid inference in case of heteroscedasticity or skewed distributions. This is achieved by applying a clever randomization approach to handle missing data. The resulting test procedure is not only shown to be asymptotically correct but is also finitely exact if the distribution of the data is invariant with respect to the considered randomization group. Its small sample performance is further studied in an extensive simulation study and compared to existing methods. Finally, an illustrative data example is analysed.
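The sketch below is a stripped-down sign-flipping randomization test on the complete pairs only; the paper's procedure additionally exploits the incomplete observations and carries exactness guarantees, neither of which is reproduced here.

```python
# A simplified sketch: sign-flipping randomization test on complete
# pairs only. The paper's procedure also uses incomplete observations;
# this stripped-down version just illustrates the randomization idea.
import numpy as np

def signflip_test(d, n_perm=5000, rng=None):
    """Randomization p-value for H0: mean of paired differences is 0."""
    rng = rng or np.random.default_rng()
    d = np.asarray(d, dtype=float)
    obs = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    perm = np.abs((signs * d).mean(axis=1))
    return (1 + np.sum(perm >= obs)) / (n_perm + 1)

rng = np.random.default_rng(4)
before = rng.normal(0.0, 1.0, size=30)
after = before + rng.normal(0.4, 1.0, size=30)  # true shift of 0.4
print(signflip_test(after - before, rng=rng))
```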

14.
We study a factor analysis model with two normally distributed observations and one factor. Two approximate conditional inference procedures for the factor loading are developed. The first proposal is a very simple procedure, but it is not very accurate. The second proposal gives extremely accurate results even for very small sample sizes. Moreover, the calculations require only the signed log-likelihood ratio statistic and a measure of the standardized maximum likelihood departure. Simulations are used to study the accuracy of the proposed procedures.

15.
In this paper we are interested in deriving the asymptotic and finite-sample distributional properties of a ‘quasi-maximum likelihood’ estimator of a ‘scale’ second-order parameter β, based directly on the log-excesses of an available sample. Such estimation is of central importance for the adaptive selection of the optimal sample fraction in classical semi-parametric tail index estimation, as well as for reduced-bias estimation of the tail index, high quantiles, and other parameters of extreme or even rare events. An application in the area of survival analysis is provided, on the basis of a data set on males diagnosed with cancer of the tongue.

16.
Change point monitoring for distributional changes in time-series models is an important issue. In this article, we propose two monitoring procedures to detect distributional changes in the squared residuals of GARCH models. The asymptotic properties of our monitoring statistics are derived both under the null of no change in distribution and under the alternative of a change in distribution. The finite-sample properties are investigated by simulation.
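The paper's monitoring statistics are not reproduced here; as an illustrative assumption, the sketch below fits a GARCH(1,1) with the arch package and compares squared standardized residuals from a growing monitoring window against the training period with a two-sample Kolmogorov-Smirnov statistic.

```python
# An illustrative sketch (not the paper's monitoring statistics):
# compare squared standardized GARCH residuals in a growing monitoring
# window against the training period via a two-sample KS statistic.
# Window sizes and the use of KS are assumptions for the demo.
import numpy as np
from arch import arch_model
from scipy.stats import ks_2samp

rng = np.random.default_rng(8)
train = rng.standard_normal(750)                     # in-control innovations
shift = rng.standard_t(df=3, size=250) / np.sqrt(3)  # heavier tails, same variance
y = np.concatenate([train, shift])

res = arch_model(y[:750], p=1, q=1).fit(disp="off")
z2_train = res.std_resid ** 2                        # squared standardized residuals

mu = res.params["mu"]
omega, a, b = res.params["omega"], res.params["alpha[1]"], res.params["beta[1]"]
e_prev = y[749] - mu
sigma2 = omega + a * e_prev ** 2 + b * res.conditional_volatility[-1] ** 2

z2_new = []
for t in range(750, len(y)):
    e = y[t] - mu
    z2_new.append(e * e / sigma2)
    sigma2 = omega + a * e * e + b * sigma2          # GARCH(1,1) recursion

for end in (50, 150, 250):                           # growing monitoring window
    stat, p = ks_2samp(z2_train, z2_new[:end])
    print(end, round(stat, 3), round(p, 4))
```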

17.
In classical discriminant analysis, when two multivariate normal distributions with equal variance–covariance matrices are assumed for the two groups, the classical linear discriminant function is optimal with respect to maximizing the standardized difference between the means of the two groups. However, for a typical case-control study, the distributional assumption for the case group often needs to be relaxed in practice. Komori et al. (Generalized t-statistic for two-group classification. Biometrics 2015, 71: 404–416) proposed the generalized t-statistic to obtain a linear discriminant function that allows for heterogeneity of the case group. Their procedure has an optimality property within the class considered. We perform a further study of the problem and show that additional improvement is achievable. The approach we propose does not require a parametric distributional assumption on the case group. We further show that the new estimator is efficient, in the sense that the linear discriminant function cannot be constructed more efficiently. We conduct simulation studies and real data examples to illustrate the finite-sample performance and the gain it produces in comparison with existing methods.

18.
Influence functions are derived for the parameters in covariance structure analysis, where the parameters are estimated by minimizing a discrepancy function between the assumed covariance matrix and the sample covariance matrix. The case of confirmatory factor analysis is studied in detail with a numerical example. Compared with a general procedure called one-step estimation, the proposed procedure has two advantages: (1) the computing cost is lower, and (2) the property, discussed by Tanaka and Castano-Tostado (1990), that an arbitrary influence can be decomposed into a finite number of components can be used for efficient computing and for characterizing a covariance structure model from the sensitivity perspective. A numerical comparison is made between confirmatory factor analysis and some procedures of exploratory factor analysis using the decomposition mentioned above.

19.
This paper presents variance extraction procedures for univariate time series. The volatility of a time series is monitored, allowing for non-linearities, jumps, and outliers in the level. The volatility is measured using the heights of triangles formed by consecutive observations of the time series. This idea was proposed by Rousseeuw and Hubert [1996. Regression-free and robust estimation of scale for bivariate data. Comput. Statist. Data Anal. 21, 67–85] in the bivariate setting. This paper extends their procedure to online scale estimation in time series analysis. The statistical properties of the new methods are derived and finite-sample properties are given. A financial and a medical application illustrate the use of the procedures.
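The triangle-height idea can be sketched directly: a linear trend cancels exactly in the height of the triangle formed by three consecutive equally spaced observations, so a moving robust summary of these heights tracks scale while ignoring the level. The window length and the use of a plain median (no consistency constant) are illustrative assumptions.

```python
# A minimal sketch of triangle-height-based scale estimation: the
# vertical height at the middle vertex of each triple of consecutive
# observations is insensitive to level and linear trend. Window and
# median summary are illustrative assumptions, not the paper's
# calibrated choices.
import numpy as np

def triangle_heights(y):
    y = np.asarray(y, dtype=float)
    return np.abs(y[1:-1] - 0.5 * (y[:-2] + y[2:]))

def online_scale(y, window=20):
    h = triangle_heights(y)
    return np.array([np.median(h[max(0, i - window + 1):i + 1])
                     for i in range(len(h))])

rng = np.random.default_rng(5)
calm = rng.normal(0.0, 1.0, 100)
volatile = rng.normal(0.0, 3.0, 100)
y = np.concatenate([calm, volatile]) + np.linspace(0, 5, 200)  # add a trend
s = online_scale(y)
print(s[80:85].round(2), s[180:185].round(2))  # scale signal roughly triples
```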

20.
The starship, introduced by Owen (1988) as an alternative or companion procedure to the bootstrap, and the well-known maximum likelihood estimation procedure were used to find prediction intervals for the future sample mean of an exponential distribution. Some remarks, based on a simulation study, are made on the differences between the two procedures.
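For orientation, the sketch below uses the textbook pivotal construction for this problem: with n observed exponential values, the ratio of the future sample mean of m values to the observed mean follows an F(2m, 2n) distribution. Neither the starship nor the paper's exact ML procedure is reproduced here.

```python
# A minimal sketch of an exact pivotal prediction interval for the
# future sample mean of an exponential distribution. This textbook
# pivot is shown for orientation only; the starship and ML procedures
# of the paper are not reproduced.
import numpy as np
from scipy.stats import f

def exp_mean_prediction_interval(x, m, level=0.95):
    n = len(x)
    xbar = np.mean(x)
    # (future mean of m obs) / xbar ~ F(2m, 2n)
    lo, hi = f.ppf([(1 - level) / 2, (1 + level) / 2], 2 * m, 2 * n)
    return xbar * lo, xbar * hi

rng = np.random.default_rng(6)
print(exp_mean_prediction_interval(rng.exponential(2.0, 25), m=10))

cover = 0                      # quick coverage check by simulation
for _ in range(2000):
    x = rng.exponential(2.0, 25)
    lo, hi = exp_mean_prediction_interval(x, 10)
    cover += lo <= rng.exponential(2.0, 10).mean() <= hi
print(cover / 2000)            # close to 0.95
```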
