Similar references
20 similar references found.
1.
ABSTRACT

In this article we present a new solution for testing effects in unreplicated two-level factorial designs. The proposed test statistic, when the error components are normally distributed, follows an F distribution, though our attention is on its nonparametric permutation version. The proposed procedure does not require any transformation of the data, such as residualization, and it is exact for each effect and distribution-free. Our main aim is to discuss a permutation solution conditional on the original vector of responses. We give two versions of the same nonparametric testing procedure in order to control both the individual error rate and the experiment-wise error rate. A power comparison with Loughin and Noble's test is provided for an unreplicated 2^4 full factorial design.
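As a rough illustration of the conditional permutation idea, the sketch below runs a permutation test for a single effect contrast in a hypothetical unreplicated 2^4 design. It is only valid under the global null of no active effects and is not the authors' exact procedure, which is exact for each effect and also controls the experiment-wise error rate.

```python
import numpy as np
from itertools import product

# Hypothetical unreplicated 2^4 design: 16 runs, factors A, B, C, D coded -1/+1.
rng = np.random.default_rng(10)
design = np.array(list(product([-1, 1], repeat=4)))
y = 5 + 2.0 * design[:, 0] + rng.normal(0, 1, 16)   # illustrative responses

def effect(y, contrast):
    # standard 2^k effect estimate: (mean at +1) - (mean at -1)
    return contrast @ y / (len(y) / 2)

contrast_A = design[:, 0]
obs = effect(y, contrast_A)
# Permute the response vector and recompute the contrast; this simple test is
# valid under the global null of no active effects, unlike the article's
# procedure, which is exact effect by effect.
perm = np.array([effect(rng.permutation(y), contrast_A) for _ in range(4999)])
p_value = (1 + np.sum(np.abs(perm) >= np.abs(obs))) / 5000
print(round(obs, 3), round(p_value, 4))
```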

2.
Abstract

Type III methods were introduced by SAS to address difficulties in dummy-variable models for effects of multiple factors and covariates. They are widely used in practice; they are the default method in several statistical computing packages. Type III sums of squares (SSs) are defined by a set of instructions; an explicit mathematical formulation does not seem to exist.

An explicit formulation is derived in this paper. It is used to illustrate Type III SSs and to establish their properties in the two-factor ANOVA model.
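For readers who want to reproduce Type III SSs numerically, the following minimal sketch on hypothetical unbalanced two-factor data uses statsmodels' anova_lm with typ=3 and sum-to-zero coding; it illustrates the quantities discussed above rather than the paper's explicit formulation.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# hypothetical unbalanced two-factor data
df = pd.DataFrame({
    "y": [3.1, 2.9, 4.0, 4.2, 5.1, 5.3, 6.0, 2.5, 4.8],
    "A": ["a1", "a1", "a1", "a2", "a2", "a2", "a2", "a1", "a2"],
    "B": ["b1", "b2", "b2", "b1", "b1", "b2", "b2", "b1", "b1"],
})
# sum-to-zero (effect) coding is needed for Type III SSs to be meaningful
model = smf.ols("y ~ C(A, Sum) * C(B, Sum)", data=df).fit()
print(anova_lm(model, typ=3))
```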

3.
ABSTRACT

The randomized response technique is an effective survey method designed to elicit sensitive information while ensuring the privacy of the respondents. In this article, we present some new results on the randomized response model in situations where one or two response variables are assumed to follow a multinomial distribution. For a single sensitive question, we use the well-known Hopkins randomization device to derive estimates under the assumptions of both truthful and untruthful responses, and present a technique for making pairwise comparisons. When there are two sensitive questions of interest, we derive a Pearson product-moment correlation estimator based on the multinomial model assumption. This estimator may be used to quantify the linear relationship between two variables when multinomial response data are observed under a randomized-response protocol.
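A minimal sketch of a moment-type estimator for a multinomial randomized-response model with a known randomization matrix P, whose columns give the reporting probabilities for each true category. The matrix and counts below are hypothetical, and this generic estimator is not the specific Hopkins-device estimator derived in the article.

```python
import numpy as np

# P[i, j] = probability the device reports category i when the true category
# is j (columns sum to 1).  Hypothetical 3-category example.
P = np.array([[0.70, 0.15, 0.15],
              [0.15, 0.70, 0.15],
              [0.15, 0.15, 0.70]])

reported = np.array([260, 410, 330])     # observed (masked) category counts
lam_hat = reported / reported.sum()      # observed proportions
pi_hat = np.linalg.solve(P, lam_hat)     # moment estimate of the true proportions
print(pi_hat, pi_hat.sum())              # estimates sum to 1 up to rounding
```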

4.
Summary: The Hodrick-Prescott (HP) filter has become a widely used tool for detrending integrated time series. Even though the methodological literature has compiled an extensive catalogue of severe criticisms of econometric analyses of HP-filtered data, the original Hodrick and Prescott (1980, 1997) suggestion to measure the strength of association between economic variables by a regression analysis of the corresponding HP-filtered time series remains very popular. This might be justified if HP-induced distortions were quantitatively negligible in empirical applications. However, the simulated regression analyses presented in our paper demonstrate that any attempt at inference based on HP-prefiltered series faces a serious risk of spurious regression results. We would like to thank the participants of the Fourth Workshop in Macroeconometrics at the Halle Institute for Economic Research for their comments on a preliminary version of this paper. We are also indebted to the participants of the Thirtieth Macromodels International Conference, in particular David Hendry, Søren Johansen, Katarina Juselius and Helmut Lütkepohl, for stimulating discussions and fruitful suggestions which helped to improve our paper. Finally, Larry Arnoldy helped to improve the final version of the paper.
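The spurious-regression risk can be illustrated with a small simulation, assuming statsmodels is available: two independent random walks are HP-filtered and regressed on each other, and the naive t-test rejects the null of no relationship far more often than the nominal 5% level. This is only a sketch of the phenomenon, not the paper's simulation design.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
T, reps, rejections = 200, 500, 0
for _ in range(reps):
    x = np.cumsum(rng.standard_normal(T))      # two independent random walks
    z = np.cumsum(rng.standard_normal(T))
    cx, _ = hpfilter(x, lamb=1600)             # cyclical (detrended) components
    cz, _ = hpfilter(z, lamb=1600)
    fit = sm.OLS(cz, sm.add_constant(cx)).fit()
    rejections += fit.pvalues[1] < 0.05        # naive t-test at the 5% level
print("spurious rejection rate:", rejections / reps)   # typically well above 0.05
```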

5.
Abstract

The normal distribution plays a key role in stochastic modeling in the continuous setting, but its distribution function does not have a closed analytical form. Moreover, the distribution of a complex multicomponent system built from normal variates occasionally poses derivational difficulties. It is therefore worth exploring a discrete version of the normal distribution that can be used for modeling discrete data. With this requirement in mind, we propose a discrete version of the continuous normal distribution. The increasing failure rate property in the discrete setup is ensured. Characterization results are also established to provide a direct link between the discrete normal distribution and its continuous counterpart. The corresponding concept of a discrete approximator for the normal deviate is suggested. An application of the discrete normal distribution to evaluating the reliability of complex systems is elaborated as an alternative to simulation methods.
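A minimal sketch of one simple discretization of N(mu, sigma^2), namely the distribution of round(Z) for Z normal; the article's exact construction and its IFR and characterization results are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def discrete_normal_pmf(k, mu=0.0, sigma=1.0):
    # pmf of round(Z) for Z ~ N(mu, sigma^2):
    # P(X = k) = Phi((k + 0.5 - mu)/sigma) - Phi((k - 0.5 - mu)/sigma), k integer.
    # One simple discretization; not necessarily the article's construction.
    return norm.cdf((k + 0.5 - mu) / sigma) - norm.cdf((k - 0.5 - mu) / sigma)

k = np.arange(-12, 13)
p = discrete_normal_pmf(k, mu=0.0, sigma=2.0)
print(p.sum())              # essentially 1 over a wide enough support
print((k * p).sum())        # mean of the discretized variable, close to mu
```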

6.
7.
ABSTRACT

It is well known that ignoring heteroscedasticity in regression analysis adversely affects the efficiency of estimation and renders the usual procedure for constructing prediction intervals inappropriate. In some applications, such as off-line quality control, knowledge of the variance function is also of considerable interest in its own right. Thus the modeling of variance constitutes an important part of regression analysis. A common practice in modeling variance is to assume that a certain function of the variance can be closely approximated by a function of a known parametric form. The logarithm link function is often used even when it does not fit the observed variation satisfactorily, as other alternatives may yield negative estimated variances. In this paper we propose a rich class of link functions for more flexible variance modeling which alleviates the major difficulty of negative variances. We also suggest an alternative analysis for heteroscedastic regression models that exploits the principle of “separation” discussed in Box (Signal-to-Noise Ratios, Performance Criteria and Transformation. Technometrics 1988, 30, 1–31). The proposed method does not require any distributional assumptions once an appropriate link function for modeling variance has been chosen. Unlike the analysis in Box (1988), the estimated variances and their associated asymptotic variances are found in the original metric (although a transformation has been applied to achieve separation on a different scale), making interpretation of the results considerably easier.

8.
A novel method is proposed for choosing the tuning parameter associated with a family of robust estimators. It consists of minimising estimated mean squared error, an approach that requires pilot estimation of the model parameters. The method is explored for the family of minimum distance estimators proposed by Basu, Harris, Hjort and Jones (1998, Robust and efficient estimation by minimising a density power divergence, Biometrika, 85, 549–559). Our preference in that context is for a version of the method using the L2 distance estimator of Scott (2001, Parametric statistical modeling by minimum integrated squared error, Technometrics, 43, 274–285) as pilot estimator.
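A crude sketch of the idea, assuming a normal location-scale model: fit the minimum density power divergence estimator for several values of the tuning constant alpha and pick the one minimising a bootstrap estimate of the MSE of the location estimate around a pilot fit (alpha = 1, i.e. the L2-type estimator). The article uses an analytic MSE estimate rather than the bootstrap used here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def mdpde_fit(x, alpha):
    """Minimum density power divergence estimate of (mu, sigma) for a normal
    model (Basu et al., 1998); alpha > 0 is the tuning parameter."""
    def objective(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        # closed form of the integral of f^(1+alpha) for N(mu, sigma^2)
        integral = 1.0 / ((2 * np.pi) ** (alpha / 2) * sigma ** alpha * np.sqrt(1 + alpha))
        return integral - (1 + 1 / alpha) * np.mean(norm.pdf(x, mu, sigma) ** alpha)
    res = minimize(objective, x0=[np.median(x), np.log(x.std())])
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])   # contaminated sample
mu_pilot, _ = mdpde_fit(x, alpha=1.0)                             # L2-type pilot estimate
best = None
for alpha in (0.1, 0.25, 0.5, 0.75, 1.0):
    # bootstrap estimate of MSE(mu_hat_alpha) around the pilot value
    boots = [mdpde_fit(rng.choice(x, x.size, replace=True), alpha)[0] for _ in range(100)]
    mse = np.var(boots) + (np.mean(boots) - mu_pilot) ** 2
    if best is None or mse < best[1]:
        best = (alpha, mse)
print("selected alpha:", best[0])
```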

9.
ABSTRACT

The local linear estimator is a popular method for estimating non-parametric regression functions, and many methods have been derived to estimate the smoothing parameter, or the bandwidth in this case. In this article, we propose an information criterion-based bandwidth selection method, with the degrees of freedom originally derived for non-parametric inference. Unlike the plug-in method, the new method does not require preliminary parameters to be chosen in advance, and it is computationally efficient compared to the cross-validation (CV) method. Numerical studies show that the new method performs better than, or comparably to, existing plug-in and CV methods in terms of estimating the mean function, and has lower variability than CV selectors. Real data applications are also provided to illustrate the effectiveness of the new method.
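A sketch of an information-criterion bandwidth selector for the local linear estimator, using df = trace of the smoother matrix inside an AICc-type criterion (in the spirit of Hurvich, Simonoff and Tsai); the article's criterion may differ in detail.

```python
import numpy as np

def local_linear_hat_matrix(x, h):
    """Smoother (hat) matrix of the local linear estimator with a Gaussian kernel."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        d = x - x[i]
        w = np.exp(-0.5 * (d / h) ** 2)
        X = np.column_stack([np.ones(n), d])
        W = np.diag(w)
        # first row of (X'WX)^{-1} X'W gives the fitting weights at x[i]
        H[i] = np.linalg.solve(X.T @ W @ X, X.T @ W)[0]
    return H

def aicc_bandwidth(x, y, grid):
    """Pick h minimising an AICc-type criterion with df = trace of the hat matrix."""
    n, best = len(y), None
    for h in grid:
        H = local_linear_hat_matrix(x, h)
        df = np.trace(H)
        resid = y - H @ y
        aicc = np.log(np.mean(resid ** 2)) + 1 + 2 * (df + 1) / (n - df - 2)
        if best is None or aicc < best[1]:
            best = (h, aicc)
    return best[0]

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 100)
print("selected bandwidth:", aicc_bandwidth(x, y, np.linspace(0.02, 0.3, 15)))
```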

10.
ABSTRACT

Covariates in data arising from epidemiologic studies are often subject to measurement error. In addition, ordinal responses may be misclassified into a category that does not reflect the true state of the respondents. The goal of the present work is to develop an ordered probit model that corrects for classification errors in ordinal responses and/or measurement error in covariates. The maximum likelihood method of estimation is used. A simulation study reveals the effect of ignoring measurement error and/or classification errors on the estimates of the regression coefficients. The methodology developed is illustrated through a numerical example.

11.
ABSTRACT

In this paper, we propose two new simple estimation methods for the two-parameter gamma distribution. The first is a modified version of the method of moments, whereas the second makes use of some key properties of the distribution. We then derive the asymptotic distributions of these estimators. Bias-reduction methods are also suggested to reduce the bias of these estimators. The performance of the estimators is evaluated through a Monte Carlo simulation study. The probability coverages of confidence intervals are also discussed. Finally, two examples are used to illustrate the proposed methods.
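For reference, the ordinary method-of-moments estimates for the two-parameter gamma distribution are shape = x̄²/s² and scale = s²/x̄; the sketch below computes them next to a maximum likelihood fit on simulated data. The article's modified moment estimators and bias corrections are not reproduced here.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.5, scale=1.4, size=200)    # hypothetical data

# ordinary method of moments
xbar, s2 = x.mean(), x.var(ddof=1)
shape_mm = xbar ** 2 / s2
scale_mm = s2 / xbar

# maximum likelihood, for comparison (location fixed at 0)
shape_ml, _, scale_ml = gamma.fit(x, floc=0)
print((shape_mm, scale_mm), (shape_ml, scale_ml))
```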

12.
Abstract

Inferential methods based on ranks provide robust and powerful alternative methodology for testing and estimation. This article pursues two objectives. First, we develop a general method of simultaneous confidence intervals based on rank estimates of the parameters of a general linear model and derive the asymptotic distribution of the pivotal quantity. Second, we extend the method to high-dimensional data, such as gene expression data, for which the usual large-sample approximation does not apply. It is common in practice to use the asymptotic distribution to make inference for small samples. The empirical investigation in this article shows that, for methods based on rank estimates, this approach does not produce valid inference and should be avoided. A method based on the bootstrap is outlined and is shown to provide a reliable and accurate way of constructing simultaneous confidence intervals from rank estimates. In particular, it is shown that the commonly applied normal or t-approximations are not satisfactory, particularly for large-scale inference. Methods based on ranks are uniquely suitable for the analysis of microarray gene expression data, since such analyses often involve large-scale inference based on small samples that contain many outliers and violate the assumption of normality. A real microarray data set is analyzed using the rank-estimate simultaneous confidence intervals. Viability of the proposed method is assessed through a Monte Carlo simulation study under varied assumptions.
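A generic sketch of bootstrap simultaneous confidence intervals built from a rank-based estimator, using the max-|t| device: bootstrap the estimates, take the appropriate quantile of the maximum standardized deviation, and widen every interval by that common multiplier. The Hodges-Lehmann location estimator and the group structure below are stand-ins for the article's general-linear-model rank estimates.

```python
import numpy as np

def hodges_lehmann(x):
    """Rank-based location estimate: median of pairwise (Walsh) averages."""
    i, j = np.triu_indices(len(x))
    return np.median((x[i] + x[j]) / 2)

def simultaneous_ci(groups, B=2000, level=0.95, seed=0):
    """Bootstrap simultaneous CIs for several group locations via the max-|t|
    (studentised maximum modulus) device.  Generic sketch only."""
    rng = np.random.default_rng(seed)
    est = np.array([hodges_lehmann(g) for g in groups])
    boot = np.empty((B, len(groups)))
    for b in range(B):
        boot[b] = [hodges_lehmann(rng.choice(g, g.size, replace=True)) for g in groups]
    se = boot.std(axis=0)
    c = np.quantile(np.max(np.abs(boot - est) / se, axis=1), level)
    return np.column_stack([est - c * se, est + c * se])

rng = np.random.default_rng(4)
groups = [rng.standard_t(3, 15) + mu for mu in (0.0, 0.5, 1.0, 1.5)]
print(simultaneous_ci(groups))
```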

13.
Abstract

We propose a method to determine the order q of a model in a general class of time series models. For the subset of linear moving average models (MA(q)), our method is compared with that based on the sample autocorrelations. Since the sample autocorrelation is meant to detect a linear structure of dependence between random variables, it turns out to be more suitable for the linear case. However, our method is a competitive option even in that case, and for nonlinear models (NLMA(q)) it is shown to work better. The main advantages of our approach are that it makes no assumptions about the existence of moments or about the distribution of the noise involved in the moving average models. We also include an example with real data corresponding to the daily returns of the exchange rate between Mexican pesos and US dollars.
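For comparison, the classical sample-autocorrelation rule mentioned above can be sketched as follows: choose q as the largest lag whose sample autocorrelation falls outside an approximate ±1.96/√n band (a simplification of the Bartlett-band procedure, and the benchmark rather than the authors' proposal).

```python
import numpy as np
from statsmodels.tsa.stattools import acf

def ma_order_from_acf(x, max_lag=20):
    # Pick q as the largest lag whose sample autocorrelation lies outside the
    # +/- 1.96/sqrt(n) band; this is the benchmark method, not the article's.
    r = acf(x, nlags=max_lag, fft=True)
    band = 1.96 / np.sqrt(len(x))
    significant = np.nonzero(np.abs(r[1:]) > band)[0] + 1
    return int(significant.max()) if significant.size else 0

rng = np.random.default_rng(5)
e = rng.standard_normal(600)
x = e[2:] + 0.6 * e[1:-1] - 0.3 * e[:-2]    # a simulated MA(2) series
print(ma_order_from_acf(x))
```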

14.
ABSTRACT

Because of its flexibility and usefulness, the Akaike Information Criterion (AIC) has been widely used for clinical data analysis. In general, however, AIC is used without paying much attention to sample size. If sample sizes are not large enough, the AIC approach may not lead to the conclusions we seek. This article focuses on sample size determination for the AIC approach to clinical data analysis. We consider a situation in which outcome variables are dichotomous and propose a method for sample size determination in this setting. The basic idea is also applicable to situations in which outcome variables have more than two categories or are continuous. We present simulation studies and an application to an actual clinical trial.
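The basic idea can be sketched as a Monte Carlo calculation: for each candidate sample size, simulate dichotomous outcomes from a hypothetical logistic model and estimate the probability that AIC prefers the correct model; the smallest n achieving a target probability is then the required sample size. The model and effect size below are illustrative assumptions, not the article's design.

```python
import numpy as np
import statsmodels.api as sm

def prob_aic_selects_true(n, beta=0.5, reps=300, seed=0):
    """Monte Carlo estimate of the probability that AIC prefers the model with
    the truly active covariate over the intercept-only model (hypothetical
    logistic set-up with a dichotomous outcome)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        p = 1 / (1 + np.exp(-beta * x))
        y = rng.binomial(1, p)
        aic_full = sm.Logit(y, sm.add_constant(x)).fit(disp=0).aic
        aic_null = sm.Logit(y, np.ones((n, 1))).fit(disp=0).aic
        hits += aic_full < aic_null
    return hits / reps

for n in (50, 100, 200, 400):
    print(n, prob_aic_selects_true(n))
```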

15.
ABSTRACT

The estimation of the variance function plays an extremely important role in statistical inference for regression models. In this paper we propose a method for constructing the variance structure by combining exponential polynomial modelling with the kernel smoothing technique. A simple estimation method for the parameters in heteroscedastic linear regression models is developed for the case where the covariance matrix is an unknown diagonal matrix and the variance is a positive function of the mean. The consistency and asymptotic normality of the resulting estimators are established under some mild assumptions. In particular, a simple version of the bootstrap test is adapted to test for misspecification of the variance function. Monte Carlo simulation studies are carried out to examine the finite-sample performance of the proposed methods. Finally, the methodologies are illustrated with an ozone concentration dataset.

16.
In this paper, the notions of the general linear estimator and its modified version are introduced, using the singular value decomposition theorem, in the linear regression model y = Xβ + e to improve some classical linear estimators. The optimal selections of the biasing parameters involved are given theoretically under the prediction error sum of squares (PRESS) criterion. A numerical example and a simulation study are finally conducted to illustrate the superiority of the proposed estimators.
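A sketch of the flavour of such estimators: ordinary ridge regression written through the SVD of X, with the biasing parameter chosen by the prediction error sum of squares (PRESS) computed from leave-one-out residuals. The article's general linear estimator and its modified version are more general than this stand-in.

```python
import numpy as np

def ridge_via_svd(X, y, k):
    """Shrinkage estimator built from the SVD X = U diag(s) V':
    beta(k) = V diag(s/(s^2 + k)) U' y  (ordinary ridge regression)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + k)) * (U.T @ y))

def press(X, y, k):
    """PRESS criterion via the leave-one-out shortcut e_i / (1 - h_ii),
    where h_ii is the diagonal of the ridge hat matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    H = U @ np.diag(s ** 2 / (s ** 2 + k)) @ U.T
    resid = y - H @ y
    return np.sum((resid / (1 - np.diag(H))) ** 2)

rng = np.random.default_rng(7)
n, p = 60, 5
X = rng.standard_normal((n, p))
X[:, 4] = X[:, 0] + 0.01 * rng.standard_normal(n)          # induce collinearity
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 1.0]) + rng.standard_normal(n)
ks = np.logspace(-3, 2, 30)
best_k = ks[np.argmin([press(X, y, k) for k in ks])]
print(best_k, ridge_via_svd(X, y, best_k))
```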

17.
Although still modest, nonresponse rates in multipurpose household surveys have recently increased, especially in some metropolitan areas. Previous analyses have shown that refusal risk depends on the interviewers' characteristics. The aim of this paper is to explain the difference in refusal risk among metropolitan areas by analysing the strategies adopted in the recruitment of interviewers through a multilevel approach. Our data base is the Annual Survey on Living Conditions, a PAPI survey within the "Multipurpose" integrated system of social surveys. For nonresponding households, data on nonresponse by reason, municipality and characteristics of the interviewer are available. The results highlight that cities recruiting interviewers mainly among young students have a higher refusal risk. These results are particularly important as they indicate that recruitment strategies may have a substantial impact on non-sampling errors. Acknowledgements: An earlier version of this article was presented at the International Conference on Improving Surveys, University of Copenhagen, Denmark, August 25-28, 2002. We would like to thank the participants for their useful comments and suggestions. Opinions expressed are those of the authors and do not necessarily represent the official position of any of the institutions they work for.

18.
In this article, we consider the problem of testing for variance breaks in time series in the presence of a changing trend. In performing the test, we employ the cumulative sum of squares (CUSSQ) test introduced by Inclán and Tiao (1994, J. Amer. Statist. Assoc., 89, 913–923). It is shown that the CUSSQ test is not robust in the case of a broken trend and that its asymptotic distribution does not converge to the supremum of a standard Brownian bridge. As a remedy, a bootstrap approximation method is designed to alleviate the size distortions of the test statistic while preserving its high power. Via a bootstrap functional central limit theorem, the consistency of these bootstrap procedures is established under general assumptions. Simulation results are provided for illustration and an empirical application to a set of high-frequency real data is given.
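A minimal sketch of the Inclán-Tiao CUSSQ statistic together with a naive i.i.d.-resampling bootstrap p-value; this plain bootstrap does not address the broken-trend problem that motivates the article's procedure.

```python
import numpy as np

def cussq_stat(x):
    """Inclan & Tiao (1994) centred cumulative sum of squares statistic:
    D_k = C_k / C_T - k/T, with the test based on sqrt(T/2) * max_k |D_k|."""
    T = len(x)
    C = np.cumsum(x ** 2)
    D = C / C[-1] - np.arange(1, T + 1) / T
    return np.sqrt(T / 2) * np.max(np.abs(D))

def bootstrap_pvalue(x, B=999, seed=0):
    """Naive i.i.d.-resampling bootstrap of the CUSSQ statistic (sketch only;
    it does not handle a broken trend)."""
    rng = np.random.default_rng(seed)
    stat = cussq_stat(x)
    boot = np.array([cussq_stat(rng.choice(x, len(x), replace=True)) for _ in range(B)])
    return (1 + np.sum(boot >= stat)) / (B + 1)

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(0, 2, 150)])  # variance break
print(cussq_stat(x), bootstrap_pvalue(x))
```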

19.
Abstract

In this Standards Update column, Todd Carpenter, Executive Director of the National Information Standards Organization (NISO), provides a preview of NISO’s upcoming RA21 Recommended Practice for single sign-on systems. Carpenter describes previous attempts at single sign-on standards, the essential difficulties in authenticating users, and how RA21 will address these issues while ensuring the privacy of user data.

20.

Approximate lower confidence bounds on percentiles of the Weibull and the Birnbaum-Saunders distributions are investigated. Asymptotic lower confidence bounds based on Bonferroni's inequality and the Fisher information are discussed, and parametric bootstrap methods are considered in order to provide better bounds. Since the standard percentile bootstrap method typically does not perform well for confidence bounds on quantiles, several other bootstrap procedures are studied via extensive computer simulations. Results of the simulations indicate that the bootstrap methods generally give sharper lower bounds than the Bonferroni bounds, with coverages still near the nominal confidence level. Two illustrative examples are also presented, one for the tensile strength of carbon micro-composite specimens and the other for cycles-to-failure data.
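A sketch of a parametric-bootstrap lower confidence bound on a Weibull percentile (basic bootstrap variant, two-parameter Weibull fitted by maximum likelihood with the threshold fixed at zero); the article studies several bootstrap refinements and the Birnbaum-Saunders case as well.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_percentile_lcb(x, p=0.10, level=0.95, B=1000, seed=0):
    """Basic parametric-bootstrap lower confidence bound on the p-th Weibull
    quantile.  Only one of several possible bootstrap constructions."""
    c_hat, _, scale_hat = weibull_min.fit(x, floc=0)        # ML fit, location fixed at 0
    q_hat = weibull_min.ppf(p, c_hat, scale=scale_hat)
    rng = np.random.default_rng(seed)
    boot_q = np.empty(B)
    for b in range(B):
        xb = weibull_min.rvs(c_hat, scale=scale_hat, size=len(x), random_state=rng)
        cb, _, sb = weibull_min.fit(xb, floc=0)
        boot_q[b] = weibull_min.ppf(p, cb, scale=sb)
    # basic bootstrap lower bound: 2*q_hat minus the upper quantile of the draws
    return 2 * q_hat - np.quantile(boot_q, level)

rng = np.random.default_rng(9)
x = weibull_min.rvs(2.0, scale=100.0, size=30, random_state=rng)   # hypothetical strengths
print(weibull_percentile_lcb(x))
```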
