Similar Literature
20 similar documents found.
1.
We propose a new statistic for testing linear hypotheses in the nonparametric regression model with a homoscedastic error structure and fixed design. In contrast to most procedures suggested in the literature, ours is applicable in the nonparametric model without regularity conditions, under both the null and the alternative hypotheses. We establish the asymptotic normality of the test statistic under both the null and the alternative. A simulation study investigates the finite-sample properties of the test, with an application to regime switching.

2.
This article deals with the Granger non-causality test in cointegrated vector autoregressive processes. We propose a new testing procedure that has a standard asymptotic distribution and performs well in small samples by combining the standard Wald test with a generalized-inverse procedure. We also propose a few simple modifications to the test statistic to improve its finite-sample performance. Monte Carlo simulations show that our procedure works better than the conventional approach.
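As a rough illustration of the generalized-inverse idea, here is a minimal Python sketch of a Wald-type test that uses the Moore-Penrose pseudoinverse so the statistic remains defined when the restricted covariance is singular. The function name and the example numbers are illustrative, not from the article.

```python
import numpy as np
from scipy import stats

def wald_pinv_test(beta_hat, cov_hat, R):
    """Wald test of H0: R beta = 0 that replaces the ordinary inverse
    of R cov R' with its Moore-Penrose generalized inverse, so the
    statistic stays well defined when the restricted covariance is
    singular; degrees of freedom are taken as the matrix rank."""
    r = R @ beta_hat
    V = R @ cov_hat @ R.T
    W = float(r @ np.linalg.pinv(V) @ r)
    df = np.linalg.matrix_rank(V)
    return W, df, stats.chi2.sf(W, df)

beta = np.array([0.8, 0.05, -0.02])           # illustrative VAR coefficients
cov = 0.01 * np.eye(3)
R = np.array([[0., 1., 0.], [0., 0., 1.]])    # H0: coefficients on the other series are zero
print(wald_pinv_test(beta, cov, R))
```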

3.
In this article, we propose a nonparametric mixture of strictly monotone regression models and derive a two-step procedure for its implementation. We further establish the asymptotic normality of the resulting estimator and demonstrate its good performance through numerical examples.
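A generic EM-style sketch of fitting a two-component mixture of monotone regressions follows; it is not the article's two-step procedure, and the initialization, Gaussian error assumption, and common-variance update are simplifying choices of this sketch.

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

def em_monotone_mixture(x, y, n_iter=50):
    """EM-style sketch for a two-component mixture of monotone
    (non-decreasing) regressions: the E-step computes Gaussian
    responsibilities, the M-step refits each component by weighted
    isotonic regression.  A crude common-variance update is used."""
    base = IsotonicRegression().fit(x, y).predict(x)
    w = np.where(y > base, 0.9, 0.1)        # init: above/below overall fit
    iso1, iso2 = IsotonicRegression(), IsotonicRegression()
    for _ in range(n_iter):
        m1 = iso1.fit(x, y, sample_weight=w).predict(x)       # M-step
        m2 = iso2.fit(x, y, sample_weight=1 - w).predict(x)
        sigma = np.sqrt(np.mean(np.where(w > .5, y - m1, y - m2) ** 2))
        pi1 = w.mean()
        d1 = pi1 * norm.pdf(y, m1, sigma)                     # E-step
        d2 = (1 - pi1) * norm.pdf(y, m2, sigma)
        w = d1 / (d1 + d2)
    return m1, m2, w

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
labels = rng.random(200) < 0.5                # two monotone components
y = np.where(labels, 2 * x, 2 * x + 1.5) + rng.normal(0, 0.2, 200)
m1, m2, w = em_monotone_mixture(x, y)
print(np.round(w[:10], 2))
```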

4.
This article is concerned with partially nonlinear models when the response variables are missing at random. We examine empirical likelihood (EL) ratio statistics for the unknown parameter in the nonlinear function based on complete-case data, semiparametric regression imputation, and bias-corrected imputation. All the proposed statistics are shown to be asymptotically chi-squared distributed under suitable conditions. Simulation experiments compare the finite-sample behavior of the proposed approaches in terms of confidence intervals. The results show that the EL method has an advantage over the conventional method and that the imputation techniques perform better than the complete-case analysis.
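To make the EL machinery concrete, here is a minimal sketch of Owen's empirical likelihood for a simple mean, which is the canonical core that the article's more elaborate imputation-based statistics build on; it is not the article's statistic, and the data are simulated for illustration.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_logratio(x, mu):
    """-2 log empirical likelihood ratio for a mean (Owen, 1988).
    Solves for the Lagrange multiplier on the open interval where all
    implied weights stay positive; requires min(x) < mu < max(x)."""
    z = x - mu
    eps = 1e-8
    lam = brentq(lambda l: np.sum(z / (1 + l * z)),
                 (-1 + eps) / z.max(), (-1 + eps) / z.min())
    return 2 * np.sum(np.log1p(lam * z))

# 95% EL confidence interval: every mu whose log-ratio stays below the
# chi-square(1) quantile, found by a grid scan over the data range.
x = np.random.default_rng(1).normal(5, 2, 80)
grid = np.linspace(x.min() + 1e-3, x.max() - 1e-3, 400)
ci = [m for m in grid if el_logratio(x, m) <= chi2.ppf(0.95, 1)]
print(min(ci), max(ci))
```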

5.
Multi-sample inference for simple-tree alternatives with ranked-set samples
This paper develops a nonparametric multi-sample inference for simple-tree alternatives with ranked-set samples. The inference provides simultaneous one-sample sign confidence intervals for the population medians, and the decision rule compares these intervals to achieve the desired type I error. For specified upper bounds on the experiment-wise error rate, the corresponding individual confidence coefficients are presented, and the testing procedure is shown to be distribution-free. To achieve the desired confidence coefficients, a nonparametric confidence interval is constructed by interpolating between adjacent order statistics; interpolation coefficients and coverage probabilities are provided along with the nominal levels.
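A minimal sketch of a distribution-free sign confidence interval for a single median, with simple linear interpolation between the bracketing order statistics, appears below. The linear interpolation weight is a simple stand-in, not the article's interpolation coefficients, and the sketch assumes a moderately large sample so the bracketing intervals exist.

```python
import numpy as np
from scipy.stats import binom

def sign_ci_median(x, conf=0.95):
    """Distribution-free confidence interval for the median.  The
    interval [x_(k), x_(n+1-k)] (1-based order statistics) has exact
    coverage 1 - 2*Binom.cdf(k-1, n, 1/2).  Only a discrete set of
    levels is attainable, so we interpolate linearly between the two
    intervals bracketing the nominal level."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = np.arange(1, n // 2 + 1)
    cover = 1 - 2 * binom.cdf(k - 1, n, 0.5)   # decreasing in k
    ko = k[cover >= conf][-1]                  # outer interval: coverage >= conf
    ki = ko + 1                                # inner interval: coverage < conf
    co, ci = cover[ko - 1], cover[ki - 1]
    t = (co - conf) / (co - ci)                # linear interpolation weight
    lo = (1 - t) * x[ko - 1] + t * x[ki - 1]
    hi = (1 - t) * x[n - ko] + t * x[n - ki]
    return lo, hi

print(sign_ci_median(np.random.default_rng(3).normal(10, 2, 80)))
```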

6.
Testing for differences between two groups is a fundamental problem in statistics, and developments in Bayesian nonparametrics and semiparametrics have renewed interest in approaches to this problem. Here we describe a new approach to developing such tests and introduce a class of tests that take advantage of developments in Bayesian nonparametric computing. This class uses the connection between the Dirichlet process (DP) prior and the Wilcoxon rank-sum test, and extends the idea to the DP mixture prior. The resulting tests have appropriate frequentist sampling properties in large samples while having the potential to outperform the usual frequentist tests. Extensions to interval and right censoring are considered, and an application to a high-dimensional data set from an RNA-Seq investigation demonstrates the practical utility of the method.

7.
When the distribution of a process characterized by a profile is non-normal, capability analysis under a normality assumption often leads to erroneous interpretations of process performance. Profile monitoring is a relatively new set of quality-control techniques for situations where the state of a product or process is represented by a function of two or more quality characteristics; such profiles can be modeled using linear or nonlinear regression models. In many applications the quality characteristics are assumed to be normally distributed, but when this assumption fails the results can be misleading. In this article, we consider process capability analysis of non-normal linear profiles and investigate and compare five methods for estimating a non-normal process capability index (PCI) in profiles. Three of the methods require an estimate of the cumulative distribution function (cdf) of the process, for which we use a Burr XII distribution as well as the empirical distribution. Because the resulting PCI can be far from its true value, we also apply an artificial neural network with supervised learning, which estimates PCIs in profiles without estimating the cdf of the process. A Box-Cox transformation technique is also employed to deal with non-normality. Finally, a comparison study is performed through simulation of gamma, Weibull, lognormal, beta, and Student's t data.
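The percentile-based idea can be sketched as follows: replace the 6-sigma spread in the usual index by the 0.135th-99.865th percentile range (a Clements-type construction), taking percentiles either from the empirical distribution or from a fitted Burr XII cdf. This is a generic sketch of that one ingredient, not the article's five methods or its neural-network approach; the specification limits and gamma data are illustrative.

```python
import numpy as np
from scipy import stats

def percentile_cpk(x, lsl, usl):
    """Percentile-based (Clements-type) capability indices for
    non-normal data: the 6-sigma spread is replaced by the range
    between the 0.135th and 99.865th percentiles, and the mean by
    the median.  Percentiles come from both the empirical
    distribution and a fitted Burr XII cdf."""
    q = [0.00135, 0.5, 0.99865]
    emp = np.quantile(x, q)
    c, d, loc, scale = stats.burr12.fit(x)
    burr = stats.burr12.ppf(q, c, d, loc=loc, scale=scale)
    out = {}
    for name, (p_lo, med, p_hi) in {"empirical": emp, "burr12": burr}.items():
        cp = (usl - lsl) / (p_hi - p_lo)
        cpk = min((usl - med) / (p_hi - med), (med - lsl) / (med - p_lo))
        out[name] = (cp, cpk)
    return out

x = np.random.default_rng(0).gamma(shape=2.0, scale=1.5, size=500)  # skewed data
print(percentile_cpk(x, lsl=0.1, usl=12.0))
```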

8.
Usual tests for trends are carried out under a null hypothesis. This article presents a test of a non-null hypothesis for linear trends in proportions. A weighted least-squares method is used to estimate the regression coefficient of the proportions, and the non-null hypothesis sets its expectation equal to a prescribed margin. The coefficient's variance is used to construct the basic relationship for linear trends in proportions via the asymptotic normal method, from which the sample-size formula, the power function, and the test statistic are derived. The expected power is obtained from the power function, and the observed power is estimated by the Monte Carlo method; the agreement between the two is excellent. Setting the margin equal to zero recovers the classical test for linear trends in proportions. The proposed test is the non-null counterpart of the classical test and can be used to assess the clinical significance of trends among several proportions, whereas the classical test is restricted to testing statistical significance. A data set from a website is used to illustrate the methodology.
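A minimal sketch of the weighted least-squares construction follows: estimate the slope of the group proportions on dose scores with group sizes as weights, and compare it with a prescribed margin via an asymptotic z-statistic. The weighting scheme and the example counts are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy.stats import norm

def trend_in_proportions(events, n, scores, margin=0.0):
    """Weighted-least-squares slope of group proportions on dose
    scores, tested against a prescribed non-null margin; margin=0
    reduces to the classical test for linear trend in proportions."""
    p = events / n
    xbar = np.sum(n * scores) / np.sum(n)           # weighted mean score
    sxx = np.sum(n * (scores - xbar) ** 2)
    b = np.sum(n * (scores - xbar) * p) / sxx       # WLS slope
    var_b = np.sum(n * (scores - xbar) ** 2 * p * (1 - p)) / sxx ** 2
    z = (b - margin) / np.sqrt(var_b)
    return b, z, 2 * norm.sf(abs(z))

events = np.array([8, 15, 24, 30])                  # illustrative counts
n = np.array([100, 100, 100, 100])
print(trend_in_proportions(events, n, scores=np.arange(4), margin=0.05))
```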

9.
The comparison of increasing doses of a compound with a zero-dose control is of interest in medical and toxicological studies. Assume that the mean dose effects are non-decreasing among the non-zero doses of the compound. A simple modification of Dunnett's procedure is proposed to construct simultaneous confidence intervals for pairwise comparisons of each dose group with the zero-dose control by utilizing the ordering of the means. Unlike with Dunnett's procedure, the simultaneous lower and upper bounds of the new procedure are monotone, which is useful for categorizing dose levels. The expected gains of the new procedure over Dunnett's procedure are studied, and a real-data example shows that the new procedure compares well with its predecessor.
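The monotonization idea can be sketched generically: under the assumed ordering of the dose means, a lower bound valid for a lower dose also bounds every higher dose, and an upper bound valid for a higher dose also bounds every lower dose, so a running maximum/minimum never loses coverage. This is a sketch of the idea only, not the article's exact procedure; `crit` stands in for a Dunnett critical point the user supplies, and the numbers are illustrative.

```python
import numpy as np

def monotone_dunnett_bounds(means, sem, crit):
    """Sharpen Dunnett-style simultaneous bounds for mu_i - mu_0 using
    the assumed ordering mu_1 <= ... <= mu_k of the non-zero doses:
    take the running max of lower bounds (from the bottom) and the
    running min of upper bounds (from the top)."""
    diff = means[1:] - means[0]
    lower = diff - crit * sem
    upper = diff + crit * sem
    lo = np.maximum.accumulate(lower)               # monotone non-decreasing
    hi = np.minimum.accumulate(upper[::-1])[::-1]   # monotone non-decreasing
    return lo, hi

means = np.array([0.0, 0.5, 0.4, 1.1])   # control first, then increasing doses
print(monotone_dunnett_bounds(means, sem=np.full(3, 0.3), crit=2.35))
```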

10.
Given pollution measurements from a network of monitoring sites in a city over an extended period of time, an important problem is to identify the spatial and temporal structure of the data. In this paper we focus on identifying and estimating a nonparametric statistical model to analyse SO2 concentrations in the city of Padua, where data are collected by some fixed stations and by mobile stations that move among new locations without any specific rule. A consequence of using mobile stations is that, for each location, there are times when data were not collected. Assuming temporal stationarity and spatial isotropy for the residuals of an additive model for the logarithm of the SO2 concentration, we estimate the semivariogram using a kernel-type estimator; attempts are also made to relax the assumption of spatial isotropy. Bootstrap confidence bands are obtained for the spatial component of the additive model, a deterministic function that defines the spatial structure. Finally, an example shows how to design an optimal network for the mobile monitoring stations at a fixed future time, given all the available information.
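A generic kernel-type semivariogram estimator, smoothing the squared increments of a spatial field over pair distances with a Gaussian kernel, can be sketched as below. The Nadaraya-Watson form and the simulated field are assumptions of this sketch, not necessarily the paper's estimator.

```python
import numpy as np

def kernel_semivariogram(coords, z, h_grid, bandwidth):
    """Kernel estimate of an isotropic semivariogram:
    gamma(h) = 0.5 * E[(Z(s) - Z(s'))^2 | ||s - s'|| = h],
    estimated by Nadaraya-Watson smoothing of the squared pair
    increments against the pair distances."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # each pair counted once
    d, sq = d[iu], sq[iu]
    gamma = []
    for h in h_grid:
        w = np.exp(-0.5 * ((h - d) / bandwidth) ** 2)
        gamma.append(np.sum(w * sq) / np.sum(w))
    return np.array(gamma)

rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, (100, 2))          # illustrative station locations
z = np.sin(coords[:, 0]) + rng.normal(0, 0.3, 100)
print(kernel_semivariogram(coords, z, h_grid=np.linspace(0.5, 8, 6), bandwidth=0.7))
```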

11.
In many linear inverse problems the unknown function f (or its discrete approximation Θ, a p×1 vector) is subject to non-negativity constraints; we call these non-negative linear inverse problems (NNLIPs). This article considers NNLIPs without confining the error distribution to the traditional Gaussian or Poisson cases: we adopt the exponential family of distributions, of which Gaussian and Poisson are special cases, and seek the non-negative maximum penalized likelihood (NNMPL) estimate of Θ. The size of Θ often prohibits direct implementation of traditional methods for constrained optimization. Given that the measurements and point-spread-function (PSF) values are all non-negative, we propose a simple multiplicative iterative algorithm. We show that without a penalty the algorithm converges almost surely; otherwise a relaxation or line search is needed to ensure convergence.
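A well-known multiplicative scheme of this flavor is the ISRA-type update for non-negative least squares (the unpenalized Gaussian special case): theta <- theta * (A^T y) / (A^T A theta). With non-negative A and y, every iterate stays non-negative with no projection needed. The sketch below shows only this classical special case, not the article's exponential-family or penalized algorithm; the problem sizes are illustrative.

```python
import numpy as np

def isra(A, y, n_iter=500):
    """Multiplicative (ISRA-type) iteration for least squares under
    non-negativity: theta <- theta * (A^T y) / (A^T A theta).
    Non-negative A and y keep every iterate non-negative."""
    theta = np.full(A.shape[1], y.mean() / max(A.sum(axis=0).mean(), 1e-12))
    Aty = A.T @ y
    for _ in range(n_iter):
        denom = A.T @ (A @ theta) + 1e-12     # guard against division by zero
        theta *= Aty / denom
    return theta

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, size=(60, 20))          # non-negative PSF matrix
truth = rng.uniform(0, 2, size=20)
y = A @ truth + 0.01 * rng.uniform(size=60)   # non-negative measurements
print(np.round(isra(A, y)[:5], 2), np.round(truth[:5], 2))
```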

12.
Holm's step-down testing procedure starts with the smallest p-value and sequentially screens larger p-values, without providing any confidence-interval information. This article changes the conventional step-down testing framework by presenting a nonparametric procedure that starts with the largest p-value and sequentially screens smaller p-values, step by step, to construct a set of simultaneous confidence sets. We use a partitioning approach to prove that the new procedure controls the simultaneous confidence level (and thus strongly controls the familywise error rate). Notable features of the new stepwise procedure include consistency with individual inference, coherence, and confidence estimates for follow-up investigations. In a simple simulation study, the proposed procedure, treated as a testing procedure, is more powerful than Holm's procedure when the correlation coefficient is large, and less powerful when it is small. In the analysis of data from a medical study, the new procedure detects the efficacy of aspirin as a cardiovascular prophylaxis in a nonparametric setting.
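For reference, the classical comparator is easy to state in code: Holm rejects the sorted p-values while p_(i) <= alpha/(m - i + 1) and stops at the first failure. This sketch shows only that comparator, not the article's confidence-set procedure.

```python
import numpy as np

def holm(pvals, alpha=0.05):
    """Holm's step-down procedure: sort p-values increasingly, reject
    while p_(i) <= alpha / (m - i + 1), stop at the first failure.
    Returns a boolean rejection mask in the original order."""
    p = np.asarray(pvals)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(np.argsort(p)):   # rank = 0, ..., m-1
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break
    return reject

print(holm([0.001, 0.01, 0.03, 0.2]))
```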

13.
Numerous methods have been developed to calculate confidence intervals for the binomial proportion π. Boundedness and discreteness of the sample space imply that none achieves exactly the nominal α/2 left and right non-coverage. We consider whether intervals calculated by a particular method tend to be located too close to, or too far from, the centre of symmetry of the support scale, 1/2. Interval location may be characterized by the balance of mesial and distal non-coverage in a study evaluating coverage. A complementary approach, applicable to a single calculated interval, is derived from the Box-Cox family of scale transformations.
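Splitting non-coverage by side is straightforward to compute exactly for any interval method; the sketch below does so for the Wilson score interval (my choice of method for illustration) by summing binomial probabilities of the samples whose interval misses the true π from below or from above.

```python
import numpy as np
from scipy.stats import binom, norm

def wilson(x, n, conf=0.95):
    """Wilson score interval for a binomial proportion."""
    z = norm.ppf(0.5 + conf / 2)
    center = (x + z**2 / 2) / (n + z**2)
    half = z * np.sqrt(x * (n - x) / n + z**2 / 4) / (n + z**2)
    return center - half, center + half

def noncoverage_split(n, pi, conf=0.95):
    """Exact left/right non-coverage of the Wilson interval at a true
    proportion pi.  For pi < 1/2, an interval missing pi from above
    (lo > pi) lies toward 1/2, a mesial miss; one missing from below
    (hi < pi) lies toward 0, a distal miss."""
    x = np.arange(n + 1)
    lo, hi = wilson(x, n, conf)
    pmf = binom.pmf(x, n, pi)
    return pmf[lo > pi].sum(), pmf[hi < pi].sum()

print(noncoverage_split(n=30, pi=0.1))
```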

14.
This article studies the non-null distribution of the two-sample t-statistic, or Welch statistic, under non-normality. The asymptotic expansion of the non-null distribution is derived up to order n^(-1), where n is the pooled sample size, under general conditions, and is used to compare the power with that obtained by the normal-theory method. A simple technique, a monotone transformation, is recommended to achieve more power in practice.
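A Monte Carlo complement to the expansion is easy to set up: simulate the power of the Welch test under user-supplied non-normal distributions. The exponential alternatives below are an illustrative choice, not the article's design.

```python
import numpy as np
from scipy import stats

def welch_power_sim(dist1, dist2, n1, n2, reps=5000, alpha=0.05, seed=0):
    """Monte Carlo power of the two-sample Welch t-test under
    arbitrary (possibly non-normal) sampling distributions, each
    given as a callable (rng, size) -> sample."""
    rng = np.random.default_rng(seed)
    rej = 0
    for _ in range(reps):
        x = dist1(rng, n1)
        y = dist2(rng, n2)
        rej += stats.ttest_ind(x, y, equal_var=False).pvalue < alpha
    return rej / reps

# skewed alternatives: exponential samples whose means differ by 0.5
power = welch_power_sim(lambda r, n: r.exponential(1.0, n),
                        lambda r, n: r.exponential(1.5, n), 30, 30)
print(power)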

15.
One of the most important issues in toxicity studies is identifying treatments that are equivalent to a placebo. Because it is unacceptable to declare non-equivalent treatments equivalent, it is important to adopt a reliable statistical method that properly controls the family-wise error rate (FWER). In this context, overestimating toxicity equivalence is a more serious error than underestimating it; consequently, asymmetric loss functions are more appropriate than symmetric ones. Tao, Tang & Shi (2010) developed a procedure based on an asymmetric loss function, but it assumes that the variances of the various dose levels are known, an assumption that is restrictive in some applications. In this study we propose an improved approach based on asymmetric confidence intervals without the restrictive assumption of known variances. The asymmetry guarantees reliability in the sense that the FWER is well controlled. Although our procedure is developed assuming that the variances of the dose levels are unknown but equal, simulation studies show that it still performs quite well when the variances are unequal.

16.
This paper considers the problem of identifying which treatments are strictly worse than the best treatment or treatments in a one-way layout, a problem with many important applications in screening trials for new product development. A procedure is proposed that selects a subset of the treatments containing only treatments known to be strictly worse than the best treatment or treatments. In addition, simultaneous confidence intervals are obtained that provide upper bounds on how inferior the treatments are compared with these best treatments. In this way, the new procedure shares the characteristics of both subset-selection and multiple-comparison procedures. Tables of critical points are provided for implementing the new procedure, and examples of its use are given.

17.
In many applications, the parameters of interest are estimated by solving non-smooth estimating functions with a U-statistic structure. Because the asymptotic covariance matrix of the estimator generally involves the underlying density function, resampling methods are often used to bypass the difficulty of nonparametric density estimation. Despite its simplicity, the resulting covariance-matrix estimator depends on the nature of the resampling, and the method can be time-consuming when the number of replications is large. Furthermore, the inferences are based on a normal approximation that may not be accurate for practical sample sizes. In this paper, we propose a jackknife empirical likelihood inferential procedure for non-smooth estimating functions; standard chi-square distributions are used to calculate p-values and to construct confidence intervals. Extensive simulation studies and two real examples illustrate its practical utility.
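The jackknife empirical likelihood recipe can be sketched compactly: turn any (possibly non-smooth) estimator into approximately i.i.d. jackknife pseudo-values, apply empirical likelihood for a mean to them, and calibrate with chi-square(1). This is a minimal sketch under Gaussian-free assumptions; the median of Cauchy data is an illustrative non-smooth example, not from the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def jel_logratio(x, estimator, theta0):
    """Jackknife empirical likelihood: form pseudo-values
    V_i = n*T(x) - (n-1)*T(x without i), then compute the EL
    log-ratio for their mean at theta0 (compare to chi2(1))."""
    n = len(x)
    t_full = estimator(x)
    pseudo = np.array([n * t_full - (n - 1) * estimator(np.delete(x, i))
                       for i in range(n)])
    z = pseudo - theta0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                       # theta0 outside the convex hull
    eps = 1e-8
    lam = brentq(lambda l: np.sum(z / (1 + l * z)),
                 (-1 + eps) / z.max(), (-1 + eps) / z.min())
    return 2 * np.sum(np.log1p(lam * z))

x = np.random.default_rng(2).standard_cauchy(60)
print(jel_logratio(x, np.median, theta0=0.0), chi2.ppf(0.95, 1))
```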

18.
This article proposes a confidence-interval procedure for an open question posed by Tamhane and Dunnett regarding inference on the minimum effective dose of a drug with binary data. We use a partitioning approach in conjunction with a confidence-interval procedure to provide a solution to this problem. Binary data frequently arise in medical investigations in connection with dichotomous outcomes such as the development of a disease or the efficacy of a drug. The proposed procedure not only detects the minimum effective dose, but also provides estimation information on the treatment effect of the closest ineffective dose, which benefits follow-up investigations in clinical trials. We prove that when the confidence interval for the pairwise comparison has (or asymptotically controls) confidence level 1 − α, the stepwise procedure strongly (or asymptotically) controls the familywise error rate at level α, a key criterion in dose finding. The new method is compared with other procedures in terms of power and coverage probability using simulations, which shed light on its discernible features. An example on the investigation of acetaminophen is included.
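To fix ideas, here is a generic fixed-sequence illustration of confidence-interval-based minimum-effective-dose finding with binary outcomes: starting from the highest dose, compute a one-sided lower Wald bound for p_dose - p_control and step down while the bound exceeds the effectiveness margin. Testing in a fixed order controls the FWER at level alpha without multiplicity adjustment. This is a standard stand-in, not the article's partitioning procedure; the data are illustrative.

```python
import numpy as np
from scipy.stats import norm

def fixed_sequence_med(events, n, delta=0.0, alpha=0.05):
    """Fixed-sequence minimum-effective-dose search for binary data:
    step down from the highest dose while the one-sided lower Wald
    bound for p_i - p_0 exceeds delta; return the lowest dose index
    still declared effective (None if no dose is effective)."""
    p = events / n
    z = norm.ppf(1 - alpha)
    med = None
    for i in range(len(p) - 1, 0, -1):                 # highest dose first
        se = np.sqrt(p[i] * (1 - p[i]) / n[i] + p[0] * (1 - p[0]) / n[0])
        if (p[i] - p[0]) - z * se > delta:
            med = i                                    # dose i still effective
        else:
            break
    return med

print(fixed_sequence_med(np.array([10, 12, 20, 28]), np.array([50] * 4)))
```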

19.
We consider a 2^r factorial experiment with at least two replicates. Our aim is to find a confidence interval for θ, a specified linear combination of the regression parameters (for the model written as a regression, with factor levels coded as −1 and 1). We suppose that preliminary hypothesis tests are carried out sequentially, beginning with the rth-order interaction. After these preliminary tests, a confidence interval for θ with nominal coverage 1 − α is constructed as if the selected model had been given a priori. We describe a new efficient Monte Carlo method, which employs conditioning for variance reduction, for estimating the minimum coverage probability of the resulting confidence interval. The method is demonstrated for a 2^3 factorial experiment with two replicates and a particular contrast θ of interest, with the following two-step preliminary testing procedure. We first test the null hypothesis that the third-order interaction is zero against the alternative that it is non-zero; if this null hypothesis is accepted, we assume the interaction is zero and proceed to the second step, otherwise we stop. In the second step, we test, for each second-order interaction, the null hypothesis that the interaction is zero against the alternative that it is non-zero, and assume the interaction is zero whenever the null hypothesis is accepted. The resulting confidence interval, with nominal coverage probability 0.95, has a minimum coverage probability that is, to a good approximation, 0.464, showing that this confidence interval is completely inadequate.

20.
Neighbor designs are recommended for cases where the performance of a treatment is affected by the neighboring treatments, as in biometrics and agriculture. In this paper we construct two new series of non-binary partially neighbor-balanced designs for v = 2n and v = 2n + 1 treatments, respectively. The blocks in the designs are non-binary and circular, but no treatment is ever a neighbor to itself. The proposed designs are partially balanced in terms of nearest neighbors; no such series were previously known in the literature.
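Checking neighbor balance is mechanical once the blocks are written down: walk each circular block, record the unordered adjacent pairs, and inspect the counts (a self-pair would indicate a treatment neighboring itself). The blocks in the example are illustrative, not the article's series.

```python
from collections import Counter

def neighbor_counts(blocks):
    """Count unordered neighbor pairs in circular blocks.  In a fully
    neighbor-balanced design every pair of distinct treatments appears
    equally often; in a partially balanced one the counts take a small
    set of values.  A singleton key would flag a self-neighbor."""
    counts = Counter()
    for b in blocks:
        k = len(b)
        for i in range(k):
            counts[frozenset((b[i], b[(i + 1) % k]))] += 1   # circular adjacency
    return counts

# illustrative circular blocks for v = 5 treatments
blocks = [(0, 1, 2, 3, 4), (0, 2, 4, 1, 3)]
print(neighbor_counts(blocks))
```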
