Similar Articles
20 similar articles found (search time: 31 ms)
1.
We consider the likelihood ratio test (LRT) process related to the test of the absence of QTL (a QTL denotes a quantitative trait locus, i.e. a gene with a quantitative effect on a trait) on the interval [0, T] representing a chromosome. The originality of this study is that we work under selective genotyping: only the individuals with extreme phenotypes are genotyped. We give the asymptotic distribution of this LRT process under the null hypothesis that there is no QTL on [0, T] and under local alternatives with a QTL at t on [0, T]. We show that the LRT process is asymptotically the square of a ‘non-linear interpolated and normalized Gaussian process’. We give an easy formula for computing the supremum of the square of this interpolated process. We prove that we have to genotype symmetrically and that the threshold is exactly the same as in the situation where all the individuals are genotyped.

2.
Testing the existence of a quantitative trait locus (QTL) effect is an important task in QTL mapping studies. Most studies concentrate on the case where the phenotype distributions of different QTL groups follow normal distributions with the same unknown variance. In this paper we make a more general assumption that the phenotype distributions come from a location-scale distribution family. We derive the limiting distribution of the likelihood ratio test (LRT) for the existence of the QTL effect in both location and scale in genetic backcross studies. We further identify an explicit representation for this limiting distribution. As a complement, we study the limiting distribution of the LRT and its explicit representation for the existence of the QTL effect in the location only. The asymptotic properties of the LRTs under a local alternative are also investigated. Simulation studies are used to evaluate the asymptotic results, and a real-data example is included for illustration.
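Under normality, the location-only LRT for a backcross contrasts a pooled fit with separate group means and has a simple closed form; a minimal sketch in Python (the group labels and effect size below are illustrative, not taken from the paper):

```python
import numpy as np

def lrt_location(y1, y2):
    """-2 log(likelihood ratio) for H0: equal means, assuming normal
    phenotypes with a common unknown variance in both genotype groups."""
    y = np.concatenate([y1, y2])
    ss0 = np.sum((y - y.mean()) ** 2)            # pooled fit under H0
    ss1 = (np.sum((y1 - y1.mean()) ** 2)
           + np.sum((y2 - y2.mean()) ** 2))      # separate means under H1
    return len(y) * np.log(ss0 / ss1)

rng = np.random.default_rng(0)
aa = rng.normal(0.0, 1.0, 200)      # genotype group AA
ab = rng.normal(0.8, 1.0, 200)      # genotype group Aa with a location shift
print(lrt_location(aa, ab))
```

Under the null this statistic is asymptotically chi-squared with one degree of freedom; the location-scale case of the paper adds a variance ratio term.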

3.
Social network analysis is an important analytic tool for forecasting social trends by modeling and monitoring the interactions between network members. This paper proposes an extension of a statistical process control method to monitor social networks by determining the baseline periods during which the reference network set is collected. We use a probability density profile (PDP) to identify baseline periods, with Poisson regression modeling the communications between members. In addition, Hotelling T2 and likelihood ratio test (LRT) statistics are developed to monitor the network in Phase I. The results, based on signal probability, indicate a satisfactory performance for the proposed method.
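The Hotelling T2 statistic used in such Phase-I schemes is the squared Mahalanobis distance of a snapshot from the baseline; a sketch assuming the baseline mean and covariance are estimated from reference snapshots (all names and the toy Poisson data are illustrative):

```python
import numpy as np

def hotelling_t2(x, mu, sigma_inv):
    """T2 = (x - mu)' Sigma^{-1} (x - mu): squared Mahalanobis
    distance of one observation vector from the in-control mean."""
    d = np.asarray(x, float) - mu
    return float(d @ sigma_inv @ d)

# reference (Phase I) sample: rows are network snapshots of count features
rng = np.random.default_rng(1)
ref = rng.poisson(5.0, size=(100, 3)).astype(float)
mu = ref.mean(axis=0)
sigma_inv = np.linalg.inv(np.cov(ref, rowvar=False))

print(hotelling_t2(ref[0], mu, sigma_inv))
```

A snapshot signals when its T2 exceeds a control limit chosen for a target false-alarm probability.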

4.
We consider the problem of testing the equality of two population means when the population variances are not necessarily equal. We propose a Welch-type statistic, say T*c, based on Tiku's (1967, 1980) modified maximum likelihood estimators, and show that this statistic is robust to symmetric and moderately skew distributions. We investigate the power properties of the statistic T*c, which clearly seems to be more powerful than Yuen's (1974) Welch-type robust statistic based on the trimmed sample means and the matching sample variances. We show that the analogous statistics based on the ‘adaptive’ robust estimators give misleading Type I errors. We generalize the results to testing linear contrasts among k population means.
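For context, Yuen's (1974) statistic replaces the sample means and variances in Welch's test by trimmed means and winsorized variances; a rough sketch (the trimming proportion is illustrative, and this is the comparison statistic, not the paper's T*c):

```python
import numpy as np

def trimmed_mean(a, g):
    """Mean after dropping the g smallest and g largest observations."""
    a = np.sort(np.asarray(a, float))
    return a[g:len(a) - g].mean()

def winsorized_ss(a, g):
    """Winsorized sum of squares: extremes clamped to retained neighbours."""
    a = np.sort(np.asarray(a, float))
    w = a.copy()
    if g > 0:
        w[:g] = a[g]
        w[-g:] = a[-g - 1]
    return np.sum((w - w.mean()) ** 2)

def yuen_t(x, y, prop=0.1):
    """Yuen-type Welch statistic on trimmed means."""
    gx, gy = int(prop * len(x)), int(prop * len(y))
    hx, hy = len(x) - 2 * gx, len(y) - 2 * gy
    dx = winsorized_ss(x, gx) / (hx * (hx - 1))
    dy = winsorized_ss(y, gy) / (hy * (hy - 1))
    return (trimmed_mean(x, gx) - trimmed_mean(y, gy)) / np.sqrt(dx + dy)
```

The statistic is referred to a t distribution with Welch-type estimated degrees of freedom.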

5.
Quantitative trait loci (QTL) mapping has been a standard means of identifying genetic regions harboring potential genes underlying complex traits. The likelihood ratio test (LRT) has been commonly applied to assess the significance of a genetic locus in a mixture model context. Given the computational cost of the permutation tests commonly used to assess the significance of the LRT in QTL mapping, we study the behavior of the LRT statistic in a mixture model when the proportions of the distributions are unknown. We find that the asymptotic null distribution is a stationary Gaussian process after a suitable transformation. The result can be applied to one-parameter exponential family mixture models. Under certain conditions, such as in a backcross mapping model, the tail probability of the supremum of the process can be calculated and the threshold values determined by solving the distribution function. Simulation studies were performed to evaluate the asymptotic results.

6.
In this article, the general linear profile-monitoring problem in multistage processes is addressed. An approach based on the U statistic is first proposed to remove the effect of the cascade property in multistage processes. Then, the T2 chart and a likelihood ratio test (LRT)-based scheme on the adjusted parameters are constructed for Phase-I monitoring of the parameters of general linear profiles in each stage. Using simulation experiments, the performance of the proposed methods is evaluated and compared in terms of the signal probability for both weak and strong autocorrelations, for processes with two and three stages, as well as for two sample sizes. According to the results, the effect of the cascade property is effectively removed and hence each stage can be monitored independently. In addition, the results show that the LRT approach provides significantly better results than the T2 method and outperforms it under different shift and autocorrelation scenarios. Moreover, the proposed methods perform better when larger sample sizes are used in the process. Two illustrative examples, including a real case and a simulated example, are used to show the applicability of the proposed methods.

7.
It is common to monitor several correlated quality characteristics using Hotelling's T2 statistic. However, T2 confounds the location shift with the scale shift, and consequently it is often difficult to determine the factors responsible for an out-of-control signal in terms of the process mean vector and/or process covariance matrix. In this paper, we propose a diagnostic procedure called the ‘D-technique’ to detect the nature of the shift. For this purpose, two sets of regression equations, each consisting of the regression of a variable on the remaining variables, are used to characterize the ‘structure’ of the ‘in control’ process and that of the ‘current’ process. To determine the sources responsible for an out-of-control state, it is shown that it is enough to compare these two structures using the dummy variable multiple regression equation. The proposed method is operationally simpler and computationally advantageous over existing diagnostic tools. The technique is illustrated with various examples.

8.
We study the problem of testing H0: μ ∈ P against H1: μ ∉ P, based on a random sample of N observations from a p-dimensional normal distribution Np(μ, Σ) with Σ > 0 and P a closed convex positively homogeneous set. We develop the likelihood-ratio test (LRT) for this problem. We show that the union-intersection principle leads to a test equivalent to the LRT. It also gives a large class of tests which are shown to be admissible by Stein's theorem (1956). Finally, we give the α-level cutoff points for the LRT.

9.
The average availability of a repairable system is the expected proportion of time that the system is operating in the interval [0, t]. The present article discusses the nonparametric estimation of the average availability when (i) the data on ‘n’ complete cycles of system operation are available, (ii) the data are subject to right censorship, and (iii) the process is observed up to a specified time ‘T’. In each case, a nonparametric confidence interval for the average availability is also constructed. Simulations are conducted to assess the performance of the estimators.
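In case (i), with complete operating/repair cycles, a natural point estimate is total uptime over total elapsed time; a minimal sketch under that assumption (the paper's estimators and confidence intervals are more refined):

```python
import numpy as np

def average_availability(up, down):
    """Estimate long-run average availability from n complete cycles:
    total uptime divided by total elapsed (up + down) time."""
    up = np.asarray(up, float)
    down = np.asarray(down, float)
    return up.sum() / (up.sum() + down.sum())

# two cycles: 9 h running then 1 h under repair, twice
print(average_availability([9.0, 9.0], [1.0, 1.0]))  # → 0.9
```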

10.
In some industrial applications, the quality of a process or product is characterized by a relationship between the response variable and one or more independent variables, which is called a profile. There are many approaches for monitoring different types of profiles in the literature. Most researchers assume that the response variable follows a normal distribution. However, this assumption may be violated in many cases. The most likely situation is when the response variable follows a distribution from the generalized linear models (GLMs) family. For example, when the response variable is the number of defects in a certain area of a product, the observations follow a Poisson distribution, and ignoring this fact will cause misleading results. In this paper, three methods, including a T2-based method, a likelihood ratio test (LRT) method and an F method, are developed and modified in order to be applied to monitoring GLM regression profiles in Phase I. The performance of the proposed methods is analysed and compared for the special case that the response variable follows a Poisson distribution. A simulation study is conducted in terms of the signal probability criterion. Results show that the LRT method performs better than the two other methods, and the F method performs better than the T2-based method, in detecting either small or large step shifts as well as drifts. Moreover, the F method performs better than the other two methods, and the LRT method performs poorly in comparison with the F and T2-based methods, in detecting outliers. A real case, in which the size and number of agglomerates ejected from a volcano on successive days form the GLM profile, is illustrated, and the proposed methods are applied to determine whether the number of agglomerates of each size is under statistical control or not. Results showed that the proposed methods could handle the mentioned situation and distinguish the out-of-control conditions.
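For the Poisson case, the LRT against known in-control fitted means reduces to the Poisson deviance; a minimal sketch (in practice the in-control means mu would come from a Phase-I GLM fit; names are illustrative):

```python
import numpy as np

def poisson_deviance(y, mu):
    """Poisson deviance of observed counts y against in-control means mu:
    2 * sum(y log(y/mu) - (y - mu)), with the convention 0 log 0 = 0.
    Under control it is approximately chi-squared with len(y) df."""
    y = np.asarray(y, float)
    mu = np.asarray(mu, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(y > 0, y * np.log(y / mu), 0.0)
    return 2.0 * np.sum(term - (y - mu))

mu = np.array([4.0, 5.0, 6.0])          # in-control profile means
print(poisson_deviance(mu * 2, mu))      # doubled counts: large deviance
```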

11.
We consider an inhomogeneous Poisson process X on [0, T]. The intensity function of X is supposed to be strictly positive and smooth on [0, T] except at the point θ, at which it has either a 0-type singularity (tends to 0 like |x|^p, p ∈ (0, 1)) or an ∞-type singularity (tends to ∞ like |x|^p, p ∈ (−1, 0)). We suppose that we know the shape of the intensity function, but not the location of the singularity. We consider the problem of estimation of this location (shift) parameter θ based on n observations of the process X. We study the Bayesian estimators and, in the case p > 0, the maximum-likelihood estimator. We show that these estimators are consistent, their rate of convergence is n^{1/(p+1)}, they have different limit distributions, and the Bayesian estimators are asymptotically efficient.

12.
Consider an inhomogeneous Poisson process X on [0, T] whose unknown intensity function “switches” from a lower function g* to an upper function h* at some unknown point θ* that has to be identified. We consider two known continuous functions g and h such that g*(t) ≤ g(t) < h(t) ≤ h*(t) for 0 ≤ t ≤ T. We describe the behavior of the generalized likelihood ratio and Wald’s tests constructed on the basis of a misspecified model in the asymptotics of large samples. The power functions are studied under local alternatives and compared numerically with the help of simulations. We also show the following robustness result: the Type I error rate is preserved even though a misspecified model is used to construct the tests.

13.
The phenotype of a quantitative trait locus (QTL) is often modeled by a finite mixture of normal distributions. If the QTL effect depends on the number of copies of a specific allele one carries, then the mixture model has three components. In this case, the mixing proportions have a binomial structure according to the Hardy–Weinberg equilibrium. In the search for QTL, a significance test of homogeneity against the Hardy–Weinberg normal mixture model alternative is an important first step. The LOD score method, a likelihood ratio test used in genetics, is a favored choice. However, there is not yet a general theory for the limiting distribution of the likelihood ratio statistic in the presence of unknown variance. This paper derives the limiting distribution of the likelihood ratio statistic, which can be described by the supremum of a quadratic form of a Gaussian process. Further, the result implies that the distribution of the modified likelihood ratio statistic is well approximated by a chi-squared distribution. Simulation results show that the approximation has satisfactory precision for the cases considered. We also give a real-data example.

14.
This paper is concerned with the Bernstein estimator [Vitale, R.A. (1975), ‘A Bernstein Polynomial Approach to Density Function Estimation’, in Statistical Inference and Related Topics, ed. M.L. Puri, 2, New York: Academic Press, pp. 87–99] to estimate a density with support [0, 1]. One of the major contributions of this paper is an application of a multiplicative bias correction [Terrell, G.R., and Scott, D.W. (1980), ‘On Improving Convergence Rates for Nonnegative Kernel Density Estimators’, The Annals of Statistics, 8, 1160–1163], which was originally developed for the standard kernel estimator. Moreover, the renormalised multiplicative bias corrected Bernstein estimator is studied rigorously. The mean squared error (MSE) in the interior and mean integrated squared error of the resulting bias corrected Bernstein estimators as well as the additive bias corrected Bernstein estimator [Leblanc, A. (2010), ‘A Bias-reduced Approach to Density Estimation Using Bernstein Polynomials’, Journal of Nonparametric Statistics, 22, 459–475] are shown to be O(n^{−8/9}) when the underlying density has a fourth-order derivative, where n is the sample size. The condition under which the MSE near the boundary is O(n^{−8/9}) is also discussed. Finally, numerical studies based on both simulated and real data sets are presented.
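Vitale's (1975) estimator smooths the empirical CDF increments with Bernstein polynomial weights; a minimal sketch of the uncorrected estimator (the degree m and data are illustrative, and the bias corrections studied in the paper are omitted):

```python
import numpy as np
from math import comb

def bernstein_density(x, sample, m):
    """Vitale's Bernstein density estimator on [0, 1]:
    f_hat(x) = m * sum_k [F_n((k+1)/m) - F_n(k/m)] * C(m-1,k) x^k (1-x)^(m-1-k),
    where F_n is the empirical CDF of the sample."""
    s = np.sort(np.asarray(sample, float))
    n = len(s)
    Fn = lambda t: np.searchsorted(s, t, side="right") / n  # empirical CDF
    val = 0.0
    for k in range(m):
        w = Fn((k + 1) / m) - Fn(k / m)
        val = val + w * comb(m - 1, k) * x**k * (1 - x) ** (m - 1 - k)
    return m * val

rng = np.random.default_rng(3)
u = rng.uniform(size=500)
xs = np.linspace(0.0, 1.0, 1001)
fx = bernstein_density(xs, u, 10)
print(fx.mean())   # grid average approximates the integral, which is 1
```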

15.
We study the problem of approximating a stochastic process Y = {Y(t): t ∈ T} with known and continuous covariance function R on the basis of finitely many observations Y(t_1), …, Y(t_n). Depending on the knowledge about the mean function, we use different approximations Ŷ and measure their performance by the corresponding maximum mean squared error sup_{t∈T} E(Y(t) − Ŷ(t))². For a compact T ⊂ ℝ^p we prove sufficient conditions for the existence of optimal designs. For the class of covariance functions on T² = [0, 1]² which satisfy generalized Sacks/Ylvisaker regularity conditions of order zero or are of product type, we construct sequences of designs for which the proposed approximations perform asymptotically optimally.

16.
Motivated by several practical issues, we consider the problem of estimating the mean of a p-variate population (not necessarily normal) with unknown finite covariance. A quadratic loss function is used. We give a number of estimators (for the mean) with their loss functions admitting expansions to the order of p^{−1/2} as p → ∞. These estimators contain Stein's [Inadmissibility of the usual estimator for the mean of a multivariate normal population, in Proceedings of the Third Berkeley Symposium in Mathematical Statistics and Probability, Vol. 1, J. Neyman, ed., University of California Press, Berkeley, 1956, pp. 197–206] estimate as a particular case and also contain ‘multiple shrinkage’ estimates improving on Stein's estimate. Finally, we perform a simulation study to compare the different estimates.
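Stein's (1956) estimate and its positive-part refinement shrink the usual estimator toward the origin; a minimal sketch for a single p-variate observation with identity covariance (an assumption of this sketch; the paper's setting is more general):

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein shrinkage of a p-variate observation
    toward the origin (requires p >= 3; identity covariance scaled by
    sigma2 is an assumption of this sketch)."""
    x = np.asarray(x, float)
    p = x.size
    factor = 1.0 - (p - 2) * sigma2 / np.sum(x ** 2)
    return max(factor, 0.0) * x     # clip the shrinkage factor at zero

est = james_stein(np.array([2.0, 2.0, 2.0, 2.0]))
print(est)   # each coordinate shrunk from 2.0 toward 0
```

‘Multiple shrinkage’ versions replace the single origin by several data-chosen shrinkage targets.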

17.
Various regression models based on sib-pair data have been developed for mapping quantitative trait loci (QTL) in humans since the seminal paper published in 1972 by Haseman and Elston. Fulker and Cardon [D.W. Fulker, L.R. Cardon, A sib-pair approach to interval mapping of quantitative trait loci, Am. J. Hum. Genet. 54 (1994) 1092–1103] adapted the idea of interval mapping [E.S. Lander, D. Botstein, Mapping Mendelian factors underlying quantitative traits using RFLP linkage maps, Genetics 121 (1989) 185–199] to the Haseman–Elston regression model in order to increase the power of QTL mapping. However, in the interval mapping approach of Fulker and Cardon, the statistic for testing QTL effects does not obey the classical statistical theory and hence critical values of the test cannot be appropriately determined. In this article, we consider a new interval mapping approach based on a general sib-pair regression model. A modified Wald test is proposed for the testing of QTL effects. The asymptotic distribution of the modified Wald test statistic is provided and hence the critical values or the p-values of the test can be well determined. Simulation studies are carried out to verify the validity of the modified Wald test and to demonstrate its desirable power.

18.
Thermal, viscoelastic and mechanical properties of polyphenylene sulfide (PPS) were optimized as a function of extrusion and injection molding parameters. For this purpose, a design of experiments approach utilizing Taguchi's L27 (3^7) orthogonal array was used. The effect of the parameters on the desired properties was determined using the analysis of variance. Differential scanning calorimetry (DSC) tests were performed for the analysis of thermal properties such as the melting temperature (Tm) and melting enthalpy (ΔHm). Dynamic mechanical analysis (DMA) tests were performed for the analysis of viscoelastic properties such as the damping factor (tan δ) and glass transition temperature (Tg). Tensile tests were performed for the analysis of mechanical properties such as tensile strength and modulus. With the optimized process parameters, verification DSC, DMA and tensile tests were performed for the thermal, viscoelastic and mechanical properties, respectively. The Taguchi method showed that ‘barrel temperature’ was the most effective parameter, with ‘340°C’ as its most effective level. It was suggested that PPS can be reinforced for further improvement after the thermal, viscoelastic and mechanical properties have been optimized.

19.
The Hotelling's T2 control chart, a direct analogue of the univariate Shewhart chart, is perhaps the most commonly used tool in industry for the simultaneous monitoring of several quality characteristics. Recent studies have shown that using variable sampling size (VSS) schemes results in charts with more statistical power when detecting small to moderate shifts in the process mean vector. In this paper, we build a cost model of a VSS T2 control chart for economic and economic-statistical design using the general model of Lorenzen and Vance [The economic design of control charts: A unified approach, Technometrics 28 (1986), pp. 3–11]. We optimize this model using a genetic algorithm approach. We also study the effects of the costs and operating parameters on the VSS T2 parameters, and show, through an example, the advantage of economic design over statistical design for VSS T2 charts, and measure the economic advantage of VSS sampling versus fixed-sample-size sampling.
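The VSS idea itself is a two-level switching rule on the chart statistic; a minimal sketch (the warning limit and sample sizes are illustrative, and the economic optimization described in the paper is not shown):

```python
def vss_next_sample_size(t2, warning_limit, n_small, n_large):
    """Variable-sampling-size rule: keep taking the small sample while
    the current T2 is below the warning limit; switch to the large
    sample once T2 falls in the warning region."""
    return n_small if t2 < warning_limit else n_large

print(vss_next_sample_size(1.2, 5.0, 5, 20))   # → 5 (in-control region)
print(vss_next_sample_size(7.3, 5.0, 5, 20))   # → 20 (warning region)
```

The economic design then chooses the warning limit, control limit and the two sample sizes to minimize the expected cost per hour.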

20.