Similar Articles
20 similar articles found (search time: 46 ms)
1.
In this paper, we develop a unified approach to modeling and simulation of a nonhomogeneous Poisson process whose rate function exhibits cyclic behavior as well as a long-term evolutionary trend. The approach can be applied whether the oscillation frequency of the cyclic behavior is known or unknown. To model such a process, we use an exponential rate function whose exponent includes both a polynomial and a trigonometric component. Maximum likelihood estimates of the unknown continuous parameters of this function are obtained numerically, and the degree of the polynomial component is determined by a likelihood ratio test. If the oscillation frequency is unknown, then an initial estimate of this parameter is obtained via spectral analysis of the observed series of events; initial estimates of the remaining trigonometric (respectively, polynomial) parameters are computed from a standard maximum likelihood (respectively, moment-matching) procedure for an exponential-trigonometric (respectively, exponential-polynomial) rate function. To simulate the fitted process by the method of thinning, we present (a) a procedure for constructing an optimal piecewise linear majorizing rate function; and (b) a "piecewise thinning" simulation procedure based on the inverse transform method for generating events from a piecewise linear rate function. These procedures are applied to the storm-arrival process observed at an offshore drilling site.
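As a hedged illustration of the simulation step described above, the sketch below simulates a nonhomogeneous Poisson process by thinning, assuming an exponential rate function with a linear trend and a single harmonic. The paper constructs an optimal piecewise-linear majorizing rate; this sketch uses a crude constant bound instead, and all parameter values are illustrative rather than taken from the storm-arrival data.

```python
# Minimal thinning sketch for an NHPP with an exponential-polynomial-trigonometric
# rate function. A constant majorizing rate is used instead of the paper's
# optimal piecewise-linear majorizer; parameters below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def rate(t, a0=-1.0, a1=0.002, b=0.8, c=0.3, omega=2 * np.pi / 365.0):
    """Exponential rate function with a linear trend and one harmonic."""
    return np.exp(a0 + a1 * t + b * np.cos(omega * t) + c * np.sin(omega * t))

def thin_nhpp(horizon, lam_max):
    """Simulate event times on [0, horizon] by thinning a homogeneous process."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)        # next candidate arrival
        if t > horizon:
            return np.array(times)
        if rng.uniform() < rate(t) / lam_max:       # accept with prob lambda(t)/lam_max
            times.append(t)

horizon = 3 * 365.0
grid = np.linspace(0.0, horizon, 10_000)
lam_max = float(rate(grid).max()) * 1.01            # crude constant majorizing rate
events = thin_nhpp(horizon, lam_max)
print(f"simulated {events.size} events over [0, {horizon:.0f}]")
```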

2.
Assuming that all components of a normal mean vector are simultaneously non-negative or non-positive, we consider a multivariate two-sided test of whether the normal mean vector equals zero. Since the likelihood ratio test is accompanied by theoretical and computational complications, we discuss two approximations to it. One is based on a conservative critical value determined by a certain inequality; the other is constructed from the approximation to the likelihood ratio test proposed by Tang et al. (1989). We compare the likelihood ratio test and the two approximations through numerical examples concerning critical values and the power of the test.

3.
This paper examines modeling and inference questions for experiments in which different subsets of a set of k possibly dependent components are tested in r different environments. In each environment, the failure times of the components on test are assumed to be governed by a particular type of multivariate exponential (MVE) distribution. For any given component tested in several environments, it is assumed that its marginal failure rate varies from one environment to another via a change of scale between the environments, resulting in a joint MVE model which links in a natural way the applicable MVE distributions describing component behavior in each fixed environment. This study thus extends the work of Proschan and Sullo (1976) to multiple environments and the work of Kvam and Samaniego (1993) to dependent data. The problem of estimating model parameters via the method of maximum likelihood is examined in detail. First, necessary and sufficient conditions for the identifiability of model parameters are established. We then treat the derivation of the MLE via a numerically augmented application of the EM algorithm. The feasibility of the estimation method is demonstrated in an example in which the likelihood ratio test of the hypothesis of equal component failure rates within any given environment is carried out.

4.
This paper investigates several techniques to discriminate between two multivariate stationary signals. The methods considered include Gaussian likelihood ratio tests for variance equality, a chi-squared time-domain test, and a spectral-based test. The latter two tests assess equality of the multivariate autocovariance functions of the two signals over many different lags. The Gaussian likelihood ratio test is perhaps best viewed as principal component analysis (PCA) without the dimension-reduction aspect; it can be modified to consider covariance features other than variances via dimension augmentation tactics. A simulation study shows how one can draw inappropriate conclusions from PCA tests, even when dimension augmentation techniques are used to incorporate non-zero-lag autocovariances into the analysis. The discrimination methods are first discussed, and a simulation study then illustrates their properties; in this pursuit, calculations are needed to identify several multivariate time series models with specific autocovariance properties. To demonstrate the applicability of the methods, nine US and Canadian weather stations from three distinct regions are clustered. Here, spectral clustering identified the regions perfectly, the chi-squared test performed only marginally, and the PCA/likelihood ratio method did not perform well.
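For concreteness, the sketch below computes the i.i.d. Gaussian likelihood ratio statistic for equality of two covariance matrices; this is an illustration under an independence assumption, not the authors' full procedure, and the abstract's caution is precisely that applying such a test to dependent signals while ignoring lagged autocovariances can mislead. The data, dimension and sample sizes are made up for the example.

```python
# -2 log likelihood ratio for H0: Cov(x) = Cov(y) under an i.i.d. Gaussian model,
# with a chi-squared reference distribution. Illustrative simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p = 3
x = rng.multivariate_normal(np.zeros(p), np.eye(p), size=200)
y = rng.multivariate_normal(np.zeros(p), 1.3 * np.eye(p), size=200)

def lr_cov_equality(x, y):
    """Likelihood ratio test of equal covariance matrices for two Gaussian samples."""
    n1, n2 = len(x), len(y)
    s1 = np.cov(x, rowvar=False, bias=True)          # MLE covariance of sample 1
    s2 = np.cov(y, rowvar=False, bias=True)          # MLE covariance of sample 2
    sp = (n1 * s1 + n2 * s2) / (n1 + n2)             # pooled MLE under H0
    stat = ((n1 + n2) * np.linalg.slogdet(sp)[1]
            - n1 * np.linalg.slogdet(s1)[1]
            - n2 * np.linalg.slogdet(s2)[1])
    df = x.shape[1] * (x.shape[1] + 1) // 2          # parameters constrained by H0
    return stat, stats.chi2.sf(stat, df)

stat, pval = lr_cov_equality(x, y)
print(f"-2 log LR = {stat:.2f}, chi-squared p-value = {pval:.4f}")
```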

5.
We investigate the properties of several statistical tests for comparing treatment groups with respect to multivariate survival data, based on the marginal analysis approach introduced by Wei, Lin and Weissfeld [Regression analysis of multivariate incomplete failure time data by modelling marginal distributions, JASA, vol. 84, pp. 1065–1073]. We consider two types of directional tests, based on a constrained maximization and on linear combinations of the unconstrained maximizer of the working likelihood function, as well as the omnibus test arising from the same working likelihood. The directional tests are members of a larger class of tests, from which an asymptotically optimal test can be found. We compare the asymptotic powers of the tests under general contiguous alternatives for a variety of settings, and also consider the choice of the number of survival times to include in the multivariate outcome. We illustrate the results with simulations and with the results from a clinical trial examining recurring opportunistic infections in persons with HIV.

6.
This paper deals with testing for non-linearity in a regression model with one possibly non-linear component that is estimated non-parametrically using smoothing splines. We propose two new variance–covariance-based tests for detecting non-linearity that apply a likelihood ratio hypothesis testing approach. The first test is for the inclusion of a possibly non-linear component and the second is for linearity of a possibly non-linear component. The tests are based on a stochastic model in state space form given by Wahba (J. Roy. Statist. Soc. Ser. B 40 (1978) 364), Wecker and Ansley (J. Amer. Statist. Assoc. 78 (1983) 81) and de Jong and Mazzi (Modeling and smoothing unequally spaced sequence data, University of York and University of British Columbia, unpublished paper), for which smoothing splines provide an optimal estimate. Pitrun (A smoothing spline approach to non-linear inference for time series, Department of Econometrics and Business Statistics, Monash University, unpublished Ph.D. thesis) derived the variance–covariance structure of this model, which allows the use of a marginal likelihood approach. This leads naturally to marginal-likelihood-based likelihood ratio tests for non-linearity. Small-sample properties of the new tests have been investigated via Monte Carlo studies.

7.
Multivariate model validation is a complex decision-making problem involving the comparison of multiple correlated quantities, based upon the available information and prior knowledge. This paper presents a Bayesian risk-based decision method for validation assessment of multivariate predictive models under uncertainty. A generalized likelihood ratio is derived as a quantitative validation metric based on Bayes' theorem and a Gaussian assumption on the errors between validation data and model predictions. The multivariate model is then assessed by comparing the likelihood ratio with a Bayesian decision threshold, a function of the decision costs and the prior probability of each hypothesis. The probability density function of the likelihood ratio is constructed using the statistics of multiple response quantities and Monte Carlo simulation. The proposed methodology is implemented in the validation of a transient heat conduction model, using a multivariate data set from experiments. The Bayesian methodology provides a quantitative approach to facilitate rational decisions in multivariate model assessment under uncertainty.
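A minimal sketch of the decision rule follows, assuming Gaussian prediction errors with a known covariance under the "model valid" hypothesis and an inflated covariance under the alternative (an illustrative choice, not the paper's exact error model); the threshold follows the standard Bayes-risk construction from priors and decision costs, and all numbers are made up.

```python
# Compare a Gaussian log-likelihood ratio of validation errors with a Bayes
# decision threshold built from hypothesis priors and decision costs.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
errors = rng.normal(0.0, 0.1, size=(30, 2))          # validation data minus model prediction

sigma = 0.1 ** 2 * np.eye(2)                          # assumed error covariance if the model is valid
log_lr = (multivariate_normal(np.zeros(2), sigma).logpdf(errors).sum()
          - multivariate_normal(np.zeros(2), 9 * sigma).logpdf(errors).sum())

# Bayes decision threshold: cost of accepting an invalid model vs. rejecting a valid one.
p_valid, p_invalid = 0.5, 0.5
cost_accept_invalid, cost_reject_valid = 10.0, 1.0
log_threshold = np.log((p_invalid * cost_accept_invalid) / (p_valid * cost_reject_valid))

decision = "accept model" if log_lr >= log_threshold else "reject model"
print(f"log likelihood ratio = {log_lr:.2f}, log threshold = {log_threshold:.2f}: {decision}")
```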

8.
The Inverse Gaussian (IG) distribution is commonly used to model and examine right-skewed data with positive support. When applying the IG model, it is critical to have efficient goodness-of-fit tests. In this article, we propose a new test statistic for examining IG goodness-of-fit based on approximating parametric likelihood ratios. The parametric likelihood ratio methodology is well known to yield powerful tests. In the nonparametric context, the classical empirical likelihood (EL) ratio method is often applied to efficiently approximate properties of parametric likelihoods, using an approach based on substituting empirical distribution functions for their population counterparts. The optimal parametric likelihood ratio approach, however, is based on density functions. We develop and analyze a density-based EL ratio approach to test the IG model fit. We show that the proposed test improves on the entropy-based goodness-of-fit test for the IG distribution presented by Mudholkar and Tian (2002). Theoretical support is obtained by proving consistency of the new test and an asymptotic proposition regarding the null distribution of the proposed test statistic. Monte Carlo simulations confirm the power of the proposed method, and real data examples demonstrate the applicability of the density-based EL ratio goodness-of-fit test for an IG assumption in practice.
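The density-based EL ratio statistic of the paper is not reproduced here. As a simple, hedged illustration of checking an IG assumption in practice, the sketch below runs a plain parametric-bootstrap Kolmogorov–Smirnov goodness-of-fit test with scipy's invgauss; the data are simulated and the bootstrap size is arbitrary.

```python
# Parametric-bootstrap KS goodness-of-fit test for an inverse Gaussian model.
# This is a generic GOF illustration, not the paper's density-based EL ratio test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = stats.invgauss.rvs(mu=0.5, scale=2.0, size=100, random_state=rng)  # illustrative data

def ks_ig(sample):
    """KS distance between the sample and an IG fit with location fixed at zero."""
    shape, loc, scale = stats.invgauss.fit(sample, floc=0)
    return stats.kstest(sample, "invgauss", args=(shape, loc, scale)).statistic, (shape, scale)

obs_stat, (shape_hat, scale_hat) = ks_ig(data)

# Resample from the fitted IG and refit each time so the null distribution
# reflects the fact that the parameters were estimated.
boot = []
for _ in range(200):
    resample = stats.invgauss.rvs(mu=shape_hat, scale=scale_hat,
                                  size=len(data), random_state=rng)
    boot.append(ks_ig(resample)[0])
pval = np.mean(np.array(boot) >= obs_stat)
print(f"KS statistic = {obs_stat:.4f}, bootstrap p-value = {pval:.3f}")
```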

9.
Random effects regression mixture models are a way to classify longitudinal data (or trajectories) of possibly varying lengths. The mixture structure of the traditional random effects regression mixture model arises through the distribution of the random regression coefficients, which is assumed to be a mixture of multivariate normals. An extension of this standard model is presented that accounts for various levels of heterogeneity among the trajectories, depending on their assumed error structure. A standard likelihood ratio test is presented for testing this error structure assumption. Full details of an expectation-conditional maximization algorithm for maximum likelihood estimation are also presented. The model is used to analyze data from an infant habituation experiment, where it is desirable to assess whether infants comprise different populations in terms of their habituation time.

10.
We consider testing problems for the structural parameters of the multivariate linear functional relationship model. We treat the likelihood ratio test statistics and test statistics based on the asymptotic distributions of the maximum likelihood estimators, and derive their asymptotic distributions under the respective null hypotheses. A simulation study is conducted to evaluate how far the asymptotic results can be trusted when the sample size is rather small.

11.
The class of inflated beta regression models generalizes that of beta regressions [S.L.P. Ferrari and F. Cribari-Neto, Beta regression for modelling rates and proportions, J. Appl. Stat. 31 (2004), pp. 799–815] by incorporating a discrete component that allows practitioners to model rates and proportions with observations that equal an interval limit. For instance, one can model responses that assume values in (0, 1]. The likelihood ratio test tends to be quite oversized (liberal, anticonservative) in inflated beta regressions estimated with a small number of observations; indeed, our numerical results show that its null rejection rate can be almost twice the nominal level. It is thus important to develop alternative testing strategies. This paper develops small-sample adjustments to the likelihood ratio and signed likelihood ratio test statistics in inflated beta regression models. The adjustments do not require orthogonality between the parameters of interest and the nuisance parameters and are fairly simple, since they only require first- and second-order log-likelihood cumulants. Simulation results show that the modified likelihood ratio tests deliver much more accurate inference in small samples. An empirical application is presented and discussed.

12.
We consider a likelihood ratio test of independence for large two-way contingency tables having both structural (non-random) and sampling (random) zeros in many cells. This problem cannot be handled by standard likelihood ratio tests. One way to bypass it is to remove the structural zeros from the table and implement a test on the remaining cells that incorporates the randomness in the sampling zeros; the resulting test is a test of quasi-independence of the two categorical variables. This test is based only on the positive counts in the contingency table and is valid when there is at least one sampling (random) zero. The proposed likelihood ratio test is an alternative to the commonly used ad hoc procedure of converting the zero cells to positive ones by adding a small constant. One practical advantage of our procedure is that there is no need to know whether a zero cell is a structural zero or a sampling zero. We model the positive counts using a truncated multinomial distribution; in fact, we have two truncated multinomial distributions, one for the null hypothesis of independence and the other for the unrestricted parameter space. We use Monte Carlo methods to obtain the maximum likelihood estimators of the parameters and the p-value of the proposed test, and bootstrap methods to obtain the sampling distribution of the likelihood ratio test statistic. We discuss many examples and empirically compare the power function of the likelihood ratio test with those of some well-known test statistics.
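The truncated-multinomial likelihood and its Monte Carlo estimation are not reproduced here. The sketch below only illustrates the bootstrap-calibration idea for a likelihood ratio (G) statistic of independence computed over the positive cells of a sparse table, where the usual chi-squared approximation is doubtful; the table is purely illustrative and the sketch does not distinguish structural from sampling zeros.

```python
# Bootstrap calibration of the likelihood ratio (G) statistic for independence
# in a sparse two-way table, summing only over positive cells.
import numpy as np

rng = np.random.default_rng(4)
table = np.array([[12, 3, 0, 1],
                  [ 5, 0, 2, 0],
                  [ 0, 7, 4, 2]])                     # sparse table with several zero cells

def g_statistic(obs):
    """G = 2 * sum O * log(O / E) over positive cells, E from the independence fit."""
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
    mask = obs > 0
    return 2.0 * np.sum(obs[mask] * np.log(obs[mask] / expected[mask]))

obs_g = g_statistic(table)

# Parametric bootstrap under independence: resample tables from the fitted cell probabilities.
n = table.sum()
probs = (np.outer(table.sum(axis=1), table.sum(axis=0)) / n ** 2).ravel()
boot = []
for _ in range(2000):
    sim = rng.multinomial(n, probs).reshape(table.shape)
    boot.append(g_statistic(sim))
pval = np.mean(np.array(boot) >= obs_g)
print(f"G = {obs_g:.2f}, bootstrap p-value = {pval:.3f}")
```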

13.
In this paper, we consider a new mixture of varying coefficient models, in which each mixture component follows a varying coefficient model and the mixing proportions and dispersion parameters are also allowed to be unknown smooth functions. We systematically study the identifiability, estimation and inference for the new mixture model. The proposed new mixture model is rather general, encompassing many mixture models as its special cases such as mixtures of linear regression models, mixtures of generalized linear models, mixtures of partially linear models and mixtures of generalized additive models, some of which are new mixture models by themselves and have not been investigated before. The new mixture of varying coefficient model is shown to be identifiable under mild conditions. We develop a local likelihood procedure and a modified expectation–maximization algorithm for the estimation of the unknown non-parametric functions. Asymptotic normality is established for the proposed estimator. A generalized likelihood ratio test is further developed for testing whether some of the unknown functions are constants. We derive the asymptotic distribution of the proposed generalized likelihood ratio test statistics and prove that the Wilks phenomenon holds. The proposed methodology is illustrated by Monte Carlo simulations and an analysis of a CO2-GDP data set.

14.
Marginal hazard models for multivariate failure time data have been studied extensively in the recent literature. However, standard hypothesis test statistics based on the likelihood method are not directly appropriate for this kind of model. In this paper, extensions of the three commonly used likelihood-based test statistics are discussed. Generalized Wald, generalized score and generalized likelihood ratio tests for hazard ratio parameters in a marginal hazard model for multivariate failure time data are proposed and their asymptotic distributions examined. The finite-sample properties of these statistics are studied through simulations. The proposed method is applied to data from the Busselton Population Health Surveys.

15.
In this paper, we develop modified versions of the likelihood ratio test for multivariate heteroskedastic errors-in-variables regression models. The error terms are allowed to follow a multivariate distribution in the elliptical class, which includes the normal distribution as a special case. We derive the Skovgaard-adjusted likelihood ratio statistics, which follow a chi-squared distribution to a high degree of accuracy. We conduct a simulation study and show that the proposed tests display superior finite-sample behaviour compared with the standard likelihood ratio test. We illustrate the usefulness of our results in applied settings using a data set from the WHO MONICA Project on cardiovascular disease.

16.
We derive the influence function of the likelihood ratio test statistic for a multivariate normal sample. The derived influence function does not depend on the influence functions of the parameters under the null hypothesis, so the empirical influence function can be obtained directly using only the maximum likelihood estimators under the null hypothesis. Since the derived formula is general, it can be applied to influence analysis in many statistical testing problems.

17.
In any study comparing the survival experience of one or more populations, one must choose not only an appropriate class of tests but also an appropriate weight function. Since the optimal choice depends on the true shape of the hazard ratio, one often cannot obtain the best results for a specific dataset. For the univariate case, several methods have been proposed to overcome this problem; nowadays, however, most datasets of interest contain multivariate observations. In this work we propose a multivariate version of a method based on multiple constrained censored empirical likelihood, where the constraints are formulated as linear functionals of the cumulative hazard functions. By considering the conditional hazards, we take the correlation between the components into account, with the goal of obtaining a test that exhibits high power irrespective of the shape of the hazard ratio under the alternative hypothesis.

18.
In this paper, we study a k-step-stress accelerated life test under Type-I censoring. The lifetimes of the items follow a multivariate exponential distribution, and a cumulative exposure model is considered. We derive the maximum likelihood estimators of the model parameters and establish their asymptotic properties. The problem of choosing the optimal time is addressed using both V-optimality and D-optimality criteria. Finally, some numerical studies are presented to illustrate the proposed procedures.

19.
Tests on multivariate means that are hypothesized to lie in a specified direction have received attention from both theoretical and applied points of view. One of the most common procedures used to test this cone alternative is the likelihood ratio test (LRT) assuming a multivariate normal model for the data. However, the resulting test for an ordered alternative is biased in that the only usable critical values are bounds on the null distribution. The present paper provides empirical evidence that bootstrapping the null distribution of the likelihood ratio statistic results in a bootstrap test (BT) with comparable power properties, without the additional burden of assuming multivariate normality. Additionally, tests based on the LRT statistic can reject the null hypothesis in favor of the alternative even though the true means are far from the alternative region; the BT has similar properties for normal and nonnormal data. This anomalous behavior is due to the formulation of the null hypothesis, and a possible remedy is to reformulate the null to be the complement of the alternative hypothesis. We discuss properties of a BT for the modified set of hypotheses (MBT) based on a simulation study. The resulting test is conservative in general and in some specific cases has power estimates comparable to those of existing methods. The BT has higher sensitivity but relatively lower specificity, whereas the MBT has higher specificity but relatively lower sensitivity.
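As a hedged sketch of the bootstrap-test idea, the code below calibrates a simple one-sided statistic for H0: mu = 0 against mu >= 0 by resampling from the mean-centred data, so that the resampling distribution satisfies the null. The statistic assumes an identity covariance for brevity (it is then the squared norm of the projection of the sample mean onto the non-negative orthant); the paper's BT bootstraps the full likelihood ratio statistic, which is not reproduced here, and the data are simulated.

```python
# Bootstrap calibration of a one-sided test statistic for H0: mu = 0 vs mu >= 0,
# assuming identity covariance for simplicity. Illustrative simulated data.
import numpy as np

rng = np.random.default_rng(5)
x = rng.multivariate_normal([0.1, 0.0, 0.2], np.eye(3), size=50)

def one_sided_stat(sample):
    """n times the squared norm of the positive part of the sample mean."""
    m = sample.mean(axis=0)
    return len(sample) * np.sum(np.clip(m, 0.0, None) ** 2)

obs = one_sided_stat(x)

# Bootstrap the null distribution: resample rows of the data centred at its mean,
# so that the resampling distribution has mean zero (i.e. satisfies H0).
centered = x - x.mean(axis=0)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(x), size=len(x))
    boot.append(one_sided_stat(centered[idx]))
pval = np.mean(np.array(boot) >= obs)
print(f"statistic = {obs:.3f}, bootstrap p-value = {pval:.3f}")
```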

20.
The class of Multivariate BiLinear GARCH (MBL-GARCH) models is proposed and its statistical properties are investigated. The model can be regarded as a generalization to a multivariate setting of the univariate BL-GARCH model proposed by Storti and Vitale (Stat Methods Appl 12:19–40, 2003a; Comput Stat 18:387–400, 2003b). It is shown how MBL-GARCH models can account for asymmetric effects in both conditional variances and correlations. An EM algorithm for the maximum likelihood estimation of the model parameters is derived. Furthermore, in order to test the appropriateness of the conditional variance and covariance specifications, a set of robust conditional moment test statistics is defined. Finally, the effectiveness of MBL-GARCH models in a risk management setting is assessed by means of an application to the estimation of the optimal hedge ratio in futures hedging.
