Similar Documents
18 similar documents found.
1.
In this article, we propose a general method for testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form. These tests are based on a Taylor expansion of the nonlinear model around a given point in the sample space. We study the performance of our tests in a Monte Carlo experiment and compare them with the most widely used linear test. Our tests appear to be well sized and have reasonably good power properties.
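The general construction can be illustrated with a small sketch in the spirit of such Taylor-expansion-based tests: approximate the unknown nonlinear conditional mean by a low-order polynomial in the lags of both series, then use an F-statistic to test whether every term involving the lags of x can be dropped. The simulated data, lag order, polynomial degree and choice of cross terms below are hypothetical illustrations, not the authors' exact test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: x Granger-causes y through a nonlinear (quadratic) term.
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] ** 2 + rng.normal()

# One lag of each series, second-order polynomial expansion (illustrative choices).
Y = y[1:]
ylag = y[:-1]
xlag = x[:-1]
const = np.ones_like(Y)

# Restricted model: terms not involving x.  Unrestricted: add all terms with lags of x.
Z_restricted = np.column_stack([const, ylag, ylag ** 2])
Z_xterms = np.column_stack([xlag, xlag ** 2, xlag * ylag])
Z_full = np.column_stack([Z_restricted, Z_xterms])

def ssr(Z, Y):
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ beta
    return resid @ resid

ssr_r, ssr_u = ssr(Z_restricted, Y), ssr(Z_full, Y)
q = Z_xterms.shape[1]                      # number of zero restrictions tested
df = len(Y) - Z_full.shape[1]              # residual degrees of freedom
F = ((ssr_r - ssr_u) / q) / (ssr_u / df)
pval = stats.f.sf(F, q, df)
print(f"F = {F:.2f}, p-value = {pval:.4f}")  # a small p-value rejects noncausality
```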

2.
By examining how the concentration coefficient of a membership function is determined, this article applies fuzzy mathematics to Bayesian statistics and thereby develops a new method of hypothesis testing.

3.
In the 1960s, Benoit Mandelbrot and Eugene Fama argued strongly in favor of the stable Paretian distribution as a model for the unconditional distribution of asset returns. Although a substantial body of subsequent empirical studies supported this position, the stable Paretian model plays a minor role in current empirical work.

While in the economics and finance literature stable distributions are virtually exclusively associated with stable Paretian distributions, in this paper we adopt a more fundamental view and extend the concept of stability to a variety of probabilistic schemes. These schemes give rise to alternative stable distributions, which we compare empirically using S&P 500 stock return data. In this comparison the Weibull distribution, associated with both the nonrandom-minimum and geometric-random-summation schemes, dominates the other stable distributions considered, including the stable Paretian model.
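A rough illustration of this kind of comparison: fit several candidate laws to a return series by maximum likelihood and rank them by AIC. In the sketch below, scipy's double Weibull stands in for the Weibull-type laws and levy_stable for the stable Paretian model, and a synthetic heavy-tailed series is a placeholder for the S&P 500 data; the distributions, criterion and data are illustrative assumptions, not the paper's exact summation schemes, and fitting levy_stable can be slow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic placeholder for daily returns (Student-t supplies heavy tails).
returns = 0.01 * rng.standard_t(df=4, size=1000)

candidates = {
    "normal": stats.norm,
    "double Weibull": stats.dweibull,        # Weibull-type law on the real line
    "stable Paretian": stats.levy_stable,    # note: maximum likelihood fit is slow
}

for name, dist in candidates.items():
    params = dist.fit(returns)
    loglik = np.sum(dist.logpdf(returns, *params))
    aic = 2 * len(params) - 2 * loglik
    print(f"{name:16s} log-lik = {loglik:9.1f}  AIC = {aic:9.1f}")
```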

4.
Recently, the field of multiple hypothesis testing has experienced a great expansion, largely because of new methods developed in genomics that allow scientists to process thousands of hypothesis tests simultaneously. The frequentist approach to this problem uses various testing error measures that allow the Type I error rate to be controlled at a desired level. Alternatively, in this article, a Bayesian hierarchical model based on mixture distributions and an empirical Bayes approach are proposed in order to produce a list of rejected hypotheses that will be declared significant and interesting for subsequent, more detailed analysis. In particular, we develop a straightforward implementation of a Gibbs sampling scheme in which all the conditional posterior distributions are explicit. The results are compared with the frequentist False Discovery Rate (FDR) methodology. Simulation examples show that our model improves on the FDR procedure in the sense that it reduces the percentage of false negatives while keeping an acceptable percentage of false positives.
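A minimal sketch of the kind of mixture model for which all full conditionals are explicit: each test statistic z_i is either null, N(0,1), or non-null, N(mu,1), with unknown mixing weight p and effect mu. The priors, the emission model, the synthetic data and the 0.9 reporting threshold below are hypothetical simplifications of the authors' hierarchical model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic z-statistics: 90% true nulls, 10% shifted alternatives.
n, true_mu = 2000, 2.5
is_alt = rng.random(n) < 0.10
z = rng.normal(loc=np.where(is_alt, true_mu, 0.0), scale=1.0)

tau0_sq = 100.0            # vague prior variance for mu
n_iter, burn = 2000, 500
p, mu = 0.5, 1.0           # initial values
incl_prob = np.zeros(n)

for it in range(n_iter):
    # 1. Indicators gamma_i | rest ~ Bernoulli(w_i)
    num = p * stats.norm.pdf(z, loc=mu)
    den = num + (1 - p) * stats.norm.pdf(z, loc=0.0)
    w = num / den
    gamma = rng.random(n) < w

    # 2. Mixing weight p | gamma ~ Beta(1 + #alternatives, 1 + #nulls)
    n1 = gamma.sum()
    p = rng.beta(1 + n1, 1 + n - n1)

    # 3. Effect size mu | rest ~ Normal (conjugate update for the flagged z's)
    post_var = 1.0 / (n1 + 1.0 / tau0_sq)
    post_mean = post_var * z[gamma].sum()
    mu = rng.normal(post_mean, np.sqrt(post_var))

    if it >= burn:
        incl_prob += w        # Rao-Blackwellized posterior inclusion probabilities

incl_prob /= (n_iter - burn)
rejected = incl_prob > 0.9    # report hypotheses with high posterior inclusion probability
print("flagged:", rejected.sum(), " true alternatives among them:", is_alt[rejected].sum())
```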

5.
We propose two test statistics for use in inverse regression problems Y = Kθ + ε, where K is a given linear operator which cannot be continuously inverted, so that only noisy, indirect observations Y of the function θ are available. Both test statistics have a counterpart in classical hypothesis testing, where they are called the order selection test and the data-driven Neyman smooth test. We also introduce two model selection criteria which extend the classical Akaike information criterion and Bayes information criterion to inverse regression problems. In a simulation study we show that the inverse order selection and Neyman smooth tests outperform their direct counterparts in many cases. The theory is motivated by data arising in confocal fluorescence microscopy, where images are observed at successive times with blurring, modelled as convolution, and stochastic error. The aim is then to improve the signal-to-noise ratio by averaging over the distinct images. In this context it is relevant to decide whether the images are still equal, or have changed through outside influences such as movement of the object table.

6.
An overview of hypothesis testing for the common mean of independent normal distributions is given. The case of two populations is studied in detail, and a number of different types of tests are examined. Among them are a test based on the maximum of the two available t-tests, Fisher's combined test, a test based on the Graybill–Deal estimator, an approximation to the likelihood ratio test, and some tests derived from Bayesian considerations with improper priors together with intuitive considerations. Based on some theoretical findings and, mostly, on a Monte Carlo study, the conclusion is that the Bayes-intuitive tests are for the most part superior and can be recommended. When the variances of the populations are close, the approximate likelihood ratio test does best.
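Two of the simpler ingredients are easy to sketch: the Graybill–Deal precision-weighted estimate of the common mean, and a test of H0: mu = mu0 based on the maximum of the two one-sample t statistics, whose null p-value under independence follows from the product of the two folded-t CDFs. The data and the two-sided handling below are illustrative assumptions; the Bayes-intuitive tests of the paper are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Two independent samples assumed to share the same mean (here mu = 1),
# but with different variances.
x1 = rng.normal(loc=1.0, scale=1.0, size=15)
x2 = rng.normal(loc=1.0, scale=3.0, size=25)
mu0 = 0.0                      # hypothesised common mean under H0

n1, n2 = len(x1), len(x2)
m1, m2 = x1.mean(), x2.mean()
v1, v2 = x1.var(ddof=1), x2.var(ddof=1)

# Graybill-Deal estimator: precision-weighted combination of the two sample means.
w1, w2 = n1 / v1, n2 / v2
mu_gd = (w1 * m1 + w2 * m2) / (w1 + w2)

# Max-t test of H0: mu = mu0, built from the two one-sample t statistics.
t1 = (m1 - mu0) / np.sqrt(v1 / n1)
t2 = (m2 - mu0) / np.sqrt(v2 / n2)
tmax = max(abs(t1), abs(t2))

# Under H0 and independence, |T1| and |T2| are independent folded-t variables,
# so the two-sided p-value is 1 - P(|T1| <= tmax) * P(|T2| <= tmax).
def folded_cdf(c, df):
    return 2 * stats.t.cdf(c, df) - 1

pval = 1 - folded_cdf(tmax, n1 - 1) * folded_cdf(tmax, n2 - 1)
print(f"Graybill-Deal estimate = {mu_gd:.3f}, max|t| = {tmax:.2f}, p = {pval:.4f}")
```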

7.
This article focuses on the minimal upper bound of the ruin probability for a discrete-time risk model with a Markov chain interest rate and stochastic investment returns. The interest rate of the bond market is assumed to follow a stationary Markov chain, and the returns of the stock market can be negative. The article presents two methods for minimizing the upper bound of the ruin probability: one relies on recursive equations for finite-time ruin probabilities and an inductive approach, the other on a martingale approach. Numerical examples show that the martingale approach is better than the inductive one.
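One plausible concrete version of such a discrete-time surplus process can be sketched as follows: a two-state Markov chain drives the bond rate, the stock return can be negative, claims arrive each period, and the finite-time ruin probability is estimated by Monte Carlo. Every parameter value, the 50/50 asset split and the exponential claim law are hypothetical, and the sketch estimates the ruin probability itself rather than the paper's minimized upper bounds.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-state Markov chain for the bond interest rate.
rates = np.array([0.02, 0.06])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                             # transition matrix

def ruin_within(horizon, u0=10.0, premium=1.2, n_paths=10000):
    """Monte Carlo estimate of the finite-time ruin probability."""
    ruined = 0
    for _ in range(n_paths):
        u, state = u0, 0
        for _t in range(horizon):
            state = rng.choice(2, p=P[state])          # bond-rate regime
            stock_ret = rng.normal(0.05, 0.20)         # stock return, can be negative
            growth = 1 + 0.5 * rates[state] + 0.5 * stock_ret   # 50/50 asset split
            claim = rng.exponential(1.0)
            u = u * growth + premium - claim
            if u < 0:
                ruined += 1
                break
    return ruined / n_paths

for T in (5, 10, 20):
    print(f"P(ruin by time {T:2d}) ≈ {ruin_within(T):.4f}")
```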

8.
The problem of testing the equality of the noncentrality parameters of two noncentral t-distributions with identical degrees of freedom is considered, which arises from the comparison of two signal-to-noise ratios for simple linear regression models. A test procedure is derived that is guaranteed to maintain the Type I error rate while being only minimally conservative, and comparisons are made with several other approaches to this problem based on variance-stabilizing transformations. The new procedure derived in this article is shown to have good properties and will be useful for practitioners.

9.
In this article, the multivariate linear regression model is studied under the assumptions that the error term follows an elliptically contoured distribution and that the observations on the response variables exhibit a monotone missing pattern. The article is primarily concerned with estimation of the model parameters, as well as with the development of a likelihood ratio test for examining linear constraints on the regression coefficients. An illustrative example is presented to explain the results.

10.
In this paper we propose residual-based tests for the null hypothesis of cointegration with a structural break against the alternative of no cointegration. A Lagrange multiplier (LM) test is proposed and its limiting distribution is obtained for the case in which the timing of the structural break is known. The test statistic is then extended to deal with a structural break of unknown timing; the extended statistic is a plug-in version of the known-timing statistic, with the true break point replaced by its estimate. We show the limiting properties of the test statistic under the null as well as under the alternative. Critical values for the tests are calculated by simulation. Finite-sample simulations show that the empirical size of the test is close to the nominal size unless the regression error is very persistent, and that the test rejects the null when no cointegrating relationship with a structural break is present. We provide empirical examples based on the present-value model, the term structure model, and the money-output relationship.
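The plug-in construction can be sketched in outline: estimate the break date by minimizing the sum of squared residuals of a break-augmented cointegrating regression, then compute a KPSS/Shin-type LM statistic from the partial sums of the residuals at that break. The sketch below is a simplified static-regression version; the trimming range, the long-run variance bandwidth and the regression form are assumptions rather than the authors' exact construction, and critical values would still have to come from simulation as in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic cointegrated pair with a level and slope shift halfway through the sample.
T = 300
x = np.cumsum(rng.normal(size=T))                     # I(1) regressor
d = (np.arange(T) >= T // 2).astype(float)            # true break dummy
y = 1.0 + 0.5 * d + 1.0 * x + 0.4 * d * x + rng.normal(scale=0.5, size=T)

def ols_resid(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def lm_stat(resid, bandwidth):
    """KPSS/Shin-type statistic: scaled partial sums over a Bartlett long-run variance."""
    T = len(resid)
    S = np.cumsum(resid)
    lrv = resid @ resid / T
    for j in range(1, bandwidth + 1):
        w = 1 - j / (bandwidth + 1)
        lrv += 2 * w * (resid[j:] @ resid[:-j]) / T
    return np.sum(S ** 2) / (T ** 2 * lrv)

# Step 1: estimate the break date by minimizing the SSR over a trimmed candidate range.
candidates = list(range(int(0.15 * T), int(0.85 * T)))
ssrs = []
for tb in candidates:
    dd = (np.arange(T) >= tb).astype(float)
    X = np.column_stack([np.ones(T), dd, x, dd * x])
    r = ols_resid(y, X)
    ssrs.append(r @ r)
tb_hat = candidates[int(np.argmin(ssrs))]

# Step 2: plug the estimated break into the regression and compute the LM statistic.
dd = (np.arange(T) >= tb_hat).astype(float)
X = np.column_stack([np.ones(T), dd, x, dd * x])
resid = ols_resid(y, X)
stat = lm_stat(resid, bandwidth=int(4 * (T / 100) ** 0.25))
print(f"estimated break = {tb_hat}, LM statistic = {stat:.4f}")
# Small values support the null of cointegration with a break; compare with
# critical values obtained by simulation.
```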

11.
This paper proposes a new statistical treatment of electrical insulation degradation. Insulation used under different circumstances is considered to degrade at different rates according to its stress conditions, and the cross-linked polyethylene (XLPE) insulated cables inspected by major Japanese electric companies clearly exhibit such phenomena. By assuming that each inspected specimen is sampled from one of several clustered groups, a mixed degradation model can be constructed. Since the degradation of insulation under common circumstances is considered to follow a Weibull distribution, a mixture model and a Weibull power law can be combined; this is called the mixture Weibull power law model. Applying maximum likelihood estimation of the newly proposed model to Japanese 22 and 33 kV insulation-class cables, the cables are clustered into a number of groups using the AIC and the generalized likelihood ratio test, and the reliability of the cables at specified years is assessed.
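The clustering step can be illustrated by fitting one- and two-component Weibull models to synthetic breakdown data by direct maximization of the mixture likelihood and comparing AIC values. The stress (power-law) part of the authors' model is omitted here, and the data, starting values and optimizer are hypothetical choices for the sketch.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)

# Synthetic breakdown data from two latent groups degrading at different rates.
data = np.concatenate([
    stats.weibull_min.rvs(2.0, scale=10.0, size=150, random_state=rng),
    stats.weibull_min.rvs(2.5, scale=30.0, size=100, random_state=rng),
])

def nll_mixture(theta, x):
    """Negative log-likelihood of a 2-component Weibull mixture (unconstrained params)."""
    logit_p, lk1, ls1, lk2, ls2 = theta
    p = 1.0 / (1.0 + np.exp(-logit_p))
    pdf = (p * stats.weibull_min.pdf(x, np.exp(lk1), scale=np.exp(ls1))
           + (1 - p) * stats.weibull_min.pdf(x, np.exp(lk2), scale=np.exp(ls2)))
    return -np.sum(np.log(pdf + 1e-300))

# One-component fit (shape and scale only, location fixed at zero).
k1, _, s1 = stats.weibull_min.fit(data, floc=0)
ll1 = np.sum(stats.weibull_min.logpdf(data, k1, scale=s1))
aic1 = 2 * 2 - 2 * ll1

# Two-component fit by direct optimization from a rough starting point.
start = np.array([0.0, np.log(2.0), np.log(np.quantile(data, 0.25)),
                  np.log(2.0), np.log(np.quantile(data, 0.75))])
res = optimize.minimize(nll_mixture, start, args=(data,), method="Nelder-Mead",
                        options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
aic2 = 2 * 5 + 2 * res.fun

print(f"AIC one component: {aic1:.1f}   AIC two components: {aic2:.1f}")
# The smaller AIC suggests how many latent degradation groups to retain.
```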

12.
We incorporate a random effect into a multivariate discrete proportional hazards model and propose an efficient semiparametric Bayesian estimation method. By introducing a prior process for the parameters of the baseline hazards, we obtain a nonparametric estimate of the baseline hazard function. Using a state-space representation, we derive a dynamic model of the baseline hazard function and propose an efficient block sampler for the Markov chain Monte Carlo method. A numerical example using kidney patient data is given.

13.
14.
A right-censored version of a U-statistic with a kernel of degree m ≥ 1 is introduced via a mean-preserving reweighting scheme, which is also applicable when the dependence between failure times and the censoring variable is explainable through observable covariates. Its asymptotic normality and an expression for its standard error are obtained through a martingale argument. We study the performance of the U-statistic by simulation and compare it with the theoretical results. A doubly robust version of this reweighted U-statistic is also introduced to gain efficiency under correct models while preserving consistency in the face of model misspecification. Using a Kendall's tau kernel, we obtain a test statistic for testing homogeneity of failure times across multiple failure causes in a multiple decrement model. The performance of the proposed test is studied through simulations, and its usefulness is illustrated by applying it to a real data set on graft-versus-host disease.
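The reweighting idea can be illustrated with a generic inverse-probability-of-censoring-weighted (IPCW) U-statistic: estimate the censoring survival function by Kaplan-Meier, then weight each fully observed pair of a Kendall-type kernel by the inverse product of those survival probabilities. This is a standard covariate-free IPCW recipe on hypothetical data, not the authors' exact kernel, covariate-dependent weights or doubly robust version.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic right-censored data: failure time T*, covariate Z, censoring time C.
n = 300
z = rng.normal(size=n)
t_star = rng.exponential(np.exp(-0.7 * z))        # larger z -> shorter failure time
c = rng.exponential(2.0, size=n)                  # independent censoring
t = np.minimum(t_star, c)
delta = (t_star <= c).astype(float)               # 1 = failure observed

def km_censoring_survival(t, delta):
    """Kaplan-Meier estimate G(t-) of the censoring survival, evaluated at each t_i."""
    order = np.argsort(t)
    ts, ds = t[order], delta[order]
    g_at = np.empty(len(t))
    surv = 1.0
    for idx, (ti, di) in enumerate(zip(ts, ds)):
        g_at[order[idx]] = surv                   # survival just before t_i
        at_risk = len(ts) - idx
        if di == 0:                               # a censoring "event"
            surv *= 1 - 1.0 / at_risk
    return g_at

g = km_censoring_survival(t, delta)

# Kendall-type kernel: concordance between failure time and covariate.
num, den = 0.0, 0.0
for i in range(n):
    for j in range(i + 1, n):
        if delta[i] == 1 and delta[j] == 1:       # only fully observed pairs contribute
            w = 1.0 / (g[i] * g[j])               # IPCW weight restores the full-data mean
            num += w * np.sign((t[i] - t[j]) * (z[i] - z[j]))
            den += w
tau_hat = num / den
print(f"IPCW Kendall-type estimate: {tau_hat:.3f}")  # negative here: larger z, earlier failure
```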

15.
Primary and metastatic brain tumour patients are treated with surgery, radiation therapy and chemotherapy. Such treatments often result in short- and long-term symptoms that impact cognitive, emotional and physical function. Therefore, understanding the transition of symptom burden over time is important for guiding treatment and follow-up of brain tumour patients with symptom-specific interventions. We describe the use of a hidden Markov model with person-specific random effects for the temporal pattern of symptom burden. Clinically relevant covariates are also incorporated in the analysis through the use of generalized linear models.

16.
In this paper, we develop procedures to test hypotheses concerning transition probability matrices arising from certain nonhomogeneous Markov processes. It is assumed that the data consist of sample paths, some of which are observed until a certain terminal state while the others are censored. Problems of this type arise in the context of multi-state models relevant to health-related quality of life (HRQoL) and competing risks. The test statistic is based on the estimator of the associated intensity matrix. We show that the asymptotic null distribution of the proposed statistic is Gaussian, and demonstrate how the procedure can be applied to HRQoL studies and a competing risks model using real data sets. Finally, we establish that the test statistic for HRQoL has the greatest local asymptotic power against a sequence of proportional hazards alternatives converging to the null hypothesis.

17.
Two extensive computer-simulated tables of percentage points of the asymptotic test statistics for testing lognormal or Weibull populations, as proposed by Pereira (1978), are discussed, with special attention given to small-sample cases. Sixteen of the most commonly used symmetrical probability points are reported: 0.001, 0.005, 0.01, 0.02, 0.025, 0.05, 0.10, 0.15, 0.85, 0.90, 0.95, 0.975, 0.98, 0.99, 0.995 and 0.999. These simulated results can be used to test hypotheses for these two particular populations and are adequate when a normal approximation is used.

18.
A practical problem with large scale survey data is the potential for overdispersion. Overdispersion occurs when the data display more variability than is predicted by the variance–mean relationship for the assumed sampling model. This paper describes a simple strategy for detecting and adjusting for overdispersion in large scale survey data. The method is primarily motivated by data on the relationship between social class and educational attainment obtained from a 2% sample from the 1991 census of the population of Great Britain. Overdispersion can be detected by first grouping the data into a number of strata of approximately equal size. Under the assumption that the observations are independent and there is no variability in the parameter of interest, there is a direct relationship between the nominal standard errors and the empirical or sample standard deviation of the parameter estimates obtained from each of the separate strata. With the 2% sample from the British census data, quite a discernible departure from this relationship was found, indicating overdispersion. After allowing for overdispersion, improved and more realistic measures of precision of the strength of the social class–education associations were obtained.
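The described check is easy to reproduce in outline: split the data into strata, estimate the quantity of interest in each stratum (here a hypothetical log odds ratio), and compare the empirical standard deviation of the stratum estimates with the average nominal standard error; their squared ratio gives a crude overdispersion factor for inflating standard errors. The binary example data and the 20-strata choice below are illustrative assumptions, not the census analysis itself.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical survey-style data: a binary exposure (say, social class group)
# and a binary outcome (say, attainment).  The exposure-outcome association
# varies across clusters, making the data overdispersed relative to what
# simple binomial sampling would predict.
n, n_strata = 40000, 20
exposure = rng.random(n) < 0.5
slope = 1.0 + rng.normal(0.0, 0.3, size=n_strata)         # cluster-varying log odds ratio
slope_per_obs = np.repeat(slope, n // n_strata)
p_outcome = 1 / (1 + np.exp(-(-0.5 + slope_per_obs * exposure)))
outcome = rng.random(n) < p_outcome

def log_odds_ratio(y, x):
    """Log odds ratio and its nominal standard error from the 2x2 table."""
    a = np.sum(y & x); b = np.sum(~y & x)
    c = np.sum(y & ~x); d = np.sum(~y & ~x)
    return np.log(a * d / (b * c)), np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

# Estimate the association within each of the (approximately) equal-sized strata.
estimates, nominal_se = [], []
for idx in np.array_split(np.arange(n), n_strata):
    lor, se = log_odds_ratio(outcome[idx], exposure[idx])
    estimates.append(lor)
    nominal_se.append(se)

empirical_sd = np.std(estimates, ddof=1)
mean_nominal_se = np.mean(nominal_se)
phi = (empirical_sd / mean_nominal_se) ** 2               # crude overdispersion factor
print(f"empirical SD = {empirical_sd:.3f}, mean nominal SE = {mean_nominal_se:.3f}, "
      f"overdispersion factor ≈ {phi:.2f}")
# Nominal standard errors from the pooled analysis can then be inflated by sqrt(phi).
```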
