Similar Literature
1.
In this paper, we consider that the degradation of two performance characteristics (PCs) of a product can be modelled by stochastic processes linked jointly through copula functions, with a different stochastic process governing the degradation of each PC. Different heterogeneous and homogeneous models are presented, combining copula functions with the stochastic processes most commonly used in degradation analysis as marginal distributions. This is an important aspect to consider because the degradation of each PC may differ in nature. Since the joint distributions of the proposed models are complex, the parameters of interest are estimated via MCMC. A simulation study is performed to compare the heterogeneous and homogeneous models. In addition, the proposed models are applied to crack propagation data from two terminals of an electronic device, and some insights are provided about product reliability under heterogeneous models.
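As a rough illustration of the construction described above, the sketch below simulates two degradation paths whose increments follow different marginal distributions but are linked through a copula. It is only a sketch under assumed choices (gamma-process marginals, a Gaussian copula and arbitrary parameter values), not the authors' fitted model.

```python
# Minimal sketch: two copula-linked degradation paths (assumed parameters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_steps, rho = 50, 0.6                       # number of inspection times; copula correlation
shape1, scale1 = 1.2, 0.05                   # gamma-increment parameters for PC 1 (assumed)
shape2, scale2 = 0.8, 0.08                   # gamma-increment parameters for PC 2 (assumed)

# correlated uniforms from a Gaussian copula
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_steps)
u = stats.norm.cdf(z)

# transform each margin to gamma-distributed increments and accumulate over time
inc1 = stats.gamma.ppf(u[:, 0], a=shape1, scale=scale1)
inc2 = stats.gamma.ppf(u[:, 1], a=shape2, scale=scale2)
path1, path2 = np.cumsum(inc1), np.cumsum(inc2)

print(path1[-1], path2[-1])                  # terminal degradation level of each PC
```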

2.
Sequential multi-chart detection procedures for detecting changes in multichannel sensor systems are developed. In the case of complete information on pre-change and post-change distributions, the detection algorithm represents a likelihood ratio-based multichannel generalization of Page’s cumulative sum (CUSUM) test that is applied to general stochastic models that may include correlated and nonstationary observations. There are many potential application areas where it is necessary to consider multichannel generalizations and general statistical models. In this paper our main motivation for doing so is network security: rapid anomaly detection for an early detection of attacks in computer networks that lead to changes in network traffic. Moreover, this kind of application encourages the development of a nonparametric multichannel detection test that does not use exact pre-change (legitimate) and post-change (attack) traffic models. The proposed nonparametric method can be effectively applied to detect a wide variety of attacks such as denial-of-service attacks, worm-based attacks, port-scanning, and man-in-the-middle attacks. In addition, we propose a multichannel CUSUM procedure that is based on binary quantized data; this procedure turns out to be more efficient than the previous two algorithms in certain scenarios. All proposed detection algorithms are based on the change-point detection theory. They utilize the thresholding of test statistics to achieve a fixed rate of false alarms, while allowing changes in statistical models to be detected “as soon as possible”. Theoretical frameworks for the performance analysis of detection procedures, as well as results of Monte Carlo simulations for a Poisson example and results of detecting real flooding attacks, are presented.
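The sketch below gives a minimal version of the multichannel idea in the fully specified (parametric) case: one likelihood-ratio CUSUM per channel, with an alarm when the maximum over channels crosses a threshold. The Poisson pre-/post-change rates, the threshold and the simulated traffic are illustrative assumptions, not the paper's calibrated procedure.

```python
# Minimal sketch: likelihood-ratio CUSUM run in parallel over channels (assumed rates).
import numpy as np
from scipy import stats

def multichannel_cusum(x, rate0, rate1, threshold):
    """x: (n_obs, n_channels) counts; returns the first alarm time, or None."""
    n_obs, n_channels = x.shape
    w = np.zeros(n_channels)                       # CUSUM statistic per channel
    for t in range(n_obs):
        llr = stats.poisson.logpmf(x[t], rate1) - stats.poisson.logpmf(x[t], rate0)
        w = np.maximum(0.0, w + llr)               # Page's recursion, channel-wise
        if w.max() >= threshold:
            return t                               # alarm when any channel crosses the threshold
    return None

rng = np.random.default_rng(1)
traffic = rng.poisson(5.0, size=(100, 4))          # legitimate traffic in 4 channels
traffic[60:, 2] = rng.poisson(9.0, size=40)        # rate increase in channel 2 from t = 60
print(multichannel_cusum(traffic, rate0=5.0, rate1=9.0, threshold=8.0))
```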

3.
Self-regulating processes are stochastic processes whose local regularity, as measured by the pointwise Hölder exponent, is a function of amplitude. They seem to provide relevant models for various signals arising for example in geophysics or biomedicine. We propose in this work an estimator of the self-regulating function (that is, the function relating amplitude and Hölder regularity) of the self-regulating midpoint displacement process and study some of its properties. We prove that it is almost surely convergent and obtain a central limit theorem. Numerical simulations show that the estimator behaves well in practice.

4.
This article investigates the application of depth estimators to crack growth models in construction engineering. Many crack growth models are based on the Paris–Erdogan equation, which describes crack growth by a deterministic differential equation. By introducing a stochastic error term, crack growth can be modelled by a non-stationary autoregressive process with Lévy-type errors. A regression depth approach is presented to estimate the drift parameter of the process. We then prove the consistency of the estimator under quite general assumptions on the error distribution. By extending the depth notion to simplicial depth, it is possible to use a degenerate U-statistic and to establish tests for general hypotheses about the drift parameter. Since the statistic asymptotically has a transformed $\chi_1^2$ distribution, simple confidence intervals for the drift parameter can be obtained. In the second part, simulations of AR(1) processes with different error distributions are used to examine the quality of the constructed test. Finally, we apply the presented method to crack growth experiments. We compare two datasets from independent experiments under different conditions but with the same material, and show that the parameter estimates differ significantly in this case.
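For reference, the sketch below integrates the deterministic Paris–Erdogan law da/dN = C (ΔK)^m, with ΔK = Δσ√(πa), and perturbs each increment with a heavy-tailed error to mimic the kind of stochastic crack-growth path discussed above. The material constants, noise scale and t-distributed errors are illustrative assumptions; the article's depth-based estimation is not reproduced here.

```python
# Minimal sketch: Paris-Erdogan crack growth with a heavy-tailed perturbation (assumed constants).
import numpy as np

rng = np.random.default_rng(2)
C, m = 1e-11, 3.0            # Paris-law constants (assumed)
dsigma = 100.0               # stress range in MPa (assumed)
a = 1e-3                     # initial crack length in metres
lengths = [a]
for _ in range(20_000):      # load cycles
    dK = dsigma * np.sqrt(np.pi * a)          # stress-intensity range
    noise = 1e-9 * rng.standard_t(df=2)       # heavy-tailed (Levy-type) perturbation
    a = a + C * dK ** m + noise               # deterministic growth plus stochastic error
    lengths.append(a)
print(lengths[-1])           # crack length after 20,000 cycles
```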

5.
Goodness-of-fit evaluation of a parametric regression model is often done through hypothesis testing, where the fit of the model of interest is compared statistically to that obtained under a broader class of models. Nonparametric regression models are frequently used as the latter type of model because of their flexibility and wide applicability. To date, this type of test has generally been performed globally, by comparing the parametric and nonparametric fits over the whole range of the data. However, in some instances it may be of interest to test for deviations from the parametric model that are localized to a subset of the data. In this case, a global test will have low power and can therefore miss important local deviations. Alternatively, a naive testing approach that discards all observations outside the local interval will suffer from reduced sample size and potential overfitting. We therefore propose a new local goodness-of-fit test for parametric regression models that can be applied to a subset of the data while relying on global model fits, and propose a bootstrap-based approach for obtaining the distribution of the test statistic. We compare the new approach with the global and naive tests, both theoretically and through simulations, and illustrate its practical behaviour in an application. We find that the local test has a better ability to detect local deviations than the other two tests.
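A minimal sketch of such a localized comparison is given below: a global linear (parametric) fit and a global kernel (nonparametric) fit are compared only on a local window, and a residual bootstrap under the parametric null calibrates the statistic. The window, bandwidth, statistic and data-generating process are assumptions, not the authors' exact test.

```python
# Minimal sketch: residual-bootstrap test for a local lack of fit (assumed set-up).
import numpy as np

rng = np.random.default_rng(3)

def nw_smooth(x, y, grid, h=0.05):
    """Nadaraya-Watson estimate of E[y|x] on `grid` with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def local_stat(x, y, lo=0.4, hi=0.6):
    """Squared parametric-vs-nonparametric discrepancy inside the local window."""
    beta = np.polyfit(x, y, 1)                       # global parametric (linear) fit
    grid = np.linspace(lo, hi, 50)
    return np.sum((np.polyval(beta, grid) - nw_smooth(x, y, grid)) ** 2)

n = 200
x = rng.uniform(0, 1, n)
y = 1 + 2 * x + 0.3 * np.exp(-((x - 0.5) / 0.05) ** 2) + rng.normal(0, 0.2, n)   # local bump at 0.5

t_obs = local_stat(x, y)
beta0 = np.polyfit(x, y, 1)
resid = y - np.polyval(beta0, x)
t_boot = []
for _ in range(500):                                  # residual bootstrap under the parametric null
    y_star = np.polyval(beta0, x) + rng.choice(resid, n, replace=True)
    t_boot.append(local_stat(x, y_star))
print("bootstrap p-value:", np.mean(np.array(t_boot) >= t_obs))
```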

6.
This article proposes a test to determine whether “big data” nowcasting methods, which have become an important tool to many public and private institutions, are monotonically improving as new information becomes available. The test is the first to formalize existing evaluation procedures from the nowcasting literature. We place particular emphasis on models involving estimated factors, since factor-based methods are a leading case in the high-dimensional empirical nowcasting literature, although our test is still applicable to small-dimensional set-ups like bridge equations and MIDAS models. Our approach extends a recent methodology for testing many moment inequalities to the case of nowcast monotonicity testing, which allows the number of inequalities to grow with the sample size. We provide results showing the conditions under which both parameter estimation error and factor estimation error can be accommodated in this high-dimensional setting when using the pseudo out-of-sample approach. The finite sample performance of our test is illustrated using a wide range of Monte Carlo simulations, and we conclude with an empirical application of nowcasting U.S. real gross domestic product (GDP) growth and five GDP sub-components. Our test results confirm monotonicity for all but one sub-component (government spending), suggesting that the factor-augmented model may be misspecified for this GDP constituent. Supplementary materials for this article are available online.

7.
The standard location and scale unrestricted (or unified) skew-normal (SUN) family studied by Arellano-Valle and Genton [On fundamental skew distributions. J Multivar Anal. 2005;96:93–116] and Arellano-Valle and Azzalini [On the unification of families of skew-normal distributions. Scand J Stat. 2006;33:561–574] allows the modelling of data which is symmetrically or asymmetrically distributed. The family has a number of advantages for the analysis of stochastic processes such as Auto-Regressive Moving-Average (ARMA) models, including being closed under linear combinations and satisfying the consistency condition of Kolmogorov’s theorem, which guarantees the existence of such a SUN stochastic process. The family can be represented in a hierarchical form, which eases simulation. In addition, this representation facilitates an EM-type algorithm for estimating the model parameters. The performance and suitability of the proposed model are demonstrated on simulations and on two real data sets.

8.
Jiang Qingshan (蒋青嬗) et al., Statistical Research (《统计研究》), 2019, 36(6): 115–128
Endogeneity is a common econometric problem, and ignoring it leads to biased and inconsistent estimators. Part of the existing literature has studied the estimation of stochastic frontier models with endogeneity, but these approaches require finding suitable instrumental variables for the endogenous regressors, and in practice suitable instruments are often difficult to obtain. This paper studies the estimation of endogenous stochastic frontier models when suitable instrumental variables are hard to find: the parameters are estimated by combining a copula approach with maximum simulated likelihood. In addition, a new point estimator of technical inefficiency is constructed; it exploits the additional information in the endogenous regressors and is generally more efficient than the corresponding JLMS estimator. Numerical simulations show that, compared with existing studies, the proposed method achieves higher estimation accuracy.

9.
A frequent question raised by practitioners carrying out unit root tests is whether these tests are sensitive to the presence of heteroscedasticity. Theoretically this is not the case for a wide range of heteroscedastic models. However, for some limiting cases, such as degenerate and integrated heteroscedastic processes, it is not obvious whether there will be an effect. In this paper we report a Monte Carlo study analyzing the implications of various types of heteroscedasticity for three types of unit root tests: the usual Dickey-Fuller test, Phillips' (1987) semi-parametric test and, finally, a Dickey-Fuller-type test using White's (1980) heteroscedasticity-consistent standard errors. The sorts of heteroscedasticity we examine are the GARCH model of Bollerslev (1986) and the Exponential ARCH model of Nelson (1991). In particular, we call attention to situations where the conditional variances exhibit a high degree of persistence, as is frequently observed for returns of financial time series, and the case where the variance process for the first class of models in fact becomes degenerate.
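A single replication of this kind of Monte Carlo experiment might look like the sketch below: a unit-root series driven by highly persistent GARCH(1,1) innovations, followed by an augmented Dickey–Fuller test. The use of statsmodels and the chosen parameter values are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: ADF test on a random walk with persistent GARCH(1,1) errors.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
n = 1000
omega, alpha, beta = 0.05, 0.09, 0.90          # high persistence: alpha + beta = 0.99
eps = np.zeros(n)
sigma2 = omega / (1 - alpha - beta)            # start at the unconditional variance
for t in range(1, n):
    sigma2 = omega + alpha * eps[t - 1] ** 2 + beta * sigma2
    eps[t] = np.sqrt(sigma2) * rng.standard_normal()

y = np.cumsum(eps)                             # unit-root process with GARCH errors
stat, pvalue, *_ = adfuller(y)
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
```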

10.
There is a wide variety of stochastic ordering problems where K groups (typically ordered with respect to time) are observed along with a (continuous) response. The interest of the study may lie in finding the change-point group, i.e. the group where an inversion of trend of the variable under study is observed. A change point is not merely a maximum (or a minimum) of the time-series function; a further requirement is that the trend of the time series is monotonically increasing before that point and monotonically decreasing afterwards. A suitable solution can be provided within a conditional approach, i.e. by considering a suitable nonparametric combination of dependent tests for simple stochastic ordering problems. The proposed procedure is very flexible and can be extended to trend and/or repeated-measures problems. Comparisons through simulations and examples with the well-known Mack and Wolfe test for umbrella alternatives and with Page’s test for trend problems with correlated data are investigated.

11.
In statistics, Fourier series have been used extensively in areas such as time series and stochastic processes. These series, however, have to a large degree been neglected with regard to their use in statistical distribution theory. This omission appears quite striking when one considers that, after the elementary functions, the trigonometric functions are the most important functions in applied mathematics. In this paper a procedure is developed for representing the distribution functions of finite-range random variables as Fourier series whose coefficients are easily expressible (using Chebyshev polynomials) in terms of the moments of the distribution. This method allows the evaluation of probabilities for a wide class of distributions. It is applied to the …
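As a rough illustration of the moment-based idea, the sketch below reconstructs a density on a finite range from a Chebyshev-weighted series whose coefficients E[T_k(X)] are computed purely from the raw moments of X. The Beta(2,5) example rescaled to [-1, 1], the truncation order, and this particular Chebyshev representation are assumptions; the paper's exact series may differ.

```python
# Minimal sketch: density of a finite-range variable from its raw moments via
# a Chebyshev-weighted series (assumed distribution and truncation order).
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy import stats

dist = stats.beta(2, 5, loc=-1, scale=2)        # X = 2*U - 1 with U ~ Beta(2, 5)
K = 16                                          # truncation order (assumed)

# c_k = E[T_k(X)], obtained purely from the raw moments of X
moments = np.array([1.0] + [dist.moment(m) for m in range(1, K + 1)])
c = np.empty(K + 1)
for k in range(K + 1):
    unit = np.zeros(k + 1)
    unit[k] = 1.0
    power_coefs = C.cheb2poly(unit)             # T_k written in the monomial basis
    c[k] = power_coefs @ moments[: k + 1]

def density(x):
    """Truncated series f(x) ~ [c_0 + 2*sum_k c_k T_k(x)] / (pi*sqrt(1-x^2))."""
    series = c[0] + 2.0 * sum(c[k] * C.chebval(x, np.eye(K + 1)[k]) for k in range(1, K + 1))
    return series / (np.pi * np.sqrt(1.0 - x ** 2))

x = np.linspace(-0.8, 0.8, 5)
print(np.round(density(x), 3))                  # series reconstruction
print(np.round(dist.pdf(x), 3))                 # exact density, for comparison
```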

12.
The present work addresses the question of how sampling algorithms for commonly applied copula models can be adapted to account for quasi-random numbers. Besides sampling methods such as the conditional distribution method (based on a one-to-one transformation), it is shown that typically faster sampling methods (based on stochastic representations) can also be used to improve upon classical Monte Carlo methods when pseudo-random number generators are replaced by quasi-random number generators. This opens the door to quasi-random numbers for models well beyond independent margins or the multivariate normal distribution. Detailed examples (in the context of finance and insurance), illustrations and simulations are given, and software has been developed and provided in the R packages copula and qrng.
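The paper's software is provided in the R packages copula and qrng; the sketch below shows the same idea with different (assumed) tooling, replacing pseudo-random uniforms by scrambled Sobol' points in the stochastic representation of a Gaussian copula and using the quasi-random sample to estimate a copula probability.

```python
# Minimal sketch: quasi-random (Sobol') sampling of a Gaussian copula (assumed rho).
import numpy as np
from scipy import stats
from scipy.stats import qmc

rho = 0.7
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

# low-discrepancy uniforms -> standard normals -> correlated normals -> copula scale
sobol = qmc.Sobol(d=2, scramble=True, seed=0)
u = sobol.random(2 ** 12)                    # 4096 quasi-random points in [0, 1)^2
z = stats.norm.ppf(u) @ L.T                  # stochastic representation of the Gaussian copula
v = stats.norm.cdf(z)                        # dependent uniforms with copula C_rho

# quasi-Monte Carlo estimate of P(V1 <= 0.5, V2 <= 0.5) = C_rho(0.5, 0.5)
print(np.mean((v[:, 0] <= 0.5) & (v[:, 1] <= 0.5)))
```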

13.
Although GARCH and stochastic volatility models are both widely used in the financial industry, there is often little justification for why one is preferred over the other in practice. Most of the relevant literature focuses on comparing the fit of various volatility models to a particular data set, which can be inconclusive due to the statistical similarities of the two processes. With ever-growing interest in the financial industry in the risk of extreme price movements, it is natural to consider the selection between the two models from an extreme value perspective. By studying the dependence structure of the extreme values of a given series, we are able to clearly distinguish GARCH and stochastic volatility models and to test statistically which one better captures the observed tail behaviour. We illustrate the performance of the method using some stock market returns and find that different volatility models may give a better fit to the upper or lower tails.
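The sketch below illustrates, informally, the kind of extremal-dependence feature such a comparison exploits: the probability that an exceedance of a high threshold is immediately followed by another exceedance, computed for a simulated GARCH(1,1) series and for i.i.d. returns with the same spread. It is a descriptive diagnostic under assumed parameters, not the authors' formal test.

```python
# Minimal sketch: lag-1 extremal clustering of GARCH versus i.i.d. returns (assumed parameters).
import numpy as np

rng = np.random.default_rng(5)

def garch_returns(n, omega=0.05, alpha=0.10, beta=0.85):
    r, sigma2 = np.zeros(n), omega / (1 - alpha - beta)
    for t in range(1, n):
        sigma2 = omega + alpha * r[t - 1] ** 2 + beta * sigma2
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
    return r

def lag1_exceedance_clustering(x, q=0.99):
    """Empirical P(X_{t+1} > u | X_t > u) for the q-quantile threshold u."""
    u = np.quantile(x, q)
    exceed = x > u
    return np.mean(exceed[1:][exceed[:-1]])

r_garch = garch_returns(100_000)
r_iid = rng.normal(0, r_garch.std(), 100_000)
print("GARCH:", lag1_exceedance_clustering(r_garch))   # clustered extremes
print("i.i.d.:", lag1_exceedance_clustering(r_iid))    # roughly 1 - q
```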

14.
Given that the Euclidean distance between the parameter estimates of autoregressive expansions of autoregressive moving-average models can be used to classify stationary time series into groups, a hypothesis test is proposed to determine whether two stationary series in a particular group have significantly different generating processes. Based on this test, a new clustering algorithm is also proposed. The results of Monte Carlo simulations are given.
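A minimal sketch of the distance idea: approximate each series by a truncated autoregressive expansion, then cluster the fitted coefficient vectors by their Euclidean distances. The AR order, the linkage method and the simulated series are assumptions; neither the proposed test nor the article's clustering algorithm is reproduced here.

```python
# Minimal sketch: cluster series by distances between fitted AR-expansion coefficients.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)

def simulate_arma11(n, phi, theta):
    """Simulate a stationary, invertible ARMA(1,1) series."""
    e = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]
    return y

# two groups of series with different generating processes
series = [simulate_arma11(500, 0.8, 0.3) for _ in range(3)] + \
         [simulate_arma11(500, -0.4, 0.5) for _ in range(3)]

p = 8                                              # truncation order of the AR expansion (assumed)
coefs = np.array([AutoReg(s, lags=p).fit().params[1:] for s in series])
labels = fcluster(linkage(pdist(coefs), method="average"), t=2, criterion="maxclust")
print(labels)                                      # the two groups should separate
```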

15.
Risks are usually represented and measured by volatility–covolatility matrices. Wishart processes are models for a dynamic analysis of multivariate risk and describe the evolution of stochastic volatility–covolatility matrices, constrained to be symmetric positive definite. The Wishart autoregressive (WAR) process is the multivariate extension of the Cox–Ingersoll–Ross (CIR) process introduced for scalar stochastic volatility. Like a CIR process, it allows closed-form solutions for a number of financial problems, such as the term structure of T-bonds and corporate bonds, derivative pricing in a multivariate stochastic volatility model, and the structural model for credit risk. Moreover, the Wishart dynamics are very flexible and are serious competitors for less structural multivariate ARCH models.
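The sketch below simulates a small WAR path through its standard stochastic representation: the volatility matrix is the sum of outer products of K latent Gaussian VAR(1) vectors, which keeps it symmetric positive semi-definite by construction. Dimensions and parameter values are illustrative assumptions.

```python
# Minimal sketch: Wishart autoregressive (WAR) path via sums of outer products.
import numpy as np

rng = np.random.default_rng(7)
n_dim, K, T = 2, 5, 200                            # matrix dimension, degrees of freedom, horizon
M = np.array([[0.7, 0.1], [0.0, 0.6]])             # autoregressive matrix of the latent VARs (assumed)
Sigma_chol = np.linalg.cholesky(np.array([[0.04, 0.01], [0.01, 0.03]]))

x = rng.standard_normal((K, n_dim)) @ Sigma_chol.T  # K latent Gaussian factor vectors
W_path = []
for t in range(T):
    x = x @ M.T + rng.standard_normal((K, n_dim)) @ Sigma_chol.T   # latent VAR(1) step
    W_path.append(x.T @ x)                                          # W_t = sum_k x_{k,t} x_{k,t}'

print(np.round(W_path[-1], 4))                     # a symmetric volatility-covolatility matrix
print(np.linalg.eigvalsh(W_path[-1]))              # eigenvalues are non-negative by construction
```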

16.
This article derives the large-sample distributions of Lagrange multiplier (LM) tests for parameter instability against several alternatives of interest in the context of cointegrated regression models. The fully modified estimator of Phillips and Hansen is extended to cover general models with stochastic and deterministic trends. The test statistics considered include the SupF test of Quandt, as well as the LM tests of Nyblom and of Nabeya and Tanaka. It is found that the asymptotic distributions depend on the nature of the regressor processes—that is, if the regressors are stochastic or deterministic trends. The distributions are noticeably different from the distributions when the data are weakly dependent. It is also found that the lack of cointegration is a special case of the alternative hypothesis considered (an unstable intercept), so the tests proposed here may also be viewed as a test of the null of cointegration against the alternative of no cointegration. The tests are applied to three data sets—an aggregate consumption function, a present value model of stock prices and dividends, and the term structure of interest rates.

17.
The composed error of a stochastic frontier (SF) model consists of two random variables, and the identification of the model relies heavily on the distribution assumptions for each of these variables. While the literature has put much effort into applying various SF models to a wide range of empirical problems, little has been done to test the distribution assumptions of these two variables. In this article, by exploiting the specification structures of the SF model, we propose a centered-residuals-based method of moments which can be easily and flexibly applied to testing the distribution assumptions on both of the random variables and to estimating the model parameters. A Monte Carlo simulation is conducted to assess the performance of the proposed method. We also provide two empirical examples to demonstrate the use of the proposed estimator and test using real data.
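For intuition about the composed error, the sketch below simulates y = b0 + b1*x + v - u with symmetric noise v and one-sided inefficiency u, and shows the negative skewness of the centered OLS residuals that moment-based procedures of this kind exploit. The half-normal inefficiency and all parameter values are assumptions, and the proposed test itself is not implemented.

```python
# Minimal sketch: composed error of a stochastic frontier and centered-residual skewness.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n = 5000
x = rng.standard_normal(n)
v = rng.normal(0.0, 0.3, n)                        # symmetric noise component
u = np.abs(rng.normal(0.0, 0.5, n))                # one-sided inefficiency (half-normal, assumed)
y = 1.0 + 0.8 * x - u + v

X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
centered = resid - resid.mean()
print("skewness of centered residuals:", stats.skew(centered))   # negative when u is present
```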

18.
In panel data models and other regressions with unobserved effects, fixed effects estimation is often paired with cluster-robust variance estimation (CRVE) to account for heteroscedasticity and unmodeled dependence among the errors. Although asymptotically consistent, CRVE can be biased downward when the number of clusters is small, leading to hypothesis tests with rejection rates that are too high. More accurate tests can be constructed using bias-reduced linearization (BRL), which corrects the CRVE based on a working model, in conjunction with a Satterthwaite approximation for t-tests. We propose a generalization of BRL that can be applied in models with arbitrary sets of fixed effects, where the original BRL method is undefined, and describe how to apply the method when the regression is estimated after absorbing the fixed effects. We also propose a small-sample test for multiple-parameter hypotheses, which generalizes the Satterthwaite approximation for t-tests. In simulations covering a wide range of scenarios, we find that the conventional cluster-robust Wald test can severely over-reject, while the proposed small-sample test maintains Type I error close to nominal levels. The proposed methods are implemented in an R package called clubSandwich. This article has online supplementary materials.
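As background, the sketch below computes the basic (CR0) cluster-robust sandwich variance for OLS, i.e. the quantity that bias-reduced linearization subsequently corrects. The data-generating process and cluster structure are illustrative assumptions; the paper's own implementation is the R package clubSandwich.

```python
# Minimal sketch: CR0 cluster-robust sandwich variance for OLS (assumed DGP).
import numpy as np

rng = np.random.default_rng(8)
n_clusters, cluster_size = 12, 20
g = np.repeat(np.arange(n_clusters), cluster_size)                     # cluster ids
x = rng.standard_normal(g.size)
u = rng.standard_normal(n_clusters)[g] + rng.standard_normal(g.size)   # cluster-correlated errors
y = 1.0 + 0.5 * x + u

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for c in range(n_clusters):                        # sum of within-cluster score outer products
    Xc, ec = X[g == c], resid[g == c]
    s = Xc.T @ ec
    meat += np.outer(s, s)
V_cr = bread @ meat @ bread                        # CR0 sandwich variance

print("beta:", np.round(beta, 3))
print("cluster-robust SEs:", np.round(np.sqrt(np.diag(V_cr)), 3))
```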

19.
In this paper we introduce a wide class of integer-valued stochastic processes that allows one to take into account simultaneously several relevant characteristics observed in count data, namely zero inflation, overdispersion and conditional heteroscedasticity. This class includes, in particular, the compound Poisson, the zero-inflated Poisson and the zero-inflated negative binomial INGARCH models recently proposed in the literature. The main probabilistic analysis of this class of processes is developed here. Specifically, first- and second-order stationarity conditions are derived, the autocorrelation function is deduced and strict stationarity is established in a large subclass. For a particular model we also analyse the existence of higher-order moments and derive the explicit form of the first four cumulants, as well as the skewness and kurtosis.
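A minimal sketch of one member of this class, a zero-inflated Poisson INGARCH(1,1) process, is given below: the conditional mean follows the usual INGARCH recursion and each count is zero-inflated with a fixed probability. All parameter values are illustrative assumptions.

```python
# Minimal sketch: simulation of a zero-inflated Poisson INGARCH(1,1) process (assumed parameters).
import numpy as np

rng = np.random.default_rng(9)
omega, alpha, beta, pi_zero = 1.0, 0.3, 0.5, 0.2    # INGARCH and zero-inflation parameters
n = 500

lam = np.empty(n)
x = np.zeros(n, dtype=int)
lam[0] = omega / (1 - alpha - beta)                 # start at the stationary mean level
for t in range(1, n):
    lam[t] = omega + alpha * x[t - 1] + beta * lam[t - 1]      # INGARCH recursion
    x[t] = 0 if rng.random() < pi_zero else rng.poisson(lam[t])  # zero-inflated Poisson draw

print("sample mean:", x.mean())
print("sample variance:", x.var())                  # overdispersed relative to the mean
print("proportion of zeros:", np.mean(x == 0))
```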

20.
Attempts to better the performance of classical growth curve functions have met with limited success. Construction industry projects highlighted the need to improve deterministic models rather than the stochastic methodologies, which are nearly always based on the former. New concepts (changed for the first time since 1825) are formulated and used to generate multi-component deterministic models. Six highly diverse case studies, of which three are presented, were used to test one model and its autocorrelation form. Trial forecast standard errors showed a drop of 50% when compared to classical and stochastic models. Among the by-products of this work are uses of normalisation, scaling and a simple statistical procedure to estimate linear constants. A different consequence of the new concepts threw light on the problem of predicting a consumption process in marketing. The major implications of this research show the importance of the new concepts and the diversification of the fields of study on deterministic modelling, as well as the need to reappraise the functional interface with many of the underlying processes of growth.
