Similar documents (20 results)
1.
A data-driven approach for modeling volatility dynamics and co-movements in financial markets is introduced. Special emphasis is given to multivariate conditionally heteroscedastic factor models in which the volatilities of the latent factors depend on their past values, and the parameters are driven by regime switching in a latent state variable. We propose an innovative indirect estimation method based on the generalized EM algorithm principle combined with a structured variational approach that can handle models with large cross-sectional dimensions. Extensive Monte Carlo simulations and preliminary experiments with financial data show promising results.

2.
We update a previous approach to the estimation of the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach. This gives a flexible and easily implemented alternative to the previous method, which was based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to data collected by the Central Registry of Drug Abuse. The parameter estimates obtained are compared with the well-known Jolly–Seber estimates based on single capture methods.

3.
The multinomial logit model (MNL) is one of the most frequently used statistical models in marketing applications. It allows one to relate an unordered categorical response variable, for example representing the choice of a brand, to a vector of covariates such as the price of the brand or variables characterising the consumer. In its classical form, all covariates enter in strictly parametric, linear form into the utility function of the MNL model. In this paper, we introduce semiparametric extensions, where smooth effects of continuous covariates are modelled by penalised splines. A mixed model representation of these penalised splines is employed to obtain estimates of the corresponding smoothing parameters, leading to a fully automated estimation procedure. To validate semiparametric models against parametric models, we utilise different scoring rules as well as predicted market share and compare parametric and semiparametric approaches for a number of brand choice data sets.
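The core of the classical MNL model described above is the softmax mapping from linear utilities to choice probabilities. A minimal sketch (not the paper's semiparametric estimator; the brand names, covariates, and coefficients are hypothetical):

```python
import math

def mnl_probs(covariates, betas):
    """Choice probabilities of a classical (strictly parametric) MNL model.

    covariates: dict mapping alternative -> feature vector
    betas: coefficient vector shared across alternatives
    """
    utilities = {alt: sum(b * x for b, x in zip(betas, xs))
                 for alt, xs in covariates.items()}
    m = max(utilities.values())  # subtract the max for numerical stability
    expu = {alt: math.exp(u - m) for alt, u in utilities.items()}
    total = sum(expu.values())
    return {alt: e / total for alt, e in expu.items()}

# Hypothetical brands described by (price, on_promotion)
probs = mnl_probs({"brand_A": [2.0, 1.0], "brand_B": [3.0, 0.0]},
                  betas=[-1.0, 0.5])
```

The semiparametric extension of the paper replaces the linear term for a continuous covariate (e.g. price) by a penalised-spline smooth; the softmax step stays the same.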

4.
We propose an adaptive varying-coefficient spatiotemporal model for data that are observed irregularly over space and regularly in time. The model is capable of catching possible non-linearity (both in space and in time) and non-stationarity (in space) by allowing the auto-regressive coefficients to vary with both spatial location and an unknown index variable. We suggest a two-step procedure to estimate both the coefficient functions and the index variable, which is readily implemented and can be computed even for large spatiotemporal data sets. Our theoretical results indicate that, in the presence of the so-called nugget effect, the errors in the estimation may be reduced via spatial smoothing, the second step in the proposed estimation procedure. The simulation results reinforce this finding. As an illustration, we apply the methodology to a data set of sea level pressure in the North Sea.

5.
In some applications of statistical quality control, the quality of a process or a product is best characterized by a functional relationship between a response variable and one or more explanatory variables. This relationship is referred to as a profile. In certain cases, the quality of a process or a product is better described by a non-linear profile that does not follow a specific parametric model. In these circumstances, nonparametric approaches, with their greater flexibility in modeling complicated profiles, are adopted. In this research, the spline smoothing method is used to model a complicated non-linear profile, and a Hotelling T² control chart based on the spline coefficients is used to monitor the process. After an out-of-control signal is received, a maximum likelihood estimator is employed for change point estimation. The simulation studies, which include both global and local shifts, provide an appropriate evaluation of the performance of the proposed estimation and monitoring procedure. The results indicate that the proposed method detects large global shifts well and is also very sensitive in detecting local shifts.
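The monitoring idea can be illustrated with a toy version: fit each profile by least squares (a straight line stands in for the paper's spline fit) and track the coefficient vector with a Hotelling T² statistic. All profiles, shifts, and sample sizes below are made up for illustration:

```python
import random

def fit_line(xs, ys):
    # least-squares fit of y = a + b*x; a toy stand-in for spline coefficients
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def t2_statistic(coef, mean, cov):
    # Hotelling T^2 for a 2-dimensional coefficient vector
    d0, d1 = coef[0] - mean[0], coef[1] - mean[1]
    det = cov[0][0] * cov[1][1] - cov[0][1] ** 2
    inv00, inv11, inv01 = cov[1][1] / det, cov[0][0] / det, -cov[0][1] / det
    return d0 * (inv00 * d0 + inv01 * d1) + d1 * (inv01 * d0 + inv11 * d1)

random.seed(1)
xs = [i / 10 for i in range(11)]
coefs = []
for _ in range(200):  # in-control profiles: y = 1 + 2x + noise
    ys = [1 + 2 * x + random.gauss(0, 0.1) for x in xs]
    coefs.append(fit_line(xs, ys))
n = len(coefs)
mean = [sum(c[i] for c in coefs) / n for i in (0, 1)]
cov = [[sum((c[i] - mean[i]) * (c[j] - mean[j]) for c in coefs) / (n - 1)
        for j in (0, 1)] for i in (0, 1)]
t2_in = t2_statistic(coefs[0], mean, cov)                  # in-control profile
t2_out = t2_statistic(fit_line(xs, [1.5 + 2 * x for x in xs]), mean, cov)
```

A globally shifted profile (intercept moved from 1 to 1.5) yields a T² far above the in-control values, which is the signal the chart reacts to.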

6.
This paper proposes a hysteretic autoregressive model with a GARCH specification and a skew Student's t error distribution for financial time series. With an integrated hysteresis zone, this model allows switching of both the conditional mean and the conditional volatility to be delayed when the hysteresis variable lies in the hysteresis zone. We perform Bayesian estimation via an adaptive Markov chain Monte Carlo sampling scheme. The proposed Bayesian method allows simultaneous inference for all unknown parameters, including the threshold values and a delay parameter. To implement model selection, we propose a numerical approximation of the marginal likelihoods needed for the posterior odds. The proposed methodology is illustrated using simulation studies and two major Asian stock basis series. We conduct a model comparison for variant hysteresis and threshold GARCH models based on posterior odds ratios, finding strong evidence of a hysteretic effect and some asymmetric heavy-tailedness. Compared with multi-regime threshold GARCH models, this new class of models is more suitable for describing real data sets. Finally, we employ Bayesian forecasting methods in a value-at-risk study of the return series.
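The delayed-switching mechanism of a hysteresis zone can be sketched in a few lines; the zone bounds and the series below are hypothetical, and this omits the AR-GARCH dynamics and Bayesian machinery of the paper:

```python
def hysteretic_regimes(z, lower, upper, start=0):
    """Regime path for a hysteresis variable z: switch to regime 1 when
    z exceeds the upper bound, switch back to regime 0 when z falls below
    the lower bound, and keep the previous regime while z stays inside
    the hysteresis zone [lower, upper]."""
    regimes, current = [], start
    for zt in z:
        if zt > upper:
            current = 1
        elif zt < lower:
            current = 0
        # lower <= zt <= upper: the regime change is delayed
        regimes.append(current)
    return regimes

path = hysteretic_regimes([-1.0, 0.2, 1.5, 0.3, -0.4, 0.1],
                          lower=-0.5, upper=1.0)
```

Note how the regime stays at 1 after the excursion above the upper bound even though the variable re-enters the zone; a single-threshold model would have switched back immediately.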

7.
We consider the problem of parameter estimation for inhomogeneous space-time shot-noise Cox point processes. We explore the possibility of using a stepwise estimation method and dimensionality-reducing techniques to estimate different parts of the model separately. We discuss the estimation method using projection processes and propose a refined method that avoids projection to the temporal domain. This remedies the main flaw of the projection-based method: clusters that are clearly separated in the original space-time process may overlap in the projection process. This issue is more prominent in the temporal projection process, where the amount of information lost by projection is higher than in the spatial projection process. For the refined method, we derive consistency and asymptotic normality results under increasing domain asymptotics and appropriate moment and mixing assumptions. We also present a simulation study which suggests that cluster overlapping is successfully overcome by the refined method.

8.
The German Microcensus (MC) is a large-scale rotating panel survey over three years. The MC is attractive for longitudinal analysis over the entire participation duration because of the mandatory participation and the very high case numbers (about 200,000 respondents). However, as a consequence of the area sampling used for the MC, residential mobility is not covered, and consequently statistical information at the new residence is lacking in the MC sample. This raises the question of whether longitudinal analyses, such as transitions between labour market states, are biased, and how well different methods that promise to reduce such a bias perform. Similar problems also occur for other national Labour Force Surveys (LFS) that are rotating panels and do not cover residential mobility; see Clarke and Tate (2002). Based on data from the German Socio-Economic Panel (SOEP), which covers residential mobility, we analysed the effects of the missing data for residential movers on the estimation of labour force flows. By comparing the results from the complete SOEP sample with the results from the SOEP restricted to the non-movers, we concluded that the non-coverage of residential movers cannot be ignored in Rubin's sense. With respect to correction methods, we analysed weighting by inverse mobility scores and log-linear models for partially observed contingency tables. Our results indicate that weighting by inverse mobility scores reduces the bias by about 60%, whereas the official longitudinal weights obtained by calibration result in a bias reduction of about 80%. The estimation of log-linear models for non-ignorable non-response leads to very unstable results.

9.
This paper considers the problem of selecting optimal bandwidths for variable (sample-point adaptive) kernel density estimation. A data-driven variable bandwidth selector is proposed, based on the idea of approximating the log-bandwidth function by a cubic spline. This cubic spline is optimized with respect to a cross-validation criterion. The proposed method can be interpreted as a selector for either integrated squared error (ISE) or mean integrated squared error (MISE) optimal bandwidths. This leads to reflection upon some of the differences between ISE and MISE as error criteria for variable kernel estimation. Results from simulation studies indicate that the proposed method outperforms a fixed kernel estimator (in terms of ISE) when the target density has a combination of sharp modes and regions of smooth undulation. Moreover, some detailed data analyses suggest that the gains in ISE may understate the improvements in visual appeal obtained using the proposed variable kernel estimator. These numerical studies also show that the proposed estimator outperforms existing variable kernel density estimators implemented using piecewise constant bandwidth functions.
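A sample-point adaptive kernel estimator of the kind being tuned here evaluates each kernel with its own bandwidth. A minimal sketch with fixed, hand-picked bandwidths (the paper's contribution, choosing them via a spline-based cross-validation, is not reproduced):

```python
import math

def variable_kde(x, data, bandwidths):
    """Gaussian kernel density estimate in which each sample point
    contributes with its own bandwidth h_i (sample-point adaptive)."""
    total = 0.0
    for xi, hi in zip(data, bandwidths):
        u = (x - xi) / hi
        total += math.exp(-0.5 * u * u) / (hi * math.sqrt(2 * math.pi))
    return total / len(data)

# Hypothetical data: two close points and one isolated point,
# with a wider bandwidth assigned to the isolated one
data = [0.0, 0.2, 5.0]
bws = [0.3, 0.3, 1.0]
f_mode = variable_kde(0.0, data, bws)
f_gap = variable_kde(2.5, data, bws)
# the estimate still integrates to (approximately) one
mass = sum(variable_kde(-10 + 0.01 * i, data, bws) for i in range(2001)) * 0.01
```

Letting the bandwidth vary with the sample point is what allows sharp modes (small h) and smooth tails (large h) to coexist in one estimate.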

10.
The classical Lorenz curve visualizes and measures the disparity of items that are characterized by a single variable: the more the curve bends, the more scattered the data. Recently, a general approach has been proposed and investigated that measures the disparity of multidimensional items regardless of their dimension. This paper surveys various generalizations of the Lorenz curve and Lorenz dominance for multidimensional data. First, the Lorenz zonoid of multivariate data and, more generally, of a random vector is introduced. Then three multivariate extensions of univariate Lorenz dominance are surveyed and contrasted: the set inclusion of lift zonoids, the scaled convex order, and the price Lorenz order. The latter is based on the set inclusion of extended Lorenz zonoids. Finally, a decomposition of the multivariate volume-Gini mean difference is given.
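For the univariate starting point, the Lorenz curve and the Gini coefficient it induces can be computed directly (a sketch of the classical case only, not of the zonoid generalizations):

```python
def lorenz_points(values):
    """Points of the classical (univariate) Lorenz curve: cumulative
    population share against cumulative share of the total."""
    xs = sorted(values)
    total = sum(xs)
    cum, points = 0.0, [(0.0, 0.0)]
    for i, v in enumerate(xs, start=1):
        cum += v
        points.append((i / len(xs), cum / total))
    return points

def gini(values):
    # Gini coefficient: twice the area between the diagonal and the curve,
    # computed here by the trapezoid rule on the Lorenz points
    pts = lorenz_points(values)
    area = sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    return 1 - 2 * area
```

Equal values give a straight diagonal (Gini 0); concentrating everything in one item bends the curve toward the corner and drives the Gini toward 1.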

11.
Non-random sampling is a source of bias in empirical research. It is common for the outcomes of interest (e.g. the wage distribution) to be skewed in the source population. Sometimes, the outcomes are further subject to sample selection, which is a type of missing data, resulting in partial observability. Thus, methods based on complete cases for skewed data are inadequate for the analysis of such data, and a general sample selection model is required. Heckman proposed a full maximum likelihood estimation method under the normality assumption for sample selection problems, and parametric and non-parametric extensions have been proposed. We generalize the Heckman selection model to allow for underlying skew-normal distributions. The finite-sample performance of the maximum likelihood estimator of the model is studied via simulation. Applications illustrate the strength of the model in capturing spurious skewness in bounded scores, and in modelling data where a logarithm transformation could not mitigate the effect of inherent skewness in the outcome variable.
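For orientation, the classical normal-theory ingredient that this line of work builds on is the inverse Mills ratio used in Heckman's two-step correction; the skew-normal generalization of the paper is not reproduced here:

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def inverse_mills(z):
    """lambda(z) = phi(z) / Phi(z): under normality, the correction
    regressor added to the outcome equation in Heckman's two-step
    estimator to absorb the selection bias."""
    return norm_pdf(z) / norm_cdf(z)
```

The ratio is large for observations that barely cleared the selection hurdle (small z) and vanishes for observations that were almost certain to be selected, which is exactly the pattern of the selection bias being corrected.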

12.
We consider online monitoring of sequentially arriving data as met, for example, in clinical information systems. The general focus is to detect breakpoints, i.e. time points where the measurement series suddenly changes its general level. The method suggested is based on local estimation. In particular, local linear smoothing is combined, via ridging, with local constant smoothing. The procedure is demonstrated by examples and compared with other available online monitoring routines.

13.
In this note we discuss two-step kernel estimation of varying coefficient regression models that have a common smoothing variable. The method allows one to use different bandwidths for different coefficient functions. We consider local polynomial fitting and present explicit formulas for the asymptotic biases and variances of the estimators.
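A one-step, single-bandwidth version of such a varying-coefficient fit can be sketched as kernel-weighted least squares at a point u0; the note's two-step refinement, which allows a different bandwidth per coefficient function, is not reproduced, and the simulated model below is hypothetical:

```python
import math
import random

def varying_coef_fit(u0, u, x, y, h):
    """Kernel-weighted least squares for y_i = a(u_i) + b(u_i) * x_i,
    evaluated at the smoothing-variable value u0 with one bandwidth h."""
    w = [math.exp(-0.5 * ((ui - u0) / h) ** 2) for ui in u]
    s0 = sum(w)
    s1 = sum(wi * xi for wi, xi in zip(w, x))
    s2 = sum(wi * xi * xi for wi, xi in zip(w, x))
    t0 = sum(wi * yi for wi, yi in zip(w, y))
    t1 = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = s0 * s2 - s1 * s1
    a = (s2 * t0 - s1 * t1) / det  # intercept function a(u0)
    b = (s0 * t1 - s1 * t0) / det  # slope function b(u0)
    return a, b

# Simulated data from a(u) = u, b(u) = 2u (all choices hypothetical)
random.seed(2)
n = 2000
u = [random.random() for _ in range(n)]
x = [random.gauss(0, 1) for _ in range(n)]
y = [ui + 2 * ui * xi + random.gauss(0, 0.05) for ui, xi in zip(u, x)]
a_hat, b_hat = varying_coef_fit(0.5, u, x, y, h=0.1)
```

At u0 = 0.5 the fit should recover a(0.5) = 0.5 and b(0.5) = 1.0 up to smoothing and sampling error; if a(·) and b(·) have very different smoothness, a shared h is suboptimal, which is the motivation for the two-step method.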

14.
Nonparametric estimation and inference for conditional distribution functions with longitudinal data have important applications in biomedical studies, such as epidemiological studies and longitudinal clinical trials. Estimation approaches without any structural assumptions may lead to inadequate and numerically unstable estimators in practice. We propose in this paper a nonparametric approach based on time-varying parametric models for estimating the conditional distribution functions with a longitudinal sample. Our model assumes that the conditional distribution of the outcome variable at each given time point can be approximated by a parametric model after a local Box–Cox transformation. Our estimation is based on a two-step smoothing method, in which we first obtain the raw estimators of the conditional distribution functions at a set of disjoint time points, and then compute the final estimators at any time by smoothing the raw estimators. The application of our two-step estimation method is demonstrated through a large epidemiological study of childhood growth and blood pressure. Finite sample properties of our procedures are investigated through a simulation study. Application and simulation results show that smoothing estimation from time-varying parametric models outperforms the existing kernel smoothing estimator by producing a narrower pointwise bootstrap confidence band and a smaller root mean squared error.

15.
A sub-threshold signal is transmitted through a channel and may be detected when some noise, with known structure and proportional to some level, is added to the data. There is an optimal noise level, called the stochastic resonance level, that corresponds to the minimum variance of the estimators in the problem of recovering unobservable signals. Evidence of the stochastic resonance effect has been shown for several noise structures. Here we study the case where the noise is a Markovian process. We propose consistent estimators of the sub-threshold signal and further solve a problem of hypothesis testing. We also discuss evidence of stochastic resonance for both the estimation and the hypothesis testing problems via examples.

16.
We study the quantile estimation methods for the distortion measurement error data when variables are unobserved and distorted with additive errors by some unknown functions of an observable confounding variable. After calibrating the error-prone variables, we propose the quantile regression estimation procedure and composite quantile estimation procedure. Asymptotic properties of the proposed estimators are established, and we also investigate the asymptotic relative efficiency compared with the least-squares estimator. Simulation studies are conducted to evaluate the performance of the proposed methods, and a real dataset is analyzed as an illustration.
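After calibration, the quantile estimation step rests on minimizing the check (pinball) loss. A minimal sketch showing that the empirical check-loss minimizer is a sample quantile (a grid over the data points stands in for the full regression fit, and the calibration step is not reproduced):

```python
def check_loss(u, tau):
    # quantile (pinball) loss rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (1 if u < 0 else 0))

def quantile_fit(y, tau):
    """Minimise the empirical check loss over candidate values taken from
    the data; the minimiser is a tau-th sample quantile."""
    best, best_loss = None, float("inf")
    for c in sorted(y):
        loss = sum(check_loss(yi - c, tau) for yi in y)
        if loss < best_loss:
            best, best_loss = c, loss
    return best
```

With tau = 0.5 this recovers the median; composite quantile estimation, as in the paper, combines such losses across several tau values to gain efficiency.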

17.
M. C. Jones, Statistics, 2013, 47(1–2): 65–71
Two types of non-global bandwidth, which may be called local and variable, have been defined in attempts to improve the performance of kernel density estimators. In nonparametric regression, local linear fitting has become a method of much popularity. It is natural, therefore, to consider the use of non-global bandwidths in the local linear context, and indeed local bandwidths are often used. In this paper, it is observed that a natural proposal in the literature for combining variable bandwidths with local linear fitting fails in the sense that the resulting mean squared error properties are those normally associated with local rather than variable bandwidths. We are able to understand why this happens in terms of weightings that are involved. We also attempt to investigate how the bias reduction expected of well-chosen variable bandwidths might be achieved in conjunction with local linear fitting.

18.
As a flexible alternative to the Cox model, the accelerated failure time (AFT) model assumes that the event time of interest depends on the covariates through a regression function. The AFT model with non-parametric covariate effects is investigated when variable selection is desired along with estimation. Formulated in the framework of the smoothing spline analysis of variance model, the proposed method, based on the Stute estimate (Stute, 1993 [Consistent estimation under random censorship when covariables are present, J. Multivariate Anal. 45, 89–103]), can achieve a sparse representation of the functional decomposition by utilizing a reproducing kernel Hilbert norm penalty. Computational algorithms and theoretical properties of the proposed method are investigated. The finite sample size performance of the proposed approach is assessed via simulation studies. The primary biliary cirrhosis data are analyzed for demonstration.

19.
This paper considers the problem of selecting a robust threshold for wavelet shrinkage. Previous approaches reported in the literature to handle the presence of outliers mainly focus on developing a robust procedure for a given threshold, which requires solving a nontrivial optimization problem. The drawback of this approach is that the selection of a robust threshold, which is crucial for the resulting fit, is ignored. This paper points out that the best fit can be achieved by robust wavelet shrinkage with a robust threshold. We propose data-driven selection methods for a robust threshold. These approaches are based on coupling classical wavelet thresholding rules with pseudo data. The concept of pseudo data has influenced the implementation of the proposed methods and provides a fast and efficient algorithm. Results from a simulation study and a real example demonstrate the promising empirical properties of the proposed approaches.
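For context, a sketch of the classical ingredients, one level of the Haar transform plus soft thresholding with a hand-picked threshold; the paper's robust, pseudo-data-based threshold selection is not reproduced:

```python
import math

def haar_step(signal):
    # one level of the orthonormal Haar wavelet transform
    s = math.sqrt(2)
    approx = [(a + b) / s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def soft_threshold(coeffs, lam):
    """Classical soft-thresholding rule: shrink each detail coefficient
    toward zero by lam, setting small coefficients exactly to zero."""
    return [math.copysign(max(abs(c) - lam, 0.0), c) for c in coeffs]

# Hypothetical noisy step signal; the small detail coefficients are noise
signal = [1.0, 1.2, 1.0, 0.8, 5.0, 5.1, 5.0, 4.9]
approx, detail = haar_step(signal)
denoised_detail = soft_threshold(detail, lam=0.2)
```

With outliers in the data, a threshold chosen by a non-robust rule can be inflated or deflated arbitrarily, which is precisely why the paper argues the threshold itself must be selected robustly.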

20.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated squared error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that, in practice, our method performs well when sample sizes are small. In addition, we also apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
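Likelihood cross-validation for a single fixed bandwidth, the classical criterion that the Bayesian variable-bandwidth procedure builds on, can be sketched via the leave-one-out log likelihood (the sample and candidate grid below are made up):

```python
import math
import random

def loo_log_likelihood(data, h):
    """Leave-one-out log likelihood of a fixed-bandwidth Gaussian KDE;
    likelihood cross-validation selects the h maximising this score."""
    n = len(data)
    total = 0.0
    for i, xi in enumerate(data):
        dens = sum(math.exp(-0.5 * ((xi - xj) / h) ** 2)
                   for j, xj in enumerate(data) if j != i)
        dens /= (n - 1) * h * math.sqrt(2 * math.pi)
        total += math.log(dens)
    return total

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(200)]
grid = [0.1, 0.2, 0.4, 0.8, 1.6]
best_h = max(grid, key=lambda h: loo_log_likelihood(sample, h))
```

For a standard normal sample of this size, the criterion should reject both the undersmoothed and the heavily oversmoothed ends of the grid; the paper's procedure replaces the single h by a variable bandwidth with a Bayesian graphical-model prior.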

