Similar Literature
20 similar documents found (search time: 31 ms)
1.
Summary.  Recently there has been much work on developing models that are suitable for analysing the volatility of a continuous-time process. One general approach is to define a volatility process as the convolution of a kernel with a non-decreasing Lévy process, which is non-negative if the kernel is non-negative. Within the framework of continuous-time autoregressive moving average (CARMA) processes, we derive a necessary and sufficient condition for the kernel to be non-negative. This condition is stated in terms of the Laplace transform of the CARMA kernel and has a simple form. We discuss some useful consequences of this result and delineate the parametric region of stationarity and kernel non-negativity for some lower-order CARMA models.
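For a CARMA process with distinct autoregressive roots, the kernel has the well-known closed form g(t) = Σᵢ b(λᵢ)/a′(λᵢ) exp(λᵢ t). The following is a minimal numerical sketch of checking stationarity and kernel non-negativity on a grid for a CARMA(2,1) model; the parameter values are illustrative only, not from the article, and the grid check is no substitute for the article's Laplace-transform criterion.

```python
import numpy as np

# Hedged sketch: evaluate the CARMA(2,1) kernel
#   g(t) = sum_i b(lam_i)/a'(lam_i) * exp(lam_i * t)
# over the AR roots lam_i and check non-negativity on a grid.
a1, a2 = 3.0, 2.0          # AR polynomial a(z) = z^2 + a1*z + a2 (roots -1, -2)
b0 = 1.5                   # MA polynomial b(z) = b0 + z

roots = np.roots([1.0, a1, a2])      # AR roots; stationarity needs Re(lam) < 0
aprime = lambda z: 2.0 * z + a1      # derivative a'(z)
b = lambda z: b0 + z

t = np.linspace(0.0, 10.0, 1000)
g = sum(b(lam) / aprime(lam) * np.exp(lam * t) for lam in roots).real

print("stationary:", np.all(roots.real < 0))
print("kernel non-negative on grid:", np.all(g >= -1e-12))
```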

2.
Mixture experiments are often carried out in the presence of process variables, such as days of the week or different machines in a manufacturing process, or different ovens in bread and cake making. In such experiments it is particularly useful to be able to arrange the design in orthogonal blocks, so that the model in the mixture variables may be fitted independently of the block effects introduced to take account of the changes in the process variables. It is possible in some situations that some of the ingredients in the mixture, such as additives or flavourings, are present in small quantities, perhaps as low as 5% or even 1%, resulting in the design space being restricted to only part of the mixture simplex. Hau and Box (1990) discussed the construction of experimental designs for situations where constraints are placed on the design variables. They considered projecting standard response surface designs, including factorial designs and central composite designs, into the restricted design space, and showed that the desirable property of block orthogonality is preserved by the projections considered. Here we present a number of examples of projection designs and illustrate their use when some of the ingredients are restricted to small values, such that the design space is restricted to a sub-region within the usual simplex in the mixture variables.

3.
Circular data are observations that are represented as points on a unit circle. Times of day and directions of wind are two such examples. In this work, we present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is useful especially when the likelihood surface is ill behaved. Markov chain Monte Carlo techniques are used to fit the proposed model and to generate predictions. The method is illustrated using an environmental data set.
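A common way to link a circular response to a linear predictor is through a mean direction μ(x) = 2·atan(βx). The sketch below is a simple frequentist stand-in for that link: von Mises responses with β recovered by a grid search over the log-likelihood. All parameter values are illustrative, and the article's Bayesian Dirichlet-process machinery is not reproduced.

```python
import numpy as np

# Hedged sketch: circular-on-linear regression with the tangent link
# mu(x) = 2*atan(beta*x) and von Mises errors; beta is recovered by a
# grid search over the (constant-dropped) von Mises log-likelihood.
rng = np.random.default_rng(5)
beta_true, kappa, n = 1.5, 4.0, 500
x = rng.normal(size=n)
theta = rng.vonmises(2 * np.arctan(beta_true * x), kappa)   # circular responses

def loglik(beta):
    mu = 2 * np.arctan(beta * x)
    return kappa * np.sum(np.cos(theta - mu))   # von Mises kernel, constants dropped

grid = np.linspace(0.0, 3.0, 301)
beta_hat = grid[np.argmax([loglik(b) for b in grid])]
print("beta_hat:", round(beta_hat, 2))
```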

4.
Abstract

We propose a novel approach to estimating the Cox model with temporal covariates. Our approach treats the temporal covariates as arising from a longitudinal process that is modeled jointly with the event time. Unlike the existing literature, the longitudinal process in our model is specified as a bounded variation process determined by a family of initial value problems associated with an ordinary differential equation. Our specification has the advantage that only the observation of the temporal covariates at the event time and the event time itself are needed to fit the model; additional longitudinal observations can be used but are not required. This makes our approach very useful for many medical outcome datasets, such as SPARCS and NIS, where it is important to find the hazard rate of discharge given the accumulated cost, but only the total cost at the discharge time is available owing to the protection of private information. Our estimation procedure is based on maximizing the full-information likelihood function. The resulting estimators are shown to be consistent and asymptotically normally distributed. Simulations and a real example illustrate the utility of the proposed model. Finally, a couple of extensions are discussed.

5.
Abstract.  A useful tool when analysing spatial point patterns is the pair correlation function (e.g. Fractals, Random Shapes and Point Fields, Wiley, New York, 1994). In practice, this function is often estimated by some nonparametric procedure such as kernel smoothing, where the smoothing parameter (i.e. the bandwidth) is often determined arbitrarily. In this article, a data-driven method for the selection of the bandwidth is proposed. The efficacy of the proposed approach is studied through both simulations and an application to a forest data example.

6.
More flexible semiparametric linear-index regression models are proposed to describe the conditional distribution. Such a model formulation captures the varying effects of covariates over the support of the response variable, offers an alternative perspective on dimension reduction, and covers many widely used parametric and semiparametric regression models. A feasible pseudo-likelihood approach, accompanied by a simple and easily implemented algorithm, is further developed for the mixed case with both varying and invariant coefficients. By establishing some theoretical properties on Banach spaces, the uniform consistency and asymptotic Gaussian process limit of the proposed estimator are also established in this article. In addition, under monotonicity of the distribution in the linear index, we develop an alternative approach based on maximizing a varying accuracy measure. By virtue of the asymptotic recursion relation for the estimators, the achievements in this direction include showing the convergence of the iterative computation procedure and establishing the large-sample properties of the resulting estimator. Notably, our theoretical framework is very helpful in constructing confidence bands for the parameters of interest and tests for hypotheses about various qualitative structures of the distribution. The developed estimation and inference procedures perform quite satisfactorily in the conducted simulations and are demonstrated to be useful in reanalysing data from the Boston house price study and the World Values Survey.

7.
This paper discusses regression analysis of panel count data with dependent observation and dropout processes. For the problem, a general mean model is presented that can allow both additive and multiplicative effects of covariates on the underlying point process. In addition, the proportional rates model and the accelerated failure time model are employed to describe possible covariate effects on the observation process and the dropout or follow-up process, respectively. For estimation of regression parameters, some estimating equation-based procedures are developed and the asymptotic properties of the proposed estimators are established. In addition, a resampling approach is proposed for estimating a covariance matrix of the proposed estimator and a model checking procedure is also provided. Results from an extensive simulation study indicate that the proposed methodology works well for practical situations, and it is applied to a motivating set of real data.

8.
Change in the coefficients or in the mean of the innovations of an INAR(p) process is a sign of disturbance that is important to detect. The proposed methods can test for change in any one of these quantities separately, or in any collection of them. They make both one-sided and two-sided tests possible; furthermore, they can be used to test against the "epidemic" alternative. The tests are based on a CUSUM process using CLS estimators of the parameters. Under the one-sided and two-sided alternatives, consistency of the tests is proved and the properties of the change-point estimator are also explored.
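The building blocks of such tests can be sketched in a few lines: simulate an INAR(1) series by binomial thinning, estimate the parameters by conditional least squares (CLS), and form a CUSUM of the conditional residuals. This is a hedged illustration of the ingredients only; the article's critical values and change-point estimator are not reproduced.

```python
import numpy as np

# Hedged sketch: CLS estimation for INAR(1) and a standardized CUSUM of
# conditional residuals. Parameters and sample size are illustrative.
rng = np.random.default_rng(3)
alpha, lam, T = 0.5, 1.0, 400

x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)  # thinning + innovation

# CLS: the conditional mean is alpha*x_{t-1} + lam, so regress x_t on x_{t-1}
y, z = x[1:], x[:-1]
alpha_hat = np.cov(y, z, bias=True)[0, 1] / np.var(z)
lam_hat = y.mean() - alpha_hat * z.mean()

resid = y - (alpha_hat * z + lam_hat)
cusum = np.cumsum(resid) / (np.std(resid) * np.sqrt(len(resid)))
print(f"alpha_hat={alpha_hat:.2f}, lambda_hat={lam_hat:.2f}, "
      f"max|CUSUM|={np.abs(cusum).max():.2f}")
```

A change-point test would compare max|CUSUM| (or a one-sided version of it) against a critical value.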

9.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
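When the conformance proportion is treated as a binomial success probability, the textbook (Wald) interval and common alternatives such as the Wilson interval are straightforward to compute. The sketch below shows both; the counts are invented for illustration, and the article's FGPQ-based interval is not reproduced here.

```python
import math

# Hedged sketch: two textbook confidence intervals for a binomial proportion.
def wald_ci(x, n, z=1.96):
    """Standard (Wald) interval: p_hat +/- z*sqrt(p_hat*(1-p_hat)/n)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(x, n, z=1.96):
    """Wilson score interval, known to hold coverage better near 0 or 1."""
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

x, n = 192, 200           # e.g. 192 conforming items out of 200 (invented)
print("Wald  :", wald_ci(x, n))
print("Wilson:", wilson_ci(x, n))
```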

10.
There are many approaches to the estimation of spectral density. Among parametric approaches, different divergences have been proposed for fitting a given parametric family of spectral densities. Nonparametric approaches are also quite common when the model of the process cannot be specified. In this paper, we develop a local Whittle likelihood approach based on a general score function; through special choices of the score function, the approach covers a wider range of applications. This paper establishes the asymptotic properties of our general local Whittle estimator and presents a comparison with other estimators. Additionally, for a special case, we construct the one-step-ahead predictor based on the form of the score function and show that it has a smaller prediction error than the classical exponentially weighted linear predictor. The provided numerical studies show some interesting features of our local Whittle estimator.

11.
For estimating area-specific parameters (quantities) in a finite population, a mixed-model prediction approach is attractive. However, this approach depends strongly on the normality assumption for the response values, although non-normal cases are often encountered in practice. In such cases, transforming the observations to make them suitable for the normality assumption is a useful tool, but the problem of selecting a suitable transformation remains open. To overcome this difficulty, we propose a new empirical best prediction method that uses a parametric family of transformations to estimate a suitable transformation from the data. We suggest a simple estimating method for the transformation parameters based on the profile likelihood function, which achieves consistency under some conditions on the transformation functions. For measuring the variability of the point prediction, we construct an empirical Bayes confidence interval for the population parameter of interest. Through simulation studies, we investigate the numerical performance of the proposed methods. Finally, we apply the proposed method to synthetic income data for Spanish provinces, where the resulting estimates indicate that the commonly used log transformation would not be appropriate.
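The profile-likelihood idea for choosing a transformation parameter can be illustrated in its simplest one-sample form with the Box-Cox family: maximize the normal-model profile log-likelihood (including the Jacobian term) over a grid of λ. This is a hedged sketch under invented lognormal "income" data, not the article's mixed-model procedure.

```python
import numpy as np

# Hedged sketch: select a Box-Cox parameter lambda by profile likelihood
# under a normal model on the transformed scale.
def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def profile_loglik(y, lam):
    z = boxcox(y, lam)
    n = len(y)
    # normal log-likelihood with sigma^2 profiled out, plus the Jacobian term
    return -0.5 * n * np.log(np.var(z)) + (lam - 1.0) * np.sum(np.log(y))

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # skewed synthetic data
grid = np.linspace(-1.0, 2.0, 301)
lam_hat = grid[np.argmax([profile_loglik(y, l) for l in grid])]
print("selected lambda:", round(lam_hat, 2))       # values near 0 favour the log
```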

12.
Clustering gene expression data is an important step in providing information to biologists. A Bayesian clustering procedure using Fourier series with a Dirichlet process prior for clusters was developed. As an optimal computational tool for this Bayesian approach, Gibbs sampling of a normal mixture with a Dirichlet process was implemented to calculate the posterior probabilities when the number of clusters was unknown. Monte Carlo study results showed that the model was useful for suitable clustering. The proposed method was applied to the budding yeast Saccharomyces cerevisiae and provided biologically interpretable results.

13.
Abstract

Markov processes offer a useful basis for modeling the progression of organisms through successive stages of their life cycle. When organisms are examined intermittently in developmental studies, likelihoods can be constructed based on the resulting panel data in terms of transition probability functions. In some settings however, organisms cannot be tracked individually due to a difficulty in identifying distinct individuals, and in such cases aggregate counts of the number of organisms in different stages of development are recorded at successive time points. We consider the setting in which such aggregate counts are available for each of a number of tanks in a developmental study. We develop methods which accommodate clustering of the transition rates within tanks using a marginal modeling approach followed by robust variance estimation, and through use of a random effects model. Composite likelihood is proposed as a basis of inference in both settings. An extension which incorporates mortality is also discussed. The proposed methods are shown to perform well in empirical studies and are applied in an illustrative example on the growth of the Arabidopsis thaliana plant.

14.
We introduce a process of non-intersecting convex particles by thinning a primary particle process such that the remaining particles are mutually non-intersecting and have maximum total volume among all such subsystems. This approach is based on the idea, proposed by Matérn, of constructing hard-core processes by suitable dependent thinnings, but it generates packings with higher volume fractions than the known thinning models. Owing to the enormous complexity of the computations involved, we develop a two-phase heuristic algorithm whose first phase turns out to yield a structure of Matérn III type. We focus mainly on the generation of packings with high volume fractions and present some simulation results for Poisson primary particle processes of equally sized balls in ℝ² and ℝ³. The results are compared with the well-known random sequential adsorption model and Matérn-type models.
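The classical Matérn type-II construction that such models build on is easy to sketch: give each primary point a random birth mark and retain it only if no earlier-born point lies within the hard-core distance. The sketch below does this for equal balls in the unit square; the fixed point count, radius, and seed are illustrative, and the article's higher-density two-phase algorithm is not reproduced.

```python
import numpy as np

# Hedged sketch: Matérn type-II style dependent thinning of points in [0,1]^2.
# Each point survives only if no earlier-born point lies within 2*r.
rng = np.random.default_rng(1)
r = 0.03                                  # common ball radius
pts = rng.uniform(0, 1, size=(600, 2))    # primary points (fixed count for simplicity)
birth = rng.uniform(0, 1, size=len(pts))  # independent birth marks

keep = []
for i in range(len(pts)):
    d = np.linalg.norm(pts - pts[i], axis=1)
    rivals = (d < 2 * r) & (birth < birth[i])   # earlier-born points too close
    rivals[i] = False
    if not rivals.any():
        keep.append(i)

kept = pts[keep]
frac = len(kept) * np.pi * r**2          # volume fraction, ignoring edge effects
print(f"kept {len(kept)} of {len(pts)} balls, volume fraction ~ {frac:.3f}")
# sanity check: retained balls are mutually non-overlapping
dists = np.linalg.norm(kept[:, None] - kept[None, :], axis=-1)
print("hard-core satisfied:", np.all(dists[np.triu_indices(len(kept), 1)] >= 2 * r))
```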

15.
This paper discusses the development of a multivariate control charting technique for short-run manufacturing environments with autocorrelated data. The proposed approach combines multivariate residual charts for autocorrelated data with the multivariate transformation technique for i.i.d. process observations of short lengths. It consists of fitting an adequate multivariate time-series model to the various process outputs, computing the residuals, transforming them into standard normal N(0, 1) data, and then using the standardized data as inputs to conventional univariate i.i.d. control charts. The objective of applying multivariate finite-horizon techniques to autocorrelated processes is to allow continuous process monitoring, since all process outputs are controlled through the use of a single control chart with constant control limits. Through simulated examples, it is shown that the proposed short-run process monitoring technique provides approximately the same shift detection properties as VAR residual charts.

16.
Using data from the AIDS Link to Intravenous Experiences cohort study as an example, an informative censoring model was used to characterize the repeated hospitalization process of a group of patients. Under the informative censoring assumption, the estimators of the baseline rate function and the regression parameters were shown to be related to a latent variable. Hence, it becomes impractical to directly estimate the unknown quantities in the moments of the estimators, which are needed for the bandwidth selection of a smoothing estimator and for the construction of confidence intervals, based respectively on the asymptotic mean squared errors and the asymptotic distributions of the estimators. To overcome these difficulties, we develop a random weighted bootstrap procedure to select appropriate bandwidths and to construct approximate confidence intervals. Our method is simpler and faster to implement from a practical point of view, and is at least as accurate as other bootstrap methods. A Monte Carlo simulation shows that the proposed method is useful. An application of our procedure is also illustrated with a recurrent event sample of intravenous drug users receiving inpatient care over time.

17.
Abstract

This paper considers an extension of the classical discrete-time risk model in which the claim numbers are assumed to exhibit temporal dependence and overdispersion. The proposed risk model is based on the first-order integer-valued autoregressive (INAR(1)) process with discrete compound Poisson distributed innovations. The explicit expression for the moment generating function of the discounted aggregate claim amount is derived. Some numerical examples are provided to illustrate the impacts of dependence and overdispersion on related quantities such as the stop-loss premium, the value at risk and the tail value at risk.
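The model can be illustrated by simulation: claim counts follow an INAR(1) recursion via binomial thinning, and the discounted aggregate claim amount sums discounted per-period claim totals. The innovation distribution, claim-size distribution, and all parameter values below are invented for illustration (plain Poisson innovations stand in for the article's compound Poisson case).

```python
import numpy as np

# Hedged sketch: simulate INAR(1) claim counts by binomial thinning and
# compute the discounted aggregate claim amount with exponential claim sizes.
rng = np.random.default_rng(2)
alpha, lam, v = 0.4, 2.0, 0.95            # thinning prob, innovation mean, discount
T = 50

N = np.zeros(T + 1, dtype=int)
N[0] = rng.poisson(lam / (1 - alpha))     # start near the stationary mean
for t in range(1, T + 1):
    survivors = rng.binomial(N[t - 1], alpha)   # binomial thinning of N_{t-1}
    N[t] = survivors + rng.poisson(lam)         # plus innovation

# discounted aggregate claims: sum_t v^t * (sum of the N_t claim sizes at time t)
claims = np.array([rng.exponential(1.0, size=n).sum() for n in N[1:]])
discounted_total = np.sum(v ** np.arange(1, T + 1) * claims)
print("mean count:", N.mean().round(2),
      "| discounted aggregate:", round(discounted_total, 2))
```

Repeating the simulation many times gives Monte Carlo approximations to quantities such as the stop-loss premium or value at risk.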

18.
Robust control charts are useful in statistical process control (SPC) when there is limited knowledge about the underlying process distribution, especially for multivariate observations. This article develops a new robust and self-starting multivariate procedure based on the multivariate Smirnov test (MST), which integrates a multivariate two-sample goodness-of-fit (GOF) test based on the multivariate empirical distribution function (MEDF) with the change-point model. As expected, simulation results show that our proposed control chart is robust to non-normally distributed data and, moreover, that it is efficient in detecting process shifts, especially large shifts, poor detection of which is one of the main drawbacks of most robust control charts in the literature. As it avoids the need for a lengthy data-gathering step, the proposed chart is particularly useful in start-up or short-run situations. Comparison results and a real data example show that our proposed chart has great potential for application.

19.
The typical approach in change-point theory is to perform the statistical analysis based on a sample of fixed size. Alternatively, one observes some random phenomenon sequentially and takes action as soon as one observes some statistically significant deviation from the "normal" behaviour. Based on the, perhaps, more realistic situation that the process can only be partially observed, we consider the counting process related to the original process observed at equidistant time points, after which action is taken or not depending on the number of observations between those time points. In order for the procedure to stop also when everything is in order, we introduce a fixed time horizon n at which we stop declaring "no change" if the observed data did not suggest any action until then. We propose some stopping rules and consider their asymptotics under the null hypothesis as well as under alternatives. The main basis for the proofs are strong invariance principles for renewal processes and extreme value asymptotics for Gaussian processes.

20.
Extropy, a complementary dual of entropy, is considered in this paper. A Bayesian approach based on the Dirichlet process is proposed for the estimation of extropy. A goodness of fit test is also developed. Many theoretical properties of the procedure are derived. Several examples are discussed to illustrate the approach.
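For a density f, the extropy is J(f) = -(1/2)∫f(x)² dx = -(1/2)E[f(X)], so a simple frequentist plug-in estimate averages a leave-one-out kernel density estimate over the sample. This hedged sketch is only a baseline for intuition; the article's Dirichlet-process Bayesian estimator is not reproduced.

```python
import numpy as np

# Hedged sketch: plug-in extropy estimator via leave-one-out Gaussian KDE.
def extropy_plugin(x, h=None):
    n = len(x)
    h = h or 1.06 * np.std(x) * n ** (-0.2)        # Silverman's rule of thumb
    diffs = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(k, 0.0)                       # leave-one-out
    f_hat = k.sum(axis=1) / ((n - 1) * h)          # density estimate at each x_i
    return -0.5 * f_hat.mean()                     # -(1/2) * E[f(X)] estimate

rng = np.random.default_rng(4)
x = rng.normal(size=2000)
# for reference, the true extropy of N(0,1) is -1/(4*sqrt(pi)) ≈ -0.141
print("estimated extropy:", round(extropy_plugin(x), 3))
```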
