Similar Documents
20 similar documents found (search time: 797 ms)
1.
Normalization of the bispectrum has been treated differently in the engineering signal processing literature from what is standard in the statistical time series literature. In the signal processing literature, normalization has been treated as a matter of definition, and therefore a matter of choice and convenience. In particular, a number of investigators favor the Kim and Powers (Phys. Fluids 21 (8) (1978) 1452) normalization or their "bicoherence" in Kim and Powers (IEEE Trans. Plasma Sci. PS-7 (2) (1979) 120), because they believe it produces a result guaranteed to be bounded by zero and one, and hence provides a result that is easily interpretable as the fraction of signal energy due to quadratic coupling. In this contribution, we show that wrong decisions can be obtained by relying on the (1979) normalization, which is always bounded by one. This "bicoherence" depends on the resolution bandwidth of the sample bispectrum. The choice of normalization is not solely a matter of definition, and this choice has empirical consequences. The term "bicoherence spectrum" is misleading, since it is really a skewness spectrum. A statistical normalization is presented that provides a measure of quadratic coupling for stationary random nonlinear processes with finite dependence.
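The Kim and Powers (1979) style normalization discussed above can be sketched as a segment-averaged estimator; by the Cauchy-Schwarz inequality this ratio is always bounded by [0, 1]. The function name, FFT length, and windowing choices below are illustrative, not from the paper:

```python
import numpy as np

def bicoherence(x, nfft=256, overlap=128):
    """Squared bicoherence of a 1-D signal via segment averaging.

    Kim & Powers (1979) style normalization:
    |sum X(f1)X(f2)X*(f1+f2)|^2 / (sum |X(f1)X(f2)|^2 * sum |X(f1+f2)|^2),
    which is bounded by [0, 1] by the Cauchy-Schwarz inequality.
    """
    step = nfft - overlap
    segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, step)]
    win = np.hanning(nfft)
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)  # bispectrum accumulator
    P12 = np.zeros((nfft // 2, nfft // 2))               # sum of |X(f1)X(f2)|^2
    P3 = np.zeros((nfft // 2, nfft // 2))                # sum of |X(f1+f2)|^2
    f = np.arange(nfft // 2)
    idx = (f[:, None] + f[None, :]) % nfft               # index of the sum frequency f1+f2
    for s in segs:
        X = np.fft.fft(win * (s - s.mean()))
        Xh = X[:nfft // 2]
        prod = Xh[:, None] * Xh[None, :]
        B += prod * np.conj(X[idx])
        P12 += np.abs(prod) ** 2
        P3 += np.abs(X[idx]) ** 2
    return np.abs(B) ** 2 / (P12 * P3 + 1e-12)
```

Note that the bound of one holds for any data, which is the paper's point: boundedness alone does not make the quantity interpretable as a fraction of quadratically coupled energy.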

2.
An effective and efficient search algorithm has been developed to select from an I(1) system zero-non-zero patterned cointegrating and loading vectors in a subset VECM, Bq(1)y(t-1) + Bq-1(L)Δy(t) = ε(t), where the long-term impact matrix Bq(1) contains zero entries. The algorithm can be applied to higher-order integrated systems. The Finnish money-output model presented by Johansen and Juselius (1990) and the United States balanced growth model presented by King, Plosser, Stock and Watson (1991) are used to demonstrate the usefulness of this algorithm in examining the cointegrating relationships in vector time series.

3.
Interval estimation of the νth effective dose (EDν), where ν is a prespecified percentage, has been the focus of interest of a number of recent studies, the majority of which have considered the case in which a logistic dose-response curve is correctly assumed. In this paper, we focus our attention upon the 90% effective dose (ED90) and consider the situation in which the assumption of a logistic dose-response curve is incorrect. Specifically, we consider three classes of true model: the probit, the cubic logistic and the asymmetric Aranda-Ordaz models. We investigate the robustness of four large-sample parametric methods of interval construction and four methods based upon bootstrap resampling.

4.
A new model combining parametric and semi-parametric approaches and following the lines of a semi-Markov model is developed for multi-stage processes. A bivariate sojourn time distribution derived from the bivariate exponential distribution of Marshall & Olkin (1967) is adopted. The results compare favourably with the usual semi-parametric approaches that have been in use. Our approach also has several advantages over the models in use, including its amenability to statistical inference. For example, tests for symmetry and for independence of the marginals of the sojourn time distributions, which were not available earlier, can now be conveniently derived in elegant forms. A unified goodness-of-fit test procedure for the proposed model is also presented. An application to human resource planning involving real-life data from the University of Nigeria is given.

5.
Due to its growing importance in maintenance scheduling, the issue of residual life (RL) estimation for highly reliable products based on degradation data has been studied quite extensively. However, most of the existing work deals only with one-dimensional degradation data, which may not be realistic in some cases. Here, an adaptive method of RL estimation is developed based on two-dimensional degradation data. It is assumed that a product has two performance characteristics (PCs) and that the degradation of each PC over time is governed by a non-stationary gamma degradation process. From a practical consideration, it is further assumed that these two PCs are dependent and that their dependency can be characterized by a copula function. As the likelihood function in such a situation is complicated and computationally quite intensive, a two-stage method is used to estimate the unknown parameters of the model. Once new degradation information of the product being monitored becomes available, the random effects are first updated using the Bayesian method. Following that, the RL at the current time is estimated accordingly. As the degradation data accumulate, the RL can be re-estimated in an adaptive manner. Finally, a numerical example involving fatigue cracks is presented to illustrate the proposed model and the developed inferential method.

6.
Self-affine time series: measures of weak and strong persistence
In this paper, we examine self-affine time series and their persistence. Time series are defined to be self-affine if their power-spectral density scales as a power of their frequency. Persistence can be classified in terms of range, short or long range, and in terms of strength, weak or strong. Self-affine time series are scale-invariant, thus they always exhibit long-range persistence. Synthetic self-affine time series are generated using the Fourier power-spectral method. We generate fractional Gaussian noises (fGns), -1 < β < 1, where β is the power-spectral exponent. These are summed to give fractional Brownian motions (fBms), 1 < β < 3, where the series are self-affine fractals with fractal dimension 1 < D < 2; β = 2 is a Brownian motion. With β > 1, the time series are non-stationary and moments of the time series depend upon its length; with β < 1 the time series are stationary. We define self-affine time series with β > 1 to have strong persistence and with β < 1 to have weak persistence. We use a variety of techniques to quantify the strength of persistence of synthetic self-affine time series with -3 < β < 5. These techniques are effective in the following ranges: (1) semivariograms, 1 < β < 3; (2) rescaled-range (R/S) analyses, -1 < β < 1; (3) Fourier spectral techniques, all values of β; and (4) wavelet variance analyses, all values of β. Wavelet variance analyses lack many of the inherent problems that are found in Fourier power-spectral analysis.
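The Fourier power-spectral synthesis described above can be sketched in a few lines: impose amplitudes scaling as f^(-β/2) (so that the power spectrum scales as f^(-β)) on random phases, then invert the transform. The function name and the standardization step are illustrative choices, not from the paper:

```python
import numpy as np

def fourier_noise(n, beta, rng=None):
    """Generate a self-affine series whose power spectrum scales as f**(-beta).

    beta in (-1, 1) gives a stationary fractional Gaussian noise (fGn);
    cumulatively summing such a series shifts beta by +2, giving a
    fractional Brownian motion (fBm) for beta in (1, 3); beta = 2 is an
    ordinary Brownian motion.
    """
    rng = np.random.default_rng(rng)
    f = np.fft.rfftfreq(n)                  # one-sided frequencies 0 .. 0.5
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)        # |X(f)| ~ f^(-beta/2)  =>  PSD ~ f^(-beta)
    phase = rng.uniform(0, 2 * np.pi, len(f))
    x = np.fft.irfft(amp * np.exp(1j * phase), n)
    return (x - x.mean()) / x.std()         # standardized series

# A Brownian motion (beta = 2) as the running sum of white noise (beta = 0):
white = fourier_noise(1024, 0.0, rng=42)
brownian = np.cumsum(white)
```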

7.
There has been growing interest in the estimation of transition probabilities among stages (Hestbeck et al., 1991; Brownie et al., 1993; Schwarz et al., 1993) in tag-return and capture-recapture models. This has been driven by the increasing interest in meta-population models in ecology and the need for parameter estimates to use in these models. These transition probabilities are composed of survival and movement rates, which can only be estimated separately when an additional assumption is made (Brownie et al., 1993). Brownie et al. (1993) assumed that movement occurs at the end of the interval between time i and i + 1. We generalize this work to allow different movement patterns in the interval for multiple tag-recovery and capture-recapture experiments. The time of movement is a random variable with a known distribution. The model formulations can be viewed as matrix extensions of the model formulations for single open-population capture-recapture and tag-recovery experiments (Jolly, 1965; Seber, 1965; Brownie et al., 1985). We also present the results of a small simulation study for the tag-return model when movement time follows a beta distribution, and of another simulation study for the capture-recapture model when movement time follows a uniform distribution. The simulation studies use a modified program SURVIV (White, 1983). The relative standard errors (RSEs) of estimates under high and low movement rates are presented. We show that there are strong correlations between movement and survival estimates when the movement rate is high. We also show that estimators of movement rates to different areas and estimators of survival rates in different areas have substantial correlations.

8.
Two common methods of analyzing data from a two-group pretest-posttest research design are (a) a two-sample t test on the difference score between pretest and posttest and (b) a repeated-measures/split-plot analysis of variance. The repeated-measures/split-plot analysis subsumes the t test analysis, although the former requires more assumptions to be satisfied. A numerical example is given to illustrate some of the equivalences of the two methods of analysis. The investigator should choose the method of analysis based on the research objective(s).
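The equivalence noted above can be checked numerically: for two groups and two occasions, the group-by-time interaction F from the split-plot ANOVA equals the square of the two-sample t on the difference (gain) scores. A sketch with simulated, hypothetical data (group sizes and score distributions are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical pretest-posttest scores, two groups of n = 10
n = 10
data = {                      # data[g] has shape (n, 2): columns = (pre, post)
    0: np.column_stack([rng.normal(50, 5, n), rng.normal(55, 5, n)]),
    1: np.column_stack([rng.normal(50, 5, n), rng.normal(52, 5, n)]),
}

# (a) two-sample t test on the pre-to-post difference scores
gains = [d[:, 1] - d[:, 0] for d in data.values()]
t, _ = stats.ttest_ind(gains[0], gains[1])

# (b) split-plot ANOVA: group-by-time interaction F from sums of squares
y = np.stack([data[0], data[1]])             # shape (group, subject, time)
grand = y.mean()
g_mean = y.mean(axis=(1, 2), keepdims=True)  # group means
t_mean = y.mean(axis=(0, 1), keepdims=True)  # time means
c_mean = y.mean(axis=1, keepdims=True)       # group-by-time cell means
s_mean = y.mean(axis=2, keepdims=True)       # subject means
ss_int = n * ((c_mean - g_mean - t_mean + grand) ** 2).sum()
ss_err = ((y - c_mean - s_mean + g_mean) ** 2).sum()
F = (ss_int / 1) / (ss_err / (2 * (n - 1)))  # df = 1 and 2(n-1)

equal = np.isclose(t ** 2, F)                # the two tests coincide: F = t^2
```

Both analyses test the same hypothesis here; the split-plot framing simply adds assumptions (e.g., compound symmetry) that the gain-score t test does not need.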

9.
Models for monotone trends in hazard rates for grouped survival data in stratified populations are introduced, and simple closed-form score statistics for testing the significance of these trends are presented. The test statistics for some of the models under study are shown to be independent of the assumed form of the function which relates the hazard rates to the sets of monotone scores assigned to the time intervals. The procedure is applied to test monotone trends in the recovery rates of erythematous response among skin cancer patients and controls that have been irradiated with an ultraviolet challenge.

10.
A robust Bayesian design is presented for a single-arm phase II trial with an early stopping rule to monitor a time-to-event endpoint. The assumed model is a piecewise exponential distribution with non-informative gamma priors on the hazard parameters in subintervals of a fixed follow-up interval. As an additional comparator, we also define and evaluate a version of the design based on an assumed Weibull distribution. Except for the assumed models, the piecewise exponential and Weibull model based designs are identical to an established design that assumes an exponential event time distribution with an inverse gamma prior on the mean event time. The three designs are compared by simulation under several log-logistic and Weibull distributions having different shape parameters, and for different monitoring schedules. The simulations show that, compared to the exponential inverse gamma model based design, the piecewise exponential design has substantially better performance, with much higher probabilities of correctly stopping the trial early, and shorter and less variable trial duration, when the assumed median event time is unacceptably low. Compared to the Weibull model based design, the piecewise exponential design does a much better job of maintaining small incorrect stopping probabilities in cases where the true median survival time is desirably large.
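For the exponential comparator mentioned above, the inverse-gamma prior on the mean event time is conjugate, so the posterior stopping probability has a closed form: with prior inverse-gamma(a, b), d observed events, and total follow-up T, the posterior is inverse-gamma(a + d, b + T). A sketch of one monitoring look (all numeric values below are illustrative, not from the paper):

```python
from scipy import stats

# Illustrative prior and interim data (not from the paper)
a, b = 2.0, 12.0          # inverse-gamma(a, b) prior on the mean event time mu
d, T = 8, 40.0            # 8 events over 40 patient-months of total follow-up
mu0, p_stop = 6.0, 0.90   # unacceptably low mean threshold, stopping cutoff

# Conjugate update: posterior for mu is inverse-gamma(a + d, b + T)
post = stats.invgamma(a + d, scale=b + T)
pr_low = post.cdf(mu0)    # Pr(mu < mu0 | data)
stop = pr_low > p_stop    # stop the trial early if this probability is large
```

The piecewise exponential design in the abstract applies this same gamma-family conjugacy hazard-by-hazard over subintervals, which is what lets it adapt when the exponential assumption fails.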

11.
Random effects regression mixture models are a way to classify longitudinal data (or trajectories) having possibly varying lengths. The mixture structure of the traditional random effects regression mixture model arises through the distribution of the random regression coefficients, which is assumed to be a mixture of multivariate normals. An extension of this standard model is presented that accounts for various levels of heterogeneity among the trajectories, depending on their assumed error structure. A standard likelihood ratio test is presented for testing this error structure assumption. Full details of an expectation-conditional maximization algorithm for maximum likelihood estimation are also presented. This model is used to analyze data from an infant habituation experiment, where it is desirable to assess whether infants comprise different populations in terms of their habituation time.

12.
In this paper, we consider the prediction of a future observation based on a Type-I hybrid censored sample when the lifetime distribution of the experimental units is assumed to be a Weibull random variable. Different classical and Bayesian point predictors are obtained. Bayesian predictors are obtained using squared error and linear-exponential loss functions. We also provide a simulation-consistent method for computing Bayesian prediction intervals. Monte Carlo simulations are performed to compare the performances of the different methods, and a data analysis is presented for illustrative purposes.

13.
A class of multivariate mixed survival models for continuous and discrete time with a complex covariance structure is introduced in the context of quantitative genetic applications. The methods introduced can be used in many applications in quantitative genetics, although the discussion presented concentrates on longevity studies. The framework presented allows one to combine models based on continuous time with models based on discrete time in a joint analysis. The continuous time models are approximations of the frailty model in which the baseline hazard function is assumed to be piecewise constant. The discrete time models used are multivariate variants of the discrete relative risk models. These models allow for regular parametric likelihood-based inference by exploiting a coincidence of their likelihood functions and the likelihood functions of suitably defined multivariate generalized linear mixed models. The models include a dispersion parameter, which is essential for obtaining a decomposition of the variance of the trait of interest as a sum of parcels representing the additive genetic effects, environmental effects and unspecified sources of variability, as required in quantitative genetic applications. The methods presented are implemented in such a way that large and complex quantitative genetic data can be analyzed. Some key model control techniques are discussed in supplementary online material.

14.
We consider a semiparametric and a parametric transformation-to-normality model for bivariate data. After an unstructured or structured monotone transformation of the measurement scales, the measurements are assumed to have a bivariate normal distribution with correlation coefficient ρ, here termed the 'transformation correlation coefficient'. Under the semiparametric model with unstructured transformation, the principle of invariance leads to basing inference on the marginal ranks. The resulting rank-based likelihood function of ρ is maximized via a Monte Carlo procedure. Under the parametric model, we consider Box-Cox type transformations and maximize the likelihood of ρ along with the nuisance parameters. Efficiencies of competing methods are reported, both theoretically and by simulations. The methods are illustrated on a real-data example.
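The parametric (Box-Cox) version of the model above admits a simplified two-step illustration: transform each margin by profile-likelihood Box-Cox, then estimate ρ on the transformed scale. The paper maximizes the joint likelihood over both transformation parameters and ρ simultaneously; this marginal-then-correlate shortcut is only a sketch, and all data below are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated skewed bivariate sample: exponentiate correlated normals,
# so the true transformation is the log (Box-Cox with lambda = 0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
x, y = np.exp(z[:, 0]), np.exp(z[:, 1])

# Two-step approximation: Box-Cox each margin by profile likelihood,
# then estimate the transformation correlation on the transformed scale
xt, lam_x = stats.boxcox(x)
yt, lam_y = stats.boxcox(y)
rho_hat = np.corrcoef(xt, yt)[0, 1]   # close to the true rho = 0.6
```

With lognormal margins, both fitted lambdas land near zero and rho_hat recovers the latent normal-scale correlation; the semiparametric rank-based approach in the abstract avoids choosing the transformation family at all.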

15.
Capture–recapture processes are biased samplings of recurrent event processes, which can be modelled by the Andersen–Gill intensity model. The intensity function is assumed to be a function of time, covariates and a parameter. We derive the maximum likelihood estimators of both the parameter and the population size and show the consistency and asymptotic normality of the estimators for both recapture and removal studies. The estimators are asymptotically efficient and their theoretical asymptotic relative efficiencies with respect to the existing estimators of Yip and co-workers can be as large as ∞. The variance estimation and a numerical example are also presented.

16.
The availability of systems undergoing periodic inspections is studied in this paper. A perfect repair or replacement of a failed system is carried out, requiring either a constant or a random length of time. In Model A, the system is assumed to be as good as new on completion of inspection or repair. In Model B, no maintenance or corrective actions are taken at the time of inspection if the system is still working, and the condition of the system is assumed to be the same as that before the inspection. Unlike the model studied in a related paper by Sarkar and Sarkar (J. Statist. Plann. Inference 91 (2000) 77), our model assumes that the periodic inspections take place at fixed time points after repair or replacement in case of failure. Some general results on the instantaneous availability and the steady-state availability for the two models are presented under the assumption of random repair or replacement time.

17.
A step-stress accelerated life testing model is presented to obtain the optimal hold time at which the stress level is changed. The experimental test is designed to minimize the asymptotic variance of the reliability estimate at time ζ. A Weibull distribution is assumed for the failure time at any constant stress level. The scale parameter of the Weibull failure time distribution at constant stress levels is assumed to be a log-linear function of the stress level. The maximum likelihood function is given for the step-stress accelerated life testing model with Type I censoring, from which the asymptotic variance and the Fisher information matrix are obtained. An optimal test plan with the minimum asymptotic variance of the reliability estimate at time ζ is determined.

18.
Pretest-posttest designs serve as building blocks for other more complicated repeated-measures designs. In settings where subjects are independent and errors follow a bivariate normal distribution, data analysis may consist of a univariate repeated-measures analysis or an analysis of covariance. Another possible analysis approach is to use seemingly unrelated regression (SUR). The purpose of this article is to help guide the statistician toward an appropriate analysis choice. Assumptions, estimates, and test statistics for each analysis are approached in a systematic manner. On the basis of these results, the crucial analysis choice is whether differences in pretest group means are conceived to be real or the result of pure measurement error. Direct consultation of the statistician with a subject-matter expert is important in making the right choice. If pretest group differences are real, then a univariate repeated-measures analysis is recommended. If pretest group differences are the result of pure measurement error, then a conditional analysis or SUR analysis should be used. The conditional analysis and the SUR analysis will produce similar results. Smaller variance estimates can be expected from the SUR analysis, but this gain is partially offset by the lack of an exact distribution for the test statistics.

19.
A general methodology for bootstrapping in non-parametric frontier models
The Data Envelopment Analysis method has been extensively used in the literature to provide measures of firms' technical efficiency. These measures allow rankings of firms by their apparent performance. The underlying frontier model is non-parametric since no particular functional form is assumed for the frontier model. Since the observations result from some data-generating process, the statistical properties of the estimated efficiency measures are essential for their interpretations. In the general multi-output multi-input framework, the bootstrap seems to offer the only means of inferring these properties (i.e. to estimate the bias and variance, and to construct confidence intervals). This paper proposes a general methodology for bootstrapping in frontier models, extending the more restrictive method proposed in Simar & Wilson (1998) by allowing for heterogeneity in the structure of efficiency. A numerical illustration with real data is provided to illustrate the methodology.

20.
A survival model is presented in which all patients go through a first phase of disease; some then die and the remainder progress to a second phase of disease. The data observed are the path taken and the total sojourn time in the system, but not the time, if ever, at which the second phase is entered. The sojourn time in each phase is assumed to be exponentially distributed with possibly different rates for the two phases. The model describes serious diseases that progress through one or two phases, and can be extended to multiple phases. The model is extended to account for several length-biased sampling situations. Censoring is considered in all models. Maximum likelihood estimates for the parameters involved exist, are consistent and are asymptotically normal. One of the proposed models is applied to data from the Veterans Administration involving a study of coronary arterial occlusive disease.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号