Similar Articles
20 similar articles found (search time: 31 ms).
1.
Estimation of the lifetime distribution of industrial components and systems yields very important information for manufacturers and consumers. However, obtaining reliability data is time consuming and costly. In this context, degradation tests are a useful alternative approach to lifetime and accelerated life tests in reliability studies. The approximate method is one of the most widely used techniques for degradation data analysis; it is simple to understand and easy to implement numerically in any statistical software package. This paper uses time series techniques to propose a modified approximate method (MAM). The MAM improves on the standard method in two respects: (1) it uses previous observations in the degradation path, treated as a Markov process, for future prediction, and (2) it does not require a parametric form to be specified for the degradation path. Characteristics of interest such as the mean or median time to failure and percentiles, among others, are obtained using the modified method. A simulation study is performed to show the improved properties of the modified method over the standard one. Both methods are also used to estimate the failure time distribution of the fatigue-crack-growth data set.
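A minimal sketch of the idea behind such degradation analyses follows, under assumptions of my own: equally spaced inspection times, gamma-distributed simulated increments, an arbitrary failure threshold, and an AR(1) model on path increments standing in for the paper's Markov-type predictor. Each unit's path is extrapolated to the threshold to give a pseudo-failure time, and the failure-time distribution is estimated from those pseudo-failure times. This is not the authors' exact MAM algorithm.

```python
# Illustrative sketch (not the authors' exact algorithm): pseudo-failure
# times from degradation paths, extrapolated with an AR(1) model on the
# increments as a crude Markov-type predictor.
import numpy as np

rng = np.random.default_rng(0)

def pseudo_failure_time(times, path, threshold):
    """Extrapolate one monotone degradation path past `threshold`."""
    inc = np.diff(path)
    # AR(1) fit on increments: inc[t] = c + phi * inc[t-1]
    phi, c = np.polyfit(inc[:-1], inc[1:], 1)
    level, last_inc, t = path[-1], inc[-1], times[-1]
    dt = times[1] - times[0]            # assumes equal spacing
    while level < threshold and t < 1e4:
        last_inc = c + phi * last_inc
        level += max(last_inc, 1e-3)    # keep the path increasing
        t += dt
    return t

# Simulated monotone degradation paths for 20 units
times = np.arange(0.0, 10.0, 0.5)
paths = np.cumsum(rng.gamma(2.0, 0.3, size=(20, times.size)), axis=1)

pseudo_times = [pseudo_failure_time(times, p, threshold=15.0) for p in paths]
print("median pseudo-failure time:", np.median(pseudo_times))
```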

2.
This paper is concerned with the analysis of observations made on a system that is being stimulated at fixed time intervals but where the precise nature and effect of any individual stimulus is unknown. The realized values are modelled as a stochastic process consisting of a random signal embedded in noise. The aim of the analysis is to use the data to unravel the unknown structure of the system and to ascertain the probabilistic behaviour of the stimuli. A method of parameter estimation based on quasi-profile likelihood is presented and the statistical properties of the estimates are established while recognizing that there will be a discrepancy between the model and the true data-generating mechanism. A method of model validation and determination is also advanced and kernel smoothing techniques are proposed as a basis for identifying the amplitude distribution of the stimuli. The data processing techniques described have a direct application to the investigation of excitatory post-synaptic currents recorded from nerve cells in the central nervous system and their use in quantal analysis of such data is illustrated.

3.
This paper describes inference methods for functional data under the assumption that the functional data of interest are smooth latent functions, characterized by a Gaussian process, which have been observed with noise over a finite set of time points. The methods we propose are completely specified in a Bayesian environment that allows all inferences to be performed through a simple Gibbs sampler. Our main focus is on estimating and describing uncertainty in the covariance function. However, these models also encompass functional data estimation, functional regression where the predictors are latent functions, and an automatic approach to smoothing parameter selection. Furthermore, these models require minimal assumptions on the data structure, as the time points for observations do not need to be equally spaced, the number and placement of observations are allowed to vary among functions, and special treatment is not required when the number of functional observations is less than the dimensionality of those observations. We illustrate the effectiveness of these models in estimating latent functional data, capturing variation in the functional covariance estimate, and selecting appropriate smoothing parameters in both a simulation study and a regression analysis of medfly fertility data.
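For intuition, here is a compact sketch of the building block such models rest on: the posterior of a smooth latent function under a Gaussian-process prior, observed with noise at irregular time points. The squared-exponential kernel, its hyperparameters, and the simulated data are illustrative assumptions of mine; the paper itself embeds this within a full Gibbs sampler with a prior on the covariance function.

```python
# GP regression sketch: posterior of a latent function observed with
# noise at irregularly spaced time points (kernel choice is illustrative).
import numpy as np

def sq_exp(s, t, scale=1.0, length=0.5):
    return scale * np.exp(-0.5 * (s[:, None] - t[None, :]) ** 2 / length**2)

rng = np.random.default_rng(9)
t_obs = np.sort(rng.uniform(0, 1, 15))        # irregular time points are fine
y = np.sin(2 * np.pi * t_obs) + rng.normal(0, 0.2, t_obs.size)
t_new = np.linspace(0, 1, 100)

K = sq_exp(t_obs, t_obs) + 0.04 * np.eye(t_obs.size)   # noise variance 0.2^2
K_star = sq_exp(t_new, t_obs)
post_mean = K_star @ np.linalg.solve(K, y)
# pointwise uncertainty is available from the diagonal of the posterior cov
post_cov = sq_exp(t_new, t_new) - K_star @ np.linalg.solve(K, K_star.T)
print(post_mean[:5], np.sqrt(np.diag(post_cov))[:5])
```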

4.
The analysis of time-indexed categorical data is important in many fields, e.g., in telecommunication network monitoring, manufacturing process control, ecology, etc. Primary interest is in detecting and measuring serial associations and dependencies in such data. For cardinal time series analysis, autocorrelation is a convenient and informative measure of serial association. Yet, for categorical time series analysis an analogous convenient measure and corresponding concepts of weak stationarity have not been provided. For two categorical variables, several ways of measuring association have been suggested. This paper reviews such measures and investigates their properties in a serial context. We discuss concepts of weak stationarity of a categorical time series, in particular of stationarity in association measures. Serial association and weak stationarity are studied in the class of discrete ARMA processes introduced by Jacobs and Lewis (J. Time Ser. Anal. 4(1):19–36, 1983). An intrinsic feature of a time series is that, typically, adjacent observations are dependent. The nature of this dependence among observations of a time series is of considerable practical interest. Time series analysis is concerned with techniques for the analysis of this dependence. (Box et al. 1994, p. 1)
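One concrete instance of such a serial association measure is Cramér's V computed between a categorical series and its lagged copy, by analogy with the autocorrelation function. The sketch below is my own construction using SciPy's chi-squared statistic and should be read as an illustration of the kind of measure such a review considers, not as the paper's definitive choice.

```python
# Serial association for a categorical time series: Cramér's V between
# X_t and X_{t-k}, evaluated at several lags.
import numpy as np
from scipy.stats import chi2_contingency

def serial_cramers_v(x, lag):
    """Cramér's V between the series and its lag-`lag` copy."""
    a, b = x[lag:], x[:-lag]
    cats = sorted(set(x))
    table = np.array([[np.sum((a == i) & (b == j)) for j in cats] for i in cats])
    chi2 = chi2_contingency(table, correction=False)[0]
    n, k = table.sum(), min(table.shape)
    return np.sqrt(chi2 / (n * (k - 1)))

rng = np.random.default_rng(1)
x = rng.choice(np.array(["a", "b", "c"]), size=500)   # i.i.d. -> V near 0
print([round(serial_cramers_v(x, k), 3) for k in (1, 2, 3)])
```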

5.
For the first time, a new class of generalized Weibull linear models is introduced to be competitive with the well-known generalized (gamma and inverse Gaussian) linear models, which are adequate for the analysis of positive continuous data. The proposed models have a constant coefficient of variation for all observations, similar to the gamma models, and may be suitable for a wide range of practical applications in various fields such as biology, medicine, engineering, and economics, among others. We derive a joint iterative algorithm for estimating the mean and dispersion parameters. We obtain closed form expressions in matrix notation for the second-order biases of the maximum likelihood estimates of the model parameters and define bias corrected estimates. The corrected estimates are easily obtained as vectors of regression coefficients in suitable weighted linear regressions. The practical use of the new class of models is illustrated in one application to a lung cancer data set.

6.
Structural equation models (SEM) have been extensively used in behavioral, social, and psychological research to model relations between latent variables and observations. Most software packages for fitting SEM rely on frequentist methods. Traditional models and software are not appropriate for the analysis of dependent observations such as time-series data. In this study, a structural equation model with a time series feature is introduced. A Bayesian approach is used to solve the model with the aid of the Markov chain Monte Carlo method. Bayesian inference as well as prediction with the proposed time series structural equation model can also reveal certain unobserved relationships among the observations. The approach is successfully employed using real Asian, American and European stock return data.

7.
Time series sometimes consist of count data in which the number of events occurring in a given time interval is recorded. Such data are necessarily nonnegative integers, and an assumption of a Poisson or negative binomial distribution is often appropriate. This article sets up a model in which the level of the process generating the observations changes over time. A recursion analogous to the Kalman filter is used to construct the likelihood function and to make predictions of future observations. Qualitative variables, based on a binomial or multinomial distribution, may be handled in a similar way. The model for count data may be extended to include explanatory variables. This enables nonstochastic slope and seasonal components to be included in the model, as well as permitting intervention analysis. The techniques are illustrated with a number of applications, and an attempt is made to develop a model-selection strategy along the lines of that used for Gaussian structural time series models. The applications include an analysis of the results of international football matches played between England and Scotland and an assessment of the effect of the British seat-belt law on the drivers of light-goods vehicles.
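One standard way to realize a Kalman-like recursion for Poisson counts is conjugate gamma-Poisson updating with a discount factor; whether this matches the article's exact construction is an assumption on my part, and the discount value, priors, and simulated level shift below are illustrative. The sketch shows only the filtering flavour of the approach, where the one-step-ahead predictive mean is a/b, not the full likelihood construction or the extensions to explanatory variables.

```python
# Hedged sketch of a gamma-Poisson filtering recursion for count data
# with a time-varying level handled by discounting (omega in (0, 1]).
import numpy as np

def poisson_level_filter(y, omega=0.8, a0=1.0, b0=1.0):
    a, b = a0, b0
    preds = []
    for yt in y:
        a, b = omega * a, omega * b      # discount: the level may have moved
        preds.append(a / b)              # one-step-ahead predictive mean
        a, b = a + yt, b + 1.0           # conjugate gamma-Poisson update
    return np.array(preds)

rng = np.random.default_rng(2)
y = rng.poisson(lam=np.r_[np.full(30, 2.0), np.full(30, 6.0)])  # level shift
print(poisson_level_filter(y)[-5:])      # predictions adapt to the new level
```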

8.
It is the purpose of this paper to introduce and describe graphical tools and techniques for (technically oriented) lifetime data analysis. The aim of this analysis is the determination of a representative statistical parametric or non-parametric model reflecting ageing behaviour over time. The paper intends to demonstrate that a subjective data analysis by graphs can be more sensitive and informative than pure parameter estimation in a predefined model or the application of test statistics. In this sense, some graphical tools are discussed that provide a less formal goodness-of-fit approach, analysing the adherence to or deviation from a hypothesized distribution shape in a graph. Various graphical tools are described and compared concerning their specific advantages and restrictions for lifetime data analysis. Finally, some examples demonstrate a "combined graphical analysis approach".
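A classic example of such a graphical goodness-of-fit tool is the probability plot: if the hypothesized distribution shape is right, the transformed points fall near a straight line. The sketch below draws a Weibull plot using Benard's median-rank approximation; this is the textbook-standard construction and is not taken verbatim from the paper.

```python
# Weibull probability plot: adherence to a straight line supports the
# hypothesized Weibull shape.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
t = np.sort(rng.weibull(1.5, size=60) * 100.0)               # lifetimes
f_hat = (np.arange(1, t.size + 1) - 0.3) / (t.size + 0.4)    # median ranks

plt.plot(np.log(t), np.log(-np.log(1.0 - f_hat)), "o")
plt.xlabel("log t")
plt.ylabel("log(-log(1 - F))")
plt.title("Weibull probability plot")
plt.show()
```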

9.
A versatile procedure is described comprising an application of statistical techniques to the analysis of the large, multi‐dimensional data arrays produced by electroencephalographic (EEG) measurements of human brain function. Previous analytical methods have been unable to identify objectively the precise times at which statistically significant experimental effects occur, owing to the large number of variables (electrodes) and small number of subjects, or have been restricted to two‐treatment experimental designs. Many time‐points are sampled in each experimental trial, making adjustment for multiple comparisons mandatory. Given the typically large number of comparisons and the clear dependence structure among time‐points, simple Bonferroni‐type adjustments are far too conservative. A three‐step approach is proposed: (i) summing univariate statistics across variables; (ii) using permutation tests for treatment effects at each time‐point; and (iii) adjusting for multiple comparisons using permutation distributions to control family‐wise error across the whole set of time‐points. Our approach provides an exact test of the individual hypotheses while asymptotically controlling family‐wise error in the strong sense, and can provide tests of interaction and main effects in factorial designs. An application to two experimental data sets from EEG studies is described, but the approach has application to the analysis of spatio‐temporal multivariate data gathered in many other contexts.
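The three-step approach is straightforward to sketch: sum a univariate statistic across electrodes, permute treatment labels, and control family-wise error across time-points with the permutation distribution of the maximum statistic. The data, statistic, and dimensions below are illustrative assumptions, not the paper's EEG data sets.

```python
# Max-statistic permutation test across time-points (two-group design).
import numpy as np

rng = np.random.default_rng(3)
n_sub, n_elec, n_time = 12, 8, 50
labels = np.repeat([0, 1], n_sub // 2)
data = rng.normal(size=(n_sub, n_elec, n_time))
data[labels == 1, :, 20:30] += 1.0                 # effect in a time window

def stat_per_timepoint(d, lab):
    # step (i): sum |mean difference| across electrodes at each time-point
    diff = d[lab == 1].mean(axis=0) - d[lab == 0].mean(axis=0)
    return np.abs(diff).sum(axis=0)                # shape: (n_time,)

obs = stat_per_timepoint(data, labels)
# steps (ii)-(iii): permutation null of the maximum over time-points
max_null = np.array([
    stat_per_timepoint(data, rng.permutation(labels)).max()
    for _ in range(999)
])
p_adj = (1 + (max_null[:, None] >= obs[None, :]).sum(axis=0)) / (999 + 1)
print("significant time-points:", np.where(p_adj < 0.05)[0])
```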

10.
Control charts contribute to the monitoring and improvement of process quality by helping to separate out special cause variation from common cause variation. By common cause variation we mean the usual variation in an in-control process. Special causes can be thought of as disturbances, possibly transitory, impacting a process that is in a state of statistical control. However, there is no clear place in this scheme of special causes and common causes for systematic non-iid variation, such as trend, seasonal, and autoregressive variation, and intervention effects from efforts to improve the process. When systematic non-iid variation is present, time series modeling and fitting can fill in this picture. In the time series framework, observations influenced by special causes can be treated as outliers from the currently-entertained time-series model and can be detected by outlier detection methods. We discuss three data sets that illustrate how this can be done in order to make control charts more effective. We show also how a standard control-chart supplement called "pattern analysis" can be useful in time-series work.
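The basic move can be illustrated in a few lines: fit a time-series model to absorb the systematic non-iid variation, then apply conventional control limits to the residuals so that special causes show up as outliers. The AR(1) model, the statsmodels dependency, and the injected disturbance below are illustrative assumptions, not the paper's three data sets.

```python
# Sketch: AR model absorbs common-cause serial structure; residuals are
# screened with 3-sigma limits to flag special causes.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(4)
x = np.zeros(200)
for t in range(1, 200):                  # AR(1) common-cause structure
    x[t] = 0.7 * x[t - 1] + rng.normal()
x[120] += 6.0                            # an injected special cause

res = AutoReg(x, lags=1).fit()
resid = res.resid
limit = 3 * resid.std()
print("flagged points:", np.where(np.abs(resid) > limit)[0] + 1)  # +1: lag offset
```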

11.
Functional data analysis involves the extension of familiar statistical procedures such as principal components analysis, linear modelling, and canonical correlation analysis to data where the raw observation x_i is a function. An essential preliminary to a functional data analysis is often the registration or alignment of salient curve features by suitable monotone transformations h_i of the argument t, so that the actual analyses are carried out on the values x_i{h_i(t)}. This is referred to as dynamic time warping in the engineering literature. In effect, this conceptualizes variation among functions as being composed of two aspects: horizontal and vertical, or domain and range. A nonparametric function estimation technique is described for identifying the smooth monotone transformations h_i, and is illustrated by data analyses. A second-order linear stochastic differential equation is proposed to model these components of variation.

12.
The authors consider regression analysis for binary data collected repeatedly over time on members of numerous small clusters of individuals sharing a common random effect that induces dependence among them. They propose a mixed model that can accommodate both these structural and longitudinal dependencies. They estimate the parameters of the model consistently and efficiently using generalized estimating equations. They show through simulations that their approach yields significant gains in mean squared error when estimating the random effects variance and the longitudinal correlations, while providing estimates of the fixed effects that are just as precise as under a generalized penalized quasi‐likelihood approach. Their method is illustrated using smoking prevention data.
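For orientation, a plain GEE fit for clustered binary data looks as follows. The use of statsmodels with an exchangeable working correlation is an assumption of mine for illustration; the authors' estimating equations additionally target the random-effects variance and the longitudinal correlations directly, which this sketch does not do.

```python
# Illustrative GEE fit for clustered binary responses with a shared
# cluster-level random effect in the simulated data.
import numpy as np
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Exchangeable

rng = np.random.default_rng(8)
n_clusters, n_times = 50, 4
cluster = np.repeat(np.arange(n_clusters), n_times)
u = np.repeat(rng.normal(0.0, 0.8, n_clusters), n_times)  # shared random effect
x = rng.normal(size=cluster.size)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x + u)))
y = (rng.random(cluster.size) < p).astype(float)

X = sm.add_constant(x)
model = sm.GEE(y, X, groups=cluster,
               family=sm.families.Binomial(), cov_struct=Exchangeable())
print(model.fit().summary())
```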

13.
Bayesian inference for partially observed stochastic epidemics
The analysis of infectious disease data is usually complicated by the fact that real-life epidemics are only partially observed. In particular, data concerning the process of infection are seldom available. Consequently, standard statistical techniques can become too complicated to implement effectively. In this paper Markov chain Monte Carlo methods are used to make inferences about the missing data as well as the unknown parameters of interest in a Bayesian framework. The methods are applied to real-life data from disease outbreaks.

14.
Real lifetime data are never precise numbers but are more or less non-precise, also called fuzzy. This kind of imprecision affects all measurement results for continuous variables, and therefore also time observations. Imprecision is different from errors and variability. Estimation methods for reliability characteristics therefore have to be adapted to the situation of fuzzy lifetimes in order to obtain realistic results.

15.
We introduce a duration model that allows for unobserved cumulative individual-specific shocks, which are likely to be important in explaining variations in duration outcomes, such as length of life and time spent unemployed. The model is also a useful tool in situations where researchers observe a great deal of information about individuals when first interviewed in surveys but little thereafter. We call this model the “increasingly mixed proportional hazard” (IMPH) model. We compare and contrast this model with the mixed proportional hazard (MPH) model, which continues to be the workhorse of applied single-spell duration analysis in economics and the other social sciences. We apply the IMPH model to study the relationships among socioeconomic status, health shocks, and mortality, using 19 waves of data drawn from the German Socio-Economic Panel (SOEP). The IMPH model is found to fit the data statistically better than the MPH model, and unobserved health shocks and socioeconomic status are shown to play powerful roles in predicting longevity.

16.
In life testing, n identical items are placed on test. Instead of conducting a complete life test with all n outcomes, a Type II censored life test, consisting of the first m outcomes, is usually employed. Although statistical analysis based on censored data is less efficient than analysis of the complete test, the expected length of the censored test is shorter. In this paper, we compare censored and complete life testing and suggest ways to improve time saving and efficiency. Instead of conducting a complete life test with all n outcomes, we put N>n items on test and continue until we observe the nth outcome. With the censored and complete life tests containing the same number of observations, we show that the expected length of the censored test is less than that of the complete test, and that the censored test may also be more efficient than the complete test with the same number of observations.
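For exponential lifetimes this trade-off can be checked in closed form, since the expected m-th order statistic out of `total` items on test is theta * sum_{i=1}^{m} 1/(total - i + 1). The exponential assumption and the particular values of theta, n, and N below are mine, chosen for tractability; the numbers illustrate the expected-length saving when the same number n of failures is observed from a larger batch.

```python
# Worked check: expected stopping time of a Type II censored test under
# exponential lifetimes, E[X_(m:total)] = theta * sum 1/(total - i + 1).
theta, n, N = 1.0, 10, 15

def expected_stop_time(m, total):
    return theta * sum(1.0 / (total - i + 1) for i in range(1, m + 1))

print("complete test, n of n :", round(expected_stop_time(n, n), 3))  # ~2.93
print("censored test, n of N :", round(expected_stop_time(n, N), 3))  # ~1.03
```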

17.
The decorrelating property of the discrete wavelet transformation (DWT) appears valuable because one can avoid estimating the correlation structure in the original data space by bootstrap resampling of the DWT. Several authors have shown that the wavestrap approximately retains the correlation structure of observations. However, simply retaining the same correlation structure of the original observations does not guarantee enough variation for regression parameter estimators. Our simulation studies show that these wavestraps yield undercoverage of parameters for a simple linear regression for time series data of the type that arise in functional MRI experiments. It is disappointing that the wavestrap does not provide valid resamples even for white noise sequences and fractional Brownian noise sequences. Thus, the wavestrap method is not completely valid in obtaining resamples related to linear regression analysis and should be used with caution for hypothesis testing as well. The reasons for these undercoverages are also discussed. A parametric bootstrap resampling in the wavelet domain is introduced to offer insight into these previously undiscovered defects in wavestrapping.
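A minimal wavestrap looks as follows, assuming the PyWavelets package (pywt) and a db4 wavelet of my choosing: take the DWT, resample detail coefficients with replacement within each level, and invert. This is the kind of resampling scheme whose regression coverage the paper shows can be too low, so the sketch is offered to make the mechanism concrete, not to endorse it.

```python
# Wavestrap sketch: within-level resampling of DWT detail coefficients.
import numpy as np
import pywt

rng = np.random.default_rng(5)
x = np.cumsum(rng.normal(size=256))          # a correlated series

def wavestrap(x, wavelet="db4"):
    coeffs = pywt.wavedec(x, wavelet)
    # keep the coarsest approximation; resample each detail level
    boot = [coeffs[0]] + [rng.choice(c, size=c.size, replace=True)
                          for c in coeffs[1:]]
    return pywt.waverec(boot, wavelet)[: x.size]

x_star = wavestrap(x)
# lag-1 correlation is approximately retained by the resample
print(np.corrcoef(x[:-1], x[1:])[0, 1], np.corrcoef(x_star[:-1], x_star[1:])[0, 1])
```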

18.
In this paper we develop nonparametric methods for regression analysis when the response variable is subject to censoring and/or truncation. The development is based on a data completion principle that enables us to apply, via an iterative scheme, nonparametric regression techniques to iteratively completed data from a given sample with censored and/or truncated observations. In particular, locally weighted regression smoothers and additive regression models are extended to left-truncated and right-censored data. Nonparametric regression analysis is applied to the Stanford heart transplant data, which have been analyzed by previous authors using semiparametric regression methods, and provides new insights into the relationship between expected survival time after a heart transplant and explanatory variables.

19.
The Monitoring Avian Productivity and Survivorship (MAPS) programme is a cooperative effort to provide annual regional indices of adult population size and post-fledging productivity and estimates of adult survival rates from data pooled from a network of constant-effort mist-netting stations across North America. This paper provides an overview of the field and analytical methods currently employed by MAPS, a discussion of the assumptions underlying the use of these techniques, and a discussion of the validity of some of these assumptions based on data gathered during the first 5 years (1989-1993) of the programme, during which time it grew from 17 to 227 stations. Age- and species-specific differences in dispersal characteristics are important factors affecting the usefulness of the indices of adult population size and productivity derived from MAPS data. The presence of transients, heterogeneous capture probabilities among stations, and the large sample sizes required by models to deal effectively with these two considerations are important factors affecting the accuracy and precision of survival rate estimates derived from MAPS data. Important results from the first 5 years of MAPS are: (1) indices of adult population size derived from MAPS mist-netting data correlated well with analogous indices derived from point-count data collected at MAPS stations; (2) annual changes in productivity indices generated by MAPS were similar to analogous changes documented by direct nest monitoring and were generally as expected when compared to annual changes in weather during the breeding season; and (3) a model using between-year recaptures in Cormack-Jolly-Seber (CJS) mark-recapture analyses to estimate the proportion of residents among unmarked birds was found, for most tropical-wintering species sampled, to provide a better fit with the available data and more realistic and precise estimates of annual survival rates of resident birds than did standard CJS mark-recapture analyses. A detailed review of the statistical characteristics of MAPS data and a thorough evaluation of the field and analytical methods used in the MAPS programme are currently under way.
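To make the CJS machinery concrete, here is a hedged sketch of the basic CJS likelihood with constant survival phi and capture probability p, fitted by maximum likelihood on simulated capture histories. The transient-adjusted models used by MAPS extend this basic form; all data and parameter values below are simulated assumptions.

```python
# Basic Cormack-Jolly-Seber likelihood (constant phi, p) fitted by ML.
import numpy as np
from scipy.optimize import minimize

def cjs_negloglik(params, hist):
    phi = 1 / (1 + np.exp(-params[0]))        # logit-scale parameters
    p = 1 / (1 + np.exp(-params[1]))
    T = hist.shape[1]
    chi = np.ones(T)                          # P(never seen after occasion t)
    for t in range(T - 2, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    ll = 0.0
    for h in hist:
        seen = np.flatnonzero(h)
        f, l = seen[0], seen[-1]
        for t in range(f + 1, l + 1):         # between first and last capture
            ll += np.log(phi) + np.log(p if h[t] else 1 - p)
        ll += np.log(chi[l])                  # never seen again afterwards
    return -ll

rng = np.random.default_rng(10)
phi_true, p_true, T = 0.6, 0.7, 6
hist = np.zeros((300, T), dtype=int)
hist[:, 0] = 1                                # all marked at occasion 1
alive = np.ones(300, dtype=bool)
for t in range(1, T):
    alive &= rng.random(300) < phi_true
    hist[alive, t] = rng.random(alive.sum()) < p_true

res = minimize(cjs_negloglik, [0.0, 0.0], args=(hist,))
print(1 / (1 + np.exp(-res.x)))               # estimates of (phi, p)
```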

20.
In the context of failure time data, long-run dependent observations that might be censored are commonly encountered in practice. The main objective of this paper is to make inference about the common marginal distribution of the failure times. To this end, one nonparametric estimator, namely the Nelson-Aalen estimator, is modified to incorporate the dependence among the observations. The modified estimator is a weighted moving average (WMA) version of the existing estimator used for independent data. It is shown that the new version is better in the sense of minimizing the one-step-ahead forecast errors. The new estimator can also be used as a crude measure for checking independence among observations.
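As a point of reference, the standard Nelson-Aalen estimator cumulates hazard increments d_i/n_i, and a WMA version can be obtained by smoothing those increments with one-sided weights before cumulating. The weights (0.5, 0.3, 0.2) below are placeholders of mine; the paper's estimator uses its own weighting scheme, so this sketch only illustrates the general construction.

```python
# Nelson-Aalen increments plus a one-sided weighted-moving-average
# variant of the cumulative hazard (weights are illustrative).
import numpy as np

def nelson_aalen(times, events):
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(events)[order]
    n_at_risk = np.arange(len(t), 0, -1)      # assumes no tied times
    inc = d / n_at_risk                       # hazard increments d_i / n_i
    return t, np.cumsum(inc), inc

def wma_nelson_aalen(inc, weights=(0.5, 0.3, 0.2)):
    w = np.asarray(weights)
    sm = np.convolve(inc, w, mode="full")[: inc.size]  # causal (one-sided) WMA
    return np.cumsum(sm)

rng = np.random.default_rng(6)
times = rng.exponential(1.0, size=100)
events = rng.random(100) < 0.8               # ~20% censoring
t, H, inc = nelson_aalen(times, events)
print(H[-1], wma_nelson_aalen(inc)[-1])
```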
