Similar Documents
20 similar documents found.
1.
In recent years, a large number of new discrete distributions have appeared in the literature. However, flexible discrete models that also allow for easy statistical inference are still the exception. This paper gives a detailed analysis of a family of discrete failure time distributions that meets both requirements. It examines maximum likelihood estimation of the unknown parameters and presents a goodness-of-fit test for the model. The test is used to select an appropriate model for datasets of frequencies of the duration of atmospheric circulation patterns.

2.
Engineering degradation tests allow industry to assess the potential life span of long-life products that do not fail readily under accelerated conditions in life tests. A general statistical model is presented here for performance degradation of an item of equipment. The degradation process in the model is taken to be a Wiener diffusion process with a time-scale transformation. The model incorporates Arrhenius extrapolation for high-stress testing. The lifetime of an item is defined as the time until performance deteriorates to a specified failure threshold. The model can be used to predict the lifetime of an item or the extent of degradation at a specified future time. Inference methods for the model parameters, based on accelerated degradation test data, are presented. The model and inference methods are illustrated with a case application involving self-regulating heating cables. The paper also discusses a number of practical issues encountered in applications.
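
A minimal simulation sketch of this kind of degradation model, assuming a Wiener process with drift mu and volatility sigma run on a transformed clock tau(t) = t**c and a fixed failure threshold w (all parameter values below are hypothetical, not estimates from the heating-cable application):

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, c, w = 0.5, 0.3, 0.8, 5.0    # hypothetical drift, volatility, time exponent, threshold
t_grid = np.linspace(0.0, 40.0, 401)
tau = t_grid ** c                        # transformed time scale

def simulate_path():
    # Wiener process with drift evaluated on the transformed clock tau(t)
    dtau = np.diff(tau)
    increments = mu * dtau + sigma * np.sqrt(dtau) * rng.standard_normal(dtau.size)
    return np.concatenate(([0.0], np.cumsum(increments)))

def lifetime(path):
    # first time the degradation path reaches the failure threshold w
    hit = np.nonzero(path >= w)[0]
    return t_grid[hit[0]] if hit.size else np.inf

lifetimes = np.array([lifetime(simulate_path()) for _ in range(2000)])
print("estimated median lifetime:", np.median(lifetimes[np.isfinite(lifetimes)]))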

3.
4.
The diffusion process is a widely used statistical model for many natural dynamic phenomena, but its inference is complicated because complete data describing the diffusion sample path are not necessarily available. In addition, data are often collected with substantial uncertainty, and it is not uncommon to have missing observations. Thus, the observed process is discrete over a finite time period, and the marginal likelihood given by these discrete data is not always available. In this paper, we consider a class of nonstationary diffusion process models with not only measurement error but also discretely time-varying parameters, which are modeled via a state space model. Hierarchical Bayesian inference for such a diffusion process model with time-varying parameters is applied to financial data.
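
As a simplified, linear-Gaussian stand-in for the state-space formulation described above: an Euler-discretised Ornstein-Uhlenbeck diffusion observed with measurement error, with the marginal likelihood evaluated by a scalar Kalman filter rather than the paper's hierarchical Bayesian machinery. All parameter names and values are hypothetical.

import numpy as np

def ou_state_space_loglik(y, dt, theta, mu, sigma, tau):
    # state:  x_t = x_{t-1} + theta*(mu - x_{t-1})*dt + sigma*sqrt(dt)*eps_t
    # obs:    y_t = x_t + tau*eta_t   (measurement error)
    a = 1.0 - theta * dt                  # AR(1) coefficient of the Euler scheme
    q = sigma ** 2 * dt                   # state noise variance
    r = tau ** 2                          # measurement noise variance
    m, p = mu, sigma ** 2 / (2 * theta)   # start from the stationary distribution
    loglik = 0.0
    for obs in y:
        # predict
        m = mu + a * (m - mu)
        p = a ** 2 * p + q
        # update
        s = p + r
        loglik += -0.5 * (np.log(2 * np.pi * s) + (obs - m) ** 2 / s)
        k = p / s
        m, p = m + k * (obs - m), (1 - k) * p
    return loglik

y = np.random.default_rng(0).normal(size=200)   # placeholder data
print(ou_state_space_loglik(y, dt=1.0, theta=0.3, mu=0.0, sigma=0.5, tau=0.2))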

5.
Time series modelling of childhood diseases: a dynamical systems approach
A key issue in the dynamical modelling of epidemics is the synthesis of complex mathematical models and data by means of time series analysis. We report such an approach, focusing on the particularly well-documented case of measles. We propose the use of a discrete time epidemic model comprising the infected and susceptible classes as state variables. The model uses a discrete time version of the susceptible–exposed–infected–recovered type epidemic models, which can be fitted to observed disease incidence time series. We describe a method for reconstructing the dynamics of the susceptible class, which is an unobserved state variable of the dynamical system. The model provides a remarkable fit to the data on case reports of measles in England and Wales from 1944 to 1964. Moreover, its systematic part explains the well-documented predominant biennial cyclic pattern. We study the dynamic behaviour of the time series model and show that episodes of annual cyclicity, which have not previously been explained quantitatively, arise as a response to a quicker replenishment of the susceptible class during the baby boom, around 1947.
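
A toy discrete-time susceptible/infected iteration in the spirit of the model described above (a TSIR-style sketch, not the fitted measles model; the seasonal transmission values, mixing exponent alpha, and birth series are made up):

import numpy as np

def simulate(I0, S0, births, beta, alpha=0.97, steps=520):
    I, S = [I0], [S0]
    for t in range(steps - 1):
        new_I = beta[t % len(beta)] * S[-1] * I[-1] ** alpha   # expected new cases
        S.append(S[-1] + births[t] - new_I)                    # susceptible balance: births in, cases out
        I.append(new_I)
    return np.array(I), np.array(S)

beta = 1.2e-5 * (1.0 + 0.25 * np.cos(2 * np.pi * np.arange(26) / 26))  # biweekly seasonal transmission
births = np.full(520, 1200.0)                                          # constant birth input
I, S = simulate(I0=100.0, S0=120000.0, births=births, beta=beta)
print("mean cases per biweek:", I.mean())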

6.
Multivariate failure time data arise when each study subject can potentially experience several types of failures or recurrences of a certain phenomenon, or when failure times are sampled in clusters. We formulate the marginal distributions of such multivariate data with semiparametric accelerated failure time models (i.e. linear regression models for log-transformed failure times with arbitrary error distributions) while leaving the dependence structures for related failure times completely unspecified. We develop rank-based monotone estimating functions for the regression parameters of these marginal models based on right-censored observations. The estimating equations can be easily solved via linear programming. The resultant estimators are consistent and asymptotically normal. The limiting covariance matrices can be readily estimated by a novel resampling approach which does not involve nonparametric density estimation or evaluation of numerical derivatives. The proposed estimators represent consistent roots of the potentially non-monotone estimating equations based on weighted log-rank statistics. Simulation studies show that the new inference procedures perform well in small samples. Illustrations with real medical data are provided.
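
A rough illustration of the rank-based approach using the Gehan-weighted objective, which is convex and equivalent to a linear program; here it is simply minimised numerically on simulated data (a sketch, not the paper's implementation or resampling procedure):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, beta_true = 200, np.array([1.0, -0.5])
X = rng.normal(size=(n, 2))
logT = X @ beta_true + rng.gumbel(size=n)          # log failure times
logC = rng.normal(loc=2.0, scale=1.0, size=n)      # log censoring times
Y = np.minimum(logT, logC)                         # observed log times
delta = (logT <= logC).astype(float)               # event indicators

def gehan_loss(beta):
    e = Y - X @ beta                               # residuals on the log scale
    diff = e[None, :] - e[:, None]                 # diff[i, j] = e_j - e_i
    return np.sum(delta[:, None] * np.clip(diff, 0.0, None)) / n ** 2

fit = minimize(gehan_loss, x0=np.zeros(2), method="Nelder-Mead")
print("Gehan rank estimate:", fit.x)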

7.
In the analysis of time-to-event data, competing risks occur when multiple event types are possible, and the occurrence of a competing event precludes the occurrence of the event of interest. In this situation, statistical methods that ignore competing risks can result in biased inference regarding the event of interest. We review the mechanisms that lead to bias and describe several statistical methods that have been proposed to avoid bias by formally accounting for competing risks in the analyses of the event of interest. Through simulation, we illustrate that Gray's test should be used in lieu of the logrank test for nonparametric hypothesis testing. We also compare the two most popular models for semiparametric modelling: the cause-specific hazards (CSH) model and the Fine-Gray (F-G) model. We explain how to interpret estimates obtained from each model and identify conditions under which the estimates of the hazard ratio and subhazard ratio differ numerically. Finally, we evaluate several model diagnostic methods with respect to their sensitivity to detect lack of fit when the CSH model holds but the F-G model is misspecified, and vice versa. Our results illustrate that adequacy of model fit can strongly impact the validity of statistical inference. We recommend that analysts incorporate a model diagnostic procedure and a contingency plan to explore other appropriate models when designing trials in which competing risks are anticipated.
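
A small simulated illustration of why competing risks need their own estimand: the nonparametric cumulative incidence function (CIF) for the event of interest versus the naive 1 - Kaplan-Meier estimate that treats competing events as censoring (a sketch of the general idea, not Gray's test or the Fine-Gray model):

import numpy as np

rng = np.random.default_rng(3)
n = 5000
t1 = rng.exponential(scale=5.0, size=n)          # latent time, event of interest (cause 1)
t2 = rng.exponential(scale=4.0, size=n)          # latent time, competing event (cause 2)
c = rng.uniform(0.0, 10.0, size=n)               # independent censoring
time = np.minimum.reduce([t1, t2, c])
cause = np.where(c <= np.minimum(t1, t2), 0,
                 np.where(t1 <= t2, 1, 2))       # 0 = censored

cause_sorted = cause[np.argsort(time)]
surv_all, surv_naive, cif = 1.0, 1.0, 0.0
for i, cs in enumerate(cause_sorted):
    at_risk = n - i
    if cs == 1:
        cif += surv_all / at_risk                # Aalen-Johansen-type CIF increment
        surv_naive *= 1.0 - 1.0 / at_risk        # naive KM counts only cause-1 events
    if cs in (1, 2):
        surv_all *= 1.0 - 1.0 / at_risk          # all-cause survival drops for any event

print("CIF of cause 1 at end of follow-up:    %.3f" % cif)
print("naive 1 - KM estimate (biased upward): %.3f" % (1.0 - surv_naive))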

8.
This paper develops a space-time statistical model for local forecasting of surface-level wind fields in a coastal region with complex topography. The statistical model makes use of output from deterministic numerical weather prediction models which are able to produce forecasts of surface wind fields on a spatial grid. When predicting surface winds at observing stations, errors can arise due to sub-grid scale processes not adequately captured by the numerical weather prediction model, and the statistical model attempts to correct for these influences. In particular, it uses information from observing stations within the study region as well as topographic information to account for local bias. Bayesian methods for inference are used in the model, with computations carried out using Markov chain Monte Carlo algorithms. Empirical performance of the model is described, illustrating that a structured Bayesian approach to complicated space-time models of the type considered in this paper can be readily implemented and can lead to improvements in forecasting over traditional methods.

9.
Failure Inference From a Marker Process Based on a Bivariate Wiener Model
Many models have been proposed that relate failure times and stochastic time-varying covariates. In some of these models, failure occurs when a particular observable marker crosses a threshold level. We are interested in the more difficult, and often more realistic, situation where failure is not related deterministically to an observable marker. In this case, joint models for marker evolution and failure tend to lead to complicated calculations for characteristics such as the marginal distribution of failure time or the joint distribution of failure time and marker value at failure. This paper presents a model based on a bivariate Wiener process in which one component represents the marker and the second, which is latent (unobservable), determines the failure time. In particular, failure occurs when the latent component crosses a threshold level. The model yields reasonably simple expressions for the characteristics mentioned above and is easy to fit to commonly occurring data that involve the marker value at the censoring time for surviving cases and the marker value and failure time for failing cases. Parametric and predictive inference are discussed, as well as model checking. An extension of the model permits the construction of a composite marker from several candidate markers that may be available. The methodology is demonstrated by a simulated example and a case application.
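
A simulation sketch of the bivariate Wiener idea: an observable marker and a correlated latent process evolve together, and failure occurs when the latent component first crosses a threshold. The drifts, volatilities, correlation rho, and threshold a below are hypothetical.

import numpy as np

rng = np.random.default_rng(4)
dt, steps = 0.01, 4000
mu = np.array([0.8, 1.0])                  # drifts: (marker, latent)
sig = np.array([0.5, 0.6])                 # volatilities
rho, a = 0.7, 6.0                          # correlation and failure threshold
cov = np.array([[sig[0] ** 2, rho * sig[0] * sig[1]],
                [rho * sig[0] * sig[1], sig[1] ** 2]]) * dt

def one_subject():
    # correlated increments, accumulated into a bivariate path
    path = np.cumsum(rng.multivariate_normal(mu * dt, cov, size=steps), axis=0)
    hit = np.nonzero(path[:, 1] >= a)[0]   # first passage of the latent component
    fail_time = (hit[0] + 1) * dt if hit.size else np.inf
    marker_at_fail = path[hit[0], 0] if hit.size else np.nan
    return fail_time, marker_at_fail

sims = [one_subject() for _ in range(500)]
times = np.array([s[0] for s in sims])
markers = np.array([s[1] for s in sims])
print("median failure time:", np.median(times[np.isfinite(times)]))
print("median marker value at failure:", np.nanmedian(markers))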

10.
In this paper, the Erlang–Lindley distribution (ErLD) is proposed, which offers a more flexible model for waiting time data. It can accommodate increasing, bathtub-shaped, and inverted bathtub-shaped failure rates. Several statistical and reliability properties are derived and studied. The moments, their associated measures, and the limiting distributions of order statistics are derived. The model parameters are estimated by maximum likelihood and the method of moments. An application of the proposed distribution to waiting time data shows that it can give a better fit than other important lifetime models.

11.
In this paper, we extend SiZer (SIgnificant ZERo crossing of the derivatives) to dependent data for the purpose of goodness-of-fit tests for time series models. Dependent SiZer compares the observed data with a specific null model being tested by adjusting the statistical inference using an assumed autocovariance function. This new approach uses a SiZer-type visualization to flag statistically significant differences between the data and a given null model. The power of this approach is demonstrated through examples of time series of Internet traffic data. It is seen that such time series can have even more burstiness than is predicted by the popular, long-range dependent, fractional Gaussian noise model.

12.
Several discrete distributions have been developed in the statistical literature. However, data generated in many fields are becoming increasingly complex and are difficult to analyze with the discrete distributions available in the existing literature. In this context, we propose a new flexible family of discrete models named the discrete odd Weibull-G (DOW-G) family. Several of its distributional characteristics are derived. A key feature of the proposed family is its failure rate function, which can take a variety of shapes for distinct values of the unknown parameters, including decreasing, increasing, constant, J-shaped, and bathtub-shaped. Furthermore, the presented family not only adequately captures skewed and symmetric data sets, but can also provide a better fit to equi-, over-, and under-dispersed data. After introducing the general class, two particular distributions of the DOW-G family are studied in detail. Parameter estimation for the proposed family is explored by the method of maximum likelihood and a Bayesian approach. A Monte Carlo simulation study is performed to assess the behavior of the estimation methods. Finally, the usefulness of the proposed family is illustrated with two real data sets.
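
As a hedged illustration of how such discrete families are commonly built, the sketch below discretises a Weibull-G style survival function via P(X = k) = S(k) - S(k + 1). The exponential baseline G and all parameter values are placeholders and may differ from the exact DOW-G specification in the paper.

import numpy as np

def baseline_cdf(x, lam=0.1):
    return 1.0 - np.exp(-lam * x)                     # exponential baseline G (illustrative)

def odd_weibull_g_survival(x, alpha, beta, lam=0.1):
    # assumed Weibull-G style survival: exp(-alpha * (G(x)/(1-G(x)))**beta)
    g = baseline_cdf(x, lam)
    odds = g / (1.0 - g)
    return np.exp(-alpha * odds ** beta)

def discrete_pmf(k, alpha, beta, lam=0.1):
    # survival discretisation of the continuous model
    return (odd_weibull_g_survival(k, alpha, beta, lam)
            - odd_weibull_g_survival(k + 1.0, alpha, beta, lam))

k = np.arange(0.0, 30.0)
pmf = discrete_pmf(k, alpha=0.8, beta=1.5)
hazard = pmf / pmf[::-1].cumsum()[::-1]               # discrete failure rate on the truncated support
print("pmf sums to", pmf.sum().round(4), "- first hazards:", hazard[:5].round(3))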

13.
In this paper we present methods for inference on data selected by a complex sampling design for a class of statistical models for the analysis of ordinal variables. Specifically, assuming that the sampling scheme is not ignorable, we derive variance estimates for the class of CUB models (Combinations of a discrete Uniform and a shifted Binomial distribution) under a complex two-stage stratified sample. Both Taylor linearization and repeated replication variance estimators are presented. We also provide design-based test diagnostics and goodness-of-fit measures. Using real data, we illustrate the differences between survey-weighted and unweighted point estimates and inferences for CUB model parameters.
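
A sketch of a survey-weighted (pseudo-likelihood) CUB fit, using the standard CUB mixture pmf and simulated placeholder responses and design weights; the Taylor linearization or replication variance estimation discussed in the paper is not shown.

import numpy as np
from scipy.special import comb
from scipy.optimize import minimize

m = 7
rng = np.random.default_rng(5)
r = rng.integers(1, m + 1, size=400)            # placeholder ordinal responses in 1..m
w = rng.uniform(0.5, 2.0, size=400)             # placeholder survey design weights

def cub_pmf(r, pi, xi):
    # P(R = r) = pi * C(m-1, r-1) * (1-xi)**(r-1) * xi**(m-r) + (1-pi)/m
    shifted_binom = comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)
    return pi * shifted_binom + (1 - pi) / m

def neg_weighted_loglik(params):
    pi, xi = params
    return -np.sum(w * np.log(cub_pmf(r, pi, xi)))

fit = minimize(neg_weighted_loglik, x0=[0.5, 0.5],
               bounds=[(1e-4, 1 - 1e-4)] * 2, method="L-BFGS-B")
print("survey-weighted CUB estimates (pi, xi):", fit.x.round(3))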

14.
In survival analysis applications, failure rate functions with non-monotone shapes are common, so models that can accommodate such shapes are needed. In this article, we present a location regression model based on the complementary exponentiated exponential geometric distribution as an alternative for modelling the usual bathtub-shaped, increasing, and decreasing failure rates in lifetime data. Allowing for censored data, we consider maximum likelihood inference, graphical checks of residuals, and test statistics for influential points.

15.
In this paper we consider semiparametric inference methods for the time scale parameters in general time scale models (Oakes, 1995; Duchesne and Lawless, 2000). We use the results of Robins and Tsiatis (1992) and Lin and Ying (1995) to derive a rank-based estimator that is more efficient and robust than the traditional minimum coefficient of variation (min CV) estimator of Kordonsky and Gerstbakh (1993) for many underlying models. Moreover, our estimator can readily handle censored samples, which is not the case with the min CV method.
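
For reference, a sketch of the min CV benchmark mentioned above: for a linear collapsible time scale t(beta) = age + beta * usage, beta is chosen to minimise the coefficient of variation of the combined failure times. The age/usage data below are simulated and uncensored (the setting where min CV applies most naturally); this is not the proposed rank-based estimator.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
n = 300
usage_rate = rng.uniform(0.2, 2.0, size=n)               # per-unit usage intensity
t_fail = rng.gamma(shape=25.0, scale=4.0, size=n)        # failure time on the 'ideal' combined scale
age = t_fail / (1.0 + 0.5 * usage_rate)                  # observed age at failure
usage = usage_rate * age                                 # observed accumulated usage

def cv(beta):
    t = age + beta * usage                               # candidate combined time scale
    return t.std() / t.mean()

res = minimize_scalar(cv, bounds=(0.0, 5.0), method="bounded")
print("min CV estimate of the usage weight beta:", round(res.x, 3))   # should be near 0.5 here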

16.
Modelling count data is one of the most important issues in statistical research. In this paper, a new probability mass function is introduced by discretizing the continuous failure model of the Lindley distribution. The resulting model is over-dispersed and competitive with the Poisson distribution for fitting automobile claim frequency data. After reviewing some of its properties, a compound discrete Lindley distribution is obtained in closed form. This model is suitable for the collective risk model, in which both the number of claims and the size of a single claim enter the model. The new compound distribution decays to zero much more slowly than the classical compound Poisson distribution and is therefore suitable for modelling extreme data.
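
A sketch of the discretisation step, assuming the usual construction P(X = k) = S(k) - S(k + 1) applied to the Lindley survival function S(x) = (1 + theta + theta*x)/(1 + theta) * exp(-theta*x); theta and the summary below are illustrative.

import numpy as np

def lindley_survival(x, theta):
    return (1.0 + theta + theta * x) / (1.0 + theta) * np.exp(-theta * x)

def discrete_lindley_pmf(k, theta):
    # survival discretisation of the continuous Lindley model
    return lindley_survival(k, theta) - lindley_survival(k + 1.0, theta)

k = np.arange(0.0, 40.0)
theta = 0.4
pmf = discrete_lindley_pmf(k, theta)
mean = np.sum(k * pmf)
var = np.sum((k - mean) ** 2 * pmf)
print("mean %.3f, variance %.3f, dispersion index %.3f" % (mean, var, var / mean))
# a dispersion index above 1 reflects the over-dispersion relative to a Poisson with the same mean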

17.
In randomized clinical trials or observational studies, subjects are recruited at multiple treating sites. Factors that vary across sites may influence outcomes and therefore need to be taken into account to obtain reliable results. We apply the accelerated failure time (AFT) model with linear mixed effects to analyze failure time data, accounting for correlations between outcomes. Specifically, we use a Bayesian approach to fit the data, computing the regression parameters by a Gibbs sampler combined with the Buckley-James method. This approach is compared with the marginal independence approach and other methods through simulations and an application to a real example.
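
A rough sketch of the Buckley-James idea used inside such fitting algorithms: censored log failure times are replaced by conditional expectations computed from the Kaplan-Meier estimate of the residual distribution, and the coefficients are re-fitted by least squares. Data are simulated, and the linear mixed/site effects and the Gibbs sampler are omitted.

import numpy as np

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
logT = X @ np.array([1.0, 0.8]) + rng.normal(scale=0.5, size=n)
logC = rng.normal(loc=2.0, scale=1.0, size=n)
Y, delta = np.minimum(logT, logC), (logT <= logC).astype(float)

def km_jumps(e, d):
    # Kaplan-Meier jump sizes of the residual distribution, at sorted residuals
    order = np.argsort(e)
    e, d = e[order], d[order]
    at_risk = np.arange(len(e), 0, -1)
    surv_after = np.cumprod(np.where(d == 1, 1 - 1 / at_risk, 1.0))
    surv_before = np.concatenate(([1.0], surv_after[:-1]))
    jumps = np.where(d == 1, surv_before / at_risk, 0.0)
    return e, jumps

def buckley_james_step(beta):
    e = Y - X @ beta
    es, jumps = km_jumps(e, delta)
    tail_mass = jumps[::-1].cumsum()[::-1]           # P(residual > e), truncated support
    tail_sum = (es * jumps)[::-1].cumsum()[::-1]     # sum of e * mass over the tail
    imputed = Y.copy()
    for i in np.nonzero(delta == 0)[0]:
        idx = np.searchsorted(es, e[i], side="right")
        if idx < len(es) and tail_mass[idx] > 0:
            # impute E[logT | logT > Y_i, x_i] under the current fit
            imputed[i] = X[i] @ beta + tail_sum[idx] / tail_mass[idx]
    return np.linalg.lstsq(X, imputed, rcond=None)[0]

beta = np.zeros(2)
for _ in range(20):                                  # iterate the update to approximate convergence
    beta = buckley_james_step(beta)
print("Buckley-James estimate:", beta.round(3))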

18.
Compositional time series are multivariate time series which at each time point are proportions that sum to a constant. Accurate inference for such series, which occur in disciplines such as geology, economics, and ecology, is important in practice. Usual multivariate statistical procedures ignore the inherently constrained nature of these observations as parts of a whole and may lead to inaccurate estimation and prediction. In this article, a regression model with vector autoregressive moving average (VARMA) errors is fitted to the compositional time series after an additive log ratio (ALR) transformation. Inference is carried out in a hierarchical Bayesian framework using Markov chain Monte Carlo techniques. The approach is illustrated on compositional time series of mortality events in Los Angeles in order to investigate the dependence of different categories of mortality on air quality.
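
The additive log ratio transform and its inverse, as a minimal self-contained sketch (the last part is taken as the reference category; the VARMA regression on the transformed series is not shown):

import numpy as np

def alr(x):
    # map a composition (rows summing to 1) to unconstrained R^(D-1)
    x = np.asarray(x, dtype=float)
    return np.log(x[..., :-1] / x[..., -1:])

def alr_inverse(y):
    # map ALR coordinates back to the simplex
    expy = np.exp(y)
    denom = 1.0 + expy.sum(axis=-1, keepdims=True)
    return np.concatenate([expy / denom, 1.0 / denom], axis=-1)

comp = np.array([[0.5, 0.3, 0.2],
                 [0.6, 0.1, 0.3]])          # two time points, three categories
z = alr(comp)                               # unconstrained series, ready for VARMA-type modelling
print(np.allclose(alr_inverse(z), comp))    # True: the transform round-trips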

19.
This paper proposes a unified framework for defining and fitting stochastic, discrete-time, discrete-stage population dynamics models. The biological system is described by a state-space model, where the true but unknown state of the population is modelled by a state process, and this is linked to survey data by an observation process. All sources of uncertainty in the inputs, including uncertainty about model specification, are readily incorporated. The paper shows how the state process can be represented as a generalization of the standard Leslie or Lefkovitch matrix. By dividing the state process into subprocesses, complex models can be constructed from manageable building blocks. The paper illustrates the approach with a model of the British grey seal metapopulation, using sequential importance sampling with kernel smoothing to fit the model.
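
A toy stochastic Leslie-matrix projection illustrating the kind of state process described above, with binomial survival and Poisson births standing in for the model's subprocesses; the fecundity and survival values are hypothetical, not grey seal estimates.

import numpy as np

rng = np.random.default_rng(8)
fecundity = np.array([0.0, 0.2, 0.8, 0.9])       # offspring per individual, by age class
survival = np.array([0.6, 0.8, 0.9])             # survival from age class i to i + 1

def step(n):
    births = rng.poisson(np.sum(fecundity * n))              # birth subprocess
    survivors = rng.binomial(n[:-1].astype(int), survival)   # survival/ageing subprocess
    return np.concatenate(([births], survivors))             # oldest class is not retained here

n = np.array([100, 80, 60, 40])
for year in range(25):
    n = step(n)
print("population by age class after 25 years:", n)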

20.
Applied work routinely relies on heteroscedasticity and autocorrelation consistent (HAC) standard errors when conducting inference in a time series setting. As is well known, however, these corrections perform poorly in small samples under pronounced autocorrelation. In this article, I first provide a review of popular methods to clarify the reasons for this failure. I then derive inference that remains valid under a specific form of strong dependence. In particular, I assume that the long-run properties can be approximated by a stationary Gaussian AR(1) model, with a coefficient arbitrarily close to one. In this setting, I derive tests that come close to maximizing a weighted average power criterion. Small-sample simulations show that these tests perform well, including in a regression context.
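
For reference, a sketch of the standard Newey-West HAC variance whose poor small-sample behaviour under strong autocorrelation motivates the paper (Bartlett kernel, a common rule-of-thumb truncation lag, and simulated AR(1) errors with coefficient 0.9):

import numpy as np

rng = np.random.default_rng(9)
n = 200
x = rng.normal(size=n)
u = np.empty(n)                              # AR(1) errors with coefficient 0.9
u[0] = rng.normal()
for t in range(1, n):
    u[t] = 0.9 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
L = int(np.floor(4 * (n / 100) ** (2 / 9)))  # rule-of-thumb truncation lag

# Newey-West long-run variance of the score X'u
S = (X * resid[:, None]).T @ (X * resid[:, None]) / n
for lag in range(1, L + 1):
    w = 1.0 - lag / (L + 1.0)                # Bartlett kernel weight
    G = (X[lag:] * resid[lag:, None]).T @ (X[:-lag] * resid[:-lag, None]) / n
    S += w * (G + G.T)

XtX_inv = np.linalg.inv(X.T @ X / n)
V = XtX_inv @ S @ XtX_inv / n                # sandwich covariance of the OLS estimator
print("slope %.3f, HAC s.e. %.3f" % (beta[1], np.sqrt(V[1, 1])))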
