Similar Articles
20 similar articles found.
1.
In this article, we consider the multiple step-stress model based on the cumulative exposure model assumption. It is assumed that, at a given stress level, the lifetimes of the experimental units follow an exponential distribution and that the expected lifetime decreases as the stress level increases. We focus on order-restricted inference for the unknown parameters of the lifetime distributions. First we consider the order-restricted maximum likelihood estimators (MLEs) of the model parameters. It is well known that the order-restricted MLEs cannot be obtained in explicit form. We propose an algorithm that stops in a finite number of steps and provides the MLEs. We further consider Bayes estimates and the associated credible intervals under the squared error loss function. Since the Bayes estimates also lack an explicit form, we propose to compute them using an importance sampling technique. We provide an extensive simulation study for the case of three stress levels, mainly to assess the performance of the proposed methods. Finally, one real data set is analysed for illustrative purposes.
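Since the abstract leans on importance sampling for the Bayes estimates, a minimal sketch may help. It assumes exponential lifetimes with a gamma prior on the rate (the hyperparameters `a`, `b` are illustrative, not taken from the paper) and uses the prior as the proposal, so the exact conjugate posterior mean is available as a check:

```python
import math, random

random.seed(42)

# Simulated exponential lifetimes at a single stress level (illustrative).
lam_true = 0.5
data = [random.expovariate(lam_true) for _ in range(50)]
n, S = len(data), sum(data)

# Gamma(a, b) prior on the rate (shape a, rate b) -- assumed hyperparameters.
a, b = 2.0, 1.0

# Importance sampling with the prior as proposal: weights = likelihood.
M = 20000
draws = [random.gammavariate(a, 1.0 / b) for _ in range(M)]
logw = [n * math.log(lam) - lam * S for lam in draws]   # exponential log-lik
mx = max(logw)                                          # stabilise the weights
w = [math.exp(lw - mx) for lw in logw]
post_mean_is = sum(lam * wi for lam, wi in zip(draws, w)) / sum(w)

# Conjugacy gives the exact posterior Gamma(a + n, b + S) for comparison.
post_mean_exact = (a + n) / (b + S)
```

In the paper's order-restricted setting the weights involve the constrained likelihood across all stress levels, but the mechanics are the same.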

2.
The purpose of this paper is to address the optimal design of step-stress accelerated degradation tests (SSADT) when the degradation process of a product follows the inverse Gaussian (IG) process. For this design problem, an important task is to construct a link model connecting the degradation magnitudes at different stress levels. In this paper, a proportional degradation rate model is proposed to link the degradation paths of the SSADT with the stress levels, in which the average degradation rate is proportional to an exponential function of the stress level. Two optimization problems concerning the asymptotic variances of the lifetime characteristics' estimators are investigated. The optimal settings, including sample size, measurement frequency and the number of measurements for each stress level, are determined by minimizing the two objective functions within a given budget constraint. As an example, the sliding metal wear data are used to illustrate the proposed model.
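The proportional degradation-rate link can be sketched as follows. The IG increments are drawn with the Michael–Schucany–Haas transformation, and all numerical values (`a`, `b`, `lam`, the stress levels and measurement counts) are illustrative assumptions rather than the paper's settings:

```python
import math, random

random.seed(1)

def rinvgauss(mu, lam):
    """One inverse Gaussian draw (Michael-Schucany-Haas transformation)."""
    z = random.gauss(0.0, 1.0)
    v = z * z
    x = mu + mu * mu * v / (2 * lam) - (mu / (2 * lam)) * math.sqrt(
        4 * mu * lam * v + mu * mu * v * v)
    return x if random.random() <= mu / (mu + x) else mu * mu / x

# Proportional degradation-rate link from the abstract: the mean rate at
# stress s is a * exp(b * s); a and b below are illustrative.
a, b = 0.5, 0.8
def mean_rate(s):
    return a * math.exp(b * s)

lam = 4.0                      # assumed IG shape (volatility) parameter
path, level = [0.0], 0.0
for s in (1.0, 2.0, 3.0):      # three increasing stress levels
    for _ in range(10):        # 10 unit-time measurements per level
        level += rinvgauss(mean_rate(s), lam)
        path.append(level)

# Sanity check on the sampler: the IG(mu, lam) mean should be mu.
check = sum(rinvgauss(2.0, 4.0) for _ in range(20000)) / 20000
```

Because IG increments are strictly positive, every simulated degradation path is monotone, which is the property that makes the IG process attractive for wear data.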

3.
The Integrated Nested Laplace Approximation (INLA) has established itself as a widely used method for approximate inference on Bayesian hierarchical models which can be represented as a latent Gaussian model (LGM). INLA is based on producing an accurate approximation to the posterior marginal distributions of the parameters in the model and some other quantities of interest by using repeated approximations to intermediate distributions and integrals that appear in the computation of the posterior marginals. INLA focuses on models whose latent effects are a Gaussian Markov random field. For this reason, we have explored alternative ways of expanding the number of possible models that can be fitted using the INLA methodology. In this paper, we present a novel approach that combines INLA and Markov chain Monte Carlo (MCMC). The aim is to consider a wider range of models that can be fitted with INLA only when some of the parameters of the model have been fixed. We show how new values of these parameters can be drawn from their posterior by using conditional models fitted with INLA and standard MCMC algorithms, such as Metropolis–Hastings. Hence, this will extend the use of INLA to fit models that can be expressed as a conditional LGM. Also, this new approach can be used to build simpler MCMC samplers for complex models as it allows sampling only on a limited number of parameters in the model. We will demonstrate how our approach can extend the class of models that could benefit from INLA, and how the R-INLA package will ease its implementation. We will go through simple examples of this new approach before we discuss more advanced applications with datasets taken from the relevant literature. 
In particular, INLA within MCMC will be used to fit models with Laplace priors in a Bayesian Lasso model, imputation of missing covariates in linear models, fitting spatial econometrics models with complex nonlinear terms in the linear predictor and classification of data with mixture models. Furthermore, in some of the examples we could exploit INLA within MCMC to make joint inference on an ensemble of model parameters.
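The INLA-within-MCMC idea, fitting a conditional model for fixed values of some parameters and updating those parameters with Metropolis–Hastings, can be illustrated with a toy model where the conditional fit is available in closed form: a Gaussian mean integrated out analytically stands in for an R-INLA call, and the priors and tuning below are illustrative:

```python
import math, random

random.seed(3)

# Toy model: y_i ~ N(mu, sig2), mu ~ N(0, tau2). Conditional on sig2 the
# latent mu integrates out in closed form, playing the role of the
# conditional LGM fitted with INLA; MH then updates theta = log(sig2).
tau2 = 10.0
y = [random.gauss(0.0, 1.0) for _ in range(100)]
n, sy, syy = len(y), sum(y), sum(v * v for v in y)

def log_marginal(sig2):
    """log p(y | sig2) with the latent mean integrated out analytically."""
    A = n / sig2 + 1.0 / tau2
    B = sy / sig2
    return (-0.5 * n * math.log(2 * math.pi * sig2)
            - 0.5 * math.log(tau2 * A)
            - 0.5 * syy / sig2 + 0.5 * B * B / A)

def log_prior_theta(theta):            # assumed N(0, 2^2) prior on log(sig2)
    return -0.5 * (theta / 2.0) ** 2

theta, chain, accepted = 0.0, [], 0
lp = log_marginal(math.exp(theta)) + log_prior_theta(theta)
for _ in range(5000):                  # random-walk Metropolis-Hastings
    prop = theta + random.gauss(0.0, 0.5)
    lp_prop = log_marginal(math.exp(prop)) + log_prior_theta(prop)
    if math.log(random.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
        accepted += 1
    chain.append(math.exp(theta))

post_mean_sig2 = sum(chain[1000:]) / len(chain[1000:])
```

In the real scheme the closed-form `log_marginal` would be replaced by the marginal likelihood reported by an R-INLA fit of the conditional model.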

4.
The Quermass-interaction model generalizes the classical germ-grain Boolean model by adding a morphological interaction between the grains. It makes it possible to model random structures with specific morphologies which are unlikely to be generated by a Boolean model. The Quermass-interaction model depends in particular on an intensity parameter, which is impossible to estimate with classical likelihood or pseudo-likelihood approaches because the number of points is not observable from a germ-grain set. In this paper, we present a procedure based on the Takacs–Fiksel method which is able to estimate all parameters of the Quermass-interaction model, including the intensity. An intensive simulation study is conducted to assess the efficiency of the procedure and to provide practical recommendations. It also illustrates that estimation of the intensity parameter is crucial for identifying the model. Finally, the Quermass-interaction model is fitted by our method to P. Diggle's heather data set.

5.
Time series within fields such as finance and economics are often modelled using long memory processes. Alternative studies on the same data can suggest that a series may actually contain a 'changepoint' (a point within the time series where the data generating process has changed). These models have been shown to have elements of similarity, such as within their spectrum. Without prior knowledge, this leads to ambiguity between the two models, making it difficult to assess which is most appropriate. We demonstrate that considering this problem in a time-varying environment, using the time-varying spectrum, removes this ambiguity. Using the wavelet spectrum, we then use a classification approach to determine the most appropriate model (long memory or changepoint). Simulation results are presented across a number of models, followed by an application to stock cross-correlations and US inflation. The results indicate that the proposed classification outperforms an existing hypothesis testing approach on a number of models and performs comparably across the others.

6.
EEG microstate analysis investigates the collection of distinct temporal blocks that characterize the electrical activity of the brain. Brain activity within each microstate is stable, but activity switches rapidly between different microstates in a nonrandom way. We propose a Bayesian nonparametric model that concurrently estimates the number of microstates and their underlying behaviour. We use a Markov switching vector autoregressive (VAR) framework, where a hidden Markov model (HMM) controls the nonrandom state switching dynamics of the EEG activity and a VAR model defines the behaviour of all time points within a given state. We analyze the resting-state EEG data from twin pairs collected through the Minnesota Twin Family Study, consisting of 70 epochs per participant, where each epoch corresponds to 2 s of EEG data. We fit our model at the twin pair level, sharing information within epochs from the same participant and within epochs from the same twin pair. We capture within twin-pair similarity, using an Indian buffet process, to consider an infinite library of microstates, allowing each participant to select a finite number of states from this library. The state spaces of highly similar twins may completely overlap while dissimilar twins could select distinct state spaces. In this way, our Bayesian nonparametric model defines a sparse set of states that describe the EEG data. All epochs from a single participant use the same set of states and are assumed to adhere to the same state switching dynamics in the HMM, enforcing within-participant similarity.

7.
One of the key questions in the use of mixture models concerns the choice of the number of components most suitable for a given data set. In this paper we investigate answers to this problem in the context of likelihood-based clustering of the rows of a matrix of ordinal data modelled by the ordered stereotype model. Two methodologies for selecting the best model are demonstrated and compared. The first approach fits a separate model to the data for each possible number of clusters, and then uses an information criterion to select the best model. The second approach uses a Bayesian construction in which the parameters and the number of clusters are estimated simultaneously from their joint posterior distribution. Simulation studies are presented which include a variety of scenarios in order to test the reliability of both approaches. Finally, the results of the application of model selection to two real data sets are shown.
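The first, information-criterion approach can be sketched in a simpler setting: a one-dimensional Gaussian mixture fitted by EM for each candidate number of components and compared by BIC (the paper itself works with the ordered stereotype model for ordinal data, not Gaussians):

```python
import math, random

random.seed(7)

# Two well-separated Gaussian clusters; illustrative data only.
data = [random.gauss(0.0, 1.0) for _ in range(100)] + \
       [random.gauss(8.0, 1.0) for _ in range(100)]

def norm_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def fit_gmm(xs, k, iters=200):
    """Basic EM for a 1-D Gaussian mixture; returns the maximized log-lik."""
    xs = sorted(xs)
    n = len(xs)
    means = [xs[(2 * j + 1) * n // (2 * k)] for j in range(k)]  # quantile init
    sds = [max(1e-2, (xs[-1] - xs[0]) / (2 * k))] * k
    weights = [1.0 / k] * k
    ll = float("-inf")
    for _ in range(iters):
        resp, ll_new = [], 0.0                      # E-step: responsibilities
        for x in xs:
            dens = [w * norm_pdf(x, m, s) for w, m, s in zip(weights, means, sds)]
            tot = sum(dens) or 1e-300
            resp.append([d / tot for d in dens])
            ll_new += math.log(tot)
        if abs(ll_new - ll) < 1e-8:
            break
        ll = ll_new
        for j in range(k):                          # M-step
            nj = sum(r[j] for r in resp) or 1e-12
            means[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sds[j] = math.sqrt(max(var, 1e-4))      # variance floor
            weights[j] = nj / n
    return ll

def bic(ll, k, n):
    p = 3 * k - 1            # k means, k sds, k - 1 free weights
    return -2 * ll + p * math.log(n)

n = len(data)
scores = {k: bic(fit_gmm(data, k), k, n) for k in (1, 2, 3)}
best_k = min(scores, key=scores.get)
```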

8.
The number of people to select within selected households has significant consequences for the conduct and output of household surveys. The operational and data quality implications of this choice are carefully considered in many surveys, but the effect on statistical efficiency is not well understood. The usual approach is to select all people in each selected household, where operational and data quality concerns make this feasible. If not, one person is usually selected from each selected household. We find that this strategy is not always justified, and we develop intermediate designs between these two extremes. Current practices were developed when household survey field procedures needed to be simple and robust; however, more complex designs are now feasible owing to the increasing use of computer-assisted interviewing. We develop more flexible designs by optimizing survey cost, based on a simple cost model, subject to a required variance for an estimator of population total. The innovation lies in the fact that household sample sizes are small integers, which creates challenges in both design and estimation. The new methods are evaluated empirically by using census and health survey data, showing considerable improvement over existing methods in some cases.
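The kind of optimization described, minimizing cost subject to a variance target over small-integer household sample sizes, can be sketched with the textbook cluster-sampling design effect 1 + (k-1)ρ. The cost and variance parameters below are illustrative, and the paper's actual cost model is more detailed:

```python
import math

def optimal_design(c_household, c_person, rho, S2, V_target, k_max=10):
    """Enumerate integer within-household sizes k, compute the number of
    households m needed to meet the variance target under the design effect
    1 + (k - 1) * rho, and return the cheapest (k, m, cost)."""
    best = None
    for k in range(1, k_max + 1):
        deff = 1 + (k - 1) * rho
        m = math.ceil(S2 * deff / (k * V_target))   # households required
        cost = c_household * m + c_person * m * k
        if best is None or cost < best[2]:
            best = (k, m, cost)
    return best

# Illustrative inputs: per-household and per-person costs, intra-household
# correlation rho, unit variance S2, and a target variance for the estimator.
k, m, cost = optimal_design(c_household=50.0, c_person=10.0,
                            rho=0.3, S2=1.0, V_target=0.001)
```

Because m must be rounded up for each integer k, the integer optimum (here k = 3) can differ from what the continuous cost-variance trade-off suggests, which is exactly the complication the abstract points to.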

9.
Biological control of pests is an important branch of entomology, providing environmentally friendly forms of crop protection. Bioassays are used to find the optimal conditions for the production of parasites and strategies for application in the field. In some of these assays, proportions are measured and, often, these data have an inflated number of zeros. In this work, six models are applied to data sets obtained from biological control assays for Diatraea saccharalis, a common pest in sugar cane production. A natural choice for modelling proportion data is the binomial model. The second model is an overdispersed version of the binomial model, estimated by a quasi-likelihood method. This model was initially built to model overdispersion generated by individual variability in the probability of success. When interest is only in the positive proportion data, a model can be based on the truncated binomial distribution or its overdispersed version. The last two models include the zero proportions and are based on a finite mixture model with the binomial distribution, or its overdispersed version, for the positive data. Here, we present the models, discuss their estimation and compare the results.
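The last two models, a finite mixture with a point mass at zero and a binomial component for the rest, admit a short sketch. The zero-inflated binomial likelihood below is maximized by a crude grid search, with all parameter values illustrative:

```python
import math, random
from collections import Counter

random.seed(11)

# Zero-inflated binomial assay data: with probability pi the assay is a
# structural zero, otherwise the count is Binomial(m_trials, p).
m_trials, pi_true, p_true = 20, 0.3, 0.4
data = []
for _ in range(200):
    if random.random() < pi_true:
        data.append(0)
    else:
        data.append(sum(random.random() < p_true for _ in range(m_trials)))
counts = Counter(data)

def loglik(pi, p):
    """Zero-inflated binomial log-likelihood; zeros can come from either
    the point mass or the binomial component."""
    ll = 0.0
    for y, c in counts.items():
        if y == 0:
            ll += c * math.log(pi + (1 - pi) * (1 - p) ** m_trials)
        else:
            ll += c * (math.log(1 - pi) + math.log(math.comb(m_trials, y))
                       + y * math.log(p) + (m_trials - y) * math.log(1 - p))
    return ll

# Crude grid-search MLE over (pi, p); a real analysis would use EM or Newton.
grid = [i / 100 for i in range(1, 100)]
pi_hat, p_hat = max(((pi, p) for pi in grid for p in grid),
                    key=lambda t: loglik(*t))
```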

10.
11.
In this article, we introduce a new method for modelling curves with dynamic structures, using a non-parametric approach formulated as a state space model. The non-parametric approach is based on penalised splines, represented as a dynamic mixed model. This formulation can capture the dynamic evolution of curves using a limited number of latent factors, allowing an accurate fit with a small number of parameters. We also present a new method to determine the optimal smoothing parameter through an adaptive procedure, using a formulation analogous to a stochastic volatility (SV) model. The non-parametric state space model makes it possible to unify different methods applied to data with a functional structure in finance. We present the advantages and limitations of this method through simulation studies, and also by comparing its predictive performance with that of other parametric and non-parametric methods used in financial applications, using data on the term structure of interest rates.

12.
ABSTRACT

In this article, we consider a simple step-stress life test in the presence of exponentially distributed competing risks. It is assumed that the stress is changed when a pre-specified number of failures takes place. The data are assumed to be Type-II censored. We obtain the maximum likelihood estimators of the model parameters and the exact conditional distributions of the maximum likelihood estimators. Based on the conditional distributions, approximate confidence intervals (CIs) of the unknown parameters are constructed. Percentile bootstrap CIs of the model parameters are also provided. An optimal test plan is addressed. We perform an extensive simulation study to observe the behaviour of the proposed method; the performance is quite satisfactory. Finally, we analyse two data sets for illustrative purposes.
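Under exponential competing risks with Type-II censoring, the MLE of each cause-specific rate takes the classical closed form d_j / (total time on test), where d_j counts failures of cause j. A simulation sketch with illustrative rates (a single stress level, so the step-stress change point is not modelled here):

```python
import random

random.seed(5)

# Two exponential competing risks with illustrative rates l1, l2; each unit
# fails at min(T1, T2) with the corresponding cause. The test stops at the
# r-th failure (Type-II censoring).
l1, l2, n, r = 0.5, 1.0, 200, 150
units = []
for _ in range(n):
    t1 = random.expovariate(l1)
    t2 = random.expovariate(l2)
    units.append((min(t1, t2), 1 if t1 < t2 else 2))

units.sort()
observed = units[:r]
t_r = observed[-1][0]                              # r-th failure time
ttt = sum(t for t, _ in observed) + (n - r) * t_r  # total time on test

d1 = sum(1 for _, c in observed if c == 1)
d2 = r - d1
l1_hat, l2_hat = d1 / ttt, d2 / ttt                # cause-specific MLEs
```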

13.
This paper describes the modelling and fitting of Gaussian Markov random field spatial components within a Generalized Additive Model for Location, Scale and Shape (GAMLSS). This allows modelling of any or all of the parameters of the distribution of the response variable using explanatory variables and spatial effects. The response variable distribution is allowed to be a non-exponential-family distribution. A new R package developed to achieve this is presented. We use Gaussian Markov random fields to model the spatial effect in Munich rent data and explore some features and characteristics of the data. The potential of using spatial analysis within GAMLSS is discussed. We argue that the flexibility of parametric distributions, the ability to model all the parameters of the distribution and the diagnostic tools of GAMLSS provide an ideal environment for modelling spatial features of data.

14.
The particle Gibbs sampler is a systematic way of using a particle filter within Markov chain Monte Carlo. This results in an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution for a state space model in a Markov chain Monte Carlo scheme. We show that the particle Gibbs Markov kernel is uniformly ergodic under rather general assumptions, which we carefully review and discuss. In particular, we provide an explicit rate of convergence, which reveals that (i) for a fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail a common stochastic volatility model with a non-compact state space.
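A bootstrap particle filter for a simple stochastic volatility model, the building block inside a particle Gibbs sampler, can be sketched as follows. All parameters are illustrative, and a full particle Gibbs kernel would additionally condition on a retained reference trajectory:

```python
import math, random

random.seed(9)

# Stochastic volatility model (illustrative parameters):
#   x_t = phi * x_{t-1} + sigma * eta_t,   y_t = beta * exp(x_t / 2) * eps_t
phi, sigma, beta, T, N = 0.95, 0.3, 0.7, 100, 500

# Simulate one data set from the model.
x, ys = 0.0, []
for _ in range(T):
    x = phi * x + sigma * random.gauss(0.0, 1.0)
    ys.append(beta * math.exp(x / 2) * random.gauss(0.0, 1.0))

def log_norm(y, sd):
    return -0.5 * math.log(2 * math.pi * sd * sd) - 0.5 * (y / sd) ** 2

# Bootstrap particle filter: propagate from the state transition, weight by
# the observation density, resample, and accumulate the log-likelihood.
particles = [random.gauss(0.0, sigma / math.sqrt(1 - phi * phi))
             for _ in range(N)]       # draws from the stationary distribution
loglik = 0.0
for y in ys:
    particles = [phi * p + sigma * random.gauss(0.0, 1.0) for p in particles]
    logw = [log_norm(y, beta * math.exp(p / 2)) for p in particles]
    mx = max(logw)                    # log-sum-exp stabilisation
    w = [math.exp(lw - mx) for lw in logw]
    loglik += mx + math.log(sum(w) / N)
    particles = random.choices(particles, weights=w, k=N)  # multinomial resample
```

The uniform ergodicity result in the abstract concerns how the quality of this filter (here, through N relative to T) controls the mixing of the resulting Gibbs kernel.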

15.
ABSTRACT

The Lindley distribution is an important distribution for analysing stress–strength reliability models and lifetime data. In many ways, the Lindley distribution is a better model than one based on the exponential distribution. Order statistics arise naturally in many such applications. In this paper, we derive exact explicit expressions for the single, double (product), triple and quadruple moments of order statistics from the Lindley distribution. We then use these moments to obtain the best linear unbiased estimates (BLUEs) of the location and scale parameters based on Type-II right-censored samples. Next, we use these results to determine the mean, variance, and coefficients of skewness and kurtosis of certain linear functions of order statistics, in order to develop Edgeworth approximate confidence intervals for the location and scale Lindley parameters. In addition, we carry out some numerical illustrations through Monte Carlo simulations to show the usefulness of the findings. Finally, we apply the findings of the paper to a real data set.
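The single moments of Lindley order statistics can be checked numerically from the standard representation E[X_(r)^k] = r C(n,r) ∫ x^k f(x) F(x)^(r-1) (1-F(x))^(n-r) dx. The paper derives exact explicit expressions; the sketch below just integrates numerically and verifies the n = r = 1 case against the known Lindley mean (θ+2)/(θ(θ+1)):

```python
import math

# Lindley(theta) density and CDF.
def f(x, th):
    return th * th / (th + 1) * (1 + x) * math.exp(-th * x)

def F(x, th):
    return 1 - (th + 1 + th * x) / (th + 1) * math.exp(-th * x)

def moment_order_stat(r, n, th, k=1, upper=50.0, steps=100000):
    """k-th moment of the r-th order statistic from a Lindley(theta) sample
    of size n, by trapezoidal integration of the standard representation."""
    c = r * math.comb(n, r)
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        g = c * x ** k * f(x, th) * F(x, th) ** (r - 1) \
            * (1 - F(x, th)) ** (n - r)
        total += g if 0 < i < steps else g / 2   # trapezoid end weights
    return total * h

# Sanity check: with n = r = 1 this is the Lindley mean, 1.5 at theta = 1.
m1 = moment_order_stat(1, 1, 1.0)
```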

16.
The performance of commonly used asymptotic inference procedures for the random-effects model used in meta-analysis relies on the number of studies. When the number of studies is moderate or small, the exact inference procedure is more reliable than its asymptotic counterparts. However, the related numerical computation may be demanding, an obstacle to the routine use of the exact method. In this paper, we propose a novel numerical algorithm for constructing the exact 95% confidence interval of the location parameter in the random-effects model. The algorithm is much faster than the naive method and may greatly facilitate the use of the more appropriate exact inference procedure in meta-analysis. Numerical studies and real data examples are used to illustrate the advantage of the proposed method.
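For contrast, the asymptotic procedure that exact methods improve on is typically the DerSimonian–Laird estimator with a Wald-type interval. A sketch on illustrative data (study effects `y` with within-study variances `v`):

```python
import math

# Illustrative meta-analysis data: study effects and within-study variances.
y = [0.21, 0.45, 0.10, 0.38, 0.27]
v = [0.040, 0.090, 0.050, 0.020, 0.060]

# Fixed-effect weights and Cochran's Q statistic.
w = [1.0 / vi for vi in v]
sw = sum(w)
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))

# DerSimonian-Laird moment estimator of the between-study variance tau^2.
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))

# Random-effects weights, pooled estimate, and Wald-type 95% CI.
wstar = [1.0 / (vi + tau2) for vi in v]
mu_hat = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
se = 1.0 / math.sqrt(sum(wstar))
ci = (mu_hat - 1.96 * se, mu_hat + 1.96 * se)
```

With only k = 5 studies, the normal approximation behind this interval is exactly the weak point that motivates an exact procedure.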

17.
Virtual observatories give us access to huge amounts of image data that are often redundant. Our goal is to take advantage of this redundancy by combining images of the same field of view into a single model. To achieve this goal, we propose to develop a multi-source data fusion method that relies on probability and band-limited signal theory. The target object is an image to be inferred from a number of blurred and noisy sources, possibly from different sensors under various conditions (e.g. resolution, shift, orientation, blur, noise). We aim at the recovery of a compound model "image + uncertainties" that best relates to the observations and contains a maximum of useful information from the initial data set. Thus, in some cases, spatial super-resolution may be required in order to preserve the information. We propose to use a Bayesian inference scheme to invert a forward model, which describes the image formation process for each observation and takes into account some a priori knowledge (e.g. stars as point sources). This involves both automatic registration and spatial resampling, which are ill-posed inverse problems that are addressed within a rigorous Bayesian framework. The originality of the work is in devising a new technique of multi-image data fusion that provides super-resolution, self-calibration and possibly model selection capabilities. This approach should outperform existing methods such as resample-and-add or drizzling, since it can handle different instrument characteristics for each input image and compute uncertainty estimates as well. Moreover, it is designed to work recursively, so that the model can be updated when new data become available.

18.
In the field of computer viruses, a growing number of mathematical models have been published during the last decade. In this paper, we consider several estimation methods under two parametric models, the random constant scanning model and the nonhomogeneous random scanning model, and compare the estimators' performances based on the simulation of random populations. We conclude our work with an application of the different estimation methods to a real data set of worm propagation.
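Under random constant scanning, the infected population follows logistic growth. A sketch that recovers the scanning-rate parameter from an illustrative (noiseless) propagation curve by grid-search least squares:

```python
import math

# Random constant scanning gives logistic growth of the infected count:
#   I(t) = N / (1 + (N/I0 - 1) * exp(-r t)).   Values below are illustrative.
N, I0, r_true = 100000.0, 10.0, 1.2

def logistic(t, r):
    return N / (1 + (N / I0 - 1) * math.exp(-r * t))

ts = [0.5 * i for i in range(40)]
obs = [logistic(t, r_true) for t in ts]   # noiseless curve for the sketch

def sse(r):
    """Sum of squared residuals for a candidate scanning rate r."""
    return sum((logistic(t, r) - o) ** 2 for t, o in zip(ts, obs))

# Least-squares fit of r over a grid from 0.5 to 3.0 in steps of 0.01.
r_hat = min((i / 100 for i in range(50, 301)), key=sse)
```

On real worm traces the counts are noisy and partially observed, which is where the competing estimators compared in the paper differ.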

19.
In this paper, we study a change-point inference problem motivated by genomic data collected for the purpose of monitoring DNA copy number changes. DNA copy number changes, or copy number variations (CNVs), correspond to chromosomal aberrations and signify abnormality of a cell. Cancer development and other related diseases are usually relevant to DNA copy number changes on the genome. There is inherent random noise in such data; therefore, an appropriate statistical model is needed to identify statistically significant DNA copy number changes. This type of statistical inference is crucial in cancer research, clinical diagnostic applications, and other related genomic research. For the high-throughput genomic data resulting from DNA copy number experiments, a mean and variance change point model (MVCM) for detecting CNVs is appropriate. We propose a Bayesian approach to study the MVCM for the case of a single change, and use a sliding window to search for all CNVs on a given chromosome. We carry out simulation studies to evaluate the estimate of the locus of the DNA copy number change using the derived posterior probability. These simulation results show that the approach is suitable for identifying copy number changes. The approach is also illustrated on several chromosomes from nine fibroblast cancer cell lines (array-based comparative genomic hybridization data). All DNA copy number aberrations that had been identified and verified by karyotyping are detected by our approach on these cell lines.
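A likelihood analogue of the single-change MVCM (the paper itself takes a Bayesian route via a posterior over the change locus) splits the series at the point maximizing the two-segment Gaussian log-likelihood, with segment-specific means and variances. Data and parameters are illustrative:

```python
import math, random

random.seed(13)

# Segment 1: N(0, 1); segment 2: N(2, 2^2). True change point at index 100.
xs = [random.gauss(0.0, 1.0) for _ in range(100)] + \
     [random.gauss(2.0, 2.0) for _ in range(100)]

def seg_loglik(seg):
    """Gaussian log-likelihood of a segment at its MLEs (mean and variance),
    matching the MVCM assumption that both may change at the change point."""
    n = len(seg)
    m = sum(seg) / n
    var = max(sum((x - m) ** 2 for x in seg) / n, 1e-12)
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

n = len(xs)
# Single change point: maximise the split log-likelihood over candidate loci.
tau_hat = max(range(5, n - 5),
              key=lambda t: seg_loglik(xs[:t]) + seg_loglik(xs[t:]))
```

A sliding window, as in the paper, would apply this single-change scan repeatedly along the chromosome to pick up multiple CNVs.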

20.
We present an application of reversible jump Markov chain Monte Carlo sampling from the field of neurophysiology, where we seek to estimate the number of motor units within a single muscle. Such an estimate is needed for monitoring the progression of neuromuscular diseases such as amyotrophic lateral sclerosis. Our data consist of action potentials that were recorded from the surface of a muscle in response to stimuli of different intensities applied to the nerve supplying the muscle. During the gradual increase in intensity of the stimulus from threshold to supramaximal, all motor units are progressively excited. However, at any given submaximal intensity of stimulus, the number of units that are excited is variable, because of random fluctuations in axonal excitability. Furthermore, the individual motor unit action potentials exhibit variability. To account for these biological properties, Ridall and co-workers developed a model of motor unit activation that is capable of describing the response when the number of motor units, N, is fixed. The purpose of this paper is to extend that model so that the possible number of motor units, N, is a stochastic variable. We illustrate the elements of our model, show that the results are reproducible and show that our model can measure the decline in motor unit numbers during the course of amyotrophic lateral sclerosis. Our method holds promise of being useful in the study of neurogenic diseases.
