Similar Literature (20 records)
1.
2.
There are a variety of methods in the literature which seek to make iterative estimation algorithms more manageable by breaking the iterations into a greater number of simpler or faster steps. Those algorithms which deal at each step with a proper subset of the parameters are called in this paper partitioned algorithms. Partitioned algorithms in effect replace the original estimation problem with a series of problems of lower dimension. The purpose of the paper is to characterize some of the circumstances under which this process of dimension reduction leads to significant benefits. Four types of partitioned algorithms are distinguished: reduced objective function methods, nested (partial Gauss-Seidel) iterations, zigzag (full Gauss-Seidel) iterations, and leapfrog (non-simultaneous) iterations. Emphasis is given to Newton-type methods using analytic derivatives, but a nested EM algorithm is also given. Nested Newton methods are shown to be equivalent to applying the same Newton method to the reduced objective function, and are applied to separable regression and generalized linear models. Nesting is shown generally to improve the convergence of Newton-type methods, both by improving the quadratic approximation to the log-likelihood and by improving the accuracy with which the observed information matrix can be approximated. Nesting is recommended whenever a subset of parameters is relatively easily estimated. The zigzag method is shown to produce a stable but generally slow iteration; it is fast and recommended when the parameter subsets have approximately uncorrelated estimates. The leapfrog iteration has fewer guaranteed properties in general, but is similar to nesting and zigzagging when the parameter subsets are orthogonal.
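The zigzag (full Gauss-Seidel) scheme described above can be sketched on a toy two-block objective. This is a minimal illustration, not the paper's method: the objective, its blocks, and the step counts are all hypothetical; each pass takes an exact Newton step in one parameter block while holding the other fixed.

```python
# Hypothetical two-block objective: f(a, b) = (a - 1)**2 + (b - 2)**2 + 0.5*a*b.
# A zigzag (full Gauss-Seidel) iteration alternates Newton updates on each
# parameter block, holding the other block fixed.

def zigzag(a=0.0, b=0.0, iters=50):
    for _ in range(iters):
        # Newton step in a with b fixed: df/da = 2(a - 1) + 0.5*b, d2f/da2 = 2
        a -= (2 * (a - 1) + 0.5 * b) / 2.0
        # Newton step in b with a fixed: df/db = 2(b - 2) + 0.5*a, d2f/db2 = 2
        b -= (2 * (b - 2) + 0.5 * a) / 2.0
    return a, b

a, b = zigzag()
```

Because the two blocks are only weakly coupled here, the iteration converges quickly; with strongly correlated blocks the same scheme would crawl, matching the abstract's caveat.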

3.
Type-I and Type-II censoring schemes are the widely used censoring schemes available for life testing experiments. A mixture of Type-I and Type-II censoring schemes is known as a hybrid censoring scheme. Different hybrid censoring schemes have been introduced in recent years. In the last few years, the progressive censoring scheme has also received considerable attention. In this article, we mainly consider the Bayesian inference of the unknown parameters of the two-parameter exponential distribution under different hybrid and progressive censoring schemes. It is observed that, in general, the Bayes estimate and the associated credible interval of any function of the unknown parameters cannot be obtained in explicit form. We propose to use a Monte Carlo sampling procedure to compute the Bayes estimate and also to construct the associated credible interval. Monte Carlo simulation experiments have been performed to assess the effectiveness of the proposed method in the case of Type-I hybrid censored samples. The performances are quite satisfactory. A data analysis has been performed for illustrative purposes.
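The Monte Carlo step can be sketched for a simpler, standard case. This is an assumption-laden illustration, not the article's two-parameter setup: a one-parameter exponential with rate `lam`, a conjugate Gamma(a, b) prior, `d` observed failures and total time on test `T`, for which the posterior is Gamma(a + d, b + T); sampling from it gives the Bayes estimate and a credible interval.

```python
import random
import statistics

def bayes_rate(d, total_time, a=1.0, b=1.0, draws=20000, seed=0):
    """Posterior sampling for an exponential rate under a Gamma(a, b) prior.

    With d failures and total time on test total_time, the posterior of the
    rate is Gamma(a + d, rate b + total_time); gammavariate takes a scale.
    """
    rng = random.Random(seed)
    samples = [rng.gammavariate(a + d, 1.0 / (b + total_time)) for _ in range(draws)]
    samples.sort()
    est = statistics.fmean(samples)  # Bayes estimate: posterior mean
    lo = samples[int(0.025 * draws)]  # 95% credible interval from sample quantiles
    hi = samples[int(0.975 * draws)]
    return est, (lo, hi)

est, (lo, hi) = bayes_rate(d=10, total_time=50.0)
```

For this conjugate toy case the posterior mean is available in closed form, (a + d)/(b + total_time), which makes the sampling approach easy to check; in the article's censored two-parameter problem no such closed form exists, which is exactly why sampling is proposed.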

4.
In the 1960s Benoit Mandelbrot and Eugene Fama argued strongly in favor of the stable Paretian distribution as a model for the unconditional distribution of asset returns. Although a substantial body of subsequent empirical studies supported this position, the stable Paretian model plays a minor role in current empirical work.

While in the economics and finance literature stable distributions are virtually exclusively associated with stable Paretian distributions, in this paper we adopt a more fundamental view and extend the concept of stability to a variety of probabilistic schemes. These schemes give rise to alternative stable distributions, which we compare empirically using S&P 500 stock return data. In this comparison the Weibull distribution, associated with both the nonrandom-minimum and geometric-random-summation schemes, dominates the other stable distributions considered, including the stable Paretian model.

5.
This article provides a simple expression of the Fisher information matrix about the unknown parameter(s) of the underlying lifetime model under the generalized progressive hybrid censoring scheme. The expressions of the expected number of failures and the expected duration of life test are also derived. Exponential and Weibull lifetime models are considered for numerical illustrations. Finally, Fisher information-based optimal schemes are discussed for the Weibull lifetime model.
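For the plain Type-I censored exponential case, both quantities the abstract mentions have standard closed forms, which gives a feel for what is being generalized. This sketch uses the textbook result, not the article's generalized progressive hybrid expressions: for rate `lam` censored at time `tau`, the per-unit expected Fisher information about `lam` is (1 − exp(−lam·tau))/lam², and the expected number of failures among n units is n·(1 − exp(−lam·tau)).

```python
import math

def fisher_info_exp_type1(lam, tau, n=1):
    """Expected Fisher information and expected failures for n exponential
    lifetimes with rate lam under Type-I censoring at time tau (standard
    result; the score is delta/lam - min(X, tau), so I(lam) = P(X < tau)/lam**2).
    """
    p_fail = 1.0 - math.exp(-lam * tau)
    return n * p_fail / lam**2, n * p_fail

info, expected_failures = fisher_info_exp_type1(lam=0.5, tau=2.0, n=100)
```

Information-based optimal design then amounts to choosing the scheme parameters (here just `tau`) to trade expected test duration against `info`.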

6.
Allocation of samples in stratified and/or multistage sampling is one of the central issues of sampling theory. In a survey of a population, constraints on the precision of estimators of subpopulation parameters often have to be respected when allocating the sample. Such issues are often solved with mathematical programming procedures. In many situations it is desirable to allocate the sample in a way which forces the precision of estimates at the subpopulation level to be both optimal and identical, while constraints on the total (expected) size of the sample (or samples, in two-stage sampling) are imposed. Here our main concern is with two-stage sampling schemes. We show that, for a wide class of sampling plans, this problem has an elegant mathematical and computational solution. This is achieved through a suitable definition of the optimization problem, which enables it to be solved in a linear-algebra setting involving eigenvalues and eigenvectors of matrices defined in terms of some population quantities. As a final result, we obtain a very simple and relatively universal method for calculating the subpopulation optimal and equal-precision allocation which is based on one of the most standard algorithms of linear algebra (available, e.g., in R software). Theoretical solutions are illustrated through a numerical example based on the Labour Force Survey. Finally, we would like to stress that the method we describe accommodates, quite automatically, different levels of precision priority for subpopulations.
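The computational core the abstract alludes to is an ordinary eigenproblem. The sketch below is illustrative only and does not reproduce the paper's construction: `D` is a hypothetical positive matrix standing in for the matrix of population quantities, and the allocation shares are read off its leading (Perron) eigenvector, scaled to the total sample size.

```python
import numpy as np

# Hypothetical positive matrix of population quantities for 3 subpopulations.
D = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.8],
              [0.5, 0.8, 2.0]])
n = 1000  # total sample size constraint

# Leading eigenvector of a positive matrix is one-signed (Perron-Frobenius);
# take its absolute value and normalise so the allocation sums to n.
vals, vecs = np.linalg.eig(D)
v = np.abs(vecs[:, np.argmax(vals.real)].real)
allocation = n * v / v.sum()
```

This is the sense in which "one of the most standard algorithms of linear algebra" (an eigensolver, `eigen()` in R or `np.linalg.eig` here) delivers the allocation directly.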

7.
In this paper, we use a likelihood approach and the local influence method introduced by Cook [Assessment of local influence (with discussion). J Roy Statist Soc Ser B. 1986;48:133–149] to study a vector autoregressive (VAR) model. We present the maximum likelihood estimators and the information matrix. We establish the normal curvature and slope diagnostics for the VAR model under several perturbation schemes and use the Monte Carlo method to obtain benchmark values for the directional diagnostics and for identifying possibly influential observations. An empirical study using the VAR model to fit real data on monthly returns of IBM and the S&P 500 index illustrates the effectiveness of our proposed diagnostics.

8.
We propose a new procedure for detecting a patch of outliers or influential observations for the autoregressive integrated moving average (ARIMA) model using local influence analysis. It is shown that the dependence structure of time series data gives rise to masking or smearing effects when the local influence analysis is performed using current perturbation schemes. We suggest a new perturbation scheme to take into account the dependent structure of time series data, and employ the stepwise local influence method to give a diagnostic procedure. We show that the new perturbation scheme can avoid the smearing effects, and that the stepwise technique of local influence can successfully deal with masking effects. Various simulation studies are performed to show the efficiency of the proposed methodology, and a real example is used for illustration.

9.
In this work, we generalize the controlled calibration model by assuming replication on both variables. Likelihood-based methodology is used to estimate the model parameters and the Fisher information matrix is used to construct confidence intervals for the unknown value of the regressor variable. Further, we study the local influence diagnostic method which is based on the conditional expectation of the complete-data log-likelihood function related to the EM algorithm. Some useful perturbation schemes are discussed. A simulation study is carried out to assess the effect of the measurement error on the estimation of the parameter of interest. This new approach is illustrated with a real data set.

10.
We consider exact and approximate Bayesian computation in the presence of latent variables or missing data. Specifically, we explore the application of a posterior predictive distribution formula derived in Sweeting and Kharroubi (2003), which is a particular form of Laplace approximation, both as an importance function and as a proposal distribution. We show that this formula provides a stable importance function for use within poor man's data augmentation schemes and that it can also be used as a proposal distribution within a Metropolis-Hastings algorithm for models that are not analytically tractable. We illustrate both uses in the case of a censored regression model and a normal hierarchical model, with both normal and Student-t distributed random effects. Although the predictive distribution formula is motivated by regular asymptotic theory, it is not necessary that the likelihood has a closed form or that it possesses a local maximum.

11.
Hotelling's T² control chart, a direct analogue of the univariate Shewhart chart, is perhaps the most commonly used tool in industry for simultaneous monitoring of several quality characteristics. Recent studies have shown that using variable sampling size (VSS) schemes results in charts with more statistical power when detecting small to moderate shifts in the process mean vector. In this paper, we build a cost model of a VSS T² control chart for economic and economic-statistical design using the general model of Lorenzen and Vance [The economic design of control charts: A unified approach, Technometrics 28 (1986), pp. 3–11]. We optimize this model using a genetic algorithm approach. We also study the effects of the costs and operating parameters on the VSS T² parameters, and show, through an example, the advantage of economic design over statistical design for VSS T² charts, and measure the economic advantage of VSS sampling versus fixed-sample-size sampling.

12.
In many industrial quality control experiments and destructive stress tests, the only available data are successive minima (or maxima), i.e., record-breaking data. There are two sampling schemes used to collect record-breaking data: random sampling and inverse sampling. For random sampling, the total sample size is predetermined and the number of records is a random variable, while in inverse sampling the number of records to be observed is predetermined; thus the sample size is a random variable. The purpose of this paper is to determine, via simulations, which of the two schemes, if any, is more efficient. Since the two schemes are equivalent asymptotically, the simulations were carried out for small to moderate-sized record-breaking samples. Simulated biases and mean square errors of the maximum likelihood estimators of the parameters under the two sampling schemes were compared. In general, it was found that if the estimators were well behaved, then there was no significant difference between the mean square errors of the estimates for the two schemes. However, for certain distributions described by both a shape and a scale parameter, random sampling led to estimators that were inconsistent. On the other hand, the estimates obtained from inverse sampling were always consistent. Moreover, for moderate-sized record-breaking samples, the total sample size that needs to be observed is smaller for inverse sampling than for random sampling.
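The two collection schemes are easy to simulate. This sketch uses a hypothetical unit-rate exponential stress test; records here are successive minima, so each scheme keeps an observation only when it falls below the current minimum.

```python
import random

def random_sampling(n, rng):
    """Fixed total sample size n; the number of records is random."""
    records, current = [], float('inf')
    for _ in range(n):
        x = rng.expovariate(1.0)
        if x < current:          # new record (successive minimum)
            current = x
            records.append(x)
    return records

def inverse_sampling(k, rng):
    """Fixed number of records k; the total sample size is random."""
    records, current, n = [], float('inf'), 0
    while len(records) < k:
        x = rng.expovariate(1.0)
        n += 1
        if x < current:
            current = x
            records.append(x)
    return records, n

rng = random.Random(1)
records_fixed_n = random_sampling(100, rng)
records_fixed_k, total_n = inverse_sampling(5, rng)
```

Repeating either simulation and computing MLEs from the resulting record sequences is the comparison the paper carries out.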

13.
Current rotating sample surveys use various types of one-level rotation patterns, which have been widely applied in Western countries but still suffer from a series of problems. By unifying the various types of rotation patterns and studying them in a systematic and theoretical way, we derive a design method for two-dimensional balanced one-level rotation patterns and summarize its advantages in application. This design method not only unifies rotation-pattern design with the subsequent study of estimation methods, but also reduces the negative effects of various rotation biases and accurately measures the correlation between rotated samples, ultimately yielding more accurate estimators for continuous sampling.

14.
Effectively solving the label switching problem is critical for both Bayesian and frequentist mixture model analyses. In this article, a new relabeling method is proposed by extending a recently developed modal clustering algorithm. First, the posterior distribution is estimated by a kernel density estimate (KDE) from permuted MCMC or bootstrap samples of the parameters. Second, a modal EM algorithm is used to find the m! symmetric modes of the KDE. Finally, samples that ascend to the same mode are assigned the same label. Simulations and real data applications demonstrate that the new method provides more accurate estimates than many existing relabeling methods.
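A heavily simplified stand-in for the third step can convey the idea without the KDE or modal EM machinery. This is not the article's algorithm: here, for a hypothetical two-component mixture, each posterior draw of the component means is relabeled by the permutation that brings it closest to a single hypothetical reference mode at (0, 3), playing the role of "samples ascending to the same mode get the same label".

```python
import itertools

REFERENCE_MODE = (0.0, 3.0)  # hypothetical mode; the real method finds modes by modal EM

def relabel(draw):
    """Return the permutation of a draw closest (in squared distance) to the
    reference mode; draws near symmetric copies of the mode get the same labels."""
    return min(itertools.permutations(draw),
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p, REFERENCE_MODE)))

draws = [(2.9, 0.1), (0.2, 3.1), (3.2, -0.1)]
relabeled = [relabel(d) for d in draws]
```

After relabeling, all draws agree on which coordinate is "component 1", so posterior means computed componentwise are no longer contaminated by label switching.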

15.
The author provides an approximated solution for the filtering of a state-space model, where the hidden state process is a continuous-time pure jump Markov process and the observations come from marked point processes. Each state k corresponds to a different marked point process, defined by its conditional intensity function λ_k(t). When a state is visited by the hidden process, the corresponding marked point process is observed. The filtering equations are obtained by applying the innovation method and the integral representation theorem of a point process martingale. Since the filtering equations belong to the family of Kushner–Stratonovich equations, an iterative solution is calculated. The theoretical solution is approximated and a Monte Carlo integration technique employed to implement it. The sequential method has been tested on a simulated data set based on marked point processes widely used in the statistical analysis of seismic sequences: the Poisson model, the stress release model and the ETAS model.

16.
While well-chosen sampling schemes may substantially increase the efficiency of observational studies, some sampling schemes may instead decrease efficiency. Rules of thumb for choosing sampling schemes are available only for some special cases. In this paper we provide tools to compare the efficiencies, and cost-adjusted efficiencies, of different sampling schemes in order to facilitate this choice. The method can be used for both categorical and continuous outcome variables. Some examples are presented, focusing on data from ascertainment sampling schemes. A Monte Carlo method is used to overcome computational issues wherever needed. The results are illustrated in graphs.

17.
Parametric incomplete data models defined by ordinary differential equations (ODEs) are widely used in biostatistics to describe biological processes accurately. Their parameters are estimated on approximate models, whose regression functions are evaluated by a numerical integration method. Accurate and efficient estimations of these parameters are critical issues. This paper proposes parameter estimation methods involving either a stochastic approximation EM algorithm (SAEM) in the maximum likelihood estimation, or a Gibbs sampler in the Bayesian approach. Both algorithms involve the simulation of non-observed data with conditional distributions using Hastings–Metropolis (H–M) algorithms. A modified H–M algorithm, including an original local linearization scheme to solve the ODEs, is proposed to reduce the computational time significantly. The convergence on the approximate model of all these algorithms is proved. The errors induced by the numerical solving method on the conditional distribution, the likelihood and the posterior distribution are bounded. The Bayesian and maximum likelihood estimation methods are illustrated on a simulated pharmacokinetic nonlinear mixed-effects model defined by an ODE. Simulation results illustrate the ability of these algorithms to provide accurate estimates.

18.
In this article, we present a method for sample size calculation for studies involving both the intercept and slope parameters of a simple linear regression model. Some methods have been proposed in the literature to determine the adequate sample size; however, they are usually based on the slope only. We propose a method based on the F statistic that involves both the intercept and the slope parameters of the model. The validation process is conducted by fitting a simple linear regression model and by testing a zero-intercept and unity-slope hypothesis. Compared to a traditional method and using Monte Carlo simulations, encouraging results attest to the clear superiority of the proposed method. The article ends with a real-life example showing the value of the new method in practice.
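The joint test in the validation step is a standard nested-model F test. This sketch (on hypothetical simulated data, not the article's example) compares the restricted model y = x, which has no free parameters, to the fitted two-parameter line.

```python
import numpy as np

def f_stat_zero_intercept_unit_slope(x, y):
    """F statistic for H0: intercept = 0 and slope = 1 in y = b0 + b1*x.

    Compares the restricted model y = x (0 free parameters, RSS0) to the
    full OLS fit (2 parameters, RSS1 on n - 2 df): F = ((RSS0 - RSS1)/2) /
    (RSS1/(n - 2)), which is F(2, n - 2) under H0 with normal errors.
    """
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = np.sum((y - X @ beta) ** 2)
    rss0 = np.sum((y - x) ** 2)
    return ((rss0 - rss1) / 2) / (rss1 / (n - 2))

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = x + rng.normal(0, 1, 50)  # data generated under H0
F = f_stat_zero_intercept_unit_slope(x, y)
```

Sample size calculation then amounts to choosing n so that this test attains the desired power against a specified departure from (0, 1).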

19.
In this paper, we extend the censored linear regression model with normal errors to Student-t errors. A simple EM-type algorithm for iteratively computing maximum-likelihood estimates of the parameters is presented. To examine the performance of the proposed model, case-deletion and local influence techniques are developed to show its robust aspect against outlying and influential observations. This is done by the analysis of the sensitivity of the EM estimates under some usual perturbation schemes in the model or data and by inspecting some proposed diagnostic graphics. The efficacy of the method is verified through the analysis of simulated data sets and modelling a real data set first analysed under normal errors. The proposed algorithm and methods are implemented in the R package CensRegMod.

20.
A bivariate ranked set sample (BVRSS) matched-pair sign test is introduced and investigated for different ranking-based schemes. We show that this test is asymptotically more efficient and more powerful than its counterpart sign test based on a bivariate simple random sample (BVSRS) for different ranking schemes. The asymptotic null distribution and the efficiency of the test are derived. Pitman's asymptotic relative efficiency is used to compare the asymptotic performance of the matched-pair sign test using BVRSS versus BVSRS in all ranking cases. For small sample sizes, the bootstrap method is used to estimate P-values. Numerical comparisons are used to gain insight into the efficiency of the BVRSS sign test compared to the BVSRS sign test. Our numerical and theoretical results indicate that using any ranking scheme of BVRSS for the matched-pair sign test is more efficient than using BVSRS.
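The small-sample resampling step can be sketched for the ordinary matched-pair sign test. This is illustrative only (hypothetical paired differences, and a sign-flip Monte Carlo approximation of the null rather than the paper's BVRSS bootstrap): under H0 each nonzero difference is equally likely to be positive or negative, so the P-value is approximated by resampling signs.

```python
import random

def sign_test_resampled_p(diffs, reps=5000, seed=0):
    """Two-sided matched-pair sign test P-value via sign resampling under H0."""
    rng = random.Random(seed)
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    observed_pos = sum(1 for d in nonzero if d > 0)
    stat = abs(observed_pos - n / 2)          # distance from the null expectation
    exceed = 0
    for _ in range(reps):
        s = sum(1 for _ in range(n) if rng.random() < 0.5)  # Binomial(n, 1/2)
        if abs(s - n / 2) >= stat:
            exceed += 1
    return exceed / reps

p = sign_test_resampled_p([1.2, -0.5, 0.8, 2.1, 0.3, -0.1, 0.9, 1.5])
```

With n = 8 and 6 positive differences the exact two-sided binomial P-value is 74/256 ≈ 0.289, which the resampled estimate should approach.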
