Similar Documents
Found 20 similar documents (search time: 46 ms)
1.
Abstract

This paper investigates the statistical analysis of grouped accelerated temperature cycling test data when the product lifetime follows a Weibull distribution. A log-linear acceleration equation is derived from the Coffin-Manson model. The problem is transformed to a constant-stress accelerated life test with grouped data and multiple acceleration variables. The Jeffreys prior and reference priors are derived. Maximum likelihood estimates and Bayesian estimates under the objective priors are obtained by applying the technique of data augmentation. A simulation study shows that both methods perform well when the sample size is large, and that the Bayesian method performs better for small sample sizes.
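For reference, the Coffin-Manson relation in its standard textbook form (a sketch of the usual version, not necessarily the exact variant used in the paper) yields a life-stress equation that is linear on the log scale:

$$N_f = A\,(\Delta T)^{-b} \;\Longrightarrow\; \log N_f = \log A - b \log \Delta T,$$

where $N_f$ is the number of cycles to failure, $\Delta T$ the temperature range of a cycle, and $A$, $b$ material constants; generalized versions add terms for cycling frequency and maximum temperature.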

2.
Abstract

Markov processes offer a useful basis for modeling the progression of organisms through successive stages of their life cycle. When organisms are examined intermittently in developmental studies, likelihoods can be constructed based on the resulting panel data in terms of transition probability functions. In some settings, however, organisms cannot be tracked individually because distinct individuals are difficult to identify, and in such cases aggregate counts of the number of organisms in different stages of development are recorded at successive time points. We consider the setting in which such aggregate counts are available for each of a number of tanks in a developmental study. We develop methods that accommodate clustering of the transition rates within tanks, both through a marginal modeling approach with robust variance estimation and through a random effects model. Composite likelihood is proposed as a basis of inference in both settings. An extension which incorporates mortality is also discussed. The proposed methods are shown to perform well in empirical studies and are applied in an illustrative example on the growth of the Arabidopsis thaliana plant.
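As a rough illustration of the likelihood construction described above, the sketch below builds stage-occupancy probabilities for a progressive Markov model from a transition intensity matrix and evaluates one multinomial log-likelihood term for aggregate counts. The three-stage model, rates, and counts are hypothetical; the paper's composite likelihood would combine such terms across tanks and time points while treating them as if independent.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-stage progressive life cycle: 1 -> 2 -> 3 (absorbing).
# Q is the transition intensity (generator) matrix; rows sum to zero.
lam1, lam2 = 0.8, 0.5  # assumed stage-exit rates
Q = np.array([[-lam1,  lam1,  0.0],
              [  0.0, -lam2, lam2],
              [  0.0,   0.0,  0.0]])

def occupancy_probs(t):
    """Stage-occupancy probabilities at time t for organisms that are all
    in stage 1 at time 0, via the transition matrix P(t) = exp(Qt)."""
    return expm(Q * t)[0]

def aggregate_loglik(counts, t):
    """Multinomial log-likelihood term (constants dropped) for the aggregate
    stage counts of one tank at one inspection time."""
    return float(np.sum(counts * np.log(occupancy_probs(t))))

# A composite likelihood would sum such terms over tanks and time points.
print(aggregate_loglik(np.array([10, 25, 15]), t=2.0))
```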

3.
ABSTRACT

In this paper, we investigate the consistency of Expectation Maximization (EM) algorithm-based information criteria for model selection with missing data. The criteria penalize the conditional expectation of the complete-data log-likelihood given the observed data, taken with respect to the conditional density of the missing data. We present asymptotic properties related to maximum likelihood estimation in the presence of incomplete data, and we provide sufficient conditions for the consistency of model selection by minimizing the information criteria. Their finite sample performance is illustrated through simulation and real data studies.
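In generic notation (an assumed form consistent with the description above, not a quotation from the paper), such criteria penalize the EM surrogate $Q$ evaluated at the maximum likelihood estimate:

$$\mathrm{IC} = -2\,Q\!\left(\hat{\theta};\hat{\theta}\right) + c_n \dim(\theta), \qquad Q(\theta;\theta') = \mathbb{E}\!\left[\log L_c(\theta; Y, Z) \mid Y; \theta'\right],$$

where $Y$ denotes the observed data, $Z$ the missing data, $L_c$ the complete-data likelihood, and $c_n$ a penalty weight (e.g., $2$ for an AIC-type criterion, $\log n$ for a BIC-type criterion).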

4.
Abstract

This article focuses on estimation of multivariate simple linear profiles. Since outliers may hamper the expected performance of ordinary regression estimators, this study resorts to robust estimators as a remedy for the estimation problem in the presence of contaminated observations. More specifically, three robust estimators, M, S, and MM, are employed. Extensive simulation runs show that in the absence of outliers, or for small amounts of contamination, the robust methods perform as well as the classical least-squares method, while for medium and large amounts of contamination the proposed estimators perform considerably better than the classical method.
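For intuition, the sketch below contrasts ordinary least squares with an M-estimator on contaminated data using statsmodels. It covers only the univariate-response M case (S and MM estimators are not available in statsmodels), and the simulated data are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)
y[:5] += 15  # inject a few outliers

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()                                   # classical least squares
m_fit = sm.RLM(y, X, M=sm.robust.norms.TukeyBiweight()).fit()  # robust M-estimation

print("OLS coefficients:  ", ols_fit.params)  # pulled toward the outliers
print("M-est coefficients:", m_fit.params)    # close to (2.0, 0.5)
```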

5.
ABSTRACT

In incident cohort studies, survival data often include subjects who have experienced an initial event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. When disease registries or surveillance systems collect data based on incidence occurring within a specific calendar time interval, the initial event is usually subject to double truncation. Furthermore, since the second duration process is observable only if the first event has occurred, double truncation and dependent censoring arise. In this article, under these two sampling biases and with an unspecified distribution of the truncation variables, we propose a nonparametric estimator of the joint survival function of the two successive duration times using the inverse-probability-weighted (IPW) approach. The consistency of the proposed estimator is established. Based on the estimated marginal survival functions, we also propose a two-stage procedure for estimating the parameters of a copula model. The bootstrap method is used to construct confidence intervals. Numerical studies demonstrate that the proposed estimation approaches perform well with moderate sample sizes.
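A generic inverse-probability-weighted estimator of this kind (an assumed form, shown only to fix ideas, not the paper's exact estimator) reweights each observed pair by its estimated probability of being sampled:

$$\hat{S}(x, y) = \left(\sum_{i=1}^{n} \hat{\pi}_i^{-1}\right)^{-1} \sum_{i=1}^{n} \frac{\mathbf{1}\{X_i > x,\; Y_i > y\}}{\hat{\pi}_i},$$

where $\hat{\pi}_i$ estimates the probability that subject $i$ is observed, accounting for the double truncation and dependent censoring.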

6.
Abstract

Comparing two hazard rate functions to evaluate a treatment effect is an important issue in survival analysis. It is quite common for the two hazard rate functions to cross each other at one or more unknown time points, representing temporal changes in the treatment effect. In certain applications, besides survival data, we also have related longitudinal data available on some time-dependent covariates. In such cases, a joint model that accommodates both types of data allows us to infer the association between the survival and longitudinal data and to assess the treatment effect better. In this paper, we propose an approach for comparing two crossing hazard rate functions by jointly modelling survival and longitudinal data. The parameters of the proposed joint model are estimated by maximum likelihood via the EM algorithm. Asymptotic properties of the maximum likelihood estimators are studied. To illustrate the virtues of the proposed method, we compare its performance with several existing methods in a simulation study. The proposed method is also demonstrated using a real dataset obtained from an HIV clinical trial.
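A common shared-parameter formulation of such joint models (a generic sketch from the literature, not necessarily the paper's exact specification) links the two submodels through the current value of the longitudinal trajectory:

$$y_i(t) = m_i(t) + \varepsilon_i(t), \qquad \lambda_i(t) = \lambda_0(t)\,\exp\{\gamma^{\top} w_i + \alpha\, m_i(t)\},$$

where $m_i(t)$ is the subject-specific trajectory of the time-dependent covariate, $\lambda_0$ a baseline hazard, and $\alpha$ the association parameter between the longitudinal and survival processes.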

7.
ABSTRACT

There is growing interest in fully MR-based radiotherapy, for which the most important development needed is improved bone tissue estimation; the existing model-based methods perform poorly on bone tissues. This paper aims at obtaining improved bone tissue estimation. A skew-Gaussian mixture model and a Gaussian mixture model are proposed to investigate CT image estimation from MR images by partitioning the data into two major tissue types. The performance of the proposed models was evaluated on real data using leave-one-out cross-validation. In comparison with the existing model-based approaches, the model-based partitioning approach performed better in bone tissue estimation, especially for dense bone tissue.
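The skew-Gaussian variant is not available in standard libraries, but the plain Gaussian-mixture version of the idea can be sketched with scikit-learn: fit a two-component mixture to joint (MR, CT) voxel features, then predict CT from MR via the mixture's conditional mean. All data below are synthetic stand-ins for co-registered image pairs.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical voxel features: MR intensity paired with CT value.
mr = np.concatenate([rng.normal(0.3, 0.05, 500),   # soft-tissue-like cluster
                     rng.normal(0.8, 0.10, 200)])  # bone-like cluster
ct = np.concatenate([rng.normal(40, 10, 500), rng.normal(900, 150, 200)])
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(np.column_stack([mr, ct]))

def predict_ct(mr_val):
    """E[CT | MR] under the fitted two-component Gaussian mixture."""
    cond_means, dens = [], []
    for k in range(2):
        mu_m, mu_c = gmm.means_[k]
        s_mm, s_mc = gmm.covariances_[k][0, 0], gmm.covariances_[k][0, 1]
        cond_means.append(mu_c + s_mc / s_mm * (mr_val - mu_m))
        # marginal density of MR under component k (2*pi cancels in normalization)
        dens.append(gmm.weights_[k] * np.exp(-0.5 * (mr_val - mu_m) ** 2 / s_mm) / np.sqrt(s_mm))
    w = np.array(dens) / np.sum(dens)
    return float(np.dot(w, cond_means))

print(predict_ct(0.75))  # should land near the bone-like CT values
```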

8.
In partly linear models, the dependence of the response y on (x^T, t) is modeled through the relationship y = x^T β + g(t) + ε, where ε is independent of (x^T, t). We are interested in developing an estimation procedure that preserves the flexibility of partly linear models, studied by several authors, while allowing some variables to belong to a non-Euclidean space. The motivating application of this paper deals with the explanation of atmospheric SO2 pollution incidents using these models when some of the predictive variables take values on a cylinder. In this paper, the estimators of β and g are constructed when the explanatory variables t take values on a Riemannian manifold, and the asymptotic properties of the proposed estimators are obtained under suitable conditions. We illustrate the use of this estimation approach on an environmental data set and explore the performance of the estimators through a simulation study.

9.
Abstract

In this article, we propose an automatic selection of the bandwidth of recursive kernel density estimators for spatial data defined by a stochastic approximation algorithm. We show that, using the selected bandwidth and the stepsize that minimize the MWISE (Mean Weighted Integrated Squared Error), the recursive estimator is comparable to the nonrecursive one in terms of estimation error and much better in terms of computational cost. In addition, we establish a central limit theorem for the nonparametric recursive density estimator under some mild conditions.
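A minimal sketch of a recursive kernel density estimator driven by a stochastic approximation update, with illustrative stepsize and bandwidth sequences rather than the MWISE-optimal choices derived in the paper:

```python
import numpy as np

def gauss_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def recursive_kde(samples, grid, gamma_exp=1.0, h_exp=0.2):
    """Recursive estimate on `grid`, one sample at a time:
    f_n = (1 - g_n) f_{n-1} + g_n K_{h_n}(x - X_n),
    with illustrative stepsizes g_n = n^{-gamma_exp} and bandwidths
    h_n = n^{-h_exp} (not the paper's optimal sequences)."""
    f = np.zeros_like(grid)
    for n, x_n in enumerate(samples, start=1):
        g_n = n ** (-gamma_exp)
        h_n = n ** (-h_exp)
        # each update costs O(len(grid)); earlier samples are never revisited
        f = (1 - g_n) * f + g_n * gauss_kernel((grid - x_n) / h_n) / h_n
    return f

rng = np.random.default_rng(2)
grid = np.linspace(-4, 4, 200)
est = recursive_kde(rng.normal(size=1000), grid)
print(est.sum() * (grid[1] - grid[0]))  # integrates to roughly 1
```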

10.
In this article, we study Bayesian estimation of the covariance matrix Σ and the precision matrix Ω (the inverse of the covariance matrix) in the star-shaped model with missing data. Based on a Cholesky-type decomposition of the precision matrix, Ω = Ψ′Ψ, where Ψ is a lower triangular matrix with positive diagonal elements, we develop the Jeffreys prior and a reference prior for Ψ. We then introduce a class of priors for Ψ, which includes the invariant Haar measures, the Jeffreys prior, and the reference prior. The posterior properties are discussed, and closed-form expressions for the Bayesian estimators of the covariance matrix Σ and the precision matrix Ω are derived under the Stein loss, entropy loss, and symmetric loss. Some simulation results are given for illustration.
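For reference, the Stein loss for a covariance estimator, one of the three losses mentioned, is commonly defined in the decision-theoretic literature as

$$L(\hat{\Sigma}, \Sigma) = \operatorname{tr}\!\left(\hat{\Sigma}\Sigma^{-1}\right) - \log\!\left|\hat{\Sigma}\Sigma^{-1}\right| - p,$$

where $p$ is the dimension; the entropy loss is usually the analogous expression with the roles of the estimator and the target interchanged. (These are standard definitions, not quotations from the paper.)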

11.
Abstract

In this article, we consider a panel data partially linear regression model with fixed effects and a nonparametric time trend function. The data may be dependent across individuals through both the linear regressors and the error components. Unlike methods based on nonparametric smoothing, a difference-based method is proposed to estimate the linear regression coefficients of the model, which avoids bandwidth selection. Here the difference technique is employed to completely eliminate the effect of the nonparametric time trend, though not the fixed effects, on the estimation of the linear regression coefficients; a more efficient estimator of the parametric part is therefore anticipated, and the simulation results confirm this. For the nonparametric component, the polynomial spline technique is implemented. The asymptotic properties of the estimators of both the parametric and nonparametric parts are presented. We also show how to select informative covariates in the linear part by applying smoothly clipped absolute deviation (SCAD) penalized estimation to a difference-based least-squares objective function, and the resulting estimators perform asymptotically as well as the oracle procedure in terms of selecting the correct model.
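A Yatchew-style difference-based estimator illustrates the core idea (a minimal sketch that ignores the fixed effects and cross-sectional dependence handled in the paper): ordering the observations by the time variable and first-differencing approximately cancels the smooth trend g, leaving a linear regression in the differenced covariates.

```python
import numpy as np

def difference_based_beta(y, x, t):
    """Estimate beta in y = x @ beta + g(t) + e by first differencing
    observations ordered by t, which (approximately) cancels a smooth g."""
    order = np.argsort(t)
    dy = np.diff(y[order])
    dx = np.diff(x[order], axis=0)
    beta, *_ = np.linalg.lstsq(dx, dy, rcond=None)
    return beta

rng = np.random.default_rng(4)
n = 500
t = rng.uniform(0, 1, n)
x = rng.normal(size=(n, 2))
y = x @ np.array([1.5, -2.0]) + np.sin(2 * np.pi * t) + 0.5 * rng.normal(size=n)
print(difference_based_beta(y, x, t))  # close to [1.5, -2.0]
```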

12.
ABSTRACT

This paper proposes a hysteretic autoregressive model with a GARCH specification and a skew Student's t error distribution for financial time series. With an integrated hysteresis zone, this model allows regime switching in both the conditional mean and the conditional volatility to be delayed when the hysteresis variable lies within the hysteresis zone. We perform Bayesian estimation via an adaptive Markov chain Monte Carlo sampling scheme. The proposed Bayesian method allows simultaneous inference for all unknown parameters, including the threshold values and a delay parameter. For model selection, we propose a numerical approximation of the marginal likelihoods used to form posterior odds. The proposed methodology is illustrated using simulation studies and two major Asian stock basis series. We conduct a model comparison among variant hysteresis and threshold GARCH models based on posterior odds ratios, finding strong evidence of a hysteretic effect and some asymmetric heavy-tailedness. Compared with multi-regime threshold GARCH models, this new class of models is more suitable for describing the real data sets. Finally, we employ Bayesian forecasting methods in a Value-at-Risk study of the return series.

13.
ABSTRACT

Conditional risk measurement plays an important role in financial regulation and depends on volatility estimation. A new class of parametric models, the Generalized Autoregressive Score (GAS) models, has been successfully applied with different error densities to various problems of time series prediction, in particular volatility modeling and VaR estimation. To improve the estimation accuracy of the GAS model, this study proposes a semi-parametric approach in which LS-SVR and FS-LS-SVR are applied to the GAS model to estimate the conditional VaR. In particular, we fit the GAS(1,1) model to the return series using three different distributions; LS-SVR and FS-LS-SVR then approximate the GAS(1,1) model. An empirical study was performed to illustrate the effectiveness of the proposed method. More precisely, the experimental results from four stock index return series suggest that the hybrid models GAS-LS-SVR and GAS-FS-LS-SVR provide improved performance in VaR estimation.
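For reference, the GAS updating equation of Creal, Koopman, and Lucas, which the GAS(1,1) specification above instantiates, drives the time-varying parameter with the scaled score of the conditional density:

$$f_{t+1} = \omega + A\,s_t + B\,f_t, \qquad s_t = S_t\,\nabla_t, \qquad \nabla_t = \frac{\partial \log p(y_t \mid f_t; \theta)}{\partial f_t},$$

where $f_t$ is the time-varying parameter (e.g., the conditional variance) and $S_t$ a scaling matrix, often the inverse Fisher information.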

14.
ABSTRACT

This article is concerned with inference in the linear model with dyadic data. Dyadic data are indexed by pairs of "units"; for example, trade data between pairs of countries. Because observations with a unit in common may be correlated, standard inference procedures may not perform as expected. We establish a range of conditions under which a t-statistic with the dyadic-robust variance estimator of Fafchamps and Gubert is asymptotically normal. Using our theoretical results as a guide, we perform a simulation exercise to study the validity of the normal approximation, as well as the performance of a novel finite-sample correction. We conclude with guidelines for applied researchers wishing to use the dyadic-robust estimator for inference.

15.
ABSTRACT

We present a decomposition of prediction error for the multilevel model in the context of predicting a future observable y*_j in the jth group of a hierarchical dataset. The multilevel prediction rule is used for prediction, and the components of prediction error are estimated via a simulation study that spans the various combinations of level-1 (individual) and level-2 (group) sample sizes and different intraclass correlation values. Additionally, analytical results quantify the increase in prediction mean squared error (PMSE) attributable to prediction error bias. The components of prediction error provide information on the cost of parameter estimation versus data imputation for predicting future values in a hierarchical dataset; specifically, the cost of parameter estimation is very small compared with that of data imputation.

16.
Abstract

In this paper, we propose maximum entropy in the mean methods for propensity score matching classification problems. We provide a new methodological approach and estimation algorithms to explicitly handle cases in which data are available: (i) in interval form; (ii) with bounded measurement or observational errors; or (iii) both as intervals and with bounded errors. We show that entropy in the mean methods for these three cases generally outperform benchmark error-free approaches.

17.
Abstract

In risk assessment, it is often desired to make inferences on the minimum dose levels (benchmark doses or BMDs) at which a specific benchmark risk (BMR) is attained. The estimation of, and inference on, BMDs are well understood in the case of an adverse response to a single-exposure agent. However, the theory of finding BMDs and making inferences on them is much less developed for cases where the adverse effects of two hazardous agents are studied simultaneously. Deutsch and Piegorsch [Deutsch, R. C., and W. W. Piegorsch. 2012. Benchmark dose profiles for joint-action quantal data in quantitative risk assessment. Biometrics 68(4):1313–22] proposed a benchmark modeling paradigm in the dual-exposure setting, adapted from the single-exposure setting, and developed a strategy for conducting full benchmark analysis with joint-action quantal data; they further extended the proposed benchmark paradigm to continuous response outcomes [Deutsch, R. C., and W. W. Piegorsch. 2013. Benchmark dose profiles for joint-action continuous data in quantitative risk assessment. Biometrical Journal 55(5):741–54]. In their 2012 article, Deutsch and Piegorsch worked exclusively with the complementary log link for modeling the risk with quantal data. The focus of the current paper is on the logit link; in particular, we consider an Abbott-adjusted [Abbott, W. S. 1925. A method of computing the effectiveness of an insecticide. Journal of Economic Entomology 18(2):265–7] log-logistic model for the analysis of quantal data with nonzero background response. We discuss the estimation of the benchmark profile (BMP), a collection of benchmark points which induce the prespecified BMR, and propose different methods for building benchmark inferences in studies involving two hazardous agents. We perform Monte Carlo simulation studies to evaluate the characteristics of the confidence limits. An example is given to illustrate the use of the proposed methods.
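In its single-agent form (shown for orientation; the paper's version extends this to two exposure variables), an Abbott-adjusted log-logistic model writes the risk at dose $d$ as

$$R(d) = \gamma_0 + (1 - \gamma_0)\,\frac{1}{1 + \exp\{-(\beta_0 + \beta_1 \log d)\}},$$

where $\gamma_0$ is the nonzero background response and the second term is the excess risk modeled on the logit scale.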

18.
ABSTRACT

The performances of six confidence intervals for estimating the arithmetic mean of a lognormal distribution are compared using simulated data. The first interval considered is based on an exact method and is recommended in U.S. EPA guidance documents for calculating upper confidence limits for contamination data. Two intervals are based on asymptotic properties due to the Central Limit Theorem, and the other three are based on transformations and maximum likelihood estimation. The effects of departures from lognormality on the performance of these intervals are also investigated, with the gamma distribution used to represent such departures. The average width and coverage of each confidence interval are reported for varying mean, variance, and sample size. In the lognormal case, the exact interval gives good coverage, but for small sample sizes and large variances the confidence intervals are too wide; in these cases, an approximation that incorporates the sampling variability of the sample variance tends to perform better. When the underlying distribution is a gamma distribution, the intervals based on the Central Limit Theorem tend to perform better than those based on lognormal assumptions.
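For reference, the quantity being covered is the lognormal arithmetic mean, and one commonly compared transformation-based interval is Cox's approximation (stated here from the general literature, not from the paper itself):

$$E[X] = \exp\!\left(\mu + \tfrac{\sigma^2}{2}\right), \qquad \exp\!\left(\hat{\mu} + \frac{\hat{\sigma}^2}{2} \;\pm\; z_{1-\alpha/2}\,\sqrt{\frac{\hat{\sigma}^2}{n} + \frac{\hat{\sigma}^4}{2(n-1)}}\right),$$

where $\hat{\mu}$ and $\hat{\sigma}^2$ are the sample mean and variance of the log-transformed data.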

19.
Abstract

In this article, we focus on variable selection for the semiparametric varying coefficient partially linear model with responses missing at random. Variable selection is based on modal regression, where the nonparametric functions are approximated by a B-spline basis. The proposed procedure uses the SCAD penalty to achieve variable selection for the parametric and nonparametric components simultaneously. Furthermore, we establish the consistency, sparsity, and asymptotic normality of the resulting estimators. The penalized estimates of the proposed method are computed via an EM algorithm. Simulation studies are carried out to assess the finite sample performance of the proposed variable selection procedure.
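For reference, the SCAD penalty used here is standardly defined through its derivative (the form of Fan and Li):

$$p'_{\lambda}(t) = \lambda\left\{ I(t \le \lambda) + \frac{(a\lambda - t)_{+}}{(a-1)\lambda}\, I(t > \lambda) \right\}, \qquad t > 0,\; a > 2,$$

with $a = 3.7$ a common default; it penalizes like the lasso near zero but levels off for large coefficients, which underlies the oracle-type selection properties.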

20.
Abstract

In a quantitative linear model with errors following a stationary Gaussian, first-order autoregressive or AR(1) process, Generalized Least Squares (GLS) on raw data and Ordinary Least Squares (OLS) on prewhitened data are efficient methods of estimation of the slope parameters when the autocorrelation parameter of the error AR(1) process, ρ, is known. In practice, ρ is generally unknown. In the so-called two-stage estimation procedures, ρ is estimated first, and the estimate of ρ is then used to transform the data before the slope parameters are estimated by OLS on the transformed data. Different estimators of ρ have been considered in previous studies. In this article, we study nine two-stage estimation procedures for their efficiency in estimating the slope parameters. Six of them (three noniterative, three iterative) are based on three estimators of ρ that have been considered previously. Two more (one noniterative, one iterative) are based on a new estimator of ρ that we propose: the sample autocorrelation coefficient of the OLS residuals at lag 1, denoted r(1). Lastly, REstricted Maximum Likelihood (REML) represents a different type of two-stage estimation procedure whose efficiency has not yet been compared with the others. We also study the validity of the testing procedures derived from GLS and the nine two-stage estimation procedures. Efficiency and validity are analyzed in a Monte Carlo study. Three types of explanatory variable x in a simple quantitative linear model with AR(1) errors are considered in the time domain: Case 1, x is fixed; Case 2, x is purely random; and Case 3, x follows an AR(1) process with the same autocorrelation parameter value as the error AR(1) process. In a preliminary step, the number of inadmissible estimates and the efficiency of the different estimators of ρ are compared empirically, whereas their approximate expected values in finite samples and their asymptotic variances are derived theoretically. Thereafter, the efficiency of the estimation procedures and the validity of the derived testing procedures are discussed in terms of the sample size and the magnitude and sign of ρ. The noniterative two-stage estimation procedure based on the new estimator of ρ is shown to be more efficient for moderate values of ρ at small sample sizes. With the exception of small sample sizes, REML and its derived F-test perform the best overall. The asymptotic equivalence of the two-stage estimation procedures, other than REML, is observed empirically. Differences related to the nature of the explanatory variable, fixed or random (uncorrelated or autocorrelated), are also discussed.
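The noniterative two-stage procedure based on r(1) can be sketched as follows; the Prais-Winsten-type transform and the simulated Case 1 data are illustrative choices, not a reproduction of the paper's exact setup.

```python
import numpy as np

def two_stage_ar1(y, X):
    """Noniterative two-stage estimation for y = X b + e, e_t = rho e_{t-1} + u_t.
    Step 1: OLS, then estimate rho by the lag-1 sample autocorrelation r(1)
    of the OLS residuals. Step 2: transform the data and refit by OLS."""
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b_ols
    r1 = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)  # r(1): estimator of rho

    # Prais-Winsten-type transform: scale the first row, quasi-difference the rest.
    n = len(y)
    y_t, X_t = np.empty(n), np.empty_like(X)
    y_t[0] = np.sqrt(1 - r1 ** 2) * y[0]
    X_t[0] = np.sqrt(1 - r1 ** 2) * X[0]
    y_t[1:] = y[1:] - r1 * y[:-1]
    X_t[1:] = X[1:] - r1 * X[:-1]

    b_2s, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)
    return b_2s, r1

# Simulated example: fixed regressor (Case 1) with rho = 0.6.
rng = np.random.default_rng(3)
n, rho = 100, 0.6
x = np.linspace(0, 1, n)
e = np.zeros(n)
e[0] = rng.normal() / np.sqrt(1 - rho ** 2)  # stationary start
for s in range(1, n):
    e[s] = rho * e[s - 1] + rng.normal()
y = 1.0 + 2.0 * x + e
X = np.column_stack([np.ones(n), x])
print(two_stage_ar1(y, X))  # slope estimates near (1.0, 2.0), r1 near 0.6
```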
