Similar Documents
Found 20 similar documents (search time: 171 ms)
1.
Statistical inferences for probability distributions involving truncation parameters have received recent attention in the literature. One aspect of these inferences is the question of shortest confidence intervals for parameters or parametric functions of these models. The topic is a classical one, and the approach follows the usual theory. In all literature treatments the authors consider specific models and derive confidence intervals (not necessarily shortest). All of these models can, however, be considered as special cases of a more general one. The use of this general model makes it easy to obtain shortest confidence intervals and unifies the different approaches. In addition, it provides a useful technique for classroom presentation of the topic.
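As a concrete special case of such a truncation-parameter model (the abstract's general model is not reproduced here), consider X_i ~ Uniform(0, theta): the pivot M/theta based on the sample maximum M has an increasing density t**n on (0, 1), so the shortest 1-alpha interval pushes the interval to the upper end, giving [M, M * alpha**(-1/n)]. A minimal sketch with a simulated coverage check:

```python
import random

def shortest_ci_uniform(sample, alpha=0.05):
    """Shortest 1-alpha CI for theta when X_i ~ Uniform(0, theta).

    The pivot M/theta (M = sample maximum) has CDF t**n on (0, 1); among
    intervals [M/b, M/a] with b**n - a**n = 1 - alpha, the length is
    minimized at b = 1, which gives [M, M * alpha**(-1/n)].
    """
    n = len(sample)
    m = max(sample)
    return m, m * alpha ** (-1.0 / n)

# Empirical check that the interval attains its nominal coverage.
random.seed(1)
theta, n, alpha, reps = 3.0, 10, 0.05, 20000
hits = 0
for _ in range(reps):
    lo, hi = shortest_ci_uniform([random.uniform(0, theta) for _ in range(n)], alpha)
    hits += lo <= theta <= hi
coverage = hits / reps
```

The simulated coverage should be very close to the nominal 95%, since the interval is exact for this model.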

2.
A family of multiplicative survival models is obtained for the analysis of clinical trial data in which a substantial proportion c of patients respond favorably to treatment with long-term survivorship. The model (MWSM), representing the hazard function for all patients as a function of c and a Weibull density, is developed; the distributional parameters and c are regressed on covariates and estimated by the method of maximum likelihood. The MWSM hazard function is monotonically increasing, then peaks and decreases thereafter, if the shape parameter exceeds unity, and is monotonically decreasing otherwise. Mortality rates with similar behavior have been empirically observed in cancer clinical trial data. A nonproportional or proportional hazards model results depending on whether or not any Weibull parameter is regressed on covariates. Tests of hypotheses and confidence intervals for c and the p-quantiles are obtained.
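The qualitative behaviour of such a hazard can be checked numerically. The sketch below assumes the simple mixture form S(t) = c + (1 - c) * S_W(t), with illustrative values c = 0.3 and Weibull scale 1 (the paper's exact parameterization and covariate regression structure are not reproduced):

```python
import math

def mixture_hazard(t, c, shape, scale):
    """Hazard of the mixture survival model S(t) = c + (1 - c) * S_w(t),
    where a proportion c are long-term survivors and the remainder have
    Weibull(shape, scale) lifetimes."""
    z = (t / scale) ** shape
    s_w = math.exp(-z)                                        # Weibull survival
    f_w = (shape / scale) * (t / scale) ** (shape - 1) * s_w  # Weibull density
    return (1 - c) * f_w / (c + (1 - c) * s_w)

grid = [0.05 * i for i in range(1, 200)]
h_inc = [mixture_hazard(t, c=0.3, shape=2.0, scale=1.0) for t in grid]  # shape > 1
h_dec = [mixture_hazard(t, c=0.3, shape=0.8, scale=1.0) for t in grid]  # shape < 1
peak = h_inc.index(max(h_inc))
```

With shape above unity the hazard rises to an interior peak and then falls toward zero (the denominator tends to c while the density vanishes); with shape below unity it is monotonically decreasing, matching the abstract's description.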

3.
In survey sampling, policymaking regarding the allocation of resources to subgroups (called small areas) or the determination of subgroups with specific properties in a population should be based on reliable estimates. Information, however, is often collected at a different scale than that of these subgroups; hence, the estimation can only be obtained on finer scale data. Parametric mixed models are commonly used in small‐area estimation. The relationship between predictors and response, however, may not be linear in some real situations. Recently, small‐area estimation using a generalised linear mixed model (GLMM) with a penalised spline (P‐spline) regression model, for the fixed part of the model, has been proposed to analyse cross‐sectional responses, both normal and non‐normal. However, there are many situations in which the responses in small areas are serially dependent over time. Such a situation is exemplified by a data set on the annual number of visits to physicians by patients seeking treatment for asthma, in different areas of Manitoba, Canada. In cases where covariates that can possibly predict physician visits by asthma patients (e.g. age and genetic and environmental factors) may not have a linear relationship with the response, new models for analysing such data sets are required. In the current work, using both time‐series and cross‐sectional data methods, we propose P‐spline regression models for small‐area estimation under GLMMs. Our proposed model covers both normal and non‐normal responses. In particular, the empirical best predictors of small‐area parameters and their corresponding prediction intervals are studied with the maximum likelihood estimation approach being used to estimate the model parameters. The performance of the proposed approach is evaluated using some simulations and also by analysing two real data sets (precipitation and asthma).

4.
Empirical likelihood-based inference for the nonparametric components in additive partially linear models is investigated. An empirical likelihood approach to construct the confidence intervals of the nonparametric components is proposed when the linear covariate is measured with and without errors. We show that the proposed empirical log-likelihood ratio is asymptotically standard chi-squared without requiring the undersmoothing of the nonparametric components. Then, it can be directly used to construct the confidence intervals for the nonparametric functions. A simulation study indicates that, compared with a normal approximation-based approach, the proposed method works better in terms of coverage probabilities and widths of the pointwise confidence intervals.
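Several of these abstracts rest on the empirical log-likelihood ratio being asymptotically chi-squared. As a minimal illustration (for a scalar mean, not the additive partially linear model of the paper), the ratio can be computed through its dual problem with a damped Newton iteration:

```python
import math
import random

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean mu, via the dual
    problem: find lam solving sum d_i / (1 + lam * d_i) = 0 with
    d_i = x_i - mu, then return 2 * sum log(1 + lam * d_i)."""
    d = [xi - mu for xi in x]
    if min(d) >= 0 or max(d) <= 0:
        return float("inf")          # mu outside the convex hull of the data
    lam = 0.0
    for _ in range(100):
        g = sum(di / (1 + lam * di) for di in d)
        if abs(g) < 1e-12:
            break
        h = sum(di * di / (1 + lam * di) ** 2 for di in d)
        step = g / h
        # halve the step until all implied weights stay strictly positive
        while any(1 + (lam + step) * di <= 1e-10 for di in d):
            step /= 2
        lam += step
    return 2 * sum(math.log(1 + lam * di) for di in d)

random.seed(0)
x = [random.gauss(0, 1) for _ in range(50)]
xbar = sum(x) / len(x)
r0 = el_log_ratio(x, xbar)         # zero at the sample mean
r1 = el_log_ratio(x, xbar + 0.3)   # grows as mu moves away from the mean
r2 = el_log_ratio(x, xbar + 0.6)
```

Comparing r1 with a chi-squared(1) quantile gives the empirical likelihood confidence interval for the mean; the papers above extend this idea to far more complex models.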

5.
Structural models, or dynamic linear models as they are known in the Bayesian literature, have been widely used to model and predict time series using a decomposition into unobservable components. Owing to the direct interpretation of their parameters, structural models are a powerful and simple methodology for analyzing time series in areas such as economics, climatology, and environmental sciences. The parameters of such models can be estimated either by maximum likelihood or by Bayesian procedures, the latter generally implemented using conjugate priors, and there are plenty of works in the literature employing both methods. But are there situations where one of these approaches should be preferred? In this work, instead of conjugate priors for the hyperparameters, the Jeffreys prior is used in the Bayesian approach, along with the uniform prior, and the results are compared to the maximum likelihood method in an extensive Monte Carlo study. Interval estimation is also evaluated: bootstrap confidence intervals are introduced in the context of structural models, and their performance is compared to that of the asymptotic and credible intervals. A real time series from a Brazilian electric company is used as illustration.

6.
The among variance component in the balanced one-factor nested components-of-variance model is of interest in many fields of application. Except for an artificial method that uses a set of random numbers which is of no use in practical situations, an exact-size confidence interval on the among variance has not yet been derived. This paper provides a detailed comparison of three approximate confidence intervals which possess certain desired properties and have been shown to be the better methods among many available approximate procedures. Specifically, the minimum and the maximum of the confidence coefficients for the one- and two-sided intervals of each method are obtained. The expected lengths of the intervals are also compared.

7.
Consider dichotomous observations taken from T strata or tables, where within each table the effects of J > 2 doses or treatments are evaluated. The dose or treatment effect may be measured by various functions of the probability of outcomes, but it is assumed that the effect is the same in each table. Previous work on finding confidence intervals is specific to a particular function of the probabilities, based on only two doses, and limited to ML estimation of the nuisance parameters. In this paper, confidence intervals are developed based on the C(α) test, allowing for a unification and generalization of previous work. A computational procedure is given that minimizes the number of iterations required. An extension of the procedure to the regression framework, suitable when there are large numbers of sparse tables, is outlined.

8.
A progressive hybrid censoring scheme is a mixture of type-I and type-II progressive censoring schemes. In this paper, we mainly consider the analysis of progressive type-II hybrid-censored data when the lifetime distribution of the individual items is the normal or extreme value distribution. Since the maximum likelihood estimators (MLEs) of the parameters cannot be obtained in closed form, we propose to use the expectation-maximization (EM) algorithm to compute the MLEs. The Newton-Raphson method is also used to estimate the model parameters. The asymptotic variance-covariance matrix of the MLEs under the EM framework is obtained from the Fisher information matrix using the missing information principle, and asymptotic confidence intervals for the parameters are then constructed. The study concludes by comparing the two estimation methods and the coverage probabilities of the confidence intervals based on the missing information principle and the observed information matrix, through a simulation study, illustrative examples, and a real data analysis.
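For the normal case, the E-step of such an EM algorithm only needs the first two conditional moments of a truncated normal distribution. A self-contained sketch under ordinary Type-II censoring (illustrative parameter values; the paper's progressive hybrid scheme is not reproduced):

```python
import math
import random

def em_censored_normal(obs, n_cens, t_cens, iters=200):
    """EM algorithm for N(mu, sigma^2) under Type-II censoring: `obs` holds
    the observed (smallest) values and `n_cens` items are censored at
    `t_cens`.  E-step: replace each censored value by its conditional
    moments E[X | X > t] and E[X^2 | X > t]; M-step: complete-data MLE."""
    n = len(obs) + n_cens
    s1, s2 = sum(obs), sum(x * x for x in obs)
    mu = s1 / len(obs)
    sig = math.sqrt(s2 / len(obs) - mu * mu)
    for _ in range(iters):
        a = (t_cens - mu) / sig
        phi = math.exp(-0.5 * a * a) / math.sqrt(2 * math.pi)
        surv = 0.5 * math.erfc(a / math.sqrt(2))
        lam = phi / surv                       # inverse Mills ratio
        m1 = mu + sig * lam                    # E[X | X > t_cens]
        m2 = mu * mu + sig * sig + sig * (t_cens + mu) * lam  # E[X^2 | X > t]
        mu_new = (s1 + n_cens * m1) / n
        sig = math.sqrt((s2 + n_cens * m2) / n - mu_new * mu_new)
        mu = mu_new
    return mu, sig

# Simulated Type-II censored sample: observe the 700 smallest of 1000
# draws from N(10, 2^2), censoring the rest at the 700th order statistic.
rng = random.Random(2)
full = sorted(rng.gauss(10, 2) for _ in range(1000))
obs = full[:700]
mu_hat, sig_hat = em_censored_normal(obs, n_cens=300, t_cens=obs[-1])
```

The fixed point of this iteration is the censored-data MLE, which the asymptotic variance-covariance machinery described in the abstract would then be applied to.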

9.
In this paper, we combine empirical likelihood and estimating functions for censored data to obtain robust confidence regions for the parameters and more generally for functions of the parameters of distributions used in lifetime data analysis. The proposed method works with type I, type II or randomly censored data. It is illustrated by considering inference for log-location-scale models. In particular, we focus on the log-normal and the Weibull models and we tackle the problem of constructing robust confidence regions (or intervals) for the parameters of the model, as well as for quantiles and values of the survival function. The usefulness of the method is demonstrated through a Monte Carlo study and by examples on two lifetime data sets.

10.
In this study, we introduce a method for building a Bayesian nomogram and propose an appropriate nomogram for type 2 diabetes (T2D) using data from 13,474 subjects collected in the 2013-2015 Korean National Health and Nutrition Examination Survey (KNHANES). We identify risk factors related to T2D, propose a visual nomogram for T2D from a naïve Bayesian classifier model, and predict incidence rates. Additionally, we compute confidence intervals for the influence of the risk factors (attributes) and verify the proposed Bayesian nomogram using a receiver operating characteristic curve. Finally, we compare logistic regression and the Bayesian nomogram for T2D. The analysis of the T2D data shows that the most influential attribute in the Bayesian nomogram is age group, and the highest risk factor for T2D incidence is cardiovascular disease. Dyslipidemia and hypertension also have significant impacts on T2D incidence, while the effects of sex, smoking status, and employment status are relatively small compared to those of the other variables. Using the proposed Bayesian nomogram, the incidence rate of T2D for an individual can easily be predicted, and treatment plans can be established based on this information.
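The scores on a naive Bayes nomogram are per-attribute log-likelihood ratios that are added to the prior log-odds. A toy sketch with entirely hypothetical counts (not the KNHANES data, and not the paper's confidence-interval construction):

```python
import math

def nomogram_points(count_case, count_ctrl, total_case, total_ctrl):
    """Nomogram 'points' for one binary attribute value: the log likelihood
    ratio log P(value | case) / P(value | control), with add-one smoothing."""
    p_case = (count_case + 1) / (total_case + 2)
    p_ctrl = (count_ctrl + 1) / (total_ctrl + 2)
    return math.log(p_case / p_ctrl)

def posterior_log_odds(prior_log_odds, points):
    """Naive Bayes posterior log-odds: prior log-odds plus summed points."""
    return prior_log_odds + sum(points)

# Hypothetical counts: among 100 cases and 900 controls, 60 cases and 180
# controls have hypertension; 40 cases and 90 controls have dyslipidemia.
pts_htn = nomogram_points(60, 180, 100, 900)
pts_dys = nomogram_points(40, 90, 100, 900)
prior = math.log(100 / 900)
log_odds = posterior_log_odds(prior, [pts_htn, pts_dys])
risk = 1 / (1 + math.exp(-log_odds))
```

Each attribute contributes its points independently, which is exactly the naive Bayes assumption that makes the model readable as a nomogram.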

11.
In this article, a naive empirical likelihood ratio is constructed for a non‐parametric regression model with clustered data, by combining the empirical likelihood method and local polynomial fitting. The maximum empirical likelihood estimates for the regression functions and their derivatives are obtained. The asymptotic distributions for the proposed ratio and estimators are established. A bias‐corrected empirical likelihood approach to inference for the parameters of interest is developed, and the residual‐adjusted empirical log‐likelihood ratio is shown to be asymptotically chi‐squared. These results can be used to construct a class of approximate pointwise confidence intervals and simultaneous bands for the regression functions and their derivatives. Owing to our bias correction for the empirical likelihood ratio, the accuracy of the obtained confidence region is not only improved, but also a data‐driven algorithm can be used for selecting an optimal bandwidth to estimate the regression functions and their derivatives. A simulation study is conducted to compare the empirical likelihood method with the normal approximation‐based method in terms of coverage accuracies and average widths of the confidence intervals/bands. An application of this method is illustrated using a real data set.

12.
In many medical studies there are covariates that change their values over time, and their analysis is most often based on the Cox regression model. However, many of these time-dependent covariates can be expressed as an intermediate event, which can be modeled using a multi-state model. Using the relationship between time-dependent (discrete) covariates and multi-state models, we compare (via simulation studies) the Cox model with time-dependent covariates with the most frequently used multi-state regression models. This article also details the procedures for generating survival data arising from all approaches, including the Cox model with time-dependent covariates.

13.
Ridge regression techniques have been found useful for reducing the mean squared error of parameter estimates when multicollinearity is present. But the usefulness of the method rests not only upon its ability to produce good parameter estimates, with smaller mean squared error than ordinary least squares, but also on having reasonable inferential procedures. The aim of this paper is to develop asymptotic confidence intervals for the model parameters based on ridge regression estimates and the Edgeworth expansion. Some simulation experiments are carried out to compare these confidence intervals with those obtained from the application of ordinary least squares. An example is also provided based on the well-known Hald data set.
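The ridge estimator itself is the familiar (X'X + kI)^(-1) X'y. A small Monte Carlo sketch (an illustrative design, not the paper's Edgeworth-based intervals) showing the mean-squared-error advantage over ordinary least squares under near-collinearity:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares estimate."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Ridge estimate (X'X + kI)^{-1} X'y for a fixed ridge constant k."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

# Monte Carlo comparison of the estimators' mean squared error when the
# two columns of X are nearly collinear.
rng = np.random.default_rng(0)
beta = np.array([1.0, 1.0])
reps, mse_ols, mse_ridge = 500, 0.0, 0.0
for _ in range(reps):
    x1 = rng.normal(size=50)
    x2 = x1 + 0.01 * rng.normal(size=50)      # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = X @ beta + rng.normal(size=50)
    mse_ols += np.sum((ols(X, y) - beta) ** 2) / reps
    mse_ridge += np.sum((ridge(X, y, 1.0) - beta) ** 2) / reps
```

The OLS variance blows up along the near-null eigendirection of X'X, while the ridge penalty damps exactly that direction at the cost of a small bias, so the simulated ridge MSE is far smaller here.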

14.
In this paper, we consider the simple step-stress model for a two-parameter exponential distribution, when both parameters are unknown and the data are Type-II censored. It is assumed that under the two different stress levels, only the scale parameter changes while the location parameter remains unchanged. It is observed that the maximum likelihood estimators do not always exist, and we obtain the maximum likelihood estimates of the unknown parameters whenever they do. We provide the exact conditional distributions of the maximum likelihood estimators of the scale parameters. Since constructing exact confidence intervals from these conditional distributions is very difficult, we propose to use the observed Fisher information matrix for this purpose, and we also suggest bootstrap confidence intervals. Bayes estimates and associated credible intervals are obtained using the importance sampling technique. Extensive simulations are performed to compare the performances of the different confidence and credible intervals in terms of their coverage percentages and average lengths. The performance of the bootstrap confidence intervals is quite satisfactory even for small sample sizes.
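A percentile parametric bootstrap for the scale parameter can be sketched using the closed-form Type-II censored MLEs of the two-parameter exponential. The sketch assumes a single stress level and illustrative parameter values; the paper's step-stress structure is omitted:

```python
import random

def mle_two_param_exp(order_stats, n):
    """MLEs for the two-parameter exponential Exp(mu, theta), given the r
    smallest of n observations (Type-II censoring): mu_hat is the smallest
    observation and theta_hat is the total time on test divided by r."""
    r = len(order_stats)
    x1, xr = order_stats[0], order_stats[-1]
    ttt = sum(x - x1 for x in order_stats) + (n - r) * (xr - x1)
    return x1, ttt / r

def bootstrap_ci_theta(order_stats, n, b=1000, alpha=0.05, seed=7):
    """Parametric percentile bootstrap interval for the scale theta."""
    rng = random.Random(seed)
    mu_hat, th_hat = mle_two_param_exp(order_stats, n)
    r = len(order_stats)
    reps = sorted(
        mle_two_param_exp(
            sorted(mu_hat + rng.expovariate(1 / th_hat) for _ in range(n))[:r], n
        )[1]
        for _ in range(b)
    )
    return reps[int(b * alpha / 2)], reps[int(b * (1 - alpha / 2)) - 1]

# Simulated Type-II censored sample from Exp(mu=2, theta=5): r=30 of n=40.
rng = random.Random(3)
data = sorted(2.0 + rng.expovariate(1 / 5.0) for _ in range(40))[:30]
mu_hat, th_hat = mle_two_param_exp(data, 40)
lo, hi = bootstrap_ci_theta(data, 40)
```

Each bootstrap replicate regenerates a full sample from the fitted model, censors it at the same r, and re-estimates theta; the percentile interval is read off the ordered replicates.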

15.
In many clinical studies, subjects are at risk of experiencing more than one type of potentially recurrent event. In some situations, however, the occurrence of an event is observed, but the specific type is not determined. We consider the analysis of this type of incomplete data when the objectives are to summarize features of conditional intensity functions and associated treatment effects, and to study the association between different types of event. Here we describe a likelihood approach based on joint models for the multi-type recurrent events where parameter estimation is obtained from a Monte-Carlo EM algorithm. Simulation studies show that the proposed method gives unbiased estimators for regression coefficients and variance–covariance parameters, and the coverage probabilities of confidence intervals for regression coefficients are close to the nominal level. When the distribution of the frailty variable is misspecified, the method still provides estimators of the regression coefficients with good properties. The proposed method is applied to a motivating data set from an asthma study in which exacerbations were to be sub-typed by cellular analysis of sputum samples as eosinophilic or non-eosinophilic.

16.
Progressive multi-state models provide a convenient framework for characterizing chronic disease processes where the states represent the degree of damage resulting from the disease. Incomplete data often arise in studies of such processes, and standard methods of analysis can lead to biased parameter estimates when observation of data is response-dependent. This paper describes a joint analysis useful for fitting progressive multi-state models to data arising in longitudinal studies in such settings. Likelihood based methods are described and parameters are shown to be identifiable. An EM algorithm is described for parameter estimation, and variance estimation is carried out using Louis's method. Simulation studies demonstrate that the proposed method works well in practice under a variety of settings. An application to data from a smoking prevention study illustrates the utility of the method.

17.
In this article, the generalized linear model for longitudinal data is studied. A generalized empirical likelihood method is proposed by combining generalized estimating equations and quadratic inference functions based on the working correlation matrix. It is proved that the proposed generalized empirical likelihood ratios are asymptotically chi-squared under some suitable conditions, and hence can be used to construct confidence regions for the parameters. In addition, the maximum empirical likelihood estimates of the parameters are obtained, and their asymptotic normality is proved. Some simulations are undertaken to compare the generalized empirical likelihood and normal approximation-based methods in terms of coverage accuracies and average areas/lengths of confidence regions/intervals. A real data example is used to illustrate our methods.

18.
In this article we deal with simultaneous two-sided tolerance intervals for a univariate linear regression model with independent normally distributed errors. We present a method for determining the intervals derived by the general confidence-set approach (GCSA), i.e. the intervals are constructed based on a specified confidence set for unknown parameters of the model. The confidence set used in the new method is formed based on a suggested hypothesis test about all parameters of the model. The simultaneous two-sided tolerance intervals determined by the presented method are found to be efficient and fast to compute based on a preliminary numerical comparison of all the existing methods based on GCSA.

19.
Conditional parametric bootstrapping is defined by performing the simulations in such a way that the estimator is kept constant and equal to the estimate obtained from the data. Order statistics of the bootstrap replicates of the parameter chosen in each simulation provide exact confidence intervals, in a probabilistic sense, in models with one parameter under quite general conditions. The method remains exact in the presence of nuisance parameters when these are location and scale parameters and the bootstrapping is based on keeping the maximum likelihood estimates constant. The method is also exact if there exists a sufficient statistic for the nuisance parameters and the simulations are performed conditioning on this statistic. The technique may also be used to construct prediction intervals. These are generally not exact, but are likely to be good approximations.
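For a one-parameter exponential model the construction can be made concrete: each simulated standard-exponential sample is rescaled so that the estimator (the sample mean) equals the observed value, and the implied parameter values are ordered. A sketch (the exponential model and all numeric values are illustrative choices, not taken from the paper):

```python
import random

def conditional_bootstrap_ci(xbar_obs, n, b=800, alpha=0.05, seed=0):
    """Conditional parametric bootstrap for the scale theta of an
    exponential sample, keeping the estimator (the sample mean) fixed:
    writing X_i = theta * E_i with E_i standard exponential, the theta
    value consistent with a simulated E-sample is
    theta_j = xbar_obs / mean(E_j).  Order statistics of the theta_j
    yield the confidence interval."""
    rng = random.Random(seed)
    thetas = sorted(
        xbar_obs * n / sum(rng.expovariate(1.0) for _ in range(n))
        for _ in range(b)
    )
    return thetas[int(b * alpha / 2)], thetas[int(b * (1 - alpha / 2)) - 1]

# Coverage check: the interval should cover the true theta about 95% of
# the time, up to Monte Carlo error in both loops.
rng = random.Random(5)
theta_true, n, outer = 2.0, 15, 200
cover = 0
for i in range(outer):
    xbar = sum(theta_true * rng.expovariate(1.0) for _ in range(n)) / n
    lo, hi = conditional_bootstrap_ci(xbar, n, seed=i)
    cover += lo <= theta_true <= hi
coverage = cover / outer
```

Because the sample mean is a pivot-generating statistic for the exponential scale, the interval here is exact up to the Monte Carlo error of the quantile estimates, in line with the one-parameter result stated in the abstract.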

20.
Bayesian calibration of computer models
We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号