Similar Articles
20 similar articles found (search time: 31 ms)
1.
Abstract

In risk assessment, it is often desired to make inferences on the minimum dose levels (benchmark doses or BMDs) at which a specific benchmark risk (BMR) is attained. The estimation of, and inference on, BMDs are well understood in the case of an adverse response to a single-exposure agent. However, the theory of finding BMDs and making inferences on them is much less developed for cases where the adverse effect of two hazardous agents is studied simultaneously. Deutsch and Piegorsch [2012. Benchmark dose profiles for joint-action quantal data in quantitative risk assessment. Biometrics 68(4):1313–22] proposed a benchmark modeling paradigm in the dual-exposure setting, adapted from the single-exposure setting, and developed a strategy for conducting a full benchmark analysis with joint-action quantal data; they further extended the proposed benchmark paradigm to continuous response outcomes [Deutsch, R. C., and W. W. Piegorsch. 2013. Benchmark dose profiles for joint-action continuous data in quantitative risk assessment. Biometrical Journal 55(5):741–54]. In their 2012 article, Deutsch and Piegorsch worked exclusively with the complementary log link for modeling the risk with quantal data. The focus of the current paper is on the logit link; in particular, we consider an Abbott-adjusted [Abbott, W. S. 1925. A method of computing the effectiveness of an insecticide. Journal of Economic Entomology 18(2):265–7] log-logistic model for the analysis of quantal data with nonzero background response. We discuss the estimation of the benchmark profile (BMP), a collection of benchmark points that induce the prespecified BMR, and propose different methods for building benchmark inferences in studies involving two hazardous agents. We perform Monte Carlo simulation studies to evaluate the characteristics of the confidence limits. An example is given to illustrate the use of the proposed methods.
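As a purely illustrative sketch (not the authors' code), the mechanics of tracing a benchmark profile under a logit-link joint-action model can be shown with hypothetical parameters: for each fixed dose of agent 1, the dose of agent 2 attaining the BMR is found by inverting the extra-risk function. The joint log-logistic form and the coefficients `b0`, `b1`, `b2` below are assumptions for illustration only.

```python
import math

def expit(z):
    """Inverse of the logit link."""
    return 1.0 / (1.0 + math.exp(-z))

def extra_risk(d1, d2, b0, b1, b2):
    """Hypothetical joint-action log-logistic extra risk on the logit scale."""
    return expit(b0 + b1 * math.log(d1) + b2 * math.log(d2))

def bmp_point(bmr, d1, b0, b1, b2):
    """For a fixed dose d1 of agent 1, solve for the dose d2 of agent 2 that
    attains the benchmark risk; the set of such pairs traces out the BMP."""
    logit = math.log(bmr / (1.0 - bmr))
    return math.exp((logit - b0 - b1 * math.log(d1)) / b2)

# Invented coefficients; three points on the BMP at a 10% benchmark risk
b0, b1, b2 = -4.0, 0.8, 1.1
profile = [(d1, bmp_point(0.10, d1, b0, b1, b2)) for d1 in (0.5, 1.0, 2.0)]
```

Each profile point attains the BMR exactly by construction; as the dose of one agent rises, the dose of the other needed to reach the same risk falls.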

2.
In longitudinal studies with a binary response, it is often of interest to estimate the percentage of positive responses at each time point and the percentage of subjects with at least one positive response by each time point. When data are missing, the conventional method based on observed percentages can yield erroneous estimates. This study demonstrates two methods, based on the expectation-maximization (EM) and data augmentation (DA) algorithms, for estimating the marginal and cumulative probabilities from incomplete longitudinal binary response data. Both methods provide unbiased estimates when the missingness mechanism satisfies the missing at random (MAR) assumption. Sensitivity analyses are performed for cases in which the MAR assumption is in question.

3.
The interpretation of Cpk, a common measure of process capability, and of confidence limits for it is based on the assumption that the process is normally distributed. The non-parametric but computer-intensive bootstrap method is introduced, and three bootstrap confidence interval estimates for Cpk are defined. An initial simulation of two processes (one normal and the other highly skewed) is presented and discussed.
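A minimal sketch of one such interval, the percentile bootstrap (one of several bootstrap variants a study like this might define); the specification limits and simulated data below are hypothetical:

```python
import random
import statistics

random.seed(7)

def cpk(x, lsl, usl):
    """Process capability: distance from the mean to the nearer
    specification limit, in units of three sample standard deviations."""
    mu = statistics.fmean(x)
    sigma = statistics.stdev(x)
    return min(usl - mu, mu - lsl) / (3 * sigma)

def bootstrap_ci(x, lsl, usl, n_boot=2000, alpha=0.05):
    """Percentile bootstrap interval: resample with replacement,
    recompute Cpk each time, take the alpha/2 and 1 - alpha/2 quantiles."""
    boot = sorted(cpk(random.choices(x, k=len(x)), lsl, usl) for _ in range(n_boot))
    lo = boot[int(n_boot * alpha / 2)]
    hi = boot[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical in-control normal process with spec limits 6 and 14 (true Cpk = 4/3)
x = [random.gauss(10.0, 1.0) for _ in range(100)]
lo, hi = bootstrap_ci(x, lsl=6.0, usl=14.0)
```

No distributional assumption enters the interval itself, which is the point of the bootstrap approach for the skewed process in the simulation.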

4.
We study the use of a Scheffé-style simultaneous confidence band as applied to low-dose risk estimation with quantal response data. We consider two formulations for the dose-response risk function, an Abbott-adjusted Weibull model and an Abbott-adjusted log-logistic model. Using the simultaneous construction, we derive methods for estimating upper confidence limits on predicted extra risk and, by inverting the upper bands on risk, lower bounds on the benchmark dose, or BMD, at a specific level of ‘benchmark risk’. Monte Carlo evaluations explore the operating characteristics of the simultaneous limits.

5.
Low dose risk estimation via simultaneous statistical inferences
Summary.  The paper develops and studies simultaneous confidence bounds that are useful for making low dose inferences in quantitative risk analysis. Application is intended for risk assessment studies where human, animal or ecological data are used to set safe low dose levels of a toxic agent, but where study information is limited to high dose levels of the agent. Methods are derived for estimating simultaneous, one-sided, upper confidence limits on risk for end points measured on a continuous scale. From the simultaneous confidence bounds, lower confidence limits on the dose that is associated with a particular risk (often referred to as a benchmark dose) are calculated. An important feature of the simultaneous construction is that any inferences that are based on inverting the simultaneous confidence bounds apply automatically to the inverse bounds on the benchmark dose.

6.
This article explores the calculation of tolerance limits for the Poisson regression model based on the profile likelihood methodology, with small-sample asymptotic corrections to improve the coverage probability performance. The data consist of n counts, where the mean or expected rate depends upon covariates via the log regression function. Upper tolerance limits are evaluated as a function of the covariates and are obtained from upper confidence limits on the mean. To compute the upper confidence limits, three methodologies are considered: likelihood-based asymptotic methods, small-sample asymptotic methods that improve on the likelihood-based methodology, and the delta method. Two applications are discussed: one concerning defects in semiconductor wafers due to plasma etching and the other examining the number of surface faults in the upper seams of coal mines. All three methodologies are illustrated for both applications.
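The plug-in idea, bounding the mean from above and then taking a Poisson quantile at that bound, can be sketched as follows. A simple Wald-type limit (for the intercept-only case, no covariates) stands in here for the likelihood-based and small-sample methods of the article, and the defect counts are made up:

```python
import math
from statistics import NormalDist

def poisson_ppf(p, lam):
    """Smallest k with P(X <= k) >= p for X ~ Poisson(lam), by direct summation."""
    k, pmf = 0, math.exp(-lam)
    cdf = pmf
    while cdf < p:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

def upper_tolerance_limit(counts, conf=0.95, content=0.95):
    """Upper tolerance limit: bound the Poisson mean from above with a
    Wald upper confidence limit (MLE = sample mean, variance lam/n),
    then take the Poisson quantile of the requested content at that bound."""
    n = len(counts)
    lam_hat = sum(counts) / n
    z = NormalDist().inv_cdf(conf)
    lam_upper = lam_hat + z * math.sqrt(lam_hat / n)
    return poisson_ppf(content, lam_upper)

counts = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]   # hypothetical defect counts per wafer
utl = upper_tolerance_limit(counts)
```

With covariates, `lam_hat` would be replaced by the fitted rate `exp(x'beta)` at the covariate value of interest, with the variance obtained from the fitted regression.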

7.
ABSTRACT

A general Bayesian random effects model for analyzing longitudinal mixed correlated continuous and negative binomial responses, with and without missing data, is presented. Given the random effects, this Bayesian model uses a normal distribution for the continuous response and a negative binomial distribution for the count response. A Markov chain Monte Carlo sampling algorithm is described for estimating the posterior distribution of the parameters. The Bayesian model is illustrated by a simulation study. For sensitivity analysis, to investigate how the parameter estimates change under a perturbation from the missing at random to the not missing at random assumption, the use of posterior curvature is proposed. The model is applied to medical data from an observational study on women, where the correlated responses are a negative binomial count of joint damage and the continuous response of body mass index. The simultaneous effects of some covariates on both responses are also investigated.

8.
The current regulation of non-carcinogenic effects has generally been based on dividing a safety factor into an experimental no-observed-effect-level (NOEL), giving a regulatory reference dose (RfD). This approach does not attempt to estimate the risk as a function of dose; it assumes no significant risk for the dose below the RfD. This paper proposes a mathematical model for finding the upper confidence limit on risk and lower confidence limit on dose for quantitative risk assessment when the responses follow a normal distribution. The proposed procedure appears to be conservative; this is supported by results of a simulation study. The procedure is illustrated by application to real data.

9.
The purpose of this study was to use simulated data based on an ongoing randomized clinical trial (RCT) to evaluate the effects of treatment switching, with randomization as an instrumental variable (IV), at differing levels of treatment crossover, for continuous and binary outcomes. Data were analyzed using IV, intent-to-treat (ITT), and per protocol (PP) methods. The IV method performed best: it provided the least biased point estimates and had equal or higher power and higher coverage probabilities than the ITT estimates, whereas a PP analysis can be biased because it excludes non-compliant patients.

10.
In this paper, we consider the problem of determining non-parametric confidence intervals for quantiles when the available data are in the form of k-records. Distribution-free confidence intervals, as well as lower and upper confidence limits, are derived for fixed quantiles of an arbitrary unknown distribution based on k-records of an independent and identically distributed sequence from that distribution. The construction of tolerance intervals and limits based on k-records is also discussed. An exact expression for the confidence coefficient of these intervals is derived. Tables are provided to assist in choosing the appropriate k-records for the construction of these confidence and tolerance intervals. Some simulation results are presented to illustrate the features and properties of these intervals. Finally, data on the amount of annual rainfall in inches recorded at the Los Angeles Civic Center are used to illustrate the results developed in this paper and to demonstrate the improvements they provide over intervals based on either the usual records or the current records.
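The record-based intervals rest on the same exact, distribution-free machinery as the classical order-statistic intervals for quantiles; the simpler order-statistic case is sketched here (the k-record version replaces the binomial probabilities with the corresponding record-value coverage probabilities):

```python
import math

def coverage(n, r, s, p):
    """Exact confidence coefficient that the interval (X_(r), X_(s)] of order
    statistics from an i.i.d. sample of size n covers the p-th quantile:
    a sum of Binomial(n, p) probabilities over counts r, ..., s - 1."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(r, s))

# e.g. does (X_(2), X_(9)] from a sample of n = 10 cover the median with >= 95% confidence?
conf = coverage(10, 2, 9, 0.5)
```

Tables like those in the paper amount to tabulating such coverage sums and picking the indices whose coverage first exceeds the nominal level.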

11.
Zero-inflated models are commonly used for modeling count and continuous data with extra zeros. Inflation at one or two points other than zero for modeling continuous data has been discussed less than zero inflation. In this article, inflation at an arbitrary point α in a semicontinuous distribution is presented, and mean imputation for a continuous response is discussed as a cause of semicontinuous data. Inflation at two points, and more generally at k arbitrary points, and its relation to cell-mean imputation in a mixture of continuous distributions are also studied. To analyze the imputed data, a mixture of semicontinuous distributions is used. The effects of covariates on the dependent variable in a mixture of k semicontinuous distributions with inflation at k points are also investigated. Parameter estimates are obtained with the expectation–maximization (EM) algorithm. Using real data from the Iranian Household Income and Expenditure Survey (IHIES), it is shown how to obtain a proper estimate of the population variance when continuous missing at random responses are mean imputed.

12.
In this paper, a joint model for analyzing multivariate mixed ordinal and continuous responses, where the continuous outcomes may be skewed, is presented. For modeling the discrete ordinal responses, a continuous latent variable approach is considered, and for describing the continuous responses, a skew-normal mixed effects model is used. A Bayesian approach using Markov chain Monte Carlo (MCMC) is adopted for parameter estimation. Simulation studies are performed to illustrate the proposed approach; their results show that using separate models, or assuming normality for the shared random effects and within-subject errors of the continuous and ordinal variables, instead of joint modeling under a skew-normal distribution, leads to biased parameter estimates. The approach is used to analyze part of the British Household Panel Survey (BHPS) data set, with annual income and life satisfaction as the continuous and ordinal longitudinal responses, respectively. The annual income variable is severely skewed, so the normality assumption for the continuous response does not yield acceptable results. The data analysis shows that gender, marital status, educational level and the amount of money spent on leisure have significant effects on annual income, while marital status has the highest impact on life satisfaction.

13.
A procedure is given for obtaining a random width confidence interval for the largest reliability of k Weibull populations. The procedure does not identify the populations for which the reliability would be a maximum. The maximum likelihood estimators or the simplified linear estimators of the reliability based on type II censored data are used. The cases considered include unknown shape parameters being equal or unequal. Simultaneous confidence intervals for the k reliabilities are also obtained. Tables for the lower and upper limits in selected cases are constructed using Monte Carlo methods.

14.
Consider the problem of finding an upper 1 − α confidence limit for a scalar parameter of interest φ in the presence of a nuisance parameter vector θ when the data are discrete. Approximate upper limits T may be found by approximating the relevant unknown finite-sample distribution by its limiting distribution. Such approximate upper limits typically have coverage probabilities below, sometimes far below, 1 − α for certain values of (θ, φ). This paper remedies that defect by shifting the possible values t of T so that they are as small as possible, subject both to the minimum coverage probability being greater than or equal to 1 − α and to the shifted values being in the same order as the unshifted values. The resulting upper limits are called ‘tight’. Under very weak and easily checked regularity conditions, a formula is developed for the tight upper limits.

15.
For distributions F and G that are continuous and symmetric and differ at most by a shift parameter, distribution-free confidence intervals for p = P(X < Y) are obtained by means of the Chebyshev inequality and an upper bound for the variance of the Mann-Whitney statistic. The (two-sided) intervals are reliable for small samples and about 20 to 30 per cent shorter than those obtained by Ury for F and G completely unknown when the sample sizes are equal, with larger savings otherwise. They are also shorter than the upper bounds obtained by Birnbaum and McCarty (1958) when the confidence coefficient does not exceed 0.95.

16.
Mediation is a hypothesized causal chain among three variables. Mediation analysis for continuous response variables is well developed in the literature, and it can be shown that the indirect effect equals the total effect minus the direct effect. However, mediation analysis for categorical responses is still not fully developed. The purpose of this article is to propose a simpler method for analysing the mediation effect among three variables when the dependent and mediator variables are both dichotomous. We propose using the latent variable technique, which in turn enforces the necessary condition that the indirect effect equals the total effect minus the direct effect. An intensive simulation study is conducted to compare the proposed method with other methods in the literature. Our theoretical derivation and simulation study show that the proposed approach is simpler to use and at least as good as other approaches in the literature. We illustrate the approach by testing potential mediators of the relationship between depression and obesity among children and adolescents, compared with the method of Winship and Mare, using 2011–2012 national children's health survey data.
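For the continuous case the abstract alludes to, the identity indirect = total − direct (equivalently the product of path coefficients) can be checked numerically with ordinary least squares; the data-generating values 0.5 (direct effect), 2.0 (X → M path) and 0.3 (M → Y path) below are invented for illustration:

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination; each row of X starts with a 1."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

random.seed(1)
x = [random.gauss(0, 1) for _ in range(200)]
m = [2.0 * xi + random.gauss(0, 1) for xi in x]          # mediator model
y = [1.0 + 0.5 * xi + 0.3 * mi for xi, mi in zip(x, m)]  # outcome (noise-free for clarity)

_, total = ols([[1.0, xi] for xi in x], y)                        # total effect of X on Y
_, a1 = ols([[1.0, xi] for xi in x], m)                           # X -> M path
_, direct, b2 = ols([[1.0, xi, mi] for xi, mi in zip(x, m)], y)   # direct effect, M -> Y path
indirect = a1 * b2
```

In the linear case this identity holds exactly in-sample; the article's point is that it breaks for dichotomous outcomes unless the latent variable adjustment is applied.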

17.
Based on the large-sample normal distribution of the sample log odds ratio and its asymptotic variance from maximum likelihood logistic regression, shortest 95% confidence intervals for the odds ratio are developed. Although the usual confidence interval for the odds ratio is unbiased, the shortest interval is not: while covering the true odds ratio with the stated probability, the shortest interval covers some values below the true odds ratio with higher probability. The upper and lower limits of the shortest interval are shifted to the left of those of the usual interval, with greater shifts in the upper limits. For the log odds model γ + δX, in which X is binary, simulation studies showed that the approximate average percent difference in length is 7.4% for n (sample size) = 100, and 3.8% for n = 200. Precise estimates of the coverage probabilities of the two types of intervals were obtained from simulation studies and are compared graphically. For odds ratio estimates greater (less) than one, shortest intervals are more (less) likely to include one than are the usual intervals. The usual intervals are likelihood-based and the shortest intervals are not. The usual intervals have minimum expected length among the class of unbiased intervals. Shortest intervals do not provide important advantages over the usual intervals, which we recommend for practical use.
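The "usual" interval recommended above is the familiar Wald interval, symmetric on the log scale; a sketch with a made-up 2×2 table (a/b exposed, c/d unexposed):

```python
import math
from statistics import NormalDist

def usual_or_ci(a, b, c, d, conf=0.95):
    """Usual Wald interval for the odds ratio from a 2x2 table:
    symmetric on the log scale around log(ad/bc), with the standard
    asymptotic variance 1/a + 1/b + 1/c + 1/d."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# Hypothetical counts: 20/80 events among exposed, 10/90 among unexposed (OR = 2.25)
lo, hi = usual_or_ci(20, 80, 10, 90)
```

A shortest interval would instead minimize length subject to the coverage constraint, shifting both limits to the left of these, which is the trade-off the article weighs against unbiasedness.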

18.
In an earlier paper it was recommended that an experimental design for the study of a mixture system in which the components have lower and upper limits should consist of a subset of the vertices and centroids of the region defined by the limits on the components. This paper extends this methodology to the situation where linear combinations of two or more components (e.g., liquid content = x3 + x4 + … ≤ 0.35) are subject to lower and upper constraints. The CONSIM algorithm, developed by R. E. Wheeler, is recommended for computing the vertices of the resulting experimental region. Procedures for developing linear and quadratic mixture model designs are discussed. A five-component example with two multiple-component constraints is included to illustrate the proposed methods of mixture experimentation.

19.
Summary.  In studies to assess the accuracy of a screening test, often definitive disease assessment is too invasive or expensive to be ascertained on all the study subjects. Although it may be more ethical or cost effective to ascertain the true disease status with a higher rate in study subjects where the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves and area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The bias correction estimators proposed are applied to data from a study of screening tests for neonatal hearing loss.

20.
When comparing two experimental treatments with a placebo, we focus our attention on interval estimation of the proportion ratio (PR) of patient responses under a three-period crossover design. We propose a random effects exponential multiplicative risk model and derive asymptotic interval estimators in closed form for the PR between treatments and placebo. Using Monte Carlo simulations, we compare the performance of these interval estimators in a variety of situations. We use the data comparing two different doses of an analgesic with placebo for the relief of primary dysmenorrhea to illustrate the use of these interval estimators and the difference in estimates of the PR and odds ratio (OR) when the underlying relief rates are not small.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号