Similar literature
20 similar documents found (search time: 468 ms)
1.
In randomized complete block designs, a monotonic relationship among treatment groups may already be established from prior information, e.g., in a study with different dose levels of a drug. The test statistics developed by Page and by Jonckheere and Terpstra are two unweighted rank-based tests used to detect ordered alternatives when the assumptions of the traditional two-way analysis of variance are not satisfied. We consider a new weighted rank-based test that weights each subject according to the sample variance when computing the test statistic. The new weighted test is compared with the two commonly used unweighted tests with regard to power under various conditions. The weighted test is generally more powerful than the two unweighted tests when the number of treatment groups is small to moderate.
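Page's L statistic is simple to compute from within-block ranks; the following is a minimal sketch, with a hypothetical four-block, three-dose data layout chosen purely for illustration:

```python
import numpy as np
from scipy.stats import rankdata

def page_L(data):
    """Page's L for ordered alternatives: one row per block, one column per
    treatment, columns listed in the hypothesized increasing order."""
    ranks = np.apply_along_axis(rankdata, 1, np.asarray(data, float))
    col_rank_sums = ranks.sum(axis=0)            # R_j, j = 1..k
    j = np.arange(1, ranks.shape[1] + 1)
    return float((j * col_rank_sums).sum())      # L = sum_j j * R_j

# Hypothetical data: four blocks, three dose levels, responses rising with dose
data = [[1.0, 2.0, 3.0],
        [0.5, 1.5, 2.5],
        [2.0, 3.0, 4.0],
        [1.1, 1.9, 3.2]]
```

For k = 3 treatments and n = 4 perfectly ordered blocks, L attains its maximum n·Σj² = 56; `scipy.stats.page_trend_test` provides a tested implementation with p-values.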

2.
In this paper we study characterization problems for discrete distributions using the doubly truncated mean function m(x, y) = E(h(X) | x ≤ X ≤ y), for a monotonic function h(x). We recover the distribution function F(x) from m(x, y) and give necessary and sufficient conditions for an arbitrary real function to be the doubly truncated mean function of a discrete distribution.
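For a given discrete distribution, the doubly truncated mean function is direct to compute; a small sketch (the uniform pmf and the choices of h are illustrative only):

```python
import numpy as np

def doubly_truncated_mean(values, probs, h, x, y):
    """m(x, y) = E[h(X) | x <= X <= y] for a discrete X supported on `values`."""
    values = np.asarray(values, float)
    probs = np.asarray(probs, float)
    mask = (values >= x) & (values <= y)
    return float((h(values[mask]) * probs[mask]).sum() / probs[mask].sum())
```

For X uniform on {1, 2, 3, 4} and h the identity, m(2, 3) = 2.5, as expected.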

3.
Existing projection designs (e.g. maximum projection designs) attempt to achieve good space-filling properties in all projections. However, when a Gaussian process (GP) model is used, a model-based design criterion such as the entropy criterion is more appropriate. We employ the entropy criterion averaged over a set of projections, called the expected entropy criterion (EEC), to generate projection designs. We show that maximum EEC designs are invariant to monotonic transformations of the response, i.e. they are optimal for a wide class of stochastic process models. We also demonstrate that transforming each column of a Latin hypercube design (LHD) by a monotonic function can substantially improve the EEC. Two types of input transformations are considered: the quantile function of a symmetric Beta distribution chosen to optimize the EEC, and a nonparametric transformation corresponding to the quantile function of a symmetric density chosen to optimize the EEC. Numerical studies show that the proposed transformations of the LHD are efficient and effective for building robust maximum EEC designs. These designs give projections with markedly higher entropies and lower maximum prediction variances (MPVs), at the cost of small increases in average prediction variances (APVs), compared with state-of-the-art space-filling designs over wide ranges of covariance parameter values.
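The symmetric-Beta quantile transformation of an LHD is easy to sketch with scipy; the sample size, dimension, and Beta parameter below are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np
from scipy.stats import qmc, beta

n, d, a = 16, 3, 0.6                            # a < 1 pulls points toward the boundaries
lhd = qmc.LatinHypercube(d=d, seed=0).random(n) # base LHD in [0, 1]^d
design = beta.ppf(lhd, a, a)                    # columnwise symmetric-Beta quantile transform
```

Because the quantile function is monotonic, each transformed column preserves the rank ordering of the original LHD column while redistributing mass toward the edges of the design region.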

4.
Summary.  Controversy has intensified regarding the death rate from cancer induced by a dose of radiation. In the models that are usually considered the hazard function is an increasing function of the dose of radiation. Such models can mask local variations. We consider the excess relative risk and excess absolute risk models and propose a nonparametric estimation of the dose effect using a model selection procedure. This estimation handles stratified data. We approximate the dose function by a collection of splines and select the best one according to the Akaike information criterion. In the same way, between the excess relative risk and excess absolute risk models, we choose the model that best fits the data. We propose a bootstrap method for calculating a pointwise confidence interval of the dose function. We apply our method to the Hiroshima data, estimating the death hazard functions for solid cancer and leukaemia.
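Selecting a dose-effect fit by AIC can be sketched with simple polynomial candidates standing in for the spline collection; the simulated dose grid, effect curve, and noise level below are hypothetical:

```python
import numpy as np

def gaussian_aic(rss, n, n_params):
    # AIC up to an additive constant, for Gaussian errors with unknown variance
    return n * np.log(rss / n) + 2 * n_params

rng = np.random.default_rng(1)
dose = np.linspace(0.0, 4.0, 80)
y = 0.1 * dose**2 + rng.normal(0.0, 0.05, dose.size)   # hypothetical dose effect

degrees = range(1, 6)
rss = {deg: float(np.sum((np.polyval(np.polyfit(dose, y, deg), dose) - y) ** 2))
       for deg in degrees}
best = min(degrees, key=lambda deg: gaussian_aic(rss[deg], dose.size, deg + 1))
```

The AIC penalty discourages the higher-degree fits whose extra flexibility only chases noise, so the selected candidate tracks the curvature actually present in the data.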

5.
We consider a regression analysis of longitudinal data in the presence of outcome‐dependent observation times and informative censoring. Existing approaches commonly require a correct specification of the joint distribution of longitudinal measurements, the observation time process, and informative censoring time under the joint modeling framework and can be computationally cumbersome due to the complex form of the likelihood function. In view of these issues, we propose a semiparametric joint regression model and construct a composite likelihood function based on a conditional order statistics argument. As a major feature of our proposed methods, the aforementioned joint distribution is not required to be specified, and the random effect in the proposed joint model is treated as a nuisance parameter. Consequently, the derived composite likelihood bypasses the need to integrate over the random effect and offers the advantage of easy computation. We show that the resulting estimators are consistent and asymptotically normal. We use simulation studies to evaluate the finite‐sample performance of the proposed method and apply it to a study of weight loss data that motivated our investigation.

6.
We develop a transparent and efficient two-stage nonparametric (TSNP) phase I/II clinical trial design to identify the optimal biological dose (OBD) of immunotherapy. We propose a nonparametric approach to derive the closed-form estimates of the joint toxicity–efficacy response probabilities under the monotonic increasing constraint for the toxicity outcomes. These estimates are then used to measure the immunotherapy's toxicity–efficacy profiles at each dose and guide the dose finding. The first stage of the design aims to explore the toxicity profile. The second stage aims to find the OBD, which achieves the optimal therapeutic effect by considering both the toxicity and efficacy outcomes through a utility function. The closed-form estimates and concise dose-finding algorithm make the TSNP design appealing in practice. Simulation results show that the TSNP design yields better operating characteristics than existing Bayesian parametric designs. User-friendly computational software is freely available to facilitate the application of the proposed design to real trials. We provide comprehensive illustrations and examples of implementing the proposed design with the associated software.
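Utility-based OBD selection can be sketched in a few lines; the toxicity cap, utility weights, and dose-level probabilities below are hypothetical placeholders, not the paper's utility function:

```python
import numpy as np

def select_obd(p_tox, p_eff, tox_limit=0.3, w_tox=1.0, w_eff=1.0):
    """Pick the dose index maximizing a toxicity-efficacy utility among
    doses whose toxicity probability does not exceed tox_limit."""
    p_tox, p_eff = np.asarray(p_tox, float), np.asarray(p_eff, float)
    admissible = np.flatnonzero(p_tox <= tox_limit)
    if admissible.size == 0:
        return None                      # no acceptable dose
    utility = w_eff * p_eff[admissible] - w_tox * p_tox[admissible]
    return int(admissible[np.argmax(utility)])
```

With estimated toxicity probabilities (0.05, 0.10, 0.20, 0.50) and efficacy probabilities (0.20, 0.40, 0.55, 0.80), the highest dose is excluded by the toxicity cap and the third dose maximizes the utility among the remaining ones.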

7.
Abstract.  We study a binary regression model using the complementary log–log link, where the response variable Δ is the indicator of an event of interest (for example, the incidence of cancer, or the detection of a tumour) and the set of covariates can be partitioned as (X, Z), where Z (real-valued) is the primary covariate and X (vector-valued) denotes a set of control variables. The conditional probability of the event of interest is assumed to be monotonic in Z, for every fixed X. A finite-dimensional (regression) parameter β describes the effect of X. We show that the baseline conditional probability function (corresponding to X = 0) can be estimated by isotonic regression procedures and develop an asymptotically pivotal likelihood-ratio-based method for constructing (asymptotic) confidence sets for the regression function. We also show how likelihood-ratio-based confidence intervals for the regression parameter can be constructed using the chi-square distribution. An interesting connection to the Cox proportional hazards model under current status censoring emerges. We present simulation results to illustrate the theory and apply our results to a data set involving lung tumour incidence in mice.
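Isotonic fits of this kind are typically computed with the pool-adjacent-violators algorithm (PAVA); a minimal sketch, applied to hypothetical 0/1 responses already sorted by the covariate Z:

```python
def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    blocks = []                          # each block: [mean, weight]
    for v in map(float, y):
        blocks.append([v, 1])
        # pool backwards while adjacent block means violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fitted = []
    for m, w in blocks:
        fitted.extend([m] * w)
    return fitted
```

Applied to binary event indicators ordered by Z, the pooled block means give a monotone estimate of the conditional event probability as a function of Z.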

8.
Abstract. When applicable, an assumed monotonicity property of the regression function with respect to the covariates has a strong stabilizing effect on the estimates. Because of this, other parametric or structural assumptions may not be needed at all. Although monotonic regression in one dimension is well studied, the question remains whether one can find computationally feasible generalizations to multiple dimensions. Here, we propose a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure. The monotonic construction is based on marked point processes, where the random point locations and the associated marks (function levels) together form piecewise constant realizations of the regression surfaces. The actual inference is based on model-averaged results over the realizations. The monotonicity of the construction is enforced by partial ordering constraints, which allow it, as the density of support points increases, to asymptotically approximate the family of all bounded continuous monotonic functions.

9.
Recently, Domma et al. [An extension of Azzalini's method, J. Comput. Appl. Math. 278 (2015), pp. 37–47] proposed an extension of Azzalini's method. This method is attractive due to its flexibility and ease of application. Most of the weighted Weibull models introduced so far have monotonic hazard rate functions, which limits their applicability. Our aim is therefore to build a new weighted Weibull distribution accommodating both monotonic and non-monotonic hazard rate functions. A new weighted Weibull distribution, the so-called generalized weighted Weibull (GWW) distribution, is introduced using the method of Domma et al. [13]. The GWW distribution possesses decreasing, increasing, upside-down bathtub, N-shaped and M-shaped hazard rates. Its statistical properties are also very easy to derive. Finally, we apply the GWW model to a real data set and provide a simulation study as well.

10.
Abstract

We consider the effect of additive covariate error on linear models in observational (radiation epidemiology) studies of exposure risk. Additive dose error distorts the dose-response shape in general linear regression settings, covering identity-link GLM-type models and linear excess-relative-risk grouped-Poisson models. Under independent error, linear regression calibration holds when the log of the dose density is at most a quadratic polynomial on an interval (the log-quadratic density condition), which covers the normal, exponential, and uniform distributions. When this condition is violated, dose error can turn a linear no-threshold (LNT) model into an apparently low-dose-high-sensitivity model. Power densities are also considered. A published example is given.
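When the true dose is normally distributed (one case satisfying the log-quadratic density condition), the calibration E[X | Z] is exactly linear with the classical attenuation slope; a quick simulation check, with unit variances chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)          # true dose: normal, so calibration is linear
z = x + rng.normal(0.0, 1.0, n)      # observed dose with additive independent error
lam = 1.0 / (1.0 + 1.0)              # theory: slope = var_x / (var_x + var_u)
slope = float(np.polyfit(z, x, 1)[0])  # empirical slope of the regression of X on Z
```

The fitted slope reproduces the theoretical attenuation factor; for non-log-quadratic dose densities, E[X | Z] would instead be curved, which is the distortion mechanism discussed above.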

11.
This work is motivated by a quantitative Magnetic Resonance Imaging study of the differential tumor/healthy tissue change in contrast uptake induced by radiation. The goal is to determine the time at which there is maximal contrast uptake (a surrogate for permeability) in the tumor relative to healthy tissue. A notable feature of the data is its spatial heterogeneity. Zhang, Johnson, Little, and Cao (2008a, 2008b) discuss two parallel approaches to "denoise" a single image of change in contrast uptake from baseline to one follow-up visit of interest. In this work we extend the image model to explore the longitudinal profile of the tumor/healthy tissue contrast uptake in multiple images over time. We fit a two-stage model. First, we propose a longitudinal image model for each subject. This model simultaneously accounts for the spatial and temporal correlation and denoises the observed images by borrowing strength both across neighboring pixels and over time. We propose to use the Mann-Whitney U statistic to summarize the tumor contrast uptake relative to healthy tissue. In the second stage, we fit a population model to the U statistic and estimate when it achieves its maximum. Our initial findings suggest that the maximal contrast uptake of the tumor core relative to healthy tissue peaks around three weeks after initiation of radiotherapy, though this warrants further investigation.
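Summarizing two pixel groups with a Mann-Whitney U statistic is a one-liner with scipy; the tiny contrast-uptake vectors below are made up for illustration:

```python
import numpy as np
from scipy.stats import mannwhitneyu

tumor = np.array([2.1, 2.5, 3.0, 2.8])     # hypothetical tumor-pixel uptake changes
healthy = np.array([1.0, 1.2, 0.9, 1.5])   # hypothetical healthy-pixel uptake changes

u = float(mannwhitneyu(tumor, healthy).statistic)
auc = u / (tumor.size * healthy.size)      # normalized U: P(tumor pixel > healthy pixel)
```

The normalized U is scale-free and lies in [0, 1], which makes it a convenient per-visit summary to carry into a second-stage longitudinal model.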

12.
One of the standard problems in statistics consists of determining the relationship between a response variable and a single predictor variable through a regression function. Background scientific knowledge is often available that suggests that the regression function should have a certain shape (e.g. monotonically increasing or concave) but not necessarily a specific parametric form. Bernstein polynomials have been used to impose such shape restrictions on regression functions; they provide a smooth estimate over equidistant knots and are used in this paper due to their ease of implementation, continuous differentiability, and theoretical properties. In this work, we demonstrate a connection between the monotonic regression problem and the variable selection problem in the linear model. We develop a Bayesian procedure for fitting the monotonic regression model by adapting currently available variable selection procedures. We demonstrate the effectiveness of our method through simulations and the analysis of real data.
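The shape restriction is convenient in the Bernstein basis because nondecreasing coefficients guarantee a monotone fit; a small sketch (the coefficient values are arbitrary illustrations, not fitted quantities):

```python
import numpy as np
from math import comb

def bernstein_eval(coef, x):
    """f(x) = sum_k coef_k * C(n,k) x^k (1-x)^(n-k) on [0, 1];
    nondecreasing coef implies a nondecreasing f."""
    n = len(coef) - 1
    basis = np.array([comb(n, k) * x**k * (1 - x) ** (n - k) for k in range(n + 1)])
    return np.asarray(coef, float) @ basis

xs = np.linspace(0.0, 1.0, 50)
vals = bernstein_eval([0.0, 0.2, 0.5, 1.0], xs)   # increasing coefficients
```

Since f'(x) = n·Σ (coef_{k+1} − coef_k)·B_{k,n−1}(x), the nonnegativity of the coefficient increments is exactly the monotonicity constraint, which is what links the problem to variable selection on those increments.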

13.
We address the issue of performing inference on the parameters that index the modified extended Weibull (MEW) distribution. We show that numerical maximization of the MEW log-likelihood function can be problematic. It is even possible to encounter maximum likelihood estimates that are not finite, that is, it is possible to encounter monotonic likelihood functions. We consider different penalization schemes to improve maximum likelihood point estimation. A penalization scheme based on the Jeffreys’ invariant prior is shown to be particularly useful. Simulation results on point estimation, interval estimation, and hypothesis testing inference are presented. Two empirical applications are presented and discussed.

14.
Abstract. We consider the problem of estimating the joint distribution function of the event time and a continuous mark variable when the event time is subject to interval censoring case 1 and the continuous mark variable is only observed in case the event occurred before the time of inspection. The non‐parametric maximum likelihood estimator in this model is known to be inconsistent. We study two alternative smooth estimators, based on the explicit (inverse) expression of the distribution function of interest in terms of the density of the observable vector. We derive the pointwise asymptotic distribution of both estimators.

15.
The variance of the error term in ordinary regression models and linear smoothers is usually estimated by adjusting the average squared residual for the trace of the smoothing matrix (the degrees of freedom of the predicted response). However, other types of variance estimators are needed when using monotonic regression (MR) models, which are particularly suitable for estimating response functions with pronounced thresholds. Here, we propose a simple bootstrap estimator to compensate for the over-fitting that occurs when MR models are estimated from empirical data. Furthermore, we show that, in the case of one or two predictors, the performance of this estimator can be enhanced by introducing adjustment factors that take into account the slope of the response function and characteristics of the distribution of the explanatory variables. Extensive simulations show that our estimators perform satisfactorily for a great variety of monotonic functions and error distributions.

16.
We obtain necessary and sufficient conditions for a real function φ(x) to be the conditional expectation E(h(X) | X ≥ x) of a random variable X with continuous distribution function, where h is a given real, continuous and strictly monotonic function.

17.
We consider a monotonic version of the multigamma coupler introduced by Murdoch and Green in ‘Exact Sampling from a Continuous State Space’ (Scandinavian Journal of Statistics, 1998) in the context of bounded multivariate distributions. Monotonicity greatly increases the efficiency of the coupler. We prove the validity of the multigamma coupler when the coalescence probability ρ is variable; specifically, a function of the bounding chains. This variation is more efficient than the fixed ρ version, and can always be used when the multigamma coupler is monotonic. We apply our algorithm to two examples.

18.
Estimating the effect of medical treatments on subject responses is one of the crucial problems in medical research. Matched‐pairs designs are commonly implemented in the field of medical research to eliminate confounding and improve efficiency. In this article, new estimators of treatment effects for heterogeneous matched‐pairs data are proposed. Asymptotic properties of the proposed estimators are derived. Simulation studies show that the proposed estimators have some advantages over the well-known Heckman estimator, the conditional maximum likelihood estimator, and the inverse probability weighted estimator. We apply the proposed methodology to a data set from a study of low‐birth‐weight infants.

19.
Summary.  Formal rules governing signed edges on causal directed acyclic graphs are described and it is shown how these rules can be useful in reasoning about causality. Specifically, the notions of a monotonic effect, a weak monotonic effect and a signed edge are introduced. Results are developed relating these monotonic effects and signed edges to the sign of the causal effect of an intervention in the presence of intermediate variables. The incorporation of signed edges in the directed acyclic graph causal framework furthermore allows for the development of rules governing the relationship between monotonic effects and the sign of the covariance between two variables. It is shown that when certain assumptions about monotonic effects can be made then these results can be used to draw conclusions about the presence of causal effects even when data are missing on confounding variables.

20.
The frequency of doctor consultations has direct consequences for health care budgets, yet little statistical analysis of the determinants of doctor visits has been reported. We consider the distribution of the number of visits to the doctor and, in particular, we model its dependence on a number of demographic factors. Examination of the Australian 1995 National Health Survey data reveals that generalized linear Poisson or negative binomial models are inadequate for modelling the mean as a function of covariates, because of excessive zero counts, and a mean‐variance relationship that varies enormously over covariate values. A negative binomial model is used, with parameter values estimated in subgroups according to the discrete combinations of the covariate values. Smoothing splines are then used to smooth and interpolate the parameter values. In effect the mean and the shape parameters are each modelled as (different) functions of gender, age and geographical factors. The estimated regressions for the mean have simple and intuitive interpretations. However, the dependence of the (negative binomial) shape parameter on the covariates is more difficult to interpret and is subject to influence by extreme observations. We illustrate the use of the model by estimating the distribution of the number of doctor consultations in the Statistical Local Area of Ryde, based on population numbers from the 1996 census.
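The negative binomial mean-variance relationship, var = μ + μ²/k, also gives a quick method-of-moments estimate of the shape parameter k within a subgroup; a simulation sketch with arbitrary parameter values (not estimates from the survey data):

```python
import numpy as np

rng = np.random.default_rng(42)
k, mu = 2.0, 4.0                 # hypothetical shape and mean
p = k / (k + mu)                 # numpy's (n, p) parametrization: mean = n(1-p)/p
draws = rng.negative_binomial(k, p, size=100_000)

m = draws.mean()
v = draws.var()
k_hat = m**2 / (v - m)           # method-of-moments shape, from var = mu + mu^2/k
```

The overdispersion v − m > 0 is what the Poisson model cannot capture; when v − m varies strongly across covariate subgroups, so does k̂, which is the phenomenon the subgroup-plus-spline modelling strategy addresses.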


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号