Similar Documents
20 similar documents found (search time: 31 ms)
1.
The treatment effect in a relatively large-scale observational study can be described as a mixture of effects among subgroups. In particular, analysis for estimating the treatment effect at the level of the entire sample potentially involves not only differential effects across subgroups of the study cohort, but also differential propensities – probabilities of receiving treatment given study subjects’ pretreatment history. Such complex heterogeneity is of great research interest because the analysis of treatment effects can depend substantially on the hidden data structure for effect sizes and propensities. To uncover this unseen data structure, we propose a likelihood-based regression tree method that we call the marginal tree (MT). The MT method aims at a simultaneous assessment of differential effects and propensity scores, so that both become homogeneous within each terminal node of the resulting tree structure. We assess the simulation performance of the MT method by comparing it with other existing tree methods, and illustrate its use with a simulated data set, where the objective is to assess the effects of dieting behavior on subsequent emotional distress among adolescent girls.
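A minimal Python sketch of the core idea, not the authors' algorithm: search a single covariate for the split that maximizes the joint within-node likelihood of a Bernoulli propensity model and Gaussian outcome-by-arm models, so each child is homogeneous in both effect and propensity. All names and the one-covariate restriction are illustrative assumptions.

```python
import numpy as np

def node_loglik(y, t):
    """Joint log-likelihood of one node: Bernoulli for treatment assignment
    (propensity) plus Gaussian outcome models fitted separately by arm."""
    p = np.clip(t.mean(), 1e-6, 1 - 1e-6)
    ll = np.sum(t * np.log(p) + (1 - t) * np.log(1 - p))
    for arm in (0, 1):
        ya = y[t == arm]
        if ya.size > 1:
            # Gaussian log-likelihood at the MLE: -n/2 * (log(2*pi*var) + 1)
            ll += -0.5 * ya.size * (np.log(2 * np.pi * ya.var() + 1e-12) + 1)
    return ll

def best_split(x, y, t, min_node=20):
    """Exhaustive search over thresholds of one covariate x for the split
    maximizing the summed within-child log-likelihood."""
    best_cut, best_ll = None, -np.inf
    for c in np.unique(x)[1:]:
        left = x < c
        if left.sum() < min_node or (~left).sum() < min_node:
            continue
        ll = node_loglik(y[left], t[left]) + node_loglik(y[~left], t[~left])
        if ll > best_ll:
            best_cut, best_ll = c, ll
    return best_cut, best_ll
```

Applied recursively to each child, such a search grows a tree whose terminal nodes are homogeneous in both respects, which is the stated goal of the MT method.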

2.
Summary.  We discuss the analysis of data from single-nucleotide polymorphism arrays comparing tumour and normal tissues. The data consist of sequences of indicators for loss of heterozygosity (LOH) and involve three nested levels of repetition: chromosomes for a given patient, regions within chromosomes, and single-nucleotide polymorphisms nested within regions. We propose to analyse these data by using a semiparametric model for multilevel repeated binary data. At the top level of the hierarchy we assume a sampling model for the observed binary LOH sequences that arises from a partial exchangeability argument. This implies a mixture of Markov chains model. The mixture is defined with respect to the Markov transition probabilities. We assume a non-parametric prior for the random-mixing measure. The resulting model takes the form of a semiparametric random-effects model with the matrix of transition probabilities being the random effects. The model includes appropriate dependence assumptions for the two remaining levels of the hierarchy, i.e. for regions within chromosomes and for chromosomes within a patient. We use the model to identify regions of increased LOH in a data set coming from a study of treatment-related leukaemia in children with an initial cancer diagnosis. The model successfully identifies the desired regions and performs well compared with other available alternatives.
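A small Python sketch of the central likelihood object, under stated assumptions: the sequence likelihood under a two-state Markov chain, and a finite mixture over transition matrices as a discrete stand-in for the paper's nonparametric mixing measure. Function names and the fixed initial probability are illustrative.

```python
import numpy as np

def chain_loglik(seq, P, pi1=0.5):
    """Log-likelihood of one binary (0/1) LOH sequence under a two-state
    Markov chain with transition matrix P; pi1 = P(first state is 1)."""
    ll = np.log(pi1 if seq[0] == 1 else 1.0 - pi1)
    for a, b in zip(seq[:-1], seq[1:]):
        ll += np.log(P[a, b])
    return ll

def mixture_loglik(seq, Ps, w):
    """Log-likelihood under a finite mixture over transition matrices Ps
    with weights w -- a discrete approximation to the random-mixing
    measure on transition probabilities described in the abstract."""
    comp = np.array([chain_loglik(seq, P) for P in Ps])
    m = comp.max()
    return m + np.log(np.dot(w, np.exp(comp - m)))  # stable log-sum-exp
```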

3.
Abstract.  A simple and standard approach for analysing multistate model data is to model all transition intensities and then compute a summary measure, such as the transition probabilities, based on this. This approach is relatively simple to implement, but it is difficult to see what the covariate effects are on the scale of interest. In this paper, we consider an alternative approach that directly models the covariate effects on transition probabilities in multistate models. Our new approach is based on binomial modelling and inverse probability of censoring weighting techniques and is very simple to implement with standard software. We show how to fit flexible regression models with possibly time-varying covariate effects.
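A toy Python sketch of the binomial/IPCW idea for the simplest two-state (survival) case, not the paper's full method: estimate P(T ≤ t0 | X) by a weighted binomial regression, with weights from a Kaplan-Meier estimate of the censoring distribution. Names, the naive tie handling, and the use of statsmodels are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def censoring_km(time, event):
    """Kaplan-Meier estimate of the censoring survival curve G(t),
    treating censoring (event == 0) as the event; ties handled naively."""
    order = np.argsort(time)
    t, c = time[order], 1 - event[order]
    n = len(t)
    surv = np.cumprod(1.0 - c / (n - np.arange(n)))
    return lambda s: surv[np.searchsorted(t, s, side="right") - 1] if s >= t[0] else 1.0

def ipcw_binomial_fit(time, event, X, t0):
    """Direct binomial regression for P(T <= t0 | X) with inverse
    probability of censoring weights; subjects censored before t0 have
    unknown status at t0 and are dropped (weight 0)."""
    G = censoring_km(time, event)
    failed = (time <= t0) & (event == 1)   # failure observed by t0
    known = failed | (time > t0)           # status at t0 is known
    y = failed[known].astype(float)
    w = np.where(failed[known],
                 np.array([1.0 / max(G(s), 1e-8) for s in time[known]]),
                 1.0 / max(G(t0), 1e-8))
    Xd = sm.add_constant(np.asarray(X)[known])
    return sm.GLM(y, Xd, family=sm.families.Binomial(), freq_weights=w).fit()
```

Fitting this model at a grid of t0 values gives the "possibly time-varying covariate effects" mentioned in the abstract.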

4.
We consider a Bayesian nonignorable model to accommodate a nonignorable selection mechanism for predicting small area proportions. Our main objective is to extend a model on selection bias from a previously published paper, coauthored by four authors, to accommodate small areas. These authors assume that the survey weights (or their reciprocals, which we also call selection probabilities) are available, but that there is no simple relation between the binary responses and the selection probabilities. To capture the nonignorable selection bias within each area, they assume that the binary responses and the selection probabilities are correlated. To accommodate the small areas, we extend their model to a hierarchical Bayesian nonignorable model, and we use Markov chain Monte Carlo methods to fit it. We illustrate our methodology using a numerical example obtained from data on activity limitation in the U.S. National Health Interview Survey. We also perform a simulation study to assess the effect of the correlation between the binary responses and the selection probabilities.

5.
The problem of building bootstrap confidence intervals for small probabilities with count data is addressed. The law of the independent observations is assumed to be a mixture of a given family of power series distributions. The mixing distribution is estimated by nonparametric maximum likelihood, and the corresponding mixture is used for resampling. We build percentile-t and Efron percentile bootstrap confidence intervals for the probabilities and prove their consistency in probability. The new theoretical results are supported by simulation experiments for Poisson and geometric mixtures. We compare percentile-t and Efron percentile bootstrap intervals with eight other bootstrap or asymptotic-theory-based intervals. It appears that Efron percentile bootstrap intervals outperform the competitors in terms of coverage probability and length.
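A simplified Python sketch of the two interval constructions for a small probability, with a plain Poisson fit standing in for the paper's NPMLE mixture (the resampling model is the main simplification): estimate p = P(X = 0) = exp(-λ), resample parametrically, and form Efron percentile and percentile-t intervals, the latter using a delta-method standard error.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_zero_and_se(x):
    """Plug-in estimate of P(X = 0) = exp(-lambda) for Poisson data,
    with a delta-method standard error."""
    lam = x.mean()
    p = np.exp(-lam)
    return p, p * np.sqrt(lam / len(x))

def bootstrap_cis(x, B=2000, alpha=0.05):
    """Efron percentile and percentile-t intervals from a parametric
    bootstrap (a stand-in for resampling from the fitted NPMLE mixture)."""
    p_hat, se_hat = p_zero_and_se(x)
    p_star, t_star = np.empty(B), np.empty(B)
    for b in range(B):
        xb = rng.poisson(x.mean(), size=len(x))
        pb, seb = p_zero_and_se(xb)
        p_star[b], t_star[b] = pb, (pb - p_hat) / max(seb, 1e-12)
    lo, hi = np.quantile(p_star, [alpha / 2, 1 - alpha / 2])
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    return {"efron_percentile": (lo, hi),
            "percentile_t": (p_hat - q_hi * se_hat, p_hat - q_lo * se_hat)}
```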

6.
Abstract

This article will briefly examine a few XML-based standards that have been developed by the Library of Congress (with input and help from various interested partners). These standards attempt to take traditional library concepts and frameworks and give them expression using modern technology. The standards that will be examined are MODS, METS, MADS, and MIX. Some of the text has been taken from the various Library of Congress Web pages for the standards being described. The Web page URLs are noted at the end of each section.

7.
In this article we study what we choose to call exotic properties of NHMS and NHSMS. We believe the interplay between the stochastic theory of NHMS and NHSMS and other branches of probability, stochastic processes and mathematics is fascinating as well as important. In many cases the information needed for the evolution of a NHMS is a larger set than the history of the multidimensional process NHMS. In a world where an overflow of information exists in almost all problems, it is almost certain that this information will be available. Here, we extend the definition of the NHMS in order to accommodate this case. In this respect we arrive at the definition of the 𝒢-non-homogeneous Markov system. We study the problem of change of measure in a 𝒢-non-homogeneous Markov system. It is proved that under certain conditions the NHMS retains the Markov property, while, as expected, the basic sequences of transition probabilities change, and it is established how they do so. We also find the expected population structure of the NHMS under the new measure in closed analytic form. We also define the 𝒢-non-homogeneous semi-Markov system and study the problem of change of measure in a 𝒢-non-homogeneous semi-Markov system. It is proved that under certain conditions the NHSMS retains the semi-Markov property while, as expected, the basic sequences of transition probabilities change, and it is established how they do so. We prove that if the input process of memberships is a non-homogeneous Poisson process then, asymptotically and under certain conditions easily met in practice, the compensated population structure of the 𝒢-NHMS is a martingale. Finally, we prove that the space of all random population structures is, under conditions easily met in practice, a Hilbert space.

8.
An extension to the class of conventional numerical probability models for nondeterministic phenomena has been identified by Dempster and Shafer in the class of belief functions. We were originally stimulated by this work, but have since come to believe that the bewildering diversity of uncertainty and chance phenomena cannot be encompassed within either the conventional theory of probability, its relatively minor modifications (e.g., not requiring countable additivity), or the theory of belief functions. In consequence, we have been examining the properties of, and prospects for, the generalization of belief functions that is known as upper and lower, or interval-valued, probability. After commenting on what we deem to be problematic elements of common personalist/subjectivist/Bayesian positions that employ either finitely or countably additive probability to represent strength of belief and that are intended to be normative for rational behavior, we sketch some of the ways in which the set of lower envelopes, a subset of the set of lower probabilities that contains the belief functions, enables us to preserve the core of Bayesian reasoning while admitting a more realistic (e.g., in its reduced insistence upon an underlying precision in our beliefs) class of probability-like models. Particular advantages of lower envelopes are identified in the area of the aggregation of beliefs.

The focus of our own research is in the area of objective probabilistic reasoning about time series generated by physical or other empirical (e.g., societal) processes. As it is not the province of a general mathematical methodology such as probability theory to rule empirical phenomena out of existence a priori, we are concerned by the constraint imposed by conventional probability theory that an empirical process of bounded random variables that is believed to have a time-invariant generating mechanism must then exhibit long-run stable time averages. We have shown that lower probability models that allow for unstable time averages can only lie in the class of undominated lower probabilities, a subset of lower probability models disjoint from the lower envelopes and having the weakest relationship to conventional probability measures. Our research has been devoted to exploring and developing the theory of undominated lower probabilities so that it can be applied to model and understand nondeterministic phenomena, and we have also been interested in identifying actual physical processes (e.g., flicker noises) that exhibit behavior requiring such novel models.


9.
ABSTRACT

In this paper, we investigate the performance of cumulative sum (CUSUM) stopping rules for the online detection of an unknown change point in a time-homogeneous Markov chain. Under the condition that the post-change transition probabilities are unknown, we propose two CUSUM-type schemes for the detection. The first scheme is based on the maximum likelihood estimates of the post-change transition probabilities. This scheme is limited by its computational burden, which is mitigated by a second scheme based on reference transition probabilities selected from a region known a priori. We give bounds on the mean delay time and the mean time between false alarms to illustrate the effectiveness of the proposed schemes. Simulation results also demonstrate the feasibility of the proposed schemes.
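A minimal Python sketch of a CUSUM detector for a Markov chain, in the spirit of the second scheme: accumulate log-likelihood ratios of a reference post-change transition matrix P1 against the pre-change matrix P0, resetting at zero, and raise an alarm when the statistic crosses a threshold h. Names and the stopping convention are illustrative.

```python
import numpy as np

def cusum_markov(seq, P0, P1, h):
    """CUSUM stopping rule for a change in the transition matrix of a
    finite-state Markov chain observed as the state sequence seq.
    Returns the index of the first alarm, or None if no alarm."""
    W = 0.0
    for k in range(1, len(seq)):
        a, b = seq[k - 1], seq[k]
        # one-step log-likelihood ratio of post- vs pre-change dynamics
        W = max(0.0, W + np.log(P1[a, b] / P0[a, b]))
        if W > h:
            return k
    return None
```

The threshold h trades off the mean time between false alarms against the mean delay, the two quantities bounded in the paper.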

10.
The problem of comparing several experimental treatments to a standard arises frequently in medical research. Various multi-stage randomized phase II/III designs have been proposed that select one or more promising experimental treatments and compare them to the standard while controlling overall Type I and Type II error rates. This paper addresses phase II/III settings where the joint goals are to increase the average time to treatment failure and control the probability of toxicity while accounting for patient heterogeneity. We are motivated by the desire to construct a feasible design for a trial of four chemotherapy combinations for treating a family of rare pediatric brain tumors. We present a hybrid two-stage design based on two-dimensional treatment effect parameters. A targeted parameter set is constructed from elicited parameter pairs considered to be equally desirable. Bayesian regression models for failure time and the probability of toxicity as functions of treatment and prognostic covariates are used to define two-dimensional covariate-adjusted treatment effect parameter sets. Decisions at each stage of the trial are based on the ratio of posterior probabilities of the alternative and null covariate-adjusted parameter sets. Design parameters are chosen to minimize expected sample size subject to frequentist error constraints. The design is illustrated by application to the brain tumor trial.

11.
Nest site fidelity of adult female black brant breeding at the Tutakoke River, Alaska, was evaluated from 1987 to 1993 by recording nest locations of approximately 1500 brant marked with individually coded tarsal tags. We used two approaches to study fidelity. First, we examined fidelity to four geographic strata within the Tutakoke River colony. For our second approach, we used ARC/INFO to map and measure distances between successive nesting attempts, and then estimated the probability of fidelity to within 200 m of the previous nest site. We used program MSSURVIV to estimate movement probabilities and to test hypotheses about fidelity. Both of our approaches indicate that female black brant exhibit a high (>0.72) probability of fidelity to previous nest sites. Our estimates of fidelity were not biased by the confounding of detection, survival, and movement probabilities that has plagued previous studies of fidelity.

12.
We revisit the addition law for expectations and present a sibling law: the absolute law for expectations. We show that these two laws and their corresponding laws for probabilities can be reconciled under a single framework. As an application, we use the absolute law for expectations to calculate the mean absolute deviation. Finally, we remark on a hidden point in a related article previously published on these pages; this will help readers to avoid a potential pitfall.
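The precise statement of the authors' "absolute law" is in the article; a standard identity in the same spirit, which yields the mean absolute deviation directly, is sketched below (not necessarily the authors' exact formulation).

```latex
% For an integrable random variable X with distribution function F
% and any constant c,
\[
  \mathbb{E}\,\lvert X - c \rvert
  \;=\; \int_{-\infty}^{c} F(t)\,dt \;+\; \int_{c}^{\infty} \bigl(1 - F(t)\bigr)\,dt .
\]
% Taking c = E[X] gives the mean absolute deviation:
\[
  \operatorname{MAD}(X) \;=\; \mathbb{E}\,\lvert X - \mathbb{E}X \rvert .
\]
```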

13.
We set out IDR as a loglinear-model-based Moran's I test for Poisson count data that resembles the Moran's I residual test for Gaussian data. We evaluate its type I and type II error probabilities via simulations, and demonstrate its utility via a case study. When population sizes are heterogeneous, IDR is effective in detecting local clusters through local association terms, with an acceptable type I error probability. When used in conjunction with local spatial association terms in loglinear models, IDR can also indicate the existence of a first-order global cluster that can hardly be removed by local spatial association terms. In this situation, IDR should not be applied directly for local cluster detection. In the case study of St. Louis homicides, we bridge loglinear model methods for parameter estimation to exploratory data analysis, so that a uniform association term can be defined with spatially varied contributions among spatial neighbors. The method makes use of exploratory tools such as Moran's I scatter plots and residual plots to evaluate the magnitude of deviance residuals, and it is effective for modelling the shape, elevation, and magnitude of a local cluster in the model-based test.
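For reference, the classical global Moran's I statistic that the IDR test resembles is straightforward to compute; a minimal Python sketch follows. The paper's IDR applies a statistic of this type to the deviance residuals of a loglinear (Poisson) model rather than to raw Gaussian data.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x and spatial weight matrix W
    (W[i, j] > 0 when regions i and j are neighbours; zero diagonal)."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)
```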

14.
Abstract.  Typically, regression analysis for multistate models has been based on regression models for the transition intensities. These models lead to highly nonlinear and very complex models for the effects of covariates on state occupation probabilities. We present a technique that models the state occupation or transition probabilities in a multistate model directly. The method is based on the pseudo-values from a jackknife statistic constructed from non-parametric estimators for the probability in question. These pseudo-values are used as outcome variables in a generalized estimating equation to obtain estimates of model parameters. We examine this approach and its properties in detail for two special multistate model probabilities: the cumulative incidence function in competing risks and the current leukaemia-free survival used in bone marrow transplants. The latter is the probability that a patient is alive and in either a first or second post-transplant remission. The techniques are illustrated on a dataset of leukaemia patients given a marrow transplant. We also discuss extensions of the model that are of current research interest.
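A minimal Python sketch of the pseudo-value construction for the simplest case, the survival probability at a fixed time t0 estimated by Kaplan-Meier (tie handling is naive; names are illustrative). The resulting pseudo-observations would then be regressed on covariates via a GEE with a suitable link.

```python
import numpy as np

def km_surv(time, event, t0):
    """Kaplan-Meier estimate of S(t0)."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    n = len(t)
    surv = np.cumprod(1.0 - e / (n - np.arange(n)))
    idx = np.searchsorted(t, t0, side="right") - 1
    return 1.0 if idx < 0 else surv[idx]

def pseudo_values(time, event, t0):
    """Jackknife pseudo-observations
    theta_i = n * theta_hat - (n - 1) * theta_hat(-i)
    for the survival probability at t0."""
    n = len(time)
    full = km_surv(time, event, t0)
    mask = np.ones(n, dtype=bool)
    pv = np.empty(n)
    for i in range(n):
        mask[i] = False
        pv[i] = n * full - (n - 1) * km_surv(time[mask], event[mask], t0)
        mask[i] = True
    return pv
```

The same recipe applies to other plug-in probabilities, such as a cumulative incidence function, by swapping the non-parametric estimator.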

15.
We deal with a random graph model where at each step, a vertex is chosen uniformly at random, and it is either duplicated or its edges are deleted. Duplication has a given probability. We analyze the limit distribution of the degree of a fixed vertex and derive a.s. asymptotic bounds for the maximal degree. The model shows a phase transition phenomenon with respect to the probabilities of duplication and deletion.  
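A toy Python simulation of such a duplication/deletion dynamic. The exact rule in the paper may differ; in particular, whether the copy is joined to the original vertex is a modelling convention, flagged below, and the seed graph is an arbitrary assumption.

```python
import numpy as np

def duplication_deletion(n_steps, p_dup, seed=0):
    """Simulate: at each step pick a uniform vertex; with probability
    p_dup duplicate it (the copy inherits its neighbours and, in this
    convention, is also joined to the original), otherwise delete all
    of its edges (the vertex itself remains)."""
    rng = np.random.default_rng(seed)
    adj = {0: {1}, 1: {0}}   # hypothetical seed graph: a single edge
    nxt = 2
    for _ in range(n_steps):
        v = rng.choice(list(adj))
        if rng.random() < p_dup:
            adj[nxt] = set(adj[v]) | {v}
            for u in adj[nxt]:
                adj[u].add(nxt)
            nxt += 1
        else:
            for u in adj[v]:
                adj[u].discard(v)
            adj[v] = set()
    return adj
```

Tracking the degree of a fixed vertex, or the maximum degree, over many runs illustrates the phase transition in p_dup described in the abstract.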

16.
This paper presents a method for assessing the sensitivity of predictions in Bayesian regression analyses. In parametric Bayesian analyses there is a family s_0 of regression functions, parametrized by a finite-dimensional vector B. The family s_0 is a subset of R, the set of all possible regression functions. A prior π_0 on B induces a prior on R. This paper assesses sensitivity by computing bounds on the predictive probability of a fixed set K over a class of priors, Γ, induced by a class of families of regression functions, Γ_s, and a class of priors, Γ_π. The paper is divided into three parts, which (1) define Γ, (2) describe an algorithm for finding accurate bounds on predictive probabilities over Γ, and (3) illustrate the method with two examples. It is found that sensitivity to the family of regression functions can be much more important than sensitivity to π_0.

17.
We consider graphs, confidence procedures, and tests that can be used to compare transition probabilities in a Markov chain model with intensities specified by a Cox proportional hazards model. Under the assumptions of this model, the regression coefficients provide information about the relative risks of covariates in one-step transitions; however, they cannot in general be used to assess whether the covariates have a beneficial or detrimental effect on the endpoint events. To alleviate this problem, we consider graphical tests based on confidence procedures for a generalized Q-Q plot and for the difference between transition probabilities. The procedures are illustrated using data from the International Bone Marrow Transplant Registry.

18.
In this paper, we consider classification procedures for exponential populations when an order on the population parameters is known. We define and study the behavior of a classification rule that takes the additional information into account and outperforms the likelihood-ratio-based rule when two populations are considered. Moreover, we study the behavior of this rule in each of the two populations and compare the misclassification probabilities with the classical ones. Type II censoring, which is usual in practice, is considered and corresponding results are obtained. The performance for more than two populations is evaluated by simulation.
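For orientation, a Python sketch of the baseline likelihood-ratio rule that the paper's procedure is compared against (the improved rule, roughly speaking, exploits the known ordering of the parameters, e.g. through order-restricted estimates; the plug-in rates and names below are illustrative assumptions).

```python
import numpy as np

def lr_classify(x, train1, train2):
    """Baseline plug-in likelihood-ratio rule: assign the new sample x
    to the exponential population whose estimated rate gives it the
    higher log-likelihood. Rates are plain MLEs (1 / sample mean)."""
    r1, r2 = 1.0 / np.mean(train1), 1.0 / np.mean(train2)
    loglik = lambda r: len(x) * np.log(r) - r * np.sum(x)
    return 1 if loglik(r1) >= loglik(r2) else 2
```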

19.
The probability of success, or average power, describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in the probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to obtain the necessary information about the risks and chances associated with such subgroup selection designs.
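A minimal Python sketch of the probability-of-success calculation itself, under a simple normal/normal setup that is an assumption, not the paper's simulation design: average the power of a one-sided z-test over a normal effect distribution centred at the phase II estimate.

```python
import numpy as np
from scipy.stats import norm

def prob_of_success(delta_hat, se_phase2, se_phase3,
                    alpha=0.025, B=100_000, seed=1):
    """Monte Carlo probability of success: phase III power averaged over
    a normal prior on the true effect derived from the phase II estimate.
    All parameter names and the normal model are illustrative."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(delta_hat, se_phase2, size=B)   # effect distribution
    z_alpha = norm.ppf(1 - alpha)
    power = norm.cdf(delta / se_phase3 - z_alpha)      # one-sided z-test power
    return power.mean()
```

Feeding in a delta_hat taken from the most extreme phase II subgroup, versus the true subgroup effect, reproduces the kind of bias comparison the abstract describes.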

20.
We provide a method for simultaneous variable selection and outlier identification using the mean-shift outlier model. The procedure consists of two steps: the first step is to identify potential outliers, and the second step is to perform all possible subset regressions for the mean-shift outlier model containing the potential outliers identified in step 1. This procedure is helpful for model selection while simultaneously considering outlier identification, and can be used to identify multiple outliers. In addition, we can evaluate the impact on the regression model of simultaneous omission of variables and interesting observations. In an example, we provide detailed output from the R system, and compare the results with those using posterior model probabilities as proposed by Hoeting et al. [Comput. Stat. Data Anal. 22 (1996), pp. 252-270] for simultaneous variable selection and outlier identification.
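A small Python sketch of the mean-shift outlier model for a single suspected outlier (the abstract's procedure extends this to multiple dummies and all-subsets regression; names are illustrative): add a dummy column that is 1 only for the suspected observation, so its coefficient absorbs that observation's shift.

```python
import numpy as np

def mean_shift_fit(X, y, outlier_idx):
    """OLS fit of y = [1, X, d] @ beta, where d is 1 only at outlier_idx.
    The t-statistic of the shift coefficient gamma flags the outlier;
    it coincides with the externally studentized residual."""
    n = len(y)
    d = np.zeros((n, 1)); d[outlier_idx] = 1.0
    Z = np.hstack([np.ones((n, 1)), X, d])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / (n - Z.shape[1])
    cov = sigma2 * np.linalg.inv(Z.T @ Z)
    gamma, se = beta[-1], np.sqrt(cov[-1, -1])
    return gamma, se, gamma / se
```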

