Similar Documents
20 similar documents found (search time: 575 ms).
1.
One of the objectives of personalized medicine is to take treatment decisions based on a biomarker measurement. Therefore, it is often interesting to evaluate how well a biomarker can predict the response to a treatment. To do so, a popular methodology consists of using a regression model and testing for an interaction between treatment assignment and the biomarker. However, the existence of an interaction is necessary but not sufficient for a biomarker to be predictive. Hence, the use of the marker-by-treatment predictiveness curve has been recommended. In addition to evaluating how well a single continuous biomarker predicts treatment response, it can further help to define an optimal threshold. This curve displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome or rely on a proportional hazards model for a time-to-event outcome have been proposed to estimate this curve. In this work, we propose some extensions for censored data. They rely on a time-dependent logistic model, and we propose to estimate this model via inverse probability of censoring weighting. We present simulation results and three applications to prostate cancer, liver cirrhosis, and lung cancer data. They suggest that a large number of events need to be observed to define a threshold with sufficient accuracy for clinical usefulness. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold also depends on this time horizon.
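
As a rough illustration of the inverse-probability-of-censoring-weighting (IPCW) step described above (a sketch, not the authors' implementation), the following Python snippet computes IPCW weights for a binary status "event by time horizon t" from right-censored data, using a hand-rolled Kaplan-Meier estimate of the censoring distribution. All variable names (time, event, t_horizon) are hypothetical.

```python
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival function G(t) = P(C > t).
    Censoring is treated as the 'event' here (i.e., event == 0)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    at_risk = n - np.arange(n)                 # subjects still at risk just before each time
    cens = (event == 0).astype(float)          # censoring indicator
    surv = np.cumprod(1.0 - cens / at_risk)    # product-limit estimate at the sorted times
    return time, surv

def ipcw_weights(time, event, t_horizon):
    """Weights for a time-dependent logistic model of P(T <= t_horizon).
    Subjects censored before t_horizon without an event get weight 0;
    the others are weighted by 1 / G(min(T, t_horizon))."""
    grid, surv = km_censoring_survival(time, event)

    def G(t):  # step-function lookup, approximating G at the last grid point below t
        idx = np.searchsorted(grid, t, side="left") - 1
        return 1.0 if idx < 0 else surv[idx]

    status = np.where((time <= t_horizon) & (event == 1), 1.0,   # event by t_horizon
                      np.where(time > t_horizon, 0.0, np.nan))   # NaN: censored early, unusable
    w = np.zeros(len(time))
    usable = ~np.isnan(status)
    w[usable] = 1.0 / np.array([G(min(t, t_horizon)) for t in time[usable]])
    return status, w

# toy usage: the weights could then feed a weighted logistic regression of status
# on treatment, biomarker quantile and their interaction
rng = np.random.default_rng(0)
t_true, cens_t = rng.exponential(5, 200), rng.exponential(8, 200)
time, event = np.minimum(t_true, cens_t), (t_true <= cens_t).astype(int)
status, w = ipcw_weights(time, event, t_horizon=3.0)
```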

2.
A statistical test can be seen as a procedure to produce a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject a null hypothesis or not), Kaiser's directional two-sided test as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involve three possible decisions to infer the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain of statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein's theories. In this article, we introduce a five-decision rule testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the testing procedures of Kaiser (no assumption that a point hypothesis is impossible) and Jones and Tukey (higher power), allowing for a nonnegligible (typically 20%) reduction of the sample size needed to reach a given statistical power, compared to the traditional approach.
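
A minimal sketch (not taken from the article) of how a five-decision rule of this kind could be operationalised for a one-sample mean, combining a two-sided test at level alpha with two one-sided tests at level alpha; the decision labels are illustrative rather than the paper's.

```python
import numpy as np
from scipy import stats

def five_decision_test(x, mu0=0.0, alpha=0.05):
    """Sketch of a five-decision rule built from a two-sided test at level alpha
    plus two one-sided tests at level alpha (decision labels are illustrative)."""
    p_gt = stats.ttest_1samp(x, mu0, alternative="greater").pvalue  # H1: mu > mu0
    p_lt = stats.ttest_1samp(x, mu0, alternative="less").pvalue     # H1: mu < mu0

    if p_gt < alpha / 2:         # two-sided test rejects, positive direction
        return "mu > mu0"
    if p_lt < alpha / 2:         # two-sided test rejects, negative direction
        return "mu < mu0"
    if p_gt < alpha:             # only the one-sided test rejects
        return "mu >= mu0 (weaker directional claim)"
    if p_lt < alpha:
        return "mu <= mu0 (weaker directional claim)"
    return "no decision"

rng = np.random.default_rng(1)
print(five_decision_test(rng.normal(0.3, 1.0, size=50)))
```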

3.
The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data: first, to facilitate a simple understanding of fixed- and random-effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small-study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.
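
To make the centre-of-mass analogy concrete (a sketch of standard fixed-effect pooling and Egger regression with made-up study estimates, not of the Meta-Analyser application itself): with inverse-variance weights, the pooled estimate is exactly the balance point of the study estimates.

```python
import numpy as np

# hypothetical study estimates (e.g., log odds ratios) and their standard errors
theta = np.array([0.10, 0.32, 0.25, 0.60, 0.15])
se = np.array([0.12, 0.20, 0.15, 0.35, 0.10])

# fixed-effect pooling: the weights 1/se^2 act like point masses, so the pooled
# estimate is the centre of mass of the studies along the effect-size axis
w = 1.0 / se**2
theta_fixed = np.sum(w * theta) / np.sum(w)
se_fixed = np.sqrt(1.0 / np.sum(w))

# Egger regression: regress the standardized effect on precision; an intercept
# far from zero suggests small-study (funnel plot) asymmetry
precision = 1.0 / se
slope, intercept = np.polyfit(precision, theta / se, deg=1)

print(f"fixed-effect estimate {theta_fixed:.3f} (SE {se_fixed:.3f}), "
      f"Egger intercept {intercept:.3f}")
```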

4.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.
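
A minimal sketch of the two-stage idea (hypothetical data layout and transform, not the paper's horticultural model): stage one fits a per-unit polynomial to the transformed quality scores over time, and stage two relates the derived coefficients to the treatment factors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical long-format data: one ordinal quality score per unit and week
df = pd.DataFrame({
    "unit":      np.repeat(np.arange(12), 6),
    "week":      np.tile(np.arange(6), 12),
    "treatment": np.repeat(["A", "B", "C", "D"] * 3, 6),
    "quality":   np.clip(9 - 0.8 * np.tile(np.arange(6), 12)
                         + np.random.default_rng(2).normal(0, 0.5, 72), 1, 9),
})

# stage 1: per-unit quadratic fit on a transformed scale (a simple logit-type
# transform of the 1-9 score is assumed here; the paper's transform may differ)
df["z"] = np.log(df["quality"] / (10 - df["quality"]))
coefs = []
for unit, g in df.groupby("unit"):
    b2, b1, b0 = np.polyfit(g["week"], g["z"], deg=2)
    coefs.append({"unit": unit, "treatment": g["treatment"].iloc[0],
                  "decline_rate": b1, "curvature": b2})
stage1 = pd.DataFrame(coefs)

# stage 2: model the derived decline-rate coefficient as a function of treatment
# (a joint mean-and-dispersion model would replace this plain OLS in practice)
fit = smf.ols("decline_rate ~ treatment", data=stage1).fit()
print(fit.summary().tables[1])
```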

5.
In a response-adaptive design, the trial is reviewed and updated on the basis of observed outcomes in order to achieve a specific goal. In clinical trials, our goal is to allocate a larger number of patients to the better treatment. In the present paper, we use a response-adaptive design in a two-treatment, two-period crossover trial where the treatment responses are continuous. We provide probability measures to choose between the possible treatment combinations AA, AB, BA, or BB. The goal is to use the better treatment combination a larger number of times. We calculate the allocation proportions to the possible treatment combinations and their standard errors. We also derive some asymptotic results and provide solutions to related inferential problems. The proposed procedure is compared with a possible competitor. Finally, we use a data set to illustrate the applicability of our proposed design.

6.
When statisticians are uncertain as to which parametric statistical model to use to analyse experimental data, they will often resort to a non-parametric approach. The purpose of this paper is to provide insight into a simple approach to take when the appropriate parametric model is unclear and a Bayesian analysis is planned. I introduce an approximate, or substitution, likelihood, first proposed by Harold Jeffreys in 1939, and show how to implement the approach, combined with both a non-informative and an informative prior, to provide a random sample from the posterior distribution of the median of the unknown distribution. The first example I use to demonstrate the approach is a within-patient bioequivalence design; I then show how to extend the approach to a parallel-group design.
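
A rough grid-based sketch of the substitution-likelihood idea for the median, using one common form of Jeffreys' approximate likelihood with a flat prior; the bioequivalence application, the informative prior and the paper's exact formulation are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.lognormal(mean=1.0, sigma=0.6, size=30)    # data from an unknown distribution

# grid of candidate values for the population median
grid = np.linspace(x.min(), x.max(), 500)

# substitution (approximate) likelihood for the median: with k(m) observations
# below m, L(m) is proportional to C(n, k) * (1/2)^n -- one common form of
# Jeffreys' proposal; details may differ from the paper's implementation
n = len(x)
k = np.searchsorted(np.sort(x), grid, side="left")
log_lik = stats.binom.logpmf(k, n, 0.5)

# flat (non-informative) prior on the grid -> posterior by normalisation
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# draw an (approximate) random sample from the posterior of the median
sample = rng.choice(grid, size=2000, replace=True, p=post)
print("posterior median and 95% interval:",
      np.percentile(sample, [50, 2.5, 97.5]).round(2))
```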

7.
It is known that patients may cease participating in a longitudinal study and become lost to follow-up. The objective of this article is to present a Bayesian model to estimate the malaria transition probabilities while accounting for individuals lost to follow-up. We consider a homogeneous population, and it is assumed that the considered period of time is small enough to avoid two or more transitions from one state of health to another. The proposed model is based on a Gibbs sampling algorithm that uses information on losses to follow-up at the end of the longitudinal study. To simulate the unknown numbers of individuals with positive and negative malaria states at the end of the study among those lost to follow-up, two latent variables were introduced into the model. We used a real data set and a simulated data set to illustrate the application of the methodology. The proposed model showed a good fit to these data sets, and the algorithm did not show problems of convergence or lack of identifiability. We conclude that the proposed model is a good alternative for estimating transition probabilities between health states in studies with low adherence to follow-up.
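
A toy data-augmentation Gibbs sketch in the spirit described above, with an assumed beta prior and a binomial latent count for the unobserved end-of-study states; the paper's full transition model is richer than this single-probability example.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical end-of-study counts
n_pos, n_neg, n_lost = 42, 130, 38   # observed positive, observed negative, lost to follow-up
a, b = 1.0, 1.0                      # Beta(a, b) prior on the probability of a positive state

n_iter, burn = 5000, 1000
p = 0.5                              # initial value
draws = []
for it in range(n_iter):
    # latent step: number of lost-to-follow-up individuals who ended in the positive state
    z = rng.binomial(n_lost, p)
    # conditional posterior of the positive-state probability given the completed data
    p = rng.beta(a + n_pos + z, b + n_neg + (n_lost - z))
    if it >= burn:
        draws.append(p)

draws = np.array(draws)
print(f"posterior mean {draws.mean():.3f}, 95% interval "
      f"[{np.percentile(draws, 2.5):.3f}, {np.percentile(draws, 97.5):.3f}]")
```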

8.
The problem of comparing, contrasting and combining information from different sets of data is an enduring one in many practical applications of statistics. A specific problem of combining information from different sources arose in integrating information from three different sets of data generated by three different sampling campaigns at the input stage as well as at the output stage of a grey-water treatment process. For each stage, a common process trend function needs to be estimated to describe the input and output material process behaviours. Once the common input and output process models are established, it is required to estimate the efficiency of the grey-water treatment method. A synthesized tool for modelling different sets of process data is created by assembling and organizing a number of existing techniques: (i) a mixed model of fixed and random effects, extended to allow for a nonlinear fixed effect, (ii) variogram modelling, a geostatistical technique, (iii) a weighted least squares regression embedded in an iterative maximum-likelihood technique to handle linear/nonlinear fixed and random effects and (iv) a formulation of a transfer-function model for the input and output processes together with a corresponding nonlinear maximum-likelihood method for estimation of a transfer function. The synthesized tool is demonstrated, in a new case study, to contrast and combine information from connected process models and to determine the change in one quality characteristic, namely pH, of the input and output materials of a grey-water filtering process.

9.
To quantify uncertainty in a formal manner, statisticians play a vital role in identifying a prior distribution for a Bayesian‐designed clinical trial. However, when expert beliefs are to be used to form the prior, the literature is sparse on how feasible and how reliable it is to elicit beliefs from experts. For late‐stage clinical trials, high importance is placed on reliability; however, feasibility may be equally important in early‐stage trials. This article describes a case study to assess how feasible it is to conduct an elicitation session in a structured manner and to form a probability distribution that would be used in a hypothetical early‐stage trial. The case study revealed that by using a structured approach to planning, training and conduct, it is feasible to elicit expert beliefs and form a probability distribution in a timely manner. We argue that by further increasing the published accounts of elicitation of expert beliefs in drug development, there will be increased confidence in the feasibility of conducting elicitation sessions. Furthermore, this will lead to wider dissemination of the pertinent issues on how to quantify uncertainty to both practicing statisticians and others involved with designing trials in a Bayesian manner. Copyright © 2013 John Wiley & Sons, Ltd.

10.
A common challenge in clinical research trials is for applied statisticians to manage, analyse, summarize and report an enormous amount of data. Nowadays, due to advances in medical technology, situations frequently arise where it is difficult to display and interpret results. Consequently, a creative approach is required to summarize the main outcomes of the statistical analyses in a form which is easy to grasp, to interpret and possibly to remember. In this paper a number of clinical case studies are provided: first, a topographical map of the brain summarizing P-values obtained from comparisons across different EEG sites; second, a bull's-eye plot showing the agreement between observers in different regions of the heart; third, a pictorial table reporting inter- and intra-rater reliability scores of a speech assessment; fourth, a star plot to deal with numerous questionnaire results; and finally, a correlogram to illustrate significant correlation values between two diagnostic tools. The intention of this paper is to encourage the use of visual representations of multiple statistical outcomes. Such representations not only embellish the report but also aid interpretation by conveying a specific statistical meaning.
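
As one small, hypothetical example of the kind of display advocated (in the spirit of the paper's fifth case study, with simulated scores rather than real diagnostic data), a correlogram between two sets of measurements can be drawn with a few lines of matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
# hypothetical subscale scores from two diagnostic tools on the same 40 patients
tool_a = rng.normal(size=(40, 4))
tool_b = 0.6 * tool_a + 0.8 * rng.normal(size=(40, 4))

# cross-correlation matrix between the two tools' subscales
corr = np.corrcoef(tool_a.T, tool_b.T)[:4, 4:]

fig, ax = plt.subplots()
im = ax.imshow(corr, vmin=-1, vmax=1, cmap="RdBu_r")
ax.set_xticks(range(4))
ax.set_xticklabels([f"B{j+1}" for j in range(4)])
ax.set_yticks(range(4))
ax.set_yticklabels([f"A{i+1}" for i in range(4)])
for i in range(4):
    for j in range(4):
        ax.text(j, i, f"{corr[i, j]:.2f}", ha="center", va="center")
ax.set_title("Correlogram: tool A subscales vs tool B subscales")
fig.colorbar(im, ax=ax, label="Pearson r")
plt.show()
```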

11.
The theory of higher-order asymptotics provides accurate approximations to posterior distributions for a scalar parameter of interest, and to the corresponding tail area, for practical use in Bayesian analysis. The aim of this article is to extend these approximations to pseudo-posterior distributions, e.g., posterior distributions based on a pseudo-likelihood function and a suitable prior, which are proved to be particularly useful when the full likelihood is analytically or computationally infeasible. In particular, from a theoretical point of view, we derive the Laplace approximation for a pseudo-posterior distribution, and for the corresponding tail area, for a scalar parameter of interest, also in the presence of nuisance parameters. From a computational point of view, starting from these higher-order approximations, we discuss the higher-order tail area (HOTA) algorithm useful to approximate marginal posterior distributions, and related quantities. Compared to standard Markov chain Monte Carlo methods, the main advantage of the HOTA algorithm is that it gives independent samples at a negligible computational cost. The relevant computations are illustrated by two examples.  相似文献   
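
A bare-bones sketch of a first-order Laplace-type approximation to a (pseudo-)posterior and its tail area for a scalar parameter; the HOTA algorithm and the higher-order corrections in the article go beyond this, and the normal-location example below simply stands in for a genuine pseudo-likelihood.

```python
import numpy as np
from scipy import optimize, stats

# hypothetical log-(pseudo-)likelihood plus log-prior for a scalar parameter theta
rng = np.random.default_rng(6)
y = rng.normal(1.2, 1.0, size=25)

def log_post(theta):
    return np.sum(stats.norm.logpdf(y, loc=theta, scale=1.0)) + stats.norm.logpdf(theta, 0, 10)

# Laplace approximation: find the mode and the curvature at the mode
res = optimize.minimize_scalar(lambda t: -log_post(t), bounds=(-10, 10), method="bounded")
mode = res.x
h = 1e-4
curv = -(log_post(mode + h) - 2 * log_post(mode) + log_post(mode - h)) / h**2  # observed information
sd = 1.0 / np.sqrt(curv)

# approximate posterior tail area P(theta > theta0)
theta0 = 1.0
tail = stats.norm.sf(theta0, loc=mode, scale=sd)
print(f"mode {mode:.3f}, sd {sd:.3f}, approximate P(theta > {theta0}) = {tail:.3f}")
```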

12.
In many real-life networks, such as computer networks, branches and nodes have multi-state capacity, lead time, and accuracy rate. Evaluating the reliability of a network with unreliable nodes is more complex because a node failure disables its adjacent branches. Such a network is named a stochastic unreliable-node computer network (SUNCN). Under the strict assumption that each component (branch and node) has a deterministic capacity, the quickest path (QP) problem is to find a path sending a specific amount of data with minimum transmission time. The accuracy rate is a critical index to measure the performance of a computer network because some packets are damaged or lost due to voltage instability, magnetic field effects, lightning, etc. Subject to both an assured accuracy rate and a time constraint, this paper extends the QP problem to discuss the system reliability of an SUNCN. An efficient algorithm based on a graphic technique is proposed to find the minimal capacity vector meeting such constraints. System reliability, the probability of sending a specific amount of data through multiple minimal paths subject to both the assured accuracy rate and time constraints, can subsequently be computed.
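
For the deterministic version of the quickest path problem referred to above, the transmission time of d units of data along a path is usually taken as the total lead time plus d divided by the path's bottleneck capacity. The toy Python sketch below (a hypothetical tiny network, with no unreliable nodes or accuracy rates) enumerates simple paths on that basis; it is an illustration of the QP objective, not the paper's efficient algorithm.

```python
import itertools

# hypothetical directed network: edge -> (lead_time, capacity)
edges = {
    ("s", "a"): (1.0, 4.0), ("a", "t"): (2.0, 3.0),
    ("s", "b"): (2.0, 6.0), ("b", "t"): (1.0, 5.0),
    ("a", "b"): (0.5, 2.0),
}

def transmission_time(path, d):
    """Time to send d units along a path: total lead time + d / bottleneck capacity."""
    hops = list(zip(path[:-1], path[1:]))
    lead = sum(edges[e][0] for e in hops)
    bottleneck = min(edges[e][1] for e in hops)
    return lead + d / bottleneck

def quickest_path(source, sink, d, nodes=("s", "a", "b", "t")):
    """Brute-force search over simple paths (fine for tiny illustrative networks)."""
    best = None
    inner = [n for n in nodes if n not in (source, sink)]
    for r in range(len(inner) + 1):
        for mid in itertools.permutations(inner, r):
            path = (source, *mid, sink)
            if all(e in edges for e in zip(path[:-1], path[1:])):
                t = transmission_time(path, d)
                if best is None or t < best[1]:
                    best = (path, t)
    return best

print(quickest_path("s", "t", d=10.0))
```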

13.
In semi-competing risks one considers a terminal event, such as death of a person, and a non-terminal event, such as disease recurrence. We present a model where the time to the terminal event is the first passage time to a fixed level c in a stochastic process, while the time to the non-terminal event is represented by the first passage time of the same process to a stochastic threshold S, assumed to be independent of the stochastic process. In order to be explicit, we let the stochastic process be a gamma process, but other processes with independent increments may alternatively be used. For semi-competing risks this appears to be a new modeling approach, being an alternative to traditional approaches based on illness-death models and copula models. In this paper we consider a fully parametric approach. The likelihood function is derived and statistical inference in the model is illustrated on both simulated and real data.
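
A small simulation sketch of this first-passage construction (a discretised gamma process; an exponential random threshold S is assumed purely for illustration, since the abstract leaves the threshold distribution general):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_semicompeting(n, c=10.0, shape_rate=1.5, scale=1.0, dt=0.01, t_max=50.0):
    """First passage of a discretised gamma process: the time to the fixed level c is
    the terminal event, the time to a random threshold S is the non-terminal event."""
    t_grid = np.arange(dt, t_max + dt, dt)
    out = []
    for _ in range(n):
        # independent gamma increments -> nondecreasing sample path
        path = np.cumsum(rng.gamma(shape=shape_rate * dt, scale=scale, size=len(t_grid)))
        S = rng.exponential(c)             # illustrative random threshold (mean c)
        hit_S = np.argmax(path >= S)       # first index where the path crosses S
        hit_c = np.argmax(path >= c)
        t_nonterminal = t_grid[hit_S] if path[-1] >= S else np.inf
        t_terminal = t_grid[hit_c] if path[-1] >= c else np.inf
        out.append((t_nonterminal, t_terminal))
    return np.array(out)

times = simulate_semicompeting(2000)
frac_nt_first = np.mean(times[:, 0] < times[:, 1])
print(f"fraction experiencing the non-terminal event before the terminal one: {frac_nt_first:.2f}")
```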

14.
An up-and-down (UD) experiment for estimating a given quantile of a binary response curve is a sequential procedure whereby at each step a given treatment level is used and, according to the outcome of the observations, a decision is made (deterministic or randomized) as to whether to maintain the same treatment or increase it by one level or else to decrease it by one level. The design points of such UD rules generate a Markov chain and the mode of its invariant distribution is an approximation to the quantile of interest. The main area of application of UD algorithms is in Phase I clinical trials, where it is of greatest importance to be able to attain reliable results in small-size experiments. In this paper we address the issues of the speed of convergence and the precision of quantile estimates of such procedures, both in theory and by simulation. We prove that the version of UD designs introduced in 1994 by Durham and Flournoy can in a large number of cases be regarded as optimal among all UD rules. Furthermore, in order to improve on the convergence properties of this algorithm, we propose a second-order UD experiment which, instead of making use of just the most recent observation, bases the next step on the outcomes of the last two. This procedure shares a number of desirable properties with the corresponding first-order designs, and also allows greater flexibility. With a suitable choice of the parameters, the new scheme is at least as good as the first-order one and leads to an improvement of the quantile estimates when the starting point of the algorithm is low relative to the target quantile.
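
As a point of reference, the Durham-Flournoy-style biased-coin up-and-down rule for a target toxicity quantile Γ ≤ 0.5 is often stated as: step down after a toxicity; after a non-toxicity, step up with probability Γ/(1−Γ), otherwise stay. The first-order simulation sketch below follows that statement (it may differ in detail from the 1994 formulation, and the second-order design of the paper is not shown); the dose-toxicity curve is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

def biased_coin_ud(p_tox, gamma=0.3, n=60, start=0):
    """First-order biased-coin up-and-down run over dose levels 0..len(p_tox)-1.
    p_tox[k] is the (hypothetical) toxicity probability at level k."""
    b = gamma / (1.0 - gamma)          # biased-coin probability of stepping up
    level, visits = start, np.zeros(len(p_tox), dtype=int)
    for _ in range(n):
        visits[level] += 1
        tox = rng.uniform() < p_tox[level]
        if tox:
            level = max(level - 1, 0)                      # always step down after a toxicity
        elif rng.uniform() < b:
            level = min(level + 1, len(p_tox) - 1)         # step up with probability b
        # otherwise stay at the current level
    return visits

p_tox = np.array([0.05, 0.15, 0.30, 0.50, 0.70])
visits = biased_coin_ud(p_tox, gamma=0.3)
print("visits per level:", visits, "-> mode near the level with toxicity ~ 0.3")
```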

15.
This paper analyzes the wage returns from internal migration for recent graduates in Italy. We employ a switching regression model that accounts for the endogeneity of the individual's choice to relocate to get a job after graduation: the omission of this selection decision can lead to biased estimates, as there is potential correlation between earnings and unobserved traits exerting an influence on the decision to migrate. The empirical results support the appropriateness of the estimation technique and show that there is a significant pay gap between migrants and non-migrants; migrants appear to be positively selected, and the migration premium is downward biased in OLS estimates. The endogeneity of migration shows up both as a negative intercept effect and as a positive slope effect, the second being larger than the first: poor knowledge of the local labor market and financial constraints lead migrants to accept a low basic wage but, owing to the relevant returns to their characteristics, they ultimately obtain a higher wage than non-migrants.

16.
The single transferable vote is a method of election that allows voters to mark candidates in order of preference. Votes that are not required to elect a candidate are passed to the next candidate in the voter's order of preference. Results of this kind of election give us data about the degree to which voters of a given persuasion are willing to pass their vote to a candidate of a different persuasion. Measures of voters' willingness to pass a vote to a candidate of a different persuasion are of particular interest in places such as Northern Ireland, where communities differ by religion and national aspiration, and agreed new political institutions are based on cross-community power-sharing. How we quantify these voting data may depend, of course, on the questions that we want to answer. But to understand changes in how voters order their preferences, one may need to ask several questions, and to quantify the results of the election in more than one way.
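
A minimal illustration (toy ballots and bloc labels, not real election data or the paper's measures) of one way to quantify voters' willingness to transfer across persuasions: tabulate, for ballots giving their first preference to a candidate of one bloc, which bloc receives the next preference.

```python
from collections import Counter

# hypothetical ranked ballots (first preference first) and candidate -> bloc labels
ballots = [
    ["A1", "A2", "B1"], ["A1", "B1"], ["A2", "A1"], ["A2"],
    ["B1", "B2", "A1"], ["B1", "A2"], ["B2", "B1"], ["B2", "A1", "A2"],
]
bloc = {"A1": "A", "A2": "A", "B1": "B", "B2": "B"}

def transfer_table(ballots, bloc):
    """Count next-preference destinations (by bloc) for each first-preference bloc;
    'exhausted' means the ballot expressed no further preference."""
    table = Counter()
    for b in ballots:
        src = bloc[b[0]]
        dst = bloc[b[1]] if len(b) > 1 else "exhausted"
        table[(src, dst)] += 1
    return table

for (src, dst), n in sorted(transfer_table(ballots, bloc).items()):
    print(f"first preference bloc {src} -> next preference {dst}: {n}")
```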

17.
This paper describes an innovative application of statistical process control (SPC) to the online remote control of the UK's gas transportation networks. The gas industry went through a number of changes in ownership, regulation, access to networks, organization and management culture in the 1990s. The application of SPC was motivated by these changes, along with the desire to apply the best industrial statistics theory to practical problems. The work was initiated by a studentship, with the technology gradually being transferred to the industry. The combined efforts of control engineers and statisticians helped develop a novel SPC system. Having set up the control limits, a system was devised to automatically update and publish the control charts on a daily basis. The charts and an associated discussion forum are available to both managers and control engineers throughout the country at their desktop PCs. The paper describes methods of involving people to design first-class systems to achieve continual process improvement. It describes how the traditional benefits of SPC can be realized in a 'distal team working' and 'soft systems' context of four Area Control Centres, controlling a system that delivers two thirds of the UK's energy needs.
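
A generic sketch of the kind of control chart involved, a Shewhart individuals chart with moving-range limits; the networks' actual charting rules and data are not given in the abstract, so the measurements below are simulated.

```python
import numpy as np

rng = np.random.default_rng(9)
# hypothetical daily control-error measurements from one Area Control Centre
x = rng.normal(100, 3, size=120)
x[100:] += 8                         # simulated shift in the process

# individuals chart: centre line at the mean, limits from the average moving range
mr = np.abs(np.diff(x))
centre = x.mean()
sigma_hat = mr.mean() / 1.128        # d2 constant for moving ranges of size 2
ucl, lcl = centre + 3 * sigma_hat, centre - 3 * sigma_hat

out_of_control = np.where((x > ucl) | (x < lcl))[0]
print(f"CL={centre:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}; "
      f"signals at observations: {out_of_control}")
```

In a production setting such limits would be computed from an in-control baseline period and the chart republished daily, as the paper describes.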

18.
Failure to adjust for informative non‐compliance, a common phenomenon in endpoint trials, can lead to a considerably underpowered study. However, standard methods for sample size calculation assume that non‐compliance is non‐informative. One existing method to account for informative non‐compliance, based on a two‐subpopulation model, is limited with respect to the degree of association between the risk of non‐compliance and the risk of a study endpoint that can be modelled, and with respect to the maximum allowable rates of non‐compliance and endpoints. In this paper, we introduce a new method that largely overcomes these limitations. This method is based on a model in which time to non‐compliance and time to endpoint are assumed to follow a bivariate exponential distribution. Parameters of the distribution are obtained by equating them with the study design parameters. The impact of informative non‐compliance is investigated across a wide range of conditions, and the method is illustrated by recalculating the sample size of a published clinical trial. Copyright © 2005 John Wiley & Sons, Ltd.
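
The paper's bivariate exponential model is not reproduced here, but the following hedged sketch conveys the general idea with a Marshall-Olkin bivariate exponential (my assumption, not necessarily the authors' choice): simulate correlated times to non-compliance and to the endpoint, treat the endpoint as unobserved after non-compliance, and see how the event probability actually observed changes with the strength of the association.

```python
import numpy as np

rng = np.random.default_rng(10)

def marshall_olkin(n, lam_nc, lam_ep, lam_shared):
    """Marshall-Olkin bivariate exponential: (T_noncompliance, T_endpoint) with a
    common-shock rate lam_shared inducing positive association."""
    z_nc = rng.exponential(1 / lam_nc, n)
    z_ep = rng.exponential(1 / lam_ep, n)
    z_sh = rng.exponential(1 / lam_shared, n)
    return np.minimum(z_nc, z_sh), np.minimum(z_ep, z_sh)

def observed_event_rate(lam_nc, lam_ep, lam_shared, follow_up=2.0, n=200_000):
    """Probability of observing the endpoint within follow-up when subjects are
    effectively censored at the time of non-compliance (illustrative convention)."""
    t_nc, t_ep = marshall_olkin(n, lam_nc, lam_ep, lam_shared)
    return np.mean((t_ep <= follow_up) & (t_ep <= t_nc))

# informative non-compliance changes the event rate actually observed, which in turn
# changes the number of subjects an event-driven sample-size calculation requires
for lam_shared in (1e-6, 0.05, 0.15):
    print(f"shared rate {lam_shared:>6}: observed event rate "
          f"{observed_event_rate(0.10, 0.08, lam_shared):.3f}")
```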

19.
Extreme Value Theory (EVT) aims to study the tails of probability distributions in order to measure and quantify extreme events of maxima and minima. In river flow data, an extreme level of a river may be related to the level of a neighboring river that flows into it. In this type of data, it is very common for the flooding of a location to have been caused by a very large flow from a tributary that is tens or hundreds of kilometers from this location. In this sense, an interesting approach is to consider a conditional model when estimating a multivariate model. Inspired by this idea, we propose a Bayesian model to describe the dependence of exceedances between rivers, in which we consider a conditionally independent structure. In this model, the dependence between rivers is captured by modeling the marginal excess of one river as a linear function of the excesses of the other rivers. The results showed a strong, positive connection between the excesses in one river and the excesses of the rivers flowing into it.
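
A rough frequentist analogue of the conditional structure described (synthetic river flows, a fixed empirical threshold and a plain least-squares fit; the actual model is Bayesian and works with the distribution of the excesses rather than a point fit):

```python
import numpy as np

rng = np.random.default_rng(11)

# synthetic daily flows: river C is fed by rivers A and B, plus local rainfall
n = 3000
a = rng.gamma(2.0, 50.0, n)
b = rng.gamma(2.0, 40.0, n)
c = 0.6 * a + 0.8 * b + rng.gamma(2.0, 20.0, n)

def excess(x, q=0.95):
    """Exceedances of a high empirical quantile (0 below the threshold)."""
    u = np.quantile(x, q)
    return np.maximum(x - u, 0.0), u

ea, _ = excess(a)
eb, _ = excess(b)
ec, u_c = excess(c)

# condition on days when the downstream river C exceeds its threshold and model its
# excess as a linear function of the upstream excesses (least-squares stand-in)
mask = ec > 0
X = np.column_stack([np.ones(mask.sum()), ea[mask], eb[mask]])
beta, *_ = np.linalg.lstsq(X, ec[mask], rcond=None)
print("intercept and coefficients on the upstream excesses:", beta.round(3))
```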

20.
In the causal analysis of survival data, a time-based response is related to a set of explanatory variables. Defining the relation between the time and the covariates may become a difficult task, particularly in the preliminary stage, when the information is limited. Through a nonparametric approach, we propose to estimate the survival function in a way that allows us to evaluate the relative importance of each potential explanatory variable, in a simple and explanatory fashion. To achieve this aim, each of the explanatory variables is used to partition the observed survival times. The observations are assumed to be partially exchangeable according to such a partition. We then consider, conditionally on each partition, a hierarchical nonparametric Bayesian model on the hazard functions. We define and compare different prior distributions for the hazard functions.

