Similar Articles
A total of 20 similar articles were found.
1.
In 2010, the Statisticians in the Pharmaceutical Industry (PSI) Toxicology Special Interest Group met to discuss the design and analysis of the Comet assay. The Comet assay is one potential component of the package of safety studies required by regulatory bodies. As these studies usually involve a three-way nested experimental design and as the distribution of the measured response is usually either lognormal or lognormal plus a point mass at zero, the analysis is not straightforward. This has led to many different types of analysis being proposed in the literature, with several different methods applied within the pharmaceutical industry itself. This article summarises the PSI Toxicology Group's discussions and recommendations around these issues.
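The abstract does not name a single recommended analysis, but one approach consistent with a three-way nested design and a lognormal response is to log-transform the measured tail statistic and fit a nested mixed-effects model. A minimal sketch using statsmodels, with hypothetical column names (`dose`, `animal`, `slide`, `tail_intensity`); the point-mass-at-zero case is not handled here:

```python
# Sketch: nested mixed model for a lognormal Comet-assay response.
# Assumes one row per cell and hypothetical columns:
#   dose (treatment group), animal (nested in dose), slide (nested in animal),
#   tail_intensity (strictly positive response).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_comet_mixed_model(df: pd.DataFrame):
    df = df.copy()
    df["log_tail"] = np.log(df["tail_intensity"])       # lognormal -> approximately normal on log scale
    model = smf.mixedlm(
        "log_tail ~ C(dose)",                            # fixed effect of dose group
        data=df,
        groups=df["animal"],                              # random intercept per animal
        vc_formula={"slide": "0 + C(slide)"},             # variance component for slides within animal
    )
    return model.fit()

# result = fit_comet_mixed_model(comet_df)
# print(result.summary())
```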

2.
On the Role of Government in the Development of Producer Services
Li Hongmei, 《统计研究》 (Statistical Research), 2002, 19(8): 63-66
I. The Meaning and Characteristics of Producer Services. The service sector is a broad group of industries, comprising all industries other than the primary and secondary sectors; it covers not only final products that serve consumers but also intermediate inputs that serve producers. Services can be classified in many ways; the classification most widely accepted in academia is Browning's, based on the basic attributes of services: first, distributive services, such as wholesale and retail trade, transportation and storage; second, consumer services, such as entertainment and recreation, and hotels; third, producer services, such as finance, insurance, real estate and business management services; and fourth, government and public services, such as government agencies. Producer services are service industries that directly or indirectly provide intermediate services to the production process; they involve information gathering…

3.
Tests for trend in tumour response rates with increasing dose in long-term laboratory studies of carcinogenicity that take into account historical control information are discussed. The theoretical basis for these tests is described, and their small-sample properties evaluated using computer simulation. The performance of these tests is also evaluated using data from carcinogenicity experiments conducted under the U.S. National Toxicology Program. Based on these results, recommendations are made as to the most appropriate tests in practice. When the assumptions underlying these tests are satisfied, the use of historical control information is shown to result in an increase in power relative to the classical Cochran-Armitage test that is widely used without historical controls.
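For context, the classical Cochran-Armitage trend test mentioned as the comparator can be computed directly from per-group tumour counts. A minimal sketch with hypothetical inputs:

```python
# Sketch: classical Cochran-Armitage test for an increasing trend in tumour
# incidence across dose groups (no historical-control adjustment).
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(tumours, animals, dose_scores):
    """One-sided test for an increasing trend in tumour rates with dose.

    tumours     : number of tumour-bearing animals per group
    animals     : number of animals per group
    dose_scores : numeric dose scores (e.g. 0, 1, 2, 3)
    """
    r = np.asarray(tumours, dtype=float)
    n = np.asarray(animals, dtype=float)
    d = np.asarray(dose_scores, dtype=float)
    N = n.sum()
    p_bar = r.sum() / N
    t_stat = np.sum(d * (r - n * p_bar))
    var_t = p_bar * (1 - p_bar) * (np.sum(n * d**2) - np.sum(n * d)**2 / N)
    z = t_stat / np.sqrt(var_t)
    return z, norm.sf(z)   # statistic and one-sided p-value

# z, p = cochran_armitage_trend([1, 3, 6, 10], [50, 50, 50, 50], [0, 1, 2, 3])
```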

4.
5.
The Statisticians in the Pharmaceutical Industry Toxicology Special Interest Group has collated and compared statistical analysis methods for a number of toxicology study types including general toxicology, genetic toxicology, safety pharmacology and carcinogenicity. In this paper, we present the study design, experimental units and analysis methods.

6.
A common population characteristic of interest in animal ecology studies pertains to the selection of resources. That is, given the resources available to animals, what do they ultimately choose to use? A variety of statistical approaches have been employed to examine this question and each has advantages and disadvantages with respect to the form of available data and the properties of estimators given model assumptions. A wealth of high resolution telemetry data are now being collected to study animal population movement and space use and these data present both challenges and opportunities for statistical inference. We summarize traditional methods for resource selection and then describe several extensions to deal with measurement uncertainty and an explicit movement process that exists in studies involving high-resolution telemetry data. Our approach uses a correlated random walk movement model to obtain temporally varying use and availability distributions that are employed in a weighted distribution context to estimate selection coefficients. The temporally varying coefficients are then weighted by their contribution to selection and combined to provide inference at the population level. The result is an intuitive and accessible statistical procedure that uses readily available software and is computationally feasible for large datasets. These methods are demonstrated using data collected as part of a large-scale mountain lion monitoring study in Colorado, USA.
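The movement-based weighted-distribution estimator summarised above is beyond a short snippet, but the "traditional methods" it extends are typically use-availability resource selection functions fit by logistic regression. A minimal, hypothetical sketch of that baseline (column names are invented):

```python
# Sketch: traditional use-availability resource selection function (RSF)
# fit by logistic regression; used = 1 for observed locations, 0 for available points.
# Covariate names (elevation, forest_cover) are hypothetical.
import statsmodels.formula.api as smf

def fit_rsf(df):
    # Exponentiated coefficients are read as relative selection strengths.
    model = smf.logit("used ~ elevation + forest_cover", data=df)
    return model.fit()

# result = fit_rsf(locations_df)
# print(result.params)   # selection coefficients on the log scale
```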

7.
Information derived from interim sacrifices or on cause of death is routinely used in the statistical analyses of carcinogenicity experiments involving occult tumours. The authors describe a simple semiparametric model which does not require this information. Natural deaths during the experiment and the usual terminal sacrifice provide sufficient information to ensure that the tumour incidence rates, which are of primary interest in occult‐tumour studies, can be estimated nonparametrically. The advantages of this semiparametric approach to the analysis of survival/sacrifice experiments are illustrated using data from a study on benzyl acetate conducted under the U. S. National Toxicology Program. The results derived compare favourably with those obtained using a previously published approach to the analysis of tumorigenicity data.

8.
In drug development, we ask ourselves which population, endpoint and treatment comparison should be investigated. In this context, we also debate what matters most to the different stakeholders that are involved in clinical drug development, for example, patients, physicians, regulators and payers. With the publication of the draft ICH E9 addendum on estimands in 2017, we now have a common framework and language to discuss such questions in an informed and transparent way. This has led to the estimand discussion being a key element in study development, including design, analysis and interpretation of a treatment effect. At an invited session at the 2018 PSI annual conference, PSI hosted a role‐play debate where the aim of the session was to mimic a regulatory and payer scientific advice discussion for a COPD drug, including role‐play views from an industry sponsor, a patient, a regulator and a payer. This paper presents the invented COPD case‐study design, and considerations relating to appropriate estimands are discussed by each of the stakeholders from their differing viewpoints, with the additional inclusion of a technical (academic) perspective. The rationale for each perspective on approaches for handling intercurrent events is presented, with a key emphasis on the application of while‐on‐treatment and treatment policy estimands in this context. It is increasingly recognised that the treatment effect estimated by the treatment policy approach may not always be of primary clinical interest and may not appropriately communicate to patients the efficacy they can expect if they take the treatment as directed.

9.
Many lifetime distribution models have successfully served as population models for risk analysis and reliability mechanisms. The Kumaraswamy distribution is one such distribution; it is particularly useful for natural phenomena whose outcomes have lower and upper bounds, and for bounded outcomes in biomedical and epidemiological research. This article studies point estimation and interval estimation for the Kumaraswamy distribution. The inverse estimators (IEs) for the parameters of the Kumaraswamy distribution are derived. Numerical comparisons with maximum likelihood estimation and bias-corrected methods clearly indicate the proposed IEs are promising. Confidence intervals for the parameters and reliability characteristics of interest are constructed using pivotal or generalized pivotal quantities. Then, the results are extended to the stress–strength model involving two Kumaraswamy populations with different parameter values. Construction of confidence intervals for the stress–strength reliability is derived. Extensive simulations are used to demonstrate the performance of confidence intervals constructed using generalized pivotal quantities.
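The inverse estimators proposed in the article are not reproduced here, but the maximum likelihood comparator for the two-parameter Kumaraswamy density f(x; a, b) = a b x^{a-1} (1 - x^a)^{b-1} on (0, 1) can be sketched directly:

```python
# Sketch: maximum likelihood fit of the Kumaraswamy(a, b) distribution,
# shown only as the comparator to the article's inverse estimators.
import numpy as np
from scipy.optimize import minimize

def kumaraswamy_negloglik(params, x):
    a, b = params
    return -(len(x) * (np.log(a) + np.log(b))
             + (a - 1) * np.sum(np.log(x))
             + (b - 1) * np.sum(np.log1p(-x**a)))

def fit_kumaraswamy(x, start=(1.0, 1.0)):
    """x: sample of values strictly inside (0, 1)."""
    res = minimize(kumaraswamy_negloglik, start, args=(np.asarray(x, dtype=float),),
                   method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
    return res.x  # (a_hat, b_hat)

# a_hat, b_hat = fit_kumaraswamy(sample)
```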

10.
In many dose-response studies, each of several independent groups of animals is treated with a different dose of a substance. Many response variables are then measured on each animal. The distributions of the response variables may be nonnormal, and Jonckheere's (1954) test for ordered alternatives in the one-way layout is sometimes used to test whether the level of a single variable increases with increasing dose. In some applications, however, it is important to consider a set of response variables simultaneously. For instance, an increase in each of certain enzymes in the blood serum may suggest liver damage. To test whether these enzyme levels increase with increasing dose, it may be preferable to consider these enzymes as a group, rather than individually.

I propose two multivariate generalizations of Jonckheere's univariate test. Each multivariate test statistic is a function of coordinate-wise Jonckheere statistics—one a sum, the other a quadratic form. The sum statistic can be used to test the alternative hypothesis that each variable is stochastically increasing with increasing dose. The quadratic form statistic is designed for the more general alternative hypothesis that each variable is stochastically ordered with increasing dose.

For each of these two alternatives, I also propose a multivariate generalization of a normal theory test described by Puri (1965). I examine the asymptotic distributions of the four test statistics under the null hypothesis and under translation alternatives and compare each distribution-free test to the corresponding normal theory test in terms of asymptotic relative efficiency.

The multivariate Jonckheere tests are illustrated using dose-response data from a subchronic toxicology study carried out by the National Toxicology Program. Four groups of ten male rats each were treated with increasing doses of vinylidene fluoride, and the serum enzymes SDH, SGOT, and SGPT were measured. A comparison of univariate Jonckheere tests on each variable, bivariate tests on SDH and SGOT, and multivariate tests on all three variables gives insight into the behavior of the various procedures.
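A minimal sketch of the coordinate-wise Jonckheere statistic and the "sum" combination described above; the standardisation uses the no-ties null moments, so it is only an approximation when ties are present:

```python
# Sketch: coordinate-wise Jonckheere-Terpstra statistics and their sum,
# illustrating the "sum" multivariate combination described above.
import numpy as np

def jonckheere_statistic(groups):
    """groups: list of 1-D arrays for one variable, ordered by increasing dose."""
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                jt += np.sum(groups[j] > x) + 0.5 * np.sum(groups[j] == x)
    return jt

def standardised_jonckheere(groups):
    """Standardise JT using its null mean/variance (no-ties formulas)."""
    n = np.array([len(g) for g in groups], dtype=float)
    N = n.sum()
    mean = (N**2 - np.sum(n**2)) / 4.0
    var = (N**2 * (2 * N + 3) - np.sum(n**2 * (2 * n + 3))) / 72.0
    return (jonckheere_statistic(groups) - mean) / np.sqrt(var)

def sum_statistic(variables):
    """variables: list over response variables; each entry is a list of dose-group arrays."""
    return sum(standardised_jonckheere(groups) for groups in variables)
```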

11.
The use of surrogate end points has become increasingly common in medical and biological research. This is primarily because, in many studies, the primary end point of interest is too expensive or too difficult to obtain. There is now a large volume of statistical methods for analysing studies with surrogate end point data. However, to our knowledge, there has not been a comprehensive review of these methods to date. This paper reviews some existing methods and summarizes the strengths and weaknesses of each method. It also discusses the assumptions that are made by each method and assesses how likely these assumptions are to be met in practice.

12.
The maximum likelihood approach to the estimation of factor analytic model parameters most commonly deals with outcomes that are assumed to be multivariate Gaussian random variables in a homogeneous input space. In many practical settings, however, studies requiring factor analytic modeling involve data that not only are non-Gaussian but also come from a partitioned input space. This article introduces an extension of maximum likelihood factor analysis that handles multivariate outcomes made up of attributes with different probability distributions and originating from a partitioned input space. An EM algorithm combined with Fisher scoring is used to estimate the parameters of the derived model.
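As a point of reference, the standard Gaussian maximum likelihood factor analysis that the article's model extends can be fit in a few lines; the partitioned-input, mixed-distribution extension itself is not shown:

```python
# Sketch: ordinary Gaussian maximum likelihood factor analysis,
# i.e. the homogeneous-input baseline that the article's model generalises.
import numpy as np
from sklearn.decomposition import FactorAnalysis

def fit_gaussian_factor_model(X: np.ndarray, n_factors: int = 2):
    fa = FactorAnalysis(n_components=n_factors)
    fa.fit(X)
    loadings = fa.components_.T        # p x k factor loading matrix
    uniquenesses = fa.noise_variance_  # p-vector of specific variances
    return loadings, uniquenesses
```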

13.
In many medical studies, there are covariates that change their values over time, and their effects are most often modeled using the Cox regression model. However, many of these time-dependent covariates can be expressed as an intermediate event, which can be modeled using a multi-state model. Using the relationship between time-dependent (discrete) covariates and multi-state models, we compare (via simulation studies) the Cox model with time-dependent covariates with the most frequently used multi-state regression models. This article also details the procedures for generating survival data arising from all approaches, including the Cox model with time-dependent covariates.
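A minimal sketch of the kind of setup discussed: subjects acquire a binary intermediate event at a random time, the hazard changes after that event, and the resulting long-format data are analysed with a time-varying Cox model. The rates, effect size and use of the lifelines package are illustrative assumptions, not the article's simulation design:

```python
# Sketch: simulate survival data with a time-dependent binary covariate
# (an intermediate event) and fit a Cox model with time-varying covariates.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(1)

def simulate(n=500, h0=0.05, rate_event=0.08, beta=0.7):
    rows = []
    for i in range(n):
        t_event = rng.exponential(1 / rate_event)      # time of intermediate event
        t_death0 = rng.exponential(1 / h0)             # death time under the baseline hazard
        if t_death0 < t_event:
            rows.append((i, 0.0, t_death0, 0, 1))      # dies before the intermediate event
        else:
            extra = rng.exponential(1 / (h0 * np.exp(beta)))       # memoryless residual time after the event
            rows.append((i, 0.0, t_event, 0, 0))                    # pre-event interval, covariate = 0
            rows.append((i, t_event, t_event + extra, 1, 1))        # post-event interval, covariate = 1
    return pd.DataFrame(rows, columns=["id", "start", "stop", "covariate", "event"])

df = simulate()
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
# ctv.print_summary()   # estimated log-hazard ratio for the time-dependent covariate
```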

14.
Foxhound training enclosures are facilities where wild-trapped foxes are placed into large fenced areas for dog training purposes. Although the purpose of these facilities is to train dogs without harming foxes, dog-related mortality has been reported to be an issue in some enclosures. Using data from a fox enclosure in Virginia, we investigate factors that influence fox survival in these dog training facilities and propose a set of policies to improve fox survival. In particular, a Bayesian hierarchical model is formulated to compute fox survival probabilities based on a fox's time in the enclosure and the number of dogs allowed in the enclosure at one time. These calculations are complicated by missing information on the number of dogs in the enclosure for many days during the study. We elicit expert knowledge for a prior on the number of dogs to account for the uncertainty in the missing data. Reversible jump Markov Chain Monte Carlo is used for model selection in the presence of missing covariates. We then use our model to examine possible changes to foxhound training enclosure policy and what effect those changes may have on fox survival.

15.
The effectiveness and safety of implantable medical devices are critical public health concerns. We consider analysis of data in which it is of interest to compare devices but some individuals may be implanted with two or more devices. Our motivating example is based on orthopedic devices, where the same individual can be implanted with as many as two devices for the same joint but on different sides of the body, referred to as bilateral cases. Different methods of analysis are considered in a simulation study and real data example, including both marginal and conditional survival models, fitting single and separate models for bilateral and non-bilateral cases, and combining estimates from these two models. The results of simulations suggest that in the context of orthopedic devices, where implant failures are rare, models fit on both bilateral and non-bilateral cases simultaneously could be quite misleading, and that combined estimates from fitting two separate models performed better under homogeneity. A real data example illustrates the issues surrounding analysis of orthopedic device data with bilateral cases. Our findings suggest that research studies of orthopedic devices should at minimum consider fitting separate models to bilateral and non-bilateral cases.
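One common form of the marginal approach compared in settings like this is a Cox model with robust standard errors clustered on the patient, so that two implants in the same individual are not treated as independent. A minimal sketch with hypothetical column names (not the authors' exact models):

```python
# Sketch: marginal Cox model for device survival with cluster-robust
# standard errors, clustering on patient to allow for bilateral implants.
# Column names (time_to_failure, failed, device_type, patient_id) are hypothetical.
from lifelines import CoxPHFitter

def fit_marginal_device_model(df):
    cph = CoxPHFitter()
    cph.fit(
        df,
        duration_col="time_to_failure",
        event_col="failed",
        cluster_col="patient_id",   # sandwich SEs account for within-patient correlation
        formula="device_type",
    )
    return cph
```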

16.
In recent years the analysis of interval-censored failure time data has attracted a great deal of attention, and such data arise in many fields including demographic studies, economic and financial studies, epidemiological studies, social sciences, and tumorigenicity experiments. This is especially the case in medical studies such as clinical trials. In this article, we discuss regression analysis of one type of such data, Case I interval-censored data, in the presence of left-truncation. For the problem, the additive hazards model is employed and the maximum likelihood method is applied for estimation of the unknown parameters. In particular, we adopt the sieve estimation approach that approximates the baseline cumulative hazard function by linear functions. The resulting estimates of the regression parameters are shown to be consistent and efficient and to have an asymptotic normal distribution. An illustrative example is provided.
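For reference, the additive hazards model specifies the conditional hazard as a baseline plus a linear covariate effect, and a sieve approach of the kind described replaces the infinite-dimensional baseline cumulative hazard by a piecewise-linear function on a fixed partition. The following is a schematic sketch; the knots and basis are illustrative, not the article's exact construction:

```latex
% Additive hazards model and a piecewise-linear sieve for the baseline
% cumulative hazard (schematic).
\[
  \lambda(t \mid Z) \;=\; \lambda_0(t) + \beta^{\top} Z,
  \qquad
  \Lambda_0(t) \;=\; \int_0^t \lambda_0(s)\, ds ,
\]
\[
  \Lambda_0(t) \;\approx\; \sum_{k=1}^{m} \alpha_k B_k(t),
  \qquad 0 = t_0 < t_1 < \dots < t_m ,
\]
where the $B_k$ are linear spline basis functions on the knots $t_k$ and the
coefficients $(\alpha_1,\dots,\alpha_m,\beta)$ are estimated jointly by
maximum likelihood.
```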

17.
Subgroup by treatment interaction assessments are routinely performed when analysing clinical trials and are particularly important for phase 3 trials where the results may affect regulatory labelling. Interpretation of such interactions is particularly difficult, as on one hand the subgroup finding can be due to chance, but equally such analyses are known to have a low chance of detecting differential treatment effects across subgroup levels, so may overlook important differences in therapeutic efficacy. EMA have therefore issued draft guidance on the use of subgroup analyses in this setting. Although this guidance provided clear proposals on the importance of pre‐specification of likely subgroup effects and how to use this when interpreting trial results, it is less clear which analysis methods would be reasonable, and how to interpret apparent subgroup effects in terms of whether further evaluation or action is necessary. A PSI/EFSPI Working Group has therefore been investigating a focused set of analysis approaches to assess treatment effect heterogeneity across subgroups in confirmatory clinical trials that take account of the number of subgroups explored and also investigating the ability of each method to detect such subgroup heterogeneity. This evaluation has shown that the plotting of standardised effects, bias‐adjusted bootstrapping method and SIDES method all perform more favourably than traditional approaches such as investigating all subgroup‐by‐treatment interactions individually or applying a global test of interaction. Therefore, these approaches should be considered to aid interpretation and provide context for observed results from subgroup analyses conducted for phase 3 clinical trials.
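As a simplified illustration of the "plotting of standardised effects" idea, and not the Working Group's exact procedure, per-subgroup treatment effects can be standardised and compared on a common scale. Column names are hypothetical and the treatment indicator is assumed to be coded 0/1:

```python
# Sketch: standardised subgroup treatment effects for a continuous endpoint.
# For each subgroup level, fit outcome ~ treatment and report estimate / SE.
# Simplified illustration only; not the PSI/EFSPI Working Group's method.
import pandas as pd
import statsmodels.formula.api as smf

def standardised_subgroup_effects(df: pd.DataFrame, subgroup: str):
    rows = []
    for level, sub in df.groupby(subgroup):
        fit = smf.ols("outcome ~ treatment", data=sub).fit()   # treatment coded 0/1
        est = fit.params["treatment"]
        se = fit.bse["treatment"]
        rows.append({"level": level, "estimate": est, "z": est / se, "n": len(sub)})
    return pd.DataFrame(rows)

# effects = standardised_subgroup_effects(trial_df, "region")
# effects.plot.barh(x="level", y="z")   # visual check for heterogeneity
```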

18.
In many epidemiological studies, disease occurrences and their rates are naturally modelled by counting processes and their intensities, allowing an analysis based on martingale methods. Applied to the Mantel–Haenszel estimator, these methods lend themselves to the analysis of general control selection sampling designs and the accommodation of time-varying exposures.
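For reference, the Mantel–Haenszel common odds-ratio estimator to which these martingale arguments are applied can be computed directly from stratified 2×2 tables:

```python
# Sketch: Mantel-Haenszel common odds-ratio estimator over strata,
# each stratum being a 2x2 table [[a, b], [c, d]]
# (a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls).
import numpy as np

def mantel_haenszel_or(tables):
    num, den = 0.0, 0.0
    for table in tables:
        (a, b), (c, d) = np.asarray(table, dtype=float)
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# or_mh = mantel_haenszel_or([[[10, 90], [5, 95]], [[20, 80], [12, 88]]])
```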

19.
The International Conference on Harmonisation guideline ‘Statistical Principles for Clinical Trials’ was adopted by the Committee for Proprietary Medicinal Products (CPMP) in March 1998, and consequently is operational in Europe. Since then more detailed guidance on selected topics has been issued by the CPMP in the form of ‘Points to Consider’ documents. The intent of these was to give guidance particularly to non‐statistical reviewers within regulatory authorities, although of course they also provide a good source of information for pharmaceutical industry statisticians. In addition, the Food and Drug Administration has recently issued a draft guideline on data monitoring committees. In November 2002 a one‐day discussion forum was held in London by Statisticians in the Pharmaceutical Industry (PSI). The aim of the meeting was to discuss how statisticians were responding to some of the issues covered in these new guidelines, and to document consensus views where they existed. The forum was attended by industry, academic and regulatory statisticians. This paper outlines the questions raised, resulting discussions and consensus views reached. It is clear from the guidelines and discussions at the workshop that the statistical analysis strategy must be planned during the design phase of a clinical trial and carefully documented. Once the study is complete the analysis strategy should be thoughtfully executed and the findings reported. Copyright © 2003 John Wiley & Sons, Ltd.

20.
Many studies have been made of the performance of standard algorithms used to estimate the parameters of a mixture density, where data arise from two or more underlying populations. While these studies examine uncensored data, many mixture processes are right-censored. Therefore, this paper addresses the accuracy and efficiency of standard and hybrid algorithms under different degrees of right-censoring. While a common belief is that the EM algorithm is slow and inaccurate, we find that the EM generally exhibits excellent efficiency and accuracy. While extreme right censoring causes the EM to frequently fail to converge, a hybrid-EM algorithm is found to be superior at all levels of right-censoring.
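A minimal sketch of the standard (uncensored) EM algorithm for a two-component Gaussian mixture, i.e. the baseline algorithm whose behaviour under right-censoring the paper evaluates; the modified E-step needed for censored observations is not shown:

```python
# Sketch: standard EM for a two-component Gaussian mixture on uncensored data.
# Handling right-censored observations requires a modified E-step not shown here.
import numpy as np
from scipy.stats import norm

def em_two_gaussians(x, n_iter=200):
    x = np.asarray(x, dtype=float)
    # crude initialisation from the sample quartiles
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = pi * norm.pdf(x[:, None], mu, sigma)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of the mixture parameters
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma
```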
