Similar Documents
20 similar documents found (search took 15 ms)
1.
Lifetime Data Analysis - A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted...

2.
The study proposes a Shewhart-type control chart, namely an MD chart, based on average absolute deviations taken from the median, for monitoring changes (especially moderate and large changes – a major concern of Shewhart control charts) in process dispersion, assuming normality of the quality characteristic to be monitored. The design structure of the proposed MD chart is developed and compared with those of two well-known dispersion control charts, namely the R and S charts. Using power curves as a performance measure, it is observed that the design structure of the proposed MD chart is more powerful than that of the R chart and a very close competitor to that of the S chart, in terms of discriminatory power for detecting shifts in the process dispersion. The effect of non-normality on the design structures of the three charts is also examined, and the design structure of the proposed MD chart is found to be the least affected by departures from normality.
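A minimal sketch of the MD statistic with simulation-based probability limits, assuming an in-control normal process; the limits below are Monte Carlo stand-ins, not the chart constants derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def md_statistic(x):
    """Average absolute deviation of a subgroup from its median."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x - np.median(x)))

def md_limits(n, sigma=1.0, alpha=0.0027, reps=100_000):
    """Probability limits for the MD chart via Monte Carlo under an
    in-control N(0, sigma) process (stand-in for tabulated constants)."""
    sims = np.array([md_statistic(rng.normal(0.0, sigma, n)) for _ in range(reps)])
    lcl, ucl = np.quantile(sims, [alpha / 2, 1 - alpha / 2])
    return lcl, sims.mean(), ucl

lcl, center, ucl = md_limits(n=5)
print(f"n=5: LCL={lcl:.3f}, CL={center:.3f}, UCL={ucl:.3f}")
```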

3.
In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have fewer than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks.
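As a rough illustration of the idea (not the paper's closed-form expression), the sketch below uses Monte Carlo to estimate the expected time until two consecutive sub-threshold measurements for one subject; the trajectory parameters and visit schedule are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed subject-specific trajectory (fixed + random effects already
# combined) and residual SD -- all values invented for illustration.
intercept, slope, resid_sd = 420.0, -35.0, 40.0   # cells/uL, per year
threshold = 200.0
visits = np.arange(0.0, 15.0, 0.5)                # semiannual visits, years

def expected_time_two_below(reps=50_000):
    """Monte Carlo mean time of the second of two consecutive
    measurements below the threshold (over replicates that reach it)."""
    mean_traj = intercept + slope * visits
    times = []
    for _ in range(reps):
        y = mean_traj + rng.normal(0.0, resid_sd, visits.size)
        below = y < threshold
        runs = np.flatnonzero(below[:-1] & below[1:])
        if runs.size:
            times.append(visits[runs[0] + 1])
    return np.mean(times)

print(f"Expected time to threshold: {expected_time_two_below():.2f} years")
```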

4.
Virtually all models for the utility of personnel selection are based on the average criterion score of the predictor-selected applicants. This paper indicates how standard results from the theory of order statistics can be used to determine the expected value, the standard error and the sampling distribution of the average criterion score statistic when a finite number of employees is selected. Exact as well as approximate results are derived, and it is shown how these results can be used to construct intervals that will contain, with a given probability 1 - f, the average criterion score associated with a particular implementation of the personnel selection. These interval estimates are particularly helpful to the selection practitioner because they can be used to state the confidence level with which the selection payoff will be above a specific value. In addition, for most realistic selection scenarios, the corresponding utility interval estimate is found to be quite large. For situations in which multiple selections are performed over time, however, the utility intervals are smaller.
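The order-statistics machinery can be checked by simulation. A sketch assuming a bivariate normal predictor and criterion with validity rho (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def average_criterion(N, k, rho, reps=100_000, coverage=0.90):
    """Simulate the average standardized criterion score of the k (of N)
    applicants with the highest predictor scores, assuming bivariate
    normality with validity rho; returns the mean, the sampling SD, and
    an interval containing the statistic with the given probability."""
    z = rng.standard_normal((reps, N))                       # predictor
    y = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal((reps, N))
    top = np.argsort(z, axis=1)[:, -k:]                      # selected applicants
    avg = np.take_along_axis(y, top, axis=1).mean(axis=1)
    lo, hi = np.quantile(avg, [(1 - coverage) / 2, 1 - (1 - coverage) / 2])
    return avg.mean(), avg.std(ddof=1), (lo, hi)

mean, sd, (lo, hi) = average_criterion(N=100, k=10, rho=0.5)
print(f"E[avg]={mean:.3f}, SE={sd:.3f}, 90% interval=({lo:.3f}, {hi:.3f})")
```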

5.
According to the latest proposals by the Basel Committee, banks are allowed to use statistical approaches to compute the capital charge covering financial risks such as credit risk, market risk and operational risk.

It is widely recognized that internal loss data alone do not suffice to provide accurate capital charge in financial risk management, especially for high-severity and low-frequency events. Financial institutions typically use external loss data to augment the available evidence and, therefore, provide more accurate risk estimates. Rigorous statistical treatments are required to make internal and external data comparable and to ensure that merging the two databases leads to unbiased estimates.

The goal of this paper is to propose a correct statistical treatment to make the external and internal data comparable and, therefore, mergeable. Such a methodology augments internal losses with relevant, rather than redundant, external loss data.


6.
The feasibility of a new clinical trial may be increased by incorporating historical data of previous trials. In the particular case where only data from a single historical trial are available, there exists no clear recommendation in the literature regarding the most favorable approach. A main problem of incorporating historical data is the possible inflation of the type I error rate. A way to control this type of error is the so-called power prior approach. This Bayesian method does not "borrow" the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two-armed trials with binary outcome, while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditionally on these characteristics, an increase in power as compared to a trial without borrowing may result. Similarly, we propose methods for reducing the required sample size. The results are discussed and compared to those obtained in a Bayesian framework. Application is illustrated by a clinical trial example.
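The borrowing mechanism is easiest to see in the conjugate Bayesian form that the frequentist framework builds on. A sketch with a Beta-binomial power prior; all counts and the value of δ are invented:

```python
import numpy as np
from scipy import stats

# Power prior for a binomial response rate: the historical likelihood is
# raised to the power delta, so with a Beta(a, b) initial prior the
# posterior remains Beta. Numbers below are illustrative only.
a0, b0 = 1.0, 1.0            # initial Beta prior
x_hist, n_hist = 24, 60      # historical trial: responders / patients
x_new, n_new = 30, 60        # current trial
delta = 0.4                  # borrowing weight: 0 = ignore, 1 = pool

post = stats.beta(a0 + delta * x_hist + x_new,
                  b0 + delta * (n_hist - x_hist) + (n_new - x_new))
print(f"Posterior mean: {post.mean():.3f}")
print(f"95% credible interval: {post.ppf([0.025, 0.975]).round(3)}")
```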

7.
One of the most widely used criteria for evaluating space-filling designs in computer experiments is the minimal distance between pairs of points. The focus of this paper is to propose a normalized quality index based on the distribution of the minimal distance when points are drawn independently from the uniform distribution over the unit hypercube. Expressions for this index are given explicitly in terms of polynomials under any \(L_p\) distance. When the size of the design or the dimension of the space is large, approximations relying on extreme value theory are derived. Some illustrations of our index are presented on simulated data and on a real problem.
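A sketch of the underlying idea: compute a design's minimal pairwise \(L_p\) distance and locate it within a simulated reference distribution of minimal distances from i.i.d. uniform points (the paper derives closed-form polynomial expressions rather than simulating):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

def min_distance(X, p=2):
    """Smallest pairwise L_p distance among the design points."""
    return pdist(X, metric="minkowski", p=p).min()

def empirical_index(design, p=2, reps=2000):
    """Empirical quantile of the design's minimal distance in the
    distribution of minimal distances of n uniform points in [0,1]^d."""
    n, d = design.shape
    sims = np.array([min_distance(rng.random((n, d)), p) for _ in range(reps)])
    return np.mean(sims <= min_distance(design, p))

design = rng.random((50, 3))     # a (poor) random design, for illustration
print(f"Empirical index: {empirical_index(design):.3f}")
```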

8.
In this paper, we outline a class of fully parametric proportional hazards models in which the baseline hazard is assumed to be a power transform of the time scale, corresponding to assuming that survival times follow a Weibull distribution. Such a class of models allows for the possibility of time-varying hazard rates but assumes a constant hazard ratio. We outline how Bayesian inference proceeds for this class of models using asymptotic approximations which require only the ability to maximize the joint log posterior density. We apply these models to a clinical trial to assess the efficacy of neutron therapy compared to conventional treatment for patients with tumors of the pelvic region. In this trial there was prior information about the log hazard ratio, both in terms of elicited clinical beliefs and the results of previous studies. Finally, we consider a number of extensions to this class of models, in particular the use of alternative baseline functions and the extension to multi-state data.
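A minimal sketch of this style of inference, assuming the parameterization h(t|z) = κλt^(κ-1)exp(βz) and an invented normal prior on the log hazard ratio; the posterior mode and a Laplace-style posterior SD come from a single optimization, as the abstract describes:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def neg_log_post(theta, t, d, z, prior_mean=0.0, prior_sd=0.5):
    """Negative joint log posterior for a Weibull PH model; the normal
    prior on the log hazard ratio is a stand-in for an elicited prior."""
    log_kappa, log_lam, beta = theta
    kappa, lam = np.exp(log_kappa), np.exp(log_lam)
    ll = np.sum(d * (log_kappa + log_lam + (kappa - 1) * np.log(t) + beta * z)
                - lam * t**kappa * np.exp(beta * z))
    lp = -0.5 * ((beta - prior_mean) / prior_sd) ** 2
    return -(ll + lp)

# Synthetic two-arm censored data (true log HR = 0.5, shape = 1.3)
n = 300
z = rng.integers(0, 2, n)
t_event = rng.weibull(1.3, n) / np.exp(0.5 * z) ** (1 / 1.3)
c = rng.exponential(2.0, n)
t, d = np.minimum(t_event, c), (t_event <= c).astype(float)

fit = minimize(neg_log_post, x0=[0.0, 0.0, 0.0], args=(t, d, z))
sd_beta = np.sqrt(fit.hess_inv[2, 2])     # Laplace approx. posterior SD
print(f"Posterior mode log HR: {fit.x[2]:.3f} (sd ~ {sd_beta:.3f})")
```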

9.
An important evolution in the missing data arena has been the recognition of the need for clarity in objectives. The objectives of primary focus in clinical trials can often be categorized as assessing efficacy or effectiveness. The present investigation illustrated a structured framework for choosing estimands and estimators when testing investigational drugs to treat the symptoms of chronic illnesses. Key issues were discussed and illustrated using a reanalysis of the confirmatory trials from a new drug application in depression. The primary analysis used a likelihood-based approach to assess efficacy: mean change to the planned endpoint of the trial assuming patients stayed on drug. Secondarily, effectiveness was assessed using a multiple imputation approach. The imputation model—derived solely from the placebo group—was used to impute missing values for both the drug and placebo groups. Therefore, this so-called placebo multiple imputation (a.k.a. controlled imputation) approach assumed patients had reduced benefit from the drug after discontinuing it. Results from the example data provided clear evidence of efficacy for the experimental drug and characterized its effectiveness. Data after discontinuation of study medication were not required for these analyses. Given the idiosyncratic nature of drug development, no estimand or approach is universally appropriate. However, the general practice of pairing efficacy and effectiveness estimands may often be useful in understanding the overall risks and benefits of a drug. Controlled imputation approaches, such as placebo multiple imputation, can be a flexible and transparent framework for formulating primary analyses of effectiveness estimands and sensitivity analyses for efficacy estimands.
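A toy sketch of placebo multiple imputation on synthetic data, assuming a simple linear model of change on baseline fitted to placebo completers; a real analysis would draw the imputation-model parameters from their posterior and combine estimates with Rubin's rules, rather than reporting only the between-imputation spread as done here:

```python
import numpy as np

rng = np.random.default_rng(3)

def placebo_mi(base, change, arm, missing, m=20):
    """Impute missing changes in BOTH arms from a placebo-only model,
    encoding the assumption that benefit stops after discontinuation."""
    pb = (arm == 0) & ~missing
    beta, alpha = np.polyfit(base[pb], change[pb], 1)      # placebo model
    resid_sd = np.std(change[pb] - (alpha + beta * base[pb]), ddof=2)
    mu = alpha + beta * base[missing]
    effects = []
    for _ in range(m):
        imp = change.copy()
        imp[missing] = rng.normal(mu, resid_sd)
        effects.append(imp[arm == 1].mean() - imp[arm == 0].mean())
    return np.mean(effects), np.std(effects, ddof=1)

# Tiny synthetic trial: 200 patients, ~20% dropout
n = 200
arm = rng.integers(0, 2, n)
base = rng.normal(25, 5, n)
change = -4 - 3 * arm + 0.1 * base + rng.normal(0, 4, n)
missing = rng.random(n) < 0.2
est, spread = placebo_mi(base, change, arm, missing)
print(f"Effectiveness estimate: {est:.2f} (between-imputation SD {spread:.2f})")
```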

10.
The 'Clinical Antipsychotic Trials of Intervention Effectiveness' (CATIE) study was designed to evaluate whether there were significant differences between several antipsychotic medications in the effectiveness, tolerability, cost and quality of life of subjects with schizophrenia. Overall, 74% of patients discontinued the study medication for various reasons before the end of 18 months in phase I of the study. When such a large percentage of study participants fail to complete the study schedule, it is not clear whether the apparent profile in effectiveness reflects genuine changes over time or is influenced by selection bias, with participants with worse (or better) outcome values being more likely to drop out or discontinue. To assess the effect of dropouts for different reasons on inferences, we construct a joint model for the longitudinal outcome and cause-specific dropouts that allows for interval-censored dropout times. Incorporating information regarding the cause of dropout improves inferences and provides a better understanding of the association between cause-specific dropout and the outcome process. We use simulations to demonstrate the advantages of the joint modelling approach in terms of bias and efficiency.

11.
Before biomarkers can be used in clinical trials or patients' management, the laboratory assays that measure their levels have to go through development and analytical validation. One of the most critical performance metrics for validation of any assay is the minimum level that can be detected; any value below this limit is referred to as below the limit of detection (LOD). Most of the existing approaches that model such biomarkers, restricted by the LOD, are parametric in nature. These parametric models, however, depend heavily on distributional assumptions and can lose precision under model or distributional misspecification. Using an example from a prostate cancer clinical trial, we show how a critical relationship between a serum androgen biomarker and a prognostic factor of overall survival is completely missed by the widely used parametric Tobit model. Motivated by this example, we implement a semiparametric approach, through a pseudo-value technique, that effectively captures the important relationship between the LOD-restricted serum androgen and the prognostic factor. Our simulations show that the pseudo-value based semiparametric model outperforms a commonly used parametric model for modeling below-LOD biomarkers by having lower mean square errors of estimation.
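The pseudo-value device itself is generic: replace each observation's contribution to a statistic by its leave-one-out pseudo-value, then regress those on covariates with standard (e.g. GEE) software. A sketch with a placeholder statistic; for LOD data, the stat argument would be a censoring-aware estimator:

```python
import numpy as np

def pseudo_values(x, stat):
    """Leave-one-out pseudo-values: n*stat(all) - (n-1)*stat(all but i)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    full = stat(x)
    return np.array([n * full - (n - 1) * stat(np.delete(x, i))
                     for i in range(n)])

# Sanity check: with complete data and stat=np.mean, the pseudo-values
# reproduce the observations themselves.
x = np.array([1.2, 0.7, 3.1, 2.2])
print(pseudo_values(x, np.mean))    # -> [1.2, 0.7, 3.1, 2.2]
```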

12.
Propensity score methods are an increasingly popular technique for causal inference. To estimate propensity scores, we must model the distribution of the treatment indicator given a vector of covariates. Much work has been done in the case where the covariates are fully observed. Unfortunately, many large-scale and complex surveys, such as longitudinal surveys, suffer from missing covariate values. In this paper, we compare three different approaches, and their underlying assumptions, to handling missing background data in the estimation and use of propensity scores: a complete-case analysis, a pattern-mixture model based approach developed by Rosenbaum and Rubin (J Am Stat Assoc 79:516–524, 1984), and a multiple imputation approach. We apply these methods to assess the impact of childbearing events on individuals' wellbeing in Indonesia, using a sample of women from the Indonesia Family Life Survey.

13.
In this note, we present a theoretical result which relaxes a critical condition required by the semiparametric approach to dimension reduction. The asymptotic normality of the estimators still holds under weaker assumptions. This improvement greatly increases the applicability of the semiparametric approach.

14.
Biomarkers have the potential to improve our understanding of disease diagnosis and prognosis. Biomarker levels that fall below the assay detection limits (DLs), however, compromise the application of biomarkers in research and practice. Most existing methods to handle non-detects focus on a scenario in which the response variable is subject to the DL; only a few methods consider explanatory variables when dealing with DLs. We propose a Bayesian approach for generalized linear models with explanatory variables subject to lower, upper, or interval DLs. In simulation studies, we compared the proposed Bayesian approach to four commonly used methods in a logistic regression model with explanatory variable measurements subject to the DL. We also applied the Bayesian approach and other four methods in a real study, in which a panel of cytokine biomarkers was studied for their association with acute lung injury (ALI). We found that IL8 was associated with a moderate increase in risk for ALI in the model based on the proposed Bayesian approach.  相似文献   

15.
16.
Clinical trials are often designed to compare continuous non-normal outcomes. The conventional statistical method for such a comparison is the non-parametric Mann–Whitney test, which provides a P-value for testing the hypothesis that the distributions of both treatment groups are identical, but does not provide a simple and straightforward estimate of treatment effect. For that, Hodges and Lehmann proposed estimating the shift parameter between two populations and its confidence interval (CI). However, such a shift parameter does not have a straightforward interpretation, and its CI contains zero in some cases when the Mann–Whitney test produces a significant result. To overcome these problems, we introduce the use of the win ratio for analysing such data. Patients in the new and control treatment are formed into all possible pairs. For each pair, the new treatment patient is labelled a 'winner' or a 'loser' if it is known who had the more favourable outcome. The win ratio is the total number of winners divided by the total number of losers. A 95% CI for the win ratio can be obtained using the bootstrap method. Statistical properties of the win ratio statistic are investigated using two real trial data sets and six simulation studies. Results show that the win ratio method has about the same power as the Mann–Whitney method. We recommend the use of the win ratio method for estimating the treatment effect (and CI) and the Mann–Whitney method for calculating the P-value for comparing continuous non-normal outcomes when the number of tied pairs is small.
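The construction is straightforward to compute directly. A sketch for continuous outcomes where larger is more favourable, with a percentile bootstrap CI (the data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)

def win_ratio(new, control):
    """Win ratio over all (new, control) pairs; ties count for neither.
    Assumes a larger outcome is more favourable."""
    diffs = np.subtract.outer(np.asarray(new, float), np.asarray(control, float))
    wins, losses = np.sum(diffs > 0), np.sum(diffs < 0)
    return wins / losses

def bootstrap_ci(new, control, reps=2000, level=0.95):
    """Percentile bootstrap CI, resampling each arm with replacement."""
    sims = [win_ratio(rng.choice(new, len(new)),
                      rng.choice(control, len(control))) for _ in range(reps)]
    a = (1 - level) / 2
    return np.quantile(sims, [a, 1 - a])

new = rng.exponential(1.3, 80)      # illustrative skewed outcomes
control = rng.exponential(1.0, 80)
print(f"Win ratio: {win_ratio(new, control):.2f}")
print(f"95% bootstrap CI: {bootstrap_ci(new, control).round(2)}")
```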

17.
We have developed a new approach to determine the threshold of a biomarker that maximizes the classification accuracy of a disease. We consider a Bayesian estimation procedure for this purpose and illustrate the method using a real data set. In particular, we determine the thresholds for Apolipoprotein B (ApoB), Apolipoprotein A1 (ApoA1) and their ratio for the classification of myocardial infarction (MI). We first conduct a literature review and construct prior distributions. We then develop classification rules based on the posterior distribution of the location and scale parameters for these biomarkers. We identify the thresholds for ApoB, ApoA1 and the ratio as 0.908 g/L, 1.138 g/L and 0.808, respectively. We also observe that the threshold for disease classification varies substantially across different age and ethnic groups. Next, we identify the most informative predictor for MI among the three biomarkers. Based on this analysis, ApoA1 appears to be a stronger predictor than ApoB for MI classification. Given that we have used this data set for illustration only, the results will require further investigation before use in clinical applications. However, the approach developed in this article can be used to determine the threshold of any continuous biomarker for a binary disease classification.
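An empirical analogue of the threshold search (the paper instead works with posterior distributions of location and scale parameters): grid-search the cutoff that maximizes accuracy on labelled data, illustrated with synthetic ApoB-like values:

```python
import numpy as np

def best_threshold(marker, disease):
    """Grid-search the cutoff maximizing classification accuracy,
    classifying marker >= cutoff as diseased."""
    marker, disease = np.asarray(marker, float), np.asarray(disease, bool)
    cands = np.unique(marker)
    acc = [np.mean((marker >= c) == disease) for c in cands]
    i = int(np.argmax(acc))
    return cands[i], acc[i]

rng = np.random.default_rng(9)
apob = np.concatenate([rng.normal(0.85, 0.2, 300),    # controls (g/L)
                       rng.normal(1.10, 0.2, 300)])   # MI cases
mi = np.repeat([False, True], 300)
cut, acc = best_threshold(apob, mi)
print(f"Threshold: {cut:.3f} g/L, accuracy: {acc:.3f}")
```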

18.
Assessment of analytical similarity of tier 1 quality attributes is based on a set of hypotheses that tests the mean difference between reference and test products against a margin adjusted for the standard deviation of the reference product. Thus, proper assessment of the biosimilarity hypothesis requires statistical tests that account for the uncertainty associated with the estimation of the mean difference and the standard deviation of the reference product. Recently, a linear reformulation of the biosimilarity hypothesis has been proposed, which facilitates the development and implementation of statistical tests. These statistical tests account for the uncertainty in the estimation of all the unknown parameters. In this paper, we survey methods for constructing confidence intervals for testing the linearized reformulation of the biosimilarity hypothesis and compare the performance of the methods. We discuss test procedures using confidence intervals to enable comparison among recently developed methods as well as other previously developed methods that have not been applied to demonstrating analytical similarity. A computer simulation study was conducted to compare the performance of the methods in terms of their ability to maintain the test size and power, as well as their computational complexity. We demonstrate the methods using two example applications and conclude with recommendations concerning their use.

19.
A mixed-integer programming formulation for clustering is proposed, one that encompasses a wider range of objectives and side conditions than standard clustering approaches. The flexibility of the formulation is demonstrated in diagrams of sample problems and solutions. Preliminary computational tests in a practical setting confirm the usefulness of the formulation.
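For flavour, a p-median-style formulation with one side condition (a minimum cluster size), sketched with the PuLP modelling library; the paper's formulation is more general, and this is only one instance of the approach:

```python
import numpy as np
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

rng = np.random.default_rng(2)
pts = rng.random((12, 2))
n, k, min_size = len(pts), 3, 3
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

prob = LpProblem("mip_clustering", LpMinimize)
x = LpVariable.dicts("assign", (range(n), range(n)), cat=LpBinary)
y = LpVariable.dicts("center", range(n), cat=LpBinary)

# Minimize total distance of points to their cluster centers
prob += lpSum(dist[i, j] * x[i][j] for i in range(n) for j in range(n))
for i in range(n):
    prob += lpSum(x[i][j] for j in range(n)) == 1      # one cluster per point
for j in range(n):
    prob += lpSum(x[i][j] for i in range(n)) >= min_size * y[j]  # side condition
    for i in range(n):
        prob += x[i][j] <= y[j]                        # only open centers
prob += lpSum(y[j] for j in range(n)) == k             # exactly k clusters

prob.solve()
labels = [next(j for j in range(n) if x[i][j].value() > 0.5) for i in range(n)]
print(labels)
```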

20.
The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and implemented in standard calculation worksheets. To assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results show no statistical differences between the AUC values obtained by the two procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of the 95% confidence interval. Thus, the newly proposed method proves to be as useful as the WinNonlin® software where the latter is applicable.
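The general shape of such estimators: write the trapezoidal AUC as a weighted sum of per-time-point means, so a standard error follows from the per-time-point variances. This is the classical Bailer-type construction, not necessarily the paper's new estimator, and the concentrations below are invented:

```python
import numpy as np

def auc_with_se(t, means, sds, ns):
    """Trapezoidal AUC as sum(w_i * mean_i), with
    SE = sqrt(sum(w_i^2 * sd_i^2 / n_i))."""
    t, means, sds, ns = map(np.asarray, (t, means, sds, ns))
    w = np.zeros_like(t, dtype=float)            # trapezoid weights
    w[0] = (t[1] - t[0]) / 2
    w[-1] = (t[-1] - t[-2]) / 2
    w[1:-1] = (t[2:] - t[:-2]) / 2
    auc = np.sum(w * means)
    se = np.sqrt(np.sum(w**2 * sds**2 / ns))
    return auc, se

t = [0, 0.5, 1, 2, 4, 8]                  # sampling times (h)
means = [0, 12.1, 9.8, 6.4, 3.0, 0.9]     # mean concentrations
sds = [0, 2.0, 1.8, 1.2, 0.8, 0.3]
ns = [4, 4, 4, 4, 4, 4]                   # animals per time point
auc, se = auc_with_se(t, means, sds, ns)
print(f"AUC = {auc:.2f} +/- {1.96 * se:.2f} (95% CI half-width)")
```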
