Similar Articles
1.
In drug development, it sometimes occurs that a new drug does not demonstrate effectiveness for the full study population but appears to be beneficial in a relevant subgroup. If the subgroup of interest was not part of a confirmatory testing strategy, the inflation of the overall type I error is substantial, and such a subgroup finding can only be regarded as exploratory. To support such exploratory findings, an appropriate replication of the subgroup finding should be undertaken in a new trial. We should, however, be reasonably confident in the observed treatment effect size to be able to use this estimate in a replication trial in the subpopulation of interest. We were therefore interested in evaluating the bias of the estimate of the subgroup treatment effect after selection based on significance for the subgroup in an overall “failed” trial. Different scenarios, involving continuous as well as dichotomous outcomes, were investigated via simulation studies. It is shown that the bias associated with subgroup findings in overall nonsignificant clinical trials is on average large and varies substantially across plausible scenarios. This renders the subgroup treatment estimate from the original trial of limited value for designing the replication trial. An empirical Bayesian shrinkage method is suggested to minimize this overestimation. The proposed estimator appears to offer either a good or a conservative correction to the observed subgroup treatment effect and hence provides a more reliable subgroup treatment effect estimate for adequate planning of future studies.
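The selection effect described above can be reproduced in a short simulation. The sketch below uses hypothetical sample sizes and effect size, not the authors' simulation settings or their shrinkage estimator: it generates two-arm trials in which only the subgroup benefits, keeps the subgroup estimate only when the overall test fails but the subgroup test is significant, and shows that the retained estimates overstate the true effect on average.

```python
import random, math

random.seed(1)
TRUE_SUBGROUP_EFFECT = 0.3      # true standardized effect in the subgroup
N_SUB, N_COMP = 50, 150         # patients per arm: subgroup / complement
SIGMA = 1.0

def mean(xs):
    return sum(xs) / len(xs)

def z_stat(diff, n_per_arm):
    return diff / (SIGMA * math.sqrt(2.0 / n_per_arm))

selected = []
for _ in range(4000):
    # the drug works only in the subgroup; the complement effect is zero
    sub_t  = [random.gauss(TRUE_SUBGROUP_EFFECT, SIGMA) for _ in range(N_SUB)]
    sub_c  = [random.gauss(0.0, SIGMA) for _ in range(N_SUB)]
    comp_t = [random.gauss(0.0, SIGMA) for _ in range(N_COMP)]
    comp_c = [random.gauss(0.0, SIGMA) for _ in range(N_COMP)]

    d_sub = mean(sub_t) - mean(sub_c)
    d_all = mean(sub_t + comp_t) - mean(sub_c + comp_c)

    # keep the subgroup estimate only from "failed" trials whose
    # subgroup analysis happens to be significant
    if abs(z_stat(d_all, N_SUB + N_COMP)) < 1.96 and \
       abs(z_stat(d_sub, N_SUB)) >= 1.96:
        selected.append(d_sub)

bias = mean(selected) - TRUE_SUBGROUP_EFFECT   # positive: overestimation
```

The conditioning on subgroup significance truncates the sampling distribution of the estimate from below, which is what drives the positive bias.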

2.
A no-constant strategy is considered for the heterogeneous autoregressive (HAR) model of Corsi, motivated by the smaller biases of its estimated HAR coefficients relative to those of the constant HAR model. The no-constant model produces better forecasts than the constant model for four real datasets of the realized volatilities (RVs) of some major assets. The robustness of the forecast improvement is verified for other functions of realized variance and log RV, and for the extended datasets of all 20 RVs of the Oxford-Man realized library. A Monte Carlo simulation also reveals improved forecasts for the historic HAR model estimated by Corsi.
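For reference, Corsi's HAR regression explains today's RV by yesterday's RV and its weekly and monthly averages; dropping the intercept gives the no-constant variant. A minimal sketch, fitting both versions by least squares on simulated data (illustrative coefficients, not the paper's datasets):

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate a toy (log-)RV series from a HAR-type recursion (illustration only)
T = 1000
rv = np.zeros(T)
for t in range(22, T):
    d = rv[t-1]                # daily lag
    w = rv[t-5:t].mean()       # weekly average
    m = rv[t-22:t].mean()      # monthly average
    rv[t] = 0.4*d + 0.3*w + 0.25*m + rng.normal(scale=0.1)

def har_design(x):
    rows, target = [], []
    for t in range(22, len(x)):
        rows.append([x[t-1], x[t-5:t].mean(), x[t-22:t].mean()])
        target.append(x[t])
    return np.array(rows), np.array(target)

X, y = har_design(rv)

# constant HAR: include an intercept column
beta_c, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
# no-constant HAR: regress through the origin
beta_nc, *_ = np.linalg.lstsq(X, y, rcond=None)

# one-step-ahead forecasts from the most recent observations
last = np.array([rv[-1], rv[-5:].mean(), rv[-22:].mean()])
fc_c = beta_c[0] + last @ beta_c[1:]
fc_nc = last @ beta_nc
```

Since the simulated series has no intercept, the no-constant fit is correctly specified here; the paper's point is that this restriction also helps empirically.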

3.
The estimand framework requires a precise definition of the clinical question of interest (the estimand) as different ways of accounting for “intercurrent” events post randomization may result in different scientific questions. The initiation of subsequent therapy is common in oncology clinical trials and is considered an intercurrent event if the start of such therapy occurs prior to a recurrence or progression event. Three possible ways to account for this intercurrent event in the analysis are to censor at initiation, consider recurrence or progression events (including death) that occur before and after the initiation of subsequent therapy, or consider the start of subsequent therapy as an event in and of itself. The new estimand framework clarifies that these analyses address different questions (“does the drug delay recurrence if no patient had received subsequent therapy?” vs “does the drug delay recurrence with or without subsequent therapy?” vs “does the drug delay recurrence or start of subsequent therapy?”). The framework facilitates discussions during clinical trial planning and design to ensure alignment between the key question of interest, the analysis, and interpretation. This article is a result of a cross-industry collaboration to connect the International Council for Harmonisation E9 addendum concepts to applications. Data from previously reported randomized phase 3 studies in the renal cell carcinoma setting are used to consider common intercurrent events in solid tumor studies, and to illustrate different scientific questions and the consequences of the estimand choice for study design, data collection, analysis, and interpretation.
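The three strategies can be made concrete by deriving (time, event) pairs from the same patient records in three different ways. A minimal sketch with hypothetical records (times in months; the field layout is invented for illustration):

```python
# each record: (progression_time, subsequent_tx_time, followup_end); None = not observed
patients = [
    (9.0, None, 24.0),    # progressed, no subsequent therapy
    (15.0, 6.0, 24.0),    # started new therapy at month 6, progressed at 15
    (None, 12.0, 24.0),   # started new therapy, never progressed on study
    (None, None, 24.0),   # event-free throughout follow-up
]

def hypothetical(p):
    """Censor at start of subsequent therapy ('...if no patient had received it')."""
    prog, tx, end = p
    if tx is not None and (prog is None or tx < prog):
        return (tx, 0)
    return (prog, 1) if prog is not None else (end, 0)

def treatment_policy(p):
    """Count progression whenever it occurs, with or without subsequent therapy."""
    prog, tx, end = p
    return (prog, 1) if prog is not None else (end, 0)

def composite(p):
    """Treat the start of subsequent therapy as an event in and of itself."""
    prog, tx, end = p
    times = [t for t in (prog, tx) if t is not None]
    return (min(times), 1) if times else (end, 0)

results = {f.__name__: [f(p) for p in patients] for f in
           (hypothetical, treatment_policy, composite)}
```

The second patient illustrates the divergence: censored at month 6, a progression event at month 15, or an event at month 6, depending on the estimand.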

4.
At the 22nd Annual North Carolina Serials Conference, focused on “Collaboration, Community, and Connection,” Linda Blake and Hilary Fredette of West Virginia University presented “‘Can we Lend?’: Communicating Interlibrary Loan Rights,” reviewing their experiences collaborating across an academic library to achieve the best possible interlibrary loan e-journal access within the bounds of sometimes inscrutable licenses.

5.
It is often known in advance that certain subsets of factors act independently upon a response. Such information can be used to advantage in estimation by aliasing low-order effects with such zero interactions. We find the best 2^(n−k) fractions for the case when the factors can be partitioned into two classes such that non-zero interactions may exist only between classes but not within a class.
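A 2^(n−k) fraction is built by assigning the k added factors to interaction columns of the n−k basic factors. A small sketch of a 2^(6−2) fraction in ±1 coding (generators E = ABC and F = BCD chosen purely for illustration, not the optimal fractions derived in the paper, which depend on the between-class interaction pattern):

```python
from itertools import product

# basic factors A, B, C, D in +/-1 coding; generators E = ABC, F = BCD
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c          # added factor E aliased with the ABC interaction
    f = b * c * d          # added factor F aliased with the BCD interaction
    runs.append((a, b, c, d, e, f))
```

With 16 runs for 6 factors, every run satisfies the defining relation I = ABCE = BCDF = ADEF; choosing which interactions to alias is exactly where knowledge of zero interactions pays off.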

6.
Nonlinear mixed effects models (NLMEM) are used in pharmacokinetics to analyse patients' concentration data during drug development, particularly in pediatric studies. Approaches based on the Fisher information matrix can be used to optimize their design. Local design requires a priori parameter values, which may be difficult to guess. Two-stage adaptive designs are therefore useful for providing some flexibility. We implemented the Fisher information matrix for two-stage designs in NLMEM in the R function PFIM. We evaluated, with simulations, the impact of one-stage and two-stage designs on the precision of parameter estimation when the true and a priori parameters differ.

7.
8.
The Gröbner basis method in experimental design (Pistone & Wynn, 1996) is developed in a practical setting. The computational algebraic techniques (Gröbner bases in particular) are coupled with statistical strategies, and links to more standard approaches are made. A new method of analysing a non-orthogonal experiment based on the Gröbner basis method is introduced. Examples are given utilizing the approaches.

9.
Summary: Asset prices in general and property prices in particular have gained increasing importance in recent years. Whereas in the late 1980s (after the stock market crash in autumn 1987) and in the last decade these prices mainly came under the heading of asset-price inflation/deflation, the focus has recently shifted to sustainable and viable financial systems. The notes primarily explain why the Bundesbank is involved in this area of price statistics, when this involvement began and what underlying data the Bundesbank uses. At the same time, they not only indicate the large degree of uncertainty in the reported data but also highlight the second-best nature of the calculations.
*Lecture given at the 9th conference “Messen der Teuerung”, 17–18 June 2004, in Marburg. The author expresses his personal views, which do not necessarily coincide with those of the Deutsche Bundesbank.

10.
Statistical Methods & Applications - The first cluster of coronavirus cases in Europe was officially detected on 21st February 2020 in Northern Italy, even if recent evidence showed sporadic...  相似文献   

11.
Many authors have criticized the use of spreadsheets for statistical data processing and computing because of incorrect statistical functions, the lack of a log file or audit trail, inconsistent behavior of computational dialogs, and poor handling of missing values. Improvements in some spreadsheet processors and the availability of audit trail facilities suggest that the use of a spreadsheet for some statistical data entry and simple analysis tasks may now be acceptable. A brief outline of some issues and some guidelines for good practice are included.

12.
When the experimenter suspects that there might be a quadratic relation between the response variable and the explanatory parameters, a design with at least three points must be employed to establish and explore this relation (a second-order design). Orthogonal arrays (OAs) with three levels are often used as second-order response surface designs. Generally, we assume that the data are independent observations; however, there are many situations where this assumption may not be sustainable. In this paper, we compare three-level OAs with 18, 27, and 36 runs under the presence of three specific forms of correlation in the observations. The aim is to derive the best designs that can be efficiently used for response surface modeling.

13.
In this paper, the problem of nonlinear unbiased estimation of the expectation in linear models is considered. The considerations are restricted to linear-plus-quadratic estimators with quadratic parts invariant under a group of translations. The one-way classification model is considered in detail, and an explicit formula for the locally best estimators is presented. A numerical evaluation of the variances of the best estimators is given for some unbalanced one-way classification models and compared with the variance of the ordinary linear estimators.

14.
This paper deals with a testing problem for each of the interaction parameters of the Lotka–Volterra ordinary differential equation (ODE) system. In short, when the rates of birth and death are fixed, we would like to test whether each interaction parameter is higher or lower than a fixed reference rate. We choose a statistical model where the actual population sizes are modelled as random perturbations of the solutions to this ODE. By assuming that the random perturbations follow correlated Ornstein–Uhlenbeck processes, we propose the uniformly most powerful test concerning each interaction parameter of the ODE, and we establish the asymptotic properties of the test. Further, we illustrate the suggested test on the Canadian mink–muskrat data set. This research has received financial support from the Natural Sciences and Engineering Research Council of Canada and the Institut des Sciences Mathématiques.
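The statistical model described, an ODE solution observed through Ornstein–Uhlenbeck perturbations, can be simulated directly. A sketch with illustrative parameter values (Euler discretization of the ODE and Euler–Maruyama for the noise; independent rather than correlated OU perturbations, for simplicity):

```python
import math, random

random.seed(0)

# Lotka-Volterra: prey' = prey*(alpha - beta*pred), pred' = pred*(delta*prey - gamma)
alpha, beta, delta, gamma = 1.0, 0.5, 0.2, 0.8
dt, n_steps = 0.01, 2000

prey, pred = 2.0, 1.0
u = v = 0.0                      # OU perturbations, one per coordinate
theta, sigma = 1.5, 0.1          # OU mean-reversion and noise scale (illustrative)

obs = []
for _ in range(n_steps):
    # Euler step for the deterministic ODE
    d_prey = prey * (alpha - beta * pred)
    d_pred = pred * (delta * prey - gamma)
    prey += d_prey * dt
    pred += d_pred * dt
    # Euler-Maruyama step for the OU perturbations
    u += -theta * u * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    v += -theta * v * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    # observed population sizes = ODE solution + OU noise
    obs.append((prey + u, pred + v))
```

Testing an interaction parameter such as beta would then proceed from the likelihood of `obs` under this observation model; the sketch only generates data of the assumed form.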

15.
One of the games routinely played on the TV game show The Price is Right involves spinning a large wheel with the nickel values 5, 10, 15, …, 100 on it. The object of the game is to have the highest total score, from one or two spins, of all of the players in the game without going over a dollar (100). Using conditional probability and a Pascal computer program, the authors derive the optimal stopping times for all three players in the usual game.
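The key computation is the distribution of a player's final score under a "respin if at or below t" policy, where totals over a dollar score zero. A sketch using exact rational arithmetic (the threshold here is chosen for illustration; the optimal thresholds depend on the player's position in the spinning order):

```python
from fractions import Fraction

VALUES = list(range(5, 105, 5))          # wheel sectors: 5, 10, ..., 100
P = Fraction(1, len(VALUES))             # each sector equally likely

def score_distribution(threshold):
    """Final-score distribution if the player respins whenever the
    first spin is <= threshold; totals over 100 count as 0 (bust)."""
    dist = {}
    for first in VALUES:
        if first > threshold:            # stop: keep the first spin
            dist[first] = dist.get(first, 0) + P
        else:                            # respin and add the second spin
            for second in VALUES:
                total = first + second
                s = total if total <= 100 else 0
                dist[s] = dist.get(s, 0) + P * P
    return dist

dist = score_distribution(50)
total_prob = sum(dist.values())
```

Comparing such distributions across thresholds, conditional on the scores already posted by earlier players, is what yields the optimal stopping rules.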

16.
Joachim Bellach, Statistics, 2013, 47(2): 277–291
Some ways of Studentizing the parametric c-sample tests (c ≥ 2) for location are examined and their asymptotic properties established. A way of Studentizing the c-sample Puni test based on the ranks of the observations is proposed. The resulting test is shown to be asymptotically valid and consistent for a reasonable class of alternatives.

17.
In this paper we consider linear sufficiency and linear completeness in the context of estimating the estimable parametric function Kβ under the general Gauss–Markov model {y, Xβ, σ²V}. We give new characterizations of linear sufficiency, and define and characterize linear completeness in the case of estimation of Kβ. Also, we consider a predictive approach for obtaining the best linear unbiased estimator of Kβ and, subsequently, give the linear analogues of the Rao–Blackwell and Lehmann–Scheffé theorems in the context of estimating Kβ.

18.
Stochastic models for discrete time series in the time domain are well known, but such models lack consideration of spatial dependency. We expand on this work by constructing spatially dependent moving average models. Definitions of order, stationarity, invertibility, the autocorrelation function, and the spectrum are made as natural extensions of those in zero dimensions and are implemented in the one- and two-space-dimensional models.
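A first-order spatially dependent moving average in two dimensions can be written Z(i, j) = e(i, j) + θ1·e(i−1, j) + θ2·e(i, j−1) for a white-noise field e. A sketch with illustrative coefficients, which also checks the lag-(1, 0) autocovariance (for this model it equals θ1·Var(e)):

```python
import random

random.seed(3)
ROWS, COLS, THETA1, THETA2 = 60, 60, 0.6, 0.4

# white-noise field with a one-cell halo so every site has its neighbours
e = [[random.gauss(0, 1) for _ in range(COLS + 1)] for _ in range(ROWS + 1)]

# first-order spatial moving average in two dimensions:
# Z(i, j) = e(i, j) + theta1 * e(i-1, j) + theta2 * e(i, j-1)
z = [[e[i][j] + THETA1 * e[i-1][j] + THETA2 * e[i][j-1]
      for j in range(1, COLS + 1)] for i in range(1, ROWS + 1)]

# sample autocovariance at spatial lag (1, 0); theoretical value is THETA1 = 0.6
flat = [v for row in z for v in row]
mean = sum(flat) / len(flat)
acc, n = 0.0, 0
for i in range(1, ROWS):
    for j in range(COLS):
        acc += (z[i][j] - mean) * (z[i-1][j] - mean)
        n += 1
lag10_cov = acc / n
```

The cutoff property of MA models carries over: covariances vanish beyond the model's spatial order, which is what makes order identifiable from the autocorrelation function.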

19.
Data envelopment analysis (DEA) and free disposal hull (FDH) estimators are widely used to estimate efficiency of production. Practitioners use DEA estimators far more frequently than FDH estimators, implicitly assuming that production sets are convex. Moreover, use of the constant returns to scale (CRS) version of the DEA estimator requires an assumption of CRS. Although bootstrap methods have been developed for making inference about the efficiencies of individual units, until now no methods exist for making consistent inference about differences in mean efficiency across groups of producers or for testing hypotheses about model structure such as returns to scale or convexity of the production set. We use central limit theorem results from our previous work to develop additional theoretical results permitting consistent tests of model structure and provide Monte Carlo evidence on the performance of the tests in terms of size and power. In addition, the variable returns to scale version of the DEA estimator is proved to attain the faster convergence rate of the CRS-DEA estimator under CRS. Using a sample of U.S. commercial banks, we test and reject convexity of the production set, calling into question results from numerous banking studies that have imposed convexity assumptions. Supplementary materials for this article are available online.
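The input-oriented CRS (CCR) DEA score of a unit solves a small linear program: minimize θ such that some nonnegative combination of all units uses at most θ times the unit's inputs while producing at least its outputs. A minimal sketch with a toy one-input, one-output dataset (not the paper's test procedures or banking data):

```python
import numpy as np
from scipy.optimize import linprog

# toy data: one input, one output per decision-making unit (DMU)
X = np.array([[2.0, 4.0, 3.0]])   # inputs,  shape (m, n)
Y = np.array([[1.0, 2.0, 3.0]])   # outputs, shape (s, n)
n = X.shape[1]

def dea_crs(o):
    """Input-oriented CRS (CCR) efficiency of unit o via linear programming."""
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                 # minimize theta
    # inputs:  X @ lam <= theta * X[:, o]
    a_in = np.hstack([-X[:, [o]], X])
    # outputs: Y @ lam >= Y[:, o]  ->  -Y @ lam <= -Y[:, o]
    a_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([a_in, a_out]),
                  b_ub=np.concatenate([np.zeros(X.shape[0]), -Y[:, o]]),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

scores = [dea_crs(o) for o in range(n)]
```

Adding the convexity constraint sum(lambda) = 1 (via `A_eq`) gives the variable-returns-to-scale version; FDH instead restricts lambda to unit vectors, dropping convexity altogether, which is the distinction the paper's convexity test addresses.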

20.
Summary: Panel data offer a unique opportunity to identify records that interviewers clearly faked by comparing data waves. In the German Socio-Economic Panel (SOEP), only 0.5 percent of all records of raw data have been detected as faked. These fakes are used here to analyze the potential impact of fakes on survey results. Our central finding is that the faked records have no impact on the mean or the proportions. However, we show that there may be a serious bias in the estimation of correlations and regression coefficients. In all but one year (1998), the detected faked data were never disseminated within the widely used SOEP study. The fakes are removed prior to data release.* We are grateful to participants in the workshop on Item Nonresponse and Data Quality on Large Social Surveys for useful critique and comments, especially Rainer Schnell and our outstanding discussant Regina Riphahn. The usual disclaimer applies.
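The mechanism, that fakes matching plausible marginal distributions preserve means but attenuate correlations, can be illustrated with a small simulation (hypothetical variables and a much higher fake share than SOEP's 0.5 percent, to make the effect visible):

```python
import random

random.seed(7)
N_REAL, N_FAKE = 2000, 100   # ~5% fakes, far above SOEP's detected 0.5 percent

# genuine interviews: two items with a true positive correlation
real = []
for _ in range(N_REAL):
    x = random.gauss(0, 1)
    real.append((x, 0.8 * x + random.gauss(0, 0.6)))

# a faking interviewer matches plausible marginals but not the joint law
fake = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N_FAKE)]

def corr(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / (sxx * syy) ** 0.5

r_clean = corr(real)                 # close to the true correlation
r_mixed = corr(real + fake)          # attenuated by the independent fakes

m_clean = sum(x for x, _ in real) / N_REAL
m_mixed = sum(x for x, _ in real + fake) / (N_REAL + N_FAKE)
mean_shift = abs(m_mixed - m_clean)  # means are barely affected
```

The fakes dilute the cross-product term while leaving both marginal means near zero, so estimated means survive but correlation (and hence regression) estimates are biased toward zero.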
