Similar Documents
20 similar documents retrieved.
1.
In the first part of the paper we give a brief survey of methods for estimating distributed lag models in which the coefficients can be approximated by a polynomial. In the second part we give a corrected version of Maddala's (1977) formula for the Almon estimator, together with some further results. To facilitate reference to Maddala's work, we use his notation where possible.
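
A minimal sketch of the polynomial (Almon) distributed lag estimator referred to above: the lag coefficients are constrained to lie on a low-order polynomial, the regressors are transformed accordingly, and ordinary least squares is applied. The lag length, polynomial degree, and simulated data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def almon_estimate(y, x, n_lags=8, degree=2):
    """Estimate lag weights beta_0..beta_n constrained to lie on a polynomial."""
    T = len(y)
    # Matrix of lagged regressors: X[row, i] = x[t - i] for t = n_lags..T-1
    X = np.column_stack([x[n_lags - i: T - i] for i in range(n_lags + 1)])
    y_trim = y[n_lags:]
    # Almon constraint: beta_i = sum_j a_j * i**j, i.e. beta = H a
    H = np.vander(np.arange(n_lags + 1), degree + 1, increasing=True)
    Z = X @ H                                   # transformed regressors
    a, *_ = np.linalg.lstsq(Z, y_trim, rcond=None)
    return H @ a                                # implied lag coefficients

rng = np.random.default_rng(0)
x = rng.normal(size=200)
true_beta = np.array([0.1 * (8 - i) * (i + 1) for i in range(9)])   # quadratic lag shape
y = np.convolve(x, true_beta)[:200] + rng.normal(scale=0.1, size=200)
print(almon_estimate(y, x))
```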

2.
Modelling and simulation (M&S) is increasingly being applied in (clinical) drug development. It provides an opportune area for the community of pharmaceutical statisticians to pursue. In this article, we highlight useful principles behind the application of M&S. We claim that M&S should be focussed on decisions, tailored to its purpose and based in applied sciences, not relying entirely on data-driven statistical analysis. Further, M&S should be a continuous process making use of diverse information sources and applying Bayesian and frequentist methodology, as appropriate. In addition to forming a basis for analysing decision options, M&S provides a framework that can facilitate communication between stakeholders. Besides the discussion on modelling philosophy, we also describe how standard simulation practice can be ineffective and how simulation efficiency can often be greatly improved.

3.
Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating the error of prediction to the sample size, which should be of use when designing computer experiments.

Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set so as to maximize the determinant of the variance-covariance matrix of the predictions.
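
A minimal sketch of the simplest augmentation strategy described above: fit a Gaussian-process emulator to the current runs and add the candidate input with the largest prediction variance. The toy simulator, the kernel, and the candidate set are illustrative assumptions, not the fire model used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                      # stand-in for the computer model
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(1)
X = rng.uniform(size=(10, 2))          # initial design
y = simulator(X)
candidates = rng.uniform(size=(500, 2))

for _ in range(5):                     # add five runs, one at a time
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    new_x = candidates[np.argmax(std)][None, :]   # point with maximum prediction variance
    X = np.vstack([X, new_x])
    y = np.append(y, simulator(new_x))

print(X.shape)   # (15, 2): the initial design plus five augmented runs
```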

4.
The general problem of outlier detection is defined, along with the five recursive outlier detection procedures considered in the study. Methods are given for computing the powers, the probabilities of detecting at least one outlier, and the probabilities of declaring more than one observation (including at least one inlier) as outliers; the results are discussed. The results show that no procedure is most powerful regardless of whether the actual number of outliers present in the sample is estimated exactly, underestimated, or overestimated. The probabilities of inliers being detected as outliers are also substantial, particularly when outliers occur only on one side of the sample.
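
For concreteness, a minimal sketch of one generic recursive procedure of this type: the most extreme studentized observation is removed repeatedly and tested against a Rosner-style (generalized ESD) critical value. This is not necessarily one of the five procedures compared in the paper; the sample and the maximum number of outliers tested are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def generalized_esd(x, max_outliers=3, alpha=0.05):
    x = np.asarray(x, dtype=float)
    idx = np.arange(len(x))
    removed, n_out = [], 0
    for i in range(1, max_outliers + 1):
        n = len(x)
        r = np.abs(x - x.mean()) / x.std(ddof=1)     # studentized deviations
        j = np.argmax(r)
        p = 1 - alpha / (2 * n)
        t = stats.t.ppf(p, n - 2)
        lam = (n - 1) * t / np.sqrt((n - 2 + t ** 2) * n)   # Rosner critical value
        if r[j] > lam:
            n_out = i          # ESD rule: largest i whose statistic exceeds its critical value
        removed.append(idx[j])
        x, idx = np.delete(x, j), np.delete(idx, j)
    return removed[:n_out]     # indices declared outliers

print(generalized_esd([2.1, 2.3, 1.9, 2.0, 2.2, 8.5, 2.4]))   # flags the value 8.5
```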

5.
Inference for a generalized linear model is generally performed using asymptotic approximations for the bias and the covariance matrix of the parameter estimators. For small experiments, these approximations can be poor and result in estimators with considerable bias. We investigate the properties of designs for small experiments when the response is described by a simple logistic regression model and parameter estimators are to be obtained by the maximum penalized likelihood method of Firth [Firth, D., 1993, Bias reduction of maximum likelihood estimates. Biometrika, 80, 27–38]. Although this method achieves a reduction in bias, we illustrate that the remaining bias may be substantial for small experiments, and propose minimization of the integrated mean square error, based on Firth's estimates, as a suitable criterion for design selection. This approach is used to find locally optimal designs with two support points.
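
A minimal sketch of Firth's bias-reduced logistic regression, implemented by adding the Jeffreys-prior score adjustment h_i(1/2 − p_i) to the usual score equations and iterating Fisher scoring. The simple two-support-point design and the true parameter values below are illustrative assumptions, not designs recommended in the paper.

```python
import numpy as np

def firth_logistic(X, y, n_iter=50):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        XtWX = X.T @ (X * W[:, None])
        # Hat diagonal of W^(1/2) X (X'WX)^(-1) X' W^(1/2)
        h = np.einsum("ij,jk,ik->i", X * W[:, None], np.linalg.inv(XtWX), X)
        score = X.T @ (y - p + h * (0.5 - p))        # Firth-adjusted score
        beta = beta + np.linalg.solve(XtWX, score)   # Fisher scoring step
    return beta

# Hypothetical two-support-point design: doses -1 and +1, ten runs each
x = np.repeat([-1.0, 1.0], 10)
X = np.column_stack([np.ones_like(x), x])
rng = np.random.default_rng(2)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.5 * x))))
print(firth_logistic(X, y))    # penalized estimates of intercept and slope
```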

6.
A General Multiconsequence Intervention Model class that describes the simultaneous occurrence of a change in the process mean and in the covariance structure is introduced. When the covariance change is negligible, this model class reduces to the intervention models described by Box and Tiao (1975). Maximum likelihood estimators for the parameters of the multiconsequence model class are developed for various important modeling situations that result from different a priori information about the form of the mean shift function and about the model parameters. As a consequence of these estimation results, an identification procedure for determining an appropriate dynamic mean shift form is suggested. The necessary hypothesis tests and corresponding confidence intervals are also developed.
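
As a point of reference for the Box-Tiao special case (a shift in the mean with the covariance structure unchanged), here is a minimal maximum likelihood sketch using a state-space ARMA fit with a step intervention as an exogenous regressor. The intervention time, AR(1) noise model, and shift size are illustrative assumptions; the paper's multiconsequence estimators go beyond this special case.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima_process import arma_generate_sample

np.random.seed(3)
n, t0 = 200, 120                         # series length and intervention time
step = (np.arange(n) >= t0).astype(float)
noise = arma_generate_sample(ar=[1, -0.6], ma=[1], nsample=n)   # AR(1) noise
y = 10 + 2.5 * step + noise              # level shift of 2.5 at t0, covariance unchanged

model = sm.tsa.statespace.SARIMAX(y, exog=step, order=(1, 0, 0), trend="c")
fit = model.fit(disp=False)
print(fit.summary())                     # intercept, step (level-shift) effect, AR(1), sigma^2
```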

7.
Several authors have suggested the method of minimum bias estimation for estimating response surfaces. The minimum bias estimation procedure achieves minimum average squared bias of the fitted model without depending on the values of the unknown parameters of the true surface. The only requirement is that the design satisfies a simple estimability condition. Subject to providing minimum average squared bias, the minimum bias estimator also provides minimum average variance of ŷ(x), where ŷ(x) is the estimate of the response at the point x.

To support the estimation of the parameters in the fitted model, very little has been suggested in the way of experimental designs, except that a full-rank matrix X of independent variables should be used. This paper takes a closer look at the estimability conditions required for minimum bias estimation and, from the form of the matrix X, derives a formula that measures the amount of design flexibility available. This design flexibility is termed “the degrees of freedom” of the X matrix, and it is shown how the degrees of freedom can be used to decide whether other design optimality criteria might be considered along with minimum bias estimation. Several examples are provided.

8.
We develop theorems analogous to the famous theorems of Lindley & Smith (1972) for the case in which the data are from the p-variate von Mises-Fisher distribution. Results similar to those of Lindley & Smith are obtained for the two-stage case. There is, however, a departure from normal theory in the three-stage case, since closed-form solutions do not exist when sampling from the von Mises-Fisher distribution. The difference is discussed and elaborated.

9.
10.
This article considers designed experiments for stability, comparability, and formulation testing that are analyzed with regression models in which the degradation rate is a fixed effect. In this setting, we investigate how the number of lots, the number of time points, and their locations affect the precision of the quantities of interest, the leverages of the time points, the detection of non-linearity, and interim analyses. This investigation shows that modifying the time point locations suggested by ICH for stability studies can significantly improve these objectives. In addition, we show that estimates of precision can be biased when a regression model that assumes independent measurements is used in the presence of within-assay-session correlation. This bias can lead to longer shelf-life estimates in stability studies and to a loss of power in comparability studies. Mixed-effects models that take the within-assay-session correlation into account are shown to reduce this bias. The findings in this article are obtained from well-known statistical theory but provide valuable practical advice to scientists and statisticians designing and interpreting these types of experiments.
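
A minimal sketch contrasting an ordinary regression (which assumes independent measurements) with a mixed-effects model that carries a random intercept for assay session, the kind of comparison discussed above. The degradation rate, session variance, and ICH-style time points are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
months = np.array([0, 3, 6, 9, 12, 18, 24])            # ICH-style stability time points
sessions = np.repeat(np.arange(6), len(months))         # hypothetical assay-session labels
time = np.tile(months, 6)
session_effect = rng.normal(scale=0.4, size=6)[sessions]   # within-session correlation source
potency = 100 - 0.15 * time + session_effect + rng.normal(scale=0.2, size=time.size)
df = pd.DataFrame({"potency": potency, "time": time, "session": sessions})

ols_fit = smf.ols("potency ~ time", df).fit()                              # independence assumed
mixed_fit = smf.mixedlm("potency ~ time", df, groups=df["session"]).fit()  # random session intercept
print(ols_fit.bse["time"], mixed_fit.bse["time"])       # compare reported slope precision
```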

11.
12.
13.
In clinical trials with a time-to-event endpoint, subjects are often at risk for events other than the one of interest. When the occurrence of one type of event precludes observation of any later events or alters the probability of subsequent events, the situation is one of competing risks. During the planning stage of a clinical trial with competing risks, it is important to take all possible events into account. This paper gives expressions for the power and sample size for competing risks based on a flexible parametric Weibull model. Nonuniform accrual to the study is considered, and an allocation ratio other than one may be used. Results are also provided for the case where two or more of the competing risks are of primary interest.
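
The paper derives analytic expressions; the sketch below is a complementary simulation-based check of power under Weibull competing risks, in which the competing event censors the event of interest and a cause-specific log-rank test compares two arms. All shape and scale parameters, the accrual pattern, the 1:1 allocation, and the use of the lifelines package are illustrative assumptions.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(5)

def simulate_arm(n, scale_event, shape=1.5, scale_comp=4.0, accrual=1.0, total=3.0):
    t_event = scale_event * rng.weibull(shape, n)      # event of interest
    t_comp = scale_comp * rng.weibull(shape, n)        # competing risk
    censor = total - accrual * rng.uniform(size=n)     # staggered (nonuniform-entry) censoring
    obs = np.minimum.reduce([t_event, t_comp, censor])
    event = (obs == t_event).astype(int)               # competing event / censoring coded as 0
    return obs, event

def power(n_per_arm, n_sim=500, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        t0, e0 = simulate_arm(n_per_arm, scale_event=2.0)   # control
        t1, e1 = simulate_arm(n_per_arm, scale_event=3.0)   # treatment (longer event times)
        res = logrank_test(t0, t1, event_observed_A=e0, event_observed_B=e1)
        hits += res.p_value < alpha
    return hits / n_sim

print(power(150))    # estimated power for 150 subjects per arm
```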

14.
Developing new medical tests and identifying single biomarkers or panels of biomarkers with superior accuracy over existing classifiers promotes lifelong health of individuals and populations. Before a medical test can be routinely used in clinical practice, its accuracy within diseased and non-diseased populations must be rigorously evaluated. We introduce a method for sample size determination for studies designed to test hypotheses about medical test or biomarker sensitivity and specificity. We show how a sample size can be determined to guard against making type I and/or type II errors by calculating Bayes factors from multiple data sets simulated under null and/or alternative models. The approach can be implemented across a variety of study designs, including investigations into one test or two conditionally independent or dependent tests. We focus on a general setting that involves non-identifiable models for data when true disease status is unavailable due to the nonexistence of, or undesirable side effects from, a perfectly accurate (i.e., 'gold standard') test; special cases of the general method apply to identifiable models with or without gold-standard data. Calculation of Bayes factors is performed by incorporating prior information for model parameters (e.g. sensitivity, specificity, and disease prevalence) and augmenting the observed test-outcome data with unobserved latent data on disease status to facilitate Gibbs sampling from posterior distributions. We illustrate our methods using a thorough simulation study and an application to toxoplasmosis.
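
A minimal sketch of the simulation idea for the simplest identifiable special case, in which gold-standard disease status is available and sensitivity is the only unknown: data sets are simulated under an alternative model, a Beta-binomial Bayes factor against a point null is computed for each, and the proportion of simulations with strong evidence approximates the chance of a conclusive result at that sample size. The priors, null value, true sensitivity, and evidence threshold are illustrative assumptions; the paper's general method handles non-identifiable models via Gibbs sampling with latent disease status.

```python
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(6)

def log_bf10(y, n, se_null=0.80, a=1.0, b=1.0):
    """log Bayes factor: H1 (Se ~ Beta(a, b)) versus H0 (Se = se_null)."""
    log_m1 = betaln(a + y, b + n - y) - betaln(a, b)              # marginal likelihood under H1
    log_m0 = y * np.log(se_null) + (n - y) * np.log1p(-se_null)   # likelihood under the point null
    return log_m1 - log_m0                                        # binomial coefficient cancels

def assurance(n_diseased, se_true=0.90, n_sim=2000, threshold=np.log(3)):
    y = rng.binomial(n_diseased, se_true, size=n_sim)             # simulated true-positive counts
    return np.mean(log_bf10(y, n_diseased) > threshold)           # proportion with strong evidence

for n in (50, 100, 200):
    print(n, assurance(n))
```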

15.
Oja (1983) examined various ways of measuring location, scatter, skewness, and kurtosis for multivariate distributions. Among other measures of location, he introduced a generalised median, referred to in this paper as the Oja median. In our study of the existence of that median, we show that Oja's definition can only be applied to distributions having a mean. In dimension d ≥ 2, we establish that the usual method of extension breaks down, which raises the question of the validity of the concept as a notion of median. Two fundamental theoretical properties of that median are also considered: uniqueness and consistency.
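
A minimal sketch of the sample Oja median in dimension d = 2: the point that minimizes the sum, over all pairs of sample points, of the areas of the triangles they form with the candidate. The simulated sample and the Nelder-Mead optimizer are illustrative choices; the existence issue discussed in the paper concerns the population-level definition, not a finite sample.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

rng = np.random.default_rng(7)
sample = rng.normal(size=(30, 2))
pairs = list(combinations(range(len(sample)), 2))

def oja_objective(theta):
    total = 0.0
    for i, j in pairs:
        a, b = sample[i] - theta, sample[j] - theta
        total += 0.5 * abs(a[0] * b[1] - a[1] * b[0])   # area of the triangle (x_i, x_j, theta)
    return total

res = minimize(oja_objective, x0=sample.mean(axis=0), method="Nelder-Mead")
print(res.x)    # estimated Oja median
```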

16.
Following some results on weak convergence, theorems are given on the selection of sample elements on uniform spaces. Our results are tied in with a theorem of Fernandez (1971, p. 1740).

17.
Consider a finite sample from a generalized negative-binomial distribution where both (canonical and index) parameters are unknown. This note proves that both the maximum-likelihood estimate and the moment estimate of the index parameter exist if and only if the sample variance is greater than the sample mean. This extends a result for the negative-binomial distribution that had been conjectured by Anscombe (1950) and later shown by Levin and Reeds (1977).
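
A minimal sketch of the existence condition for the ordinary negative-binomial case: with mean μ and variance μ + μ²/k, the moment estimate of the index k is k̂ = x̄²/(s² − x̄), which is positive exactly when the sample variance exceeds the sample mean. The simulated sample is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.negative_binomial(n=3, p=0.4, size=200)    # numpy's n plays the role of the index k

xbar, s2 = x.mean(), x.var(ddof=1)
if s2 > xbar:
    k_hat = xbar ** 2 / (s2 - xbar)                # moment estimate of the index parameter
    print(f"overdispersed sample: k_hat = {k_hat:.2f}")
else:
    print("sample variance <= sample mean: moment and ML estimates do not exist")
```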

18.
This paper considers a life test under progressive type-I group censoring with a Weibull failure time distribution. The maximum likelihood method is used to derive estimators of the parameters of the failure time distribution. In practice, several variables, such as the number of test units, the number of inspections, and the length of the inspection interval, are related to the precision of estimation and the cost of the experiment. An inappropriate setting of these decision variables not only wastes the resources of the experiment but also reduces the precision of estimation. One problem arising in designing a life test is the restricted budget of the experiment. Therefore, under the constraint that the total cost of the experiment does not exceed a pre-determined budget, this paper provides an algorithm for determining the optimal decision variables under three different criteria. An example is discussed to illustrate the proposed method. A sensitivity analysis is also presented.
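
A minimal sketch of the maximum likelihood step for Weibull lifetimes under progressive type-I group censoring: at each inspection time the number of failures in the preceding interval is recorded and a preset number of survivors is withdrawn, so the log-likelihood is built from interval probabilities and survival probabilities. The inspection times, failure counts, withdrawal numbers, and starting values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

insp = np.array([1.0, 2.0, 3.0, 4.0])     # inspection times
fails = np.array([6, 9, 8, 5])            # failures observed in each interval
removed = np.array([5, 5, 5, 12])         # survivors withdrawn at each inspection

def weibull_cdf(t, shape, scale):
    return 1.0 - np.exp(-(t / scale) ** shape)

def neg_loglik(params):
    shape, scale = np.exp(params)         # optimize on the log scale to keep parameters positive
    F = weibull_cdf(insp, shape, scale)
    F_prev = np.concatenate(([0.0], F[:-1]))
    # Interval-failure terms plus terms for units withdrawn while still surviving
    ll = np.sum(fails * np.log(F - F_prev)) + np.sum(removed * np.log(1.0 - F))
    return -ll

res = minimize(neg_loglik, x0=np.log([1.0, 2.0]), method="Nelder-Mead")
print(np.exp(res.x))                      # estimated (shape, scale)
```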

19.
Some considerations relating to the post-data selection of models are discussed. These include some difficulties with orthodox theory, implementation of the likelihood principle, and Bayesian tests of hypotheses.

20.
Anti-tumor treatment outcomes in mouse experiments can be challenging to interpret and communicate accurately. In reports of these experiments, rigorous statistical considerations are commonly absent, although statistical approaches have been proposed. We investigated the practicality and utility of different statistical strategies for the analysis of anti-tumor responses in a longitudinal mouse case study. Each analysis that we performed had different endpoints, investigated different questions, and was based on different assumptions. We found rudimentary visual and risk analyses insufficient without additional considerations; upon further investigation, we found improvements in key anti-tumor parameter estimates associated with a drug combination in our case study. We offer practical statistical considerations for investigating anti-cancer treatments in mice, applying a multi-tier statistical approach.
