Similar Articles
20 similar articles retrieved.
1.
ABSTRACT

Computer models depending on unknown inputs are used in computer experiments to study the input-output relationship. Some computer models are computationally intensive, and only a few runs can be made. This article proposes a nonstationary statistical model that serves as a faster-running surrogate for computationally intensive numerical solvers of ordinary differential equation systems. The statistical model reflects the dynamics of the system and, as a population dynamics example shows, can be more accurate than a commonly used black-box statistical model.
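A rough sketch of the surrogate idea described above, under stated assumptions: a handful of runs of a toy logistic-growth ODE solver are emulated with a generic stationary Gaussian process from scikit-learn. This is not the nonstationary, dynamics-aware model proposed in the article; the simulator, its input ranges and the kernel are hypothetical placeholders.

```python
# Hypothetical sketch: emulate an expensive dynamic simulator with a Gaussian-process
# surrogate fitted to a handful of runs. A generic stationary GP stands in for the
# nonstationary, dynamics-aware model proposed in the article.
import numpy as np
from scipy.integrate import odeint
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(theta, t_grid):
    """Toy 'expensive' code: logistic growth dN/dt = r*N*(1 - N/K)."""
    r, K = theta
    return odeint(lambda N, t: r * N * (1.0 - N / K), 10.0, t_grid).ravel()

t_grid = np.linspace(0.0, 10.0, 25)
rng = np.random.default_rng(0)
# Only a few affordable runs at scattered input settings (r, K).
X_train = np.column_stack([rng.uniform(0.3, 1.2, 8), rng.uniform(50, 150, 8)])
Y_train = np.array([simulator(x, t_grid) for x in X_train])   # 8 runs x 25 time points

# One GP over the inputs, predicting all 25 output time points jointly.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.5, 50.0]),
                              normalize_y=True)
gp.fit(X_train, Y_train)

x_new = np.array([[0.8, 100.0]])            # untried input setting
y_surrogate = gp.predict(x_new)             # fast surrogate prediction of the trajectory
y_truth = simulator(x_new[0], t_grid)       # expensive check, only possible in a toy example
print(np.max(np.abs(y_surrogate - y_truth)))
```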

2.
The study of physical processes is often aided by computer models or codes. Computer models that simulate such processes are sometimes computationally intensive and therefore not very efficient exploratory tools. In this paper, we address computer models characterized by temporal dynamics and propose new statistical correlation structures aimed at modelling their time dependence. These correlations are embedded in regression models with an input-dependent design matrix and input-correlated errors that act as fast statistical surrogates for the computationally intensive dynamical codes. The methods are illustrated with an automotive industry application involving a road load data acquisition computer model.

3.
Robust regression has not had a great impact on statistical practice, although all statisticians are convinced of its importance. The procedures for robust regression currently available are complex and computationally intensive. With a modification of the Gaussian paradigm, taking outliers and leverage points into consideration, we propose an iteratively weighted least squares method which gives robust fits. The procedure is illustrated by applying it to data sets which have previously been used to illustrate robust regression methods. It is hoped that this simple, effective and accessible method will find its use in statistical practice.
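A minimal sketch of the general iteratively weighted least squares idea, assuming Huber-type weights and a leverage-based standardization chosen purely for illustration; it is not the authors' specific modification of the Gaussian paradigm.

```python
# Illustrative sketch of iteratively weighted least squares for robust regression.
# Huber-type weights that downweight large residuals, combined with a leverage
# adjustment, are used for illustration only.
import numpy as np

def irls_robust(X, y, c=1.345, n_iter=50, tol=1e-8):
    X1 = np.column_stack([np.ones(len(y)), X])            # add intercept
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]           # OLS start
    H = X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
    lev = np.clip(1.0 - np.diag(H), 1e-6, None)            # 1 - h_ii for standardization
    for _ in range(n_iter):
        r = y - X1 @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
        u = r / (s * np.sqrt(lev))                          # standardized residuals
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))    # Huber weights
        W = np.diag(w)
        beta_new = np.linalg.solve(X1.T @ W @ X1, X1.T @ W @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, w

# Example with one gross outlier in y.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 30)
y[0] += 10.0                                               # contaminate one point
beta_hat, weights = irls_robust(x.reshape(-1, 1), y)
print(beta_hat)        # close to (2.0, 0.5) despite the outlier
print(weights[0])      # the contaminated point receives a small weight
```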

4.
Data from complex surveys are being used increasingly to build the same sort of explanatory and predictive models as those used in the rest of statistics. Unfortunately, the assumptions underlying standard statistical methods are not even approximately valid for most survey data. The problem of parameter estimation has been largely solved, at least for routine data analysis, through the use of weighted estimating equations, and software for most standard analytical procedures is now available in the major statistical packages. One notable omission from standard software is an analogue of the likelihood ratio test. An exception is the Rao–Scott test for loglinear models in contingency tables. In this paper we show how the Rao–Scott test can be extended to handle arbitrary regression models. We illustrate the process of fitting a model to survey data with an example from NHANES.

5.
Changes in survival rates during 1940–1992 for patients with Hodgkin's disease are studied by using population-based data. The aim of the analysis is to identify when the breakthrough in clinical trials of chemotherapy treatments started to increase population survival rates, and to find how long it took for the increase to level off, indicating that the full population effect of the breakthrough had been realized. A Weibull relative survival model is used because the model parameters are easily interpretable when assessing the effect of advances in clinical trials. However, the methods apply to any relative survival model that falls within the generalized linear models framework. The model is fitted by using modifications of existing software (SAS, GLIM) and profile likelihood methods. The results are similar to those from a cause-specific analysis of the data by Feuer and co-workers. Survival started to improve around the time that a major chemotherapy breakthrough (nitrogen mustard, Oncovin, prednisone and procarbazine) was publicized in the mid 1960s but did not level off for 11 years. For the analysis of data where the cause of death is obtained from death certificates, the relative survival approach has the advantage of providing the necessary adjustment for expected mortality from causes other than the disease without requiring information on the causes of death.

6.
The non-homogeneous Poisson process (NHPP) model is a very important class of software reliability models and is widely used in software reliability engineering. NHPPs are characterized by their intensity functions. In the literature it is usually assumed that the functional form of the intensity function is known and only some of its parameters are unknown. Parametric statistical methods can then be applied to estimate or test the unknown reliability models. In realistic situations, however, the functional form of the failure intensity is often not well known or is completely unknown, and functional (non-parametric) estimation methods must be used. Non-parametric techniques do not require any preliminary assumption about the software model and can therefore reduce modelling bias. The non-parametric methods existing in the statistical literature are usually not applicable to software reliability data. In this paper we construct non-parametric methods to estimate the failure intensity function of the NHPP model, taking the particularities of software failure data into consideration.
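A minimal sketch of one generic way to estimate an NHPP intensity non-parametrically, namely a kernel smoother centred at the observed failure times. The bandwidth, boundary handling and the failure times themselves are illustrative assumptions, not the constructions developed in the paper.

```python
# Minimal sketch: kernel estimate of an NHPP failure intensity from observed
# failure times. Bandwidth and boundary handling are simplistic placeholders.
import numpy as np

def kernel_intensity(failure_times, t_grid, bandwidth):
    """Sum of Gaussian kernels centred at the failure times."""
    failure_times = np.asarray(failure_times, dtype=float)
    diffs = (t_grid[:, None] - failure_times[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kernels.sum(axis=1)                # lambda_hat(t), failures per unit time

# Hypothetical failure times (e.g. CPU hours at which software failures occurred).
failures = [5.2, 9.7, 14.1, 16.3, 21.8, 23.0, 24.4, 25.1, 25.9, 26.3]
t_grid = np.linspace(0.0, 30.0, 301)
lam_hat = kernel_intensity(failures, t_grid, bandwidth=3.0)

# The integrated intensity over the observation window should roughly equal the
# observed number of failures (up to edge effects).
print(np.trapz(lam_hat, t_grid), len(failures))
```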

7.
The big data era demands new statistical analysis paradigms, since traditional methods often break down when datasets are too large to fit on a single desktop computer. Divide and Recombine (D&R) is becoming a popular approach for big data analysis, where results are combined over subanalyses performed on separate data subsets. In this article, we consider situations where unit record data cannot be made available by data custodians due to privacy concerns, and explore the concept of statistical sufficiency and summary statistics for model fitting. The resulting approach represents a type of D&R strategy, which we refer to as summary-statistics D&R, as opposed to the standard approach, which we refer to as horizontal D&R. We demonstrate the concept via an extended Gamma–Poisson model, where summary statistics are extracted from different databases and incorporated directly into the fitting algorithm without having to combine unit record data. By exploiting the natural hierarchy of the data, our approach has major benefits in terms of privacy protection. Incorporating the proposed modelling framework into data extraction tools such as TableBuilder by the Australian Bureau of Statistics allows for potential analysis at a finer geographical level, which we illustrate with a multilevel analysis of Australian unemployment data. Supplementary materials for this article are available online.
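A simplified illustration of the summary-statistics idea, assuming a plain Poisson-rate model with a conjugate gamma prior rather than the paper's extended Gamma–Poisson model: each custodian releases only the sufficient statistics (total events, total exposure), and the combined fit never touches unit record data.

```python
# Simplified illustration of summary-statistics D&R: each data custodian releases
# only sufficient statistics (total events, total exposure), and a conjugate
# Gamma-Poisson update combines them without pooling unit records.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.4

# Three separate databases holding unit-record data that cannot be shared.
databases = [{"exposure": rng.uniform(1, 5, size=n)} for n in (200, 350, 120)]
for db in databases:
    db["counts"] = rng.poisson(true_rate * db["exposure"])

# Each custodian releases only two numbers.
summaries = [(db["counts"].sum(), db["exposure"].sum()) for db in databases]

# Gamma(a0, b0) prior on the rate; the conjugate update needs only the summaries.
a0, b0 = 0.01, 0.01
a_post = a0 + sum(c for c, _ in summaries)
b_post = b0 + sum(e for _, e in summaries)
print("posterior mean rate:", a_post / b_post)     # close to true_rate

# The same posterior would be obtained from the pooled unit-record data, because
# (sum of counts, sum of exposures) is sufficient for the Poisson rate.
```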

8.
When a published statistical model is also distributed as computer software, it will usually be desirable to present the outputs as interval, as well as point, estimates. The present paper compares three methods for approximate interval estimation about a model output, for use when the model form does not permit an exact interval estimate. The methods considered are first-order asymptotics, using second derivatives of the log-likelihood to estimate variance information; higher-order asymptotics based on the signed-root transformation; and the non-parametric bootstrap. The signed-root method is Bayesian, and uses an approximation for posterior moments that has not previously been tested in a real-world application. Use of the three methods is illustrated with reference to a software project arising in medical decision-making, the UKPDS Risk Engine. Intervals from the first-order and signed-root methods are near-identical, and typically 1% wider to 7% narrower than those from the non-parametric bootstrap. The asymptotic methods are markedly faster than the bootstrap method.
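A toy illustration of two of the three interval methods, assuming a one-parameter exponential model whose output is a survival probability; the UKPDS Risk Engine itself is not reproduced, and the signed-root method is omitted.

```python
# Illustrative comparison of first-order asymptotic and bootstrap intervals for a
# simple model output: S(t0) = exp(-lambda * t0) under an exponential model.
import numpy as np

rng = np.random.default_rng(3)
times = rng.exponential(scale=5.0, size=80)     # observed event times
t0 = 4.0

# Maximum likelihood: lambda_hat = n / sum(t); output g(lambda) = exp(-lambda * t0).
n = len(times)
lam_hat = n / times.sum()
out_hat = np.exp(-lam_hat * t0)

# 1) First-order asymptotics: Var(lam_hat) ~ lam^2 / n from the observed
#    information, then the delta method for g(lambda).
se_lam = lam_hat / np.sqrt(n)
se_out = np.abs(-t0 * np.exp(-lam_hat * t0)) * se_lam
ci_asymptotic = (out_hat - 1.96 * se_out, out_hat + 1.96 * se_out)

# 2) Non-parametric bootstrap: resample the data, recompute the output.
boot = []
for _ in range(2000):
    resample = rng.choice(times, size=n, replace=True)
    boot.append(np.exp(-(n / resample.sum()) * t0))
ci_bootstrap = tuple(np.percentile(boot, [2.5, 97.5]))

print("point estimate:", out_hat)
print("first-order interval:", ci_asymptotic)
print("bootstrap percentile interval:", ci_bootstrap)
```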

9.
Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal data and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonparametrically, but these methods are either computationally intensive or inefficient in performance. To overcome these drawbacks, a simple and powerful two-step alternative is proposed in this paper. In particular, the implementation of the proposed approach via local polynomial smoothing is discussed. Methods for estimating the standard deviations of the estimated coefficient functions are also proposed. Some asymptotic results for the local polynomial estimators are established. Two longitudinal data sets, one of which involves time-dependent covariates, are used to demonstrate the proposed approach. Simulation studies show that the two-step approach improves on the kernel method proposed by Hoover and co-workers in several aspects, such as accuracy, computational time and visual appeal of the estimators.

10.
Commentaries are informative essays dealing with viewpoints of statistical practice, statistical education, and other topics considered to be of general interest to the broad readership of The American Statistician. Commentaries are similar in spirit to Letters to the Editor, but they involve longer discussions of background, issues, and perspectives. All commentaries will be refereed for their merit and compatibility with these criteria.

Proper methodology for the analysis of covariance for experiments designed in a split-plot or split-block design is not found in the statistical literature. Analyses for these designs are often performed incompletely or even incorrectly, especially when popular statistical computer software packages are used. This article provides several appropriate models, ANOVA tables, and standard errors for comparisons from experiments arranged in a standard split-plot, split–split-plot, or split-block design where a covariate has been measured on the smallest-sized experimental unit.

11.
Variable and model selection problems are fundamental to high-dimensional statistical modeling in diverse fields of science. In health studies especially, many potential factors are usually introduced to determine an outcome variable. This paper deals with the problem of high-dimensional statistical modeling through the analysis of the annual trauma data in Greece for 2005. The data set is divided into experiment and control sets and consists of 6334 observations and 112 factors, including demographic, transport and intrahospital data, used to detect possible risk factors of death. In our study, different model selection techniques are applied to the experiment set, and the notion of deviance is used on the control set to assess the fit of the overall selected model. The statistical methods employed in this work were the non-concave penalized likelihood methods (the smoothly clipped absolute deviation, the least absolute shrinkage and selection operator, and the hard-thresholding penalty), generalized linear logistic regression, and best-subset variable selection. The way of identifying the significant variables in large medical data sets, along with the performance and the pros and cons of the various statistical techniques used, are discussed. The analysis reveals the distinct advantages of the non-concave penalized likelihood methods over the traditional model selection techniques.
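A minimal sketch of one of the techniques listed above, LASSO-penalized logistic regression with a cross-validated penalty strength, on synthetic data with 112 candidate factors; the SCAD and hard-thresholding penalties and the actual trauma registry are not reproduced here.

```python
# Minimal sketch: LASSO-penalized logistic regression for selecting risk factors of
# a binary outcome. Synthetic data stand in for the trauma registry.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(4)
n, p = 2000, 112                              # many candidate factors, as in the study
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:5] = [1.2, -0.8, 0.6, 0.5, -0.4]    # only a few factors truly matter
prob = 1.0 / (1.0 + np.exp(-(X @ true_beta - 2.0)))
y = rng.binomial(1, prob)

# L1 penalty with the strength chosen by cross-validation.
model = LogisticRegressionCV(
    penalty="l1", solver="liblinear", Cs=20, cv=5, scoring="neg_log_loss"
).fit(X, y)

selected = np.flatnonzero(model.coef_.ravel() != 0.0)
print("selected factor indices:", selected)    # should recover mostly the first five
```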

12.
The randomized block design is routinely employed in the social and biopharmaceutical sciences. With no missing values, analysis of variance (AOV) can be used to analyze such experiments. However, if some data are missing, the AOV formulae are no longer applicable, and iterative methods such as restricted maximum likelihood (REML) are recommended, assuming block effects are treated as random. Despite the well-known advantages of REML, methods like AOV based on complete cases (blocks) only (CC-AOV) continue to be used by researchers, particularly in situations where routinely only a few missing values are encountered. Reasons for this appear to include a natural proclivity for non-iterative, summary-statistic-based methods, and a presumption that CC-AOV is only trivially less efficient than REML when there are only a few missing values (say ≤ 10%). The purpose of this note is two-fold. First, to caution that CC-AOV can be considerably less powerful than REML even with only a few missing values. Second, to offer a summary-statistic-based, pairwise-available-case-estimation (PACE) alternative to CC-AOV. PACE, which is identical to AOV (and REML) with no missing values, outperforms CC-AOV in terms of statistical power. However, it is recommended in lieu of REML only if software to implement the latter is unavailable, or the use of a “transparent” formula-based approach is deemed necessary. An example using real data is provided for illustration.
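A hedged sketch of the pairwise-available-case idea, under the assumption that each treatment contrast is estimated from every block in which both treatments were observed; the variance estimation and testing details of PACE follow the note and are not reproduced here.

```python
# Sketch: randomized block design with missing cells. Pairwise-available-case
# contrasts use every block where both treatments of a pair were observed, instead
# of discarding all incomplete blocks as the complete-case analysis does.
import numpy as np

rng = np.random.default_rng(5)
n_blocks, n_trt = 12, 3
block_eff = rng.normal(0, 2.0, n_blocks)
trt_eff = np.array([0.0, 1.0, 2.5])
y = block_eff[:, None] + trt_eff[None, :] + rng.normal(0, 1.0, (n_blocks, n_trt))

# Knock out a few cells at random (missing observations).
mask = rng.random((n_blocks, n_trt)) < 0.15
y = np.where(mask, np.nan, y)

def pairwise_available_case(y):
    """Mean within-block difference for each treatment pair, using all blocks
    where both members of the pair are observed."""
    n_trt = y.shape[1]
    est = np.full((n_trt, n_trt), np.nan)
    for i in range(n_trt):
        for j in range(n_trt):
            both = ~np.isnan(y[:, i]) & ~np.isnan(y[:, j])
            if both.any():
                est[i, j] = np.mean(y[both, i] - y[both, j])
    return est

def complete_case_means(y):
    """Treatment means using only blocks with no missing values."""
    complete = ~np.isnan(y).any(axis=1)
    return y[complete].mean(axis=0)

print(pairwise_available_case(y))    # off-diagonal ~ pairwise treatment differences
cc = complete_case_means(y)
print(cc - cc[0])                     # complete-case contrasts, fewer blocks used
```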

13.
In recent years, a variety of regression models, including zero-inflated and hurdle versions, have been proposed to explain a count-type dependent variable in terms of exogenous covariates. Apart from the classical Poisson, negative binomial and generalised Poisson distributions, many proposals have appeared in the statistical literature, perhaps in response to the new possibilities offered by advanced software that now enables researchers to implement numerous special functions in a relatively simple way. However, we believe that a significant research gap remains, since very little attention has been paid to the quasi-binomial distribution, which was first proposed over fifty years ago. We believe this distribution might constitute a valid alternative to existing regression models in situations in which the variable has bounded support. Therefore, in this paper we present a zero-inflated regression model based on the quasi-binomial distribution, derive its moments and maximum likelihood estimators, and perform score tests to compare the zero-inflated quasi-binomial distribution with the zero-inflated binomial distribution, and the zero-inflated model with the homogeneous model (the model in which covariates are not considered). The analysis is illustrated with two data sets that are well known in the statistical literature and which contain a large number of zeros.

14.
15.
The statistics of linear models: back to basics

16.
Molecular dynamics computer simulation is an essential tool in materials science for studying atomic properties of materials in extreme environments and guiding the development of new materials. We propose a statistical analysis to emulate simulation output, with the ultimate goal of efficiently approximating the computationally intensive simulation. We compare several spatial regression approaches, including conditional autoregression (CAR), the discrete wavelet transform (DWT), and principal components analysis (PCA). The methods are applied to simulations of copper atoms with twin-wall and dislocation-loop defects, under varying tilt tension angles. We find that CAR and DWT yield accurate results but fail to capture extreme defects, whereas PCA better captures the defect structure.
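A rough sketch of the PCA-based emulation strategy under simplifying assumptions: each output field is compressed to a few principal-component scores, the scores are regressed on the input, and predictions are reconstructed for new inputs. Synthetic one-dimensional "fields" and a linear score regression stand in for the copper simulations and for whatever emulator the paper actually uses; the CAR and wavelet alternatives are not shown.

```python
# Rough sketch of PCA-based emulation: compress each simulation's output field to a
# few principal-component scores, regress the scores on the input (a tilt angle),
# and reconstruct predictions for new inputs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
angles = np.linspace(0.0, 30.0, 15)                   # training tilt angles (degrees)
grid = np.linspace(0.0, 1.0, 200)                     # spatial grid of the output field

def fake_simulation(angle):
    """Placeholder for the expensive molecular dynamics run."""
    return np.sin(2 * np.pi * grid * (1 + angle / 30.0)) + 0.05 * rng.normal(size=grid.size)

fields = np.array([fake_simulation(a) for a in angles])    # 15 runs x 200 grid points

pca = PCA(n_components=3).fit(fields)
scores = pca.transform(fields)                         # 15 x 3 PC scores

# Emulate: regress each score on the input, then invert the PCA for a new angle.
reg = LinearRegression().fit(angles.reshape(-1, 1), scores)
new_angle = np.array([[17.5]])
field_hat = pca.inverse_transform(reg.predict(new_angle))
print(field_hat.shape)                                 # (1, 200) emulated output field
```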

17.
The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software which is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Up to now, the majority of the research into developing knowledge-based statistical software has concentrated on moving away from conventional architectures by adopting what can be termed expert systems approaches. This paper proposes an approach based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique is semantically sound, i.e. whether the results obtained would be meaningful. Current systems, in contrast, can only perform what can be considered syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented; the system has been designed as an enhanced variant of a conventional-style statistical package.

18.
Clustered count data are commonly analysed with the generalized linear mixed model (GLMM). Here, the correlation due to clustering, and some overdispersion, is captured by the inclusion of cluster-specific normally distributed random effects. Often, however, the model does not capture the variability completely, and the GLMM can be extended by including a set of gamma random effects. Routinely, the GLMM is fitted by maximizing the marginal likelihood, but this process is computationally intensive. Although feasible with medium to large data, it can be too time-consuming or computationally intractable with very large data. Therefore, a fast two-stage estimator for correlated, overdispersed count data is proposed. It is rooted in the split-sample methodology and, based on a simulation study, shows good statistical properties. Furthermore, it is computationally much faster than the full maximum likelihood estimator. The approach is illustrated using a large dataset belonging to a network of Belgian general practices.

19.
The statistical modelling of big databases constitutes one of today's most challenging issues, and the issue is even more critical in the presence of a complicated correlation structure. Variable selection plays a vital role in the statistical analysis of large databases, and many methods have been proposed to deal with this problem. One such method is Sure Independence Screening (SIS), which was introduced to reduce the dimensionality to a relatively smaller scale. This method, though simple, produces remarkable results even for problems with both ultra-high dimensionality and very large sample sizes. In this paper we deal with the analysis of a big real medical data set, assuming a Poisson regression model. We support the analysis by conducting simulation experiments that take the correlation structure of the design matrix into consideration.
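A minimal sketch of the basic Sure Independence Screening step, assuming marginal correlation with a log-transformed count response as the screening statistic (a simplification of any Poisson-model-specific ranking); the real medical data set is replaced by synthetic covariates.

```python
# Minimal sketch of Sure Independence Screening: rank covariates by their marginal
# association with the response and keep only the top d before refined modelling.
import numpy as np

rng = np.random.default_rng(7)
n, p = 500, 5000                                # ultra-high dimensional design
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[10, 200, 3000]] = [0.8, -0.6, 0.5]         # only three truly active covariates
y = rng.poisson(np.exp(1.0 + X @ beta))

def sis(X, y, d):
    """Keep the d covariates with the largest absolute marginal correlation."""
    yc = np.log1p(y) - np.log1p(y).mean()
    Xc = X - X.mean(axis=0)
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
    )
    return np.argsort(-np.abs(corr))[:d]

kept = sis(X, y, d=int(n / np.log(n)))           # a common choice for d
print(sorted(kept[:10]))                          # active indices should rank near the top
print(set([10, 200, 3000]) <= set(kept))          # the screening keeps the true signals
```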

20.
ABSTRACT

Despite the popularity of the general linear mixed model for data analysis, power and sample size methods and software are not generally available for commonly used test statistics and reference distributions. Statisticians resort to simulations with homegrown and uncertified programs or rough approximations which are misaligned with the data analysis. For a wide range of designs with longitudinal and clustering features, we provide accurate power and sample size approximations for inference about fixed effects in the linear models we call reversible. We show that under widely applicable conditions, the general linear mixed-model Wald test has noncentral distributions equivalent to well-studied multivariate tests. In turn, exact and approximate power and sample size results for the multivariate Hotelling–Lawley test provide exact and approximate power and sample size results for the mixed-model Wald test. The calculations are easily computed with a free, open-source product that requires only a web browser to use. Commercial software can be used for a smaller range of reversible models. Simple approximations allow accounting for modest amounts of missing data. A real-world example illustrates the methods. Sample size results are presented for a multicenter study on pregnancy. The proposed study, an extension of a funded project, has clustering within clinic. Exchangeability among the participants allows averaging across them to remove the clustering structure. The resulting simplified design is a single-level longitudinal study. Multivariate methods for power provide an approximate sample size. All proofs and inputs for the example are in the supplementary materials (available online).
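A small sketch of the final computational step in this kind of power analysis, assuming the design has already been mapped to numerator/denominator degrees of freedom and a noncentrality parameter (that mapping is the paper's contribution): power is then a noncentral-F tail probability. The degrees of freedom, noncentrality and scaling below are placeholders.

```python
# Sketch: once a design has been mapped to (numerator df, denominator df,
# noncentrality), power for a Wald-type F test is a noncentral-F tail probability.
from scipy import stats

def wald_f_power(df_num, df_den, noncentrality, alpha=0.05):
    """Power of an F-test: P(F' > critical value) under the noncentral F."""
    f_crit = stats.f.ppf(1.0 - alpha, df_num, df_den)
    return stats.ncf.sf(f_crit, df_num, df_den, noncentrality)

# Hypothetical example: 2 numerator df, 40 denominator df, noncentrality lambda = 12.
print(wald_f_power(2, 40, 12.0))

# Sample-size search: under the placeholder assumption that the noncentrality scales
# linearly with the number of independent units.
for n_units in (10, 20, 40, 80):
    lam = 0.3 * n_units
    print(n_units, round(wald_f_power(2, 2 * (n_units - 1), lam), 3))
```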
