Similar Documents
20 similar documents found (search time: 15 ms)
1.
Real-time monitoring is necessary for nanoparticle exposure assessment to characterize the exposure profile, but the data produced are autocorrelated. This study was conducted to compare three statistical methods used to analyze autocorrelated time-series data and to investigate, using field data, the effect of averaging time on the reduction of autocorrelation. First-order autoregressive (AR(1)) and autoregressive integrated moving average (ARIMA) models are alternative methods that remove autocorrelation; both were compared with the classical regression method. Three data sets from a scanning mobility particle sizer were used, and the results of regression, AR(1), and ARIMA were compared at averaging times of 1, 5, and 10 min. The AR(1) and ARIMA models had similar capacities to adjust for autocorrelation in real-time data. Because real-time monitoring data are non-stationary, ARIMA was more appropriate; with AR(1), transformation into stationary data was necessary. Longer averaging times made no difference. This study suggests that the ARIMA model can be used to process real-time monitoring data, especially non-stationary data, and that the averaging time can be set flexibly, depending on the data interval required to capture process effects, for occupational and environmental nano-measurements.
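The non-stationarity issue the study raises can be illustrated with a minimal pure-Python sketch (hypothetical simulated data standing in for particle counts): a random walk mimics drifting monitoring data, and first differencing, the step ARIMA adds over a plain AR model, restores stationarity.

```python
import random

def random_walk(n, seed=1):
    # Non-stationary series: x_t = x_{t-1} + e_t, like drifting counts.
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x += rng.gauss(0, 1)
        out.append(x)
    return out

def lag1_autocorr(x):
    # Sample lag-1 autocorrelation.
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

series = random_walk(5000)
diffed = [b - a for a, b in zip(series, series[1:])]

# The raw walk is heavily autocorrelated; first differencing (the "I" in
# ARIMA) recovers an approximately uncorrelated, stationary series.
r_raw = lag1_autocorr(series)
r_diff = lag1_autocorr(diffed)
```

Fitting an AR(1) directly to the raw walk would be the mistake the abstract warns about; differencing first is the transformation it says AR(1) requires.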

2.
HIV viral dynamic models have received much attention in the literature. Long-term viral dynamics may be modelled by semiparametric nonlinear mixed-effects models, which incorporate large between-subject variation and within-subject autocorrelation and are flexible in modelling complex viral load trajectories. Time-dependent covariates may be introduced into the dynamic models to partially explain the between-individual variation. In the presence of measurement errors and missing data in time-dependent covariates, we show that the commonly used two-step method may give approximately unbiased estimates but may underestimate standard errors. We propose a two-stage bootstrap method to adjust the standard errors of the two-step method, as well as a likelihood method.

3.
In this paper we discuss some problems with existing methods for calculating Value-at-Risk (VaR) in the ARCH setting. Notably, the commonly used approaches often confuse the true innovations with the empirical residuals; that is, estimation errors for the unknown ARCH parameters are ignored. We correct this by using the asymptotics of the residual empirical process, and propose a feasible VaR which, in keeping with the spirit of VaR, keeps the assets away from a specified risk with a high confidence level. Its advantage over the usual VaR is illustrated clearly by numerical studies.
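The confusion the paper targets can be made concrete with a small sketch (made-up numbers throughout). This shows only the naive step: a residual-based empirical VaR that treats standardized residuals as if they were the true innovations. The paper's feasible VaR would widen this cut-off further, using the asymptotics of the residual empirical process to absorb parameter-estimation error.

```python
import random

def empirical_quantile(xs, q):
    # Simple order-statistic quantile of a sample.
    s = sorted(xs)
    idx = min(int(q * len(s)), len(s) - 1)
    return s[idx]

rng = random.Random(7)
sigma = 2.0  # stand-in for a fitted ARCH one-step-ahead volatility
# Standardized residuals; in practice these come from the fitted model
# and are only approximations of the true innovations.
residuals = [rng.gauss(0, 1) for _ in range(10000)]

# Naive residual-based VaR at level alpha.
alpha = 0.05
var_naive = -sigma * empirical_quantile(residuals, alpha)
```

With standard-normal innovations the 5% quantile is about -1.645, so `var_naive` lands near 3.3 here; the naive figure ignores that the residuals themselves carry estimation error.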

4.
In simulation studies for discriminant analysis, misclassification errors are often computed by the Monte Carlo method, testing a classifier on large samples generated from known populations. Although large samples are expected to track the underlying distributions closely, they may not do so within a small interval or region, and this can lead to unexpected results. We demonstrate with an example that the LDA misclassification error computed via the Monte Carlo method may often be smaller than the Bayes error. We give a rigorous explanation and recommend a method to properly compute misclassification errors.
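The phenomenon is easy to reproduce in a minimal univariate sketch (all numbers illustrative): with two equal-variance normal classes, the Bayes error is known in closed form, and a Monte Carlo estimate of the same optimal rule's error scatters around it, landing on either side, including below.

```python
import math, random

def bayes_error(delta):
    # Two equal-variance classes N(0,1) and N(delta,1) with equal priors:
    # the optimal rule thresholds at delta/2 and errs with prob Phi(-delta/2).
    return 0.5 * (1 + math.erf((-delta / 2) / math.sqrt(2)))

def mc_error(delta, n, seed):
    # Monte Carlo estimate: classify n draws from each class with that rule.
    rng = random.Random(seed)
    errs = 0
    for _ in range(n):
        if rng.gauss(0, 1) > delta / 2:       # class 0 misclassified
            errs += 1
        if rng.gauss(delta, 1) <= delta / 2:  # class 1 misclassified
            errs += 1
    return errs / (2 * n)

true_err = bayes_error(1.0)        # roughly 0.3085
est = mc_error(1.0, 50000, seed=3)
# Sampling noise means est can fall below true_err even for the Bayes rule.
```

A fitted LDA rule adds estimation error on top of this sampling noise, which is why the paper's observation of Monte Carlo errors below the Bayes error is not a contradiction.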

5.
Various statistical models have been proposed for two‐dimensional dose finding in drug‐combination trials. However, it is often a dilemma to decide which model to use for a particular drug‐combination trial. We make a comprehensive comparison of four dose‐finding methods and, for fairness, apply the same dose‐finding algorithm under the four model structures. Through extensive simulation studies, we compare the operating characteristics of these methods in various practical scenarios. The results show that different models may lead to different design properties and that no single model performs uniformly better across all scenarios. We therefore propose using Bayesian model averaging to overcome the arbitrariness of model specification and enhance the robustness of the design. We assign a discrete probability mass to each model as its prior model probability and then estimate the toxicity probabilities of combined doses in the Bayesian model averaging framework. During the trial, we adaptively allocate each new cohort of patients to the most appropriate dose combination by comparing the posterior estimates of the toxicity probabilities with the prespecified toxicity target. The simulation results demonstrate that the Bayesian model averaging approach is robust under various scenarios. Copyright © 2015 John Wiley & Sons, Ltd.

6.
The Breusch–Godfrey LM test is one of the most popular tests for autocorrelation. However, it has been shown that the LM test may be erroneous when heteroskedastic errors are present in a regression model. Remedies have recently been proposed by Godfrey and Tremayne [9] and Shim et al. [21]. This paper suggests three wild-bootstrapped variance-ratio (WB-VR) tests for autocorrelation in the presence of heteroskedasticity. We show through a Monte Carlo simulation that our WB-VR tests have better small-sample properties and are robust to the structure of the heteroskedasticity.
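The wild-bootstrap ingredient of such tests can be sketched as follows (a generic illustration with made-up numbers, not the authors' exact algorithm): Rademacher weights flip residual signs at random while preserving each observation's own residual magnitude, which is what keeps the scheme valid under heteroskedasticity.

```python
import random

def wild_bootstrap_sample(fitted, residuals, rng):
    # One wild-bootstrap replicate: y*_i = yhat_i + v_i * ehat_i, where the
    # Rademacher weights v_i are +1 or -1 with probability 1/2 each. Unlike
    # residual shuffling, each observation keeps its own residual scale.
    return [f + (1 if rng.random() < 0.5 else -1) * e
            for f, e in zip(fitted, residuals)]

rng = random.Random(11)
fitted = [1.0, 2.0, 3.0, 4.0]       # illustrative fitted values
residuals = [0.5, -1.0, 0.25, 2.0]  # illustrative heteroskedastic residuals
ystar = wild_bootstrap_sample(fitted, residuals, rng)
```

Repeating this to rebuild the test statistic's null distribution is the standard wild-bootstrap recipe; the paper's contribution is applying it to variance-ratio statistics for autocorrelation.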

7.
For longitudinal time-series data, linear mixed models that contain both random effects across individuals and first-order autoregressive errors within individuals may be appropriate. In this work, we develop statistical diagnostics for such models under a proposed elliptical error structure. It is well known that the class of elliptical distributions offers a flexible modelling framework, since it contains both light- and heavy-tailed distributions. Iterative procedures for the maximum-likelihood estimates of the model parameters are presented. Score tests for the presence of autocorrelation and for the homogeneity of autocorrelation coefficients among individuals are constructed. The properties of the test statistics are investigated through Monte Carlo simulations. The local influence method for the models is also given. Results from the analysis of a real data set illustrate the value of the models and the diagnostic statistics.

8.
CD4 and viral load play important roles in HIV/AIDS studies, and the study of their relationship has received much attention, with well-known results. However, AIDS datasets are often highly complex in the sense that they typically contain outliers, measurement errors, and missing data. These complications can greatly affect the results of statistical analyses, yet much of the literature fails to address them. In this paper, we revisit the important relationship between CD4 and viral load and propose methods that simultaneously address outliers, measurement errors, and missing data. We find that the strength of the relationship may be severely mis-estimated if measurement errors and outliers are ignored. The proposed methods are general and can be used in other settings where jointly modelling several different types of longitudinal data is required in the presence of data complications.

9.
A collection of quality data represented by a functional relationship between response and explanatory variables is called a profile. In the literature, the errors of profiles are often assumed to be independent. However, in real applications quality data often exhibit time correlations. In this paper, we therefore investigate a general linear regression model with between-profile autocorrelation. We propose a multivariate exponentially weighted moving average (EWMA) chart for monitoring shifts in the regression parameters and an EWMA chart for monitoring shifts in the standard deviation. A simulation study reveals that the proposed schemes outperform competing existing schemes on the average run length criterion. An example illustrates the applicability of the proposed scheme.
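The univariate EWMA recursion underlying both charts is short enough to state directly. This is a generic sketch with made-up in-control data, not the authors' multivariate scheme:

```python
def ewma(xs, lam, z0):
    # EWMA recursion z_t = lam * x_t + (1 - lam) * z_{t-1}. A small lam
    # gives the statistic a long memory, which makes the chart sensitive
    # to small sustained shifts; control limits are set around z0.
    z, out = z0, []
    for x in xs:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# In-control data hovers near the target 10, so the statistic does too.
stats = ewma([10.2, 9.8, 10.1, 9.9, 10.0], lam=0.2, z0=10.0)
```

A signal is raised when `stats` crosses limits of the form z0 ± L·σ_z; the paper's charts apply this smoothing to regression-parameter estimates and to the standard deviation.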

10.
Time series regression models have been widely studied in the literature. However, the statistical analysis of replicated time series regression models has received little attention. In this paper, we study the application of the quasi-least squares method to estimate the parameters in a replicated time series model whose errors follow an autoregressive process of order p. We also discuss two other established estimation methods: maximum likelihood assuming normality, and the Yule–Walker method. When the number of repeated measurements is bounded and the number of replications n goes to infinity, the regression and autocorrelation parameter estimates are consistent and asymptotically normal for all three methods. Essentially, the three methods estimate the regression parameter equally efficiently and differ in how they estimate the autocorrelation. When p=2, simulations with normal data show that the quasi-least squares estimate of the autocorrelation is clearly better than the Yule–Walker estimate, and is nearly as good as the maximum likelihood estimate over almost the entire parameter space.
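For p=2, the Yule–Walker estimates compared above come from solving a 2x2 linear system in the first two sample autocorrelations. A minimal sketch, using illustrative autocorrelation values rather than estimates from data:

```python
def yule_walker_ar2(r1, r2):
    # Solve the Yule-Walker equations for an AR(2) process:
    #   r1 = phi1 + phi2 * r1
    #   r2 = phi1 * r1 + phi2
    phi1 = r1 * (1 - r2) / (1 - r1 ** 2)
    phi2 = (r2 - r1 ** 2) / (1 - r1 ** 2)
    return phi1, phi2

# Hypothetical lag-1 and lag-2 autocorrelations.
phi1, phi2 = yule_walker_ar2(0.5, 0.3)
```

In practice r1 and r2 are the sample autocorrelations, and the sampling noise they carry is one reason the quasi-least squares estimator can outperform this plug-in solution.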

11.
Adjustment for covariates is a time-honored tool in statistical analysis and is often implemented by including the covariates that one intends to adjust as additional predictors in a model. This adjustment often does not work well when the underlying model is misspecified. We consider here the situation where we compare a response between two groups. This response may depend on a covariate for which the distribution differs between the two groups one intends to compare. This creates the potential that observed differences are due to differences in covariate levels rather than "genuine" population differences that cannot be explained by covariate differences. We propose a bootstrap-based adjustment method. Bootstrap weights are constructed with the aim of aligning bootstrap-weighted empirical distributions of the covariate between the two groups. Generally, the proposed weighted-bootstrap algorithm can be used to align or match the values of an explanatory variable as closely as desired to those of a given target distribution. We illustrate the proposed bootstrap adjustment method in simulations and in the analysis of data on the fecundity of historical cohorts of French-Canadian women.
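The idea of aligning covariate distributions by weighting can be sketched for a discrete covariate (hypothetical groups and values; the paper's algorithm handles general targets): each source unit is weighted by the target-to-source frequency ratio of its covariate value, so the weighted source distribution matches the target.

```python
from collections import Counter

def alignment_weights(cov_source, cov_target):
    # Weight each source unit by the target/source relative frequency of its
    # covariate value; the weighted source covariate distribution then
    # matches the target group's distribution.
    p_s, p_t = Counter(cov_source), Counter(cov_target)
    n_s, n_t = len(cov_source), len(cov_target)
    return [(p_t[c] / n_t) / (p_s[c] / n_s) for c in cov_source]

group_a = ['young'] * 6 + ['old'] * 4   # source: 60/40 mix
group_b = ['young'] * 3 + ['old'] * 7   # target: 30/70 mix
w = alignment_weights(group_a, group_b)
young_share = sum(wi for wi, c in zip(w, group_a) if c == 'young') / sum(w)
```

Normalized, these weights become the resampling probabilities of the weighted bootstrap, so bootstrap draws from group A mimic group B's covariate mix.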

12.
With the increasing availability of large prospective disease registries, scientists studying the course of chronic conditions often have access to multiple data sources, with each source generated based on its own entry conditions. The different entry conditions of the various registries may be explicitly based on the response process of interest, in which case the statistical analysis must recognize the unique truncation schemes. Moreover, intermittent assessment of individuals in the registries can lead to interval-censored times of interest. We consider the problem of selecting important prognostic biomarkers from a large set of candidates when the event times of interest are truncated and right- or interval-censored. Methods for penalized regression are adapted to handle truncation via a Turnbull-type complete data likelihood. An expectation–maximization algorithm is described which is empirically shown to perform well. Inverse probability weights are used to adjust for the selection bias when assessing predictive accuracy based on individuals whose event status is known at a time of interest. The procedure is illustrated through application to the motivating study of the development of psoriatic arthritis in patients with psoriasis, using both the psoriasis cohort and the psoriatic arthritis cohort.

13.
Homogeneity of between-individual variances and autocorrelation coefficients is a common assumption in the study of longitudinal data. However, this assumption can be hard to justify given the complexity of such datasets. In this paper we propose and analyze nonlinear mixed models with AR(1) errors for longitudinal data. We introduce Huber's function into the log-likelihood and obtain robust estimates via the Fisher scoring method, which helps reduce the influence of outliers. We then study tests of the homogeneity of between-individual variances and autocorrelation coefficients based on Huber's M-estimation. Simulation studies are carried out to assess the performance of the proposed score test. Results from plasma concentration data are reported as an illustrative example.

14.
M-estimation is a widely used technique for robust statistical inference. In this paper, we study model selection and model averaging for M-estimation to simultaneously improve the coverage probability of confidence intervals of the parameters of interest and reduce the impact of heavy-tailed errors or outliers in the response. Under general conditions, we develop robust versions of the focused information criterion and a frequentist model average estimator for M-estimation, and we examine their theoretical properties. In addition, we carry out extensive simulation studies as well as two real examples to assess the performance of our new procedure, and find that the proposed method produces satisfactory results.

15.

Proper methodology for the analysis of covariance for experiments designed in a split-plot or split-block design is not found in the statistical literature. Analyses for these designs are often performed incompletely or even incorrectly. This is especially true when popular statistical computer software packages are used for the analysis of these designs. This article provides several appropriate models, ANOVA tables, and standard errors for comparisons from experiments arranged in a standard split-plot, split-split-plot, or split-block design where a covariate has been measured on the smallest size experimental unit.

16.
In biology, medicine and anthropology, scientists try to reveal general patterns when comparing different sampling units such as biological taxa, diseases or cultures. A problem with such comparative data is that standard statistical procedures are often inappropriate owing to possible autocorrelation within the data; widespread causes of autocorrelation are a shared geography or phylogeny of the sampling units. To cope with possible autocorrelation within comparative data, we suggest a new variant of the Mantel test. The Signed Mantel test evaluates the relationship between two or more distance matrices and optionally allows trait variables to be represented as signed distances (calculated as signed differences or quotients). Taking the sign of distances into account preserves the direction of an effect found in the data. Since different metrics exist for calculating the distance between two sampling units from the raw data, and because test results often depend on the metric used, we suggest validating the analysis by comparing the structures of the raw and distance data. We offer a computer program that can construct both signed and absolute distance matrices, perform both customary and Signed Mantel tests, and explore raw and distance data visually.

17.
In statistical practice, inferences on standardized regression coefficients are often required, but complicated by the fact that they are nonlinear functions of the parameters, and thus standard textbook results are simply wrong. Within the frequentist domain, asymptotic delta methods can be used to construct confidence intervals of the standardized coefficients with proper coverage probabilities. Alternatively, Bayesian methods solve similar and other inferential problems by simulating data from the posterior distribution of the coefficients. In this paper, we present Bayesian procedures that provide comprehensive solutions for inferences on the standardized coefficients. Simple computing algorithms are developed to generate posterior samples with no autocorrelation and based on both noninformative improper and informative proper prior distributions. Simulation studies show that Bayesian credible intervals constructed by our approaches have comparable and even better statistical properties than their frequentist counterparts, particularly in the presence of collinearity. In addition, our approaches solve some meaningful inferential problems that are difficult if not impossible from the frequentist standpoint, including identifying joint rankings of multiple standardized coefficients and making optimal decisions concerning their sizes and comparisons. We illustrate applications of our approaches through examples and make sample R functions available for implementing our proposed methods.
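The nonlinearity that complicates inference is visible in the definition itself: the standardized coefficient rescales the raw slope by the ratio of sample standard deviations, so it is a nonlinear function of several estimates at once. A minimal sketch with made-up data:

```python
import math

def ols_slope(x, y):
    # Simple-regression OLS slope.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def sd(x):
    # Sample standard deviation (n - 1 denominator).
    m = sum(x) / len(x)
    return math.sqrt(sum((a - m) ** 2 for a in x) / (len(x) - 1))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

b = ols_slope(x, y)
# Standardized coefficient: b scaled by sd(x)/sd(y). It mixes three noisy
# estimates, which is why delta-method or posterior simulation is needed
# for its interval rather than the textbook slope formula.
b_std = b * sd(x) / sd(y)
```

In simple regression `b_std` reduces to the sample correlation (close to 1 for this nearly linear toy data); with multiple predictors it does not, and collinearity makes its sampling behavior harder still.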

18.
In statistical and econometric practice it is not uncommon to find that regression parameter estimates obtained using estimated generalized least squares (EGLS) do not differ much from those obtained through ordinary least squares (OLS), even when the assumption of spherical errors is violated. To investigate if one could ignore non-spherical errors, and legitimately continue with OLS estimation under the non-spherical disturbance setting, Banerjee and Magnus (1999) developed statistics to measure the sensitivity of the OLS estimator to covariance misspecification. Wan et al. (2007) generalized this work by allowing for linear restrictions on the regression parameters. This paper extends the aforementioned studies by exploring the sensitivity of the equality restrictions pre-test estimator to covariance misspecification. We find that the pre-test estimators can be very sensitive to covariance misspecification, and the degree of sensitivity of the pre-test estimator often lies between that of its unrestricted and restricted components. In addition, robustness to non-normality is investigated. It is found that existing results remain valid if elliptically symmetric, instead of normal, errors are assumed.

19.
In the field of education, it is often of great interest to estimate the percentage of students who start in the top test quantile at time 1 and remain there at time 2, termed the "persistence rate," as a measure of students' academic growth. A common difficulty is that students' performance may be subject to measurement error. We therefore considered a correlation calibration method and the simulation–extrapolation (SIMEX) method for correcting measurement errors. Simulation studies are presented to compare various measurement error correction methods for estimating the persistence rate.
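The correlation calibration idea rests on the classical attenuation formula: measurement error shrinks an observed correlation by the square root of the product of the score reliabilities, so calibration divides it back out. A sketch with hypothetical numbers (not figures from the paper):

```python
import math

def calibrated_corr(naive_corr, rel_x, rel_y):
    # Classical measurement error attenuates an observed correlation by
    # sqrt(reliability_x * reliability_y); calibration reverses this.
    return naive_corr / math.sqrt(rel_x * rel_y)

# Hypothetical: observed score-to-score correlation of 0.49 with test
# reliabilities 0.80 and 0.85 implies a substantially larger true correlation.
r_true = calibrated_corr(0.49, 0.80, 0.85)
```

The corrected correlation then feeds the bivariate model from which the persistence rate is computed; SIMEX instead re-fits under added noise and extrapolates back to the no-error case.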

20.
Both kriging and non-parametric regression smoothing can model a non-stationary regression function with spatially correlated errors. However, comparisons have mainly been based on ordinary kriging and on smoothing with uncorrelated errors. Ordinary kriging attributes smoothness of the response to spatial autocorrelation, whereas non-parametric regression attributes trends to a smooth regression function. For spatial processes it is reasonable to suppose that the response reflects both trend and autocorrelation. This paper reviews methodology for non-parametric regression with autocorrelated errors, a natural compromise between the two methods. Re-analysis of the one-dimensional stationary spatial data of Laslett (1994) and of a clearly non-stationary time series demonstrates the rather surprising result that, for these data, ordinary kriging outperforms more computationally intensive models, including both universal kriging and correlated splines, for spatial prediction. For estimating the regression function, non-parametric regression provides adaptive estimation, but the autocorrelation must be accounted for when selecting the smoothing parameter.
