Similar documents
Found 20 similar documents (search time: 31 ms)
1.
Interval-censored data are very common in reliability and lifetime data analysis. This paper investigates the performance of different estimation procedures for a special type of interval-censored data, namely grouped data, from three widely used lifetime distributions. The approaches considered here include maximum likelihood estimation, minimum distance estimation based on a chi-square criterion, moment estimation based on the imputation (IM) method, and an ad hoc estimation procedure. Although IM-based techniques have been used extensively in recent years, we show that this method is not always effective. It is found that the ad hoc estimation procedure is equivalent to minimum distance estimation with another distance metric and is more effective in the simulations. The procedures of the different approaches are presented and their performance is investigated by Monte Carlo simulation for various combinations of sample sizes and parameter settings. The numerical results provide guidelines for practitioners who need to choose a good estimation approach for analysing grouped data.
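The abstract does not specify its distributions or bin edges, so as a minimal sketch of grouped-data maximum likelihood, the following assumes an exponential lifetime and four illustrative intervals; the likelihood uses only the interval counts, and the rate is maximized by golden-section search:

```python
import math
import random

random.seed(42)

# Simulate lifetimes from an exponential distribution (rate = 1.0) and group
# them into the intervals [0,1), [1,2), [2,3), [3, inf) -- illustrative
# choices, not taken from the paper.
edges = [0.0, 1.0, 2.0, 3.0]
samples = [random.expovariate(1.0) for _ in range(2000)]
counts = [0] * len(edges)  # last slot collects the open tail [3, inf)
for t in samples:
    for i in range(len(edges) - 1):
        if edges[i] <= t < edges[i + 1]:
            counts[i] += 1
            break
    else:
        counts[-1] += 1

def grouped_loglik(lam):
    """Log-likelihood of the rate lam given only the interval counts."""
    ll = 0.0
    for i in range(len(edges) - 1):
        p = math.exp(-lam * edges[i]) - math.exp(-lam * edges[i + 1])
        ll += counts[i] * math.log(p)
    ll += counts[-1] * (-lam * edges[-1])  # log P(T >= 3) = -3 * lam
    return ll

# Golden-section search for the maximizing rate on [0.01, 10].
lo, hi = 0.01, 10.0
phi = (math.sqrt(5) - 1) / 2
for _ in range(200):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if grouped_loglik(a) < grouped_loglik(b):
        lo = a
    else:
        hi = b
lam_hat = (lo + hi) / 2
```

With 2000 observations the grouped MLE lands close to the true rate of 1.0 despite discarding the exact failure times.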

2.
In recent years there has been significant development of procedures for inferring which extremal model most conveniently describes the distribution function of the underlying population from a data set. The problem of choosing one of the three extremal types, giving preference to the Gumbel model under the null hypothesis, has frequently received the general designation of statistical choice of extremal models and has been handled under different set-ups by numerous authors. Recently, the test procedure of Hasofer and Wang (1992) prompted a comparison with some related perspectives. That topic, together with some suggestions for applicability to real data, is the theme of the present paper.

3.
Air quality control usually requires a monitoring system of multiple indicators measured at various points in space and time. Hence, the use of space–time multivariate techniques is of fundamental importance in this context, where decisions and actions regarding environmental protection should be supported by studies based on both inter-variable relations and spatial–temporal correlations. This paper describes how canonical correlation analysis can be combined with space–time geostatistical methods for analysing two spatially and temporally correlated aspects, such as air pollution concentrations and meteorological conditions. Hourly averages of three pollutants (nitric oxide, nitrogen dioxide and ozone) and three atmospheric indicators (temperature, humidity and wind speed), taken over two critical months (February and August) at several monitoring stations, are considered, and space–time variograms for the variables are estimated. Simultaneous relationships between the sample space–time variograms are determined through canonical correlation analysis. The most correlated canonical variates are used to describe synthetically the underlying space–time behaviour of the components of the two sets.

4.
5.
6.
Tests for equality of variances using independent samples are widely used in data analysis. Conover et al. [A comparative study of tests for homogeneity of variance, with applications to the outer continental shelf bidding data. Technometrics. 1981;23:351–361] won the Youden Prize by comparing 56 variations of popular tests for variance on the basis of robustness and power in 60 different scenarios. None of the tests they compared were robust and powerful for the skewed distributions they considered. This study looks at 12 variations they did not consider, and shows that 10 are robust for the skewed distributions they considered plus the lognormal distribution, which they did not study. Three of these 12 have clearly superior power for skewed distributions and are competitive in terms of robustness and power for all of the distributions considered. They are recommended for general use based on robustness, power, and ease of application.
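The abstract does not identify the 12 variants it studies, so as one representative robust member of this family, the following sketches the Brown–Forsythe variant of Levene's test (absolute deviations from group medians); the group sizes and spreads are illustrative:

```python
import random
import statistics

random.seed(7)

def brown_forsythe(*groups):
    """Levene-type W statistic using absolute deviations from group medians
    (the Brown-Forsythe variant, a robust choice for skewed data)."""
    k = len(groups)
    z = [[abs(x - statistics.median(g)) for x in g] for g in groups]
    n = sum(len(g) for g in groups)
    zbar_i = [statistics.mean(zi) for zi in z]
    zbar = sum(sum(zi) for zi in z) / n
    num = (n - k) * sum(len(g) * (m - zbar) ** 2
                        for g, m in zip(groups, zbar_i))
    den = (k - 1) * sum((v - m) ** 2
                        for zi, m in zip(z, zbar_i) for v in zi)
    return num / den

# Groups with clearly different spread (sd 1 vs sd 4) versus equal spread.
g1 = [random.gauss(0, 1) for _ in range(100)]
g2 = [random.gauss(0, 4) for _ in range(100)]
g3 = [random.gauss(0, 1) for _ in range(100)]
w_diff = brown_forsythe(g1, g2)  # large: variances differ
w_same = brown_forsythe(g1, g3)  # small: variances equal
```

Under the null, W is referred to an F(k−1, n−k) distribution; using medians rather than means is what buys robustness against skewness.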

7.
In this article, we extend the classic Box–Cox transformation to spatial linear models. For a comparative study, the proposed models were applied to a real data set on Chinese population growth and economic development with three different structures: no spatial correction, conditional autoregressive, and simultaneous autoregressive. The maximum likelihood method was used to estimate the Box–Cox parameter λ and the other parameters in the models. The residuals of the models were analysed through Moran's I and Geary's c.
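The spatial extension is beyond a short sketch, but the core λ estimation can be shown for the paper's simplest "no spatial correction" case: maximize the Box–Cox profile log-likelihood over a grid (the data here are simulated lognormal values, so the true λ is 0; all numbers are illustrative):

```python
import math
import random

random.seed(1)

def boxcox(y, lam):
    """Box-Cox transform: log for lam = 0, else (y^lam - 1)/lam."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

def profile_loglik(y, lam):
    """Profile log-likelihood of lam under i.i.d. normal errors,
    including the Jacobian term (lam - 1) * sum(log y)."""
    z = boxcox(y, lam)
    n = len(z)
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    return -0.5 * n * math.log(var) + (lam - 1) * sum(math.log(v) for v in y)

# Lognormal data: the transformation back to normality is lam = 0.
y = [math.exp(random.gauss(0, 1)) for _ in range(1000)]
grid = [i / 100 for i in range(-100, 201)]  # lam from -1.00 to 2.00
lam_hat = max(grid, key=lambda lam: profile_loglik(y, lam))
```

In the spatial versions the same profile likelihood would additionally carry the CAR or SAR covariance determinant, which is what the article estimates jointly with λ.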

8.
《随机性模型》(Stochastic Models), 2013, 29(4): 541–554
In this paper, we show that the discrete GI/G/1 system can be analysed as a QBD process with infinite blocks. Most importantly, we show that the matrix-geometric method can be used for analysing this general queueing system, including establishing its stability criterion and obtaining the explicit stationary probability and waiting time distributions. This also dispels the unwritten myth that the matrix-geometric method is limited to cases with at least one Markov-based characterizing parameter, i.e. either the interarrival or the service times, in the case of queueing systems.
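The infinite-block construction of the paper is not reproducible in a few lines, but the matrix-geometric machinery itself can be sketched in the simplest possible QBD, one with 1×1 blocks (a discrete birth–death walk on the queue length); the transition probabilities are illustrative:

```python
# For a QBD with scalar blocks (queue length goes up with probability p,
# down with probability q, stays otherwise), the matrix-geometric rate
# "matrix" R is the minimal nonnegative solution of
#     R = A0 + R*A1 + R^2*A2,   A0 = p, A1 = 1 - p - q, A2 = q,
# found by iterating from R = 0.  The stationary queue-length distribution
# is then geometric in R.  Here the fixed point is p/q (the load).
p, q = 0.2, 0.5          # illustrative up/down probabilities, p < q
A0, A1, A2 = p, 1 - p - q, q

R = 0.0
for _ in range(10000):
    R = A0 + R * A1 + R * R * A2

# Geometric stationary tail: pi_n proportional to R**n.
pi = [(1 - R) * R ** n for n in range(10)]
```

For the discrete GI/G/1 queue of the paper, A0, A1, A2 become infinite-dimensional blocks and R a matrix, but the fixed-point iteration has exactly this shape.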

9.
Knowledge of urban air quality is the first step in facing air pollution issues. For the last few decades, many cities have been able to rely on a network of monitoring stations recording concentration values for the main pollutants. This paper focuses on functional principal component analysis (FPCA) to investigate multiple-pollutant datasets measured over time at multiple sites within a given urban area. Our purpose is to extend what has been proposed in the literature to data that are multisite and multivariate at the same time. The approach proves effective in highlighting some relevant statistical features of the time series, giving the opportunity to identify significant pollutants and to follow the evolution of their variability over time. The paper also deals with the missing-value issue. As is known, very long gap sequences can often occur in air quality datasets, due to long-lasting failures that are not easily fixed or to data coming from a mobile monitoring station. In the considered dataset, large and continuous gaps are imputed by an empirical orthogonal function procedure, after denoising the raw data by functional data analysis and before performing FPCA, in order to further improve the reconstruction.

10.
11.
There are a number of statistical techniques for analysing epidemic outbreaks. However, many diseases are endemic within populations, and the analysis of such diseases is complicated by changing population demography. Motivated by the spread of cowpox among rodent populations, a combined mathematical model for population and disease dynamics is introduced. An MCMC algorithm is then constructed to make statistical inference for the model based on data obtained from a capture–recapture experiment. The statistical analysis is used to identify the key elements in the spread of the cowpox virus.
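The cowpox model itself couples population dynamics with capture–recapture data and is far richer than any short sketch; what can be shown is the MCMC machinery in miniature: a random-walk Metropolis–Hastings sampler for a single infection-rate parameter under a toy Poisson count model (all data and the model are illustrative stand-ins, not the paper's):

```python
import math
import random

random.seed(3)

# Toy stand-in: weekly counts of new infections modelled as Poisson with
# unknown rate lam and a flat prior on lam > 0.
counts = [4, 7, 5, 6, 3, 8, 5, 4, 6, 7]

def log_post(lam):
    """Log-posterior up to a constant: Poisson log-likelihood, flat prior."""
    if lam <= 0:
        return -math.inf
    return sum(c * math.log(lam) - lam for c in counts)

# Random-walk Metropolis-Hastings.
lam, chain = 5.0, []
for _ in range(20000):
    prop = lam + random.gauss(0, 0.5)
    if math.log(random.random()) < log_post(prop) - log_post(lam):
        lam = prop
    chain.append(lam)

post_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

Here the posterior is conjugate (Gamma with mean 5.6), so the sampler can be checked exactly; in the paper the same accept/reject loop runs over the joint demographic and transmission parameters, where no closed form exists.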

12.

14.
Several studies have found that occasional-break processes may produce realizations with slowly decaying autocorrelations, which are hard to distinguish from the long memory phenomenon. In this paper we suggest the use of Box–Pierce statistics to discriminate between long memory and occasional-break processes. We conduct an extensive Monte Carlo experiment to examine the finite sample properties of the Box–Pierce and other simple test statistics in this framework. The results allow us to infer important guidelines for applied statisticians.
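The Box–Pierce statistic itself is elementary: Q = n times the sum of squared sample autocorrelations up to a chosen lag. A minimal self-contained version (the series and lag choice are illustrative, not the paper's Monte Carlo design):

```python
import random

random.seed(11)

def box_pierce(x, max_lag):
    """Box-Pierce portmanteau statistic Q = n * sum_{k=1..max_lag} r_k^2,
    where r_k is the lag-k sample autocorrelation."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x)
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
        q += (ck / c0) ** 2
    return n * q

# A strongly alternating series has near-unit autocorrelations, so Q is
# huge; for white noise Q stays near its chi-square(max_lag) mean.
x = [(-1) ** t for t in range(200)]
noise = [random.gauss(0, 1) for _ in range(200)]
q_alt = box_pierce(x, 5)
q_noise = box_pierce(noise, 5)
```

Under a short-memory null, Q is approximately chi-square with max_lag degrees of freedom; the paper's point is how the statistic's finite-sample behaviour separates genuine long memory from occasional breaks.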

15.
Several models for studies related to the tensile strength of materials have been proposed in the literature in which the size or length component is taken to be an important factor in studying the specimens' failure behaviour. An important model, developed on the basis of the cumulative damage approach, is the three-parameter extension of the Birnbaum–Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and performs better than the traditional models, which do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some recent toolkits, and finally recommends the model that appears most appropriate. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.
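The paper's three-parameter, size-extended model is not reproduced here, but the base two-parameter Birnbaum–Saunders CDF it builds on has a simple closed form, Φ((1/α)(√(t/β) − √(β/t))), computable with nothing but the error function (the parameter values below are illustrative):

```python
import math

def bs_cdf(t, alpha, beta):
    """CDF of the two-parameter Birnbaum-Saunders (fatigue-life)
    distribution: Phi((1/alpha) * (sqrt(t/beta) - sqrt(beta/t))).
    alpha is the shape parameter, beta the scale (and the median)."""
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# beta is the median, so the CDF there is exactly 1/2.
med = bs_cdf(2.0, alpha=0.5, beta=2.0)
```

The cumulative-damage derivation is visible in the formula: failure occurs when accumulated (approximately normal) damage crosses a threshold, which is why the standard normal CDF appears; the size-extended version makes β a function of specimen length.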

16.
Consider a population of individuals who are free of a disease under study, and who are exposed simultaneously, at random exposure levels, say X, Y, Z, …, to several risk factors suspected to cause the disease in the population. At any specified levels X = x, Y = y, Z = z, …, the incidence rate of the disease in the population at risk is given by the exposure–response relationship r(x, y, z, …) = P(disease | x, y, z, …). The present paper examines the relationship between the joint distribution of the exposure variables X, Y, Z, … in the population at risk and the joint distribution of the exposure variables U, V, W, … among cases under the linear and the exponential risk models. It is proven that under the exponential risk model these two joint distributions belong to the same family of multivariate probability distributions, possibly with different parameter values. For example, if the exposure variables in the population at risk jointly have a multivariate normal distribution, so do the exposure variables among cases; if the former variables jointly have a multinomial distribution, so do the latter. More generally, it is demonstrated that if the joint distribution of the exposure variables in the population at risk belongs to the exponential family of multivariate probability distributions, so does the joint distribution of the exposure variables among cases. If the epidemiologist can specify the differences among the mean exposure levels in the case and control groups that are considered clinically or etiologically important in the study, the results of the present paper may be used to make sample size determinations for the case–control study, corresponding to specified protection levels, i.e. size α and power 1−β of a statistical test. The multivariate normal, the multinomial, the negative multinomial and Fisher's multivariate logarithmic series exposure distributions are used to illustrate the results.
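The normal special case of this result can be checked numerically in one dimension: under r(x) = exp(a + bx), the case exposure density is proportional to f(x)·exp(bx), which for an N(μ, σ²) population is again normal with mean shifted to μ + bσ². A Monte Carlo reweighting sketch (μ, σ, b are illustrative values, not from the paper):

```python
import math
import random

random.seed(5)

# Population-at-risk exposure: X ~ N(mu, sigma^2).
# Exponential risk model: r(x) proportional to exp(b * x), so the case
# exposure density is f(x) * exp(b*x) (renormalized) -- normal again,
# with mean mu + b * sigma^2.
mu, sigma, b = 0.0, 1.0, 0.5
xs = [random.gauss(mu, sigma) for _ in range(200000)]
ws = [math.exp(b * x) for x in xs]          # relative risk weights

# Weighted mean approximates the mean exposure among cases.
case_mean = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
expected = mu + b * sigma ** 2              # theoretical case mean: 0.5
```

The gap between `case_mean` and μ is exactly the case–control mean difference the abstract proposes to use when sizing a study, which is what makes the closed-form family result practically useful.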

17.
We consider interval-valued time series, that is, series resulting from collecting real intervals as an ordered sequence through time. Since the lower and upper bounds of the observed intervals at each time point are in fact values of the same variable, they are naturally related. We propose modeling interval time series with space–time autoregressive models and, based on the process appropriate for the interval bounds, we derive the model for the intervals' center and radius. A simulation study and an application with data on daily wind speed at different meteorological stations in Ireland illustrate that the proposed approach is appropriate and useful.
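The space–time autoregressive fit is beyond a short sketch, but the bounds-to-(center, radius) reparameterization the paper builds on is a one-line transformation (the wind-speed-like values below are invented for illustration):

```python
# Each observation is an interval [lower_t, upper_t]; the paper models
# the derived center and radius series.  Illustrative daily minima and
# maxima (e.g. wind speed), not real Irish station data:
lower = [3.1, 2.8, 4.0, 3.5]
upper = [7.9, 6.2, 9.0, 8.1]

center = [(l + u) / 2 for l, u in zip(lower, upper)]
radius = [(u - l) / 2 for l, u in zip(lower, upper)]

# The transformation is invertible: bounds = center -/+ radius.
lower_back = [c - r for c, r in zip(center, radius)]
```

Working with center and radius guarantees the reconstructed radius stays nonnegative and makes the natural dependence between the two bounds explicit, which is what the joint autoregressive model exploits.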

18.
Asymptotic variance plays an important role in inference using interval estimates of attributable risk. This paper compares asymptotic variances of the attributable risk estimate obtained via the delta method and via the Fisher information matrix for a 2×2 case–control study, chosen for its practical relevance. The expressions of these two asymptotic variance estimates are shown to be equivalent. Because the asymptotic variance usually underestimates the standard error, the bootstrap standard error has also been utilized in constructing interval estimates of attributable risk and compared with those using asymptotic estimates. A simulation study shows that the bootstrap interval estimate performs well in terms of coverage probability and confidence length. An exact test procedure for testing independence between the risk factor and the disease outcome using attributable risk is proposed and is illustrated with real-life examples for small-sample situations where inference based on the asymptotic variance may not be valid.
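A minimal sketch of the bootstrap interval the abstract describes, using the standard attributable-fraction estimator for a 2×2 case–control table, P(E|case)·(OR−1)/OR = (ad − bc)/(d(a+b)); the cell counts are illustrative, not from the paper:

```python
import random

random.seed(9)

# 2x2 case-control table (illustrative counts):
#             exposed  unexposed
#   cases        a=40      b=60
#   controls     c=25      d=75
a, b, c, d = 40, 60, 25, 75

def attributable_risk(a, b, c, d):
    """Attributable fraction P(E|case) * (OR - 1) / OR, which simplifies
    algebraically to (ad - bc) / (d * (a + b))."""
    return (a * d - b * c) / (d * (a + b))

ar_hat = attributable_risk(a, b, c, d)  # = 0.2 for this table

# Percentile bootstrap: resample cases and controls separately, keeping
# the two margins fixed as in a case-control design.
boot = []
for _ in range(2000):
    ab = sum(1 for _ in range(a + b) if random.random() < a / (a + b))
    cd = sum(1 for _ in range(c + d) if random.random() < c / (c + d))
    if cd == c + d:  # degenerate resample (no unexposed controls): skip
        continue
    boot.append(attributable_risk(ab, a + b - ab, cd, c + d - cd))
boot.sort()
ci = (boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))])
```

Resampling cases and controls as two independent binomials mirrors the case-control sampling scheme, which is the comparison the paper makes against delta-method intervals.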

19.
The “What If” analysis is applicable in research and instructional situations that utilize statistical significance testing. One use of the “What If” analysis is pedagogical: it provides professors with an interactive tool that visually represents what statistical significance testing entails and the variables that affect the commonly misinterpreted calculated p value. In order to develop a strong understanding of what affects the calculated p value, students tangibly manipulate data within the Excel sheet to create a visual representation that explicitly demonstrates how those variables affect it. The second use is primarily for researchers. A “What If” analysis contributes to research in two ways: (1) it can be run a priori to estimate the sample size a researcher may wish to use for a study; and (2) it can be run a posteriori to aid in the interpretation of results. Used this way, the “What If” analysis gives researchers another tool for conducting high-quality research and disseminating their results accurately.
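The abstract's Excel sheet is not available, but the dependence it asks students to explore can be sketched directly: for a fixed observed effect, the calculated p value is driven by the sample size. A minimal one-sample z-test version (the effect size, sd, and sample sizes are illustrative):

```python
import math

def z_test_p(effect, sd, n):
    """Two-sided p value for a one-sample z test of a zero-mean null,
    given an observed mean effect, a known sd, and a sample size n."""
    z = effect / (sd / math.sqrt(n))
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# The same observed effect becomes "significant" purely by raising n --
# the point the "What If" sheet makes interactively:
p_small = z_test_p(0.2, 1.0, 25)    # n = 25:  not significant
p_large = z_test_p(0.2, 1.0, 400)   # n = 400: highly significant
```

Sweeping `n` (or `effect`) and plotting the p value reproduces, in code, the a priori sample-size use and the a posteriori interpretation aid the abstract describes.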

20.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号