Similar Literature
1.
Many authors have criticized the use of spreadsheets for statistical data processing and computing because of incorrect statistical functions, the lack of a log file or audit trail, inconsistent behavior of computational dialogs, and poor handling of missing values. Improvements in some spreadsheet processors and the availability of audit trail facilities suggest that the use of a spreadsheet for some statistical data entry and simple analysis tasks may now be acceptable. A brief outline of the issues and some guidelines for good practice are included.

2.
In this paper, we propose an extension of the Gompertz-Makeham distribution, called the transmuted Gompertz-Makeham (TGM) distribution. The new model can handle bathtub-shaped, increasing, increasing-constant and constant hazard rate functions, a property that makes the TGM distribution useful in survival analysis. Various statistical and reliability measures of the model are obtained, including the hazard rate function, moments, moment generating function (mgf), quantile function, random number generation, skewness, kurtosis, conditional moments, mean deviations, the Bonferroni and Lorenz curves, the Gini index, mean inactivity time, mean residual lifetime and stochastic ordering; we also obtain the density of the ith order statistic. Estimation of the model parameters is carried out by the method of maximum likelihood. An application to real data demonstrates that the TGM distribution can provide a better fit than some other well-known distributions.
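The transmutation step referred to above is commonly implemented with the quadratic rank transmutation map, F_T(x) = (1 + t)F(x) − tF(x)² for |t| ≤ 1. A minimal sketch, assuming the usual Gompertz-Makeham parameterization with hazard h(x) = α·exp(βx) + λ (the paper's exact parameterization may differ):

```python
import math

def gm_cdf(x, alpha, beta, lam):
    """CDF of the Gompertz-Makeham distribution with hazard
    h(x) = alpha*exp(beta*x) + lam (one common parameterization)."""
    return 1.0 - math.exp(-lam * x - (alpha / beta) * (math.exp(beta * x) - 1.0))

def tgm_cdf(x, alpha, beta, lam, t):
    """Transmuted CDF via the quadratic rank transmutation map,
    F_T = (1 + t)*F - t*F**2, with |t| <= 1; t = 0 recovers the base model."""
    f = gm_cdf(x, alpha, beta, lam)
    return (1.0 + t) * f - t * f * f
```

Setting t = 0 recovers the base Gompertz-Makeham CDF, which gives a quick sanity check on any implementation.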

3.
In this paper, a gamma(5, 2) distribution is considered as a failure model for the economic statistical design of x̄ (x-bar) control charts. The study shows that the statistical performance of control charts can be improved significantly, with only a slight increase in cost, by adding constraints to the optimization problem. The use of an economic statistical design instead of an economic design results in control charts that may be less expensive to implement, have lower false alarm rates, and have a higher probability of detecting process shifts. Numerical examples are presented to support this proposition. The results of the economic statistical design are compared with those of a pure economic design, and the effects of adding constraints for statistical performance measures, such as the Type I error rate and the power of the chart, are extensively investigated.

4.
Using worked examples, this article illustrates the application of the additive model in time series analysis, corrects several mistaken notions and misuses of the model that are common in statistics textbooks, and thereby contributes to systematizing the statistical methods presented in such textbooks.

5.
In this paper, we extend the structural probit measurement error model by assuming that the unobserved covariate follows a skew-normal distribution. The new model is termed the structural skew-normal probit model. As in the normal case, the likelihood function is obtained analytically and can be maximized using existing statistical software. A Bayesian approach using Markov chain Monte Carlo techniques to generate from the posterior distributions is also developed. A simulation study demonstrates the usefulness of the approach in avoiding the attenuation that arises with the naive procedure, and the approach appears to be more efficient than the structural probit model when the distribution of the covariate (predictor) is skewed.
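The skew-normal law assumed for the unobserved covariate is Azzalini's, with density f(x) = 2φ(x)Φ(αx), where φ and Φ are the standard normal pdf and CDF. A minimal sketch of this density (the full measurement error model would add location/scale parameters and the probit link):

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini's skew-normal density f(x) = 2*phi(x)*Phi(alpha*x);
    alpha = 0 reduces to the standard normal density."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * Phi
```

The shape parameter α controls the skewness of the covariate distribution; the density integrates to one for any α.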

6.
李筱乐 《统计研究》2016,33(7):78-84
Building on Antweiler's research framework and incorporating incomplete contract theory, this paper constructs a general model of firm pollution discharge and, using a difference-in-differences estimation strategy on Chinese provincial data for disaggregated manufacturing industries, examines the effect of contracting institutions on environmental quality. The study finds that regional contracting institutions are an important determinant of environmental quality: in regions with better-developed contracting institutions, firms tend to invest in "abatement-specific capital" for pollution-reduction activities, thereby improving environmental quality. However, this positive effect is not further strengthened in contract-intensive industries; this "anomalous" result can be explained by the industries' own pollution-intensity attributes. To address endogeneity, the model is re-estimated with an instrumental variables approach, and the conclusions remain robust.

7.
Recently, Domma et al. [An extension of Azzalini's method, J. Comput. Appl. Math. 278 (2015), pp. 37–47] proposed an extension of Azzalini's method. This method is attractive because of its flexibility and ease of applicability. Most of the weighted Weibull models introduced so far have a monotonic hazard rate function, which limits their applicability. Our aim is therefore to build a new weighted Weibull distribution with both monotonic and non-monotonic hazard rate functions. A new weighted Weibull distribution, the so-called generalized weighted Weibull (GWW) distribution, is introduced by the method exposed in Domma et al. [13]. The GWW distribution possesses decreasing, increasing, upside-down bathtub, N-shaped and M-shaped hazard rates. Its statistical properties are also very easy to derive. Finally, we consider an application of the GWW model to a real data set, together with a simulation study.

8.
In this paper we extend the structural probit measurement error model by allowing the unobserved covariate to follow a skew-normal distribution. The new model is termed the structural skew-normal probit model. As in the normal case, the likelihood function is obtained analytically and can be maximized using existing statistical software. A Bayesian approach using Markov chain Monte Carlo techniques for generating from the posterior distributions is also developed. A simulation study demonstrates the usefulness of the approach in avoiding the attenuation that arises with the naive procedure. Moreover, a comparison of predicted and true success probabilities indicates that the skew probit model is more efficient when the distribution of the covariate (predictor) is skewed. An application to a real data set is also provided.

9.
A simple approach for analyzing longitudinally measured biomarkers is to calculate summary measures such as the area under the curve (AUC) for each individual and then compare the mean AUC between treatment groups using methods such as the t test. This two-step approach is difficult to implement when there are missing data, since the AUC cannot be directly calculated for individuals with missing measurements. Simple methods for dealing with missing data include complete case analysis and imputation. A recent study showed that the estimated mean AUC difference between treatment groups based on a linear mixed model (LMM), rather than on individually calculated AUCs with simple imputation, has negligible bias under missing-at-random assumptions and only small bias when data are missing not at random. However, this model assumes the outcome to be normally distributed, which is often violated in biomarker data. In this paper, we propose to use an LMM on log-transformed biomarkers, based on which statistical inference is provided for the ratio, rather than the difference, of AUC between treatment groups. The proposed method can not only handle potential baseline imbalance in a randomized trial but also circumvent estimation of the nuisance variance parameters in the log-normal model. The proposed model is applied to a recently completed large randomized trial studying the effect of nicotine reduction on biomarker exposure in smokers.
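The first step of the two-step approach, a per-individual trapezoidal AUC, can be sketched as follows. On the log scale a difference in mean AUC corresponds to a ratio of geometric-mean exposure, which is the quantity the proposed model targets (the times and values below are purely illustrative):

```python
import math

def trapezoid_auc(times, values):
    """Trapezoidal area under one individual's longitudinal biomarker curve."""
    return sum((values[i] + values[i + 1]) / 2.0 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def log_auc(times, values):
    """AUC of the log-transformed biomarker, the summary on which
    ratio-scale inference between groups is based."""
    return trapezoid_auc(times, [math.log(v) for v in values])
```

Note that both functions fail silently if a visit is simply dropped, which is exactly why the abstract's LMM-based approach is preferred when measurements are missing.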

10.
郝大明 《统计研究》2005,22(10):11-3
I. The rationale for a two-level government statistics system. 1. Technically feasible. Once survey information has been collected, checked and preliminarily processed by county-level statistics bureaus, it can, at the current level of technology, be reported directly to the National Bureau of Statistics. 2. Internationally standard. The mainstream statistical system internationally is the centralized two-level system. At present, countries such as Sweden, Finland, Denmark, the Netherlands, Norway and Belgium have one-level government statistics systems: there are no local government statistics agencies, only a national statistics office, which provides statistical services to both central and local government. Most countries, such as the United States, Canada, Australia, Indonesia, France, South Korea, Thailand, the United Kingdom, Germany, Japan, Austria, the Eastern European countries, New Zealand, Singapore and the Philippines, operate two-level statistics systems: the government statistics system consists of…

11.
This article considers the utility of the bounded cumulative hazard model in cure rate estimation, which is an appealing alternative to the widely used two-component mixture model. This approach has the following distinct advantages: (1) It allows for a natural way to extend the proportional hazards regression model, leading to a wide class of extended hazard regression models. (2) In some settings the model can be interpreted in terms of biologically meaningful parameters. (3) The model structure is particularly suitable for semiparametric and Bayesian methods of statistical inference. Notwithstanding the fact that the model has been around for less than a decade, a large body of theoretical results and applications has been reported to date. This review article is intended to give a big picture of these modeling techniques and associated statistical problems. These issues are discussed in the context of survival data in cancer.
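In the bounded cumulative hazard (promotion time) formulation, the improper survival function is S(t) = exp(−θF(t)) for a proper CDF F, so the cure fraction is exp(−θ). A minimal sketch, using an exponential F purely for illustration (the choice of F is our assumption, not the review's):

```python
import math

def bch_survival(t, theta, cdf):
    """Improper survival function of the bounded cumulative hazard cure
    model: S(t) = exp(-theta * F(t)) for a proper latent-time CDF F."""
    return math.exp(-theta * cdf(t))

def cure_fraction(theta):
    """Since F(t) -> 1 as t -> infinity, the cured proportion is exp(-theta)."""
    return math.exp(-theta)

# Illustrative latent-time CDF (an assumption for the sketch):
exp_cdf = lambda t: 1.0 - math.exp(-t)
```

Because S(t) never drops below exp(−θ), the model is "improper": a proportion exp(−θ) of subjects never experiences the event, which is what the biologically meaningful interpretation in point (2) refers to.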

12.
This paper proposes a new heavy-tailed, slash-type alternative distribution on a bounded interval, constructed by relating a slash random variable to the standard logistic function, in order to model skewed, high-kurtosis real data sets that contain outlying observations. Some basic statistical properties of the newly defined distribution are studied. We derive the maximum likelihood, least-squares, and weighted least-squares estimators of its parameters and assess their performance in a simulation study. Moreover, an application to real data demonstrates that the proposed distribution can provide a better fit than well-known bounded distributions in the literature when a skewed, high-kurtosis data set contains outlying observations.
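One plausible reading of the construction — an assumption on our part, since the abstract does not give the exact definition — is to push a slash-type variate S = Z/U^(1/q) (Z standard logistic, U uniform on (0,1)) through the standard logistic function, which maps the real line onto the bounded interval (0, 1):

```python
import math
import random

def slash_logistic_sample(q, size, seed=0):
    """Hypothetical sampler: S = Z / U**(1/q) with Z standard logistic and
    U ~ Uniform(0,1); Y = logistic(S) lies in (0, 1) and inherits the slash
    variable's heavy tails as mass piled up near the endpoints."""
    rng = random.Random(seed)
    out = []
    for _ in range(size):
        u1, u2 = rng.random(), rng.random()
        z = math.log(u1 / (1.0 - u1))          # inverse-CDF draw, standard logistic
        s = z / (u2 ** (1.0 / q))              # slash-type heavy-tailing
        out.append(0.5 * (1.0 + math.tanh(s / 2.0)))  # numerically stable logistic
    return out
```

The tanh form of the logistic avoids overflow when the slash draw is extreme, which is exactly the regime this distribution is designed to capture.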

13.
This article studies the 1994-2003 return series of the continuous soybean futures contract on the Dalian Commodity Exchange in China. Taking the first sample observation of 2003 as the split point, two sub-series are constructed and analyzed statistically. Both sub-series are found to be non-normally distributed, exhibiting leptokurtic, heavy-tailed features relative to the normal distribution, together with memory effects. Furthermore, based on the volatility clustering of the two sub-series, a set of GARCH models is built to analyze the volatility of the two return series of Chinese soybean futures, and the similarities and differences between them are compared.
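The GARCH(1,1) specification typically used for such volatility-clustering analyses can be sketched as follows (parameter values in the test are illustrative, not those estimated in the article):

```python
import math
import random

def simulate_garch11(omega, alpha, beta, steps, seed=7):
    """Simulate a GARCH(1,1) return series:
    sigma2_t = omega + alpha*eps_{t-1}**2 + beta*sigma2_{t-1},
    eps_t = sigma_t * z_t with z_t ~ N(0, 1); requires alpha + beta < 1."""
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    returns = []
    for _ in range(steps):
        eps = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
        returns.append(eps)
        sigma2 = omega + alpha * eps * eps + beta * sigma2
    return returns
```

A large shock raises next period's conditional variance, so large returns cluster in time — the stylized fact that motivates fitting GARCH models to each sub-series.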

14.
Few approaches for monitoring autocorrelated attribute data have been proposed in the literature. If the marginal process distribution is binomial, then the binomial AR(1) model as a realistic and well-interpretable process model may be adequate. Based on known and newly derived statistical properties of this model, we shall develop approaches to monitor a binomial AR(1) process, and investigate their performance in a simulation study. A case study demonstrates the applicability of the binomial AR(1) model and of the proposed control charts to problems from statistical process control.
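The binomial AR(1) model (McKenzie's construction via binomial thinning, X_t = α∘X_{t−1} + β∘(n−X_{t−1}), with marginal Binomial(n, π) and autocorrelation ρ) can be simulated as follows — a sketch for intuition, not the abstract's monitoring procedure:

```python
import random

def thin(count, p, rng):
    """Binomial thinning '∘': each of `count` units survives with prob p."""
    return sum(rng.random() < p for _ in range(count))

def simulate_binomial_ar1(n, pi, rho, steps, seed=1):
    """Simulate the binomial AR(1) model with marginal Binomial(n, pi):
    beta = pi*(1 - rho), alpha = beta + rho; rho must keep both thinning
    probabilities in [0, 1]."""
    beta = pi * (1.0 - rho)
    alpha = beta + rho
    rng = random.Random(seed)
    x = sum(rng.random() < pi for _ in range(n))   # draw X_0 ~ Bin(n, pi)
    path = [x]
    for _ in range(steps):
        x = thin(x, alpha, rng) + thin(n - x, beta, rng)
        path.append(x)
    return path
```

Because the state is always a count between 0 and n, the marginal distribution stays binomial, which is what makes the model "realistic and well-interpretable" for attribute data.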

15.
In November 2003 the Scottish Prison Service signed a contract with Reliance for the provision of prisoner escort and court custody services throughout Scotland. It soon emerged that the performance targets and financial penalties in the contract were covered by confidentiality clauses. But Sheila Bird has carried out some statistical detective work to estimate the concealed financial penalty for a prisoner unlawfully at large and to show that Reliance has been in serious breach of a crucial monthly performance threshold.

16.
Survival models involving frailties are commonly applied in studies where correlated event time data arise due to natural or artificial clustering. In this paper we present an application of such models in the animal breeding field. Specifically, a mixed survival model with a multivariate correlated frailty term is proposed for the analysis of data from over 3611 Brazilian Nellore cattle. The primary aim is to evaluate parental genetic effects on the trait "length in days that progeny need to achieve a commercially specified standard weight gain". This trait is not measured directly but can be estimated from growth data. Results point to the importance of genetic effects and suggest that these models constitute a valuable data analysis tool for beef cattle breeding.

17.
Many users of regression methods are attracted to the notion that it would be valuable to determine the relative importance of independent variables. This article demonstrates a method based on hierarchies that builds on previous efforts to decompose R² through incremental partitioning. The standard method of incremental partitioning has been to follow one order among the many possible orders available. By taking a hierarchical approach in which all orders of the variables are used, the average independent contribution of each variable is obtained and an exact partitioning results. Much the same logic is used to divide the joint effect of a variable. The method is general and applicable to all regression methods, including ordinary least squares, logistic, probit, and log-linear regression. A validation test demonstrates that the algorithm is sensitive to the relationships in the data rather than to the proportion of variability accounted for by the statistical model used.
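The hierarchical decomposition described — averaging each predictor's incremental contribution to R² over all orderings, often called the LMG approach — can be sketched for the OLS case as follows (the averaging logic carries over to logistic, probit and log-linear fits with the appropriate fit statistic):

```python
import itertools
import numpy as np

def r2(X, y, cols):
    """R^2 of an OLS regression of y on the columns in `cols`, with intercept."""
    Z = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def lmg_shares(X, y):
    """Average each predictor's incremental R^2 over all entry orders
    (hierarchical partitioning); the shares sum exactly to the full-model R^2."""
    p = X.shape[1]
    shares = np.zeros(p)
    perms = list(itertools.permutations(range(p)))
    for order in perms:
        seen, prev = [], 0.0
        for j in order:
            cur = r2(X, y, seen + [j])
            shares[j] += cur - prev
            seen.append(j)
            prev = cur
    return shares / len(perms)
```

Within each ordering the increments telescope to the full-model R², so the averaged shares partition it exactly; the cost grows as p!, which is why this sketch is only practical for a handful of predictors.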

18.
19.
This paper demonstrates how to plan a contingent valuation experiment to assess the value of ecologically produced clothes. First, an appropriate statistical model (the trinomial spike model) is defined that describes the probability that a randomly selected individual will accept any positive bid and, if so, will accept the bid A. Secondly, an optimization criterion that is a function of the variances of the parameter estimators is chosen. However, the variances of the parameter estimators in this model depend on the true parameter values. Pilot study data are therefore used to obtain estimates of the parameter values, and a locally optimal design is found. Because this design is only optimal if the estimated parameter values are correct, a design that minimizes the maximum of the criterion function over a plausible parameter region (i.e. a minimax design) is then found.

20.
This paper demonstrates that well-known parameter estimation methods for Gaussian fields place different emphasis on the high and low frequency components of the data. As a consequence, the relative importance of the frequencies under the objective of the analysis should be taken into account when selecting an estimation method, in addition to other considerations such as statistical and computational efficiency. The paper also shows that when noise is added to the Gaussian field, maximum pseudolikelihood automatically sets the smoothing parameter of the model equal to one. A simulation study then indicates that generalised cross-validation is more robust than maximum likelihood under model misspecification in smoothing and image restoration problems. This has implications for Bayesian procedures since these use the same weightings of the frequencies as the likelihood.
