Similar literature
20 similar records found.
1.
From a theoretical perspective, the paper considers the properties of the maximum likelihood estimator of the correlation coefficient, principally regarding precision, in various types of bivariate model which are popular in the applied literature. The models are: 'Full-Full', in which both variables are fully observed; 'Censored-Censored', in which both of the variables are censored at zero; and finally, 'Binary-Binary', in which both variables are observed only in sign. For analytical convenience, the underlying bivariate distribution which is assumed in each of these cases is the bivariate logistic. A central issue is the extent to which censoring reduces the level of Fisher's information pertaining to the correlation coefficient, and therefore reduces the precision with which this important parameter can be estimated.
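The precision loss from observing only signs can be illustrated with a small simulation. This is a hedged sketch, not the paper's analysis: it uses a bivariate normal rather than the bivariate logistic (so the sign-concordance probability can be inverted via the normal quadrant formula), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mse(n=200, rho=0.6, reps=400):
    """Compare the mean squared error of correlation estimates when both
    variables are fully observed ('Full-Full') versus observed only in
    sign ('Binary-Binary'), under a bivariate normal."""
    full_err, sign_err = [], []
    cov = np.array([[1.0, rho], [rho, 1.0]])
    for _ in range(reps):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        # Full-Full: ordinary Pearson correlation.
        r_full = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]
        # Binary-Binary: only signs observed; invert the quadrant
        # probability P(same sign) = 1/2 + arcsin(rho)/pi.
        p_same = np.mean(np.sign(xy[:, 0]) == np.sign(xy[:, 1]))
        r_sign = np.sin(np.pi * (p_same - 0.5))
        full_err.append((r_full - rho) ** 2)
        sign_err.append((r_sign - rho) ** 2)
    return np.mean(full_err), np.mean(sign_err)

mse_full, mse_sign = simulate_mse()
```

The sign-only estimator is markedly less precise, consistent with the loss of Fisher's information under censoring that the paper quantifies.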

2.
Thalassaemias are genetic blood disorders which cause varying degrees of anaemia. Their geographical distribution suggests a compensating protection against malaria, which kills between 0.5 and 2.5 million people per year in developing countries. Neal Alexander describes a study in Papua New Guinea which estimated this association more directly, and a triangle plot which clarified, for himself and his non-statistician colleagues, the relative risks of malaria for those with none, one or two copies of the relevant haemoglobin mutation.

3.
Measuring the quality of determined protein structures is a very important problem in bioinformatics. Kernel density estimation is a well-known nonparametric method which is often used for exploratory data analysis. Recent advances, which have extended previous linear methods to multi-dimensional circular data, give a sound basis for the analysis of conformational angles of protein backbones, which lie on the torus. By using an energy test, which is based on interpoint distances, we initially investigate the dependence of the angles on the amino acid type. Then, by computing tail probabilities which are based on amino-acid conditional density estimates, a method is proposed which permits inference on a test set of data. This can be used, for example, to validate protein structures, choose between possible protein predictions and highlight unusual residue angles.
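The paper's density estimates live on the torus (pairs of backbone angles); as a hedged one-dimensional illustration of the circular case, a kernel density estimate can be built by averaging von Mises kernels centred at the data. The data here are synthetic, not protein angles, and the concentration parameter is illustrative.

```python
import numpy as np
from scipy.special import i0  # modified Bessel function, von Mises normaliser

def vonmises_kde(theta, data, kappa=20.0):
    """Circular kernel density estimate: the average of von Mises
    kernels with concentration kappa centred at each data point."""
    theta = np.atleast_1d(theta)
    k = np.exp(kappa * np.cos(theta[:, None] - data[None, :]))
    return k.mean(axis=1) / (2.0 * np.pi * i0(kappa))

rng = np.random.default_rng(1)
angles = rng.vonmises(0.0, 4.0, size=500)      # synthetic angular data
grid = np.linspace(-np.pi, np.pi, 400)
dens = vonmises_kde(grid, angles)
```

Because the kernel is periodic, the estimate wraps correctly at ±π, which a linear Gaussian kernel would not.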

4.
5.
We obtain an estimator of the r-th central moment of a distribution, which is unbiased for all distributions for which the first r moments exist. We do this by finding the kernel which allows the r-th central moment to be written as a regular statistical functional. The U-statistic associated with this kernel is the unique symmetric unbiased estimator of the r-th central moment, and, for each distribution, it has minimum variance among all estimators which are unbiased for all these distributions.
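For r = 2 the construction is easy to exhibit: the kernel h(x1, x2) = (x1 - x2)²/2 represents the variance as a regular statistical functional, and its U-statistic is exactly the familiar unbiased sample variance. A minimal sketch (the general r-th-moment kernel is more involved and not reproduced here):

```python
import numpy as np
from itertools import combinations

def u_stat_variance(x):
    """U-statistic with kernel h(x1, x2) = (x1 - x2)**2 / 2,
    averaged over all unordered pairs of observations."""
    pairs = list(combinations(x, 2))
    return sum((a - b) ** 2 / 2.0 for a, b in pairs) / len(pairs)

rng = np.random.default_rng(2)
x = rng.exponential(size=30)
u = u_stat_variance(x)
```

The pairwise average coincides, identically, with the usual divisor-(n-1) sample variance, which is the uniqueness claim of the abstract specialised to r = 2.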

6.
There are many hypothesis testing settings in which one can calculate a “reasonable” test statistic, but in which the null distribution of the statistic is unknown or completely intractable. Fortunately, in many such situations, it is possible to simulate values of the test statistic under the null hypothesis, in which case one can conduct a Monte Carlo test. A difficulty however arises in that Monte Carlo tests, as they are currently structured, are applicable only if ties cannot occur among the values of the test statistics. There is a frequently occurring scenario in which there are lots of ties, namely that in which the null distribution of the test statistic has a (single) point mass. It turns out that one can modify the current form of Monte Carlo tests so as to accommodate such settings. Developing this modification leads to an intriguing identity involving the binomial probability function and its derivatives. In this article, we will briefly explain the modified procedure, discuss simulation studies which demonstrate its efficacy, and provide a proof of the identity referred to above.
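One standard way of accommodating ties is randomized tie-breaking. The sketch below shows that baseline construction, as an assumption about the flavour of modification involved, not necessarily the procedure developed in the article:

```python
import numpy as np

def mc_pvalue(t_obs, t_sim, rng):
    """Monte Carlo p-value with randomized tie-breaking: ties between
    the observed and simulated statistics are split uniformly, which
    keeps the test exact even when the null distribution of the
    statistic has a point mass."""
    t_sim = np.asarray(t_sim)
    greater = int(np.sum(t_sim > t_obs))
    ties = int(np.sum(t_sim == t_obs))
    u = rng.uniform()   # uniform weight for the observed value among ties
    return (greater + u * (ties + 1)) / (len(t_sim) + 1)

rng = np.random.default_rng(3)
p_hi = mc_pvalue(10.0, [1.0, 2.0, 3.0], rng)   # observed value is extreme
p_lo = mc_pvalue(0.0, [1.0, 2.0, 3.0], rng)    # observed value is small
```

When there are no ties this reduces to the usual (rank-based) Monte Carlo p-value up to the uniform jitter.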

7.
Estimates of the largest wind gust that will occur at a given location over a specified period are required by civil engineers. Estimation is usually based on models which are derived from the limiting distributions of maxima of stationary time series and which are fitted to data on extreme gusts. In this paper we develop a model for maximum gusts which also incorporates data on hourly mean speeds through a distributional relationship between maxima and means. This joint model is closely linked to the physical processes which generate the most extreme values and thus provides a mechanism by which data on means can augment those on gusts. It is argued that this increases the credibility of extrapolation in estimates of long period return gusts. The model is shown to provide a good fit to data obtained at a location in northern England and is compared with a more traditional modelling approach, which also performs well for this site.
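A hedged sketch of the traditional single-variable approach that the joint model is compared against: fit a Gumbel (extreme-value) distribution to annual maximum gusts and read off return levels. The paper's joint gust-mean model is not reproduced; the data and parameters below are synthetic and illustrative.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(4)
# Synthetic annual-maximum gust speeds (m/s); parameters are illustrative.
annual_max = gumbel_r.rvs(loc=25.0, scale=4.0, size=2000, random_state=rng)

loc, scale = gumbel_r.fit(annual_max)

def return_level(loc, scale, T):
    """Gust speed exceeded on average once every T years under the
    fitted Gumbel model for annual maxima."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

rl10 = return_level(loc, scale, 10)
rl50 = return_level(loc, scale, 50)
```

The long-period return levels (e.g. T = 50) are exactly the extrapolations whose credibility the paper argues is improved by bringing in hourly mean speeds.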

8.
A wish list of desirable statistical computing capabilities is presented. It raises the questions of which of these capabilities can be satisfied by existing packages, which might be met through reasonable extensions to these packages, which would require substantial new development, and which ought to be supplied by the computing environment rather than by the packages. These questions are explored, taking into account the nature of statistical work and the choices presented by technology. Attention is given to the barriers to be overcome if future statistical packages are to take full advantage of new technology.

9.
Lattice paths which are restricted by one or two nonlinear boundaries are of increasing importance in many applications of lattice path combinatorics, for instance in sequential statistics. It is typical for such applications that the number of paths which avoid certain boundaries has to be determined when the number of steps is large. Counting results, if available, and recursion are not very well suited in these cases. In this paper we present some asymptotic approximations which are easy to calculate and provide an accuracy which is sufficient for many practical purposes.
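As a toy illustration of boundary-restricted path counting, the sketch below counts monotone paths that stay on or below the diagonal by recursion and compares the exact count with the classical asymptotic approximation 4^n / (n^{3/2} sqrt(pi)). This is the Catalan-number case with a linear boundary; the paper's interest is in nonlinear boundaries, for which no such closed form is available and approximations of this flavour are the practical tool.

```python
from math import comb, pi, sqrt

def paths_below_diagonal(n):
    """Count monotone lattice paths (0,0) -> (n,n) that never rise
    above the diagonal (j <= i throughout), by dynamic programming."""
    f = [[0] * (n + 1) for _ in range(n + 1)]
    f[0][0] = 1
    for i in range(n + 1):
        for j in range(i + 1):          # boundary: never cross j = i
            if i > 0:
                f[i][j] += f[i - 1][j]
            if j > 0:
                f[i][j] += f[i][j - 1]
    return f[n][n]

n = 200
exact = paths_below_diagonal(n)                  # Catalan number C_200
approx = 4 ** n / (n ** 1.5 * sqrt(pi))          # asymptotic approximation
```

Even at n = 200 the simple asymptotic formula is within about one percent of the exact recursion, which illustrates why such approximations suffice for many practical purposes.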

10.
Varying-coefficient partially linear models provide a useful tool for modeling covariate effects on the response variable in regression. One key question in varying-coefficient partially linear models is the choice of model structure, that is, how to decide which covariates have a linear effect and which have a nonlinear effect. In this article, we propose a profile method for identifying the covariates with a linear effect or a nonlinear effect. Our proposed method is a penalized regression approach based on the group minimax concave penalty. Under suitable conditions, we show that the proposed method can correctly determine which covariates have a linear effect and which do not with high probability. The convergence rate of the linear estimator is established, as well as its asymptotic normality. The performance of the proposed method is evaluated through a simulation study which supports our theoretical results.

11.
This paper presents an approach to cross-validated window width choice which greatly reduces computation time, which can be used regardless of the nature of the kernel function, and which avoids the use of the Fast Fourier Transform. This approach is developed for window width selection in the context of kernel estimation of an unknown conditional mean.
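For reference, the baseline being accelerated is the direct leave-one-out cross-validation of a kernel conditional-mean (Nadaraya-Watson) estimator; a hedged sketch of that baseline, with a Gaussian kernel and synthetic data (the paper's fast algorithm itself is not reproduced):

```python
import numpy as np

def nw_loo_cv(x, y, h):
    """Leave-one-out cross-validation score for the Nadaraya-Watson
    estimator with a Gaussian kernel and window width h."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)       # leave each point out of its own fit
    yhat = (w @ y) / w.sum(axis=1)
    return np.mean((y - yhat) ** 2)

rng = np.random.default_rng(5)
x = rng.uniform(0, 2 * np.pi, 300)
y = np.sin(x) + rng.normal(scale=0.3, size=300)

grid = np.linspace(0.05, 1.0, 20)
scores = [nw_loo_cv(x, y, h) for h in grid]
h_best = grid[int(np.argmin(scores))]
```

The O(n^2) weight matrix per candidate width is precisely the cost that motivates faster schemes, with or without the FFT.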

12.
Summary This paper describes an axiomatic approach to index number theory. A general bilateral formula which generates a set of indices of prices and quantities is proposed. This formula is written as a geometric mean, weighted with logarithmic means of relative values, and includes all of those indices and related cofactors which satisfy the following axiomatic properties: strong identity, commensurability, linear homogeneity and associativity (or monotonicity). Moreover, two subsets of indices are identified: the first subset includes those which can be expressed either as geometric means or as expenditure ratios; the corresponding bounds are given by the Laspeyres' and Paasche's indices. The second subset also satisfies the desired properties of base and factor reversibility; in this case the bounds are given by the Sato–Vartia and Fisher indices. In addition it is shown that the intersection between the two subsets identifies a new bilateral ideal index which satisfies the axiomatic and reversibility properties (base and factor) and may also be written as an expenditure ratio. All other formulas, which do not belong to the family of the proposed indices, violate some of the axiomatic properties. For multilateral comparisons, a mixed system of direct and indirect indices which satisfies the transitivity condition is proposed.
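The Laspeyres/Paasche bounds and the factor-reversal property mentioned above are easy to check numerically. A minimal sketch with illustrative price and quantity vectors (the paper's general weighted-geometric-mean family is not reproduced):

```python
import numpy as np

def laspeyres(p0, p1, q0, q1):
    """Laspeyres price index: base-period quantity weights."""
    return np.dot(p1, q0) / np.dot(p0, q0)

def paasche(p0, p1, q0, q1):
    """Paasche price index: current-period quantity weights."""
    return np.dot(p1, q1) / np.dot(p0, q1)

def fisher(p0, p1, q0, q1):
    """Fisher's ideal index: geometric mean of Laspeyres and Paasche."""
    return np.sqrt(laspeyres(p0, p1, q0, q1) * paasche(p0, p1, q0, q1))

# Illustrative prices and quantities for three goods in two periods.
p0 = np.array([1.0, 2.0, 3.0]); p1 = np.array([1.2, 2.1, 3.6])
q0 = np.array([10.0, 5.0, 2.0]); q1 = np.array([9.0, 6.0, 2.0])

L = laspeyres(p0, p1, q0, q1)
P = paasche(p0, p1, q0, q1)
F = fisher(p0, p1, q0, q1)
```

The Fisher index always lies between the Laspeyres and Paasche bounds, and multiplying the Fisher price index by the Fisher quantity index (the same formula with the roles of prices and quantities swapped) recovers the expenditure ratio exactly, i.e. factor reversibility.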

13.
Research on a Market Segmentation Method Based on the Customer Satisfaction Trap   Cited: 1 (self-citations: 0, other citations: 1)
A market segmentation method based on the customer satisfaction trap is proposed. Using the resulting segments, a firm can adopt different marketing strategies for customers at different satisfaction levels, redirecting part of the resources aimed at merely satisfied customers toward dissatisfied and highly satisfied customers, thereby raising customer loyalty more effectively and lifting business performance. Taking a television manufacturer as an example, statistical methods including cluster analysis, principal component regression and multiple correspondence analysis are used to verify the existence of the customer satisfaction trap, and to show a correspondence between customers' demographic characteristics and the way customer satisfaction acts on customer loyalty, which supports the soundness of the method.

14.
This paper presents results concerning the implementation of two estimators for the total of a finite population, each of which is optimal under either an additive or a purely interaction model. The assumptions under which the estimators are derived, some mathematical properties of the estimators, and tables which compare the estimators and give optimal allocation rules as a function of the relevant parameters are given.

15.
Thermodynamics has been shown to have direct applications in Bayesian model evaluation. Within a tempered transitions scheme, the Boltzmann–Gibbs distribution pertaining to different Hamiltonians is implemented to create a path which links the distributions of interest at the endpoints. As illustrated here, an optimal temperature exists along the path which directly provides the free energy, which in this context corresponds to the marginal likelihood and/or Bayes factor. Estimators which have been developed under this framework are organised here using a unifying approach, in parallel with their stepping-stone sampling counterparts. New estimators are presented and the use of compound paths is introduced. As a byproduct, it is shown how the thermodynamic integral allows for the estimation of probability distribution divergences and measures of statistical entropy. A geometric approach is employed here to illustrate the importance of the choice of the path in terms of the corresponding estimator's error (path-related variance), which provides a more intuitive approach to tuning the error sources.
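A minimal sketch of the thermodynamic (path sampling) identity on a conjugate toy model where the marginal likelihood is available in closed form, so the estimate can be checked. The power-posterior path and temperature grid below are standard illustrative choices, not the compound paths of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20
x = rng.normal(loc=1.0, size=n)    # model: x_i ~ N(theta, 1), theta ~ N(0, 1)
s, ss = x.sum(), np.sum(x ** 2)

# Exact log marginal likelihood for this conjugate model.
log_ml = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(1 + n)
          - 0.5 * (ss - s ** 2 / (1 + n)))

# Thermodynamic integration along the power-posterior path:
# log m(x) = integral over beta in [0,1] of E_beta[log L(theta)].
betas = np.linspace(0.0, 1.0, 30) ** 3    # grid concentrated near beta = 0
means = []
for b in betas:
    prec = 1.0 + b * n                    # power posterior: N(b*s/prec, 1/prec)
    theta = rng.normal(b * s / prec, np.sqrt(1.0 / prec), size=4000)
    loglik = (-0.5 * n * np.log(2 * np.pi)
              - 0.5 * ((x[:, None] - theta) ** 2).sum(axis=0))
    means.append(loglik.mean())
means = np.array(means)
# Trapezoidal rule over the (nonuniform) temperature grid.
ti_estimate = np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(betas))
```

Concentrating the grid near beta = 0, where the integrand changes fastest, is one simple instance of the path-choice issue the paper studies geometrically.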

16.
This paper is about object deformations observed throughout a sequence of images. We present a statistical framework in which the observed images are defined as noisy realizations of a randomly deformed template image. In this framework, we focus on the problem of the estimation of parameters related to the template and deformations. Our main motivation is the construction of an estimation framework and algorithm which can be applied to short sequences of complex, high-dimensional images. The originality of our approach lies in the representations of the template and deformations, which are defined on a common triangulated domain, adapted to the geometry of the observed images. In this way, we have joint representations of the template and deformations which are compact and parsimonious. Using such representations, we are able to drastically reduce the number of parameters in the model. Besides, we adapt to our framework the Stochastic Approximation EM algorithm combined with a Markov chain Monte Carlo procedure which was proposed in 2004 by Kuhn and Lavielle. Our implementation of this algorithm takes advantage of some properties which are specific to our framework. More precisely, we use the Markovian properties of deformations to build an efficient simulation strategy based on a Metropolis–Hastings-within-Gibbs sampler. Finally, we present some experiments on sequences of medical images and synthetic data.

17.
The asymptotic distribution theory of test statistics which are functions of spacings is studied here. Distribution theory under appropriate close alternatives is also derived and used to find the locally most powerful spacing tests. For the two-sample problem, which is to test if two independent samples are from the same population, test statistics which are based on “spacing-frequencies” (i.e., the numbers of observations of one sample which fall in between the spacings made by the other sample) are utilized. The general asymptotic distribution theory of such statistics is studied both under the null hypothesis and under a sequence of close alternatives.
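Greenwood's statistic is a classic example of a test statistic that is a function of spacings; a minimal sketch (purely illustrative of the ingredients, not the locally most powerful tests derived in the paper):

```python
import numpy as np

def greenwood(u):
    """Greenwood's spacing statistic: the sum of squared spacings of a
    sample on [0, 1], including the gaps to the two endpoints."""
    u = np.sort(np.asarray(u))
    d = np.diff(np.concatenate(([0.0], u, [1.0])))   # the n + 1 spacings
    return np.sum(d ** 2)

rng = np.random.default_rng(7)
u = rng.uniform(size=50)
g = greenwood(u)
```

Since the n + 1 spacings sum to one, the statistic is minimised at 1/(n + 1) when all spacings are equal, and large values signal clustering; its null distribution is what the asymptotic theory describes.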

18.
SOME MODELS FOR OVERDISPERSED BINOMIAL DATA   Cited: 1 (self-citations: 0, other citations: 1)
Various models are currently used to model overdispersed binomial data. It is not always clear which model is appropriate for a given situation. Here we examine the assumptions and discuss the problems and pitfalls of some of these models. We focus on clustered data with one level of nesting, briefly touching on more complex strata and longitudinal data. The estimation procedures are illustrated and some critical comments are made about the various models. We indicate which models are restrictive and how and which can be extended to model more complex situations. In addition some inadequacies in testing procedures are noted. Recommendations as to which models should be used, and when, are made.
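A hedged sketch of the phenomenon and of one of the simpler diagnostics in this family of models: clustered (beta-binomial) data inflate the Pearson statistic relative to its binomial degrees of freedom, and the ratio estimates the quasi-likelihood dispersion factor. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
m, n_trials = 200, 20                    # 200 clusters, 20 trials each

# Clustered data: cluster-specific success probabilities (beta-binomial).
p_i = rng.beta(2.0, 2.0, size=m)
y = rng.binomial(n_trials, p_i)
p_hat = y.sum() / (m * n_trials)
pearson = np.sum((y - n_trials * p_hat) ** 2
                 / (n_trials * p_hat * (1 - p_hat)))
phi_hat = pearson / (m - 1)              # dispersion; ~1 for pure binomial

# Reference: pure binomial data with a common probability.
y0 = rng.binomial(n_trials, 0.5, size=m)
p0_hat = y0.sum() / (m * n_trials)
phi_binom = np.sum((y0 - n_trials * p0_hat) ** 2
                   / (n_trials * p0_hat * (1 - p0_hat))) / (m - 1)
```

For this beta-binomial setting the theoretical dispersion is 1 + (n_trials - 1) * rho with intra-cluster correlation rho = 0.2, so the estimate should sit well above one, while the pure binomial reference hovers near one.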

19.
A framework for time varying parameter regression models is developed and employed in modeling and forecasting price expectations, using the Livingston data. Alternative model formulations, which include various choices for both the stochastic processes generating the varying parameters and the sets of explanatory variables, are examined and compared by using this framework. These models, some of which have appeared elsewhere and some of which are new, are estimated and used to assess the expectations formation process.  
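The canonical engine for such models is the Kalman filter applied to a regression with stochastically varying coefficients. A minimal sketch for the scalar random-walk-coefficient case, on synthetic data (the Livingston data and the paper's specific formulations are not reproduced; noise variances are illustrative):

```python
import numpy as np

def tvp_filter(y, x, q=0.01, r=0.25):
    """Kalman filter for y_t = x_t * beta_t + e_t with a random-walk
    coefficient beta_t = beta_{t-1} + w_t (scalar case).
    q = Var(w_t), r = Var(e_t)."""
    beta, P = 0.0, 10.0                    # diffuse-ish initial state
    path = []
    for t in range(len(y)):
        P = P + q                          # predict the state variance
        S = x[t] * P * x[t] + r            # innovation variance
        K = P * x[t] / S                   # Kalman gain
        beta = beta + K * (y[t] - x[t] * beta)
        P = (1.0 - K * x[t]) * P
        path.append(beta)
    return np.array(path)

rng = np.random.default_rng(9)
T = 400
x = rng.normal(size=T)
beta_true = np.cumsum(rng.normal(scale=0.1, size=T))   # random-walk coefficient
y = x * beta_true + rng.normal(scale=0.5, size=T)
est = tvp_filter(y, x, q=0.01, r=0.25)
```

Swapping in other stochastic processes for the coefficients (e.g. stationary AR instead of a random walk) changes only the prediction step, which is what makes the framework convenient for comparing alternative formulations.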

20.
A Review of Research on the Fragility of the Financial System   Cited: 1 (self-citations: 1, other citations: 0)
Since the 1970s, the fragility of the financial system has been established as a subject of study in finance and has drawn the attention of governments and scholars. Research has focused mainly on the formation mechanism of financial fragility and on methods of measuring it. Regarding the formation mechanism, the "deteriorating economic fundamentals" view analyses it from the perspective of the macroeconomic cycle; the "sunspot" view analyses it at the micro level, i.e. from the perspective of market participants; the integrated view is essentially a synthesis of these two; and other views analyse it from the perspectives of information economics, behavioural economics, and institutions and policy. Methods of measuring financial fragility fall into two classes: measurement by individual indicators and comprehensive measurement by indicator systems.
