Similar Articles
 20 similar articles found (search time: 265 ms)
1.
2.
3.
Abstract

A class of multivariate laws extending the univariate Weibull distribution is presented. A well-known representation of the asymmetric univariate Laplace distribution is used as the starting point. This new family of distributions exhibits some similarities to the multivariate normal distribution. Properties of this class of distributions are explored, including moments, correlations, densities, and simulation algorithms. The distribution is applied to model bivariate exchange rate data. The fit of the proposed model seems remarkably good. Parameters are estimated and a bootstrap study is performed to assess the accuracy of the estimators.
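The starting representation mentioned above can be sketched in code. Below is a minimal, hypothetical illustration (not the paper's multivariate construction; the function name and parameter values are my own) of the standard normal variance–mean mixture representation of the univariate asymmetric Laplace law: X = m·W + σ·√W·Z with W ~ Exp(1) and Z ~ N(0, 1) independent.

```python
import math
import random

def asymmetric_laplace_sample(n, m=1.0, sigma=1.0, seed=0):
    """Draw n variates via the mixture X = m*W + sigma*sqrt(W)*Z,
    W ~ Exp(1), Z ~ N(0, 1); E[X] = m under this parameterization."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        w = rng.expovariate(1.0)   # exponential mixing variable
        z = rng.gauss(0.0, 1.0)    # independent standard normal
        draws.append(m * w + sigma * math.sqrt(w) * z)
    return draws
```

Transforming the mixing variable W is one route from this representation toward Weibull-type marginals; the details of that extension are the subject of the paper itself.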

4.
Assume that a forecaster observes a sequence of random variables and issues predictions according to a point prediction system, i.e. a rule which, at every time t, issues a point prediction for the next observation at time t + 1. We introduce the concept of efficiency of a point prediction system for the case where the joint distribution of the sequence of observations is known to belong to a parametric family, and performance is assessed by the long-run sum of squared prediction errors. Independence is not a requirement. Under weak conditions, the class of efficient point prediction systems is non-empty, and any two efficient point prediction systems will, in a certain strong sense, make asymptotically identical predictions for the infinite future. We discuss the efficiency of point prediction systems based on Bayesian predictive means, and on plugging in parameter estimates. The results are applied to probability forecasting and stochastic regression.
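As a toy illustration of the setup (the plug-in rule and all names are mine, not the paper's): a point prediction system that predicts the next observation by the running mean, scored by the cumulative sum of squared prediction errors.

```python
def running_mean_predictions(xs):
    """Plug-in point prediction system: before seeing x_{t+1}, predict it
    by the mean of x_1, ..., x_t (and by 0.0 before any data arrive)."""
    preds, total = [], 0.0
    for t, x in enumerate(xs):
        preds.append(total / t if t > 0 else 0.0)
        total += x
    return preds

def sum_squared_errors(xs, preds):
    """Cumulative squared prediction error used to assess the system."""
    return sum((x - p) ** 2 for x, p in zip(xs, preds))
```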

5.
Motivated by a number of drawbacks of classical methods of point estimation, we generalize the definitions of point estimation and address such notions as unbiasedness and estimation under constraints. The utility of the extension is shown by deriving more reliable estimates for small coefficients of regression models, and for variance components and random effects of mixed models. The extension is in the spirit of the generalized confidence intervals introduced by Weerahandi (1993) and should encourage much-needed further research on point estimation in unbalanced models, multivariate models, non-normal models, and nonlinear models.

6.
An important contribution to the literature on frequentist model averaging (FMA) is the work of Hjort and Claeskens (2003), who developed an asymptotic theory for frequentist model averaging in parametric models based on a local misspecification framework. They also proposed a simple method for constructing confidence intervals for the unknown parameters. This article shows that the confidence intervals based on the FMA estimator suggested by Hjort and Claeskens (2003) are asymptotically equivalent to those obtained from the full model, under both the parametric and the varying-coefficient partially linear models. Thus, insofar as interval estimation rather than point estimation is concerned, the confidence interval based on the full model already fulfills the objective, and model averaging provides no additional useful information.

7.
The well-known Wilson and Agresti–Coull confidence intervals for a binomial proportion p are centered around a Bayesian estimator. Using this as a starting point, similarities between frequentist confidence intervals for proportions and Bayesian credible intervals based on low-informative priors are studied using asymptotic expansions. A Bayesian motivation for a large class of frequentist confidence intervals is provided. It is shown that the likelihood ratio interval for p approximates a Bayesian credible interval based on Kerman's neutral noninformative conjugate prior up to O(1/n) in the confidence bounds. For significance levels α ≤ 0.317, the Bayesian interval based on the Jeffreys prior is then shown to be a compromise between the likelihood ratio and Wilson intervals. Supplementary materials for this article are available online.
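A minimal sketch of the two intervals named above (function names are mine). Both are centered at the same shrunken estimate (x + z²/2)/(n + z²), which is what gives them their Bayesian flavour:

```python
import math

def wilson_interval(x, n, z=1.96):
    """Wilson score interval for a binomial proportion p."""
    p_hat = x / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

def agresti_coull_interval(x, n, z=1.96):
    """Agresti-Coull interval: a normal interval around the shrunken
    estimate p_tilde = (x + z^2/2) / (n + z^2)."""
    n_tilde = n + z**2
    p_tilde = (x + z**2 / 2) / n_tilde
    half = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
    return p_tilde - half, p_tilde + half
```

For z = 2 the shared center is roughly (x + 2)/(n + 4), the familiar "add two successes and two failures" point estimate.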

8.
Every attainable structure of the so-called continuous-time Homogeneous Markov System (HMS) with fixed size and state space S = {1, 2,…, n} is considered as a particle of R^n, and, consequently, the motion of the structure corresponds to the motion of the particle. Under the assumption that "the motion of every particle-structure at every time point is due to its interaction with its surroundings," R^n becomes a continuum (Tsaklidis, 1998). Then the evolution of the set of attainable structures corresponds to the motion of the continuum. For the case of a three-state HMS, it is shown that the concept of two-dimensional isotropic elasticity can further be used to interpret the evolution of the three-state HMS.

9.
Noncentral distributions appear in two-sample problems and are often used in several fields, for example, in biostatistics. A higher-order approximation to a percentage point of the noncentral t-distribution under normality is given by Akahira (1995) and is also shown to be numerically better than others. In this article, without the normality assumption, we obtain a higher-order approximation to a percentage point of the distribution of a noncentral t-statistic, in a manner similar to Akahira (1995), where a statistic based on a linear combination of a normal random variable and a chi-statistic plays an important role. Applications to the confidence limit and the confidence interval for the noncentrality parameter are also given. Further, a numerical comparison of the higher-order approximation with the limiting normal distribution is carried out, and the former is shown to be more accurate. Judging from the numerical calculations, the higher-order approximation appears useful in practical situations when the sample size is not too small.

10.
In recent years, the suggestion of combining models, as an alternative to selecting a single model from a frequentist perspective, has been advanced in a number of studies. In this article, we propose a new semiparametric estimator of regression coefficients, which takes the form of the feasible generalized ridge estimator of Hoerl and Kennard (1970b) but with different biasing factors. We prove that, after reparameterization such that the regressors are orthogonal, the generalized ridge estimator is algebraically identical to the model average estimator. Further, the biasing factors that determine the properties of both the generalized ridge and semiparametric estimators are directly linked to the weights used in model averaging. These are interesting results for the interpretation and application of both semiparametric and ridge estimators. Furthermore, we demonstrate that estimators based on model averaging weights can have properties superior to the well-known feasible generalized ridge estimator over a large region of the parameter space. Two empirical examples are presented.
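The algebraic identity claimed above is easy to see in the orthonormal case. A toy sketch (all names mine): when X'X = I, generalized ridge shrinks each OLS coordinate by 1/(1 + k_j), which coincides with averaging the full model against the null model with weight w_j = 1/(1 + k_j) on the full model:

```python
def generalized_ridge_orthonormal(beta_ols, ks):
    """Generalized ridge with X'X = I: beta_j / (1 + k_j),
    one biasing factor k_j per coordinate."""
    return [b / (1 + k) for b, k in zip(beta_ols, ks)]

def model_average_orthonormal(beta_ols, weights):
    """Coordinate-wise average of the full model (beta_j) and the
    null model (0), with weight w_j on the full model."""
    return [w * b for w, b in zip(beta_ols, weights)]
```

With w_j = 1/(1 + k_j) the two outputs agree exactly, which is the orthogonal-regressor version of the equivalence described in the abstract.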

11.
Starting from a standard pivot, exact inference for the p-th quantile and for the reliability of the two-parameter exponential distribution under singly Type II censored samples is developed in this article. Fernandez (2007) first obtained some of the results proposed here but, unlike the present article, developed his theory starting from a generalized pivot. An illustrative example shows that, with the expressions proposed in this article, it is also possible to overcome some shortcomings arising from the formulas of Fernandez (2007). Finally, a new expression for the moments of the pivot is obtained.

12.

In this article we consider estimating the bivariate survival function from observations where one component is subject to left truncation and right censoring and the other is subject to right censoring only. Two types of nonparametric estimators are proposed. One takes the form of an inverse-probability-weighted average (Satten and Datta, 2001) and the other is a generalization of Dabrowska's (1988) estimator. The two are then compared based on their empirical performance.
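The inverse-probability-weighting idea behind the first estimator can be sketched in a simplified form (this toy version estimates a mean under known observation probabilities; it is not the survival-function estimator itself, and all names are mine):

```python
def ipw_mean(ys, observed, probs):
    """Inverse-probability-weighted mean: each observed response is
    up-weighted by 1 / P(observed), compensating for the unobserved
    ones, in the spirit of Horvitz-Thompson weighting."""
    total = sum(y / p for y, o, p in zip(ys, observed, probs) if o)
    return total / len(ys)
```

Here subjects observed with low probability stand in for more of the sample, which is exactly how censoring weights recover an average over the full cohort.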

13.
The general minimum lower-order confounding (GMC) criterion is used to choose optimal designs based on the aliased effect-number pattern (AENP). The AENP and the GMC criterion have been developed into GMC theory. Zhang et al. (2015) introduced the GMC 2^n4^m criterion for choosing optimal designs and constructed all GMC 2^n4^1 designs with N/4 + 1 ≤ n + 2 ≤ 5N/16. In this article, we analyze the properties of 2^n4^1 designs and construct GMC 2^n4^1 designs with 5N/16 + 1 ≤ n + 2 < N − 1, where n and N are, respectively, the numbers of two-level factors and runs. Further, GMC 2^n4^1 designs with 16 and 32 runs are tabulated.

14.
It is well known that a Bayesian credible interval for a parameter of interest is derived from a prior distribution that appropriately describes the prior information. However, it is less well known that there exists a frequentist approach, developed by Pratt (1961), that also utilizes prior information in the construction of frequentist confidence intervals. This approach produces confidence intervals that have minimum weighted average expected length, averaged according to some weight function that appropriately describes the prior information. We begin with a simple model as a starting point for comparing these two distinct procedures in interval estimation. Consider X_1,…, X_n that are independent and identically N(μ, σ²) distributed random variables, where σ² is known, and the parameter of interest is μ. Suppose also that previous experience with similar data sets and/or specific background and expert opinion suggest that μ = 0. Our aim is to: (a) develop two types of Bayesian 1 − α credible intervals for μ, derived from an appropriate prior cumulative distribution function F(μ); and, more importantly, (b) compare these Bayesian 1 − α credible intervals for μ to the frequentist 1 − α confidence interval for μ derived from Pratt's approach, in which the weight function corresponds to the prior cumulative distribution function F(μ). We show that the endpoints of the Bayesian 1 − α credible intervals for μ are very different from the endpoints of the frequentist 1 − α confidence interval for μ when the prior information strongly suggests that μ = 0 and the data support the uncertain prior information about μ. In addition, we assess the performance of these intervals by analyzing their coverage probability properties and expected lengths.
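For intuition only (this is not the paper's F(μ), which would typically concentrate mass at μ = 0 in a different way): with a continuous N(0, τ²) prior on μ the credible interval has a closed form and can be compared with the standard frequentist interval. All names and parameter values below are illustrative.

```python
import math

def frequentist_ci(xbar, sigma, n, z=1.96):
    """Standard 1 - alpha confidence interval for mu with sigma known."""
    half = z * sigma / math.sqrt(n)
    return xbar - half, xbar + half

def credible_interval(xbar, sigma, n, tau, z=1.96):
    """Central credible interval for mu under a N(0, tau^2) prior:
    the posterior is normal, shrunk toward the prior guess mu = 0."""
    precision = n / sigma**2 + 1 / tau**2
    post_mean = (n / sigma**2) * xbar / precision
    post_sd = math.sqrt(1 / precision)
    return post_mean - z * post_sd, post_mean + z * post_sd
```

When the prior concentrates near 0 (small τ) and the data are consistent with it, the credible interval is both shorter than the frequentist one and pulled toward 0, illustrating how different the two sets of endpoints can be.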

15.
Consider a chronic disease process which is beginning to be observed at a point in chronological time. The backward recurrence and forward recurrence times are defined for prevalent cases as the time spent with the disease and the time until leaving the disease state, respectively, where the reference point is the point in time at which the disease process is being observed. In this setting the incidence of disease affects the recurrence time distributions. In addition, the survival of prevalent cases will tend to be greater than that of the full population with disease, due to length-biased sampling. A similar problem arises in models for the early detection of disease. In that case the backward recurrence time is how long an individual has had the disease before detection, and the forward recurrence time is the time gained by early diagnosis, i.e., until the disease becomes clinical by exhibiting signs or symptoms. In these examples the incidence of disease may be age related, resulting in a non-stationary process. The resulting recurrence time distributions are derived, as well as some generalizations of length-biased sampling.

16.
This article analyses the performance of a one-sided cumulative sum (CUSUM) chart that is initialized using a random starting point following the natural, or intrinsic, probability distribution of the CUSUM statistic. By definition, this probability distribution remains stable as the chart is used. The probability that the chart starts at zero according to this intrinsic distribution is always smaller than one, which confers on the chart a fast initial response feature. The article provides a fast and accurate algorithm to compute the in-control and out-of-control average run lengths and run-length probability distributions for one-sided CUSUM charts initialized using this random intrinsic fast initial response (RIFIR) scheme. The algorithm also computes the intrinsic distribution of the CUSUM statistic and random samples extracted from this distribution. Most importantly, no matter how the chart was initialized, if no level shifts and no alarms have occurred before time τ > 0, the distribution of the run length remaining after τ is provided by this algorithm very accurately, provided that τ is not too small.
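The chart's recursion and the role of the starting point can be sketched as follows (the reference value k, threshold h, and function name are illustrative; the paper's contribution is the intrinsic distribution from which s0 would be drawn, which is not reproduced here):

```python
def cusum_run_length(observations, k=0.5, h=4.0, s0=0.0):
    """One-sided upper CUSUM: S_t = max(0, S_{t-1} + x_t - k).
    Returns the first time t with S_t > h (the run length), or None
    if no alarm occurs. s0 is the starting value of the statistic;
    drawing it at random from the statistic's intrinsic distribution
    gives the RIFIR scheme its head start."""
    s = s0
    for t, x in enumerate(observations, start=1):
        s = max(0.0, s + x - k)
        if s > h:
            return t
    return None
```

A nonzero starting value shortens the run length after a shift, which is exactly the fast-initial-response effect described above.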

17.
Local influence is a well-known method for identifying influential observations in a dataset and is commonly needed in a statistical analysis. In this paper, we study the local influence on the parameters of interest in the seemingly unrelated regression model with ridge estimation, when there exists collinearity among the explanatory variables. We examine two types of perturbation schemes to identify influential observations: the perturbation of variance and the perturbation of individual explanatory variables. Finally, the efficacy of our proposed method is illustrated by analyzing Munnell's (1990) productivity dataset.

18.
Summary The scientific attitude towards statistical method has always pursued two basic objectives: identifying false assumptions and selecting, amongst the likely assertions, those which are most consistent with a given system. The methodological demarcation between rejecting a statistical statement because it is "false" and excluding it because it is "least probable" lies in the fundamental premises of the inferential procedures. In the first class we find the methods proposed by Fisher, Neyman and Pearson; in the second, the Bayesian techniques. Even if different inferential theories may coexist, any particular solution has a limit of validity strictly bounded to the conventional procedural rules on which it is based. Invited paper at the Conference on "Statistical Tests: Methodology and Econometric Applications", held in Bologna, Italy, 27–28 May 1993.

19.
Unit root tests with structural break developed by Zivot and Andrews (1992) and Perron and Rodriguez (2003) are studied by simulation experiments in the presence of additive outliers and breaks. The results show that the Zivot–Andrews test suffers size distortions due to the additive outliers, whereas the Perron–Rodriguez test exhibits good size and power properties. However, both tests are biased when a second break is present but not taken into account. Furthermore, these endogenous-break unit root tests tend to locate the break point incorrectly, one period behind the true break point, leading to spurious rejections of the unit root null hypothesis.

20.
In a missing data setting, we have a sample in which a vector of explanatory variables x_i is observed for every subject i, while scalar responses y_i are missing by happenstance for some individuals. In this work we propose robust estimators of the distribution of the responses, assuming missing at random (MAR) data, under a semiparametric regression model. Our approach allows the consistent estimation of any weakly continuous functional of the responses' distribution. In particular, strongly consistent estimators of any continuous location functional, such as the median, L-functionals and M-functionals, are proposed. A robust fit for the regression model, combined with the robustness properties of the location functional, gives rise to a robust recipe for estimating the location parameter. Robustness is quantified through the breakdown point of the proposed procedure. The asymptotic distribution of the location estimators is also derived. The proofs of the theorems are presented in Supplementary Material available online. The Canadian Journal of Statistics 41: 111–132; 2013 © 2012 Statistical Society of Canada
