Similar Documents (20 results)
1.
Mudholkar and Srivastava [1993. Exponentiated Weibull family for analyzing bathtub failure data. IEEE Trans. Reliability 42, 299–302] introduced the three-parameter exponentiated Weibull distribution. The two-parameter exponentiated exponential, or generalized exponential, distribution is a particular member of the exponentiated Weibull family. The generalized exponential distribution has a right-skewed unimodal density function and a monotone hazard function, similar to the density and hazard functions of the gamma and Weibull distributions. It is observed that it can be used quite effectively to analyze lifetime data in place of the gamma, Weibull and log-normal distributions. This article discusses the genesis of the model, several of its properties, different estimation procedures and their properties, estimation of the stress-strength parameter, and the closeness of this distribution to some well-known distribution functions.
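The generalized exponential distribution discussed above has CDF F(x; α, λ) = (1 − e^(−λx))^α for x > 0. A minimal sketch of its CDF, density and hazard function follows; the parameter names `alpha` and `lam` are my own, since the abstract fixes no notation:

```python
import numpy as np

def ge_cdf(x, alpha, lam):
    """CDF of the generalized (exponentiated) exponential distribution."""
    return (1.0 - np.exp(-lam * x)) ** alpha

def ge_pdf(x, alpha, lam):
    """Density: alpha * lam * exp(-lam*x) * (1 - exp(-lam*x))^(alpha - 1)."""
    return alpha * lam * np.exp(-lam * x) * (1.0 - np.exp(-lam * x)) ** (alpha - 1)

def ge_hazard(x, alpha, lam):
    """Hazard function h(x) = f(x) / (1 - F(x))."""
    return ge_pdf(x, alpha, lam) / (1.0 - ge_cdf(x, alpha, lam))

# alpha = 1 reduces the model to the ordinary exponential distribution
x = np.linspace(0.1, 5.0, 50)
assert np.allclose(ge_cdf(x, 1.0, 2.0), 1.0 - np.exp(-2.0 * x))
```

For α = 1 the distribution is exponential (constant hazard), while for α > 1 the hazard is increasing, mirroring the gamma- and Weibull-like shapes the abstract mentions.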

2.
Summary. In this survey the most applicable order relations between linear experiments are studied. For linear normal experiments, the cases of known and unknown variances require sophisticated arguments from linear algebra and some tools from convexity theory. The comparison of linear experiments also casts some new light on the popular statistical notions of sufficiency and deficiency.

3.
Nonstationary panel data analysis: an overview of some recent developments
This paper overviews some recent developments in panel data asymptotics, concentrating on the nonstationary panel case, and gives a new result for models with individual effects. Underlying recent theory are asymptotics for multi-indexed processes in which both indexes may pass to infinity. We review some of the new limit theory that has been developed, show how it can be applied, and give a new interpretation of individual effects in nonstationary panel data. Fundamental to the interpretation of much of the asymptotics is the concept of a panel regression coefficient, which measures the long-run average relation across a section of the panel. This concept is analogous to the statistical interpretation of the coefficient in a classical regression relation. A variety of nonstationary panel data models are discussed, and the paper reviews the asymptotic properties of estimators in these various models. Some recent developments in panel unit root tests and stationary dynamic panel regression models are also reviewed.
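The "long-run average relation" idea can be illustrated with a small simulation: pooled OLS on a cointegrated panel with unit-root regressors recovers a common long-run coefficient. This is only a hypothetical sketch; the panel dimensions, the homogeneous-coefficient design, and the true value `beta` are my own choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, beta = 50, 200, 2.0  # hypothetical panel dimensions and true coefficient

# x_it is a random walk (unit-root regressor), one per cross-sectional unit
x = np.cumsum(rng.standard_normal((n, T)), axis=1)
# cointegrated panel relation: y_it = beta * x_it + stationary error
y = beta * x + rng.standard_normal((n, T))

# pooled OLS estimates the long-run average relation across the panel
beta_hat = np.sum(x * y) / np.sum(x * x)
```

With nonstationary regressors the signal `sum(x*x)` grows at rate n·T², so the pooled estimator converges quickly to the common coefficient, which is the intuition behind the multi-index asymptotics the abstract describes.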

5.
AStA Advances in Statistical Analysis - It is well known that standard tests for a mean shift are invalid in long-range dependent time series. Therefore, several long-memory robust extensions of...

6.
Summary. Will the UK's aging population be fit and independent, or will it suffer from greater chronic ill health? Healthy life expectancy represents the expected number of years of healthy well-being that a life-table cohort would experience if current age-specific rates of mortality and disability prevailed throughout the cohort's lifetime. Robust estimation of this life expectancy is thus essential for examining whether additional years of life are spent in good health and whether life expectancy is increasing faster than rates of disability are declining. The paper examines a means of generating estimates of life expectancy for healthy and unhealthy people in the UK that are consistent with exogenous population mortality data. The method takes population transition matrices and adjusts them in a statistically coherent way so as to render them consistent with aggregate life-tables.

7.
In recent years an increase in nonresponse rates in major government and social surveys has been observed. It is thought that decreasing response rates and changes in nonresponse bias may affect, potentially severely, the quality of survey data. This paper discusses the problem of unit and item nonresponse in government surveys from an applied perspective and highlights some newer developments in this field, with a focus on official statistics in the United Kingdom (UK). The main focus of the paper is on post-survey adjustment methods, in particular adjustment for item nonresponse. The use of various imputation and weighting methods is discussed in an example. The application also illustrates the close relationship between missing data and measurement error. JEL classification: C42, C81.

8.
9.
Abstract

The CONSER Summit, held in March 2004, was a meeting of representatives from all library service areas, the serials industry, and standards communities. It was organized as an opportunity for the Program for Cooperative Cataloging (PCC) to shape strategies on the provision and sharing of metadata for electronic resources. This column is an update on actions taken on some of the summit recommendations.

10.
11.
The use of lower probabilities is considered for inferences in basic jury scenarios, to study aspects of the size of juries and their composition when society consists of subpopulations. The use of lower probability seems natural in law, as it leads to robust inference in the sense of providing a defendant with the benefit of the doubt. The method presented in this paper focuses on how representative a jury is of the whole population, using a novel concept of a second 'imaginary' jury together with exchangeability assumptions. It has the advantage that no assumption is made regarding the guilt of the defendant. Although the concept of a jury in law is central in the presentation, the novel approach and the conclusions of this paper hold for representative decision-making processes in many fields, and it also provides a new perspective on stratified sampling.

12.
In this article we consider a town in which the thoroughfares are laid out in a rectangular grid. Using the l1 metric, we determine the Dirichlet regions for competitive convenience stores. Under the assumption of normality, we exemplify a technique for calculating the probability of the Dirichlet region associated with each convenience store. This method is generalized to the case where intersecting thoroughfares are oblique. Finally, the example is used to illustrate the calculation of posterior Pitman's measure of closeness for various Bayesian estimators.
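The l1 (taxicab) Dirichlet regions described above can be sketched numerically: assign each point to the store with the smallest l1 distance, then estimate each region's probability under a bivariate normal by Monte Carlo. The store locations and the customer distribution below are hypothetical, and the paper's exact (non-simulation) technique is not reproduced here:

```python
import numpy as np

# hypothetical store locations on a rectangular street grid
stores = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])

def nearest_store(points, stores):
    """Assign each point to the store minimizing the l1 (taxicab) distance."""
    d = np.abs(points[:, None, :] - stores[None, :, :]).sum(axis=2)  # (N, k)
    return d.argmin(axis=1)

# Monte Carlo estimate of the probability of each l1 Dirichlet region
# under a hypothetical bivariate normal population of customers
rng = np.random.default_rng(1)
pts = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(100_000, 2))
labels = nearest_store(pts, stores)
probs = np.bincount(labels, minlength=len(stores)) / len(pts)
```

Under the l1 metric the region boundaries are piecewise linear but not the straight bisectors of the Euclidean case, which is why the grid-layout assumption matters.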

13.
In non-randomized biomedical studies using the proportional hazards model, the data often constitute an unrepresentative sample of the underlying target population, which results in biased regression coefficients. The bias can be avoided by weighting included subjects by the inverse of their respective selection probabilities, as proposed by Horvitz & Thompson (1952) and extended to the proportional hazards setting for use in surveys by Binder (1992) and Lin (2000). In practice, the weights are often estimated and must be treated as such in order for the resulting inference to be accurate. The authors propose a two-stage weighted proportional hazards model in which, at the first stage, weights are estimated through a logistic regression model fitted to a representative sample from the target population. At the second stage, a weighted Cox model is fitted to the biased sample. The authors propose estimators for the regression parameter and the cumulative baseline hazard. They derive the asymptotic properties of the parameter estimators, accounting for the difference in the variance introduced by the randomness of the weights. They evaluate the accuracy of the asymptotic approximations in finite samples through simulation. They illustrate their approach in an analysis of renal transplant patients using data obtained from the Scientific Registry of Transplant Recipients.

14.
Les Hawkins, Serials Review, 2013, 39(3), 220–221
Abstract

The CONSER Summit on Serials in the Digital Environment was held March 18–19, 2004. Attendees represented all library service areas, commercial information services, and standards development. The program included panel discussions and facilitated breakout sessions that provided specific recommendations for shaping CONSER and the Program for Cooperative Cataloging efforts in providing metadata for electronic resources.

15.
Abstract

Elsevier's Strategic Partners Program (SPP) convened a forum prior to the American Library Association's 2003 Midwinter conference in Philadelphia, PA. Two presentations from that forum are published here to indicate the enormous challenges, initiatives, and changes that face all of us in the serials, publishing, and information environments.

16.
This paper gives simple approximations for the distribution function and quantiles of the sum X + Y when X is a continuous variable and Y is an independent variable with variance small compared to that of X. The approximations are based around the distribution function or quantiles of X and require only the first two or three moments of Y to be known. Example evaluations with X having a normal, Student's t or chi-squared distribution suggest that the approximations are good in unbounded tail regions when the ratio of variances is less than 0.2.
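The flavour of such an approximation can be sketched for X ~ N(0, 1): Taylor-expanding P(X + Y ≤ s) = E[Φ(s − Y)] in Y around μ_Y gives Φ(s − μ) + (σ²/2)·Φ″(s − μ), where Φ″(z) = −z·φ(z). This is the generic two-moment version; the paper's exact expansions and its quantile approximations may differ:

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def approx_cdf_sum(s, mu_y, var_y):
    """Two-moment approximation of P(X + Y <= s) for X ~ N(0, 1):
    E[Phi(s - Y)] ~ Phi(s - mu) + (var/2) * Phi''(s - mu),
    where Phi''(z) = -z * phi(z)."""
    z = s - mu_y
    return Phi(z) + 0.5 * var_y * (-z * phi(z))
```

As a check, if Y ~ N(0, v) then X + Y ~ N(0, 1 + v) exactly, and for a small variance ratio such as v = 0.1 the approximation at s = 2 agrees with Φ(s/√(1+v)) to about three decimal places, consistent with the abstract's "ratio of variances less than 0.2" guidance.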

17.
The order of the increase in the Fisher information measure contained in a finite number k of additive statistics or sample quantiles, constructed from a sample of size n, is investigated as n → ∞. It is shown that the Fisher information in additive statistics increases asymptotically linearly in n if 2 + δ moments of the additive statistics exist for some δ > 0. If this condition does not hold, the order of increase in the information is non-linear and the information may even decrease. The problem of asymptotic sufficiency of sample quantiles is investigated and some linear analogues of the maximum likelihood equations are constructed.
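The simplest instance of the linear-growth result can be checked by simulation. For an i.i.d. sample X_1, …, X_n ~ N(θ, 1), the score of the sample sum (an additive statistic) is Σ(x_i − θ), so its Fisher information I_n(θ) = Var(score) = n. This toy model easily satisfies the 2 + δ moment condition; it is my own illustration, not an example from the paper:

```python
import numpy as np

# Estimate I_n(theta) = Var(score) by Monte Carlo for several n and
# observe the linear growth in n.
rng = np.random.default_rng(2)
theta, reps = 0.0, 200_000
infos = {}
for n in (5, 10, 20):
    x = rng.normal(theta, 1.0, size=(reps, n))
    score = (x - theta).sum(axis=1)  # score of the sample sum at theta
    infos[n] = score.var()           # should be close to n
```

When the moment condition fails (e.g. very heavy-tailed summands), this variance need not stabilize at rate n, which is the regime of non-linear growth the abstract describes.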

18.
Summary. We develop a flexible class of Metropolis–Hastings algorithms for drawing inferences about population histories and mutation rates from deoxyribonucleic acid (DNA) sequence data. Match probabilities for use in forensic identification are also obtained, which is particularly useful for mitochondrial DNA profiles. Our data augmentation approach, in which the ancestral DNA data are inferred at each node of the genealogical tree, simplifies likelihood calculations and permits a wide class of mutation models to be employed, so that many different types of DNA sequence data can be analysed within our framework. Moreover, simpler likelihood calculations imply greater freedom for generating tree proposals, so that algorithms with good mixing properties can be implemented. We incorporate the effects of demography by means of simple mechanisms for changes in population size and structure, and we estimate the corresponding demographic parameters, but we do not here allow for the effects of either recombination or selection. We illustrate our methods by application to four human DNA data sets, consisting of DNA sequences, short tandem repeat loci, single-nucleotide polymorphism sites and insertion sites. Two of the data sets are drawn from the male-specific Y-chromosome, one from maternally inherited mitochondrial DNA and one from the β-globin locus on chromosome 11.
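The accept/reject core common to all Metropolis–Hastings samplers fits in a few lines. The paper's proposals act on genealogical trees and mutation parameters, so the scalar random-walk version below is only a generic illustration; the `step` tuning parameter and the standard-normal target are my own choices:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler (generic sketch)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_target(prop)
        # accept with probability min(1, target(prop)/target(x))
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# target: standard normal, log-density up to an additive constant
draws = metropolis_hastings(lambda z: -0.5 * z * z, 0.0, 50_000)
```

The point the abstract makes about data augmentation is visible even here: the sampler needs `log_target` only up to a constant, so anything that simplifies the likelihood evaluation directly widens the class of usable proposals.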

19.
20.
This paper elaborates on earlier contributions of Bross (1985) and Millard (1987), who point out that when conducting conventional hypothesis tests in order to "prove" environmental hazard or environmental safety, unrealistically large sample sizes are required to achieve acceptable power with customarily used values of the Type I error probability. These authors also note that "proof of safety" typically requires much larger sample sizes than "proof of hazard". When the sample has yet to be selected and it is feared that the sample size will be insufficient to conduct a reasonable.
