Similar documents
20 similar documents found
1.
2.
Three linear prediction methods for a single missing value in a stationary first-order multiplicative spatial autoregressive model are proposed, based on the quarter observations, the observations in the first neighborhood, and the observations in the nearest neighborhood. Three different types of innovations are considered: Gaussian (symmetric and thin-tailed), exponential (skewed to the right), and asymmetric Laplace (skewed and heavy-tailed). In each case, the proposed predictors are compared under two well-known criteria: mean squared prediction error and Pitman's measure of closeness. Parameter estimation is performed by maximum likelihood, least squares, and Markov chain Monte Carlo (MCMC).
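The comparison of predictors by mean squared prediction error and Pitman's measure of closeness can be sketched in a small simulation. The grid model below is a generic spatially smoothed Gaussian field, not the paper's multiplicative SAR, and both neighbourhood predictors are simple averages chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_grid(n=30, rho=0.4):
    # Generic smoothed Gaussian field (a stand-in for the paper's model):
    # iterate x = rho * (mean of 4 neighbours) + e toward a fixed point.
    e = rng.normal(size=(n, n))
    x = e.copy()
    for _ in range(20):
        x = rho * 0.25 * (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                          + np.roll(x, 1, 1) + np.roll(x, -1, 1)) + e
    return x

trials = 400
mspe = {"first_neigh": [], "quarter": []}
closer = 0
for _ in range(trials):
    g = simulate_grid()
    i, j = 15, 15
    truth = g[i, j]                                             # pretend this cell is missing
    p1 = (g[i-1, j] + g[i+1, j] + g[i, j-1] + g[i, j+1]) / 4    # first-neighbourhood mean
    p2 = (g[i-1, j] + g[i, j-1]) / 2                            # one "quarter" of the neighbours
    mspe["first_neigh"].append((p1 - truth) ** 2)
    mspe["quarter"].append((p2 - truth) ** 2)
    closer += abs(p1 - truth) < abs(p2 - truth)                 # Pitman-closeness count

print(np.mean(mspe["first_neigh"]), np.mean(mspe["quarter"]), closer / trials)
```

The last printed number estimates the probability that the first-neighbourhood predictor lands closer to the missing value than the quarter predictor, which is how Pitman's measure of closeness compares a pair of predictors.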

3.
4.
Growing concern about the health effects of exposure to pollutants and other chemicals in the environment has stimulated new research to detect and quantify environmental hazards. This research has generated many interesting and challenging methodological problems for statisticians. One type of statistical research develops new methods for the design and analysis of individual studies. Because current research of this type is too diverse to summarize in a single article, we discuss current work in two areas of application: the carcinogen bioassay in small rodents and epidemiologic studies of air pollution. To assess the risk of a potentially harmful agent, one must frequently combine evidence from different and often quite dissimilar studies. Hence, this paper also discusses the central role of data synthesis in risk assessment, reviews some of the relevant statistical literature, and considers the role of statisticians in evaluating and combining evidence from diverse sources.

5.
A brief review of the methods of multidimensional analysis used in demography is presented, with reference to their use in the study of geographical differences, internal relationships, and dynamic concepts. Consideration is given to the development of such multidimensional analyses in Poland.

6.
Since 1989, there has been a major and unprecedented decline in the breeding population of willow warblers (Phylloscopus trochilus) in southern Britain. Between 1986 and 1993 the numbers of willow warbler territories counted on monitoring plots declined by 47% in southern Britain, compared to a decline of 7% in northern Britain. Breeding densities of willow warblers are generally higher in the north and west of Britain than in the south. Data from nest record cards provided evidence of only minor regional differences in breeding performance, with a small but significant increase in the loss rate of nests during the nestling stage in 1989-1992 in southern Britain, compared with 1974-1988. Mark-recapture data collected at 18 constant-effort sites and from one intensive study were used to estimate apparent survival rates of adults during the period 1987-1993. Program SURGE4 was used to test for differences in survival rates and recapture probabilities between years, sexes, sites and regions. Recapture probabilities differed between sites and between the sexes but not between years. Survival rates differed significantly between years (in southern Britain) but not between sexes or sites. In southern Britain, adult survival declined from 45% during 1987-1988 to 24% during 1991-1992, while in northern Britain there was no evidence that survival changed during the same period. Although the pattern of annual variation in survival differed between northern and southern Britain, this was due mainly to a much lower survival rate in southern Britain during 1991-1992. Declining survival rates of adult willow warblers have probably been a major cause of the observed population decline.

7.
This paper discusses two problems that can occur when using central composite designs (CCDs), problems that are not generally covered in the literature but can lead to wrong decisions (and therefore incorrect models) if they are ignored. Most industrial experimental designs are sequential. This usually involves running as few initial tests as possible while obtaining as much information as is needed to provide a reasonable approximation to reality (the screening stage). The CCD strategy generally requires running a full or fractional factorial design (the cube or hypercube) with one or more additional centre points. The cube is augmented, if deemed necessary, by additional experiments known as star points. The first problem highlighted here concerns the decision whether or not to run the star points. If the difference between the average response at the centre of the design and the average of the cube results is significant, there is probably a need for one or more quadratic terms in the predictive model; if not, a simpler model that includes only main effects and interactions is usually considered sufficient. This test for 'curvature' in a main effect will often fail if the design space contains or surrounds a saddle point, since such a point may disguise the need for a quadratic term. This paper describes the occurrence of a real saddle point in an industrial project and how it was overcome. The second problem occurs because the cube and star-point portions of a CCD are sometimes run as orthogonal blocks. Indeed, theory suggests that this is the correct procedure. However, in the industrial context, where minimizing the total number of tests is at a premium, this can lead to designs with star points a long way from the cube.
In such a situation, were the curvature test found non-significant, we could end up with a model that predicted well within the cube portion of the design space but was unreliable in the rest of the region of investigation. The paper discusses just such a design, one that disguised the real need for a quadratic term.
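The centre-versus-cube curvature check described above can be sketched numerically. The responses below are hypothetical (a 2^2 factorial with three centre replicates), and the pooled t-statistic is the standard textbook form, not necessarily the exact test used in the paper:

```python
import numpy as np

# Hypothetical 2^2 factorial corner runs plus replicated centre points.
cube = np.array([54.0, 45.0, 32.0, 47.0])
centre = np.array([42.0, 44.0, 43.0])

y_f = cube.mean()                 # average of the cube (factorial) results
y_c = centre.mean()               # average response at the design centre
s2 = centre.var(ddof=1)           # pure-error variance from the centre replicates
nf, nc = len(cube), len(centre)

# Curvature t-statistic: (y_f - y_c) / sqrt(s2 * (1/nf + 1/nc)).
# A significant value suggests quadratic terms (and hence star points) are needed.
t = (y_f - y_c) / np.sqrt(s2 * (1 / nf + 1 / nc))
print(round(y_f - y_c, 3), round(t, 3))   # prints 1.5 1.964
```

Note the caveat the abstract raises: near a saddle point the cube average and the centre average can agree closely even when strong curvature is present, so a non-significant t here does not by itself rule out quadratic terms.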

8.

9.
Estimation and prediction in generalized linear mixed models are often hampered by intractable high-dimensional integrals. This paper provides a framework to resolve this intractability, using asymptotic expansions when the number of random effects is large. To that end, we first derive a modified Laplace approximation when the number of random effects increases at a lower rate than the sample size. Second, we propose an approximate likelihood method based on the asymptotic expansion of the log-likelihood using the modified Laplace approximation, which is maximized using a quasi-Newton algorithm. Finally, we define the second-order plug-in predictive density based on a similar expansion to the plug-in predictive density and show that it is a normal density. Our simulations show that our method performs better than other approximations. The methods are readily applied to non-Gaussian spatial data and, as an example, an analysis of the rhizoctonia root rot data is presented.
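The workhorse behind such approximate likelihoods is the Laplace approximation. A one-random-effect sketch (a toy Poisson model with a single Gaussian random effect, not the paper's modified multi-effect expansion) compares it against brute-force numerical integration:

```python
import numpy as np
from math import exp, log, sqrt, pi, factorial

# Marginal likelihood of y ~ Poisson(exp(b)), b ~ N(0, s2), by 1-D Laplace:
# integral of exp(h(b)) db with h(b) = y*b - e^b - b^2/(2*s2) (plus constants).
def laplace_marginal(y, s2, iters=50):
    b = 0.0
    for _ in range(iters):              # Newton iteration to the mode of h
        grad = y - exp(b) - b / s2      # h'(b)
        hess = -exp(b) - 1.0 / s2       # h''(b) < 0 everywhere, so h is concave
        b -= grad / hess
    h = y * b - exp(b) - b * b / (2 * s2)
    const = -log(factorial(y)) - 0.5 * log(2 * pi * s2)
    return exp(h + const) * sqrt(2 * pi / -hess)   # Laplace: exp(h) * sqrt(2*pi/-h'')

def brute_force_marginal(y, s2, grid=200_001):
    b = np.linspace(-10.0, 10.0, grid)
    f = np.exp(y * b - np.exp(b) - b**2 / (2 * s2))
    f *= 1.0 / (factorial(y) * sqrt(2 * pi * s2))
    return float(np.sum(f) * (b[1] - b[0]))        # simple Riemann sum

lap = laplace_marginal(3, 0.5)
num = brute_force_marginal(3, 0.5)
print(lap, num)
```

Even this plain (unmodified) Laplace approximation lands within a few percent of the numerically integrated marginal here; the paper's contribution is controlling this kind of error as the number of random effects grows.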

10.
A probability inequality for conditionally independent and identically distributed (i.i.d.) random variables obtained recently by the author is applied to ranking and selection problems. It is shown that under both the indifference-zone and the subset formulations, the probability of a correct selection (PCS) is a cumulative probability of conditionally i.i.d. random variables. Therefore bounds on both the PCS and the sample size required can be obtained from that probability inequality. Applications of the inequality to other multiple decision problems are also considered. It is illustrated that general results concerning conditionally i.i.d. random variables are applicable to many problems in multiple decision theory.
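The indifference-zone PCS is easy to check by simulation. The sketch below estimates the PCS for selecting the largest of k normal means in the least favourable configuration; it is illustrative only, since the paper's contribution is analytic bounds rather than simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# PCS under the indifference-zone formulation: select the population with the
# largest sample mean; the best mean leads the rest by exactly delta (the
# least favourable configuration). All settings below are illustrative.
def pcs(k=4, n=20, delta=0.5, reps=100_000):
    means = np.zeros(k)
    means[-1] = delta                                  # population k is "correct"
    xbar = rng.normal(means, 1 / np.sqrt(n), size=(reps, k))
    return float(np.mean(xbar.argmax(axis=1) == k - 1))

p = pcs()
print(p)
```

A bound of the kind the abstract describes would guarantee a lower limit on this probability for a given sample size n, or conversely the n needed to reach a target PCS.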

11.
"The purpose of this article is to show that if many characteristics affect the mortality of individuals, there are intrinsic limits to the ability of demographers to answer two elementary questions:" whether the force of mortality in the last year was more or less severe in one country relative to that in a second, and whether an individual's chance of survival would have been greater in one or the other of the two countries. The author notes that the conclusions are applicable to all demographic crude rates. "The possibility of encountering Simpson's paradox suggests that since sex is only one of many possible stratifying variables that appear to affect mortality, the use of mortality tables distinguished by sex and by no other variables is, in the absence of information about the importance of other variables, demographically arbitrary."  相似文献   

12.
This paper extends the concept of risk unbiasedness for applying to statistical prediction and nonstandard inference problems, by formalizing the idea that a risk unbiased predictor should be at least as close to the "true" predictant as to any "wrong" predictant, on the average. A novel aspect of our approach is measuring closeness between a predicted value and the predictant by a regret function, derived suitably from the given loss function. The general concept is more relevant than mean unbiasedness, especially for asymmetric loss functions. For squared error loss, we present a method for deriving best (minimum risk) risk unbiased predictors when the regression function is linear in a function of the parameters. We derive a Rao–Blackwell type result for a class of loss functions that includes squared error and LINEX losses as special cases. For location-scale families, we prove that if a unique best risk unbiased predictor exists, then it is equivariant. The concepts and results are illustrated with several examples. One interesting finding is that in some problems a best unbiased predictor does not exist, but a best risk unbiased predictor can be obtained. Thus, risk unbiasedness can be a useful tool for selecting a predictor.

13.
Bradley (1958) proposed a very simple procedure for constructing Latin square designs to counterbalance the immediate sequential effect for an even number of treatments. When the number of treatments is odd, balance in a single Latin square is not possible. In the present note we develop an analogous method for the construction of such designs which may be used for an even or odd number of treatments. A proof is also offered to establish the general validity of the procedure.
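For an even number of treatments, the classical construction starts from the first row 0, n-1, 1, n-2, 2, ... and cyclically shifts it, which balances immediate sequential (first-order carryover) effects. The sketch below uses that standard Williams-type construction as a plausible stand-in for Bradley's procedure and verifies the balance property:

```python
from collections import Counter

def balanced_latin_square(n):
    """Counterbalanced Latin square for even n: every ordered pair of
    immediately adjacent treatments occurs exactly once across all rows."""
    first, lo, hi = [], 0, n - 1
    for k in range(n):                   # interleave 0,1,2,... with n-1,n-2,...
        if k % 2 == 0:
            first.append(lo); lo += 1
        else:
            first.append(hi); hi -= 1
    # Remaining rows are cyclic shifts of the first row.
    return [[(first[j] + i) % n for j in range(n)] for i in range(n)]

sq = balanced_latin_square(6)
# Count every ordered pair of adjacent treatments over all rows.
pairs = Counter((row[j], row[j + 1]) for row in sq for j in range(5))
print(all(v == 1 for v in pairs.values()), len(pairs))   # prints True 30
```

With n = 6 there are 6 rows of 5 adjacencies, i.e. 30 ordered pairs, and each of the 30 possible ordered pairs of distinct treatments appears exactly once, which is the counterbalancing the note is about.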

14.
O. D. Anderson, Statistics, 2013, 47(3): 399-406
Box and Jenkins introduced the concept of invertibility for reasons which are argued here to be largely irrelevant. However, the concept has some value, since the boundary between invertible and "strongly" non-invertible moving-average parameter sets gives rise to bounds on the autocorrelations. As well as being of academic interest, these bounds may be useful for identifying processes.
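The bound alluded to above is simplest in the MA(1) case: for x_t = e_t + theta*e_{t-1}, the lag-1 autocorrelation rho_1 = theta / (1 + theta^2) can never exceed 1/2 in absolute value, with the extreme attained exactly on the invertibility boundary theta = ±1. A quick numerical sweep confirms this (illustration only; the paper treats general moving-average parameter sets):

```python
import numpy as np

# Sweep the MA(1) parameter and locate the maximum lag-1 autocorrelation.
theta = np.linspace(-5.0, 5.0, 100_001)
rho1 = theta / (1 + theta**2)
print(rho1.max(), theta[rho1.argmax()])   # max is 1/2, attained at theta = 1
```

This is why a sample lag-1 autocorrelation well above 0.5 argues against an MA(1) model altogether, which is the sense in which such bounds help identify processes.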

15.
In this paper the indicator approach in spatial data analysis is presented for the determination of probability distributions to characterize the uncertainty about any unknown value. Such an analysis is non-parametric and is done independently of the estimate retained. These distributions are given through a series of quantile estimates and are not related to any particular prior model or shape. Moreover, determination of these distributions accounts for the data configuration and data values. An application is discussed. Moreover, some properties related to the Gaussian model are presented.

16.
The author describes topics included in a study of spatial, social, and occupational mobility in Poland. These include rural migration, the effect of the family life cycle on migration, the social impact of migration, the effect of migration on spatial distribution, and migration prospects until the end of this century.

17.
The analysis of word frequency count data can be very useful in authorship attribution problems. Zero-truncated generalized inverse Gaussian–Poisson mixture models are very helpful in the analysis of these kinds of data because their model-mixing density estimates can be used as estimates of the density of the word frequencies of the vocabulary. It is found that this model provides excellent fits for the word frequency counts of very long texts, where the truncated inverse Gaussian–Poisson special case fails because it does not allow for the large degree of over-dispersion in the data. The role played by the three parameters of this truncated GIG-Poisson model is also explored. Our second goal is to compare the fit of the truncated GIG-Poisson mixture model with the fit of the model that results from switching the order of the mixing and truncation stages. A heuristic interpretation of the mixing distribution estimates obtained under this alternative GIG-truncated Poisson mixture model is also provided.

18.
Good interdisciplinary research requires genuine teamwork and appreciation of the different skills contributed by the partners involved. By the nature of our subject, statisticians are often very important contributors to this kind of research. This short note offers some brief reflections on the issues of communication which are usually present. The skills required, and the need to promote these in undergraduate education and beyond, have been well documented in the literature but are further emphasised here.

19.
20.
For a random sample of size n from an absolutely continuous random vector (X, Y), let Y_{i:n} be the ith Y-order statistic and Y_{[j:n]} be the Y-concomitant of X_{j:n}. We determine the joint pdf of Y_{i:n} and Y_{[j:n]} for all i, j = 1 to n, and establish some symmetry properties of the joint distribution for symmetric populations. We discuss the uses of the joint distribution in the computation of moments and probabilities of various ranks for Y_{[j:n]}. We also show how our results can be used to determine the expected cost of mismatch in broken bivariate samples and to approximate the first two moments of ratios of linear functions of Y_{i:n} and Y_{[j:n]}. For the bivariate normal case, we compute the expectation of the product of Y_{i:n} and Y_{[i:n]} for n = 2 to 8 for selected values of the correlation coefficient and illustrate their uses.
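The joint behaviour of an order statistic and its same-rank concomitant is easy to probe by Monte Carlo. The sketch below estimates E[Y_{i:n} Y_{[i:n]}] for a standard bivariate normal pair (the function name and settings are illustrative, and simulation replaces the paper's exact computation):

```python
import numpy as np

rng = np.random.default_rng(1)

def concomitant_moment(n, i, corr, reps=200_000):
    """Monte Carlo estimate of E[Y_{i:n} * Y_{[i:n]}] for a standard
    bivariate normal (X, Y) with correlation `corr`; i is 1-based."""
    x = rng.normal(size=(reps, n))
    y = corr * x + np.sqrt(1.0 - corr**2) * rng.normal(size=(reps, n))
    y_sorted = np.sort(y, axis=1)                    # Y-order statistics Y_{i:n}
    order = np.argsort(x, axis=1)
    y_conc = np.take_along_axis(y, order, axis=1)    # concomitants Y_{[j:n]}
    return float(np.mean(y_sorted[:, i - 1] * y_conc[:, i - 1]))

m = concomitant_moment(n=2, i=2, corr=0.5)
print(m)
```

As corr approaches 1 the concomitant coincides with the order statistic and the product moment tends to E[Y_{2:2}^2] = 1; at corr = 0 the concomitant is a randomly chosen Y and the moment drops to 1/2, so intermediate correlations land between those extremes.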


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号