Similar Articles
20 similar articles retrieved.
1.
Smoothed nonparametric kernel spectral density estimates are considered for stationary data observed on a d-dimensional lattice. We examine the implications of the choice of kernel and bandwidth for edge-effect bias; under some circumstances the bias is dominated by the edge effect. We show that this problem can be mitigated by tapering. Some extensions and related issues are discussed.
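As a hedged illustration (not the authors' estimator; the taper fraction and the white-noise test case are arbitrary choices), the sketch below computes a cosine-tapered periodogram for 2-D lattice data, showing how tapering down-weights boundary observations before the FFT:

```python
import numpy as np

def tukey_taper(n, frac=0.1):
    """1-D Tukey (tapered cosine) window; frac is the tapered share per end."""
    t = np.ones(n)
    m = int(frac * n)
    if m > 0:
        ramp = 0.5 * (1 - np.cos(np.pi * (np.arange(m) + 0.5) / m))
        t[:m], t[-m:] = ramp, ramp[::-1]
    return t

def tapered_periodogram(x, frac=0.1):
    """Tapered periodogram of 2-D lattice data (d = 2 for simplicity)."""
    n1, n2 = x.shape
    h = np.outer(tukey_taper(n1, frac), tukey_taper(n2, frac))
    xt = (x - x.mean()) * h                 # taper damps the boundary
    # Normalizing by sum(h^2) keeps the estimate on the spectral-density scale.
    return np.abs(np.fft.fft2(xt)) ** 2 / ((2 * np.pi) ** 2 * np.sum(h ** 2))

x = np.random.default_rng(0).normal(size=(64, 64))          # white noise: flat spectrum
print(tapered_periodogram(x).mean(), 1 / (2 * np.pi) ** 2)  # should be close
```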

2.
Ranked set sampling (RSS) was first proposed by McIntyre [1952. A method for unbiased selective sampling, using ranked sets. Australian J. Agricultural Res. 3, 385–390] as an effective way to estimate the unknown population mean. Chuiv and Sinha [1998. On some aspects of ranked set sampling in parametric estimation. In: Balakrishnan, N., Rao, C.R. (Eds.), Handbook of Statistics, vol. 17. Elsevier, Amsterdam, pp. 337–377] and Chen et al. [2004. Ranked Set Sampling—Theory and Application. Lecture Notes in Statistics, vol. 176. Springer, New York] have provided excellent surveys of RSS and various inferential results based on RSS. In this paper, we use the idea of order statistics from independent and non-identically distributed (INID) random variables to propose ordered ranked set sampling (ORSS) and then develop optimal linear inference based on ORSS. We determine the best linear unbiased estimators based on ORSS (BLUE-ORSS) and show that they are more efficient than BLUE-RSS for the two-parameter exponential, normal and logistic distributions. Although this is not the case for the one-parameter exponential distribution, the relative efficiency of the BLUE-ORSS (to BLUE-RSS) is very close to 1. Furthermore, we compare both BLUE-ORSS and BLUE-RSS with the BLUE based on order statistics from a simple random sample (BLUE-OS). We show that BLUE-ORSS is uniformly better than BLUE-OS, while BLUE-RSS is not as efficient as BLUE-OS for small sample sizes (n < 5).
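For intuition, here is a minimal sketch of balanced RSS itself (perfect ranking and a standard normal population are assumed; the ordered variant and the BLUEs of the paper are not implemented):

```python
import numpy as np

rng = np.random.default_rng(1)

def ranked_set_sample(m, k, draw=lambda size: rng.normal(size=size)):
    """Balanced RSS: per cycle, draw m sets of m units and keep the
    r-th order statistic of the r-th set (perfect ranking assumed)."""
    out = []
    for _ in range(k):                      # k cycles
        for r in range(m):
            out.append(np.sort(draw(m))[r])
    return np.array(out)

rss = ranked_set_sample(m=3, k=100)
srs = rng.normal(size=rss.size)             # simple random sample, same size
print(rss.mean(), srs.mean())               # both unbiased; RSS has smaller variance
```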

3.
In this article two-stage hierarchical Bayesian models are used for the observed occurrences of events in a rectangular region. Two Bayesian variable window scan statistics are introduced to test the null hypothesis that the observed events follow a specified two-stage hierarchical model against an alternative that indicates a local increase in the average number of observed events in a subregion (clustering). Both procedures are based on a sequence of Bayes factors and their p-values, generated via simulation of posterior samples of the parameters under the null and alternative hypotheses. The posterior samples of the parameters are generated by Gibbs sampling via the introduction of auxiliary variables. Numerical results are presented to evaluate the performance of these variable window scan statistics.
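The paper's procedure is Bayesian, built on Bayes factors computed from Gibbs samples; as a much simpler point of reference, this sketch implements a classical fixed-window scan statistic on the unit square with a Monte Carlo p-value under a homogeneous Poisson null (window size, grid, and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_window_count(pts, w=0.2, grid=10):
    """Maximum event count over a grid of w-by-w windows in [0, 1]^2."""
    best = 0
    for cx in np.linspace(0, 1 - w, grid):
        for cy in np.linspace(0, 1 - w, grid):
            inside = ((pts[:, 0] >= cx) & (pts[:, 0] <= cx + w) &
                      (pts[:, 1] >= cy) & (pts[:, 1] <= cy + w))
            best = max(best, int(inside.sum()))
    return best

obs = rng.random((100, 2))                  # stand-in for the observed events
t_obs = max_window_count(obs)
null = [max_window_count(rng.random((100, 2))) for _ in range(200)]
print("Monte Carlo p-value:", np.mean([t >= t_obs for t in null]))
```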

4.
We exploit Bayesian criteria for designing M/M/c//r queueing systems with spares. To illustrate our approach we use a real problem from aeronautic maintenance, where the numbers of repair crews and spare planes must be sufficiently large to meet the necessary operational capacity. Bayesian guarantees for this to happen can be given using predictive or posterior distributions.
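A minimal sketch of the queueing ingredient (illustrative parameters, not the paper's case study): the stationary distribution of the M/M/c//r machine-repair queue follows from its birth-death balance equations, giving the probability that enough planes are operational:

```python
import numpy as np

def mmc_r_steady_state(lam, mu, c, r):
    """pi[n] = P(n units down) for the M/M/c//r machine-repair queue:
    failure rate (r - n) * lam, repair rate min(n, c) * mu."""
    p = np.ones(r + 1)
    for n in range(1, r + 1):
        p[n] = p[n - 1] * (r - n + 1) * lam / (min(n, c) * mu)
    return p / p.sum()

c, r = 2, 10                                # 2 repair crews, 10 planes
pi = mmc_r_steady_state(lam=0.1, mu=1.0, c=c, r=r)
need = 8                                    # hypothetical operational requirement
print("P(at least 8 planes operational):", pi[: r - need + 1].sum())
```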

5.
6.
We propose a method for the analysis of a spatial point pattern, which is assumed to arise as a set of observations from a spatial nonhomogeneous Poisson process. The spatial point pattern is observed in a bounded region, which, for most applications, is taken to be a rectangle in the space where the process is defined. The method is based on modeling a density function, defined on this bounded region, that is directly related with the intensity function of the Poisson process. We develop a flexible nonparametric mixture model for this density using a bivariate Beta distribution for the mixture kernel and a Dirichlet process prior for the mixing distribution. Using posterior simulation methods, we obtain full inference for the intensity function and any other functional of the process that might be of interest. We discuss applications to problems where inference for clustering in the spatial point pattern is of interest. Moreover, we consider applications of the methodology to extreme value analysis problems. We illustrate the modeling approach with three previously published data sets. Two of the data sets are from forestry and consist of locations of trees. The third data set consists of extremes from the Dow Jones index over a period of 1303 days.
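The paper's Dirichlet process mixture model is beyond a short sketch; as background, here is the kind of data it targets, a nonhomogeneous Poisson process on the unit square simulated by thinning (the intensity surface below is a hypothetical choice):

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_nhpp(intensity, lam_max, rng):
    """Lewis' thinning: homogeneous candidates at rate lam_max on the
    unit square, each kept with probability intensity(s) / lam_max."""
    n = rng.poisson(lam_max)                # unit area, so mean count = lam_max
    pts = rng.random((n, 2))
    keep = rng.random(n) < intensity(pts) / lam_max
    return pts[keep]

# Hypothetical intensity: a single bump, mimicking a cluster of trees.
intensity = lambda p: 200 * np.exp(-5 * ((p[:, 0] - 0.3) ** 2 + (p[:, 1] - 0.7) ** 2))
pts = simulate_nhpp(intensity, lam_max=200.0, rng=rng)
print(len(pts), "events")
```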

7.
For a 2-step monotone missing dataset drawn from a multivariate normal population, a T²-type statistic (analogous to Hotelling's T²) and the likelihood ratio (LR) are often used to test hypotheses about the mean vector. With complete data, Hotelling's T² test and the LR test are equivalent; with 2-step monotone missing data they are not, so it is natural to ask which statistic is preferable in terms of power. In this paper, we derive the asymptotic power functions of both statistics under a local alternative and obtain an explicit form for their difference. Furthermore, under several parameter settings, we compare the LR and T²-type tests numerically using differences in empirical power and in asymptotic power. Summarizing the results, we recommend the LR test for testing a mean vector.
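For reference, a sketch of the complete-data case that both statistics reduce to: Hotelling's T² with its exact F null distribution (simulated data; the 2-step monotone missing-data versions are the paper's subject and are not implemented here):

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, mu0):
    """Hotelling's T^2 for H0: mu = mu0, with its exact F p-value."""
    n, p = X.shape
    d = X.mean(axis=0) - mu0
    S = np.cov(X, rowvar=False)             # unbiased sample covariance
    t2 = n * d @ np.linalg.solve(S, d)
    f_stat = (n - p) / (p * (n - 1)) * t2   # ~ F(p, n - p) under H0
    return t2, stats.f.sf(f_stat, p, n - p)

X = np.random.default_rng(0).normal(size=(30, 3))
print(hotelling_t2(X, mu0=np.zeros(3)))
```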

8.
Recently Jammalamadaka and Mangalam [2003. Non-parametric estimation for middle censored data. J. Nonparametric Statist. 15, 253–265] introduced a general censoring scheme called the "middle-censoring" scheme in a non-parametric setup. In this paper we consider this middle-censoring scheme when the lifetimes of the items are exponentially distributed and the censoring mechanism is independent and non-informative. In this setup, we derive the maximum likelihood estimator and study its consistency and asymptotic normality. We also derive the Bayes estimate of the exponential parameter under a gamma prior. Since a theoretical construction of the credible interval is quite difficult, we propose and implement a Gibbs sampling technique to construct credible intervals. Monte Carlo simulations are performed to evaluate the small-sample behavior of the proposed techniques. A real data set is analyzed to illustrate the practical application of the proposed methods.
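A minimal sketch of the frequentist part under hypothetical data: the middle-censored exponential log-likelihood uses the density for fully observed lifetimes and the interval probability e^(-λl) - e^(-λr) for censored ones, maximized numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik(lam, x_obs, intervals):
    """Negative log-likelihood of the middle-censored exponential model."""
    if lam <= 0:
        return np.inf
    ll = np.sum(np.log(lam) - lam * x_obs)              # uncensored part
    for l, r in intervals:                              # censored part
        ll += np.log(np.exp(-lam * l) - np.exp(-lam * r))
    return -ll

x_obs = np.array([0.3, 1.2, 0.7, 2.5])      # fully observed lifetimes (hypothetical)
intervals = [(0.5, 1.5), (1.0, 3.0)]        # middle-censored: only the interval is seen
res = minimize_scalar(neg_loglik, bounds=(1e-6, 20.0),
                      args=(x_obs, intervals), method="bounded")
print("MLE of the exponential rate:", res.x)
```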

9.
The authors develop default priors for the Gaussian random field model that includes a nugget parameter accounting for the effects of microscale variations and measurement errors. They present the independence Jeffreys prior, the Jeffreys‐rule prior and a reference prior and study posterior propriety of these and related priors. They show that the uniform prior for the correlation parameters yields an improper posterior. In the case of known regression and variance parameters, they derive the Jeffreys prior for the correlation parameters. They prove posterior propriety and obtain that the predictive distributions at ungauged locations have finite variance. Moreover, they show that the proposed priors have good frequentist properties, except for those based on the marginal Jeffreys‐rule prior for the correlation parameters, and illustrate their approach by analyzing a dataset of zinc concentrations along the river Meuse. The Canadian Journal of Statistics 40: 304–327; 2012 © 2012 Statistical Society of Canada

10.
This paper presents a new Laplacian approximation to the posterior density of η = g(θ). It has a simpler analytical form than that described by Leonard et al. (1989). The approximation of Leonard et al. requires a conditional information matrix Rη to be positive definite for every fixed η; in many cases, however, not all Rη are positive definite, the approximation cannot be normalized, and its computation fails. The new approximation may be modified so that the corresponding conditional information matrix is positive definite for every fixed η. In addition, a Bayesian procedure for contingency-table model checking is provided. An example of cross-classification between the educational level of a wife and the fertility-planning status of couples is used for illustration. Various Laplacian approximations are computed and compared in this example and in an example of public school expenditures, in the context of a Bayesian analysis of the multiparameter Fisher-Behrens problem.
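As a generic illustration (the standard Laplace method, not the paper's refined approximation), this sketch approximates a one-parameter posterior by a normal centred at the mode, with variance from the inverse Hessian, for a Bernoulli log-odds parameter under a flat prior:

```python
import numpy as np
from scipy.optimize import minimize

s, n = 7, 10                                 # hypothetical data: 7 successes in 10
# Negative log posterior of the log-odds theta under a flat prior (up to a constant).
neg_logpost = lambda th: -(s * th[0] - n * np.logaddexp(0.0, th[0]))

fit = minimize(neg_logpost, x0=np.zeros(1))
mode = fit.x[0]
h = 1e-5                                     # finite-difference second derivative
hess = (neg_logpost([mode + h]) - 2 * neg_logpost([mode])
        + neg_logpost([mode - h])) / h ** 2
print("Laplace approximation: N(%.3f, %.3f)" % (mode, 1.0 / hess))
```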

11.
12.
We introduce the Hausdorff α-entropy to study the strong Hellinger consistency of posterior distributions. We obtain general Bayesian consistency theorems which extend the well-known results of Barron et al. [1999. The consistency of posterior distributions in nonparametric problems. Ann. Statist. 27, 536–561], Ghosal et al. [1999. Posterior consistency of Dirichlet mixtures in density estimation. Ann. Statist. 27, 143–158] and Walker [2004. New approaches to Bayesian consistency. Ann. Statist. 32, 2028–2043]. As an application we strengthen previous results on Bayesian consistency of (normal) mixture models.

13.
14.
We propose a regime-switching autoregressive model and apply it to analyze the daily water discharge series of the River Tisza in Hungary. The dynamics are governed by two regimes, across which both the autoregressive coefficients and the innovation distributions change; moreover, the hidden regime indicator process is allowed to be non-Markovian. After examining stationarity and basic properties of the model, we turn to its estimation by Markov chain Monte Carlo (MCMC) methods and propose two algorithms. The values of the latent process serve as auxiliary parameters in the first, while the change points of the regimes do so in the second, in a reversible jump MCMC setting. After comparing the mixing performance of the two methods, the model is fitted to the water discharge data. Simulations show that it reproduces the important features of the water discharge series, such as the highly skewed marginal distribution and the asymmetric shape of the hydrograph.
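A minimal sketch of the data-generating side (hypothetical parameters, and a Markov regime process for simplicity, whereas the paper allows non-Markovian regimes): simulate a two-regime switching AR(1):

```python
import numpy as np

rng = np.random.default_rng(2)
phi = {0: 0.5, 1: 0.9}                      # AR coefficients per regime
sigma = {0: 1.0, 1: 2.0}                    # innovation scales per regime
stay = 0.95                                 # chance of staying in the current regime

T = 500
z = np.zeros(T, dtype=int)                  # hidden regime indicator
y = np.zeros(T)                             # observed series
for t in range(1, T):
    z[t] = z[t - 1] if rng.random() < stay else 1 - z[t - 1]
    y[t] = phi[z[t]] * y[t - 1] + sigma[z[t]] * rng.normal()
print("fraction of time in the high-volatility regime:", z.mean())
```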

15.
We consider simulation-based methods for exploration and maximization of expected utility in sequential decision problems, focusing on problems that require backward induction with analytically intractable expected utility integrals at each stage. We propose to use forward simulation to approximate the integral expressions, and a reduction of the allowable action space to avoid problems related to the increasing number of possible trajectories in the backward induction. The artificially reduced action space allows strategies to depend on the full history of earlier observations and decisions only indirectly, through a low-dimensional summary statistic. The proposed rule provides a finite-dimensional approximation to the unrestricted infinite-dimensional optimal decision rule. We illustrate the proposed approach with an application to an optimal stopping problem in a clinical trial.
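A toy sketch of the key idea: restrict the decision rule to a low-dimensional family (here a single acceptance threshold), estimate each rule's expected utility by forward simulation, and optimize over the restricted space. The stopping problem below (accept one of N uniform offers) is illustrative only, not the clinical-trial application:

```python
import numpy as np

rng = np.random.default_rng(5)
N, sims = 5, 20_000
offers = rng.random((sims, N))              # i.i.d. Uniform(0, 1) offers

def expected_utility(threshold):
    """Forward-simulate the restricted rule 'accept the first offer
    at or above the threshold' (the last offer must be accepted)."""
    take = offers >= threshold
    take[:, -1] = True
    first = take.argmax(axis=1)             # index of the first acceptance
    return offers[np.arange(sims), first].mean()

grid = np.linspace(0.0, 1.0, 101)           # the reduced action space
best = max(grid, key=expected_utility)
print("best threshold:", best, "estimated value:", expected_utility(best))
```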

16.
In this paper, we consider how to incorporate quantile information to improve estimator efficiency for regression models with missing covariates. We combine the quantile information with least-squares normal equations to construct unbiased estimating equations (EEs). The lack of smoothness of the objective EEs is overcome by replacing them with smooth approximations. Maximum smoothed empirical likelihood (MSEL) estimators are established based on inverse probability weighted (IPW) smoothed EEs, and their asymptotic properties are studied under some regularity conditions. Moreover, we develop two novel testing procedures for the underlying model. The finite-sample performance of the proposed methodology is examined in simulation studies. A real example is used to illustrate our methods.
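As a self-contained illustration of the smoothing device (a smoothed quantile estimating equation, not the paper's full IPW/MSEL machinery): the indicator in the non-smooth EE is replaced by a normal CDF so that standard root-finding applies:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.normal(size=200)
tau, h = 0.5, 0.1                           # quantile level and bandwidth

# Non-smooth EE: mean(1{x <= q}) - tau = 0; smooth it with a normal CDF.
g = lambda q: np.mean(norm.cdf((q - x) / h)) - tau
q_hat = brentq(g, x.min(), x.max())         # the smoothed EE is monotone in q
print(q_hat, np.quantile(x, tau))           # close to the sample median
```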

17.
This paper surveys research by Emanuel Parzen on how quantile functions provide elegant and applicable formulas that unify many statistical methods, especially frequentist and Bayesian confidence intervals and prediction distributions. Section 0: In honor of Ted Anderson's 90th birthday; Section 1: Quantile functions, endpoints of prediction intervals; Section 2: Extreme value limit distributions; Sections 3, 4: Confidence and prediction endpoint function: Uniform(0, θ), exponential; Sections 5, 6: Confidence quantile and Bayesian inference for normal parameters μ, σ; Section 7: Two independent samples confidence quantiles; Section 8: Confidence quantiles for proportions, Wilson's formula. We propose ways that Bayesians and frequentists can be friends!
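As a worked example of Section 8, here is Wilson's score interval for a binomial proportion (the z = 1.96 default gives a 95% interval; the data are hypothetical):

```python
import math

def wilson_interval(x, n, z=1.96):
    """Wilson score interval for p given x successes in n trials."""
    phat = x / n
    denom = 1 + z ** 2 / n
    centre = (phat + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

print(wilson_interval(8, 20))               # e.g. 8 successes in 20 trials
```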

18.
Coarse data is a general type of incomplete data that includes grouped data, censored data, and missing data. The likelihood‐based estimation approach with coarse data is challenging because the likelihood function is in integral form. The Monte Carlo EM algorithm of Wei & Tanner [Wei & Tanner (1990). Journal of the American Statistical Association, 85, 699–704] is adapted to compute the maximum likelihood estimator in the presence of coarse data. Stochastic coarse data is also covered and the computation can be implemented using the parametric fractional imputation method proposed by Kim [Kim (2011). Biometrika, 98, 119–132]. Results from a limited simulation study are presented. The proposed method is also applied to the Korean Longitudinal Study of Aging (KLoSA). The Canadian Journal of Statistics 40: 604–618; 2012 © 2012 Statistical Society of Canada
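A minimal sketch of Monte Carlo EM for coarse data in a simple case (interval-censored exponential lifetimes, with hypothetical data and 200 imputation draws): the E-step imputes each coarsened lifetime by truncated-exponential draws, and the M-step updates the rate in closed form:

```python
import numpy as np

rng = np.random.default_rng(3)
exact = np.array([0.4, 1.1, 0.8])           # fully observed lifetimes
coarse = [(0.5, 2.0), (1.0, 4.0)]           # coarsened: only the interval is recorded

def trunc_exp(lam, a, b, size):
    """Draws from an exponential(lam) truncated to (a, b), by inverse CDF."""
    u = rng.random(size)
    fa, fb = np.exp(-lam * a), np.exp(-lam * b)
    return -np.log(fa - u * (fa - fb)) / lam

lam, M = 1.0, 200
for _ in range(50):                         # MCEM iterations
    # E-step: Monte Carlo estimate of each coarsened lifetime's expectation.
    e_means = [trunc_exp(lam, a, b, M).mean() for a, b in coarse]
    # M-step: complete-data MLE, rate = n / (expected total lifetime).
    lam = (len(exact) + len(coarse)) / (exact.sum() + sum(e_means))
print("MCEM estimate of the rate:", lam)
```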

19.
In this paper, we consider a judgment post-stratified (JPS) sample of set size H from a location-scale family of distributions. In a JPS sample, the ranks of the measured units are random variables. By conditioning on these ranks, we derive the maximum likelihood estimators (MLEs) and best linear unbiased estimators (BLUEs) of the location and scale parameters. Since the ranks are random, by considering the conditional distributions of the ranks given the measured observations we construct Rao-Blackwellized versions of the MLEs and BLUEs. We show that the Rao-Blackwellized estimators always have smaller mean squared errors than the MLEs and BLUEs in a JPS sample. In addition, the paper provides empirical evidence for the efficiency of the proposed estimators through a series of Monte Carlo simulations.
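For intuition, a sketch of drawing a JPS sample under perfect ranking from a standard normal population, with a simple stratified estimator; the MLEs, BLUEs, and their Rao-Blackwellized versions are the paper's contribution and are not implemented here:

```python
import numpy as np

rng = np.random.default_rng(4)

def jps_sample(n, H):
    """Measure n units; rank each within its own comparison set of size H
    (perfect ranking from a standard normal population assumed)."""
    y = rng.normal(size=n)                  # the measured units
    ranks = np.empty(n, dtype=int)
    for i in range(n):
        comp = rng.normal(size=H - 1)       # unmeasured comparison units
        ranks[i] = 1 + np.sum(comp < y[i])  # judgment rank of y[i]
    return y, ranks

y, r = jps_sample(n=30, H=3)
# Simple JPS estimator: average the within-rank means (non-empty ranks only).
print(np.mean([y[r == h].mean() for h in np.unique(r)]))
```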

20.
Previous work has been carried out on the use of double-sampling schemes for inference from categorical data subject to misclassification. The double-sampling schemes utilize a sample of n units classified by both a fallible and a true device and another sample of n2 units classified only by the fallible device. In actual applications, one often has available a third sample of n1 units, classified only by the true device. In this article we develop techniques for fitting log-linear models under various misclassification structures for a general triple-sampling scheme. Estimation is by maximum likelihood and the fitted models are hierarchical. The methodology is illustrated by applying it to traffic safety data from a study on the effectiveness of belts in reducing injuries.
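As background for the double-sampling ingredient (a moment-based correction with hypothetical counts, not the article's hierarchical log-linear maximum likelihood for the triple scheme): the calibration sample estimates the misclassification matrix, which is then inverted to correct the fallible-only proportions:

```python
import numpy as np

# Calibration sample: both devices observed (rows = true, cols = fallible).
cal = np.array([[40.0, 5.0],
                [8.0, 47.0]])               # hypothetical counts
fall = np.array([130.0, 170.0])             # fallible-device-only counts

# Row-conditional misclassification probabilities P(fallible = j | true = i).
M = cal / cal.sum(axis=1, keepdims=True)
# Moment correction: p_fallible = M^T p_true, so solve for p_true.
p_true = np.linalg.solve(M.T, fall / fall.sum())
print("corrected category proportions:", p_true)
```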
