Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
We consider a sequence of contingency tables whose cell probabilities may vary randomly. The distribution of cell probabilities is modelled by a Dirichlet distribution. Bayes and empirical Bayes estimates of the log odds ratio are obtained. Emphasis is placed on estimating the risks associated with the Bayes, empirical Bayes and maximum likelihood estimates of the log odds ratio.

2.
A sequence of empirical Bayes estimators is defined for estimating, in a two-sample problem, the probability that X ≥ Y. The sequence is shown to be asymptotically optimal relative to a Ferguson Dirichlet process prior.
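As an illustration of the quantity being estimated (a plain plug-in estimate, not the empirical Bayes estimator the abstract constructs), P(X ≥ Y) can be estimated by the fraction of sample pairs with x ≥ y:

```python
from itertools import product

def prob_x_ge_y(xs, ys):
    """Plug-in (Mann-Whitney-type) estimate of P(X >= Y):
    the fraction of all (x, y) sample pairs with x >= y."""
    pairs = list(product(xs, ys))
    return sum(1 for x, y in pairs if x >= y) / len(pairs)

# Example with two small samples
estimate = prob_x_ge_y([2.0, 3.5, 5.1, 4.2], [1.0, 3.0, 2.5])
```

This is the Mann–Whitney-type statistic; the paper's sequence of empirical Bayes estimators instead shrinks toward a Ferguson Dirichlet process prior.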

3.
This paper presents a smooth empirical Bayes estimation technique based on nonparametric maximum likelihood estimation of the prior distribution. Posterior means based on this estimate of the prior are shown to be easily calculated for a variety of sampling situations. Examples involving normal and binomial sampling are given.

4.
Three-stage and ‘accelerated’ sequential procedures are developed for estimating the mean of a normal population when the population coefficient of variation (CV) is known. Instead of the usual estimator, i.e. the sample mean, Searls' estimator [Searls, D.T. (1964). The utilization of a known coefficient of variation in the estimation procedure. J. Amer. Statist. Assoc. 50:1225–1226] is utilized for the estimation purpose. It is established that Searls' estimator dominates the sample mean under the two sampling schemes.

5.
Several procedures for ranking populations according to the quantile of a given order have been discussed in the literature. These procedures deal with continuous distributions. This paper deals with the problem of selecting a population with the largest α-quantile from k ≥ 2 finite populations, where the size of each population is known. A selection rule is given based on the sample quantiles, where the samples are drawn without replacement. A formula for the minimum probability of a correct selection for the given rule, for a certain configuration of the population α-quantiles, is given in terms of the sample sizes.
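A minimal sketch of a quantile-based selection rule of this kind (the paper's exact rule and tie-breaking may differ; the quantile definition below is one common convention):

```python
import math

def sample_quantile(values, alpha):
    """Empirical alpha-quantile: the smallest order statistic whose
    rank is at least alpha times the sample size."""
    srt = sorted(values)
    k = max(1, math.ceil(alpha * len(srt)))
    return srt[k - 1]

def select_largest_quantile(samples, alpha):
    """Index of the sample with the largest alpha-quantile;
    ties are broken in favour of the lower index."""
    quants = [sample_quantile(s, alpha) for s in samples]
    return max(range(len(samples)), key=lambda i: quants[i])
```

The paper's contribution is the exact minimum probability of correct selection for such a rule when sampling without replacement from finite populations, which the sketch does not attempt.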

6.
This article introduces a nonparametric warping model for functional data. When the outcome of an experiment is a sample of curves, the data can be seen as realizations of a stochastic process, which takes into account the variations between the different observed curves. The aim of this work is to define a mean pattern which represents the main behaviour of the set of all the realizations. To this end, we define the structural expectation of the underlying stochastic function. We then provide empirical estimators of this structural expectation and of each individual warping function. Consistency and asymptotic normality of these estimators are proved.

7.
Small area estimation (SAE) concerns how to reliably estimate population quantities of interest when some areas or domains have very limited samples. This is an important issue in large population surveys, because the geographical areas or groups with only small samples, or even no samples, are often of interest to researchers and policy-makers. For example, large population health surveys, such as the Behavioral Risk Factor Surveillance System and the Ohio Medicaid Assessment Survey (OMAS), are regularly conducted for monitoring insurance coverage and healthcare utilization. Classic approaches usually provide accurate estimators at the state level or large geographical region level, but they fail to provide reliable estimators for many rural counties where the samples are sparse. Moreover, a systematic evaluation of the performance of SAE methods in real-world settings is lacking in the literature. In this paper, we propose a Bayesian hierarchical model with constraints on the parameter space and show that it provides superior estimators for county-level adult uninsured rates in Ohio based on the 2012 OMAS data. Furthermore, we perform extensive simulation studies to compare our methods with a collection of common SAE strategies, including direct estimators, synthetic estimators, composite estimators, and the Bayesian hierarchical model-based estimators of Datta, Ghosh, Steorts and Maples [Bayesian benchmarking with applications to small area estimation. Test 2011;20(3):574–588]. To set a fair basis for comparison, we generate our simulation data with characteristics mimicking the real OMAS data, so that neither model-based nor design-based strategies use the true model specification. The estimators based on our proposed model are shown to outperform other estimators for small areas in both the simulation study and the real data analysis.

8.
Summary: In this paper, we present results of the estimation of a two-panel-wave wage equation based on completely observed units and on a multiply imputed data set. In addition to the survey information, reliable income data are available from the register. These external data are used to assess the reliability of wage regressions that suffer from item nonresponse. The findings reveal marked differences between the complete case analyses and both versions of the multiple imputation analyses. We argue that the results based on the multiply imputed data sets are more reliable than those based on the complete case analysis. * We would like to thank Statistics Finland for providing the data. We are also very grateful to Susanna Sandström and Marjo Pyy-Martikainen for their helpful advice on using the Finnish data. Helpful comments on an earlier version of the paper from Joachim Winter and participants of the Workshop on Item Nonresponse and Data Quality in Large Social Surveys, Basel, October 2003, are gratefully acknowledged. Further, we would like to thank three anonymous referees and the editor for helpful comments and suggestions.

9.
The Enigma was a cryptographic (enciphering) machine used by the German military during WWII. The German navy changed part of the Enigma keys every other day. One of the important cryptanalytic attacks against the naval usage was called Banburismus, a sequential Bayesian procedure (anticipating sequential analysis) which was used from the spring of 1941 until the middle of 1943. It was invented mainly by A. M. Turing and was perhaps the first important sequential Bayesian procedure; it is unnecessary to describe it here. Before Banburismus could be started on a given day it was necessary to identify which of nine ‘bigram’ (or ‘digraph’) tables was in use on that day. In Turing's approach to this identification he had to estimate the probabilities of certain ‘trigraphs’. (These trigraphs were used, as described below, for determining the initial wheel settings of messages.) For estimating the probabilities, Turing invented an important special case of the nonparametric empirical Bayes method independently of Herbert Robbins. The technique is the surprising form of empirical Bayes in which a physical prior is assumed to exist but no approximate functional form is assumed for it.

10.
Segmentation of the mean of heteroscedastic data via cross-validation
This paper tackles the problem of detecting abrupt changes in the mean of a heteroscedastic signal by model selection, without knowledge on the variations of the noise. A new family of change-point detection procedures is proposed, showing that cross-validation methods can be successful in the heteroscedastic framework, whereas most existing procedures are not robust to heteroscedasticity. The robustness to heteroscedasticity of the proposed procedures is supported by an extensive simulation study, together with recent partial theoretical results. An application to Comparative Genomic Hybridization (CGH) data is provided, showing that robustness to heteroscedasticity can indeed be required for their analysis.

11.
The analysis of data using a stable probability distribution with tail parameter α<2 (sometimes called a Pareto–Lévy distribution) seems to have been avoided in the past, in part because of the lack of a significance test for the mean, even though it appears to be the correct distribution to use for describing returns in the financial markets. A z test for the significance of the mean of a stable distribution with tail parameter 1<α≤2 is defined. Tables are calculated and displayed for the 5% and 1% significance levels for a range of tail and skew parameters α and β. Through the use of maximum likelihood estimates, the test becomes a practical tool even when α and β are not that accurately determined. As an example, the z test is applied to the daily closing prices of the Dow Jones Industrial Average from 2 January 1940 to 19 March 2010.
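The standardisation behind such a z test follows from the stability property: the sum of n iid stable(α) variables with scale γ is stable with scale n^(1/α)·γ, so the sample mean has scale γ·n^(1/α − 1). A sketch of the statistic, assuming the parameters are known (the paper estimates them by maximum likelihood and supplies critical-value tables):

```python
def stable_z(sample_mean, n, alpha, gamma, delta0=0.0):
    """z statistic for the mean of a stable(alpha) sample with
    known scale gamma: by the stability property the sample mean
    has scale gamma * n**(1/alpha - 1), so standardise by that."""
    scale_of_mean = gamma * n ** (1.0 / alpha - 1.0)
    return (sample_mean - delta0) / scale_of_mean
```

Note that for α < 2 this scale shrinks more slowly than the Gaussian n^(−1/2) rate, which is one reason a normal-theory z test is inappropriate for heavy-tailed returns.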

12.
In the study of the reliability of technical systems, k-out-of-n systems play an important role. In the present paper, we consider an (n − k + 1)-out-of-n system consisting of n identical components such that the lifetimes of the components are independent and have a common distribution function F. It is assumed that the system is monitored l times, and that the total number of component failures at time t_i is m_i, i = 1, . . . , l − 1. Also, at time t_l (t_1 < . . . < t_l) the system has either failed or is still working. Under these conditions, the mean past lifetime and the mean residual lifetime of the system, and their properties, are investigated.
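For context, the reliability of such a system at a fixed time follows the standard binomial identity: an (n − k + 1)-out-of-n system of iid components, each working with probability p = 1 − F(t), works iff at least n − k + 1 components work. A sketch of this textbook identity (not the paper's mean past/residual lifetime results):

```python
from math import comb

def k_out_of_n_reliability(n, k_required, p):
    """Probability that at least k_required of n independent
    components, each working with probability p, are working:
    the upper tail of a Binomial(n, p) distribution."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k_required, n + 1))
```

For example, a 2-out-of-3 system with component reliability 0.9 has system reliability 0.972, higher than a single component but lower than a 1-out-of-3 (parallel) system.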

13.
14.
15.
The Birnbaum–Saunders (BS) distribution is a positively skewed distribution, frequently used for analysing lifetime data. In this paper, we propose a simple method of estimation for the parameters of the two-parameter BS distribution by making use of some key properties of the distribution. Compared with the maximum likelihood estimators and the modified moment estimators, the proposed method has smaller bias, while having the same mean square errors as these two estimators. We also discuss some methods for constructing confidence intervals. The performance of the estimators is then assessed by means of Monte Carlo simulations. Finally, an example is used to illustrate the method of estimation developed here.
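For reference, the modified moment estimators mentioned in the abstract as a benchmark have simple closed forms built from the arithmetic mean s and harmonic mean r of the data: β̂ = √(sr) and α̂ = √(2(√(s/r) − 1)). A sketch of that benchmark (not the new method the paper proposes):

```python
from math import sqrt

def bs_modified_moment(data):
    """Closed-form modified moment estimates (alpha_hat, beta_hat)
    for the two-parameter Birnbaum-Saunders distribution, using
    the arithmetic mean s and harmonic mean r of the sample."""
    n = len(data)
    s = sum(data) / n                      # arithmetic mean
    r = n / sum(1.0 / t for t in data)     # harmonic mean
    beta_hat = sqrt(s * r)
    alpha_hat = sqrt(2.0 * (sqrt(s / r) - 1.0))
    return alpha_hat, beta_hat
```

Because s ≥ r always holds, the expression under the outer square root is non-negative, so the estimates always exist for positive data.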

16.
ABSTRACT

We derive the exact distribution of the maximum likelihood estimator of the mean reversion parameter (κ) in the Ornstein–Uhlenbeck process using numerical integration through analytical evaluation of a joint characteristic function. Different scenarios are considered: known or unknown drift term, fixed or random start-up value, and zero or positive κ. Monte Carlo results demonstrate the remarkably reliable performance of our exact approach across all the scenarios. In comparison, misleading results may arise under the asymptotic distributions, including the advocated infill asymptotic distribution, which performs poorly in the tails when there is no intercept in the regression and the starting value of the process is nonzero.
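A sketch of the estimator whose exact distribution is studied, for the known-drift (zero-mean) case: the exact discretisation of the OU process over a step dt is x[t+1] = e^(−κ·dt)·x[t] + noise, so κ̂ = −log(ρ̂)/dt, where ρ̂ is the least-squares AR(1) coefficient (assumptions: zero mean, evenly spaced observations):

```python
from math import log

def ou_kappa_hat(x, dt):
    """Estimate the mean-reversion rate kappa of a zero-mean OU
    process observed at spacing dt: fit the exact AR(1)
    discretisation x[t+1] = exp(-kappa*dt) * x[t] + noise by
    least squares, then invert rho_hat = exp(-kappa_hat*dt)."""
    num = sum(a * b for a, b in zip(x[:-1], x[1:]))
    den = sum(a * a for a in x[:-1])
    rho = num / den
    return -log(rho) / dt
```

The paper's point is precisely that the asymptotic (including infill) approximations to the distribution of this κ̂ can be misleading, motivating the exact finite-sample distribution.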

17.
Intensity functions—which describe the spatial distribution of the occurrences of point processes—are useful for risk assessment. This paper deals with the robust nonparametric estimation of the intensity function of space–time data from events such as earthquakes. The basic approach consists of smoothing the frequency histograms with the local polynomial regression (LPR) estimator. This method allows for automatic boundary corrections, and its jump-preserving ability can be improved with robustness. We derive a robust local smoother from the weighted-average approach to M-estimation and we select its bandwidths with robust cross-validation (RCV). Further, we develop a robust recursive algorithm for sequential processing of the data binned in time. An extensive application to the Northern California earthquake catalog in the San Francisco, CA, area illustrates the method and proves its validity.

18.
"During the past twenty years Scandinavian countries have made changes in the methods of taking population and housing censuses that are more fundamental than any seen since modern census methods were first introduced two hundred years ago. These countries extract their census data in part or in whole from administrative registers. If other countries in Western Europe were to adopt this approach, most of them would have to make major improvements to their administrative records. But the primary reasons for making such improvements are concerned with administration and policy rather than statistics, namely, the need to secure a more effective and fairer system of public administration and to enable governments to exercise a wider range of policy options."

19.
The two-parameter Birnbaum–Saunders distribution is widely used to model failure times of fatiguing materials. Its maximum-likelihood estimators (MLEs) are very sensitive to outliers and also have no closed-form expressions. This motivates us to develop some alternative estimators. In this paper, we develop two robust estimators, which are also explicit functions of the sample observations and are thus easy to compute. We derive their breakdown points and carry out extensive Monte Carlo simulation experiments to compare the performance of all the estimators under consideration. The simulation results show that the proposed estimators perform approximately comparably with the MLEs on clean data, whereas they are far superior in the presence of the data contamination that often occurs in practical situations. A simple bias-reduction technique is presented to reduce the bias of the recommended estimators. Finally, the practical application of the developed procedures is illustrated with a real-data example.

20.