20 similar documents found (search time: 15 ms)
1.
Consider a sequence x ≡ (x1, …, xn) of n independent observations, in which each observation xi is known to be a realization from one of ki given populations, chosen among k (≥ ki) populations π1, …, πk. Our main objective is to study the problem of selecting the most reliable population πj at a fixed time ξ, when no assumptions about the k populations are made. Some numerical examples are presented.
2.
Thomas J. Santner, Communications in Statistics: Theory and Methods, 2013, 42(3): 283-292
Suppose π1, …, πk are k normal populations, with πi having unknown mean μi and unknown variance σ². The population πi is called δ*-optimal (or good) if μi is within a specified amount δ* of the largest mean. A two-stage procedure is proposed which selects a subset of the k populations and guarantees, with probability at least P*, that the selected subset contains only δ*-optimal πi's. In addition to screening out non-good populations, the rule guarantees that a high proportion of sufficiently good πi's will be selected.
3.
S. Sengupta, Communications in Statistics: Theory and Methods, 2017, 46(3): 1456-1461
We consider the problem of unbiased estimation of a finite population proportion and compare the relative efficiency of the unequal probability sampling strategies due to Horvitz and Thompson (1952) and Murthy (1957) under a super-population model. It is shown that the model-expected variance is smaller for Murthy's (1957) strategy both when the two sampling strategies are based on data obtained from (i) a direct survey and (ii) a randomized response (RR) survey employing an RR technique following a general RR model.
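As a rough illustration of the comparison (not the paper's derivation), the following Python sketch simulates both strategies for the direct-survey case with n = 2 successive probability-proportional draws, the classical setting for Murthy's estimator; the population values, draw probabilities, and replication count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_ppswor2(p, rng):
    """Draw n = 2 units without replacement, successive draws proportional to p."""
    units = np.arange(len(p))
    i = rng.choice(units, p=p)
    rest = units[units != i]
    j = rng.choice(rest, p=p[rest] / p[rest].sum())
    return i, j

def murthy_total(y, p, i, j):
    # Murthy (1957) unordered estimator of the population total for n = 2
    return ((1 - p[j]) * y[i] / p[i] + (1 - p[i]) * y[j] / p[j]) / (2 - p[i] - p[j])

def ht_total(y, p, i, j):
    # Horvitz-Thompson estimator; first-order inclusion probabilities for
    # this successive-draws scheme: pi_k = p_k * (1 + sum_j s_j - s_k), s = p/(1-p)
    s = p / (1 - p)
    pi = p * (1 + s.sum() - s)
    return y[i] / pi[i] + y[j] / pi[j]

N = 8
p = rng.dirichlet(np.ones(N) * 5)            # draw probabilities (assumed "sizes")
y = (rng.random(N) < 0.4).astype(float)      # 0/1 variable: total / N is a proportion

est = []
for _ in range(20_000):
    i, j = draw_ppswor2(p, rng)
    est.append((murthy_total(y, p, i, j), ht_total(y, p, i, j)))
est = np.array(est)
print("Murthy var:", est[:, 0].var(), " HT var:", est[:, 1].var())
```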
4.
Three-stage and 'accelerated' sequential procedures are developed for estimating the mean of a normal population when the population coefficient of variation (CV) is known. Instead of the usual estimator, i.e. the sample mean, Searls' (1964) estimator is used for estimation. It is established that Searls' estimator dominates the sample mean under both sampling schemes.
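Searls' (1964) estimator here is the shrunken mean x̄/(1 + CV²/n); a minimal Monte Carlo check of its dominance over x̄, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, cv, n = 10.0, 0.5, 15                  # true mean, known CV, sample size (assumed)
xbar = rng.normal(mu, cv * mu, size=(200_000, n)).mean(axis=1)
searls = xbar / (1 + cv**2 / n)            # Searls (1964): shrink xbar using the known CV

print("MSE(sample mean):", np.mean((xbar - mu) ** 2))
print("MSE(Searls)     :", np.mean((searls - mu) ** 2))   # smaller MSE
```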
5.
Small area estimation (SAE) concerns how to reliably estimate population quantities of interest when some areas or domains have very limited samples. This is an important issue in large population surveys, because the geographical areas or groups with only small samples, or even no samples, are often of interest to researchers and policy-makers. For example, large population health surveys, such as the Behavioral Risk Factor Surveillance System and the Ohio Medicaid Assessment Survey (OMAS), are regularly conducted to monitor insurance coverage and healthcare utilization. Classic approaches usually provide accurate estimators at the state level or for large geographical regions, but they fail to provide reliable estimators for many rural counties where samples are sparse. Moreover, a systematic evaluation of the performance of SAE methods in a real-world setting is lacking in the literature. In this paper, we propose a Bayesian hierarchical model with constraints on the parameter space and show that it provides superior estimators for county-level adult uninsured rates in Ohio based on the 2012 OMAS data. Furthermore, we perform extensive simulation studies to compare our methods with a collection of common SAE strategies, including direct estimators, synthetic estimators, composite estimators, and the Bayesian hierarchical model-based estimators of Datta, Ghosh, Steorts, and Maples [Bayesian benchmarking with applications to small area estimation. Test 2011;20(3):574–588]. To set a fair basis for comparison, we generate our simulation data with characteristics mimicking the real OMAS data, so that neither model-based nor design-based strategies use the true model specification. The estimators based on our proposed model are shown to outperform the other estimators for small areas in both the simulation study and the real data analysis.
6.
7.
The Fay–Herriot model is a linear mixed model that plays a relevant role in small area estimation (SAE). Under the SAE set-up, tools for selecting an adequate model are required. Applied statisticians are often interested in deciding whether it is worthwhile to use a mixed-effects model instead of a simpler fixed-effects model. This problem is not standard because, under the null hypothesis, the random-effect variance lies on the boundary of the parameter space. The likelihood ratio test and the residual likelihood ratio test are proposed and their finite-sample distributions are derived. Finally, we analyse their behaviour under simulated scenarios and also apply them to real data.
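A minimal sketch of the boundary testing problem: an intercept-only Fay–Herriot model fitted by profile maximum likelihood and an LRT of H0: A = 0, referred to the usual 0.5χ²₀ + 0.5χ²₁ asymptotic mixture (the paper derives finite-sample distributions; the mixture is used here only as the standard reference). Function names and the simulated inputs are assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def fh_loglik(A, y, D):
    """Profile log-likelihood of the intercept-only Fay-Herriot model
    y_i = beta + v_i + e_i, v_i ~ N(0, A), e_i ~ N(0, D_i) with D_i known."""
    w = 1.0 / (A + D)
    beta = np.sum(w * y) / np.sum(w)               # GLS intercept for given A
    return -0.5 * np.sum(np.log(A + D) + w * (y - beta) ** 2)

def lrt_random_effect(y, D):
    res = minimize_scalar(lambda A: -fh_loglik(A, y, D),
                          bounds=(0.0, 100.0 * np.var(y)), method="bounded")
    stat = max(0.0, 2.0 * (fh_loglik(res.x, y, D) - fh_loglik(0.0, y, D)))
    # boundary null: 0.5*chi2_0 + 0.5*chi2_1 reference distribution
    pval = 1.0 if stat == 0.0 else 0.5 * chi2.sf(stat, df=1)
    return stat, pval

rng = np.random.default_rng(2)
D = rng.uniform(0.5, 2.0, size=30)                 # known sampling variances
y = 1.0 + rng.normal(0.0, np.sqrt(0.5 + D))        # true area-effect variance A = 0.5
print(lrt_random_effect(y, D))
```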
8.
A no-constant strategy is considered for the heterogeneous autoregressive (HAR) model of Corsi, motivated by the smaller biases of its estimated HAR coefficients compared with those of the constant HAR model. The no-constant model produces better forecasts than the constant model for four real datasets of the realized volatilities (RVs) of some major assets. Robustness of the forecast improvement is verified for other functions of realized variance and log RV, and for the extended datasets of all 20 RVs in the Oxford-Man realized library. A Monte Carlo simulation also reveals improved forecasts for the historic HAR model estimated by Corsi.
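A sketch of the constant versus no-constant HAR regressions fitted by least squares; the simulated RV series and variable names are illustrative, not from the paper's datasets:

```python
import numpy as np

rng = np.random.default_rng(3)
rv = np.exp(rng.normal(-1.0, 0.5, size=600))     # stand-in daily realized variances

def har_design(rv):
    """Lagged daily, weekly (5-day mean) and monthly (22-day mean) RV regressors."""
    n = len(rv)
    d = rv[21:n - 1]
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, n - 1)])
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, n - 1)])
    return rv[22:], np.column_stack([d, w, m])

y, X = har_design(rv)
Xc = np.column_stack([np.ones(len(y)), X])
beta_c = np.linalg.lstsq(Xc, y, rcond=None)[0]   # constant HAR
beta_nc = np.linalg.lstsq(X, y, rcond=None)[0]   # no-constant HAR

# next-day forecast regressor built from the end of the series
x_next = np.array([rv[-1], rv[-5:].mean(), rv[-22:].mean()])
print("forecast (constant):   ", beta_c[0] + x_next @ beta_c[1:])
print("forecast (no constant):", x_next @ beta_nc)
```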
9.
J. A. Roldán-Nofuentes, R. M. Amro, Journal of Statistical Computation and Simulation, 2017, 87(3): 530-545
The case–control design for assessing the accuracy of a binary diagnostic test (BDT) is very common in clinical practice. This design consists of applying the diagnostic test to all individuals in a sample of those who have the disease and in another sample of those who do not. The sensitivity of the diagnostic test is estimated from the case sample and the specificity from the control sample. Another parameter used to assess the performance of a BDT is the weighted kappa coefficient, which depends on the sensitivity and specificity of the diagnostic test, on the disease prevalence, and on the weighting index. In this article, confidence intervals are studied for the weighted kappa coefficient under a case–control design, and a method is proposed to calculate the sample sizes needed to estimate this parameter. The results are applied to a real example.
10.
The present article discusses the statistical distribution of the estimator of Rosenthal's 'file-drawer' number NR, an estimator of the number of unpublished studies in a meta-analysis. We calculate the probability distribution function of NR, based on the central limit theorem and the proposition that certain components of the estimator NR follow a half-normal distribution derived from the standard normal distribution. The proposed distributions are supported by simulations and an investigation of convergence.
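Rosenthal's number solves (Σ zi)² / (k + N) = z_α² for the count N of unpublished zero-effect studies that would drive the Stouffer combined z below the one-sided critical value; a small sketch with made-up z-values:

```python
import numpy as np
from scipy.stats import norm

def fail_safe_n(z, alpha=0.05):
    """Rosenthal's file-drawer number N_R = (sum z_i)^2 / z_alpha^2 - k,
    for k observed studies with z-scores z and one-sided level alpha."""
    z = np.asarray(z, float)
    k = len(z)
    z_a = norm.ppf(1 - alpha)
    return z.sum() ** 2 / z_a ** 2 - k

print(fail_safe_n([2.1, 1.8, 2.5, 1.3]))   # ~17.9 hidden null studies needed
```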
11.
We assume that x1, …, xn+r can be treated as the sample values of a Markov chain of order r or less (a chain in which the dependence extends over r + 1 consecutive variables only), and consider the problem of testing the hypothesis H0 that a chain of order r − 1 is sufficient, on the basis of tools from statistical information theory: φ-divergences. More precisely, if p(a1, …, ar; ar+1) denotes the transition probability for an rth-order Markov chain, the hypothesis to be tested is H0: p(a1, …, ar; ar+1) = p(a2, …, ar; ar+1) for all ai ∈ {1, …, s}, i = 1, …, r + 1. The tests given in this paper, presented here for the first time, include the likelihood ratio test and the test based on the chi-squared statistic as particular cases.
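A sketch of the likelihood-ratio member of the φ-divergence family for the simplest case r = 1 (testing an order-0, i.e. i.i.d., chain against an order-1 chain); the function name and simulated data are assumptions:

```python
import numpy as np
from scipy.stats import chi2

def lrt_markov_order(x, s):
    """LRT of H0: order-0 chain vs. an order-1 Markov chain on states
    {0, ..., s-1}; G2 = 2 * sum n_ij * log((n_ij/n_i.) / (n_.j/n))."""
    x = np.asarray(x)
    n = np.zeros((s, s))
    for a, b in zip(x[:-1], x[1:]):
        n[a, b] += 1
    row = n.sum(axis=1, keepdims=True)       # n_{i.}
    col = n.sum(axis=0)                      # n_{.j}
    tot = n.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        term = n * np.log((n / row) / (col / tot))
    g2 = 2 * np.nansum(term)                 # cells with n_ij = 0 contribute 0
    df = (s - 1) ** 2
    return g2, chi2.sf(g2, df)

rng = np.random.default_rng(4)
x = rng.integers(0, 3, size=400)             # genuinely order-0 data
print(lrt_markov_order(x, s=3))              # large p-value expected
```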
12.
In this paper, we propose a multiple deferred state repetitive group sampling plan, a new plan developed by combining the features of the multiple deferred state sampling plan and the repetitive group sampling plan, for assuring Weibull- or gamma-distributed mean life of products. The quality of the product is represented by the ratio of the true mean life to the specified mean life. The two-points-on-the-operating-characteristic-curve approach is used to determine the optimal parameters of the proposed plan, which are obtained by formulating an optimization problem for various combinations of producer's risk and consumer's risk for both distributions. A sensitivity analysis of the proposed plan is discussed, and its implementation is explained using real-life and simulated data. The proposed plan under the Weibull distribution is compared with existing sampling plans. The average sample number (ASN) of the proposed plan and the failure probability of the product are obtained under the Weibull, gamma, and Birnbaum–Saunders distributions for a specified value of the shape parameter and compared with each other. In addition, a comparative study is made between the ASN of the proposed plan under the Weibull and gamma distributions.
13.
Jörg Drechsler, Agnes Dundler, Stefan Bender, Susanne Rässler, Thomas Zwick, AStA Advances in Statistical Analysis, 2008, 92(4): 439-458
For micro-datasets considered for release as scientific or public use files, statistical agencies face the dilemma of guaranteeing the confidentiality of survey respondents on the one hand and offering sufficiently detailed data on the other. For that reason, a variety of methods to guarantee disclosure control is discussed in the literature. In this paper, we present an application of Rubin's (J. Off. Stat. 9, 462–468, 1993) idea to generate synthetic datasets from existing confidential survey data for public release. We use a set of variables from the 1997 wave of the German IAB Establishment Panel and evaluate the quality of the approach by comparing the results of an analysis by Zwick (Ger. Econ. Rev. 6(2), 155–184, 2005) on the original data with the results we obtain for the same analysis run on the dataset after the imputation procedure. The comparison shows that valid inferences can be obtained using the synthetic datasets in this context, while confidentiality is guaranteed for the survey participants.
14.
In this article, we investigate nonparametric estimation of the conditional density of a scalar response variable Y, given an explanatory variable X taking values in a Hilbert space, when the observations are linked through a single-index structure. The goal is to present asymptotic results such as the pointwise almost-complete consistency and the uniform almost-complete convergence (with rates) of the kernel estimator of the conditional density in the setting of α-mixing functional data, extending the i.i.d. case of Attaoui et al. (2011) to the dependent setting. As an application, the convergence rate of the kernel estimator of the conditional mode is also obtained.
15.
16.
The profile likelihood of the reliability parameter θ = P(X < Y), or of the ratio of means, when X and Y are independent exponential random variables, has a simple analytical expression and is a powerful tool for making inferences. Inferences about θ can be given in terms of likelihood-confidence intervals with a simple algebraic structure, even for small and unequal samples. The case of right-censored data can also be handled in a simple way. This is in marked contrast with the complicated expressions, depending on cumbersome numerical calculations of multidimensional integrals, required to obtain the asymptotic confidence intervals traditionally presented in the scientific literature.
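A numerical sketch of the profile likelihood of θ: for exponentials with means μX, μY we have θ = μY/(μX + μY), the inner maximization under the constraint has a closed form, and an approximate 95% likelihood-confidence interval is read off at relative likelihood exp(−3.84/2) ≈ 0.147. Sample sizes and rates are illustrative:

```python
import numpy as np

def profile_loglik(theta, x, y):
    """Profile log-likelihood of theta = P(X < Y) = mu_y / (mu_x + mu_y)
    for independent exponential samples x and y."""
    n, m = len(x), len(y)
    c = theta / (1 - theta)                    # constraint: mu_y = c * mu_x
    mu_x = (x.sum() + y.sum() / c) / (n + m)   # closed-form inner maximizer
    mu_y = c * mu_x
    return (-n * np.log(mu_x) - x.sum() / mu_x
            - m * np.log(mu_y) - y.sum() / mu_y)

rng = np.random.default_rng(5)
x, y = rng.exponential(1.0, 30), rng.exponential(2.0, 25)

grid = np.linspace(0.005, 0.995, 199)
lp = np.array([profile_loglik(t, x, y) for t in grid])
rel = np.exp(lp - lp.max())                    # relative profile likelihood
inside = grid[rel >= 0.147]                    # ~95% likelihood-confidence interval
print("interval:", inside.min(), "to", inside.max())
```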
17.
P. Jagers, Statistics, 2013, 47(4): 455-464
For a suitable norm, conservation of the distance between expectation and hypothesis may furnish a basis for data reduction by invariance in the linear, not necessarily normal, model. If the norm is Euclidean (i.e. based on some inner product), the maximal invariant is a pair of sums of squares. This provides support for traditional χ² (or F) methods also in non-normal cases. If the norm is lp, p ≠ 2, or the sup-norm, the maximal invariant is at best a pair of order statistics.
18.
This article studies the construction of a Bayesian confidence interval for the risk difference in a 2×2 table with a structural zero. The exact posterior distribution of the risk difference is derived under a Dirichlet prior distribution, and a tail-based interval is used to construct the Bayesian confidence interval. The frequentist performance of the tail-based interval is investigated and compared with that of the score-based interval by simulation. Our results show that the tail-based interval under the Jeffreys prior performs as well as or better than the score-based confidence interval.
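A Monte Carlo sketch of a tail-based interval from the Dirichlet posterior over the three free cells (the paper derives the exact posterior; quantiles of posterior draws are used here instead, and the risk difference is taken to be the difference of the two marginal probabilities, p12 − p21, which is an assumption):

```python
import numpy as np

def tail_interval(counts, a=0.5, level=0.95, draws=200_000, seed=6):
    """Equal-tailed Bayesian interval for the risk difference in a 2x2
    table with a structural zero. counts = (n11, n12, n21); the (2,2)
    cell is impossible. Prior: Dirichlet(a, a, a), a = 0.5 ~ Jeffreys.
    Risk difference assumed here: delta = p12 - p21 (marginal difference)."""
    rng = np.random.default_rng(seed)
    post = rng.dirichlet(np.asarray(counts) + a, size=draws)
    delta = post[:, 1] - post[:, 2]
    lo, hi = np.quantile(delta, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

print(tail_interval((43, 10, 5)))   # hypothetical cell counts
```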
19.
We propose a new method for maximum likelihood estimation (MLE) of nonlinear mixed-effects models when the variance matrix of the Gaussian random effects has a prescribed pattern of zeros (PPZ). The method consists of coupling the recently developed Iterative Conditional Fitting (ICF) algorithm with the Expectation Maximization (EM) algorithm. It provides positive definite estimates for any sample size, and does not rely on any structural assumption concerning the PPZ. It can be easily adapted to many versions of EM.
20.
The purpose of this paper is to develop a detection algorithm for the first jump point in sampling trajectories of jump-diffusions described as solutions of stochastic differential equations driven by α-stable white noise. This is done by a multivariate Lagrange interpolation approach. To this end, we use a computer simulation algorithm in MATLAB to visualize the sampling trajectories of the jump-diffusions for various combinations of the parameters arising in the modelling structure of the stochastic differential equations.
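The sketch below simulates one trajectory with α-stable driving noise and flags the first abnormally large increment with a crude threshold rule; this is plainly not the paper's Lagrange-interpolation detector, it is written in Python rather than MATLAB, and all parameter values are assumptions:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(7)
alpha, n, dt = 1.5, 1000, 1e-3

# Euler scheme for dX_t = -X_t dt + dL_t, with L an alpha-stable Levy process;
# increments of L over a step dt are stable with scale dt**(1/alpha)
dL = levy_stable.rvs(alpha, 0, scale=dt ** (1 / alpha), size=n, random_state=rng)
x = np.empty(n + 1)
x[0] = 0.0
for t in range(n):
    x[t + 1] = x[t] - x[t] * dt + dL[t]

# crude detector: flag the first increment implausibly large relative to the
# typical step size (median absolute increment times a fixed multiplier)
incr = np.abs(np.diff(x))
thresh = 6 * np.median(incr)
jumps = np.flatnonzero(incr > thresh)
print("first detected jump index:", jumps[0] if jumps.size else None)
```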