Similar Literature
20 similar documents found (search time: 520 ms)
1.
Multipath fading is one of the most common distortions in wireless communications. The simulation of a fading channel typically requires drawing samples from a Rayleigh, Rice, or Nakagami distribution. The Nakagami-m distribution is particularly important due to its good agreement with empirical channel measurements, as well as its ability to generalize the well-known Rayleigh and Rice distributions. In this paper, a simple and extremely efficient rejection sampling (RS) algorithm for generating independent samples from a Nakagami-m distribution is proposed. This RS approach is based on a novel proposal density composed of three pieces of well-known densities from which samples can be drawn easily and efficiently. The proposed method is valid for any combination of parameters of the Nakagami distribution, without any restriction on the domain and without requiring any adjustment by the end user. Simulations for several parameter combinations show that the proposed approach attains acceptance rates above 90% in all cases, outperforming all the RS techniques currently available in the literature.
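
As a rough illustration of the technique (not the paper's three-piece proposal), the sketch below draws Nakagami-m samples by rejection from a deliberately simple half-normal proposal, with the envelope constant found numerically on a grid. The function names, the 5% safety margin, and the grid bound are our own assumptions, and this crude envelope is only valid for m >= 0.5.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def nakagami_pdf(x, m=2.0, omega=1.0):
    """Nakagami-m density: 2 m^m x^(2m-1) exp(-m x^2/omega) / (Gamma(m) omega^m)."""
    return 2 * m**m * x**(2*m - 1) * np.exp(-m * x**2 / omega) / (gamma_fn(m) * omega**m)

def rejection_sample_nakagami(n, m=2.0, omega=1.0, rng=None):
    """Generic rejection sampler with a half-normal proposal (valid for m >= 0.5)."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigma = np.sqrt(omega / m)                       # half-normal proposal scale
    # Crude envelope constant: grid-maximize the target/proposal ratio, add 5% margin.
    grid = np.linspace(1e-6, 6 * sigma, 4000)
    g_grid = np.sqrt(2 / np.pi) / sigma * np.exp(-grid**2 / (2 * sigma**2))
    M = 1.05 * np.max(nakagami_pdf(grid, m, omega) / g_grid)
    out = []
    while len(out) < n:
        x = np.abs(rng.normal(0.0, sigma))           # draw from half-normal proposal
        gx = np.sqrt(2 / np.pi) / sigma * np.exp(-x**2 / (2 * sigma**2))
        if rng.uniform() * M * gx <= nakagami_pdf(x, m, omega):
            out.append(x)                            # accept
    return np.array(out)

samples = rejection_sample_nakagami(10000, m=2.0, omega=1.0)
print("mean of x^2 (should be near omega = 1):", np.mean(samples**2))
```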

2.
Recently, Ong and Mukerjee [Probability matching priors for two-sided tolerance intervals in balanced one-way and two-way nested random effects models. Statistics. 2011;45:403–411] developed two-sided Bayesian tolerance intervals, with approximate frequentist validity, for a future observation in balanced one-way and two-way nested random effects models. These were obtained using probability matching priors (PMP). On the other hand, Krishnamoorthy and Lian [Closed-form approximate tolerance intervals for some general linear models and comparison studies. J Stat Comput Simul. 2012;82:547–563] studied closed-form approximate tolerance intervals by the modified large-sample (MLS) approach. We compare the performances of these two approaches for normal as well as non-normal error distributions. Monte Carlo simulation methods are used to evaluate the resulting tolerance intervals with regard to achieved confidence levels and expected widths. It turns out that PMP tolerance intervals are less conservative for data with a large number of classes and a small number of observations per class, while the MLS procedure is preferable for smaller sample sizes.
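
To make the Monte Carlo evaluation concrete, here is a minimal sketch for the far simpler i.i.d. normal case (not the nested random-effects models compared above): an approximate two-sided tolerance factor of the Howe large-sample type, with its achieved confidence checked by simulation. The function names are illustrative.

```python
import numpy as np
from scipy import stats

def howe_k(n, content=0.90, conf=0.95):
    """Approximate two-sided normal tolerance factor (Howe-type large-sample formula)."""
    nu = n - 1
    z = stats.norm.ppf((1 + content) / 2)
    chi2 = stats.chi2.ppf(1 - conf, nu)          # lower quantile at alpha = 1 - conf
    return z * np.sqrt(nu * (1 + 1 / n) / chi2)

def mc_coverage(n=20, content=0.90, conf=0.95, reps=20000, seed=1):
    """Fraction of simulated samples whose interval truly covers >= `content` mass."""
    rng = np.random.default_rng(seed)
    k = howe_k(n, content, conf)
    hits = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, n)
        lo, hi = x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)
        if stats.norm.cdf(hi) - stats.norm.cdf(lo) >= content:
            hits += 1
    return hits / reps

print("achieved confidence:", mc_coverage())     # should be near the nominal 0.95
```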

3.
The authors provide an overview of optimal scaling results for the Metropolis algorithm with Gaussian proposal distribution. They address in more depth the case of high‐dimensional target distributions formed of independent, but not identically distributed components. They attempt to give an intuitive explanation as to why the well‐known optimal acceptance rate of 0.234 is not always suitable. They show how to find the asymptotically optimal acceptance rate when needed, and they explain why it is sometimes necessary to turn to inhomogeneous proposal distributions. Their results are illustrated with a simple example.
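
A minimal sketch of the setting: random-walk Metropolis on an i.i.d. Gaussian target, measuring how the acceptance rate responds to the proposal scale (the classical 2.38/sqrt(d) scaling gives roughly 0.234). All names and constants below are illustrative, not from the paper.

```python
import numpy as np

def rwm_acceptance(d=50, scale=2.38, n_iter=20000, seed=0):
    """Random-walk Metropolis on a d-dim standard normal target; returns acceptance rate.
    Proposal sd = scale/sqrt(d); scale ~ 2.38 targets the classical ~0.234 rate."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    logp = -0.5 * x @ x
    accepts = 0
    sd = scale / np.sqrt(d)
    for _ in range(n_iter):
        y = x + sd * rng.normal(size=d)
        logq = -0.5 * y @ y
        if np.log(rng.uniform()) < logq - logp:
            x, logp = y, logq
            accepts += 1
    return accepts / n_iter

for scale in (0.5, 2.38, 6.0):
    print(scale, rwm_acceptance(scale=scale))    # too timid, near-optimal, too bold
```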

4.
We consider the optimal scaling problem for proposal distributions in Hastings–Metropolis algorithms derived from Langevin diffusions. We prove an asymptotic diffusion limit theorem and show that the relative efficiency of the algorithm can be characterized by its overall acceptance rate, independently of the target distribution. The asymptotically optimal acceptance rate is 0.574. We show that, as a function of dimension n, the complexity of the algorithm is O(n^{1/3}), which compares favourably with the O(n) complexity of random walk Metropolis algorithms. We illustrate this comparison with some example simulations.
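
A hedged sketch of a Langevin-based (MALA) sampler with step size scaled as O(d^{-1/3}); the constant 1.65 is an illustrative tuning choice for steering toward the quoted 0.574 acceptance rate, not a value from the paper.

```python
import numpy as np

def mala(n_iter=20000, d=20, h=None, seed=0):
    """Metropolis-adjusted Langevin on a d-dim standard normal target (grad log pi = -x)."""
    rng = np.random.default_rng(seed)
    if h is None:
        h = 1.65 * d ** (-1/3)                       # illustrative O(d^{-1/3}) step size
    x = np.zeros(d)
    def logpi(z): return -0.5 * z @ z
    accepts = 0
    for _ in range(n_iter):
        mu_x = x - 0.5 * h * x                       # x + (h/2) * grad log pi(x)
        y = mu_x + np.sqrt(h) * rng.normal(size=d)
        mu_y = y - 0.5 * h * y
        # Gaussian proposal correction: log q(x|y) - log q(y|x)
        log_fwd = -np.sum((y - mu_x)**2) / (2 * h)
        log_bwd = -np.sum((x - mu_y)**2) / (2 * h)
        if np.log(rng.uniform()) < logpi(y) - logpi(x) + log_bwd - log_fwd:
            x = y
            accepts += 1
    return accepts / n_iter

print("acceptance rate (tune h toward ~0.574):", mala())
```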

5.
This paper addresses the estimation of the unknown scale parameter of the half-logistic distribution based on a Type-I progressively hybrid censoring scheme. We evaluate the maximum likelihood estimate (MLE) via a numerical method and the EM algorithm, and also obtain the approximate maximum likelihood estimate (AMLE). We use a modified acceptance–rejection method to obtain the Bayes estimate and the corresponding highest posterior density credible intervals. We perform Monte Carlo simulations to compare the performances of the different methods, and we analyze one dataset for illustrative purposes.
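
For orientation only, a sketch of the likelihood step in the much simpler complete-sample case (the paper treats Type-I progressively hybrid censored data). The half-logistic density and inverse-CDF generator are standard; the rest is illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def half_logistic_nll(sigma, x):
    """Negative log-likelihood of the half-logistic with scale sigma:
    f(x) = (2/sigma) * exp(-x/sigma) / (1 + exp(-x/sigma))^2, x > 0."""
    z = x / sigma
    return -np.sum(np.log(2 / sigma) - z - 2 * np.log1p(np.exp(-z)))

rng = np.random.default_rng(3)
u = rng.uniform(size=200)
data = 1.5 * np.log((1 + u) / (1 - u))       # inverse-CDF draws, true sigma = 1.5

res = minimize_scalar(half_logistic_nll, bounds=(1e-3, 50), args=(data,),
                      method="bounded")
print("MLE of sigma (true 1.5):", res.x)
```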

6.
Rejection sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. The adaptive rejection sampling method is an efficient algorithm to sample from a log-concave target density that attains high acceptance rates by improving the proposal density whenever a sample is rejected. In this paper, we introduce a generalized adaptive rejection sampling procedure that can be applied with a broad class of target probability distributions, possibly non-log-concave and exhibiting multiple modes. The proposed technique yields a sequence of proposal densities that converge toward the target pdf, thus achieving very high acceptance rates. We provide a simple numerical example to illustrate the basic use of the proposed technique, together with a more elaborate positioning application using real data.
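
The sketch below illustrates the adaptive-rejection idea with a deliberately simple piecewise-constant envelope on a bounded interval, refined at every rejection; the classical method instead builds tangent-based hulls for log-concave targets. The crude 1.5x grid bound is an assumption that suffices only for smooth targets.

```python
import numpy as np

def adaptive_reject(target, lo, hi, n, n_init=8, seed=0):
    """Toy adaptive rejection sampler on [lo, hi]: each rejection splits the cell
    at the rejected point, so the envelope tightens and acceptance rates climb."""
    rng = np.random.default_rng(seed)
    edges = list(np.linspace(lo, hi, n_init + 1))

    def height(a, b):
        # Conservative cell height: grid max of the target times a safety factor.
        return 1.5 * target(np.linspace(a, b, 16)).max()

    heights = [height(edges[i], edges[i + 1]) for i in range(len(edges) - 1)]
    out = []
    while len(out) < n:
        w = np.array([h * (edges[i + 1] - edges[i]) for i, h in enumerate(heights)])
        i = rng.choice(len(heights), p=w / w.sum())      # pick a cell by area
        x = rng.uniform(edges[i], edges[i + 1])
        if rng.uniform() * heights[i] <= target(x):
            out.append(x)                                # accept
        else:                                            # reject: refine the envelope
            edges.insert(i + 1, x)
            heights[i:i + 1] = [height(edges[i], x), height(x, edges[i + 2])]
    return np.array(out)

# Bimodal, non-log-concave, unnormalized target on a bounded domain.
target = lambda x: np.exp(-0.5 * (x + 2)**2) + np.exp(-0.5 * (x - 2)**2)
s = adaptive_reject(target, -8, 8, 5000)
print("sample mean (should be near 0):", s.mean())
```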

7.
In-depth study of how incoming external information, and in particular the credibility of information content, affects consumers' acceptance of genetically modified (GM) food is of great significance for the Chinese government's GM science-popularization efforts. Using consumer survey data from Jiangsu Province, we quantitatively study how the credibility of information content and other factors affect consumers' acceptance of GM food; specifically, we quantify the effects of positive-attribute and negative-attribute information content on the acceptance of GM food across different risk groups (the full sample, a low-risk group, and a high-risk group). The results show that incoming external information more readily influences the low-risk group's acceptance of GM food. The credibility of positive-attribute and negative-attribute information content has, respectively, a significant positive and a significant negative effect on the low-risk group's acceptance of GM food, with the former effect being larger. The credibility of positive-attribute information content has only a weak effect on the high-risk group's acceptance of GM food, while the credibility of negative-attribute information content has almost no effect on the high-risk group.

8.
In this paper, we show how the construction of a trans-dimensional equivalent of the Gibbs sampler can be used to obtain a powerful suite of adaptive algorithms suitable for trans-dimensional MCMC samplers. These algorithms adapt at the local scale, optimizing performance at each iteration, in contrast to the globally adaptive scheme proposed by others for the fixed-dimensional problem. Our adaptive scheme ensures suitably high acceptance rates for MCMC and RJMCMC proposals without the need for (often prohibitively) time-consuming pilot-tuning exercises. We illustrate our methods using the problem of Bayesian model discrimination for the important class of autoregressive time series models and, through the use of a variety of prior and proposal structures, demonstrate their ability to provide powerful and effective adaptive sampling schemes.

9.
Markov chain Monte Carlo (MCMC) is an important computational technique for generating samples from non-standard probability distributions. A major challenge in the design of practical MCMC samplers is to achieve efficient convergence and mixing properties. One way to accelerate convergence and mixing is to adapt the proposal distribution in light of previously sampled points, thus increasing the probability of acceptance. In this paper, we propose two new adaptive MCMC algorithms based on the Independent Metropolis–Hastings algorithm. In the first, we adjust the proposal to minimize an estimate of the cross-entropy between the target and proposal distributions, using the experience of pre-runs. This approach provides a general technique for deriving natural adaptive formulae. The second approach uses multiple parallel chains, and involves updating chains individually, then updating a proposal density by fitting a Bayesian model to the population. An important feature of this approach is that adapting the proposal does not change the limiting distributions of the chains. Consequently, the adaptive phase of the sampler can be continued indefinitely. We include results of numerical experiments indicating that the new algorithms compete well with traditional Metropolis–Hastings algorithms. We also demonstrate the method for a realistic problem arising in Comparative Genomics.
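
A minimal sketch of the first idea in its simplest form: a pre-run under a wide independence proposal, followed by a Gaussian proposal fitted by moment matching (which minimizes the cross-entropy to the target within the Gaussian family). The mixture target, the 1.2 variance inflation, and all names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def log_target(x):
    """Unnormalized log-density: a 1-D two-component normal mixture."""
    return np.logaddexp(stats.norm.logpdf(x, -1.5, 1.0),
                        stats.norm.logpdf(x, 2.0, 0.7))

def independence_mh(n_iter, mu, sd, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        y = rng.normal(mu, sd)
        # Independence MH ratio: pi(y) q(x) / (pi(x) q(y))
        log_a = (log_target(y) - log_target(x)
                 + stats.norm.logpdf(x, mu, sd) - stats.norm.logpdf(y, mu, sd))
        if np.log(rng.uniform()) < log_a:
            x = y
        chain[t] = x
    return chain

# Stage 1: crude pre-run with a wide proposal; Stage 2: moment-matched proposal.
pre = independence_mh(5000, 0.0, 5.0)
mu_hat, sd_hat = pre.mean(), 1.2 * pre.std()     # slight inflation for tail safety
chain = independence_mh(50000, mu_hat, sd_hat, seed=1)
print("posterior mean estimate:", chain.mean())
```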

10.
In this paper, we investigate Bayesian generalized nonlinear mixed‐effects (NLME) regression models for zero‐inflated longitudinal count data. The methodology is motivated by and applied to colony forming unit (CFU) counts in extended bactericidal activity tuberculosis (TB) trials. Furthermore, for model comparisons, we present a generalized method for calculating the marginal likelihoods required to determine Bayes factors. A simulation study shows that the proposed zero‐inflated negative binomial regression model has good accuracy, precision, and credibility interval coverage. In contrast, conventional normal NLME regression models applied to log‐transformed count data, which handle zero counts as left-censored values, may yield credibility intervals that undercover the true bactericidal activity of anti‐TB drugs. We therefore recommend that zero‐inflated NLME regression models be fitted to CFU counts on the original scale, as an alternative to conventional normal NLME regression models on the logarithmic scale.
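
For concreteness, a sketch of the zero-inflated negative binomial likelihood itself, fitted to simulated counts by direct optimization rather than the Bayesian NLME machinery of the paper; the parameterization, starting values, and simulated data are our assumptions.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize
from scipy.special import expit

def zinb_nll(params, y):
    """Negative log-likelihood of a zero-inflated negative binomial.
    params = (logit zero-inflation prob, log NB size, logit NB prob)."""
    pi_, size, p = expit(params[0]), np.exp(params[1]), expit(params[2])
    log_nb = stats.nbinom.logpmf(y, size, p)
    ll = np.where(y == 0,
                  np.logaddexp(np.log(pi_), np.log1p(-pi_) + log_nb),
                  np.log1p(-pi_) + log_nb)
    return -ll.sum()

rng = np.random.default_rng(7)
n = 500
structural_zero = rng.uniform(size=n) < 0.3        # 30% structural zeros
y = np.where(structural_zero, 0, rng.negative_binomial(5, 0.4, size=n))

res = minimize(zinb_nll, x0=np.zeros(3), args=(y,), method="Nelder-Mead")
print("zero-inflation prob:", expit(res.x[0]), "NB size:", np.exp(res.x[1]))
```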

11.
We discuss problems the null hypothesis significance testing (NHST) paradigm poses for replication and, more broadly, in the biomedical and social sciences, as well as how these problems remain unresolved by proposals involving modified p-value thresholds, confidence intervals, and Bayes factors. We then discuss our own proposal, which is to abandon statistical significance. We recommend dropping the NHST paradigm—and the p-value thresholds intrinsic to it—as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with currently subordinate factors (e.g., related prior evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain) as just one among many pieces of evidence. We have no desire to “ban” p-values or other purely statistical measures. Rather, we believe that such measures should not be thresholded and that, thresholded or not, they should not take priority over the currently subordinate factors. We also argue that it seldom makes sense to calibrate evidence as a function of p-values or other purely statistical measures. We offer recommendations for how our proposal can be implemented in the scientific publication process as well as in statistical decision making more broadly.

12.
There exist various methods for providing confidence intervals for unknown parameters of interest on the basis of a random sample. Generally, the bounds are derived from a system of non-linear equations. In this article, we present a general solution for obtaining an unbiased confidence interval with confidence coefficient 1 − α in one-parameter exponential families. We also discuss two Bayesian credible intervals, the highest posterior density (HPD) and relative surprise (RS) credible intervals. Standard criteria such as interval length and coverage probability are used to assess the performance of the HPD and RS credible intervals. Simulation studies and real data applications are presented for illustrative purposes.
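
A sketch of the standard sample-based HPD computation (the shortest interval covering the desired posterior mass), assuming a unimodal posterior; the Gamma example is purely illustrative.

```python
import numpy as np

def hpd_interval(samples, prob=0.95):
    """Shortest interval containing `prob` of the sampled posterior mass
    (a standard sample-based HPD approximation for unimodal posteriors)."""
    s = np.sort(np.asarray(samples))
    m = int(np.ceil(prob * len(s)))
    widths = s[m - 1:] - s[:len(s) - m + 1]      # all windows of m consecutive draws
    i = np.argmin(widths)
    return s[i], s[i + m - 1]

# Example: HPD for a Gamma(3, 1) "posterior" sampled directly.
rng = np.random.default_rng(11)
draws = rng.gamma(3.0, 1.0, size=100000)
print("95% HPD interval:", hpd_interval(draws, 0.95))
```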

13.
In this article, we discuss the utility of tolerance intervals for various regression models. We begin with a discussion of tolerance intervals for linear and nonlinear regression models. We then introduce a novel method for constructing nonparametric regression tolerance intervals by extending the well-established procedure for univariate data. Simulation results and application to real datasets are presented to help visualize regression tolerance intervals and to demonstrate that the methods we discuss have coverage probabilities very close to the specified nominal confidence level.

14.
The author shows how to construct distribution‐free statistical intervals from ranked‐set sampling. He considers the cases of confidence, tolerance, and prediction intervals. He shows how to compute their coverage probabilities and he compares their performance to that of intervals based on simple random sampling. He finds that the ranked‐set sampling‐based intervals are at least as good as their classical counterpart in most settings of interest, although the nature of the advantage depends on the type of interval considered.
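
A hedged sketch of balanced ranked-set sampling plus a quick Monte Carlo comparison of the variance of the RSS mean against a simple random sample of the same size; perfect rankings are assumed, and all names are illustrative.

```python
import numpy as np

def ranked_set_sample(draw, k, cycles, rng):
    """One balanced ranked-set sample: for each rank r = 1..k in each cycle,
    draw a fresh set of k units, rank them, and measure only the r-th one."""
    vals = []
    for _ in range(cycles):
        for r in range(k):
            vals.append(np.sort(draw(k, rng))[r])
    return np.array(vals)

rng = np.random.default_rng(5)
draw = lambda m, rng: rng.normal(0.0, 1.0, m)

reps = 2000
rss_means = [ranked_set_sample(draw, 3, 10, rng).mean() for _ in range(reps)]  # 30 measured units
srs_means = [draw(30, rng).mean() for _ in range(reps)]                        # same budget
print("MC variance of the mean, RSS vs SRS:",
      np.var(rss_means), np.var(srs_means))     # RSS should be smaller
```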

15.
We propose new control variates for variance reduction in the estimation of mean values using the Metropolis–Hastings algorithm. Traditionally, states that are rejected in the Metropolis–Hastings algorithm are simply ignored, which intuitively seems to be a waste of information. We present a setting for the construction of zero-mean control variates for general target and proposal distributions and develop ideas for the standard Metropolis–Hastings and reversible jump algorithms. We give results for three simulation examples. We get the best results for variates that are functions of the current state x and the proposal y, but we also consider variates that are additionally functions of the Metropolis–Hastings acceptance/rejection decision. The variance reduction achieved varies depending on the target distribution and the proposal mechanisms used. In simulation experiments, we typically achieve relative variance reductions between 15% and 35%.
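
As a toy illustration of the recycling idea (not the authors' construction), the sketch below uses the raw proposal increments of a random-walk chain, which have exactly zero mean, as a control variate, with the coefficient estimated from batch means. The variance reduction in this simplified form is modest.

```python
import numpy as np

def rwm_with_cv(n_iter=200000, sigma=2.4, n_batch=100, seed=0):
    """Random-walk Metropolis on N(0,1); estimate E[X] with the proposal
    increments (exactly zero-mean) recycled as a control variate."""
    rng = np.random.default_rng(seed)
    x = 0.0
    f = np.empty(n_iter)     # chain values
    g = np.empty(n_iter)     # zero-mean control: the raw proposal increments
    for t in range(n_iter):
        eps = sigma * rng.normal()
        y = x + eps
        if np.log(rng.uniform()) < -0.5 * (y * y - x * x):
            x = y
        f[t], g[t] = x, eps
    fb = f.reshape(n_batch, -1).mean(axis=1)     # batch means for a crude
    gb = g.reshape(n_batch, -1).mean(axis=1)     # estimate of the optimal beta
    beta = np.cov(fb, gb)[0, 1] / np.var(gb)
    return f.mean(), f.mean() - beta * g.mean()

plain, cv = rwm_with_cv()
print("plain estimate:", plain, "with control variate:", cv)   # both target 0
```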

16.
Validation of tolerance interval
The tolerance interval receives much attention in the literature and is widely applied in industry. However, it is generally constructed through the criterion of minimum width due to Eisenhart et al. (1947). Although efforts to clarify several prediction-related intervals have been made recently by Huang et al. (2010), the appropriateness of the tolerance interval for its role in industry applications is insufficiently discussed. In response to manufacturers' requirements, a concept of admissibility of tolerance intervals is defined in this paper, and we show that these types of tolerance intervals are not admissible because they fall short of the nominal confidence. We further prove that a 100(1−α)% confidence interval of a γ-coverage interval is admissible and appropriate for use.

17.
We develop an approach to evaluating frequentist model averaging procedures by considering them in a simple situation in which there are two nested linear regression models over which we average. We introduce a general class of model-averaged confidence intervals, obtain exact expressions for the coverage and the scaled expected length of the intervals, and use these to compute these quantities for the model-averaged profile likelihood (MPI) and model-averaged tail area confidence intervals proposed by D. Fletcher and D. Turek. We show that the MPI confidence intervals can perform more poorly than the standard confidence interval used after model selection but ignoring the model selection process. The model-averaged tail area confidence intervals perform better than the MPI and post-model-selection confidence intervals but, for the examples that we consider, offer little over simply using the standard confidence interval for θ under the full model, with the same nominal coverage.

18.
New Metropolis–Hastings algorithms using directional updates are introduced in this paper. Each iteration of a directional Metropolis–Hastings algorithm consists of three steps: (i) generate a line by sampling an auxiliary variable, (ii) propose a new state along the line, and (iii) accept/reject according to the Metropolis–Hastings acceptance probability. We consider two classes of directional updates. The first uses a point in R^n as the auxiliary variable, the second an auxiliary direction vector. The proposed algorithms generalize previous directional updating schemes since we allow the distribution of the auxiliary variable to depend on properties of the target at the current state. By letting the proposal distribution along the line depend on the density of the auxiliary variable, we identify proposal mechanisms that give unit acceptance rate. When we use a direction vector as the auxiliary variable, we get the advantageous effect of large moves in the Markov chain, and hence the autocorrelation length of the samples is small. We apply the directional Metropolis–Hastings algorithms to a Gaussian example, a mixture of Gaussian densities, and a Bayesian model for seismic data.
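
A minimal sketch of the direction-vector variant: sample a uniform direction, propose along it with a symmetric step, and accept with the usual ratio, which reduces to pi(x')/pi(x) by symmetry. The step size and target below are illustrative, not from the paper.

```python
import numpy as np

def directional_mh(log_target, x0, n_iter=20000, step=1.0, seed=0):
    """Directional Metropolis-Hastings: (i) sample a uniform direction u on the
    sphere, (ii) propose x' = x + r*u with r ~ N(0, step^2), (iii) accept/reject.
    The proposal is symmetric, so the ratio reduces to pi(x')/pi(x)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    chain = np.empty((n_iter, x.size))
    for t in range(n_iter):
        u = rng.normal(size=x.size)
        u /= np.linalg.norm(u)               # uniform direction on the unit sphere
        y = x + rng.normal(0.0, step) * u    # symmetric move along the line
        lq = log_target(y)
        if np.log(rng.uniform()) < lq - lp:
            x, lp = y, lq
        chain[t] = x
    return chain

# Correlated 2-D Gaussian target.
cov_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
chain = directional_mh(lambda z: -0.5 * z @ cov_inv @ z, np.zeros(2))
print("chain mean (should be near 0):", chain.mean(axis=0))
```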

19.
Early phase 2 tuberculosis (TB) trials are conducted to characterize the early bactericidal activity (EBA) of anti‐TB drugs. The EBA of anti‐TB drugs has conventionally been calculated as the rate of decline in colony forming unit (CFU) count during the first 14 days of treatment. The measurement of CFU count, however, is expensive and prone to contamination. As an alternative to CFU count, time to positivity (TTP), which is a potential biomarker for the long‐term efficacy of anti‐TB drugs, can be used to characterize EBA. The current Bayesian nonlinear mixed‐effects (NLME) regression model for TTP data, however, lacks robustness to the gross outliers that are often present in the data. The conventional way of handling such outliers involves their identification by visual inspection and subsequent exclusion from the analysis. However, this process can be questioned because of its subjective nature. For this reason, we fitted robust versions of the Bayesian nonlinear mixed‐effects regression model to a wide range of TTP datasets. The performance of the explored models was assessed through model comparison statistics and a simulation study. We conclude that fitting a robust model to TTP data obviates the need for explicit identification and subsequent “deletion” of outliers, while ensuring that gross outliers exert no undue influence on model fits. We recommend that the current practice of fitting conventional normal theory models be abandoned in favor of fitting robust models to TTP data.

20.
While it is a popular selection criterion for spline smoothing, generalized cross-validation (GCV) occasionally yields severely undersmoothed estimates. Two extensions of GCV, called robust GCV (RGCV) and modified GCV, have been proposed as more stable criteria. Each involves a parameter that must be chosen, but the only guidance has come from simulation results. We investigate the performance of the criteria analytically. In most studies, the mean square prediction error is the only loss function considered. Here, we use both the prediction error and a stronger Sobolev norm error, which provides a better measure of the quality of the estimate. A geometric approach is used to analyse the superior small-sample stability of RGCV compared to GCV. In addition, by deriving the asymptotic inefficiency for both the prediction error and the Sobolev error, we find intervals for the parameters of RGCV and modified GCV for which the criteria have optimal performance.
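
For reference, a sketch of the GCV score for a generic linear smoother y_hat = A y, evaluated over a few penalties of a ridge-type smoother standing in for a smoothing spline; the basis and penalty values are illustrative assumptions (RGCV's extra stabilizing parameter is not shown).

```python
import numpy as np

def gcv_score(y, A):
    """Generalized cross-validation score for a linear smoother y_hat = A y:
    GCV = n * ||(I - A) y||^2 / tr(I - A)^2."""
    n = len(y)
    resid = y - A @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - A) ** 2

# Ridge-type smoother on a polynomial basis (numerically crude, but fine here).
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=100)
B = np.vander(x, 12, increasing=True)            # polynomial design matrix
for lam in (1e-8, 1e-4, 1e-1):
    A = B @ np.linalg.solve(B.T @ B + lam * np.eye(12), B.T)
    print(lam, gcv_score(y, A))                  # pick the penalty minimizing GCV
```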
