Similar Literature
20 similar documents found.
1.
Herein, we propose a fully Bayesian approach to the greenhouse gas emission problem. The goal of this work is to estimate the emission rate of polluting gases from the area flooded by hydroelectric reservoirs. We present models for gas concentration evolution in two ways: first, by proposing them from ordinary differential equation solutions and, second, by using stochastic differential equations with a discretization scheme. Finally, we present techniques to estimate the emission rate for the entire reservoir. To carry out the inference, we use the Bayesian framework with Markov chain Monte Carlo (MCMC) methods. Discretization schemes over continuous differential equations are used when necessary. To the best of our knowledge, these models and the associated Bayesian inference are new to the statistical literature on greenhouse gas emission, and they contribute to estimating the amount of polluting gases released from hydroelectric reservoirs in Brazil. The proposed models are applied to a real data set and the results are presented.
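As a rough illustration of the kind of inference described above, the sketch below runs a random-walk Metropolis sampler for the emission rate in a toy linear-growth concentration model with Gaussian measurement error; the model form, data, and prior are illustrative assumptions, not the authors' specification.

```python
# Illustrative sketch (not the authors' model): random-walk Metropolis for the
# emission rate k in a linear concentration model C(t) = C0 + k * t, the kind of
# ODE solution the abstract refers to, with Gaussian measurement error.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic concentration measurements (hypothetical units and values)
t = np.linspace(0.0, 10.0, 25)
true_k, C0, sigma = 0.8, 2.0, 0.3
y = C0 + true_k * t + rng.normal(0.0, sigma, size=t.size)

def log_post(k):
    """Log-posterior for k: Gaussian likelihood plus a vague N(0, 10^2) prior."""
    resid = y - (C0 + k * t)
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * (k / 10.0) ** 2

k_cur, lp_cur = 0.0, log_post(0.0)
draws = []
for _ in range(5000):
    k_prop = k_cur + rng.normal(0.0, 0.1)          # random-walk proposal
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:   # Metropolis accept/reject step
        k_cur, lp_cur = k_prop, lp_prop
    draws.append(k_cur)

print("posterior mean of k:", np.mean(draws[1000:]))  # discard burn-in draws
```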

2.
In this paper, we adapt recently developed simulation-based sequential algorithms to the Bayesian analysis of discretely observed diffusion processes. The estimation framework involves introducing m−1 latent data points between every pair of observations. Sequential MCMC methods are then used to sample the posterior distribution of the latent data and the model parameters on-line. The method is applied to the estimation of parameters in a simple stochastic volatility (SV) model of the U.S. short-term interest rate. We also provide a simulation study to validate our method, using synthetic data generated by the SV model with parameters calibrated to match weekly observations of the U.S. short-term interest rate.
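A minimal sketch of the simulation-study ingredient mentioned above: generating a synthetic weekly short-rate path from a generic stochastic volatility model via an Euler discretization. The drift/diffusion form and all parameter values are assumptions for illustration, not the calibrated model of the paper.

```python
# Euler-discretized simulation of a generic short-rate model with stochastic
# volatility; parameters below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

n_weeks, dt = 500, 1.0 / 52.0
kappa_r, theta_r = 0.5, 0.05           # mean reversion of the rate (assumed)
kappa_v, theta_v, xi = 2.0, 0.02, 0.3  # latent volatility dynamics (assumed)

r = np.empty(n_weeks); v = np.empty(n_weeks)
r[0], v[0] = theta_r, theta_v
for i in range(1, n_weeks):
    dW_r, dW_v = rng.normal(0.0, np.sqrt(dt), size=2)
    # Euler step for the latent variance, reflected at zero to stay positive
    v[i] = abs(v[i-1] + kappa_v * (theta_v - v[i-1]) * dt
               + xi * np.sqrt(v[i-1]) * dW_v)
    # Euler step for the short rate driven by the latent volatility
    r[i] = r[i-1] + kappa_r * (theta_r - r[i-1]) * dt + np.sqrt(v[i-1]) * r[i-1] * dW_r

print("first simulated weekly rates:", r[:5])
```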

3.
This paper develops an exact random permutation method for testing both interaction and main effects in the two-way ANOVA model. The method can be regarded as a substantial improvement over previous approaches such as those of Still and White (1981) and ter Braak (1992). We further conducted a simulation experiment to check the statistical performance of the proposed method; it works relatively well for small sample sizes compared with the existing methods. This work was supported by Korea Science and Engineering Foundation Grant (R14-2003-002-0100).
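For context, the sketch below implements only the naive baseline: an unrestricted permutation test for the interaction term in a balanced two-way ANOVA, shuffling responses across all cells. The exact restricted-permutation scheme proposed in the paper is not reproduced here, and the data are synthetic.

```python
# Naive permutation test for the A x B interaction in a balanced two-way ANOVA.
import numpy as np

rng = np.random.default_rng(2)
a, b, r = 3, 4, 5                                  # levels of A, B and replicates
y = rng.normal(size=(a, b, r))                     # synthetic balanced data

def interaction_F(y):
    cell = y.mean(axis=2)                          # cell means
    row = cell.mean(axis=1, keepdims=True)         # A-level means
    col = cell.mean(axis=0, keepdims=True)         # B-level means
    grand = cell.mean()
    ss_ab = r * np.sum((cell - row - col + grand) ** 2)
    ss_e = np.sum((y - cell[:, :, None]) ** 2)
    return (ss_ab / ((a - 1) * (b - 1))) / (ss_e / (a * b * (r - 1)))

f_obs = interaction_F(y)
flat = y.reshape(-1)
perm_f = []
for _ in range(2000):
    perm = rng.permutation(flat).reshape(a, b, r)  # unrestricted shuffle of responses
    perm_f.append(interaction_F(perm))
p_value = np.mean(np.array(perm_f) >= f_obs)
print("permutation p-value for interaction:", p_value)
```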

4.
The main advantages of the regression estimator are that it can be used with both positively and negatively correlated variables and that its precision is usually higher than that of the simple expansion (direct), ratio and product estimators. Extensive empirical studies on the properties of many types of ratio estimators have been undertaken in several papers, for example by Rao (1969), Rao and Rao (1971), Hutchinson (1971), Royall and Cumberland (1981), and Wu and Deng (1983). However, not much attention has been given to the use of similar methods for regression-type estimators. In this paper, an attempt has been made to compare the relative performances of some biased and unbiased regression-type strategies with the help of a wide variety of natural populations.
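A minimal sketch of the classical linear regression estimator of a population mean under simple random sampling, y_lr = ybar + b(Xbar − xbar), where Xbar is the known population mean of the auxiliary variable; the population and numbers below are purely illustrative.

```python
# Regression estimator of a finite-population mean under simple random sampling.
import numpy as np

rng = np.random.default_rng(3)
N, n = 10_000, 100
x_pop = rng.gamma(shape=2.0, scale=5.0, size=N)           # auxiliary variable
y_pop = 3.0 + 1.5 * x_pop + rng.normal(0.0, 4.0, size=N)  # study variable

idx = rng.choice(N, size=n, replace=False)                # simple random sample
x, y = x_pop[idx], y_pop[idx]

b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)                # sample regression slope
y_lr = y.mean() + b * (x_pop.mean() - x.mean())           # regression estimator

print("regression estimate:", y_lr,
      " direct estimate:", y.mean(),
      " true mean:", y_pop.mean())
```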

5.
The Quality Measurement Plan (QMP), developed by Hoadley (1981), is a statistical method for analyzing discrete quality audit data, which consist of the expected number of defects given the standard quality. The QMP is based on an empirical Bayes (EB) model of the audit sampling process. Despite its wide publicity, Hoadley's method has often been described as heuristic. In this paper we offer a hierarchical Bayes (HB) alternative to Hoadley's EB model and overcome much of the criticism directed against it. Gibbs sampling is used to implement the HB model proposed in this paper, and the convergence of the Gibbs sampler is monitored via the algorithm of Gelman and Rubin (1992).
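The convergence check mentioned above is based on the Gelman and Rubin (1992) potential scale reduction factor; a small sketch of its computation is given below, with simulated chains standing in for draws from the QMP hierarchical Bayes model.

```python
# Gelman-Rubin potential scale reduction factor (R-hat) for parallel chains.
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (m, n) with m parallel chains of length n."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled posterior variance estimate
    return np.sqrt(var_hat / W)                # values near 1 indicate convergence

rng = np.random.default_rng(4)
chains = rng.normal(loc=0.0, scale=1.0, size=(4, 1000))  # 4 well-mixed toy chains
print("R-hat:", gelman_rubin(chains))
```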

6.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this simulation parameter value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
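To make the importance-sampling idea concrete, here is a hedged toy example of MCML for a one-parameter Gaussian latent variable model (z_i ~ N(mu, 1), y_i | z_i ~ N(z_i, 1)), where the exact MLE is the sample mean and can be used as a check; the model and reference value mu0 are assumptions chosen only so the conditional draws are available in closed form.

```python
# MCML in the spirit of Geyer & Thompson (1992) for a toy latent variable model.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n, true_mu, mu0, N = 200, 1.0, 0.0, 2000
y = rng.normal(true_mu, np.sqrt(2.0), size=n)        # marginally y_i ~ N(mu, 2)

# Latent draws z_ij from p_{mu0}(z | y_i) = N((y_i + mu0)/2, 1/2)
z = rng.normal((y[:, None] + mu0) / 2.0, np.sqrt(0.5), size=(n, N))

def neg_mc_loglik_ratio(mu):
    # log importance ratio p_mu(y,z)/p_mu0(y,z) = ((z - mu0)^2 - (z - mu)^2)/2
    log_ratio = ((z - mu0) ** 2 - (z - mu) ** 2) / 2.0
    # log of the Monte Carlo average per observation, summed over observations
    per_obs = np.log(np.mean(np.exp(log_ratio), axis=1))
    return -np.sum(per_obs)

res = minimize_scalar(neg_mc_loglik_ratio, bounds=(-3.0, 3.0), method="bounded")
print("MCML estimate:", res.x, " exact MLE:", y.mean())
```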

7.
In this research, we employ Bayesian inference and stochastic dynamic programming approaches to select the binomial population with the largest probability of success from n independent Bernoulli populations based upon the sample information. To do this, we first define a probability measure, called belief, for the event of selecting the best population. Second, we explain how to model the selection problem using Bayesian inference. Third, we clarify the model by which we improve the beliefs and prove that it converges to selecting the best population. In this iterative approach, we update the beliefs by taking new observations on the populations under study; this is performed using Bayes' rule and the prior beliefs. Fourth, we model the problem of making the decision in a predetermined number of decision stages using the stochastic dynamic programming approach. Finally, in order to understand and evaluate the proposed methodology, we provide two numerical examples and a comparison study by simulation. The results of the comparison study show that the proposed method performs better than that of Levin and Robbins (1981, Proc. Nat. Acad. Sci. USA 78: 4663–4666) for some values of the estimated probability of making a correct selection.
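A generic sketch of the belief-updating idea: sample sequentially from n Bernoulli populations, update conjugate Beta beliefs with Bayes' rule, and report the posterior probability that each population has the largest success probability. The round-robin sampling plan and prior choice below are illustrative assumptions, not the paper's dynamic programming policy.

```python
# Sequential Beta-Bernoulli belief updating for selecting the best population.
import numpy as np

rng = np.random.default_rng(6)
true_p = np.array([0.30, 0.45, 0.50])      # unknown success probabilities
alpha = np.ones(3); beta = np.ones(3)      # uniform Beta(1, 1) priors

for stage in range(200):
    k = stage % 3                          # simple round-robin sampling plan
    x = rng.binomial(1, true_p[k])         # new observation from population k
    alpha[k] += x                          # Bayes update of the Beta posterior
    beta[k] += 1 - x

# Monte Carlo "belief" that each population is the best
draws = rng.beta(alpha[:, None], beta[:, None], size=(3, 20_000))
belief_best = np.mean(draws.argmax(axis=0)[None, :] == np.arange(3)[:, None], axis=1)
print("posterior probability each population is best:", belief_best)
```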

8.
Modelling Correlated Zero-inflated Count Data
This paper extends the two-component approach to modelling count data with extra zeros, considered by Mullahy (1986), Heilbron (1994) and Welsh et al. (1996), to take account of possible serial dependence between repeated observations. Generalized estimating equations (Liang & Zeger, 1986) are constructed for each component of the model by incorporating correlation matrices into each of the maximum likelihood estimating equations. The proposed method is demonstrated on weekly counts of Noisy Friarbirds (Philemon corniculatus), which were recorded by observers for the Canberra Garden Bird Survey (Hermes, 1981).
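As background, the independent-data starting point for such models is a zero-inflated Poisson fitted by maximum likelihood; a small sketch is given below with simulated counts. The serial-dependence GEE extension of the paper is not implemented here.

```python
# Maximum likelihood for a zero-inflated Poisson (ZIP) model on synthetic counts.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(7)
n, p_true, lam_true = 500, 0.4, 2.5
counts = rng.poisson(lam_true, size=n) * (rng.uniform(size=n) > p_true)

def neg_loglik(theta):
    p = 1.0 / (1.0 + np.exp(-theta[0]))       # zero-inflation prob. (logit scale)
    lam = np.exp(theta[1])                    # Poisson mean (log scale)
    zero = counts == 0
    ll_zero = np.log(p + (1 - p) * np.exp(-lam))      # a zero from either component
    ll_pos = (np.log(1 - p) - lam + counts[~zero] * np.log(lam)
              - gammaln(counts[~zero] + 1.0))         # positive counts are Poisson
    return -(zero.sum() * ll_zero + ll_pos.sum())

fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-fit.x[0])); lam_hat = np.exp(fit.x[1])
print("estimated zero-inflation prob.:", p_hat, " estimated Poisson mean:", lam_hat)
```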

9.
Length-biased sampling data are often encountered in studies of economics, industrial reliability, epidemiology, genetics and cancer screening. The complication of this type of data is that the observed lifetimes suffer from left truncation and right censoring, where the left truncation variable has a uniform distribution. In the Cox proportional hazards model, Huang & Qin (Journal of the American Statistical Association, 107, 2012, p. 107) proposed a composite partial likelihood method which not only has the simplicity of the popular partial likelihood estimator, but can also be easily performed with standard statistical software. The accelerated failure time model has become a useful alternative to the Cox proportional hazards model. In this paper, using the composite partial likelihood technique, we study this model with length-biased sampling data. The proposed method has a very simple form and is robust when the assumption that the censoring time is independent of the covariate is violated. To ease the difficulty of solving the non-smooth estimating equation, we use a kernel smoothed estimation method (Heller, Journal of the American Statistical Association, 102, 2007, p. 552). Large sample results and a re-sampling method for variance estimation are discussed. Some simulation studies are conducted to compare the performance of the proposed method with other existing methods. A real data set is used for illustration.

10.
ON ESTIMATION OF LONG-MEMORY TIME SERIES MODELS
This paper discusses estimation associated with the long-memory time series models proposed by Granger & Joyeux (1980) and Hosking (1981). We consider the maximum likelihood estimator and the least squares estimator. Certain regularity conditions introduced by several authors to develop the asymptotic theory of these estimators do not hold in this model. However, we can show that these estimators are strongly consistent, and we derive the limiting distribution and the rate of convergence.
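As an illustration of the long-memory models in question (not the paper's maximum likelihood or least squares estimators), the sketch below simulates a fractionally integrated ARFIMA(0, d, 0) series via its truncated MA(∞) representation and recovers d with a simple log-periodogram (GPH-type) regression.

```python
# Simulate ARFIMA(0, d, 0) noise and estimate d by log-periodogram regression.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(8)
d_true, n, m_trunc = 0.3, 2000, 1000

# MA(inf) weights psi_j = Gamma(j + d) / (Gamma(j + 1) * Gamma(d)), truncated
j = np.arange(m_trunc)
psi = np.exp(gammaln(j + d_true) - gammaln(j + 1.0) - gammaln(d_true))
eps = rng.normal(size=n + m_trunc)
x = np.convolve(eps, psi, mode="valid")[:n]          # fractionally integrated series

# Periodogram at Fourier frequencies and regression on the lowest frequencies
freqs = 2.0 * np.pi * np.arange(1, n // 2 + 1) / n
I = np.abs(np.fft.fft(x)[1:n // 2 + 1]) ** 2 / (2.0 * np.pi * n)
m = int(n ** 0.5)                                    # number of low frequencies used
yreg = np.log(I[:m])
xreg = -np.log(4.0 * np.sin(freqs[:m] / 2.0) ** 2)
d_hat = np.polyfit(xreg, yreg, 1)[0]                 # slope estimates d
print("true d:", d_true, " log-periodogram estimate:", d_hat)
```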

11.
The response adaptive randomization (RAR) method is used to increase the number of patients assigned to more efficacious treatment arms in clinical trials. In many trials evaluating longitudinal patient outcomes, RAR methods based only on the final measurement may not benefit significantly from RAR because of its delayed initiation. We propose a Bayesian RAR method to improve RAR performance by accounting for longitudinal patient outcomes (longitudinal RAR). We use a Bayesian linear mixed effects model to analyze longitudinal continuous patient outcomes for calculating a patient allocation probability. In addition, we aim to mitigate the loss of statistical power because of large patient allocation imbalances by embedding adjusters into the patient allocation probability calculation. Using extensive simulation we compared the operating characteristics of our proposed longitudinal RAR method with those of the RAR method based only on the final measurement and with an equal randomization method. Simulation results showed that our proposed longitudinal RAR method assigned more patients to the presumably superior treatment arm compared with the other two methods. In addition, the embedded adjuster effectively worked to prevent extreme patient allocation imbalances. However, our proposed method may not function adequately when the treatment effect difference is moderate or less, and still needs to be modified to deal with unexpectedly large departures from the presumed longitudinal data model. Copyright © 2015 John Wiley & Sons, Ltd.  相似文献   
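A generic illustration of how a power-type adjuster can temper extreme allocation imbalances in response-adaptive randomization; this is only a sketch with assumed normal posteriors for two arm means, not the paper's longitudinal mixed-model calculation or its specific adjuster.

```python
# Response-adaptive allocation probability with a power-type tempering adjuster.
import numpy as np

def allocation_prob(post_mean, post_sd, n_enrolled, n_max, n_draws=20_000, rng=None):
    """Probability of assigning the next patient to arm 1 of 2."""
    rng = rng or np.random.default_rng(0)
    draws = rng.normal(post_mean, post_sd, size=(n_draws, 2))
    p_best = np.mean(draws[:, 0] > draws[:, 1])   # P(arm 1 has larger mean | data)
    c = n_enrolled / (2.0 * n_max)                # adjuster grows as the trial accrues
    num = p_best ** c
    return num / (num + (1.0 - p_best) ** c)      # tempered allocation probability

print(allocation_prob(post_mean=[1.2, 1.0], post_sd=[0.3, 0.3],
                      n_enrolled=40, n_max=200))
```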

12.
Stochastic dominance is usually used to rank random variables by comparing their distributions, so it is widely applied in economics and finance. In actual applications, complete stochastic dominance is too demanding to meet, so relaxation indexes of stochastic dominance have attracted increasing attention. The π index, defined as the largest gap between two distributions, can serve as a measure of the degree of deviation from complete dominance. The traditional approach is to estimate it using the empirical distribution functions. Since the populations under comparison are generally of the same nature, we can link them through a density ratio model under certain conditions. Based on this model, we propose a new estimator and establish its statistical inference theory. Simulation results show that the proposed estimator substantially improves estimation efficiency and the power of the tests, and that coverage probabilities satisfactorily match the nominal confidence levels, demonstrating the superiority of the proposed estimator. Finally, we apply our method to a real example of Chinese household incomes.
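A sketch of the traditional estimator mentioned above: the largest gap between the two empirical distribution functions, π = sup_x (F1(x) − F2(x)), evaluated over the pooled sample points. The samples below are simulated for illustration; take the absolute gap for a two-sided version.

```python
# Empirical-CDF estimate of the pi index (largest one-sided gap between CDFs).
import numpy as np

rng = np.random.default_rng(9)
x1 = rng.normal(0.0, 1.0, size=300)       # sample from population 1
x2 = rng.normal(0.3, 1.0, size=300)       # sample from population 2 (shifted)

grid = np.sort(np.concatenate([x1, x2]))  # the supremum is attained at a sample point
F1 = np.searchsorted(np.sort(x1), grid, side="right") / x1.size
F2 = np.searchsorted(np.sort(x2), grid, side="right") / x2.size
pi_hat = np.max(F1 - F2)                  # largest one-sided gap

print("estimated pi index:", pi_hat)
```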

13.
In this paper, a small-sample asymptotic method is proposed for higher order inference in the stress–strength reliability model, R=P(Y<X), where X and Y are distributed independently as Burr-type X distributions. In a departure from the current literature, we allow the scale parameters of the two distributions to differ, and the likelihood-based third-order inference procedure is applied to obtain inference for R. The difficulty in implementing the method lies in obtaining the constrained maximum likelihood estimates (MLEs). A penalized likelihood method is proposed to handle the numerical complications of maximizing the constrained likelihood. The proposed procedures are illustrated using a sample of carbon fibre strength data. Our simulation studies comparing the coverage probabilities of the proposed small-sample asymptotic method with some existing large-sample asymptotic methods show that the proposed method is very accurate even when the sample sizes are small.
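For orientation, the quantity R = P(Y < X) under Burr-type X marginals with unequal scales has no simple closed form; the hedged sketch below evaluates it by plain Monte Carlo using inverse-CDF sampling from F(x; a, s) = (1 − exp(−(s·x)²))^a. The shape and scale values are illustrative only, not estimates from the carbon fibre data.

```python
# Monte Carlo evaluation of R = P(Y < X) for Burr-type X stress and strength.
import numpy as np

rng = np.random.default_rng(10)

def rburrx(n, shape, scale, rng):
    """Inverse-CDF draws from the Burr-type X distribution (1 - exp(-(s*x)^2))^a."""
    u = rng.uniform(size=n)
    return np.sqrt(-np.log(1.0 - u ** (1.0 / shape))) / scale

n_sim = 200_000
x = rburrx(n_sim, shape=2.0, scale=1.0, rng=rng)   # strength
y = rburrx(n_sim, shape=1.5, scale=1.2, rng=rng)   # stress
print("Monte Carlo estimate of R = P(Y < X):", np.mean(y < x))
```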

14.
This paper proposes two methods of estimation for the parameters in a Poisson-exponential model. The proposed methods combine the method of moments with a regression method based on the empirical moment generating function. One of the methods is an adaptation of the mixed-moments procedure of Koutrouvelis & Canavos (1999). The asymptotic distribution of the estimator obtained with this method is derived. Finite-sample comparisons are made with the maximum likelihood estimator and the method of moments. The paper concludes with an exploratory-type analysis of real data based on the empirical moment generating function.
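The basic ingredient of the regression-type methods above is the empirical moment generating function, M̂(t) = (1/n) Σ exp(t·x_i), evaluated on a grid of t values; a minimal sketch follows, with illustrative exponential data. Fitting M̂ to the parametric Poisson-exponential MGF is model-specific and is not reproduced here.

```python
# Empirical moment generating function on a grid of t values.
import numpy as np

def empirical_mgf(x, t_grid):
    """Empirical MGF of the sample x evaluated at each t in t_grid."""
    return np.exp(np.asarray(t_grid)[:, None] * np.asarray(x)[None, :]).mean(axis=1)

rng = np.random.default_rng(11)
x = rng.exponential(scale=2.0, size=500)   # illustrative data
t_grid = np.linspace(-0.4, 0.4, 9)         # keep |t| below 1/scale for stability
print(np.round(empirical_mgf(x, t_grid), 3))
```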

15.
Starting with a decision theoretic formulation of simultaneous testing of null hypotheses against two-sided alternatives, a procedure controlling the Bayesian directional false discovery rate (BDFDR) is developed through controlling the posterior directional false discovery rate (PDFDR). This is an alternative to Lewis and Thayer (2004, A loss function related to the FDR for random effects multiple comparison, J. Statist. Plann. Inference 125, 49–58) with better control of the BDFDR. Moreover, it is optimal in the sense of being the non-randomized part of the procedure maximizing the posterior expectation of the directional per-comparison power rate given the data, while controlling the PDFDR. A corresponding empirical Bayes method is proposed in the context of the one-way random effects model. A simulation study shows that the proposed Bayes and empirical Bayes methods perform much better from a Bayesian perspective than the procedures available in the literature.

16.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated square error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice our method performs well when sample sizes are small. In addition, we apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
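A small sketch of fixed-bandwidth likelihood cross-validation, the starting point for the variable-bandwidth procedure described above: choose the bandwidth h that maximizes the sum of leave-one-out log density estimates. The Gaussian kernel, grid, and data below are illustrative assumptions.

```python
# Likelihood cross-validation for a fixed kernel density bandwidth.
import numpy as np

rng = np.random.default_rng(12)
x = rng.normal(size=150)                                   # illustrative sample

def lcv_score(x, h):
    """Leave-one-out log-likelihood for a Gaussian-kernel density estimate."""
    n = x.size
    diff = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * diff ** 2) / np.sqrt(2.0 * np.pi)    # Gaussian kernel matrix
    np.fill_diagonal(K, 0.0)                               # leave out the point itself
    fhat = K.sum(axis=1) / ((n - 1) * h)
    return np.sum(np.log(fhat))

h_grid = np.linspace(0.05, 1.0, 40)
scores = [lcv_score(x, h) for h in h_grid]
print("LCV-selected bandwidth:", h_grid[int(np.argmax(scores))])
```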

17.
In this paper, we develop a variable selection framework with the spike-and-slab prior distribution via the hazard function of the Cox model. Specifically, we consider the transformation of the score and information functions for the partial likelihood function evaluated at the given data from the parameter space into the space generated by the logarithm of the hazard ratio. Thereby, we reduce the nonlinear complexity of the estimation equation for the Cox model and allow the utilization of a wider variety of stable variable selection methods. Then, we use a stochastic variable search Gibbs sampling approach via the spike-and-slab prior distribution to obtain the sparsity structure of the covariates associated with the survival outcome. Additionally, we conduct numerical simulations to evaluate the finite-sample performance of our proposed method. Finally, we apply this novel framework to lung adenocarcinoma data to find important genes associated with decreased survival in subjects with the disease.

18.
Bechhofer and Tamhane (1981) proposed a new class of incomplete block designs called BTIB designs for comparing p ≥ 2 test treatments with a control treatment in blocks of equal size k < p + 1. All BTIB designs for given (p,k) can be constructed by forming unions of replications of a set of elementary BTIB designs called generator designs for that (p,k). In general, there are many generator designs for given (p,k), but only a small subset (called the minimal complete set) of these suffices to obtain all admissible BTIB designs (except possibly any equivalent ones). Determination of the minimal complete set of generator designs for given (p,k) was stated as an open problem in Bechhofer and Tamhane (1981). In this paper we solve this problem for k = 3. More specifically, we give the minimal complete sets of generator designs for k = 3, p = 3(1)10; the relevant proofs are given only for the cases p = 3(1)6. Some additional combinatorial results concerning BTIB designs are also given.

19.
Dynamic principal component analysis (DPCA), also known as frequency domain principal component analysis, was developed by Brillinger [Time Series: Data Analysis and Theory, Vol. 36, SIAM, 1981] to decompose multivariate time-series data into a few principal component series. A primary advantage of DPCA is its capability of extracting essential components from the data by reflecting their serial dependence. It is also used to estimate the common component in a dynamic factor model, which is frequently used in econometrics. However, this beneficial property cannot be exploited when missing values are present, since they should not be simply ignored when estimating the spectral density matrix in the DPCA procedure. Based on a novel combination of conventional DPCA and the self-consistency concept, we propose a DPCA method for data with missing values. We demonstrate the advantage of the proposed method over some existing imputation methods through Monte Carlo experiments and real data analysis.

20.
The AMMI (additive main effects and multiplicative interaction) model is often used to investigate interactions in two-way tables, in particular genotype-environment interactions. Both Gollob (1968) and Mandel (1969, 1971) proposed methods for testing the significance of such interactions. These methods are compared using simulated data. Our results support Mandel's conclusions, but his method is conservative and we recommend a test proposed by Johnson & Graybill (1972).

