Similar Documents
20 similar documents found.
1.
Recently, Zhang [Simultaneous confidence intervals for several inverse Gaussian populations. Stat Probab Lett. 2014;92:125–131] proposed simultaneous pairwise confidence intervals (SPCIs) based on the fiducial generalized pivotal quantity concept for inference on inverse Gaussian means under heteroscedasticity. In this paper, we propose three new methods for constructing SPCIs for the means of several inverse Gaussian distributions when scale parameters and sample sizes are unequal. One method yields a set of classic SPCIs (classic in the sense that the inference is not simulation-based), and the other two are based on a parametric bootstrap approach. The advantages of our proposed methods over Zhang's (2014) method are: (i) simulation results show that the coverage probability of the proposed parametric bootstrap approaches stays close to the nominal confidence coefficient, whereas the coverage probability of Zhang's method falls below the nominal level when the number of groups and the group variances are large; and (ii) the proposed set of classic SPCIs is conservative, in contrast to Zhang's method.
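As a hedged illustration of the parametric-bootstrap flavor of SPCI described above, the sketch below calibrates a max-modulus critical value by resampling from fitted inverse Gaussian models. The MLE and variance formulas are standard; the particular studentization and all function names are our assumptions, not the paper's exact algorithm.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)

def ig_mle(x):
    mu = x.mean()                                 # MLE of the IG mean
    lam = len(x) / np.sum(1.0 / x - 1.0 / mu)     # MLE of the IG shape
    return mu, lam

def se_mean(mu, lam, n):
    return np.sqrt(mu ** 3 / (lam * n))           # Var(x̄) = μ³/(nλ)

def pb_spci(samples, alpha=0.05, B=2000):
    """Simultaneous pairwise CIs for IG means via a max-modulus
    parametric bootstrap (illustrative, not the paper's exact recipe)."""
    est = [ig_mle(x) for x in samples]
    ns = [len(x) for x in samples]
    pairs = list(combinations(range(len(samples)), 2))
    tmax = np.empty(B)
    for b in range(B):
        boot = []
        for (mu, lam), n in zip(est, ns):
            xb = stats.invgauss.rvs(mu / lam, scale=lam, size=n, random_state=rng)
            boot.append(ig_mle(xb) + (n,))
        stat = []
        for i, j in pairs:
            mi, li, ni = boot[i]; mj, lj, nj = boot[j]
            se = np.hypot(se_mean(mi, li, ni), se_mean(mj, lj, nj))
            stat.append(abs((mi - mj) - (est[i][0] - est[j][0])) / se)
        tmax[b] = max(stat)
    q = np.quantile(tmax, 1 - alpha)              # simultaneous critical value
    out = {}
    for i, j in pairs:
        se = np.hypot(se_mean(*est[i], ns[i]), se_mean(*est[j], ns[j]))
        d = est[i][0] - est[j][0]
        out[(i, j)] = (d - q * se, d + q * se)
    return out

groups = [stats.invgauss.rvs(2.0 / 3.0, scale=3.0, size=n, random_state=rng)
          for n in (15, 20, 30)]                  # IG(mean=2, shape=3) groups
print(pb_spci(groups))
```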

2.
Among statistical inference problems, one of the main interests is drawing inferences about log-normal means, since the log-normal distribution is a well-known candidate model for analyzing positive and right-skewed data. Past research focused on one or two log-normal populations, or used large-sample theory or a quadratic procedure to handle several log-normal distributions. In this article, we focus on inference for several log-normal means based on a modification of the quadratic method, in which a vector of generalized variables is used to handle the means of symmetric distributions. Simulation studies show that the quadratic method performs well only for symmetric distributions, whereas the modified procedure fits both symmetric and skewed distributions. The numerical results show that the proposed modified procedure provides confidence intervals with coverage probabilities close to the nominal level, and that the associated hypothesis tests perform satisfactorily.
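The generalized-variable machinery underlying such procedures can be illustrated for a single log-normal mean. The sketch below uses the standard Krishnamoorthy–Mathew-style generalized pivotal quantity; it is a minimal one-sample illustration, not the multi-sample modified procedure proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_mean_gpq(x, alpha=0.05, M=50_000):
    """Generalized-pivotal CI for a log-normal mean exp(mu + sigma^2/2)."""
    y = np.log(x)
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    Z = rng.standard_normal(M)
    U = rng.chisquare(n - 1, M)
    sig2_g = (n - 1) * s2 / U                 # GPQ for sigma^2
    mu_g = ybar - Z * np.sqrt(sig2_g / n)     # GPQ for mu
    theta_g = np.exp(mu_g + sig2_g / 2)       # GPQ for the log-normal mean
    return np.quantile(theta_g, [alpha / 2, 1 - alpha / 2])

x = rng.lognormal(mean=1.0, sigma=0.8, size=25)
print(lognormal_mean_gpq(x))                  # true mean: exp(1.32) ≈ 3.74
```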

3.
Misclassification in binary responses has long been a common problem in medical and health surveys. One way to handle misclassification in clustered or longitudinal data is to incorporate a misclassification model through the generalized estimating equation (GEE) approach. However, existing methods were developed in a non-survey setting and cannot be used directly for complex survey data. We propose a pseudo-GEE method for the analysis of binary survey responses with misclassification. We focus on cluster sampling and develop strategies for analyzing binary survey responses with different forms of auxiliary information on the misclassification process. The proposed methodology has several attractive features, including simultaneous inference for both the response model and the association parameters. Finite-sample performance of the proposed estimators is evaluated through simulation studies and an application to a real dataset from the Canadian Longitudinal Study on Aging.

4.
The continuous threshold expectile regression model captures the effect of a covariate on the response variable with two straight lines that intersect at an unknown threshold, which must be estimated. This article proposes a new estimation method, based on a linearization technique, that estimates the regression coefficients and the threshold simultaneously. Statistical inference for the proposed estimators follows readily from existing theory, and the estimation procedure is easily implemented with current software. Simulation studies and an application to data on GDP per capita and quality of electricity supply illustrate the proposed method.
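To make the bent-line expectile model concrete, here is a minimal sketch that profiles a grid of thresholds, fitting the expectile loss at each by iteratively reweighted least squares. This is an illustrative profile-grid approach, not the paper's linearization estimator; all names and tuning choices are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_bentline_expectile(x, y, tau=0.5, grid=None, iters=50):
    """Fit y = b0 + b1*x + b2*(x - t)_+ under the tau-expectile loss."""
    if grid is None:
        grid = np.quantile(x, np.linspace(0.1, 0.9, 41))
    best = (np.inf, None, None)
    for t in grid:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - t, 0, None)])
        w = np.full(len(y), 0.5)
        for _ in range(iters):
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
            r = y - X @ beta
            w_new = np.where(r >= 0, tau, 1 - tau)   # asymmetric weights
            if np.allclose(w_new, w):
                break
            w = w_new
        loss = np.sum(w * r ** 2)                     # expectile objective
        if loss < best[0]:
            best = (loss, beta, t)
    return best[1], best[2]

x = rng.uniform(0, 10, 300)
y = 1 + 0.5 * x + 1.5 * np.clip(x - 6, 0, None) + rng.normal(0, 0.5, 300)
beta, t_hat = fit_bentline_expectile(x, y, tau=0.7)
print(beta, t_hat)                                    # t_hat should be near 6
```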

5.
The multivariate log-normal distribution is a good candidate for describing data that are not only positive and skewed but also comprise several correlated characteristics. In this study, we apply the generalized variable method to compare the mean vectors of two independent multivariate log-normal populations under heteroscedasticity. Two generalized pivotal quantities are derived for constructing a generalized confidence region and for testing the difference between the two mean vectors. Simulation results indicate that the proposed procedures perform satisfactorily regardless of sample size and heteroscedasticity: the type I error rates are consistent with expectations, and the coverage probabilities are closer to the nominal level than those of the other currently available method. These features make the proposed method a worthy alternative for inference on multivariate log-normal means. The results are illustrated using three examples.

6.
This paper proposes a hysteretic autoregressive model with a GARCH specification and a skew Student's t error distribution for financial time series. With an integrated hysteresis zone, the model allows regime switching in both the conditional mean and the conditional volatility to be delayed while the hysteresis variable lies inside the zone. We perform Bayesian estimation via an adaptive Markov chain Monte Carlo sampling scheme, which allows simultaneous inference for all unknown parameters, including the threshold values and a delay parameter. For model selection, we propose a numerical approximation of the marginal likelihoods that yields posterior odds. The methodology is illustrated with simulation studies and two major Asian stock basis series. A model comparison across variant hysteresis and threshold GARCH models based on posterior odds ratios finds strong evidence of a hysteretic effect and some asymmetric heavy-tailedness; compared with multi-regime threshold GARCH models, the new family of models describes the real data sets better. Finally, we employ Bayesian forecasting methods in a value-at-risk study of the return series.
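The defining feature here is the hysteresis zone: unlike a threshold model, the regime does not flip the moment the switching variable crosses a boundary, but only when it exits the zone. A small simulation sketch of that mechanism (coefficients and zone limits are made up; the paper's full model adds GARCH errors and skew-t innovations):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hysteretic_ar(n=500, rL=-0.5, rU=0.5, d=1):
    """Two-regime hysteretic AR(1): the regime changes only when the
    delayed variable y[t-d] leaves the zone [rL, rU]; inside the zone
    the previous regime persists."""
    y = np.zeros(n)
    regime = np.zeros(n, dtype=int)
    phi = {0: (0.1, 0.6), 1: (-0.2, -0.4)}        # (intercept, AR coef) per regime
    for t in range(1, n):
        z = y[t - d]                               # hysteresis variable
        if z <= rL:
            regime[t] = 0
        elif z >= rU:
            regime[t] = 1
        else:
            regime[t] = regime[t - 1]              # hysteresis: keep old regime
        c, a = phi[regime[t]]
        y[t] = c + a * y[t - 1] + rng.normal(0, 0.3)
    return y, regime

y, regime = simulate_hysteretic_ar()
print("fraction of time in regime 1:", regime.mean())
```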

7.
Semiparametric Analysis of Truncated Data
Randomly truncated data are frequently encountered in studies where truncation arises from the sampling design. In the literature, nonparametric and semiparametric methods have been proposed to estimate parameters in one-sample models. This paper considers a semiparametric model and develops an efficient method for estimating the unknown parameters. The model assumes that K populations share a common probability distribution but are observed subject to different truncation mechanisms. Semiparametric likelihood estimation is studied, and the corresponding inferences are derived for both the parametric and nonparametric components of the model. The method can also be applied to two-sample problems to test for differences between lifetime distributions. Simulation results and a real data analysis illustrate the methods.

8.
A development of the 'starship' method (Owen, 1988), a computer-intensive estimation method, is presented for two forms of the generalized λ distribution (gλd). The method can be used over the full parameter space and is flexible, allowing a choice of both the form of the generalized λ distribution and the nature of the fit required. Examples of its use in fitting data and approximating distributions are given. Simulation studies explore the sampling distribution of the parameter estimates produced by this method for selected parameter values, and compare it with two other methods for one of the gλd distributional forms, a comparison not previously investigated. In the forms and parameter regions available to the other methods, the starship compares favourably. Although the differences between the methods, where available, tend to disappear with larger samples, the parameter coverage, flexibility and adaptability of the starship method make it attractive. The paper also demonstrates, however, that care is needed when fitting and using such quantile-defined distributional families, which are rich in shape but have complex properties.
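The starship idea is to push the data through a candidate gλd's CDF and score how uniform the result looks. A compact sketch under assumptions of our own choosing (RS parameterization, Anderson–Darling criterion, a positive-parameter region, Nelder–Mead search rather than Owen's grid):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(0)

def gld_quantile(u, lam):
    l1, l2, l3, l4 = lam
    return l1 + (u ** l3 - (1 - u) ** l4) / l2    # RS-form gλd quantile function

def gld_cdf(x, lam, tol=1e-10):
    """Numerically invert the quantile function by bisection
    (monotone on the parameter region enforced below)."""
    lo, hi = np.full_like(x, tol), np.full_like(x, 1 - tol)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        below = gld_quantile(mid, lam) < x
        lo, hi = np.where(below, mid, lo), np.where(below, hi, mid)
    return 0.5 * (lo + hi)

def starship_objective(lam, x):
    """Anderson–Darling distance of F(x; lam) from uniformity."""
    if lam[1] <= 0 or lam[2] <= 0 or lam[3] <= 0:  # stay in a valid region
        return np.inf
    u = np.sort(gld_cdf(np.sort(x), lam))
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

x = rng.normal(size=200)
res = optimize.minimize(starship_objective, x0=[0.0, 0.2, 0.15, 0.15],
                        args=(x,), method="Nelder-Mead")
print(res.x)    # near the known normal-approximating gλd (0, .1975, .1349, .1349)
```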

9.
The case-cohort design reduces costs in large cohort studies. In this paper, we consider a nonlinear quantile regression model for censored competing risks under the case-cohort design. Two estimating equations are constructed, with and without the covariate information of the other risks included. Large-sample properties of the estimators are obtained, and the asymptotic covariances are estimated using a fast resampling method, which is useful for further inference. The finite-sample performance of the proposed estimators is assessed by simulation studies, and a real example demonstrates the application of the proposed methods.

10.
In this article, parametric robust regression approaches are proposed for inference about regression parameters in generalized linear models (GLMs). The proposed methods can test hypotheses on the regression coefficients in misspecified GLMs. More specifically, it is demonstrated that, in large samples, normal and gamma regression models can be adjusted to remain asymptotically valid for inference about regression parameters under model misspecification. These adjusted regression models provide correct type I and type II error probabilities and correct coverage probabilities for continuous data, as long as the true underlying distributions have finite second moments.
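One widely used adjustment in this spirit is the sandwich covariance, which keeps Wald inference asymptotically valid when the variance function is wrong. The sketch below fits a gamma GLM to data that are actually log-normal and requests a robust (HC0) covariance via statsmodels; this uses the standard sandwich estimator as a stand-in, not necessarily the authors' specific correction.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Log-normal responses, so the gamma GLM below is deliberately misspecified.
n = 500
x = rng.uniform(0, 2, n)
X = sm.add_constant(x)
y = np.exp(0.5 + 0.8 * x + rng.normal(0, 0.6, n))

# Gamma regression with log link; cov_type="HC0" swaps in the sandwich
# covariance so that Wald tests on the coefficients stay asymptotically
# valid despite the wrong response distribution.
fit = sm.GLM(y, X,
             family=sm.families.Gamma(link=sm.families.links.Log())
             ).fit(cov_type="HC0")
print(fit.summary())
```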

11.
The accuracy of two diagnostic tests can be compared by examining the difference in their paired Youden indices. However, few articles have discussed inference for the difference in paired Youden indices. In this paper, we propose an exact confidence interval for this difference based on generalized pivotal quantities. For comparison, a maximum likelihood estimate-based interval and a bootstrap-based interval are also included in the study. Extensive simulation studies compare the relative performance of these intervals in terms of coverage probability and average interval length. Our simulation results demonstrate that the exact confidence interval outperforms the other two intervals, even with small sample sizes, when the underlying distributions are normal. A real application illustrates the proposed intervals.

12.
Panel studies are statistical studies in which two or more variables are observed for two or more subjects at two or more points in time. Cross-lagged panel studies are those in which the variables are continuous and divide naturally into two sets, with interest in the effects or impacts of each set of variables on the other. If a regression approach is taken, a regression structure is formulated for the cross-lagged model. This structure may assume that the regression parameters are homogeneous across waves and across subpopulations; under such assumptions, the methods of multivariate regression analysis can be adapted to make inferences about the parameters. These inferences are valid only to the degree that homogeneity of the parameters is supported by the data. We consider the problem of testing the hypotheses of homogeneity, and the problem of making statistical inferences about the cross-effects should there be evidence against one of the homogeneity assumptions. We demonstrate the methods by applying them to two panel data sets.

13.
In statistical practice, inference on standardized regression coefficients is often required but is complicated by the fact that they are nonlinear functions of the parameters, so standard textbook results do not apply. Within the frequentist domain, asymptotic delta methods can be used to construct confidence intervals for the standardized coefficients with proper coverage probabilities. Alternatively, Bayesian methods solve these and other inferential problems by simulating from the posterior distribution of the coefficients. In this paper, we present Bayesian procedures that provide comprehensive solutions for inference on standardized coefficients. Simple computing algorithms are developed to generate posterior samples with no autocorrelation, based on both noninformative improper and informative proper prior distributions. Simulation studies show that Bayesian credible intervals constructed by our approaches have comparable and even better statistical properties than their frequentist counterparts, particularly in the presence of collinearity. In addition, our approaches solve some meaningful inferential problems that are difficult, if not impossible, from the frequentist standpoint, including identifying joint rankings of multiple standardized coefficients and making optimal decisions concerning their sizes and comparisons. We illustrate applications of our approaches through examples and make sample R functions available for implementing the proposed methods.
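The paper supplies R functions; the Python sketch below mirrors the core idea under the noninformative prior p(β, σ²) ∝ 1/σ², whose conjugate structure yields independent (autocorrelation-free) posterior draws. Treating the sample standard deviations as fixed scaling constants is our simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def std_coef_posterior(X, y, M=20_000):
    """Independent posterior draws of standardized coefficients under
    the noninformative prior p(beta, sigma^2) ∝ 1/sigma^2."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])
    XtX_inv = np.linalg.inv(Xd.T @ Xd)
    bhat = XtX_inv @ Xd.T @ y
    sse = np.sum((y - Xd @ bhat) ** 2)
    sig2 = sse / rng.chisquare(n - p - 1, M)            # inverse-gamma draws
    L = np.linalg.cholesky(XtX_inv)
    beta = bhat + np.sqrt(sig2)[:, None] * (rng.standard_normal((M, p + 1)) @ L.T)
    scale = X.std(axis=0, ddof=1) / y.std(ddof=1)       # fixed-constant rescaling
    return beta[:, 1:] * scale                           # drop the intercept

X = rng.normal(size=(100, 3))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]                  # induce collinearity
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(0, 1, 100)
draws = std_coef_posterior(X, y)
print(np.quantile(draws, [0.025, 0.975], axis=0))        # 95% credible intervals
print((draws[:, 0] > draws[:, 1]).mean())                # P(beta1* > beta2*): a ranking inference
```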

14.
Inferential methods based on ranks offer robust and powerful alternatives for testing and estimation. This article pursues two objectives. First, it develops a general method for simultaneous confidence intervals based on rank estimates of the parameters of a general linear model and derives the asymptotic distribution of the pivotal quantity. Second, it extends the method to high-dimensional data, such as gene expression data, for which the usual large-sample approximation does not apply. It is common in practice to use the asymptotic distribution to make inferences for small samples; the empirical investigation in this article shows that, for methods based on rank estimates, this approach does not produce viable inference and should be avoided. A method based on the bootstrap is outlined and shown to provide a reliable and accurate way of constructing simultaneous confidence intervals from rank estimates; in particular, the commonly applied normal or t approximations are not satisfactory, especially for large-scale inference. Rank-based methods are uniquely suited to microarray gene expression data, which often involve large-scale inference based on small samples containing many outliers and violating the normality assumption. A real microarray data set is analyzed using the rank-estimate simultaneous confidence intervals, and the viability of the proposed method is assessed through a Monte Carlo simulation study under varied assumptions.

15.
It is common to fit generalized linear models with binomial and Poisson responses to data whose variability exceeds the theoretical variability assumed by the model. This phenomenon, known as overdispersion, can spoil inference by flagging as significant parameters associated with variables that have no real effect on the dependent variable. This paper explains methods for detecting overdispersion and presents and evaluates three well-known methodologies for correcting it: random-mean models, quasi-likelihood methods and the double exponential family. In addition, it proposes new Bayesian model extensions that prove useful in correcting the overdispersion problem. Finally, using data from the 2005 National Demographic and Health Survey, the departmental factors that influence under-five mortality and postnatal screening of women are determined. Based on the results, extensions that generalize some of the aforementioned models are also proposed, their use motivated by the data set under study. The results show that the proposed overdispersion models provide a better statistical fit to the data.
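The standard detection diagnostic and the quasi-likelihood fix are both one-liners in statsmodels. The sketch below generates negative binomial counts, fits a Poisson GLM, reads off the Pearson dispersion, and rescales the covariance; the data-generating values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Overdispersed counts: negative binomial data fitted as Poisson.
n = 400
x = rng.uniform(size=n)
mu = np.exp(0.3 + 1.2 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # Var = mu + mu^2/2 > mu

X = sm.add_constant(x)
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Detection: Pearson chi-square / residual df far above 1 signals
# overdispersion.
phi = pois.pearson_chi2 / pois.df_resid
print(f"dispersion estimate: {phi:.2f}")

# Quasi-likelihood correction: rescale the covariance by phi, which
# widens the Poisson standard errors (one of the remedies reviewed above).
quasi = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
print(quasi.bse / pois.bse)                       # inflation ≈ sqrt(phi)
```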

16.
In many chemical data sets, the amount of radiation absorbed (absorbance) is related to the concentration of an element in the sample by the Lambert–Beer law. However, this relation changes abruptly when the concentration reaches an unknown threshold level, the so-called change point. In analytical chemistry there are many methods that describe the relationship between absorbance and concentration, but none provides inferential procedures for detecting change points. In this paper, we propose partially linear models with a change point separating the parametric and nonparametric components. The Schwarz information criterion is used to locate the change point, a back-fitting algorithm is presented to obtain the parameter estimates, and the penalized Fisher information matrix is used to calculate their standard errors. A simulation study examines the proposed method, and we then apply it to data sets from chemistry. The partially linear models with a change point developed here are a useful supplement to other methods of absorbance-concentration analysis in chemical studies and in many other practical applications.
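The Schwarz-criterion search for the change point is easy to demonstrate in a simplified, fully parametric broken-line version (the paper pairs it with a nonparametric component fitted by back-fitting, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(0)

def locate_change_point(x, y, grid=None):
    """Locate a single change point by minimizing the Schwarz (BIC)
    criterion over a grid of candidate thresholds."""
    n = len(y)
    if grid is None:
        grid = np.quantile(x, np.linspace(0.1, 0.9, 81))
    best = (np.inf, None)
    for c in grid:
        X = np.column_stack([np.ones(n), x, np.clip(x - c, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        bic = n * np.log(rss / n) + X.shape[1] * np.log(n)
        if bic < best[0]:
            best = (bic, c)
    return best[1]

# Toy calibration curve: absorbance flattens past an unknown concentration.
conc = rng.uniform(0, 5, 200)
absorb = (0.1 + 0.8 * conc - 0.8 * np.clip(conc - 3.2, 0, None)
          + rng.normal(0, 0.05, 200))
print(locate_change_point(conc, absorb))     # should be near 3.2
```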

17.
Statistical Inference for the Income Gini Coefficient
Statistical inference for Gini coefficient estimators is a central topic in research on the Gini coefficient. In this paper, we use the approximate large-sample asymptotic distribution method proposed by Davidson (2009) to conduct statistical inference for the income Gini coefficient estimator, including computing the estimator's standard error, constructing confidence intervals and performing hypothesis tests. Simulation experiments verify that, even in small samples, inference based on this method is highly reliable. On this basis, we conduct statistical inference for the true income Gini coefficient of urban residents in China.
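For a runnable flavor of such inference, the sketch below uses the standard plug-in Gini estimator with a jackknife standard error; note this is a simple stand-in for Davidson's (2009) analytic asymptotic variance formula used in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def gini(y):
    """Standard plug-in Gini estimator from sorted incomes."""
    y = np.sort(np.asarray(y, float))
    n = len(y)
    return 2 * np.sum(np.arange(1, n + 1) * y) / (n * y.sum()) - (n + 1) / n

def gini_ci(y, alpha=0.05):
    """Normal-approximation CI using a jackknife SE (stand-in for
    Davidson's analytic variance)."""
    n = len(y)
    g = gini(y)
    loo = np.array([gini(np.delete(y, i)) for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    z = norm.ppf(1 - alpha / 2)
    return g, se, (g - z * se, g + z * se)

incomes = rng.lognormal(mean=10, sigma=0.6, size=500)
print(gini_ci(incomes))      # log-normal(sigma=0.6) has Gini ≈ 0.33
```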

18.
In risk assessment, it is often desired to make inferences about the minimum dose levels (benchmark doses, or BMDs) at which a specified benchmark risk (BMR) is attained. Estimation and inference for BMDs are well understood for an adverse response to a single exposure agent, but the theory is much less developed when the adverse effect of two hazardous agents is studied simultaneously. Deutsch and Piegorsch [2012. Benchmark dose profiles for joint-action quantal data in quantitative risk assessment. Biometrics 68(4):1313–22] proposed a benchmark modeling paradigm for the dual-exposure setting, adapted from the single-exposure setting, and developed a strategy for full benchmark analysis with joint-action quantal data; they later extended the paradigm to continuous response outcomes [Deutsch, R. C., and W. W. Piegorsch. 2013. Benchmark dose profiles for joint-action continuous data in quantitative risk assessment. Biometrical Journal 55(5):741–54]. In their 2012 article, Deutsch and Piegorsch worked exclusively with the complementary log link for modeling risk with quantal data. The focus of the current paper is the logit link; in particular, we consider an Abbott-adjusted [Abbott, W. S. 1925. A method of computing the effectiveness of an insecticide. Journal of Economic Entomology 18(2):265–7] log-logistic model for the analysis of quantal data with nonzero background response. We discuss estimation of the benchmark profile (BMP), the collection of benchmark points that induce the prespecified BMR, and propose different methods for benchmark inference in studies involving two hazardous agents. Monte Carlo simulation studies evaluate the characteristics of the confidence limits, and an example illustrates the use of the proposed methods.
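In the single-exposure special case, the Abbott-adjusted log-logistic BMD has a closed form, which the sketch below works through; the parameter values are made up for illustration, and the paper's benchmark profile generalizes this boundary to two doses jointly.

```python
import numpy as np
from scipy.special import expit, logit

def bmd_loglogistic(beta0, beta1, bmr):
    """BMD under the Abbott-adjusted log-logistic model
        R(d) = gamma + (1 - gamma) * expit(beta0 + beta1 * log d),
    where the extra risk (R(d) - gamma)/(1 - gamma) equals the BMR at
        BMD = exp((logit(BMR) - beta0) / beta1)."""
    return np.exp((logit(bmr) - beta0) / beta1)

beta0, beta1, gamma = -3.0, 1.2, 0.05     # hypothetical fitted values
bmd = bmd_loglogistic(beta0, beta1, bmr=0.10)
risk = gamma + (1 - gamma) * expit(beta0 + beta1 * np.log(bmd))
print(bmd, (risk - gamma) / (1 - gamma))   # extra risk recovers 0.10
```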

19.

Structural change in a time series is practically unavoidable, so correctly detecting breakpoints plays a pivotal role in statistical modelling. This research considers segmented autoregressive models with exogenous variables and asymmetric GARCH errors, under GJR-GARCH and exponential-GARCH specifications, which use the leverage phenomenon to capture asymmetric responses to positive and negative shocks. The proposed models incorporate the skew Student-t distribution and demonstrate the advantages of this fat-tailed, skewed distribution over alternatives when structural changes appear in financial time series. We employ Bayesian Markov chain Monte Carlo methods to make inferences about the locations of the structural change points and the model parameters, and use the deviance information criterion to determine the optimal number of breakpoints via a sequential approach. The models accurately detect the number and locations of structural change points in simulation studies. For real data, we examine the impacts of daily gold returns and the VIX on S&P 500 returns during 2007-2019. The proposed methods integrate structural changes through the model parameters and capture the variability of a financial market more efficiently.
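A single segment of this specification can be fitted by maximum likelihood with the `arch` package, as sketched below on a placeholder series; the paper instead estimates all segments jointly by Bayesian MCMC and selects the number of breakpoints with DIC.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=6, size=1500)   # placeholder return series

# AR(1) mean with GJR-GARCH(1,1,1) volatility (o=1 adds the leverage
# term) and skew Student-t errors -- one segment of the model above.
model = arch_model(returns, mean="AR", lags=1,
                   vol="GARCH", p=1, o=1, q=1, dist="skewt")
res = model.fit(disp="off")
print(res.summary())
```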


20.
The generalized Pareto distribution (GPD) is widely used to model exceedances over thresholds. In this paper, we propose a new method, weighted nonlinear least squares (WNLS), to estimate the parameters of the three-parameter GPD, and we provide some asymptotic results for the proposed method. An extensive simulation evaluates its finite-sample behaviour and compares it with other methods suggested in the literature; the results show that WNLS outperforms the other methods in general settings. Finally, WNLS is applied to the analysis of real-life data.
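A WNLS estimator of this kind matches the model CDF to plotting positions under some weighting scheme. The sketch below uses inverse binomial-variance weights and plotting positions i/(n+1); these, and the starting values, are plausible choices of ours, not necessarily the paper's.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

def gpd_cdf(x, mu, sigma, xi):
    z = (x - mu) / sigma
    return 1 - np.power(np.clip(1 + xi * z, 1e-12, None), -1 / xi)

def wnls_gpd(x):
    """WNLS for the three-parameter GPD: minimize the weighted squared
    distance between F(x_(i)) and the plotting positions p_i = i/(n+1)."""
    x = np.sort(x)
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1)
    w = n / (p * (1 - p))                       # inverse-variance weights

    def objective(theta):
        mu, log_sig, xi = theta
        if mu >= x[0] or abs(xi) < 1e-4:        # support needs mu < min(x);
            return np.inf                       # avoid the xi -> 0 limit
        F = gpd_cdf(x, mu, np.exp(log_sig), xi)
        return np.sum(w * (F - p) ** 2)

    start = np.array([x[0] - 0.1, np.log(x.std()), 0.1])
    res = optimize.minimize(objective, start, method="Nelder-Mead")
    mu, log_sig, xi = res.x
    return mu, np.exp(log_sig), xi

x = stats.genpareto.rvs(c=0.2, loc=0.0, scale=1.0, size=500, random_state=rng)
print(wnls_gpd(x))    # should be near (0, 1, 0.2)
```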
