Similar Articles
20 similar articles found.
1.
Abstract. In regression experiments, to learn about the strength of the relationship between a covariate vector and a dependent variable, we propose a ‘coefficient of determination’ based on the quantiles. Such a coefficient is a ‘local’ measure in the sense that the strength is measured at a prespecified quantile level. Once estimated, it can be used, for example, to measure the relative importance of a subset of covariates in the quantile regression context. Related to this coefficient, we also propose a new ‘local’ lack‐of‐fit measure of a given parametric model. We provide some asymptotic results of the proposed measures and carry out a Monte Carlo simulation study to illustrate their use and performance in practice.
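The local measure described above can be sketched in the spirit of the Koenker–Machado goodness-of-fit ratio: one minus the ratio of minimized check losses with and without the covariates, evaluated at a fixed quantile level τ. This is an illustrative sketch only, not the paper's exact coefficient; the function names are made up here, and `scipy.optimize.minimize` is assumed available for the quantile fit.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile-regression check function: sum of rho_tau(u)."""
    return np.sum(u * (tau - (u < 0)))

def quantile_r1(x, y, tau):
    """Local 'R^2' at quantile level tau: 1 - V_full / V_restricted, where V is
    the minimized check loss with and without the covariate (Koenker-Machado
    style sketch; names and interface are illustrative)."""
    # Restricted model: intercept only, whose minimizer is the sample quantile.
    v_restricted = check_loss(y - np.quantile(y, tau), tau)
    # Full model: intercept plus slope, fitted by direct minimization of the
    # (convex, piecewise-linear) check loss from an OLS starting point.
    obj = lambda b: check_loss(y - b[0] - b[1] * x, tau)
    b0 = np.polyfit(x, y, 1)[::-1]  # [intercept, slope] start
    v_full = minimize(obj, b0, method="Powell").fun
    return 1.0 - v_full / v_restricted
```

A value near 1 at level τ indicates the covariate explains most of the conditional quantile variation at that level; computing it at several τ gives the ‘local’ picture the abstract describes.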

2.
In this article, the problem of testing the equality of coefficients of variation in a multivariate normal population is considered, and an asymptotic approach and a generalized p-value approach based on the concept of a generalized test variable are proposed. Monte Carlo simulation studies show that the proposed generalized p-value test has good empirical sizes and outperforms the asymptotic approach. In addition, the problems of hypothesis testing and confidence interval construction for the common coefficient of variation of a multivariate normal population are considered, and a generalized p-value and a generalized confidence interval are proposed. Using Monte Carlo simulation, we find that the coverage probabilities and expected lengths of this generalized confidence interval are satisfactory, and that the empirical sizes of the generalized p-value test are close to the nominal level. We illustrate our approaches using real data.

3.
In recent years, immunological science has evolved, and cancer vaccines are now approved and available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected and is actually observed in drug approval studies. Accordingly, we propose the evaluation of survival endpoints by weighted log‐rank tests with the Fleming–Harrington class of weights. We consider group sequential monitoring, which allows early efficacy stopping, and determine a semiparametric information fraction for the Fleming–Harrington family of weights, which is necessary for the error spending function. Moreover, we give a flexible survival model in cancer vaccine studies that considers not only the delayed treatment effect but also the long‐term survivors. In a Monte Carlo simulation study, we illustrate that when the primary analysis is a weighted log‐rank test emphasizing the late differences, the proposed information fraction can be a useful alternative to the surrogate information fraction, which is proportional to the number of events. Copyright © 2016 John Wiley & Sons, Ltd.
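The Fleming–Harrington class weights each log-rank increment by S(t−)^p (1−S(t−))^q, where S is the pooled Kaplan–Meier estimate, so p emphasizes early and q late differences. A minimal numpy sketch of the weighted test statistic follows; the function name and interface are illustrative, and the paper's group-sequential information fraction is not reproduced here.

```python
import numpy as np

def fleming_harrington_logrank(time, event, group, p=0.0, q=0.0):
    """Weighted log-rank Z statistic with Fleming-Harrington weights
    S(t-)^p * (1 - S(t-))^q, S being the pooled Kaplan-Meier estimate.
    p = q = 0 recovers the standard log-rank test."""
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    uniq = np.unique(time[event == 1])   # distinct event times, ascending
    s_pooled = 1.0                       # pooled KM just before current time
    num, var = 0.0, 0.0
    for t in uniq:
        at_risk = time >= t
        n_t = at_risk.sum()
        n1_t = (at_risk & (group == 1)).sum()
        d_t = ((time == t) & (event == 1)).sum()
        d1_t = ((time == t) & (event == 1) & (group == 1)).sum()
        w = s_pooled ** p * (1.0 - s_pooled) ** q
        num += w * (d1_t - d_t * n1_t / n_t)          # weighted O - E
        if n_t > 1:                                    # hypergeometric variance
            var += w ** 2 * d_t * (n_t - d_t) / (n_t - 1) \
                   * n1_t * (n_t - n1_t) / n_t ** 2
        s_pooled *= 1.0 - d_t / n_t                    # KM update after t
    return num / np.sqrt(var)
```

With a delayed treatment effect, choosing p = 0 and q > 0 downweights the early (null) part of the follow-up, which is the motivation stated in the abstract.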

4.
Bandwidth plays an important role in determining the performance of nonparametric estimators, such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large sample theory for the proposed bandwidth estimator and Bayesian estimators of the unknown parameters involved in the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators for bandwidth and parameters in the error density have satisfactory finite sample performance; and (ii) our proposed Bayesian approach achieves better performance in estimating the bandwidths than the normal reference rule and cross-validation. Moreover, we apply our proposed Bayesian bandwidth estimation method for the time-varying coefficient models that explain Okun’s law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.  相似文献   
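The cross-validation comparator mentioned above (not the paper's Bayesian method) is easy to sketch for the local constant (Nadaraya–Watson) estimator: pick the bandwidth minimizing the leave-one-out prediction error over a grid. Function names and the candidate grid are illustrative assumptions.

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Local constant (Nadaraya-Watson) estimate at x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def cv_bandwidth(x, y, grid):
    """Leave-one-out cross-validation bandwidth for the NW estimator."""
    n = len(x)
    best_h, best_err = None, np.inf
    for h in grid:
        err = 0.0
        for i in range(n):
            mask = np.arange(n) != i   # drop observation i
            err += (y[i] - nw_estimate(x[i], x[mask], y[mask], h)) ** 2
        if err < best_err:
            best_h, best_err = h, err
    return best_h
```

The Bayesian approach in the abstract instead treats the bandwidth as a parameter with a posterior, jointly with the error-density parameters; the sketch above only shows the frequentist baseline it is compared against.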

5.
Numerous tests have been proposed to determine whether or not the exponential model is suitable for a given data set. In this article, we propose a new test statistic based on spacings to test whether general progressive Type-II censored samples come from an exponential distribution. The null distribution of the test statistic is discussed and can be approximated by the standard normal distribution. We also propose an approximate method for calculating the expectation and variance of the samples under the null hypothesis, and the corresponding power function is given. A simulation study is then conducted: we calculate the normal approximation of the power and compare the results with those obtained by Monte Carlo simulation under different alternatives with distinct types of hazard function. The simulation results show that the Monte Carlo power estimates are better for alternatives with a monotone increasing hazard function, whereas the normal-approximation results are relatively better otherwise. Finally, two illustrative examples are presented.

6.
The assessment of overall homogeneity of time‐to‐event curves is a key element in survival analysis in biomedical research. The currently commonly used testing methods, e.g. log‐rank test, Wilcoxon test, and Kolmogorov–Smirnov test, may have a significant loss of statistical testing power under certain circumstances. In this paper we propose a new testing method that is robust for the comparison of the overall homogeneity of survival curves based on the absolute difference of the area under the survival curves using normal approximation by Greenwood's formula. Monte Carlo simulations are conducted to investigate the performance of the new testing method compared against the log‐rank, Wilcoxon, and Kolmogorov–Smirnov tests under a variety of circumstances. The proposed new method has robust performance with greater power to detect the overall differences than the log‐rank, Wilcoxon, and Kolmogorov–Smirnov tests in many scenarios in the simulations. Furthermore, the applicability of the new testing approach is illustrated in a real data example from a kidney dialysis trial. Copyright © 2009 John Wiley & Sons, Ltd.
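The building block of the test above is the area under a Kaplan–Meier curve up to a truncation time (the restricted mean survival time); the test then compares those areas across groups. Here is a minimal numpy sketch of that building block only; the Greenwood-based variance and the normal approximation of the abstract's test are not reproduced, and the function names are illustrative.

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier survival estimate; returns (event times, S(t))."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    uniq = np.unique(time[event == 1])
    S, s = [], 1.0
    for t in uniq:
        d = ((time == t) & (event == 1)).sum()   # deaths at t
        r = (time >= t).sum()                    # at risk just before t
        s *= 1.0 - d / r
        S.append(s)
    return uniq, np.array(S)

def area_under_km(time, event, tau):
    """Area under the KM step function on [0, tau] (restricted mean survival)."""
    t, S = km_curve(time, event)
    grid = np.concatenate([[0.0], t[t < tau], [tau]])
    vals = np.concatenate([[1.0], S[t < tau]])   # S is right-continuous
    return np.sum(vals * np.diff(grid))
```

With no censoring, the restricted area equals the sample mean of min(T, tau), which gives a quick sanity check.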

7.
For the semiparametric varying-coefficient regression model, we construct a new test statistic for spatial correlation and derive an approximate formula for its p-value using a third-order moment approximation. Monte Carlo simulation results show that the statistic detects spatial correlation with high accuracy and reliability. We also examine the power of the test when the error term follows different distributions, demonstrating the robustness of the method. Furthermore, we give a Bootstrap version of the test statistic and report simulation results on its empirical size.

8.
The maximum likelihood (ML) method is used to estimate the unknown Gamma regression (GR) coefficients. In the presence of multicollinearity, the variance of the ML estimator becomes inflated and inference based on the ML method may not be trustworthy. To combat multicollinearity, the Liu estimator has been used. In this estimator, estimation of the Liu parameter d is an important problem, and only a few estimation methods are available in the literature. This study considers some of these methods and also proposes some new methods for estimating d. A Monte Carlo simulation study is conducted to assess the performance of the proposed methods, with the mean squared error (MSE) as the performance criterion. Based on the Monte Carlo simulation and application results, the Liu estimator is shown to be superior to the ML estimator, and a recommendation is given as to which estimator of the Liu parameter should be used in the Liu estimator for the GR model.
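The abstract concerns Gamma regression, but the role of the Liu parameter d is easiest to see in the linear-model form of the estimator, beta_d = (X'X + I)^{-1}(X'X + dI) beta_OLS: d = 1 recovers the ML/OLS estimate and smaller d shrinks it. A minimal sketch under that linear-model assumption (not the paper's GR version):

```python
import numpy as np

def liu_estimator(X, y, d):
    """Liu estimator for the linear model:
    beta_d = (X'X + I)^{-1} (X'X + d I) beta_OLS.
    d = 1 returns the OLS estimate; d < 1 shrinks toward zero."""
    XtX = X.T @ X
    p = XtX.shape[0]
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    return np.linalg.solve(XtX + np.eye(p), (XtX + d * np.eye(p)) @ beta_ols)
```

Because (X'X + I)^{-1} X'X has all eigenvalues strictly below one, d = 0 gives a strictly shorter coefficient vector than OLS, which is the variance-reduction mechanism the abstract exploits.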

9.
We present a bootstrap Monte Carlo algorithm for computing the power function of the generalized correlation coefficient. The proposed method makes no assumptions about the form of the underlying probability distribution and may be used with observed data to approximate the power function, or with pilot data for sample size determination. In particular, the bootstrap power functions of the Pearson product moment correlation and the Spearman rank correlation are examined. Monte Carlo experiments indicate that the proposed algorithm is reliable and compares well with the asymptotic values. An example which demonstrates how this method can be used for sample size determination and power calculations is provided.
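The core idea can be sketched directly: resample (x, y) pairs with replacement, run the correlation test on each resample, and take the rejection rate as the power estimate at the observed effect size. This is a hedged illustration of the general approach, not the paper's algorithm; it assumes `scipy.stats` is available, and the function name and defaults are made up here.

```python
import numpy as np
from scipy import stats

def bootstrap_power(x, y, n_boot=500, alpha=0.05, method="pearson", seed=0):
    """Estimate the power of a correlation test at the observed effect size by
    resampling (x, y) pairs with replacement -- no distributional assumptions."""
    rng = np.random.default_rng(seed)
    n = len(x)
    test = stats.pearsonr if method == "pearson" else stats.spearmanr
    rejections = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)           # paired resample
        _, pval = test(x[idx], y[idx])
        rejections += pval < alpha
    return rejections / n_boot
```

For sample size determination, the same loop can be rerun with resamples of a larger size than n until the estimated power reaches the target level.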

10.
In the common factor model for subtest scores, several reliability coefficients, including Cronbach's α, have been found to be biased. In this article, we introduce a new coefficient, θG, or Generalized θ, which is a generalized version of Armor's θ coefficient and is equal to the true reliability when the dimensions are orthogonal and the measures are parallel. We assessed McDonald's ωt, α, and θG in terms of mean bias, efficiency, and precision using a Monte Carlo simulation. θG outperformed ωt when the factors were orthogonal or nearly orthogonal with low correlations between them.
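For context, the two classical coefficients being generalized are easy to compute: Cronbach's α from the item and total-score variances, and Armor's θ from the largest eigenvalue of the item correlation matrix. The sketch below shows those baselines only (θG itself is not reproduced); function names are illustrative.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an n x k matrix of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def armor_theta(X):
    """Armor's theta: theta = k/(k-1) * (1 - 1/lambda_1), with lambda_1 the
    largest eigenvalue of the item correlation matrix."""
    k = X.shape[1]
    lam1 = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)).max()
    return k / (k - 1) * (1 - 1 / lam1)
```

On unidimensional data with parallel items both coefficients agree closely; the article's θG is designed to extend θ to the multidimensional (orthogonal-factor) case.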

11.
Zhang Jing et al. Statistical Research (《统计研究》), 2020, 37(11): 57-67
In recent years, consumer finance in China has developed rapidly, but it also faces increasingly complex fraud and credit risks. To better monitor the credit risk of borrowers in consumer finance, this paper proposes a risk-control method based on a sparse-structured continuation ratio model. Compared with traditional binary classification models, this model can handle ordered data in which borrowers are divided into three or more classes; while estimating the coefficients, it automatically selects important variables from a large and complex pool, taking the structural features of the coefficients across sub-models into account during variable selection. Monte Carlo simulations show that the proposed sparse-structured continuation ratio model performs well in both classification generalization error and variable selection. Finally, the model is applied to a real consumer-finance credit risk analysis: for borrowers with insufficient traditional credit records, introducing high-frequency e-commerce consumption behavior data and applying the proposed high-dimensional ordered multi-class model effectively identifies borrowers' credit risk, remedying the shortcomings of traditional credit scoring methods.

12.
In this paper, we propose a class of general partially linear varying-coefficient transformation models for ranking data. In these models, the functional coefficients are viewed as nuisance parameters and approximated using B-spline smoothing. The B-spline coefficients and regression parameters are estimated by a rank-based maximum marginal likelihood method. A three-stage Markov chain Monte Carlo stochastic approximation algorithm based on ranking data is used to compute the estimates and the corresponding variances for all the B-spline coefficients and regression parameters. Through three simulation studies and an application to Hong Kong horse racing data, the proposed procedure is shown to be accurate, stable, and practical.

13.
The present paper considers the weighted mixed regression estimation of the coefficient vector in a linear regression model with stochastic linear restrictions binding the regression coefficients. We introduce a new two-parameter-weighted mixed estimator (TPWME) by unifying the weighted mixed estimator of Schaffrin and Toutenburg [1] and the two-parameter estimator (TPE) of Özkale and Kaçıranlar [2]. This new estimator is a general estimator which includes the weighted mixed estimator, the TPE and the restricted two-parameter estimator (RTPE) proposed by Özkale and Kaçıranlar [2] as special cases. Furthermore, we compare the TPWME with the weighted mixed estimator and the TPE with respect to the matrix mean square error criterion. A numerical example and a Monte Carlo simulation experiment are presented by using different estimators of the biasing parameters to illustrate some of the theoretical results.

14.
Inferences for survival curves based on right censored data are studied for situations in which it is believed that the treatments have survival times at least as large as the control or at least as small as the control. Testing homogeneity with the appropriate order restricted alternative and testing the order restriction as the null hypothesis are considered. Under a proportional hazards model, the ordering on the survival curves corresponds to an ordering on the regression coefficients. Approximate likelihood methods, which are obtained by applying order restricted procedures to the estimates of the regression coefficients, and ordered analogues to the log rank test, which are based on the score statistics, are considered. Mau's (1988) test, which does not require proportional hazards, is extended to this ordering on the survival curves. Using Monte Carlo techniques, the type I error rates are found to be close to the nominal level and the powers of these tests are compared. Other order restrictions on the survival curves are discussed briefly.

15.
Synthetic likelihood is an attractive approach to likelihood-free inference when an approximately Gaussian summary statistic for the data, informative for inference about the parameters, is available. The synthetic likelihood method derives an approximate likelihood function from a plug-in normal density estimate for the summary statistic, with plug-in mean and covariance matrix obtained by Monte Carlo simulation from the model. In this article, we develop alternatives to Markov chain Monte Carlo implementations of Bayesian synthetic likelihoods with reduced computational overheads. Our approach uses stochastic gradient variational inference methods for posterior approximation in the synthetic likelihood context, employing unbiased estimates of the log likelihood. We compare the new method with a related likelihood-free variational inference technique in the literature, while at the same time improving the implementation of that approach in a number of ways. These new algorithms are feasible to implement in situations which are challenging for conventional approximate Bayesian computation methods, in terms of the dimensionality of the parameter and summary statistic.
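The plug-in construction described above can be sketched compactly: simulate summary statistics from the model at a parameter value, fit a Gaussian by the sample mean and covariance, and evaluate the observed summary under that Gaussian. This is the basic synthetic log-likelihood only, not the paper's variational machinery; function names and the jitter constant are illustrative.

```python
import numpy as np

def synthetic_loglik(theta, s_obs, simulate, n_sim=200, seed=0):
    """Synthetic log-likelihood: simulate n_sim summary statistics from the
    model at theta, fit a Gaussian via plug-in mean/covariance, and evaluate
    the observed summary s_obs under that Gaussian."""
    rng = np.random.default_rng(seed)
    S = np.array([simulate(theta, rng) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    d = len(s_obs)
    Sigma = np.cov(S, rowvar=False) + 1e-8 * np.eye(d)   # small jitter
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (logdet + diff @ np.linalg.solve(Sigma, diff)
                   + d * np.log(2 * np.pi))
```

Because each evaluation is itself a Monte Carlo estimate, the log-likelihood is noisy; the abstract's contribution is handling that noise inside stochastic-gradient variational inference rather than inside an MCMC chain.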

16.
We consider estimating the mode of a response given an error‐prone covariate. It is shown that ignoring measurement error typically leads to inconsistent inference for the conditional mode of the response given the true covariate, as well as misleading inference for regression coefficients in the conditional mode model. To account for measurement error, we first employ the Monte Carlo corrected score method (Novick & Stefanski, 2002) to obtain an unbiased score function based on which the regression coefficients can be estimated consistently. To relax the normality assumption on measurement error this method requires, we propose another method where deconvoluting kernels are used to construct an objective function that is maximized to obtain consistent estimators of the regression coefficients. Besides rigorous investigation on asymptotic properties of the new estimators, we study their finite sample performance via extensive simulation experiments, and find that the proposed methods substantially outperform a naive inference method that ignores measurement error. The Canadian Journal of Statistics 47: 262–280; 2019 © 2019 Statistical Society of Canada

17.
As a measure of certainty, informational energy has been used in many statistical problems. In this article, we introduce some estimators of this quantity by modifying the basic estimator available in the literature. The new measures are then used to develop tests of uniformity. A Monte Carlo simulation study is performed to evaluate the power behavior of the proposed tests. The results confirm that the new tests are preferable in some situations.
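Informational energy of a density f is the integral of f squared, which equals 1 for the uniform density on [0, 1]; a uniformity test rejects when an estimate of it is too far from 1. Below is one basic plug-in estimator (a histogram version), illustrative only and not the modified estimators of the abstract; the function name and bin count are assumptions.

```python
import numpy as np

def informational_energy(sample, bins=20, lo=0.0, hi=1.0):
    """Histogram plug-in estimate of the informational energy
    E(f) = integral of f(x)^2 on [lo, hi]. Equals 1 for the uniform
    density on [0, 1]; larger values indicate more concentration."""
    counts, _ = np.histogram(sample, bins=bins, range=(lo, hi))
    p = counts / counts.sum()          # cell probabilities
    width = (hi - lo) / bins
    return np.sum(p ** 2) / width      # sum of (p_i / width)^2 * width
```

Any density more concentrated than the uniform pushes the estimate above 1, which is the behavior the uniformity tests in the abstract exploit.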

18.
This paper introduces a new class of skew distributions by extending the alpha skew normal distribution proposed by Elal-Olivero [Elal-Olivero, D. Alpha-skew-normal distribution. Proyecciones. 2010;29:224–240]. Statistical properties of the new family are studied in detail. In particular, explicit expressions are derived for the moments, the shape parameters including the skewness and kurtosis coefficients, and the moment generating function. The problem of estimating the parameters on the basis of a random sample from the new class of distributions is considered. To examine the performance of the obtained estimators, a Monte Carlo simulation study is conducted. The flexibility and usefulness of the proposed family of distributions are illustrated by analysing three real data sets.

19.
In this article, we propose a new empirical likelihood method for linear regression analysis with a right censored response variable. The method is based on the synthetic data approach for censored linear regression analysis. A log-empirical likelihood ratio test statistic for the entire regression coefficient vector is developed, and we show that it converges to a standard chi-squared distribution. The proposed method can also be used to make inferences about linear combinations of the regression coefficients. Moreover, the proposed empirical likelihood ratio provides a way to combine different normal equations derived from various synthetic response variables. Maximizing this empirical likelihood ratio yields a maximum empirical likelihood estimator which is asymptotically equivalent to the solution of the estimating equations that are the optimal linear combination of the original normal equations, which improves estimation efficiency. The method is illustrated by Monte Carlo simulation studies as well as a real example.

20.
Screening procedures play an important role in data analysis, especially in high-throughput biological studies where the datasets consist of more covariates than independent subjects. In this article, a Bayesian screening procedure is introduced for the binary response models with logit and probit links. In contrast to many screening rules based on marginal information involving one or a few covariates, the proposed Bayesian procedure simultaneously models all covariates and uses closed-form screening statistics. Specifically, we use the posterior means of the regression coefficients as screening statistics; by imposing a generalized g-prior on the regression coefficients, we derive the analytical form of their posterior means and compute the screening statistics without Markov chain Monte Carlo implementation. We evaluate the utility of the proposed Bayesian screening method using simulations and real data analysis. When the sample size is small, the simulation results suggest improved performance with comparable computational cost.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号