Similar Documents
17 similar documents found (search time: 156 ms)
1.
Samples under big data are mostly non-probability samples whose inclusion probabilities are unknown, and they may involve many or even high-dimensional covariates, so how to make inferences from such non-probability samples deserves exploration. To address this problem, and drawing on the dimension-reduction character of Model-X Knockoffs, this paper proposes using Model-X Knockoffs to screen out the important variables and then building a logistic propensity score model to estimate the inclusion probabilities (propensity scores) of the non-probability sample units and make inferences about the population, thereby improving estimation precision while controlling the false discovery rate (FDR) and the power of variable selection. Simulation and empirical results show that, compared with the ordinary logistic propensity score model and the generalized linear regression model, the population mean estimator based on the Model-X Knockoffs logistic propensity score model has smaller bias and higher efficiency, controls the FDR level well, and attains power close to 1.
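Below is a minimal sketch of the propensity-score weighting step described above, assuming the Model-X Knockoffs screening has already returned the indices of the selected covariates; the pooled-sample logistic fit, the function name, and the Hajek-type estimator are illustrative assumptions, not the paper's code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def knockoff_ipw_mean(y_np, X_np, X_prob, d_prob, selected):
    """Population mean from a non-probability sample via a logistic
    propensity score model fit on knockoff-selected covariates.
    y_np, X_np : outcomes and covariates of the non-probability sample
    X_prob     : covariates of a reference probability sample
    d_prob     : design weights of the probability sample
    selected   : covariate indices kept by the knockoff filter (assumed given)
    """
    # Pool the two samples; z = 1 marks non-probability units.
    X = np.vstack([X_np[:, selected], X_prob[:, selected]])
    z = np.r_[np.ones(len(X_np)), np.zeros(len(X_prob))]
    # Probability units carry their design weights so the pooled data
    # represent the population (a common pseudo-likelihood device).
    w = np.r_[np.ones(len(X_np)), d_prob]
    fit = LogisticRegression(max_iter=1000).fit(X, z, sample_weight=w)
    pi = fit.predict_proba(X_np[:, selected])[:, 1]  # estimated propensity scores
    return np.sum(y_np / pi) / np.sum(1.0 / pi)      # Hajek-type IPW mean
```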

2.
Inference about population parameters from survey data usually follows one of two approaches: design-based inference or model-based inference. Design-based inference rests on randomization theory; inference depends on the sampling design, and the estimators are unbiased and consistent in large samples, but efficiency is low when the sample is small or non-sampling errors are present. Model-based inference regards the finite population as a random sample from an infinite superpopulation, and inference depends on the model assumptions; building a superpopulation model is very flexible and helps make full use of auxiliary population information to improve estimation precision, but estimation errors arise when the model is misspecified or the sample selection process is informative. How to combine the two approaches, preserving the sample's representativeness of the population while retaining estimation efficiency and good estimator properties, remains an open question. Weights play a central role in design-based inference: they reflect the effect of the sampling design on the sample and allow the sample to be projected back to the population. Introducing weights into model-based inference makes the model-based results representative of the population, better exploits the combined strengths of the two inferential systems, and weakens the influence of the model assumptions on the results. Starting from the influence of weights on model-based inference, and focusing on causal inference, this paper proposes introducing the weights into the modelling of both the propensity score model and the prediction model to construct a doubly robust estimator, and verifies the method through simulation studies. The results show that estimating treatment effects with the proposed method makes full use of the weights and yields more accurate and more robust estimates. The empirical part analyses the 2017 CGSS survey data, further showing that the influence of the sampling design should be fully taken into account when making model-based inferences from survey data, and providing a reference for researchers conducting causal inference and other studies based on survey data.
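A hedged sketch of a doubly robust (AIPW-type) estimator in which the survey weights enter both the propensity score model and the prediction models, as proposed above; all names are illustrative and the paper's exact estimator may differ:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def weighted_dr_ate(y, t, X, w):
    """Doubly robust average treatment effect with survey weights w used
    in the propensity model, the outcome models, and the final average."""
    ps = LogisticRegression(max_iter=1000).fit(X, t, sample_weight=w).predict_proba(X)[:, 1]
    m1 = LinearRegression().fit(X[t == 1], y[t == 1], sample_weight=w[t == 1]).predict(X)
    m0 = LinearRegression().fit(X[t == 0], y[t == 0], sample_weight=w[t == 0]).predict(X)
    # AIPW influence terms: consistent if either the propensity model
    # or the outcome models are correctly specified.
    aipw = m1 - m0 + t * (y - m1) / ps - (1 - t) * (y - m0) / (1 - ps)
    return np.sum(w * aipw) / np.sum(w)
```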

3.
How to solve the statistical inference problem for Internet access panel surveys is a serious challenge faced by web surveys in the big data era. To address it, this paper proposes combining the Internet access panel sample with a probability sample, constructing pseudo-weights via propensity score inverse weighting and weighting-class adjustment to estimate the target population, and further estimating the variance with the Vwr method based on with-replacement probability sampling, the Vgreg method based on generalized regression estimation, and the Jackknife method, comparing the performance of the different estimators. The study shows that the proposed population mean estimators perform well whether the probability sample is large or small, and that among the variance estimators the Jackknife method performs best.
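For the variance step, here is a minimal delete-one Jackknife sketch for a pseudo-weighted (Hajek) mean; this illustrates the generic Jackknife idea rather than the paper's exact replication scheme:

```python
import numpy as np

def jackknife_variance(y, w):
    """Delete-one Jackknife variance of the weighted mean sum(w*y)/sum(w)."""
    n = len(y)
    theta = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                      # drop unit i
        theta[i] = np.sum(w[keep] * y[keep]) / np.sum(w[keep])
    return (n - 1) / n * np.sum((theta - theta.mean()) ** 2)
```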

4.
金勇进  刘展 《统计研究》2016,33(3):11-17
When sampling from big data, the sampling frame is often hard to construct, so the sample drawn is a non-probability sample to which traditional sampling inference theory is difficult to apply. How to solve the statistical inference problem for non-probability sampling is a serious challenge faced by sample surveys in the big data era. This paper proposes a basic framework for the problem: first, sampling methods, such as sample-matching-based selection and link-tracing sampling, can make the non-probability sample approximate a probability sample, so that probability-sampling inference theory applies; second, weight construction and adjustment, where base weights analogous to those of a probability sample can be obtained via pseudo-design, model-based, and propensity score methods; third, estimation, where mixed-probability estimation based on pseudo-design, model-based, and Bayesian approaches can be considered. Finally, sample-matching-based selection is taken as an example to discuss a concrete solution.
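A small sketch of the sample-matching idea listed first in the framework above, assuming the same covariates are observed for a probability "target" sample and for a large non-probability pool; nearest-neighbour matching is one common choice, and the paper's distance and matching rule may differ:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sample_matching(X_pool, X_target):
    """For each unit of the probability target sample, pick the closest
    unit (in covariate space) from the non-probability pool, so the
    matched sample inherits the target sample's covariate distribution."""
    nn = NearestNeighbors(n_neighbors=1).fit(X_pool)
    _, idx = nn.kneighbors(X_target)
    return idx.ravel()  # pool indices forming the matched sample
```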

5.
刘展 et al. 《统计研究》2021,38(11):130-140
With the rapid development of big data and Internet technology, web surveys are used more and more widely. This paper proposes a random forest propensity score model for inference from web survey samples: a random forest composed of a number of classification decision trees is built to estimate the propensity scores of the web survey sample units and thereby make inferences about the population. Simulation and empirical results show that the relative bias, variance, and mean squared error of the population mean estimator based on the random forest propensity score model are all smaller than those of the estimator based on the logistic propensity score model, so the proposed method performs better.
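A minimal sketch of the random forest propensity score approach, assuming a reference probability sample with design weights is pooled with the web sample; the hyperparameters and the clipping of extreme scores are illustrative choices, not taken from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_propensity_mean(y_web, X_web, X_ref, d_ref):
    """Population mean from a web survey sample with random forest
    propensity scores estimated on the pooled web + reference data."""
    X = np.vstack([X_web, X_ref])
    z = np.r_[np.ones(len(X_web)), np.zeros(len(X_ref))]  # 1 = web unit
    w = np.r_[np.ones(len(X_web)), d_ref]  # reference units carry design weights
    rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=20)
    rf.fit(X, z, sample_weight=w)
    pi = np.clip(rf.predict_proba(X_web)[:, 1], 1e-3, 1 - 1e-3)  # avoid extremes
    return np.sum(y_web / pi) / np.sum(1.0 / pi)
```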

6.
To address the statistical inference problem of non-probability sampling, this paper proposes the following approach: first select a matched sample by propensity score matching, then apply three weighting adjustments to the matched sample—propensity score inverse weighting, weighting-class adjustment, and post-stratification adjustment—to estimate the target population, and compare the performance of the different estimators. Monte Carlo simulations and an empirical study show that when the ratio of the Internet access panel sample size to the target sample size is less than 3, all three weighting methods estimate better than the unweighted matched sample; when the ratio is at least 3, propensity score post-stratification adjustment and the unweighted matched sample perform better.

7.
The idea of model-assisted methods is to obtain efficient inference about population parameters from the sampling design with the help of a superpopulation model. A sample for which the HT estimator of the auxiliary variables equals the true population totals is called a balanced sample. For balanced samples, if the heteroscedasticity of the superpopulation model can be explained by the auxiliary variables, the optimal sampling strategy follows: combining a balanced sampling design with the HT estimator is optimal, with inclusion probabilities proportional to the standard deviation of the model residuals.
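In standard notation (assumed here rather than quoted from the paper): with inclusion probabilities $\pi_k$, sample $s$, population $U$, and auxiliary vector $\mathbf{x}_k$, the HT estimator, the balancing condition, and the optimal inclusion probabilities read

$$\hat{t}_{y,\mathrm{HT}}=\sum_{k\in s}\frac{y_k}{\pi_k},\qquad \sum_{k\in s}\frac{\mathbf{x}_k}{\pi_k}=\sum_{k\in U}\mathbf{x}_k,\qquad \pi_k\propto\sigma_k,$$

where $\sigma_k$ is the residual standard deviation of unit $k$ under the superpopulation model.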

8.
于力超  金勇进 《统计研究》2016,33(1):95-102
Sample surveys often track multiple respondents over time to obtain panel data for inference about population characteristics. Panel data frequently contain missing values, and most software for handling missing panel data simply deletes respondents with missing values to obtain a complete data set, which biases population parameter estimates when the missingness mechanism is not missing at random. This paper discusses statistical analysis of panel data under nonignorable missingness, mainly using model-based likelihood inference: the joint distribution of the target variable, the missingness indicator, and the random effects vector is modelled; random effects are introduced into existing selection models and pattern-mixture models; the computation of the expectation of the target variable is studied; parameter estimation under a hybrid random-effects model is investigated; estimation steps for maximum likelihood inference of population parameters are given for relatively simple variable distributions; and finally the methods are compared via simulation.
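The selection-model factorization underlying this likelihood approach can be stated compactly (standard notation, an assumption rather than a quotation from the paper): with target variable $y$, missingness indicator $r$, random effects $u$, and parameters $(\theta,\psi)$,

$$f(y,r\mid\theta,\psi)=\int f(y\mid u,\theta)\,f(r\mid y,u,\psi)\,f(u)\,du,$$

so the missingness is nonignorable precisely because $r$ is allowed to depend on $y$ (including its unobserved components), and the observed-data likelihood additionally integrates over the missing part of $y$.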

9.
There are usually two systems for inference from complex samples: traditional randomization-based inference and model-based inference. Traditional sampling theory is grounded in randomization theory: the population values are regarded as fixed, randomness enters only through sample selection, and inference about the population depends on the sampling design. This method yields robust estimators in large samples but fails with small samples, missing data, and similar situations. Model-based inference regards the finite population as a random sample drawn from a superpopulation model, and inference about the population depends on the model; under a nonignorable sampling design, however, the estimators are biased. Based on an analysis of these two approaches, this paper proposes design-assisted model inference and points out its important applications in complex sampling.

10.
Design-based inference and model-based inference are two different routes to finite population inference. This paper discusses the model-based method of best linear unbiased estimation (BLUE) and points out that under particular superpopulation models the BLUE coincides with the design-based estimator. A numerical analysis illustrates the advantages of model-based inference.

11.
Analysts of survey data are often interested in modelling the population process, or superpopulation, that gave rise to a 'target' set of survey variables. An important tool for this is maximum likelihood estimation. A survey is said to provide limited information for such inference if data used in the design of the survey are unavailable to the analyst. In this circumstance, sample inclusion probabilities, which are typically available, provide information which needs to be incorporated into the analysis. We consider the case where these inclusion probabilities can be modelled in terms of a linear combination of the design and target variables, and only sample values of these are available. Strict maximum likelihood estimation of the underlying superpopulation means of these variables appears to be analytically impossible in this case, but an analysis based on approximations to the inclusion probabilities leads to a simple estimator which is a close approximation to the maximum likelihood estimator. In a simulation study, this estimator outperformed several other estimators that are based on approaches suggested in the sampling literature.

12.
Results in five areas of survey sampling dealing with the choice of the sampling design are reviewed. In Section 2, the results and discussions surrounding the purposive selection methods suggested by linear regression superpopulation models are reviewed. In Section 3, similar models to those in the previous section are considered; however, random sampling designs are considered and attention is focused on the optimal choice of the inclusion probabilities $\pi_j$. Then in Section 4, systematic sampling methods obtained under autocorrelated superpopulation models are reviewed. The next section examines minimax sampling designs. The work in the final section is based solely on the randomization. In Section 6, methods of sample selection which yield inclusion probabilities $\pi_j = n/N$ and $\pi_{ij} = n(n-1)/\{N(N-1)\}$, but for which there are fewer than $\binom{N}{n}$ possible samples, are mentioned briefly.

13.
The logistic regression model has become a standard tool to investigate the relationship between a binary outcome and a set of potential predictors. When analyzing binary data, it often arises that the observed proportion of zeros is greater than expected under the postulated logistic model. Zero-inflated binomial (ZIB) models have been developed to fit binary data that contain too many zeros. Maximum likelihood estimators in these models have been proposed and their asymptotic properties established. Several aspects of ZIB models still deserve attention however, such as the estimation of odds-ratios and event probabilities. In this article, we propose estimators of these quantities and we investigate their properties both theoretically and via simulations. Based on these results, we provide recommendations about the range of conditions (minimum sample size, maximum proportion of zeros in excess) under which a reliable statistical inference on the odds-ratios and event probabilities can be obtained in a ZIB regression model. A real-data example illustrates the proposed estimators.
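For concreteness, the ZIB mixture referred to above can be written as follows (standard form with logit links; the notation is an assumption, not quoted from the article): with zero-inflation probability $p_i$ and binomial success probability $\pi_i$,

$$P(Y_i=y)=p_i\,\mathbf{1}\{y=0\}+(1-p_i)\binom{m_i}{y}\pi_i^{\,y}(1-\pi_i)^{m_i-y},\qquad \operatorname{logit}(\pi_i)=\mathbf{x}_i^{\top}\beta,\quad \operatorname{logit}(p_i)=\mathbf{z}_i^{\top}\gamma,$$

and the odds-ratios of interest are $e^{\beta_j}$ from the binomial component, whose naive interpretation is distorted if the zero inflation is ignored.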

14.
Inference in model-based cluster analysis
A new approach to cluster analysis has been introduced based on parsimonious geometric modelling of the within-group covariance matrices in a mixture of multivariate normal distributions, using hierarchical agglomeration and iterative relocation. It works well and is widely used via the MCLUST software available in S-PLUS and StatLib. However, it has several limitations: there is no assessment of the uncertainty about the classification, the partition can be suboptimal, parameter estimates are biased, the shape matrix has to be specified by the user, prior group probabilities are assumed to be equal, the method for choosing the number of groups is based on a crude approximation, and no formal way of choosing between the various possible models is included. Here, we propose a new approach which overcomes all these difficulties. It consists of exact Bayesian inference via Gibbs sampling, and the calculation of Bayes factors (for choosing the model and the number of groups) from the output using the Laplace–Metropolis estimator. It works well in several real and simulated examples.

15.
It is suggested that inference under the proportional hazards model can be carried out by programs for exact inference under the logistic regression model. Advantages of such inference are that software is available and that multivariate models can be addressed. The method has been evaluated by means of coverage and power calculations in certain situations. In all situations coverage was above the nominal level, but rather conservative. A different type of exact inference is developed under Type II censoring. Inference was then less conservative; however, there are limitations with respect to the censoring mechanism and multivariate generalizations, and software is not available. This method also requires extensive computational power. Performance of large-sample Wald, score and likelihood inference was also considered. Large-sample methods work remarkably well with small data sets, but inference by score statistics seems to be the best choice. There seem to be some problems with likelihood ratio inference that may originate from how this method handles infinite estimates of the regression parameter. Inference by Wald statistics can be quite conservative with very small data sets.

16.
Approximate Bayesian Inference for Survival Models
Abstract. Bayesian analysis of time-to-event data, usually called survival analysis, has received increasing attention in recent years. In Cox-type models it allows the use of information from the full likelihood instead of from a partial likelihood, so that the baseline hazard function and the model parameters can be jointly estimated. In general, Bayesian methods permit full and exact posterior inference for any parameter or predictive quantity of interest. On the other hand, Bayesian inference often relies on Markov chain Monte Carlo (MCMC) techniques which, from the user's point of view, may appear slow at delivering answers. In this article, we show how a new inferential tool named integrated nested Laplace approximations can be adapted and applied to many survival models, making Bayesian analysis both fast and accurate without having to rely on MCMC-based inference.

17.
The article considers nonparametric inference for quantile regression models with time-varying coefficients. The errors and covariates of the regression are assumed to belong to a general class of locally stationary processes and are allowed to be cross-dependent. Simultaneous confidence tubes (SCTs) and integrated squared difference tests (ISDTs) are proposed for simultaneous nonparametric inference of the latter models with asymptotically correct coverage probabilities and Type I error rates. Our methodologies are shown to possess certain asymptotically optimal properties. Furthermore, we propose an information criterion that performs consistent model selection for nonparametric quantile regression models of nonstationary time series. For implementation, a wild bootstrap procedure is proposed, which is shown to be robust to the dependent and nonstationary data structure. Our method is applied to studying the asymmetric and time-varying dynamic structures of the U.S. unemployment rate since the 1940s. Supplementary materials for this article are available online.

