41.
As the Gibbs sampler has become one of the standard tools in computing, burn-in is almost the default option. Because it takes a certain number of iterations for the initial distribution to reach stationarity, supporters of burn-in throw away an initial segment of the samples and argue that this practice ensures unbiasedness. Running-time analysis studies how many samples should be thrown away, essentially equating the number of iterations to stationarity with the number of initial samples to discard. However, many practitioners have found that burn-in wastes potentially useful samples and that the practice is inefficient, and thus unnecessary. For the example considered, a single chain without burn-in offers both efficiency and accuracy superior to multiple chains with burn-in. We show that the Gibbs sampler uses odds to generate samples. Because the correct odds are used from the onset of the iterative process, the observations generated by the Gibbs sampler are distributed identically to the target distribution; throwing away those valid samples is wasteful. When the chain of distributions and the trajectory (sample path) of the chain are considered on their separate merits, the disagreement can be settled. We advocate choosing the initial state carefully, but without burn-in, to quicken the formation of the stationary distribution.
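The "odds" argument can be seen in a toy case: for a 2x2 joint distribution of two binary variables, each Gibbs update draws from the exact conditional odds, so the draws target the correct distribution from the outset. A minimal single-chain sketch (the table `p` is hypothetical, not from the paper):

```python
import random

# Hypothetical 2x2 target joint distribution p[x][y] for binary (X, Y).
p = [[0.3, 0.2],
     [0.1, 0.4]]

def gibbs(n_iter, seed=0, x=0, y=0):
    """Single-chain Gibbs sampler: each coordinate is drawn from its exact
    conditional distribution (the correct conditional odds) at every step."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_iter):
        # Draw X | Y=y with P(X=1 | Y=y) = p[1][y] / (p[0][y] + p[1][y]).
        x = 1 if rng.random() < p[1][y] / (p[0][y] + p[1][y]) else 0
        # Draw Y | X=x with P(Y=1 | X=x) = p[x][1] / (p[x][0] + p[x][1]).
        y = 1 if rng.random() < p[x][1] / (p[x][0] + p[x][1]) else 0
        out.append((x, y))
    return out

samples = gibbs(200_000)
freq11 = sum(1 for s in samples if s == (1, 1)) / len(samples)
print(round(freq11, 2))  # close to the target cell p[1][1] = 0.4
```

With no samples discarded, the empirical cell frequencies already match the target closely; the carefully chosen (here arbitrary) initial state contributes negligible bias over a long run.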
42.
Understanding and modeling multivariate dependence structures that depend on direction is challenging but of interest to both theoretical and applied researchers. In this paper, we propose a characterization of tables generated by Bernoulli variables through uniformization of the marginals and refer to them as Q-type tables. The idea is similar to that of copulas: the approach exposes the dependence structure clearly by eliminating the effect of the marginals, which have nothing to do with the dependence structure. We define and study conditional and unconditional Q-type tables and provide various applications for them. The limitations of existing approaches, such as the Cochran–Mantel–Haenszel pooled odds ratio, are discussed, and a new approach that stems naturally from ours is introduced.
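The Cochran–Mantel–Haenszel pooled odds ratio mentioned above can be computed directly from stratified 2x2 counts. A minimal sketch, with made-up strata whose common within-stratum odds ratio is 2.25:

```python
def cmh_odds_ratio(tables):
    """Cochran-Mantel-Haenszel pooled odds ratio across 2x2 strata.
    Each table is (a, b, c, d) laid out as rows (exposed, unexposed)
    by columns (event, no event), with stratum size n = a+b+c+d."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two hypothetical strata, each with within-stratum odds ratio 2.25.
strata = [(9, 4, 4, 4), (18, 8, 8, 8)]
print(cmh_odds_ratio(strata))  # 2.25: pooling preserves a common odds ratio
```

When the stratum-specific odds ratios differ, the CMH estimate collapses them into a single weighted figure, which is one of the limitations the paper addresses.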
43.
Summary. Using standard correlation bounds, we show that in generalized estimating equations (GEEs) the so-called 'working correlation matrix' R(α) for analysing binary data cannot in general be the true correlation matrix of the data. Methods for estimating the correlation parameter in current GEE software for binary responses disregard these bounds. To show that the GEE applied to binary data has high efficiency, we use a multivariate binary model so that the covariance matrix from estimating-equation theory can be compared with the inverse Fisher information matrix. R(α) should instead be viewed as a weight matrix, and it should not be confused with the correlation matrix of the binary responses. We also make a comparison with more general weighted estimating equations by using a matrix Cauchy–Schwarz inequality. Our analysis leads to simple rules for the choice of α in an exchangeable or autoregressive AR(1) weight matrix R(α), based on the strength of dependence between the binary variables. An example is given to illustrate the assessment of dependence and the choice of α.
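The correlation bounds in question follow from the Fréchet bounds on the joint success probability of two Bernoulli variables: P(X=1, Y=1) must lie in [max(0, p1+p2-1), min(p1, p2)], which constrains the attainable correlation. A short sketch (the marginal probabilities are illustrative):

```python
import math

def binary_corr_bounds(p1, p2):
    """Attainable correlation range for Bernoulli(p1) and Bernoulli(p2),
    derived from the Frechet bounds on P(X=1, Y=1):
        corr = (p11 - p1*p2) / sqrt(p1*(1-p1)*p2*(1-p2)).
    A working correlation outside this range cannot be a true correlation."""
    sd = math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
    lo = (max(0.0, p1 + p2 - 1) - p1 * p2) / sd
    hi = (min(p1, p2) - p1 * p2) / sd
    return lo, hi

# With unequal marginals the bounds are well inside [-1, 1]:
lo, hi = binary_corr_bounds(0.1, 0.5)
print(round(lo, 3), round(hi, 3))  # -0.333 0.333
```

Here any exchangeable working value α with |α| > 1/3 could not be the true correlation of these two binary responses, which is the sense in which software estimates can violate the bounds.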
44.
In longitudinal studies, because repeated observations are made on the same individual, the response variables will usually be correlated. In analyzing such data, this dependence must be taken into account to avoid misleading inferences. The focus of this paper is to apply the logistic marginal model with Markovian dependence proposed by Azzalini [A. Azzalini, Logistic regression for autocorrelated data with application to repeated measures, Biometrika 81 (1994) 767–775] to study the influence of time-dependent covariates on the marginal distribution of the binary response in serially correlated binary data. We show how to construct the model so that the covariates relate only to the mean value of the process, independently of the association parameters. After formulating the proposed model for repeated-measures data, the same approach is applied to missing data. An application is provided to the 1984 diabetes mellitus data on registered patients at the Bangladesh Institute of Research and Rehabilitation in Diabetes, Endocrine and Metabolic Disorders (BIRDEM), using both time-stationary and time-varying covariates.
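Serially correlated binary data of the kind analysed here can be simulated with a stationary two-state Markov chain. The sketch below uses a simple (marginal probability, lag-1 correlation) parameterization rather than Azzalini's odds-ratio one, and the parameter values are illustrative:

```python
import random

def markov_binary(n, p, rho, seed=1):
    """Stationary binary Markov chain with marginal P(Y_t = 1) = p and
    lag-1 correlation rho (a simpler parameterization than Azzalini's
    odds-ratio association, used here only to illustrate serial dependence).
    Transition probabilities: P(1|1) = p + rho*(1-p), P(1|0) = p*(1-rho);
    one can check that p is the stationary probability of state 1."""
    rng = random.Random(seed)
    y = 1 if rng.random() < p else 0
    out = [y]
    for _ in range(n - 1):
        q = p + rho * (1 - p) if y == 1 else p * (1 - rho)
        y = 1 if rng.random() < q else 0
        out.append(y)
    return out

ys = markov_binary(100_000, p=0.3, rho=0.4)
print(round(sum(ys) / len(ys), 2))  # near the target marginal 0.3
```

The point mirrors the abstract's construction: the marginal mean (here p) is governed separately from the association parameter (here rho), so covariates could be attached to the mean alone.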
45.
Clinical trials often assess whether or not subjects have a disease at predetermined follow-up times. When the response of interest is a recurrent event, a subject may respond at multiple follow-up times over the course of the study. Alternatively, when the response of interest is an irreversible event, a subject is typically observed only until the time at which the response is first detected. However, some recent studies have recorded subjects' responses at follow-up times after an irreversible event is first observed. This study compares how existing models perform when failure-time data are treated as recurrent events.
47.
Summary. The standard cumulative sum (CUSUM), risk-adjusted CUSUM and Shiryayev–Roberts schemes for monitoring surgical performance are compared. We find that the two CUSUM schemes are comparable in run-length performance except when there is high heterogeneity of surgical risks, in which case the risk-adjusted CUSUM scheme is more sensitive in detecting a shift in surgical performance. The Shiryayev–Roberts scheme is found to be less sensitive than the CUSUM schemes in detecting a deterioration in surgical performance. Using the Markov chain method, the exact average run length of a standard CUSUM scheme can be computed, whereas the average run length of a risk-adjusted CUSUM scheme is approximated. For a risk-adjusted CUSUM scheme, the accuracy of the average run length depends on the fineness of the discretization of CUSUM values, which relies on the chart limit, the shift to be detected optimally and the in-control surgical risk distribution. A sensitivity analysis shows that the risk-adjusted CUSUM and Shiryayev–Roberts schemes still perform moderately well in detecting a deterioration and an improvement in surgical performance, respectively, even when the in-control surgical risk distribution is misspecified. In general, the run-length performance of the Shiryayev–Roberts scheme is comparatively less sensitive to a misspecification of the in-control surgical risk distribution.
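The Markov chain method for an exact average run length (ARL) can be illustrated on an integer-valued CUSUM, where no discretization error arises. The score distribution (+1 with probability p, -1 otherwise) and the chart limit h below are illustrative, not the risk-adjusted scores of the paper:

```python
def cusum_arl(h, p):
    """Exact in-control ARL of the integer CUSUM S_t = max(0, S_{t-1} + W_t),
    with W_t = +1 w.p. p and -1 w.p. 1-p, signalling when S_t >= h.
    Markov chain method: solve L(s) = 1 + p*L(s+1) + (1-p)*L(max(0, s-1))
    over transient states s = 0..h-1, with L(h) = 0, i.e. (I - Q) L = 1."""
    n, q = h, 1 - p
    A = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for s in range(n):
        A[s][s] += 1.0
        if s + 1 < n:
            A[s][s + 1] -= p        # step up without signalling
        A[s][max(0, s - 1)] -= q    # step down, reflected at zero
    # Gaussian elimination with partial pivoting (pure Python, small h).
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    L = [0.0] * n
    for i in range(n - 1, -1, -1):
        L[i] = (b[i] - sum(A[i][c] * L[c] for c in range(i + 1, n))) / A[i][i]
    return L[0]  # ARL starting from S_0 = 0

print(cusum_arl(h=3, p=0.2))  # 135.0, matching the hand solution of the system
```

For a risk-adjusted CUSUM the scores are continuous, so the state space must be discretized and the same linear system yields only an approximation, with accuracy depending on the fineness of the grid, exactly as the abstract notes.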
48.
With the regions fixed, data on higher-education resources are compiled and a multinomial logistic model is built to analyze the relative odds of each structural type and the optimal distribution of the structures. The multidimensional odds ratios of the regression are then analyzed to determine the direction and degree of adjustment among the discrete rank states. An empirical analysis of the higher-education regions of Heilongjiang Province shows that, judged by the proportions of teaching-oriented, teaching–research, research–teaching and research-oriented institutions, the utility of the province's higher education remains to be further exploited. Two parameters, the multidimensional odds ratios and the influence of the explanatory variables on their change, play a key role in optimizing the structure of regional higher-education resources.
49.
Adjustment for covariates is a time-honored tool in statistical analysis and is often implemented by including the covariates one intends to adjust for as additional predictors in a model. This adjustment often does not work well when the underlying model is misspecified. We consider here the situation where we compare a response between two groups. This response may depend on a covariate whose distribution differs between the two groups one intends to compare. This creates the potential that observed differences are due to differences in covariate levels rather than "genuine" population differences that cannot be explained by covariate differences. We propose a bootstrap-based adjustment method. Bootstrap weights are constructed with the aim of aligning the bootstrap-weighted empirical distributions of the covariate between the two groups. More generally, the proposed weighted-bootstrap algorithm can be used to align or match the values of an explanatory variable as closely as desired to those of a given target distribution. We illustrate the proposed bootstrap adjustment method in simulations and in the analysis of data on the fecundity of historical cohorts of French-Canadian women.
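The alignment idea can be sketched for a discrete covariate: weight each observation in one group by the ratio of the target group's covariate frequency to its own, then resample with those weights. A minimal sketch with made-up samples (not the paper's algorithm in full, which also covers continuous covariates):

```python
import random
from collections import Counter

def alignment_weights(x_source, x_target):
    """Frequency-ratio weights that align the empirical covariate
    distribution of the source group with that of the target group
    (discrete covariate; a minimal sketch of the weighted-bootstrap idea)."""
    f_s = Counter(x_source)
    f_t = Counter(x_target)
    n_s, n_t = len(x_source), len(x_target)
    return [(f_t[v] / n_t) / (f_s[v] / n_s) for v in x_source]

def weighted_bootstrap(x, w, n, seed=2):
    """Resample n values from x with probabilities proportional to w."""
    rng = random.Random(seed)
    return rng.choices(x, weights=w, k=n)

# Hypothetical covariate samples: group A over-represents value 0.
x_a = [0] * 70 + [1] * 30
x_b = [0] * 40 + [1] * 60
w = alignment_weights(x_a, x_b)
boot = weighted_bootstrap(x_a, w, 100_000)
print(round(sum(boot) / len(boot), 2))  # near the target frequency 0.60
```

After reweighting, bootstrap draws from group A reproduce group B's covariate distribution, so remaining group differences in the response are no longer attributable to this covariate.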
50.
Odds ratios are frequently used to describe the relationship between a binary treatment or exposure and a binary outcome. An odds ratio can be interpreted as a causal effect or as a measure of association, depending on whether it involves potential outcomes or the actual outcome. An odds ratio can also be characterized as marginal or conditional, depending on whether it involves conditioning on covariates. This article proposes a method for estimating a marginal causal odds ratio subject to confounding. The proposed method is based on a logistic regression model relating the outcome to the treatment indicator and potential confounders. Simulation results show that the proposed method performs reasonably well in moderate-sized samples and may even offer an efficiency gain over the direct method based on the sample odds ratio in the absence of confounding. The method is illustrated with a real example concerning coronary heart disease.
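The marginal-versus-conditional distinction can be made concrete by standardization (g-computation) over a discrete confounder: estimate P(Y=1 | A=a, X=x) within strata, average over the overall distribution of X, then form the odds ratio. The paper's method fits a logistic regression instead; the nonparametric version below, on a hypothetical dataset, illustrates the same target quantity:

```python
from collections import Counter

def marginal_causal_or(data):
    """Standardization (g-computation) estimate of the marginal causal odds
    ratio with a discrete confounder. Each record is (a, x, y): treatment,
    confounder level, binary outcome. Stratum-specific outcome probabilities
    are averaged over the overall confounder distribution."""
    n = len(data)
    px = Counter(x for a, x, y in data)
    def p_y(a_set):
        total = 0.0
        for x, nx in px.items():
            cell = [y for a, xx, y in data if a == a_set and xx == x]
            total += (sum(cell) / len(cell)) * nx / n
        return total
    p1, p0 = p_y(1), p_y(0)
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Hypothetical balanced data: within X=0, P(Y=1) is 0.6 vs 0.3 by treatment;
# within X=1, it is 0.8 vs 0.5; X takes each level half the time.
data = ([(1, 0, 1)] * 6 + [(1, 0, 0)] * 4 + [(0, 0, 1)] * 3 + [(0, 0, 0)] * 7 +
        [(1, 1, 1)] * 8 + [(1, 1, 0)] * 2 + [(0, 1, 1)] * 5 + [(0, 1, 0)] * 5)
print(round(marginal_causal_or(data), 2))  # 3.5 = odds(0.7) / odds(0.4)
```

Note that 3.5 differs from both conditional odds ratios (3.5 at X=0, 4.0 at X=1 would differ in general), which is exactly the non-collapsibility that makes the marginal causal odds ratio a distinct estimand.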