Search results: 25 items found.
1.
The benefits of adjusting for baseline covariates are not as straightforward with repeated binary responses as with continuous response variables. Therefore, in this study, we compared different methods for analyzing repeated binary data through simulations when the outcome at the study endpoint is of interest. The methods compared included the chi-square test, Fisher's exact test, covariate-adjusted/unadjusted logistic regression (Adj.logit/Unadj.logit), covariate-adjusted/unadjusted generalized estimating equations (Adj.GEE/Unadj.GEE), and covariate-adjusted/unadjusted generalized linear mixed models (Adj.GLMM/Unadj.GLMM). All of these methods preserved the type I error close to the nominal level. Covariate-adjusted methods improved power compared with the unadjusted methods because of larger treatment effect estimates, especially when the correlation between the baseline and the outcome was strong, even though there was an apparent increase in standard errors. Results of the chi-square test were identical to those of the unadjusted logistic regression. Fisher's exact test was the most conservative regarding the type I error rate and also had the lowest power. Without missing data, there was no gain in using a repeated-measures approach over a simple logistic regression at the final time point. Analysis of results from five phase III diabetes trials of the same compound was consistent with the simulation findings. Therefore, covariate-adjusted analysis is recommended for repeated binary data when the study endpoint is of interest. Copyright © 2015 John Wiley & Sons, Ltd.
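The type I error and power behavior described above can be illustrated with a small Monte Carlo sketch for the simplest of the compared methods, the chi-square test at the final visit (a minimal illustration, not the authors' simulation design; the sample sizes and response rates below are assumed):

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def chi2_reject(n_per_arm=200, p_control=0.30, p_treat=0.30, alpha=0.05):
    """One simulated two-arm trial: chi-square test of response rates."""
    control = rng.binomial(1, p_control, n_per_arm)
    treat = rng.binomial(1, p_treat, n_per_arm)
    table = np.array([[control.sum(), n_per_arm - control.sum()],
                      [treat.sum(),   n_per_arm - treat.sum()]])
    _, p, _, _ = chi2_contingency(table, correction=False)
    return p < alpha

# Type I error: both arms share the same response rate (null is true).
type1 = np.mean([chi2_reject() for _ in range(2000)])

# Power under an assumed 15-point treatment effect.
power = np.mean([chi2_reject(p_treat=0.45) for _ in range(2000)])
```

With these settings the empirical rejection rate under the null stays close to the nominal 5% level, in line with the abstract's statement that all compared methods preserved the type I error.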
2.
Non‐likelihood‐based methods for repeated measures analysis of binary data in clinical trials can result in biased estimates of treatment effects and associated standard errors when the dropout process is not completely at random. We tested the utility of a multiple imputation approach in reducing these biases. Simulations were used to compare performance of multiple imputation with generalized estimating equations and restricted pseudo‐likelihood in five representative clinical trial profiles for estimating (a) overall treatment effects and (b) treatment differences at the last scheduled visit. In clinical trials with moderate to high (40–60%) dropout rates with dropouts missing at random, multiple imputation led to less biased and more precise estimates of treatment differences for binary outcomes based on underlying continuous scores. Copyright © 2005 John Wiley & Sons, Ltd.
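The combination step behind any multiple-imputation analysis is Rubin's rules: average the per-imputation estimates, and add the within- and between-imputation variances. A generic sketch (the numbers are illustrative, not from the trials studied):

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine m per-imputation estimates and variances via Rubin's rules."""
    m = len(estimates)
    q_bar = np.mean(estimates)          # pooled point estimate
    w = np.mean(variances)              # within-imputation variance
    b = np.var(estimates, ddof=1)       # between-imputation variance
    t = w + (1 + 1 / m) * b             # total variance
    return q_bar, t

# Illustrative: m = 3 imputed data sets, each yielding an estimate and variance.
est, var = rubin_combine([0.30, 0.34, 0.32], [0.004, 0.004, 0.004])
```

The between-imputation term is what propagates the uncertainty due to the missing data into the final standard error.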
3.
In this article, we develop a model to study treatment, period, carryover, and other applicable effects in a crossover design with a time-to-event response variable. Because time-to-event outcomes on different treatment regimens within the crossover design are correlated for an individual, we adopt a proportional hazards frailty model. If the frailty is assumed to have a gamma distribution, and the hazard rates are piecewise constant, then the likelihood function can be determined via closed-form expressions. We illustrate the methodology via an application to a data set from an asthma clinical trial and run simulations that investigate sensitivity of the model to data generated from different distributions.
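The closed form arises because a gamma frailty with unit mean and variance θ integrates out analytically: the marginal survival is (1 + θH(t))^(-1/θ) for cumulative hazard H(t), and with piecewise-constant hazards H(t) is a simple sum. A minimal numeric sketch (interval boundaries and rates assumed for illustration):

```python
import numpy as np

def cum_hazard(t, breaks, rates):
    """H(t) for a piecewise-constant hazard; breaks = [0, t1, ..., inf],
    rates[k] applies on [breaks[k], breaks[k+1])."""
    H = 0.0
    for lo, hi, lam in zip(breaks[:-1], breaks[1:], rates):
        H += lam * max(0.0, min(t, hi) - lo)
    return H

def marginal_survival(t, breaks, rates, theta):
    """Survival after integrating out a gamma frailty (mean 1, variance theta)."""
    H = cum_hazard(t, breaks, rates)
    if theta == 0.0:
        return np.exp(-H)                    # no-frailty limit
    return (1.0 + theta * H) ** (-1.0 / theta)

breaks = [0.0, 1.0, 2.0, np.inf]
rates = [0.5, 1.0, 0.2]
```

As θ → 0 the expression recovers the ordinary proportional hazards survival exp(-H(t)).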
4.
We study mechanism design in dynamic quasilinear environments where private information arrives over time and decisions are made over multiple periods. We make three contributions. First, we provide a necessary condition for incentive compatibility that takes the form of an envelope formula for the derivative of an agent's equilibrium expected payoff with respect to his current type. It combines the familiar marginal effect of types on payoffs with novel marginal effects of the current type on future ones that are captured by “impulse response functions.” The formula yields an expression for dynamic virtual surplus that is instrumental to the design of optimal mechanisms and to the study of distortions under such mechanisms. Second, we characterize the transfers that satisfy the envelope formula and establish a sense in which they are pinned down by the allocation rule (“revenue equivalence”). Third, we characterize perfect Bayesian equilibrium‐implementable allocation rules in Markov environments, which yields tractable sufficient conditions that facilitate novel applications. We illustrate the results by applying them to the design of optimal mechanisms for the sale of experience goods (“bandit auctions”).
5.
A sorting-and-measuring machine (SMM) measures and sorts (classifies) on-line produced items into several groups according to their size. The measuring devices of the SMM perceive the actual item size z with a random error ε and classify the item as being smaller than b iff z + ε < b. Here ε is a normal zero-mean r.v. with unknown standard deviation σ, which is the main parameter characterizing the precision and technical condition of an SMM. The paper gives the following method of estimating σ. N0 items are measured, and N1 of them are recognized by the SMM as belonging to the group a < z ≤ b. These N1 items are sorted again and N2 of them return to this group; these are sorted again, and so on. The estimation of σ is based on the ratio statistics Nm/Nn. Moments of the ratio statistics Nm/Nn and their distributional properties are investigated. It turns out that the expected value of Nm/Nn depends almost linearly on σ, which allows us to construct 'almost' unbiased estimators of the type σ̂mn = A·Nm/Nn + B with good properties, including robustness with respect to the distribution of item size. Convex combinations of the σ̂mn statistics are considered to obtain an estimator with minimal variance.
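The re-sorting scheme is straightforward to simulate. The sketch below (item-size distribution, group limits, and sample size all assumed for illustration) shows why the ratio N2/N1 is informative about σ: a noisier measuring device ejects more of the borderline items on each re-sort, so the ratio falls as σ grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def sort_counts(sigma, a=-1.0, b=1.0, n_items=200_000, rounds=3):
    """Repeatedly re-sort the group (a, b] and record the surviving counts."""
    z = rng.normal(0.0, 2.0, n_items)          # true item sizes (assumed distribution)
    counts = []
    for _ in range(rounds):
        eps = rng.normal(0.0, sigma, z.size)   # fresh measurement error each pass
        z = z[(a < z + eps) & (z + eps <= b)]  # items the machine assigns to (a, b]
        counts.append(z.size)
    return counts

n1, n2, n3 = sort_counts(sigma=0.3)   # precise machine
m1, m2, m3 = sort_counts(sigma=0.9)   # noisy machine
```

Comparing n2/n1 with m2/m1 shows the monotone dependence on σ that the paper's linear estimators σ̂mn = A·Nm/Nn + B exploit.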
6.
Spatial statistics has a rich tradition in the earth, economic, and epidemiological sciences and has the potential to affect the study of couples as well. When applied to couple data, spatial statistics can model within- and between-couple differences with results that are readily accessible to researchers and clinicians. This article offers a primer on using spatial statistics as a methodological tool for analyzing dyadic data. It introduces spatial approaches, reviews the data structure required for spatial analysis and the available software, and presents examples of data output.
7.
We consider how firms develop internal corporate governance policies based on external nationwide standards. Flexibility in interpreting external standards allows firms to develop internal regulations focused on governance procedures that are only loosely coupled with expected governance outcomes. Our results demonstrate that firms tend to adopt less restrictive policies than what is recommended by the national standard and are more willing to adopt policies regulating governance procedures than policies regulating governance decisions. We also argue that the process of translating external standards into internal guidelines is affected by firm-specific characteristics, and we explore factors that determine the extent to which firms switch the focus of internal policies from regulating governance decisions to regulating governance procedures.
8.
Statistical inference in the wavelet domain remains a vibrant area of contemporary statistical research because of the desirable properties of wavelet representations and the need of the scientific community to process, explore, and summarize massive data sets. Prime examples are biomedical, geophysical, and internet-related data. We propose two new approaches to wavelet shrinkage/thresholding.

In the spirit of Efron and Tibshirani's recent work on the local false discovery rate, we propose the Bayesian Local False Discovery Rate (BLFDR), where the underlying model on wavelet coefficients does not assume known variances. This approach to wavelet shrinkage is shown to be connected with shrinkage based on Bayes factors. The second proposal, the Bayesian False Discovery Rate (BaFDR), is based on ordering the posterior probabilities, in Bayesian testing of multiple hypotheses, that the true wavelet coefficients are null.

We demonstrate that both approaches result in competitive shrinkage methods by contrasting them with some popular shrinkage techniques.
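For context, the kind of classical baseline such Bayesian rules are contrasted with is coefficient thresholding. Below is a sketch of one-level Haar coefficients hard-thresholded at the universal level (a generic baseline, not the BLFDR/BaFDR procedures; the test signal, noise level, and known-σ threshold are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_forward(x):
    """One-level orthonormal Haar transform: approximation and detail parts."""
    pairs = x.reshape(-1, 2)
    return ((pairs[:, 0] + pairs[:, 1]) / np.sqrt(2),
            (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))

def haar_inverse(approx, detail):
    """Invert the one-level Haar transform (perfect reconstruction)."""
    out = np.empty(2 * approx.size)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

n = 1024
t = np.linspace(0.0, 1.0, n)
signal = np.where(t < 0.5, 1.0, -1.0)          # piecewise-constant test signal
noisy = signal + rng.normal(0.0, 0.3, n)

approx, detail = haar_forward(noisy)
thresh = 0.3 * np.sqrt(2 * np.log(n))          # universal threshold, sigma assumed known
detail[np.abs(detail) < thresh] = 0.0          # hard thresholding of detail coefficients
denoised = haar_inverse(approx, detail)
```

Because the true detail coefficients of a piecewise-constant signal are sparse, killing the small details removes mostly noise, and the denoised estimate has lower mean squared error than the raw data.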
9.
This paper constructs an efficient, budget‐balanced, Bayesian incentive‐compatible mechanism for a general dynamic environment with quasilinear payoffs in which agents observe private information and decisions are made over countably many periods. First, under the assumption of “private values” (other agents' private information does not directly affect an agent's payoffs), we construct an efficient, ex post incentive‐compatible mechanism, which is not budget‐balanced. Second, under the assumption of “independent types” (the distribution of each agent's private information is not directly affected by other agents' private information), we show how the budget can be balanced without compromising agents' incentives. Finally, we show that the mechanism can be made self‐enforcing when agents are sufficiently patient and the induced stochastic process over types is an ergodic finite Markov chain.
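The budget-balancing idea has a classical static antecedent in the AGV (expected-externality) mechanism: each agent receives the expected externality of his report and pays the other's, so transfers cancel at every type profile. A toy two-agent sketch checking that ex post budget balance (the payoff matrices and type distribution are assumed for illustration; incentive properties are not demonstrated here):

```python
import numpy as np

# Two agents, two equally likely types each; v[i][type_i] gives agent i's
# values for the two possible collective decisions.
v = {0: {0: np.array([4.0, 0.0]), 1: np.array([1.0, 3.0])},
     1: {0: np.array([0.0, 2.0]), 1: np.array([5.0, 1.0])}}
types = [0, 1]

def efficient_decision(t0, t1):
    """Decision maximizing the sum of the two agents' values."""
    return int(np.argmax(v[0][t0] + v[1][t1]))

def expected_externality(i, ti):
    """Expectation over the other agent's type of the other's value
    at the efficient decision, given agent i's report ti."""
    j = 1 - i
    vals = []
    for tj in types:
        profile = (ti, tj) if i == 0 else (tj, ti)
        vals.append(v[j][tj][efficient_decision(*profile)])
    return float(np.mean(vals))

def agv_transfers(t0, t1):
    """Each agent receives his expected externality and pays the other's."""
    h = [expected_externality(0, t0), expected_externality(1, t1)]
    return [h[0] - h[1], h[1] - h[0]]
```

By construction the two transfers are negatives of each other, so the budget balances exactly for every realized type profile.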
10.
We performed a simulation study comparing the statistical properties of the estimated log odds ratio from propensity scores analyses of a binary response variable, in which missing baseline data had been imputed using a simple imputation scheme (Treatment Mean Imputation), compared with three ways of performing multiple imputation (MI) and with a Complete Case analysis. MI that included treatment (treated/untreated) and outcome (for our analyses, outcome was adverse event [yes/no]) in the imputer's model had the best statistical properties of the imputation schemes we studied. MI is feasible to use in situations where one has just a few outcomes to analyze. We also found that Treatment Mean Imputation performed quite well and is a reasonable alternative to MI in situations where it is not feasible to use MI. Treatment Mean Imputation performed better than MI methods that did not include both the treatment and outcome in the imputer's model. Copyright © 2009 John Wiley & Sons, Ltd.
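Treatment Mean Imputation is simple to state: a missing baseline value is replaced by the mean of the observed values in the same treatment arm. A minimal sketch (the variable names and toy data are illustrative):

```python
import numpy as np

def treatment_mean_impute(x, treated):
    """Fill missing baseline values (NaN) with the mean of the observed
    values in the same treatment arm (0 = control, 1 = treated)."""
    x = np.asarray(x, dtype=float).copy()
    treated = np.asarray(treated)
    for arm in (0, 1):
        mask = treated == arm
        arm_mean = np.nanmean(x[mask])        # mean over observed values in this arm
        x[mask & np.isnan(x)] = arm_mean
    return x

x = np.array([1.0, np.nan, 3.0, np.nan, 10.0, 20.0])
treated = np.array([0, 0, 0, 1, 1, 1])
filled = treatment_mean_impute(x, treated)
```

The imputed covariate can then be carried into the propensity score model as usual; unlike MI, this scheme produces a single completed data set and ignores imputation uncertainty.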