30 search results in total.
1.
Simon's two-stage design is the most commonly applied multi-stage design in phase IIA clinical trials. It combines the sample sizes at the two stages so as to minimize either the expected or the maximum sample size. When uncertainty about pre-trial beliefs on the expected or desired response rate is high, a Bayesian alternative should be considered, since it handles the entire distribution of the parameter of interest in a more natural way. In this setting, a crucial issue is how to construct, from the available summaries, a distribution to use as a clinical prior in a Bayesian design. In this work, we explore Bayesian counterparts of Simon's two-stage design based on the predictive version of the single threshold design. This design requires specifying two prior distributions: the analysis prior, used to compute the posterior probabilities, and the design prior, used to obtain the prior predictive distribution. While the usual approach is to build beta priors for a conjugate analysis, we derive both the analysis and the design distributions as linear combinations of B-splines. The motivating example is the planning of a phase IIA two-stage trial of an anti-HER2 DNA vaccine in breast cancer, where initial beliefs formed from elicited expert opinion and historical data showed a high level of uncertainty. The impact of the different priors is evaluated in a sample size determination problem.
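The posterior computation underlying the single threshold design is easiest to see in the standard conjugate case. The sketch below uses beta priors rather than the B-spline mixtures the paper develops, and the trial numbers are hypothetical:

```python
from scipy import stats

def posterior_prob_exceeds(x, n, p0, a=1.0, b=1.0):
    """P(response rate > p0 | x responses in n patients) under a
    Beta(a, b) analysis prior: by conjugacy the posterior is
    Beta(a + x, b + n - x), so this is one minus its CDF at p0."""
    return 1.0 - stats.beta.cdf(p0, a + x, b + n - x)

# Hypothetical interim data: 9 responses in 20 patients, null rate p0 = 0.30.
prob = posterior_prob_exceeds(9, 20, 0.30)
```

In the predictive two-stage version, this posterior probability is averaged over the prior predictive distribution of stage-one outcomes, which is where the design prior enters.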
2.
Nowadays the majority of human beings live in urban ecosystems, a proportion expected to keep increasing. With the growing importance of urban rat-associated issues (e.g. damage to urban infrastructure, the cost of rat-control programs, rat-associated health risks), it is becoming indispensable to fill the identified knowledge gaps on the urban brown rat regarding, among other things, its density, home range, genetic structure, and infectious status. In this context, live-trapping is a crucial prerequisite to any scientific investigation. This paper assesses the main constraints and challenges of urban fieldwork and describes the major steps to consider when planning research on urban rats. The primary challenges are: i) characterizing the urban experimental unit; ii) choosing a trapping design: live-trapping in a capture-mark-recapture design, combined with modern statistics, is highly recommended for answering ecological questions (although these methods, mostly developed in natural ecosystems, need to be adapted to the urban field); iii) the ethical considerations regarding animal welfare and field-worker safety; iv) building mutually beneficial collaborations with city stakeholders, pest-control professionals, and citizens. Emphasis must be put on communication with the public and education of field-workers. One major need of modern urban rat research is a peer-validated field methodology that allows reproducibility, repeatability, and inference from urban field studies, enabling researchers to answer long-standing key questions about urban rat ecology.
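The capture-mark-recapture machinery the authors recommend ranges from simple closed-population estimators to full likelihood models. As a minimal illustration (not from the paper; the trapping numbers are invented), the bias-corrected Lincoln-Petersen (Chapman) estimator from two trapping sessions:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population size.

    n1: animals caught and marked in session 1
    n2: animals caught in session 2
    m2: marked animals among the session-2 catch
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical sessions: 50 rats marked, 40 caught later, 10 of them marked.
N_hat = chapman_estimate(50, 40, 10)
```

Real urban studies would layer heterogeneity in capture probability and open-population dynamics on top of this, which is exactly why the authors call for adapted designs and modern statistics.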
3.
Highly skewed outcome distributions observed across clusters are common in medical research. The aim of this paper is to understand how regression models widely used to accommodate asymmetry fit clustered data under heteroscedasticity. In a simulation study, we provide evidence on the performance of the Gamma generalized linear mixed model (GLMM) and the log-linear mixed-effects (log-LME) model under a variety of data-generating mechanisms. Two case studies from the health expenditures literature, a randomized clinical trial on the cost of strategies after myocardial infarction and the European Pressure Ulcer Advisory Panel hospital prevalence survey of pressure ulcers, are analyzed and discussed. According to the simulation results, the log-LME model for a Gamma response can yield estimates biased by as much as 10% of the true value, depending on the error variance. In the Gamma GLMM, the bias never exceeds 1%, regardless of the extent of heteroscedasticity, and the confidence intervals perform as nominally stated under most conditions. The Gamma GLMM with a log link appears more robust to both Gamma and log-normal generating mechanisms than the log-LME model.
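The bias of the log-LME approach has a simple source: for a Gamma outcome, back-transforming the log-scale mean underestimates the true mean, because E[log Y] = log(mu) + digamma(k) - log(k) < log(mu). A minimal single-sample sketch (not the paper's clustered simulation design; shape k and mean mu are illustrative):

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
k, mu = 2.0, 100.0                      # Gamma shape and mean
y = rng.gamma(k, mu / k, size=200_000)  # draws with E[Y] = mu

# Back-transformed mean from a log-scale model (what a naive log-LME reports):
naive = np.exp(np.log(y).mean())

# Analytic value: exp(E[log Y]) = mu * exp(digamma(k) - log(k)) < mu,
# so the back-transformed estimate is biased downward.
analytic = mu * np.exp(digamma(k) - np.log(k))
```

With k = 2 the downward bias factor is roughly 0.76, far larger than the 10% worst case in the paper; the GLMM avoids it by modeling log E[Y] directly rather than E[log Y].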
4.
In this article I argue that quality ratings can be conceptualized as reflecting the extent to which departments are visible to outside raters. Using cross-sectional as well as panel data on sociology departments from the two latest surveys of graduate education published by the National Research Council, in 1982 and 1995, I explain departmental quality ratings in terms of measures that reflect a department's visibility, such as its faculty productivity, size, age, and location at an elite-status university. While the cross-sectional and longitudinal models tell different stories, the two are not incompatible. Specifically, both models suggest strong effects of departmental size and age. By comparison, the estimated effects of faculty productivity and location at an elite-status university are weaker and significant only in the cross-sectional model.

An earlier version of this paper was presented at the annual meeting of the American Sociological Association, New York, NY, August 1996. This research was conducted under a FLAS fellowship from the U.S. Department of Education. I wish to thank Lowell Hargens for helpful comments and advice.
5.
Building on recent results of Hu and Rosenberger [2003. Optimality, variability, power: evaluating response-adaptive randomization procedures for treatment comparisons. J. Amer. Statist. Assoc. 98, 671–678] and Chen [2006. The power of Efron's biased coin design. J. Statist. Plann. Inference 136, 1824–1835] on the relationship between sequential randomized designs and the power of the usual statistical procedures for testing the equivalence of two competing treatments, the aim of this paper is to provide theoretical proofs of Chen's [2006] numerical results. Furthermore, we prove that the Adjustable Biased Coin Design [Baldi Antognini, A., Giovagnoli, A., 2004. A new "biased coin design" for the sequential allocation of two treatments. J. Roy. Statist. Soc. Ser. C 53, 651–664] is uniformly more powerful than the other "coin" designs proposed in the literature, for any sample size.
6.
Efron's biased coin design is a well-known randomization technique that helps to neutralize selection bias in sequential clinical trials comparing treatments, while keeping the experiment fairly balanced. Several researchers have proposed extensions of the biased coin design, focusing mainly on their large-sample properties. We modify Efron's procedure by introducing an adjustable biased coin design that is more flexible than the original. We compare it with other existing coin designs; in terms of balance and lack of predictability, its small-sample performance in many cases improves on the other sequential randomized allocation procedures.
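A sketch of the basic rule being modified (Efron's fixed-probability coin; the treatment labels and p = 2/3 are the classic textbook choices, not code from the paper):

```python
import random

def efron_biased_coin(n, p=2/3, seed=0):
    """Simulate n allocations under Efron's biased coin.

    When the two arms are balanced, assign A with probability 1/2;
    otherwise assign the under-represented arm with probability p
    (Efron's classic choice is p = 2/3)."""
    rng = random.Random(seed)
    assignments, imbalance = [], 0      # imbalance = #A - #B
    for _ in range(n):
        if imbalance == 0:
            prob_a = 0.5
        elif imbalance < 0:
            prob_a = p                  # A is behind: favor A
        else:
            prob_a = 1 - p              # A is ahead: favor B
        a = rng.random() < prob_a
        assignments.append('A' if a else 'B')
        imbalance += 1 if a else -1
    return assignments
```

The adjustable design replaces the fixed p with a function of the current imbalance, pushing harder toward balance the larger |#A - #B| becomes while staying close to unrestricted randomization near balance.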
7.
Concerns have been raised about the appropriateness of asking about violence victimization in telephone interviews and whether such questions increase respondents' distress or risk of harm. However, no large-scale studies have evaluated the impact of asking such questions during a telephone interview. This study explored respondents' reactions to questions about violence in two large, recently completed telephone surveys. After respondents were asked about violence, they were asked whether they thought surveys should ask such questions and whether the questions made them upset or afraid. In both surveys, the majority of respondents (regardless of victimization history) were willing to answer questions about violence and were neither upset nor afraid because of them. More than 92% of respondents thought such questions should be asked. These results challenge commonly held beliefs and assumptions and offer some assurance to those concerned with the ethical collection of data on violent victimization.
8.
9.
This paper reviews research on product design in the broad domain of business studies. It highlights established and emerging perspectives and lines of inquiry and organizes them around three core areas corresponding to different stages of the design process (design activities, design choices, design results). Avenues for further research at the intersection of these bodies of work are identified and discussed, and the authors argue that management scholars possess conceptual and methodological tools suited to enriching research on design and to effectively pursuing lines of investigation only partially addressed by other communities, such as the construction and deployment of design capabilities or the organizational and institutional context of design activities.
10.
The concept of location depth was introduced as a way to extend the univariate notion of ranking to a bivariate configuration of data points. It has been used successfully for robust estimation, hypothesis testing, and graphical display. The depth contours form a collection of nested polygons, and the center of the deepest contour is called the Tukey median. The only previously available implementations of the depth contours and the Tukey median are slow, which limits their usefulness. In this paper we describe an optimal algorithm that computes all bivariate depth contours in O(n²) time and space, using a topological sweep of the dual arrangement of lines. Once these contours are known, the location depth of any point can be computed in O(log² n) time with no additional preprocessing, or in O(log n) time after O(n²) preprocessing. We provide fast implementations of these algorithms to allow their use in everyday statistical practice.
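For intuition, the location depth of a single point can be checked directly against the definition. The paper's contribution is the fast topological-sweep algorithm; the brute force below (with an epsilon rotation trick so points lying on a candidate boundary are resolved) is a much simpler O(n²)-per-query sketch:

```python
import numpy as np

def tukey_depth(theta, points):
    """Halfspace (Tukey) depth of theta in a 2-D point cloud: the smallest
    number of data points in any closed halfplane whose boundary passes
    through theta.  The minimizing halfplane can be rotated until its
    boundary nearly passes through a data point, so it suffices to test
    O(n) candidate normal directions, nudged by eps off the exact
    perpendiculars so boundary points fall cleanly on one side."""
    pts = np.asarray(points, dtype=float) - np.asarray(theta, dtype=float)
    ang = np.arctan2(pts[:, 1], pts[:, 0])
    eps = 1e-9
    cand = np.concatenate([ang + np.pi / 2 + eps, ang + np.pi / 2 - eps,
                           ang - np.pi / 2 + eps, ang - np.pi / 2 - eps])
    dirs = np.column_stack([np.cos(cand), np.sin(cand)])
    # for each candidate inner normal u, count points with u . p >= 0
    counts = (pts @ dirs.T >= 0).sum(axis=0)
    return int(counts.min())
```

The depth contour at level k is then the set of points with depth at least k, and the Tukey median sits at the center of the deepest nonempty contour.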
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号