21.
Sample size estimation for comparing the rates of change in two-arm repeated measurements has been investigated by many investigators. In contrast, the literature has paid relatively less attention to sample size estimation for studies with multi-arm repeated measurements, where the design and data analysis can be more complex than in two-arm trials. For continuous outcomes, Jung and Ahn (2004, Biometrical Journal 46(5):554-564) and Zhang and Ahn (2013, Computational Statistics and Data Analysis 58:283-291) have presented sample size formulas to compare the rates of change and time-averaged responses in multi-arm trials, using the generalized estimating equation (GEE) approach. To our knowledge, there has been no corresponding development for multi-arm trials with count outcomes. We present a sample size formula for comparing the rates of change in multi-arm repeated count outcomes using the GEE approach that accommodates various correlation structures, missing data patterns, and unbalanced designs. We conduct simulation studies to assess the performance of the proposed sample size formula under a wide range of design configurations. Simulation results suggest that the empirical type I error and power are maintained close to their nominal levels. The proposed method is illustrated using an epileptic clinical trial example.
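The paper's closed-form GEE formula is not reproduced here, but a minimal sketch of the kind of calculation involved is shown below, assuming a simplified two-group comparison of slopes with a user-supplied variance for a single subject's estimated slope; the function name and the numerical inputs are illustrative only, not the paper's formula.

```python
from scipy.stats import norm

def n_per_group_slope(delta, var_slope, alpha=0.05, power=0.8):
    """Approximate per-group sample size for detecting a difference `delta`
    between two group slopes, given `var_slope`, the (assumed known) variance
    of a single subject's estimated slope. A simplified two-group
    normal-approximation sketch, not the paper's k-group GEE formula."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_b = norm.ppf(power)           # target power
    return 2 * (z_a + z_b) ** 2 * var_slope / delta ** 2

# Example: detect a slope difference of 0.3 when the per-subject slope
# variance is 1.5 (hypothetical values).
print(round(n_per_group_slope(delta=0.3, var_slope=1.5)))
```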
22.
In oncology, it may not always be possible to evaluate the efficacy of new medicines in placebo-controlled trials. Furthermore, while some newer, biologically targeted anti-cancer treatments may be expected to deliver therapeutic benefit in terms of better tolerability or improved symptom control, they may not always be expected to provide increased efficacy relative to existing therapies. This naturally leads to the use of active-control, non-inferiority trials to evaluate such treatments. In recent evaluations of anti-cancer treatments, the non-inferiority margin has often been defined in terms of demonstrating that at least 50% of the active control effect has been retained by the new drug, using methods such as those described by Rothmann et al. (Statistics in Medicine 2003; 22:239-264) and Wang and Hung (Controlled Clinical Trials 2003; 24:147-155). However, this approach can lead to prohibitively large clinical trials, and it tends to dichotomize the trial outcome as either 'success' or 'failure', which oversimplifies interpretation. With relatively modest modification, these methods can be used to define a stepwise approach to design and analysis. In the first (design) step, the trial is sized to show indirectly that the new drug would have beaten placebo; in the second (analysis) step, the probability that the new drug is superior to placebo is assessed and, if it is sufficiently high, in the third and final step the relative efficacy of the new drug to control is assessed on a continuum of effect retention via an 'effect retention likelihood plot'. This stepwise approach is likely to provide a more complete assessment of relative efficacy so that the value of new treatments can be better judged.
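A minimal sketch of the second and third steps as described above, assuming a synthesis-style indirect comparison in which the historical control-versus-placebo log hazard ratio is added to the trial's new-versus-control log hazard ratio, with variances summed under an independence assumption. The numerical inputs and the normal-approximation reading of the "probability" of superiority are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical inputs (log hazard ratios and standard errors):
log_hr_nc, se_nc = np.log(1.10), 0.08   # new vs active control (current trial)
log_hr_cp, se_cp = np.log(0.70), 0.10   # control vs placebo (historical data)

# Indirect comparison of new drug vs placebo (synthesis-style; assumes the
# historical control effect carries over and the estimates are independent).
log_hr_np = log_hr_nc + log_hr_cp
se_np = np.sqrt(se_nc**2 + se_cp**2)

# Normal-approximation tail probability that the new drug beats placebo.
p_beats_placebo = norm.cdf(-log_hr_np / se_np)

# Point estimate of the fraction of the control effect retained by the new drug.
retention = 1 + log_hr_nc / log_hr_cp

print(f"log HR (new vs placebo): {log_hr_np:.3f} (SE {se_np:.3f})")
print(f"P(new drug superior to placebo) ~ {p_beats_placebo:.3f}")
print(f"Estimated effect retention: {retention:.2f}")
```

Plotting the analogous tail probability over a grid of retention fractions would give the kind of "effect retention likelihood plot" the abstract describes.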
23.
Time to event outcome trials in clinical research are typically large, expensive and high-profile affairs. Such trials are commonplace in the oncology and cardiovascular therapeutic areas but are also seen elsewhere, for example in respiratory indications such as chronic obstructive pulmonary disease. Their progress is closely monitored and results are often eagerly awaited. Once available, the top-line result is often big news, at least within the therapeutic area in which the trial was conducted, and the data are subsequently fully scrutinized in a series of high-profile publications. In such circumstances, the statistician has a vital role to play in the design, conduct, analysis and reporting of the trial. In particular, in drug development it is incumbent on the statistician to ensure at the outset that the sizing of the trial is fully appreciated by their medical, and other non-statistical, drug development team colleagues, and that the risk of delivering a statistically significant but clinically unpersuasive result is minimized. The statistician also has a key role in advising the team when, early in the life of an outcomes trial, a lower than anticipated event rate appears to be emerging. This paper highlights some of the important features of outcome trial sample sizing and makes a number of simple recommendations aimed at ensuring a better, common understanding of the interplay between sample size and power and the final result required to provide a statistically positive and clinically persuasive outcome. Copyright © 2009 John Wiley & Sons, Ltd.
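As one concrete illustration of the interplay between effect size, events and subjects discussed above, the sketch below uses the standard Schoenfeld approximation for an event-driven, logrank-based trial and shows how a lower than anticipated event probability inflates the number of subjects required. The target hazard ratio, power and event probabilities are hypothetical.

```python
from math import log, ceil
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.9, alloc=0.5):
    """Schoenfeld approximation to the number of events needed to detect
    hazard ratio `hr` with a two-sided logrank test, given the allocation
    fraction `alloc` to one of the two arms."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z**2 / (alloc * (1 - alloc) * log(hr)**2))

def required_subjects(n_events, event_probability):
    """Subjects needed so that the expected number of events reaches
    `n_events`, given the anticipated event probability during follow-up."""
    return ceil(n_events / event_probability)

d = required_events(hr=0.80)                 # target hazard ratio 0.80
print(d, required_subjects(d, 0.30))         # planned 30% event probability
print(d, required_subjects(d, 0.20))         # if the event rate turns out lower
```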
24.
In this paper we present an approach to using historical control data to augment information from a randomized controlled clinical trial when it is not possible to continue the control regimen long enough to obtain the most reliable and valid assessment of long-term treatment effects. Using an adjustment procedure applied to the historical control data, we investigate a method for estimating the long-term survival function of the clinical trial control group and for evaluating the long-term treatment effect. The suggested method is simple to interpret and is particularly motivated in clinical trial settings where ethical considerations preclude the long-term follow-up of placebo controls. A simulation study reveals that the bias in parameter estimates that arises in the setting of group sequential monitoring is attenuated when long-term historical control information is used in the proposed manner. Data from the first and second National Wilms' Tumor studies are used to illustrate the method.
25.
A three-parameter generalisation of the beta-binomial distribution (BBD) derived by Chandon (1976) is examined. We obtain the maximum likelihood estimates of the parameters and give the elements of the information matrix. To exhibit the applicability of the generalised distribution, we show how it gives an improved fit over the BBD for magazine exposure and consumer purchasing data. Finally, we derive an empirical Bayes estimate of a binomial proportion based on the generalised beta distribution used in this study.
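Chandon's three-parameter generalisation is not implemented here, but the sketch below fits the standard two-parameter BBD by maximum likelihood, the baseline against which the paper compares its fit; the exposure counts are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

# Hypothetical magazine-exposure data: number of exposures out of n = 5
# occasions for each respondent.
n = 5
k = np.array([0, 0, 1, 1, 2, 2, 2, 3, 4, 5, 5, 0, 1, 3])

def neg_loglik(log_params):
    a, b = np.exp(log_params)            # log scale keeps a, b > 0
    return -betabinom.logpmf(k, n, a, b).sum()

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(f"alpha = {a_hat:.3f}, beta = {b_hat:.3f}, logL = {-fit.fun:.3f}")
```

Under the fitted two-parameter model, (k_i + α̂)/(n + α̂ + β̂) is the usual posterior-mean shrinkage estimate of an individual proportion; the empirical Bayes estimate referred to in the abstract's final sentence is the analogous quantity under the generalised beta prior.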
26.
The problem is that of estimating the probabilities of m independent binomial random variables when their probabilities are known to be nondecreasing and the loss function is squared error. In the cases where the maximum likelihood estimator (m.l.e.) is inadmissible (essentially when the total number of trials is 7 or more) we present a method for modifying the m.l.e. to obtain a better estimator. The method proceeds through a series of changes: at each step we alter the action taken by the m.l.e. on each of three appropriately chosen points in the sample space.
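For context, the order-restricted m.l.e. referred to above is the weighted isotonic regression of the raw proportions, computable by the pool-adjacent-violators algorithm with the numbers of trials as weights. The sketch below computes that baseline estimator only; it does not implement the paper's modified, improved estimator, and the data are hypothetical.

```python
import numpy as np

def isotonic_mle(successes, trials):
    """Order-restricted MLE of nondecreasing binomial probabilities:
    weighted isotonic regression (pool-adjacent-violators) of the raw
    proportions, with the numbers of trials as weights."""
    p = np.asarray(successes, float) / np.asarray(trials, float)
    w = np.asarray(trials, float)
    # Each block holds (weighted mean, total weight, number of points pooled).
    blocks = []
    for value, weight in zip(p, w):
        blocks.append([value, weight, 1])
        # Pool adjacent blocks while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    return np.concatenate([[v] * c for v, _, c in blocks])

# Hypothetical data: successes and trials at m = 4 ordered levels.
print(isotonic_mle([3, 2, 6, 9], [10, 10, 10, 10]))   # -> [0.25 0.25 0.6 0.9]
```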
27.
We consider the situation where one wants to maximise a function f(θ, x) with respect to x, with θ unknown and estimated from observations y_k. This may correspond to the case of a regression model, where one observes y_k = f(θ, x_k) + ε_k, with ε_k some random error, or to the Bernoulli case where y_k ∈ {0, 1}, with Pr[y_k = 1 | θ, x_k] = f(θ, x_k). Special attention is given to sequences of design points of the form x_{k+1} = arg max_x {f(θ̂_k, x) + d_k(x)}, with θ̂_k an estimated value of θ obtained from (x_1, y_1), ..., (x_k, y_k) and d_k(x) a penalty for poor estimation. Approximately optimal rules are suggested in the linear regression case with a finite horizon, where one wants to maximise Σ_{i=1}^{N} w_i f(θ, x_i) with {w_i} a weighting sequence. Various examples are presented, together with comparisons with a Pólya urn design and an up-and-down method for a binary response problem.
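A toy illustration of a rule of this general form, assuming a quadratic regression model, least-squares re-estimation of θ after each observation, and an illustrative choice of d_k(x) proportional to the prediction standard error and shrinking with k; none of these specific choices are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.0, 2.0, -1.0])            # unknown to the experimenter
grid = np.linspace(0.0, 2.0, 101)                   # candidate design points
G = np.column_stack([np.ones_like(grid), grid, grid**2])   # regressors on the grid

X, y = [], []
x_next = 0.2                                         # arbitrary starting point
for k in range(1, 31):
    # Observe y_k = f(theta, x_k) + eps_k at the current design point.
    y_k = theta_true @ np.array([1.0, x_next, x_next**2]) + rng.normal(scale=0.3)
    X.append([1.0, x_next, x_next**2])
    y.append(y_k)
    if k < 3:                                        # need 3 points to fit 3 parameters
        x_next = rng.uniform(0.0, 2.0)
        continue
    Xa, ya = np.array(X), np.array(y)
    theta_hat, *_ = np.linalg.lstsq(Xa, ya, rcond=None)   # least-squares estimate of theta
    # d_k(x): illustrative bonus proportional to the prediction standard error,
    # shrinking with k, so early steps still explore poorly estimated regions.
    XtX_inv = np.linalg.inv(Xa.T @ Xa)
    pred_sd = np.sqrt(np.einsum("ij,jk,ik->i", G, XtX_inv, G))
    x_next = grid[np.argmax(G @ theta_hat + pred_sd / np.sqrt(k))]

print(f"final design point: {x_next:.2f} (true maximiser: 1.00)")
```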
28.
The problem of calculating approximate confidence limits for the difference between the success probability parameters of two Pólya distributions is solved for the first time. We suggest some new methods for determining these approximate confidence limits and consider their application to special cases, namely the binomial and hypergeometric distributions. The various approximate confidence limits are evaluated and compared.
29.
Discussions of ethical issues in research involving human subjects most often provoke concerns about valid informed consent procedures. However, given the recognized limitations of informed consent, the way a study is designed is arguably a more consequential concern for subject well-being. This paper summarizes ethical issues in the design of clinical research, with reference to historic and current guidelines. Special attention is given to randomized clinical trials (RCTs) and psychiatric research.
30.
Asymptotic expansions for the null distribution of the logrank statistic and for its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret and provide some theoretical justification for many behavioral characteristics of the logrank test that have previously been observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance, and (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived, based on results in Konishi (1991), and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.
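A small Monte Carlo sketch of point (i), assuming uncensored exponential event times and a 9:1 allocation: the logrank statistic is computed directly and its empirical two-sided rejection rate is compared with the nominal 5% level of the normal approximation. The group sizes, distributions and number of replications are illustrative only and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def logrank_z(times1, times2):
    """Logrank statistic (approximately standard normal under H0) for two
    samples of uncensored, continuous event times."""
    times = np.concatenate([times1, times2])
    group1 = np.concatenate([np.ones(len(times1)), np.zeros(len(times2))])
    group1 = group1[np.argsort(times)]
    o_minus_e, var = 0.0, 0.0
    at_risk1, at_risk = len(times1), len(times)
    for g in group1:                       # one event per time (no ties)
        p1 = at_risk1 / at_risk            # fraction of risk set in group 1
        o_minus_e += g - p1                # observed minus expected events
        var += p1 * (1 - p1)               # hypergeometric variance, d = 1
        at_risk1 -= g
        at_risk -= 1
    return o_minus_e / np.sqrt(var)

# Empirical two-sided type I error at nominal 5% under a 9:1 imbalance.
n1, n2, reps = 90, 10, 5000
rejections = 0
for _ in range(reps):
    z = logrank_z(rng.exponential(1.0, n1), rng.exponential(1.0, n2))
    rejections += abs(z) > 1.96
print(f"empirical type I error: {rejections / reps:.3f}")
```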