121.
Summary.  The concept of reliability denotes one of the most important psychometric properties of a measurement scale. Reliability refers to the capacity of the scale to discriminate between subjects in a given population. In classical test theory it is often estimated by the intraclass correlation coefficient based on two replicate measurements. However, the modelling framework that is used in this theory is often too narrow when applied in practical situations. Generalizability theory has extended reliability theory to a much broader framework but is confronted with some limitations when applied in a longitudinal setting. We explore how the definition of reliability can be generalized to a setting where subjects are measured repeatedly over time. On the basis of four defining properties for the concept of reliability, we propose a family of reliability measures which circumscribes the area in which reliability measures should be sought. It is shown how different members of this family assess different aspects of the problem and that the reliability of the instrument can depend on the way that it is used. The methodology is motivated by and illustrated on data from a clinical study on schizophrenia. On the basis of this study, we estimate and compare the reliabilities of two rating scales for evaluating the severity of the disorder.
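The classical estimator this abstract starts from, the intraclass correlation from two replicate measurements, can be sketched as follows. This is a minimal illustration, not the paper's generalized measure; the function name `icc_one_way` and the pure-Python one-way ANOVA decomposition are my own.

```python
def icc_one_way(pairs):
    """Intraclass correlation coefficient from two replicate measurements
    per subject, via a one-way random-effects ANOVA decomposition
    (the classical ICC(1,1))."""
    n, k = len(pairs), 2
    grand = sum(x + y for x, y in pairs) / (n * k)
    means = [(x + y) / 2 for x, y in pairs]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 + (y - m) ** 2
              for (x, y), m in zip(pairs, means)) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly reproduced replicates give an ICC of 1; measurement noise within subjects pulls the estimate down.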
122.
One of the most important steps in the design of a pharmaceutical clinical trial is the estimation of the sample size. For a superiority trial, the sample size formula (to achieve a stated power) is based on a given clinically meaningful difference and a value for the population variance. The formula is typically used as though this population variance were known, whereas in reality it is unknown and is replaced by an estimate with its associated uncertainty. The variance estimate would be derived from an earlier, similarly designed study (or an overall estimate from several previous studies), and its precision depends on its degrees of freedom. This paper provides a solution for the calculation of sample sizes that allows for the imprecision in the estimate of the sample variance. It shows that traditional formulae give sample sizes that are too small because they ignore this uncertainty, the deficiency being more acute with fewer degrees of freedom. It is recommended that the methodology described in this paper be used whenever the sample variance has fewer than 200 degrees of freedom.
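The deficiency described here can be made concrete with a small Monte Carlo sketch: size the trial with the traditional formula using an estimated variance `s2 ~ sigma2 * chi2_df / df`, then evaluate the power actually achieved under the true variance. The function names and the normal-approximation power formula are illustrative assumptions, not the paper's exact correction.

```python
import random
from statistics import NormalDist, fmean

def n_per_group(delta, sigma2, alpha=0.05, power=0.9):
    """Traditional normal-approximation sample size per group for a
    two-arm superiority trial; treats sigma2 as known."""
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma2 / delta ** 2

def expected_power(delta, sigma2, df, alpha=0.05, target=0.9,
                   reps=5000, seed=1):
    """Average power achieved when the variance plugged into n_per_group is
    an estimate with `df` degrees of freedom, averaged by Monte Carlo over
    the sampling distribution of that estimate."""
    rng = random.Random(seed)
    za = NormalDist().inv_cdf(1 - alpha / 2)
    total = 0.0
    for _ in range(reps):
        s2 = sigma2 * fmean(rng.gauss(0, 1) ** 2 for _ in range(df))
        n = n_per_group(delta, s2, alpha, target)   # sized from the estimate...
        ncp = delta / (2 * sigma2 / n) ** 0.5       # ...but the true variance applies
        total += 1 - NormalDist().cdf(za - ncp)
    return total / reps
```

With few degrees of freedom the average achieved power falls visibly short of the nominal target, and the gap shrinks as the degrees of freedom grow, which is the pattern the paper's recommendation addresses.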
123.
A common challenge in clinical research trials is for applied statisticians to manage, analyse, summarize and report an enormous amount of data. Nowadays, owing to advances in medical technology, situations frequently arise where it is difficult to display and interpret results. Consequently, a creative approach is required to summarize the main outcomes of the statistical analyses in a form that is easy to grasp, to interpret and possibly to remember. In this paper a number of clinical case studies are provided: first, a topographical map of the brain summarizing P-values obtained from comparisons across different EEG sites; second, a bull's-eye plot showing the agreement between observers in different regions of the heart; third, a pictorial table reporting inter- and intra-rater reliability scores of a speech assessment; fourth, a star plot handling numerous questionnaire results; and finally, a correlogram illustrating significant correlation values between two diagnostic tools. The intention of this paper is to encourage the effort of visual representation of multiple statistical outcomes. Such representations do not merely embellish the report; they aid interpretation by conveying a specific statistical meaning.
124.
The efficiency of a sequential test is related to the “importance” of the trials within the test. This relationship is used to find the optimal test for selecting the greater of two binomial probabilities, p_a and p_b: the stopping rule is “gambler's ruin”, and the optimal discipline when p_a + p_b ≤ 1 (≥ 1) is play-the-winner (play-the-loser), i.e. an a-trial which results in a success is followed by an a-trial (b-trial), whereas an a-trial which results in a failure is followed by a b-trial (a-trial), and correspondingly for b-trials.
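A simulation sketch of the play-the-winner discipline with a gambler's-ruin stopping rule may help fix ideas. The specific stopping statistic (difference in success counts reaching a fixed boundary) and the function name are my assumptions for illustration; the paper derives the optimal rule formally.

```python
import random

def play_the_winner(pa, pb, boundary=5, seed=0, max_trials=10000):
    """Play-the-winner sampling (assumes pa + pb <= 1): stay on an arm
    after a success, switch after a failure.  Stop, gambler's-ruin style,
    when one arm leads the other by `boundary` successes.
    Returns 'a' or 'b' (or None if no decision within max_trials)."""
    rng = random.Random(seed)
    arm, lead = 'a', 0            # lead = successes on a minus successes on b
    for _ in range(max_trials):
        p = pa if arm == 'a' else pb
        if rng.random() < p:       # success: record it and stay on this arm
            lead += 1 if arm == 'a' else -1
        else:                      # failure: switch arms
            arm = 'b' if arm == 'a' else 'a'
        if abs(lead) >= boundary:
            return 'a' if lead > 0 else 'b'
    return None
```

Repeating the procedure over many seeds shows it selects the arm with the larger success probability the vast majority of the time.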
125.
Summary.  The treatments embodied in social interventions are characterized by their heterogeneity, delivered as they often are by different individuals operating in different social and geographical contexts. One implication of this heterogeneity is that average treatment effects will often be less useful than estimates of differential impacts across contexts. The paper shows how multilevel models can be used to estimate variability of impact and to account for systematic effects. These models are specified for multisite interventions, for studies using cluster allocation and for designs that incorporate matching. The paper indicates how qualitative and quantitative approaches to evaluation could be linked.
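The idea of estimating variability of impact, rather than a single average effect, can be sketched with a crude moment estimator: the observed spread of per-site treatment-effect estimates, minus their average sampling variance, estimates the genuine between-site variance. This is a simplified stand-in for the paper's multilevel models; the function name and inputs are hypothetical.

```python
from statistics import fmean

def site_effect_variance(site_effects, site_se2):
    """Moment estimator of between-site variance of treatment effects:
    observed variance of per-site effect estimates minus their mean
    sampling variance, floored at zero."""
    k = len(site_effects)
    mean_effect = fmean(site_effects)
    observed = sum((e - mean_effect) ** 2 for e in site_effects) / (k - 1)
    tau2 = max(0.0, observed - fmean(site_se2))
    return mean_effect, tau2
```

When the sites' estimates vary more than their standard errors alone would explain, the surplus is attributed to real differential impact across contexts.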
126.
The last decade saw enormous progress in the development of causal inference tools to account for noncompliance in randomized clinical trials. With survival outcomes, structural accelerated failure time (SAFT) models enable causal estimation of the effects of observed treatments without making direct assumptions on the compliance selection mechanism. The traditional proportional hazards model has, however, rarely been used for causal inference. The estimator proposed by Loeys and Goetghebeur (2003, Biometrics, vol. 59, pp. 100–105) is limited to the setting of all-or-nothing exposure. In this paper, we propose an estimation procedure for more general causal proportional hazards models linking the distribution of potential treatment-free survival times to the distribution of observed survival times via observed (time-constant) exposures. Specifically, we first build models for the observed exposure-specific survival times. Next, using the proposed causal proportional hazards model, the exposure-specific survival distributions are backtransformed to their treatment-free counterparts to obtain, after proper mixing, the unconditional treatment-free survival distribution. Estimation of the parameter(s) in the causal model is then based on minimizing a test statistic for equality of the backtransformed survival distributions between randomized arms.
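The backtransform-then-mix step can be sketched numerically. Assuming the causal proportional hazards model relates hazards by exp(psi * z) for exposure level z, cumulative hazards scale the same way, so S_free(t) = S_obs(t) ** exp(-psi * z). The function below is my illustration of that step only, not the paper's full estimation procedure.

```python
import math

def treatment_free_survival(surv_by_arm, psi, weights):
    """Backtransform exposure-specific survival curves to treatment-free
    counterparts under a causal PH model (observed hazard = treatment-free
    hazard * exp(psi * z)), then mix over exposure groups:
        S_free(t) = S_obs(t) ** exp(-psi * z).
    surv_by_arm maps exposure level z to a curve [S(t1), S(t2), ...];
    weights maps z to its mixing proportion."""
    T = len(next(iter(surv_by_arm.values())))
    mixed = [0.0] * T
    for z, curve in surv_by_arm.items():
        shrink = math.exp(-psi * z)
        for i, s in enumerate(curve):
            mixed[i] += weights[z] * s ** shrink
    return mixed
```

In the paper's procedure this mapping would be applied per randomized arm, with psi chosen to make the backtransformed distributions agree across arms.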
127.
Summary. The paper presents a multilevel framework for the analysis of multivariate count data that are observed over several time periods for a random sample of individuals. The approach proposed facilitates studying observed and unobserved sources of dependences among the event categories in the presence of possibly higher order autoregressive effects. In an investigation of the relationships between pleasant and unpleasant emotional experiences and the personality traits neuroticism and extraversion over time, we find that the two personality factors are related to both the mean rates of the emotional experiences and their carry-over effects. Respondents with high neuroticism scores not only reported more unpleasant than pleasant emotional experiences but also exhibited higher carry-over effects for unpleasant than for pleasant emotions. In contrast, respondents with high extraversion scores reported fewer anxiety and more euphoria emotions than respondents with low extraversion scores with weaker carry-over effects for both pleasant and unpleasant emotions.
128.
Response-adaptive (RA) allocation designs can skew the allocation of incoming subjects toward the better performing treatment group based on the previously accrued responses. While unstable estimators and increased variability can adversely affect adaptation in early trial stages, Bayesian methods can be implemented with decreasingly informative priors (DIPs) to overcome these difficulties. DIPs have previously been used for binary outcomes to constrain adaptation early in the trial yet gradually increase adaptation as subjects accrue. We extend the DIP approach to RA designs for continuous outcomes, primarily in the normal conjugate family, by functionalizing the prior effective sample size to equal the unobserved sample size. We compare this effective-sample-size DIP approach with other DIP formulations. Further, we consider various allocation equations and assess their behavior under DIPs. Simulated clinical trials comparing these approaches with traditional frequentist and Bayesian RA designs, as well as balanced designs, show that the natural lead-in approaches maintain improved treatment allocation with lower variability and greater power.
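The core mechanism, a prior whose effective sample size equals the unobserved sample size, can be sketched in the normal conjugate family. This is a minimal illustration under assumptions I am adding (known outcome variance, 1:1 target of N/2 per arm, allocation probability equal to the posterior probability that one arm is better); the paper's allocation equations are more varied.

```python
from statistics import NormalDist

def dip_alloc_prob(mean_a, n_a, mean_b, n_b, N, sigma2=1.0, prior_mean=0.0):
    """Decreasingly informative prior (DIP) sketch for a normal outcome:
    each arm gets a conjugate normal prior whose effective sample size
    equals its unobserved sample size (N/2 - n), so the prior constrains
    adaptation early and washes out as subjects accrue.
    Returns the posterior probability that arm a's mean exceeds arm b's."""
    def posterior(mean, n):
        n0 = max(N / 2 - n, 0)                       # prior effective sample size
        post_mean = (n0 * prior_mean + n * mean) / (n0 + n)
        post_var = sigma2 / (n0 + n)
        return post_mean, post_var
    ma, va = posterior(mean_a, n_a)
    mb, vb = posterior(mean_b, n_b)
    # P(mu_a - mu_b > 0) under independent normal posteriors
    return 1 - NormalDist(ma - mb, (va + vb) ** 0.5).cdf(0.0)
```

Early in the trial the same observed difference in means produces an allocation probability near 1/2; late in the trial, with the prior nearly exhausted, it adapts strongly.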
129.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re-estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re-estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre-specified level. Furthermore, some refinements of the re-estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.
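A standard blinded re-estimation step, shown here for a parallel-group normal endpoint rather than the paper's crossover setting, estimates the variance from the pooled interim data with treatment labels ignored and corrects the inflation that lumping introduces under the alternative. The function name and the simplified setting are my assumptions.

```python
import math
from statistics import NormalDist, variance

def blinded_ssr(interim_values, delta, alpha=0.05, power=0.9):
    """Blinded sample size re-estimation sketch: the 'lumped' one-sample
    variance of the pooled interim data overstates the within-group
    variance by delta**2 / 4 under 1:1 allocation when the true mean
    difference is delta; subtract that term, then recompute the
    per-group sample size with the usual normal-approximation formula."""
    z = NormalDist().inv_cdf
    s2_lumped = variance(interim_values)
    s2 = max(s2_lumped - delta ** 2 / 4, 1e-12)  # bias-adjusted blinded estimate
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * s2 / delta ** 2
    return math.ceil(n)
```

Because the labels are never unblinded, the interim look reveals no treatment-effect information directly, yet, as the abstract notes, such procedures can still inflate the type I error rate slightly and may need adjustment.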
130.
Relative risks (RRs) are often considered the preferred measure of association in randomized controlled trials, especially when the binary outcome of interest is common. To estimate RRs directly, log-binomial regression has been recommended. Although log-binomial regression is a special case of generalized linear models, it does not respect the natural parameter constraints, and maximum likelihood estimation is often subject to numerical instability that leads to convergence problems. Alternative methods for solving log-binomial regression convergence problems have been proposed. A Bayesian approach has also been introduced, but the comparison between this method and frequentist methods has not been fully explored. We compared five frequentist methods and one Bayesian method for estimating RRs under a variety of scenarios. Based on our simulation study, no single method performs well across all statistical properties, but COPY 1000 and modified log-Poisson regression can be considered in practice.
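For the simplest case, a single binary treatment indicator, the RR that these regression methods target has a closed form, since the model is saturated: the point estimate from log-binomial (or modified log-Poisson) regression reduces to the ratio of observed proportions. The sketch below computes it with a delta-method Wald interval on the log scale; the function name is my own.

```python
import math
from statistics import NormalDist

def relative_risk(events_t, n_t, events_c, n_c, alpha=0.05):
    """Relative risk for a 2x2 table with a Wald CI on the log scale.
    var(log RR) = (1 - p1)/(n1*p1) + (1 - p0)/(n0*p0), i.e. the
    delta-method standard error."""
    p1, p0 = events_t / n_t, events_c / n_c
    rr = p1 / p0
    se = math.sqrt((1 - p1) / events_t + (1 - p0) / events_c)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return rr, (rr * math.exp(-z * se), rr * math.exp(z * se))
```

The regression formulations matter precisely when covariates are added and this closed form no longer exists, which is where the convergence problems the abstract discusses arise.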