By access type:
  Paid full text: 651
  Free: 26
  Free (domestic): 1

By subject:
  Management: 42
  Ethnology: 2
  Demography: 16
  Collected works: 22
  Theory and methodology: 25
  General: 69
  Sociology: 40
  Statistics: 462

By year:
  2023: 15   2022: 1    2021: 13   2020: 16   2019: 25   2018: 17   2017: 38
  2016: 22   2015: 9    2014: 27   2013: 177  2012: 43   2011: 31   2010: 24
  2009: 28   2008: 26   2007: 26   2006: 16   2005: 17   2004: 19   2003: 11
  2002: 10   2001: 9    2000: 4    1999: 12   1998: 9    1997: 4    1996: 8
  1995: 2    1994: 2    1993: 3    1992: 6    1990: 1    1988: 1    1987: 1
  1983: 3    1979: 1    1975: 1

678 results in total (search time: 375 ms).
191.
Building on the random truncation model, and with the aim of protecting respondents' privacy, this paper proposes an improved randomized-response method for surveying quantitative sensitive questions, and compares the precision of the improved model with that of the random truncation model and the random-truncation Warner model.
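The abstract does not give the form of the improved model, but the general randomized-response idea for quantitative sensitive questions can be illustrated with a minimal additive-scrambling sketch in Python; the distributions and numbers below are hypothetical and do not correspond to the authors' random truncation model.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical illustration: additive scrambled responses for a quantitative
# sensitive question. Each respondent reports y = x + s, where x is the true
# sensitive value and s is noise drawn from a known scrambling distribution.
n = 2000
true_x = rng.gamma(shape=2.0, scale=5.0, size=n)    # unobserved sensitive values
scramble = rng.normal(loc=10.0, scale=4.0, size=n)  # known scrambling distribution
reported = true_x + scramble                        # only this is observed

# Because E[s] and Var(s) are fixed by the survey design, the population mean
# (and variance) of x can be recovered from the reported values alone.
mean_hat = reported.mean() - 10.0
var_hat = reported.var(ddof=1) - 4.0**2

print(f"estimated mean of sensitive variable: {mean_hat:.2f}")
print(f"true sample mean:                     {true_x.mean():.2f}")
```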
192.
The last decade saw enormous progress in the development of causal inference tools to account for noncompliance in randomized clinical trials. With survival outcomes, structural accelerated failure time (SAFT) models enable causal estimation of the effects of observed treatments without making direct assumptions on the compliance selection mechanism. The traditional proportional hazards model has, however, rarely been used for causal inference. The estimator proposed by Loeys and Goetghebeur (2003, Biometrics, vol. 59, pp. 100–105) is limited to the setting of all-or-nothing exposure. In this paper, we propose an estimation procedure for more general causal proportional hazards models linking the distribution of potential treatment-free survival times to the distribution of observed survival times via observed (time-constant) exposures. Specifically, we first build models for the observed exposure-specific survival times. Next, using the proposed causal proportional hazards model, the exposure-specific survival distributions are backtransformed to their treatment-free counterparts to obtain, after proper mixing, the unconditional treatment-free survival distribution. Estimation of the parameter(s) in the causal model is then based on minimizing a test statistic for equality of the backtransformed survival distributions between randomized arms.
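A minimal sketch of the backtransform-and-compare idea, assuming for simplicity an exponential treatment-free time, a binary time-constant exposure occurring only in the experimental arm, random noncompliance, and an integrated squared difference in place of the authors' test statistic; the data-generating choices and the distance criterion are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def km_curve(time, event, grid):
    """Kaplan-Meier survival estimate evaluated on a common time grid."""
    order = np.argsort(time)
    t, d = np.asarray(time)[order], np.asarray(event)[order]
    n = len(t)
    s, s_vals = 1.0, np.empty(n)
    for i in range(n):
        if d[i]:
            s *= 1.0 - 1.0 / (n - i)
        s_vals[i] = s
    idx = np.searchsorted(t, grid, side="right") - 1
    return np.where(idx >= 0, s_vals[np.maximum(idx, 0)], 1.0)

# --- simulate a two-arm trial with noncompliance (all numbers hypothetical) ---
n, psi_true = 4000, np.log(0.5)              # exposure halves the hazard
arm = rng.binomial(1, 0.5, n)                # randomized arm
t0 = rng.exponential(scale=10.0, size=n)     # treatment-free survival time
expo = arm * rng.binomial(1, 0.7, n)         # exposure only in the experimental arm
t_obs = t0 * np.exp(-psi_true * expo)        # causal PH effect of the exposure
cens = rng.uniform(5, 30, n)
y, ev = np.minimum(t_obs, cens), (t_obs <= cens).astype(int)

# --- backtransform exposure-specific curves and mix them within each arm ---
def treatment_free_curve(in_arm, psi, grid):
    curves, weights = [], []
    for d in np.unique(expo[in_arm]):
        sub = in_arm & (expo == d)
        s_obs = km_curve(y[sub], ev[sub], grid)
        curves.append(s_obs ** np.exp(-psi * d))   # S0(t) = S_d(t)^exp(-psi*d)
        weights.append(sub.mean())
    return np.average(np.array(curves), axis=0, weights=weights)

# grid search for the psi that makes the mixed treatment-free curves agree
grid = np.linspace(0.5, 20, 50)
psis = np.linspace(-2.0, 1.0, 121)
dists = [np.mean((treatment_free_curve(arm == 1, p, grid)
                  - treatment_free_curve(arm == 0, p, grid)) ** 2) for p in psis]
print("psi_hat =", round(psis[int(np.argmin(dists))], 2), " true psi =", round(psi_true, 2))
```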
193.
The authors consider regression analysis for binary data collected repeatedly over time on members of numerous small clusters of individuals sharing a common random effect that induces dependence among them. They propose a mixed model that can accommodate both these structural and longitudinal dependencies. They estimate the parameters of the model consistently and efficiently using generalized estimating equations. They show through simulations that their approach yields significant gains in mean squared error when estimating the random effects variance and the longitudinal correlations, while providing estimates of the fixed effects that are just as precise as under a generalized penalized quasi-likelihood approach. Their method is illustrated using smoking prevention data.
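A minimal statsmodels sketch of GEE for clustered longitudinal binary data, using a single exchangeable working correlation at the cluster level; this simplification ignores the explicit serial-correlation component of the authors' mixed model, and all variable names and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_clusters, members, times = 100, 4, 3
rows = []
for c in range(n_clusters):
    u = rng.normal(0, 0.8)                      # shared cluster random effect
    for m in range(members):
        x = rng.binomial(1, 0.5)                # subject-level covariate (e.g. intervention)
        for t in range(times):
            eta = -0.5 + 0.3 * t - 0.6 * x + u  # linear predictor on the logit scale
            yv = rng.binomial(1, 1 / (1 + np.exp(-eta)))
            rows.append(dict(cluster=c, subject=f"{c}-{m}", time=t, x=x, y=yv))
df = pd.DataFrame(rows)

# GEE with clusters as the independence units and an exchangeable working
# correlation; robust (sandwich) standard errors are reported by default.
model = smf.gee("y ~ time + x", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```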
194.
Response-adaptive (RA) allocation designs can skew the allocation of incoming subjects toward the better-performing treatment group based on the previously accrued responses. While unstable estimators and increased variability can adversely affect adaptation in early trial stages, Bayesian methods can be implemented with decreasingly informative priors (DIPs) to overcome these difficulties. DIPs have previously been used for binary outcomes to constrain adaptation early in the trial, yet gradually increase adaptation as subjects accrue. We extend the DIP approach to RA designs for continuous outcomes, primarily in the normal conjugate family, by functionalizing the prior effective sample size to equal the unobserved sample size. We compare this effective-sample-size DIP approach with other DIP formulations. Further, we consider various allocation equations and assess their behavior under DIPs. Simulated clinical trials comparing the behavior of these approaches with traditional frequentist and Bayesian RA designs, as well as balanced designs, show that the natural lead-in approaches maintain improved treatment with lower variability and greater power.
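A rough sketch of the general idea, assuming normal outcomes with known variance, a conjugate normal prior whose effective sample size is set to the number of subjects not yet enrolled (split evenly between arms), and the posterior probability of superiority used directly as the allocation probability; these are illustrative choices, not the authors' specific DIP or allocation equation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical two-arm trial of planned size N with known outcome SD sigma.
# The decreasingly informative prior dominates early and vanishes by the end.
N, sigma = 100, 1.0
mu_true = {"A": 0.0, "B": 0.5}          # arm B is truly better (higher mean)
mu0 = 0.0                               # common prior mean (equipoise)
data = {"A": [], "B": []}

def posterior(arm_data, n_unobserved):
    # Normal-normal conjugate update with prior effective sample size n0.
    n0 = max(n_unobserved, 1e-8)
    n = len(arm_data)
    ybar = np.mean(arm_data) if n else 0.0
    post_mean = (n0 * mu0 + n * ybar) / (n0 + n)
    post_var = sigma**2 / (n0 + n)
    return post_mean, post_var

for i in range(N):
    n_obs = len(data["A"]) + len(data["B"])
    mA, vA = posterior(data["A"], (N - n_obs) / 2)
    mB, vB = posterior(data["B"], (N - n_obs) / 2)
    # Posterior probability that B beats A, used as the allocation probability
    # for the next subject (one simple RA rule among many).
    p_B_better = 1 - stats.norm.cdf(0, loc=mB - mA, scale=np.sqrt(vA + vB))
    arm = "B" if rng.random() < p_B_better else "A"
    data[arm].append(rng.normal(mu_true[arm], sigma))

print({k: len(v) for k, v in data.items()})     # allocation skews toward B
```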
195.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re-estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re-estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre-specified level. Furthermore, some refinements of the re-estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes.
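The sketch below illustrates one simple blinded re-estimation step for a 2x2 crossover, not necessarily the authors' procedure: the within-subject variance is estimated from period differences pooled over the (unknown) sequences, and the total sample size is recomputed from a normal-approximation TOST formula assuming a true ratio of 1; all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Interim data from a 2x2 crossover BE trial: log-scale PK responses for two
# periods per subject. The sequence/treatment codes are never used below, so
# the variance estimate stays blinded.
n_interim = 24
sigma_w = np.sqrt(np.log(1 + 0.30**2))       # within-subject SD for a 30% CV
delta = 0.02                                 # small true log-ratio (test vs reference)
subject_effect = rng.normal(0, 0.5, n_interim)
seq = rng.permutation(np.repeat([0, 1], n_interim // 2))   # sequence, unknown to us
period1 = subject_effect + np.where(seq == 0, 0.0, delta) + rng.normal(0, sigma_w, n_interim)
period2 = subject_effect + np.where(seq == 0, delta, 0.0) + rng.normal(0, sigma_w, n_interim)

# Blinded estimate: the variance of the within-subject period differences,
# pooled over the unknown sequences, estimates 2*sigma_w^2 plus a small bias
# term that depends on the true treatment and period effects.
d = period2 - period1
sigma_w2_blinded = 0.5 * np.var(d, ddof=1)

# Normal-approximation total sample size for TOST with equivalence margins
# (0.80, 1.25) on the ratio scale, assuming a true ratio of 1.
alpha, power = 0.05, 0.90
z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(1 - (1 - power) / 2)
margin = np.log(1.25)
n_total = int(np.ceil(2 * sigma_w2_blinded * (z_a + z_b) ** 2 / margin ** 2))
print(f"blinded within-subject variance estimate: {sigma_w2_blinded:.4f}")
print(f"re-estimated total sample size:           {n_total}")
```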
196.
Relative risks (RRs) are often considered the preferred measure of association in randomized controlled trials, especially when the binary outcome of interest is common. To estimate RRs directly, log-binomial regression has been recommended. Although log-binomial regression is a special case of generalized linear models, it does not respect the natural parameter constraints, and maximum likelihood estimation is often subject to numerical instability that leads to convergence problems. Alternative methods for solving the convergence problems of log-binomial regression have been proposed. A Bayesian approach has also been introduced, but the comparison between this method and frequentist methods has not been fully explored. We compared five frequentist methods and one Bayesian method for estimating RRs under a variety of scenarios. Based on our simulation study, no single method performs well across all statistical properties, but COPY 1000 and modified log-Poisson regression can be considered in practice.
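For the two direct-RR approaches that are straightforward to reproduce, a short statsmodels sketch on simulated data (the COPY method and the Bayesian approach are not shown):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Simulated data with a common binary outcome, so the odds ratio would
# overstate the relative risk; the true RR for x is 1.5.
n = 5000
x = rng.binomial(1, 0.5, n)                  # exposure / treatment indicator
z = rng.normal(size=n)                       # a continuous covariate
p = np.minimum(0.95, np.exp(np.log(0.30) + np.log(1.5) * x + 0.10 * z))
y = rng.binomial(1, p)
X = sm.add_constant(pd.DataFrame({"x": x, "z": z}))

# 1) Log-binomial regression: Binomial family with a log link, so coefficients
#    are log relative risks. May fail to converge when fitted probabilities
#    approach 1.
try:
    logbin = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()
    print("log-binomial RR for x:", np.exp(logbin.params["x"]))
except Exception as err:
    print("log-binomial fit failed:", err)

# 2) Modified (log-)Poisson regression: Poisson family with a log link and a
#    robust sandwich covariance to correct the standard errors for the binary
#    outcome.
modpois = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("modified Poisson RR for x:", np.exp(modpois.params["x"]))
```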
197.
This paper considers the problem of making statistical inferences about a parameter when a narrow interval centred at a given value of the parameter is considered special, meaning that there is a substantial degree of prior belief that the true value of the parameter lies in this interval. A clear justification of the practical importance of this problem is provided. The main difficulty with the standard Bayesian solution to this problem is discussed and, as a result, a pseudo-Bayesian solution is put forward based on determining lower limits for the posterior probability that the parameter lies in the special interval by means of a sensitivity analysis. Since it is not assumed that prior beliefs necessarily need to be expressed in terms of prior probabilities, nor that post-data probabilities must be Bayesian posterior probabilities, hybrid methods of inference are also proposed that are based on specific ways of measuring and interpreting the classical concept of significance. The various methods that are outlined are compared and contrasted both at a foundational level and from a practical viewpoint, by applying them to real data from meta-analyses that appeared in a well-known medical article.
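A small numerical sketch of the kind of sensitivity analysis described, assuming a normal likelihood, a uniform prior within the special interval, and an adversarial placement of the remaining prior mass outside it; this is one convenient robust-Bayes lower bound, not necessarily the authors' exact construction, and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

# A "special" interval of half-width eps around theta0, prior mass p0 spread
# uniformly over that interval, and the remaining mass placed anywhere outside.
# Minimizing over the outside placement gives a lower bound on the posterior
# probability that theta lies in the special interval.
theta0, eps = 0.0, 0.1
xbar, se = 0.18, 0.08        # observed estimate and its standard error

def lower_bound(p0):
    # Marginal likelihood of xbar when theta is uniform on the interval.
    m_in = (stats.norm.cdf(xbar, loc=theta0 - eps, scale=se)
            - stats.norm.cdf(xbar, loc=theta0 + eps, scale=se)) / (2 * eps)
    # Worst case for the outside mass: put it where the likelihood is largest
    # outside the interval.
    if abs(xbar - theta0) > eps:
        m_out = stats.norm.pdf(xbar, loc=xbar, scale=se)
    else:
        m_out = max(stats.norm.pdf(xbar, loc=theta0 - eps, scale=se),
                    stats.norm.pdf(xbar, loc=theta0 + eps, scale=se))
    return p0 * m_in / (p0 * m_in + (1 - p0) * m_out)

for p0 in np.linspace(0.1, 0.9, 9):
    print(f"prior mass {p0:.1f} -> posterior lower bound {lower_bound(p0):.3f}")
```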
198.
Modelling time-varying and frequency-specific relationships between two brain signals is becoming an essential methodological tool for answering theoretical questions in experimental neuroscience. In this article, we propose to estimate a frequency-specific Granger-causality statistic that may vary in time in order to evaluate the functional connections between two brain regions during a task. For that purpose, we use an adaptive Kalman-filter-type estimator of a linear Gaussian vector autoregressive model with coefficients evolving over time. The estimation procedure is achieved through variational Bayesian approximation and is extended to multiple trials. This Bayesian state-space (BSS) model provides a dynamical Granger-causality statistic that is quite natural. We propose to extend the BSS model to include the à trous Haar decomposition. This wavelet-based forecasting method relies on a multiscale resolution decomposition of the signal using the redundant à trous wavelet transform and allows us to capture short- and long-range dependencies between signals. Equally importantly, it allows us to derive the desired dynamical and frequency-specific Granger-causality statistic. The application of these models to intracranial local field potential data recorded during a psychological experimental task shows the complex frequency-based cross-talk between the amygdala and the medial orbito-frontal cortex.
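The wavelet front end is easy to illustrate on its own; below is a minimal numpy implementation of the redundant à trous Haar decomposition in its causal (forecasting-oriented) form, with additive reconstruction. The Kalman-filter/variational-Bayes estimation of the time-varying VAR is not sketched here.

```python
import numpy as np

def atrous_haar(x, n_scales):
    """Redundant (undecimated) a trous Haar decomposition of a 1-D signal.

    Uses the causal variant often adopted for forecasting: the smooth at scale
    j averages the current value with the value 2**(j-1) steps in the past,
    so no future samples are used.
    """
    smooth = np.asarray(x, dtype=float)
    details = []
    for j in range(1, n_scales + 1):
        lag = 2 ** (j - 1)
        shifted = np.concatenate([np.repeat(smooth[0], lag), smooth[:-lag]])
        new_smooth = 0.5 * (smooth + shifted)
        details.append(smooth - new_smooth)      # detail (wavelet) coefficients
        smooth = new_smooth
    return details, smooth                       # signal == sum(details) + smooth

rng = np.random.default_rng(0)
t = np.arange(512)
signal = np.sin(2 * np.pi * t / 64) + 0.3 * rng.standard_normal(t.size)
details, residual = atrous_haar(signal, n_scales=4)
print(np.allclose(signal, np.sum(details, axis=0) + residual))   # perfect reconstruction
```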