102.
Service systems for children and families have been shaped by standard approaches to knowledge-building, which reflect a reductionist approach and assume linearity and/or that individuals and experiences are normally distributed. Yet these approaches may be inadequate for the clients most at risk, especially those who would be analytic 'outliers'. A complexity lens focuses on the whole system and seeks to identify patterns, including the dynamic interactions between components of the system. Social work scholars have begun to apply complexity theory to social work research, demonstrating the conceptual potential of incorporating this theoretical approach into social work theories and models such as the person-in-environment framework and the ecosystems perspective. However, frameworks informed by complexity theory may require ontological and epistemological shifts in thinking, as well as new methodological approaches, in order to fully embody a complexity approach. Complexity theory offers the opportunity to consider the social work clients who are most at risk, as it is better suited to power-law distributions. We can therefore reconceptualize the most 'at-risk' clients as being in a state of transition, which is also the space of greatest creativity and possibility.
103.
In a clinical trial it is sometimes desirable to allocate as many patients as possible to the best treatment, particularly when a trial for a rare disease may contain a considerable portion of the whole target population. The Gittins index rule is a powerful tool for sequentially allocating patients to the best treatment based on the responses of patients already treated, but its application in clinical trials is limited by technical complexity and lack of randomness. Thompson sampling is an appealing alternative, since it strikes a compromise between optimal treatment allocation and randomness, with desirable optimality properties in the machine-learning context. In clinical trial settings, however, multiple simulation studies have shown disappointing results with Thompson samplers. We consider how to improve the short-run performance of Thompson sampling and propose a novel acceleration approach. This approach can also be applied when patients can only be allocated in batches, and it is easy to implement without complex algorithms. A simulation study showed that the approach can improve the performance of Thompson sampling in terms of the average total response rate. An application to the redesign of a preference trial to maximize patient satisfaction is also presented.
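For intuition, here is a minimal sketch of Beta–Bernoulli Thompson sampling with batch allocation for a two-arm binary trial. The Beta(1, 1) priors, batch sizes, and true response rates are illustrative assumptions, and the sketch does not implement the acceleration approach proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(42)

def thompson_batch(successes, failures, batch_size, rng):
    """Allocate one batch of patients by Thompson sampling.

    successes/failures: length-2 arrays of observed outcomes per arm
    (Beta(1, 1) priors assumed). Returns the arm index for each patient.
    """
    arms = np.empty(batch_size, dtype=int)
    for i in range(batch_size):
        # Draw one sample from each arm's posterior and assign the larger draw.
        draws = rng.beta(1 + successes, 1 + failures)
        arms[i] = int(np.argmax(draws))
    return arms

# Illustrative trial: the true response rates are assumptions, not from the paper.
true_p = np.array([0.35, 0.55])
succ = np.zeros(2)
fail = np.zeros(2)
for _ in range(10):                      # 10 batches of 10 patients
    arms = thompson_batch(succ, fail, batch_size=10, rng=rng)
    for a in arms:
        y = rng.random() < true_p[a]     # simulate the binary response
        succ[a] += y
        fail[a] += 1 - y

total = succ.sum() + fail.sum()
print("share allocated to the better arm:", (succ[1] + fail[1]) / total)
print("average total response rate:", succ.sum() / total)
```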
104.
Randomised controlled trials are considered the gold standard in trial design. However, phase II oncology trials with a binary outcome are often single-arm. Although a number of reasons exist for choosing a single-arm trial, the primary one is that single-arm designs require fewer participants than their randomised equivalents. Therefore, the development of novel methodology that makes randomised designs more efficient is of value to the trials community. This article introduces a randomised two-arm binary-outcome trial design that includes stochastic curtailment (SC), allowing for the possibility of stopping a trial before the final conclusions are known with certainty. In addition to SC, the proposed design uses a randomised block design, which allows investigators to control the number of interim analyses. This approach is compared with existing designs that also use early stopping, through a loss function composed of a weighted sum of design characteristics. Comparisons are also made using an example from a real trial. The comparisons show that for many possible loss functions the proposed design is superior to existing designs. Further, the proposed design may be more practical, since it allows a flexible number of interim analyses. One existing design produces superior design realisations when the anticipated response rate is low; however, when using this design, the probability of rejecting the null hypothesis is sensitive to misspecification of the null response rate. Therefore, when considering randomised designs in phase II, we recommend the proposed approach over other sequential designs.
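As a rough illustration of stochastic curtailment, the sketch below computes Monte Carlo conditional power at an interim look of a two-arm binary trial: the remaining patients are simulated under assumed response rates, and the trial would be curtailed for futility when the probability of eventually rejecting the null falls below a threshold. All counts, rates, and thresholds are hypothetical, and this is not the randomised block SC design proposed in the article.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def conditional_power(s_trt, n_trt, s_ctl, n_ctl, n_per_arm,
                      p_trt, p_ctl, alpha=0.05, n_sims=10_000):
    """Monte Carlo conditional power: simulate the remaining patients under
    assumed response rates and count how often the final two-proportion
    z-test (one-sided) rejects H0."""
    z_crit = norm.ppf(1 - alpha)
    rem_trt, rem_ctl = n_per_arm - n_trt, n_per_arm - n_ctl
    rejections = 0
    for _ in range(n_sims):
        st = s_trt + rng.binomial(rem_trt, p_trt)
        sc = s_ctl + rng.binomial(rem_ctl, p_ctl)
        p1, p0 = st / n_per_arm, sc / n_per_arm
        pbar = (st + sc) / (2 * n_per_arm)
        se = np.sqrt(2 * pbar * (1 - pbar) / n_per_arm)
        if se > 0 and (p1 - p0) / se > z_crit:
            rejections += 1
    return rejections / n_sims

# Interim look after 20 of 50 patients per arm (all numbers illustrative).
cp = conditional_power(s_trt=9, n_trt=20, s_ctl=5, n_ctl=20,
                       n_per_arm=50, p_trt=0.45, p_ctl=0.25)
print(f"conditional power: {cp:.3f}")
# Stochastic curtailment: stop for futility if cp falls below, say, 0.2,
# or consider stopping for efficacy when the rejection probability is near 1.
```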
105.
A biosimilar is a biological product that is highly similar to, and has no clinically meaningful differences from, a licensed reference product in terms of safety, purity, and potency. Biosimilar study design is essential to demonstrate equivalence between the biosimilar and the reference product. However, existing designs and assessment methods are primarily based on binary and continuous endpoints. We propose a Bayesian adaptive design for biosimilarity trials with a time-to-event endpoint. The features of the proposed design are twofold. First, we employ the calibrated power prior to precisely borrow relevant information from historical data on the reference drug. Second, we propose a two-stage procedure using a Bayesian biosimilarity index (BBI) that allows early stopping and improves efficiency. Extensive simulations demonstrate the operating characteristics of the proposed method in contrast with a naive method. Sensitivity analyses and extensions with respect to the assumptions are also presented.
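The borrowing idea can be illustrated with a conjugate sketch of a fixed power prior for an exponential event rate: the historical likelihood is down-weighted by a factor a0 before being combined with the current data. The calibration of a0, the full time-to-event modelling, and the BBI stopping rule of the proposed design are not reproduced here; all numbers are illustrative.

```python
import numpy as np
from scipy.stats import gamma

# Historical (reference-drug) data: number of events and total follow-up time.
d_hist, t_hist = 40, 120.0       # illustrative values
# Current reference-arm data in the biosimilarity trial.
d_curr, t_curr = 15, 50.0

a0 = 0.5                         # fixed discounting weight in [0, 1]
a_prior, b_prior = 0.001, 0.001  # vague Gamma prior on the exponential rate

# Power prior: the historical likelihood is raised to a0, so with a
# Gamma(a, b) initial prior the posterior for the event rate lambda is
# Gamma(a + a0*d_hist + d_curr, b + a0*t_hist + t_curr).
a_post = a_prior + a0 * d_hist + d_curr
b_post = b_prior + a0 * t_hist + t_curr

post = gamma(a=a_post, scale=1.0 / b_post)
print("posterior mean rate:", post.mean())
print("95% credible interval:", post.ppf([0.025, 0.975]))
```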
106.
We apply the Abramson principle to define adaptive kernel estimators for the intensity function of a spatial point process. We derive asymptotic expansions for the bias and variance under the regime in which n independent copies of a simple point process in Euclidean space are superposed. The method is illustrated by means of a simple example and applied to tornado data.
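A minimal sketch of Abramson's square-root law for an adaptive Gaussian kernel intensity estimate in the plane is given below. The fixed-bandwidth pilot, the bandwidth h0, and the simulated point pattern are illustrative assumptions, and the sketch ignores edge correction and the superposition asymptotics studied in the paper.

```python
import numpy as np

def abramson_intensity(points, grid, h0):
    """Adaptive (Abramson) Gaussian kernel intensity estimate in the plane.

    The local bandwidth at data point x_i is h0 * lambda_pilot(x_i)^{-1/2},
    rescaled by the geometric mean of the pilot values so that h0 keeps its
    global meaning. A fixed-bandwidth pilot estimate supplies lambda_pilot.
    """
    def gauss2(d2, h):
        return np.exp(-d2 / (2 * h**2)) / (2 * np.pi * h**2)

    # Pilot (fixed-bandwidth) intensity evaluated at the data points.
    d2_data = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    pilot = gauss2(d2_data, h0).sum(axis=1)

    # Abramson square-root-law local bandwidths.
    g = np.exp(np.mean(np.log(pilot)))        # geometric-mean normalisation
    h_i = h0 * np.sqrt(g / pilot)

    # Adaptive estimate on the grid: each point contributes with its own h_i.
    d2_grid = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return gauss2(d2_grid, h_i[None, :]).sum(axis=1)

# Illustrative use: 200 points from an inhomogeneous pattern, evaluated on a grid.
rng = np.random.default_rng(0)
pts = rng.normal(loc=[0.5, 0.5], scale=[0.1, 0.25], size=(200, 2))
xs = np.linspace(0, 1, 50)
grid = np.array([[x, y] for x in xs for y in xs])
lam = abramson_intensity(pts, grid, h0=0.08)
print("estimated intensity, min/max:", lam.min(), lam.max())
```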
107.
The definition and occurrence of traumatic events are expanding, and such events touch everyone's life in some way. The degree to which a violent event affects an individual, a group, a workplace, or the community varies. Unfortunately, violent events are all too common. Businesses are recognizing the significance of violence as a workplace problem and the varying degrees of trauma that have a devastating impact on employee retention, workplace functioning, and personal well-being. These events can include industrial or natural disasters, worksite accidents, organizational changes, suicide, homicide, robbery, assault, threats of violence, and even terrorism. How prepared an organization is varies and may be correlated with how resilient individuals and the workplace as a whole are after workplace violence or trauma. This article focuses on what workplace violence and trauma include, the effects of repeated events, and how people can remain resilient while trying to prevent further events in the workplace.
108.
This article compares the mean-squared error (or ℓ2 risk) of ordinary least squares (OLS), James–Stein, and least absolute shrinkage and selection operator (Lasso) shrinkage estimators in linear regression where the number of regressors is smaller than the sample size. We compare and contrast the known risk bounds for these estimators, which show that neither James–Stein nor Lasso uniformly dominates the other. We investigate the finite-sample risk using a simple simulation experiment and find that the risk of Lasso estimation is particularly sensitive to the coefficient parameterization: for a significant portion of the parameter space, Lasso has higher mean-squared error than OLS. This investigation suggests that there are potential pitfalls in Lasso estimation, and that simulation studies need to be more attentive to careful exploration of the parameter space.
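A small simulation in this spirit is sketched below: it compares the Monte Carlo ℓ2 risk of OLS, a positive-part James–Stein shrinkage of the OLS vector (treating the error variance as known), and the Lasso with a fixed penalty. The sample size, penalty level, and coefficient vectors are illustrative assumptions and do not reproduce the article's experiment.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def risk(beta_true, n=100, p=10, sigma=1.0, n_reps=500, lasso_alpha=0.1):
    """Monte Carlo mean-squared error of the coefficient estimates
    (||beta_hat - beta||^2) for OLS, James-Stein shrinkage of OLS,
    and the Lasso, in a Gaussian linear regression."""
    mse = {"ols": 0.0, "js": 0.0, "lasso": 0.0}
    for _ in range(n_reps):
        X = rng.standard_normal((n, p))
        y = X @ beta_true + sigma * rng.standard_normal(n)

        # OLS
        b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Positive-part James-Stein: shrink the OLS vector toward zero using
        # the quadratic form that defines its sampling precision.
        quad = b_ols @ (X.T @ X) @ b_ols
        shrink = max(0.0, 1.0 - (p - 2) * sigma**2 / quad)
        b_js = shrink * b_ols

        # Lasso (fixed penalty; in practice it would be cross-validated).
        b_lasso = Lasso(alpha=lasso_alpha, fit_intercept=False).fit(X, y).coef_

        for name, b in (("ols", b_ols), ("js", b_js), ("lasso", b_lasso)):
            mse[name] += np.sum((b - beta_true) ** 2) / n_reps
    return mse

# Dense, moderate coefficients: the Lasso's bias can make it worse than OLS here.
print(risk(beta_true=np.full(10, 0.5)))
# Sparse coefficients: the Lasso tends to win.
print(risk(beta_true=np.r_[1.5, np.zeros(9)]))
```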
109.
In adaptive clinical trial research, it is common to use data-dependent design weights to assign individuals to treatments so that more study subjects are assigned to the better treatment. These design weights must also be used to obtain consistent estimates of the treatment effects as well as the effects of the other prognostic factors. In practice, however, there are situations where binary responses must be collected repeatedly from an individual over a period of time, and consistent estimates of the treatment effect and the effects of the other covariates are needed in such a binary longitudinal setup. In this paper, we introduce a binary-response-based longitudinal adaptive design for allocating individuals to the better treatment and propose a weighted generalized quasi-likelihood approach for consistent and efficient estimation of the regression parameters, including the treatment effects.
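A toy sketch of the design-weight idea follows: allocation probabilities depend on the responses observed so far, and each patient's inverse allocation probability is reused as a weight when estimating the arm-specific response rates. This only illustrates data-dependent design weights for a single binary response; it does not implement the longitudinal weighted generalized quasi-likelihood estimator proposed in the paper, and all rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

true_p = {"A": 0.6, "B": 0.4}   # assumed response rates, not from the paper
records = []                     # (arm, response, allocation probability used)
succ = {"A": 1.0, "B": 1.0}     # pseudo-counts so the first allocation is 50/50
fail = {"A": 1.0, "B": 1.0}

for _ in range(300):
    # Data-dependent allocation probability for treatment A, based on past data.
    rate_a = succ["A"] / (succ["A"] + fail["A"])
    rate_b = succ["B"] / (succ["B"] + fail["B"])
    pi_a = rate_a / (rate_a + rate_b)
    arm = "A" if rng.random() < pi_a else "B"
    y = rng.random() < true_p[arm]
    records.append((arm, y, pi_a if arm == "A" else 1 - pi_a))
    succ[arm] += y
    fail[arm] += 1 - y

def weighted_rate(arm):
    """Design-weighted response-rate estimate: weight each patient by the
    inverse of the probability with which they were assigned their arm."""
    w = np.array([1 / p for a, _, p in records if a == arm])
    y = np.array([float(r) for a, r, _ in records if a == arm])
    return np.sum(w * y) / np.sum(w)

print("weighted estimates:", weighted_rate("A"), weighted_rate("B"))
print("share allocated to A:", np.mean([a == "A" for a, _, _ in records]))
```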
110.
In weighted moment condition models, we show a subtle link between identification and estimability that limits the practical usefulness of estimators based on these models. In particular, if it is necessary for (point) identification that the weights take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified. This rate depends on relative tail conditions and can be as slow in some examples as n^{-1/4}. This nonstandard rate of convergence can lead to numerical instability and/or large standard errors. We examine two weighted model examples: (i) the binary response model under mean restriction introduced by Lewbel (1997) and further generalized to cover endogeneity and selection, where the estimator in this class of models is weighted by the density of a special regressor, and (ii) the treatment effect model under exogenous selection (Rosenbaum and Rubin (1983)), where the resulting estimator of the average treatment effect is one that is weighted by a variant of the propensity score. Without strong relative support conditions, these models, similar to well known "identified at infinity" models, lead to estimators that converge at slower than parametric rate, since essentially, to ensure point identification, one requires some variables to take values on sets with arbitrarily small probabilities, or thin sets. For the two models above, we derive some rates of convergence and propose that one conducts inference using rate adaptive procedures that are analogous to Andrews and Schafgans (1998) for the sample selection model.
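The practical consequence can be seen in a small simulation of the propensity-weighted average treatment effect estimator: as the propensity score is pushed toward 0 and 1, the weights blow up on thin sets and the estimator's variability grows sharply. The data-generating process below is entirely illustrative and does not implement the rate-adaptive inference procedures discussed in the article.

```python
import numpy as np

rng = np.random.default_rng(7)

def ipw_ate(n, gamma):
    """One draw of the inverse-propensity-weighted ATE estimate.

    gamma controls how extreme the (known) propensity scores get: larger gamma
    pushes e(x) toward 0 and 1, so the weights 1/e(x) and 1/(1-e(x)) take
    arbitrarily large values on thin sets (all settings illustrative)."""
    x = rng.standard_normal(n)
    e = 1.0 / (1.0 + np.exp(-gamma * x))                   # propensity score
    d = rng.random(n) < e                                    # treatment indicator
    y = 1.0 + 0.5 * x + 2.0 * d + rng.standard_normal(n)     # true ATE = 2
    return np.mean(d * y / e - (1 - d) * y / (1 - e))

for gamma in (0.5, 2.0, 5.0):
    est = np.array([ipw_ate(2000, gamma) for _ in range(500)])
    print(f"gamma={gamma}: mean={est.mean():.2f}, sd={est.std():.2f}")
```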