991.
Multi‐country randomised clinical trials (MRCTs) are common in the medical literature, and their interpretation has been the subject of extensive recent discussion. In many MRCTs, an evaluation of treatment effect homogeneity across countries or regions is conducted. Subgroup analysis principles require a significant test of interaction in order to claim heterogeneity of treatment effect across subgroups, such as countries in an MRCT. As clinical trials are typically underpowered for tests of interaction, overly optimistic expectations of treatment effect homogeneity can lead researchers, regulators and other stakeholders to over‐interpret apparent differences between subgroups even when heterogeneity tests are non‐significant. In this paper, we consider some exploratory analysis tools to address this issue. We present three measures derived using the theory of order statistics, which can be used to understand the magnitude and the nature of the variation in treatment effects that can arise merely as an artefact of chance. These measures are not intended to replace a formal test of interaction but instead provide non‐inferential visual aids, which allow comparison of the observed and expected differences between regions or other subgroups and are a useful supplement to such a test. We discuss how our methodology differs from recently published methods addressing the same issue. A case study of our approach is presented using data from the Study of Platelet Inhibition and Patient Outcomes (PLATO), which was a large cardiovascular MRCT that has been the subject of controversy in the literature. An R package is available that implements the proposed methods. Copyright © 2014 John Wiley & Sons, Ltd.
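The abstract does not reproduce the three order‐statistic measures, so the following is only a minimal Monte Carlo sketch of the underlying idea (in Python rather than the R package mentioned above): under a perfectly homogeneous treatment effect, region‐level estimates still spread apart by chance, and the expected ordered estimates and their range quantify how much apparent heterogeneity is a pure artefact of sampling. The number of regions, the regional standard error and the true effect are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def chance_spread(n_regions=10, se_region=0.15, true_effect=0.2, n_sim=20000):
    """Simulate region-level effect estimates under perfect homogeneity and
    summarise how far apart the ordered estimates fall by chance alone."""
    # Each regional estimate ~ Normal(true_effect, se_region^2) under homogeneity.
    est = rng.normal(true_effect, se_region, size=(n_sim, n_regions))
    ordered = np.sort(est, axis=1)                   # order statistics per simulated trial
    max_minus_min = ordered[:, -1] - ordered[:, 0]   # range of regional effects
    return {
        "expected_range": max_minus_min.mean(),
        "95th_pct_range": np.quantile(max_minus_min, 0.95),
        "expected_order_stats": ordered.mean(axis=0),  # average ordered regional effects
    }

print(chance_spread())
```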
992.
Opsonophagocytic killing assays (OPKA) are routinely used for the quantification of bactericidal antibodies in blood serum samples. Quantification of the OPKA readout, the titer, provides the basis for the statistical analysis of vaccine clinical trials having functional immune response endpoints. Traditional OPKA titers are defined as the maximum serum dilution yielding a predefined bacterial killing threshold value, and they are estimated by fitting a dose‐response model to the dilution‐killing curve. This paper illustrates a novel definition of titer, the threshold‐free titer, which preserves biological interpretability while not depending on any killing threshold or on a postulated shape of the dose‐response curve. These titers are shown to be more precise than the traditional threshold‐based titers using simulated and experimental group B streptococcus OPKA data. Also, titer linearity is shown not to be measurable when using threshold‐based titers, whereas it becomes measurable using threshold‐free titers. The biological interpretability and operational characteristics demonstrated here indicate that threshold‐free titers are an appropriate tool for the routine analysis of OPKA data. Copyright © 2015 John Wiley & Sons, Ltd.
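As a concrete illustration of the traditional threshold‐based titer described above, the sketch below fits a four‐parameter logistic dose‐response curve to a dilution‐killing curve and inverts it at a predefined killing threshold. The threshold‐free definition proposed in the paper is not reproduced here, because the abstract does not give its formula; the toy data, the 4PL parameterisation and the 50% threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_dil, bottom, top, log_ec50, slope):
    """Four-parameter logistic dose-response curve on the log10-dilution scale."""
    return bottom + (top - bottom) / (1.0 + np.exp(slope * (log_dil - log_ec50)))

def threshold_titer(dilutions, pct_killing, threshold=50.0):
    """Traditional threshold-based titer: the dilution at which the fitted
    dilution-killing curve crosses the predefined killing threshold."""
    log_dil = np.log10(dilutions)
    p0 = [pct_killing.min(), pct_killing.max(), np.median(log_dil), 1.0]
    (bottom, top, log_ec50, slope), _ = curve_fit(four_pl, log_dil, pct_killing,
                                                  p0=p0, maxfev=10000)
    # Invert the fitted curve at the threshold (assumes bottom < threshold < top).
    frac = (top - threshold) / (threshold - bottom)
    return 10 ** (log_ec50 + np.log(frac) / slope)

# Toy example: bacterial killing decreases as the serum is diluted out.
dil = np.array([50, 100, 200, 400, 800, 1600, 3200], dtype=float)
kill = np.array([95, 92, 85, 70, 45, 20, 8], dtype=float)
print(threshold_titer(dil, kill))
```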
993.
This paper deals with the analysis of data from a HET‐CAMVT experiment. From a statistical perspective, such data pose many challenges. First of all, the data are typically time‐to‐event‐like data, which are simultaneously interval‐censored and right‐truncated. In addition, one has to cope with overdispersion as well as clustering. Traditional analysis approaches ignore overdispersion and clustering and summarize the data into a continuous score that can be analysed using simple linear models. In this paper, a novel combined frailty model is developed that simultaneously captures all of the aforementioned statistical challenges posed by the data. Copyright © 2015 John Wiley & Sons, Ltd.
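The combined frailty model itself is not specified in the abstract; the sketch below only illustrates the data structure it has to accommodate, namely the likelihood contribution of an observation that is both interval‐censored and right‐truncated. The Weibull event‐time distribution and the toy data are assumptions for illustration, and overdispersion and clustering (the frailty part) are not addressed here.

```python
import numpy as np
from scipy.stats import weibull_min

def ic_rt_loglik(params, left, right, trunc):
    """Log-likelihood for interval-censored, right-truncated event times,
    illustrated with a Weibull(shape, scale) time-to-event distribution.
    Each event is only known to lie in (left, right], and it is observed
    only because it occurred before the truncation time `trunc`."""
    shape, scale = np.exp(params)          # keep both parameters positive
    F = lambda t: weibull_min.cdf(t, shape, scale=scale)
    contrib = (F(right) - F(left)) / F(trunc)
    return np.sum(np.log(np.clip(contrib, 1e-300, None)))

# Toy usage with three observations (all values are illustrative).
left = np.array([0.5, 1.0, 2.0])
right = np.array([1.5, 2.5, 4.0])
trunc = np.array([5.0, 5.0, 6.0])
print(ic_rt_loglik(np.log([1.2, 3.0]), left, right, trunc))
```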
994.
The concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that the data are normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods have been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best previously studied method when the variation of the data is mainly from the random subject effect instead of error. Furthermore, it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real‐life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
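The abstract does not restate the agreement index itself, so as background the sketch below computes the standard sample concordance correlation coefficient (Lin's CCC), which both the Bayesian method and its frequentist competitors target. The toy two‐rater, log‐normally skewed data are purely illustrative, and the Bayesian machinery of the paper is not reproduced here.

```python
import numpy as np

def ccc(x, y):
    """Sample concordance correlation coefficient (Lin's CCC):
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Toy skewed-data example: two raters measuring the same log-normal quantity.
rng = np.random.default_rng(1)
truth = rng.lognormal(mean=0.0, sigma=1.0, size=200)
rater1 = truth + rng.normal(0, 0.2, 200)
rater2 = truth + rng.normal(0, 0.2, 200)
print(ccc(rater1, rater2))
```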
995.
Relative risks are often considered preferable to odds ratios for quantifying the association between a predictor and a binary outcome. Relative risk regression is an alternative to logistic regression where the parameters are relative risks rather than odds ratios. It uses a log link binomial generalised linear model, or log‐binomial model, which requires parameter constraints to prevent probabilities from exceeding 1. This leads to numerical problems with standard approaches for finding the maximum likelihood estimate (MLE), such as Fisher scoring, and has motivated various non‐MLE approaches. In this paper, we discuss the roles of the MLE and its main competitors for relative risk regression. It is argued that reliable alternatives to Fisher scoring mean that numerical issues are no longer a motivation for non‐MLE methods. Nonetheless, non‐MLE methods may be worthwhile for other reasons and we evaluate this possibility for alternatives within a class of quasi‐likelihood methods. The MLE obtained using a reliable computational method is recommended, but this approach requires bootstrapping when estimates are on the parameter space boundary. If convenience is paramount, then quasi‐likelihood estimation can be a good alternative, although parameter constraints may be violated. Sensitivity to model misspecification and outliers is also discussed along with recommendations and priorities for future research.
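As a minimal illustration of the log‐binomial model discussed above, the sketch below fits a binomial GLM with a log link in statsmodels, so that exponentiated coefficients are relative risks. The simulated exposure data and parameter values are assumptions, and neither the paper's reliable alternatives to Fisher scoring nor its quasi‐likelihood competitors are implemented here.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data: binary outcome y, a single binary exposure, plus intercept.
rng = np.random.default_rng(2)
exposure = rng.binomial(1, 0.5, 500)
p = 0.15 * np.exp(0.6 * exposure)      # true baseline risk 0.15, true RR = exp(0.6)
y = rng.binomial(1, p)
X = sm.add_constant(exposure)

# Log-binomial model: binomial family with a log link, so exp(coef) is a relative risk.
# Note: default IRLS/Fisher scoring can fail when fitted probabilities approach 1,
# which is exactly the numerical difficulty the paper discusses.
model = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log()))
fit = model.fit()
print(np.exp(fit.params))   # estimated baseline risk and relative risk
```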
996.
This study proposes a modified strike‐spread method for hedging barrier options in generalized autoregressive conditional heteroskedasticity (GARCH) models with transaction costs. A simulation study was conducted to investigate the hedging performance of the proposed method in comparison with several well‐known static methods for hedging barrier options. An accurate, easy‐to‐implement and fast scheme for generating the first passage time under the GARCH framework is also proposed; it enhances the accuracy and efficiency of the simulation. Simulation results and an empirical study using real data indicate that the proposed approach has promising performance for hedging barrier options in GARCH models when transaction costs are taken into consideration.
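The paper's accurate and fast first‐passage scheme is not described in the abstract, so the sketch below only shows the naive alternative it improves upon: brute‐force simulation of GARCH(1,1) log‐return paths, recording the first time the price crosses an up‐barrier. The barrier level, GARCH coefficients, horizon and path count are all illustrative assumptions.

```python
import numpy as np

def first_passage_garch(s0=100.0, barrier=120.0, n_steps=252, n_paths=2000,
                        mu=0.0, omega=1e-6, alpha=0.08, beta=0.9, seed=3):
    """Brute-force simulation of first passage times of a GARCH(1,1) asset
    price through an up-barrier (np.inf where the barrier is never hit)."""
    rng = np.random.default_rng(seed)
    passage = np.full(n_paths, np.inf)
    for i in range(n_paths):
        s = s0
        h = omega / (1.0 - alpha - beta)        # start at the unconditional variance
        eps_prev = 0.0
        for t in range(1, n_steps + 1):
            h = omega + alpha * eps_prev ** 2 + beta * h   # GARCH(1,1) variance update
            eps_prev = np.sqrt(h) * rng.standard_normal()
            s *= np.exp(mu + eps_prev)                     # log-return with GARCH volatility
            if s >= barrier:
                passage[i] = t
                break
    return passage

tau = first_passage_garch()
print("P(barrier hit within horizon):", np.isfinite(tau).mean())
```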
997.
One of the main aims of early phase clinical trials is to identify a safe dose with an indication of therapeutic benefit to administer to subjects in further studies. Ideally, therefore, dose‐limiting events (DLEs) and responses indicative of efficacy should be considered in the dose‐escalation procedure. Several methods have been suggested for incorporating both DLEs and efficacy responses in early phase dose‐escalation trials. In this paper, we describe and evaluate a Bayesian adaptive approach based on one binary response (occurrence of a DLE) and one continuous response (a measure of potential efficacy) per subject. A logistic regression and a linear log‐log relationship are used respectively to model the binary DLEs and the continuous efficacy responses. A gain function concerning both the DLEs and efficacy responses is used to determine the dose to administer to the next cohort of subjects. Stopping rules are proposed to enable efficient decision making. Simulation results show that our approach performs better than an approach that takes account of DLE responses alone. To assess the robustness of the approach, scenarios where the efficacy responses of subjects are generated from an Emax model, but modelled by the linear log‐log model, are also considered. This evaluation shows that the simpler log‐log model leads to robust recommendations even in this scenario, indicating that it is a useful approximation given the difficulty of estimating the Emax model. Additionally, we find performance comparable to that of alternative approaches that use efficacy and safety for dose‐finding. Copyright © 2015 John Wiley & Sons, Ltd.
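The gain function is not given in the abstract, so the sketch below uses a simple placeholder: a logistic working model for the DLE probability, a linear log‐log working model for expected efficacy, and a gain that rewards efficacy while penalising DLE risk above a target, with the next dose chosen to maximise it. All numerical values and the form of the penalty are assumptions, not the paper's gain function or its Bayesian updating.

```python
import numpy as np

doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

# Working model estimates (placeholders standing in for posterior summaries):
# logistic model for P(DLE | dose) and a linear log-log model for efficacy.
a_tox, b_tox = -4.0, 1.2          # logit P(DLE) = a_tox + b_tox * log(dose)
a_eff, b_eff = 0.5, 0.4           # log E[efficacy] = a_eff + b_eff * log(dose)

p_dle = 1.0 / (1.0 + np.exp(-(a_tox + b_tox * np.log(doses))))
efficacy = np.exp(a_eff + b_eff * np.log(doses))

# Illustrative gain: reward expected efficacy, penalise excess toxicity over a
# 25% DLE target (a placeholder, not the gain function used in the paper).
target_dle, penalty = 0.25, 5.0
gain = efficacy - penalty * np.maximum(p_dle - target_dle, 0.0)

next_dose = doses[np.argmax(gain)]
print(dict(zip(doses, np.round(gain, 3))), "-> next dose:", next_dose)
```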
998.
This study describes how transnational second‐generation Mexican bilinguals use a stigmatized variety of Mexican Spanish to communicate on Facebook and construct an identity. The stereotyped features of this variety index a ranchero identity. Historically, ranchero is an ambivalent identity for Mexican society in general. On the one hand, ranchero culture is a positive reminiscence of Mexico's agrarian past, while on the other, rancheros, along with indigenous Mexicans, are at the bottom of the hierarchy in Mexican society. A discourse‐centered, ethnographic analysis of digitally mediated conversations demonstrates how language use allows participants to reminisce about their collective past, maintain Mexican identities tied to their ancestors, fit their identities to contemporary U.S. Mexican culture, and distance themselves from the stigma associated with the ranchero background.
999.
In this work, we develop a method of adaptive non‐parametric estimation based on 'warped' kernels. The aim is to estimate a real‐valued function s from a sample of random couples (X,Y). We deal with transformed data (Φ(X),Y), with Φ a one‐to‐one function, to build a collection of kernel estimators. The data‐driven bandwidth selection is performed with a method inspired by Goldenshluger and Lepski (Ann. Statist., 39, 2011, 1608). The method makes it possible to handle various problems such as additive and multiplicative regression, conditional density estimation, hazard rate estimation based on randomly right‐censored data, and cumulative distribution function estimation from current‐status data. The interest of the approach is threefold. First, the squared‐bias/variance trade‐off is automatically realized. Next, non‐asymptotic risk bounds are derived. Lastly, the estimator is easily computed thanks to its simple expression; a short simulation study is presented.
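As a rough illustration of the warping idea, the sketch below takes Φ to be the empirical distribution function of X (one natural one‐to‐one choice), so the transformed design is approximately uniform and the kernel estimator needs no random denominator. The data‐driven Goldenshluger‐Lepski bandwidth selection of the paper is replaced here by a fixed bandwidth h, and the regression function and data are illustrative.

```python
import numpy as np

def warped_kernel_regression(x_grid, X, Y, h=0.05):
    """Warped-kernel regression sketch: transform X by its empirical CDF
    (one simple choice of the one-to-one warping Phi), so the transformed
    design is roughly uniform and no random denominator is needed."""
    n = len(X)
    x_sorted = np.sort(X)
    u_data = np.searchsorted(x_sorted, X, side="right") / n       # Phi_hat(X_i)
    u_grid = np.searchsorted(x_sorted, x_grid, side="right") / n  # Phi_hat(x)
    # Gaussian kernel estimator on the warped scale: (1/n) * sum_i Y_i * K_h(u - U_i).
    K = np.exp(-0.5 * ((u_grid[:, None] - u_data[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return K @ Y / n

# Toy example: regression function s(x) = sin(x) observed with noise.
rng = np.random.default_rng(4)
X = rng.exponential(1.0, 400)
Y = np.sin(X) + rng.normal(0, 0.2, 400)
grid = np.linspace(0.1, 3.0, 5)
print(warped_kernel_regression(grid, X, Y))
```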