921.
Managing professional and personal identities often burdens upwardly mobile racialized individuals. I examine in this article how Asian American and Latino law students negotiate (pan)ethnic identities while learning to become lawyers. I contend that managing dual identities creates a sense of (pan)ethnic duty among Asian American and Latino law students. I focus on those planning to work in law firms, at least initially. While there are many career options for law students, most, irrespective of race, pursue initial careers at law firms. What leads them there? How do racialization and expectations play a role in this career aspiration? And how do students negotiate the pressure to give back, or manage the internally/externally imposed duty they feel to serve their respective communities? I find that Asian American and Latino law students draw on a repertoire of strategies (marginal panethnicity, tempered altruism, and instrumental ethnicity) that encompass different accounts, identities, and roles, enabling creativity and elasticity in professional and personal identities. The findings suggest that panethnicity remains salient for upwardly mobile individuals of color, even those who do not ostensibly appear to be concerned with panethnic communities and causes.
922.
In Canada, the notion of a heritage language ideology is often conceived of as a natural by‐product of official multiculturalism. By contrast, Germany has long struggled with its status as a multilingual and multicultural country. By comparing two corpora of interviews with immigrants to each of these two countries (Canadians of German heritage and Germans of Vietnamese heritage), this paper aims to explore to what extent these different language ideologies are reconstructed in the interviews. It will be argued that the interviewees construct different sociolinguistic spaces and take up different positions within them in terms of centre and periphery. Our analysis shows that the German‐Canadian interviewees construct public sociolinguistic spaces in which they position themselves as German even when they do not have an active knowledge of their heritage language. By contrast, despite the monolingual habitus in Germany, the German‐Vietnamese respondents endorse a heritage language ideology; the space they claim for speaking Vietnamese, however, is restricted to private or family conversations.
923.
This paper considers the maximin approach for designing clinical studies. A maximin efficient design maximizes the smallest efficiency when compared with a standard design, as the parameters vary in a specified subset of the parameter space. To specify this subset of parameters in a real situation, a four‐step procedure using elicitation based on expert opinions is proposed. Further, we describe why and how we extend the initially chosen subset of parameters to a much larger set in our procedure. By this procedure, the maximin approach becomes feasible for dose‐finding studies. Maximin efficient designs have been shown to be numerically difficult to construct. However, a new algorithm, the H‐algorithm, considerably simplifies the construction of these designs. We exemplify the maximin efficient approach by considering a sigmoid Emax model describing a dose–response relationship and compare inferential precision with that obtained when using a uniform design. The design obtained is shown to be at least 15% more efficient than the uniform design. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.
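For orientation, a minimal numerical sketch of the maximin idea for a sigmoid Emax dose–response model, comparing a candidate design with a uniform reference design over a small parameter grid. This is not the paper's H‐algorithm: the doses, weights, parameter grid, and the use of D-efficiency as the efficiency measure are all assumptions of this illustration.

```python
import numpy as np

def emax_mean(dose, theta):
    """Sigmoid Emax model: E0 + Emax * d^h / (ED50^h + d^h)."""
    e0, emax, ed50, h = theta
    return e0 + emax * dose**h / (ed50**h + dose**h)

def grad(dose, theta, eps=1e-6):
    """Numerical gradient of the mean function w.r.t. (E0, Emax, ED50, h)."""
    g = np.zeros(4)
    for j in range(4):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[j] += eps
        tm[j] -= eps
        g[j] = (emax_mean(dose, tp) - emax_mean(dose, tm)) / (2 * eps)
    return g

def info_matrix(doses, weights, theta):
    """Normalized Fisher information under Gaussian errors with unit variance."""
    M = np.zeros((4, 4))
    for d, w in zip(doses, weights):
        g = grad(d, theta)
        M += w * np.outer(g, g)
    return M

def d_efficiency(design, reference, theta):
    """D-efficiency of `design` relative to `reference` at parameter value theta."""
    Md = info_matrix(*design, theta)
    Mr = info_matrix(*reference, theta)
    return (np.linalg.det(Md) / np.linalg.det(Mr)) ** (1 / 4)

# Hypothetical elicited parameter subset (E0, Emax, ED50, h).
theta_grid = [(0.0, 1.0, ed50, h) for ed50 in (5.0, 10.0, 20.0) for h in (1.0, 2.0)]

uniform = (np.linspace(0.0, 100.0, 6), np.full(6, 1 / 6))         # reference design
candidate = (np.array([0.0, 7.5, 25.0, 100.0]), np.full(4, 0.25))  # candidate design

# Maximin criterion: the smallest efficiency over the whole parameter subset.
worst_case = min(d_efficiency(candidate, uniform, th) for th in theta_grid)
print(f"worst-case D-efficiency of candidate vs uniform: {worst_case:.2f}")
```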
924.
This paper considers a statistical model for the detection mechanism of qualitative microbiological test methods, with a parameter for the detection proportion (the probability of detecting a single organism) and a parameter for the false positive rate. It is demonstrated that the detection proportion and the bacterial density cannot be estimated separately, not even in a multiple dilution experiment; only their product can be estimated, changing the interpretation of the most probable number estimator. The asymptotic power of the likelihood ratio statistic for comparing an alternative method with the compendial method is optimal for a single dilution experiment. The bacterial density should either be close to two CFUs per test unit or equal to zero, depending on differences in the model parameters between the two test methods. The proposed strategy for method validation is to use these two dilutions and test for differences in the two model parameters, addressing the validation parameters specificity and accuracy. Robustness of these two parameters might still be required, but all other validation parameters can be omitted. A confidence interval‐based approach for the ratio of the detection proportions for the two methods is recommended, since it is most informative and close to the power of the likelihood ratio test. Copyright © 2014 John Wiley & Sons, Ltd.
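One common way to formalize such a detection mechanism (a sketch consistent with the description above, not necessarily the paper's exact parameterization): the number of organisms in a test unit is Poisson with density λ, each organism is detected independently with probability p (the detection proportion), and a false positive occurs with probability π₀. Then

$$\Pr(\text{test positive}) \;=\; 1 - (1 - \pi_0)\, e^{-p\lambda},$$

so p and λ enter the likelihood only through the product pλ: any (p, λ) pair with the same product gives the same distribution of test outcomes, which is why only the product is estimable and why the most probable number estimator should be read as estimating pλ rather than λ.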
925.
We consider hypothesis testing problems for low‐dimensional coefficients in a high‐dimensional additive hazards model. A variance reduced partial profiling estimator (VRPPE) is proposed and its asymptotic normality is established, which enables us to test the significance of each single coefficient when the data dimension is much larger than the sample size. Based on the p‐values obtained from the proposed test statistics, we then apply a multiple testing procedure to identify significant coefficients and show that the false discovery rate can be controlled at the desired level. The proposed method is also extended to testing a low‐dimensional sub‐vector of coefficients. The finite sample performance of the proposed testing procedure is evaluated by simulation studies. We also apply it to two real data sets, one focusing on testing low‐dimensional coefficients and the other on identifying significant coefficients through the proposed multiple testing procedure.
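For readers who want to see how coefficient-wise p-values are typically screened with false discovery rate control, here is a generic Benjamini–Hochberg step-up sketch; the paper's multiple testing procedure is not necessarily this one, and the p-values below are hypothetical.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                          # indices sorting p ascending
    thresholds = alpha * np.arange(1, m + 1) / m   # BH step-up thresholds
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])           # largest i with p_(i) <= alpha*i/m
        reject[order[:k + 1]] = True
    return reject

# Hypothetical p-values for a handful of low-dimensional coefficients of interest.
pvals = [0.0002, 0.013, 0.041, 0.22, 0.64]
print(benjamini_hochberg(pvals))                   # -> [ True  True False False False]
```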
926.
This paper focuses on efficient estimation, optimal rates of convergence, and effective algorithms for the partly linear additive hazards regression model with current status data. We use polynomial splines to estimate both the cumulative baseline hazard function, subject to a monotonicity constraint, and the nonparametric regression functions, with no such constraint. We propose simultaneous sieve maximum likelihood estimation of the regression parameters and nuisance parameters and show that the resulting estimator of the regression parameter vector is asymptotically normal and achieves the semiparametric information bound. In addition, we show that the rates of convergence for the estimators of the nonparametric functions are optimal. We implement the proposed estimation through a backfitting algorithm on generalized linear models. We conduct simulation studies to examine the finite‐sample performance of the proposed estimation method and present an analysis of renal function recovery data for illustration.
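Since the estimation is implemented through backfitting, a toy least-squares backfitting loop may help fix ideas about the iterative structure; this is not the paper's sieve maximum likelihood estimator for current status data, and the smoother, simulated data, and iteration count are purely illustrative.

```python
import numpy as np

def bin_smoother(x, r, n_bins=10):
    """Crude smoother: average the partial residuals r within quantile bins of x."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    means = np.array([r[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    return means[idx]

def backfit(X, y, n_iter=20):
    """Estimate an additive mean function alpha + sum_j f_j(x_j) by backfitting."""
    n, p = X.shape
    f = np.zeros((n, p))
    alpha = y.mean()
    for _ in range(n_iter):
        for j in range(p):
            others = [k for k in range(p) if k != j]
            partial = y - alpha - f[:, others].sum(axis=1)   # partial residuals
            f[:, j] = bin_smoother(X[:, j], partial)
            f[:, j] -= f[:, j].mean()                        # centre for identifiability
    return alpha, f

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0.0, 0.2, 500)
alpha, f = backfit(X, y)
print(round(alpha, 3), np.round(f[:3], 2))
```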
927.
In this paper, we consider a mixed compound Poisson process, that is, a random sum of independent and identically distributed (i.i.d.) random variables in which the number of terms is a Poisson process with random intensity. We study nonparametric estimators of the jump density by specific deconvolution methods. First, assuming that the random intensity has an exponential distribution with unknown expectation, we propose two types of estimators based on the observation of an i.i.d. sample. Risk bounds and adaptive procedures are provided. Then, with no assumption on the distribution of the random intensity, we propose two nonparametric estimators of the jump density based on the joint observation of the number of jumps and the random sum of jumps. Risk bounds are provided, leading to unusual rates for one of the two estimators. The methods are implemented and compared via simulations.
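A minimal simulation of the observation scheme just described, under the first setting (exponentially distributed random intensity); the jump distribution, the time window, and all parameter values are placeholders rather than the authors' choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixed_compound_poisson(n, mean_intensity=0.5, t=1.0):
    """Return (number of jumps, sum of jumps) for n i.i.d. observations on [0, t]."""
    lam = rng.exponential(scale=mean_intensity, size=n)   # random intensity per observation
    counts = rng.poisson(lam * t)                         # Poisson number of jumps
    # i.i.d. jumps drawn from an arbitrary illustrative jump density, Gamma(2, 0.5)
    sums = np.array([rng.gamma(2.0, 0.5, size=k).sum() for k in counts])
    return counts, sums

# Joint observation of the number of jumps and the random sum of jumps,
# as used by the second pair of estimators described above.
counts, sums = sample_mixed_compound_poisson(n=1000)
print(counts[:5], np.round(sums[:5], 3))
```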
928.
929.
Observational drug safety studies may be susceptible to confounding or protopathic bias. This bias may cause a spurious relationship between drug exposure and an adverse side effect when none exists and may lead to unwarranted safety alerts. The spurious relationship may manifest itself through substantially different risk levels between exposure groups at the start of follow‐up, when exposure is deemed too short for the drug to have any plausible biological effect. The restrictive proportional hazards assumption, with its arbitrary choice of baseline hazard function, renders the commonly used Cox proportional hazards model of limited use for revealing such potential bias. We demonstrate a fully parametric approach using accelerated failure time models with an illustrative safety study of glucose‐lowering therapies and show that its results are comparable with those of other methods that allow time‐varying exposure effects. Our approach includes a wide variety of models based on the flexible generalized gamma distribution and allows direct comparisons of estimated hazard functions following different exposure‐specific distributions of survival times. This approach lends itself to two alternative metrics, namely relative times and differences in times to event, allowing physicians more ways to communicate a patient's prognosis without invoking the concept of risks, which some may find hard to grasp. In our illustrative case study, substantial differences in cancer risk at drug initiation, followed by a gradual reduction towards the null, were found. This evidence is compatible with the presence of protopathic bias, in which undiagnosed symptoms of cancer lead to switches in diabetes medication. Copyright © 2015 John Wiley & Sons, Ltd.
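As a reminder of why the accelerated failure time parameterization yields "relative time" metrics (a generic property of AFT models, not specific to the generalized gamma family used here): for a binary exposure indicator x,

$$\log T \;=\; \beta_0 + \beta_1 x + \sigma\,\varepsilon,$$

so $e^{\beta_1}$ is the ratio of event times (for example, median times to event) between exposed and unexposed patients, and the corresponding quantiles can also be subtracted to report a difference in times to event on the original time scale, without any reference to hazards or risks.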
930.
The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Pre‐clinically, large numbers of screening studies to identify synergistic pairs of compounds can often be run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a pre‐clinical in vitro assay in the presence of uncertainty about the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimal set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This is found to give useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach in which the design points are constrained to be distributed log‐normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still outperform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed where a 50% response is expected. More uncertainty in the monotherapy parameters leads to an optimal design with design points that are more spread out. Copyright © 2015 John Wiley & Sons, Ltd.
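A small sketch of how such a ray design can be laid out for two drugs whose monotherapies follow a Hill model with common asymptotes, with points spread log-normally around doses expected to give roughly a 50% response. The EC50 values, mixing fractions, spread, and the use of Loewe additivity to place the ray centres are assumptions of this illustration, not the authors' algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

def hill(dose, e0=0.0, emax=1.0, ec50=1.0, h=1.0):
    """Hill equation with common lower/upper asymptotes e0 and e0 + emax."""
    return e0 + emax * dose**h / (ec50**h + dose**h)

def log_normal_doses(center, n, sigma=0.5):
    """n doses spread log-normally around a central dose."""
    return center * np.exp(sigma * rng.standard_normal(n))

ec50_a, ec50_b = 2.0, 8.0      # assumed monotherapy EC50s (50% response doses)
n_per_arm = 5

design = {
    "mono A": log_normal_doses(ec50_a, n_per_arm),
    "mono B": log_normal_doses(ec50_b, n_per_arm),
}
# Combination rays keep a fixed ratio of the two drugs; each ray centre is placed
# on the Loewe-additive 50%-response line dA/EC50_A + dB/EC50_B = 1.
for f in (0.25, 0.5, 0.75):
    centre_a, centre_b = f * ec50_a, (1 - f) * ec50_b
    scale = np.exp(0.5 * rng.standard_normal(n_per_arm))   # one multiplier per point
    design[f"ray f={f}"] = np.column_stack([centre_a * scale, centre_b * scale])

for arm, doses in design.items():
    print(arm, np.round(doses, 2))

# Monotherapy doses centred at the EC50 should give responses near 0.5.
print(np.round(hill(design["mono A"], ec50=ec50_a), 2))
```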