901.
Randomized controlled trials (RCTs) are the gold standard for evaluating the efficacy and safety of investigational interventions. If every patient in an RCT adhered to the randomized treatment, one could simply analyze the complete data to infer the treatment effect. However, intercurrent events (ICEs), such as the use of concomitant medication for unsatisfactory efficacy or treatment discontinuation due to adverse events or lack of efficacy, may lead to interventions that deviate from the original treatment assignment. Therefore, defining the appropriate estimand (the parameter to be estimated) based on the primary objective of the study is critical prior to choosing the statistical analysis method and analyzing the data. The International Council for Harmonisation (ICH) E9 (R1) addendum, adopted on November 20, 2019, provides five strategies for defining the estimand: treatment policy, hypothetical, composite variable, while on treatment, and principal stratum. In this article, we propose an estimand that uses a mix of strategies in handling ICEs. This estimand is an average of the “null” treatment difference for patients with ICEs potentially related to safety and the treatment difference for the other patients if they were to complete the assigned treatments. Two examples from clinical trials evaluating antidiabetes treatments illustrate the estimation of the proposed estimand and compare it with estimates for estimands that use the hypothetical and treatment policy strategies in handling ICEs.
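As a rough illustration of this composite estimand, the sketch below (Python, with entirely made-up patient-level data, ICE rates, and effect sizes) averages a null difference for the stratum with safety-related ICEs and the observed difference among the remaining patients; in the article the latter component is handled with the hypothetical strategy rather than a simple completer mean.

```python
import numpy as np

# Hypothetical illustration of the composite estimand: patients with a
# safety-related intercurrent event (ICE) contribute a "null" (zero)
# treatment difference; the remaining patients contribute the treatment
# difference they would show on the assigned treatment (proxied here by
# observed means among patients without such an ICE).
rng = np.random.default_rng(0)
n = 200                                   # patients per arm (made up)

y_trt = rng.normal(-1.0, 1.5, n)          # e.g. change in HbA1c, treatment arm
y_ctl = rng.normal(-0.3, 1.5, n)          # e.g. change in HbA1c, control arm
ice_trt = rng.random(n) < 0.15            # safety-related ICE indicators
ice_ctl = rng.random(n) < 0.10

p_ice = np.concatenate([ice_trt, ice_ctl]).mean()   # pooled ICE proportion
diff_no_ice = y_trt[~ice_trt].mean() - y_ctl[~ice_ctl].mean()

# weighted average of 0 (safety-ICE stratum) and the difference for the rest
estimate = p_ice * 0.0 + (1 - p_ice) * diff_no_ice
print(f"composite treatment-difference estimate: {estimate:.3f}")
```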
902.
Objectives. It is widely believed that prison construction offers significant economic benefits to local areas. We review the popular and scholarly literature and provide a quantitative analysis of claims. Methods. We analyze data on all existing and new prisons in the United States since 1960 to assess the impact of these prisons on the pace of public, private, and total employment growth in U.S. counties from 1976 to 2004. Results. Our results suggest that enhanced human capital is associated with employment gains and cast doubt on the assertion that prisons provide economic benefits to local areas. We provide evidence that prison construction impedes economic growth in rural counties, especially in counties that lag behind in educational attainment. Conclusions. Based on empirical results, this research casts further doubt on claims that prisons offer a viable economic development option for struggling rural communities. Possible explanations for the failure of prisons to help local areas are explored, including existing corrections officers moving to fill openings, adverse local impacts of prison labor, and paucity of local multipliers when a prison opens.
903.
The problem of partitioning a partially ordered set into a minimum number of chains is well known. In this paper we study a generalization of this problem in which we not only assume that the chains have bounded size, but also that a weight w_i is given for each element i of the partial order such that w_i ≤ w_j whenever i precedes j. The problem is then to partition the partial order into a minimum-weight set of chains of bounded size, where the weight of a chain equals the weight of its heaviest element. We prove that this problem is NP-hard, and we propose and analyze lower bounds for it. Based on these lower bounds, we exhibit a 2-approximation algorithm and show that this bound is tight. We report computational results for a number of real-world and randomly generated problem instances.
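The abstract does not reproduce the 2-approximation algorithm; purely to make the problem concrete, the hypothetical greedy heuristic below (not the authors' method) builds chains of bounded size and scores a partition by the sum, over chains, of the heaviest element's weight.

```python
# Toy greedy heuristic for the bounded-size minimum-weight chain partition
# problem described above (illustration only, not the paper's algorithm).

def greedy_chain_partition(elements, leq, weight, max_size):
    """elements: list of hashable items.
    leq(a, b): True if a precedes (or equals) b in the partial order.
    weight: dict mapping element -> weight (assumed monotone along the order).
    max_size: upper bound on chain length."""
    remaining = set(elements)
    chains = []
    while remaining:
        # start a new chain at some minimal remaining element
        start = next(e for e in remaining
                     if not any(leq(o, e) and o != e for o in remaining))
        chain = [start]
        remaining.remove(start)
        # greedily extend upward while the size bound allows
        while len(chain) < max_size:
            succ = [e for e in remaining if leq(chain[-1], e)]
            if not succ:
                break
            nxt = min(succ, key=weight.__getitem__)   # cheapest extension
            chain.append(nxt)
            remaining.remove(nxt)
        chains.append(chain)
    total = sum(max(weight[e] for e in c) for c in chains)
    return chains, total

# toy example: divisibility order on {1,...,8}, weight = value, chains of size <= 3
elems = list(range(1, 9))
print(greedy_chain_partition(elems, lambda a, b: b % a == 0,
                             {e: e for e in elems}, 3))
```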
904.
Microarray experiments are widely used in medical and biological research. The main features of these studies are the large number of variables (genes) involved and the low number of replicates (arrays). It seems clear that the most appropriate models for detecting differences in gene expression are those that exploit the most useful information to compensate for the lack of replicates. On the other hand, controlling the error in the decision process plays an important role because of the high number of simultaneous statistical tests (one for each gene), so that concepts such as the false discovery rate (FDR) take on special importance. One of the alternatives for analyzing the data from these experiments is based on statistics derived from modifications of the classical methods used in this type of problem (moderated-t, B-statistic). Nonparametric techniques have also been proposed [B. Efron, R. Tibshirani, J.D. Storey, and V. Tusher, Empirical Bayes analysis of a microarray experiment, J. Amer. Stat. Assoc. 96 (2001), pp. 1151–1160; W. Pan, J. Lin, and C.T. Le, A mixture model approach to detecting differentially expressed genes with microarray data, Funct. Integr. Genomics 3 (2003), pp. 117–124], allowing analysis without any prior assumption about the distribution of the data, which makes them especially suitable in such situations. This paper presents a new method to detect differentially expressed genes based on non-parametric density estimation by a class of functions that allows us to define a distance between individuals in the sample (characterized by the coordinates of the individual (gene) in the dual space tangent to the manifold of parameters) [A. Miñarro and J.M. Oller, Some remarks on the individuals-score distance and its applications to statistical inference, Qüestiió, 16 (1992), pp. 43–57]. From these distances, we design a test whose rejection region is determined by controlling the FDR.
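The rejection region in the proposed method is built from the distance-based statistics above; purely as a generic illustration of the FDR-control step, the sketch below applies the standard Benjamini–Hochberg procedure to hypothetical per-gene p-values.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m     # step-up thresholds i*q/m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()           # largest index meeting the bound
        reject[order[:k + 1]] = True
    return reject

# toy usage with made-up p-values: 20 "differentially expressed" genes, 980 nulls
rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(0, 0.001, 20), rng.uniform(0, 1, 980)])
print("genes declared significant:", benjamini_hochberg(pvals, q=0.05).sum())
```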
905.
Testing the order of integration of economic and financial time series has become a conventional procedure prior to any modelling exercise. In this paper, we investigate and compare the finite sample properties of the frequency-domain tests proposed by Robinson [Efficient tests of nonstationary hypotheses, J. Amer. Statist. Assoc. 89(428) (1994), pp. 1420–1437] and the time-domain procedure proposed by Hassler, Rodrigues, and Rubia [Testing for general fractional integration in the time domain, Econometric Theory 25 (2009), pp. 1793–1828] when applied to seasonal data. The results presented are of empirical relevance as they provide some guidance regarding the finite sample properties of these tests.
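Finite-sample comparisons of this kind rest on Monte Carlo draws from a seasonally fractionally integrated process; the sketch below shows only that data-generating step, assuming a Gaussian process (1 − L^s)^d x_t = ε_t simulated through a truncated MA expansion (the test statistics themselves are not reproduced here).

```python
import numpy as np

def seasonal_frac_int_series(n, d, s, burn=500, rng=None):
    """Simulate x_t with (1 - L^s)^d x_t = eps_t via a truncated MA expansion.
    The MA coefficients of (1 - z)^(-d) satisfy psi_0 = 1 and
    psi_j = psi_{j-1} * (j - 1 + d) / j, applied here at seasonal lags j*s."""
    rng = rng or np.random.default_rng()
    n_tot = n + burn
    eps = rng.standard_normal(n_tot)
    psi = np.empty(n_tot // s + 1)
    psi[0] = 1.0
    for j in range(1, psi.size):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    x = np.zeros(n_tot)
    for t in range(n_tot):
        js = np.arange(t // s + 1)               # available seasonal lags
        x[t] = psi[js] @ eps[t - js * s]
    return x[burn:]                              # drop burn-in

# e.g. quarterly data (s = 4) with seasonal long memory d = 0.3
series = seasonal_frac_int_series(200, d=0.3, s=4, rng=np.random.default_rng(2))
print(series[:5])
```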
906.
At the 22nd Annual North Carolina Serials Conference, focused on “Collaboration, Community, and Connection,” Linda Blake and Hilary Fredette of West Virginia University presented “‘Can we Lend?’: Communicating Interlibrary Loan Rights,” reviewing their experiences collaborating across an academic library to achieve the best possible interlibrary loan e-journal access within the bounds of sometimes inscrutable licenses.
907.
EBSCO Publishing     
Abstract

EBSCO Publishing is an innovative company that has its roots in paper publishing. It now produces hundreds of online resources in a state-of-the-art facility located along the banks of the Ipswich River in Massachusetts. Considerable work occurs behind the scenes in Ipswich (and around the world) to produce the online databases in EBSCOhost that appear to the user at the click of a mouse. Jennifer Carroll toured the headquarters in Ipswich to learn about the processes that make these valuable resources available.
908.
We compare simple ordinary least squares (OLS) estimation with maximum likelihood estimation of the Tobit I and Tobit II regression models in the selected-sample setting. We propose a new measure to quantify the performance of OLS.
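As a simple illustration of the comparison (not the article's proposed performance measure), the sketch below fits OLS and a Tobit I model by maximum likelihood to simulated left-censored data; all parameter values and sample sizes are made up.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
y_star = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=n)   # latent outcome
y = np.maximum(y_star, 0.0)                               # Tobit I: left-censored at 0

# OLS on the censored outcome (known to be biased toward zero)
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

def negloglik(theta):
    """Tobit I negative log-likelihood with censoring point 0."""
    b0, b1, log_s = theta
    s = np.exp(log_s)
    mu = b0 + b1 * x
    ll_obs = norm.logpdf((y - mu) / s) - np.log(s)   # uncensored observations
    ll_cen = norm.logcdf(-mu / s)                    # censored observations
    return -np.where(y > 0, ll_obs, ll_cen).sum()

res = minimize(negloglik, x0=[0.0, 1.0, 0.0], method="BFGS")
print("OLS  :", beta_ols)
print("Tobit:", res.x[0], res.x[1], np.exp(res.x[2]))
```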
909.
ABSTRACT

In this study, methods for the efficient construction of A-, MV-, D- and E-optimal or near-optimal block designs for two-colour cDNA microarray experiments, with the array as the block effect, are considered. Two algorithms, namely the array exchange and treatment exchange algorithms, together with the complete enumeration technique, are introduced. For large numbers of arrays or treatments, or both, the complete enumeration method is highly computationally intensive. The treatment exchange algorithm computes optimal or near-optimal designs faster than the array exchange algorithm. The two methods, however, produce optimal or near-optimal designs with the same efficiency under the four optimality criteria.
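Neither exchange algorithm is spelled out in the abstract; as a rough sketch of an exchange-type search in the same spirit (not the authors' implementation), the code below scores candidate two-colour designs, with each array a block of size two, by the D-criterion of the treatment information matrix C = diag(r) − N Nᵀ/2 and accepts improving single-treatment swaps.

```python
import numpy as np

def d_criterion(design, v):
    """D-criterion for 'design', a list of (t1, t2) treatment pairs (one per array)."""
    b = len(design)
    N = np.zeros((v, b))                          # treatment-by-array incidence
    for j, (a, c) in enumerate(design):
        N[a, j] += 1
        N[c, j] += 1
    r = N.sum(axis=1)                             # treatment replications
    C = np.diag(r) - N @ N.T / 2.0                # information matrix (block size 2)
    eig = np.sort(np.linalg.eigvalsh(C))[1:]      # drop the structural zero eigenvalue
    return float(np.prod(eig)) if np.all(eig > 1e-9) else 0.0   # 0 if disconnected

def treatment_exchange(v, b, iters=2000, seed=4):
    """Randomly propose single-treatment swaps and keep those that improve D."""
    rng = np.random.default_rng(seed)
    design = [tuple(rng.choice(v, size=2, replace=False)) for _ in range(b)]
    best = d_criterion(design, v)
    for _ in range(iters):
        j, pos, new_t = rng.integers(b), rng.integers(2), rng.integers(v)
        old_pair = design[j]
        new_pair = list(old_pair)
        new_pair[pos] = new_t
        if new_pair[0] == new_pair[1]:
            continue                              # a block cannot repeat a treatment
        design[j] = tuple(new_pair)
        score = d_criterion(design, v)
        if score > best:
            best = score                          # keep the improving exchange
        else:
            design[j] = old_pair                  # otherwise revert
    return design, best

design, score = treatment_exchange(v=5, b=10)
print(score, design)
```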
910.
The speed of convergence of the distribution of the normalized maximum of a sample of independent and identically distributed random variables to its asymptotic distribution is considered in this article. Assuming that the cumulative distribution function of the random variables is known, the error committed by replacing the actual distribution of the normalized maximum with its asymptotic distribution is studied. Instead of using the arithmetical scale of probabilities, we measure the difference between the actual and asymptotic distributions in terms of the double-log scale used to build the probability plotting paper for the latter. We demonstrate that the difference between the double-log values corresponding to two probabilities in the upper tail is almost exactly equal to the logarithm of the ratio of their exceedance probabilities, that the convergence may not be uniform in this double-log scale, and that the difference between the actual and asymptotic distributions, on the probability plotting paper, may be a logarithmic, power, or even exponential function in the upper tail when the latter distribution is of the Fisher–Tippett type I, but that the difference is at most logarithmic in the upper tail for type II and III distributions. This fact is exploited to obtain transformed variables that converge to the asymptotic distribution faster than the original variable on the probability plotting paper.
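For reference, the double-log scale mentioned above is, on type I (Gumbel) probability paper, the transform g(p) = −log(−log p); the standard expansion below (not taken from the article) shows why differences on this scale in the upper tail behave like logarithms of ratios of exceedance probabilities.

```latex
% Upper-tail behaviour of the double-log (Gumbel) plotting scale, for p close to 1:
\[
  g(p) = -\log\bigl(-\log p\bigr), \qquad
  -\log p = (1-p) + \tfrac{(1-p)^2}{2} + \cdots
  \;\Longrightarrow\;
  g(p) = -\log(1-p) + O(1-p),
\]
\[
  g(p_2) - g(p_1) \approx \log\frac{1-p_1}{1-p_2}
  \quad\text{for } p_1,\, p_2 \text{ near } 1 .
\]
```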