Articles by access type:
  Subscription full text: 5664
  Free: 97
Articles by discipline:
  Management: 856
  Ethnology: 31
  Talent studies: 2
  Demography: 541
  Collected works and book series: 37
  Theory and methodology: 679
  General: 48
  Sociology: 2857
  Statistics: 710
Articles by year:
  2023: 21
  2020: 88
  2019: 105
  2018: 91
  2017: 132
  2016: 144
  2015: 100
  2014: 139
  2013: 935
  2012: 169
  2011: 155
  2010: 131
  2009: 147
  2008: 164
  2007: 163
  2006: 149
  2005: 190
  2004: 212
  2003: 179
  2002: 191
  2001: 146
  2000: 114
  1999: 127
  1998: 111
  1997: 99
  1996: 87
  1995: 92
  1994: 87
  1993: 99
  1992: 73
  1991: 83
  1990: 59
  1989: 58
  1988: 67
  1987: 50
  1986: 50
  1985: 62
  1984: 57
  1983: 55
  1982: 59
  1981: 48
  1980: 47
  1979: 46
  1978: 44
  1977: 44
  1976: 44
  1975: 43
  1974: 32
  1973: 29
  1971: 22
A total of 5761 results were found (search time: 10 ms).
91.
We describe a first experiment on whether product complexity affects competition and consumers in retail markets. We are unable to detect a significant effect of product complexity on prices, except insofar as the demand elasticity for complex products is higher. However, there is qualified evidence that complex products have the potential to induce consumers to buy more than they would otherwise. In this sense, consumer exploitability in quantities cannot be ruled out. We also find evidence for shaping effects: consumers’ preferences are shaped by past experience with prices, and firms may in principle exploit this to sell more.
92.
This article provides an analysis of my personal experience and research on the need for sub-Saharan African economic transformation. It contains relevant references to econometric modeling which are consistent with this analysis. After presenting the causes of African failures, ranging from the alleged role of slavery and colonization, to trade composition and relative decline, to present day problems of governance, low level of foreign investment in Africa, and mass unemployment, the article concludes with an agenda for the transformation of sub-Saharan Africa.
93.
For a dose finding study in cancer, the most successful dose (MSD), among a group of available doses, is the dose at which the overall success rate is highest. This rate is the product of the rate of non-toxicity and the rate of tumor response. A successful dose finding trial in this context is one in which we identify the MSD efficiently. In practice we may also need algorithms for identifying the MSD that can incorporate certain restrictions, the most common being that the estimated toxicity rate alone is kept below some maximum rate. In this case the MSD may correspond to a different level than the unconstrained MSD and, in providing a final recommendation, it is important to underline that it is subject to the given constraint. We work with the approach described in O'Quigley et al. [Biometrics 2001; 57(4):1018-1029]. The focus of that work was dose finding in HIV, where information on both toxicity and efficacy was almost immediately available. Recent cancer studies are beginning to fall under this same heading: as before, toxicity can be quickly evaluated and, in addition, we can rely on biological markers or other measures of tumor response. Mindful of the particular context of cancer, our purpose here is to consider the methodology developed by O'Quigley et al. and its practical implementation. We also carry out a study on the doubly under-parameterized model, developed by O'Quigley et al. but not …
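The definition of the MSD in the abstract lends itself to a very small illustration. The sketch below is not the sequential design of O'Quigley et al.; it only shows how, given per-dose estimates of toxicity and response probabilities, one would pick the unconstrained and the toxicity-constrained MSD. All dose levels, probability values, and function names are hypothetical.

```python
# Minimal sketch (not the sequential design of O'Quigley et al.): picking the
# most successful dose (MSD) from per-dose estimates, with an optional
# toxicity constraint.  All names and numbers are illustrative.

def most_successful_dose(p_tox, p_resp, max_tox=None):
    """Return the index of the dose maximizing (1 - p_tox) * p_resp.

    p_tox   : estimated toxicity probability at each dose level
    p_resp  : estimated tumor-response probability at each dose level
    max_tox : if given, only doses with estimated toxicity <= max_tox
              are eligible (the constrained MSD).
    """
    candidates = range(len(p_tox))
    if max_tox is not None:
        candidates = [d for d in candidates if p_tox[d] <= max_tox]
        if not candidates:
            raise ValueError("no dose satisfies the toxicity constraint")
    # Overall success rate = P(no toxicity) * P(response), treating the two
    # outcomes as a product at each dose, as described in the abstract.
    return max(candidates, key=lambda d: (1.0 - p_tox[d]) * p_resp[d])

# Hypothetical estimates for five dose levels.
p_tox  = [0.05, 0.10, 0.25, 0.40, 0.55]
p_resp = [0.10, 0.25, 0.45, 0.65, 0.70]

print(most_successful_dose(p_tox, p_resp))               # unconstrained MSD: level 3
print(most_successful_dose(p_tox, p_resp, max_tox=0.30)) # constrained MSD: level 2
```

With these illustrative numbers the constraint changes the recommendation, which is exactly the situation the abstract says must be flagged in the final report.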
94.
Pan, Wei; Connett, John E. Lifetime Data Analysis, 2001, 7(2): 111–123.
We extend Wei and Tanner's (1991) multiple imputation approach in semi-parametric linear regression for univariate censored data to clustered censored data. The main idea is to iterate the following two steps: 1) use data augmentation to impute the censored failure times; 2) fit a linear model to the imputed complete data that takes the clustering among failure times into account. In particular, we propose using generalized estimating equations (GEE) or a linear mixed-effects model to implement the second step. In simulation studies our proposal compares favorably to the independence approach (Lee et al., 1993), which ignores the within-cluster correlation in estimating the regression coefficient. Our proposal is easy to implement using existing software.
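A schematic sketch of the iterate-two-steps idea, under simplifying assumptions of my own (a single covariate, log failure times linear in it with normal errors, right censoring, and a GEE fit with an exchangeable working correlation via statsmodels). This is not the authors' implementation, and the simulated data and variable names are illustrative only.

```python
# Sketch of "impute censored times, then fit with GEE", iterated.
import numpy as np
from scipy.stats import truncnorm
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated clustered, right-censored data (all values illustrative).
n_clusters, cluster_size = 50, 4
groups = np.repeat(np.arange(n_clusters), cluster_size)
x = rng.normal(size=groups.size)
log_t = 1.0 + 0.5 * x + rng.normal(scale=0.8, size=groups.size)
log_c = rng.normal(loc=1.5, scale=1.0, size=groups.size)   # censoring times
y = np.minimum(log_t, log_c)
delta = log_t <= log_c                                      # True = observed

X = sm.add_constant(x)
beta, sigma = np.array([0.0, 0.0]), 1.0

for _ in range(20):                       # iterate the two steps
    # Step 1: impute censored log-times from the current fit, drawing from a
    # normal truncated below at the censoring time.
    mu = X @ beta
    a = (y - mu) / sigma                  # standardized lower bounds
    imputed = y.copy()
    idx = ~delta
    imputed[idx] = truncnorm.rvs(a[idx], np.inf, loc=mu[idx],
                                 scale=sigma, random_state=rng)

    # Step 2: fit a linear model to the completed data with GEE and an
    # exchangeable working correlation to respect the clustering.
    fit = sm.GEE(imputed, X, groups,
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
    beta = np.asarray(fit.params)
    sigma = np.sqrt(fit.scale)

print(beta)   # rough estimates of the intercept and slope
```

A full multiple-imputation analysis would draw several imputed data sets and combine the fits; the single chain above is only meant to show where the data augmentation and the GEE step sit in the loop.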
95.
Kolassa and Tanner (J. Am. Stat. Assoc. (1994) 89, 697–702) present the Gibbs–Skovgaard algorithm for approximate conditional inference. Kolassa (Ann. Statist. (1999) 27, 129–142) gives conditions under which their Markov chain is known to converge. This paper calculates explicit bounds on convergence rates in terms of quantities calculable directly from the chain's transition operators. These results are useful in cases like those considered by Kolassa (1999).
96.
Testing for homogeneity in finite mixture models has been investigated by many researchers. The asymptotic null distribution of the likelihood ratio test (LRT) is very complex and difficult to use in practice. We propose a modified LRT for homogeneity in finite mixture models with a general parametric kernel distribution family. The modified LRT has a χ²-type null limiting distribution and is asymptotically most powerful under local alternatives. Simulations show that it performs better than competing tests. They also reveal that the limiting distribution, with some adjustment, can satisfactorily approximate the quantiles of the test statistic, even for moderate sample sizes.
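As a toy illustration of a penalized ("modified") LRT for homogeneity, the sketch below fits a two-component normal mixture with known unit variance by a penalized EM and compares it with the single-normal fit. The penalty C·log(4p(1−p)) and the constant C are my own illustrative choices, so this shows the general idea rather than the paper's exact procedure.

```python
# Toy penalized LRT for homogeneity in a two-component normal mixture.
import numpy as np
from scipy.stats import norm

def modified_lrt(x, C=1.0, n_iter=200):
    n = len(x)
    # Homogeneous fit: a single normal with unit variance.
    loglik0 = norm.logpdf(x, loc=x.mean()).sum()

    # Penalized EM for the two-component alternative.
    p, mu1, mu2 = 0.5, x.min(), x.max()
    for _ in range(n_iter):
        f1, f2 = norm.pdf(x, mu1), norm.pdf(x, mu2)
        w = p * f1 / (p * f1 + (1 - p) * f2)           # E-step
        mu1 = np.sum(w * x) / np.sum(w)                # M-step
        mu2 = np.sum((1 - w) * x) / np.sum(1 - w)
        p = (np.sum(w) + C) / (n + 2 * C)              # penalized update for p

    loglik1 = np.sum(np.log(p * norm.pdf(x, mu1) + (1 - p) * norm.pdf(x, mu2)))
    penalty = C * np.log(4 * p * (1 - p))              # zero at p = 1/2
    return 2 * (loglik1 + penalty - loglik0)

rng = np.random.default_rng(1)
x = rng.normal(size=200)          # data generated under the homogeneous model
print(modified_lrt(x))            # refer this to the chi-square-type limit discussed in the paper
```

The penalty keeps the estimated mixing proportion away from 0 and 1, which is what makes the χ²-type limiting distribution attainable in the literature this abstract builds on.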
97.
In sequential studies, formal interim analyses are usually restricted to a consideration of a single null hypothesis concerning a single parameter of interest. Valid frequentist methods of hypothesis testing and of point and interval estimation for the primary parameter have already been devised for use at the end of such a study. However, the completed data set may warrant a more detailed analysis, involving the estimation of parameters corresponding to effects that were not used to determine when to stop, and yet correlated with those that were. This paper describes methods for setting confidence intervals for secondary parameters in a way which provides the correct coverage probability in repeated frequentist realizations of the sequential design used. The method assumes that information accumulates on the primary and secondary parameters at proportional rates. This requirement will be valid in many potential applications, but only in limited situations in survival analysis.
98.
Summary. We measure trust and trustworthiness in British society with a newly designed experiment using real monetary rewards and a sample of the British population. The study also asks the typical survey question that aims to measure trust, showing that it does not predict 'trust' as measured in the experiment. Overall, about 40% of people were willing to trust a stranger in our experiment, and their trust was rewarded half of the time. Analysis of variation in the trust behaviour in our survey suggests that trusting is more likely if people are older, their financial situation is either 'comfortable' or 'difficult' compared with 'doing alright' or 'just getting by', they are a homeowner or they are divorced, separated or never married compared with those who are married or cohabiting. Trustworthiness also is more likely among subjects who are divorced or separated relative to those who are married or cohabiting, and less likely among subjects who perceive their financial situation as 'just getting by' or 'difficult'. We also analyse the effect of attitudes towards risks on trust.
99.
Summary. We present a new class of methods for high dimensional non-parametric regression and classification called sparse additive models. Our methods combine ideas from sparse linear modelling and additive non-parametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. Sparse additive models are essentially a functional version of the grouped lasso of Yuan and Lin. They are also closely related to the COSSO model of Lin and Zhang but decouple smoothing and sparsity, enabling the use of arbitrary non-parametric smoothers. We give an analysis of the theoretical properties of sparse additive models and present empirical results on synthetic and real data, showing that they can be effective in fitting sparse non-parametric models in high dimensional data.
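A rough sketch of the backfitting-with-soft-thresholding idea behind sparse additive models: smooth the partial residual for each covariate, then shrink the fitted component toward zero by its empirical norm. The Nadaraya–Watson smoother, the value of lam, and the simulated data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of sparse additive backfitting with soft-thresholding of components.
import numpy as np

def kernel_smooth(x, r, bandwidth=0.3):
    """Nadaraya-Watson smoother of residuals r against covariate x."""
    d = (x[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)
    return (w @ r) / w.sum(axis=1)

def spam_backfit(X, y, lam=0.1, n_iter=30):
    n, p = X.shape
    f = np.zeros((n, p))                                     # fitted components
    for _ in range(n_iter):
        for j in range(p):
            resid = y - y.mean() - f.sum(axis=1) + f[:, j]   # partial residual
            pj = kernel_smooth(X[:, j], resid)               # smooth it
            sj = np.sqrt(np.mean(pj ** 2))                   # empirical norm
            shrink = max(0.0, 1.0 - lam / sj) if sj > 0 else 0.0
            f[:, j] = shrink * pj                            # soft-threshold
            f[:, j] -= f[:, j].mean()                        # center
    return f

# Hypothetical data: only the first two of ten covariates matter.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 10))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.3 * rng.normal(size=200)

f = spam_backfit(X, y)
print(np.round(np.sqrt(np.mean(f ** 2, axis=0)), 3))  # nonzero norms flag selected components
```

The point of the decoupling the abstract mentions is visible here: the smoother inside the loop can be swapped for any non-parametric smoother without touching the thresholding step.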
100.
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
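The sample size review described here can be illustrated with the standard two-sample normal approximation, n = 2(z_{1−α/2} + z_{1−β})²σ²/δ² per arm: when the interim estimate of the standard deviation exceeds the planning value, the required second-stage sample size grows accordingly. The planning values below (effect size, α, power, interim SD) are hypothetical and are not taken from the diabetic neuropathic pain study.

```python
# Hedged sketch: revising the per-arm sample size after the interim look,
# using n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2.
from scipy.stats import norm

def per_arm_n(sigma, delta, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample comparison of normal means
    (round up to an integer in practice)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

delta = 1.0                                 # clinically relevant difference (hypothetical)
print(per_arm_n(sigma=2.0, delta=delta))    # planned n under the assumed SD (about 63)
print(per_arm_n(sigma=2.6, delta=delta))    # revised n if the interim SD is larger (about 107)
```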