92.
Bridging Scholarship in Management: Epistemological Reflections   (Total citations: 1; self-citations: 1; citations by others: 1)
If the relevance gap in management research is to be narrowed, management scholars must identify and adopt processes of inquiry that simultaneously achieve high rigour and high relevance. Research approaches that strive for relevance emphasize the particular at the expense of the general, while approaches that strive for rigour emphasize the general over the particular. Inquiry that attains both rigour and relevance can be found in approaches to knowledge that involve a reasoned relationship between the particular and the general. Prominent among these are the works of Ikujiro Nonaka and John Dewey. Their epistemological foundations indicate the potential for a philosophy of science and a process of inquiry that cross epistemological lines by synthesizing the particular and the general and by utilizing experience and theory, the implicit and the explicit, and induction and deduction. These epistemologies point to characteristics of a bridging scholarship that is problem-initiated and rests on expanded standards of validity. The present epistemological reflections are in search of new communities of knowing toward the production of relevant and rigorous management knowledge.
93.
In a recent article by Rosenthal, Zydiak, and Chaudhry (1995), a mixed integer linear programming model was introduced to solve the vendor selection problem for the case in which the vendor can sell items individually or as part of a bundle. Each vendor offered only one type of bundle, and the buyer could purchase at most one bundle per vendor. The model employed n(m + 1) binary variables, where n is the number of vendors and m is the number of products they sell. The existing model can lead to a purchasing paradox: it may force the buyer to pay more to receive less. We suggest a reformulation of the same problem that (i) eliminates this paradox and reveals a more cost-effective purchasing strategy; (ii) uses only n integer variables and significantly reduces the computational workload; and (iii) permits the buyer to purchase more than one bundle per vendor.
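The effect of relaxing the one-bundle-per-vendor restriction can be seen on a tiny brute-force sketch. All prices and demands below are hypothetical toy data, not the paper's instance; the enumeration stands in for the integer program.

```python
from itertools import product

# Toy instance: two vendors sell items A and B individually or as a
# bundle of one A plus one B.  Prices and demands are made up.
prices = {"V1": {"A": 10, "B": 10, "bundle": 15},
          "V2": {"A": 12, "B": 9,  "bundle": 16}}
demand = {"A": 2, "B": 2}

def best_cost(max_bundles=3):
    """Cheapest plan meeting demand; each vendor supplies some number
    of bundles and of individual A and B units (brute-force search)."""
    vendors = list(prices)
    ranges = []
    for _ in vendors:
        ranges += [range(max_bundles + 1), range(3), range(3)]
    best = None
    for combo in product(*ranges):
        cost, got_a, got_b = 0, 0, 0
        for i, v in enumerate(vendors):
            n_bundle, n_a, n_b = combo[3 * i:3 * i + 3]
            p = prices[v]
            cost += n_bundle * p["bundle"] + n_a * p["A"] + n_b * p["B"]
            got_a += n_bundle + n_a
            got_b += n_bundle + n_b
        if got_a >= demand["A"] and got_b >= demand["B"]:
            if best is None or cost < best:
                best = cost
    return best

# At most one bundle per vendor vs. repeats allowed.
print(best_cost(max_bundles=1), best_cost())   # 31 vs 30 on this toy data
```

On this toy data the best plan under the one-bundle-per-vendor restriction costs 31, while buying two bundles from the same vendor costs 30, illustrating point (iii) of the reformulation.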
94.
Summary.  Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that the data are missing at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
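The bias that a non-ignorable mechanism induces can be illustrated with a far simpler estimand than the Cox model. The simulation below is a minimal sketch of the phenomenon only, not the authors' estimator: when the probability of being missing depends on the unobserved value itself, the complete-case mean drifts away from the truth.

```python
import random
random.seed(0)

# Illustration only: a quality-of-life score whose lower values are
# more likely to go unrecorded (non-ignorable / MNAR missingness).
n = 100_000
full, observed = [], []
for _ in range(n):
    y = random.gauss(0.0, 1.0)            # true score, mean 0
    full.append(y)
    p_miss = 0.8 if y < 0 else 0.2        # low scores vanish more often
    if random.random() > p_miss:
        observed.append(y)

true_mean = sum(full) / n
cc_mean = sum(observed) / len(observed)   # complete-case estimate
print(round(true_mean, 2), round(cc_mean, 2))
```

The complete-case mean lands near 0.48 even though the true mean is 0, which is exactly the kind of bias the paper's sensitivity analysis is designed to expose.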
95.
Summary. The determination of evolutionary relationships is a fundamental problem in evolutionary biology. Genome arrangement data are potentially more informative than deoxyribonucleic acid sequence data for inferring evolutionary relationships between distantly related taxa. We describe a Bayesian framework for phylogenetic inference from mitochondrial genome arrangement data using Markov chain Monte Carlo methods. We apply the method to assess evolutionary relationships between eight animal phyla.
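The Markov chain Monte Carlo machinery can be sketched generically. The toy Metropolis sampler below explores three candidate tree topologies with made-up posterior weights; it is not the paper's genome-arrangement likelihood, only an illustration of how a chain's visit frequencies approximate posterior probabilities.

```python
import random
random.seed(2)

# Three hypothetical topologies with made-up unnormalized posterior
# weights; the chain should visit them in proportion 0.6 : 0.3 : 0.1.
weights = {"((A,B),C)": 6.0, "((A,C),B)": 3.0, "((B,C),A)": 1.0}
trees = list(weights)

state = trees[0]
iters = 200_000
visits = {t: 0 for t in trees}
for _ in range(iters):
    proposal = random.choice(trees)        # symmetric proposal
    # Metropolis acceptance: move with prob min(1, w'/w).
    if random.random() < min(1.0, weights[proposal] / weights[state]):
        state = proposal
    visits[state] += 1

post = {t: visits[t] / iters for t in trees}
print({t: round(p, 2) for t, p in post.items()})
```

In a real phylogenetic application the discrete states are trees with branch lengths and the weights come from a likelihood over observed gene arrangements, but the accept/reject logic is the same.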
96.
In longitudinal survey research, respondents can retract earlier answers in logically impossible ways. For instance, respondents who at Time 1 report having had sexual intercourse may at Time 2 report never having done so. This paper reports measurement techniques and analyses of these types of inconsistencies from an ongoing longitudinal adolescent sexuality project. Inconsistencies in intercourse, masturbation, and other sexual behaviors are reported and compared to rates from other studies and to less sensitive behaviors within the same study. Three conclusions are presented: (1) inconsistencies should be considered a natural part of any longitudinal survey process and should be incorporated into the response model; (2) inconsistency rates in these particular data support the contention that adolescent sexuality data of sufficient quality for analytical purposes can be obtained; and (3) inconsistency rates in fact contain substantive information about the processes under consideration.
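An inconsistency rate of the kind described can be computed directly from paired wave responses. The 0/1 data below are made up for illustration; the logic is simply that a "yes, ever" at Time 1 cannot become a "no, never" at Time 2.

```python
# Hypothetical responses to "have you ever had X?" (1 = yes, 0 = no)
# for the same ten respondents at two waves.
wave1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
wave2 = [1, 0, 0, 1, 1, 1, 0, 0, 0, 1]

# Only wave-1 "yes" respondents can produce a logical retraction.
inconsistent = sum(1 for a, b in zip(wave1, wave2) if a == 1 and b == 0)
eligible = sum(wave1)
rate = inconsistent / eligible
print(inconsistent, eligible, round(rate, 3))
```

Note that a 0 followed by a 1 (never, then ever) is logically consistent and is not counted.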
97.
Large databases of routinely collected data are a valuable source of information for detecting potential associations between drugs and adverse events (AE). A pharmacovigilance system starts with a scan of these databases for potential signals of drug-AE associations that will subsequently be examined by experts to aid in regulatory decision-making. The signal generation process faces some key challenges: (1) an enormous volume of drug-AE combinations needs to be tested (i.e. the problem of multiple testing); (2) the results are not in a format that allows the incorporation of accumulated experience and knowledge for future signal generation; and (3) the signal generation process ignores information captured from other processes in the pharmacovigilance system and does not allow feedback. Bayesian methods have been developed for signal generation in pharmacovigilance, although the full potential of these methods has not been realised. For instance, Bayesian hierarchical models allow the incorporation of established medical and epidemiological knowledge into the priors for each drug-AE combination. Moreover, the outputs from this analysis can be incorporated into decision-making tools to help in signal validation and in subsequent actions to be taken by the regulators and companies. We discuss in this paper the apparent advantage of the Bayesian methods used in safety signal generation and the similarities and differences between the two widely used Bayesian methods. We also propose the use of Bayesian hierarchical models to address the three key challenges and discuss the reasons why Bayesian methodology has not yet been fully utilised in pharmacovigilance activities.
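The shrinkage idea behind Bayesian signal generation can be sketched with a conjugate Gamma prior on each drug-AE pair's relative reporting rate. This is a simplified stand-in for the hierarchical models the paper advocates, and all counts below are hypothetical; the point is that the prior is where medical and epidemiological knowledge can be encoded, and that sparse pairs are shrunk hardest.

```python
# Gamma(a, b) prior on the relative reporting rate lambda of a drug-AE
# pair; prior mean a/b = 1 encodes "no signal".  With observed count n
# and expected count e under independence, the conjugate posterior is
# Gamma(a + n, b + e), so the posterior mean is a simple ratio.
prior_a, prior_b = 2.0, 2.0

def posterior_mean(n_obs, n_expected):
    return (prior_a + n_obs) / (prior_b + n_expected)

# A well-supported pair keeps most of its signal...
strong = posterior_mean(10, 2.0)   # raw observed/expected ratio = 5.0
# ...while a single report over a tiny expectation is shrunk toward 1.
sparse = posterior_mean(1, 0.1)    # raw observed/expected ratio = 10.0
print(round(strong, 2), round(sparse, 2))
```

Note the ranking reversal: the sparse pair's raw ratio (10) exceeds the strong pair's (5), but after shrinkage it falls well below it, which is exactly how these methods tame the multiple-testing problem across millions of drug-AE combinations.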
98.
We introduce health technology assessment and evidence synthesis briefly, and then concentrate on the statistical approaches used for conducting network meta-analysis (NMA) in the development and approval of new health technologies. NMA is an extension of standard meta-analysis in which indirect as well as direct information is combined, and it can be seen as similar to the analysis of incomplete-block designs. We illustrate it with an example involving three treatments, using fixed-effects and random-effects models, and using frequentist and Bayesian approaches. As most statisticians in the pharmaceutical industry are familiar with SAS® software for analyzing clinical trials, we provide example code for each of the methods we illustrate. One issue that has been overlooked in the literature is the choice of constraints applied to random effects, and we show how this affects the estimates and standard errors and propose a symmetric set of constraints that is equivalent to most current practice. Finally, we discuss the role of statisticians in planning and carrying out NMAs and the strategy for dealing with important issues such as heterogeneity.
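The core identity behind a fixed-effects NMA with three treatments is easy to state: the indirect A-vs-C effect is the sum of the A-vs-B and B-vs-C effects, with variances adding, and it can then be pooled with any direct A-vs-C evidence by inverse-variance weighting. The paper's examples use SAS; the sketch below is an illustrative translation of the arithmetic only, with hypothetical log-odds-ratio estimates.

```python
import math

# Hypothetical trial summaries (log odds ratios and standard errors).
d_ab, se_ab = 0.50, 0.20   # A vs B
d_bc, se_bc = 0.30, 0.25   # B vs C
d_ac_dir, se_ac_dir = 0.90, 0.30   # direct A vs C evidence

# Indirect A-vs-C estimate: effects add, variances add.
d_ac_ind = d_ab + d_bc
se_ac_ind = math.sqrt(se_ab**2 + se_bc**2)

# Fixed-effects pooling of direct and indirect by inverse variance.
w_ind = 1.0 / se_ac_ind**2
w_dir = 1.0 / se_ac_dir**2
d_ac_pooled = (w_ind * d_ac_ind + w_dir * d_ac_dir) / (w_ind + w_dir)
print(round(d_ac_ind, 2), round(se_ac_ind, 3), round(d_ac_pooled, 3))
```

The pooled estimate necessarily lies between the direct and indirect values; a random-effects NMA adds a between-trial heterogeneity variance to each weight, which is where the constraint issue discussed in the paper arises.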
99.
The aim of this paper is to develop a Bayesian local influence method (Zhu et al. 2009, submitted) for assessing minor perturbations to the prior, the sampling distribution, and individual observations in survival analysis. We introduce a perturbation model to characterize simultaneous (or individual) perturbations to the data, the prior distribution, and the sampling distribution. We construct a Bayesian perturbation manifold for the perturbation model and calculate its associated geometric quantities, including the metric tensor, to characterize the intrinsic structure of the perturbation model (or perturbation scheme). We develop local influence measures based on several objective functions to quantify the degree of various perturbations to statistical models. We carry out several simulation studies and analyze two real data sets to illustrate our Bayesian local influence method in detecting influential observations and in characterizing the sensitivity to the prior distribution and hazard function.
100.
Art auction catalogs provide a pre-sale prediction interval for the price each item is expected to fetch. When the owner consigns art work to the auction house, a reserve price is agreed upon, which is not announced to the bidders. If the highest bid does not reach it, the item is "bought in", i.e. withdrawn unsold. Since only the prices of the sold items are published, analysts have only a biased sample to examine due to the selective sale process. Relying on the published data leads to underestimating the forecast error of the pre-sale estimates. However, we were able to obtain several art auction catalogs with the highest bids for the unsold items as well as the prices of the sold items. With these data we evaluated the accuracy of the predicted sale prices or highest bids for all items, obtained both from the original Heckman selection model, which assumed normal error distributions, and from an alternative model using the t(2) distribution, which yielded a noticeably better fit to several sets of auction data. The measures of prediction accuracy are of more than academic interest, as they are used by auction participants to guide their bidding or selling strategy, and similar appraisals are accepted by the US Internal Revenue Service to justify the deductions donors claim for charitable contributions on their tax returns.
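The understatement of forecast error that comes from conditioning on sold items can be seen in a toy simulation. The setup below (normal bids around the pre-sale estimate, a fixed reserve) is an illustrative assumption, not the paper's Heckman or t(2) model: dropping the bought-in items removes exactly the lots where the estimate missed low, so the error measured on sold items alone is too small.

```python
import random
random.seed(1)

# Every lot has pre-sale estimate 100; the highest bid is noisy around
# it, and the lot sells only if the bid clears the unpublished reserve.
n = 50_000
errors_all, errors_sold = [], []
for _ in range(n):
    estimate = 100.0
    bid = random.gauss(100.0, 20.0)
    reserve = 90.0
    err = abs(bid - estimate)
    errors_all.append(err)
    if bid >= reserve:                 # published prices: sold lots only
        errors_sold.append(err)

mae_all = sum(errors_all) / n
mae_sold = sum(errors_sold) / len(errors_sold)
print(round(mae_sold, 1), round(mae_all, 1))
```

On this toy data the mean absolute error computed from sold items alone comes out around 13, versus roughly 16 over all lots, which is the bias that having the unsold lots' highest bids lets the paper correct.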