441.
Summary. We report the results of a period change analysis of time series observations for 378 pulsating variable stars. The null hypothesis of no trend in expected periods is tested for each of the stars. The tests are non-parametric in that potential trends are estimated by local linear smoothers. Our testing methodology has some novel features. First, the null distribution of a test statistic is defined to be the distribution that results in repeated sampling from a population of stars. This distribution is estimated by means of a bootstrap algorithm that resamples from the collection of 378 stars. Bootstrapping in this way obviates the problem that the conditional sampling distribution of a statistic, given a particular star, may depend on unknown parameters of that star. Another novel feature of our test statistics is that one-sided cross-validation is used to choose the smoothing parameters of the local linear estimators on which they are based. It is shown that doing so results in tests that are tremendously more powerful than analogous tests that are based on the usual version of cross-validation. The positive false discovery rate method of Storey is used to account for the fact that we simultaneously test 378 hypotheses. We ultimately find that 56 of the 378 stars have changes in mean pulsation period that are significant when controlling the positive false discovery rate at the 5% level.
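The multiple-testing correction mentioned above can be sketched with a simplified q-value computation in the spirit of Storey's positive false discovery rate method. This sketch uses a single fixed tuning parameter `lam` for the null-proportion estimate; it is an illustration of the technique, not the implementation used in the study:

```python
import numpy as np

def storey_qvalues(pvals, lam=0.5):
    """Simplified Storey-style q-values with a fixed lambda.
    pi0 estimates the proportion of true null hypotheses; the
    q-value of each p-value is the smallest pFDR at which that
    hypothesis would be rejected."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    # Estimate the proportion of true nulls from large p-values.
    pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))
    order = np.argsort(p)
    q = np.empty(m)
    # Step down from the largest p-value, enforcing monotonicity:
    # q(p_(k)) = min_{t >= p_(k)} pi0 * m * t / #{p_j <= t}
    prev = 1.0
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        prev = min(prev, pi0 * m * p[i] / (rank + 1))
        q[i] = prev
    return q
```

Hypotheses whose q-value falls below 0.05 would then be the ones declared significant when controlling the positive false discovery rate at the 5% level.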
442.
We study a weighted least squares estimator for Aalen's additive risk model with right-censored survival data which allows for a very flexible handling of covariates. We divide the follow-up period into intervals and assume a constant hazard rate in each interval. The model is motivated as a piecewise approximation of a hazard function composed of three parts: arbitrary nonparametric functions for some covariate effects, smoothly varying functions for others, and known (or constant) functions for yet others. The proposed estimator is an extension of the grouped data version of the Huffer and McKeague (1991) estimator. For our model, since the number of parameters is finite (although large), conventional approaches (such as maximum likelihood) are easy to formulate and implement. The approach is illustrated by simulations and is compared to previous studies. The method is also applied to the Framingham study data.
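The piecewise-constant hazard idea can be illustrated with a minimal sketch that fits each interval's hazard by its occurrence/exposure rate (the piecewise-exponential maximum likelihood estimate). This is a stand-in for the piecewise approximation described above, not a reproduction of the paper's weighted least squares estimator:

```python
import numpy as np

def piecewise_hazard(times, events, cuts):
    """Occurrence/exposure hazard estimate in each interval.
    A sketch of the piecewise-constant approximation only --
    not the paper's weighted least squares estimator."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    edges = np.concatenate(([0.0], np.asarray(cuts, dtype=float), [np.inf]))
    hazards = []
    for a, b in zip(edges[:-1], edges[1:]):
        # Person-time each subject spends at risk inside [a, b).
        exposure = np.clip(np.minimum(times, b) - a, 0.0, None).sum()
        # Events observed inside [a, b).
        d = np.sum((events == 1) & (times >= a) & (times < b))
        hazards.append(d / exposure if exposure > 0 else 0.0)
    return np.array(hazards)
```

With cut points at 1 and 2, for example, three subjects observed at times 0.5, 1.5, and 2.5 (the last censored) contribute person-time to each interval only while still at risk there.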
443.
Service designers predict market share and sales for their new designs by estimating consumer utilities. The service's technical features (for example, overnight parcel delivery), its price, and the nature of consumer interactions with the service delivery system influence those utilities. Price and the service's technical features are usually quite objective and readily ascertained by the consumer. However, consumer perceptions about their interactions with the service delivery system are usually far more subjective. Furthermore, service designers can only hope to influence those perceptions indirectly through their decisions about nonlinear processes such as employee recruiting, training, and scheduling policies. Like the service's technical features, these process choices affect quality perceptions, market share, revenues, costs, and profits. We propose a heuristic for the NP-hard service design problem that integrates realistic service delivery cost models with conjoint analysis. The resulting seller's utility function links expected profits to the intensity of a service's influential attributes and also reveals an ideal setting or level for each service attribute. In tests with simulated service design problems, our proposed configurations compare quite favorably with the designs suggested by other normative service design heuristics.
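The link from estimated consumer utilities to a market-share prediction can be illustrated with a multinomial-logit share rule, a standard assumption in conjoint-based design work. The abstract does not specify which share model the authors use, so this is purely illustrative:

```python
import numpy as np

def logit_share(utilities):
    """Multinomial-logit choice shares from consumer utilities
    for competing service designs -- an assumed, illustrative
    share model, not one taken from the paper."""
    u = np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())   # subtract the max for numerical stability
    return e / e.sum()
```

A design's expected profit can then be approximated as its predicted share times market size times unit margin, minus the delivery costs implied by the process choices.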
444.
The problem of maximizing diversity deals with selecting a set of elements from some larger collection such that the selected elements exhibit the greatest variety of characteristics. A new model is proposed in which the concept of diversity is quantifiable and measurable. A quadratic zero-one model is formulated for diversity maximization. Based upon the formulation, it is shown that the maximum diversity problem is NP-hard. Two equivalent linear integer programs are then presented that offer progressively greater computational efficiency. Another formulation is also introduced which involves a different diversity objective. An example is given to illustrate how additional considerations can be incorporated into the maximum diversity model.
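The quadratic zero-one formulation can be made concrete with a tiny brute-force solver: maximize the sum of pairwise diversity values over all subsets of a fixed size. Enumeration is viable only for very small instances, consistent with the NP-hardness result stated above; the linearizations in the paper exist precisely to make larger instances tractable:

```python
from itertools import combinations

def max_diversity_bruteforce(d, m):
    """Exact solution of the quadratic zero-one model
        max  sum_{i<j} d[i][j] * x_i * x_j
        s.t. sum_i x_i = m,  x_i in {0, 1}
    by enumerating all m-element subsets (tiny n only)."""
    n = len(d)
    best_val, best_set = float("-inf"), None
    for subset in combinations(range(n), m):
        val = sum(d[i][j] for i, j in combinations(subset, 2))
        if val > best_val:
            best_val, best_set = val, set(subset)
    return best_val, best_set
```

For example, with four elements at positions 0, 1, 2, and 10 on a line and d[i][j] their pairwise distances, the most diverse pair is the two extreme points.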
445.
It is timely and appropriate to examine both philosophical and pragmatic issues associated with formalizing the adoption of artificial intelligence as a reference discipline for decision support systems research. This paper reflects on where we were when the first special issue of Decision Sciences on expert systems and decision support systems was published; addresses the dynamics of what has taken place since that issue appeared; sets forth a proposition to stimulate ongoing dialogue about synergies between the decision support systems research agenda and that of the artificial intelligence discipline; and demonstrates how the papers appearing in this follow-up special issue of Decision Sciences are representative of an emerging, challenging, and exciting new decision support systems era.
446.
Despite the development of increasingly sophisticated and refined multicriteria decision-making (MCDM) methods, an examination of the experimental evidence indicates that users most often prefer relatively unsophisticated methods. In this paper, we synthesize theories and empirical findings from the psychology of judgment and choice to provide a new theoretical explanation for such user preferences. Our argument centers on the assertion that the MCDM method preferred by decision makers is a function of the degree to which the method tends to introduce decisional conflict. The model we develop relates response mode, decision strategy, and the salience of decisional conflict to user preferences among decision aids. We then show that the model is consistent with empirical results in MCDM studies. Next, the role of decisional conflict in problem formulation aids is briefly discussed. Finally, we outline future research needed to thoroughly test the theoretical mechanisms we have proposed.
447.
448.
Managers and analysts increasingly need to master the hands-on use of computer-based decision technologies including spreadsheet models. Effective training can prevent the lack of skill from impeding potential effectiveness gains from decision technologies. Among the wide variety of software training approaches in use today, recent research indicates that techniques based on behavior modeling, which consists of computer skill demonstration and hands-on practice, are among the most effective for achieving positive training outcomes. The present research examines whether the established behavior-modeling approach to software training can be improved by adding a retention enhancement intervention as a substitute for, or complement to, hands-on practice. One hundred and eleven trainees were randomly assigned to one of three versions of a training program for spreadsheets: retention enhancement only, practice only, and retention enhancement plus practice. Results obtained while controlling for total training time indicate that a combination of retention enhancement and practice led to significantly better cognitive learning than practice alone. The initial difference in cognitive achievement was still evident one week after training. Implications for future computer training research and practice are discussed.
449.
In certain settings, difficulties arise that limit the effectiveness of LP formulations for the discriminant problem. Explanations and possible remedies have been offered, but these have had only limited success. We provide a simple way to overcome these problems based on an appropriate use and interpretation of normalizations. In addition, we demonstrate a normalization that is invariant under all translations of the problem data, providing a stability property not shared by previous approaches. Finally, we discuss the possibility of using more general models to improve discrimination.
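The role a normalization plays can be illustrated with a minimize-sum-of-deviations LP discriminant. Without some normalization, the LP admits the trivial solution of a zero weight vector; the sketch below pins it down with the simple constraint w·(mean(B) − mean(A)) = 1. This is one common illustrative normalization, not the translation-invariant one proposed in the paper:

```python
import numpy as np
from scipy.optimize import linprog

def lp_discriminant(A, B):
    """Minimize-sum-of-deviations LP discriminant.  Finds weights
    w and cutoff c so that group A lies (mostly) at w.x <= c and
    group B at w.x >= c, minimizing total constraint violation.
    The normalization w.(mean(B) - mean(A)) = 1 rules out the
    trivial solution w = 0 (illustrative choice only)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    nA, nB, p = len(A), len(B), A.shape[1]
    n = nA + nB
    nvar = p + 1 + n                     # variables: w (p), c (1), d (n)
    cost = np.r_[np.zeros(p + 1), np.ones(n)]        # minimize sum of d_i
    Aub = np.zeros((n, nvar))
    bub = np.zeros(n)
    # Group A:  w.x_i - c - d_i <= 0
    Aub[:nA, :p] = A
    Aub[:nA, p] = -1.0
    Aub[:nA, p + 1:p + 1 + nA] = -np.eye(nA)
    # Group B: -w.x_i + c - d_i <= 0
    Aub[nA:, :p] = -B
    Aub[nA:, p] = 1.0
    Aub[nA:, p + 1 + nA:] = -np.eye(nB)
    # Normalization: w.(mean(B) - mean(A)) = 1
    Aeq = np.zeros((1, nvar))
    Aeq[0, :p] = B.mean(axis=0) - A.mean(axis=0)
    bounds = [(None, None)] * (p + 1) + [(0, None)] * n
    res = linprog(cost, A_ub=Aub, b_ub=bub, A_eq=Aeq, b_eq=[1.0],
                  bounds=bounds)
    return res.x[:p], res.x[p]           # weights w, cutoff c
```

For two well-separated point clouds the optimal deviations are all zero and the returned hyperplane w·x = c separates the groups exactly.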