1026 results found (search time: 15 ms)
71.
When using data envelopment analysis (DEA) as a benchmarking technique for nursing homes, it is essential to include measures of the quality of care. We survey applications in which quality has been incorporated into DEA models and consider the concerns that arise when the results show that quality measures have effectively been ignored. Three modeling techniques are identified that address these concerns. Each requires some input from management as to the proper emphasis to place on the quality aspect of performance. We report the results of a case study in which we apply these techniques to a DEA model of nursing home performance, examining in depth not only the resulting efficiency scores but also the benchmark sets and the weights given to the input and output measures. We find that two of the techniques are effective in ensuring that DEA results discriminate between high- and low-quality performance.
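The abstract does not reproduce the underlying DEA formulation. As background only, a basic input-oriented CCR efficiency score can be computed by linear programming; this sketch omits the quality-weighting techniques the paper studies, and the function and data names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog


def ccr_efficiency(X, Y):
    """Input-oriented CCR DEA efficiency for each decision-making unit.

    X: (n_dmus, n_inputs) input quantities
    Y: (n_dmus, n_outputs) output quantities
    Returns a list of efficiency scores theta in (0, 1].
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0  # minimize theta
        A_ub, b_ub = [], []
        # Inputs: sum_j lambda_j * x_ij <= theta * x_io
        for i in range(m):
            A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
            b_ub.append(0.0)
        # Outputs: sum_j lambda_j * y_rj >= y_ro
        for r in range(s):
            A_ub.append(np.concatenate(([0.0], -Y[:, r])))
            b_ub.append(-Y[o, r])
        bounds = [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                      bounds=bounds, method="highs")
        scores.append(res.x[0])
    return scores
```

With two single-input, single-output units where the second uses twice the input for the same output, the second unit scores 0.5 against the first's frontier.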
72.
Most models for incomplete data are formulated within the selection-model framework. Pattern-mixture models are increasingly seen as a viable alternative, from both an interpretational and a computational point of view (Little, 1993; Hogan and Laird, 1997; Ekholm and Skinner, 1998). Whereas most applications involve either continuous, normally distributed data or simplified categorical settings such as contingency tables, we show how a multivariate odds-ratio model (Molenberghs and Lesaffre, 1994, 1998) can be used to fit pattern-mixture models to repeated binary outcomes with continuous covariates. Apart from point estimation, useful methods for interval estimation are presented, and data from a clinical study are analyzed to illustrate the methods.
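The core pattern-mixture idea is to stratify subjects by their missingness pattern and summarize within each stratum. A toy sketch for monotone dropout follows; it is not the multivariate odds-ratio model of the paper, and the function name is illustrative.

```python
import numpy as np


def pattern_mixture_summary(y):
    """Toy pattern-mixture decomposition under monotone dropout.

    y: (n_subjects, n_times) outcomes with np.nan after dropout.
    Groups subjects by dropout pattern (number of observed visits)
    and returns {pattern: (pattern probability, within-pattern means)}.
    """
    # Monotone dropout: the pattern is just the count of observed visits.
    patterns = np.sum(~np.isnan(y), axis=1)
    out = {}
    for p in np.unique(patterns):
        block = y[patterns == p]
        prob = len(block) / len(y)
        means = np.nanmean(block[:, :p], axis=0)  # means over observed visits
        out[int(p)] = (prob, means)
    return out
```

A marginal quantity would then be identified by weighting the within-pattern summaries by the pattern probabilities, together with restrictions for the unobserved cells.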
73.
“One method of error analysis (not the one we will use) is based upon the principles of mathematical statistics. Unfortunately, statistical methods can only be meaningfully applied when one has large amounts of data for a given system. In many cases … these large quantities of data are not available … then statistical methods are not applicable, and some other methods must be devised.”
74.
The Consistent System (CS) is an interactive computer system for researchers in the behavioral and policy sciences and in fields with similar requirements for data management and statistical analysis. The researcher is not expected to be a programmer. The system offers a wide range of facilities and permits the user to combine them in novel ways. In particular, tools for statistical analysis may be used in combination with a powerful relational subsystem for data base management. This paper gives an overview of the objectives, capabilities, status, and availability of the system.
75.
A Methodology for Seismic Risk Analysis of Nuclear Power Plants
This methodology begins by quantifying the fragility of all key components and structures in the plant. By means of the logic encoded in the plant event trees and fault trees, the component fragilities are combined to form fragilities for the occurrence of plant damage states or release categories. Combining these, in turn, with the seismicity curves yields the frequencies of those states or releases. Uncertainty is explicitly included at each step of the process.
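The final step described above, combining a fragility with a seismicity (hazard) curve to get an annual damage frequency, can be sketched as a discrete convolution over ground-motion bins. This is a minimal illustration, not the paper's full event-tree/fault-tree machinery, and the names are assumptions.

```python
def damage_frequency(hazard_exceed, fragility):
    """Combine a seismic hazard curve with a fragility curve.

    hazard_exceed: annual frequencies of exceeding each ground-motion
                   level (decreasing with acceleration).
    fragility:     conditional failure probability at each level.
    Returns the approximate annual frequency of the damage state,
    summing (frequency of shaking in bin) x (failure prob in bin).
    """
    freq = 0.0
    for k in range(len(hazard_exceed) - 1):
        # Annual frequency of ground motion falling inside bin k
        dnu = hazard_exceed[k] - hazard_exceed[k + 1]
        # Average failure probability over the bin (trapezoidal)
        pf = 0.5 * (fragility[k] + fragility[k + 1])
        freq += dnu * pf
    return freq
```

With a fragility of 1 everywhere, the damage frequency collapses to the frequency of any shaking in the covered range, a useful sanity check.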
76.
Summary: We describe depth-based graphical displays that show the interdependence of multivariate distributions. The plots involve one-dimensional curves or bivariate scatterplots, so they are easier to interpret than correlation matrices. The correlation curve, modelled on the scale curve of Liu et al. (1999), compares the volume of the observed central regions with the volume under independence. The correlation DD-plot is the scatterplot of depth values under a reference distribution against depth values under independence; the area of the plot gives a measure of distance from independence. Both the correlation curve and the DD-plot require an independence model as a baseline: besides classical parametric specifications, a nonparametric estimator derived from the randomization principle is used. Combining data depth with the notion of quadrant dependence yields quadrant correlation trajectories, which allow simultaneous representation of subsets of variables. The properties of the plots for the multivariate normal distribution are investigated, and several real-data examples are presented. *This work was completed with the support of Ca' Foscari University.
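The quadrant-dependence notion underlying the trajectories has a very simple empirical analogue: compare the joint probability of both variables falling below their medians with the 0.25 expected under independence. A minimal sketch, with an illustrative function name; the paper's depth-based trajectories are considerably richer.

```python
import numpy as np


def quadrant_statistic(x, y):
    """Empirical quadrant-dependence check at the medians.

    Returns P(X <= med_x, Y <= med_y) - 0.25; positive values
    suggest positive quadrant dependence, negative values the
    opposite, and ~0 is consistent with independence.
    """
    mx, my = np.median(x), np.median(y)
    joint = np.mean((x <= mx) & (y <= my))
    return joint - 0.25
```

For perfectly comonotone data the statistic reaches +0.25; for perfectly antitone data it reaches -0.25.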
77.
In this paper, we consider the problem of enumerating all maximal motifs in an input string, for the class of repeated motifs with wild cards. A maximal motif is a representative motif that is not properly contained in any larger motif with the same location list. Although the enumeration problem for maximal motifs with wild cards has been studied in Parida et al. (2001), Pisanti et al. (2003), and Pelfrêne et al. (2003), its output-polynomial-time computability had remained open. The main result of this paper is a polynomial-space, polynomial-delay algorithm for the maximal motif enumeration problem for repeated motifs with wild cards. The algorithm enumerates all maximal motifs in an input string of length n in O(n^3) time per motif with O(n) space, in particular with O(n^3) delay. The key of the algorithm is a depth-first search on a tree-shaped search route over all maximal motifs, based on a technique called prefix-preserving closure extension. We also show an exponential lower bound and a succinctness result on the number of maximal motifs, which indicate the limits of a straightforward approach. Computational experiments show that the algorithm is applicable in practice to huge string data such as genome sequences, and incurs little additional cost compared to usual frequent-motif mining algorithms. This work was done during Hiroki Arimura's visit to LIRIS, Université Claude Bernard Lyon 1, France.
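Prefix-preserving closure extension builds on a closure operator that maps a location list to its unique maximal motif: keep a solid character at every offset where all occurrences agree, and a wild card elsewhere. A toy sketch of that operator (not the paper's O(n^3)-delay enumeration algorithm; the function name is illustrative):

```python
def closure(s, positions):
    """Maximal motif (with wild card '.') for a given location list.

    s:         the input string
    positions: start positions of the occurrences
    Extends to the right as far as every occurrence fits, keeping a
    solid character where all occurrences agree and '.' otherwise,
    then trims trailing wild cards (a motif ends in a solid character).
    """
    max_len = len(s) - max(positions)  # longest window fitting all starts
    out = []
    for k in range(max_len):
        chars = {s[p + k] for p in positions}
        out.append(chars.pop() if len(chars) == 1 else '.')
    return ''.join(out).rstrip('.')
```

For example, the occurrences of "ab" at positions 0 and 3 in "abcabd" disagree at offset 2, so the closure is just "ab"; in "axbayb" the two occurrences agree at offsets 0 and 2 only, giving "a.b".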
78.
79.
A survey was conducted to evaluate biotechnology executives' perceptions of the frequency of questionable R&D studies performed in their companies and to assess the extent to which their companies voluntarily conducted data audits. Data auditing was found to be commonly practiced on a voluntary basis by biotechnology companies, in contrast to its near absence at universities. Public companies, however, were more likely to practice data auditing than privately held companies. Moreover, the likelihood that managers suspected or detected questionable studies in their companies was significantly higher if the company practiced data auditing.
80.
Revisions of the early GNP estimates may contain elements of measurement error as well as forecast error. These types of error behave differently but must satisfy a common set of criteria for well-behavedness. This article tests these criteria for U.S. GNP revisions. The tests are similar to tests of rationality and are based on the generalized method of moments (GMM) estimator. The flash, 15-day, and 45-day estimates are found to be ill-behaved, but the 75-day estimate satisfies the well-behavedness criteria.
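The article's tests are GMM-based; a simpler, closely related rationality check is a Mincer-Zarnowitz regression of the final estimate on the preliminary one, where a well-behaved preliminary estimate gives an intercept near 0 and a slope near 1. A minimal OLS sketch under that assumption, with illustrative names:

```python
import numpy as np


def mincer_zarnowitz(early, final):
    """OLS regression of final estimates on early estimates.

    Returns (beta, resid), where beta = [intercept, slope].
    Rationality of the early estimate corresponds to the joint
    hypothesis intercept = 0 and slope = 1 (revisions then being
    unpredictable 'news' rather than correctable 'noise').
    """
    X = np.column_stack([np.ones_like(early), early])
    beta, *_ = np.linalg.lstsq(X, final, rcond=None)
    resid = final - X @ beta
    return beta, resid
```

A formal test would compare the estimated coefficients with (0, 1) using their covariance matrix; the GMM approach in the article generalizes this to richer moment conditions.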