191.
This article challenges often unquestioned understandings within human resource development (HRD) of leadership as comprising knowledge and skills, and of leadership development as involving the transfer of such knowledge and skills from formal interventions to workplace performance. Using the notions of leadership as identity and learning as a process of identity formation, the article reports qualitative research showing how a case-study group of middle managers in UK local government, a sector undergoing unprecedented turbulence, developed a sense of themselves as leaders, and how a key HRD intervention, a corporate MBA, facilitated such identity development. In particular, the article uses situated learning theory to examine how informal communities of practice associated with Master of Business Administration (MBA) study provided a forum for identity building of equal developmental value to the formal MBA curriculum. The implications for future HRD research are established, and suggestions are made for the redesign of HRD interventions to best enable identity work.
192.
Improvements in information technologies provide new opportunities to control and improve business processes based on real‐time performance data. A class of data we call individualized trace data (ITD) identifies the real‐time status of individual entities as they move through execution processes, such as an individual product passing through a supply chain or a uniquely identified mortgage application going through an approval process. We develop a mathematical framework which we call the State‐Identity‐Time (SIT) Framework to represent and manipulate ITD at multiple levels of aggregation for different managerial purposes. Using this framework, we design a pair of generic quality measures—timeliness and correctness—for the progress of entities through a supply chain. The timeliness and correctness metrics provide behavioral visibility that can help managers to grasp the dynamics of supply chain behavior, which is distinct from asset visibility such as inventory. We develop special quality control methods using this framework to address the issue of overreaction that is common among managers faced with a large volume of fast‐changing data. The SIT structure and its associated methods inform managers on whether, when, and where to react. We illustrate our approach using simulations based on real RFID data from a Walmart RFID pilot project.
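The abstract does not give the metric definitions, so the following is an illustrative sketch only: per-entity timeliness and correctness scores computed from a trace of (state, timestamp) scans. The trace layout and the `timeliness`/`correctness` functions are our assumptions, not the paper's SIT formulation.

```python
def timeliness(trace, deadlines):
    """Fraction of an entity's observed state arrivals that met their deadline.

    trace     -- list of (state, timestamp) pairs in observed order
    deadlines -- dict mapping state -> latest acceptable arrival time
    """
    if not trace:
        return 1.0
    on_time = sum(1 for state, t in trace if t <= deadlines.get(state, float("inf")))
    return on_time / len(trace)


def correctness(trace, expected_path):
    """Fraction of the expected state path matched, as a prefix, by the trace."""
    if not expected_path:
        return 1.0
    states = [state for state, _ in trace]
    match = 0
    for observed, expected in zip(states, expected_path):
        if observed != expected:
            break
        match += 1
    return match / len(expected_path)
```

Under these definitions an entity that is scanned late at one stage scores timeliness below 1 even if it eventually reaches the right end state, which is the kind of behavioral (rather than asset) visibility the abstract describes.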
193.
An estimator of the ratio of scale parameters of the distributions of two positive random variables is developed for the case where the only difference between the distributions is a difference in scale. Simulation studies demonstrate that the estimator performs much better, in terms of mean squared error, than the most popular one among those estimators currently available.
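The abstract does not specify the estimator's form. Purely to illustrate the kind of mean-squared-error simulation study described, here is a minimal Monte Carlo experiment for the naive ratio-of-sample-means baseline under exponential data; the exponential assumption and all names are ours, not the paper's.

```python
import random

def ratio_of_means(xs, ys):
    """Naive estimator of the scale ratio: ratio of the two sample means."""
    return (sum(xs) / len(xs)) / (sum(ys) / len(ys))

def simulate_mse(true_ratio=2.0, n=50, reps=2000, seed=1):
    """Monte Carlo MSE of ratio_of_means when both samples are exponential
    and differ only in scale."""
    rng = random.Random(seed)
    sq_errs = []
    for _ in range(reps):
        xs = [rng.expovariate(1.0 / true_ratio) for _ in range(n)]  # scale = true_ratio
        ys = [rng.expovariate(1.0) for _ in range(n)]               # scale = 1
        sq_errs.append((ratio_of_means(xs, ys) - true_ratio) ** 2)
    return sum(sq_errs) / reps
```

A competing estimator would be compared by swapping it into the same loop and reporting both MSEs, which is the structure such simulation studies typically take.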
194.
A model for survival analysis is studied that is relevant for samples which are subject to multiple types of failure. In comparison with a more standard approach, through the appropriate use of hazard functions and transition probabilities, the model allows for a more accurate study of cause-specific failure with regard to both the timing and type of failure. A semiparametric specification of a mixture model is employed that is able to adjust for concomitant variables and allows for the assessment of their effects on the probabilities of eventual causes of failure through a generalized logistic model, and their effects on the corresponding conditional hazard functions by employing the Cox proportional hazards model. A carefully formulated estimation procedure is presented that uses an EM algorithm based on a profile likelihood construction. The methods discussed, which could also be used for reliability analysis, are applied to a prostate cancer data set.
195.
In the literature studying recurrent event data, a large amount of work has been focused on univariate recurrent event processes where the occurrence of each event is treated as a single point in time. There are many applications, however, in which univariate recurrent events are insufficient to characterize the feature of the process, because patients experience nontrivial durations associated with each event. This results in an alternating event process where the disease status of a patient alternates between exacerbations and remissions. In this paper, we consider the dynamics of a chronic disease and its associated exacerbation-remission process over two time scales: calendar time and time-since-onset. In particular, over calendar time, we explore population dynamics and the relationship between incidence, prevalence and duration for such alternating event processes. We provide nonparametric estimation techniques for characteristic quantities of the process. In some settings, exacerbation processes are observed from an onset time until death; to account for the relationship between the survival and alternating event processes, nonparametric approaches are developed for estimating the exacerbation process over the lifetime. By understanding the population dynamics and within-process structure, the paper provides a new and general way to study alternating event processes.
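The paper's estimators are not reproduced in the abstract. As a toy illustration of one of the characteristic quantities it mentions, the point prevalence of exacerbation at a calendar time t can be read off observed episode intervals; the function name and data layout below are ours, and real data would additionally require handling censoring, which this sketch ignores.

```python
def prevalence(episodes_by_subject, t):
    """Fraction of subjects inside an exacerbation episode at calendar time t.

    episodes_by_subject -- one list of (start, end) exacerbation intervals per
                           subject; intervals are half-open, [start, end)
    """
    if not episodes_by_subject:
        return 0.0
    in_exacerbation = sum(
        1 for episodes in episodes_by_subject
        if any(start <= t < end for start, end in episodes)
    )
    return in_exacerbation / len(episodes_by_subject)
```

Sweeping t across calendar time traces the population prevalence curve whose relationship to incidence and duration the abstract refers to.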
196.
Pressure is often placed on statistical analysts to improve the accuracy of their population estimates. In response to this pressure, analysts have long exploited the potential to combine surveys in various ways. This paper develops a framework for combining surveys when data items from one of the surveys are mass imputed. The estimates from the surveys are combined using a composite estimator (CE). The CE accounts for the variability due to the imputation model and the surveys' sampling schemes. Diagnostics for the validity of the imputation model are also discussed. We describe an application of combining the Australian Labour Force Survey and the National Aboriginal and Torres Strait Islander Health Survey to estimate employment characteristics of the Indigenous population. The findings suggest that combining these surveys is beneficial.
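The paper's CE additionally propagates imputation-model variance; setting that component aside, the basic inverse-variance form of a composite estimator of a common quantity from two independent surveys (a standard construction, not the paper's exact formula) can be sketched as:

```python
def composite_estimate(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted combination of two independent estimates.

    The weight on each estimate is proportional to its precision (1/variance),
    so the less variable survey dominates the composite.
    """
    w = var_b / (var_a + var_b)                 # weight on estimate A
    est = w * est_a + (1 - w) * est_b
    var = (var_a * var_b) / (var_a + var_b)     # CE variance under independence
    return est, var
```

The composite variance is always below the smaller of the two input variances, which is the basic argument for combining the surveys at all.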
197.
In the last few decades, payday lending has mushroomed in many developed countries. The arguments for and against an industry which provides small, short‐term loans at very high interest rates have also blossomed. This article presents findings from an Australian study to contribute to the international policy and practice debate about a sector which orients to those on a low income. At the heart of this debate lies a conundrum: borrowing from payday lenders exacerbates poverty, yet many low‐income households rely on these loans. We argue that the key problem is the restricted framework within which the debate currently oscillates. Key Practitioner Message:
● Framing payday borrowing as a problem of market failure leads to one‐sided and ineffective regulatory responses;
● Until governments instigate real alternatives for cheap and readily available credit, and broader anti‐poverty measures, curbing access to payday lenders can have the perverse effect of increasing privation;
● For practitioners seeking to abolish payday lending, campaigns for higher wages and a liveable social welfare income are central.
198.
Six methods of obtaining estimates of treatment effects in a row-column design are considered. Five methods use estimates of inter-row and inter-column variation, and the remaining method is Ordinary Least Squares. Using simulation, these methods are examined to see which are most appropriate for minimising the sum of the squared differences between the estimates of the elementary treatment contrasts and their true values. Recommendations are made as to which methods to use.
199.
Physiologically-based toxicokinetic (PBTK) models are widely used to quantify whole-body kinetics of various substances. However, since they attempt to reproduce anatomical structures and physiological events, they have a high number of parameters. Their identification from kinetic data alone is often impossible, and other information about the parameters is needed to render the model identifiable. The most commonly used approach consists of independently measuring, or taking from literature sources, some of the parameters, fixing them in the kinetic model, and then performing model identification on a reduced number of less certain parameters. This results in a substantial reduction of the degrees of freedom of the model. In this study, we show that this method results in final estimates of the free parameters whose precision is overestimated. We then compared this approach with an empirical Bayes approach, which takes into account not only the mean value, but also the error associated with the independently determined parameters. Blood and breath 2H8-toluene washout curves, obtained in 17 subjects, were analyzed with a previously presented PBTK model suitable for person-specific dosimetry. Model parameters with the greatest effect on predicted levels were alveolar ventilation rate QPC, fat tissue fraction VFC, blood-air partition coefficient Kb, fraction of cardiac output to fat Qa/co and rate of extrahepatic metabolism Vmax-p. Differences in the measured and Bayesian-fitted values of QPC, VFC and Kb were significant (p < 0.05), and the precision of the fitted values Vmax-p and Qa/co went from 11 ± 5% to 75 ± 170% (NS) and from 8 ± 2% to 9 ± 2% (p < 0.05) respectively. 
The empirical Bayes approach did not result in less reliable parameter estimates: rather, it pointed out that the precision of parameter estimates can be overly optimistic when other parameters in the model, either directly measured or taken from literature sources, are treated as known without error. In conclusion, an empirical Bayes approach to parameter estimation resulted in a better model fit, different final parameter estimates, and more realistic parameter precisions.
200.
A review of the literature indicates that the traditional approach for evaluating quantity discount offerings for purchased items has not adequately considered the effect that transportation costs may have on the optimal order quantity, despite the fact that purchased materials must generally bear transportation charges. The transportation cost structure for less-than-truckload (LTL) shipments reflects sizable reductions in freight rates when the shipment size exceeds one of the nominal rate breakpoints. However, the shipper must also be aware of the opportunity to reduce total freight costs by artificially inflating the actual shipping weight to the next rate breakpoint, so that a lower marginal tariff is achieved for the entire shipment. Such over-declared shipments result in an effective freight rate schedule that is characterized by constant fixed-charge segments in addition to the nominal marginal rates. Over-declared shipments are economical when the shipment volume is less than the rate breakpoint but greater than a cost indifference point between the two adjacent marginal rates. This paper presents a simple analytical procedure for finding the order quantity that minimizes total purchase costs reflecting both transportation economies and quantity discounts. After first solving for the series of indifference points that apply to a particular freight rate schedule, a total purchase cost expression is presented that properly accounts for the actual transportation cost structure. The optimal purchase order quantity will be one of the four following possibilities: (1) the valid economic order quantity (EOQ), QC; (2) a purchase price breakpoint in excess of QC; (3) a transportation rate breakpoint in excess of QC; and (4) a modified EOQ which provides an over-declared shipment in excess of QC. Finally, an algorithm which systematically explores these four possibilities is presented and illustrated with a numerical example.
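As a sketch of just the indifference-point and effective-rate logic (the breakpoint schedule below is invented for illustration, and the paper's full algorithm also folds in purchase-price discounts and the EOQ terms), the effective freight cost with over-declaration allowed can be computed as:

```python
def indifference_point(breakpoint_weight, rate_below, rate_at_break):
    """Shipment weight above which over-declaring to the next rate breakpoint
    is cheaper than paying the higher per-unit rate on the actual weight."""
    return breakpoint_weight * rate_at_break / rate_below

def freight_cost(weight, schedule):
    """Effective LTL freight cost when over-declared shipments are allowed.

    schedule -- ascending list of (min_weight, per_unit_rate) brackets,
                starting at (0, base_rate), with rates decreasing
    """
    # Nominal cost: the whole shipment is billed at the rate of the bracket
    # the actual weight falls in.
    nominal_rate = [rate for min_w, rate in schedule if min_w <= weight][-1]
    costs = [weight * nominal_rate]
    # Over-declared options: pay for a higher breakpoint weight at its rate.
    costs += [min_w * rate for min_w, rate in schedule if min_w > weight]
    return min(costs)
```

For example, with brackets starting at 0 and 100 lb priced at $10 and $8 per lb, the indifference point is 100 × 8 / 10 = 80 lb: a 90 lb shipment is cheaper declared as 100 lb ($800) than billed at its true weight ($900), exactly the fixed-charge segment the abstract describes.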