911.
In medical studies we are often confronted with complex longitudinal data. During the follow-up period, which can be ended prematurely by a terminal event (e.g. death), a subject can experience recurrent events of multiple types. In addition, we collect repeated measurements from multiple markers. An adverse health status, represented by ‘bad’ marker values and an abnormal number of recurrent events, is often associated with the risk of experiencing the terminal event. In this situation the data are missing not at random and, to avoid bias, it is necessary to model all of the data simultaneously using a joint model. The correlations between the repeated observations of a marker or an event type within an individual are captured by normally distributed random effects. Because the joint likelihood contains an analytically intractable integral, Bayesian approaches or quadrature approximation techniques are needed to evaluate it. However, when the number of recurrent event types and markers is large, the dimensionality of the integral is high and these methods become too computationally expensive. As an alternative, we propose a simulated maximum-likelihood approach based on quasi-Monte Carlo integration to evaluate the likelihood of joint models with multiple recurrent event types and markers.
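As a concrete illustration of the approach in 911, the sketch below evaluates one subject's likelihood contribution by quasi-Monte Carlo integration over a normal random effect. It is a minimal toy rather than the authors' model: the Poisson stand-in for the conditional likelihood, the parameter values, and the name subject_likelihood_qmc are assumptions; scipy's scrambled Sobol generator supplies the low-discrepancy points.

import numpy as np
from scipy.stats import norm, qmc

def subject_likelihood_qmc(y, sigma=1.0, n_nodes=1024, seed=0):
    # L_i = integral of f(y | b) * phi(b; 0, sigma^2) db, approximated by
    # averaging f(y | b) over quasi-random draws b from N(0, sigma^2).
    sobol = qmc.Sobol(d=1, scramble=True, seed=seed)
    u = sobol.random(n_nodes)[:, 0]       # scrambled Sobol uniforms on (0, 1)
    b = sigma * norm.ppf(u)               # mapped to normal random-effect draws
    lam = np.exp(0.5 + b)                 # toy conditional Poisson intensity
    log_f = y * np.log(lam) - lam         # unnormalized conditional log-likelihood
    return float(np.mean(np.exp(log_f)))

# Simulated maximum likelihood then sums log L_i over subjects and hands the
# total to a standard optimizer; a real joint model would multiply marker and
# recurrent-event contributions inside the integrand.
print(subject_likelihood_qmc(y=3))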
912.
In this paper, a two-parameter discrete distribution named the Misclassified Size Biased Discrete Lindley distribution is defined for the misclassification setting in which some of the observations corresponding to x = c + 1 are reported as x = c with misclassification error α. Several estimation methods, namely maximum likelihood estimation, moment estimation, and Bayes estimation, are considered for the parameters of the Misclassified Size Biased Discrete Lindley distribution, and they are compared by mean squared error in a simulation study with varying sample sizes. A general form of the factorial moments is also obtained for the distribution. Finally, a real-life data set is used to fit the Misclassified Size Biased Discrete Lindley distribution.
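The misclassification mechanism assumed in 912 is easy to state in code: each count equal to c + 1 is reported as c with probability α. The sketch below applies it to a generic count sample; the Poisson draws are only a stand-in, since sampling the size-biased discrete Lindley distribution itself would require its pmf, which is not reproduced here.

import numpy as np

def misclassify(x, c, alpha, rng):
    # Report x = c + 1 as x = c with probability alpha; all other values
    # are recorded correctly, matching the error model of the paper.
    x = x.copy()
    at_risk = x == c + 1
    flipped = rng.random(x.size) < alpha
    x[at_risk & flipped] = c
    return x

rng = np.random.default_rng(42)
x_true = rng.poisson(lam=2.0, size=5000)       # stand-in count sample
x_obs = misclassify(x_true, c=1, alpha=0.3, rng=rng)
# Probability mass shifts from c + 1 down to c in the reported data.
print((x_true == 2).mean(), (x_obs == 2).mean())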
913.
Subset correspondence analysis is a relatively new technique for analyzing categorical data with missingness. A simulation study is used to test the effects of Little and Rubin's missingness mechanisms, as well as of missingness of up to 50%, on subset correspondence analysis. Missingness was simulated across 18 different scenarios; each scenario was repeated 10 times, and outcomes were averaged across the 10 replications. In this application it was found that, while missingness in excess of 30% has some effect on certain outcomes, there is no evidence to suggest that the missingness mechanism significantly affects results.
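For readers unfamiliar with Little and Rubin's taxonomy, the sketch below imposes the three mechanisms on a toy categorical variable at the study's maximum rate of 50%: MCAR deletes independently of the data, MAR with probability driven by a fully observed covariate, and MNAR with probability driven by the value being deleted. The specific probability weights are illustrative assumptions, not those of the study.

import numpy as np

def impose_missingness(y, mechanism, x, rate, rng):
    # y: variable to receive missingness; x: fully observed covariate.
    y = y.astype(float)
    if mechanism == "MCAR":            # independent of all data
        p = np.full(y.size, rate)
    elif mechanism == "MAR":           # depends only on the observed x
        p = rate * x / x.mean()
    elif mechanism == "MNAR":          # depends on the value itself
        p = rate * y / y.mean()
    y[rng.random(y.size) < p] = np.nan
    return y

rng = np.random.default_rng(1)
x = rng.integers(1, 5, size=1000)      # observed covariate
y = rng.integers(1, 5, size=1000)      # variable to be masked
for mech in ("MCAR", "MAR", "MNAR"):
    y_miss = impose_missingness(y, mech, x, 0.5, rng)
    print(mech, round(float(np.isnan(y_miss).mean()), 3))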
914.
Classification models can demonstrate apparent prediction accuracy even when there is no underlying relationship between the predictors and the response, and variable selection procedures can produce false-positive selections and overestimates of true model performance. A simulation study was conducted using logistic regression with forward stepwise, best-subsets, and LASSO variable selection methods under varying total sample sizes (20, 50, 100, 200) and numbers of random-noise predictor variables (3, 5, 10, 15, 20, 50). Using our critical values can help reduce needless follow-up on variables that have no true association with the outcome.
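The false-positive phenomenon in 914 is easy to reproduce. The sketch below fits an L1-penalized logistic regression (scikit-learn's stand-in for the paper's LASSO) to pure-noise predictors and counts the coefficients that survive; since the response is generated independently of every predictor, every nonzero coefficient is a false positive. The grid cell (n = 100, 20 noise predictors) and the penalty strength C are illustrative choices.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, reps = 100, 20, 200                    # one cell of the simulation grid
false_positives = []
for _ in range(reps):
    X = rng.standard_normal((n, p))          # pure-noise predictors
    y = rng.integers(0, 2, size=n)           # response independent of X
    fit = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
    # The true model is empty, so every selected variable is a false positive.
    false_positives.append(int((fit.coef_ != 0).sum()))
print("mean false positives per fit:", float(np.mean(false_positives)))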
915.
This paper investigates optimal lot-splitting policies in a multiprocess flow shop environment with the objective of minimizing either mean flow time or makespan. Using a quadratic programming approach to the mean flow time problem, we determine the optimal way of splitting a job into smaller sublots under various setup-time-to-run-time ratios, numbers of machines in the flow shop, and numbers of allowed sublots. Our results come from a deterministic flow shop environment but also provide insights into the repetitive-lots scheme, which uses equal lot splits, for job shop scheduling in a stochastic environment. We indicate the conditions under which managers should implement the repetitive-lots scheme and those under which other lot-splitting schemes should work better.
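To see the trade-off 915 optimizes, consider the special case of equal sublots on m identical machines with a setup charged to every sublot at every machine. Under those assumptions, which are narrower than the paper's quadratic-programming model, each operation takes S + Qt/s and the permutation-flow-shop makespan has the closed form (m + s − 1)(S + Qt/s), so a short enumeration exposes the interior optimum: more sublots increase overlap between stages but multiply setups. All numbers below are illustrative.

def makespan_equal_sublots(q, t, m, s, setup):
    # Job of q units, per-unit run time t on each of m identical machines,
    # split into s equal transfer sublots, each paying one setup per machine.
    # With identical operation times, makespan = (m + s - 1) * operation time.
    return (m + s - 1) * (setup + q * t / s)

q, t, m, setup = 60, 1.0, 4, 10.0
for s in range(1, 9):
    print(s, round(makespan_equal_sublots(q, t, m, s, setup), 1))
# s = 1 forgoes all overlap; very large s drowns in setups; the best number
# of sublots sits in between and shifts with the setup-to-run-time ratio.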
916.
This paper considers the application of cellular manufacturing (CM) to batch production by exploring the shop floor performance trade‐offs associated with shops employing different levels of CM. The literature has alluded to a continuum between the purely departmentalized job shop and the completely cellular shop, yet the vast majority of CM research sits at the extremes of this continuum. Here, we probe the performance relationships by comparing shops at different stages of CM adoption. Specifically, we begin with a hypothetical departmentalized shop from the CM literature and, in a stepwise fashion, form independent cells; at each stage, flow time and tardiness performance are recorded. Modeling results indicate that, depending on shop conditions and managerial objectives, superior shop performance may be recorded by the job shop, by the cellular shop, or by one of the shops between these extreme points. In fact, under certain conditions, shops that contain partially formed cells perform better than shops with completely formed cells. Additional results demonstrate that, to achieve excellent performance, managers investigating specific layouts need to pay especially close attention to changes in machine utilization as machine groups are partitioned into cells.
917.
A dynamic modeling approach to the management of multiechelon, multi-indenture inventory systems with repair is presented. The structure of the model follows that of the U.S. Air Force Reparable Asset Management System. The model is used as a vehicle to discuss the structure of typical multiechelon systems and to illustrate the advantages of a dynamic modeling approach to such systems.
918.
A new sequencing method for mixed‐model assembly lines is developed and tested. This method, called the Evolutionary Production Sequencer (EPS), is designed to maximize production on an assembly line. The performance of EPS is evaluated using three measures: the minimum cycle time necessary to achieve 100% completion without rework, the percentage of items completed without rework for a given cycle time, and sequence “smoothness.” The first two measures are based on a simulated production system, whose characteristics, such as assembly line station length, assembly time, and cycle time, are varied to better gauge the performance of EPS. More fundamental variation is studied by modeling two production systems: in one set of tests the system consists of an assembly line in isolation (i.e., a single‐level system), while in another the production system consists of the assembly line plus the fabrication system supplying components to the line (i.e., a two‐level system). Sequence smoothness is measured by the mean absolute deviation (MAD) between actual component usage and ideal usage at each point in the production sequence. The performance of EPS is compared with those of the well‐known assembly line sequencing techniques of Miltenburg (1989), Okamura and Yamashina (1979), and Yano and Rachamadugu (1991). EPS performed very well under all test conditions when the criterion of success was either the minimum cycle time necessary to achieve 100% production without rework or the percentage of items completed without rework for a given cycle time. When MAD was the criterion of success, EPS was inferior to the Miltenburg heuristic but better than the other two production‐oriented techniques.
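The MAD smoothness measure in 918 follows the level-scheduling convention of Miltenburg (1989): after k positions the ideal cumulative usage of item i is k·d_i/D, and the score averages the absolute deviations of actual from ideal usage over all items and positions. The sketch below computes it under that convention; the paper's exact variant may differ in detail.

import numpy as np

def mad_smoothness(sequence, demands):
    # sequence: item indices in production order; demands[i] = d_i, with
    # D = sum(demands) = len(sequence).  Ideal cumulative usage of item i
    # at position k is k * d_i / D (Miltenburg-style level scheduling).
    d = np.asarray(demands, dtype=float)
    rates = d / d.sum()
    actual = np.zeros(len(d))
    deviations = []
    for k, item in enumerate(sequence, start=1):
        actual[item] += 1.0
        deviations.append(np.abs(actual - k * rates).mean())
    return float(np.mean(deviations))

# Demands 3:2:1 over six positions: the level sequence scores lower
# (smoother) than the batched sequence.
print(mad_smoothness([0, 1, 0, 2, 0, 1], [3, 2, 1]))   # ~0.26
print(mad_smoothness([0, 0, 0, 1, 1, 2], [3, 2, 1]))   # ~0.54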
919.
Research on sequencing rules in simple job shops has proliferated, but there has been no corresponding proliferation of research evaluating similar sequencing rules in more complex assembly job shops. In a simple job shop all operations are performed serially, whereas an assembly shop encompasses both serial and parallel operations. Because of this added complexity, results on the performance of sequencing rules in simple job shops cannot be assumed to carry over to assembly shops. In this paper, 11 sequencing rules (some common to simple job shops and some designed specifically for assembly shops) are evaluated through a simulation analysis of a hypothetical assembly shop. The simulation results are analyzed using an ANOVA procedure that identifies significant differences across several performance measures. A sensitivity analysis is also performed to determine the effect of job structure on the performance of the sequencing rules.
920.
This paper addresses the problem of sequencing in decentralized kanban-controlled flow shops. The kanban production control system considered uses two card types and a constant withdrawal period. The flow shops are decentralized in the sense that sequencing decisions are made at the local workstation level rather than by a centralized scheduling system; further, there are no material requirements planning (MRP)-generated due dates available to drive dispatching rules such as earliest due date, slack, and critical ratio. Local sequencing rules suited to this decentralized kanban production-control environment are proposed and tested in a simulation experiment. These rules are designed so that they can be implemented with only the information available at the workstation level. Example sequencing problems are used to show why the shortest-processing-time rule minimizes neither average work-in-process inventory nor average finished-goods withdrawal-kanban waiting time. Further, it is shown how workstation supervisors can use the withdrawal period, in addition to the number of kanbans, to manage work-in-process inventories.
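The last point of 920, that the kanban count and the withdrawal period jointly bound work-in-process, can be seen with the classic textbook kanban formula k = D(Tw + Tp)(1 + α)/c, in which the withdrawal period enters through the waiting time Tw and station WIP can never exceed k containers of c units. This is a minimal back-of-the-envelope sketch using that standard formula, not the paper's simulation model, and all numbers are illustrative.

import math

def num_kanbans(demand_rate, wait_time, process_time, container, safety=0.10):
    # Classic formula: k = D * (Tw + Tp) * (1 + alpha) / c.  A longer
    # withdrawal period lengthens Tw and so raises the required k.
    return math.ceil(demand_rate * (wait_time + process_time)
                     * (1 + safety) / container)

demand, t_process, container = 20.0, 0.5, 10   # units/hr, hr, units/container
for t_wait in (0.5, 1.0, 2.0):                 # longer withdrawal period
    k = num_kanbans(demand, t_wait, t_process, container)
    print(t_wait, k, "WIP cap =", k * container, "units")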