71.
Collapsibility with respect to a measure of association implies that the measure can be obtained from the marginal model. We first discuss model collapsibility and collapsibility with respect to regression coefficients for linear regression models. For parallel regression models, we give simple, alternative proofs of some known results and also obtain certain new ones. For random coefficient regression models, we define (average) A-collapsibility and obtain conditions under which it holds. We also consider Poisson regression and logistic regression models, and derive conditions for collapsibility and A-collapsibility, respectively. These results generalize several results available in the literature. Suitable examples are discussed.
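The linear-model case can be illustrated numerically: when an omitted covariate is independent of the regressor of interest, the marginal and conditional regression coefficients coincide, which is the simplest instance of collapsibility with respect to a regression coefficient. A minimal simulation sketch (synthetic data, not from the paper; the paper's conditions are more general):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)
z = rng.normal(size=n)          # independent of x, so the model is collapsible over z
y = 2.0 * x + 1.5 * z + rng.normal(size=n)

# Conditional coefficient of x: regress y on (1, x, z)
X_full = np.column_stack([np.ones(n), x, z])
beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

# Marginal coefficient of x: regress y on (1, x) alone
X_marg = np.column_stack([np.ones(n), x])
beta_marg = np.linalg.lstsq(X_marg, y, rcond=None)[0]

print(beta_full[1], beta_marg[1])  # both close to the true value 2.0
```

If `z` were instead correlated with `x`, the marginal coefficient would absorb part of `z`'s effect and collapsibility would fail.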
72.
We study the problem of optimally sequencing the creation of elements in a software project to optimize a time-weighted value objective. As elements are created, certain parts of the system (referred to as "groups") become functional and provide value, even though the entire system has not been completed. The main tradeoff in the sequencing problem arises from elements that belong to multiple groups. On the one hand, creating groups with common elements early in the project reduces the effort required to build later functionality that uses these elements. On the other hand, the early creation of such groups can delay the release of some critical functionality. We formulate the element sequencing problem and propose a heuristic to solve it. This heuristic is compared against a lower bound developed for the problem. Next, we study a more general version of the element sequencing problem in which an element requires some effort to be made reusable. When a reusable element is used in another group, additional effort is needed to specialize the element to work as desired in that group. We study reuse decisions under a weighted completion time objective (i.e., the sum of the completion time of each group weighted by its value is minimized), and show how these decisions differ from those under a traditional makespan objective (i.e., only the final completion time of the project is minimized). A variety of analytical and numerical results are presented. The model is also implemented on data obtained from a real software project. A key finding of this work is that the optimal effort on reuse is never higher (and is typically lower) under a weighted completion time objective. This finding has implications for managing reuse in projects in which user value influences the order in which functionality is created.
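The weighted completion time objective can be made concrete on a toy instance. The sketch below (group names, values, and unit-effort elements are all hypothetical, and it enumerates sequences rather than using the paper's heuristic) shows how shared elements make one ordering dominate another:

```python
from itertools import permutations

# Hypothetical data: each group needs a set of elements; each element
# takes one unit of effort, is built once, and is then shared by later groups.
groups = {"login": {"auth", "ui"}, "report": {"auth", "charts"}, "admin": {"ui", "audit"}}
value = {"login": 5.0, "report": 3.0, "admin": 2.0}

def weighted_completion(order):
    built, t, total = set(), 0, 0.0
    for g in order:
        t += len(groups[g] - built)   # effort only for elements not yet created
        built |= groups[g]
        total += value[g] * t         # a group completes when its last element exists
    return total

best = min(permutations(groups), key=weighted_completion)
print(best, weighted_completion(best))  # ('login', 'report', 'admin') 27.0
```

Here the high-value group goes first even though a different order would let later groups inherit more shared elements, which is exactly the tradeoff the abstract describes.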
73.
Rationality is a fundamental concept in several models of IT planning and implementation. Though the importance of following rational processes in making strategic IT decisions is well acknowledged, there is little understanding of why discrepancies occur in the IT decision-making process and what factors affect rationality. Drawing upon structural and resource-based perspectives of strategy, this study examines the influence of shared domain knowledge and IT unit structure on rationality in strategic IT decisions. Data were gathered from 223 senior IT executives using a survey to examine the relationships among the research constructs. The results suggest a positive impact of shared domain knowledge and formalization of IT unit structure on rationality in strategic IT decisions. Further, a highly centralized IT unit structure was found to negatively influence shared domain knowledge, whereas formalization of IT structure positively influenced it. The implications of the findings for research and practice are presented.
74.
This study seeks to highlight construct measurement issues in information systems (IS) research. It describes the normative process of construct measurement and identifies the difficult problems involved in measurement and some ways in which these difficulties may be overcome. An illustrative construct-operationalization study in the area of strategic systems outlines how the normative guidelines may be applied to IS. Some specific recommendations for IS include developing a preliminary model of the construct even if there is little previous measurement research, devoting greater attention to predictive validity because a lack of theories in IS precludes the examination of nomological validity, verifying the assumptions underlying the computation of an overall index, and examining the measurement properties of the index.
75.
In this paper, Virtual Cellular Manufacturing (VCM), an alternative approach to implementing cellular manufacturing, is investigated. VCM combines the setup efficiency typically obtained by Group Technology (GT) cellular manufacturing (CM) systems with the routing flexibility of a job shop. Unlike traditional CM systems in which the shop is physically designed as a series of cells, family-based scheduling criteria are used to form logical cells within a shop using a process layout. The result is the formation of temporary, virtual cells as opposed to the more traditional, permanent, physical cells present in GT systems. Virtual cells allow the shop to be more responsive to changes in demand and workload patterns. Production using VCM is compared to production using traditional cellular and job shop approaches. Results indicate that VCM yields significantly better flow time and due date performance over a wide range of common operating conditions, as well as being more robust to demand variability.
76.
This paper presents network models to determine the optimal replacement policy for a fleet of vehicles over a finite planning horizon. First, a minimum-cost flow model is developed to determine the optimal replacement schedule for a fleet of fixed size consisting of a single type of vehicle of various ages. The model is then extended to allow for restrictions on capital expenditures that limit the purchase of new vehicles in any time period and to allow for fluctuations in the fleet size due to planned expansion or retrenchment. Finally, a multi-commodity network model is developed for a fleet consisting of multiple vehicle types and ages.
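For a single vehicle, the replacement-scheduling network collapses to a shortest path on an acyclic graph: one node per period, and an arc (i, j) carrying the cost of buying at period i, operating through period j−1, and salvaging at j. A minimal dynamic-programming sketch under made-up cost figures (the paper's fleet-level and multi-commodity models are richer than this):

```python
# Single-vehicle replacement as a shortest path on a period network.
# All cost numbers below are illustrative, not from the paper.
PURCHASE = 10.0
def operate(age): return 2.0 + 1.0 * age           # maintenance rises with age
def salvage(age): return max(10.0 - 3.0 * age, 0.0)

T = 5  # planning horizon in periods

def arc_cost(i, j):
    keep = j - i  # buy at i, salvage at j after `keep` periods of use
    return PURCHASE + sum(operate(a) for a in range(keep)) - salvage(keep)

# Bellman recursion over the acyclic network
INF = float("inf")
dist = [0.0] + [INF] * T
pred = [None] * (T + 1)
for j in range(1, T + 1):
    for i in range(j):
        c = dist[i] + arc_cost(i, j)
        if c < dist[j]:
            dist[j], pred[j] = c, i

# Recover the replacement epochs from the predecessor chain
plan, t = [], T
while t:
    plan.append((pred[t], t))
    t = pred[t]
print(dist[T], plan[::-1])
```

With these particular costs the cheap salvage value of a one-period-old vehicle makes annual replacement optimal; steeper purchase prices would stretch the keep intervals.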
77.
Objectives To examine the prevalence of gambling and types of gambling activities in a sample of undocumented Mexican immigrants. Design Non-probability cross-sectional design. Setting New York City. Sample The 431 respondents ranged in age from 18 to 80 (mean age 32); 69.7% were male. Results More than half (53.8%) reported gambling in their lifetime, and of those most (43.9%) played scratch-and-win tickets or the lottery. In multivariate analyses men reported gambling more than women [2.13, 95% CI = (1.03, 4.38)]. The odds of gambling in their lifetime were higher among those reporting sending money to family or friends in the home country [2.65, 95% CI = (1.10, 6.38)], and those who reported 1–5 days, as compared to no days, of poor mental health in the past 30 days [2.44, 95% CI = (1.22, 4.89)]. Conversely, those who reported entering the U.S. to live after 1996 were less likely to report gambling [0.44, 95% CI = (0.22, 0.89)] as compared to those who had lived in the U.S. longer. Conclusion There is a need to further explore both the prevalence and the severity of gambling amongst the growing population of undocumented Mexican immigrants in the U.S.
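The bracketed figures above are odds ratios with Wald 95% confidence intervals. How such an interval is computed from a 2×2 table can be sketched as follows (the counts here are invented for illustration; they are not the study's data):

```python
import math

# Hypothetical 2x2 table: lifetime gambling (yes/no) by sex
a, b = 180, 120   # male:   gambled / did not
c, d = 52, 79     # female: gambled / did not

or_hat = (a * d) / (b * c)                       # sample odds ratio
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
lo = math.exp(math.log(or_hat) - 1.96 * se_log)  # Wald 95% CI bounds
hi = math.exp(math.log(or_hat) + 1.96 * se_log)
print(f"OR = {or_hat:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

An interval whose lower bound exceeds 1 (as for the male/female comparison in the abstract) indicates a statistically significant elevation in the odds at the 5% level.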
78.
In this paper we present a methodology for the study of longitudinal aspects of monetary and multi-dimensional poverty, and apply it in a multi-country comparative context. The conventional poor/non-poor dichotomy is replaced by defining poverty as a matter of degree, determined by the place of the individual in the income distribution. The same methodology facilitates the inclusion of other dimensions of deprivation into the analysis: by appropriately weighting indicators of deprivation to reflect their dispersion and correlation, we can construct measures of non-monetary deprivation in its various dimensions. An important contribution of the paper is to identify rules for the intersection and union of fuzzy sets appropriate for the study of poverty and deprivation. These rules allow us to meaningfully study the persistence of poverty and deprivation over time. We establish the consistency of the approach when applied to a time sequence of any length. We can thus study longitudinally a whole range of indicators of poverty and deprivation, from cross-sectional monetary poverty rates to multi-dimensional indicators of deprivation: in particular, we propose the new "Fuzzy At-persistent-risk-of-poverty rate", and compare it with the corresponding Laeken indicator adopted by Eurostat.
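The core idea of poverty as a matter of degree can be sketched very simply: assign each individual a membership value from their rank in the income distribution, then combine waves with a fuzzy intersection to measure persistence. The sketch below uses the plain min-intersection and simulated incomes; the paper derives refined membership functions and composite intersection/union rules for sequences of any length:

```python
import numpy as np

rng = np.random.default_rng(1)
incomes = rng.lognormal(mean=3.0, sigma=0.5, size=(1000, 2))  # 1000 people, 2 waves

def fuzzy_membership(y):
    # Degree of monetary poverty from position in the income distribution:
    # the share of individuals richer than i, so the poorest score near 1.
    ranks = y.argsort().argsort()       # 0 = poorest ... n-1 = richest
    return 1.0 - ranks / (len(y) - 1)

m = np.column_stack([fuzzy_membership(incomes[:, t]) for t in range(2)])

# Persistence across the two waves via the standard (min) fuzzy intersection
persistent = np.minimum(m[:, 0], m[:, 1])
print(m.mean(axis=0), persistent.mean())
```

Cross-sectional mean membership is 0.5 by construction here; the persistent measure is necessarily lower, since mobility between waves pulls the intersection down.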
79.
In the analysis of experiments with mixtures, quadratic models have been widely used. The optimum designs for the estimation of optimum mixing proportions in a quadratic mixture model have been studied by Pal and Mandal [Optimum designs for optimum mixtures. Statist Probab Lett. 2006;76:1369–1379] and Mandal et al. [Optimum mixture designs: a pseudo-Bayesian approach. J Ind Soc Agric Stat. 2008;62(2):174–182; Optimum mixture designs under constraints on mixing components. Statist Appl. 2008;6(1&2) (New Series):189–205], using a pseudo-Bayesian approach. In this paper, a similar approach is employed to obtain the A-optimal designs for the estimation of optimum proportions in an additive quadratic mixture model, proposed by Darroch and Waller [Additivity and interaction in three-component experiments with mixture. Biometrika. 1985;72:153–163], when the number of components is 3, 4 or 5. It is shown that the vertices of the simplex are necessarily support points of the optimum design, and the other support points are barycentres of depth at most 2.
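The A-criterion being minimized here is the trace of the inverse information matrix of the design. A rough numerical sketch for q = 3, using an additive quadratic response of the Darroch–Waller form E(y) = Σβᵢxᵢ + Σγᵢxᵢ(1−xᵢ) (this parameterization and the weight values below are illustrative assumptions, and the comparison is not the paper's optimization):

```python
import numpy as np
from itertools import combinations

# Additive quadratic mixture model, assumed form for q = 3 components:
# E(y) = sum_i beta_i x_i + sum_i gamma_i x_i (1 - x_i)
def model_row(x):
    x = np.asarray(x, float)
    return np.concatenate([x, x * (1 - x)])

q = 3
vertices = list(np.eye(q))                      # barycentres of depth 1
midpoints = [(np.eye(q)[i] + np.eye(q)[j]) / 2  # barycentres of depth 2
             for i, j in combinations(range(q), 2)]
pts = vertices + midpoints                      # support claimed in the abstract

def a_value(points, weights):
    # Information matrix of the weighted design; A-criterion = trace(M^-1)
    M = sum(w * np.outer(model_row(p), model_row(p))
            for p, w in zip(points, weights))
    return np.trace(np.linalg.inv(M))

uniform = [1 / len(pts)] * len(pts)
skewed = [0.2] * 3 + [0.4 / 3] * 3   # hypothetical extra mass on the vertices
print(a_value(pts, uniform), a_value(pts, skewed))
```

Minimizing `a_value` over the weight simplex (e.g., with `scipy.optimize`) would recover the optimal weights on these support points; the point of the sketch is only that the criterion is a simple function of the design measure.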
80.
This paper presents a scheduling algorithm for flowshop problems with a common job sequence on all machines, using makespan as its criterion. Initially, it chooses a preferred sequence by scanning the processing-times matrix and making a few calculations. The makespan of the preferred job sequence is then further reduced by an improvement routine that allows interchanges between adjacent jobs. Solutions of 1200 problems are compared with the best solutions previously reported for corresponding problem sizes in the Campbell-Dudek-Smith (C-D-S) paper [1]. This algorithm offers up to 1% average improvement in makespan on nearly 50% of the problem sets over the results of the existing algorithms, and its computational time requirements are about one-fifth those of the C-D-S algorithm.
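The two ingredients of such an algorithm, evaluating the makespan of a permutation schedule and improving it by adjacent-job interchanges, can be sketched as follows (the processing times are invented, and this is a generic local search, not the paper's specific seed-sequence construction):

```python
# Permutation flowshop: the same job order is used on every machine.
def makespan(seq, p):
    # p[j][m] = processing time of job j on machine m
    machines = len(p[0])
    finish = [0.0] * machines
    for j in seq:
        for m in range(machines):
            # A job starts on machine m when both the machine is free
            # and the job has finished on machine m-1.
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + p[j][m]
    return finish[-1]

def improve(seq, p):
    # Adjacent pairwise interchange: keep swapping neighbouring jobs
    # while any swap reduces the makespan.
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            swapped = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if makespan(swapped, p) < makespan(seq, p):
                seq, improved = swapped, True
    return seq

p = [[3, 6, 2], [5, 1, 4], [2, 4, 6], [4, 3, 1]]  # 4 jobs x 3 machines
start = [0, 1, 2, 3]
best = improve(start, p)
print(best, makespan(best, p))
```

The improvement routine only guarantees a local optimum under adjacent interchanges, which is why the quality of the initial preferred sequence matters so much in heuristics of this family.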