Search results: 28 articles (26 subscription full text, 2 free).
By discipline: Management (17), Demography (2), Sociology (4), Statistics (5).
By year: 2022 (1), 2019 (1), 2018 (1), 2017 (1), 2016 (1), 2015 (1), 2014 (3), 2013 (6), 2012 (1), 2010 (6), 2008 (1), 2007 (1), 2006 (1), 2005 (1), 2001 (1), 1993 (1).
2.
Industrial welfare history presents important challenges to developmental state theories of “late” industrialization. This article expands the debate by examining how nation-states create statutory welfare, attending to institutional variety beyond markets. It is simplistic to posit linear growth of national welfare, or states autonomously regulating markets to achieve risk mitigation. I contend that welfare institutions emerge from the state’s essential conflict and collaboration with various alternate institutions in cities and regions. Drawing on histories of Europe, India, and Karnataka, I propose a typology of place-based, work-based, and workplace-based welfare evolving at differential rates. Although economic imperatives exist to expand local risk pools, it is precisely this alternate institutional diversity that makes late industrial nation-states unable or unwilling to do so. The result is institutionally “thin,” top-down industrial welfare. Ultimately, theories that rely heavily on the histories of small nations, homogeneous nations, or city-states provide weak tests of the economics of industrial welfare.
3.
The objective of this article is to evaluate the performance of the COM-Poisson GLM for analyzing crash data exhibiting underdispersion (conditional on the mean). The COM-Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subject to either over- or underdispersion. Over the last year, the COM-Poisson GLM has been evaluated in the context of crash data analysis, and it has been shown to perform as well as the Poisson-gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM-Poisson models were estimated using crash data collected at 162 railway-highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crashes to various explanatory variables are estimated. The modeling results were compared to those produced by the Poisson and gamma probability models documented in a previously published study. The results show that the COM-Poisson GLM can handle crash data whose modeling output shows signs of underdispersion. They also show that the proposed model provides better statistical performance than the gamma probability and traditional Poisson models, at least for this data set.
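The dispersion behavior the abstract relies on can be seen directly from the COM-Poisson pmf, P(Y=y) ∝ λ^y / (y!)^ν: ν = 1 recovers the Poisson, ν > 1 gives underdispersion, ν < 1 overdispersion. A minimal sketch (the parameter values are illustrative, not from the paper's Korean data):

```python
import math

def com_poisson_pmf(lam, nu, max_y=100):
    """COM-Poisson pmf: P(Y=y) proportional to lam^y / (y!)^nu, normalized
    over 0..max_y-1. Log-space weights avoid overflow in (y!)^nu."""
    log_w = [y * math.log(lam) - nu * math.lgamma(y + 1) for y in range(max_y)]
    top = max(log_w)
    w = [math.exp(lw - top) for lw in log_w]
    z = sum(w)
    return [x / z for x in w]

def mean_var(pmf):
    mean = sum(y * p for y, p in enumerate(pmf))
    var = sum((y - mean) ** 2 * p for y, p in enumerate(pmf))
    return mean, var

# nu = 1 recovers the Poisson (variance = mean); nu > 1 gives
# underdispersion (variance < mean), the regime seen in the crossing data.
m_poisson, v_poisson = mean_var(com_poisson_pmf(lam=2.0, nu=1.0))
m_under, v_under = mean_var(com_poisson_pmf(lam=2.0, nu=2.0))
```

The GLM in the paper then links λ (and possibly ν) to covariates; the sketch only shows the distributional mechanics.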
4.
Discrete-choice models are widely used to model consumer purchase behavior in assortment optimization and revenue management. In many applications, each customer segment is associated with a consideration set representing the products that customers in that segment consider for purchase. The firm must decide what assortment to offer at each point in time without being able to identify a customer's segment. A linear program called the Choice-based Deterministic Linear Program (CDLP) has been proposed to determine these offer sets. Unfortunately, its size grows exponentially in the number of products, and it is NP-hard to solve when the segments' consideration sets overlap. The Segment-based Deterministic Concave Program with additional consistency equalities (SDCP+) is an approximation of CDLP that provides an upper bound on CDLP's optimal objective value. SDCP+ can be solved in a fraction of the time required to solve CDLP and often achieves the same optimal objective value. This raises the question of under what conditions one can guarantee the equivalence of CDLP and SDCP+. In this study, we obtain a structural result to this end: if the segment consideration sets overlap with a certain tree structure, or if they are fully nested, CDLP can be equivalently replaced by SDCP+. We give a number of examples from the literature where this tree structure arises naturally in modeling customer behavior.
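The overlap structure can be illustrated with a small check. As a simplifying assumption (not necessarily the paper's exact graph-theoretic condition), the sketch below tests whether a family of consideration sets is laminar, i.e., any two sets are disjoint or nested; fully nested families are a special case of this tree-like hierarchy:

```python
def is_laminar(consideration_sets):
    """True when every pair of sets is either disjoint or nested, so the
    family forms a tree-like hierarchy under set inclusion."""
    sets = [frozenset(s) for s in consideration_sets]
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            a, b = sets[i], sets[j]
            # A violating pair overlaps without either containing the other.
            if (a & b) and not (a <= b or b <= a):
                return False
    return True
```

For example, `[{1, 2, 3}, {1, 2}, {4, 5}]` passes, while `[{1, 2}, {2, 3}]` fails because the two sets overlap without nesting.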
5.
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. The objective of this study is to examine the applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulty handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. And, although this is not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the new model.
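Under the standard parameterization, the hyper-Poisson pmf is P(Y=y) ∝ λ^y / (β)_y, where (β)_y = Γ(β+y)/Γ(β) is the Pochhammer symbol; β = 1 recovers the Poisson. A minimal sketch of both dispersion regimes (parameter values illustrative, not from the Toronto or Korean data):

```python
import math

def hyper_poisson_pmf(lam, beta, max_y=200):
    """Hyper-Poisson pmf: P(Y=y) proportional to lam^y / (beta)_y, with
    (beta)_y = Gamma(beta+y)/Gamma(beta), normalized over 0..max_y-1."""
    log_w = [y * math.log(lam) - (math.lgamma(beta + y) - math.lgamma(beta))
             for y in range(max_y)]
    top = max(log_w)
    w = [math.exp(lw - top) for lw in log_w]
    z = sum(w)
    return [x / z for x in w]

def mean_var(pmf):
    mean = sum(y * p for y, p in enumerate(pmf))
    var = sum((y - mean) ** 2 * p for y, p in enumerate(pmf))
    return mean, var

# beta > 1 yields overdispersion (variance > mean); beta < 1 yields
# underdispersion (variance < mean); beta = 1 is exactly Poisson.
m_over, v_over = mean_var(hyper_poisson_pmf(lam=1.0, beta=2.0))
m_under, v_under = mean_var(hyper_poisson_pmf(lam=1.0, beta=0.5))
```

The paper's GLM additionally lets the dispersion parameter vary per observation via covariates, which the sketch does not attempt.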
7.
Interoutsourcing is a reciprocal arrangement in which the vendor is its customer's customer and the customer is its vendor's vendor. While interoutsourcing is emerging as a prominent outsourcing strategy in many industries, there have been no rigorous analytical studies of this mechanism. In this article, we analytically demonstrate the efficacy of interoutsourcing by comparing it with normal outsourcing. Our results show that, compared with normal outsourcing, interoutsourcing acts as a self-enforcing mechanism that pushes vendor firms to increase outsourcing service value. However, when there is a mismatch of outsourcing activities, a high degree of incentive based on outsourcing service value, or a high cost of capital, interoutsourcing is not preferred to normal outsourcing. We discuss these results in detail and provide managerial implications for firms involved in interoutsourcing decisions.
8.
In industrial purchasing contexts, firms often procure a set of products from the same suppliers to benefit from economies of scale and scope. These products are often at different stages of their respective product life cycles (PLCs). Firms consider multiple criteria in purchasing such products, and the relative importance of these criteria varies with the PLC stage of a given product. A firm should therefore select suppliers and choose sourcing arrangements such that product requirements across multiple criteria are satisfied over time. Extant models in the sourcing literature for evaluating and selecting suppliers for a portfolio of products have not considered this important and practical issue. This article proposes a mathematical model that addresses it, contributing to the sourcing literature by demonstrating an approach for optimally selecting suppliers and supplier bids given the relative importance of multiple criteria across multiple products over their PLCs. Applying the model to a hypothetical data set illustrates the strategic and tactical significance of these considerations.
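The core idea of stage-dependent criteria weights can be sketched with a simple weighted-sum score. Everything below — the stage names, criteria, weights, and scores — is hypothetical illustration, not the paper's optimization model, which selects suppliers and bids jointly across a portfolio:

```python
# Hypothetical criteria weights by PLC stage: early stages emphasize
# quality and flexibility; maturity emphasizes cost.
STAGE_WEIGHTS = {
    "introduction": {"quality": 0.5, "flexibility": 0.3, "cost": 0.2},
    "growth":       {"quality": 0.3, "flexibility": 0.3, "cost": 0.4},
    "maturity":     {"quality": 0.2, "flexibility": 0.1, "cost": 0.7},
}

def best_supplier(stage, supplier_scores):
    """Pick the supplier with the highest weighted score for a product at
    the given PLC stage. supplier_scores: {name: {criterion: score}}."""
    weights = STAGE_WEIGHTS[stage]
    def total(scores):
        return sum(weights[c] * scores[c] for c in weights)
    return max(supplier_scores, key=lambda s: total(supplier_scores[s]))
```

With a high-quality but expensive supplier and a cheap but lower-quality one, the preferred supplier flips between the introduction and maturity stages, which is the phenomenon the paper's model captures jointly across products.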
9.
Array-based comparative genomic hybridization (aCGH) is a high-resolution, high-throughput technique for studying the genetic basis of cancer. The resulting data consist of log fluorescence ratios as a function of genomic DNA location and provide a cytogenetic representation of relative DNA copy number variation. Analysis of such data typically involves estimating the underlying copy number state at each location and segmenting regions of DNA with similar copy number states. Most current methods model a single sample/array at a time and thus fail to borrow strength across multiple samples to infer shared regions of copy number aberration. We propose a hierarchical Bayesian random segmentation approach for modeling aCGH data that uses information across arrays from a common population to yield segments of shared copy number changes. These changes characterize the underlying population and allow us to compare aCGH profiles across populations to assess which regions of the genome show differential alterations. Our method, referred to as BDSAcgh (Bayesian Detection of Shared Aberrations in aCGH), is based on a unified Bayesian hierarchical model that yields probabilities of alteration states as well as probabilities of differential alteration, which correspond to local false discovery rates. We evaluate the operating characteristics of our method via simulations and an application to a lung cancer aCGH data set.
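The "borrowing strength across arrays" idea can be caricatured in a few lines. The sketch below is a naive average-and-threshold call across samples, not the BDSAcgh hierarchical model; the threshold value and labels are arbitrary illustrative choices:

```python
def shared_aberrations(profiles, threshold=0.3):
    """Naive cross-sample sketch: average the log2 ratio at each probe
    across all sample profiles and flag probes whose mean exceeds a
    magnitude threshold as shared gains/losses.
    profiles: list of equal-length lists of log2 ratios, one per array."""
    n = len(profiles)
    calls = []
    for i in range(len(profiles[0])):
        mean = sum(p[i] for p in profiles) / n
        if mean > threshold:
            calls.append("gain")
        elif mean < -threshold:
            calls.append("loss")
        else:
            calls.append("neutral")
    return calls
```

A single noisy array might miss an aberration that the cross-sample mean picks up; BDSAcgh formalizes this pooling with a hierarchical model that also yields alteration probabilities rather than hard calls.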
10.
Consumer delinquencies are a major problem for banks and other credit card issuers, which operate collection centers across the country to recover outstanding balances from delinquent accounts. Their main strategy is first to send reminder notices and, if that does not work, to telephone delinquent customers and request payment. The latter often becomes necessary, resulting in high collection costs. Automated dialers place the calls, and when a call goes through, it is directed to one of several hundred associates at computer workstations. In this operation it is important to reach the account holder in order to discuss payment options; simply getting someone on the line is not sufficient, because such calls require follow-up calls. Efficient collection aims to maximize dollars collected while minimizing costs, which generally translates to making a "right party contact" (RPC) in the minimum number of attempts. We developed and tested an algorithm that increased RPC rates by over 10%. This increase translates to annual savings of several million dollars for an average credit card company. Although the focus of our paper is collections, the methodology is equally applicable to improving telemarketing efficiency.
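The abstract does not disclose the algorithm itself. As a hedged illustration of the RPC objective only, the sketch below picks the call hour with the highest Laplace-smoothed empirical RPC rate for an account; the data structure and the smoothing choice are assumptions, not the paper's method:

```python
def best_call_hour(history):
    """Choose the hour of day with the highest smoothed right-party-contact
    rate. history: {hour: (rpc_count, attempts)}. Laplace smoothing
    (add-one on successes, add-two on trials) avoids overrating hours
    with very few attempts."""
    def smoothed_rate(stats):
        rpc, attempts = stats
        return (rpc + 1) / (attempts + 2)
    return max(history, key=lambda h: smoothed_rate(history[h]))
```

For example, an hour with 10 RPCs in 40 attempts would be preferred over one with 8 RPCs in 80 attempts, steering dialer capacity toward attempts most likely to reach the account holder.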