Full text (fee-based): 17 articles
Free: 1 article
By discipline: Management 7, Sociology 3, Statistics 8
By year: 2023 (1), 2021 (1), 2019 (1), 2016 (1), 2015 (1), 2013 (2), 2012 (1), 2011 (1), 2010 (1), 2003 (1), 1999 (1), 1993 (1), 1992 (1), 1990 (1), 1989 (1), 1987 (2)
Sorted results: 18 query results in total; search time 312 ms.
1.
2.
Aris Accornero, LABOUR, 1990, 4(1): 59-96
Abstract. The aim of this paper is to outline the main transformations in the firm's organization, with special reference to the consequences of the flexibility issue for work in a post-Taylor-Fordist age. In examining the new features of jobs in the labour market, the paper underlines the ongoing changes in skills and careers on both the demand and the supply side. Moreover, the author sketches some important innovations in remuneration systems at the firm level, which compel unions to give uneasy, strategic answers in terms of industrial relations.
3.
Cox’s proportional hazards model is the most common way to analyze survival data. The model can be extended to include a ridge penalty in the presence of collinearity, or in cases where a very large number of coefficients (e.g. with microarray data) has to be estimated. To maximize the penalized likelihood, optimal weights of the ridge penalty have to be obtained. However, there is no definite rule for choosing the penalty weight. One approach maximizes the leave-one-out cross-validated partial likelihood over the weights, but this is time-consuming and computationally expensive, especially in large datasets. We suggest modelling survival data through a Poisson model. Using this approach, the log-likelihood of the Poisson model is maximized by standard iteratively weighted least squares. We illustrate this simple approach, which includes smoothing of the hazard function, and then move on to include a ridge term in the likelihood. We then maximize the likelihood by considering tools from generalized linear mixed models. We show that the optimal value of the penalty is found simply by computing the hat matrix of the system of linear equations and dividing its trace by a product of the estimated coefficients.
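The computational core described in this abstract is a ridge-penalized Poisson likelihood maximized by iteratively weighted least squares, with the trace of the hat matrix entering the choice of penalty weight. The sketch below illustrates that building block on simulated data; the simulated data, the variable names, and the fixed penalty `lam` are assumptions for illustration, not the authors' code or their Cox-to-Poisson data expansion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([0.5, -0.3, 0.0, 0.2, 0.0])
y = rng.poisson(np.exp(X @ beta_true))

def ridge_poisson_irls(X, y, lam, n_iter=50, tol=1e-8):
    """Fit a Poisson log-linear model with a ridge penalty by IWLS."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        W = mu                       # Poisson IWLS weights under the log link
        z = eta + (y - mu) / mu      # working response
        A = X.T @ (W[:, None] * X) + lam * np.eye(p)
        beta_new = np.linalg.solve(A, X.T @ (W * z))
        converged = np.max(np.abs(beta_new - beta)) < tol
        beta = beta_new
        if converged:
            break
    # Hat matrix of the final weighted least-squares system; its trace is the
    # effective number of parameters used when tuning the penalty weight.
    mu = np.exp(X @ beta)
    A = X.T @ (mu[:, None] * X) + lam * np.eye(p)
    XW = np.sqrt(mu)[:, None] * X
    edf = np.trace(XW @ np.linalg.solve(A, XW.T))
    return beta, edf

beta_hat, edf = ridge_poisson_irls(X, y, lam=1.0)
print("coefficients:", np.round(beta_hat, 3), " effective df:", round(edf, 2))
```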
4.
ABC inventory classifications are widely used in practice, with demand value and demand volume as the most common ranking criteria. The standard approach in ABC applications is to set the same service level for all stock keeping units (SKUs) in a class. In this paper, we show (for three large real-life datasets) that applying either demand value or demand volume as the ABC ranking criterion, with fixed service levels per class, leads to solutions that are far from cost optimal. An alternative criterion proposed by Zhang et al. performs much better, but is still considerably outperformed by a new criterion proposed in this paper. The new criterion is also more general in that it can take the criticality of SKUs into account. Managerial insights are obtained into which class should have the highest/lowest service level, a topic that has been disputed in the literature.
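For context, here is a minimal sketch of the standard practice the abstract critiques: rank SKUs by annual demand value, cut the ranking into A/B/C classes, and attach one fixed service level per class. The 80%/95% cut-offs, the service levels, and the simulated demand data are illustrative assumptions; this is not the paper's proposed criterion, nor the criterion of Zhang et al.

```python
import numpy as np

rng = np.random.default_rng(1)
n_skus = 1000
demand_volume = rng.lognormal(mean=3.0, sigma=1.0, size=n_skus)  # units/year
unit_price = rng.lognormal(mean=1.0, sigma=0.8, size=n_skus)
demand_value = demand_volume * unit_price

order = np.argsort(demand_value)[::-1]            # rank SKUs by demand value, descending
cum_share = np.cumsum(demand_value[order]) / demand_value.sum()

classes = np.empty(n_skus, dtype="<U1")
classes[order[cum_share <= 0.80]] = "A"           # top SKUs up to 80% of value
classes[order[(cum_share > 0.80) & (cum_share <= 0.95)]] = "B"
classes[order[cum_share > 0.95]] = "C"

service_level = {"A": 0.98, "B": 0.95, "C": 0.90}  # fixed per class (assumed values)
for c in "ABC":
    print(c, (classes == c).sum(), "SKUs, target service level", service_level[c])
```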
5.
6.
Given a set of N sequences, the Multiple Sequence Alignment problem is to align these N sequences, possibly with gaps, so as to bring out the best commonality of the N sequences. The quality of the alignment is usually measured by penalizing mismatches and gaps and rewarding matches with appropriate weight functions. For larger values of N, however, additional constraints are required to give meaningful alignments. We identify a user-controlled parameter, an alignment number K (2 ≤ K ≤ N): this additional requirement constrains the alignment to have at least K sequences agree on a character, whenever possible, in the alignment. We identify a natural optimization problem for this approach, called the K-MSA problem. We show that the problem is MAX-SNP hard. We give a natural extension of this problem that incorporates biological relevance by using motifs (common patterns in the sequences) and give an approximation algorithm for this problem in terms of the motifs in the data. MUSCA is an implementation of this approach, and our experimental results indicate that this approach is efficient, particularly on large numbers of long sequences, and gives good alignments when tested on biological data such as DNA and protein sequences.
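The constraint at the heart of the K-MSA formulation is column-level agreement: whenever possible, at least K sequences should share the same character in an alignment column. Below is a minimal sketch of that check on a toy alignment; the helper names and example strings are assumptions for illustration, and this is not the MUSCA implementation.

```python
from collections import Counter

def column_agreement(column, k):
    """True if at least k non-gap characters in the column are identical."""
    counts = Counter(c for c in column if c != "-")
    return bool(counts) and max(counts.values()) >= k

def k_agreement_fraction(alignment, k):
    """Fraction of alignment columns in which at least k sequences agree."""
    columns = list(zip(*alignment))   # alignment: equal-length aligned strings
    agreeing = [column_agreement(col, k) for col in columns]
    return sum(agreeing) / len(agreeing)

# Toy alignment of 4 sequences (illustrative only).
alignment = [
    "ACGT-ACG",
    "ACGTTAC-",
    "A-GTTACG",
    "TCGT-TCG",
]
print(k_agreement_fraction(alignment, k=3))
```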
7.
The OECD's Paris Declaration (2005) discouraged donor project implementation units (PIUs) that are not integrated into government. But little is known about the behaviour and effects of PIUs that are integrated into government. Are they efficient? Do they support country systems? This article responds to these questions using research comparing two streams of implementation of school investments in Malaysia's Eighth Plan 2001–2005, one via government alone and the other via a World Bank PIU integrated into government. It emphasises the conflicts that can exist within country systems, and how these affect the choices a donor faces in seeking efficient implementation of a project while supporting country systems.
8.
Statistical Methods & Applications - The current literature views Simpson’s paradox as a probabilistic conundrum by taking the premises (probabilities/parameters/frequencies) as known....
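Since the abstract concerns Simpson's paradox, a small self-contained numerical illustration may help. It uses the classic kidney-stone counts (well-known illustrative values, not data from the paper): treatment A has the higher recovery rate within each severity subgroup, yet treatment B looks better once the subgroups are pooled.

```python
# Simpson's paradox on illustrative counts: (recovered, total) per treatment.
groups = {
    "mild":   ((81, 87),   (234, 270)),   # (treatment A, treatment B)
    "severe": ((192, 263), (55, 80)),
}

totals = {"A": [0, 0], "B": [0, 0]}
for name, ((ra, na), (rb, nb)) in groups.items():
    print(f"{name:>6}:  A = {ra / na:.2f}   B = {rb / nb:.2f}")
    totals["A"][0] += ra; totals["A"][1] += na
    totals["B"][0] += rb; totals["B"][1] += nb

# Pooling reverses the comparison: B now appears better than A.
print(f"pooled:  A = {totals['A'][0] / totals['A'][1]:.2f}   "
      f"B = {totals['B'][0] / totals['B'][1]:.2f}")
```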
9.
10.
This paper illustrates a new approach to the statistical modeling of non-linear dependence and leptokurtosis in exchange rate data. The Student's t autoregressive model with dynamic heteroskedasticity (STAR) of Spanos (1992) is shown to provide a parsimonious and statistically adequate representation of the probabilistic information in exchange rate data. For the STAR model, volatility predictions are formed via a sequentially updated weighting scheme which uses the entire past history of the series. The estimated STAR models are shown to statistically dominate alternative ARCH-type formulations and suggest that volatility predictions are not necessarily as large or as variable as other models indicate.
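The abstract contrasts sequentially updated volatility predictions that use the whole past history of the series with ARCH-type recursions driven mainly by the most recent squared shock. The toy sketch below illustrates that contrast on simulated heavy-tailed returns; it is a schematic comparison with assumed parameters, not Spanos's STAR model or the paper's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
returns = 0.2 * rng.standard_t(df=5, size=T)    # heavy-tailed toy "returns"

# ARCH(1)-style recursion with assumed parameters (omega, alpha): the
# conditional variance reacts strongly to the latest squared shock.
omega, alpha = 0.02, 0.3
arch_var = np.empty(T)
arch_var[0] = returns.var()
for t in range(1, T):
    arch_var[t] = omega + alpha * returns[t - 1] ** 2

# Sequentially updated estimate using all past observations (expanding window):
# each prediction weighs the entire history, so it evolves far more smoothly.
past_var = np.array([returns[: t + 1].var() for t in range(T)])

print("variability of ARCH-style predictions:   ", round(arch_var.std(), 4))
print("variability of full-history predictions: ", round(past_var.std(), 4))
```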