11 search results found (search time: 31 ms)
1.
In the pursuit of faster product development, product design teams are a growing phenomenon in many organizations. To be successful, these teams must be composed of people who work well together. However, despite the benefit of selecting the optimal combination of team members, this topic has received little attention. Personality has been identified as a potentially helpful selection variable in determining optimal team composition. This study examines the relationships between the 'Big Five' personality factors (Conscientiousness, Extraversion, Neuroticism, Agreeableness, and Openness to Experience) and objective team performance for three-member product design teams. In addition, we investigated whether personality contributes incrementally to the variance in team performance beyond that accounted for by established selection measures such as general cognitive ability. Even in the short duration of the study, it became apparent that some teams were capable of success and some were not. Successful teams were characterized by higher levels of general cognitive ability, higher extraversion, higher agreeableness, and lower neuroticism than their unsuccessful counterparts. In successful teams, the heterogeneity of conscientiousness was negatively related to increments in product performance. Implications for the selection of product design teams and future directions for research are discussed.
3.
Most approaches to applying knowledge-based techniques to data analysis concentrate on context-independent statistical support. EXPLORA, by contrast, was developed for subject-specific interpretation of the contents of the data to be analyzed (content interpretation). Its knowledge base therefore also includes the objects and semantic relations of the real system that produces the data. In this paper we describe the functional model representing the process of content interpretation, summarize the software architecture of the system, and give some examples of its application by pilot users in survey analysis. EXPLORA addresses applications in which data are produced regularly and must be analyzed routinely. The system systematically searches for statistical results (facts) to detect relations that might otherwise be overlooked by a human analyst. At the same time, EXPLORA helps overcome the large bulk of information that is usually still produced when such data are presented. A second process of content interpretation therefore consists of discovering messages about the data by condensing the facts. Inductive-generalization approaches developed in machine learning are used to identify common attribute values of the objects to which the facts relate. At a later stage the system searches for interesting facts by applying redundancy rules and domain-dependent selection rules. EXPLORA formulates the messages in terms of the domain, groups and orders them, and even provides flexible navigation in the fact space.
4.
In many studies a large number of variables are measured, and identifying the relevant variables that influence an outcome is an important task. Several procedures are available for variable selection. However, focusing on one model only neglects that there usually exist other, equally appropriate models. Bayesian and frequentist model averaging approaches have been proposed to improve the development of a predictor. With a larger number of variables (say, more than ten) the resulting class of models can be very large. For Bayesian model averaging, Occam's window is a popular approach to reduce the model space. As this approach may not eliminate any variables, a variable screening step was proposed for a frequentist model averaging procedure: based on the results of selected models in bootstrap samples, variables are eliminated before deriving a model averaging predictor. As a simple alternative screening procedure, backward elimination can be used. Through two examples and by means of simulation we investigate some properties of the screening step. In the simulation study we consider situations with 15 and 25 variables, respectively, of which seven have an influence on the outcome. The screening step eliminates most of the non-influential variables, but also some variables with a weak effect. Variable screening leads to more applicable models without eliminating models that are more strongly supported by the data. Furthermore, we give recommendations for important parameters of the screening step.
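A minimal sketch of such a bootstrap screening step, assuming ordinary least squares with a simple |t|-statistic selection rule (the data, the selection rule, and the 0.2 cutoff are all illustrative, not the procedure from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated setting loosely mirroring the study: 15 candidate variables,
# the first 7 with a true effect (coefficients are made up).
n, p = 200, 15
beta_true = np.r_[np.linspace(1.0, 0.3, 7), np.zeros(8)]
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.standard_normal(n)

def selected_in_sample(X, y, t_cut=2.0):
    """Fit OLS and flag variables whose |t|-statistic exceeds t_cut."""
    Xd = np.c_[np.ones(len(y)), X]
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    sigma2 = resid @ resid / (len(y) - Xd.shape[1])
    cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
    t = beta[1:] / np.sqrt(np.diag(cov)[1:])
    return np.abs(t) > t_cut

# Inclusion frequency: proportion of bootstrap samples selecting each variable.
B = 200
freq = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, n)           # bootstrap resample
    freq += selected_in_sample(X[idx], y[idx])
freq /= B

keep = freq >= 0.2                        # screening cutoff (a tuning parameter)
print("inclusion frequencies:", np.round(freq, 2))
print("kept after screening:", np.flatnonzero(keep))
```

Variables below the cutoff are dropped before the model averaging step; the cutoff trades off eliminating noise variables against losing weak effects, which is exactly the tension the abstract describes.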
5.
To be useful to clinicians, prognostic and diagnostic indices must be derived from accurate models developed using appropriate data sets. We show that fractional polynomials, which extend ordinary polynomials by including non-positive and fractional powers, may be used as the basis of such models. We describe how to fit fractional polynomials in several continuous covariates simultaneously, and we propose ways of ensuring that the resulting models are parsimonious and consistent with basic medical knowledge. The methods are applied to two breast cancer data sets, one from a prognostic factors study in patients with positive lymph nodes and the other from a study to diagnose malignant or benign tumours using colour Doppler blood flow mapping. We investigate the problems of biased parameter estimates in the final model and of overfitting, using cross-validation calibration to estimate shrinkage factors. We adopt bootstrap resampling to assess model stability. We compare our new approach with conventional modelling methods that apply stepwise variable selection to categorized covariates. We conclude that fractional polynomial methodology can be very successful in generating simple and appropriate models.
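A toy illustration of the first-degree fractional polynomial idea: the power is chosen from the conventional candidate set {-2, -1, -0.5, 0, 0.5, 1, 2, 3} (0 denoting log x) by least squares. The data are simulated, and this one-covariate sketch omits the multivariable and parsimony aspects the paper addresses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data whose true curve uses power -1; x must be positive.
powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]
x = rng.uniform(0.5, 5.0, 300)
y = 2.0 + 1.5 / x + 0.1 * rng.standard_normal(300)

def fp_transform(x, p):
    """Fractional polynomial transform; power 0 is defined as log(x)."""
    return np.log(x) if p == 0 else x ** p

def rss_for_power(x, y, p):
    """Residual sum of squares of the straight-line fit in x^p."""
    X = np.c_[np.ones(len(x)), fp_transform(x, p)]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

best = min(powers, key=lambda p: rss_for_power(x, y, p))
print("selected FP power:", best)
```

Higher-degree fractional polynomials extend this by searching over pairs of powers, with repeated powers entering as x^p and x^p·log(x).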
6.
The number of variables in a regression model is often too large, and a more parsimonious model may be preferred. Selection strategies (e.g. all-subset selection with various penalties for model complexity, or stepwise procedures) are widely used, but there are few analytical results about their properties. The problems of replication stability, model complexity, selection bias and an over-optimistic estimate of the predictive value of a model are discussed together with several proposals based on resampling methods. The methods are applied to data from a case–control study on atopic dermatitis and a clinical trial comparing two chemotherapy regimens, using a logistic regression and a Cox model. A recent proposal to use shrinkage factors to reduce the bias of parameter estimates caused by model building is extended to parameterwise shrinkage factors and is discussed as a further way to illustrate the problems of models that are too complex. The results from the resampling approaches favour greater simplicity of the final regression model.
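One ingredient of this approach, a global shrinkage factor estimated by cross-validation calibration, can be sketched as follows (simulated data; the parameterwise extension instead calibrates each term of the linear predictor separately):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: 10 candidate variables, 2 with a real effect.
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.r_[0.5, 0.5, np.zeros(p - 2)]
y = X @ beta_true + rng.standard_normal(n)

def ols(X, y):
    """Least-squares fit with intercept; returns all coefficients."""
    Xd = np.c_[np.ones(len(y)), X]
    b, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return b

# Build cross-validated linear predictors: fit on K-1 folds,
# predict the held-out fold.
K = 5
folds = np.array_split(rng.permutation(n), K)
lp, yy = [], []
for test in folds:
    train = np.setdiff1d(np.arange(n), test)
    b = ols(X[train], y[train])
    lp.append(np.c_[np.ones(len(test)), X[test]] @ b)
    yy.append(y[test])
lp, yy = np.concatenate(lp), np.concatenate(yy)

# Slope of y on the cross-validated linear predictor = shrinkage factor;
# a value below 1 indicates over-optimism of the fitted coefficients.
c = np.polyfit(lp, yy, 1)[0]
print("estimated shrinkage factor:", round(c, 3))
```

Multiplying the fitted coefficients by this factor deflates predictions toward the mean, counteracting the over-optimism introduced by model building.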
7.
The article shows how meta-evaluation yields evidence about the differential effectiveness of Parent Effectiveness Training (PET) by Thomas Gordon. It shows that the training has high overall effectiveness: there are large effects for the directly trained communication skills, medium effects for changes in parental child-rearing attitudes and in parent–child communication, and small effects on parental behavior and on the children's self-concept. The training also shows long-term effects. Moreover, PET seems better suited to gender-homogeneous parent groups and to parents of older children.
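The kind of aggregation such a meta-evaluation rests on is inverse-variance pooling of standardized mean differences. A minimal fixed-effect sketch, with entirely invented effect sizes and sample sizes (not the study's data):

```python
import math

# Hypothetical per-outcome standardized mean differences (SMDs).
studies = [  # (smd, n_treatment, n_control)
    (0.90, 30, 30),   # e.g. a communication-skills outcome (invented)
    (0.45, 25, 25),   # e.g. a parental-attitudes outcome (invented)
    (0.20, 40, 40),   # e.g. a child self-concept outcome (invented)
]

def smd_variance(d, n1, n2):
    """Approximate sampling variance of a standardized mean difference."""
    return (n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2))

# Fixed-effect pooling: weight each SMD by its inverse variance.
weights = [1 / smd_variance(d, n1, n2) for d, n1, n2 in studies]
pooled = sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled SMD = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

In practice a meta-evaluation would pool within outcome domains, which is how the large/medium/small pattern across domains reported above arises.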
8.
Interest in confirmatory adaptive combined phase II/III studies with treatment selection has increased in the past few years. These studies start by comparing several treatments with a control. One or more treatments are then selected after the first stage, based on the information available at an interim analysis, including interim data from the ongoing trial, external information and expert knowledge. Recruitment continues, but now only for the selected treatment(s) and the control, possibly in combination with a sample size reassessment. The final analysis of the selected treatment(s) includes the patients from both stages and is performed such that the overall Type I error rate is strictly controlled, thus providing confirmatory evidence of efficacy at the final analysis. In this paper we describe two approaches to controlling the Type I error rate in adaptive designs with sample size reassessment and/or treatment selection. The first method adjusts the critical value using a simulation-based approach, which incorporates the number of patients at an interim analysis, the true response rates, the treatment selection rule, and so on. We discuss the underlying assumptions of simulation-based procedures and give several examples where the Type I error rate is not controlled when some of the assumptions are violated. The second method is an adaptive Bonferroni–Holm test procedure based on conditional error rates of the individual treatment–control comparisons. We show that this procedure controls the Type I error rate even if a deviation from the pre-planned adaptation rule, or from the pre-specified time point of the adaptation decision, becomes necessary.
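A Monte-Carlo sketch of why adjustment is needed at all: under the global null, selecting the apparently better of two arms at interim and then applying an unadjusted final z-test inflates the Type I error above the nominal level. All design parameters below are illustrative, and this is not either of the paper's two methods:

```python
import numpy as np

rng = np.random.default_rng(3)

# Naive two-stage design under the global null: two experimental arms vs
# control, normal outcomes with known variance 1, all means equal.
n1, n2, alpha = 50, 50, 0.025
z_crit = 1.959964                         # one-sided 2.5% critical value
sims, rejections = 20000, 0
for _ in range(sims):
    # Stage 1: control and two treatment arms, all with mean 0.
    ctrl1, a1, b1 = (rng.standard_normal(n1).mean() for _ in range(3))
    sel1 = max(a1, b1)                    # carry forward the better-looking arm
    # Stage 2: continue only the selected arm and the control.
    ctrl2 = rng.standard_normal(n2).mean()
    sel2 = rng.standard_normal(n2).mean()
    # Pool both stages and apply an unadjusted one-sided z-test.
    diff = ((n1 * sel1 + n2 * sel2) - (n1 * ctrl1 + n2 * ctrl2)) / (n1 + n2)
    se = np.sqrt(2 / (n1 + n2))
    rejections += (diff / se) > z_crit
rate = rejections / sims
print("empirical Type I error:", rate)    # exceeds the nominal 2.5%
```

The simulation-based approach of the paper runs exactly this kind of experiment under the planned selection rule to find an adjusted critical value; the conditional-error approach instead guarantees control analytically, even when the adaptation deviates from the plan.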
10.
If a number of candidate variables are available, variable selection is a key task aiming to identify those candidates which influence the outcome of interest. Methods such as backward elimination and forward selection are often used despite their drawbacks, one of which is the instability of their results with respect to small perturbations in the data. To handle this issue, resampling-based procedures have been introduced: using a resampling technique, e.g. the bootstrap, these procedures generate several pseudo-samples that are used to compute the inclusion frequency of each variable, i.e. the proportion of pseudo-samples in which the variable is selected. Based on the inclusion frequencies, it is possible to discriminate between relevant and irrelevant variables. These procedures may fail, however, in the presence of correlated variables. To deal with this issue, two procedures based on 2×2 tables of inclusion frequencies have been developed in the literature. In this paper we analyse the behaviours of these two procedures and the role of their tuning parameters in an extensive simulation study.
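A sketch of the failure mode and of the information a 2×2 table of inclusion frequencies recovers: two strongly correlated candidates share one signal, so each bootstrap model tends to pick one or the other, the marginal frequencies look mediocre, and only the joint table reveals the "either/or" pattern. The data and the |t|-based selection rule are illustrative, not the procedures analysed in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# x0 and x1 are nearly collinear proxies for the same signal z; x2 is noise.
n, B = 150, 300
z = rng.standard_normal(n)
x0 = z + 0.15 * rng.standard_normal(n)
x1 = z + 0.15 * rng.standard_normal(n)
x2 = rng.standard_normal(n)
X = np.c_[x0, x1, x2]
y = z + rng.standard_normal(n)

def select(X, y, t_cut=2.0):
    """Flag variables with |t| > t_cut in a full OLS fit."""
    Xd = np.c_[np.ones(len(y)), X]
    b, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ b
    s2 = r @ r / (len(y) - Xd.shape[1])
    t = b[1:] / np.sqrt(np.diag(s2 * np.linalg.inv(Xd.T @ Xd))[1:])
    return np.abs(t) > t_cut

# Selection indicators across B bootstrap pseudo-samples.
sel = np.array([select(X[idx], y[idx])
                for idx in (rng.integers(0, n, n) for _ in range(B))])

# 2x2 table of joint inclusion for the correlated pair (x0, x1).
table = np.array([[np.sum( sel[:, 0] &  sel[:, 1]), np.sum( sel[:, 0] & ~sel[:, 1])],
                  [np.sum(~sel[:, 0] &  sel[:, 1]), np.sum(~sel[:, 0] & ~sel[:, 1])]])
print("marginal inclusion frequencies:", sel.mean(axis=0).round(2))
print("2x2 table for (x0, x1):\n", table)
```

The off-diagonal cells (exactly one of the pair selected) dominate the both-selected cell, which is the signature that a marginal-frequency threshold alone would miss a genuinely relevant pair.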
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号