1.
We model the effect of a road safety measure on a set of target sites, each with its own control area, and we suppose that the accident data recorded at each site are classified into mutually exclusive types. We adopt the before–after technique and assume that at any one target site the total number of accidents recorded is multinomially distributed between the periods and types of accidents. In this article, we propose a minorization–maximization (MM) algorithm for obtaining the constrained maximum likelihood estimates of the parameter vector, and we compare it with an expectation maximization algorithm based on gradient projections (GP-EM). The performance of the algorithms is examined through a simulation study of road safety data.
2.
ABSTRACT

Identifying homogeneous subsets of predictors in classification can be challenging in the presence of high-dimensional data with highly correlated variables. We propose a new method, the cluster correlation-network support vector machine (CCNSVM), that simultaneously estimates clusters of predictors relevant for classification and the coefficients of a penalized SVM. The new CCN penalty is a function of the well-known Topological Overlap Matrix, whose entries measure the strength of connectivity between predictors. CCNSVM implements an efficient algorithm that alternates between searching for clusters of predictors and optimizing a penalized SVM loss function using majorization–minimization tricks and a coordinate descent algorithm. Combining clustering and sparsity in a single procedure provides additional insight into the power of exploiting dimension-reduction structure in high-dimensional binary classification. Simulation studies compare the performance of our procedure with its competitors, and a practical application of CCNSVM to DNA methylation data illustrates its good behaviour.
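For intuition on the connectivity measure behind the CCN penalty, the following sketch computes a Topological Overlap Matrix from a weighted adjacency matrix using the standard WGCNA-style formula; the exact variant used inside CCNSVM may differ.

```python
import numpy as np

def topological_overlap(adj):
    """Topological Overlap Matrix from a symmetric adjacency matrix with
    entries in [0, 1] and zero diagonal (standard WGCNA-style definition):
    TOM_ij = (l_ij + a_ij) / (min(k_i, k_j) + 1 - a_ij)."""
    a = np.asarray(adj, dtype=float)
    k = a.sum(axis=1)                    # connectivity of each predictor
    shared = a @ a                       # l_ij: strength of shared neighbours
    denom = np.minimum.outer(k, k) + 1.0 - a
    tom = (shared + a) / denom
    np.fill_diagonal(tom, 1.0)
    return tom

# toy network of three correlated predictors
adj = np.array([[0.0, 0.8, 0.1],
                [0.8, 0.0, 0.4],
                [0.1, 0.4, 0.0]])
print(topological_overlap(adj))
```

Entries of the result lie in [0, 1]: predictors that share many strong neighbours get high overlap even if their direct link is weak, which is what lets the penalty group correlated predictors.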
3.
During the past decade, Saudi Arabia has experienced significant social, economic, and organizational change. Rapid economic growth created a need for seasoned management professionals and necessitated the development of human capital. Psychological capital, a construct recently developed by academics and practitioners, is defined as the extent to which an individual operates in a positive psychological state, characterized by high self-efficacy, optimism, hope, and resiliency. By measuring these positive psychological constructs, an organization can learn about employees' positive psychological states and how training and support can promote them. Improving psychological capital can lead to stronger organizational commitment, favorable organizational citizenship behaviors, lower employee absenteeism, and higher job satisfaction. This quantitative study examined the relationships among psychological capital, job satisfaction, and organizational commitment through a sample of managers in the Saudi Arabian oil and petrochemical industries.
4.
A method of regularized discriminant analysis for discrete data, denoted DRDA, is proposed. The method is related to the regularized discriminant analysis conceived by Friedman (1989) in a Gaussian framework for continuous data. Here, we are concerned with discrete data and consider the classification problem using the multinomial distribution. DRDA is conceived for the small-sample, high-dimensional setting and occupies an intermediate position among multinomial discrimination, the first-order independence model, and kernel discrimination. DRDA is characterized by two parameters, whose values are calculated by minimizing a sample-based estimate of future misclassification risk by cross-validation. The first is a complexity parameter which provides class-conditional probabilities as a convex combination of those derived from the full multinomial model and the first-order independence model. The second is a smoothing parameter associated with the discrete kernel of Aitchison and Aitken (1976). The optimal complexity parameter is calculated first; then, holding this parameter fixed, the optimal smoothing parameter is determined. A modified approach, in which the smoothing parameter is chosen first, is discussed. The efficiency of the method is compared with that of other classical methods through applications to data.
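A minimal sketch of the complexity-parameter step: class-conditional cell probabilities formed as a convex combination of the full multinomial model and the first-order independence model, for a hypothetical two-variable class. The kernel-smoothing step and the cross-validated choice of the parameter are omitted.

```python
import numpy as np

def drda_class_probs(counts, alpha):
    """Convex combination of full-multinomial and independence-model
    cell probabilities for one class.

    counts: (d1, d2) array of cell counts over two discrete variables
            (illustrative; the method extends to more variables).
    alpha:  complexity parameter in [0, 1]; alpha=1 gives the full
            multinomial MLE, alpha=0 the first-order independence model."""
    n = counts.sum()
    full = counts / n                        # full multinomial MLE
    marg1 = counts.sum(axis=1) / n           # marginal of variable 1
    marg2 = counts.sum(axis=0) / n           # marginal of variable 2
    indep = np.outer(marg1, marg2)           # independence-model estimate
    return alpha * full + (1.0 - alpha) * indep

counts = np.array([[30, 10],
                   [ 5, 55]])
print(drda_class_probs(counts, alpha=0.5))
```

Because both ingredients are probability tables, the combination is itself a probability table for every alpha in [0, 1]; cross-validation then picks the alpha minimizing estimated misclassification risk.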
5.
ABSTRACT

For the rating process of Collateralized Debt Obligations (CDOs), Moody's suggests the Diversity Score as a measure of diversification in the collateral pool. This measure is used in Moody's Binomial Expansion Technique to infer the probability of default and thus the expected loss in the portfolio. In this paper, we examine how well this approach captures the reality of defaults, using copulas and lower tail dependence.
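A minimal sketch of the Binomial Expansion Technique under its usual homogeneity assumptions: the correlated collateral pool is replaced by D independent, identical bonds, and portfolio loss is computed from the Binomial(D, p) default distribution. The loss-given-default figure below is an illustrative assumption, not Moody's value.

```python
import math

def bet_expected_loss(diversity_score, p_default, loss_given_default=0.55):
    """Expected portfolio loss via the Binomial Expansion Technique.
    diversity_score: Moody's Diversity Score D (number of hypothetical
    independent bonds); p_default: per-bond default probability."""
    D = diversity_score
    # E[fraction defaulted] summed over the binomial default distribution
    expected_frac = sum(
        (j / D) * math.comb(D, j) * p_default**j * (1 - p_default)**(D - j)
        for j in range(D + 1)
    )
    return expected_frac * loss_given_default

print(bet_expected_loss(20, 0.05))
```

For expected loss the binomial mean makes the sum collapse to p × LGD; the technique's real content is the full loss distribution over tranches, where the choice of D (and hence the neglected tail dependence the paper studies) matters.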
6.
The computation of penalized quantile regression estimates is often computationally intensive in high dimensions. In this paper we propose a coordinate descent algorithm for computing penalized smooth quantile regression (cdaSQR) with convex and nonconvex penalties. The cdaSQR approach is based on approximating the objective check function, which is not differentiable at zero, by a modified check function which is. Then, using the majorization–minimization trick of the gcdnet algorithm (Yang and Zou, J Comput Graph Stat 22(2):396–415, 2013), we update each coefficient simply and efficiently. In our implementation, we consider the convex penalties \(\ell _1+\ell _2\) and the nonconvex penalties SCAD (or MCP) \(+ \ell _2\). We establish the convergence of cdaSQR with the \(\ell _1+\ell _2\) penalty. Simulations show that our implementation is an order of magnitude faster than its competitors. Finally, the performance of our algorithm is illustrated on three real data sets from diabetes, leukemia and Bardet–Biedl syndrome gene expression studies.
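One common way to make the check function differentiable at zero is to smooth its kink with a softplus term; the sketch below uses this illustrative choice, which is not necessarily the exact modification adopted by cdaSQR.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check function: rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (u < 0))

def smooth_check_loss(u, tau, gamma=0.1):
    """Smooth approximation of the check function. Since
    rho_tau(u) = tau*u + max(0, -u), replacing max(0, -u) by the softplus
    gamma * log(1 + exp(-u/gamma)) gives a function that is differentiable
    everywhere and converges to rho_tau as gamma -> 0."""
    u = np.asarray(u, dtype=float)
    # logaddexp keeps the softplus numerically stable for large |u|/gamma
    return tau * u + gamma * np.logaddexp(0.0, -u / gamma)

u = np.linspace(-2, 2, 5)
print(check_loss(u, 0.5))
print(smooth_check_loss(u, 0.5, gamma=0.05))
```

With a differentiable loss, each coordinate update can use a simple quadratic majorizer, which is what makes the coordinate descent cheap.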
7.
Abstract

In longitudinal studies, data are collected on the same set of units on more than one occasion. In medical studies it is very common to have mixed Poisson and continuous longitudinal data. In such studies, for various reasons, some intended measurements may be unavailable, resulting in a missing-data setting. When the probability of missingness is related to the missing values, the missingness mechanism is termed nonrandom. The stochastic expectation-maximization (SEM) algorithm and the parametric fractional imputation (PFI) method are developed to handle nonrandom missingness in mixed discrete and continuous longitudinal data, assuming different covariance structures for the continuous outcome. The proposed techniques are evaluated using simulation studies and applied to the Interstitial Cystitis Data Base (ICDB) data.
8.
Although researchers in business and management are becoming increasingly aware of the importance of endogeneity in regression analysis, they frequently lack the right methodological toolkit to adjust for it. In this paper we discuss such a toolkit. There are also areas of business and management research which to date seem largely oblivious to the endogeneity issue. We highlight one such area, which studies whether firms that are cross-listed on a foreign stock exchange are charged premium fees by their auditors. When the same methodology as in the existing literature is used (pooled ordinary least squares), the existence of an audit fee premium for cross-listed firms seems to be confirmed. However, once methodologies are used which adjust for the various types of endogeneity (i.e. omitted-variable bias, simultaneity and dynamic endogeneity), there is no longer support for the existence of such a generalized premium. Hence, not only do we illustrate that failure to adjust for endogeneity has severe consequences, such as drawing the wrong inferences, but we also review various ways to control for the different types of endogeneity.
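As an illustration of one standard adjustment in such a toolkit, here is a minimal two-stage least squares (2SLS) sketch on synthetic data with an endogenous regressor; the instrument, coefficients, and sample size are all invented for the example.

```python
import numpy as np

def two_sls(y, X, Z):
    """Two-stage least squares: regress the regressors on the instruments,
    then regress y on the fitted values.
    X: (n, k) regressors including intercept; Z: (n, m) instruments, m >= k."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage
    return beta

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                  # instrument: affects x but not y directly
u = rng.normal(size=n)                  # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)    # regressor correlated with the error
y = 2.0 * x + 3.0 * u + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print(two_sls(y, X, Z))   # slope estimate near the true value 2.0
```

Naive OLS on the same data overstates the slope because the confounder u enters both x and y; the instrument purges that correlation.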
9.
The parameters of Downton's bivariate exponential distribution are estimated based on a ranked set sample. Both parametric and nonparametric methods are considered. The suggested estimators are compared with the corresponding ones based on simple random sampling, and some of them turn out to be significantly more efficient.
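For intuition, a minimal sketch of drawing a balanced ranked set sample in the univariate case; this illustrates only the sampling design, not Downton's bivariate estimators.

```python
import random

def ranked_set_sample(draw, set_size, cycles):
    """Balanced ranked set sample: in each cycle, for i = 1..set_size, draw a
    set of `set_size` units, rank them (perfect ranking by the observed value
    here), and retain only the i-th order statistic. In practice ranking is
    done by judgment or a cheap auxiliary variable, so far fewer units are
    measured than are ranked."""
    sample = []
    for _ in range(cycles):
        for i in range(set_size):
            ranked = sorted(draw() for _ in range(set_size))
            sample.append(ranked[i])     # keep only the i-th order statistic
    return sample

rng = random.Random(42)
# exponential population with mean 1
rss = ranked_set_sample(lambda: rng.expovariate(1.0), set_size=3, cycles=200)
print(sum(rss) / len(rss))
```

The RSS mean is unbiased for the population mean and typically has smaller variance than a simple random sample of the same size, which is the efficiency gain the abstract's estimators exploit.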