31.

Composite indicators are widely used to determine the ranking of countries, organizations or individuals in terms of overall performance on multiple criteria. Their calculation requires standardization of the individual statistical criteria and aggregation of the standardized indicators. These operations introduce a potential propagation effect of extreme values on the calculation of the composite indicator of all entities. In this paper, we propose robust composite indicators for which this propagation effect is limited. The approach uses winsorization based on a robust estimate of the distribution of the sub-indicators. It is designed such that the winsorization affects only the composite indicator rank but has no effect on the entities ranking in each sub-indicator. The simulation study documents the benefits of distribution-based winsorization in the presence of outliers. It leads to a ranking that is closer to the clean data ranking when compared to the ranking obtained using either no winsorization or the traditional winsorization based on empirical quantiles. In the empirical application, we illustrate the use of winsorization for ranking countries based on the United Nations Industrial Development Organization’s Competitive Industrial Performance index. We show that even though the sub-indicator ranking does not change, the robust winsorization approach has a material impact on the ranking of the composite indicator for countries with large discrepancies in the scores of the sub-indicators.
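The idea of distribution-based winsorization can be sketched by clipping each sub-indicator at quantiles of a distribution fitted with robust location and scale estimates (here a normal fitted with the median and the MAD; the cutoff z and the data are hypothetical, not the authors' exact procedure):

```python
import numpy as np

def robust_winsorize(x, z=2.5):
    """Clip values at the z-sigma points of a normal whose location and
    scale are estimated robustly (median and MAD), so one extreme score
    cannot propagate into the composite indicator."""
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # MAD scaled to be consistent for a normal sigma
    lo, hi = med - z * mad, med + z * mad
    return np.clip(x, lo, hi)

x = np.array([1.0, 1.2, 0.9, 1.1, 25.0])  # one extreme sub-indicator score
xw = robust_winsorize(x)                   # only the outlier is pulled in
```

Because the clipping bounds come from a robust fit rather than empirical quantiles, the four unexceptional scores are left untouched while the outlier is capped.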

32.
Recent research has made clear that missing values in datasets are inevitable. Imputation is one of several methods introduced to overcome this issue: imputation techniques address missing data by permanently replacing missing values with reasonable estimates. These procedures have many benefits relative to their drawbacks, but their behavior is often not transparent, which creates mistrust in the analytical results. One approach to evaluating the outcome of an imputation process is to estimate the uncertainty in the imputed data. Nonparametric methods are appropriate for estimating this uncertainty when the data do not follow any particular distribution. This paper presents a nonparametric method, based on the Wilcoxon test statistic, for estimating and testing the significance of imputation uncertainty; it can be used to assess the precision of values created by imputation methods. The procedure can be employed to judge the feasibility of imputation for a dataset and to evaluate competing imputation methods applied to the same dataset. The proposed approach is compared with other nonparametric resampling methods, including the bootstrap and the jackknife, for estimating uncertainty in data imputed under the Bayesian bootstrap imputation method. The ideas supporting the method are explained in detail, and a simulation study illustrates how the approach is employed in practical situations.
33.
Gray markets, also known as parallel imports, have created fierce competition for manufacturers in many industries. We analyze the impact of parallel importation on a price‐setting manufacturer that serves two markets with uncertain demand, and characterize her policy against parallel importation. We show that ignoring demand uncertainty can take a significant toll on the manufacturer's profit, highlighting the value of making price and quantity decisions jointly. We find that adjusting prices is more effective in controlling gray market activity than reducing product availability, and that parallel importation forces the manufacturer to reduce her price gap while demand uncertainty forces her to lower prices. Furthermore, we explore the impact of market conditions (such as market base, price sensitivity, and demand uncertainty) and product characteristics (“fashion” vs. “commodity”) on the manufacturer's policy towards parallel importation. We also provide managerial insights about the value of strategic decision‐making by comparing the optimal policy to the uniform pricing policy that has been adopted by some companies to eliminate gray markets entirely. The comparison indicates that the value of making price and quantity decisions strategically is highest for moderately different market conditions and non‐commodity products.
34.
For a truncation-invariant copula, truncation changes neither the dependence structure nor nonparametric measures of association such as Kendall's tau and Spearman's rho. In this article, we show that the products of algebraically independent Archimedean multivariate Clayton copulas and standard uniform distributions are the only truncation-invariant copulas.
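For reference, the bivariate member of the Clayton family has the standard form (this display is added for the reader's convenience; the article treats the general multivariate case):

```latex
C_\theta(u, v) = \left(u^{-\theta} + v^{-\theta} - 1\right)^{-1/\theta},
\qquad \theta > 0 .
```

Truncating both margins at the lower tail and renormalizing returns a copula of the same Clayton form, which is the invariance property being characterized.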
35.
In this article, we develop an empirical Bayesian approach to the estimation of parameters in four bivariate exponential (BVE) distributions. We adopt the gamma distribution as a prior for the model parameters, with hyperparameters estimated by the method of moments and by maximum likelihood (MLE). A simulation study was conducted to compute empirical Bayes estimates of the parameters and their standard errors. Furthermore, we compare the posterior modes of the parameters obtained under different prior distributions; the Bayes estimates based on gamma priors are much closer to the true values than those based on improper priors. We use MCMC to obtain the posterior means and compare them with those obtained under improper priors and with the classical estimates, the MLEs.
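The method-of-moments route to the gamma hyperparameters can be sketched as follows (a minimal illustration; the estimates fed in are hypothetical, and the shape/rate parameterization is one common convention):

```python
import numpy as np

def gamma_moment_hyperparams(estimates):
    """Method-of-moments fit of a Gamma(shape, rate) prior to a batch of
    parameter estimates: matching mean m and variance v gives
    shape = m^2 / v and rate = m / v."""
    m, v = np.mean(estimates), np.var(estimates)
    return m * m / v, m / v

# hypothetical MLEs of the same parameter from repeated samples
shape, rate = gamma_moment_hyperparams([0.9, 1.1, 1.3, 0.8, 1.0])
```

The fitted prior then has mean shape/rate equal to the average of the input estimates, so the empirical Bayes prior is centered where the data-driven estimates are.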
36.
We consider the progressively Type-II censored competing risks model based on sequential order statistics. It is assumed that the latent failure times are independent and the failure of each unit influences the lifetime distributions of the latent failure times of surviving units. We provide explicit expressions for the likelihood function of the available data under the conditional proportional hazard rate (CPHR) and the power trend conditional proportional hazard rate (PTCPHR) models. Under CPHR and PTCPHR models and assumption that the baseline distributions of the latent failure times are exponential, classical and Bayesian estimates of the unknown parameters are provided. Monte Carlo simulations are then performed for illustrative purposes. Finally, two datasets are analyzed.
37.
In this article, we consider problems of estimation and prediction when progressive Type-I interval censored competing risks data come from the proportional hazards family. The maximum likelihood estimators of the unknown parameters are obtained. Based on gamma priors, Lindley's approximation and importance sampling methods are applied to obtain Bayesian estimators under squared error and linear-exponential (LINEX) loss functions. Several classical and Bayesian point predictors of the censored units are provided. Acceptance sampling plans based on given producer's and consumer's risks are also considered. Finally, a Monte Carlo simulation study evaluates the performance of the different methods.
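The importance-sampling step can be illustrated in a much simpler conjugate toy setting (exponential lifetimes with a gamma prior; the data, the hyperparameters, and the choice of the prior as proposal are all illustrative, not the article's censored-data model):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=0.5, size=30)  # simulated lifetimes, true rate = 2
a, b = 1.0, 1.0                          # hypothetical gamma prior (shape, rate)

# Importance sampling with the prior as proposal: weights = likelihood.
lam = rng.gamma(a, 1.0 / b, size=20_000)
logw = len(x) * np.log(lam) - lam * x.sum()
w = np.exp(logw - logw.max())            # stabilize before exponentiating
post_mean_is = np.sum(w * lam) / np.sum(w)

# Conjugacy gives the exact posterior Gamma(a + n, b + sum(x)) as a check.
post_mean_exact = (a + len(x)) / (b + x.sum())
```

In the toy setting the conjugate answer is available in closed form, which makes it easy to verify that the weighted average converges to the posterior mean; in the censored competing-risks model no such closed form exists, which is why the approximation methods are needed.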
38.
In this article, a new economical acceptance sampling model is proposed based on the Taguchi loss function. The objective function of the model consists of inspection cost, scrap cost, and the Taguchi loss function, including producer loss and consumer loss. The expected total cost includes the loss for an inspected item plus the loss for an accepted item that has not been inspected. Decision-making is based on conforming run length, and the quality characteristic is assumed to follow a normal distribution. A numerical example illustrates the application of the model, and a sensitivity analysis shows the effect of several important parameters on the objective function. Finally, we compare the results of the proposed method with the classical Dodge–Romig sampling plan tables based on the average outgoing quality limit; the results confirm the superiority of the proposed model.
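The Taguchi loss term in such an objective function is quadratic in the deviation from target, so loss accrues even inside the specification limits. A minimal sketch, with a hypothetical scrap cost A and half-tolerance delta used to calibrate the loss coefficient:

```python
def taguchi_loss(y, target, k):
    """Taguchi quadratic quality loss L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

# Calibrate k so that the loss at the tolerance limit equals the scrap cost A.
A, delta = 50.0, 0.5        # hypothetical scrap cost and half-tolerance
k = A / delta ** 2
loss = taguchi_loss(10.2, 10.0, k)  # an in-spec item still incurs loss
```

An item exactly at the tolerance limit incurs the full scrap cost A, while items closer to target incur proportionally smaller losses; summing these expected losses with inspection and scrap costs gives the model's expected total cost.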
39.
In a number of situations, such as industrial quality control experiments, the only available observations are record-breaking data. In this paper, two sampling schemes are used to collect record data: single sample and multisample. The aim of this paper is to investigate which of them is more efficient in the sense of Shannon information. Several general results are established, and it is shown that there is a connection between some reliability properties of the parent distribution and the considered comparison criterion. A number of examples illustrating the results are given.
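Under the single-sample scheme, upper record-breaking data are the successive running maxima of one observation sequence. A minimal extraction sketch (the simulated exponential data are illustrative only):

```python
import numpy as np

def upper_records(sample):
    """Return the upper record values: each observation that exceeds
    every observation seen before it."""
    records = []
    current = float("-inf")
    for v in sample:
        if v > current:
            records.append(v)
            current = v
    return records

rng = np.random.default_rng(2)
recs = upper_records(rng.exponential(size=100))  # single-sample scheme
```

The multisample scheme instead restarts this extraction on several independent sequences and pools the resulting records; comparing the Shannon information carried by the two collections is the question the paper addresses.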
40.
Bayesian prediction of order statistics, as well as of the mean of a future sample, based on observed record values from an exponential distribution is discussed. Several Bayesian prediction intervals and point predictors are derived. Finally, some numerical computations are presented to illustrate the proposed inferential procedures.