1.
This article focuses on the clustering problem based on Dirichlet process (DP) mixtures. Unlike existing clustering methods, the proposed semi-parametric model is flexible in that it accounts for both time-invariant and temporal patterns, capturing common and unique structure simultaneously. Furthermore, by jointly clustering subjects and the associated variables, the complex patterns shared among subjects and among variables are expected to be captured. The number of clusters and the cluster assignments are inferred directly through the DP. Simulation studies illustrate the effectiveness of the proposed method. An application to wheal size data is discussed, with the aim of identifying novel temporal patterns among allergens within subject clusters.
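The key mechanism here, letting the DP infer the number of clusters rather than fixing it in advance, can be illustrated with the Chinese restaurant process (CRP) prior over cluster assignments that a DP mixture induces. The sketch below is a generic illustration, not the paper's joint subject-variable model; the function name and parameters are our own.

```python
import numpy as np

def crp_partition(n, alpha, seed=0):
    """Draw a random partition of n subjects from the Chinese restaurant
    process with concentration alpha -- the prior over cluster assignments
    induced by a DP mixture, under which the number of clusters is random
    rather than fixed in advance."""
    rng = np.random.default_rng(seed)
    assign = [0]                              # first subject opens cluster 0
    for i in range(1, n):
        counts = np.bincount(assign)          # current cluster sizes
        # join an existing cluster w.p. proportional to its size,
        # or open a new one w.p. proportional to alpha
        probs = np.append(counts, alpha) / (i + alpha)
        assign.append(int(rng.choice(len(probs), p=probs)))
    return assign
```

Larger `alpha` tends to produce more clusters; in the paper the cluster assignments are instead sampled jointly with the model parameters.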
2.
An excess of zeros is not a rare feature in count data. Statisticians advocate the Poisson-type hurdle model (among other techniques) as an attractive way to handle this peculiarity. However, the frequency of gross errors and the complexity intrinsic to some of the phenomena considered may render this classical model unreliable and too limiting. In this paper, we develop a robust version of the Poisson hurdle model by extending the robust GLM procedure of Cantoni and Ronchetti (2001) to the truncated Poisson regression model. The performance of the new robust approach is investigated via a simulation study, a real-data application, and a sensitivity analysis. The results show the reliability of the new technique in the neighborhood of the truncated Poisson model. This robust modelling approach is therefore a valuable complement to the classical one, providing a tool for reliable statistical conclusions and more effective decisions.
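As background, the classical (non-robust) Poisson hurdle model that the paper robustifies factors into two independent parts: a Bernoulli model for whether a count clears the hurdle (is positive) and a zero-truncated Poisson for the positive counts. A minimal intercept-only maximum-likelihood sketch (our own naming; the robust Cantoni-Ronchetti weighting of the paper is not implemented here):

```python
import numpy as np
from scipy.optimize import brentq

def fit_hurdle_poisson(y):
    """ML fit of the classical intercept-only Poisson hurdle model.

    Part 1: Bernoulli hurdle, P(Y > 0) estimated by the sample proportion.
    Part 2: zero-truncated Poisson for the positive counts, whose mean
    satisfies E[Y | Y > 0] = lam / (1 - exp(-lam)); we solve this for lam.
    Assumes the mean of the positive counts exceeds 1 (else no root)."""
    y = np.asarray(y)
    p_positive = np.mean(y > 0)          # hurdle probability
    ybar = y[y > 0].mean()               # mean of the positive counts
    lam = brentq(lambda l: l / (1 - np.exp(-l)) - ybar, 1e-8, 100.0)
    return p_positive, lam
```

The robust version of the paper replaces both ML steps with bounded-influence estimating equations, downweighting gross errors.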
3.
Efficient, accurate, and fast Markov chain Monte Carlo estimation methods based on the implicit approach are proposed. In this article, we introduce the notion of an implicit method for the estimation of parameters in stochastic volatility models.

Implicit estimation offers a substantial computational advantage for learning from observations without prior knowledge, and thus provides a good alternative to classical Bayesian inference when priors are missing.

Both the implicit and Bayesian approaches are illustrated using simulated data and are applied to daily stock-return data on the CAC40 index.

4.
A control chart for monitoring process variation using multiple dependent state (MDS) sampling is constructed in this article. Operational formulas for the in-control and out-of-control average run lengths (ARLs) are derived. The control constants are determined by fixing the target in-control ARL for a normal process. Extensive ARL tables are reported for various parameters and shifted values of the process parameters. The performance of the proposed chart is compared with several existing charts in terms of ARLs; the comparison shows that the proposed chart detects assignable causes more quickly. The application of the proposed concept is illustrated with a real-life industrial example and a simulation-based study.
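For intuition on how ARL values like those tabulated in the paper arise, the sketch below estimates in-control and shifted ARLs by Monte Carlo for a plain probability-limit S² Shewhart chart, not the proposed MDS chart; the function name and defaults are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def simulated_arl(n=5, arl0=370, shift=1.0, reps=2000, seed=0):
    """Monte Carlo ARL of a probability-limit S^2 chart (a plain Shewhart
    baseline, not the MDS chart of the paper).  In control, the statistic
    (n-1)S^2/sigma0^2 is chi-square with n-1 df; the limits are set so the
    in-control signal probability is 1/arl0, giving ARL ~= arl0."""
    alpha = 1.0 / arl0
    lo = chi2.ppf(alpha / 2, n - 1)
    hi = chi2.ppf(1 - alpha / 2, n - 1)
    rng = np.random.default_rng(seed)
    run_lengths = []
    for _ in range(reps):
        t = 0
        while True:
            t += 1
            s2 = rng.normal(0.0, shift, n).var(ddof=1)   # true sd = shift
            if not (lo < (n - 1) * s2 < hi):             # out-of-control signal
                run_lengths.append(t)
                break
    return float(np.mean(run_lengths))
```

With `shift=1.0` the estimate should be near the target in-control ARL; increasing `shift` shrinks the ARL, which is exactly the timely-detection property the ARL comparisons in the paper quantify.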
5.
We propose a recursive distribution estimator using the Robbins-Monro algorithm and Bernstein polynomials. We study the properties of the recursive estimator as a competitor of Vitale's distribution estimator, and show that, with optimal parameters, our proposal dominates Vitale's estimator in terms of mean integrated squared error. Finally, we confirm the theoretical results through a simulation study.
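For context, Vitale's (non-recursive) estimator smooths the empirical CDF with Bernstein polynomial weights; the paper's contribution is a Robbins-Monro recursion that updates such an estimate one observation at a time. A sketch of the non-recursive baseline, under the usual assumption that the data lie in [0, 1] (our naming):

```python
import numpy as np
from scipy.stats import binom

def vitale_cdf(data, x, m=20):
    """Vitale's Bernstein-polynomial CDF estimator on [0, 1]:
    F_hat(x) = sum_{k=0}^{m} F_n(k/m) * C(m,k) x^k (1-x)^(m-k),
    i.e. the empirical CDF F_n evaluated on the grid k/m, averaged with
    Binomial(m, x) weights.  Degree m controls the smoothing."""
    data = np.asarray(data)
    k = np.arange(m + 1)
    # empirical CDF at the grid points k/m
    Fn = np.searchsorted(np.sort(data), k / m, side="right") / data.size
    w = binom.pmf(k, m, x)            # Bernstein weights at x
    return float(np.sum(Fn * w))
```

The recursive competitor of the paper replaces the batch empirical CDF with a stochastic-approximation update, so the estimate can be refreshed cheaply as each new observation arrives.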
6.
This article presents Bayesian inference for the parameters of the randomly censored Burr type XII distribution with proportional hazards. Since a joint conjugate prior for the model parameters does not exist, we consider two different systems of priors for Bayesian estimation. Explicit forms of the Bayes estimators are not available, so we use Lindley's method to obtain the Bayes estimates; because Lindley's method does not yield Bayesian credible intervals, we suggest a Gibbs sampling procedure for this purpose. Numerical experiments are performed to check the properties of the different estimators, and the proposed methodology is applied to real-life data for illustration. The Bayes estimators are compared with the maximum likelihood estimators via numerical experiments and the real-data analysis, and the model is validated using posterior predictive simulation to ascertain its appropriateness.
7.
A major concern in the social sciences is the lack of replication of previous studies. A related methodological concern is the ability to determine effect sizes in addition to statistical significance levels. Effect sizes cannot be easily calculated in the absence of sufficient data; usually standard deviations are needed. If standard deviations are not available, how can they be estimated? Various proposals have been offered. One solution is to divide the range (maximum minus minimum) by four; a variety of more complicated solutions, based on sample size or the skew of the variable's distribution, have been suggested (Schumm, Higgins, et al., 2017). Here, 30 cases involving the demographic variable of age, from 23 articles published in Marriage & Family Review between 2016 and 2017, are assessed to replicate the previous report of Schumm, Higgins, et al. (2017). Our results indicate that both linear and power functions significantly predict the size of standard deviations, with larger samples featuring smaller standard deviations. Aside from sample size, the best solution appears to be to divide the range by 4.5-5.0, although for very small samples (N < 50) it is probably better to divide by 3.5-4.0, whereas for larger samples, especially those with higher levels of skew, it may be better to divide by 5.0 or higher. The Wan et al. (2014) estimation procedure appears to be approximately a power function of sample size. For samples up to several thousand in size, the range of divisors appears to run between 3.0 and 8.0, extremes that could be used to determine the largest or smallest possible standard deviations, respectively. Values far below 3.0 or above 8.0 may reflect typographical errors in data reports or possibly be evidence of artificially generated data, if not scientific fraud.
When a variable is split into subsamples, the standard deviations of the subsamples should usually increase relative to the total sample. Similar assessments remain in progress for non-demographic variables in the social sciences.
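The divisor rules summarized above can be written as a small helper. The cut-off points below are illustrative choices consistent with the ranges quoted, not the article's fitted linear or power function.

```python
def sd_from_range(minimum, maximum, n):
    """Range-based estimate of a standard deviation when only the minimum,
    maximum and sample size are reported.  Divisors follow the rule of
    thumb discussed above: ~4.0 for very small samples (N < 50), ~4.5-5.0
    as a default, and 5.0+ for large (or highly skewed) samples.  The
    exact cut-offs here are illustrative assumptions."""
    if n < 50:
        divisor = 4.0
    elif n < 1000:
        divisor = 4.75
    else:
        divisor = 5.5
    return (maximum - minimum) / divisor
```

For example, an age variable reported only as ranging from 18 to 78 in a sample of 30 would be assigned an estimated standard deviation of (78 - 18) / 4.0 = 15.0; the same range in a sample of thousands would yield a smaller estimate, matching the finding that larger samples feature smaller standard deviations.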
8.
This paper focuses on coalition formation and cost allocation in a joint replenishment system involving a set of independent, freely interacting retailers purchasing an item from one supplier to meet a deterministic demand. Papers dealing with this problem have mainly focused on superadditive games, where the cost savings associated with a coalition increase with the number of players in the coalition; the most relevant question addressed is then how to allocate the savings to the players. In this paper, we go further by dealing with a non-superadditive game, in which a set of independent retailers have the common understanding to share the cost savings according to the cost-based proportional rule. In this setting, global cost optimization is no longer a relevant approach to identifying coalitions that appeal to every retailer. We provide an iterative procedure to form the so-called efficient coalition structure and show that this coalition structure is (i) weakly stable in the sense of the coalition structure core and (ii) strongly stable under a given assumption. An exact fractional-programming-based solution is also given to generate such efficient coalitions.
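The cost-based proportional rule the retailers agree on can be sketched directly: each player pays a share of the coalition's joint cost proportional to its stand-alone cost, so all players' savings are in the same proportion. This is a generic illustration of the allocation rule only; the paper's iterative coalition-formation procedure is not reproduced.

```python
def proportional_allocation(standalone_costs, coalition_cost):
    """Cost-based proportional rule: split the coalition's joint cost
    among the players in proportion to their stand-alone costs, so each
    player's relative saving is identical."""
    total = sum(standalone_costs)
    return [coalition_cost * c / total for c in standalone_costs]
```

For instance, two retailers with stand-alone costs 10 and 30 facing a joint cost of 32 would pay 8 and 24 respectively, each saving 20% of its stand-alone cost.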
9.
Let π1, …, πk be k (≥ 2) independent populations, where πi denotes the uniform distribution over the interval (0, θi) and θi > 0 (i = 1, …, k) is an unknown scale parameter. The population associated with the largest scale parameter is called the best population. To select the best population, we use a selection rule based on the natural estimators of θi, i = 1, …, k, for the case of unequal sample sizes. Consider the problem of estimating the scale parameter θL of the selected uniform population when sample sizes are unequal and the loss is measured by the squared log error (SLE) loss function. We derive the uniformly minimum risk unbiased (UMRU) estimator of θL under the SLE loss function, and two natural estimators of θL are also studied. For k = 2, we derive a sufficient condition for the inadmissibility of an estimator of θL; using this condition, we conclude that the UMRU estimator and a natural estimator are inadmissible. Finally, the risk functions of the various competing estimators of θL are compared through simulation.
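The selection rule based on natural estimators can be sketched as follows: estimate each θi by the bias-corrected sample maximum (n_i + 1)/n_i · max(X_i) (unbiased for a uniform(0, θi) sample of size n_i), then select the population with the largest estimate. This sketch covers only the selection step and one natural estimate of θL; the UMRU estimator derived in the paper differs.

```python
import numpy as np

def select_and_estimate(samples):
    """Given samples (a list of arrays, one per uniform(0, theta_i)
    population, possibly of unequal sizes), select the population with
    the largest natural estimate (n_i + 1)/n_i * max(X_i) and return
    (selected index, natural estimate of theta_L)."""
    ests = [(len(x) + 1) / len(x) * max(x) for x in samples]
    i = int(np.argmax(ests))
    return i, ests[i]
```

Comparing this natural estimate of θL with the UMRU estimator under SLE loss is exactly the simulation study described above.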
10.
Social Indicators Research - Debt is beneficial to individuals and households when their consumption can be extended with credit. However, the benefits gained from the availability of credit have...