21.
We analyze the benefits of inventory pooling in a multi-location newsvendor framework. Using a number of common demand distributions, as well as a distribution-free approximation, we compare the centralized (pooled) system with the decentralized (non-pooled) system. We investigate the sensitivity of the absolute and relative reduction in costs to the variability of demand and to the number of locations (facilities) being pooled. We show that for the distributions considered, the absolute benefit of risk pooling increases with variability, and the relative benefit stays fairly constant, as long as the coefficient of variation of demand stays in the low range. Under high-variability conditions, however, both measures decrease to zero as the demand variability increases. We show, through analytical results and computational experiments, that these effects are due to the different operating regimes exhibited by the system under different levels of variability: as variability increases, the system switches from normal operation to effective shutdown and then to complete shutdown; the decrease in the benefits of risk pooling is associated with the two latter regimes. Centralization allows the system to remain in the normal operation regime under higher levels of variability than the decentralized system.
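As a concrete illustration of the pooled-versus-decentralized comparison, here is a minimal Monte Carlo sketch (not the authors' code) for i.i.d. normal demand; the cost parameters cu, co and the demand parameters are made-up examples:

```python
# Monte Carlo sketch: benefit of inventory pooling in a multi-location
# newsvendor.  All parameters are illustrative, not from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
cu, co = 4.0, 1.0              # underage and overage cost per unit
mu, sigma, N = 100.0, 30.0, 4  # per-location demand mean/sd, locations
crit = cu / (cu + co)          # newsvendor critical fractile

def expected_cost(q, demand):
    """Average newsvendor cost of stocking q against sampled demand."""
    return np.mean(cu * np.maximum(demand - q, 0.0)
                   + co * np.maximum(q - demand, 0.0))

d = rng.normal(mu, sigma, size=(200_000, N)).clip(min=0.0)

# Decentralized: each location stocks its own optimal quantity.
q_loc = norm.ppf(crit, mu, sigma)
cost_dec = sum(expected_cost(q_loc, d[:, i]) for i in range(N))

# Centralized: one stock point faces the pooled (summed) demand.
q_pool = norm.ppf(crit, N * mu, np.sqrt(N) * sigma)
cost_pool = expected_cost(q_pool, d.sum(axis=1))

print(f"decentralized: {cost_dec:.1f}  pooled: {cost_pool:.1f}  "
      f"relative saving: {1 - cost_pool / cost_dec:.1%}")
```

For i.i.d. normal demand the pooled standard deviation grows only as the square root of N, so the relative saving is roughly 1 - 1/sqrt(N); raising sigma here shows the absolute saving growing while the relative saving stays near that level, consistent with the low-variability regime described above.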
22.
The geometric-arithmetic index was introduced in chemical graph theory and has been shown to be applicable. The aim of this paper is to obtain the extremal graphs with respect to the geometric-arithmetic index among all graphs with minimum degree 2. Let G(2, n) be the set of connected simple graphs on n vertices with minimum degree 2. We use a linear programming formulation and prove that the minimum value of the first geometric-arithmetic index \((GA_{1})\) over G(2, n) is given by the following formula:
$$\begin{aligned} GA_1^* = \left\{ \begin{array}{ll} n &{}\quad n \le 24, \\ 24.79 &{}\quad n = 25, \\ \dfrac{4(n-2)\sqrt{2(n-2)}}{n} &{}\quad n \ge 26. \end{array} \right. \end{aligned}$$
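For reference, the first geometric-arithmetic index is \(GA_1(G) = \sum_{uv \in E} 2\sqrt{d_u d_v}/(d_u + d_v)\). Below is a minimal sketch computing it with networkx (not the paper's linear-programming approach); note that the cycle \(C_n\), where every degree is 2, attains \(GA_1 = n\), matching the first branch of the formula:

```python
# Sketch: first geometric-arithmetic index
# GA_1(G) = sum over edges uv of 2*sqrt(d_u * d_v) / (d_u + d_v).
import math
import networkx as nx

def ga1(G: nx.Graph) -> float:
    deg = dict(G.degree())
    return sum(2 * math.sqrt(deg[u] * deg[v]) / (deg[u] + deg[v])
               for u, v in G.edges())

# Every edge of a cycle has d_u = d_v = 2, so each contributes 1.
print(ga1(nx.cycle_graph(10)))  # -> 10.0
```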
23.
In this article, we introduce a new extension of the Burr XII distribution called the Topp Leone Generated Burr XII distribution, and derive some of its properties. Useful characterizations are presented. A simulation study is performed to assess the performance of the maximum likelihood estimators, and censored maximum likelihood estimation is presented in the general case of multi-censored data. A new location-scale regression model based on the proposed distribution is introduced. The usefulness of the proposed models is illustrated empirically by means of three real datasets.
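The abstract does not spell out the construction, but a common Topp-Leone-G parameterization is \(F(x) = [1-(1-G(x))^2]^b\) with the Burr XII baseline \(G(x) = 1-(1+x^c)^{-k}\). Here is a sketch under that assumption; the paper's exact parameterization may differ:

```python
# Sketch of a Topp-Leone generated Burr XII distribution, assuming the
# common Topp-Leone-G construction F(x) = [1 - (1 - G(x))^2]^b with the
# Burr XII baseline G(x) = 1 - (1 + x^c)^(-k).  This parameterization is
# an assumption; the paper's may differ.
import numpy as np

def burr12_cdf(x, c, k):
    return 1.0 - (1.0 + x**c) ** (-k)

def burr12_pdf(x, c, k):
    return c * k * x**(c - 1) * (1.0 + x**c) ** (-k - 1)

def tlbxii_cdf(x, b, c, k):
    return (1.0 - (1.0 - burr12_cdf(x, c, k)) ** 2) ** b

def tlbxii_pdf(x, b, c, k):
    # d/dx [1-(1-G)^2]^b = 2 b g (1-G) [1-(1-G)^2]^(b-1)
    G, g = burr12_cdf(x, c, k), burr12_pdf(x, c, k)
    return 2.0 * b * g * (1.0 - G) * (1.0 - (1.0 - G) ** 2) ** (b - 1)

# Quick sanity check: the pdf should integrate to ~1.
x = np.linspace(1e-6, 60.0, 300_000)
print((tlbxii_pdf(x, b=1.5, c=2.0, k=1.0) * (x[1] - x[0])).sum())  # ~1.0
```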
24.
In this paper, we consider some results on the distribution theory of multivariate progressively Type-II censored order statistics. We also establish some characterizations of Freund's bivariate exponential distribution based on the lack-of-memory property.
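As background, progressively Type-II censored samples can be simulated with the uniform transformation of Balakrishnan and Sandhu (1995); below is a sketch with a standard exponential baseline (the censoring scheme R is an arbitrary example):

```python
# Sketch: simulate a progressively Type-II censored sample via the
# Balakrishnan-Sandhu (1995) uniform transformation, then map through an
# inverse cdf (standard exponential here; any quantile function works).
import numpy as np

rng = np.random.default_rng(1)

def progressive_type2(ppf, R, rng):
    """R[i] items are withdrawn at the (i+1)-th failure; m = len(R)."""
    m = len(R)
    W = rng.uniform(size=m)
    # V_i = W_i ** (1 / (i + R_m + R_{m-1} + ... + R_{m-i+1}))
    tail = np.cumsum(R[::-1])                 # R_m, R_m + R_{m-1}, ...
    V = W ** (1.0 / (np.arange(1, m + 1) + tail))
    U = 1.0 - np.cumprod(V[::-1])             # U_i = 1 - V_m ... V_{m-i+1}
    return ppf(U)

R = np.array([2, 0, 1, 0, 2])                 # n = 5 + 5 = 10 units on test
x = progressive_type2(lambda u: -np.log1p(-u), R, rng)  # Exp(1) quantiles
print(np.round(x, 3))                         # m = 5 increasing failure times
```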
25.
In the usual credibility model, observations are made on a risk, or a group of risks, selected from a population, and claims are assumed to be independent among different risks. In practical applications, however, this assumption may be violated. Some credibility models allow for only one source of claim dependence, namely across time for an individual insured risk or a group of homogeneous insured risks. Others are built on a two-level common-effects model that allows for two possible sources of dependence, namely across time for the same individual risk and between risks. In this paper, we argue for modeling claim dependence with a three-level common-effects model that allows for three possible sources of dependence: across portfolios, across individuals, and across time within individuals. We obtain the corresponding credibility premiums hierarchically using the projection method, and then derive the general hierarchical, multi-level credibility premiums for models with h levels of common effects.
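As background for the hierarchical construction, here is a sketch of the classical one-level Buhlmann credibility premium, \(Z\bar{X}_i + (1-Z)\hat{m}\) with \(Z = n/(n + \hat{s}^2/\hat{a})\), which the paper generalizes to three levels of common effects; all data below are simulated purely for illustration:

```python
# Background sketch: classical one-level Buhlmann credibility premium.
# The credibility factor is Z = n / (n + s2 / a), where s2 estimates the
# expected within-risk variance and a the variance of the risk means.
import numpy as np

rng = np.random.default_rng(2)
r, n = 20, 6                                       # risks, years observed
theta = rng.gamma(5.0, 20.0, size=r)               # latent risk means
X = rng.normal(theta[:, None], 25.0, size=(r, n))  # claims (illustrative)

xbar_i = X.mean(axis=1)
m_hat = X.mean()
s2_hat = ((X - xbar_i[:, None]) ** 2).sum() / (r * (n - 1))  # E[s^2(theta)]
a_hat = xbar_i.var(ddof=1) - s2_hat / n                      # Var(m(theta))
Z = n / (n + s2_hat / max(a_hat, 1e-12))           # guard against a_hat <= 0

premium = Z * xbar_i + (1 - Z) * m_hat
print(f"Z = {Z:.3f}; first three premiums: {np.round(premium[:3], 1)}")
```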
26.
Real-time polymerase chain reaction (PCR) is a reliable quantitative technique in gene expression studies, and the statistical analysis of real-time PCR data is crucial for interpreting the results. Statistical procedures for analyzing real-time PCR data determine the slope of the regression line and calculate the reaction efficiency, and mathematical functions are applied to express the target gene relative to the reference gene(s). These techniques then compare Ct (threshold cycle) numbers between the control and treatment groups. SAS offers many different procedures for evaluating real-time PCR data. In this study, the calibrated model and the delta-delta Ct model are statistically tested and explained. Several methods were tested to compare control and treatment means of Ct: the t-test (parametric), the Wilcoxon test (non-parametric), and multiple regression. The methods led to similar results, and no significant difference was observed between the gene expression measurements obtained by the relative method.
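Here is a minimal sketch of the delta-delta Ct calculation and the two group comparisons mentioned above; the Ct values are invented for illustration, and the Wilcoxon test for two independent groups is taken to be the rank-sum (Mann-Whitney) test:

```python
# Sketch of the delta-delta Ct method and the group comparisons the
# abstract describes.  Ct values below are made up for illustration.
import numpy as np
from scipy import stats

# threshold-cycle (Ct) values: target and reference gene in each sample
ct_target_ctrl = np.array([24.1, 24.5, 23.9, 24.3])
ct_ref_ctrl    = np.array([16.0, 16.2, 15.9, 16.1])
ct_target_trt  = np.array([22.0, 22.4, 21.8, 22.2])
ct_ref_trt     = np.array([16.1, 15.9, 16.0, 16.2])

d_ctrl = ct_target_ctrl - ct_ref_ctrl     # delta Ct, control group
d_trt  = ct_target_trt - ct_ref_trt       # delta Ct, treatment group
ddct = d_trt.mean() - d_ctrl.mean()       # delta-delta Ct
print(f"fold change 2^(-ddCt) = {2.0 ** (-ddct):.2f}")

# Parametric and non-parametric comparisons of the delta Ct values
print(stats.ttest_ind(d_ctrl, d_trt))     # t-test
print(stats.mannwhitneyu(d_ctrl, d_trt))  # Wilcoxon rank-sum / Mann-Whitney
```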
27.
Sample size determination is one of the most important considerations in the design of a control chart: the optimal sample size provides control over both the type I and type II errors. The optimal sample size for an \(S^2\) chart can be determined exactly using an iterative procedure. Duncan presented a procedure to approximate the required sample size. The accuracy of Duncan's approximation is examined and an improved approximation is proposed.
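Below is a sketch of the exact iterative search alluded to above, assuming an upper control limit \(UCL = \sigma_0^2\,\chi^2_{1-\alpha,\,n-1}/(n-1)\) and example values of \(\alpha\), \(\beta\), and the variance shift \(\lambda\):

```python
# Sketch: exact iterative sample-size search for an S^2 chart with upper
# control limit UCL = sigma0^2 * chi2_{1-alpha, n-1} / (n - 1).
# alpha, beta, and the shift lambda are assumed example values.
from scipy.stats import chi2

alpha, beta, lam = 0.005, 0.10, 1.5  # false-alarm rate, miss rate, shift

def type2_error(n: int) -> float:
    """P(no signal | sigma has shifted to lam * sigma0)."""
    ucl_q = chi2.ppf(1.0 - alpha, df=n - 1)
    # (n-1) S^2 / (lam^2 sigma0^2) ~ chi2 with n-1 df after the shift
    return chi2.cdf(ucl_q / lam**2, df=n - 1)

n = 2
while type2_error(n) > beta:
    n += 1
print(f"smallest n with type II error <= {beta}: n = {n} "
      f"(beta(n) = {type2_error(n):.4f})")
```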
28.

This paper proposes a new problem that integrates job shop scheduling, part feeding, and automated storage and retrieval. These three problems are intertwined, and the performance of each influences and is influenced by the performance of the others. We consider a manufacturing environment composed of a set of machines (production system) connected by a transport system and a storage/retrieval system. Jobs are retrieved from storage and delivered to a load/unload area (LU) by the automated storage and retrieval system; they are then transported by the transport system to and between the machines, where their operations are processed. Once all operations of a job have been processed, the job is taken back to the LU and returned to its storage cell. We propose a mixed-integer linear programming (MILP) model that can be solved to optimality for small-sized instances. We also propose a hybrid simulated annealing (HSA) algorithm to find good-quality solutions for larger instances. The HSA incorporates a late-acceptance hill-climbing algorithm and a multistart strategy to promote both intensification and exploration while decreasing computational requirements. To compute the optimality gap of the HSA solutions, we derive a very fast lower-bounding procedure. Computational experiments are conducted on two sets of instances that we also propose. The computational results show the effectiveness of the MILP model on small-sized instances as well as the effectiveness, efficiency, and robustness of the HSA on medium and large-sized instances. Furthermore, the experiments clearly show the importance of optimizing the three problems simultaneously. Finally, the importance and relevance of including the storage/retrieval activities are demonstrated empirically, as ignoring them leads to wrong and misleading results.
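For intuition, here is a generic sketch of the late-acceptance hill-climbing (LAHC) acceptance rule of Burke and Bykov that the HSA embeds, applied to a toy permutation objective; it is not the authors' HSA:

```python
# Sketch of the late-acceptance hill-climbing rule: a candidate is
# accepted if it is no worse than the cost recorded L iterations ago
# (or than the current solution).  Toy objective: sort a permutation.
import random

def lahc(cost, neighbour, s0, history_len=50, iters=20_000, seed=0):
    rnd = random.Random(seed)
    s, f = s0, cost(s0)
    hist = [f] * history_len                  # late-acceptance cost history
    best, best_f = s, f
    for k in range(iters):
        cand = neighbour(s, rnd)
        cand_f = cost(cand)
        v = k % history_len
        if cand_f <= hist[v] or cand_f <= f:  # late-acceptance rule
            s, f = cand, cand_f
        if f < best_f:
            best, best_f = s, f
        hist[v] = f                           # record the current cost
    return best, best_f

perm = list(range(30))
random.Random(1).shuffle(perm)
cost = lambda p: sum(abs(p[i] - i) for i in range(len(p)))

def neighbour(p, rnd):
    q = p[:]
    i, j = rnd.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]                   # random pairwise swap
    return q

print(lahc(cost, neighbour, perm))            # typically reaches cost 0
```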

29.
Survival models are used to analyze time-to-event data and come in several types, including parametric, non-parametric, and semi-parametric models. Parametric models require a specified distribution of survival time, and semi-parametric models assume proportional hazards. Among these, the artificial neural network, a non-parametric model, has the fewest assumptions and can often be used in place of the other models. Given the importance of the Weibull distribution in survival modeling, the simulation in this study assumed Weibull shape parameters of 1, 2, and 3, with average censoring rates ranging from 0% to 75%. Predictions from the neural network forecasting model were compared with those of parametric survival and Cox regression models. The comparison, which accounts for different levels of hazard complexity, was carried out using ROC curves and the corresponding tests.
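Here is a sketch of the data-generating step the simulation describes: Weibull lifetimes with an assumed shape, random right censoring, and a censored-data Weibull maximum-likelihood fit; all parameter values are illustrative:

```python
# Sketch: Weibull lifetimes (shape 1, 2 or 3), exponential random right
# censoring, and a censored-data Weibull MLE.  Values are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
shape, scale, n = 2.0, 10.0, 500
t_event = scale * rng.weibull(shape, n)    # true lifetimes
t_cens = rng.exponential(15.0, n)          # censoring times
t = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)  # 1 = event observed
print(f"censoring rate: {1 - delta.mean():.1%}")

def neg_loglik(p):
    # log-likelihood: events contribute log f(t), censored cases log S(t)
    k, lam = np.exp(p)                     # optimize on the log scale
    z = (t / lam) ** k
    log_f = np.log(k / lam) + (k - 1) * np.log(t / lam) - z
    log_S = -z
    return -(delta * log_f + (1 - delta) * log_S).sum()

res = minimize(neg_loglik, x0=np.log([1.0, np.median(t)]),
               method="Nelder-Mead")
print("shape, scale estimates:", np.round(np.exp(res.x), 2))
```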
30.
In this paper, we consider a k-out-of-n system consisting of n identical components with independent lifetimes. We show that when the underlying distribution function F(t) is absolutely continuous, it is uniquely determined by certain mean residual lives or mean inactivity times of the system. We then show that these results may be extended to coherent (or mixed) systems.
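To make the objects concrete: for i.i.d. components with survival function \(S\), a k-out-of-n system survives past \(t\) with probability \(\sum_{i=k}^{n}\binom{n}{i}S(t)^i(1-S(t))^{n-i}\), and its mean residual life is \(MRL(t) = \int_t^\infty S_{sys}(u)\,du \,/\, S_{sys}(t)\). A numerical sketch with exponential components (an arbitrary example, not the paper's setting):

```python
# Sketch: mean residual life of a k-out-of-n system with i.i.d.
# component lifetimes, computed from the component survival function.
import numpy as np
from scipy.integrate import quad
from scipy.special import comb

def sys_survival(t, k, n, S):
    """P(system alive at t): at least k of n components still working."""
    p = S(t)
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

def mrl(t, k, n, S):
    num, _ = quad(lambda u: sys_survival(u, k, n, S), t, np.inf)
    return num / sys_survival(t, k, n, S)

S_exp = lambda t: np.exp(-t)  # component survival, Exp(1) lifetimes
# A 2-out-of-3 system fails at the 2nd failure; for Exp(1) components
# its mean lifetime is 1/3 + 1/2 = 5/6.
print(f"MRL(0) of 2-out-of-3: {mrl(0.0, 2, 3, S_exp):.4f}")  # 0.8333
```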