Full-text access type (articles)
Paid full text | 594 |
Free | 10 |
Subject classification (articles)
Management | 199 |
Ethnology | 1 |
Talent studies | 1 |
Demography | 11 |
Collected works and series | 6 |
Theory and methodology | 4 |
General | 38 |
Sociology | 20 |
Statistics | 324 |
Publication year (articles)
2024 | 1 |
2023 | 2 |
2021 | 5 |
2020 | 2 |
2019 | 8 |
2018 | 14 |
2017 | 53 |
2016 | 17 |
2015 | 15 |
2014 | 11 |
2013 | 180 |
2012 | 44 |
2011 | 12 |
2010 | 15 |
2009 | 20 |
2008 | 14 |
2007 | 13 |
2006 | 8 |
2005 | 8 |
2004 | 8 |
2003 | 8 |
2002 | 12 |
2001 | 6 |
2000 | 12 |
1999 | 9 |
1998 | 10 |
1997 | 5 |
1996 | 8 |
1995 | 8 |
1994 | 3 |
1993 | 11 |
1992 | 16 |
1991 | 5 |
1990 | 4 |
1989 | 2 |
1988 | 8 |
1987 | 3 |
1986 | 4 |
1985 | 2 |
1984 | 4 |
1983 | 5 |
1982 | 4 |
1981 | 4 |
1980 | 1 |
Sort order: 604 matching records in total (search time: 0 ms)
51.
Traditional historical simulation (THS) and filtered historical simulation (FHS) are the VaR forecasting methods most widely used by international commercial banks. Building on existing research, this paper first constructs a bootstrapped Hull-White (BHW) historical simulation method; it then takes domestic gold trading data as the sample and applies six rigorous backtesting methods to empirically compare the VaR forecasting accuracy of the BHW method, several mainstream historical simulation methods, and the GARCH model approach. The main conclusions are: (1) among the risk measurement methods examined, the BHW method shows relatively good accuracy, while the THS method widely used in practice performs worst; (2) the historical simulation methods differ markedly in how strongly they are affected by sample size; for example, both the THS and HW methods are affected to varying degrees; (3) overall, the BHW method exhibits relatively good risk-forecasting accuracy and can serve as one option for measuring gold market risk.
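As a rough illustration of the THS baseline the paper compares against, the sketch below computes a rolling one-day historical-simulation VaR and a simple violation-rate backtest. The window length, confidence level and simulated "gold-like" returns are assumptions for illustration only, not the paper's data or its BHW method.

```python
import numpy as np

def historical_var(returns, alpha=0.99, window=250):
    """One-day VaR by traditional historical simulation (THS): for each day,
    take the empirical (1 - alpha) quantile of the previous `window` returns.
    The result is aligned with returns[window:]."""
    var = np.empty(len(returns) - window)
    for t in range(window, len(returns)):
        sample = returns[t - window:t]
        var[t - window] = -np.quantile(sample, 1 - alpha)  # loss quantile
    return var

def violation_rate(returns, var, window=250):
    """Fraction of days on which the realized loss exceeded the VaR forecast;
    for a well-calibrated 99% VaR this should be close to 1%."""
    losses = -returns[window:]
    return np.mean(losses > var)

# Illustrative use with simulated heavy-tailed returns (not the paper's sample).
rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=2000) * 0.01
v = historical_var(r, alpha=0.99, window=250)
print("violation rate:", violation_rate(r, v, window=250))
```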
52.
Joint modified block replacement and production/inventory control policy for a failure-prone manufacturing cell (cited by 1; self-citations: 0, citations by others: 1)
This paper considers a joint preventive maintenance (PM) and production/inventory control policy for an unreliable single-machine, mono-product manufacturing cell with stochastic non-negligible corrective and preventive delays. The production/inventory control policy, which is based on the hedging point policy (HPP), consists of building and maintaining a safety stock of finished products in order to respond to demand and to avoid shortages during maintenance actions. Without considering the impact of preventive and corrective actions on the overall performance of the production system, most authors working in the reliability and maintainability domains confirm that the age-based replacement policy (ARP) outperforms the classical block-replacement policy (BRP). In order to reduce the wastage incurred by the classical BRP, we consider a modified block replacement policy (MBRP), which consists of canceling a preventive maintenance action if the time elapsed since the last maintenance action exceeds a specified time threshold. The main objective of this paper is to determine the joint optimal policy that minimizes the overall cost, which is composed of corrective and preventive maintenance costs as well as inventory holding and backlog costs. A simulation model mimicking the dynamic and stochastic behavior of the manufacturing cell, based on more realistic considerations of the real behavior of industrial manufacturing cells, is proposed. Based on simulation results, the joint optimal MBRP/HPP parameters are obtained through a numerical approach that combines design of experiments, analysis of variance and response surface methodologies. The joint optimal MBRP/HPP policy is compared to the classical joint ARP/HPP and BRP/HPP optimal policies, and the results show that the proposed MBRP/HPP outperforms both. Sensitivity analyses are also carried out in order to confirm the superiority of the proposed MBRP/HPP, and it is observed that for practitioners, the proposed joint MBRP/HPP offers not only cost savings but is also easier to manage than the ARP/HPP policy.
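A minimal toy sketch of the hedging point policy (HPP) control rule described above, simulated for a single failure-prone machine. All rates, failure/repair probabilities and the crude search over the hedging point are illustrative assumptions, not the paper's simulation model or its DOE/ANOVA/RSM optimization.

```python
import numpy as np

def simulate_hpp(z, horizon=10_000, d=1.0, u_max=1.5,
                 p_fail=0.01, p_repair=0.1, seed=0):
    """Discrete-time toy HPP simulation: produce at u_max while the machine is
    up and the stock x is below the hedging point z, produce exactly the demand
    rate d when x >= z, and produce nothing while the machine is down.
    Returns average inventory and average backlog per period."""
    rng = np.random.default_rng(seed)
    x, up = 0.0, True
    inv, back = 0.0, 0.0
    for _ in range(horizon):
        up = (rng.random() >= p_fail) if up else (rng.random() < p_repair)
        u = 0.0 if not up else (u_max if x < z else d)
        x += u - d                      # stock balance (negative = backlog)
        inv += max(x, 0.0)
        back += max(-x, 0.0)
    return inv / horizon, back / horizon

# Crude grid search over the hedging point z (a stand-in for the paper's
# experimental-design step).
for z in (0, 5, 10, 20):
    print(z, simulate_hpp(z))
```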
53.
Eugene D. Hahn, Decision Sciences, 2003, 34(3): 443-466
In the analytic hierarchy process (AHP), priorities are derived via a deterministic method, the eigenvalue decomposition. However, judgments may be subject to error. A stochastic characterization of the pairwise comparison judgment task is provided and statistical models are introduced for deriving the underlying priorities. Specifically, a weighted hierarchical multinomial logit model is used to obtain the priorities. Inference is then conducted from the Bayesian viewpoint using Markov chain Monte Carlo methods. The stochastic methods are found to give results that are congruent with those of the eigenvector method in matrices of different sizes and different levels of inconsistency. Moreover, inferential statements can be made about the priorities when the stochastic approach is adopted, and these statements may be of considerable value to a decision maker. The methods described are fully compatible with judgments from the standard version of AHP and can be used to construct a stochastic formulation of it.
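For reference, a small sketch of the deterministic eigenvector method that the stochastic AHP formulation is benchmarked against; the 3x3 judgment matrix is a made-up example, not taken from the article.

```python
import numpy as np

def ahp_priorities(A):
    """Priorities from a positive reciprocal pairwise comparison matrix via the
    principal eigenvector, normalized to sum to 1. Also returns the consistency
    index CI = (lambda_max - n) / (n - 1)."""
    A = np.asarray(A, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)               # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                           # normalize to priorities
    ci = (vals[k].real - len(A)) / (len(A) - 1)
    return w, ci

# Illustrative judgment matrix (not from the article).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, ci = ahp_priorities(A)
print("priorities:", w.round(3), "CI:", round(ci, 4))
```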
54.
This research examines the use of both frozen and replanning intervals for planning the master production schedule (MPS) for a capacity-constrained job shop. The results show that forecast error, demand lumpiness, setup time, planned lead time, and order size have a greater impact on the mean total backlog, total inventory, and number of setups than the frozen and replanning intervals. The study also shows that a repetitive lot dispatching rule reduces the importance of lot sizing, and a combination of repetitive lot dispatching rule and single-period order size consistently produces the lowest mean total backlog and total inventory. The results also indicate that rescheduling the open orders every period produces a lower mean total backlog and total inventory when the forecast errors are large relative to the order sizes. This result suggests that the due date of an open order should be updated only when a significant portion of the order is actually needed on the new due date.
55.
Journal of Statistical Computation and Simulation, 2012, 82(3-4): 227-236
The widely used Tietjen-Moore multiple-outlier statistic has a defect as originally proposed, in that it may test the wrong observations as outliers. The defect is corrected by redefinition, and the statistic is extended to make use of possible additional information on the underlying variance. Results of a simulation of the revised statistic are presented.
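A brief sketch of the standard two-sided Tietjen-Moore E_k statistic with simulated critical values. It shows the statistic as commonly defined, not the authors' corrected or extended version, and the planted-outlier example is purely illustrative.

```python
import numpy as np

def tietjen_moore_two_sided(x, k):
    """Two-sided Tietjen-Moore E_k: ratio of the sum of squares after removing
    the k values furthest from the mean to the full sum of squares.
    Small values of E_k point to outliers."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(np.abs(x - x.mean()))        # least to most extreme
    kept = x[order[:len(x) - k]]
    return ((kept - kept.mean()) ** 2).sum() / ((x - x.mean()) ** 2).sum()

def critical_value(n, k, alpha=0.05, n_sim=10_000, seed=0):
    """Simulated alpha-level critical value under a standard normal null;
    reject the no-outlier hypothesis when E_k falls below it."""
    rng = np.random.default_rng(seed)
    stats = [tietjen_moore_two_sided(rng.standard_normal(n), k)
             for _ in range(n_sim)]
    return np.quantile(stats, alpha)

rng = np.random.default_rng(1)
x = np.concatenate([rng.standard_normal(13), [4.5, -5.0]])  # two planted outliers
print("E_2 =", round(tietjen_moore_two_sided(x, 2), 3),
      "critical value:", round(critical_value(len(x), 2), 3))
```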
56.
Journal of Statistical Computation and Simulation, 2012, 82(4): 229-248
Identical numerical integration experiments are performed on a CYBER 205 and an IBM 3081 in order to gauge the relative performance of several methods of integration. The methods employed are the general methods of Gauss-Legendre, iterated Gauss-Legendre, Newton-Cotes, Romberg and Monte Carlo, as well as three methods, due to Owen, Dutt, and Clark respectively, for integrating the normal density. The bi- and trivariate normal densities and four other functions are integrated; the latter four have integrals expressible in closed form, and some of them can be parameterized to exhibit singularities or highly periodic behavior. The various Gauss-Legendre methods tend to be the most accurate (when applied to the normal density they are even more accurate than the special-purpose methods designed for the normal), and while they are not the fastest, they are at least competitive. In scalar mode the CYBER is about 2-6 times faster than the IBM 3081, and the speed advantage of vectorized over scalar mode ranges from 6 to 15. Large-scale econometric problems of the probit type should now be routinely soluble.
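A minimal sketch of plain Gauss-Legendre quadrature applied to the standard normal density, one of the methods compared above; the node counts and integration interval are illustrative choices, not the experimental design of the study.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature of f on [a, b]: map the nodes and
    weights from [-1, 1] to [a, b] and sum the weighted function values."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Standard normal density over [-4, 4]; exact value is erf(4 / sqrt(2)) ~ 0.99994.
phi = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
for n in (4, 8, 16, 32):
    print(n, gauss_legendre(phi, -4.0, 4.0, n))
```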
57.
58.
59.
Journal of Statistical Computation and Simulation, 2012, 82(1): 37-73
This study compares the empirical type I error and power of different permutation techniques that can be used for partial correlation analysis involving three data vectors and for partial Mantel tests. The partial Mantel test is a form of first-order partial correlation analysis involving three distance matrices which is widely used in such fields as population genetics, ecology, anthropology, psychometry and sociology. The methods compared are the following: (1) permute the objects in one of the vectors (or matrices); (2) permute the residuals of a null model; (3) correlate residualized vector 1 (or matrix A) with residualized vector 2 (or matrix B), then permute one of the residualized vectors (or matrices); (4) permute the residuals of a full model. In the partial correlation study, the results were compared to those of the parametric t-test, which provides a reference under normality. Simulations were carried out to measure the type I error and power of these permutation methods, using normal and non-normal data, without and with an outlier. There were 10 000 simulations for each situation (100 000 when n = 5); 999 permutations were produced per test where permutations were used. The recommended testing procedures are the following: (a) In partial correlation analysis, most methods can be used most of the time. The parametric t-test should not be used with highly skewed data. Permutation of the raw data should be avoided only when highly skewed data are combined with outliers in the covariable. Methods implying permutation of residuals, which are known to have only asymptotically exact significance levels, should not be used when highly skewed data are combined with small sample sizes. (b) In partial Mantel tests, method 2 can always be used, except when highly skewed data are combined with small sample sizes. (c) With small sample sizes, one should carefully examine the data before partial correlation or partial Mantel analysis. For highly skewed data, permutation of the raw data has correct type I error in the absence of outliers. When highly skewed data are combined with outliers in the covariable vector or matrix, it is still recommended to use permutation of the raw data. (d) Method 3 should never be used.
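A small sketch of method (1) above, permutation of the raw values of one vector, for a first-order partial correlation; the data are simulated for illustration and the sketch does not reproduce the study's full simulation design.

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

def perm_test_raw(x, y, z, n_perm=999, seed=0):
    """Permute the raw values of x and recompute the partial correlation;
    two-tailed p-value counts the observed statistic among the permutations."""
    rng = np.random.default_rng(seed)
    obs = partial_corr(x, y, z)
    hits = sum(abs(partial_corr(rng.permutation(x), y, z)) >= abs(obs)
               for _ in range(n_perm))
    return obs, (hits + 1) / (n_perm + 1)

# Illustrative data (not from the study).
rng = np.random.default_rng(1)
z = rng.normal(size=30)
x = z + rng.normal(size=30)
y = z + rng.normal(size=30)
print(perm_test_raw(x, y, z))
```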
60.
Journal of Statistical Computation and Simulation, 2012, 82(3): 231-255
Correlated binary data arise frequently in medical as well as other scientific disciplines, and statistical methods such as generalized estimating equations (GEE) have been widely used for their analysis. The need to simulate correlated binary variates arises when evaluating small-sample properties of the GEE estimators for such data. One might also generate such data to simulate and study biological phenomena such as tooth decay or periodontal disease. This article introduces a simple method for generating pairs of correlated binary data. A simple algorithm is also provided for generating an arbitrary-dimensional random vector of non-negatively correlated binary variates. The method relies on the idea that correlations among the random variables arise as a result of their sharing some common components that induce such correlations. It then uses some properties of the binary variates to represent each variate in terms of these common components in addition to its own elements. Unlike most previous approaches, which require solving nonlinear equations or use distributional properties of other random variables, this method uses only properties of the binary variate itself. As no intermediate random variables are required for generating the binary variates, the proposed method is shown to be faster than the other methods. To verify this claim, we compare the computational efficiency of the proposed method with that of other procedures.
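A short sketch in the spirit of the shared-common-component idea described above. The particular mixing construction used here (the well-known Lunn-Davies one) and all parameters are assumptions for illustration, not necessarily the article's exact algorithm.

```python
import numpy as np

def correlated_binary(n, d, p, rho, seed=0):
    """Generate n draws of a d-dimensional binary vector with common marginal
    probability p and non-negative pairwise correlation rho: each coordinate
    equals a shared Bernoulli(p) component Z with probability sqrt(rho),
    and its own independent Bernoulli(p) component otherwise, which yields
    pairwise correlation (sqrt(rho))^2 = rho."""
    rng = np.random.default_rng(seed)
    a = np.sqrt(rho)                          # mixing probability
    z = rng.binomial(1, p, size=(n, 1))       # shared component
    x = rng.binomial(1, p, size=(n, d))       # independent components
    u = rng.binomial(1, a, size=(n, d))       # choose shared vs. own component
    return np.where(u == 1, z, x)

y = correlated_binary(100_000, 2, p=0.3, rho=0.25)
print("marginal means:", y.mean(axis=0))
print("empirical correlation:", np.corrcoef(y.T)[0, 1])
```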