131.
In this paper, we analyse the reliability of a k-out-of-n cold-standby system whose components have Weibull time-to-failure distributions, from a Bayesian point of view. We first review the existing methods exhaustively and find that none of them considers Bayes theory. We then modify the simplest method and propose new methods based on Monte Carlo simulation. Next, we combine all the information to derive the posterior distribution of the Weibull parameters. A robust and universal sample-based method, built on Markov chain Monte Carlo, is proposed for drawing parameter samples and obtaining the Bayes estimate of reliability; the drawn samples prove rather satisfactory. A simulation study comparing all the methods in terms of accuracy and computational time yields some useful recommendations. These conclusions provide insight into the application of k-out-of-n cold-standby systems.
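A minimal sketch of the sample-based approach the abstract describes, under simplifying assumptions: synthetic Weibull failure data stand in for real test data, a flat prior on the log-parameters is used, and the system is reduced to the illustrative 1-out-of-n cold-standby case (system lifetime = sum of the n component lifetimes, spares not aging while waiting); the paper's general k-out-of-n treatment is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Weibull time-to-failure data (stand-in for real test data).
data = rng.weibull(1.5, size=30) * 100.0

def log_post(log_shape, log_scale):
    """Log-posterior on (log shape, log scale) with flat priors there,
    i.e. a 1/(shape*scale) prior on the original scale."""
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    z = data / scale
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape)

# Random-walk Metropolis on the log-parameters.
theta = np.array([0.0, np.log(data.mean())])
lp = log_post(*theta)
draws = []
for i in range(20000):
    prop = theta + rng.normal(scale=0.1, size=2)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5000 and i % 10 == 0:          # burn-in, then thin
        draws.append(np.exp(theta))
draws = np.array(draws)

def reliability(shape, scale, n=3, t=150.0, m=2000):
    """Monte Carlo reliability at mission time t for the illustrative
    1-out-of-n cold-standby system: lifetime is the sum of n components."""
    life = scale * rng.weibull(shape, size=(m, n)).sum(axis=1)
    return np.mean(life > t)

post_rel = np.array([reliability(a, b) for a, b in draws])
print("posterior mean reliability:", post_rel.mean())
```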
132.
A large-scale study, in which two million random Voronoi polygons (with respect to a homogeneous Poisson point process) were generated and mensurated, is described. The polygon characteristics recorded are number of sides (or vertices), perimeter, area and interior angles. A feature is the efficient “quantile” method of replicating Poisson-type random structures, which it is hoped may find useful application elsewhere.
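A small-scale sketch of such an experiment (the paper used two million polygons and its own quantile replication method; scipy's Voronoi routine and a crude border guard against edge effects stand in here):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)

# Homogeneous Poisson point process with intensity lam on a [0, L]^2 window.
lam, L = 1.0, 50.0
n = rng.poisson(lam * L * L)
pts = rng.uniform(0, L, size=(n, 2))
vor = Voronoi(pts)

def polygon_stats(verts):
    """Number of sides, perimeter and shoelace area of a convex polygon."""
    edges = np.roll(verts, -1, axis=0) - verts
    perim = np.sum(np.hypot(edges[:, 0], edges[:, 1]))
    x, y = verts[:, 0], verts[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return len(verts), perim, area

stats = []
for p_idx, r_idx in enumerate(vor.point_region):
    region = vor.regions[r_idx]
    if -1 in region or len(region) == 0:
        continue                      # unbounded cell
    # crude edge-effect guard: keep cells generated well inside the window
    if not (5 < pts[p_idx, 0] < L - 5 and 5 < pts[p_idx, 1] < L - 5):
        continue
    stats.append(polygon_stats(vor.vertices[region]))

sides, perim, area = map(np.array, zip(*stats))
print("mean sides:", sides.mean())    # theory for Poisson-Voronoi: exactly 6
print("mean area :", area.mean())     # theory: 1 / lam
```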
133.
A simulation study was carried out to compare the performance of two different simple estimators of the location parameter of a three-parameter Weibull distribution. Both estimators have been suggested in recent papers in the literature. Bias and mean square error are examined for many different combinations of sample size and shape-parameter value. Strong evidence of the domination of one estimator over the other is found.
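The abstract does not name the two estimators, so the sketch below compares two illustrative stand-ins (the sample minimum, and a Zanakis-type estimator built from three order statistics) over a grid of shape values, reporting bias and MSE as in such a study:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(mu=10.0, shape=2.0, scale=5.0, n=20, reps=5000):
    """Bias and MSE of two simple location estimators for a
    three-parameter Weibull(mu, shape, scale) sample of size n."""
    est1 = np.empty(reps)   # sample minimum
    est2 = np.empty(reps)   # Zanakis-type order-statistic estimator
    for r in range(reps):
        x = np.sort(mu + scale * rng.weibull(shape, size=n))
        est1[r] = x[0]
        denom = x[0] + x[-1] - 2 * x[1]
        est2[r] = (x[0] * x[-1] - x[1] ** 2) / denom if denom != 0 else x[0]
    for name, e in [("minimum", est1), ("Zanakis", est2)]:
        print(f"  {name:8s} bias={e.mean() - mu:+.4f}  MSE={np.mean((e - mu) ** 2):.4f}")

for shape in (0.8, 1.5, 3.0):          # vary the shape parameter
    print("shape =", shape)
    simulate(shape=shape)
```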
134.
In estimating a multiple integral, it is known that Monte Carlo methods are more efficient than analytical techniques when the number of dimensions is beyond seven. In general, the sample-mean method is better than the hit-or-miss Monte Carlo method. However, when the volume of a domain in a high-dimensional space is of interest, the hit-or-miss method is usually preferred, because of the difficulty in generalizing the sample-mean method to the computation of the volume of a domain. This paper develops a technique that makes such a generalization possible. The technique can be interpreted as a volume-preserving transformation procedure: a volume-preserving transformation is first performed to map the domain of interest onto a hypersphere, and the volume of the domain is then evaluated by computing the volume of the hypersphere.
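For context, a sketch of the baseline hit-or-miss estimator applied to the hypersphere itself; the shrinking hit fraction (about 2^-d of cube samples land in the ball) illustrates why such estimators degrade in high dimension, which motivates the paper's transformation idea. The transformation step itself is not reproduced here.

```python
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(3)

def hit_or_miss_ball_volume(d, n=200000):
    """Hit-or-miss volume of the unit ball in d dimensions: sample
    uniformly in the bounding cube [-1, 1]^d and count the hits."""
    u = rng.uniform(-1.0, 1.0, size=(n, d))
    hit_frac = np.mean(np.sum(u * u, axis=1) <= 1.0)
    return hit_frac * 2.0 ** d        # hit fraction times cube volume

for d in (5, 8, 10):
    exact = pi ** (d / 2) / gamma(d / 2 + 1)
    print(f"d={d:2d}  estimate={hit_or_miss_ball_volume(d):.4f}  exact={exact:.4f}")
```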
135.
In this paper, procedures for all pairwise comparisons of the location parameters of negative exponential populations are developed for the cases of known and unknown common scale parameter, using large-sample distributional approximations of the relevant random variables. The small-sample performance of these procedures is then examined using Monte Carlo simulation.
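A sketch of the kind of Monte Carlo check involved, for the known-scale case; the bias-corrected sample minimum as location estimator, the normal approximation, and the Bonferroni adjustment are illustrative choices, not necessarily the paper's procedures:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def coverage(mu=(0.0, 0.5, 1.0), theta=1.0, n=15, reps=10000, level=0.95):
    """Simultaneous coverage of large-sample pairwise intervals for the
    locations of shifted exponential populations with known scale theta."""
    k = len(mu)
    # Bonferroni: k(k-1)/2 two-sided comparisons
    z = norm.ppf(1 - (1 - level) / (k * (k - 1)))
    se = theta * np.sqrt(2.0) / n       # sd of each bias-corrected minimum is theta/n
    covered = 0
    for _ in range(reps):
        # E[X_(1)] = mu + theta/n, so subtract theta/n to remove the bias
        est = [min(m + rng.exponential(theta, size=n)) - theta / n for m in mu]
        covered += all(
            abs((est[i] - est[j]) - (mu[i] - mu[j])) <= z * se
            for i in range(k) for j in range(i + 1, k)
        )
    return covered / reps

# Small n reveals how rough the large-sample normal approximation is.
print("estimated simultaneous coverage:", coverage())
```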
136.
Minimum information bivariate distributions with uniform marginals and a specified rank correlation are studied in this paper. These distributions play an important role in a particular way of modeling dependent random variables which has been used in the computer code UNICORN for carrying out uncertainty analyses. It is shown that these minimum information distributions have a particular form which makes simulation of conditional distributions very simple. Approximations to the continuous distributions are discussed and explicit formulae are determined. Finally, a relation to DAD theorems is discussed, and a numerical algorithm with a geometric rate of convergence is given for determining the minimum information distributions.
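The minimum information density here has the form a(u)b(v)exp(lambda*u*v), which a DAD-type (iterative proportional fitting) scaling recovers on a grid; the sketch below discretizes, scales to uniform marginals, and bisects the kernel parameter to hit a target Spearman correlation (grid size and iteration counts are pragmatic choices):

```python
import numpy as np

def min_info_copula(lam, m=100, iters=300):
    """DAD / iterative proportional fitting: scale the kernel exp(lam*u*v)
    so both marginals are uniform on an m-point grid (cell mass 1/m)."""
    u = (np.arange(m) + 0.5) / m
    P = np.exp(lam * np.outer(u, u))
    for _ in range(iters):
        P *= (1.0 / m) / P.sum(axis=1, keepdims=True)   # fix row marginals
        P *= (1.0 / m) / P.sum(axis=0, keepdims=True)   # fix column marginals
    return u, P

def spearman(u, P):
    """Spearman's rho for uniform marginals: 12 E[UV] - 3."""
    return 12.0 * u @ P @ u - 3.0

def solve_lambda(target_rho, lo=-50.0, hi=50.0, steps=40):
    """Bisect lambda: rho is monotone increasing in the kernel parameter."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        u, P = min_info_copula(mid)
        if spearman(u, P) < target_rho:
            lo = mid
        else:
            hi = mid
    return mid

lam = solve_lambda(0.7)
u, P = min_info_copula(lam)
print("lambda:", lam, " achieved rho:", spearman(u, P))
```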
137.
Predicting the occurrence of earthquakes in seismic areas is a challenging problem in seismology and earthquake engineering; the prevention and quantification of the damage that destructive earthquakes may cause are directly linked to such predictions. In this paper, we adopt a parametric semi-Markov approach. This model views a sequence of earthquakes as a Markov process and, in addition, permits the more realistic assumption that events are dependent in space and time. The elapsed time between two consecutive events is modeled by a general Weibull distribution. We then determine the transition probabilities and the so-called crossing-state probabilities. We conclude with a Monte Carlo simulation, and the model is validated on a large database of real data.
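A sketch of the Monte Carlo step for such a semi-Markov model: magnitude classes, the embedded transition matrix, and all Weibull parameters below are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical magnitude classes and embedded Markov transition matrix.
states = ["M<4.5", "4.5<=M<5.5", "M>=5.5"]
P = np.array([[0.5, 0.4, 0.1],
              [0.4, 0.4, 0.2],
              [0.3, 0.4, 0.3]])

# Hypothetical Weibull (shape, scale in years) for the inter-event time,
# indexed by the pair (current state, next state).
shape = np.array([[1.2, 1.1, 0.9],
                  [1.1, 1.0, 0.9],
                  [1.0, 0.9, 0.8]])
scale = np.array([[0.5, 0.8, 2.0],
                  [0.6, 0.9, 2.5],
                  [0.8, 1.2, 3.0]])

def simulate(horizon, s0=0):
    """One semi-Markov trajectory: (event time, state) pairs up to horizon."""
    t, s, path = 0.0, s0, []
    while t < horizon:
        nxt = rng.choice(3, p=P[s])
        t += scale[s, nxt] * rng.weibull(shape[s, nxt])
        path.append((t, states[nxt]))
        s = nxt
    return path

# Monte Carlo estimate: probability of a strong event within 10 years.
hits = sum(any(st == "M>=5.5" and t <= 10.0 for t, st in simulate(10.0))
           for _ in range(5000))
print("P(strong event within 10 yr) ~", hits / 5000)
```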
138.
Bias-corrected random forests in regression
It is well known that random forests reduce the variance of regression predictors compared to a single tree, while leaving the bias unchanged. In many situations the dominating component of the risk turns out to be the squared bias, which makes bias correction necessary. In this paper, random forests are used to estimate the regression function. Five different methods for estimating the bias are proposed and discussed. Simulated and real data are used to study the performance of these methods. Our proposed methods are significantly effective in reducing bias in the regression context.
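The residual-forest correction below is one plausible scheme of this kind (the abstract does not spell out the paper's five methods): fit a second forest to the out-of-bag residuals of the first and add its prediction back. The synthetic data and forest sizes are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Synthetic regression problem with a smooth signal (RF bias shows at edges).
X = rng.uniform(-3, 3, size=(2000, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.3, size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: ordinary random forest; out-of-bag predictions give honest residuals.
rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
rf.fit(X_tr, y_tr)

# Stage 2: a second forest fitted to the OOB residuals estimates the bias.
resid = y_tr - rf.oob_prediction_
rf_bias = RandomForestRegressor(n_estimators=300, random_state=1)
rf_bias.fit(X_tr, resid)

pred_plain = rf.predict(X_te)
pred_corr = pred_plain + rf_bias.predict(X_te)
print("MSE plain    :", np.mean((y_te - pred_plain) ** 2))
print("MSE corrected:", np.mean((y_te - pred_corr) ** 2))
```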
139.
Data envelopment analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency [B. Hollingsworth, The measurement of efficiency and productivity of health care delivery. Health Economics 17(10) (2008), pp. 1107–1128], but a long-standing concern is that DEA assumes that data are measured without error. This is quite unlikely, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if this is not taken into account [B.J. Gajewski, R. Lee, M. Bott, U. Piamjariyakul, and R.L. Taunton, On estimating the distribution of data envelopment analysis efficiency scores: an application to nursing homes’ care planning process. Journal of Applied Statistics 36(9) (2009), pp. 933–944; J. Ruggiero, Data envelopment analysis with stochastic data. Journal of the Operational Research Society 55 (2004), pp. 1008–1012]. We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We will apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® to estimate nursing units’ efficiency. Several external reliability studies inform the posterior distribution of the measurement error on the DEA variables. We will discuss the case of generalizing the approach to situations where an external reliability study is not feasible.
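A toy sketch of the propagation idea: draw error-corrected inputs from a posterior, solve the standard input-oriented CCR linear program for each draw, and summarize the resulting efficiency distribution. The data, the single input/output, and the normal-around-observed stand-in posterior are all illustrative; the paper informs the posterior with external reliability studies.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)

def dea_input_ccr(X, Y, o):
    """Input-oriented CCR efficiency of unit o: minimise theta subject to
    sum_j lam_j x_j <= theta * x_o, sum_j lam_j y_j >= y_o, lam >= 0."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                              # minimise theta
    A_in = np.column_stack([-X[o], X.T])                     # input constraints
    A_out = np.column_stack([np.zeros(Y.shape[1]), -Y.T])    # output constraints
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Observed data for 8 hypothetical nursing units (1 input, 1 output).
X_obs = rng.uniform(5, 15, size=(8, 1))
Y_obs = rng.uniform(2, 10, size=(8, 1))
tau = 0.05 * X_obs.mean()      # measurement-error sd, as if from a reliability study

S = 500
eff = np.empty((S, 8))
for s in range(S):
    # crude stand-in posterior for error-free inputs: normal around observed
    X_s = np.clip(X_obs + rng.normal(scale=tau, size=X_obs.shape), 1e-6, None)
    eff[s] = [dea_input_ccr(X_s, Y_obs, o) for o in range(8)]

for o in range(8):
    lo, hi = np.percentile(eff[:, o], [2.5, 97.5])
    print(f"unit {o}: mean eff={eff[:, o].mean():.3f}  95% interval=({lo:.3f}, {hi:.3f})")
```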
140.
For the two-sample location and scale problem we propose an adaptive test based on so-called Lepage-type tests. The well-known test of Lepage (1971) is a combination of the Wilcoxon test for location alternatives and the Ansari-Bradley test for scale alternatives, and it behaves well for symmetric and medium-tailed distributions. For short-, medium- and long-tailed distributions we replace the Wilcoxon test and the Ansari-Bradley test by other suitable two-sample tests for location and scale, respectively, in order to obtain higher power than the classical Lepage test for such distributions; these tests are here called Lepage-type tests. In practice, however, we generally have no clear idea about the distribution that generated our data. Thus, an adaptive test should be applied, one that takes the given data set into consideration. The proposed adaptive test is based on the concept of Hogg (1974): first, classify the unknown symmetric distribution function with respect to a measure of tailweight; second, apply an appropriate Lepage-type test for this classified type of distribution. We compare the adaptive test with the three Lepage-type tests in the adaptive scheme, with the classical Lepage test, and with other parametric and nonparametric tests. The power comparison is carried out via Monte Carlo simulation. It is shown that the adaptive test is the best one for the broad class of distributions considered.
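A sketch of the adaptive scheme: a Hogg-style tailweight selector chooses between the classical Lepage statistic and a long-tailed variant. The cut-off, the tailweight measure, and the substitution of Mood's scale test in the long-tailed branch are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import ranksums, ansari, mood, norm, chi2

def hogg_Q(z, frac=0.05):
    """Hogg-style tailweight: spread of the outer 5% means relative to the
    spread of the outer 50% means (large values indicate long tails)."""
    z = np.sort(z)
    k5, k50 = max(1, int(round(frac * len(z)))), max(1, len(z) // 2)
    return (z[-k5:].mean() - z[:k5].mean()) / (z[-k50:].mean() - z[:k50].mean())

def lepage_stat(x, y, scale_test=ansari):
    """Lepage-type statistic: squared standardised location statistic plus
    squared standardised scale statistic, referred to chi-square(2)."""
    z_loc, _ = ranksums(x, y)                 # Wilcoxon, already standardised
    _, p_sc = scale_test(x, y)
    z_sc = norm.ppf(1.0 - p_sc / 2.0)         # |z| from the two-sided p-value
    stat = z_loc ** 2 + z_sc ** 2
    return stat, chi2.sf(stat, df=2)

def adaptive_test(x, y, cutoff=2.0):
    """Classify pooled tailweight, then pick the Lepage-type test."""
    pooled = np.concatenate([x - np.median(x), y - np.median(y)])
    if hogg_Q(pooled) > cutoff:               # long tails: swap in Mood's test
        return ("long-tailed variant",) + lepage_stat(x, y, scale_test=mood)
    return ("classical Lepage",) + lepage_stat(x, y)

rng = np.random.default_rng(8)
x = rng.standard_t(df=3, size=40)             # long-tailed samples
y = 0.5 + 1.5 * rng.standard_t(df=3, size=40)
print(adaptive_test(x, y))
```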