Subscription full text: 618 articles
Free full text: 10 articles
Management science: 203 articles
Ethnology: 1 article
Talent studies: 1 article
Demography: 11 articles
Collected works: 6 articles
Theory and methodology: 8 articles
Interdisciplinary: 41 articles
Sociology: 33 articles
Statistics: 324 articles
By publication year:
2024: 1   2023: 3   2021: 5   2020: 5   2019: 9   2018: 16   2017: 54   2016: 17   2015: 16   2014: 15
2013: 183   2012: 47   2011: 13   2010: 18   2009: 20   2008: 14   2007: 14   2006: 8   2005: 8   2004: 8
2003: 8   2002: 12   2001: 6   2000: 12   1999: 9   1998: 10   1997: 5   1996: 8   1995: 8   1994: 3
1993: 11   1992: 16   1991: 5   1990: 4   1989: 2   1988: 8   1987: 3   1986: 4   1985: 2   1984: 4
1983: 5   1982: 4   1981: 4   1980: 1
A total of 628 search results (search time: 31 ms).
71.
Robust methods are proposed for testing whether several directional distributions on the unit p-sphere have comparable dispersions. The families of distributions considered are the Langevin for random vectors, and the Generalised Scheidegger-Watson for random axes, with specific interest in the Fisher and Watson distributions on the sphere. The methods are analogues of Levene's procedure for comparing variances of normal distributions.
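As a rough, hedged illustration of the Levene-style idea for spheres: one can measure each unit vector's angular deviation from its group's mean direction and feed those deviations to the classical Levene test. This is only a minimal sketch with simulated placeholder data (the helpers `angular_deviations` and `sample_sphere` and all constants are illustrative), not the robust procedures proposed in the paper.

```python
import numpy as np
from scipy.stats import levene

def angular_deviations(X):
    """Angles (radians) between the unit vectors in X (n x p) and their mean direction."""
    mean_dir = X.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    return np.arccos(np.clip(X @ mean_dir, -1.0, 1.0))

def sample_sphere(n, p, concentration, rng):
    """Crude cluster of unit vectors around the first coordinate axis."""
    Z = rng.normal(size=(n, p)) / concentration
    Z[:, 0] += 1.0
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

rng = np.random.default_rng(0)
groups = [sample_sphere(50, 3, c, rng) for c in (4.0, 4.0, 8.0)]  # third group is less dispersed
stat, pval = levene(*[angular_deviations(X) for X in groups])
print(f"Levene-type statistic {stat:.3f}, p-value {pval:.3f}")
```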
72.
This paper presents a decision support methodology for strategic planning in tramp and industrial shipping. The proposed methodology combines simulation and optimization, where a Monte Carlo simulation framework is built around an optimization-based decision support system for short-term routing and scheduling. The simulation proceeds by considering a series of short-term routing and scheduling problems using a rolling horizon principle where information is revealed as time goes by. The approach is flexible in the sense that it can easily be configured to provide decision support for a wide range of strategic planning problems, such as fleet size and mix problems and analysis of long-term contracts and contract terms. The methodology is tested on a real case for a major Norwegian shipping company, where it provided valuable decision support on important strategic planning problems.
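The rolling-horizon simulation loop described above can be sketched as follows. The greedy `schedule_short_term` is only a stand-in for the optimization-based short-term routing and scheduling system, and all names, cargo profits, and horizon lengths are illustrative assumptions rather than details from the paper.

```python
import random

def schedule_short_term(open_cargoes, ships):
    """Placeholder short-term planner: greedily assign the most profitable cargoes to free ships."""
    assignments, free_ships = [], list(ships)
    for cargo in sorted(open_cargoes, key=lambda c: c["profit"], reverse=True):
        if not free_ships:
            break
        assignments.append((free_ships.pop(), cargo))
    return assignments

def simulate(horizon_days=360, step_days=30, n_ships=5, seed=1):
    rng = random.Random(seed)
    ships = [f"ship-{i}" for i in range(n_ships)]
    open_cargoes, total_profit = [], 0.0
    for t in range(0, horizon_days, step_days):
        # Information is revealed as time goes by: new cargoes appear each period.
        open_cargoes += [{"id": (t, k), "profit": rng.uniform(10, 100)}
                         for k in range(rng.randint(2, 8))]
        served = schedule_short_term(open_cargoes, ships)
        total_profit += sum(c["profit"] for _, c in served)
        served_ids = {c["id"] for _, c in served}
        open_cargoes = [c for c in open_cargoes if c["id"] not in served_ids]
    return total_profit

print(f"Simulated profit over the horizon: {simulate():.1f}")
```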
73.
In this article, an efficient Bayesian meta-modeling approach is proposed for Gaussian stochastic process models in computer experiments. Different prior densities, and in particular a noninformative hyperprior, are employed on the parameters of the correlation matrix, and the related parameters are estimated with the expectation-maximization algorithm. Compared with the recent work of Li and Sudjianto (2005), the proposed approach not only achieves higher prediction accuracy but also has lower computational cost, owing to the noninformative prior and the absence of tuning parameters. Experimental results demonstrate that our approach yields state-of-the-art performance.
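For context, here is a minimal ordinary-kriging (Gaussian stochastic process) predictor with a Gaussian correlation function and fixed correlation parameters. The Bayesian priors and the EM estimation of the hyperparameters described in the abstract are not reproduced; `theta`, the toy test function, and the nugget value are assumptions for illustration.

```python
import numpy as np

def corr_matrix(X1, X2, theta):
    """Gaussian correlation: R_ij = exp(-sum_k theta_k (x1_ik - x2_jk)^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(axis=2)
    return np.exp(-d2)

def kriging_predict(X, y, X_new, theta, nugget=1e-10):
    R = corr_matrix(X, X, theta) + nugget * np.eye(len(X))
    ones = np.ones(len(X))
    beta = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))  # GLS constant mean
    r = corr_matrix(X_new, X, theta)
    return beta + r @ np.linalg.solve(R, y - beta)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(20, 2))
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2          # toy computer-experiment output
X_new = rng.uniform(0, 1, size=(5, 2))
print(kriging_predict(X, y, X_new, theta=np.array([5.0, 5.0])))
```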
74.
M-quantile regression is defined as a "quantile-like" generalization of robust regression based on influence functions. This article outlines asymptotic properties of the M-quantile regression coefficient estimators in the case of i.i.d. data with stochastic regressors, paying attention to adjustments due to the first-step scale estimation. A variance estimator of the M-quantile regression coefficients based on the sandwich approach is proposed. Empirical results show that this estimator performs well under different simulated scenarios. The sandwich estimator is applied in the small area estimation context for estimating the mean squared error of an estimator of the small area means. The results obtained improve on previous findings, especially in the case of heteroskedastic data.
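A minimal iteratively reweighted least squares sketch of M-quantile regression with a Huber influence function, in the spirit of the estimators discussed above; the scale is re-estimated by the MAD at each step, and the sandwich variance estimator proposed in the paper is not reproduced. The data, tuning constant `c = 1.345`, and quantile `q = 0.75` are placeholder choices.

```python
import numpy as np

def m_quantile_fit(X, y, q=0.5, c=1.345, tol=1e-8, max_iter=200):
    """M-quantile regression coefficients via IRLS with an asymmetric Huber influence function."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(max_iter):
        r = y - X @ beta
        s = max(np.median(np.abs(r - np.median(r))) / 0.6745, 1e-8)  # MAD scale estimate
        u = r / s
        psi = np.clip(u, -c, c) * 2 * np.where(u > 0, q, 1 - q)      # asymmetric Huber psi_q
        w = np.ones_like(u)
        nz = np.abs(u) > 1e-8
        w[nz] = psi[nz] / u[nz]                                       # IRLS weights psi_q(u) / u
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = 1.0 + 2.0 * X[:, 1] + rng.standard_t(df=3, size=500)              # heavy-tailed errors
print(m_quantile_fit(X, y, q=0.75))
```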
75.
We derive an analytic expression for the bias of the maximum likelihood estimator of the parameter in a doubly-truncated Poisson distribution, which proves highly effective as a means of bias correction. For smaller sample sizes, our method outperforms the alternative of bias correction via the parametric bootstrap. Bias is of little concern in the positive Poisson distribution, the most common form of truncation in the applied literature. Bias appears to be most severe in the doubly-truncated Poisson distribution when the mean of the distribution is close to the right (upper) truncation point.
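A sketch of the parametric-bootstrap bias correction that serves as the comparison point in the abstract, for a Poisson distribution doubly truncated to {a, ..., b}; the paper's analytic bias expression is not reproduced, and the truncation points, sample size, and true mean below are illustrative (chosen so the mean sits near the upper truncation point, where bias is reported to be most severe).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

A, B = 1, 6                        # left and right truncation points (inclusive)
SUPPORT = np.arange(A, B + 1)

def trunc_pmf(lam):
    p = poisson.pmf(SUPPORT, lam)
    return p / p.sum()

def mle(sample):
    """Maximum likelihood estimate of lambda for the doubly-truncated Poisson."""
    def negloglik(lam):
        return -np.sum(np.log(trunc_pmf(lam)[sample - A]))
    return minimize_scalar(negloglik, bounds=(1e-6, 50.0), method="bounded").x

rng = np.random.default_rng(0)
lam_true, n = 4.5, 30              # mean near the upper truncation point, small sample
sample = rng.choice(SUPPORT, size=n, p=trunc_pmf(lam_true))

lam_hat = mle(sample)
boot = [mle(rng.choice(SUPPORT, size=n, p=trunc_pmf(lam_hat))) for _ in range(500)]
lam_bc = lam_hat - (np.mean(boot) - lam_hat)    # bootstrap bias-corrected estimate
print(f"MLE {lam_hat:.3f}, bootstrap bias-corrected {lam_bc:.3f}")
```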
76.
Classification of data consisting of both categorical and continuous variables between two groups is often handled by the sample location linear discriminant function confined to each of the locations specified by the observed values of the categorical variables. Homoscedasticity of across-location conditional dispersion matrices of the continuous variables is often assumed. Quite often, interactions between continuous and categorical variables cause across-location heteroscedasticity. In this article, we examine the effect of heterogeneous across-location conditional dispersion matrices on the overall expected and actual error rates associated with the sample location linear discriminant function. Performance of the sample location linear discriminant function is evaluated against the results for the restrictive classifier adjusted for across-location heteroscedasticity. Conclusions based on a Monte Carlo study are reported.
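A minimal sketch of the location-model linear discriminant rule discussed above: within each level ("location") of the categorical variable, the continuous variables are classified with a linear score built from that location's group means and a dispersion matrix pooled over all location-group cells, i.e. under the across-location homoscedasticity assumption. The data, cell structure, and test point are placeholders.

```python
import numpy as np

def fit_location_ldf(X, group, loc):
    """Pooled covariance across all (location, group) cells plus cell means and priors."""
    cells, pooled, dof = {}, np.zeros((X.shape[1], X.shape[1])), 0
    for m in np.unique(loc):
        for g in np.unique(group):
            Z = X[(loc == m) & (group == g)]
            cells[(m, g)] = (Z.mean(axis=0), len(Z) / len(X))
            pooled += (len(Z) - 1) * np.cov(Z, rowvar=False)
            dof += len(Z) - 1
    return cells, pooled / dof

def classify(x, m, cells, sigma):
    """Assign x, observed at location m, to the group with the highest linear score."""
    sigma_inv = np.linalg.inv(sigma)
    scores = {g: x @ sigma_inv @ mu - 0.5 * mu @ sigma_inv @ mu + np.log(prior)
              for (loc_m, g), (mu, prior) in cells.items() if loc_m == m}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
loc = rng.integers(0, 2, size=400)                    # binary categorical variable (two locations)
group = rng.integers(0, 2, size=400)                  # two groups
shift = 1.5 * group[:, None] + 0.5 * loc[:, None]     # group/location effects on the continuous variables
X = rng.normal(size=(400, 2)) + shift

cells, sigma = fit_location_ldf(X, group, loc)
print(classify(np.array([2.0, 2.0]), m=1, cells=cells, sigma=sigma))
```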
77.
For Canada's boreal forest region, accurate modelling of the timing of the appearance of aspen leaves is important to forest fire management, as it signifies the end of the spring fire season that occurs after snowmelt. This article compares two methods, a midpoint rule and a conditional expectation method, for estimating the true flush date from interval-censored data collected at a large set of fire-weather stations in Alberta, Canada. The conditional expectation method uses the interval-censored kernel density estimator of Braun et al. (2005). The methods are compared via simulation, where true flush dates were generated from a normal distribution and then converted into intervals by adding and subtracting exponential random variables. The simulation parameters were estimated from the data set and several scenarios were considered. The study reveals that the conditional expectation method is never worse than the midpoint method and that there is a significant advantage to this method when the intervals are large. An illustration of the methodology applied to the Alberta data set is also provided.
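A small simulation in the spirit of the comparison above, assuming normal true flush dates and exponential gaps to the bracketing observation dates. To keep the sketch short, the conditional expectation is computed under a normal distribution fitted to the interval midpoints rather than under the interval-censored kernel density estimator of Braun et al. (2005); all parameter values are placeholders, not estimates from the Alberta data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, mu, sd, scale = 500, 140.0, 7.0, 4.0           # day-of-year scale; illustrative values

t = rng.normal(mu, sd, size=n)                     # true flush dates
left = t - rng.exponential(scale, size=n)          # last "no leaves" visit
right = t + rng.exponential(scale, size=n)         # first "leaves flushed" visit

midpoint = (left + right) / 2

# Conditional expectation E[T | left <= T <= right] under a normal fitted to the midpoints.
mu_hat, sd_hat = midpoint.mean(), midpoint.std(ddof=1)
a, b = (left - mu_hat) / sd_hat, (right - mu_hat) / sd_hat
cond_exp = mu_hat + sd_hat * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

rmse = lambda est: np.sqrt(np.mean((est - t) ** 2))
print(f"RMSE midpoint {rmse(midpoint):.2f}, conditional expectation {rmse(cond_exp):.2f}")
```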
78.
The two well-known and widely used multinomial selection procedures, Bechhofer, Elmaghraby, and Morse (BEM) and all vector comparison (AVC), are critically compared in applications related to simulation optimization problems.

Two configurations of population probability distributions were studied, in which the best system has the greatest probability p_i of yielding the largest value of the performance measure and either does or does not have the largest expected performance measure.

Our simulation results clearly show that neither of the studied procedures outperforms the other in all situations. The user must take into consideration the complexity of the simulations and the properties of the performance measure's probability distribution when deciding which procedure to employ.

An important finding is that AVC does not work in populations in which the best system has the greatest probability p_i of yielding the largest value of the performance measure but does not have the largest expected performance measure.
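A sketch of the BEM-style single-stage selection step: in each replication one observation is drawn from every system, the system producing the largest value scores a multinomial "win", and the system with the most wins is selected. The three hypothetical normal populations below are placeholders arranged so that a high-variance system can capture the largest win probability p_i without having the largest mean, the awkward configuration noted above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_reps = 2000

# Columns = systems. System 1 has a lower mean but a much heavier spread, so it
# frequently produces the largest value in a replication; system 2 has the largest mean.
samples = np.column_stack([
    rng.normal(10.0, 1.0, n_reps),
    rng.normal(9.0, 10.0, n_reps),
    rng.normal(10.5, 1.0, n_reps),
])

wins = np.bincount(samples.argmax(axis=1), minlength=3)   # multinomial win counts
print("win counts:", wins, "-> BEM-selected system:", wins.argmax())
print("sample means:", samples.mean(axis=0).round(2))
```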
79.
In this article, we present the problem of selecting a good stochastic system with high probability and minimum total simulation cost when the number of alternatives is very large. We propose a sequential approach that starts with the Ordinal Optimization procedure to select a subset that overlaps with the set of the actual best m% systems with high probability. Then we use Optimal Computing Budget Allocation to allocate the available computing budget in a way that maximizes the Probability of Correct Selection. This is followed by a Subset Selection procedure to obtain a smaller subset that contains the best system from the previously selected subset. Finally, the Indifference-Zone procedure is used to select the best system among the survivors of the previous stage. Numerical tests of the combined procedure show that a good stochastic system can be selected with high probability and a minimum number of simulation samples when the number of alternatives is large. The results also show that the proposed approach is able to identify a good system in a very short simulation time.
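A sketch of a single budget-allocation step in the spirit of Optimal Computing Budget Allocation, using the commonly quoted asymptotic rule: each non-best design receives budget proportional to (sigma_i / delta_{b,i})^2, and the observed best design receives sigma_b * sqrt(sum_i N_i^2 / sigma_i^2). The sample means, standard deviations, and budget are placeholders, and the full OO/OCBA/subset-selection/indifference-zone pipeline of the article is not reproduced.

```python
import numpy as np

def ocba_allocation(means, stds, total_budget):
    """Split total_budget across designs; the observed best is the largest sample mean."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmax(means))
    nb = np.arange(len(means)) != b                  # mask for the non-best designs
    delta = means[b] - means                         # optimality gaps
    ratio = np.zeros_like(means)
    ratio[nb] = (stds[nb] / delta[nb]) ** 2
    ratio[b] = stds[b] * np.sqrt(np.sum(ratio[nb] ** 2 / stds[nb] ** 2))
    return np.round(ratio / ratio.sum() * total_budget).astype(int)

# Illustrative first-stage sample statistics for four designs.
means = [10.2, 9.8, 9.5, 7.0]
stds = [1.0, 2.0, 1.5, 3.0]
print(ocba_allocation(means, stds, total_budget=1000))
```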
80.
Przystalski and Krajewski (2007) proposed the restricted backfitting (RBCF) estimator and the restricted Speckman (RSPC) estimator for the treatment effects in a partially linear model when some additional exact linear restrictions are assumed to hold. In this article, we introduce the preliminary test backfitting (PTBCF) estimator and the preliminary test Speckman (PTSPC) estimator for use when the validity of the restrictions is suspect. Performances of the proposed estimators are examined with respect to the mean squared error (MSE) criterion. In addition, the numerical behavior of the proposed estimators is illustrated and compared via a Monte Carlo simulation study.
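To illustrate the preliminary-test idea in the simplest setting, the sketch below works in a fully parametric linear model rather than the partially linear backfitting/Speckman setting of the paper: the restricted estimator is used when an F-test does not reject the linear restrictions R beta = r, and the unrestricted estimator otherwise. The data, the restriction, and the significance level are placeholders.

```python
import numpy as np
from scipy.stats import f as f_dist

def preliminary_test_estimator(X, y, R, r, alpha=0.05):
    n, p = X.shape
    q = R.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_u = XtX_inv @ X.T @ y                               # unrestricted OLS
    M = np.linalg.inv(R @ XtX_inv @ R.T)
    beta_r = beta_u - XtX_inv @ R.T @ M @ (R @ beta_u - r)   # restricted estimator
    s2 = np.sum((y - X @ beta_u) ** 2) / (n - p)
    F = (R @ beta_u - r) @ M @ (R @ beta_u - r) / (q * s2)   # F-test of R beta = r
    use_restricted = F <= f_dist.ppf(1 - alpha, q, n - p)
    return (beta_r if use_restricted else beta_u), F

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([1.0, 2.0, 2.0]) + rng.normal(size=200)
R, r = np.array([[0.0, 1.0, -1.0]]), np.array([0.0])         # restriction: beta_1 = beta_2
beta_pt, F = preliminary_test_estimator(X, y, R, r)
print(f"F = {F:.3f}, estimator used: {beta_pt.round(3)}")
```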