By access (articles):
  Subscription full text   687
  Free   18
By subject (articles):
  Management   51
  Collected series   2
  Theory and methodology   1
  General/interdisciplinary   51
  Sociology   7
  Statistics   593
By year (articles):
  2024   3
  2023   6
  2022   8
  2021   9
  2020   20
  2019   34
  2018   31
  2017   54
  2016   25
  2015   19
  2014   24
  2013   156
  2012   70
  2011   25
  2010   20
  2009   17
  2008   13
  2007   23
  2006   11
  2005   14
  2004   17
  2003   9
  2002   7
  2001   10
  2000   15
  1999   13
  1998   7
  1997   6
  1996   11
  1995   3
  1994   1
  1993   2
  1992   10
  1991   1
  1990   1
  1989   3
  1988   2
  1987   3
  1984   1
  1981   1
A total of 705 results found; search time 0 ms.
31.
Research on the multi-depot integrated pickup-and-delivery vehicle scheduling problem and its genetic algorithm   Total citations: 2, self-citations: 0, citations by others: 2
An intelligent approach is proposed for the multi-depot integrated pickup-and-delivery vehicle scheduling problem in logistics distribution. A natural-number-based route representation for integrated delivery is adopted, a mileage constraint is used to control the insertion of depots, and time-window constraints are incorporated as penalty terms. An improved genetic algorithm is then designed for the specific constraints, employing dynamic chromosomes, improved crossover and mutation operators, and internal and external perturbations, which raise both the efficiency and the quality of the optimization. The principle of the algorithm is introduced, and experimental results for a representative example are given and analysed. The results demonstrate the effectiveness of the method for multi-depot integrated vehicle scheduling problems with mileage and time-window constraints.  Similar articles
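A minimal sketch of the route representation and penalized fitness just described is given below, assuming invented customer coordinates, a single mileage limit per vehicle, and soft time windows; the improved crossover, mutation and perturbation operators of the paper are not reproduced.

```python
# Simplified sketch of the natural-number route representation with
# mileage-limited depot insertion and a time-window penalty.
# All data (coordinates, windows, limits, penalty weight) are invented for
# illustration; the paper's GA operators are not reproduced here.
import math
import random

random.seed(0)
DEPOTS       = [(0.0, 0.0), (10.0, 10.0)]                 # depot coordinates
CUSTOMERS    = {i: (random.uniform(0, 10), random.uniform(0, 10)) for i in range(1, 9)}
WINDOWS      = {i: (0.0, random.uniform(5, 20)) for i in CUSTOMERS}   # (earliest, latest)
MAX_ROUTE_KM = 25.0                                        # mileage constraint per vehicle
PENALTY      = 100.0                                       # weight on time-window violations
SPEED        = 1.0                                         # distance units per time unit

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_depot(point):
    return min(DEPOTS, key=lambda d: dist(d, point))

def decode(chromosome):
    """Split a natural-number permutation of customers into depot-anchored routes.

    A new route is opened from the depot nearest to the next customer whenever
    appending that customer would break the mileage limit.
    """
    routes, route, depot, length = [], [], None, 0.0       # length: depot -> ... -> last
    for c in chromosome:
        pos = CUSTOMERS[c]
        if not route:                                      # open a new route
            depot, length = nearest_depot(pos), 0.0
        last = CUSTOMERS[route[-1]] if route else depot
        # mileage check for depot -> ... -> last -> c -> back to depot
        if route and length + dist(last, pos) + dist(pos, depot) > MAX_ROUTE_KM:
            routes.append((depot, route))
            route, depot, length = [], nearest_depot(pos), 0.0
            last = depot
        length += dist(last, pos)
        route.append(c)
    if route:
        routes.append((depot, route))
    return routes

def fitness(chromosome):
    """Total travelled distance plus penalized time-window violations (lower is better)."""
    total, violation = 0.0, 0.0
    for depot, route in decode(chromosome):
        t, last = 0.0, depot
        for c in route:
            leg = dist(last, CUSTOMERS[c])
            total += leg
            t += leg / SPEED
            earliest, latest = WINDOWS[c]
            t = max(t, earliest)                           # wait if arriving early
            violation += max(0.0, t - latest)              # lateness is penalized
            last = CUSTOMERS[c]
        total += dist(last, depot)                         # return to depot
    return total + PENALTY * violation

chromosome = list(CUSTOMERS)
random.shuffle(chromosome)
print(decode(chromosome))
print(round(fitness(chromosome), 2))
```

A genetic algorithm of the kind described would evolve such permutations, using this penalized fitness to rank chromosomes.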
32.
We consider the problem of estimating the maximum a posteriori probability (MAP) state sequence for a finite-state, finite-emission-alphabet hidden Markov model (HMM) in the Bayesian setup, where both the emission and transition matrices have Dirichlet priors. We study a training set consisting of thousands of protein alignment pairs. The training data are used to set the prior hyperparameters for Bayesian MAP segmentation. Since the Viterbi algorithm is no longer applicable, there is no simple procedure for finding the MAP path, and several iterative algorithms are considered and compared. The main goal of the paper is to test the Bayesian setup against the frequentist one, in which the parameters of the HMM are estimated from the training data.  Similar articles
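For reference, the classical plug-in decoding that the Bayesian setup departs from can be sketched as follows; the transition and emission matrices and the toy observation sequence are invented, and this log-space Viterbi is the frequentist baseline, not one of the iterative Bayesian procedures compared in the paper.

```python
# Plug-in Viterbi decoding for a discrete-emission HMM (log-space).
# Matrices and observations are invented; in the Bayesian setup of the abstract
# the exact MAP path cannot be obtained this way and iterative methods are needed.
import numpy as np

pi = np.array([0.6, 0.4])                      # initial state distribution
A  = np.array([[0.7, 0.3],                     # transition matrix
               [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1],                # emission matrix (rows: states, cols: symbols)
               [0.1, 0.3, 0.6]])
obs = [0, 2, 1, 2, 2, 0]                       # observed symbol sequence

def viterbi(obs, pi, A, B):
    n_states, T = len(pi), len(obs)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, n_states))            # best log-probability ending in each state
    psi   = np.zeros((T, n_states), dtype=int) # back-pointers
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA  # scores[i, j]: come from i, move to j
        psi[t]   = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):              # trace the back-pointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

print(viterbi(obs, pi, A, B))                  # decoded state sequence for the toy data
```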
33.
This paper takes supply chain quality cost as its main research object. It reviews and summarizes the current state of research on supply chain quality cost and, drawing on the Performance Excellence Model, proposes a quality cost accounting framework and builds a supply chain quality cost model based on that framework. Through an application case solved with a genetic algorithm, the method is shown to optimize supply chain cost.  Similar articles
34.
We present a new Immune Algorithm, IMMALG, that incorporates a Stochastic Aging operator and a simple local search procedure to improve overall performance on chromatic number problem (CNP) instances. We characterize the algorithm and set its parameters in terms of Kullback entropy. Experiments show that the proposed IA is very competitive with state-of-the-art evolutionary algorithms.  Similar articles
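The following is a much-simplified, immune-inspired sketch for a fixed number of colours k: a conflict-count fitness, hypermutation of conflicting vertices, a greedy local search, and an aging rule. It is not IMMALG, the Kullback-entropy-based parameter setting is omitted, and the toy graph is invented.

```python
# Simplified immune-inspired search for a proper k-colouring (not the paper's IMMALG).
# Fitness = number of monochromatic edges; clones of good colourings are hypermutated
# on conflicting vertices, repaired by a greedy local search, and old parents age out.
import random

random.seed(0)
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 0), (2, 4)]  # toy graph
N, K = 5, 3                                  # vertices, colours
POP, CLONES, MAX_AGE, ITERS = 10, 3, 5, 200

def conflicts(col):
    return sum(col[u] == col[v] for u, v in EDGES)

def hypermutate(col):
    """Recolour a random endpoint of a conflicting edge (or any vertex if conflict-free)."""
    col = col[:]
    bad = [w for u, v in EDGES if col[u] == col[v] for w in (u, v)]
    v = random.choice(bad or list(range(N)))
    col[v] = random.randrange(K)
    return col

def local_search(col):
    """Greedy repair: give each vertex the colour least used among its neighbours."""
    col = col[:]
    for v in range(N):
        neigh = [col[u] for a, b in EDGES for u in (a, b) if v in (a, b) and u != v]
        col[v] = min(range(K), key=neigh.count)
    return col

population = [([random.randrange(K) for _ in range(N)], 0) for _ in range(POP)]
for _ in range(ITERS):
    offspring = []
    for col, _age in population:
        offspring += [(local_search(hypermutate(col)), 0) for _ in range(CLONES)]
    # aging: parents grow older and are discarded past MAX_AGE; clones start fresh
    survivors = [(c, a + 1) for c, a in population if a + 1 <= MAX_AGE] + offspring
    population = sorted(survivors, key=lambda ca: conflicts(ca[0]))[:POP]
    if conflicts(population[0][0]) == 0:
        break

best = population[0][0]
print("conflicts:", conflicts(best), "colouring:", best)
```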
35.
The problem of testing hypotheses on the mean vector of a multivariate normal distribution with an unknown, positive definite covariance matrix is considered when the available sample has a special, though not unusual, pattern of missing observations. Approximate percentage points of the test statistic are obtained, and their accuracy is checked by comparing them with exact percentage points calculated for complete samples and for some special incomplete samples; the approximate and exact percentage points are in good agreement. The work is then extended to testing the equality of the mean vectors of two multivariate normal distributions with the same, unknown covariance matrix.  Similar articles
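For the complete-sample baseline mentioned above, the standard test of H0: mu = mu0 is Hotelling's T-squared; the sketch below uses simulated data, and the paper's approximate percentage points for the incomplete-sample pattern are not reproduced.

```python
# Hotelling's T^2 test of H0: mu = mu0 for a complete multivariate normal sample
# (the complete-sample case mentioned in the abstract; data and mu0 are simulated).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 40, 3
X = rng.multivariate_normal(mean=[0.2, 0.0, -0.1], cov=np.eye(p), size=n)
mu0 = np.zeros(p)

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)                       # unbiased sample covariance
T2 = n * (xbar - mu0) @ np.linalg.solve(S, xbar - mu0)

# Under H0, T^2 * (n - p) / (p * (n - 1)) follows an F(p, n - p) distribution.
F = T2 * (n - p) / (p * (n - 1))
p_value = stats.f.sf(F, p, n - p)
print(f"T2 = {T2:.3f}, F = {F:.3f}, p-value = {p_value:.3f}")
```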
36.
Monte Carlo methods are used to compare maximum likelihood and least squares estimation of a cumulative distribution function. When the probabilistic model used is correct or nearly correct, the two methods produce similar results, with the MLE usually slightly superior. When an incorrect model is used, or when the data are contaminated, the least-squares technique often gives substantially superior results.  Similar articles
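A toy version of such a comparison can be run in a few lines; the exponential model, the sample size, and the integrated-squared-error criterion below are illustrative assumptions, not the settings of the original study.

```python
# Tiny Monte Carlo comparison in the spirit of the abstract: estimate an exponential
# CDF by maximum likelihood and by least squares against the empirical CDF.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
true_rate, n, reps = 1.0, 50, 500
grid = np.linspace(0.01, 5, 200)
true_cdf = stats.expon.cdf(grid, scale=1 / true_rate)

def ls_rate(x):
    """Least-squares fit: minimise squared gaps between model CDF and empirical CDF."""
    xs = np.sort(x)
    ecdf = np.arange(1, len(xs) + 1) / (len(xs) + 1)       # plotting positions i/(n+1)
    obj = lambda r: np.sum((stats.expon.cdf(xs, scale=1 / r) - ecdf) ** 2)
    return optimize.minimize_scalar(obj, bounds=(1e-3, 100), method="bounded").x

mse_mle, mse_ls = 0.0, 0.0
for _ in range(reps):
    x = rng.exponential(scale=1 / true_rate, size=n)
    rate_mle = 1 / x.mean()                                # MLE of the exponential rate
    rate_ls = ls_rate(x)
    mse_mle += np.mean((stats.expon.cdf(grid, scale=1 / rate_mle) - true_cdf) ** 2)
    mse_ls  += np.mean((stats.expon.cdf(grid, scale=1 / rate_ls) - true_cdf) ** 2)

print(f"mean ISE, MLE: {mse_mle / reps:.5f}  least squares: {mse_ls / reps:.5f}")
```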
37.
Count data often contain many zeros. In parametric regression analysis of zero-inflated count data, the effect of a covariate of interest is typically modelled via a linear predictor. This approach imposes a restrictive, and potentially questionable, functional form on the relation between the independent and dependent variables. To address this restriction, a flexible parametric procedure is employed to model the covariate effect as a linear combination of fixed-knot cubic basis splines or B-splines. The semiparametric zero-inflated Poisson regression model is fitted by maximizing the likelihood function through an expectation–maximization algorithm. The smooth estimate of the functional form of the covariate effect can enhance modelling flexibility. Within this modelling framework, a log-likelihood ratio test is used to assess the adequacy of the covariate function. Simulation results show that the proposed test has excellent power in detecting the lack of fit of a linear predictor. A real-life data set is used to illustrate the practicality of the methodology.  Similar articles
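A rough sketch of this model structure is shown below using patsy B-splines and the ZeroInflatedPoisson class from statsmodels, fitted by direct maximum likelihood rather than the paper's EM algorithm; the data, the knot placement (df = 6) and the likelihood-ratio comparison are illustrative assumptions.

```python
# Sketch of a zero-inflated Poisson fit with a B-spline covariate effect.
# Fitted with statsmodels' built-in maximum likelihood, not the paper's EM;
# data and spline degrees of freedom are chosen here purely for illustration.
import numpy as np
from patsy import dmatrix
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 10, n)
lam = np.exp(0.3 + np.sin(x))                               # nonlinear covariate effect
y = np.where(rng.uniform(size=n) < 0.3, 0, rng.poisson(lam))  # ~30% structural zeros

# Fixed-knot cubic B-spline design matrix (patsy adds an intercept column).
X_spline = dmatrix("bs(x, df=6, degree=3, include_intercept=False)",
                   {"x": x}, return_type="dataframe")
X_linear = dmatrix("x", {"x": x}, return_type="dataframe")  # intercept + linear term

infl = np.ones((n, 1))                                      # constant inflation probability
fit_spline = ZeroInflatedPoisson(y, X_spline, exog_infl=infl,
                                 inflation="logit").fit(method="bfgs",
                                                        maxiter=500, disp=False)
fit_linear = ZeroInflatedPoisson(y, X_linear, exog_infl=infl,
                                 inflation="logit").fit(method="bfgs",
                                                        maxiter=500, disp=False)

# Log-likelihood ratio statistic for the adequacy of the linear predictor.
lr = 2 * (fit_spline.llf - fit_linear.llf)
print("log-likelihoods:", round(fit_spline.llf, 1), round(fit_linear.llf, 1))
print("LR statistic (spline vs. linear):", round(lr, 2))
```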
38.
This paper investigates the problem of parameter estimation in statistical models when the observations are intervals assumed to be related to underlying crisp realizations of a random sample. The proposed approach relies on extending the likelihood function to the interval setting. A maximum likelihood estimate of the parameter of interest may then be defined as a crisp value maximizing the generalized likelihood function. Using the expectation–maximization (EM) algorithm to solve this maximization problem yields the so-called interval-valued EM algorithm (IEM), which makes it possible to solve a wide range of statistical problems involving interval-valued data. To show the performance of IEM, two classical problems are illustrated: univariate normal mean and variance estimation from interval-valued samples, and multiple linear/nonlinear regression with crisp inputs and an interval output.  Similar articles
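For the univariate normal example, an EM of this flavour can be sketched by treating each interval as a censoring region for an unobserved crisp value, with truncated-normal moments in the E-step; whether this coincides exactly with the paper's generalized-likelihood IEM is not claimed, and the intervals below are simulated.

```python
# Illustrative EM for normal mean/variance from interval-valued observations,
# treating each interval [L, U] as a censoring region for an unseen crisp value.
# A sketch in the spirit of the abstract, not necessarily the paper's IEM.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
z = rng.normal(loc=5.0, scale=2.0, size=200)           # hidden crisp realisations
half = rng.uniform(0.5, 2.0, size=200)
L, U = z - half, z + half                               # only the intervals are observed

mu, sigma2 = L.mean(), 1.0                              # crude initial values
for _ in range(100):
    s = np.sqrt(sigma2)
    a, b = (L - mu) / s, (U - mu) / s
    Z = norm.cdf(b) - norm.cdf(a)
    # E-step: mean and variance of X given X in [L, U] (truncated normal moments)
    ex  = mu + s * (norm.pdf(a) - norm.pdf(b)) / Z
    var = sigma2 * (1 + (a * norm.pdf(a) - b * norm.pdf(b)) / Z
                      - ((norm.pdf(a) - norm.pdf(b)) / Z) ** 2)
    ex2 = var + ex ** 2
    # M-step: plug the expected sufficient statistics into the normal MLEs
    mu_new = ex.mean()
    sigma2_new = (ex2 - 2 * mu_new * ex + mu_new ** 2).mean()
    converged = abs(mu_new - mu) + abs(sigma2_new - sigma2) < 1e-8
    mu, sigma2 = mu_new, sigma2_new
    if converged:
        break

print(f"estimated mean {mu:.3f}, variance {sigma2:.3f}")
```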
39.
Motivated by classification issues that arise in marine studies, we propose a latent-class mixture model for the unsupervised classification of incomplete quadrivariate data with two linear and two circular components. The model integrates bivariate circular densities and bivariate skew-normal densities to capture the association between toroidal clusters of bivariate circular observations and planar clusters of bivariate linear observations. Maximum-likelihood estimation of the model is facilitated by an expectation–maximization (EM) algorithm that treats unknown class membership and missing values as different sources of incomplete information. The model is applied to hourly observations of wind speed and direction and wave height and direction to identify a number of sea regimes, which represent specific distributional shapes that the data take under latent environmental conditions.  Similar articles
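A heavily simplified sketch of this kind of EM is given below, reduced to one circular and one linear variable per observation with an independent von Mises x normal density in each cluster; the paper's bivariate circular and bivariate skew-normal components and its handling of missing values are not reproduced, and the data are simulated.

```python
# Greatly simplified EM for clustering circular-linear data: each cluster uses an
# independent von Mises (direction) x normal (speed) density, unlike the paper's
# bivariate circular and bivariate skew-normal components; data are simulated.
import numpy as np
from scipy.stats import vonmises, norm

rng = np.random.default_rng(4)
theta = np.concatenate([vonmises.rvs(4, loc=0.5, size=150, random_state=1),
                        vonmises.rvs(4, loc=2.8, size=150, random_state=2)])
speed = np.concatenate([rng.normal(5, 1, 150), rng.normal(10, 2, 150)])
K = 2

def kappa_from_r(r):
    """Fisher's approximation to the von Mises concentration from the resultant length."""
    if r < 0.53:
        return 2 * r + r ** 3 + 5 * r ** 5 / 6
    if r < 0.85:
        return -0.4 + 1.39 * r + 0.43 / (1 - r)
    return 1 / (r ** 3 - 4 * r ** 2 + 3 * r)

# Initial parameters: weights, circular means/concentrations, linear means/sds
w     = np.full(K, 1 / K)
mu_c  = np.array([0.0, 3.0]); kap = np.array([1.0, 1.0])
mu_l  = np.array([4.0, 9.0]); sd  = np.array([2.0, 2.0])

for _ in range(100):
    # E-step: responsibilities from the product density of each cluster
    dens = np.stack([w[k] * vonmises.pdf(theta, kap[k], loc=mu_c[k])
                           * norm.pdf(speed, mu_l[k], sd[k]) for k in range(K)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted circular and linear updates for each cluster
    for k in range(K):
        r_k = resp[k]
        w[k] = r_k.mean()
        C, S = np.sum(r_k * np.cos(theta)), np.sum(r_k * np.sin(theta))
        mu_c[k] = np.arctan2(S, C)
        kap[k]  = kappa_from_r(np.hypot(C, S) / r_k.sum())
        mu_l[k] = np.sum(r_k * speed) / r_k.sum()
        sd[k]   = np.sqrt(np.sum(r_k * (speed - mu_l[k]) ** 2) / r_k.sum())

labels = resp.argmax(axis=0)
print("cluster sizes:", np.bincount(labels), "circular means:", np.round(mu_c, 2))
```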
40.
Bone mineral density decreases naturally as we age because existing bone tissue is reabsorbed by the body faster than new bone tissue is synthesized. When this occurs, bones lose calcium and other minerals. What is normal bone mineral density for men 50 years and older? Suitable diagnostic cutoff values are less well defined for men than for women. In this paper, we propose using normal mixture models to estimate the prevalence of low lumbar-spine bone mineral density in men 50 years and older with, or at risk for, human immunodeficiency virus infection when normal values of bone mineral density are not generally known. The Box–Cox power transformation is used to determine which transformation best suits normal mixture distributions. Parametric bootstrap tests are used to determine the number of mixture components and to determine whether the mixture components are homoscedastic or heteroscedastic.  Similar articles
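The ingredients described above can be sketched with standard tools: a Box–Cox transform, Gaussian mixture fits, and a parametric bootstrap likelihood-ratio test of one versus two components. The data below are simulated, and the sketch does not reproduce the authors' analysis (the homoscedastic-versus-heteroscedastic comparison, for example, is omitted).

```python
# Sketch of the approach: Box-Cox transformation, normal mixture fitting, and a
# parametric bootstrap likelihood-ratio test of 1 vs. 2 components.
# Data are simulated; this illustrates the ideas, not the authors' analysis.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
bmd = np.concatenate([rng.lognormal(0.0, 0.15, 300),        # "normal" BMD group
                      rng.lognormal(-0.35, 0.20, 100)])     # low-BMD group

y, lam = stats.boxcox(bmd)                                  # Box-Cox power transformation
y = y.reshape(-1, 1)

def fit_mixture(data, k):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(data)
    return gm, gm.score(data) * len(data)                   # total log-likelihood

gm1, ll1 = fit_mixture(y, 1)
gm2, ll2 = fit_mixture(y, 2)
lr_obs = 2 * (ll2 - ll1)

# Parametric bootstrap: simulate from the 1-component fit, refit both models,
# and compare the observed LR statistic with its bootstrap distribution.
B, count = 200, 0
for _ in range(B):
    yb = gm1.sample(len(y))[0]
    _, l1 = fit_mixture(yb, 1)
    _, l2 = fit_mixture(yb, 2)
    count += (2 * (l2 - l1) >= lr_obs)

print(f"LR = {lr_obs:.2f}, bootstrap p-value = {count / B:.3f}")
print("component means (transformed scale):", gm2.means_.ravel())
```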