71.
This paper investigates the problem of parameter estimation in a statistical model when the observations are intervals assumed to contain underlying crisp realizations of a random sample. The proposed approach relies on extending the likelihood function to the interval setting. A maximum likelihood estimate of the parameter of interest may then be defined as a crisp value maximizing the generalized likelihood function. Using the expectation-maximization (EM) algorithm to solve this maximization problem yields the so-called interval-valued EM algorithm (IEM), which makes it possible to solve a wide range of statistical problems involving interval-valued data. The performance of IEM is illustrated on two classical problems: univariate normal mean and variance estimation from interval-valued samples, and multiple linear/nonlinear regression with crisp inputs and an interval output.
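A minimal sketch of the kind of EM update the abstract describes, for the univariate normal case: each observation is an interval assumed to contain an unobserved crisp value, the E-step replaces the sufficient statistics by their truncated-normal conditional expectations, and the M-step updates the mean and variance. The function and variable names (interval_em, intervals) are illustrative, not taken from the paper.

```python
# Sketch: EM for normal mean/variance when each observation is an interval
# assumed to contain an unobserved crisp realization (cf. the IEM idea).
import numpy as np
from scipy.stats import truncnorm

def interval_em(intervals, n_iter=200, tol=1e-8):
    """intervals: array of shape (n, 2) with lower/upper bounds."""
    lo, hi = intervals[:, 0], intervals[:, 1]
    mid = (lo + hi) / 2
    mu, sigma = mid.mean(), mid.std() + 1e-3
    for _ in range(n_iter):
        # E-step: conditional moments of X given X in [lo, hi] under N(mu, sigma^2)
        a, b = (lo - mu) / sigma, (hi - mu) / sigma
        ex = truncnorm.mean(a, b, loc=mu, scale=sigma)
        ex2 = truncnorm.var(a, b, loc=mu, scale=sigma) + ex ** 2
        # M-step: update mean and variance from the expected sufficient statistics
        mu_new = ex.mean()
        var_new = (ex2 - 2 * mu_new * ex + mu_new ** 2).mean()
        converged = abs(mu_new - mu) < tol and abs(np.sqrt(var_new) - sigma) < tol
        mu, sigma = mu_new, np.sqrt(var_new)
        if converged:
            break
    return mu, sigma

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, size=300)
half_widths = rng.uniform(0.2, 1.0, size=300)
print(interval_em(np.column_stack([x - half_widths, x + half_widths])))
```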
72.
In this article, we consider a parametric survival model that is appropriate when the population of interest contains long-term survivors or immunes. The model, referred to as the cure rate model, was introduced by Boag (1949) as a mixture model with one component representing the proportion of immunes and a distribution representing the lifetimes of the susceptible population. We propose a cure rate model based on the generalized exponential distribution that incorporates the effects of risk factors or covariates on the probability of an individual being a long-term survivor. Maximum likelihood estimators of the model parameters are obtained using the expectation-maximisation (EM) algorithm. A graphical method is also provided for assessing the goodness-of-fit of the model. We present an example that illustrates the fit of this model to data on the effects of different risk factors on relapse times for drug addicts.
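The mixture structure behind such a cure rate model can be sketched as follows for the no-covariate case: the E-step computes the posterior probability that each censored subject is immune, and the M-step updates the cure fraction and the generalized exponential parameters by weighted maximum likelihood. This is a simplified illustration (the paper additionally links the cure probability to covariates), and all names here are hypothetical.

```python
# Sketch: EM for a cure rate mixture S(t) = p + (1 - p) * S_GE(t),
# where S_GE is the generalized exponential survival function.
import numpy as np
from scipy.optimize import minimize

def ge_logpdf(t, a, lam):
    return np.log(a) + np.log(lam) - lam * t + (a - 1) * np.log1p(-np.exp(-lam * t))

def ge_logsurv(t, a, lam):
    F = (1.0 - np.exp(-lam * t)) ** a
    return np.log1p(-np.minimum(F, 1 - 1e-15))   # clipped for numerical stability

def cure_rate_em(t, delta, n_iter=50):
    """t: follow-up times, delta: 1 = event observed, 0 = right censored."""
    p, a, lam = 0.3, 1.0, 1.0 / np.mean(t)
    for _ in range(n_iter):
        # E-step: probability of being cured; zero for subjects with an observed event
        s = np.exp(ge_logsurv(t, a, lam))
        pi_cured = np.where(delta == 1, 0.0, p / (p + (1 - p) * s))
        # M-step: cure fraction, then weighted MLE of the GE parameters
        p = pi_cured.mean()
        def nll(theta):
            aa, ll = np.exp(theta)                # optimize on the log scale
            return -np.sum(delta * ge_logpdf(t, aa, ll)
                           + (1 - delta) * (1 - pi_cured) * ge_logsurv(t, aa, ll))
        a, lam = np.exp(minimize(nll, np.log([a, lam]), method="Nelder-Mead").x)
    return p, a, lam

rng = np.random.default_rng(1)
n = 500
cured = rng.random(n) < 0.25
u = rng.random(n)
life = -np.log(1 - u ** (1 / 1.5)) / 0.8          # GE(a=1.5, lam=0.8) lifetimes
cens = rng.exponential(5.0, n)
t = np.where(cured, cens, np.minimum(life, cens))
delta = ((~cured) & (life <= cens)).astype(int)
print(cure_rate_em(t, delta))
```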
73.
In the area of overall fund performance evaluation, this paper revises and improves the estimation of the classical Jensen's alpha performance measure and of beta, the systematic-risk measure of a fund's asset investments. Earlier work estimated Jensen's alpha and beta as constant coefficients, yet in practice a fund's Jensen's alpha and systematic-risk beta are time-varying. By decomposing the constant-coefficient estimates of Jensen's alpha and systematic-risk beta, the paper shows that the traditional estimates consist of the expected values of Jensen's alpha and systematic-risk beta together with a series of covariance terms. The paper then constructs a state-space model (SSM) that captures the dynamics of the two measures and uses a Particle EM algorithm to obtain period-by-period estimates of the dynamic Jensen's alpha and systematic-risk beta, from which the fund's average Jensen's alpha and average systematic-risk level over the evaluation period are computed. Moreover, because the systematic-risk beta is available for every period, a measure of the fund's market-timing ability over the period is constructed according to the definition of timing ability.
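To make the state-space formulation concrete, the sketch below writes the market model with random-walk alpha_t and beta_t and filters it with a plain Kalman filter under fixed, assumed noise variances; the paper instead estimates the dynamic quantities with a Particle EM algorithm, so this only illustrates the model structure. All variable names and the noise settings q and r are assumptions.

```python
# Sketch: time-varying market model in state-space form,
#   excess_r_t = alpha_t + beta_t * excess_m_t + eps_t,  with [alpha_t, beta_t]' a random walk.
# Filtered with a basic Kalman filter under assumed noise variances (not Particle EM).
import numpy as np

def kalman_alpha_beta(excess_r, excess_m, q=1e-5, r=1e-4):
    n = len(excess_r)
    x = np.zeros(2)                      # state [alpha_t, beta_t]
    P = np.eye(2)                        # state covariance
    Q = q * np.eye(2)                    # assumed state-noise covariance
    states = np.zeros((n, 2))
    for t in range(n):
        H = np.array([1.0, excess_m[t]])
        P = P + Q                        # predict (random-walk state equation)
        S = H @ P @ H + r                # innovation variance
        K = P @ H / S                    # Kalman gain
        x = x + K * (excess_r[t] - H @ x)
        P = P - np.outer(K, H @ P)
        states[t] = x
    return states                        # per-period filtered alpha_t, beta_t

rng = np.random.default_rng(2)
m = rng.normal(0.005, 0.04, 250)
true_beta = 0.8 + 0.3 * np.sin(np.linspace(0, 3, 250))
rets = 0.001 + true_beta * m + rng.normal(0, 0.01, 250)
est = kalman_alpha_beta(rets, m)
print("average alpha, beta over the period:", est.mean(axis=0))
```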
74.
Over recent decades, econometric models of financial time series and the associated analysis have come to play an increasingly important role internationally in addressing the two central topics of modern finance: the treatment of risk and the optimization of returns. Linear time-series models such as AR(p), MA(q) and ARMA(p,q) are well known, and when no observations are missing there are many classical methods for estimating their parameters, such as least squares and maximum likelihood. When some observations in the middle of the series are missing, however, these methods no longer apply. This paper discusses in detail how to estimate the parameters of the ARMA(1,1) model, Z_t = αZ_{t-1} + ε_t − βε_{t-1}, when part of the data is missing.
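One practical way to handle the missing-data situation the abstract describes is to cast the ARMA(1,1) model in state-space form, where the Kalman filter simply skips missing observations when building the likelihood. The sketch below does this with statsmodels rather than the specific estimator developed in the paper, so it is only an illustrative alternative.

```python
# Sketch: MLE for ARMA(1,1) with missing observations via the state-space
# (Kalman filter) representation, which handles NaN gaps in the likelihood.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
n, alpha, beta = 400, 0.6, 0.4
eps = rng.normal(size=n)
z = np.zeros(n)
for t in range(1, n):
    z[t] = alpha * z[t - 1] + eps[t] - beta * eps[t - 1]   # Z_t = a Z_{t-1} + e_t - b e_{t-1}
z[rng.choice(n, size=40, replace=False)] = np.nan          # knock out 10% of the series

fit = SARIMAX(z, order=(1, 0, 1)).fit(disp=False)
# statsmodels uses the "+ theta * e_{t-1}" sign convention, so ma.L1 is about -beta here.
print(fit.params)   # AR coefficient, MA coefficient, innovation variance
```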
75.
Jammalamadaka and Mangalam introduced middle censoring, which refers to data arising in situations where the exact lifetime becomes unobservable if it falls within a random censoring interval. In the present article, we propose an additive risks regression model for lifetime data subject to middle censoring, where the lifetimes are assumed to follow the exponentiated exponential distribution. The regression parameters are estimated using the expectation-maximization (EM) algorithm, and asymptotic normality of the estimator is established. We report a simulation study to assess the finite-sample properties of the estimator and then analyse real-life data on the survival times of larynx cancer patients studied by Kardaun.
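As a rough illustration of estimation under middle censoring (without the regression part), the sketch below maximizes the observed-data likelihood for exponentiated exponential lifetimes directly, using the exact density for fully observed lifetimes and the interval probability F(r) − F(l) for middle-censored ones. The paper works with an additive risks regression model and an EM algorithm, so this is only a simplified stand-in with hypothetical names.

```python
# Sketch: likelihood for exponentiated exponential lifetimes under middle
# censoring: exact density if observed, F(r) - F(l) if the lifetime is only
# known to lie inside a censoring interval (l, r).
import numpy as np
from scipy.optimize import minimize

def ee_cdf(t, a, lam):
    return (1.0 - np.exp(-lam * t)) ** a

def ee_logpdf(t, a, lam):
    return np.log(a) + np.log(lam) - lam * t + (a - 1) * np.log1p(-np.exp(-lam * t))

def middle_censored_mle(obs_times, intervals):
    """obs_times: exactly observed lifetimes; intervals: (l, r) pairs for
    lifetimes only known to lie inside a censoring interval."""
    l, r = intervals[:, 0], intervals[:, 1]
    def nll(theta):
        a, lam = np.exp(theta)           # keep both parameters positive
        return -(np.sum(ee_logpdf(obs_times, a, lam))
                 + np.sum(np.log(ee_cdf(r, a, lam) - ee_cdf(l, a, lam))))
    return np.exp(minimize(nll, np.log([1.0, 1.0]), method="Nelder-Mead").x)

rng = np.random.default_rng(4)
u = rng.random(600)
life = -np.log(1 - u ** (1 / 2.0)) / 0.5           # EE(shape=2, rate=0.5) lifetimes
left = rng.exponential(2.0, 600)
width = rng.exponential(1.0, 600)
inside = (life > left) & (life < left + width)     # middle-censored cases
print(middle_censored_mle(life[~inside],
                          np.column_stack([left, left + width])[inside]))
```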
76.
In applications of multivariate finite mixture models, estimating the number of unknown components is often difficult. We propose a bootstrap information criterion in which the expected log-likelihood is evaluated at maximum a posteriori estimates for model selection. Accurate estimation with the bootstrap requires a large number of bootstrap replicates, so we accelerate the computation by parallel processing on graphics processing units (GPUs) under the Compute Unified Device Architecture (CUDA) platform. A runtime comparison between the GPU implementation and a CPU implementation shows significant performance gains for the proposed CUDA algorithms over multithreaded CPU code.
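A CPU-only sketch of the bootstrap bias-corrected log-likelihood idea for choosing the number of mixture components, using scikit-learn Gaussian mixtures and plain maximum likelihood rather than the paper's maximum a posteriori estimates and CUDA implementation; the function name, candidate K values, and number of replicates are all illustrative.

```python
# Sketch: bootstrap-bias-corrected log-likelihood criterion for selecting the
# number of mixture components (CPU version with ML fits, not MAP/CUDA).
import numpy as np
from sklearn.mixture import GaussianMixture

def bootstrap_criterion(x, k, n_boot=50, seed=0):
    rng = np.random.default_rng(seed)
    fit = GaussianMixture(n_components=k, n_init=3, random_state=seed).fit(x)
    loglik = fit.score(x) * len(x)                 # maximized log-likelihood
    bias = 0.0
    for _ in range(n_boot):
        xb = x[rng.integers(0, len(x), len(x))]    # nonparametric bootstrap resample
        fb = GaussianMixture(n_components=k, n_init=3, random_state=seed).fit(xb)
        # optimism: fit on the resample, evaluate on the resample vs. on the original data
        bias += fb.score(xb) * len(xb) - fb.score(x) * len(x)
    return -2.0 * loglik + 2.0 * bias / n_boot     # smaller is better

rng = np.random.default_rng(5)
x = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (150, 2))])
for k in (1, 2, 3):
    print(k, round(bootstrap_criterion(x, k), 1))
```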
77.
Finite mixture models have provided a reasonable tool for modelling various types of observed phenomena, especially those that are random in nature. In this article, a finite mixture of the Weibull and Pareto(IV) distributions is considered and studied. Some structural properties of the resulting model are discussed, including estimation of the model parameters via the expectation-maximization (EM) algorithm. A real-life data application shows that in certain situations this mixture model can be a better alternative to the rival popular models.
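The EM structure for such a two-component mixture can be sketched as follows, with a Lomax (Pareto II) component standing in for the paper's Pareto(IV) component for brevity: the E-step computes responsibilities and the M-step refits each component by weighted maximum likelihood. All function names and starting values are illustrative.

```python
# Sketch: EM for a two-component mixture of Weibull and Lomax (Pareto II)
# densities; the paper uses a Pareto(IV) component, this is a simplification.
import numpy as np
from scipy.stats import weibull_min, lomax
from scipy.optimize import minimize

def weighted_fit(logpdf, w, x, start):
    nll = lambda th: -np.sum(w * logpdf(x, *np.exp(th)))   # log-scale for positivity
    return np.exp(minimize(nll, np.log(start), method="Nelder-Mead").x)

def weibull_lomax_em(x, n_iter=50):
    pi, (wc, ws), (la, ls) = 0.5, (1.0, np.mean(x)), (2.0, np.mean(x))
    for _ in range(n_iter):
        # E-step: responsibility of the Weibull component for each observation
        d1 = pi * weibull_min.pdf(x, wc, scale=ws)
        d2 = (1 - pi) * lomax.pdf(x, la, scale=ls)
        r = d1 / (d1 + d2)
        # M-step: mixing weight, then weighted MLE for each component
        pi = r.mean()
        wc, ws = weighted_fit(lambda t, c, s: weibull_min.logpdf(t, c, scale=s), r, x, (wc, ws))
        la, ls = weighted_fit(lambda t, c, s: lomax.logpdf(t, c, scale=s), 1 - r, x, (la, ls))
    return pi, (wc, ws), (la, ls)

x = np.concatenate([weibull_min.rvs(1.8, scale=2.0, size=300, random_state=7),
                    lomax.rvs(3.0, scale=4.0, size=200, random_state=8)])
print(weibull_lomax_em(x))
```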
78.
Bone mineral density decreases naturally as we age because existing bone tissue is reabsorbed by the body faster than new bone tissue is synthesized; when this occurs, bones lose calcium and other minerals. What is normal bone mineral density for men aged 50 years and older? Suitable diagnostic cutoff values are less well defined for men than for women. In this paper, we propose using normal mixture models to estimate the prevalence of low lumbar-spine bone mineral density in men aged 50 years and older with, or at risk for, human immunodeficiency virus infection, when normal values of bone mineral density are not generally known. The Box–Cox power transformation is used to determine which transformation best suits normal mixture distributions. Parametric bootstrap tests are used to determine the number of mixture components and to determine whether the mixture components are homoscedastic or heteroscedastic.
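A minimal sketch of a parametric bootstrap likelihood ratio test for k versus k+1 normal mixture components (applied to already Box–Cox transformed data), in the spirit of the testing procedure the abstract mentions; the scikit-learn based implementation, the simulated data, and all names are assumptions rather than the authors' code.

```python
# Sketch: parametric bootstrap LRT for k vs. k+1 normal mixture components.
# Assumes the data have already been Box-Cox transformed.
import numpy as np
from sklearn.mixture import GaussianMixture

def lrt_stat(x, k):
    f0 = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
    f1 = GaussianMixture(n_components=k + 1, n_init=5, random_state=0).fit(x)
    return 2.0 * (f1.score(x) - f0.score(x)) * len(x), f0

def bootstrap_test(x, k, n_boot=99, seed=0):
    obs, f0 = lrt_stat(x, k)
    exceed = 0
    for b in range(n_boot):
        xb, _ = f0.sample(len(x))              # simulate under the k-component null
        stat, _ = lrt_stat(xb, k)
        exceed += stat >= obs
    return obs, (exceed + 1) / (n_boot + 1)    # bootstrap p-value

rng = np.random.default_rng(9)
bmd = np.concatenate([rng.normal(0.85, 0.10, 400), rng.normal(1.05, 0.12, 200)])[:, None]
print("k=1 vs 2:", bootstrap_test(bmd, 1))
print("k=2 vs 3:", bootstrap_test(bmd, 2))
```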
79.
Zero inflation means that a model's proportion of zeros exceeds the proportion of zeros under the corresponding Poisson model, a common phenomenon in count data. To capture the zero-inflated character of time series of counts, we propose zero-inflated Poisson and negative binomial INGARCH models, which are useful and flexible generalizations of the Poisson and negative binomial INGARCH models, respectively. The stationarity conditions and the autocorrelation function are given. Based on the EM algorithm, the estimation procedure is simple and easy to implement. A simulation study shows that the estimation method is accurate and reliable as long as the sample size is reasonably large. A real-data example shows the superior performance of the proposed models compared with other competing models in the literature.
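To make the model concrete, the sketch below writes a zero-inflated Poisson INGARCH(1,1) conditional likelihood, with lambda_t = omega + a*lambda_{t-1} + b*X_{t-1}, and maximizes it numerically. The paper's estimation is EM-based, so direct maximization here is just an illustrative shortcut, and all parameter names and starting values are hypothetical.

```python
# Sketch: zero-inflated Poisson INGARCH(1,1),
#   X_t | past ~ ZIP(rho, lambda_t),  lambda_t = omega + a*lambda_{t-1} + b*X_{t-1},
# estimated by direct maximization of the conditional likelihood (the paper uses EM).
import numpy as np
from scipy.special import gammaln, expit
from scipy.optimize import minimize

def zip_ingarch_nll(theta, x):
    omega, a, b = np.exp(theta[:3])            # positivity via log parameterization
    rho = expit(theta[3])                      # zero-inflation probability in (0, 1)
    lam, nll = x.mean(), 0.0
    for t in range(1, len(x)):
        lam = omega + a * lam + b * x[t - 1]
        if x[t] == 0:
            nll -= np.log(rho + (1 - rho) * np.exp(-lam))
        else:
            nll -= np.log(1 - rho) + x[t] * np.log(lam) - lam - gammaln(x[t] + 1.0)
    return nll

def simulate_zip_ingarch(n, omega, a, b, rho, seed=0):
    rng = np.random.default_rng(seed)
    x, lam = np.zeros(n, dtype=int), omega / (1 - a - b)
    for t in range(1, n):
        lam = omega + a * lam + b * x[t - 1]
        x[t] = 0 if rng.random() < rho else rng.poisson(lam)
    return x

x = simulate_zip_ingarch(1000, omega=0.5, a=0.3, b=0.4, rho=0.2, seed=10)
res = minimize(zip_ingarch_nll, np.array([np.log(0.5), np.log(0.3), np.log(0.4), 0.0]),
               args=(x,), method="Nelder-Mead")
print(np.exp(res.x[:3]), expit(res.x[3]))      # omega, a, b, rho
```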
80.
We review the Fisher scoring and EM algorithms for incomplete multivariate data from an estimating-function point of view and examine the corresponding quasi-score functions under second-moment assumptions. A bias-corrected REML-type estimator of the covariance matrix is derived, and the Fisher, Godambe and empirical sandwich information matrices are compared. We investigate the two algorithms numerically and compare them with a hybrid algorithm in which Fisher scoring is used for the mean vector and the EM algorithm for the covariance matrix.
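For reference, the classical EM iteration for an incomplete multivariate normal sample (the setting the abstract reviews) can be sketched as follows: each E-step fills in conditional means of the missing coordinates and accumulates the corresponding conditional covariance correction, and each M-step updates the mean vector and covariance matrix. This plain ML version ignores the REML-type bias correction and the Fisher scoring variants the paper studies, and all names are illustrative.

```python
# Sketch: EM for the mean vector and covariance matrix of a multivariate
# normal sample with missing entries (NaN), assuming values are missing at random.
import numpy as np

def mvn_em(x, n_iter=100):
    n, p = x.shape
    mu = np.nanmean(x, axis=0)
    sigma = np.diag(np.nanvar(x, axis=0))
    for _ in range(n_iter):
        sum_x, sum_xx = np.zeros(p), np.zeros((p, p))
        for i in range(n):
            miss = np.isnan(x[i])
            xi = x[i].copy()
            cov_add = np.zeros((p, p))
            if miss.any():
                obs = ~miss
                # E-step: conditional mean and covariance of the missing block
                soo_inv = np.linalg.inv(sigma[np.ix_(obs, obs)])
                smo = sigma[np.ix_(miss, obs)]
                xi[miss] = mu[miss] + smo @ soo_inv @ (x[i, obs] - mu[obs])
                cov_add[np.ix_(miss, miss)] = (sigma[np.ix_(miss, miss)]
                                               - smo @ soo_inv @ smo.T)
            sum_x += xi
            sum_xx += np.outer(xi, xi) + cov_add
        # M-step: update mean and covariance from the expected sufficient statistics
        mu = sum_x / n
        sigma = sum_xx / n - np.outer(mu, mu)
    return mu, sigma

rng = np.random.default_rng(11)
full = rng.multivariate_normal([0, 1, 2], [[1, .5, .2], [.5, 1, .3], [.2, .3, 1]], 500)
data = full.copy()
data[rng.random(data.shape) < 0.2] = np.nan       # 20% of entries missing at random
mu_hat, sigma_hat = mvn_em(data)
print(np.round(mu_hat, 2))
print(np.round(sigma_hat, 2))
```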