Full-text access type | Articles |
Paid full text | 2,183 |
Free | 148 |
Free (within China) | 43 |
Subject classification | Articles |
Management | 443 |
Ethnology | 1 |
Demography | 1 |
Collected works and series | 34 |
Theory and methodology | 4 |
General | 778 |
Sociology | 8 |
Statistics | 1,105 |
Publication year | Articles |
2024 | 15 |
2023 | 21 |
2022 | 53 |
2021 | 34 |
2020 | 44 |
2019 | 78 |
2018 | 76 |
2017 | 132 |
2016 | 74 |
2015 | 63 |
2014 | 72 |
2013 | 307 |
2012 | 146 |
2011 | 99 |
2010 | 85 |
2009 | 77 |
2008 | 112 |
2007 | 113 |
2006 | 106 |
2005 | 99 |
2004 | 97 |
2003 | 64 |
2002 | 50 |
2001 | 49 |
2000 | 61 |
1999 | 52 |
1998 | 37 |
1997 | 27 |
1996 | 30 |
1995 | 21 |
1994 | 17 |
1993 | 8 |
1992 | 20 |
1991 | 6 |
1990 | 5 |
1989 | 10 |
1988 | 6 |
1987 | 3 |
1985 | 2 |
1984 | 1 |
1981 | 2 |
Sort order: 2,374 results in total (search time: 10 ms)
41.
Communications in Statistics - Simulation and Computation, 2013, 42(3): 685-701
Constrained M (CM) estimates of multivariate location and scatter [Kent, J. T., Tyler, D. E. (1996). Constrained M-estimation for multivariate location and scatter. Ann. Statist. 24:1346-1370] are defined as the global minimum of an objective function subject to a constraint. These estimates combine the good global robustness properties of the S estimates with the good local robustness properties of the redescending M estimates. The CM estimates are not explicitly defined, so numerical methods must be used to compute them. In this paper, we give an algorithm to compute the CM estimates. Using the algorithm, we carry out a small simulation study to demonstrate the algorithm's ability to find the CM estimates and to explore their finite-sample behavior. We also use the CM estimators to estimate the location and scatter parameters of several multivariate data sets to assess how the CM estimates perform on real data that may contain outliers.
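The abstract does not reproduce the algorithm itself. As a rough illustration of the kind of computation involved, the sketch below runs a plain fixed-point iteration for a redescending M-estimate of multivariate location and scatter with a Tukey-biweight weight function; it is not the constrained global optimization of Kent and Tyler, and the tuning constant `c`, the starting values, and the unnormalized scatter update are simplifying assumptions.

```python
import numpy as np

def biweight_w(d, c):
    """Tukey biweight weight: w(d) = (1 - (d/c)^2)^2 for d <= c, else 0."""
    w = (1.0 - (d / c) ** 2) ** 2
    w[d > c] = 0.0
    return w

def m_location_scatter(X, c=4.0, n_iter=100, tol=1e-8):
    """Fixed-point iteration for a redescending M-estimate of location/scatter.

    This is only a local iteration started from the sample mean/covariance;
    the CM estimate of Kent & Tyler is the *global* constrained minimum and
    requires a more elaborate search.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    for _ in range(n_iter):
        diff = X - mu
        d = np.sqrt(np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff))
        w = biweight_w(d, c)
        if w.sum() <= 0:          # all points fully down-weighted: stop
            break
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        # simple (unnormalized) weighted scatter update
        S_new = (w[:, None] * (X - mu_new)).T @ (X - mu_new) / n
        converged = np.linalg.norm(mu_new - mu) < tol
        mu, S = mu_new, S_new
        if converged:
            break
    return mu, S

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:10] += 8.0                     # a few outliers
mu_hat, S_hat = m_location_scatter(X)
print(mu_hat)
```

Because a redescending weight admits multiple local solutions, the CM approach adds a constraint and a global search on top of an iteration of this kind.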
42.
Stochastic Models, 2013, 29(2): 193-227
The Double Chain Markov Model is a fully Markovian model for the representation of time series in random environments. In this article, we show that it can handle high-order transitions between both a set of observations and a set of hidden states. In order to reduce the number of parameters, each transition matrix can be replaced by a Mixture Transition Distribution model. We provide a complete derivation of the algorithms needed to compute the model. Three applications (the analysis of a DNA sequence, the song of the wood pewee, and the behavior of young monkeys) show that this model is of great interest for representing data that can be decomposed into a finite set of patterns.
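As a hedged illustration of the Mixture Transition Distribution (MTD) idea mentioned above, the Python sketch below evaluates the likelihood of a categorical sequence under an order-l MTD model with a single shared transition matrix. The Double Chain Markov Model itself couples a hidden chain with the observed one, which is not shown here, and the matrix, weights, and toy sequence are made up for the example.

```python
import numpy as np

def mtd_loglik(seq, Q, lam):
    """Log-likelihood of a categorical sequence under an order-l MTD model:
    P(x_t | x_{t-1}, ..., x_{t-l}) = sum_g lam[g] * Q[x_{t-g}, x_t].

    seq : 1-D array of integer states
    Q   : (m, m) transition matrix shared across lags
    lam : (l,) lag weights, nonnegative and summing to one
    """
    seq = np.asarray(seq)
    l = len(lam)
    ll = 0.0
    for t in range(l, len(seq)):
        p = sum(lam[g] * Q[seq[t - 1 - g], seq[t]] for g in range(l))
        ll += np.log(p)
    return ll

# toy example with 3 states and order 2
Q = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
lam = np.array([0.6, 0.4])
seq = np.array([0, 1, 1, 2, 0, 0, 1, 2, 2, 1])
print(mtd_loglik(seq, Q, lam))
```

Replacing a full order-l transition matrix (m^l rows) with one m×m matrix plus l lag weights is what keeps the parameter count manageable.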
43.
When an appropriate parametric model and a prior distribution for its parameters are given to describe the clinical time course of a dynamic biological process, Bayesian approaches allow us to estimate the entire profile from a few or even a single observation per subject. The goodness of the estimation depends on the measurement points at which the observations are made. The number of measurement points per subject is generally limited to one or two, so these limited measurement points have to be selected carefully. This paper proposes an approach to selecting the optimum measurement point for Bayesian estimation of clinical time courses. The selection is made among given candidates, based on the goodness of estimation evaluated by the Kullback-Leibler information, which measures the discrepancy of an estimated time course from the true one specified by a given appropriate model. The proposed approach is applied to a pharmacokinetic analysis, a typical clinical example where such selection is required. The results of the present study strongly suggest that the proposed approach is applicable to pharmacokinetic data and has a wide range of clinical applications.
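The abstract describes the selection criterion only verbally. The sketch below shows one way such a selection could be operationalized, under assumptions that are mine rather than the paper's: a one-compartment bolus model C(t) = (dose/vol)·exp(-k·t), a discretized prior on the elimination rate k, Gaussian measurement error, and a Gaussian KL term that reduces to a scaled squared difference between the estimated and true curves.

```python
import numpy as np

rng = np.random.default_rng(1)

def conc(t, k, dose=100.0, vol=10.0):
    """One-compartment model after an IV bolus: C(t) = (dose/vol) * exp(-k t)."""
    return dose / vol * np.exp(-k * t)

# discrete prior for the elimination rate k, Gaussian error model (all assumed)
k_grid = np.linspace(0.05, 1.0, 200)
prior = np.exp(-0.5 * ((np.log(k_grid) - np.log(0.3)) / 0.4) ** 2)
prior /= prior.sum()
sigma = 0.5                          # measurement s.d.
t_eval = np.linspace(0.5, 12.0, 40)  # grid on which the estimated curve is judged

def expected_discrepancy(t_sample, n_sim=2000):
    """Monte Carlo estimate of the expected KL-type discrepancy: for Gaussian
    errors, KL between N(C(t;k_hat),sigma^2) and N(C(t;k_true),sigma^2) is the
    squared difference of means over 2*sigma^2, averaged over the time grid."""
    total = 0.0
    for _ in range(n_sim):
        k_true = rng.choice(k_grid, p=prior)
        y = conc(t_sample, k_true) + rng.normal(0.0, sigma)
        # grid posterior for k given the single observation y at t_sample
        loglik = -0.5 * ((y - conc(t_sample, k_grid)) / sigma) ** 2
        post = prior * np.exp(loglik - loglik.max())
        post /= post.sum()
        k_hat = (post * k_grid).sum()          # posterior-mean estimate
        kl = ((conc(t_eval, k_hat) - conc(t_eval, k_true)) ** 2 / (2 * sigma**2)).mean()
        total += kl
    return total / n_sim

candidates = [0.5, 1.0, 2.0, 4.0, 8.0]            # hypothetical sampling times
scores = {s: expected_discrepancy(s) for s in candidates}
print(scores, "best candidate time:", min(scores, key=scores.get))
```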
44.
It is well known that the nonparametric maximum likelihood estimator (NPMLE) of a survival function may severely underestimate the survival probabilities at very early times for left-truncated data. This problem might be overcome by instead computing a smoothed nonparametric estimator (SNE) via the EMS algorithm. The close connection between the SNE and the maximum penalized likelihood estimator is also established. Extensive Monte Carlo simulations demonstrate the superior performance of the SNE over the NPMLE, in terms of either bias or variance, even for moderately large samples. The methodology is illustrated with an application to the Massachusetts Health Care Panel Study dataset to estimate the probability of being functionally independent for non-poor male and female groups, respectively.
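The EMS algorithm itself is not spelled out in the abstract. As a stand-in, the sketch below computes the product-limit NPMLE for left-truncated data (the Lynden-Bell estimator) and then applies a crude moving-average smoothing of the resulting survival curve; the simulated truncation scheme and the smoothing window are assumptions, and this is not the authors' EMS/penalized-likelihood procedure.

```python
import numpy as np

def lynden_bell(trunc, obs):
    """Product-limit NPMLE of the survival function for left-truncated data
    (no censoring): S(t) = prod_{x_j <= t} (1 - d_j / n_j), where
    n_j = #{i : trunc_i <= x_j <= obs_i} is the risk-set size at x_j."""
    times = np.sort(np.unique(obs))
    surv, s = [], 1.0
    for x in times:
        d = np.sum(obs == x)
        n = np.sum((trunc <= x) & (obs >= x))
        s *= 1.0 - d / n
        surv.append(s)
    return times, np.array(surv)

# simulated left-truncated exponential data (assumed design, not the paper's)
rng = np.random.default_rng(2)
x = rng.exponential(1.0, size=2000)
t = rng.uniform(0.0, 1.5, size=2000)
keep = x >= t                               # only pairs with obs >= truncation are seen
times, surv = lynden_bell(t[keep], x[keep])

# crude smoothed estimate: moving average of the survival curve on a grid,
# a rough stand-in for the EMS-type smoothing step discussed in the abstract
grid = np.linspace(0, 4, 81)
step = np.interp(grid, times, surv)
smooth = np.convolve(step, np.ones(5) / 5, mode="same")
print(surv[:5], smooth[:5])
```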
45.
The expectation-maximization (EM) method facilitates computation of maximum likelihood (ML) and maximum penalized likelihood (MPL) solutions. The procedure requires specification of unobservable complete data which augment the measured or incomplete data. This specification defines a conditional expectation of the complete-data log-likelihood function, which is computed in the E-step. The EM algorithm is most effective when maximizing the function Q(θ) defined in the E-step is easier than maximizing the likelihood function. The Monte Carlo EM (MCEM) algorithm of Wei & Tanner (1990) was introduced for problems where computation of Q is difficult or intractable. However, Monte Carlo can be computationally expensive, e.g. in signal processing applications involving large numbers of parameters. We provide another approach: a modification of the standard EM algorithm that avoids computation of conditional expectations.
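For concreteness, here is a minimal Monte Carlo EM example in the spirit of Wei & Tanner's MCEM (not the modified EM proposed in the paper): estimating an exponential rate from right-censored lifetimes, with the E-step expectation replaced by an average over simulated completions of the censored observations. The censoring point, sample sizes, and number of completions are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# simulate exponential lifetimes with rate 0.5, right-censored at time 3
true_rate = 0.5
life = rng.exponential(1 / true_rate, size=500)
times = np.minimum(life, 3.0)
observed = life <= 3.0            # True if the lifetime was seen, False if censored

def mcem_exponential(times, observed, n_iter=50, m=200):
    """Monte Carlo EM for an exponential rate under right censoring.  The E-step
    expectation is replaced by an average over m simulated completions of each
    censored lifetime (memorylessness: a lifetime known to exceed c is
    distributed as c + Exp(rate))."""
    rate = 1.0 / times.mean()                      # crude starting value
    n_cens = np.sum(~observed)
    for _ in range(n_iter):
        # Monte Carlo E-step: impute the censored lifetimes m times
        extra = rng.exponential(1 / rate, size=(m, n_cens))
        completed = times[~observed] + extra       # shape (m, n_cens)
        # M-step with the averaged sufficient statistic (total lifetime)
        total = times[observed].sum() + completed.sum() / m
        rate = len(times) / total
    return rate

print(mcem_exponential(times, observed))           # should be near 0.5
```

In this toy case the exact E-step is available in closed form, so MCEM is unnecessary; it is shown only to make the Monte Carlo substitution concrete.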
46.
Harry Joe, Communications in Statistics - Theory and Methods, 2013, 42(11): 3677-3685
Theorems are proved for the maxima and minima of the table probability ∏_i R_i! ∏_j C_j! / (T! ∏_{ij} y_{ij}!) over r×c contingency tables Y = (y_{ij}) with row sums R_1,…,R_r, column sums C_1,…,C_c, and grand total T. These results are implemented in the network algorithm of Mehta and Patel (1983) for computing the P-value of Fisher's exact test for unordered r×c contingency tables. The decrease in the amount of computing time can be substantial when the column sums are very different.
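The sketch below is only a brute-force reference implementation of Fisher's exact test for a small unordered r×c table: it enumerates every table with the observed margins, evaluates the probability ∏_i R_i! ∏_j C_j! / (T! ∏_{ij} y_{ij}!) through log-gamma, and sums the probabilities no larger than that of the observed table. It does not use the network algorithm of Mehta and Patel or the factorial bounds proved in the paper.

```python
from math import lgamma, exp
from itertools import product
import numpy as np

def log_table_prob(table, row_sums, col_sums, total):
    """log of  prod_i R_i! * prod_j C_j! / (T! * prod_ij y_ij!)  via lgamma."""
    lp = sum(lgamma(r + 1) for r in row_sums) + sum(lgamma(c + 1) for c in col_sums)
    lp -= lgamma(total + 1)
    lp -= sum(lgamma(y + 1) for row in table for y in row)
    return lp

def fisher_exact_rxc(table):
    """Brute-force Fisher exact test for a small r x c table: enumerate all
    tables with the observed margins and sum the probabilities that do not
    exceed the probability of the observed table."""
    table = np.asarray(table)
    R, C, T = table.sum(axis=1), table.sum(axis=0), table.sum()
    lp_obs = log_table_prob(table, R, C, T)
    r, c = table.shape
    pval = 0.0

    def recurse(i, cols_left, rows):
        nonlocal pval
        if i == r - 1:                      # last row is forced by the column sums
            last = cols_left
            lp = log_table_prob(rows + [last.tolist()], R, C, T)
            if lp <= lp_obs + 1e-12:
                pval += exp(lp)
            return
        # enumerate row i over all splits of R[i] not exceeding the remaining columns
        for cells in product(*(range(min(R[i], cols_left[j]) + 1) for j in range(c))):
            if sum(cells) == R[i]:
                recurse(i + 1, cols_left - np.array(cells), rows + [list(cells)])

    recurse(0, C.copy(), [])
    return pval

print(fisher_exact_rxc([[3, 1, 2],
                        [1, 4, 1],
                        [2, 1, 3]]))
```

Full enumeration is feasible only for tiny tables; the network algorithm prunes this search using bounds of exactly the kind the paper sharpens.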
47.
Several models for studies related to the tensile strength of materials have been proposed in the literature, where the size or length of the specimen is taken to be an important factor in studying failure behaviour. An important model, developed on the basis of a cumulative damage approach, is the three-parameter extension of the Birnbaum-Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and stands up better than the traditional models, which do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some of the recent toolkits, and finally recommends the model that appears most appropriate. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.
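The cumulative damage models themselves are not reproduced in the abstract. As a hedged sketch of the Bayesian MCMC machinery involved, the code below fits only the ordinary two-parameter Birnbaum-Saunders distribution with a random-walk Metropolis sampler on (log α, log β); the flat log-scale prior, step size, and synthetic data are assumptions, and the three-parameter size-extended model is not implemented.

```python
import numpy as np

def bs_logpdf(t, alpha, beta):
    """Log-density of the two-parameter Birnbaum-Saunders distribution."""
    a = np.sqrt(beta / t)
    xi2 = (t / beta + beta / t - 2.0) / alpha**2
    return (np.log(a + a**3) - np.log(2 * alpha * beta)
            - 0.5 * np.log(2 * np.pi) - 0.5 * xi2)

def metropolis_bs(t, n_draws=5000, step=0.1, seed=4):
    """Random-walk Metropolis on (log alpha, log beta) with flat priors on the
    log scale.  A deliberately minimal sampler: real analyses would tune the
    step size, monitor convergence and use informative priors."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, np.log(np.median(t))])        # start: alpha=1, beta=median

    def logpost(th):
        return bs_logpdf(t, np.exp(th[0]), np.exp(th[1])).sum()

    lp = logpost(theta)
    draws = np.empty((n_draws, 2))
    for i in range(n_draws):
        prop = theta + step * rng.normal(size=2)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[i] = theta
    return np.exp(draws)                                  # back to (alpha, beta)

# synthetic BS(alpha=0.5, beta=2.0) data via T = beta*(x + sqrt(x^2+1))^2, x = alpha*Z/2
rng = np.random.default_rng(5)
x = 0.5 * rng.normal(size=300) / 2
t = 2.0 * (x + np.sqrt(x**2 + 1)) ** 2
draws = metropolis_bs(t)
print(draws[2500:].mean(axis=0))      # rough posterior means of (alpha, beta)
```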
48.
Estimating the propagation rate of a viral infection of potato plants via mixtures of regressions (total citations: 2; self-citations: 0; citations by others: 2)
T. Rolf Turner, Journal of the Royal Statistical Society, Series C (Applied Statistics), 2000, 49(3): 371-384
A problem arising from the study of the spread of a viral infection among potato plants by aphids appears to involve a mixture of two linear regressions on a single predictor variable. The plant scientists studying the problem were particularly interested in obtaining a 95% confidence upper bound for the infection rate. We discuss briefly the procedure for fitting mixtures of regression models by means of maximum likelihood, effected via the EM algorithm. We give general expressions for the implementation of the M-step and then address the issue of conducting statistical inference in this context. A technique due to T. A. Louis may be used to estimate the covariance matrix of the parameter estimates by calculating the observed Fisher information matrix. We develop general expressions for the entries of this information matrix. Having the complete covariance matrix permits the calculation of confidence and prediction bands for the fitted model. We also investigate the testing of hypotheses concerning the number of components in the mixture via parametric and 'semiparametric' bootstrapping. Finally, we develop a method of producing diagnostic plots of the residuals from a mixture of linear regressions.
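A minimal sketch of the core fitting step discussed above, namely EM for a mixture of two simple linear regressions, is given below; the responsibilities form the E-step and weighted least squares the M-step. The starting values, toy data, and stopping rule (a fixed number of iterations) are assumptions, and the Louis-information standard errors and bootstrap tests described in the paper are not included.

```python
import numpy as np
from scipy.stats import norm

def em_two_regressions(x, y, n_iter=200):
    """EM for a mixture of two simple linear regressions:
    y ~ pi * N(a1 + b1*x, s1^2) + (1 - pi) * N(a2 + b2*x, s2^2)."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    pi_ = 0.5
    comp = [np.array([y.mean(), 1.0]), np.array([y.mean(), -1.0])]   # (intercept, slope)
    sig = [y.std(), y.std()]
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        d1 = pi_ * norm.pdf(y, X @ comp[0], sig[0])
        d2 = (1 - pi_) * norm.pdf(y, X @ comp[1], sig[1])
        r = d1 / (d1 + d2)
        # M-step: weighted least squares and weighted residual s.d. per component
        for k, w in enumerate([r, 1 - r]):
            beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            resid = y - X @ beta
            comp[k] = beta
            sig[k] = np.sqrt((w * resid**2).sum() / w.sum())
        pi_ = r.mean()
    return pi_, comp, sig

# toy data from two lines with mixing weight 0.6
rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 300)
z = rng.random(300) < 0.6
y = np.where(z, 1.0 + 2.0 * x, 8.0 - 0.5 * x) + rng.normal(0, 1.0, 300)
print(em_two_regressions(x, y))
```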
49.
M. Jamshidian & R. I. Jennrich, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2000, 62(2): 257-270
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and of the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and of algorithms that differentiate the EM operator are identified.
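As a small illustration of the first approach (numerical differentiation of the log-likelihood rather than of the EM operator), the sketch below forms a central-difference Hessian at the MLE and reads standard errors off the inverse observed information. The normal example, the (μ, log σ) parameterization, and the step size are illustrative assumptions, not the authors' Poisson-mixture or missing-data examples.

```python
import numpy as np

def num_hessian(f, theta, eps=1e-4):
    """Central-difference Hessian of a scalar function f at theta."""
    p = len(theta)
    H = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            base = np.array(theta, dtype=float)
            def shift(di, dj):
                s = base.copy()
                s[i] += di
                s[j] += dj
                return f(s)
            H[i, j] = (shift(eps, eps) - shift(eps, -eps)
                       - shift(-eps, eps) + shift(-eps, -eps)) / (4 * eps**2)
    return H

# example: normal log-likelihood in (mu, log sigma), SEs from the observed information
rng = np.random.default_rng(8)
y = rng.normal(2.0, 1.5, size=400)

def loglik(theta):
    mu, log_s = theta
    s = np.exp(log_s)
    return -0.5 * len(y) * np.log(2 * np.pi * s**2) - ((y - mu) ** 2).sum() / (2 * s**2)

mle = np.array([y.mean(), np.log(y.std())])
H = num_hessian(loglik, mle)
se = np.sqrt(np.diag(np.linalg.inv(-H)))       # observed information = -Hessian
print(se)                                      # roughly sigma/sqrt(n) and 1/sqrt(2n)
```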
50.
This paper studies the distribution of the stochastic recovery rate and builds a double-Beta density model for it. The model is bimodal, which agrees with the latest research from Moody's and remedies the shortcoming that existing recovery-rate distribution models are all unimodal. The parameters of the proposed model are estimated with a sequential optimization algorithm based on number-theoretic nets, and an empirical analysis is carried out with the aid of kernel density estimation. The results show that the fitting error of the double-Beta model is very small, far smaller than that of the single Beta model, making it an ideal model for representing recovery rates. Finally, a method for drawing random numbers from the double-Beta distribution is given.
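Reading the "double Beta" model as a two-component Beta mixture, the sketch below gives one plausible implementation of its density and of random-number generation by the composition method; the mixture form and the parameter values are assumptions for illustration, not estimates or formulas taken from the paper.

```python
import numpy as np
from scipy.stats import beta

def double_beta_pdf(x, w, a1, b1, a2, b2):
    """Density of a two-component Beta mixture ("double Beta"):
    w * Beta(a1, b1) + (1 - w) * Beta(a2, b2).  With one component peaked near 0
    and the other near 1 the mixture is bimodal, as described in the abstract."""
    return w * beta.pdf(x, a1, b1) + (1 - w) * beta.pdf(x, a2, b2)

def double_beta_rvs(n, w, a1, b1, a2, b2, seed=9):
    """Draw n recovery rates: pick a component with probability w, then sample
    from the corresponding Beta distribution (composition method)."""
    rng = np.random.default_rng(seed)
    pick = rng.random(n) < w
    return np.where(pick,
                    rng.beta(a1, b1, size=n),
                    rng.beta(a2, b2, size=n))

# illustrative parameter values (not estimates from the paper)
params = dict(w=0.45, a1=2.0, b1=8.0, a2=9.0, b2=2.0)
x = np.linspace(0.01, 0.99, 5)
print(double_beta_pdf(x, **params))
print(double_beta_rvs(10, **params))
```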