451.
This article derives closed-form solutions for fifth-order power method polynomial transformations based on the Method of Percentiles (MOP). The proposed MOP univariate procedure is compared with the Method of Moments (MOM) in the context of distribution fitting and estimation of the shape functions. The MOP is also extended from univariate to multivariate data generation. The MOP procedure has an advantage because it does not require numerical integration to compute intermediate correlations and can be applied to distributions for which conventional moments do not exist. Simulation results demonstrate that the proposed MOP procedure is superior in terms of estimation bias and error.
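As a rough sketch of the contrast drawn above, the following transforms standard normal deviates through a fifth-order polynomial and summarizes the result with percentile-based (MOP-style) and moment-based (MOM-style) shape measures. The polynomial coefficients and the Kelley-type skewness measure are illustrative assumptions, not the closed-form solutions derived in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fifth-order power method coefficients (illustration only;
# the article derives coefficients in closed form from percentile matching).
c = np.array([0.0, 0.9, 0.1, 0.05, -0.01, 0.002])

z = rng.standard_normal(100_000)
y = sum(c[i] * z**i for i in range(6))   # fifth-order polynomial transform

# Percentile-based (MOP-style) shape summaries: robust under heavy tails.
q10, q50, q90 = np.percentile(y, [10, 50, 90])
inter_decile = q90 - q10
skew_mop = ((q90 - q50) - (q50 - q10)) / inter_decile  # Kelley-type skewness

# Moment-based (MOM-style) summary: unstable or undefined when moments fail.
skew_mom = np.mean((y - y.mean())**3) / y.std()**3
```

The percentile-based measure is bounded and exists even when the third moment does not, which is the practical point of the comparison.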
452.
This paper provides a semiparametric framework for modeling multivariate conditional heteroskedasticity. We put forward latent stochastic volatility (SV) factors as capturing the commonality in the joint conditional variance matrix of asset returns. This approach is in line with common features as studied by Engle and Kozicki (1993), and it allows us to focus on identification of factors and factor loadings through first- and second-order conditional moments only. We assume that the time-varying part of risk premiums is based on constant prices of factor risks, and we consider a factor SV in mean model. Additional specification of both expectations and volatility of future volatility of factors provides conditional moment restrictions, through which the parameters of the model are all identified. These conditional moment restrictions pave the way for instrumental variables estimation and GMM inference.
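The estimation route sketched above, parameters identified through conditional moment restrictions and recovered by instrumental variables/GMM, can be illustrated in miniature with a single linear moment condition. The toy model and all variable names below are hypothetical and far simpler than the factor SV model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Toy linear model with an endogenous regressor, estimated from the single
# moment restriction E[z * (y - beta * x)] = 0 (generic IV/GMM illustration).
z = rng.standard_normal(n)                # instrument
u = rng.standard_normal(n)                # common shock -> endogeneity
x = z + u + 0.3 * rng.standard_normal(n)  # regressor correlated with u
beta_true = 2.0
y = beta_true * x + u                     # error correlated with x

beta_ols = (x @ y) / (x @ x)              # biased: ignores endogeneity
beta_iv = (z @ y) / (z @ x)               # solves the sample moment condition
```

With a valid instrument, the moment-condition estimator recovers the true coefficient while least squares does not, which is the mechanism the paper's GMM inference relies on at a much larger scale.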
453.
Summary.  The primary goal of multivariate statistical process performance monitoring is to identify deviations from normal operation within a manufacturing process. The basis of the monitoring schemes is historical data that have been collected when the process is running under normal operating conditions. These data are then used to establish confidence bounds to detect the onset of process deviations. In contrast with traditional approaches based on the Gaussian assumption, this paper proposes the application of the infinite Gaussian mixture model (GMM) for the calculation of the confidence bounds, thereby relaxing the previous restrictive assumption. The infinite GMM is a special case of Dirichlet process mixtures and is introduced as the limit of the finite GMM, i.e. as the number of mixture components tends to infinity. On the basis of the estimation of the probability density function, via the infinite GMM, the confidence bounds are calculated by using the bootstrap algorithm. The methodology proposed is demonstrated through its application to a simulated continuous chemical process and a batch semiconductor manufacturing process.
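A minimal sketch of the bootstrap confidence-bound idea, using a single fitted Gaussian as a crude stand-in for the infinite GMM density estimate. All data, dimensions and thresholds below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Normal-operating-condition (NOC) training data: 2 correlated variables.
noc = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=500)

def neg_log_density(x, mean, cov):
    # Negative log-density under a fitted Gaussian; a simple stand-in here
    # for the infinite-GMM density estimate used in the paper.
    d = x - mean
    icov = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(2 * np.pi * cov)
    return 0.5 * (np.einsum('ij,jk,ik->i', d, icov, d) + logdet)

mean, cov = noc.mean(axis=0), np.cov(noc, rowvar=False)

# Bootstrap the 99% confidence bound on the monitoring statistic.
bounds = []
for _ in range(200):
    bs = noc[rng.integers(0, len(noc), len(noc))]
    m, c = bs.mean(axis=0), np.cov(bs, rowvar=False)
    bounds.append(np.percentile(neg_log_density(bs, m, c), 99))
control_limit = np.mean(bounds)

# A point far from normal operation should exceed the limit.
fault = np.array([[5.0, -5.0]])
alarm = neg_log_density(fault, mean, cov)[0] > control_limit
```

Replacing the single Gaussian with the infinite GMM relaxes the distributional assumption while the bootstrap step for the bound stays essentially the same.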
454.
The paper compares six smoothers, in terms of mean squared error and bias, when there are multiple predictors and the sample size is relatively small. The focus is on methods that use robust measures of location (primarily a 20% trimmed mean) and where there are four predictors. To add perspective, some methods designed for means are included. The smoothers include the locally weighted (loess) method derived by Cleveland and Devlin [W.S. Cleveland, S.J. Devlin, Locally-weighted regression: an approach to regression analysis by local fitting, Journal of the American Statistical Association 83 (1988) 596–610], a variation of a so-called running interval smoother where distances from a point are measured via a particular group of projections of the data, a running interval smoother where distances are measured in part using the minimum volume ellipsoid estimator, a generalized additive model based on the running interval smoother, a generalized additive model based on the robust version of the smoother derived by Cleveland [W.S. Cleveland, Robust locally weighted regression and smoothing scatterplots, Journal of the American Statistical Association 74 (1979) 829–836], and a kernel regression method stemming from [J. Fan, Local linear smoothers and their minimax efficiencies, The Annals of Statistics 21 (1993) 196–216]. When the regression surface is a plane, the method stemming from Fan was found to dominate, and indeed offers a substantial advantage in various situations, even when the error term has a heavy-tailed distribution. However, if there is curvature, this method can perform poorly compared with the other smoothers considered. In that case, the projection-type smoother used in conjunction with a 20% trimmed mean is recommended, with the minimum volume ellipsoid method a close second.
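A univariate running interval smoother with a 20% trimmed mean can be sketched in a few lines. The multivariate versions compared in the paper replace the simple distance below with projection- or minimum-volume-ellipsoid-based distances, and the MAD-based span rule here is an assumption for illustration.

```python
import numpy as np

def trimmed_mean(a, prop=0.2):
    # 20% trimmed mean: drop the lowest and highest 20% of values.
    a = np.sort(a)
    g = int(prop * len(a))
    return a[g:len(a) - g].mean() if len(a) > 2 * g else a.mean()

def run_interval_smooth(x, y, span=0.5):
    # Running interval smoother (univariate sketch): fit at x0 is the
    # trimmed mean of the y values whose x lies within a fixed half-width.
    h = span * np.median(np.abs(x - np.median(x)))  # MAD-based half-width
    fits = np.empty_like(y, dtype=float)
    for i, x0 in enumerate(x):
        near = np.abs(x - x0) <= h
        fits[i] = trimmed_mean(y[near])
    return fits

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 300)
y = x**2 + rng.standard_normal(300) * 0.2   # curved surface plus noise
yhat = run_interval_smooth(x, y)
```

The trimmed mean is what gives the smoother its resistance to outliers relative to the mean-based methods included for perspective.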
455.
A bivariate generalized linear model is developed as a mixture distribution with one component of the mixture being discrete with probability mass only at the origin. The use of the proposed model is illustrated by analyzing local area meteorological measurements with a constant correlation structure that incorporates predictor variables. A Monte Carlo study is performed to evaluate the inferential efficiency of the model parameters for two types of true models. The results suggest that the estimates of the regression parameters are consistent and that the efficiency of inference increases for the proposed model when ρ≥0.50, especially in larger samples. As an illustration of the bivariate generalized linear model, we analyze precipitation monitoring data from adjacent local stations in Tokyo and Yokohama.
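A generic simulation of this model class, a point mass at the origin mixed with correlated positive amounts, might look as follows. The mixing probability, the lognormal choice for the continuous part and the correlation value are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

# With probability p0 both stations record zero (the discrete component with
# mass at the origin); otherwise amounts are positive and correlated.
p0 = 0.6
wet = rng.random(n) > p0
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
amounts = np.exp(z) * wet[:, None]   # lognormal amounts on wet days, else 0

# Recover the two pieces of the mixture from the simulated data.
p0_hat = np.mean((amounts[:, 0] == 0) & (amounts[:, 1] == 0))
corr_wet = np.corrcoef(np.log(amounts[wet, 0]), np.log(amounts[wet, 1]))[0, 1]
```

Predictor variables would enter through link functions on both the mixing probability and the continuous component, which is what makes the construction a generalized linear model rather than a plain mixture.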
456.
Celebrating the 20th anniversary of the presentation of the paper by Dempster, Laird and Rubin which popularized the EM algorithm, we investigate, after a brief historical account, strategies that aim to make the EM algorithm converge faster while maintaining its simplicity and stability (e.g. automatic monotone convergence in likelihood). First, we introduce the idea of a 'working parameter' to facilitate the search for efficient data augmentation schemes and thus fast EM implementations. Second, summarizing various recent extensions of the EM algorithm, we formulate a general alternating expectation-conditional maximization algorithm (AECM) that couples flexible data augmentation schemes with model reduction schemes to achieve efficient computations. We illustrate these methods using multivariate t-models with known or unknown degrees of freedom and Poisson models for image reconstruction. We show, through both empirical and theoretical evidence, the potential for a dramatic reduction in computational time with little increase in human effort. We also discuss the intrinsic connection between EM-type algorithms and the Gibbs sampler, and the possibility of using the techniques presented here to speed up the latter. The main conclusion of the paper is that, with the help of statistical considerations, it is possible to construct algorithms that are simple, stable and fast.
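The monotone convergence in likelihood mentioned above can be seen in a plain two-component Gaussian-mixture EM (a textbook sketch, not the AECM scheme or the working-parameter constructions of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
# Two-component Gaussian mixture data (component variances fixed at 1).
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(2, 1, 600)])

def phi(x, m):
    # Standard normal density shifted to mean m.
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2 * np.pi)

pi, m1, m2 = 0.5, -1.0, 1.0   # starting values
lls = []
for _ in range(50):
    # E-step: responsibilities for component 1.
    r = pi * phi(x, m1) / (pi * phi(x, m1) + (1 - pi) * phi(x, m2))
    # M-step: weighted updates; each full sweep cannot decrease the likelihood.
    pi = r.mean()
    m1 = (r * x).sum() / r.sum()
    m2 = ((1 - r) * x).sum() / (1 - r).sum()
    lls.append(np.log(pi * phi(x, m1) + (1 - pi) * phi(x, m2)).sum())
```

The recorded log-likelihoods never decrease, which is the stability property the acceleration strategies in the paper are careful to preserve.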
457.
Summary.  Multivariate meta-analysis allows the joint synthesis of summary estimates from multiple end points and accounts for their within-study and between-study correlation. Yet practitioners usually meta-analyse each end point independently. I examine the role of within-study correlation in multivariate meta-analysis, to elicit the consequences of ignoring it. Using analytic reasoning and a simulation study, the within-study correlation is shown to influence the 'borrowing of strength' across end points, and wrongly ignoring it gives meta-analysis results with generally inferior statistical properties; for example, on average it increases the mean-square error and standard error of pooled estimates, and for non-ignorable missing data it increases their bias. The influence of within-study correlation is only negligible when the within-study variation is small relative to the between-study variation, or when very small differences exist across studies in the within-study covariance matrices. The findings are demonstrated by applied examples within medicine, dentistry and education. Meta-analysts are thus encouraged to account for the correlation between end points. To facilitate this, I conclude by reviewing options for multivariate meta-analysis when within-study correlations are unknown; these include obtaining individual patient data, using external information, performing sensitivity analyses and using alternatively parameterized models.
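The 'borrowing of strength' can be made concrete with fixed-effect pooling of two end points across two studies whose end-point precisions differ. The within-study variances and correlation below are illustrative numbers, not taken from the applied examples.

```python
import numpy as np

# Two studies, two end points; within-study correlation rho = 0.8.
# End-point precisions flip between studies, which is where the
# borrowing of strength shows up.
rho = 0.8
covs = []
for a, b in [(0.01, 1.0), (1.0, 0.01)]:   # within-study variances
    covs.append(np.array([[a, rho * np.sqrt(a * b)],
                          [rho * np.sqrt(a * b), b]]))

# Multivariate fixed-effect pooling: Var(mu_hat) = (sum_i S_i^{-1})^{-1}.
precision = sum(np.linalg.inv(S) for S in covs)
var_joint = np.linalg.inv(precision)
se2_joint = np.sqrt(var_joint[1, 1])      # SE of pooled end point 2

# Univariate pooling of end point 2 ignores the correlation entirely.
se2_univ = np.sqrt(1 / sum(1 / S[1, 1] for S in covs))
```

Here the joint analysis gives a noticeably smaller standard error for end point 2 because the precisely measured end point 1 in each study carries information about it; setting rho to 0 makes the two standard errors coincide, matching the negligibility condition described above.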
458.
This paper concerns the problem of assessing autocorrelation in multivariate (i.e. systemwise) models. It is well known that systemwise diagnostic tests for autocorrelation often suffer from poor small-sample properties, in the sense that the true size overstates the nominal size. This failure to control the size usually stems from the fact that the critical values (used to determine the rejection region) originate from the slowly converging asymptotic null distribution. Another drawback of existing tests is that the power may be rather low if the deviation from the null is not symmetrical across the marginal models. In this paper we consider four quite different test techniques for autocorrelation: (i) Pillai's trace, (ii) Roy's largest root, (iii) the maximum F-statistic and (iv) the maximum t² test. We show how to obtain control of the size of the tests, and then examine the true (small-sample) size and power properties by means of Monte Carlo simulations.
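The size-control strategy, simulating the null distribution of a max-type statistic rather than relying on its slowly converging asymptotic distribution, can be sketched as follows. The 'maximum t²'-style statistic used here is a simplified stand-in for the tests studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, nominal = 40, 4, 0.05   # sample size, number of marginal models, size

def max_t2(resid):
    # Toy max-type statistic: largest scaled squared lag-1 autocorrelation
    # across the m marginal residual series (a sketch, not the paper's test).
    stats = []
    for j in range(m):
        e = resid[:, j]
        r1 = np.corrcoef(e[:-1], e[1:])[0, 1]
        stats.append((n - 1) * r1**2)
    return max(stats)

# Simulate the null (no autocorrelation) to obtain a size-controlled
# critical value instead of using the asymptotic one.
null = np.array([max_t2(rng.standard_normal((n, m))) for _ in range(3000)])
crit = np.quantile(null, 1 - nominal)

# The empirical size under the null should now be close to nominal.
size = np.mean([max_t2(rng.standard_normal((n, m))) > crit for _ in range(3000)])
```

With simulated critical values the rejection rate under the null matches the nominal level by construction, up to Monte Carlo error, which is precisely the control the asymptotic critical values fail to deliver in small samples.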
459.
Acceleration of the EM Algorithm by using Quasi-Newton Methods
The EM algorithm is a popular method for maximum likelihood estimation. Its simplicity in many applications and desirable convergence properties make it very attractive. Its sometimes slow convergence, however, has prompted researchers to propose methods to accelerate it. We review these methods, classifying them into three groups: pure, hybrid and EM-type accelerators. We propose a new pure and a new hybrid accelerator, both based on quasi-Newton methods, and numerically compare these and two other quasi-Newton accelerators. For this we use examples in each of three areas: Poisson mixtures, the estimation of covariance from incomplete data and multivariate normal mixtures. In these comparisons, the new hybrid accelerator was fastest on most of the examples and often dramatically so. In some cases it accelerated the EM algorithm by factors of over 100. The new pure accelerator is very simple to implement and competed well with the other accelerators. It accelerated the EM algorithm in some cases by factors of over 50. To obtain standard errors, we propose to approximate the inverse of the observed information matrix by using auxiliary output from the new hybrid accelerator. A numerical evaluation of these approximations indicates that they may be useful at least for exploratory purposes.
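The flavour of such acceleration can be shown on the classic multinomial linkage data of Dempster, Laird and Rubin, speeding up the EM fixed-point map with a simple Aitken/Steffensen step. This secant-style device is far simpler than the quasi-Newton accelerators proposed in the paper and is used here only to illustrate the idea.

```python
def em_step(theta):
    # EM fixed-point map for the classic multinomial linkage data
    # (125, 18, 20, 34) of Dempster, Laird and Rubin.
    x = 125 * theta / (2 + theta)          # E-step: expected split of cell 1
    return (x + 34) / (x + 18 + 20 + 34)   # M-step

def iterate(step, theta, tol=1e-10, max_iter=500):
    # Iterate a fixed-point map to convergence, counting iterations.
    for k in range(1, max_iter + 1):
        new = step(theta)
        if abs(new - theta) < tol:
            return new, k
        theta = new
    return theta, max_iter

def steffensen(theta):
    # One Aitken/Steffensen step built from two EM steps: a simple
    # secant-style accelerator of the EM map.
    t1, t2 = em_step(theta), em_step(em_step(theta))
    denom = t2 - 2 * t1 + theta
    return theta - (t1 - theta) ** 2 / denom if denom != 0 else t1

mle_em, iters_em = iterate(em_step, 0.1)
mle_acc, iters_acc = iterate(steffensen, 0.1)
```

Both iterations reach the same maximum likelihood estimate (about 0.6268), but the accelerated map needs far fewer sweeps, mirroring the speed-ups reported in the comparisons above.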
460.
In this paper, I introduce new methods for multilevel meta network analysis. The new methods can combine results from multiple network models, assess the effects of predictors at the network or higher levels, and account for both within- and cross-network correlations of the parameters in the network models. To demonstrate the new methods, I studied the network dynamics of a smoking prevention intervention implemented in 76 classes of six middle schools in China. The results show that, compared with a random intervention (i.e., one targeting randomly chosen students), smokers' popularity was significantly reduced in classes with network interventions (i.e., those targeting central students, or students together with their friends). The findings highlight the importance of examining network outcomes when evaluating social and health interventions, the role of social selection in managing social influence, and the potential of using network methods to design more effective interventions.