181.
In scientific investigations there are many situations in which experimental units must be grouped into blocks of size two. When planning such experiments, variance-based optimality criteria such as the A-, D-, and E-criteria are typically employed to choose efficient designs if the estimation efficiency of treatment contrasts is the primary concern. Alternatively, if observations tend to be lost during the experimental period, robustness criteria against the unavailability of data are recommended for selecting the planning scheme. In this study, a new criterion, called the minimum breakdown criterion, is proposed to quantify the robustness of designs in blocks of size two. Based on this criterion, a new class of robust designs, called minimum breakdown designs, is defined. When various numbers of blocks are missing, minimum breakdown designs provide the highest probability that all treatment contrasts remain estimable. An exhaustive search procedure is proposed to generate such designs, and two classes of uniformly minimum breakdown designs are verified theoretically.
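A block of size two links a pair of treatments, so a design is a graph on the treatments, and all treatment contrasts are estimable exactly when that graph is connected. The following sketch (our illustration, not the article's search procedure) computes, for a toy design, the probability that estimability survives the loss of a given number of blocks by enumerating all loss patterns:

```python
from itertools import combinations

def connected(n_treatments, edges):
    """Check whether the treatment-concurrence graph is connected."""
    if not edges:
        return n_treatments <= 1
    adj = {t: set() for t in range(n_treatments)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n_treatments

def estimability_prob(n_treatments, blocks, n_missing):
    """Probability that all treatment contrasts remain estimable when
    n_missing blocks, chosen uniformly at random, are lost."""
    total = ok = 0
    for lost in combinations(range(len(blocks)), n_missing):
        remaining = [b for i, b in enumerate(blocks) if i not in lost]
        total += 1
        ok += connected(n_treatments, remaining)
    return ok / total

# hypothetical design: 4 treatments, a cycle plus one chord (5 blocks)
design = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
p = estimability_prob(4, design, 1)  # robust to any single missing block
```

For this design, losing any one block never disconnects the graph, while two of the ten possible pairs of lost blocks do, so the probabilities are 1.0 and 0.8 respectively.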
182.
In this paper, a generalized partially linear model (GPLM) with missing covariates is studied, and a Monte Carlo EM (MCEM) algorithm with a penalized-spline (P-spline) technique is developed to estimate the regression coefficients and the nonparametric function, respectively. Because classical model selection procedures such as Akaike's information criterion (AIC) become invalid for the models considered here with incomplete data, new model selection criteria for GPLMs with missing covariates are proposed under two missingness mechanisms: missing at random (MAR) and missing not at random (MNAR). The most attractive feature of our method is its generality: it extends to a variety of situations with missing observations based on the EM algorithm, and when no data are missing, the new criteria reduce to the classical AIC. We can therefore compare models with missing observations under MAR/MNAR settings, and also compare missing-data models with complete-data models simultaneously. Theoretical properties of the proposed estimator, including the consistency of the model selection criteria, are investigated. A simulation study and a real example illustrate the proposed methodology.
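The complete-data special case the abstract mentions is the classical AIC. As a minimal sketch (a Gaussian linear model, not the article's GPLM), the criterion that the proposed one reduces to can be computed and used to compare nested models:

```python
import numpy as np

def aic_gaussian_lm(y, X):
    """Classical AIC for a Gaussian linear model; the article's
    criteria reduce to this form when no covariates are missing."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    return n * np.log(rss / n) + 2 * (p + 1)  # +1 for the error variance

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
X_full = np.column_stack([np.ones(n), x])
y = X_full @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)
aic_full = aic_gaussian_lm(y, X_full)
aic_null = aic_gaussian_lm(y, X_full[:, :1])  # intercept-only model
```

Here the full model, which contains the true slope, attains the smaller AIC, which is the direction of comparison the criteria formalize.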
183.
In social network analysis there are situations in which researchers wish to exclude certain dyads from the computation of centrality to avoid biased or misleading results, yet simply deleting these dyads leads to wrong conclusions. Little work has considered this problem apart from the eigenvector-like centrality method presented in 2015. In this paper, we revisit the problem and present a new degree-like centrality method that also allows some dyads to be excluded from the calculations. The new method adopts the technique of weighted symmetric nonnegative matrix factorization (WSNMF), and we show that it can be seen as a generalized version of the existing eigenvector-like centrality. We apply it to several data sets to demonstrate the new method's effectiveness.
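To make the idea concrete, here is a minimal sketch of a WSNMF-based centrality, using a standard damped multiplicative update for the weighted symmetric factorization; the excluded dyads are simply given zero weight so they never enter the fit. This is our reading of the technique, not the article's exact algorithm:

```python
import numpy as np

def wsnmf_centrality(A, W, r=1, n_iter=500, eps=1e-9, seed=0):
    """Degree-like centrality via weighted symmetric NMF: minimise
    ||W * (A - H H^T)||_F^2 over H >= 0, where W is a 0/1 weight
    matrix whose zeros mark the dyads to exclude. Centrality scores
    are the row norms of H."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, r)) + eps
    WA = W * A
    for _ in range(n_iter):
        num = WA @ H
        den = (W * (H @ H.T)) @ H + eps
        H *= 0.5 + 0.5 * num / den   # damped multiplicative update
    return np.linalg.norm(H, axis=1)

# toy star graph on 4 nodes; the (1, 2) dyad is excluded from the fit
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], float)
W = np.ones_like(A)
W[1, 2] = W[2, 1] = 0
c = wsnmf_centrality(A, W)
```

On this star graph the hub (node 0) receives the largest score, whether or not the excluded dyad is present, which is the qualitative behaviour a degree-like measure should show.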
184.
In this article, we propose a new method of imputation that makes use of higher-order moments of an auxiliary variable when imputing missing values. The mean, ratio, and regression methods of imputation are shown to be special cases of, and less efficient than, the newly developed method. Analytical comparisons show that the first-order mean squared error approximation for the proposed method is always smaller than that for the regression method of imputation. Finally, the proposed higher-order-moments-based imputation method is applied to a real dataset.
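The article's proposal is not specified here, but the regression-imputation baseline it improves upon is classical and easy to sketch: fit the study variable on the auxiliary variable using the responding units, then predict the missing values.

```python
import numpy as np

def regression_impute(y, x, missing):
    """Classical regression imputation: fit y on the auxiliary x
    using the responding units, then predict y for the missing ones.
    (The baseline the article's higher-order-moments method extends.)"""
    obs = ~missing
    b1, b0 = np.polyfit(x[obs], y[obs], 1)   # slope, intercept
    y_imp = y.copy()
    y_imp[missing] = b0 + b1 * x[missing]
    return y_imp

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, 50)
missing = np.zeros(50, bool)
missing[:10] = True                          # first ten values are lost
y_hat = regression_impute(y, x, missing)
```

Observed values pass through unchanged, and with a strong linear relation the imputed values land close to the true regression line.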
185.
This article studies how to identify influential observations in univariate autoregressive integrated moving average (ARIMA) time series models and how to measure their effects on the estimated parameters of the model. The sensitivity of the parameters to the presence of either additive or innovational outliers is analyzed, and influence statistics based on the Mahalanobis distance are presented. The statistic linked to additive outliers proves very useful for assessing the robustness of the fitted model to the given data set. Its application is illustrated on a relevant set of historical data.
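As a much-simplified illustration of the additive-outlier case (standardised one-step residuals in an AR(1) fit, not the article's Mahalanobis-distance statistics), an injected spike stands out sharply in the diagnostic:

```python
import numpy as np

def ao_influence_stat(x):
    """Toy influence diagnostic for additive outliers in an AR(1) fit:
    standardised one-step residuals. A simplified stand-in for the
    Mahalanobis-distance statistics developed in the article."""
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]   # Yule-Walker AR(1) estimate
    resid = x[1:] - phi * x[:-1]
    return (resid - resid.mean()) / resid.std()

rng = np.random.default_rng(3)
n = 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
x[50] += 8.0                                 # inject an additive outlier
s = ao_influence_stat(x)
```

The residual indexed 49 corresponds to the transition into the contaminated observation, so the largest standardised residual points at the injected outlier.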
186.
A linear recursive technique that does not use the Kalman filter is proposed to estimate missing observations in a univariate time series, assumed to follow an invertible ARIMA model. The procedure is based on the restricted forecasting approach, and the recursive linear estimators are optimal in the sense of minimum mean squared error.
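The simplest special case of such minimum-MSE interpolation is a single missing value in a zero-mean AR(1) series, for which the classical closed form weighs the two neighbours symmetrically (the article's recursion covers general invertible ARIMA models):

```python
def ar1_interpolate(x_prev, x_next, phi):
    """Minimum mean-squared-error estimate of one missing value in a
    zero-mean AR(1) series with coefficient phi, given its two
    neighbours: phi * (x_prev + x_next) / (1 + phi**2)."""
    return phi * (x_prev + x_next) / (1 + phi ** 2)

est = ar1_interpolate(1.0, 1.0, 0.5)  # = 0.5 * 2 / 1.25 = 0.8
```

Note that with phi = 0 the series is white noise and the interpolator correctly returns the unconditional mean, zero.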
187.
The Effect of Consumer Regret on Purchase Intention in Missed-Purchase Situations
Missed purchases are a common phenomenon, and guiding consumers who have missed a purchase toward buying in the future is of real significance. This study examines whether, after a consumer misses a purchase, marketers can stimulate and leverage the consumer's regret to prompt a purchase on the next occasion. Results from three experiments show that after a missed purchase, regret raises consumers' future purchase intention; positive information about the purchase outcome intensifies regret and thereby further raises purchase intention; and the effect of regret on future purchase intention is moderated by the consumer's perceived accessibility of the next purchase opportunity.
188.
Missing data are a ubiquitous problem in scientific research, especially since most statistical analyses require complete data. To evaluate the performance of methods for dealing with missing data, researchers conduct simulation studies. An important aspect of these studies is the generation of missing values in a simulated complete data set: the amputation procedure. We investigated the methodological validity and statistical nature of both current amputation practice and a newly developed and implemented multivariate amputation procedure. We found that current practice may not be appropriate for generating intuitive and reliable missing data problems. The multivariate amputation procedure, on the other hand, generates reliable amputations and allows proper regulation of missing data problems, with additional features to generate any missing data scenario precisely as intended. Hence, the multivariate amputation procedure is an efficient method for accurately evaluating missing data methodology.
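A minimal sketch of what an amputation step does, assuming a MAR mechanism (our simplified stand-in for the multivariate procedure described above): the probability that y is made missing increases with a fully observed covariate x, rescaled so the expected missing proportion hits a target.

```python
import numpy as np

def ampute_mar(x, y, prop=0.3, seed=0):
    """Simple MAR amputation sketch: y is amputated with probability
    increasing in the fully observed covariate x (logistic weights),
    calibrated so the expected missing proportion equals `prop`."""
    rng = np.random.default_rng(seed)
    z = (x - x.mean()) / x.std()
    p = 1.0 / (1.0 + np.exp(-z))              # logistic missingness weights
    p = np.clip(p * prop * len(x) / p.sum(), 0, 1)
    y_amp = y.astype(float).copy()
    y_amp[rng.random(len(x)) < p] = np.nan
    return y_amp

rng = np.random.default_rng(42)
x = rng.normal(size=2000)
y = x + rng.normal(size=2000)
y_amp = ampute_mar(x, y, prop=0.3)
```

The resulting missingness is MAR by construction: about 30% of y is lost, and the lost cases have systematically larger x, a pattern a simulation study can then ask an imputation method to recover from.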
189.
Summary. The paper discusses the estimation of an unknown population size n. Suppose that an identification mechanism can identify n_obs cases. The Horvitz-Thompson estimator of n adjusts this number by the inverse of 1 − p_0, where the latter is the probability of not identifying a case. When repeated counts of identifying the same case are available, the counting distribution can be used to estimate p_0. Frequently the Poisson distribution is used and, more recently, mixtures of Poisson distributions. Maximum likelihood estimation is discussed by means of the EM algorithm. For truncated Poisson mixtures, a nested EM algorithm is suggested and illustrated for several applications. The algorithmic principles are used to prove an inequality stating that the Horvitz-Thompson estimator of n under the mixed Poisson model is always at least as large as the estimator under a homogeneous Poisson model. In turn, if the homogeneous Poisson model is misspecified, it will, potentially strongly, underestimate the true population size. Examples from various areas illustrate this finding.
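The homogeneous-Poisson baseline is easy to sketch. Under a zero-truncated Poisson model, the MLE of lambda solves lambda / (1 − exp(−lambda)) = mean observed count, which the EM-style fixed-point iteration below finds; the Horvitz-Thompson estimate is then n_obs / (1 − p_0). The toy count data are hypothetical; the article's mixture extension is not attempted here.

```python
import math

def ht_truncated_poisson(counts, n_iter=200):
    """Horvitz-Thompson population-size estimate under a homogeneous
    zero-truncated Poisson count model. The MLE of lambda satisfies
    lambda / (1 - exp(-lambda)) = m (the mean observed count), found
    by the fixed-point iteration lambda <- m * (1 - exp(-lambda))."""
    n_obs = len(counts)
    m = sum(counts) / n_obs
    lam = m                       # start at the untruncated mean
    for _ in range(n_iter):
        lam = m * (1.0 - math.exp(-lam))
    p0 = math.exp(-lam)           # estimated prob. of never being seen
    return n_obs / (1.0 - p0)

# hypothetical data: 100 cases seen; 60 once, 30 twice, 10 three times
counts = [1] * 60 + [2] * 30 + [3] * 10
n_hat = ht_truncated_poisson(counts)
```

For these counts lambda is about 0.87, p_0 about 0.42, and the estimated population size about 172, well above the 100 cases actually identified; per the paper's inequality, a mixed Poisson model would push this estimate up further.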
190.
Celebrating the 20th anniversary of the presentation of the paper by Dempster, Laird and Rubin which popularized the EM algorithm, we investigate, after a brief historical account, strategies that aim to make the EM algorithm converge faster while maintaining its simplicity and stability (e.g. automatic monotone convergence in likelihood). First we introduce the idea of a 'working parameter' to facilitate the search for efficient data augmentation schemes and thus fast EM implementations. Second, summarizing various recent extensions of the EM algorithm, we formulate a general alternating expectation-conditional maximization algorithm (AECM) that couples flexible data augmentation schemes with model reduction schemes to achieve efficient computations. We illustrate these methods using multivariate t-models with known or unknown degrees of freedom and Poisson models for image reconstruction. We show, through both empirical and theoretical evidence, the potential for a dramatic reduction in computational time with little increase in human effort. We also discuss the intrinsic connection between EM-type algorithms and the Gibbs sampler, and the possibility of using the techniques presented here to speed up the latter. The main conclusion of the paper is that, with the help of statistical considerations, it is possible to construct algorithms that are simple, stable and fast.
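The "automatic monotone convergence in likelihood" that the abstract highlights can be seen directly in a plain EM run. The sketch below (a two-component Gaussian mixture with fixed unit variances, chosen for brevity; not one of the paper's examples) records the log-likelihood at every iteration, and the trace never decreases:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Plain EM for a two-component Gaussian mixture with unit
    variances. Returns the component means, the mixing weight, and
    the log-likelihood trace, which EM guarantees is non-decreasing."""
    mu = np.array([x.min(), x.max()], float)   # deliberately crude start
    w = 0.5
    trace = []
    for _ in range(n_iter):
        # E-step: responsibilities under the current parameters
        d = np.stack([w * np.exp(-0.5 * (x - mu[0]) ** 2),
                      (1 - w) * np.exp(-0.5 * (x - mu[1]) ** 2)])
        d /= np.sqrt(2 * np.pi)
        tot = d.sum(axis=0)
        trace.append(float(np.log(tot).sum()))  # log-likelihood now
        r = d[0] / tot
        # M-step: closed-form updates
        w = r.mean()
        mu[0] = (r * x).sum() / r.sum()
        mu[1] = ((1 - r) * x).sum() / (1 - r).sum()
    return mu, w, trace

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
mu, w, trace = em_gaussian_mixture(x)
```

Even from the crude min/max initialization, the means settle near the true values of -2 and 3, and every step of the trace is monotone, which is the stability the paper's accelerations are designed to preserve.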