  Subscription full text   550 articles
  Free full text   17 articles
Management   13 articles
General   30 articles
Sociology   1 article
Statistics   523 articles
  2022   6 articles
  2021   4 articles
  2020   12 articles
  2019   26 articles
  2018   28 articles
  2017   42 articles
  2016   19 articles
  2015   16 articles
  2014   20 articles
  2013   136 articles
  2012   51 articles
  2011   16 articles
  2010   21 articles
  2009   11 articles
  2008   10 articles
  2007   16 articles
  2006   11 articles
  2005   13 articles
  2004   16 articles
  2003   9 articles
  2002   7 articles
  2001   9 articles
  2000   14 articles
  1999   12 articles
  1998   7 articles
  1997   7 articles
  1996   11 articles
  1995   3 articles
  1994   1 article
  1993   2 articles
  1992   7 articles
  1990   1 article
  1988   2 articles
  1984   1 article
567 query results found; search took 15 ms.
1.
This article considers the generalized half-normal (GHN) distribution under progressive type-II censoring for statistical inference in constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is obtained via the missing-information principle and used to construct asymptotic confidence intervals; interval estimation is also discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained, and three optimality criteria are considered to determine the optimal stress level. A real data set illustrates the usefulness of the GHN distribution as an alternative to well-known lifetime models. Finally, a simulation study is provided with discussion.
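As a hedged illustration of the Bayesian computation named above, the sketch below runs a random-walk Metropolis-Hastings chain for the two GHN parameters on a complete (uncensored) sample, assuming the Cooray-Ananda parameterization of the GHN density and a flat prior on the log scale; the progressive censoring and accelerated-stress structure of the article are not reproduced.

```python
import numpy as np

def ghn_loglik(x, alpha, theta):
    # Log-likelihood under the generalized half-normal density (assumed
    # Cooray-Ananda form): f(x) = sqrt(2/pi)*(alpha/x)*(x/theta)^alpha
    #                             * exp(-(x/theta)^(2*alpha)/2),  x > 0.
    z = (x / theta) ** (2.0 * alpha)
    return np.sum(0.5 * np.log(2.0 / np.pi) + np.log(alpha) - np.log(x)
                  + alpha * (np.log(x) - np.log(theta)) - 0.5 * z)

def mh_ghn(x, n_iter=20000, step=0.15, seed=1):
    # Random-walk Metropolis-Hastings on (log alpha, log theta).
    rng = np.random.default_rng(seed)
    la, lt = 0.0, np.log(x.mean())
    cur = ghn_loglik(x, np.exp(la), np.exp(lt))
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        la_p, lt_p = la + step * rng.normal(), lt + step * rng.normal()
        prop = ghn_loglik(x, np.exp(la_p), np.exp(lt_p))
        if np.log(rng.uniform()) < prop - cur:  # flat prior on the log scale
            la, lt, cur = la_p, lt_p, prop
        draws[i] = np.exp(la), np.exp(lt)
    return draws  # posterior draws of (alpha, theta); discard a burn-in
```

Posterior means of the retained draws then play the role of Bayesian estimates under squared-error loss.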
2.
Summary.  Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials, but they are often incompletely observed, and investigators may suspect that the missing quality-of-life data are non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all require the assumption that the data are missing at random or that all covariates are discrete. We present a method for estimating the parameters of the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method reduces bias and improves efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing-data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
3.
Summary.  Social data often contain missing information, and the problem is inevitably severe when analysing historical data. Conventionally, researchers analyse complete records only; such listwise deletion not only reduces the effective sample size but may also bias estimation, depending on the missingness mechanism. We analyse household types in population registers from ancient China (618-907 AD) by comparing a simple classification, a latent class model of the complete data, and a latent class model of the complete and partially missing data under four types of ignorable and non-ignorable missingness mechanisms. The findings show that both a frequency classification and a latent class analysis of the complete records only yield biased estimates and incorrect conclusions when partially missing data arise from a non-ignorable mechanism. Although simply assuming an ignorable or a non-ignorable mechanism produced consistently similar, higher estimates of the proportion of complex households, specifying the relationship between the latent variable and the degree of missingness through a row-effect uniform association model captured the missingness mechanism better and improved the model fit.
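To make the baseline ingredient concrete, here is a minimal, hedged sketch of EM for a latent class model with binary indicators on fully observed data; the row-effect uniform association model for non-ignorable missingness used in the paper is not reproduced, and the data below are synthetic.

```python
import numpy as np

def lca_em(Y, K=2, n_iter=300, seed=0):
    # EM for a K-class latent class model with binary items Y (n x J),
    # complete data only.
    rng = np.random.default_rng(seed)
    n, J = Y.shape
    pi = np.full(K, 1.0 / K)                  # class proportions
    theta = rng.uniform(0.3, 0.7, (K, J))     # P(item j = 1 | class k)
    for _ in range(n_iter):
        # E-step: posterior class memberships.
        logp = (np.log(pi) + Y @ np.log(theta).T
                + (1.0 - Y) @ np.log(1.0 - theta).T)
        post = np.exp(logp - logp.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update proportions and item probabilities.
        pi = post.mean(axis=0)
        theta = np.clip((post.T @ Y) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, post

# Synthetic two-class data: item probabilities 0.2 vs 0.8.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, 400)
Y = (rng.uniform(size=(400, 6)) < np.where(z[:, None] == 0, 0.2, 0.8)).astype(float)
pi_hat, theta_hat, _ = lca_em(Y)
```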
4.
Bayesian networks for imputation
Summary.  Bayesian networks are particularly useful for dealing with high dimensional statistical problems. They reduce the complexity of the phenomenon under study by representing joint relationships between a set of variables through conditional relationships between subsets of these variables. Following Thibaudeau and Winkler, we use Bayesian networks for imputing missing values. This method is introduced to deal with the problem of the consistency of imputed values: preservation of statistical relationships between variables (statistical consistency) and preservation of logical constraints in the data (logical consistency). We perform some experiments on a subset of anonymous individual records from the 1991 UK population census.
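A minimal, hedged sketch of the core idea follows: a missing value is imputed by drawing from its conditional distribution given its parent in the network, estimated from the complete records. The two-variable network and data are toy assumptions; the census application and the full Thibaudeau-Winkler machinery are not reproduced.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"A": rng.choice(["a0", "a1"], 500),
                   "B": rng.choice(["b0", "b1"], 500)})
df.loc[rng.choice(500, 60, replace=False), "B"] = np.nan  # some B values missing

# Conditional probability table P(B | A), the child-given-parent piece of a
# two-node network A -> B, estimated from complete records only.
cpt = (df.dropna().groupby("A")["B"]
         .value_counts(normalize=True).unstack(fill_value=0.0))

# Impute each missing B by drawing from its parent-conditional distribution,
# so the A-B relationship (statistical consistency) is preserved.
for i in df.index[df["B"].isna()]:
    p = cpt.loc[df.at[i, "A"]]
    df.at[i, "B"] = rng.choice(p.index.to_numpy(), p=p.to_numpy())
```

Logical consistency would additionally require rejecting draws that violate edit constraints between variables.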
5.
In this article we provide a rigorous treatment of one of the central statistical issues of credit risk management. Given K-1 rating categories, the rating of a corporate bond over a certain horizon may either stay the same or change to one of the remaining K-2 categories; in addition, it is usually the case that the rating of some bonds is withdrawn during the time interval considered in the analysis. When estimating transition probabilities, we thus have to consider a K-th category, called withdrawal, which contains (partially) missing data. We show how maximum likelihood estimation can be performed in this setup; whereas in discrete time our solution gives rigorous support to a solution often used in applications, in continuous time the maximum likelihood estimator of the transition matrix computed by means of the EM algorithm represents a significant improvement over existing methods.
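A hedged sketch of the discrete-time case with toy counts: the commonly used estimator that the article supports amounts to dropping the withdrawal column and renormalizing each row, i.e. conditioning on non-withdrawal. The continuous-time EM estimator is not reproduced here.

```python
import numpy as np

# Toy one-period migration counts for K-1 = 3 rating grades plus a
# withdrawal column W (the partially missing K-th category).
# Rows: start grade A, B, C; columns: end grade A, B, C, W.
counts = np.array([[80., 12.,  3., 5.],
                   [10., 70., 12., 8.],
                   [ 2., 15., 75., 8.]])

# Discrete-time MLE treating withdrawal as non-informative: drop the W
# column and renormalize rows, i.e. condition on not withdrawing.
migrations = counts[:, :-1]
P_hat = migrations / migrations.sum(axis=1, keepdims=True)
print(np.round(P_hat, 3))
```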
6.
Whether a KdV equation with correction terms necessarily possesses a soliton tail is one of the central questions recently debated in soliton theory. We give exact solutions for several classes of KdV equations with correction terms and prove that a KdV equation with correction terms does not necessarily possess a soliton tail.
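For notation only (the abstract does not specify the correction terms, so the generic form below is an assumption), a corrected KdV equation and the tail-free one-soliton solution of the unmodified (epsilon = 0) equation can be written as:

```latex
% KdV with a generic correction term \epsilon P[u]; when \epsilon = 0 the
% classical one-soliton solution below decays to zero with no trailing tail.
u_t + 6\,u\,u_x + u_{xxx} + \epsilon\,P[u] = 0, \qquad
u(x,t) = 2\kappa^{2}\operatorname{sech}^{2}\!\bigl(\kappa\,(x - 4\kappa^{2}t)\bigr).
```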
7.
This paper develops a likelihood-based method for fitting additive models in the presence of measurement error. It formulates the additive model using the linear mixed model representation of penalized splines. In the presence of a structural measurement error model, the resulting likelihood involves intractable integrals, and a Monte Carlo expectation-maximization strategy is developed for obtaining estimates. The method's performance is illustrated with a simulation study.
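A hedged sketch of the Monte Carlo EM idea on a deliberately simplified model: a linear regression with a normal structural measurement error of known variance, rather than the penalized-spline additive model of the paper. The E-step integrates over the unobserved true covariate by importance sampling; all symbols are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, S = 400, 200
x = rng.normal(0.0, 1.0, n)                      # unobserved true covariate
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)      # response
s_u = 0.7                                        # known measurement error sd
w = x + rng.normal(0.0, s_u, n)                  # observed error-prone surrogate

b0, b1, sig_e = 0.0, 1.0, 1.0                    # starting values
mu_x, sig_x = w.mean(), w.std()
for _ in range(50):
    # Monte Carlo E-step: propose x-draws from p(x | w), weight by p(y | x).
    v = 1.0 / (1.0 / sig_x**2 + 1.0 / s_u**2)
    m = v * (mu_x / sig_x**2 + w / s_u**2)
    xs = m[:, None] + np.sqrt(v) * rng.normal(size=(n, S))
    lw = -0.5 * (y[:, None] - b0 - b1 * xs) ** 2 / sig_e**2
    wt = np.exp(lw - lw.max(axis=1, keepdims=True))
    wt /= wt.sum(axis=1, keepdims=True)
    # M-step: weighted least squares over all imputed draws.
    sw = wt.sum()
    swx, swy = (wt * xs).sum(), (wt * y[:, None]).sum()
    swxx, swxy = (wt * xs**2).sum(), (wt * xs * y[:, None]).sum()
    b1 = (swxy - swx * swy / sw) / (swxx - swx**2 / sw)
    b0 = (swy - b1 * swx) / sw
    sig_e = np.sqrt((wt * (y[:, None] - b0 - b1 * xs) ** 2).sum() / sw)
    mu_x, sig_x = swx / sw, np.sqrt((wt * (xs - swx / sw) ** 2).sum() / sw)
print(b0, b1)   # should move toward the generating values 1.0 and 2.0
```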
8.
Principal curves revisited
A principal curve (Hastie and Stuetzle, 1989) is a smooth curve passing through the middle of a distribution or data cloud, and is a generalization of linear principal components. We give an alternative definition of a principal curve, based on a mixture model. Estimation is carried out through an EM algorithm. Some comparisons are made to the Hastie-Stuetzle definition.
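For reference, here is a toy implementation of the original Hastie-Stuetzle projection/smoothing iteration (with a moving-average smoother and nearest-vertex projection, both simplifications); the mixture-model EM formulation proposed in this paper is not shown.

```python
import numpy as np

def principal_curve(X, n_iter=15, k=25):
    # Toy Hastie-Stuetzle iteration: smooth each coordinate against the
    # current arc-length ordering, then re-project points onto the curve
    # (nearest-vertex approximation).
    n, d = X.shape
    u = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][0]
    lam = (X - X.mean(0)) @ u                  # init: first principal component
    kern = np.ones(k)
    for _ in range(n_iter):
        order = np.argsort(lam)
        f = np.stack([np.convolve(X[order, j], kern, "same")
                      / np.convolve(np.ones(n), kern, "same")
                      for j in range(d)], axis=1)
        s = np.concatenate([[0.0], np.cumsum(
            np.linalg.norm(np.diff(f, axis=0), axis=1))])
        d2 = ((X[:, None, :] - f[None, :, :]) ** 2).sum(-1)
        lam = s[d2.argmin(axis=1)]             # arc length of nearest vertex
    return f, lam

# Example: noisy arc.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, np.pi, 300))
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(300, 2))
curve, proj = principal_curve(X)
```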
9.
The World Health Organization (WHO) diagnostic criteria for diabetes mellitus were determined in part by evidence that in some populations the plasma glucose level 2 h after an oral glucose load is a mixture of two distinct distributions. We present a finite mixture model that allows the two component densities to be generalized linear models and the mixture probability to be a logistic regression model. The model allows us to estimate the prevalence of diabetes and the sensitivity and specificity of the diagnostic criteria as a function of covariates, and to estimate them in the absence of an external standard. Sensitivity is the probability that a test indicates disease conditionally on disease being present; specificity is the probability that a test indicates no disease conditionally on no disease being present. We obtained maximum likelihood estimates via the EM algorithm and derived standard errors from the information matrix and by the bootstrap. In the application to data from the diabetes in Egypt project, a two-component mixture model fits well and the two components are interpreted as normal and diabetes. The means and variances are similar to results found in other populations. The minimum misclassification cutpoints decrease with age, are lower in urban areas and are higher in rural areas than the 200 mg/dl cutpoint recommended by the WHO. These differences are modest and our results generally support the WHO criterion. Our methods allow the direct inclusion of concomitant data, whereas past analyses were based on partitioning the data.
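A minimal, hedged sketch of the core computation: EM for a two-component normal mixture with a constant mixing probability, plus the sensitivity and specificity of a cutpoint under the fitted components. The covariate-dependent GLM components and logistic mixing model of the paper are omitted, and the glucose-like data are synthetic.

```python
import numpy as np
from scipy.stats import norm

def em_two_normals(x, n_iter=200):
    # EM for a two-component normal mixture; component 0 = "normal",
    # component 1 = "diabetes" (higher mean), constant mixing probability p.
    mu = np.percentile(x, [25.0, 90.0])
    sd = np.array([x.std(), x.std()])
    p = 0.5
    for _ in range(n_iter):
        d0, d1 = norm.pdf(x, mu[0], sd[0]), norm.pdf(x, mu[1], sd[1])
        r = p * d1 / ((1.0 - p) * d0 + p * d1)   # E-step: P(diabetes | x)
        p = r.mean()                             # M-step updates
        mu = np.array([((1 - r) * x).sum() / (1 - r).sum(),
                       (r * x).sum() / r.sum()])
        sd = np.sqrt([((1 - r) * (x - mu[0])**2).sum() / (1 - r).sum(),
                      (r * (x - mu[1])**2).sum() / r.sum()])
    return p, mu, sd

def sens_spec(cut, mu, sd):
    # Operating characteristics of "test positive iff glucose >= cut".
    return 1.0 - norm.cdf(cut, mu[1], sd[1]), norm.cdf(cut, mu[0], sd[0])

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(95, 12, 700), rng.normal(215, 40, 300)])
p_hat, mu_hat, sd_hat = em_two_normals(x)
print(sens_spec(200.0, mu_hat, sd_hat))   # evaluate a WHO-style cutpoint
```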
10.
In this article, we propose an efficient and robust estimator for the semiparametric mixture model whose components are unknown location-shifted symmetric distributions. The estimator is derived by minimizing the profile Hellinger distance (MPHD) between the model and a nonparametric density estimate, and we propose a simple and efficient algorithm to compute it. A Monte Carlo simulation study examines the finite-sample performance of the proposed procedure and compares it with existing methods. In our empirical studies, the new procedure is very competitive with existing methods when the components are normal and performs much better when they are not. More importantly, the proposed procedure is robust when the data are contaminated with outlying observations. A real data application illustrates the proposed estimation procedure.
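As a hedged, simplified illustration of the minimum-Hellinger-distance principle (a one-parameter location model against a kernel density estimate, not the profile construction over the semiparametric mixture):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 270), rng.normal(8, 1, 30)])  # 10% outliers

grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 2000)
dx = grid[1] - grid[0]
f_hat = gaussian_kde(x)(grid)            # nonparametric density estimate

def hellinger2(mu):
    # Squared Hellinger distance between the N(mu, 1) model and the KDE.
    f_mod = norm.pdf(grid, mu, 1.0)
    return ((np.sqrt(f_mod) - np.sqrt(f_hat)) ** 2).sum() * dx

mhd = minimize_scalar(hellinger2, bounds=(x.min(), x.max()), method="bounded").x
print(mhd, x.mean())   # the MHD location resists the outliers; the mean does not
```

Matching square-root densities rather than likelihoods is what confers the robustness to contamination noted in the abstract.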