591.
Demographic and Health Surveys collect child survival times that are clustered at the family and community levels. It is assumed that each cluster has a specific, unobservable, random frailty that induces an association in the survival times within the cluster. The Cox proportional hazards model, with family and community random frailties acting multiplicatively on the hazard rate, is presented. The estimation of the fixed effect and the association parameters of the modified model is then examined using the Gibbs sampler and the expectation–maximization (EM) algorithm. The methods are compared using child survival data collected in the 1992 Demographic and Health Survey of Malawi. The two methods lead to very similar estimates of fixed effect parameters. However, the estimates of random effect variances from the EM algorithm are smaller than those of the Gibbs sampler. Both estimation methods reveal considerable family variation in the survival of children, and very little variability over the communities.
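A compact way to write the model described above (a sketch in our notation; the specific frailty distributions, e.g. gamma or log-normal, are not stated in the abstract) is

h_ijk(t | u_i, v_ij) = h_0(t) u_i v_ij exp(x_ijk'β),

where i indexes communities, j families within communities and k children within families; h_0 is the baseline hazard, u_i and v_ij are positive random frailties (usually normalized to unit mean), and β is the vector of fixed effects whose estimates the Gibbs sampler and the EM algorithm are compared on.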
592.
An algorithmic method is described for the construction of optimal incomplete block designs when a known correlation structure is assumed for observations from plots in the same block. The method is applicable to a wide class of designs and correlation structures. Some examples are given to illustrate the procedure.
593.
Let (ψi, φi) be independent, identically distributed pairs of zero-one random variables with (possible) dependence of ψi and φi within a pair. For n pairs both variables are observed, but for m1 additional pairs only ψi is observed and for m2 others only φi is observed. If π1· = P{ψi = 1} and π·1 = P{φi = 1}, the problem is to test π1· = π·1. Maximum likelihood estimates of π1· and π·1 are obtained via the EM algorithm. A test statistic is developed whose null distribution is asymptotically chi-square with one degree of freedom (as n and either m1 or m2 tend to infinity). If m1 = m2 = 0 the statistic reduces to that of McNemar's test; if n = 0, it is equivalent to the statistic for testing equality of two independent proportions. This test is compared with other tests by means of Pitman efficiency. Examples are presented.
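One way to set up the observed-data log-likelihood that the EM algorithm maximizes (a sketch; the cell-count notation n_ab, a_1, a_2 is ours, not the paper's): with π_ab = P{ψi = a, φi = b}, n_ab the number of complete pairs with (ψi, φi) = (a, b), a_1 the number of successes among the m_1 pairs with only ψi observed, and a_2 the number of successes among the m_2 pairs with only φi observed,

ℓ(π) = Σ_{a,b} n_ab log π_ab + a_1 log π1· + (m_1 − a_1) log(1 − π1·) + a_2 log π·1 + (m_2 − a_2) log(1 − π·1),

where π1· = π11 + π10 and π·1 = π11 + π01; the null hypothesis is H0: π1· = π·1.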
594.
A simple competing-risk distribution is proposed as a possible alternative to the Weibull distribution in lifetime analysis. This distribution is that of the minimum of an exponential and a Weibull random variable. Our motivation is to account for both accidental and aging failures in lifetime data analysis. First, the main characteristics of this distribution are presented. Then the estimation of its parameters is considered through maximum likelihood and Bayesian inference. In particular, the existence of a unique consistent root of the likelihood equations is proved. Decision tests for choosing between the exponential, the Weibull and this competing-risk distribution are presented, and the alternative model is compared with the Weibull model in numerical experiments on both real and simulated data sets, especially in an industrial context.
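As a concrete illustration, the following sketch (not the authors' code; all function and variable names are ours) fits the exponential-Weibull competing-risk model by numerical maximum likelihood for uncensored lifetimes, using the survival function S(t) = exp(−λt − (t/η)^β) implied by T = min(E, W):

import numpy as np
from scipy.optimize import minimize

def neg_log_lik(log_params, t):
    # T = min(E, W), E ~ Exponential(rate lam), W ~ Weibull(shape beta, scale eta)
    # hazard   h(t) = lam + (beta/eta) * (t/eta)**(beta - 1)
    # survival S(t) = exp(-lam*t - (t/eta)**beta), density f(t) = h(t) * S(t)
    lam, beta, eta = np.exp(log_params)                  # keep parameters positive
    hazard = lam + (beta / eta) * (t / eta) ** (beta - 1.0)
    log_survival = -lam * t - (t / eta) ** beta
    return -np.sum(np.log(hazard) + log_survival)

def fit_exp_weibull(t):
    start = np.log([1.0 / t.mean(), 1.5, t.mean()])      # crude starting values
    res = minimize(neg_log_lik, start, args=(t,), method="Nelder-Mead")
    return np.exp(res.x)                                 # (lam_hat, beta_hat, eta_hat)

rng = np.random.default_rng(0)
accidental = rng.exponential(scale=10.0, size=2000)      # exponential (accidental) component
aging = 5.0 * rng.weibull(a=3.0, size=2000)              # Weibull (aging) component
print(fit_exp_weibull(np.minimum(accidental, aging)))    # roughly (0.1, 3.0, 5.0)

Censored observations and the Bayesian analysis discussed in the abstract would require extending this likelihood accordingly.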
595.
This paper deals with a Bayesian analysis of a finite Beta mixture model. We present an approximation method for evaluating the posterior distribution and Bayes estimators by Gibbs sampling, relying on the missing-data structure of the mixture model. Experimental results concern contextual and non-contextual evaluations. The non-contextual evaluation is based on synthetic histograms, while the contextual one models the class-conditional densities of pattern-recognition data sets. The Beta mixture is also applied to estimate the parameters of SAR image histograms.
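For reference, the finite Beta mixture underlying this analysis (in our notation; K components with mixing weights p_k) has density, for 0 < x < 1,

f(x | θ) = Σ_{k=1..K} p_k · Γ(α_k + β_k) / (Γ(α_k) Γ(β_k)) · x^(α_k − 1) (1 − x)^(β_k − 1),

and the Gibbs sampler augments the observations with their latent component labels, which play the role of the missing data mentioned above.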
596.
Double censoring often occurs in registry studies when left censoring is present in addition to right censoring. In this work, we examine estimation of Aalen's nonparametric regression coefficients based on doubly censored data. We propose two estimation techniques. The first type of estimator, including the ordinary least squares (OLS) estimator and weighted least squares (WLS) estimators, is obtained using martingale arguments. The second type of estimator, the maximum likelihood estimator (MLE), is obtained via an expectation-maximization (EM) algorithm that treats the survival times of left-censored observations as missing. Asymptotic properties, including uniform consistency and weak convergence, are established for the MLE. Simulation results demonstrate that the MLE is more efficient than the OLS and WLS estimators.
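For context, Aalen's nonparametric regression model referred to above specifies the conditional hazard as an additive, time-varying linear function of the covariates,

λ(t | X_i) = β_0(t) + β_1(t) X_i1 + … + β_p(t) X_ip,

and it is typically the cumulative coefficients B_j(t) = ∫_0^t β_j(s) ds that are estimated; the OLS, WLS and EM-based estimators above are estimators of these quantities under double censoring.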
597.
A new procedure is introduced for conducting screening experiments to find a small number of influential factors from among a large number of factors with negligible effects. It is intended for experiments in which the factors are easily controlled, as in simulation models. It adds observations sequentially after conducting a small initial experiment. The performance of the procedure is investigated using simulation, and evidence is presented that this and other procedures scale as the logarithm of the total number of factors if the number of influential factors is fixed. An investigation of the new procedure for 1–3 active factors shows that it compares favorably with competing methods, particularly when the size of the nonzero effects is 1–2 times the standard deviation. A limited look at the procedure for up to 6 active factors is also presented.
598.
Existing research on mixtures of regression models is limited to directly observed predictors. Estimating mixtures of regressions from measurement error data poses challenges for statisticians. For linear regression models with measurement error, the naive ordinary least squares method, which directly substitutes the observed surrogates for the unobserved error-prone variables, yields an inconsistent estimate of the regression coefficients. The same inconsistency affects the naive mixtures-of-regressions estimate, which is based on the traditional maximum likelihood estimator and simply ignores the measurement error. To overcome this inconsistency, we propose to use the deconvolution method to estimate the mixture likelihood of the observed surrogates. Our proposed estimate is then found by maximizing the estimated mixture likelihood. In addition, a generalized EM algorithm is developed to compute the estimate. Simulation results demonstrate that the proposed estimation procedures work well and perform much better than the naive estimates.
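To see why the naive substitution is inconsistent, consider (as an illustration, not the paper's model) a single-component linear regression Y = β_0 + β_1 X + ε in which only the surrogate W = X + U is observed, with U independent additive measurement error. The naive OLS slope then converges to λβ_1 rather than β_1, where

λ = σ_X² / (σ_X² + σ_U²) < 1

is the reliability ratio; an analogous inconsistency affects the naive mixtures-of-regressions estimate described above, which is what the deconvolution-based estimator is designed to avoid.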
599.
Multivariate failure time data are commonly encountered in biomedical research, either because each study subject may experience multiple events or because subjects are clustered so that failure times within the same cluster are correlated. In this article, we use the frailty approach to capture the association among the related survival times and assume that each event time is observed only as a discrete analog, namely as an interval between periodic clinical examinations. For estimation, an expectation-maximization (EM) algorithm is developed and applied to the diabetic retinopathy study (DRS).
600.
This article deals with semisupervised learning based on the naive Bayes assumption. A univariate Gaussian mixture density is used for continuous input variables, whereas a histogram-type density is adopted for discrete input variables. The EM algorithm is used to compute maximum likelihood estimates of the model parameters when the number of mixture components for each continuous input variable is fixed. Model selection based on an information criterion is carried out to choose a parsimonious model among the various fitted models. A common density method is proposed for selecting significant input variables. Simulated and real datasets are used to illustrate the performance of the proposed method.
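To make the density model concrete, the sketch below (our illustration, not the authors' implementation; it covers only the supervised, continuous-variable part and omits the histogram densities for discrete inputs and the semisupervised handling of unlabelled cases) builds a naive Bayes classifier whose class-conditional density for each continuous variable is a univariate Gaussian mixture:

import numpy as np
from sklearn.mixture import GaussianMixture

class NaiveBayesGMM:
    def __init__(self, n_components=2):
        self.n_components = n_components

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        # one univariate mixture per (class, feature): the naive Bayes assumption
        self.mixtures_ = {
            c: [GaussianMixture(self.n_components, random_state=0)
                .fit(X[y == c, j][:, None]) for j in range(X.shape[1])]
            for c in self.classes_
        }
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            # log prior + sum of per-feature log mixture densities
            log_post = np.log(self.priors_[c]) + sum(
                gm.score_samples(X[:, j][:, None])
                for j, gm in enumerate(self.mixtures_[c]))
            scores.append(log_post)
        return self.classes_[np.argmax(np.vstack(scores), axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 1, size=(200, 2)),
               rng.normal([3, 3], 1, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)
clf = NaiveBayesGMM().fit(X, y)
print((clf.predict(X) == y).mean())

The abstract's model-selection step would then compare such fits across different numbers of mixture components via an information criterion.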