Search results: 2,360 records in total (subscription full text: 2,306; open access: 45; domestic open access: 9). Search time: 0 ms.
By subject: Statistics 1,646; Comprehensive 514; Management 104; Collected works 57; Theory and methodology 17; Demography 11; Sociology 9; Labour science 1; Ethnology 1.
By year: 2024 (1); 2023 (10); 2022 (20); 2021 (23); 2020 (41); 2019 (76); 2018 (81); 2017 (110); 2016 (50); 2015 (55); 2014 (71); 2013 (594); 2012 (209); 2011 (60); 2010 (71); 2009 (68); 2008 (74); 2007 (77); 2006 (70); 2005 (77); 2004 (87); 2003 (70); 2002 (62); 2001 (56); 2000 (56); 1999 (33); 1998 (36); 1997 (17); 1996 (16); 1995 (12); 1994 (10); 1993 (15); 1992 (12); 1991 (4); 1990 (12); 1989 (4); 1988 (2); 1987 (2); 1986 (1); 1985 (1); 1984 (3); 1982 (1); 1980 (3); 1979 (1); 1978 (1); 1977 (1); 1976 (1); 1975 (3).
1.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous work on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this reference value of the parameter is held fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, the reference value is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
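To make the importance-sampling construction concrete, here is a minimal sketch on a toy latent variable model (a Gaussian random-effect model chosen purely for illustration, not taken from the paper). The latent variables are drawn from their conditional law given the data at a fixed reference value psi, and the log-likelihood ratio is approximated by Monte Carlo averages of complete-data density ratios:

```python
# Toy MCML sketch: Z_i ~ N(theta, 1) latent, Y_i | Z_i ~ N(Z_i, 1) observed,
# so the exact marginal is Y_i ~ N(theta, 2) and the exact MLE is the sample
# mean -- a convenient check. "psi" is the arbitrary reference parameter at
# which the latent variables are simulated.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(0)
theta_true, n, m = 1.5, 200, 2000                 # truth, observations, MC draws
y = rng.normal(theta_true, np.sqrt(2.0), n)       # observed data

psi = 0.0                                         # fixed reference parameter
# Conditional law of Z_i given Y_i = y_i at psi:  N((y_i + psi)/2, 1/2)
z = rng.normal((y[:, None] + psi) / 2.0, np.sqrt(0.5), size=(n, m))

def mc_neg_loglik_ratio(theta):
    """Monte Carlo estimate of -(loglik(theta) - loglik(psi))."""
    # complete-data density ratio p_theta(y, z) / p_psi(y, z); the y-terms cancel
    log_ratio = norm.logpdf(z, theta, 1.0) - norm.logpdf(z, psi, 1.0)
    per_obs = logsumexp(log_ratio, axis=1) - np.log(m)   # log of the MC average
    return -per_obs.sum()

mcml = minimize_scalar(mc_neg_loglik_ratio, bounds=(-5.0, 5.0), method="bounded")
print(f"MCML estimate: {mcml.x:.3f}   exact MLE: {y.mean():.3f}")
```

The quality of the approximation deteriorates as the fixed psi moves away from the true parameter, which is the practical motivation for the paper's comparison between a fixed reference value and one taken from a consistent preliminary estimate.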
2.
3.
Lin, Tsung I., Lee, Jack C. & Ni, Huey F. Statistics and Computing, 2004, 14(2): 119-130
A finite mixture model using the multivariate t distribution has been shown to be a robust extension of normal mixtures. In this paper, we present a Bayesian approach to inference about the parameters of t-mixture models. The prior distributions are specified to be weakly informative so as to avoid nonintegrable posterior distributions. We present two efficient EM-type algorithms for computing the joint posterior mode with the observed data and an incomplete future vector as the sample. Markov chain Monte Carlo sampling schemes are also developed to obtain the target posterior distribution of the parameters. The advantages of the Bayesian approach over the maximum likelihood method are demonstrated on a set of real data.
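As a point of reference for how t-mixtures are typically fitted, the sketch below implements a plain maximum-likelihood EM iteration for a two-component multivariate t mixture with the degrees of freedom held fixed. It is a simplified classical algorithm with hypothetical settings (component count, nu, initialisation), not the Bayesian posterior-mode or MCMC schemes developed in the paper:

```python
# Simplified ML-EM sketch for a two-component multivariate t mixture with the
# degrees of freedom nu held fixed; illustrative only.
import numpy as np
from scipy.stats import multivariate_t

def t_mixture_em(x, nu=4.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = x.shape
    pi = np.array([0.5, 0.5])                        # mixing weights
    mu = x[rng.choice(n, 2, replace=False)].astype(float)   # crude start
    sigma = np.array([np.cov(x.T)] * 2)
    for _ in range(n_iter):
        # E-step: responsibilities tau and latent scale weights u
        dens = np.column_stack([
            multivariate_t(mu[j], sigma[j], df=nu).pdf(x) for j in range(2)])
        tau = dens * pi
        tau /= tau.sum(axis=1, keepdims=True)
        u = np.empty_like(tau)
        for j in range(2):
            diff = x - mu[j]
            maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma[j]), diff)
            u[:, j] = (nu + p) / (nu + maha)
        # M-step: weighted updates of weights, means and scatter matrices
        for j in range(2):
            w = tau[:, j] * u[:, j]
            pi[j] = tau[:, j].mean()
            mu[j] = (w[:, None] * x).sum(axis=0) / w.sum()
            diff = x - mu[j]
            sigma[j] = (w[:, None, None] * np.einsum("ij,ik->ijk", diff, diff)
                        ).sum(axis=0) / tau[:, j].sum()
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0, 1, size=(150, 2)), rng.normal(4, 1, size=(150, 2))])
pi, mu, sigma = t_mixture_em(x)
print(np.round(pi, 2), np.round(mu, 2))
```

Holding nu fixed sidesteps the one-dimensional search its ML update would otherwise require; the paper's Bayesian treatment instead places priors on all parameters, including the degrees of freedom.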
4.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood-based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood-based MAR approach – mixed-model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SE(Δ) from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SE(Δ) from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SE(Δ) from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR (missing not at random) missingness. No universally best approach to the analysis of longitudinal data exists. However, likelihood-based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing the robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
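For readers unfamiliar with the imputation step itself, a minimal LOCF sketch in pandas is shown below; the column names and toy values are hypothetical and serve only to show how each subject's last available post-baseline value is carried forward to the endpoint visit:

```python
# Minimal LOCF sketch with pandas; column names ("subject", "visit", "score")
# are hypothetical. Each subject's last observed value is carried forward to
# later visits, so the endpoint analysis uses the last available measurement.
import pandas as pd

long = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [0, 1, 2, 0, 1, 2],
    "score":   [10.0, 12.0, None, 9.0, None, None],  # dropout -> missing visits
})
long = long.sort_values(["subject", "visit"])
long["score_locf"] = long.groupby("subject")["score"].ffill()
endpoint = long[long["visit"] == long["visit"].max()]  # LOCF endpoint values
print(endpoint[["subject", "score_locf"]])
```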
5.
Maximum likelihood estimation and goodness-of-fit techniques are used within a competing risks framework to obtain maximum likelihood estimates of the hazard, density, and survivor functions for randomly right-censored variables. Goodness-of-fit techniques are used to fit distributions to the crude lifetimes; these fits yield an estimate of the hazard function, which in turn is used to construct the survivor and density functions of the net lifetime of the variable of interest. If only one of the crude lifetimes can be adequately characterized by a parametric model, then semi-parametric estimates may be obtained using a maximum likelihood estimate of one crude lifetime and the empirical distribution function of the other. Simulation studies show that the survivor function estimates based on crude lifetimes compare favourably with those given by the product-limit estimator when the crude lifetime distributions are chosen correctly. Other advantages are discussed.
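A minimal numerical sketch of the general idea (independent exponential latent lifetimes, chosen purely for tractability and not drawn from the paper) fits the crude data by maximum likelihood, forms the net survivor function of the lifetime of interest, and compares it with a hand-rolled product-limit estimate:

```python
# Toy competing-risks sketch: two independent exponential latent lifetimes, of
# which only the minimum and its cause are observed. The net survivor function
# of the lifetime of interest is recovered from a parametric ML fit to the
# crude data and compared with the product-limit (Kaplan-Meier) estimate.
import numpy as np

rng = np.random.default_rng(1)
n, lam_x, lam_c = 500, 0.5, 0.3          # true rates: interest (x), competing (c)
x = rng.exponential(1 / lam_x, n)        # latent lifetime of interest
c = rng.exponential(1 / lam_c, n)        # latent competing lifetime
t = np.minimum(x, c)                     # observed time
d = (x <= c).astype(int)                 # 1 if the event of interest occurred

# Exponential ML fit to the crude data: rate = events of interest / total exposure
lam_hat = d.sum() / t.sum()
grid = np.linspace(0, np.quantile(t, 0.95), 6)
s_param = np.exp(-lam_hat * grid)        # parametric net survivor function

def product_limit(times, events, grid):
    """Hand-rolled Kaplan-Meier estimate evaluated on a grid (no ties assumed)."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = np.arange(len(times), 0, -1)
    factors = np.where(events == 1, 1 - 1 / at_risk, 1.0)
    surv = np.cumprod(factors)
    return np.array([surv[times <= g][-1] if (times <= g).any() else 1.0
                     for g in grid])

print("t           :", np.round(grid, 2))
print("parametric  :", np.round(s_param, 3))
print("product-limit:", np.round(product_limit(t, d, grid), 3))
```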
6.
Missing data, and the bias they can cause, are an almost ever-present concern in clinical trials. The last observation carried forward (LOCF) approach has frequently been used to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood-based, mixed-effects model approaches implemented under the missing at random (MAR) framework are now easy to implement and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain and that in many situations it is not conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates the fundamental basis of statistics, as it involves testing an outcome that is not a physical parameter of the population but rather a quantity that can be influenced by investigator behaviour, trial design, etc. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood-based, mixed-effects model approaches developed under the MAR framework, with missing not at random (MNAR) methods used to assess robustness of the primary analysis. Copyright © 2004 John Wiley & Sons, Ltd.
7.
The robustness of Mauchly's sphericity test criterion when sampling from a mixture of two multivariate normal distributions is studied. The distribution of the sphericity test criterion when the sample covariance matrix has a non-central Wishart density of rank one is derived in terms of Meijer's G-functions; its distribution under the mixture model is then deduced. Robustness is assessed by computing the actual significance levels of the test under the mixture model when the critical values derived under the usual normal model are used.
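For concreteness, Mauchly's criterion W can be computed from a sample covariance matrix in a few lines. The sketch below also reports a p-value from the usual large-sample chi-square approximation under normality; the Box-type correction factor used here is the standard textbook one and is an assumption of this sketch, not something taken from the paper:

```python
# Mauchly's sphericity criterion W = det(S) / (tr(S)/p)**p from a sample
# covariance matrix, with a normal-theory chi-square approximation for the
# p-value (standard large-sample correction assumed; illustrative only).
import numpy as np
from scipy.stats import chi2

def mauchly_w(x):
    """Return Mauchly's W and an approximate normal-theory p-value.

    x : (n, p) array of observations assumed i.i.d. from a p-variate law.
    """
    n, p = x.shape
    s = np.cov(x, rowvar=False)                      # sample covariance
    w = np.linalg.det(s) / (np.trace(s) / p) ** p
    dof_s = n - 1                                    # Wishart degrees of freedom
    rho = 1 - (2 * p**2 + p + 2) / (6 * p * dof_s)   # Box-type correction
    stat = -dof_s * rho * np.log(w)
    df = p * (p + 1) // 2 - 1
    return w, chi2.sf(stat, df)

rng = np.random.default_rng(2)
x = rng.multivariate_normal(np.zeros(3), np.eye(3), size=200)  # spherical case
print(mauchly_w(x))   # W close to 1 and a large p-value expected
```

The paper's question is precisely how reliable such normal-theory critical values remain when the data actually come from a two-component normal mixture.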
8.
This paper considers a class of degenerate parabolic systems in diagonal form, in which the eigenvalues of the coefficient matrix of the principal part are allowed to be proportional to a suitable positive power of the modulus of the unknown solution and also to a positive power of the modulus of its gradient. For right-hand sides with a certain special structure, an a priori estimate of the maximum modulus of the solution is obtained.
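To fix ideas, one concrete special case consistent with this description (an illustrative form only, not the precise class treated in the paper) is a diagonal system whose principal coefficients degenerate like powers of the solution and of its gradient:

```latex
% Illustrative special case only: a diagonal degenerate parabolic system whose
% principal-part eigenvalues are comparable to |u|^sigma |grad u|^gamma.
\[
  \partial_t u^k \;=\; \operatorname{div}\!\bigl(a_k(x,t,u,\nabla u)\,\nabla u^k\bigr)
  \;+\; f^k(x,t,u,\nabla u), \qquad k = 1,\dots,N,
\]
\[
  c_1\,|u|^{\sigma}\,|\nabla u|^{\gamma} \;\le\; a_k(x,t,u,\nabla u)
  \;\le\; c_2\,|u|^{\sigma}\,|\nabla u|^{\gamma},
  \qquad \sigma,\gamma > 0,\quad 0 < c_1 \le c_2 .
\]
```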
9.
Generalized additive models for location, scale and shape (total citations: 10; self-citations: 0; citations by others: 10)
Summary. A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton–Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
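As a bare-bones illustration of the location-scale idea (a normal response, two linear predictors, no smoothers or penalties; nothing like the full GAMLSS machinery), the following sketch fits both the mean and the log standard deviation as functions of a covariate by maximum likelihood:

```python
# Stripped-down illustration of modelling location and scale jointly: both the
# mean and the (log) standard deviation of a normal response are linear in a
# covariate and fitted by maximum likelihood. The full GAMLSS framework adds
# other distributional parameters, additive smoothers and penalized likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 300)
y = 1.0 + 2.0 * x + rng.normal(0, np.exp(-1.0 + 1.5 * x))   # heteroscedastic data

def neg_loglik(par):
    b0, b1, g0, g1 = par
    mu = b0 + b1 * x                 # location model
    sigma = np.exp(g0 + g1 * x)      # scale model (log link keeps sigma > 0)
    return -norm.logpdf(y, mu, sigma).sum()

fit = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print("location coefficients :", np.round(fit.x[:2], 2))
print("log-scale coefficients:", np.round(fit.x[2:], 2))
```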
10.
A non-practice-based criterion of truth is the common feature of pre-Marxian theories of the criterion of truth. It took four historical forms: reason, sensation, experience and experiment. The criterion of reason affirms the objectivity of truth on the surface while denying it in substance; the criterion of sensation bears the marks of subjective idealism; the criterion of experience acknowledges the objectivity of truth but remains confined to the realm of cognition; and the criterion of experiment heralded the imminent birth of a scientific criterion of truth. The establishment of practice as the criterion of truth was hard won and amounted to a revolution in epistemology.