1.
While much used in practice, latent variable models raise challenging estimation problems because their likelihood is intractable. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous work on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this simulation parameter is held fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
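As an illustration of the importance sampling identity behind MCML, the sketch below applies it to a toy Gaussian latent variable model in which the exact maximum likelihood estimator is known; the model, parameter names and simulation sizes are illustrative assumptions, not taken from the paper.

```python
# Minimal MCML sketch in the spirit of Geyer & Thompson (1992) on a toy model
# where the answer is known: y_i | z_i ~ N(z_i, 1), z_i ~ N(theta, 1), so
# marginally y_i ~ N(theta, 2) and the exact MLE is mean(y).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
theta_true, n, m = 1.0, 200, 5000
y = rng.normal(theta_true, np.sqrt(2.0), size=n)

psi = 0.0                                    # arbitrary importance parameter
# Latent variables drawn from their conditional law under psi:
# z_i | y_i ~ N((y_i + psi)/2, 1/2) in this toy model.
z = rng.normal((y[:, None] + psi) / 2.0, np.sqrt(0.5), size=(n, m))

def neg_mc_log_lik_ratio(theta):
    # Importance ratio of complete-data densities p_theta / p_psi; only the
    # latent prior depends on theta, the y|z factor cancels.
    log_w = norm.logpdf(z, theta, 1.0) - norm.logpdf(z, psi, 1.0)
    # Monte Carlo estimate of log L(theta) - log L(psi), summed over observations.
    return -np.sum(np.logaddexp.reduce(log_w, axis=1) - np.log(m))

fit = minimize_scalar(neg_mc_log_lik_ratio, bounds=(-3, 3), method="bounded")
print("MCML estimate:", fit.x, " exact MLE:", y.mean())
```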
2.
Summary. Generalized linear latent variable models (GLLVMs), as defined by Bartholomew and Knott, enable modelling of relationships between manifest and latent variables. They extend structural equation modelling techniques, which are powerful tools in the social sciences. However, because of the complexity of the log-likelihood function of a GLLVM, an approximation such as numerical integration must be used for inference. This can drastically limit the number of variables in the model and can lead to biased estimators. We propose a new estimator for the parameters of a GLLVM, based on a Laplace approximation to the likelihood function, which can be computed even for models with a large number of variables. The new estimator can be viewed as an M-estimator, leading to readily available asymptotic properties and correct inference. A simulation study shows its excellent finite sample properties, in particular when compared with a well-established approach such as LISREL. A real data example on the measurement of wealth for the computation of multidimensional inequality is analysed to highlight the importance of the methodology.
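A rough sketch of the kind of Laplace approximation involved, written for a one-factor binary GLLVM with logistic links; the single-factor setup and all names are illustrative assumptions rather than the authors' exact formulation.

```python
# Laplace approximation to the marginal log-likelihood of one subject in a
# one-factor binary GLLVM: log p(y_i) = log ∫ Π_j p(y_ij | z) φ(z) dz.
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_loglik(y_i, a, b):
    def h(z):  # log complete-data density in z (prior constant handled below)
        eta = a + b * z
        return np.sum(y_i * eta - np.log1p(np.exp(eta))) - 0.5 * z ** 2
    z_hat = minimize_scalar(lambda z: -h(z), bounds=(-6, 6), method="bounded").x
    p = 1.0 / (1.0 + np.exp(-(a + b * z_hat)))
    hess = -np.sum(b ** 2 * p * (1.0 - p)) - 1.0          # h''(z_hat) < 0
    # Laplace: ∫ exp(h(z)) φ-constant dz ≈ exp(h(ẑ)) sqrt(2π/-h'') / sqrt(2π)
    return h(z_hat) - 0.5 * np.log(-hess)

# Example with 8 binary items and illustrative intercepts/loadings.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.5, 8)         # item intercepts
b = rng.uniform(0.5, 1.5, 8)        # factor loadings
y_i = rng.integers(0, 2, 8).astype(float)
print(laplace_loglik(y_i, a, b))
# The full estimator sums this over subjects and maximises over (a, b).
```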
3.
Sets of relatively short time series arise in many situations. One aspect of their analysis may be the detection of outlying series. We examine the performance of standard normal outlier tests applied to the means, or to simple functions of the means, of AR(1) series, not necessarily of equal lengths. Although unequal lengths of series imply that the means have unequal variances, which are only known approximately, it is shown that nominal significance levels hold good under most circumstances. Thus a standard outlier test can usefully be applied, avoiding the complication of estimating the time series' parameters. The test's power is affected by unequal lengths, being higher when the slippage occurs in one of the longer series.
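A small sketch of the procedure described: compute the mean of each short AR(1) series and apply an ordinary normal (Grubbs-type) outlier test to those means, without estimating the autoregressive parameters. The simulated data and the particular test statistic are illustrative only.

```python
# Standard outlier test on the means of short AR(1) series of unequal length.
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi=0.5, mu=0.0):
    x = np.empty(n)
    x[0] = mu + rng.normal()
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal()
    return x

# Ten short series of unequal length; the last one has a slipped mean.
series = [ar1(rng.integers(20, 40)) for _ in range(9)] + [ar1(35, mu=2.0)]
means = np.array([s.mean() for s in series])

# Grubbs statistic for the most extreme mean; compare with the usual
# Grubbs critical value for the chosen significance level.
g = np.max(np.abs(means - means.mean())) / means.std(ddof=1)
print("Grubbs statistic:", g)
```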
4.
The problem considered is that of finding an optimum measurement schedule to estimate population parameters in a nonlinear model when the patient effects are random. The paper presents examples of the use of sensitivity functions, derived from the General Equivalence Theorem for D-optimality, in the construction of optimum population designs for such schedules. With independent observations, the theorem applies to the potential inclusion of a single observation. However, in population designs the observations are correlated and the theorem applies to the inclusion of an additional measurement schedule. In one example, three groups of patients of differing size are subject to distinct schedules. Numerical, as opposed to analytical, calculation of the sensitivity function is advocated. The required covariances of the observations are found by simulation.
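For orientation, the sketch below evaluates the General Equivalence Theorem sensitivity function in the simple independent-observation case that the paper generalises to correlated population designs; the quadratic model, design points and weights are illustrative assumptions, not the paper's example.

```python
# D-optimality sensitivity function d(x, xi) = f(x)' M(xi)^{-1} f(x) for an
# independent-observation design; at the D-optimum it never exceeds p.
import numpy as np

f = lambda x: np.array([1.0, x, x ** 2])       # regression functions, p = 3
design_pts = np.array([-1.0, 0.0, 1.0])        # classical D-optimum on [-1, 1]
weights = np.array([1 / 3, 1 / 3, 1 / 3])

M = sum(w * np.outer(f(x), f(x)) for w, x in zip(weights, design_pts))
M_inv = np.linalg.inv(M)

xs = np.linspace(-1, 1, 201)
d = np.array([f(x) @ M_inv @ f(x) for x in xs])
print("max sensitivity:", d.max(), "(<= p = 3 at the optimum design points)")
```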
5.
For series-type voltage regulators, a new error amplifier circuit with a self-established (built-in) voltage reference is designed. The circuit is ingeniously conceived, structurally optimized and easy to integrate, and offers high open-loop gain, a high common-mode rejection ratio and good AC characteristics. Verification shows that the measured data agree closely with the simulation results.
6.
Using definitions, theorems, and contrasting positive and negative examples, this paper discusses the pointwise convergence, uniform convergence, and uniform convergence on closed subintervals of sequences of functions, and the relationships and differences among these notions.
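A standard textbook example (not taken from the paper) that separates the three notions:

```latex
Let $f_n(x) = x^n$ on $[0,1)$. Then $f_n \to 0$ pointwise, since $x^n \to 0$
for every fixed $x \in [0,1)$. The convergence is \emph{not} uniform on
$[0,1)$, because
\[
  \sup_{x \in [0,1)} |f_n(x) - 0| = 1 \quad \text{for every } n .
\]
However, on every closed subinterval $[0,a] \subset [0,1)$ with $a < 1$,
\[
  \sup_{x \in [0,a]} |f_n(x)| = a^n \to 0 ,
\]
so $(f_n)$ converges uniformly on every closed subinterval of $[0,1)$,
even though it does not converge uniformly on the whole interval.
```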
7.
Summary. We develop a general methodology for tilting time series data. Attention is focused on a large class of regression problems, where errors are expressed through autoregressive processes. The class has a range of important applications and, in the context of our work, may be used to illustrate the application of tilting methods to interval estimation in regression, to robust statistical inference and to estimation subject to constraints. The method can be viewed as 'empirical likelihood with nuisance parameters'.
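To fix ideas, here is a sketch of tilting in the simplest i.i.d. setting, namely empirical-likelihood weights for a mean constraint; the paper's setting of regression with autoregressive errors is more general, and the data and names here are illustrative.

```python
# Empirical-likelihood tilting weights p_i = 1/(n(1 + lam*dev_i)), with lam
# chosen so the tilted distribution has mean mu0: sum_i p_i * dev_i = 0.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
x = rng.normal(0.3, 1.0, size=100)
n, mu0 = x.size, 0.0                     # tilt the sample towards mean mu0
dev = x - mu0

g = lambda lam: np.sum(dev / (1.0 + lam * dev))
lam = brentq(g, -0.999 / dev.max(), -0.999 / dev.min())   # bracket the root
p = 1.0 / (n * (1.0 + lam * dev))
print("weights sum to", p.sum(), "; tilted mean:", float(np.sum(p * x)))
```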
8.
Diagnostics for dependence within time series extremes
Summary. The analysis of extreme values within a stationary time series entails various assumptions concerning its long- and short-range dependence. We present a range of new diagnostic tools for assessing whether these assumptions are appropriate and for identifying structure within extreme events. These tools are based on tail characteristics of joint survivor functions but can be implemented by using existing estimation methods for extremes of univariate independent and identically distributed variables. Our diagnostic aids are illustrated through theoretical examples, simulation studies and by application to rainfall and exchange rate data. On the basis of these diagnostics we can explain characteristics that are found in the observed extreme events of these series and also gain insight into the properties of events that are more extreme than those observed.
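As a flavour of such diagnostics, the sketch below estimates the probability that a threshold exceedance is followed by another exceedance k steps later; this simple statistic is illustrative and is not the authors' specific set of diagnostic tools.

```python
# Empirical estimate of P(X_{t+k} > u | X_t > u) for a high threshold u,
# across a range of lags k.
import numpy as np

def exceedance_dependence(x, q=0.95, max_lag=10):
    u = np.quantile(x, q)
    exc = x > u
    return {k: (exc[:-k] & exc[k:]).sum() / exc[:-k].sum()
            for k in range(1, max_lag + 1)}

rng = np.random.default_rng(3)
x = np.empty(5000)
x[0] = rng.normal()
for t in range(1, 5000):                 # AR(1): dependence visible at short lags
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(exceedance_dependence(x))          # values well above 1 - q at small lags
```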
9.
Impacts of complex emergencies or relief interventions have often been evaluated by comparing absolute mortality with international standardized mortality rates. A better evaluation would be to compare with the local baseline mortality of the affected populations. A projection of population-based survival data into the time of the emergency or intervention, based on information from before the emergency, may provide such a local baseline reference. We find that a log-transformed Gaussian time series model, in which the standard errors of the estimated rates are included in the variance, has the best forecasting capacity. However, if the time at risk during the forecast period is known, forecasting may instead be done with a Poisson time series model with overdispersion. In either case, the standard error of the estimated rates must be included in the variance of the model, either additively in the Gaussian model or multiplicatively, through overdispersion, in the Poisson model. The data on which the forecast is based must be modelled carefully, accounting not only for calendar-time trends but also for periods with an excessive frequency of events (epidemics) and for seasonal variation, so as to eliminate residual autocorrelation and to produce a proper reference for comparison that reflects changes over time during the emergency. Hence, when modelled properly, it is possible to predict a reference for an emergency-affected population based on local conditions. We predicted childhood mortality during the 1998-1999 war in Guinea-Bissau and found increased mortality in the first half-year of the war and mortality close to the expected level in the last half-year of the war.
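A sketch of the projection idea using a Poisson regression with a calendar-time trend, seasonal terms, a log time-at-risk offset and an overdispersion estimate; the simulated data and variable names are placeholders, not the Guinea-Bissau data.

```python
# Fit pre-emergency mortality counts, then project fitted rates into the
# emergency period as a local baseline reference.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
months = np.arange(60)                                 # 5 pre-emergency years
exposure = rng.uniform(900, 1100, size=60)             # child-months at risk
season = np.column_stack([np.sin(2 * np.pi * months / 12),
                          np.cos(2 * np.pi * months / 12)])
X = sm.add_constant(np.column_stack([months, season]))
true_rate = np.exp(-4.0 - 0.005 * months + 0.2 * season[:, 0])
deaths = rng.poisson(true_rate * exposure)

model = sm.GLM(deaths, X, family=sm.families.Poisson(),
               offset=np.log(exposure)).fit(scale="X2")  # Pearson overdispersion

# Forecast the next 12 months (the emergency period) for a given time at risk.
new_months = np.arange(60, 72)
new_season = np.column_stack([np.sin(2 * np.pi * new_months / 12),
                              np.cos(2 * np.pi * new_months / 12)])
X_new = sm.add_constant(np.column_stack([new_months, new_season]),
                        has_constant="add")
baseline = model.predict(X_new, offset=np.log(np.full(12, 1000.0)))
print("expected baseline deaths per month:", baseline.round(1))
```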
10.
This paper studies a customer classification method that combines customers' dynamic and static attribute data. A weighting scheme for customer time series is proposed, the statistical features of the weighted series are used as the clustering feature vector, and a hybrid genetic algorithm clusters the customers so that each cluster shares similar temporal characteristics. On this basis, the clustering results are combined with the customers' static attribute data to classify the customers further. Experimental results show that, compared with traditional classification methods based only on static attribute data, the proposed method improves the accuracy of customer classification.
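A sketch of the dynamic-feature side of such a method: weight each customer's purchase series towards recent periods, summarise it with a few statistical features, cluster, and then append static attributes for the second classification step. Plain k-means stands in here for the paper's hybrid genetic-algorithm clustering, and all data and names are simulated placeholders.

```python
# Weighted time-series features for customer clustering, then combination
# with static attributes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_customers, n_months = 300, 24
sales = rng.gamma(2.0, 50.0, size=(n_customers, n_months))

w = np.linspace(0.5, 1.5, n_months)            # heavier weight on recent months
weighted = sales * w

features = np.column_stack([
    weighted.mean(axis=1),                                  # weighted level
    weighted.std(axis=1),                                   # volatility
    np.polyfit(np.arange(n_months), weighted.T, 1)[0],      # trend slope
])
features = (features - features.mean(0)) / features.std(0)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Combine with static attributes (e.g. region or industry codes) for the
# second, finer classification step described in the abstract.
static = rng.integers(0, 3, size=(n_customers, 2)).astype(float)
combined = np.column_stack([features, static])
print(combined.shape, np.bincount(labels))
```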