151.
In the context of the partially linear semiparametric model examined by Robinson (1988), we show that the root-n-consistent estimation results established using kernel and series methods can also be obtained by using the k-nearest-neighbor (k-nn) method.
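Below is a minimal sketch of Robinson-type estimation of the parametric part using k-nn residualisation; the data-generating process, the choice of k and the use of scikit-learn's KNeighborsRegressor are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch (not the paper's implementation) of Robinson-type estimation
# of beta in the partially linear model y = x*beta + g(z) + e, where the
# conditional means E[y|z] and E[x|z] are estimated by k-nearest neighbours.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0, 1, n)
x = z ** 2 + rng.normal(0, 0.5, n)          # x correlated with z
g = np.sin(2 * np.pi * z)                   # unknown nonparametric part
y = 1.5 * x + g + rng.normal(0, 0.3, n)     # true beta = 1.5

k = 15  # number of neighbours (a tuning choice, not from the paper)
knn_y = KNeighborsRegressor(n_neighbors=k).fit(z.reshape(-1, 1), y)
knn_x = KNeighborsRegressor(n_neighbors=k).fit(z.reshape(-1, 1), x)

# Residualise y and x on z, then run OLS on the residuals.
y_res = y - knn_y.predict(z.reshape(-1, 1))
x_res = x - knn_x.predict(z.reshape(-1, 1))
beta_hat = np.sum(x_res * y_res) / np.sum(x_res ** 2)
print(beta_hat)  # should be close to 1.5
```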
152.
叶卫华 《宿州学院学报》2007,22(6):56-60,52
Chinese contains a large number of time words formed with "前" (front/before). These time words reflect the Han Chinese "time as space" conceptualization and exhibit the linear pattern of the space-time metaphor in Chinese. Within it there are two "time as space" metaphor systems: the "moving time" system and the "moving ego" system. Under these two systems the meanings of most of the time words have become decontextualized, but the meanings of a few are not yet fully conventionalized; their interpretation requires the cognitive context and cognitive inference.
153.
The local maximum likelihood estimate θ̂_t of a parameter in a statistical model f(x, θ) is defined by maximizing a weighted version of the likelihood function which gives more weight to observations in the neighbourhood of t. The paper studies the sense in which f(t, θ̂_t) is closer to the true distribution g(t) than the usual estimate f(t, θ̂) is. Asymptotic results are presented for the case in which the model misspecification becomes vanishingly small as the sample size tends to ∞. In this setting, the relative entropy risk of the local method is better than that of maximum likelihood. The form of optimum weights for the local likelihood is obtained and illustrated for the normal distribution.
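A minimal sketch of a local maximum likelihood fit for the normal model follows, with a Gaussian kernel supplying the weights; the kernel, the bandwidth h and the data are illustrative assumptions, not the paper's optimum weights.

```python
# A minimal sketch of a local maximum likelihood estimate at a point t for a
# normal model f(x, theta): observations near t get more weight via a Gaussian
# kernel. The bandwidth h is an illustrative choice, not from the paper.
import numpy as np
from scipy.optimize import minimize

def local_mle(x, t, h):
    """Maximise the kernel-weighted normal log-likelihood around t."""
    w = np.exp(-0.5 * ((x - t) / h) ** 2)          # Gaussian kernel weights
    def neg_loglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll = -0.5 * np.log(2 * np.pi) - log_sigma - 0.5 * ((x - mu) / sigma) ** 2
        return -np.sum(w * ll)
    res = minimize(neg_loglik, x0=[np.mean(x), np.log(np.std(x))])
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 300) + 0.3 * rng.uniform(0, 1, 300)  # mildly misspecified data
print(local_mle(x, t=0.5, h=0.5))
```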
154.
Estimating population sizes by catch-effort methods is of enormous importance, in particular for harvested animal populations. A unified mixture model is introduced for different catchability functions to account for heterogeneous catchabilities among individual animals. A sequence of lower bounds to the odds that a single animal is not caught is proposed and used to define pseudo maximum likelihood estimators for the population size. The one-sided nature of confidence intervals is discussed. The proposed estimation methods are presented and illustrated by numerical studies.
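For illustration of the catch-effort idea only, here is a minimal sketch of the classical Leslie regression estimator; the paper's unified mixture model and pseudo maximum likelihood estimators are more general, and all numbers below are hypothetical.

```python
# A minimal sketch of the classical Leslie catch-effort regression for
# population size, shown only to illustrate the catch-effort idea; the
# abstract's mixture-model and pseudo-ML estimators are more general.
import numpy as np

catch  = np.array([74, 60, 51, 43, 34])          # hypothetical catches per occasion
effort = np.array([1.0, 1.0, 1.0, 1.0, 1.0])     # hypothetical effort per occasion

cpue = catch / effort                            # catch per unit effort
k_prev = np.concatenate(([0.0], np.cumsum(catch)[:-1]))  # cumulative prior removals

# Leslie model: cpue_t = q * (N - K_{t-1}); fit by least squares.
slope, intercept = np.polyfit(k_prev, cpue, 1)
q_hat = -slope                                   # catchability estimate
n_hat = intercept / q_hat                        # population size estimate
print(q_hat, n_hat)
```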
155.
Summary.  In process characterization, the quality of the information obtained depends directly on the quality of the process model. The current quality revolution is providing a strong stimulus for rethinking and re-evaluating many statistical ideas. Among these are the role of theoretic knowledge and data in statistical inference, and some issues in theoretic-empirical modelling. With this concern the paper takes a broad, pragmatic view of statistical inference that includes all aspects of model formulation. The estimation of model parameters traditionally assumes that a model has a prespecified known form and takes no account of possible uncertainty regarding model structure. In practice, however, model structural uncertainty is a fact of life and is likely to be more serious than other sources of uncertainty which have received far more attention. This is true whether the model is specified on subject-matter grounds or formulated, fitted and checked on the same data set in an iterative, interactive way. For that reason, novel modelling techniques have been developed to reduce model uncertainty. Using available knowledge for theoretic model elaboration, these techniques approximate the exact unknown process model by accessible theoretic and polynomial empirical functions. The paper examines the effects of uncertainty in hybrid theoretic-empirical models and, to reduce that uncertainty, additive and multiplicative methods of model formulation are developed. These modelling techniques have been applied successfully to refine a steady-flow model for an air gauge sensor. Validation of the elaborated models shows that the multiplicative approach attains a satisfactory model with a small discrepancy from the empirical evidence.
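A minimal sketch, under illustrative assumptions, of the additive and multiplicative ways of correcting a theoretic model with a polynomial empirical term; the theoretic function and the data below are invented and are not the air gauge sensor model from the paper.

```python
# A minimal sketch of combining a theoretic model with a polynomial empirical
# correction, additively and multiplicatively; f_theory and the data are
# illustrative assumptions only.
import numpy as np

def f_theory(x):
    return 2.0 / (1.0 + x)            # hypothetical theoretic model

rng = np.random.default_rng(2)
x = np.linspace(0.1, 2.0, 60)
y = f_theory(x) * (1.0 + 0.1 * x) + rng.normal(0, 0.02, x.size)  # "true" process

# Additive correction: fit a low-order polynomial to the residual y - f_theory(x).
add_coef = np.polyfit(x, y - f_theory(x), 2)
y_add = f_theory(x) + np.polyval(add_coef, x)

# Multiplicative correction: fit a polynomial to the ratio y / f_theory(x).
mul_coef = np.polyfit(x, y / f_theory(x), 2)
y_mul = f_theory(x) * np.polyval(mul_coef, x)

print(np.mean((y - y_add) ** 2), np.mean((y - y_mul) ** 2))  # compare fits
```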
156.
Inference in hybrid Bayesian networks using dynamic discretization
We consider approximate inference in hybrid Bayesian Networks (BNs) and present a new iterative algorithm that efficiently combines dynamic discretization with robust propagation algorithms on junction trees. Our approach offers a significant extension to Bayesian Network theory and practice by offering a flexible way of modeling continuous nodes in BNs conditioned on complex configurations of evidence and intermixed with discrete nodes as both parents and children of continuous nodes. Our algorithm is implemented in a commercial Bayesian Network software package, AgenaRisk, which allows model construction and testing to be carried out easily. The results from the empirical trials clearly show how our software can deal effectively with different types of hybrid models containing elements of expert judgment as well as statistical inference. In particular, we show how the rapid convergence of the algorithm towards zones of high probability density makes robust inference analysis possible even in situations where, due to the lack of information in both prior and data, robust sampling becomes infeasible.
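Below is a toy sketch of the dynamic-discretization principle only (iteratively refining the bins of a continuous node where the probability mass concentrates); it does not reproduce the junction-tree propagation, the paper's entropy-based error measure or the AgenaRisk implementation.

```python
# A toy illustration of the dynamic-discretization idea: start from a coarse
# grid and repeatedly split the interval whose probability mass is largest,
# so bins concentrate in zones of high density. This is only a sketch of the
# principle, not the propagation algorithm described in the paper.
import numpy as np
from scipy.stats import norm

density = norm(loc=0.0, scale=1.0)      # continuous node to discretise
edges = [-6.0, -2.0, 2.0, 6.0]          # initial coarse discretisation

for _ in range(12):
    mass = np.diff(density.cdf(edges))            # probability of each bin
    i = int(np.argmax(mass))                      # bin with the most mass
    mid = 0.5 * (edges[i] + edges[i + 1])
    edges.insert(i + 1, mid)                      # split it in half

print(np.round(edges, 2))   # bins cluster around the high-density region near 0
```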
157.
This paper analyses the semantic differences between the repeated terms in Japanese tautological expressions, reveals the structural and semantic features of noun-type tautology, and offers a cognitive-pragmatic interpretation within the framework of relevance theory. It argues that the pragmatic inference process of a noun-type tautology is the interaction, in a specific context, between one set of information activated by the repeated expression and another set of information associated with its attributes.
158.
Beta regression is a suitable choice for modelling continuous response variables taking values on the unit interval. Data structures such as hierarchical, repeated-measures and longitudinal designs typically induce extra variability and/or dependence, which can be accounted for by the inclusion of random effects. Statistical inference then typically requires numerical methods, possibly combined with sampling algorithms. A class of Beta mixed models is adopted for the analysis of two real problems with grouped data structures. We focus on likelihood inference and describe the implemented algorithms. The first is a study on the life quality index of industry workers, with data collected according to a hierarchical sampling scheme. The second is a study assessing the impact of hydroelectric power plants on water quality indexes measured upstream, downstream and at the reservoirs of the dammed rivers, with a nested and longitudinal data structure. Results from different algorithms are reported for comparison, including data cloning, an alternative to numerical approximations which also allows assessing identifiability. Confidence intervals based on profiled likelihoods are compared with those obtained from asymptotic quadratic approximations, showing relevant differences for parameters related to the random effects. In both cases, the scientific hypothesis of interest was investigated by comparing alternative models, leading to relevant interpretations of the results within each context.
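A minimal sketch of maximum-likelihood beta regression with a logit link (fixed effects only) is given below, using invented data; the mixed models of the paper add random effects on top of this likelihood, which then require numerical integration or sampling.

```python
# A minimal sketch of maximum-likelihood beta regression with a logit link
# (fixed effects only); the data and true parameter values are invented.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
mu_true = expit(-0.5 + 1.0 * x)
phi_true = 30.0
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

def neg_loglik(theta):
    b0, b1, log_phi = theta
    mu = expit(b0 + b1 * x)                      # logit link for the mean
    phi = np.exp(log_phi)                        # precision parameter
    a, b = mu * phi, (1 - mu) * phi
    ll = (gammaln(phi) - gammaln(a) - gammaln(b)
          + (a - 1) * np.log(y) + (b - 1) * np.log1p(-y))
    return -np.sum(ll)

fit = minimize(neg_loglik, x0=[0.0, 0.0, np.log(10.0)])
print(fit.x[:2], np.exp(fit.x[2]))   # should be near (-0.5, 1.0) and 30
```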
159.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health and environmental sciences, among others. The zero-inflated Poisson (ZIP) model has typically been considered for these types of problems. However, the ZIP model can fail if the non-zero counts are overdispersed relative to the Poisson distribution, in which case the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model assumes that an observed zero may come from a point mass distribution at zero or from the negative binomial model. The likelihood function is used not only to compute Bayesian model selection measures, but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated with a simulation study. Further, a real data set is analyzed, where we show that ZINB regression models seem to fit the data better than their Poisson counterparts.
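For concreteness, here is a minimal sketch of the ZINB log-likelihood that such models are built on, with hypothetical counts and parameter values; the paper's Bayesian fitting, regression structure and q-divergence diagnostics are not shown.

```python
# A minimal sketch of the zero-inflated negative binomial (ZINB) log-likelihood:
# a zero arises either from a point mass at zero (probability pi) or from the
# negative binomial component. Counts and parameters below are hypothetical.
import numpy as np
from scipy.stats import nbinom

def zinb_loglik(y, pi, r, p):
    """y: counts; pi: zero-inflation probability; r, p: negative binomial parameters."""
    nb_pmf0 = nbinom.pmf(0, r, p)
    ll_zero = np.log(pi + (1 - pi) * nb_pmf0)         # zeros: point mass or NB zero
    ll_pos = np.log(1 - pi) + nbinom.logpmf(y, r, p)  # positive counts: NB only
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

y = np.array([0, 0, 0, 1, 2, 0, 5, 0, 3, 0, 7, 1])    # illustrative counts
print(zinb_loglik(y, pi=0.4, r=2.0, p=0.4))
```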
160.