391.
We discuss the nature of ancillary information in the context of the continuous uniform distribution. In the one-sample problem, the existence of sufficient statistics mitigates conditioning on the ancillary configuration. In the two-sample problem, additional ancillary information becomes available when the ratio of scale parameters is known. We give exact results for conditional inferences about the common scale parameter and for the difference in location parameters of two uniform distributions. The ancillary information affects the precision of the latter through a comparison of the sample value of the ratio of scale parameters with the known population value. A limited conditional simulation compares the Type I errors and power of these exact results with approximate results using the robust pooled t-statistic.
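The one-sample sufficiency referred to above can be made concrete: for a sample from a uniform location-scale model U(θ, θ + σ), the likelihood depends on the data only through the sample minimum and maximum, so the pair (min, max) is sufficient. A minimal sketch (the parameterisation U(θ, θ + σ) and the numbers are illustrative, not taken from the paper):

```python
import random

def uniform_mle(sample):
    """MLE for U(theta, theta + sigma): the likelihood is sigma^(-n)
    whenever theta <= min(sample) and max(sample) <= theta + sigma,
    so it is maximised by the smallest admissible sigma."""
    lo, hi = min(sample), max(sample)
    # Estimates depend on the data only via the sufficient pair (min, max).
    return lo, hi - lo

random.seed(1)
data = [2.0 + 3.0 * random.random() for _ in range(500)]
theta_hat, sigma_hat = uniform_mle(data)
```

With 500 draws from U(2, 5), the estimates fall very close to θ = 2 and σ = 3, since the extreme order statistics converge quickly to the endpoints.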
392.
Hidden Markov random field models provide an appealing representation of images and other spatial problems. The drawback is that inference is not straightforward for these models, as the normalisation constant for the likelihood is generally intractable except for very small observation sets. Variational methods are an emerging tool for Bayesian inference and have already been successfully applied in other contexts. Focusing on the particular case of a hidden Potts model with Gaussian noise, we show how variational Bayesian methods can be applied to hidden Markov random field inference. To tackle the obstacle of the intractable normalising constant for the likelihood, we explore alternative estimation approaches for incorporation into the variational Bayes algorithm. We consider a pseudo-likelihood approach as well as the more recent reduced dependence approximation of the normalisation constant. To illustrate the effectiveness of these approaches we present empirical results from the analysis of simulated datasets. We also analyse a real dataset and compare results with those of previous analyses as well as those obtained from the recently developed auxiliary variable MCMC method and the recursive MCMC method. Our results show that the variational Bayesian analyses can be carried out much faster than the MCMC analyses and produce good estimates of model parameters. We also found that the reduced dependence approximation of the normalisation constant outperformed the pseudo-likelihood approximation in our analysis of real and synthetic datasets.
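The pseudo-likelihood approach mentioned above sidesteps the intractable normalising constant by replacing the joint Potts likelihood with a product of each site's full conditional, which depends only on the site's neighbours. A minimal sketch for a two-label Potts model on a small grid (the grid, labels, and β values are illustrative, not taken from the paper):

```python
import math

def potts_pseudo_loglik(z, beta):
    """Log pseudo-likelihood of a Potts labelling z on a rectangular grid:
    the product over sites of p(z_i | neighbours), where each full
    conditional is exp(beta * n_k) / sum_l exp(beta * n_l) and n_k counts
    neighbours carrying label k. No global normalising constant needed."""
    rows, cols = len(z), len(z[0])
    labels = sorted({v for row in z for v in row})
    ll = 0.0
    for i in range(rows):
        for j in range(cols):
            nbrs = [z[a][b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < rows and 0 <= b < cols]
            scores = {k: beta * sum(n == k for n in nbrs) for k in labels}
            norm = math.log(sum(math.exp(s) for s in scores.values()))
            ll += scores[z[i][j]] - norm
    return ll

# A spatially smooth two-label configuration.
z = [[0, 0, 1],
     [0, 1, 1],
     [0, 1, 1]]
```

At β = 0 every label is equally likely at each site, so the log pseudo-likelihood is exactly −(number of sites) × log(number of labels); for a smooth configuration like `z`, larger β raises it.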
393.
When a new product is the result of design and/or process improvements introduced in its predecessors, past failure data and expert technical knowledge constitute a valuable source of information that can lead to a more accurate reliability estimate of the upgraded product. This paper proposes a Bayesian procedure to formalize the prior information available about the failure probability of an upgraded automotive component. The elicitation process makes use of the failure data of the past product, the designer's information on the effectiveness of planned design/process modifications, information on the actual working conditions of the upgraded component and, for outsourced components, technical knowledge of the effect of possible cost reductions. The proposed procedure yields more accurate estimates of the failure probability. The number of failed items in a future population of vehicles is also predicted, in order to measure the effect of a possible extension of the warranty period. Finally, the procedure is applied to a case study to illustrate its feasibility in supporting reliability estimation.
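The paper's elicitation procedure is specific to its automotive setting, but the underlying mechanics of combining an elicited prior with new field data can be sketched with a conjugate Beta-binomial update. All numbers below are hypothetical: a past product's failure record sets the prior mean, the designer's expected improvement factor scales it, and a pseudo-sample size fixes the prior's weight:

```python
def posterior_beta(a0, b0, failures, trials):
    """Conjugate Beta-binomial update: a Beta(a0, b0) prior on the failure
    probability combined with `failures` out of `trials` observed units."""
    return a0 + failures, b0 + trials - failures

def beta_mean(a, b):
    """Posterior mean of the failure probability."""
    return a / (a + b)

# Hypothetical elicitation: the past product showed 30 failures in 10,000
# units, and the designer expects the modifications to halve the failure
# rate, so the prior mean is scaled by 0.5. A pseudo-sample size of 2,000
# expresses moderate confidence in that judgement.
prior_mean = 0.5 * 30 / 10_000
prior_weight = 2_000
a0, b0 = prior_mean * prior_weight, (1 - prior_mean) * prior_weight
a, b = posterior_beta(a0, b0, failures=4, trials=5_000)
```

The posterior mean then blends the elicited prior with the observed failure fraction, weighted by their respective (pseudo-)sample sizes.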
394.
Recurrent event data often arise in biomedical studies, with examples including hospitalizations, infections, and treatment failures. In observational studies, it is often of interest to estimate the effects of covariates on the marginal recurrent event rate. The majority of existing rate regression methods assume multiplicative covariate effects. We propose a semiparametric model for the marginal recurrent event rate, wherein the covariates are assumed to add to the unspecified baseline rate. Covariate effects are summarized by rate differences, meaning that the absolute effect on the rate function can be determined from the regression coefficient alone. We describe modifications of the proposed method to accommodate a terminating event (e.g., death). Proposed estimators of the regression parameters and baseline rate are shown to be consistent and asymptotically Gaussian. Simulation studies demonstrate that the asymptotic approximations are accurate in finite samples. The proposed methods are applied to a state-wide kidney transplant data set.
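The additive-rates idea can be illustrated in a deliberately simplified special case: a constant baseline rate, a single binary covariate, and a common follow-up time, so the rate difference is just the gap in events per unit time between the two groups. This is a toy version of the model, not the paper's semiparametric estimator (all parameter values are made up):

```python
import math, random

def poisson(lam, rng):
    """Knuth's product-of-uniforms Poisson sampler; fine for small lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def rate_difference(counts0, counts1, tau):
    """Covariate effect as a rate *difference*: the gap between the two
    groups' observed event rates per unit of follow-up time tau."""
    n0, n1 = len(counts0), len(counts1)
    return sum(counts1) / (n1 * tau) - sum(counts0) / (n0 * tau)

rng = random.Random(42)
tau, lam0, beta = 2.0, 0.5, 0.3       # follow-up, baseline rate, true effect
counts0 = [poisson(lam0 * tau, rng) for _ in range(5_000)]
counts1 = [poisson((lam0 + beta) * tau, rng) for _ in range(5_000)]
beta_hat = rate_difference(counts0, counts1, tau)
```

The additive structure is what makes the coefficient directly interpretable: `beta_hat` estimates the extra number of events per unit time attributable to the covariate, regardless of the baseline level.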
395.
396.
It has long been asserted that in univariate location-scale models, when inference concerns either the location or the scale parameter, using the inverse of the scale parameter as a Bayesian prior yields posterior credible sets that have exactly the correct frequentist confidence-set interpretation. This claim dates back at least to Peers, and has subsequently been noted by various authors, with varying degrees of justification. We present a simple, direct demonstration of the exact matching property of the posterior credible sets derived under this prior in the univariate location-scale model. This is done by establishing an equivalence between the conditional frequentist and posterior densities of the pivotal quantities on which conditional frequentist inferences are based.
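The matching property is easy to check by simulation in the Gaussian special case of the location-scale model: under the prior π(μ, σ) ∝ 1/σ, the marginal posterior of μ is x̄ + (s/√n)·t with t Student-t on n − 1 degrees of freedom, so the 95% credible interval coincides with the classical t interval and its frequentist coverage is exactly 95%. A sketch (sample size, parameter values, and replication count are illustrative):

```python
import random, statistics

T975_DF9 = 2.262  # upper 97.5% point of Student's t with 9 degrees of freedom

def covers(sample, mu, tcrit=T975_DF9):
    """Under pi(mu, sigma) ∝ 1/sigma, the 95% posterior credible interval
    for mu equals the classical t interval xbar ± t * s / sqrt(n), so
    checking credible-interval coverage checks frequentist coverage too."""
    n = len(sample)
    xbar = statistics.fmean(sample)
    s = statistics.stdev(sample)
    half = tcrit * s / n ** 0.5
    return xbar - half <= mu <= xbar + half

rng = random.Random(0)
hits = sum(covers([rng.gauss(5.0, 2.0) for _ in range(10)], 5.0)
           for _ in range(4_000))
coverage = hits / 4_000
```

Across 4,000 replications the empirical coverage sits at 95% up to Monte Carlo error, reflecting the exact matching the abstract describes.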
397.
The problem of statistically calibrating a measuring instrument can be framed in both a statistical and an engineering context. In the first, the problem is addressed by distinguishing between the 'classical' approach and the 'inverse' regression approach. Both of these are static models, used to estimate exact measurements from measurements that are affected by error. In the engineering context, the variables of interest are treated as evolving over time and are observed sequentially. The Bayesian time-series method of Dynamic Linear Models can be used to monitor the evolution of the measurements, thus introducing a dynamic approach to statistical calibration. The research presented employs this new approach to performing statistical calibration. A simulation study in the context of microwave radiometry is conducted that compares the dynamic model to traditional static frequentist and Bayesian approaches. The focus of the study is to understand how well the dynamic statistical calibration method performs under various signal-to-noise ratios.
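The two static approaches contrasted above differ only in which variable is regressed on which: the classical approach fits the response on the true measurement and inverts the fitted line, while the inverse approach regresses the true measurement directly on the response. A minimal sketch with made-up calibration data (not the paper's radiometry setup):

```python
import statistics

def fit_line(x, y):
    """Ordinary least squares fit of y = a + b * x."""
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar, b

def classical_estimate(x, y, y0):
    """Classical calibration: regress y on x, then invert x0 = (y0 - a) / b."""
    a, b = fit_line(x, y)
    return (y0 - a) / b

def inverse_estimate(x, y, y0):
    """Inverse calibration: regress x directly on y and predict at y0."""
    a, b = fit_line(y, x)
    return a + b * y0

# Hypothetical calibration run: responses roughly y = 2x with small errors.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.1, 3.9, 6.1, 7.9]
```

With nearly linear, low-noise data the two estimates almost coincide; they diverge as noise grows, which is one axis along which the dynamic approach is compared to the static ones.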
398.
Alice L. Morais, Statistics (2017) 51(2):294–313
We extend the Weibull power series (WPS) class of distributions to a new class of extended Weibull power series (EWPS) distributions. The EWPS distributions are related to series and parallel systems with a random number of components, whereas the WPS distributions [Morais AL, Barreto-Souza W. A compound class of Weibull and power series distributions. Computational Statistics and Data Analysis. 2011;55:1410–1425] are related to series systems only. Unlike the WPS distributions, for which the Weibull is a limiting special case, the Weibull law is a particular case of the EWPS distributions. We prove that the distributions in this class are identifiable under a simple assumption. We also prove stochastic and hazard rate order results and highlight that the shapes of the EWPS distributions are markedly more flexible than the shapes of the WPS distributions. We define a regression model for the EWPS response random variable to model a scale parameter and its quantiles. We present the maximum likelihood estimator and prove its consistency and asymptotic normal distribution. Although series and parallel systems motivated the construction of this class, the EWPS distributions are suitable for modelling a wide range of positive data sets. To illustrate potential uses of this model, we apply it to a real data set on the tensile strength of coconut fibres and present a simple device for diagnostic purposes.
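The system interpretation behind the class can be sketched by direct simulation: draw a random component count N from a power-series-family member (the geometric, in this illustration), draw N i.i.d. Weibull lifetimes, and take the minimum for a series system or the maximum for a parallel one. This is a simulation of the construction, not the EWPS density itself, and all parameter values are illustrative:

```python
import math, random

def weibull(scale, shape, rng):
    """Weibull draw via inversion: scale * (-log U)^(1/shape)."""
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def system_lifetime(scale, shape, p, rng, parallel=True):
    """Lifetime of a system with N ~ Geometric(p) (support 1, 2, ...)
    i.i.d. Weibull components: max of the lifetimes for a parallel
    system, min for a series system."""
    n = 1
    while rng.random() > p:
        n += 1
    comps = [weibull(scale, shape, rng) for _ in range(n)]
    return max(comps) if parallel else min(comps)
```

Simulating both system types with identical random draws shows the expected stochastic ordering: parallel lifetimes dominate series lifetimes, since the same component set yields max ≥ min.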
399.
In finance, inferences about future asset returns are typically quantified with the use of parametric distributions and single-valued probabilities. It is attractive to use less restrictive inferential methods, including nonparametric methods which do not require distributional assumptions about variables, and imprecise probability methods which generalize the classical concept of probability to set-valued quantities. Main attractions include the flexibility of the inferences to adapt to the available data and that the level of imprecision in inferences can reflect the amount of data on which these are based. This paper introduces nonparametric predictive inference (NPI) for stock returns. NPI is a statistical approach based on few assumptions, with inferences strongly based on data and with uncertainty quantified via lower and upper probabilities. NPI is presented for inference about future stock returns, as a measure for risk and uncertainty, and for pairwise comparison of two stocks based on their future aggregate returns. The proposed NPI methods are illustrated using historical stock market data.
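The lower and upper probabilities at the heart of NPI come from Hill's assumption A(n): given n observed values, the next observation falls into each of the n + 1 intervals of the induced partition with probability 1/(n + 1). For an exceedance event this gives simple counting bounds, sketched below (the data are illustrative, not the paper's stock returns):

```python
def npi_exceedance(data, t):
    """NPI lower and upper probabilities for the event X_{n+1} > t under
    Hill's A_(n): the lower probability counts only observations strictly
    above t; the upper also credits the interval containing t."""
    n = len(data)
    above = sum(x > t for x in data)
    return above / (n + 1), (above + 1) / (n + 1)

lower, upper = npi_exceedance([1.0, 2.0, 3.0, 4.0], 2.5)
```

The gap between the two probabilities is exactly 1/(n + 1), so the imprecision shrinks as more data accumulate, which is the behaviour the abstract highlights.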
400.
This paper conducts a simulation-based comparison of several stochastic volatility models with leverage effects. Two new variants of asymmetric stochastic volatility models, which are subject to a logarithmic transformation on the squared asset returns, are proposed. The leverage effect is introduced into the model through correlation either between the innovations of the observation equation and the latent process, or between the logarithm of squared asset returns and the latent process. Suitable Markov chain Monte Carlo algorithms are developed for parameter estimation and model comparison. Simulation results show that our proposed formulation of the leverage effect and the accompanying inference methods give rise to reasonable parameter estimates. Applications to two data sets uncover a negative correlation (which can be interpreted as a leverage effect) between the observed returns and volatilities, and a negative correlation between the logarithm of squared returns and volatilities.
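The leverage mechanism in the first formulation can be reproduced with a short simulation: correlate the return innovation ε with the log-volatility innovation η, and a negative ρ makes negative returns tend to precede volatility increases. This sketches the standard SV-with-leverage data-generating process, not the paper's specific variants; all parameter values are illustrative:

```python
import math, random

def simulate_sv(T, mu, phi, sigma_eta, rho, rng):
    """Simulate a stochastic volatility model with leverage:
       h_{t+1} = mu + phi * (h_t - mu) + sigma_eta * eta_t,
       r_t     = exp(h_t / 2) * eps_t,
    where corr(eps_t, eta_t) = rho; rho < 0 encodes the leverage effect.
    Returns the return series and the latent log-volatility series."""
    h = mu
    rs, hs = [], []
    for _ in range(T):
        eps = rng.gauss(0.0, 1.0)
        eta = rho * eps + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        hs.append(h)
        rs.append(math.exp(h / 2.0) * eps)
        h = mu + phi * (h - mu) + sigma_eta * eta
    return rs, hs

rs, hs = simulate_sv(20_000, -1.0, 0.95, 0.2, -0.8, random.Random(7))
```

The sample correlation between today's return and tomorrow's change in log-volatility comes out clearly negative, the signature the paper's applications detect in real data.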