2,771 query results found (search time: 80 ms)
151.
In applications of Gaussian processes (GPs) where quantification of uncertainty is a strict requirement, it is necessary to accurately characterize the posterior distribution over Gaussian process covariance parameters. This is normally done by means of standard Markov chain Monte Carlo (MCMC) algorithms, which require repeated expensive calculations involving the marginal likelihood. Motivated by the desire to avoid the inefficiency of MCMC algorithms that reject a considerable number of expensive proposals, this paper develops an alternative inference framework based on adaptive multiple importance sampling (AMIS). In particular, this paper studies the application of AMIS to GPs in the case of a Gaussian likelihood, and proposes a novel pseudo-marginal-based AMIS algorithm for non-Gaussian likelihoods, where the marginal likelihood is unbiasedly estimated. The results suggest that the proposed framework outperforms MCMC-based inference of covariance parameters in a wide range of scenarios.
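As a rough illustration of the importance-sampling alternative described above, here is a minimal sketch of adaptive importance sampling on a one-dimensional toy posterior. The Gaussian log-target standing in for a GP marginal likelihood, the moment-matching adaptation rule, and all numeric settings are illustrative assumptions; a full AMIS implementation would additionally reweight all past draws under a deterministic mixture of the past proposals.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    # Toy stand-in for a GP log marginal likelihood over one covariance
    # parameter: an unnormalised N(2, 1) density.
    return -0.5 * (theta - 2.0) ** 2

mu, sigma = 0.0, 3.0          # initial proposal N(mu, sigma^2)
all_draws, all_logw = [], []
for _ in range(5):
    draws = rng.normal(mu, sigma, size=500)
    # Per-iteration importance weights: log target minus log proposal density.
    logw = log_target(draws) + 0.5 * ((draws - mu) / sigma) ** 2 + np.log(sigma)
    all_draws.append(draws)
    all_logw.append(logw)
    # Adapt the proposal by moment matching on the current weighted sample.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mu = float(np.sum(w * draws))
    sigma = float(np.sqrt(np.sum(w * (draws - mu) ** 2)))

# Combine all iterations and form the self-normalised posterior mean.
draws = np.concatenate(all_draws)
logw = np.concatenate(all_logw)
w = np.exp(logw - logw.max())
w /= w.sum()
post_mean = float(np.sum(w * draws))
```

Unlike MCMC, no draw is ever rejected: every proposal contributes with a weight, which is what motivates the framework above.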
152.
A significant challenge in fitting metamodels of large-scale simulations with sufficient accuracy is in the computational time required for rigorous statistical validation. This paper addresses the statistical computation issues associated with the Bootstrap and modified PRESS statistic, which yield key metrics for error measurements in metamodelling validation. Experimentation is performed on different programming languages, namely, MATLAB, R, and Python, and implemented on different computing architectures including traditional multicore personal computers and high-power clusters with parallel computing capabilities. This study yields insight into the effect that programming languages and computing architecture have on the computational time for simulation metamodel validation. The experimentation is performed across two scenarios with varying complexity.
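To make the two validation metrics concrete, here is a small sketch in Python/NumPy (one of the three languages compared above) that computes the PRESS statistic via its leave-one-out shortcut and a bootstrap RMSE for a linear metamodel. The OLS metamodel, the synthetic data, and the replication count are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])
y = 3.0 + 2.0 * X[:, 1] + rng.normal(0.0, 0.1, n)

# Fit the (linear) metamodel and compute the hat matrix.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
H = X @ np.linalg.inv(X.T @ X) @ X.T
resid = y - X @ beta

# PRESS: sum of squared leave-one-out residuals, e_i / (1 - h_ii) for OLS.
press = float(np.sum((resid / (1.0 - np.diag(H))) ** 2))

# Bootstrap estimate of the metamodel's RMSE.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)               # resample rows with replacement
    b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    boot.append(np.sqrt(np.mean((y - X @ b) ** 2)))
rmse = float(np.mean(boot))
```

Both the bootstrap loop and the per-point PRESS computation are embarrassingly parallel, which is why the computing architectures compared above matter at large scale.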
153.
Christoph Gietl, Statistics, 2017, 51(3): 668-684
This paper proves continuity of f-projections and the continuous dependence of the limit matrix of the iterative proportional fitting procedure (IPF procedure) on the given matrix as well as the given marginals under certain regularity constraints. For finite spaces, the concept of f-projections of finite measures on a compact and convex set is introduced and continuity of f-projections is proven. This result is applied to the IPF procedure. Given a nonnegative matrix as well as row and column marginals, the IPF procedure generates a sequence of matrices, called the IPF sequence, by alternately fitting rows and columns to match their respective marginals. The procedure is equivalent to cyclic f-projections. If the IPF sequence converges, the application of the continuity of f-projections yields the continuous dependence of the limit matrix on the given matrix. By generalized convex programming and under some constraints, it is shown that the limit matrix of the IPF sequence continuously depends not only on the given matrix but also on the marginals.
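The alternating row/column fitting described above can be sketched in a few lines; the matrix and marginals below are toy values chosen for illustration.

```python
import numpy as np

def ipf(A, row_marg, col_marg, iters=200):
    """Iterative proportional fitting: alternately rescale rows and columns
    of a nonnegative matrix to match the given marginals."""
    M = A.astype(float).copy()
    for _ in range(iters):
        M *= (row_marg / M.sum(axis=1))[:, None]   # fit row sums
        M *= (col_marg / M.sum(axis=0))[None, :]   # fit column sums
    return M

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
M = ipf(A, row_marg=np.array([5.0, 5.0]), col_marg=np.array([4.0, 6.0]))
```

Continuity of the limit in the given matrix and in the marginals, which the paper establishes, means small perturbations of these inputs perturb `M` only slightly.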
154.
Single-index conditional quantile regression is proposed in order to overcome the dimensionality problem in nonparametric quantile regression. In the proposed method, the Bayesian elastic net is suggested for single-index quantile regression for estimation and variable selection. A Gaussian process prior is considered for the unknown link function, and a Gibbs sampler algorithm is adopted for posterior inference. The results of the simulation studies and a numerical example indicate that our proposed method, BENSIQReg, offers substantial improvements over two existing methods, SIQReg and BSIQReg. BENSIQReg consistently shows good convergence and attains the smallest median of mean absolute deviations and the smallest standard deviations of the three methods.
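As background for the quantile-regression building block above (not the authors' Bayesian elastic net sampler itself), the check (pinball) loss that defines conditional quantiles can be sketched as follows; the simulated data and the grid search are illustrative assumptions.

```python
import numpy as np

def check_loss(u, tau):
    # Pinball/check loss: minimising its expectation yields the tau-quantile.
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, 5000)

# Recover the 0.9 quantile of y by minimising the average check loss on a grid;
# the N(0, 1) 0.9 quantile is about 1.28.
grid = np.linspace(-3.0, 3.0, 601)
losses = [float(np.mean(check_loss(y - q, 0.9))) for q in grid]
q90 = float(grid[int(np.argmin(losses))])
```

In a single-index model the same loss is applied to residuals y - g(x'beta), with the link g given the Gaussian process prior mentioned above.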
155.
Abrupt changes often occur in environmental and financial time series. Most often, these changes are due to human intervention. Change point analysis is a statistical tool used to analyze sudden changes in observations along a time series. In this paper, we propose a Bayesian model for extreme values of environmental and economic datasets that present typical change point behavior. The model addresses the situation in which more than one change point can occur in a time series. Because maxima are analyzed, the distribution of each regime is a generalized extreme value distribution. The change points are unknown and are treated as parameters to be estimated. Simulations of extremes with two change points showed that the proposed algorithm can recover the true values of the parameters, in addition to detecting the true change points in different configurations. The number of change points is itself unknown, and the Bayesian estimation correctly identifies it in each application. Environmental and financial data were analyzed; the results showed the importance of accounting for change points in the data and revealed that these regime changes brought about an increase in the return levels, increasing the number of floods in cities along the rivers. Stock market levels showed the necessity of a model with three different regimes.
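To illustrate the change point idea on maxima, here is a sketch with a Gaussian segment cost standing in for the paper's generalized extreme value likelihood, and a single change point rather than several; all numeric settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated block maxima with a regime shift at index 60.
x = np.concatenate([rng.gumbel(10.0, 1.0, 60), rng.gumbel(14.0, 1.0, 40)])

def seg_cost(seg):
    # Profiled Gaussian negative log-likelihood of a segment (up to constants),
    # a simple stand-in for a per-regime extreme value likelihood.
    return len(seg) * np.log(seg.var() + 1e-12)

# Single change point: minimise the summed cost over all admissible splits.
lo = 5
costs = [seg_cost(x[:k]) + seg_cost(x[k:]) for k in range(lo, len(x) - lo)]
tau = int(np.argmin(costs)) + lo            # estimated change point index
```

The Bayesian model in the paper instead places priors on the change points and regime parameters and explores them jointly, which also yields the number of change points.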
156.
157.
In life testing, predicting failure times beyond the largest observed failure time is an important issue. Although the Rayleigh distribution is a suitable model for analyzing the lifetime of components that age rapidly over time, because its failure rate function is an increasing linear function of time, inference for a two-parameter Rayleigh distribution based on upper record values has not been addressed from the Bayesian perspective. This paper provides Bayesian analysis methods by proposing a noninformative prior distribution to analyze survival data, using a two-parameter Rayleigh distribution based on record values. In addition, we provide a pivotal quantity and an algorithm based on the pivotal quantity to predict the behavior of future survival records. We show that the proposed method is superior to the frequentist counterpart in terms of mean-squared error and bias through Monte Carlo simulations. For illustrative purposes, survival data on lung cancer patients are analyzed, and it is shown that the proposed model can be a good alternative when prior information is not available.
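For orientation, the two ingredients above, upper record values and Rayleigh scale estimation, can be sketched as follows. This uses the one-parameter Rayleigh and its closed-form full-sample MLE as an illustrative simplification, not the paper's two-parameter Bayesian analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
# Lifetimes from a Rayleigh distribution with scale 2:
# f(t) = (t / s^2) exp(-t^2 / (2 s^2)), with increasing linear hazard t / s^2.
t = rng.rayleigh(scale=2.0, size=2000)

# Upper record values: observations exceeding every earlier observation.
records = [float(t[0])]
for v in t[1:]:
    if v > records[-1]:
        records.append(float(v))

# Closed-form MLE of the scale from the full sample: sqrt(sum t_i^2 / (2n)).
s_hat = float(np.sqrt(np.sum(t ** 2) / (2 * len(t))))
```

The paper's harder problem is inference and prediction from the short, strictly increasing `records` sequence alone, which is where the pivotal quantity comes in.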
158.
This research was motivated by our goal to design an efficient clinical trial to compare two doses of docosahexaenoic acid supplementation for reducing the rate of earliest preterm births (ePTB) and/or preterm births (PTB). Dichotomizing continuous gestational age (GA) data using a classic binomial distribution results in a loss of information and reduced power. A distributional approach is an improved strategy that retains statistical power from the continuous distribution. However, appropriate distributions that fit the data properly, particularly in the tails, must be chosen, especially when the data are skewed. A recent study proposed a skew-normal method. We propose a three-component normal mixture model and introduce separate treatment effects at different components of GA. We evaluate the operating characteristics of the mixture, beta-binomial, and skew-normal models through simulation. We also apply these three methods to data from two completed clinical trials from the USA and Australia. Finite mixture models are shown to have favorable properties in PTB analysis but minimal benefit for ePTB analysis. Normal models on log-transformed data have the largest bias. We therefore recommend the finite mixture model for PTB studies; either the finite mixture model or the beta-binomial model is acceptable for ePTB studies.
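The distributional approach above can be illustrated by reading preterm-birth probabilities directly off a fitted mixture; the three components' weights, means, and standard deviations below are hypothetical values, not estimates from the trials analyzed in the paper.

```python
from math import erf, sqrt

def norm_cdf(x, mu, sd):
    # Normal CDF evaluated via the error function.
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

# Hypothetical three-component normal mixture for gestational age (weeks).
weights = [0.05, 0.15, 0.80]
means = [30.0, 36.0, 39.5]
sds = [2.5, 1.5, 1.2]

# Distributional estimates: P(PTB) = P(GA < 37), P(ePTB) = P(GA < 28).
p_ptb = sum(w * norm_cdf(37.0, m, s) for w, m, s in zip(weights, means, sds))
p_eptb = sum(w * norm_cdf(28.0, m, s) for w, m, s in zip(weights, means, sds))
```

Unlike dichotomizing GA and fitting a binomial, this retains the full continuous information, including the left tail that drives the ePTB rate.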
159.
The Markov chain Monte Carlo (MCMC) method generates samples from the posterior distribution and uses these samples to approximate expectations of quantities of interest. In the process, researchers have to decide whether the Markov chain has reached the desired posterior distribution, so convergence diagnostic tests are very important for judging whether the chain has reached the target distribution. Our interest in this study was to compare the performance of convergence diagnostic tests for all parameters of a Bayesian Cox regression model with different numbers of iterations, using both a simulation and a real lung cancer dataset.
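One widely used diagnostic of the kind compared above is the Gelman-Rubin potential scale reduction factor, sketched here on simulated chains; the synthetic "chains" and the common 1.1 rule of thumb are illustrative, and the paper's actual set of diagnostics may differ.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m chains of length n:
    values near 1 suggest the chains have reached a common distribution."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(5)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))             # all chains on target
stuck = mixed + np.array([[0.0], [0.0], [0.0], [3.0]])   # one chain off target
```

Here `gelman_rubin(mixed)` is close to 1, while `gelman_rubin(stuck)` is well above the usual 1.1 threshold, flagging non-convergence.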
160.