91.
Summary.  Structured additive regression models are perhaps the most commonly used class of models in statistical applications. This class includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes, and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage of our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged.
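As a toy illustration of the Laplace approximation at the heart of this approach, the sketch below fits a Gaussian to the posterior of a single latent Poisson log-rate. The data, prior values, and one-dimensional latent are all hypothetical; the actual method approximates marginals of a full latent field.

```python
import numpy as np

# Hypothetical toy: Poisson counts y with one latent log-rate x and a
# Gaussian prior x ~ N(mu0, 1/tau0).  The Laplace approximation fits a
# Gaussian at the posterior mode -- the building block that INLA nests.
y = np.array([3, 5, 2, 4])
mu0, tau0 = 0.0, 1.0
S, n = y.sum(), len(y)

# Newton iterations for the mode of log p(x | y)
x = 0.0
for _ in range(50):
    grad = (S - n * np.exp(x)) - tau0 * (x - mu0)   # d/dx log posterior
    hess = -n * np.exp(x) - tau0                    # second derivative
    x -= grad / hess

mode = x
sd = 1.0 / np.sqrt(n * np.exp(mode) + tau0)         # Laplace std. dev.
print("approx posterior of x: N(%.3f, %.3f^2)" % (mode, sd))
```

The approximating Gaussian is centred at the posterior mode with variance given by the inverse negative Hessian there; INLA applies this idea repeatedly, nested inside a numerical integration over the hyperparameters.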
92.
Abstract.  One of the main research areas in Bayesian Nonparametrics is the proposal and study of priors which generalize the Dirichlet process. In this paper, we provide a comprehensive Bayesian non-parametric analysis of random probabilities which are obtained by normalizing random measures with independent increments (NRMI). Special cases of these priors have already been shown to be useful for statistical applications such as mixture models and species sampling problems. However, in order to fully exploit these priors, the derivation of the posterior distribution of NRMIs is crucial: here we achieve this goal and, indeed, provide explicit and tractable expressions suitable for practical implementation. The posterior distribution of an NRMI turns out to be a mixture with respect to the distribution of a specific latent variable. The analysis is completed by the derivation of the corresponding predictive distributions and by a thorough investigation of the marginal structure. These results allow us to derive a generalized Blackwell–MacQueen sampling scheme, which is then adapted to cover mixture models driven by general NRMIs as well.
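For orientation, the classical Blackwell–MacQueen urn that the paper generalizes can be sketched for the plain Dirichlet-process case as follows (function names and parameter values are illustrative; the generalized scheme for arbitrary NRMIs is more involved):

```python
import random

def blackwell_macqueen(n, alpha, base_draw, rng=random.Random(0)):
    """Draw n values from a Dirichlet-process prior via the classical
    Blackwell-MacQueen urn: the i-th draw is fresh from the base
    measure with probability alpha / (alpha + i), and otherwise copies
    one of the earlier draws uniformly at random."""
    draws = []
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            draws.append(base_draw(rng))      # fresh draw from base measure
        else:
            draws.append(rng.choice(draws))   # reuse an existing value
    return draws

vals = blackwell_macqueen(100, alpha=2.0, base_draw=lambda r: r.gauss(0, 1))
print(len(set(vals)), "distinct values among", len(vals))
```

The ties among draws are what make such priors useful for clustering and species-sampling problems; the paper's predictive distributions play the role of the urn probabilities above for general NRMIs.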
93.
Dynamic programming (DP) is a fast, elegant method for solving many one-dimensional optimisation problems but, unfortunately, most problems in image analysis, such as restoration and warping, are two-dimensional. We consider three generalisations of DP. The first is iterated dynamic programming (IDP), where DP is used to recursively solve each of a sequence of one-dimensional problems in turn, to find a local optimum. A second algorithm is an empirical, stochastic optimiser, which is implemented by adding progressively less noise to IDP. The final approach replaces DP by a more computationally intensive Forward-Backward Gibbs Sampler, and uses a simulated annealing cooling schedule. Results are compared with existing pixel-by-pixel methods of iterated conditional modes (ICM) and simulated annealing in two applications: to restore a synthetic aperture radar (SAR) image, and to warp a pulsed-field electrophoresis gel into alignment with a reference image. We find that IDP and its stochastic variant outperform the remaining algorithms.
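A minimal sketch of the IDP idea, assuming a discrete label set and a squared-difference smoothness penalty on a toy step image (the objective, labels, and data below are invented for illustration, not the paper's SAR model):

```python
import numpy as np

def dp_chain(unary, labels, lam):
    """Exact DP over one 1D chain (Viterbi-style): minimize
    sum_t unary[t, k_t] + lam * (labels[k_t] - labels[k_{t-1}])**2."""
    T, K = unary.shape
    trans = lam * (labels[:, None] - labels[None, :]) ** 2   # K x K
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + trans                        # prev x cur
        back[t] = np.argmin(total, axis=0)
        cost = unary[t] + total[back[t], np.arange(K)]
    ks = np.empty(T, dtype=int)
    ks[-1] = int(np.argmin(cost))
    for t in range(T - 1, 0, -1):
        ks[t - 1] = back[t, ks[t]]
    return ks

def idp_restore(y, labels, lam, sweeps=3):
    """IDP sketch: each row is re-solved exactly by dp_chain while its
    neighbouring rows are held fixed; repeated sweeps reach a local
    optimum of sum (y - x)^2 + lam * sum of squared neighbour
    differences.  (Column sweeps would be added symmetrically.)"""
    x = labels[np.abs(y[:, :, None] - labels).argmin(-1)]    # nearest-label init
    for _ in range(sweeps):
        for i in range(y.shape[0]):
            unary = (y[i][:, None] - labels) ** 2            # data term
            if i > 0:                                        # vertical coupling
                unary = unary + lam * (x[i - 1][:, None] - labels) ** 2
            if i < y.shape[0] - 1:
                unary = unary + lam * (x[i + 1][:, None] - labels) ** 2
            x[i] = labels[dp_chain(unary, labels, lam)]
    return x

truth = np.zeros((12, 12)); truth[:, 6:] = 1.0               # step image
noisy = truth + np.random.default_rng(0).normal(0.0, 0.3, truth.shape)
restored = idp_restore(noisy, np.array([0.0, 1.0]), lam=1.0, sweeps=2)
```

Each row update is globally optimal given the other rows, which is why IDP tends to escape the poorer local optima that pixel-by-pixel ICM gets stuck in.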
94.
The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) are considered for inference about the scale parameter of the exponential distribution under moving extreme ranked set sampling (MERSS). Neither the MLE nor the LRT can be written in closed form. We therefore consider a modification of the MLE based on the technique suggested by Mehrotra and Nanda (Biometrika 61:601–606, 1974), and use this modified estimator to modify the LRT, yielding a closed-form test of a simple hypothesis against one-sided alternatives. The same idea is applied to the most powerful test (MPT) for a simple hypothesis versus a simple alternative, again producing a closed-form test against one-sided alternatives. The modified estimator proves to be a good competitor of the MLE, and the modified tests good competitors of the LRT, under both MERSS and simple random sampling (SRS).
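The lack of a closed-form MLE can be seen in a small simulation. The sketch below assumes a common MERSS convention in which the i-th measured unit is the maximum of a set of size i; the grid-search resolution and all parameter values are illustrative:

```python
import numpy as np

# MERSS sketch (illustrative convention): the i-th measured unit is the
# maximum of a simple random set of size i, drawn from Exp(theta_true).
rng = np.random.default_rng(1)
theta_true, n = 2.0, 50
set_sizes = np.arange(1, n + 1)
sample = np.array([rng.exponential(theta_true, size=i).max()
                   for i in set_sizes])

def neg_loglik(theta):
    # density of the max of i iid Exp(theta) variables:
    # (i/theta) * exp(-x/theta) * (1 - exp(-x/theta))**(i-1)
    z = sample / theta
    return -np.sum(np.log(set_sizes) - np.log(theta) - z
                   + (set_sizes - 1) * np.log1p(-np.exp(-z)))

# The score equation has no closed-form root, so maximize numerically
# (a simple grid search here; Newton or golden-section would also do).
grid = np.linspace(0.1, 10.0, 2000)
theta_hat = grid[np.argmin([neg_loglik(t) for t in grid])]
print("numerical MLE of theta:", theta_hat)
```

The `(1 - exp(-x/theta))**(i-1)` factor is what prevents a closed-form solution; the paper's modification replaces part of the score equation by an approximation so that the resulting estimator, and hence the modified tests, are available explicitly.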
95.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice a data driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other hand. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that the weighted kernel estimation can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimation in the tails that can occur when using modified kernels. The weighted kernel approach generalizes to the case of multivariate deconvolution density estimation in a very straightforward manner.
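The weighted kernel estimator itself is straightforward to write down. The sketch below shows the estimator with user-supplied weights and omits the paper's data-driven weight-selection step; uniform weights recover the usual KDE:

```python
import numpy as np

def weighted_kde(x_grid, data, weights, h):
    """Weighted Gaussian-kernel density estimate
         f_hat(x) = (1/h) * sum_i w_i * phi((x - X_i) / h),
    with w_i >= 0 and sum w_i = 1.  Uniform weights w_i = 1/n give the
    standard KDE; the paper chooses w to offset measurement-error bias."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-8
    u = (np.asarray(x_grid)[:, None] - np.asarray(data)[None, :]) / h
    kern = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return (kern * w).sum(axis=1) / h

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 200)                 # illustrative data
grid = np.linspace(-4.0, 4.0, 161)
f_hat = weighted_kde(grid, data, np.full(200, 1.0 / 200), h=0.4)
```

Because each summand is a non-negative kernel scaled by a non-negative weight, the estimate is non-negative everywhere, which is exactly the advantage over modified-kernel deconvolution estimators noted in the abstract.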
96.
When nonresponse is outcome dependent, pseudo-likelihood (PL) yields consistent regression coefficients without specifying the missing-data mechanism. However, it is onerous to derive parameter estimators, including their standard errors, from the regression coefficients under PL. The present study applies an imputation method to compute the asymptotic standard errors of the parameter estimators. The proposed method is simpler than the delta method, and in simulation and application studies it produced standard errors similar to those obtained by bootstrapping.
98.
We propose an improved difference-cum-exponential ratio type estimator for estimating the finite population mean in simple and stratified random sampling using two auxiliary variables. We obtain properties of the estimators up to the first order of approximation. The proposed class of estimators is found to be more efficient than the usual sample mean estimator, the ratio estimator, the exponential ratio type estimator, the usual two difference type estimators, and the estimators of Rao (1991), Gupta and Shabbir (2008), and Grover and Kaur (2011). We use six real data sets in simple random sampling and two in stratified sampling for numerical comparisons.
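For reference, the textbook comparators mentioned above can be computed as follows on synthetic data. The population and its correlation structure are invented purely for illustration, and the paper's proposed estimator is not reproduced here:

```python
import numpy as np

# Comparator estimators of a finite-population mean, using a single
# auxiliary variable x with known population mean Xbar.
rng = np.random.default_rng(3)
N, n = 1000, 100
x_pop = rng.uniform(10.0, 20.0, N)
y_pop = 3.0 * x_pop + rng.normal(0.0, 2.0, N)    # y strongly correlated with x
Xbar = x_pop.mean()

idx = rng.choice(N, size=n, replace=False)       # simple random sample
ybar, xbar = y_pop[idx].mean(), x_pop[idx].mean()

mean_est  = ybar                                             # usual sample mean
ratio_est = ybar * Xbar / xbar                               # classical ratio estimator
exp_ratio = ybar * np.exp((Xbar - xbar) / (Xbar + xbar))     # exponential ratio type
print(mean_est, ratio_est, exp_ratio)
```

When y and x are positively correlated, the ratio-type estimators exploit the known Xbar to correct the sample's over- or under-representation of x, which is why they typically beat the plain sample mean in efficiency.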
99.
Copulas have proved to be very successful tools for the flexible modeling of dependence. Bivariate copulas have been studied in depth in recent years, while constructing higher-dimensional copulas is still recognized as a difficult task. In this paper, we study higher-dimensional dependent reliability systems using a type of decomposition called a "vine," by which a multivariate distribution is decomposed into a cascade of bivariate copulas. Expressions for the system reliability of parallel, series, and k-out-of-n systems are obtained and then decomposed based on C-vine and D-vine copulas. Finally, a shutdown system is considered to illustrate the results obtained in the paper.
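For two components, the series and parallel reliabilities follow directly from a bivariate copula linking the marginal survival functions. The sketch below uses exponential lifetimes and a Clayton copula as an illustrative choice; the paper's C-vine and D-vine decompositions extend this to higher dimensions:

```python
import numpy as np

def clayton(u, v, theta):
    """Clayton copula C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta)."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def series_parallel_reliability(t, lam1, lam2, theta):
    """Two exponential components whose joint survival function is
    linked by a Clayton copula (an illustrative assumption).
      series:   both must survive   -> C(R1, R2)
      parallel: at least one works  -> R1 + R2 - C(R1, R2)
    The parallel formula is inclusion-exclusion on the survival events."""
    R1, R2 = np.exp(-lam1 * t), np.exp(-lam2 * t)   # marginal reliabilities
    both = clayton(R1, R2, theta)                   # P(T1 > t, T2 > t)
    return both, R1 + R2 - both

series, parallel = series_parallel_reliability(1.0, 0.5, 0.8, theta=2.0)
print("series:", series, "parallel:", parallel)
```

Since any copula is bounded above by min(u, v), the series reliability can never exceed that of the weakest component, while the parallel reliability is always at least that of the strongest; the vine construction chains such bivariate building blocks to handle k-out-of-n systems.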
100.
The occurrence of nonresponse is very common in surveys; it complicates the analysis and can lead to invalid inference. To counteract the adverse effects of this incompleteness, fresh imputation techniques are proposed that use multi-auxiliary variates for the estimation of the population mean on successive waves. Properties of the proposed estimators are elaborated, and the estimators are compared with the work of Priyanka et al. (2015). A detailed simulation study is carried out to substantiate the empirical and theoretical results, and several possible cases in which nonresponse can occur are addressed.