51.
The theory of higher-order asymptotics provides accurate approximations to posterior distributions, and to the corresponding tail areas, for a scalar parameter of interest, for practical use in Bayesian analysis. The aim of this article is to extend these approximations to pseudo-posterior distributions, i.e., posterior distributions based on a pseudo-likelihood function and a suitable prior, which prove particularly useful when the full likelihood is analytically or computationally infeasible. In particular, from a theoretical point of view, we derive the Laplace approximation for a pseudo-posterior distribution, and for the corresponding tail area, for a scalar parameter of interest, also in the presence of nuisance parameters. From a computational point of view, starting from these higher-order approximations, we discuss the higher-order tail area (HOTA) algorithm, which is useful for approximating marginal posterior distributions and related quantities. Compared with standard Markov chain Monte Carlo methods, the main advantage of the HOTA algorithm is that it gives independent samples at a negligible computational cost. The relevant computations are illustrated by two examples.
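The first-order Laplace step underlying such approximations can be sketched on a toy conjugate example. This is a plain Laplace approximation, not the HOTA algorithm itself, and all values below are hypothetical: the log-posterior is expanded to second order around its mode, giving a normal approximation whose tail area can be compared with the exact one.

```python
import math

def normal_sf(z):
    """Survival function of the standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def gamma_sf_int(t, shape, rate):
    """Exact P(theta > t) for Gamma(shape, rate) with integer shape, via
    the Poisson-sum identity P(Gamma > t) = P(Poisson(rate*t) <= shape-1)."""
    lam = rate * t
    term = math.exp(-lam)
    total = term
    for i in range(1, shape):
        term *= lam / i
        total += term
    return total

# Hypothetical posterior: theta | data ~ Gamma(shape=200, rate=40).
shape, rate = 200, 40
# Log-posterior l(theta) = (shape-1)*log(theta) - rate*theta, up to a constant.
mode = (shape - 1) / rate              # solves l'(theta) = 0
curv = (shape - 1) / mode ** 2         # -l''(theta) evaluated at the mode
sd = 1.0 / math.sqrt(curv)

t = 5.0
laplace_tail = normal_sf((t - mode) / sd)   # first-order Laplace tail area
exact_tail = gamma_sf_int(t, shape, rate)
print(laplace_tail, exact_tail)
```

For this well-concentrated posterior the first-order normal approximation is already close to the exact tail area; the higher-order corrections discussed in the article tighten it further.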
52.
In risk assessment, moment-independent sensitivity analysis (SA) techniques for reducing model uncertainty have attracted a great deal of attention from analysts and practitioners. They aim to measure the relative importance of an individual input, or a set of inputs, in determining the uncertainty of the model output by looking at the entire distribution range of the model output. In this article, along the lines of Plischke et al., we point out that the original moment-independent SA index (the delta index) can also be interpreted as a dependence measure between the model output and the input variables, and we introduce another moment-independent SA index (the extended delta index) based on copulas. Nonparametric methods for estimating the delta and extended delta indices are then proposed. Both methods need only a single set of samples to compute all the indices, thus overcoming the "curse of dimensionality." Finally, an analytical test example, a risk assessment model, and the Level E model are employed to compare the delta and extended delta indices and to test the two estimation methods. Results show that the delta and extended delta indices produce the same importance ranking in these three test examples. It is also shown that the two proposed estimation methods dramatically reduce the computational burden.
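A crude sketch of the delta idea, not the authors' copula-based estimator: estimate the marginal distribution of the output and its distribution conditional on slices of one input by histograms, and average the resulting total-variation distances. The toy model and all tuning choices below are illustrative.

```python
import random

random.seed(1)

def delta_index(xs, ys, n_xbins=10, n_ybins=20):
    """Crude histogram estimate of the moment-independent (delta) index
    0.5 * E_X[ integral |f_Y(y) - f_{Y|X}(y)| dy ], approximated by the
    average total-variation distance between the marginal histogram of Y
    and its histogram conditional on X falling in each equal-count slice."""
    n = len(xs)
    y_lo, y_hi = min(ys), max(ys)

    def ybin(y):
        b = int((y - y_lo) / (y_hi - y_lo) * n_ybins)
        return min(b, n_ybins - 1)

    marg = [0.0] * n_ybins
    for y in ys:
        marg[ybin(y)] += 1.0 / n
    # Condition on equal-count slices of X: sort by X, then split.
    order = sorted(range(n), key=lambda i: xs[i])
    delta, chunk = 0.0, n // n_xbins
    for m in range(n_xbins):
        idx = order[m * chunk:(m + 1) * chunk]
        cond = [0.0] * n_ybins
        for i in idx:
            cond[ybin(ys[i])] += 1.0 / len(idx)
        tv = 0.5 * sum(abs(marg[b] - cond[b]) for b in range(n_ybins))
        delta += (len(idx) / n) * tv
    return delta

# Toy model: the output depends strongly on X1 and weakly on X2.
n = 20000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
ys = [a + 0.2 * b for a, b in zip(x1, x2)]
d1 = delta_index(x1, ys)
d2 = delta_index(x2, ys)
print(d1, d2)
```

As expected for this model, the estimated delta for X1 comes out much larger than for X2, reproducing the importance ranking from a single sample set.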
53.
For a normal distribution with known variance, the standard confidence interval for the location parameter is derived from the classical Neyman procedure. When the parameter space is known to be restricted, the standard confidence interval is arguably unsatisfactory. Recent articles have addressed this problem and proposed confidence intervals for the mean of a normal distribution when the parameter is restricted to be nonnegative. In this article, we propose a new confidence interval, the rp interval, and derive the Bayesian credible interval and the likelihood ratio interval for a general restricted parameter space. We compare these intervals with the standard interval and the minimax interval. Simulation studies are undertaken to assess the performance of these confidence intervals.
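For the nonnegative-mean case, the Bayesian credible interval can be sketched directly: with a flat prior on [0, ∞), the posterior is a normal truncated at zero, and equal-tailed limits follow by inverting its CDF. A minimal sketch, with sigma and the observed values made up:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def truncated_credible_interval(x, sigma=1.0, level=0.95):
    """Equal-tailed credible interval for a normal mean restricted to
    [0, inf) under a flat prior on the restricted space: the posterior is
    N(x, sigma^2) truncated at zero; quantiles are found by bisection."""
    mass = 1.0 - norm_cdf(-x / sigma)          # posterior mass above zero

    def post_cdf(m):
        return (norm_cdf((m - x) / sigma) - norm_cdf(-x / sigma)) / mass

    def quantile(p):
        lo, hi = 0.0, x + 10.0 * sigma if x > 0 else 10.0 * sigma
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if post_cdf(mid) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    a = (1.0 - level) / 2.0
    return quantile(a), quantile(1.0 - a)

lo1, hi1 = truncated_credible_interval(-0.5)   # observation below the boundary
lo2, hi2 = truncated_credible_interval(4.0)    # observation far from the boundary
print((lo1, hi1), (lo2, hi2))
```

Far from the boundary the credible interval essentially coincides with the standard interval x ± 1.96σ, while near (or below) the boundary it stays inside [0, ∞), which is exactly the behavior the standard interval lacks.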
54.
This paper considers the design of accelerated life test (ALT) sampling plans under Type I progressive interval censoring with random removals. We assume that the lifetime of the products follows a Weibull distribution. Two levels of constant stress, both higher than the use condition, are used. The sample size and the acceptability constant that satisfy given levels of producer's risk and consumer's risk are found. In particular, the optimal stress level and the allocation proportion are obtained by minimizing the generalized asymptotic variance of the maximum likelihood estimators of the model parameters. Furthermore, for validation purposes, a Monte Carlo simulation is conducted to assess the true probability of acceptance for the derived sampling plans.
55.
Compensation for state-owned land use rights is one of the main points of contention in urban housing demolition in China. An empirical study of judicial decisions finds that most courts hold either that a plaintiff's claim for compensation for the state-owned land use right has no legal basis, or that such compensation is already included in the appraised value of the house. The crux of housing demolition is the land: compensating only the owner of the house while ignoring the holder of the state-owned land use right inevitably triggers extensive litigation and may even impede urbanization. Therefore, against the background of gradual reform, pilot programs for compensating state-owned land use rights could be launched; at the same time, the Regulation on the Expropriation of and Compensation for Housing on State-owned Land (《国有土地上房屋征收与补偿条例》) should be improved, with reasonable compensation established as the basic principle, and on that basis a system of compensation for state-owned land use rights, independent of housing demolition compensation, should be built.
56.
The asymptotic variance of the maximum likelihood estimate is proved to decrease when the maximization is restricted to a subspace that contains the true parameter value. Maximum likelihood estimation allows a systematic fitting of covariance models to the sample, which is important in data assimilation. The hierarchical maximum likelihood approach is applied to the spectral diagonal covariance model with different parameterizations of eigenvalue decay, and to the sparse inverse covariance model with specified parameter values on different sets of nonzero entries. It is shown computationally that using smaller sets of parameters can substantially decrease the sampling noise in high dimensions.
57.
This paper assesses the performance of common estimators that adjust for differences in covariates, such as matching and regression, when faced with so-called common support problems. It also shows how different procedures suggested in the literature affect the properties of such estimators. Based on an empirical Monte Carlo simulation design, a lack of common support is found to increase the root mean squared error of all investigated parametric and semiparametric estimators. Dropping observations that are off support usually improves their performance, although the magnitude of the improvement depends on the particular method used.
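The simplest support-enforcing procedure in this literature, a min-max rule, can be sketched as follows, with a single covariate standing in for the propensity score and all numbers purely illustrative:

```python
import random

random.seed(7)

# Hypothetical data: one covariate with imperfect overlap between groups.
treated = [random.gauss(1.0, 1.0) for _ in range(500)]
control = [random.gauss(0.0, 1.0) for _ in range(500)]

# Min-max common-support rule: keep only observations whose covariate value
# lies inside the overlap of the two groups' observed supports.
lo = max(min(treated), min(control))
hi = min(max(treated), max(control))
treated_on = [x for x in treated if lo <= x <= hi]
control_on = [x for x in control if lo <= x <= hi]

dropped = (len(treated) - len(treated_on)) + (len(control) - len(control_on))
print(lo, hi, dropped)
```

Subsequent matching or regression would then be run on `treated_on` and `control_on` only; the trade-off studied in the paper is that dropping off-support observations changes the estimand while typically reducing error.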
58.
For the hierarchical Poisson and gamma model, we calculate the Bayes posterior estimator of the parameter of the Poisson distribution under Stein's loss function, which penalizes gross overestimation and gross underestimation equally, together with the corresponding posterior expected Stein's loss (PESL). We also obtain the Bayes posterior estimator of the parameter under the squared error loss and the corresponding PESL. Moreover, we obtain empirical Bayes estimators of the parameter of the Poisson distribution with a conjugate gamma prior by two methods. Numerical simulations illustrate the two inequalities between the Bayes posterior estimators and the PESLs, the consistency of the moment estimators and the maximum likelihood estimators (MLEs) of the hyperparameters, and the goodness of fit of the model to the simulated data. The numerical results indicate that the MLEs are better than the moment estimators for estimating the hyperparameters. Finally, we use attendance data on 314 high school juniors from two urban high schools to illustrate our theoretical studies.
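Under the conjugate structure both estimators have closed forms, and the Stein's-loss one can be checked by Monte Carlo. A sketch with made-up hyperparameters, using the standard facts that the posterior is Gamma(a + x, rate b + 1) and that the Bayes estimator under Stein's loss is 1/E[1/θ | x]:

```python
import random

random.seed(0)

# Hierarchical Poisson-gamma model (conjugate): theta ~ Gamma(a, rate=b),
# x | theta ~ Poisson(theta), hence theta | x ~ Gamma(a + x, rate=b + 1).
a, b, x = 3.0, 2.0, 7

# Closed forms: under Stein's loss L(theta, d) = d/theta - log(d/theta) - 1
# the Bayes estimator is 1/E[1/theta | x] = (a + x - 1)/(b + 1); under
# squared error it is the posterior mean (a + x)/(b + 1).
stein_est = (a + x - 1) / (b + 1)
se_est = (a + x) / (b + 1)

# Monte Carlo check of the Stein estimator: 1 / (sample mean of 1/theta)
# over draws from the posterior (gammavariate takes shape and *scale*).
draws = [random.gammavariate(a + x, 1.0 / (b + 1)) for _ in range(200000)]
mc_stein = 1.0 / (sum(1.0 / t for t in draws) / len(draws))
print(stein_est, se_est, mc_stein)
```

The simulation also exhibits one of the inequalities the article illustrates: the Stein's-loss estimator is always strictly smaller than the squared-error (posterior mean) estimator.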
59.
Outlier detection algorithms are intimately connected with robust statistics that down-weight some observations to zero. We define a number of outlier detection algorithms related to the Huber-skip and least trimmed squares estimators, including the one-step Huber-skip estimator and the forward search. Next, we review a recently developed asymptotic theory of these algorithms. Finally, we analyse the gauge, the fraction of wrongly detected outliers, for a number of outlier detection algorithms, and establish asymptotic normal and Poisson theories for the gauge.
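The gauge is easy to illustrate by simulation: on clean data, a fixed-cutoff rule should wrongly flag roughly the nominal fraction of observations. A one-shot sketch, not the iterated Huber-skip or the forward search, with cutoff and sample size chosen arbitrarily:

```python
import math
import random

random.seed(42)

# Clean location model (no true outliers): the gauge is the fraction of
# good observations wrongly flagged by a fixed-cutoff detection rule.
n, c = 5000, 2.576          # cutoff near the two-sided normal 1% point
xs = [random.gauss(0.0, 1.0) for _ in range(n)]

mean = sum(xs) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in xs) / (n - 1))
flagged = sum(1 for v in xs if abs(v - mean) / sd > c)
gauge = flagged / n
print(gauge)
```

The observed gauge should be close to P(|Z| > 2.576) ≈ 0.01; the article's asymptotic theory makes statements of exactly this kind precise, including for iterated versions of the rule.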
60.
Monte Carlo methods are used to compare maximum likelihood and least squares estimation of a cumulative distribution function. When the probabilistic model used is correct or nearly correct, the two methods produce similar results, with the MLE usually slightly superior. When an incorrect model is used, or when the data are contaminated, the least squares technique often gives substantially better results.
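A minimal version of such a comparison for the correctly specified case, with an exponential model; the distribution, sample size, replication count, and grid are all chosen for illustration:

```python
import math
import random

random.seed(3)

def mle_rate(sample):
    """MLE of the exponential rate: 1 / sample mean."""
    return len(sample) / sum(sample)

def ls_rate(sample):
    """Least-squares fit of the model CDF 1 - exp(-lam*x) to the
    empirical CDF, by grid search over the rate."""
    xs = sorted(sample)
    n = len(xs)
    emp = [(i + 0.5) / n for i in range(n)]       # plotting positions
    best, best_sse = None, float("inf")
    for k in range(20, 301):
        lam = k * 0.01                            # grid over [0.20, 3.00]
        sse = sum((e - (1.0 - math.exp(-lam * v))) ** 2
                  for v, e in zip(xs, emp))
        if sse < best_sse:
            best, best_sse = lam, sse
    return best

true_lam, reps, n = 1.0, 300, 80
mse_mle = mse_ls = 0.0
for _ in range(reps):
    sample = [random.expovariate(true_lam) for _ in range(n)]
    mse_mle += (mle_rate(sample) - true_lam) ** 2 / reps
    mse_ls += (ls_rate(sample) - true_lam) ** 2 / reps
print(mse_mle, mse_ls)
```

Averaged over replications under the correct model, the MLE's mean squared error comes out slightly below that of the least squares fit, matching the abstract's conclusion for this case; under misspecification or contamination the ranking can reverse.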