Article Search
By access (number of articles):
  Paid full text: 2353
  Free: 16
  Free (domestic): 1
By discipline (number of articles):
  Management: 99
  Ethnology: 15
  Demography: 42
  Collected works: 7
  Theory and methodology: 61
  General: 21
  Sociology: 314
  Statistics: 1811
By publication year (number of articles):
  2024: 5
  2023: 49
  2022: 16
  2021: 30
  2020: 58
  2019: 87
  2018: 163
  2017: 284
  2016: 101
  2015: 93
  2014: 87
  2013: 830
  2012: 247
  2011: 32
  2010: 27
  2009: 39
  2008: 29
  2007: 27
  2006: 12
  2005: 21
  2004: 15
  2003: 5
  2002: 13
  2001: 10
  2000: 10
  1999: 7
  1998: 8
  1997: 7
  1996: 3
  1995: 5
  1994: 6
  1993: 2
  1992: 5
  1991: 1
  1990: 5
  1989: 4
  1988: 1
  1987: 2
  1986: 3
  1985: 1
  1984: 1
  1983: 7
  1982: 2
  1981: 2
  1980: 2
  1979: 1
  1978: 3
  1976: 1
  1975: 1
A total of 2370 results were found (search time: 31 ms).
1.
The Coalition for a Healthier Community (CHC) initiative was implemented to improve the health and well-being of women and girls. Underpinning CHC is a gender-based focus that uses a network of community partners working collaboratively to generate relevant behavior change and improved health outcomes. Ten programs are trying to determine whether gender-focused system approaches are cost-effective ways to address health disparities in women and girls. The programs, implemented through coalitions of academic institutions, public health departments, community-based organizations, and local, regional, and national organizations, address health issues such as domestic violence, cardiovascular disease prevention, physical activity, and healthy eating. Although these programs are ongoing, they have made significant progress. Key factors contributing to their early success include a comprehensive needs assessment, robust coalitions, the diversity of the populations targeted, programs grounded in the findings of the needs assessments, evaluations that take into account the effect of gender, and strong academic–community partnerships. A noteworthy impact of these programs has been their ability to shape public, social, and health policies at the state and local levels. However, there have been challenges associated with implementing such a complex program. Lessons learned are discussed in this paper.
2.
We consider a method of moments approach for dealing with censoring at zero for data expressed in levels when researchers would like to take logarithms. A Box–Cox transformation is employed. We explore this approach in the context of linear regression where both dependent and independent variables are censored. We contrast this method with two others: (1) dropping records of data containing censored values and (2) assuming normality for the censored observations and the residuals in the model. When researchers are interested primarily in the slope parameter, estimation bias is consistently reduced by using the method of moments approach.
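A minimal Python sketch of the key idea above, under simplifying assumptions: logarithms are undefined at zero, while a Box–Cox transform with λ > 0 remains finite there, so zero-censored records can stay in the regression. The simulated data-generating process, the fixed λ, and the plain OLS comparison are illustrative choices, not the paper's estimator.

```python
# Log transform vs. Box-Cox transform when the outcome is censored at zero.
# The DGP, the fixed lambda, and the OLS fits are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y_latent = np.exp(0.5 + 1.0 * x + rng.normal(scale=0.8, size=n))
y = np.where(y_latent < 1.0, 0.0, y_latent)            # censor small values at zero

def ols_slope(xv, yv):
    """OLS slope of yv on a constant and xv."""
    X = np.column_stack([np.ones_like(xv), xv])
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    return beta[1]

# Method (1) from the abstract: drop censored records, regress log(y) on x.
keep = y > 0
slope_drop = ols_slope(x[keep], np.log(y[keep]))

# Box-Cox alternative: for lam > 0 the transform (y**lam - 1)/lam is finite at y = 0,
# so the censored observations can remain in the sample (lam fixed here for illustration).
lam = 0.3
y_bc = (y ** lam - 1.0) / lam
slope_boxcox = ols_slope(x, y_bc)

print("slope, drop censored + log:", round(slope_drop, 3))
print("slope, Box-Cox, all data  :", round(slope_boxcox, 3))
```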
3.
4.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for studying statistical inference in constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is obtained via the missing information principle and is used to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis–Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. Three optimality criteria are considered to find the optimal stress level. A real data set is used to illustrate the usefulness of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
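The sketch below illustrates only the Metropolis–Hastings piece of the workflow described above, under heavy simplification: complete (uncensored) GHN data rather than the paper's progressively censored accelerated-life-test setting, with vague normal priors on the log-parameters and a random-walk proposal chosen for convenience.

```python
# Random-walk Metropolis-Hastings for the GHN parameters (alpha, theta) on complete
# data -- a simplified sketch, not the paper's censored ALT setup. The priors and
# proposal scale are illustrative assumptions.
import numpy as np

def ghn_loglik(x, alpha, theta):
    """Log-likelihood for the GHN density
    f(x) = sqrt(2/pi) * (alpha/x) * (x/theta)**alpha * exp(-0.5*(x/theta)**(2*alpha))."""
    z = x / theta
    return np.sum(0.5 * np.log(2.0 / np.pi) + np.log(alpha) - np.log(x)
                  + alpha * np.log(z) - 0.5 * z ** (2.0 * alpha))

def log_post(la, lt, x):
    """Log-posterior on the log scale; vague N(0, 10^2) priors on log-params (assumption)."""
    return ghn_loglik(x, np.exp(la), np.exp(lt)) - (la ** 2 + lt ** 2) / (2 * 10.0 ** 2)

rng = np.random.default_rng(1)

# Simulate GHN data using X = theta * |Z|**(1/alpha), Z ~ N(0,1), which follows from
# the GHN CDF F(x) = 2*Phi((x/theta)**alpha) - 1.
alpha_true, theta_true, n = 1.5, 2.0, 200
x = theta_true * np.abs(rng.normal(size=n)) ** (1.0 / alpha_true)

la, lt = 0.0, 0.0                                   # start at alpha = theta = 1
cur = log_post(la, lt, x)
draws = []
for it in range(20000):
    la_p, lt_p = la + 0.1 * rng.normal(), lt + 0.1 * rng.normal()
    prop = log_post(la_p, lt_p, x)
    if np.log(rng.uniform()) < prop - cur:          # symmetric proposal: ratio of posteriors
        la, lt, cur = la_p, lt_p, prop
    if it >= 5000:                                  # discard burn-in
        draws.append((np.exp(la), np.exp(lt)))

print("posterior means (alpha, theta):", np.array(draws).mean(axis=0).round(3))
```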
5.
6.
It is well known that, under Type II double censoring, the maximum likelihood (ML) estimators of the location and scale parameters, θ and δ, of a two-parameter exponential distribution are linear functions of the order statistics. In contrast, when θ is known, the ML estimator of δ does not admit a closed-form expression. It is shown, however, that the ML estimator of the scale parameter exists and is unique. Moreover, it has good large-sample properties. In addition, sharp lower and upper bounds for this estimator are provided, which can serve as starting points for iterative interpolation methods such as regula falsi. Explicit expressions for the expected Fisher information and the Cramér–Rao lower bound are also derived. In the Bayesian context, assuming an inverted gamma prior on δ, the uniqueness, boundedness and asymptotics of the highest posterior density estimator of δ can be deduced in a similar way. Finally, an illustrative example is included.
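As a rough numerical sketch of the setting above: with θ known, the doubly Type II censored exponential log-likelihood can be written down directly and its score equation solved by regula falsi. The simulated data, the finite-difference score, and the wide starting bracket are illustrative assumptions; the paper derives sharp bounds that could serve as the bracket instead.

```python
# ML estimation of the exponential scale delta under Type II double censoring with
# known location theta: the r smallest and s largest of n order statistics are censored.
# The score is approximated by a central difference and solved by regula falsi.
import numpy as np

def loglik(delta, x_obs, theta, r, s):
    """Log-likelihood for the observed order statistics x_(r+1) <= ... <= x_(n-s)."""
    z = (x_obs - theta) / delta
    ll = -len(x_obs) * np.log(delta) - z.sum()      # product of densities
    ll += r * np.log1p(-np.exp(-z[0]))              # r values censored below x_(r+1): F^r
    ll += -s * z[-1]                                # s values censored above x_(n-s): (1-F)^s
    return ll

def regula_falsi(g, a, b, tol=1e-8, max_iter=200):
    """False-position root finding for g on [a, b] with g(a)*g(b) < 0."""
    ga, gb = g(a), g(b)
    for _ in range(max_iter):
        c = b - gb * (b - a) / (gb - ga)
        gc = g(c)
        if abs(gc) < tol:
            return c
        if ga * gc < 0:
            b, gb = c, gc
        else:
            a, ga = c, gc
    return c

rng = np.random.default_rng(2)
theta, delta_true, n, r, s = 1.0, 2.0, 50, 5, 5
x = np.sort(theta + rng.exponential(delta_true, size=n))[r:n - s]   # observed middle order stats

score = lambda d: (loglik(d + 1e-6, x, theta, r, s) - loglik(d - 1e-6, x, theta, r, s)) / 2e-6
print("ML estimate of delta:", round(regula_falsi(score, 0.1, 20.0), 4))
```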
7.
On Optimality of Bayesian Wavelet Estimators   (total citations: 2; self-citations: 0; citations by others: 2)
Abstract.  We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely the posterior mean, the posterior median and the Bayes Factor, where the prior imposed on the wavelet coefficients is a mixture of a point mass at zero and a Gaussian density. We show that, in terms of the mean squared error, for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve optimal minimax rates within any prescribed Besov space B^s_{p,q} for p ≥ 2. For 1 ≤ p < 2, the Bayes Factor is still optimal for (2s+2)/(2s+1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which can achieve only the best possible rates for linear estimators in this case.
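For a single coefficient observed with Gaussian noise under the point-mass-plus-Gaussian prior described above, the posterior quantities are available in closed form. The sketch below computes the posterior inclusion probability, the posterior mean, and one common version of the Bayes-factor keep/kill rule (keep when the posterior odds of a nonzero coefficient exceed 1); the hyperparameter values are illustrative, not the calibrated choices analysed in the paper.

```python
# Posterior mean and Bayes-factor thresholding for one wavelet coefficient:
# y = d + eps, eps ~ N(0, sigma^2), prior d ~ pi*N(0, tau^2) + (1 - pi)*delta_0.
# Hyperparameter values are illustrative assumptions, not the paper's calibration.
import numpy as np
from scipy.stats import norm

def posterior_summaries(y, sigma, tau, pi):
    """Return (P(d != 0 | y), posterior mean of d, keep flag from the odds rule)."""
    m_signal = norm.pdf(y, scale=np.sqrt(sigma**2 + tau**2))   # marginal of y if d != 0
    m_null = norm.pdf(y, scale=sigma)                          # marginal of y if d  = 0
    post_prob = pi * m_signal / (pi * m_signal + (1 - pi) * m_null)
    shrink = tau**2 / (tau**2 + sigma**2)       # E[d | y, d != 0] = shrink * y
    post_mean = post_prob * shrink * y
    post_odds = (pi * m_signal) / ((1 - pi) * m_null)   # keep the coefficient if odds > 1
    return post_prob, post_mean, post_odds > 1.0

for y in [0.5, 2.0, 5.0]:
    p, mean, keep = posterior_summaries(y, sigma=1.0, tau=3.0, pi=0.2)
    print(f"y={y:4.1f}  P(d!=0|y)={p:.3f}  posterior mean={mean:.3f}  keep={keep}")
```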
8.
This article proposes a new data-based prior distribution for the error variance in a Gaussian linear regression model, when the model is used for Bayesian variable selection and model averaging. For a given subset of variables in the model, this prior has a mode that is an unbiased estimator of the error variance but is suitably dispersed to make it uninformative relative to the marginal likelihood. The advantage of this empirical Bayes prior for the error variance is that it is centred and dispersed sensibly and avoids the arbitrary specification of hyperparameters. The performance of the new prior is compared to that of a prior proposed previously in the literature using several simulated examples and two loss functions. For each example our paper also reports results for the model that orthogonalizes the predictor variables before performing subset selection. A real example is also investigated. The empirical results suggest that for both the simulated and real data, the performance of the estimators based on the prior proposed in our article compares favourably with that of a prior used previously in the literature.
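The following sketch only illustrates the centring idea described above: an inverse-gamma prior for the error variance whose mode equals the subset-specific unbiased estimate RSS/(n − k). The small shape parameter used to keep the prior diffuse is an assumption for illustration; the paper's actual dispersion calibration relative to the marginal likelihood is not reproduced here.

```python
# Centre an inverse-gamma prior for the error variance at the unbiased OLS estimate
# for a given subset of predictors. The shape value keeping the prior diffuse is an
# illustrative assumption, not the paper's calibration.
import numpy as np

def error_variance_prior(y, X_subset, shape=2.5):
    """Return (shape a, scale b) of an inverse-gamma prior whose mode b/(a + 1)
    equals s^2 = RSS / (n - k), the unbiased error-variance estimate for the subset."""
    n, k = X_subset.shape
    beta_hat, *_ = np.linalg.lstsq(X_subset, y, rcond=None)
    rss = np.sum((y - X_subset @ beta_hat) ** 2)
    s2 = rss / (n - k)
    return shape, s2 * (shape + 1.0)            # mode of IG(a, b) is b / (a + 1)

rng = np.random.default_rng(3)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ np.array([1.0, 2.0, 0.0, -1.0]) + rng.normal(scale=1.5, size=n)

a, b = error_variance_prior(y, X)
print(f"IG prior: shape={a}, scale={b:.3f}, mode={b / (a + 1):.3f} (matches s^2)")
```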
9.
Summary.  Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this by using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard thresholded coefficients.
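A minimal sketch of the basic three-step pipeline spelled out in the summary above (transform, threshold, invert), using real-valued Daubechies wavelets via PyWavelets with the universal threshold and a MAD noise estimate. The test signal and wavelet choice are illustrative, and the paper's complex-valued and Bayesian refinements are not reproduced here.

```python
# Basic real-valued wavelet shrinkage: DWT -> hard-threshold detail coefficients ->
# inverse DWT. Universal threshold with MAD noise estimate; signal and wavelet choice
# are illustrative assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(4)
n = 1024
t = np.linspace(0, 1, n)
signal = np.where(t < 0.5, np.sin(4 * np.pi * t), 2.0)     # curve with a jump at t = 0.5
noisy = signal + 0.3 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, 'db4', level=6)                # step 1: discrete wavelet transform

# Step 2: hard-threshold the detail coefficients; sigma estimated from the finest level.
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
lam = sigma * np.sqrt(2.0 * np.log(n))                      # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, lam, mode='hard') for c in coeffs[1:]]

estimate = pywt.waverec(coeffs, 'db4')[:n]                  # step 3: inverse transform
print("RMSE noisy   :", round(np.sqrt(np.mean((noisy - signal) ** 2)), 4))
print("RMSE denoised:", round(np.sqrt(np.mean((estimate - signal) ** 2)), 4))
```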
10.
Summary.  Given a large number of test statistics, a small proportion of which represent departures from the relevant null hypothesis, a simple rule is given for choosing those statistics that are indicative of departure. It is based on fitting by moments a mixture model to the set of test statistics and then deriving an estimated likelihood ratio. Simulation suggests that the procedure has good properties when the departure from an overall null hypothesis is not too small.
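A small sketch of the kind of rule described above, under simplifying assumptions not taken from the paper: the statistics are N(0,1) under the null and N(μ, 1) under the alternative, so the mixture weight and μ follow from the first two sample moments, and a weighted likelihood ratio (posterior odds) flags the departures.

```python
# Fit a two-component mixture to z-statistics by moments and flag departures with an
# estimated (weighted) likelihood ratio. Assumptions: null component N(0,1), alternative
# N(mu,1) with unknown mu and weight p -- a simplification, not the paper's model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])   # 10% true departures

# Moment equations under the assumed mixture: E[Z] = p*mu and E[Z^2] = 1 + p*mu^2.
m1, m2 = z.mean(), np.mean(z ** 2)
mu_hat = (m2 - 1.0) / m1
p_hat = m1 / mu_hat

# Estimated odds of "departure" vs "null" for each statistic; flag when odds exceed 1.
lr = (p_hat * norm.pdf(z, loc=mu_hat)) / ((1.0 - p_hat) * norm.pdf(z))
flagged = lr > 1.0

print(f"p_hat={p_hat:.3f}, mu_hat={mu_hat:.3f}, flagged {flagged.sum()} of {len(z)} statistics")
```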