By access type: fee-based full text (1135 articles), free (35), free within China (1).
By subject: Management (51), Ethnology (1), Demography (18), Collected works (24), Theory and methodology (13), General (135), Sociology (18), Statistics (911).
By year: 2023 (7), 2022 (5), 2021 (14), 2020 (21), 2019 (33), 2018 (33), 2017 (66), 2016 (20), 2015 (24), 2014 (40), 2013 (360), 2012 (110), 2011 (34), 2010 (35), 2009 (53), 2008 (44), 2007 (40), 2006 (31), 2005 (36), 2004 (23), 2003 (10), 2002 (14), 2001 (13), 2000 (14), 1999 (11), 1998 (11), 1997 (8), 1996 (4), 1995 (5), 1994 (6), 1993 (2), 1992 (6), 1991 (2), 1990 (5), 1989 (4), 1988 (1), 1987 (2), 1986 (3), 1985 (2), 1984 (1), 1983 (7), 1982 (2), 1981 (1), 1980 (1), 1979 (1), 1978 (3), 1976 (1), 1975 (1), 1966 (1).
A total of 1171 results were retrieved.
1.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for studying statistical inference in constant-stress accelerated life testing. The EM algorithm is used to calculate the maximum likelihood estimates. The Fisher information matrix is obtained via the missing-information principle and is utilized for constructing asymptotic confidence intervals. Further, interval estimation is discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure and the Metropolis-Hastings algorithm are utilized to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. We consider three optimality criteria to find the optimal stress level. A real data set is used to illustrate the usefulness of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
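As a rough illustration of the random-walk Metropolis-Hastings step mentioned above, the Python sketch below samples from a user-supplied log-posterior; the `log_posterior` used in the toy call is a placeholder, not the actual posterior of the censored GHN accelerated-life model.

```python
import numpy as np

def metropolis_hastings(log_posterior, theta0, n_iter=10000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler (illustrative sketch only).

    log_posterior : callable returning the log posterior density (up to a
                    constant); here a stand-in for whatever posterior arises
                    from the censored GHN model.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    current_lp = log_posterior(theta)
    samples = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        proposal_lp = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < proposal_lp - current_lp:
            theta, current_lp = proposal, proposal_lp
        samples[i] = theta
    return samples

# Toy usage with a standard bivariate normal log-posterior, purely to show the
# mechanics; a real application would plug in the censored-data likelihood
# plus prior for the GHN accelerated-life model.
draws = metropolis_hastings(lambda t: -0.5 * np.sum(t**2), theta0=[0.0, 0.0])
posterior_mean = draws[2000:].mean(axis=0)  # discard burn-in, average the rest
```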
2.
It is well known that, under Type II double censoring, the maximum likelihood (ML) estimators of the location and scale parameters, θ and δ, of a two-parameter exponential distribution are linear functions of the order statistics. In contrast, when θ is known, the ML estimator of δ does not admit a closed-form expression. It is shown, however, that the ML estimator of the scale parameter exists and is unique. Moreover, it has good large-sample properties. In addition, sharp lower and upper bounds for this estimator are provided, which can serve as starting points for iterative interpolation methods such as regula falsi. Explicit expressions for the expected Fisher information and the Cramér-Rao lower bound are also derived. In the Bayesian context, assuming an inverted gamma prior on δ, the uniqueness, boundedness and asymptotics of the highest posterior density estimator of δ can be deduced in a similar way. Finally, an illustrative example is included.
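To illustrate how such bracketing bounds can seed an iterative interpolation method, here is a minimal regula falsi sketch in Python; the function solved in the usage line is a placeholder, not the paper's likelihood equation.

```python
def regula_falsi(f, lo, hi, tol=1e-10, max_iter=200):
    """Find a root of f in [lo, hi] by the regula falsi (false position) method.

    lo and hi must bracket the root, e.g. the sharp lower and upper bounds
    for the ML estimator of the scale parameter described in the abstract.
    """
    f_lo, f_hi = f(lo), f(hi)
    if f_lo * f_hi > 0:
        raise ValueError("interval does not bracket a root")
    for _ in range(max_iter):
        # Interpolate linearly between the two bracketing endpoints
        x = hi - f_hi * (hi - lo) / (f_hi - f_lo)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if f_lo * fx < 0:
            hi, f_hi = x, fx
        else:
            lo, f_lo = x, fx
    return x

# Toy usage: solve a placeholder score equation d/d(delta) log L(delta) = 0.
# The function below is purely illustrative, not the double-censoring score.
root = regula_falsi(lambda d: 5.0 / d - 2.0, lo=0.5, hi=10.0)  # root at 2.5
```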
3.
On Optimality of Bayesian Wavelet Estimators
Abstract. We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely the posterior mean, the posterior median and the Bayes Factor, where the prior imposed on the wavelet coefficients is a mixture of a point mass at zero and a Gaussian density. We show that, in terms of the mean squared error, for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve optimal minimax rates within any prescribed Besov space B^s_{p,q} for p ≥ 2. For 1 ≤ p < 2, the Bayes Factor is still optimal for (2s+2)/(2s+1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which in this case can achieve only the best possible rates for linear estimators.
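As a hedged illustration of the point-mass-plus-Gaussian prior, the sketch below computes, for a single noisy wavelet coefficient, the posterior probability of a non-zero signal and the posterior mean; the hyperparameter choices that drive the minimax results, and the paper's exact Bayes Factor and posterior-median rules, are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def spike_gaussian_posterior(d, sigma, tau, pi):
    """Posterior quantities for one noisy coefficient d ~ N(theta, sigma^2)
    under the mixture prior  theta ~ pi * N(0, tau^2) + (1 - pi) * delta_0.

    Returns the posterior probability that theta != 0 and the posterior mean.
    The values of sigma, tau and pi are user-supplied here, not the
    hyperparameter choices analysed in the paper.
    """
    marginal_signal = norm.pdf(d, loc=0.0, scale=np.sqrt(sigma**2 + tau**2))
    marginal_null = norm.pdf(d, loc=0.0, scale=sigma)
    odds = (pi * marginal_signal) / ((1.0 - pi) * marginal_null)
    prob_nonzero = odds / (1.0 + odds)
    shrink = tau**2 / (sigma**2 + tau**2)          # posterior shrinkage factor
    post_mean = prob_nonzero * shrink * d
    return prob_nonzero, post_mean

# Crude keep/kill decision in the spirit of a Bayes-factor rule: retain the
# coefficient only when the posterior odds favour the "signal" component.
p, m = spike_gaussian_posterior(d=2.3, sigma=1.0, tau=2.0, pi=0.2)
keep_coefficient = p > 0.5
```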
4.
Taking Dongguan, a city with a well-developed exhibition industry, as the study site, this study applies a combined quantitative approach of factor analysis and cluster analysis to comprehensively evaluate and classify the motivations of professional visitors for attending exhibitions. The results show that professional visitors' non-purchasing motives outweigh their purchasing motives, and that their motivations comprise four dimensional factors, in order: information gathering, building market relationships, inspection and incentive, and purchasing behaviour. The study also finds that professional visitors' motivations fall into three types: vague-goal, information-gathering-oriented, and clearly defined multi-goal.
5.
This article proposes a new data-based prior distribution for the error variance in a Gaussian linear regression model, when the model is used for Bayesian variable selection and model averaging. For a given subset of variables in the model, this prior has a mode that is an unbiased estimator of the error variance but is suitably dispersed to make it uninformative relative to the marginal likelihood. The advantage of this empirical Bayes prior for the error variance is that it is centred and dispersed sensibly and avoids the arbitrary specification of hyperparameters. The performance of the new prior is compared to that of a prior proposed previously in the literature using several simulated examples and two loss functions. For each example our paper also reports results for the model that orthogonalizes the predictor variables before performing subset selection. A real example is also investigated. The empirical results suggest that for both the simulated and real data, the performance of the estimators based on the prior proposed in our article compares favourably with that of a prior used previously in the literature.
6.
Summary. Factor analysis is a powerful tool to identify the common characteristics among a set of variables that are measured on a continuous scale. In the context of factor analysis for non-continuous data, most applications are restricted to item response data only. We extend the factor model to accommodate ranked data. The Monte Carlo expectation–maximization algorithm is used for parameter estimation, with the E-step implemented via the Gibbs sampler. An analysis based on both complete and incomplete ranked data (e.g. rank the top q out of k items) is considered. Estimation of the factor scores is also discussed. The method proposed is applied to analyse a set of incomplete ranked data obtained from a survey carried out in GuangZhou, a major city in mainland China, to investigate the factors affecting people's attitudes towards choosing jobs.
7.
Summary. Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this by using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard thresholded coefficients.
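A bare-bones version of the transform-shrink-invert recipe described above, using real-valued Daubechies wavelets via the PyWavelets package and a universal hard threshold; the complex-valued and Bayesian shrinkage rules proposed in the paper are not implemented here.

```python
import numpy as np
import pywt  # PyWavelets

def hard_threshold_denoise(y, wavelet="db4", level=4):
    """Denoise a 1-D signal by the classic transform -> shrink -> invert recipe.

    Uses a universal hard threshold with the noise level estimated from the
    finest-scale detail coefficients (median absolute deviation). This is a
    plain real-valued baseline, not the paper's complex-valued shrinkage.
    """
    coeffs = pywt.wavedec(y, wavelet, level=level)      # 1. forward DWT
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(y)))      # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard")
                            for c in coeffs[1:]]        # 2. shrink details only
    return pywt.waverec(shrunk, wavelet)[: len(y)]      # 3. inverse DWT

# Toy usage: a noisy signal with a jump discontinuity, dyadic length.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 1024)
signal = np.where(x < 0.5, 0.0, 1.0)
estimate = hard_threshold_denoise(signal + 0.1 * rng.standard_normal(x.size))
```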
8.
Summary. Given a large number of test statistics, a small proportion of which represent departures from the relevant null hypothesis, a simple rule is given for choosing those statistics that are indicative of departure. It is based on fitting a mixture model by moments to the set of test statistics and then deriving an estimated likelihood ratio. Simulation suggests that the procedure has good properties when the departure from an overall null hypothesis is not too small.
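The abstract does not state the mixture's exact form. Assuming, purely for illustration, z-statistics modelled as (1 − p)·N(0, 1) + p·N(μ, 1), the sketch below matches the first two moments to estimate p and μ and then flags statistics whose estimated likelihood ratio exceeds a cutoff.

```python
import numpy as np
from scipy.stats import norm

def select_by_estimated_lr(z, lr_cutoff=1.0):
    """Flag test statistics indicative of departure from the null.

    Hypothetical working model (not necessarily the paper's):
        Z ~ (1 - p) * N(0, 1) + p * N(mu, 1).
    The mixture is fitted by matching the first two moments, then each z is
    scored by the estimated likelihood ratio of 'alternative' to 'null'.
    """
    z = np.asarray(z, dtype=float)
    m1, m2 = z.mean(), np.mean(z**2)
    mu_hat = (m2 - 1.0) / m1                 # from E[Z] = p*mu, E[Z^2] = 1 + p*mu^2
    p_hat = np.clip(m1 / mu_hat, 1e-6, 1 - 1e-6)
    lr = (p_hat * norm.pdf(z, loc=mu_hat, scale=1.0)) / (
        (1.0 - p_hat) * norm.pdf(z, loc=0.0, scale=1.0))
    return lr > lr_cutoff, p_hat, mu_hat

# Toy usage: 950 null statistics plus 50 shifted ones.
rng = np.random.default_rng(2)
z = np.concatenate([rng.standard_normal(950), rng.standard_normal(50) + 3.0])
flagged, p_hat, mu_hat = select_by_estimated_lr(z, lr_cutoff=5.0)
```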
9.
It is often of interest to find the maximum or near maxima among a set of vector-valued parameters in a statistical model; in the case of disease mapping, for example, these correspond to relative-risk "hotspots" where public-health intervention may be needed. The general problem is one of estimating nonlinear functions of the ensemble of relative risks, but biased estimates result if posterior means are simply substituted into these nonlinear functions. The authors obtain better estimates of extrema from a new, weighted ranks squared error loss function. The derivation of these Bayes estimators assumes a hidden-Markov random-field model for relative risks, and their behaviour is illustrated with real and simulated data.
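A small synthetic illustration of the plug-in issue mentioned above: the maximum of the posterior means generally differs from the posterior expectation of the maximum. The draws below are purely synthetic, and neither quantity is the paper's weighted-ranks estimator.

```python
import numpy as np

# Toy "posterior draws" of area-level relative risks, shape (n_draws, n_areas).
# Purely synthetic; a real analysis would use MCMC output from the
# hidden-Markov random-field model described in the abstract.
rng = np.random.default_rng(3)
theta = rng.gamma(shape=5.0, scale=0.2, size=(4000, 50))

plug_in = np.max(theta.mean(axis=0))    # substitute posterior means into max()
post_of_max = theta.max(axis=1).mean()  # posterior expectation of the maximum

# The two generally disagree, which is the plug-in problem motivating the
# paper's weighted-ranks squared-error-loss estimator (not implemented here).
print(plug_in, post_of_max)
```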
10.
To explore the projection efficiency of a design, Tsai et al. (2000) [Projective three-level main effects designs robust to model uncertainty. Biometrika 87, 467–475] introduced the Q criterion to compare three-level main-effects designs for quantitative factors, allowing the consideration of interactions in addition to main effects. In this paper, we extend their method and focus on the case in which experimenters have some prior knowledge, in advance of running the experiment, about the probabilities of effects being non-negligible. A criterion that incorporates experimenters' prior beliefs about the importance of each effect is introduced to compare orthogonal, or nearly orthogonal, main-effects designs with robustness to interactions as a secondary consideration. We show that this criterion, by exploiting prior information about model uncertainty, can lead to more appropriate designs reflecting experimenters' prior beliefs.