101.
In this study, we propose a prior for restricted vector autoregressive (VAR) models. The prior setting permits efficient Markov chain Monte Carlo (MCMC) sampling from the posterior of the VAR parameters and estimation of the Bayes factor. Numerical simulations show that when the sample size is small, the Bayes factor is more effective than the commonly used Schwarz criterion in selecting the correct model. We apply Bayesian hypothesis testing of VAR models to the macroeconomic, state-specific, and sector-specific effects of employment growth.
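As a hedged illustration of the comparison described above (not the article's prior or MCMC scheme), the sketch below simulates a small bivariate VAR(1) sample and scores candidate lag orders with the Schwarz (BIC) criterion; all dimensions and coefficients are hypothetical.

```python
# Hypothetical dimensions and coefficients; a sketch of lag selection by BIC only.
import numpy as np

rng = np.random.default_rng(0)

def simulate_var1(A, Sigma, T):
    """Simulate a zero-mean VAR(1) process y_t = A y_{t-1} + e_t."""
    k = A.shape[0]
    y = np.zeros((T + 50, k))                 # 50 burn-in steps
    chol = np.linalg.cholesky(Sigma)
    for t in range(1, T + 50):
        y[t] = A @ y[t - 1] + chol @ rng.standard_normal(k)
    return y[50:]

def var_bic(y, p):
    """OLS fit of a VAR(p) without intercept; return the Schwarz (BIC) criterion."""
    T, k = y.shape
    Y = y[p:]
    X = np.hstack([y[p - i:T - i] for i in range(1, p + 1)])   # lagged regressors
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    n = len(Y)
    Sigma_hat = resid.T @ resid / n
    loglik = -0.5 * n * (k * np.log(2 * np.pi) + np.linalg.slogdet(Sigma_hat)[1] + k)
    return -2 * loglik + B.size * np.log(n)

A_true = np.array([[0.5, 0.1], [0.0, 0.3]])
y = simulate_var1(A_true, np.eye(2), T=60)    # deliberately small sample
for p in (1, 2, 3):
    print(f"VAR({p}) BIC: {var_bic(y, p):.1f}")
```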
102.
In simulation studies for discriminant analysis, misclassification errors are often computed by the Monte Carlo method, testing a classifier on large samples generated from known populations. Although large samples are expected to behave close to the underlying distributions, they may not do so in a small interval or region, which can lead to unexpected results. We demonstrate with an example that the LDA misclassification error computed via the Monte Carlo method can often be smaller than the Bayes error. We give a rigorous explanation and recommend a method for computing misclassification errors properly.
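A minimal sketch of the phenomenon, assuming two univariate normal classes with equal variances and equal priors: the Bayes error is Φ(−δ/2), yet a Monte Carlo estimate of the LDA test error on a finite sample can fall below it by chance. The sample sizes are hypothetical.

```python
# Two univariate normal classes; Bayes error vs. Monte Carlo LDA error estimates.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu0, mu1, sigma = 0.0, 1.0, 1.0
bayes_error = norm.cdf(-(mu1 - mu0) / (2 * sigma))   # optimal rule: threshold at the midpoint

def mc_lda_error(n_train, n_test):
    # train LDA (equal variances -> midpoint threshold between the sample means)
    x0 = rng.normal(mu0, sigma, n_train)
    x1 = rng.normal(mu1, sigma, n_train)
    threshold = (x0.mean() + x1.mean()) / 2
    # Monte Carlo test error on a large generated test sample
    t0 = rng.normal(mu0, sigma, n_test)
    t1 = rng.normal(mu1, sigma, n_test)
    errors = np.sum(t0 > threshold) + np.sum(t1 <= threshold)
    return errors / (2 * n_test)

estimates = [mc_lda_error(50, 10_000) for _ in range(20)]
print(f"Bayes error: {bayes_error:.4f}")
print(f"MC estimates below the Bayes error: {sum(e < bayes_error for e in estimates)}/20")
```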
103.
In this article, new pseudo-Bayes and pseudo-empirical Bayes estimators of the proportion of a potentially sensitive attribute in survey sampling are introduced. The proposed estimators are compared with the estimators of Odumade and Singh [Efficient use of two decks of cards in randomized response sampling, Comm. Statist. Theory Methods 38 (2009), pp. 439–446] and Warner [Randomized response: A survey technique for eliminating evasive answer bias, J. Amer. Statist. Assoc. 60 (1965), pp. 63–69].
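For context, the sketch below implements Warner's (1965) randomized response estimator, the baseline the compared methods build on; the final shrinkage line is a generic beta-prior illustration, not the pseudo-Bayes estimators proposed in the article. All design parameters are hypothetical.

```python
# Warner's design: with probability p the respondent answers the sensitive
# question directly, otherwise its negation, so P(yes) = p*pi + (1-p)*(1-pi).
import numpy as np

rng = np.random.default_rng(2)
pi_true, p, n = 0.30, 0.70, 500      # hypothetical sensitive proportion, design probability, sample size

member = rng.random(n) < pi_true
direct = rng.random(n) < p
answered_yes = np.where(direct, member, ~member)
lam_hat = answered_yes.mean()

pi_warner = (lam_hat - (1 - p)) / (2 * p - 1)        # Warner (1965) moment estimator
print(f"Warner estimate: {pi_warner:.3f} (true {pi_true})")

# Generic beta(a, b) shrinkage of the Warner estimate, purely for illustration:
a, b = 1.0, 1.0
if 0.0 <= pi_warner <= 1.0:
    pi_shrunk = (n * pi_warner + a) / (n + a + b)
    print(f"Shrunken estimate: {pi_shrunk:.3f}")
```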
104.
In this paper, the design of reliability sampling plans for the Pareto lifetime model under progressive Type-II right censoring is considered. Sampling plans are derived using a decision-theoretic approach with a loss (cost) function consisting of sampling cost, rejection cost, and acceptance cost. The decision rule is based on the estimated reliability function. Plans are constructed within the Bayesian framework using the natural conjugate prior. Simulations evaluating the Bayes risk are carried out, and the optimal sampling plans are reported for various sample sizes, observed numbers of failures, and removal probabilities.
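A hedged sketch of the ingredients, not the article's optimal plans: it simulates a progressively Type-II censored Pareto sample, estimates the reliability by maximum likelihood (known scale assumed), and applies an accept/reject rule with hypothetical cost constants and removal scheme.

```python
# Hypothetical removal scheme and cost constants; the Pareto scale is assumed known.
import numpy as np

rng = np.random.default_rng(3)

def progressive_type2_sample(lifetimes, removals):
    """Observe len(removals) failures; after the i-th failure withdraw removals[i] survivors."""
    remaining = list(lifetimes)
    observed = []
    for r in removals:
        remaining.sort()
        observed.append(remaining.pop(0))                # next failure
        for _ in range(r):                               # random withdrawals
            remaining.pop(rng.integers(len(remaining)))
    return np.array(observed)

alpha, beta = 2.5, 1.0                                   # Pareto shape/scale; S(t) = (beta/t)**alpha
n = 30
removals = [2, 2, 2, 2, 2, 0, 0, 0, 0, 10]               # m = 10 failures, sum(removals) + m = n
lifetimes = beta * (1.0 - rng.random(n)) ** (-1.0 / alpha)   # inverse-CDF draws
x = progressive_type2_sample(lifetimes, removals)

# MLE of the shape with known scale under progressive Type-II censoring:
m = len(x)
alpha_hat = m / np.sum((np.array(removals) + 1) * np.log(x / beta))

t0 = 2.0                                                 # mission time
rel_hat = (beta / t0) ** alpha_hat                       # estimated reliability at t0
accept = rel_hat >= 0.25                                 # decision rule on the estimated reliability
cost = 1.0 * n + (10.0 if not accept else 50.0 * (1 - rel_hat))   # sampling cost + decision cost
print(f"alpha_hat={alpha_hat:.2f}  R(t0)={rel_hat:.3f}  accept={accept}  cost={cost:.1f}")
```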
105.
This article presents statistical inference for the Weibull parameters from progressively Type-II censored data. The maximum likelihood estimators are derived. To incorporate prior information with the current data, a Bayesian approach is considered. We obtain the Bayes estimators under squared-error loss with a bivariate prior distribution and derive credible intervals for the parameters of the Weibull distribution. Bayes prediction intervals for future observations are also obtained in the one- and two-sample cases. The method is shown to be practical, although a computer program is required for its implementation. A numerical example is presented for illustration, and simulation studies are performed.
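A minimal sketch of the likelihood part only, assuming a removal scheme that places all withdrawals at the final failure (so the observed sample equals the first m order statistics); the article's Bayes estimators, credible intervals, and prediction intervals are not reproduced. Parameters are simulated placeholders.

```python
# Simulated placeholder data; with this removal scheme the observed sample is
# the first m order statistics of n Weibull lifetimes.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
k_true, s_true, n, m = 1.8, 10.0, 25, 10
removals = np.array([0] * (m - 1) + [n - m])             # R = (0, ..., 0, n - m)
times = np.sort(s_true * rng.weibull(k_true, n))[:m]

def neg_loglik(theta):
    k, s = np.exp(theta)                                 # keep shape/scale positive
    z = times / s
    logf = np.log(k / s) + (k - 1) * np.log(z) - z**k    # Weibull log-density
    logS = -(z**k)                                       # Weibull log-survival
    return -np.sum(logf + removals * logS)               # progressively censored log-likelihood

res = minimize(neg_loglik, x0=np.log([1.0, times.mean()]), method="Nelder-Mead")
k_hat, s_hat = np.exp(res.x)
print(f"shape MLE: {k_hat:.2f} (true {k_true}), scale MLE: {s_hat:.2f} (true {s_true})")
```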
106.
The marginal likelihood can be notoriously difficult to compute, particularly in high-dimensional problems. Chib and Jeliazkov employed the local reversibility of the Metropolis–Hastings algorithm to construct an estimator for models in which the full conditional densities are not available analytically. The estimator is free of distributional assumptions and is directly linked to the simulation algorithm. However, it generally requires a sequence of reduced Markov chain Monte Carlo runs, which makes the method computationally demanding, especially when the parameter space is large. In this article, we study the implementation of this estimator in latent variable models in which the responses are independent given the latent variables (conditional or local independence). This property is employed in the construction of a multi-block Metropolis-within-Gibbs algorithm that allows the estimator to be computed in a single run, regardless of the dimensionality of the parameter space. The counterpart one-block algorithm is also considered, and the difference between the two approaches is pointed out. The paper closes with illustrations of the estimator on simulated and real data sets.
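To show the mechanics of the Chib and Jeliazkov identity, the sketch below applies it to a toy conjugate model (normal mean with unit variance and an N(0, 1) prior) where the marginal likelihood has a closed form for checking; it uses a single-block random-walk Metropolis sampler and does not reproduce the article's multi-block latent-variable construction. Tuning constants are hypothetical.

```python
# Toy conjugate model: y_i ~ N(theta, 1), theta ~ N(0, 1); random-walk Metropolis.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
y = rng.normal(0.5, 1.0, size=40)
n, ybar = len(y), y.mean()

def log_post_kernel(theta):
    return norm.logpdf(y, theta, 1.0).sum() + norm.logpdf(theta, 0.0, 1.0)

tau, M, burn = 0.5, 6_000, 1_000
chain = np.empty(M)
theta = ybar
for g in range(M):
    prop = theta + tau * rng.standard_normal()
    if np.log(rng.random()) < log_post_kernel(prop) - log_post_kernel(theta):
        theta = prop
    chain[g] = theta
chain = chain[burn:]

# Chib-Jeliazkov estimate of the posterior ordinate at theta*:
theta_star = chain.mean()
alpha = lambda a, b: np.exp(min(0.0, log_post_kernel(b) - log_post_kernel(a)))
num = np.mean([alpha(t, theta_star) * norm.pdf(theta_star, t, tau) for t in chain])
den = np.mean([alpha(theta_star, t) for t in theta_star + tau * rng.standard_normal(2_000)])
log_marglik = (norm.logpdf(y, theta_star, 1.0).sum()
               + norm.logpdf(theta_star, 0.0, 1.0) - np.log(num / den))

# Closed-form check available for this conjugate model:
exact = norm.logpdf(y, 0, 1).sum() + 0.5 * (n * ybar) ** 2 / (n + 1) - 0.5 * np.log(n + 1)
print(f"Chib-Jeliazkov: {log_marglik:.3f}   exact: {exact:.3f}")
```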
107.
We propose optimal procedures for partitioning k multivariate normal populations into two disjoint subsets with respect to a given standard vector. A population is defined as good or bad according to whether its Mahalanobis distance to the known standard vector is small or large. Partitioning the k multivariate normal populations reduces to partitioning k non-central chi-square or non-central F distributions with respect to the corresponding non-centrality parameters, depending on whether the covariance matrices are known or unknown. The minimum required sample size for each population is determined to ensure that the probability of a correct decision attains a prescribed level. An example is given to illustrate the procedures.
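A hedged sketch of the sample-size determination in the known-covariance case: the squared Mahalanobis distance of a sample mean to the standard vector, scaled by n, is non-central chi-square, and n is increased until both classification probabilities reach a target level. The distance thresholds and target probability are hypothetical, and the cutoff rule (equalizing the two error curves) is one simple choice, not necessarily the article's.

```python
# Sample-size search based on non-central chi-square classification probabilities.
from scipy.stats import ncx2

dim = 4                         # dimension of the populations
delta1, delta2 = 0.5, 1.5       # good if Mahalanobis distance <= delta1, bad if >= delta2
p_star = 0.95                   # required probability of a correct decision

def correct_decision_prob(n):
    lam_good, lam_bad = n * delta1**2, n * delta2**2
    # pick the cutoff where the two curves cross (equalizes the two error rates)
    lo, hi = 0.0, 10.0 * (dim + lam_bad)
    for _ in range(200):                         # bisection on the cutoff c
        c = (lo + hi) / 2
        if ncx2.cdf(c, dim, lam_good) < ncx2.sf(c, dim, lam_bad):
            lo = c
        else:
            hi = c
    c = (lo + hi) / 2
    return min(ncx2.cdf(c, dim, lam_good), ncx2.sf(c, dim, lam_bad)), c

n = 1
while correct_decision_prob(n)[0] < p_star:
    n += 1
prob, cutoff = correct_decision_prob(n)
print(f"minimum n per population: {n}, cutoff: {cutoff:.2f}, P(correct): {prob:.3f}")
```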
108.
A Bayesian discovery procedure
Summary. We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision-theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage across clusters implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
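A minimal sketch of posterior-probability thresholding in a two-groups model with known mixture parameters (an assumption made only for illustration; it is not the article's semiparametric model): minimizing the expected loss λ·FP − TP flags a test when the posterior probability of the alternative exceeds λ/(1 + λ).

```python
# Two-groups model: z_i ~ (1 - pi1) N(0, 1) + pi1 N(mu1, 1), parameters known here.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
m, pi1, mu1 = 5_000, 0.10, 2.5
is_alt = rng.random(m) < pi1
z = rng.normal(np.where(is_alt, mu1, 0.0), 1.0)

post_alt = (pi1 * norm.pdf(z, mu1, 1.0) /
            (pi1 * norm.pdf(z, mu1, 1.0) + (1 - pi1) * norm.pdf(z, 0.0, 1.0)))

lam = 9.0                                    # a false positive costs 9x a missed discovery
flag = post_alt > lam / (1 + lam)            # loss-optimal thresholding rule
fdp = np.sum(flag & ~is_alt) / max(np.sum(flag), 1)
power = np.sum(flag & is_alt) / np.sum(is_alt)
print(f"flagged {flag.sum()}, false-discovery proportion {fdp:.3f}, power {power:.3f}")
```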
109.
The Bayes estimators of the Gini index, the mean income, and the proportion of the population living below a prescribed income level are obtained in this paper on the basis of censored income data from a Pareto income distribution. These estimators are obtained under a two-parameter exponential prior distribution and the usual squared-error loss function. The work is also extended to the case where the income data are grouped and the exact incomes of the individuals in the population are not available. A method for assessing the hyperparameters is also outlined. Finally, the results are generalized to the doubly truncated gamma prior distribution.
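A hedged sketch of Bayes estimation for Pareto income data with known scale: for tractability it uses a conjugate gamma prior on the shape rather than the article's two-parameter exponential or truncated gamma priors, and it computes the posterior means (the squared-error-loss Bayes estimates) of the Gini index, the mean income, and the proportion below a hypothetical poverty line by Monte Carlo.

```python
# Pareto(alpha, x0) incomes with known x0; conjugate gamma prior on alpha for illustration.
import numpy as np

rng = np.random.default_rng(9)
x0, alpha_true, n = 1.0, 3.0, 200
incomes = x0 * (1.0 - rng.random(n)) ** (-1.0 / alpha_true)

a0, b0 = 1.0, 1.0                                    # gamma(shape a0, rate b0) prior
a_post = a0 + n
b_post = b0 + np.sum(np.log(incomes / x0))           # conjugate update for the Pareto shape
alpha_draws = rng.gamma(a_post, 1.0 / b_post, 20_000)

gini = np.mean(1.0 / (2 * alpha_draws - 1))          # Gini index = 1/(2*alpha - 1) for alpha > 1/2
mean_income = np.nanmean(np.where(alpha_draws > 1,
                                  alpha_draws * x0 / (alpha_draws - 1), np.nan))
poverty_line = 1.5
prop_poor = np.mean(1 - (x0 / poverty_line) ** alpha_draws)   # posterior mean of F(poverty_line)
print(f"Gini: {gini:.3f}, mean income: {mean_income:.2f}, "
      f"P(income < {poverty_line}): {prop_poor:.3f}")
```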
110.
Some studies generate data that can be grouped into clusters in more than one way. Consider, for instance, a smoking prevention study in which responses on smoking status are collected over several years in a cohort of students from a number of different schools. This yields longitudinal data that are also cross-sectionally clustered in schools. The authors present a model for analyzing binary data of this type, combining generalized estimating equations and estimation of random effects to address the longitudinal and cross-sectional dependence, respectively. The estimation procedure for this model is discussed, as are the results of a simulation study used to investigate the properties of its estimates. An illustration using data from a smoking prevention trial is given.
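A partial, hedged sketch covering only the GEE half of such an analysis: simulated binary smoking responses on students nested in schools, with the longitudinal (within-student) dependence handled by an exchangeable working correlation and the school effect absorbed crudely by fixed school indicators rather than the estimated random effects described in the article. Variable names and simulation parameters are hypothetical, and statsmodels is assumed to be available.

```python
# GEE with an exchangeable working correlation over repeated measures per student.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n_schools, n_students, n_years = 8, 25, 4
rows = []
for s in range(n_schools):
    school_eff = rng.normal(0, 0.6)                  # school-level heterogeneity
    for i in range(n_students):
        student_eff = rng.normal(0, 0.8)             # induces within-student dependence
        for t in range(n_years):
            eta = -1.5 + 0.35 * t + school_eff + student_eff
            y = rng.random() < 1 / (1 + np.exp(-eta))
            rows.append({"school": s, "student": f"{s}-{i}", "year": t, "smoke": int(y)})
df = pd.DataFrame(rows)

model = smf.gee("smoke ~ year + C(school)", groups="student", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(f"year effect: {res.params['year']:.3f} (robust SE {res.bse['year']:.3f})")
```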