91.
On the probability distribution of economic growth   Total citations: 1 (self-citations: 0, citations by others: 1)
Three important and significantly heteroscedastic gross domestic product series are studied. The omnipresent heteroscedasticity is removed, and the distributions of the series are then compared to normal, normal-mixture and normal–asymmetric Laplace (NAL) distributions. The NAL is a skewed and leptokurtic distribution, in line with the Aghion and Howitt (1992, Econometrica 60: 323–351) model of economic growth based on Schumpeter's idea of creative destruction. Statistical properties of the NAL distribution are provided, and it is shown that the NAL fits the data better than the alternatives.
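The asymmetric Laplace component is what gives the NAL its skewness and excess kurtosis. As a minimal sketch (not the authors' fitting procedure), the following defines an asymmetric Laplace density in one common kappa-parametrisation and checks numerically that it integrates to one; the parameter values are illustrative assumptions.

```python
import numpy as np

def asym_laplace_pdf(x, m=0.0, lam=1.0, kappa=1.5):
    """Asymmetric Laplace density (kappa-parametrisation).

    kappa != 1 makes the two exponential tails decay at different
    rates, producing the skewness the NAL model relies on.
    """
    x = np.asarray(x, dtype=float)
    c = lam / (kappa + 1.0 / kappa)               # normalising constant
    right = c * np.exp(-lam * kappa * (x - m))    # tail for x >= m
    left = c * np.exp(lam * (x - m) / kappa)      # tail for x < m
    return np.where(x >= m, right, left)

# Numerical sanity check: the density integrates to one.
grid = np.linspace(-40.0, 40.0, 400001)
dx = grid[1] - grid[0]
area = asym_laplace_pdf(grid).sum() * dx
```

The full NAL adds a normal component by convolution; this sketch isolates only the piece responsible for skewness and heavy tails.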
92.
In this article, we develop regression models with cross-classified responses. Conditional independence structures can be explored and exploited through the selective inclusion or exclusion of terms in a certain functional ANOVA decomposition, and the estimation is done nonparametrically via the penalized likelihood method. A suite of computational and data-analytic tools is presented, including cross-validation for smoothing-parameter selection, Kullback–Leibler projection for model selection, and Bayesian confidence intervals for odds ratios. Random effects are introduced to model possible correlations such as those found in longitudinal and clustered data. Empirical performance of the methods is explored in small-scale simulation studies, and a real-data example is presented using eye-tracking data from linguistic studies. The techniques are implemented in a suite of R functions, whose usage is briefly described in the appendix. The Canadian Journal of Statistics 39: 591–609; 2011. © 2011 Statistical Society of Canada
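Cross-validated smoothing-parameter selection, one of the tools listed above, can be shown in miniature with a Nadaraya–Watson kernel smoother rather than the paper's penalized-likelihood models; the data, kernel, and bandwidth grid below are assumptions chosen purely for illustration.

```python
import numpy as np

def nw_fit(x_train, y_train, x_eval, h):
    """Nadaraya-Watson regression estimate with a Gaussian kernel."""
    d = (np.asarray(x_eval)[:, None] - np.asarray(x_train)[None, :]) / h
    w = np.exp(-0.5 * d ** 2)
    return (w @ y_train) / w.sum(axis=1)

def loo_cv_score(x, y, h):
    """Leave-one-out cross-validation error for bandwidth h."""
    n = len(x)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                  # hold out observation i
        errs[i] = y[i] - nw_fit(x[mask], y[mask], x[i:i + 1], h)[0]
    return float(np.mean(errs ** 2))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 80))
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, 80)

bandwidths = [0.01, 0.03, 0.1, 0.3]
best_h = min(bandwidths, key=lambda h: loo_cv_score(x, y, h))
```

Cross-validation rejects the grossly oversmoothed bandwidth because the held-out errors expose its bias; the same trade-off drives smoothing-parameter selection in the penalized-likelihood setting.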
93.
In this paper we consider the estimation of a density function on the basis of a random stratified sample from weighted distributions. We propose a linear wavelet density estimator and prove its consistency. The behavior of the proposed estimator and its smoothed versions is illustrated by simulated examples and a case study involving blood alcohol levels in DUI cases.
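A linear wavelet density estimator projects the empirical measure onto a scaling-function space. With Haar scaling functions (a deliberately simple choice, not the basis or the weighted-sampling corrections of the paper), the level-j projection reduces to a dyadic histogram, which makes its basic properties easy to verify:

```python
import numpy as np

def haar_linear_density(data, x, j=3):
    """Linear (projection) wavelet density estimator, Haar scaling basis.

    With Haar father wavelets the projection onto resolution level j
    is exactly a histogram on dyadic bins of width 2**-j, so the
    estimate is non-negative and integrates to one.
    """
    data = np.asarray(data, dtype=float)
    scale = 2.0 ** j
    k_data = np.floor(scale * data).astype(int)   # bin index of each X_i
    k_x = np.floor(scale * np.asarray(x, dtype=float)).astype(int)
    # empirical scaling coefficients, evaluated at the query bins
    return np.array([scale * np.mean(k_data == k) for k in k_x])

rng = np.random.default_rng(1)
sample = rng.beta(2.0, 5.0, size=500)        # a density supported on [0, 1]
grid = (np.arange(2 ** 3) + 0.5) / 2 ** 3    # one point per dyadic bin
fhat = haar_linear_density(sample, grid, j=3)
total = fhat.sum() * 2.0 ** -3               # Riemann sum over the bins
```

Smoother scaling functions trade this piecewise-constant form for better rates; the projection structure is the same.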
94.
The paper develops objective priors for the correlation coefficient of the bivariate normal distribution. The criterion used is the asymptotic matching of the coverage probabilities of Bayesian credible intervals with the corresponding frequentist coverage probabilities. The paper uses various matching criteria, namely quantile matching, highest-posterior-density matching, and matching via inversion of test statistics. Each matching criterion leads to a different prior for the parameter of interest. We evaluate their performance by comparing credible intervals through simulation studies. In addition, inference through several likelihood-based methods is discussed.
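To make the underlying computation concrete, here is a grid-based posterior for the correlation under a flat prior (not one of the paper's matching priors), assuming known zero means and unit variances to keep the sketch short. The matching priors in the paper are designed so that intervals like the one computed here attain the nominal frequentist coverage asymptotically.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho_true = 200, 0.6
z1 = rng.normal(size=n)
z2 = rho_true * z1 + np.sqrt(1.0 - rho_true ** 2) * rng.normal(size=n)

# Grid posterior for rho on (-1, 1) under a flat prior.
rho = np.linspace(-0.99, 0.99, 1981)
drho = rho[1] - rho[0]
sxx, syy, sxy = np.sum(z1 * z1), np.sum(z2 * z2), np.sum(z1 * z2)
loglik = (-n / 2.0) * np.log(1.0 - rho ** 2) \
         - (sxx - 2.0 * rho * sxy + syy) / (2.0 * (1.0 - rho ** 2))
post = np.exp(loglik - loglik.max())
post /= post.sum() * drho                    # normalise on the grid

# Equal-tailed 95% credible interval from the posterior CDF.
cdf = np.cumsum(post) * drho
lo = rho[np.searchsorted(cdf, 0.025)]
hi = rho[np.searchsorted(cdf, 0.975)]
```

Repeating this over many simulated datasets and counting how often (lo, hi) covers the true value is exactly the frequentist-coverage check that the matching criteria calibrate.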
95.
We propose a modification to the regular kernel density estimation method that uses asymmetric kernels to circumvent the spill-over problem for densities with positive support. First, a pivoting method is introduced for placing the data relative to the kernel function. This yields a strongly consistent density estimator that integrates to one for each fixed bandwidth, in contrast to most density estimators based on asymmetric kernels proposed in the literature. Then a data-driven Bayesian local bandwidth selection method is presented, and lognormal, gamma, Weibull and inverse Gaussian kernels are discussed as useful special cases. Simulation results and a real-data example illustrate the advantages of the new methodology.
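A standard asymmetric-kernel estimator of the kind the paper's pivoted method modifies is the gamma-kernel estimator: each evaluation point gets a gamma kernel supported on the positive half-line, so no probability mass spills below zero. This sketch uses a fixed global bandwidth b (not the paper's Bayesian local choice), so unlike the paper's estimator it need not integrate exactly to one.

```python
import numpy as np
from math import lgamma

def gamma_pdf(x, shape, scale):
    """Gamma density for x > 0, using only numpy and math.lgamma."""
    x = np.asarray(x, dtype=float)
    return np.exp((shape - 1.0) * np.log(x) - x / scale
                  - lgamma(shape) - shape * np.log(scale))

def gamma_kernel_density(data, x, b=0.1):
    """Asymmetric (gamma) kernel density estimate on [0, inf).

    Each evaluation point xi uses the kernel Gamma(xi/b + 1, b),
    whose support is the positive half-line.
    """
    est = np.empty(len(x))
    for i, xi in enumerate(np.asarray(x, dtype=float)):
        est[i] = np.mean(gamma_pdf(data, shape=xi / b + 1.0, scale=b))
    return est

rng = np.random.default_rng(5)
data = rng.exponential(1.0, size=2000)       # positive-support target
grid = np.linspace(0.0, 4.0, 41)
fx = gamma_kernel_density(data, grid, b=0.1)
```

Near zero the kernel shape adapts automatically, which is the appeal of asymmetric kernels over reflection or boundary-corrected symmetric ones.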
96.
Suppose that the conditional density of a response variable given a vector of explanatory variables is parametrically modelled, and that data are collected by a two-phase sampling design. First, a simple random sample is drawn from the population. The stratum membership in a finite number of strata of the response and explanatory variables is recorded for each unit. Second, a subsample is drawn from the phase-one sample such that the selection probability is determined by the stratum membership. The response and explanatory variables are fully measured at this phase. We synthesize existing results on nonparametric likelihood estimation and present a streamlined approach for the computation and the large sample theory of profile likelihood in four different situations. The amount of information in terms of data and assumptions varies depending on whether the phase-one data are retained, the selection probabilities are known, and/or the stratum probabilities are known. We establish and illustrate numerically the order of efficiency among the maximum likelihood estimators, according to the amount of information utilized, in the four situations.
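The design logic can be mimicked in a few lines: subsample with stratum-dependent probabilities and compare the naive phase-two mean with an inverse-probability-weighted (Hajek-type) mean. This is only the weighting intuition, not the paper's profile-likelihood machinery; the strata and selection probabilities below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Phase one: a simple random sample; stratum membership is recorded.
n1 = 20000
y = rng.normal(0.0, 1.0, size=n1)
stratum = (y > 0.0).astype(int)            # two response-based strata

# Phase two: subsample with known stratum-dependent probabilities.
pi = np.where(stratum == 1, 0.8, 0.2)
selected = rng.uniform(size=n1) < pi

# The naive phase-two mean over-represents stratum 1 ...
naive = y[selected].mean()
# ... while the inverse-probability-weighted mean corrects for it.
ipw = np.sum(y[selected] / pi[selected]) / np.sum(1.0 / pi[selected])
```

The paper's four information scenarios correspond to what else is retained or known (phase-one data, the selection probabilities, the stratum probabilities), which the weighted estimator above takes as fully known.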
97.
The main goal in small area estimation is to use models to ‘borrow strength’ from the ensemble because the direct estimates of small area parameters are generally unreliable. However, model-based estimates from the small areas do not usually match the value of the single estimate for the large area. Benchmarking is done by applying a constraint, internally or externally, to ensure that the ‘total’ of the small areas matches the ‘grand total’. This is particularly useful because it is difficult to check model assumptions owing to the sparseness of the data. We use a Bayesian nested error regression model, which incorporates unit-level covariates and sampling weights, to develop a method to internally benchmark the finite population means of small areas. We use two examples to illustrate our method. We also perform a simulation study to further assess the properties of our method.
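A minimal external-benchmarking device (much simpler than the paper's internal Bayesian benchmarking) is ratio adjustment: rescale the model-based small-area estimates so their weighted total reproduces the direct large-area estimate. The area estimates and population shares below are hypothetical.

```python
import numpy as np

def ratio_benchmark(estimates, weights, benchmark_total):
    """Rescale small-area estimates so their weighted total matches
    the direct large-area ('grand total') estimate."""
    estimates = np.asarray(estimates, dtype=float)
    weights = np.asarray(weights, dtype=float)
    factor = benchmark_total / float(np.dot(weights, estimates))
    return estimates * factor

# Hypothetical model-based means for four small areas, their
# population shares, and a direct large-area mean of 10.0.
est = np.array([9.2, 10.5, 11.1, 8.9])
w = np.array([0.30, 0.20, 0.25, 0.25])
adj = ratio_benchmark(est, w, 10.0)
```

After adjustment the constraint holds exactly, whatever the model produced; internal benchmarking instead builds this constraint into the posterior itself.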
98.
Heavy tail probability distributions are important in many scientific disciplines such as hydrology, geology, and physics and therefore feature heavily in statistical practice. Rather than specifying a family of heavy-tailed distributions for a given application, it is more common to use a nonparametric approach, where the distributions are classified according to the tail behavior. Through the use of the logarithm of Parzen's density-quantile function, this work proposes a consistent, flexible estimator of the tail exponent. The approach we develop is based on a Fourier series estimator and allows for separate estimates of the left and right tail exponents. The theoretical properties of the tail exponent estimator are determined, and we also provide some results of independent interest that may be used to establish weak convergence of stochastic processes. We assess the practical performance of the method by exploring its finite-sample properties in simulation studies. The overall performance is competitive with classical tail index estimators and, in contrast with these, our method obtains somewhat better results in the case of lighter heavy-tailed distributions.
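The classical comparator mentioned above is easy to state: the Hill estimator of the tail index γ = 1/α averages log-spacings of the k largest order statistics. The sketch below (the classical baseline, not the paper's Fourier-series method) checks it on exact Pareto quantiles with α = 2, so the true γ is 0.5.

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimator of the right tail index gamma = 1/alpha,
    based on the k largest order statistics."""
    x = np.sort(np.asarray(data, dtype=float))
    return float(np.mean(np.log(x[-k:])) - np.log(x[-k - 1]))

# Exact Pareto(alpha = 2) quantiles: a deterministic test sample
# whose true tail index is gamma = 1 / alpha = 0.5.
n, alpha = 10000, 2.0
u = (np.arange(n) + 0.5) / n
pareto_sample = (1.0 - u) ** (-1.0 / alpha)
gamma_hat = hill_estimator(pareto_sample, k=500)
```

Hill only handles one tail at a time and is sensitive to the choice of k, which is part of what motivates density-quantile-based alternatives.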
99.
In numerous applications data are observed at random times and an estimated graph of the spectral density may be relevant for characterizing and explaining phenomena. By using a wavelet analysis, one derives a non-parametric estimator of the spectral density of a Gaussian process with stationary increments (or a stationary Gaussian process) from the observation of one path at random discrete times. For every positive frequency, this estimator is proved to satisfy a central limit theorem with a convergence rate depending on the roughness of the process and the moment of random durations between successive observations. In the case of stationary Gaussian processes, one can compare this estimator with estimators based on the empirical periodogram. Both estimators reach the same optimal rate of convergence, but the estimator based on wavelet analysis converges for a different class of random times. Simulation examples and an application to biological data are also provided.
100.
The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data arrive. However, analyses based on the Dirichlet process prior are sensitive to the choice of the parameters, including an infinite-dimensional distributional parameter G0. Most previous applications have either fixed G0 as a member of a parametric family or treated G0 in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G0. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G0 are of intrinsic interest, or where the structure of G0 cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.
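The DP machinery behind such models is often written via the stick-breaking construction: weights are built from Beta(1, α) draws, and atoms are drawn i.i.d. from G0. In the sketch below, G0 is fixed as a standard normal purely as a stand-in for the adaptively estimated G0 the paper constructs.

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated stick-breaking weights of a DP(alpha, G0) draw.

    Each weight is a Beta(1, alpha) fraction of the stick remaining
    after all earlier breaks.
    """
    betas = rng.beta(1.0, alpha, size=truncation)
    stick_left = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * stick_left

rng = np.random.default_rng(4)
w = stick_breaking_weights(alpha=2.0, truncation=200, rng=rng)
# Atoms drawn i.i.d. from the base measure G0 (standard normal here,
# as a stand-in for an estimated G0).
atoms = rng.normal(size=200)
```

Larger α spreads mass over more atoms, which is why estimating α alongside G0 matters for how many mixture components effectively appear.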