221.
Inverse probability weighting (IPW) and multiple imputation are two widely adopted approaches for dealing with missing data. The former models the selection probability, and the latter models the data distribution. Consistent estimation requires correct specification of the corresponding models. Although the augmented IPW method provides an extra layer of protection for consistency, it is usually not sufficient in practice because the true data-generating process is unknown. This paper proposes a method combining the two approaches in the spirit of calibration in the survey sampling literature. Multiple models for both the selection probability and the data distribution can be accounted for simultaneously, and the resulting estimator is consistent if any one model is correctly specified. The proposed method lies within the framework of estimating equations and is general enough to cover regression analysis with missing outcomes and/or missing covariates. Both theoretical and numerical results are provided.
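As a minimal, self-contained illustration of the IPW side of this combination (not the paper's calibration estimator), the sketch below computes a Hajek-type IPW mean on simulated incomplete data; the selection model, sample sizes, and all names are illustrative assumptions:

```python
import numpy as np

def ipw_mean(y, observed, pi):
    """Hajek-type IPW estimate of E[Y]: weight each observed case by
    1/pi; weights are zero for missing cases, so their y never enters."""
    w = observed / pi
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
n = 100_000
y = rng.normal(5.0, 2.0, n)
# Selection probability depends on y itself (illustrative; pi treated as known)
pi = 1.0 / (1.0 + np.exp(-(0.2 * y - 0.5)))
observed = (rng.random(n) < pi).astype(float)

est = ipw_mean(y, observed, pi)        # weighted, approximately unbiased
naive = y[observed == 1].mean()        # complete-case mean, biased upward here
```

Because selection favours large y, the complete-case mean overshoots the true mean of 5, while the IPW estimate corrects it.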
222.
To identify the factors driving the cyclical volatility of China's macroeconomy, and in light of the particular features of the Chinese economy, a dynamic factor model is estimated on a sample data set of 42 macroeconomic variables covering 1978-2014. The analysis finds five latent macro factors driving China's macroeconomic fluctuations. The first four reveal the main sources of business-cycle volatility: an industrial-output factor, a foreign direct investment (FDI) factor, a capacity-utilisation factor, and a total factor productivity factor. Policy options for smoothing cyclical fluctuations are also discussed.
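A common first step for estimating such a factor model is principal-components extraction of latent factors from the standardized panel. The sketch below is a generic illustration on simulated data, not the paper's estimation procedure; `extract_factors` and all dimensions are hypothetical:

```python
import numpy as np

def extract_factors(X, k):
    """Principal-components estimate of a static factor model X ≈ F L':
    standardize each series, then take the top-k eigenvectors of the
    sample correlation matrix (a standard first step before dynamic
    factor estimation)."""
    eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigval)[::-1]
    loadings = eigvec[:, order[:k]]
    Z = (X - X.mean(0)) / X.std(0)
    factors = Z @ loadings                       # T x k factor estimates
    explained = eigval[order[:k]] / eigval.sum() # variance shares
    return factors, loadings, explained

# Simulated panel: T=200 periods, N=40 series driven by 2 common factors
rng = np.random.default_rng(1)
T, N = 200, 40
F = rng.normal(size=(T, 2))
L = rng.normal(size=(N, 2))
X = F @ L.T + 0.3 * rng.normal(size=(T, N))
factors, loadings, explained = extract_factors(X, 2)
```

With strong common components, the first two principal components absorb most of the panel's correlation.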
223.
Remote sensing of the earth with satellites yields datasets that can be massive in size, nonstationary in space, and non‐Gaussian in distribution. To overcome computational challenges, we use the reduced‐rank spatial random effects (SRE) model in a statistical analysis of cloud‐mask data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board NASA's Terra satellite. Parameterisations of cloud processes are the biggest source of uncertainty and sensitivity in different climate models' future projections of Earth's climate. An accurate quantification of the spatial distribution of clouds, as well as a rigorously estimated pixel‐scale clear‐sky‐probability process, is needed to establish reliable estimates of cloud‐distributional changes and trends caused by climate change. Here we give a hierarchical spatial‐statistical modelling approach for a very large spatial dataset of 2.75 million pixels, corresponding to a granule of MODIS cloud‐mask data, and we use spatial change‐of‐support relationships to estimate cloud fraction at coarser resolutions. Our model is non‐Gaussian; it postulates a hidden process for the clear‐sky probability that makes use of the SRE model, EM estimation, and optimal (empirical Bayes) spatial prediction of the clear‐sky‐probability process. Measures of prediction uncertainty are also given.
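The computational appeal of the SRE model is that a rank-r basis expansion reduces the cost of spatial prediction from roughly O(n³) to O(nr²). A toy one-dimensional version of such reduced-rank ("fixed rank") kriging, with an illustrative Gaussian-bump basis and an identity prior on the random effects standing in for an estimated K matrix, might look like:

```python
import numpy as np

def fixed_rank_krige(s_obs, z, s_pred, centers, scale, sigma2_eps):
    """Reduced-rank kriging sketch: Z(s) = S(s) @ eta + eps with r basis
    functions, so all matrix inversions are r x r rather than n x n.
    Basis choice and identity prior on eta are illustrative."""
    def S(s):  # n x r matrix of Gaussian bumps at the given centers
        return np.exp(-((s[:, None] - centers[None, :]) ** 2) / (2 * scale ** 2))
    So = S(s_obs)
    r = len(centers)
    # Posterior mean of eta under eta ~ N(0, I), eps ~ N(0, sigma2_eps)
    A = np.linalg.inv(np.eye(r) + So.T @ So / sigma2_eps)
    eta_hat = A @ (So.T @ z) / sigma2_eps
    return S(s_pred) @ eta_hat

# Smooth signal observed with noise at 200 locations; predict on a grid
rng = np.random.default_rng(2)
s_obs = rng.uniform(0.0, 2 * np.pi, 200)
z = np.sin(s_obs) + 0.1 * rng.normal(size=200)
s_pred = np.linspace(0.0, 2 * np.pi, 100)
centers = np.linspace(0.0, 2 * np.pi, 12)
pred = fixed_rank_krige(s_obs, z, s_pred, centers, scale=0.7, sigma2_eps=0.01)
rmse = float(np.sqrt(np.mean((pred - np.sin(s_pred)) ** 2)))
```

Only the 12x12 system is inverted, regardless of how many observations there are.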
224.
An Empirical Study of Long Memory in the Chinese Securities Market Based on Local Whittle (LW) Estimation
Whereas domestic research on long memory in securities markets has mainly relied on time-domain methods, this paper applies the frequency-domain semiparametric Local Whittle (LW) estimator to study the long-memory properties of the Chinese securities market and compares it with log-periodogram (GPH) regression. The results show that the LW method is insensitive to the sampling frequency and effectively removes the influence of short memory and periodic components on the estimates, clearly outperforming the GPH method. Empirically, the Chinese securities market exhibits pronounced long memory, and long-memory behaviour is even more evident during major unexpected events. The paper also examines, from a long-memory perspective, the existence and characteristics of the "policy-driven market".
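A minimal implementation of the semiparametric Local Whittle estimator of the memory parameter d is a grid minimisation of the profiled local Whittle objective over the first m Fourier frequencies; the bandwidth choice m ≈ n^0.65 below is an illustrative convention, not the paper's:

```python
import numpy as np

def local_whittle_d(x, m):
    """Local Whittle estimate of the memory parameter d from the first m
    periodogram ordinates, by grid search over the profiled objective
    R(d) = log( mean(lambda_j^{2d} I_j) ) - 2d * mean(log lambda_j)."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    grid = np.linspace(-0.49, 0.99, 297)
    return grid[np.argmin([R(d) for d in grid])]

rng = np.random.default_rng(7)
x = rng.normal(size=4096)
m = int(4096 ** 0.65)
d_noise = local_whittle_d(x, m)            # white noise: d near 0
d_walk = local_whittle_d(np.cumsum(x), m)  # random walk: d near 1
```

Short-memory series give estimates near zero; strongly persistent series push d toward one.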
225.
226.
In this paper, we consider the problem of making statistical inference for a truncated normal distribution under progressive type I interval censoring. We obtain maximum likelihood estimators of the unknown parameters using the expectation-maximization algorithm and, in the sequel, also compute the corresponding midpoint estimates of the parameters. Estimation based on the probability plot method is also considered. Asymptotic confidence intervals for the unknown parameters are constructed from the observed Fisher information matrix. We obtain Bayes estimators of the parameters with respect to informative and non-informative prior distributions under squared error and linex loss functions, computing these estimates using the importance sampling procedure. The highest posterior density intervals of the unknown parameters are constructed as well. We present a Monte Carlo simulation study to compare the performance of the proposed point and interval estimators. Analysis of a real data set is also performed for illustration purposes. Finally, inspection times and optimal censoring plans based on the expected Fisher information matrix are discussed.
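The EM structure for interval-censored normal data can be sketched as follows. This simplified version omits the truncation and the progressive type-I scheme: the E-step uses conditional truncated-normal moments within each censoring interval, and the M-step is the plain normal MLE on the expected sufficient statistics. All names are illustrative:

```python
import math
import numpy as np

def phi(z): return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
def Phi(z): return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def em_interval_normal(intervals, iters=100):
    """EM for N(mu, sigma^2) when each observation is only known to lie
    in a finite interval (a, b)."""
    mids = np.array([(a + b) / 2.0 for a, b in intervals])
    mu, sig = float(mids.mean()), float(mids.std()) + 1e-6  # start from midpoints
    for _ in range(iters):
        ex, ex2 = [], []
        for a, b in intervals:
            al, be = (a - mu) / sig, (b - mu) / sig
            Z = max(Phi(be) - Phi(al), 1e-12)
            m1 = mu + sig * (phi(al) - phi(be)) / Z          # E[X | a < X < b]
            v = sig ** 2 * (1 + (al * phi(al) - be * phi(be)) / Z
                            - ((phi(al) - phi(be)) / Z) ** 2)  # Var[X | a < X < b]
            ex.append(m1); ex2.append(v + m1 ** 2)
        mu = float(np.mean(ex))
        sig = math.sqrt(max(float(np.mean(ex2)) - mu ** 2, 1e-12))
    return mu, sig

rng = np.random.default_rng(3)
sample = rng.normal(10.0, 2.0, 2000)
intervals = [(math.floor(v), math.floor(v) + 1) for v in sample]  # unit-width bins
mu_hat, sig_hat = em_interval_normal(intervals)
```

Even though only the bins are observed, the EM iterations recover both parameters closely.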
227.
This paper addresses the problems of frequentist and Bayesian estimation for the unknown parameters of the generalized Lindley distribution based on lower record values. We first derive exact explicit expressions for the single and product moments of lower record values, and then use these results to compute the means, variances and covariances between two lower record values. We next obtain the maximum likelihood estimators and associated asymptotic confidence intervals. Furthermore, we obtain Bayes estimators under the assumption of gamma priors on both the shape and the scale parameters of the generalized Lindley distribution, together with the associated highest posterior density interval estimates. The Bayesian estimation is studied with respect to both symmetric (squared error) and asymmetric (linear-exponential, LINEX) loss functions. Finally, we compute Bayesian predictive estimates and predictive interval estimates for future record values. To illustrate the findings, one real data set is analyzed, and Monte Carlo simulations are performed to compare the performances of the proposed methods of estimation and prediction.
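The lower record values on which these estimators are based are simply the successive running minima of the observation sequence, which can be extracted as follows (a generic helper, not from the paper):

```python
def lower_records(x):
    """Return the sequence of lower record values of x: the first
    observation, then every observation strictly smaller than all
    previous ones."""
    recs = [x[0]]
    for v in x[1:]:
        if v < recs[-1]:
            recs.append(v)
    return recs

records = lower_records([5, 7, 3, 4, 2, 9, 1])  # -> [5, 3, 2, 1]
```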
228.
In high-dimensional linear regression, the number of variables exceeds the sample size. In this situation, the traditional variance estimation technique based on ordinary least squares exhibits a large bias even under a sparsity assumption, a major reason being the high spurious correlation between the unobserved realized noise and several predictors. To alleviate this problem, a refitted cross-validation (RCV) method has been proposed in the literature. However, for a complicated model and finite samples, the RCV method has a lower probability that the selected model includes the true model, which can easily result in a large bias in variance estimation. Thus, a model selection method based on the ranks of the frequency of occurrences in six votes from a blocked 3×2 cross-validation is proposed in this study. The proposed method has a considerably larger probability of including the true model in practice than the RCV method. The variance estimate obtained using the model selected by the proposed method also shows a lower bias and a smaller variance. Furthermore, theoretical analysis proves the asymptotic normality of the proposed variance estimator.
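The RCV idea itself is easy to sketch: select variables on one half of the sample, refit by OLS and estimate the residual variance on the other half, then swap halves and average. The selector below (marginal-correlation screening) is a simple stand-in, not the paper's blocked 3×2 voting procedure, and all parameters are illustrative:

```python
import numpy as np

def rcv_variance(X, y, k):
    """Refitted cross-validation sketch for the noise variance sigma^2:
    screening on one half avoids the spurious-correlation bias of
    selecting and fitting on the same data."""
    n = len(y)
    idx = np.arange(n); half = n // 2
    def one_direction(tr, te):
        # Select k predictors by absolute marginal covariance on 'tr'
        cov = np.abs((X[tr] * (y[tr] - y[tr].mean())[:, None]).mean(0))
        S = np.argsort(cov)[::-1][:k]
        # Refit OLS on the other half and use its residual variance
        beta, *_ = np.linalg.lstsq(X[te][:, S], y[te], rcond=None)
        resid = y[te] - X[te][:, S] @ beta
        return resid @ resid / (len(te) - k)
    return 0.5 * (one_direction(idx[:half], idx[half:])
                  + one_direction(idx[half:], idx[:half]))

rng = np.random.default_rng(4)
n, p = 400, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = 3.0          # sparse truth, 3 strong signals
y = X @ beta + rng.normal(size=n)           # true sigma^2 = 1
sigma2_hat = float(rcv_variance(X, y, k=5))
```

Because selection and refitting use disjoint halves, the residual variance is close to the true value of 1.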
229.
The purpose of this study is to highlight the application of sparse logistic regression models to the prediction of tumour pathological subtypes based on lung cancer patients' genomic information. We consider sparse logistic regression models to deal with the high dimensionality of, and correlation between, genomic regions. In a hierarchical likelihood (HL) method, the random effects are assumed to follow a normal distribution whose variance follows a gamma distribution; this formulation includes the ridge and lasso penalties as special cases. We extend the HL penalty to include a ridge penalty (called 'HLnet'), following the same principle as the elastic net penalty, which is built on the lasso penalty. The results indicate that the HL penalty produces sparser estimates than the lasso penalty with comparable prediction performance, while the HLnet and elastic net penalties have the best prediction performance on real data. We illustrate the methods in a lung cancer study.
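A lasso-penalised logistic fit of the kind used as a baseline here can be sketched with proximal gradient descent (gradient step on the logistic loss, then soft-thresholding). This is a generic illustration on simulated data, not the HL/HLnet estimator; all parameters are illustrative:

```python
import numpy as np

def lasso_logistic(X, y, lam, lr=0.1, iters=2000):
    """L1-penalised logistic regression via proximal gradient descent:
    coefficients whose loss gradient stays below lam are driven to
    exactly zero, giving a sparse fit."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-X @ beta))
        beta = beta - lr * (X.T @ (prob - y) / n)               # gradient step
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(5)
n, p = 500, 20
X = rng.normal(size=(n, p))
true_beta = np.zeros(p); true_beta[0], true_beta[1] = 2.0, -2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = lasso_logistic(X, y, lam=0.05)
```

Only the two truly informative coefficients survive the thresholding; the noise coordinates collapse to zero.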
230.
Much of the data collected in scientific fields consists of counts. One way to analyze such data is to compare the individual levels of the treatment factor using multiple comparisons. However, the measured individuals are often clustered – e.g. according to litter or rearing – and this must be considered when estimating the parameters in a repeated measurement model. In addition, ignoring the overdispersion to which count data are prone increases the type I error rate. We carry out simulation studies using several different data settings and compare different multiple contrast tests with parameter estimates from generalized estimating equations and generalized linear mixed models in order to observe coverage and rejection probabilities. We generate overdispersed, clustered count data in small samples, as can be observed in many biological settings. We have found that the generalized estimating equations outperform generalized linear mixed models if the variance-sandwich estimator is correctly specified. Furthermore, generalized linear mixed models show problems with the convergence rate under certain data settings, although some implementations are less affected. Finally, we use an example of genetic data to demonstrate the application of the multiple contrast test and the problems of ignoring strong overdispersion.
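The core GEE advantage described here – a sandwich variance that remains valid under overdispersion and within-cluster correlation – can be illustrated in the simplest case, a cluster-robust standard error for a mean count. This is a toy sketch, not the paper's multiple contrast tests; all parameters are illustrative:

```python
import numpy as np

def cluster_robust_mean_se(counts, cluster):
    """Mean count with a sandwich-type, cluster-robust standard error:
    residuals are summed within clusters first, so overdispersion and
    within-cluster correlation inflate the SE, unlike the naive i.i.d.
    formula."""
    counts = np.asarray(counts, float)
    n = len(counts)
    mu = counts.mean()
    resid = counts - mu
    totals = np.array([resid[cluster == g].sum() for g in np.unique(cluster)])
    se_robust = np.sqrt((totals ** 2).sum()) / n
    se_naive = counts.std(ddof=1) / np.sqrt(n)
    return mu, se_robust, se_naive

# Overdispersed clustered counts: a gamma rate per cluster, Poisson within
rng = np.random.default_rng(6)
G, m = 30, 20
rates = rng.gamma(2.0, 2.0, G)                  # cluster-level heterogeneity
counts = rng.poisson(np.repeat(rates, m))
cluster = np.repeat(np.arange(G), m)
mu, se_robust, se_naive = cluster_robust_mean_se(counts, cluster)
```

Ignoring the clustering (the naive SE) substantially understates the uncertainty, which is exactly what inflates the type I error rate in the tests the paper studies.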
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号