  Paid full text   2,951 articles
  Free   109 articles
  Free (domestic)   34 articles
Management   304 articles
Ethnology   13 articles
Demography   51 articles
Collected works   152 articles
Theory and methodology   59 articles
General   960 articles
Sociology   128 articles
Statistics   1,427 articles
  2024   2 articles
  2023   17 articles
  2022   36 articles
  2021   43 articles
  2020   55 articles
  2019   97 articles
  2018   116 articles
  2017   128 articles
  2016   114 articles
  2015   99 articles
  2014   143 articles
  2013   525 articles
  2012   244 articles
  2011   165 articles
  2010   145 articles
  2009   122 articles
  2008   122 articles
  2007   145 articles
  2006   115 articles
  2005   103 articles
  2004   105 articles
  2003   100 articles
  2002   78 articles
  2001   55 articles
  2000   49 articles
  1999   33 articles
  1998   25 articles
  1997   26 articles
  1996   10 articles
  1995   10 articles
  1994   10 articles
  1993   7 articles
  1992   14 articles
  1991   8 articles
  1990   4 articles
  1989   7 articles
  1988   1 article
  1987   2 articles
  1986   2 articles
  1985   2 articles
  1984   3 articles
  1983   1 article
  1982   3 articles
  1980   1 article
  1977   1 article
  1975   1 article
Sort order:   A total of 3,094 results; search time 234 ms
91.
The rapid development of the mobile Internet and intensifying market competition are pushing telecom operators to transform and to strengthen cooperation along the industry chain; Internet companies such as service providers (SPs) and content providers (CPs), together with handset manufacturers, are key directions in which operators seek partners. This article analyses the motivations for telecom operators to pursue extended industry-chain cooperation in the mobile Internet era; it proposes three cooperation modes between operators and SPs/CPs (open platform, equity investment, and strategic alliance) and three modes between operators and handset manufacturers (contractual cooperation, deep customization, and operator self-built terminals); it then analyses how operators should choose among these modes when cooperating with SPs/CPs and with handset manufacturers, with the aim of providing a reference for Chinese telecom operators pursuing industry-chain cooperation.
92.
We update a previous approach to the estimation of the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach. This gives a flexible and easily implemented alternative to the previous method, which was based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to data collected by the Central Registry of Drug Abuse. The parameter estimates obtained are compared with the well-known Jolly–Seber estimates based on single-capture methods.
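The abstract does not spell out the estimator, but the second-stage idea (smoothing a sequence of per-time-point abundance estimates) can be illustrated in a few lines. The sketch below is a minimal illustration in Python: the yearly estimates, their standard error, and the smoothing-factor choice are synthetic placeholders, not the registry data or the paper's first-stage multiple-list estimator.

```python
# Hedged sketch: smoothing a series of per-year population-size estimates
# with a spline, in the spirit of the two-stage approach described above.
# The raw estimates here are synthetic; the paper's actual first-stage
# (multiple-list capture-recapture) estimator is not reproduced.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
years = np.arange(1977, 2012)                      # 35 years of hypothetical registry data
true_n = 8000 + 3000 * np.sin((years - 1977) / 6)  # hypothetical trend
raw_se = 600.0
raw_est = true_n + rng.normal(0.0, raw_se, size=years.size)  # stage-1 estimates

# Stage 2: weighted smoothing spline over time; weights are inverse SEs, and the
# smoothing factor controls how much year-to-year variability is removed.
spline = UnivariateSpline(years, raw_est, w=np.full(years.size, 1.0 / raw_se),
                          s=years.size)
smoothed = spline(years)

for y, r, s in zip(years, raw_est, smoothed):
    print(f"{y}: raw={r:8.0f}  smoothed={s:8.0f}")
```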
93.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we conclude that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates that are quite robust with respect to control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach; although both can be recommended in practice, we believe the model averaging approach is more appealing because of some deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered when conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
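As an illustration of the general technique (not the paper's simulation study), a frequentist model-averaging step over the three named model families might look like the sketch below; the data, starting values, and AIC-weighting rule are illustrative assumptions.

```python
# Hedged sketch of frequentist model averaging over linear, exponential and
# Emax concentration-QTc models using AIC weights. Data and tuning choices
# are illustrative, not the simulation set-up of the paper.
import numpy as np
from scipy.optimize import curve_fit

def linear(c, a, b):          return a + b * c
def exponential(c, a, b, k):  return a + b * (1.0 - np.exp(-k * c))
def emax(c, a, emax_, ec50):  return a + emax_ * c / (ec50 + c)

rng = np.random.default_rng(1)
conc = np.linspace(0.0, 10.0, 60)
dqtc = 1.0 + 8.0 * conc / (3.0 + conc) + rng.normal(0, 2, conc.size)  # synthetic ΔQTc

models = {"linear": (linear, [0, 1]),
          "exponential": (exponential, [0, 10, 0.5]),
          "emax": (emax, [0, 10, 3.0])}

aic_vals, preds = {}, {}
c_ref = 5.0                                   # reference concentration (assumed)
for name, (f, p0) in models.items():
    p, _ = curve_fit(f, conc, dqtc, p0=p0, maxfev=10000)
    resid = dqtc - f(conc, *p)
    sigma2 = np.mean(resid ** 2)
    aic_vals[name] = conc.size * np.log(sigma2) + 2 * (len(p) + 1)
    preds[name] = f(c_ref, *p)

# AIC weights: each candidate contributes to the averaged treatment effect.
aics = np.array(list(aic_vals.values()))
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()
effect = float(np.dot(w, list(preds.values())))
print(dict(zip(aic_vals, np.round(w, 3))), "averaged effect at c_ref:", round(effect, 2))
```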
94.
This article investigates the choice of working covariance structures in the analysis of spatially correlated observations, motivated by cardiac imaging data. Through Monte Carlo simulations, we find that the choice of covariance structure affects the efficiency of the estimator and the power of the test. Choosing the popular unstructured working covariance structure results in an over-inflated Type I error, possibly because the sample size is not large enough relative to the number of parameters being estimated. With regard to model fit indices, the Bayesian Information Criterion outperforms the Akaike Information Criterion in choosing the correct covariance structure used to generate the data.
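A minimal sketch of the kind of comparison the abstract describes, choosing among candidate covariance structures by AIC/BIC under a Gaussian likelihood, is given below; the data, the candidate set (no unstructured fit), and the parameterization are assumptions for illustration only.

```python
# Hedged sketch: selecting a working covariance structure for clustered
# (e.g. spatially correlated) observations by maximum likelihood, comparing
# AIC and BIC across candidate structures. Synthetic data; simplified
# relative to the paper's simulation design.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n_clusters, m = 60, 5
rho_true, sigma_true = 0.5, 1.0
cov_true = sigma_true**2 * ((1 - rho_true) * np.eye(m) + rho_true)  # exchangeable truth
Y = rng.multivariate_normal(np.zeros(m), cov_true, size=n_clusters)

def build_cov(structure, sigma2, rho):
    if structure == "independence":
        return sigma2 * np.eye(m)
    if structure == "exchangeable":
        return sigma2 * ((1 - rho) * np.eye(m) + rho)
    idx = np.arange(m)                                   # AR(1): decay with |i - j|
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

def neg_loglik(params, structure):
    sigma2, rho = np.exp(params[0]), np.tanh(params[1])  # keep parameters in range
    cov = build_cov(structure, sigma2, rho)
    try:
        return -multivariate_normal(np.zeros(m), cov).logpdf(Y).sum()
    except (ValueError, np.linalg.LinAlgError):
        return 1e10                                      # penalize invalid covariances

for structure, n_par in [("independence", 1), ("exchangeable", 2), ("ar1", 2)]:
    res = minimize(neg_loglik, x0=[0.0, 0.1], args=(structure,), method="Nelder-Mead")
    aic = 2 * res.fun + 2 * n_par
    bic = 2 * res.fun + n_par * np.log(n_clusters * m)
    print(f"{structure:13s} AIC={aic:8.1f}  BIC={bic:8.1f}")
```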
95.
Mehmet Caner, Econometric Reviews, 2016, 35(8–10): 1343–1346
This special issue is concerned with model selection and shrinkage estimators. This Introduction gives an overview of the papers published in this special issue.
96.
Oracle Inequalities for Convex Loss Functions with Nonlinear Targets   (total citations: 1; self-citations: 1; citations by others: 0)
This article considers penalized empirical loss minimization of convex loss functions with unknown target functions. Using the elastic net penalty, of which the Least Absolute Shrinkage and Selection Operator (Lasso) is a special case, we establish a finite-sample oracle inequality that bounds the loss of our estimator from above with high probability. If the unknown target is linear, this inequality also provides an upper bound on the estimation error of the estimated parameter vector. Next, we use the non-asymptotic results to show that the excess loss of our estimator is asymptotically of the same order as that of the oracle. If the target is linear, we give sufficient conditions for consistency of the estimated parameter vector. We briefly discuss how a thresholded version of our estimator can be used to perform consistent variable selection. We give two examples of loss functions covered by our framework.
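The thresholded elastic-net idea can be sketched with standard tools; the following is an illustrative example in which the data, the l1_ratio, and the threshold tau are arbitrary choices rather than the paper's.

```python
# Hedged sketch: elastic-net estimation followed by coefficient thresholding
# for variable selection, as discussed qualitatively above. The thresholding
# rule and data are illustrative, not those analysed in the paper.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(3)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 1.0, 0.5]          # sparse linear target
y = X @ beta + rng.normal(scale=1.0, size=n)

# Elastic net with cross-validated penalty (Lasso is the special case l1_ratio=1.0).
fit = ElasticNetCV(l1_ratio=0.9, cv=5, random_state=0).fit(X, y)

# Simple thresholding step: discard coefficients below a cut-off.
tau = 0.25
selected = np.flatnonzero(np.abs(fit.coef_) > tau)
print("estimated coefficients (first 8):", np.round(fit.coef_[:8], 2))
print("selected variables:", selected)
```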
97.
This article considers in-sample prediction and out-of-sample forecasting in regressions with many exogenous predictors. We consider four dimension-reduction devices: principal components, ridge, Landweber–Fridman, and partial least squares. We derive rates of convergence for two representative models: an ill-posed model and an approximate factor model. The theory is developed for a large cross-section and a large time series. As all these methods depend on a tuning parameter, we also propose data-driven selection methods based on cross-validation and establish their optimality. Monte Carlo simulations and an empirical application to forecasting inflation and output growth in the U.S. show that data-reduction methods outperform conventional methods in several relevant settings and may effectively guard against instabilities in predictors’ forecasting ability.
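Three of the four dimension-reduction devices are available in standard libraries, so a cross-validated comparison of the kind described can be sketched as follows; the synthetic factor-model data and tuning grids are assumptions, and the Landweber–Fridman estimator is omitted.

```python
# Hedged sketch: principal-components regression, ridge, and partial least
# squares with cross-validated tuning on synthetic approximate-factor data.
# This is not the paper's empirical inflation/output application.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(4)
T, N, r = 240, 80, 3                       # time periods, predictors, latent factors
F = rng.normal(size=(T, r))
X = F @ rng.normal(size=(r, N)) + rng.normal(scale=0.5, size=(T, N))
y = F @ np.array([1.0, -0.5, 0.8]) + rng.normal(scale=0.5, size=T)

pcr = GridSearchCV(Pipeline([("pca", PCA()), ("ols", LinearRegression())]),
                   {"pca__n_components": list(range(1, 9))}, cv=5)
pls = GridSearchCV(PLSRegression(), {"n_components": list(range(1, 9))}, cv=5)
ridge = RidgeCV(alphas=np.logspace(-3, 3, 25))

# Tuning parameters are chosen by (nested) cross-validation, mirroring the
# data-driven selection idea described in the abstract.
for name, est in [("PCR", pcr), ("PLS", pls), ("ridge", ridge)]:
    score = cross_val_score(est, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"{name:5s} CV MSE: {-score.mean():.3f}")
```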
98.
In this paper, we focus on the problem of factor screening in nonregular two-level designs by gradually reducing the number of possible sets of active factors. We are particularly concerned with situations in which three or four factors are active. Our proposed method works by examining the fits of projection models, in which variable selection techniques are used to reduce the number of terms. To examine the reliability of the method in combination with such techniques, we use a panel of models with three or four active factors and data generated from the 12-run and the 20-run Plackett–Burman (PB) designs. The dependence of the procedure on the amount of noise, the number of active factors, and the number of experimental factors is also investigated. For designs with few runs, such as the 12-run PB design, variable selection should be done with care, and default procedures in computer software may not be reliable; we suggest improvements to them. A real example is included to show how the proposed factor screening can be carried out in practice.
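A rough illustration of screening via projection models on the 12-run PB design is sketched below; the generator row is one standard choice, and the simulated true model and ranking criterion (residual sum of squares of each 3-factor projection fit) are illustrative assumptions, not the panel of models used in the paper.

```python
# Hedged sketch: screening for a set of three active factors in a 12-run
# Plackett-Burman design by comparing the fits of all 3-factor projection
# models (main effects plus two-factor interactions).
import itertools
import numpy as np

gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]           # a standard PB12 generator row
rows = [np.roll(gen, k) for k in range(11)] + [[-1] * 11]
D = np.array(rows, dtype=float)                         # 12 runs x 11 factors

rng = np.random.default_rng(5)
active = (0, 3, 7)                                      # hypothetical active factors
y = 2.0 * D[:, 0] - 1.5 * D[:, 3] + 1.0 * D[:, 0] * D[:, 7] + rng.normal(0, 0.5, 12)

def rss(subset):
    a, b, c = (D[:, j] for j in subset)
    X = np.column_stack([np.ones(12), a, b, c, a * b, a * c, b * c])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

ranked = sorted(itertools.combinations(range(11), 3), key=rss)
print("best projections:", ranked[:3], "true active set:", active)
```

With only 12 runs the ranking is noisy, which echoes the abstract's caution that variable selection in such small designs must be done with care.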
99.
This paper provides a Bayesian estimation procedure for monotone regression models that incorporates a monotone trend constraint subject to uncertainty. For monotone regression modeling with stochastic restrictions, we propose a Bayesian Bernstein polynomial regression model using two-stage hierarchical prior distributions based on a family of rectangle-screened multivariate Gaussian distributions, extending the work of Curtis and Ghosh [7] (A variable selection approach to monotonic regression with Bernstein polynomials, J. Appl. Stat. 38 (2011), pp. 961–976). This approach reflects the uncertainty about the prior constraint and thus yields a regression model subject to a monotone restriction with uncertainty. Based on the proposed model, we derive the posterior distributions of the unknown parameters and present numerical schemes to generate posterior samples. We show the empirical performance of the proposed model on synthetic data and in real data applications, and compare it with the Bernstein polynomial regression model of Curtis and Ghosh [7], which imposes the shape restriction with certainty. Through these analyses we illustrate the effectiveness of the proposed method, which incorporates the uncertainty about the monotone trend and automatically adapts the regression function to monotonicity.
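The Bernstein-polynomial basis and the monotonicity constraint can be illustrated with a simple constrained least-squares fit; the sketch below implements only the "certainty" version of the shape restriction and does not reproduce the paper's Bayesian rectangle-screened priors or MCMC scheme.

```python
# Hedged sketch: monotone regression with a Bernstein polynomial basis, where
# monotonicity is imposed through non-negative coefficient increments and a
# constrained least-squares fit. Degree, data, and constraint handling are
# illustrative assumptions.
import numpy as np
from scipy.optimize import lsq_linear
from scipy.special import comb

def bernstein_basis(x, degree):
    k = np.arange(degree + 1)
    return comb(degree, k) * x[:, None] ** k * (1.0 - x[:, None]) ** (degree - k)

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(size=150))
y = np.log1p(8.0 * x) + rng.normal(scale=0.15, size=x.size)   # monotone truth

degree = 8
B = bernstein_basis(x, degree)
L = np.tril(np.ones((degree + 1, degree + 1)))     # c_k = c_0 + sum of increments
A = B @ L
lb = np.r_[-np.inf, np.zeros(degree)]              # increments constrained >= 0
fit = lsq_linear(A, y, bounds=(lb, np.full(degree + 1, np.inf)))
coef = L @ fit.x                                   # non-decreasing Bernstein coefficients
print("fitted Bernstein coefficients:", np.round(coef, 2))
```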
100.
Against the background of the trust crisis in China's P2P lending industry, and using data published by 网贷之家 (WDZJ), this paper applies a panel fixed-effects model to study the adverse selection problem that arises when trust is lacking, as well as the effectiveness of efforts by the government and industry associations to rebuild brand trust. The results show that, under the trust crisis, platforms' attempts to attract investors and borrowers by building a brand image are futile: the lack of trust means that the honest operation of compliant platforms is not recognized by the public, and over time these platforms are likely to exit the market, triggering an adverse selection problem. Top-down institutional building has achieved encouraging results in rebuilding brand trust, effectively mitigating and reversing adverse selection in the P2P industry, and market rectification has begun to show results. These findings provide practical guidance for the ongoing rectification of the P2P industry.
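A fixed-effects regression of the kind described can be sketched as follows; the variable names (volume, brand_score) and the synthetic panel are placeholders, not the 网贷之家 data or the model specification used in the study.

```python
# Hedged sketch: a platform fixed-effects panel regression, estimated here as
# a least-squares dummy-variable (LSDV) model with statsmodels. Data and
# variable names are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
platforms, months = 50, 24
df = pd.DataFrame({
    "platform": np.repeat(np.arange(platforms), months),
    "month": np.tile(np.arange(months), platforms),
})
platform_fe = np.repeat(rng.normal(scale=1.0, size=platforms), months)
df["brand_score"] = rng.normal(size=len(df))
df["volume"] = 0.2 * df["brand_score"] + platform_fe + rng.normal(scale=0.5, size=len(df))

# Platform dummies absorb time-invariant platform heterogeneity, so the
# coefficient on brand_score is identified from within-platform variation.
res = smf.ols("volume ~ brand_score + C(platform)", data=df).fit()
print(f"within-platform effect of brand_score: {res.params['brand_score']:.3f} "
      f"(se {res.bse['brand_score']:.3f})")
```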