101.
Analysis of massive datasets is challenging owing to limitations of computer primary memory. Composite quantile regression (CQR) is a robust and efficient estimation method. In this paper, we extend CQR to massive datasets and propose a divide-and-conquer CQR method. The basic idea is to split the entire dataset into several blocks, apply the CQR method to the data in each block, and finally combine the block-wise regression results via a weighted average. The proposed approach significantly reduces the required amount of primary memory, and the resulting estimate is as efficient as if the entire dataset were analysed simultaneously. Moreover, to improve the efficiency of CQR, we propose a weighted CQR estimation approach. To achieve sparsity with high-dimensional covariates, we develop a variable selection procedure to select significant parametric components and prove that the method possesses the oracle property. Both simulations and data analysis are conducted to illustrate the finite-sample performance of the proposed methods.
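As a rough illustration of the divide-and-conquer idea, the sketch below (assuming numpy and statsmodels are available) splits the data into blocks, approximates the composite quantile fit in each block by averaging quantile-regression slopes over several quantile levels, and combines the blocks with size-proportional weights; the joint CQR objective and the paper's weighted-CQR and variable-selection steps are not reproduced.

```python
# Divide-and-conquer sketch: the joint CQR fit is approximated by averaging
# quantile-regression slopes over several quantile levels within each block,
# then combining blocks by a size-weighted average.
import numpy as np
import statsmodels.api as sm

def block_cqr(y, X, taus=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Approximate composite quantile regression slopes for one block."""
    Xc = sm.add_constant(X)
    fits = [sm.QuantReg(y, Xc).fit(q=t).params[1:] for t in taus]  # drop intercepts
    return np.mean(fits, axis=0)

def dc_cqr(y, X, n_blocks=10):
    """Split the data into blocks, fit each block, combine by weighted average."""
    idx = np.array_split(np.arange(len(y)), n_blocks)
    estimates = np.array([block_cqr(y[i], X[i]) for i in idx])
    weights = np.array([len(i) for i in idx], dtype=float)
    weights /= weights.sum()
    return weights @ estimates

# Toy usage with simulated heavy-tailed errors
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=3, size=5000)
print(dc_cqr(y, X))   # should be close to (1, -2, 0.5)
```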
102.
Estimation in the multivariate context when the number of observations available is less than the number of variables is a classical theoretical problem. In order to ensure estimability, one has to assume certain constraints on the parameters. A method for maximum likelihood estimation under constraints is proposed to solve this problem. Even in the extreme case where only a single multivariate observation is available, this may provide a feasible solution. It simultaneously provides a simple, straightforward methodology to allow for specific structures within and between covariance matrices of several populations. This methodology yields exact maximum likelihood estimates.
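The abstract gives no formulas, so the following toy sketch only illustrates the general point: with more variables than observations the unrestricted covariance MLE is singular, whereas under an illustrative structural constraint (a spherical covariance σ²I, chosen here for simplicity and not taken from the paper) the constrained MLE has a closed form.

```python
# Toy illustration: with p > n the unrestricted covariance MLE is singular, but
# under a simple structural constraint (spherical covariance sigma^2 * I, used
# purely for illustration) the constrained MLE is available in closed form.
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 50                                  # fewer observations than variables
X = rng.normal(loc=2.0, scale=1.5, size=(n, p))

mu_hat = X.mean(axis=0)                       # MLE of the mean vector
resid = X - mu_hat
sigma2_hat = (resid ** 2).sum() / (n * p)     # MLE of sigma^2 under Sigma = sigma^2 I

S = resid.T @ resid / n                       # unrestricted MLE: rank <= n-1, singular
print(sigma2_hat, np.linalg.matrix_rank(S))
```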
103.
Recent research indicates that CEOs’ temporal focus (the degree to which individuals attend to the past, present, and future) is a critical predictor of strategic outcomes. Building on paradox theory and the attention-based view, we examine the implications of CEOs’ past and future focus for strategic change. Results from polynomial regression analysis reveal that CEOs who cognitively embrace both the past and the future at the same time engage more in strategic change. In addition, our results reveal that the positive strategic change–firm performance relationship is enhanced when CEOs’ past focus is high, whereas CEOs’ future focus mitigates the translation of strategic change into firm performance (when their past focus is low at the same time). Moreover, supplemental analyses indicate that the impact of CEOs’ temporal focus differs between stable and dynamic environments. Our study thus extends the literature on both individuals’ temporal focus and strategic change.
104.
In a recent issue of this journal, Holgersson et al. [Dummy variables vs. category-wise models, J. Appl. Stat. 41(2) (2014), pp. 233–241, doi:10.1080/02664763.2013.838665] compared the use of dummy coding in regression analysis to the use of category-wise models (i.e. estimating separate regression models for each group) with respect to estimating and testing group differences in intercept and in slope. They presented three objections against the use of dummy variables in a single regression equation, which could be overcome by the category-wise approach. In this note, I first comment on each of these three objections and next draw attention to some other issues in comparing these two approaches. This commentary further clarifies the differences and similarities between dummy variable and category-wise approaches.
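A minimal sketch of the two approaches being compared (data and variable names are invented for illustration): a single regression with a group dummy and its interaction with the covariate recovers the same intercepts and slopes as fitting a separate model per group; the approaches differ mainly in how group contrasts are tested and in whether the error variance is pooled.

```python
# One regression with a group dummy and its interaction with x, versus separate
# ("category-wise") regressions per group.  The fitted intercepts and slopes
# coincide; group differences appear directly as the C(g) and x:C(g) terms.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "x": rng.normal(size=n),
    "g": rng.integers(0, 2, size=n),          # two groups coded 0/1
})
df["y"] = 1.0 + 0.5 * df.x + df.g * (2.0 + 1.5 * df.x) + rng.normal(scale=0.3, size=n)

# Dummy-variable approach: one equation, group contrasts as coefficients
dummy_fit = smf.ols("y ~ x * C(g)", data=df).fit()

# Category-wise approach: a separate model per group
fits = {g: smf.ols("y ~ x", data=sub).fit() for g, sub in df.groupby("g")}

print(dummy_fit.params)
print({g: f.params.to_dict() for g, f in fits.items()})
```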
105.
The estimation of mixtures of regression models is usually based on the assumption of normally distributed components, and maximum likelihood estimation of the normal components is sensitive to noise, outliers, and high-leverage points. Missing values are inevitable in many situations, and parameter estimates can be biased if the missing values are not handled properly. In this article, we propose mixtures of regression models for contaminated, incomplete, heterogeneous data. The proposed models provide robust estimates of regression coefficients varying across latent subgroups even in the presence of missing values. The methodology is illustrated through simulation studies and a real data analysis.
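For orientation only, the sketch below implements a plain EM algorithm for a two-component Gaussian mixture of linear regressions; the contamination-robust component densities and the missing-data handling that are the point of the paper are not implemented here.

```python
# Baseline EM for a two-component mixture of linear regressions (normal errors).
import numpy as np
from scipy.stats import norm

def em_mix_reg(y, X, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(size=(2, p))
    sigma = np.array([y.std(), y.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior component probabilities (responsibilities)
        dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k]) for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: weighted least squares and weighted variance per component
        for k in range(2):
            sw = np.sqrt(r[k])
            beta[k] = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
            resid = y - X @ beta[k]
            sigma[k] = np.sqrt((r[k] * resid ** 2).sum() / r[k].sum())
            pi[k] = r[k].mean()
    return beta, sigma, pi

# Toy mixture: two regression lines with mixing proportions 0.4 / 0.6
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
z = rng.random(n) < 0.4
y = np.where(z, 1 + 3 * X[:, 1], -2 - X[:, 1]) + rng.normal(scale=0.4, size=n)
print(em_mix_reg(y, X))
```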
106.
Focusing on model selection problems in the family of Poisson mixture models (including the Poisson mixture regression model with random effects and the zero-inflated Poisson regression model with random effects), the current paper derives two conditional Akaike information criteria. The criteria are unbiased estimators of the conditional Akaike information based on the conditional log-likelihood and on the joint log-likelihood, respectively. The derivation is free of specific parametric assumptions about the conditional mean of the true data-generating model, applies to different types of estimation methods, and does not rely on asymptotic arguments. Simulations show that the proposed criteria have promising estimation accuracy. In addition, the criterion based on the conditional log-likelihood demonstrates good model selection performance under different scenarios. Two sets of real data are used to illustrate the proposed method.
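The abstract does not reproduce the two criteria, but a generic conditional AIC (stated here only as an illustration of the form such criteria take, not the paper's exact expression) evaluates the conditional log-likelihood at the predicted random effects and replaces the parameter count of the marginal AIC by a bias-correction term Φ:

```latex
% Generic conditional AIC (illustration only; the paper's exact bias-correction
% term is not given in the abstract): the conditional log-likelihood is
% evaluated at the predicted random effects \hat{b}, and a correction \Phi
% plays the role of the parameter count k in the marginal AIC.
\mathrm{cAIC} = -2\,\log f\bigl(\mathbf{y}\mid\hat{\boldsymbol{\theta}},\hat{\mathbf{b}}\bigr) + 2\,\Phi,
\qquad
\mathrm{AIC}_{\text{marginal}} = -2\,\log f\bigl(\mathbf{y}\mid\hat{\boldsymbol{\theta}}\bigr) + 2\,k .
```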
107.
It is well known that when there is a break in the variance (unconditional heteroskedasticity) of the error term in linear regression models, a routine application of the Lagrange multiplier (LM) test for autocorrelation can cause potentially significant size distortions. We propose a new test for autocorrelation that is robust in the presence of a break in variance. The proposed test is a modified LM test based on a generalized least squares regression. Monte Carlo simulations show that the new test performs well in finite samples: it is comparable to existing heteroskedasticity-robust tests in terms of size and considerably better in terms of power.
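The paper's GLS-based modification is not spelled out in the abstract. The sketch below runs the standard Breusch-Godfrey LM test via statsmodels and, as a crude stand-in for the robust version, rescales each variance regime by its residual standard deviation before re-running the test; the break point is assumed known, and none of this reproduces the authors' statistic.

```python
# Standard LM (Breusch-Godfrey) test for autocorrelation, and a crude
# "variance-break adjustment" that reweights each regime by its residual
# standard deviation (a GLS-style transformation) before re-testing.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

def lm_autocorr(y, X, nlags=1):
    res = sm.OLS(y, sm.add_constant(X)).fit()
    return acorr_breusch_godfrey(res, nlags=nlags)[:2]      # (LM stat, p-value)

def lm_autocorr_var_break(y, X, break_idx, nlags=1):
    res = sm.OLS(y, sm.add_constant(X)).fit()
    scale = np.ones_like(y, dtype=float)
    scale[:break_idx] = res.resid[:break_idx].std()
    scale[break_idx:] = res.resid[break_idx:].std()
    res_w = sm.OLS(y / scale, sm.add_constant(X) / scale[:, None]).fit()
    return acorr_breusch_godfrey(res_w, nlags=nlags)[:2]

# Toy data: serially uncorrelated errors whose variance triples half-way through
rng = np.random.default_rng(4)
n, brk = 400, 200
x = rng.normal(size=n)
e = rng.normal(size=n) * np.where(np.arange(n) < brk, 1.0, 3.0)
y = 1.0 + 2.0 * x + e
print(lm_autocorr(y, x), lm_autocorr_var_break(y, x, brk))
```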
108.
In this paper we consider a semiparametric regression model involving a d-dimensional quantitative explanatory variable X and including a dimension reduction of X via an index βX. In this model, the main goal is to estimate the Euclidean parameter β and to predict the real response variable Y conditionally on X. Our approach is based on the sliced inverse regression (SIR) method and optimal quantization in the Lp-norm. We establish the convergence of the proposed estimators of β and of the conditional distribution. Simulation studies show the good numerical behavior of the proposed estimators for finite sample sizes.
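For reference, a sketch of classical SIR is given below; the Lp-norm optimal quantization step of the paper is replaced here by plain equal-count slicing of the response, so this is only the baseline estimator the paper builds on.

```python
# Classical sliced inverse regression (SIR): standardize X, slice the sorted
# response into H slices, average the standardized X within each slice, and
# take the top eigenvector of the between-slice covariance as the estimated
# index direction.
import numpy as np

def sir_direction(X, y, n_slices=10):
    n, d = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    root_inv = np.linalg.inv(np.linalg.cholesky(cov)).T   # whitening matrix
    Z = (X - mu) @ root_inv
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = sum(len(s) / n * np.outer(Z[s].mean(axis=0), Z[s].mean(axis=0)) for s in slices)
    _, eigvec = np.linalg.eigh(M)
    beta = root_inv @ eigvec[:, -1]                        # back to the original scale
    return beta / np.linalg.norm(beta)

# Toy single-index model y = g(beta'X) + noise
rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 5))
beta_true = np.array([1.0, 2.0, 0.0, 0.0, -1.0])
y = np.sin(X @ beta_true) + X @ beta_true + 0.1 * rng.normal(size=2000)
print(sir_direction(X, y))   # proportional to beta_true up to sign
```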
109.
Wu and Zen (1999) proposed a linear model selection procedure based on M-estimation that includes many classical model selection criteria as special cases and was shown to be strongly consistent for a variety of penalty functions. In this paper, we investigate its small-sample performance for several choices of fixed penalty functions. The performance varies with the choice of penalty. Hence, a randomized penalty based on the observed data is proposed, which preserves the consistency property and provides improved performance over a fixed choice of penalty function.
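Wu and Zen's criterion is not reproduced in the abstract. As an illustrative member of the penalized M-estimation family, the sketch below scores candidate subsets by a Huber-loss fit plus a BIC-style penalty; the data-driven randomized penalty proposed in the paper is not implemented.

```python
# Illustrative penalized M-estimation selection: each candidate subset is fitted
# by a Huber M-estimator and scored by (sum of Huber losses of scaled residuals)
# + penalty * (number of parameters).
from itertools import combinations
import numpy as np
import statsmodels.api as sm

def huber_loss(r, c=1.345):
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)

def select_m(y, X, penalty):
    n, d = X.shape
    best = None
    for k in range(1, d + 1):
        for subset in combinations(range(d), k):
            Xs = sm.add_constant(X[:, subset])
            fit = sm.RLM(y, Xs, M=sm.robust.norms.HuberT()).fit()
            resid = (y - Xs @ fit.params) / fit.scale
            crit = huber_loss(resid).sum() + penalty * (len(subset) + 1)  # +1: intercept
            if best is None or crit < best[0]:
                best = (crit, subset)
    return best

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 4))
y = 2 * X[:, 0] - X[:, 2] + rng.standard_t(df=3, size=300)   # only columns 0 and 2 matter
print(select_m(y, X, penalty=np.log(300)))
```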
110.
Urbanization is a necessary stage on the road to modernization, and accurate urbanization forecasts are the basis for economic and social planning. Building on structural-break theory, this paper uses a Logistic model to analyse the urbanization rate of Shaanxi Province from 1978 to 2010. The results show that 1999 is a structural break point in Shaanxi's urbanization rate, indicating that its growth was affected by external shocks, and the goodness of fit improves markedly once the series is segmented. Piecewise Logistic models built with saturation thresholds of 0.8 and 1 both fit with clearly higher accuracy, with the threshold of 1 performing best, which suggests that urbanization in Shaanxi is still accelerating; the forecast indicates that Shaanxi's urbanization rate will reach about 70% by 2030. Overall, the period from 1984 to 2030 constitutes the acceleration phase of urbanization in Shaanxi Province. The problems of housing, population expansion, environmental degradation, traffic congestion, and public security that accompany this acceleration phase must be properly addressed.
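Since the Shaanxi series itself is not included here, the sketch below fits the piecewise Logistic curve to a synthetic urbanization series as an illustration: the sample is split at the 1999 break point, and a logistic growth curve with saturation level K = 1 (the threshold the abstract prefers) is fitted to each segment and extrapolated to 2030.

```python
# Piecewise Logistic fit with a break in 1999 and saturation level K = 1.
# A synthetic series stands in for the Shaanxi urbanization data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, r, t0, K=1.0):
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(1978, 2011)
rate = logistic(years, 0.06, 2015) + np.random.default_rng(7).normal(0, 0.005, len(years))

break_year = 1999
fits = {}
for label, mask in [("pre", years <= break_year), ("post", years > break_year)]:
    # p0 of length 2 means only r and t0 are fitted; K keeps its default of 1
    popt, _ = curve_fit(logistic, years[mask], rate[mask], p0=(0.05, 2015), maxfev=10000)
    fits[label] = popt

# Extrapolate the post-break segment to 2030
print(fits, logistic(2030, *fits["post"]))
```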