91.
In this article, we consider a nonparametric regression model with replicated observations and a dependent error structure, allowing for dependence among the units. Wavelet procedures are developed to estimate the regression function. The moment consistency, strong consistency, strong convergence rate, and asymptotic normality of the wavelet estimator are established under suitable conditions. A simulation study is undertaken to assess the finite-sample performance of the proposed method.
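As a rough illustration of wavelet smoothing (not the replicated-observation estimator of the article, whose error structure is more involved), a minimal Haar-wavelet regression sketch in NumPy: decompose the noisy observations, soft-threshold the detail coefficients, and invert. The signal length and threshold below are illustrative assumptions.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar decomposition of a length-2^k signal.
    Returns the final approximation and the list of detail vectors."""
    details = []
    a = x.astype(float)
    while len(a) > 1:
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))   # detail coefficients
        a = (even + odd) / np.sqrt(2)               # approximation coefficients
    return a, details

def haar_idwt(a, details):
    """Invert haar_dwt, coarsest level first."""
    for d in reversed(details):
        even = (a + d) / np.sqrt(2)
        odd = (a - d) / np.sqrt(2)
        a = np.empty(2 * len(d))
        a[0::2], a[1::2] = even, odd
    return a

def wavelet_regression(y, threshold):
    """Estimate a regression function from noisy equispaced observations
    by soft-thresholding the Haar detail coefficients."""
    a, details = haar_dwt(y)
    details = [np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0) for d in details]
    return haar_idwt(a, details)
```

With `threshold=0` the transform pair reconstructs the observations exactly; a positive threshold shrinks fine-scale noise while keeping coarse structure.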
92.
Analysis of massive datasets is challenging owing to limitations of computer primary memory. Composite quantile regression (CQR) is a robust and efficient estimation method. In this paper, we extend CQR to massive datasets and propose a divide-and-conquer CQR method. The basic idea is to split the entire dataset into several blocks, apply the CQR method to the data in each block, and finally combine these regression results via a weighted average. The proposed approach significantly reduces the required amount of primary memory, and the resulting estimate is as efficient as if the entire dataset were analysed simultaneously. Moreover, to improve the efficiency of CQR, we propose a weighted CQR estimation approach. To achieve sparsity with high-dimensional covariates, we develop a variable selection procedure to select significant parametric components and prove that the method possesses the oracle property. Both simulations and data analysis are conducted to illustrate the finite-sample performance of the proposed methods.
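The split-apply-combine structure can be sketched in a few lines. For brevity, plain least squares stands in below for the per-block CQR fit (the abstract's method minimizes a composite check loss instead); the sample-size weighting of block estimates is the part being illustrated.

```python
import numpy as np

def fit_block(X, y):
    """Per-block estimate; ordinary least squares stands in here for the
    composite quantile regression fit described in the abstract."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def divide_and_conquer(X, y, n_blocks):
    """Split the data into blocks, fit each block separately, and combine
    the block estimates by a sample-size-weighted average."""
    idx = np.array_split(np.arange(len(y)), n_blocks)
    betas = np.array([fit_block(X[i], y[i]) for i in idx])
    weights = np.array([len(i) for i in idx], dtype=float)
    return (weights[:, None] * betas).sum(axis=0) / weights.sum()
```

Only one block of `(X, y)` needs to reside in primary memory at a time, which is the point of the divide-and-conquer construction.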
93.
Estimation in the multivariate context when the number of observations available is less than the number of variables is a classical theoretical problem. In order to ensure estimability, one has to assume certain constraints on the parameters. A method for maximum likelihood estimation under constraints is proposed to solve this problem. Even in the extreme case where only a single multivariate observation is available, this may provide a feasible solution. It simultaneously provides a simple, straightforward methodology to allow for specific structures within and between covariance matrices of several populations. This methodology yields exact maximum likelihood estimates.
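A toy instance of the estimability problem: with fewer observations than variables the unconstrained Gaussian covariance MLE is singular, but under a diagonal constraint the MLE reduces to per-variable means and variances and remains well defined. This is only the simplest constraint of the kind the article studies, shown here as a sketch.

```python
import numpy as np

def diagonal_mle(X):
    """Gaussian MLE of mean and covariance when the covariance is
    constrained to be diagonal: per-variable means and (biased, 1/n)
    variances stay estimable even when p > n."""
    mu = X.mean(axis=0)
    var = ((X - mu) ** 2).mean(axis=0)   # MLE uses 1/n, not 1/(n-1)
    return mu, np.diag(var)
```

The unconstrained sample covariance of an n-by-p matrix has rank at most n-1, so for p > n it cannot be inverted; the constrained estimate can.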
94.
Recent research indicates that CEOs’ temporal focus (the degree to which individuals attend to the past, present, and future) is a critical predictor of strategic outcomes. Building on paradox theory and the attention-based view, we examine the implications of CEOs’ past and future focus for strategic change. Results from polynomial regression analysis reveal that CEOs who cognitively embrace both the past and the future at the same time engage more in strategic change. In addition, our results reveal that the positive relationship between strategic change and firm performance is enhanced when CEOs’ past focus is high, whereas CEOs’ future focus mitigates the translation of strategic change into firm performance (when their past focus is low at the same time). Supplemental analyses indicate that the impact of CEOs’ temporal focus differs between stable and dynamic environments. Our study thus extends the literature on both individuals’ temporal focus and strategic change.
95.
In a recent issue of this journal, Holgersson et al. [Dummy variables vs. category-wise models, J. Appl. Stat. 41(2) (2014), pp. 233–241, doi:10.1080/02664763.2013.838665] compared the use of dummy coding in regression analysis to the use of category-wise models (i.e. estimating separate regression models for each group) with respect to estimating and testing group differences in intercept and in slope. They presented three objections against the use of dummy variables in a single regression equation, which could be overcome by the category-wise approach. In this note, I first comment on each of these three objections and next draw attention to some other issues in comparing these two approaches. This commentary further clarifies the differences and similarities between dummy variable and category-wise approaches.
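The two approaches being compared can be checked numerically on toy data (the groups, coefficients, and noise level below are illustrative assumptions): a single regression with a group dummy and a dummy-by-slope interaction reproduces exactly the intercepts and slopes from separate per-group fits.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.normal(size=2 * n)
g = np.repeat([0, 1], n)                      # group indicator (dummy)
y = np.where(g == 0, 1.0 + 2.0 * x, 3.0 - 1.0 * x) + 0.1 * rng.normal(size=2 * n)

# Dummy-variable model: intercept, x, dummy, and dummy*x in one equation.
Xd = np.column_stack([np.ones_like(x), x, g, g * x])
b = np.linalg.lstsq(Xd, y, rcond=None)[0]

# Category-wise models: a separate regression per group.
def ols(xg, yg):
    return np.linalg.lstsq(np.column_stack([np.ones_like(xg), xg]), yg, rcond=None)[0]

b0 = ols(x[g == 0], y[g == 0])
b1 = ols(x[g == 1], y[g == 1])
# Group 0: intercept b[0], slope b[1]; group 1: b[0]+b[2], b[1]+b[3].
```

The point estimates coincide; as the note discusses, the approaches differ in the error-variance assumption and in how group differences are tested, not in the fitted lines.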
96.
Business establishment microdata typically are required to satisfy agency-specified edit rules, such as balance equations and linear inequalities. Inevitably some establishments' reported data violate the edit rules. Statistical agencies correct faulty values using a process known as edit-imputation. Business establishment data also must be heavily redacted before being shared with the public; indeed, confidentiality concerns lead many agencies not to share establishment microdata as unrestricted access files. When microdata must be heavily redacted, one approach is to create synthetic data, as done in the U.S. Longitudinal Business Database and the German IAB Establishment Panel. This article presents the first implementation of a fully integrated approach to edit-imputation and data synthesis. We illustrate the approach on data from the U.S. Census of Manufactures and present a variety of evaluations of the utility of the synthetic data. The paper also presents assessments of disclosure risks for several intruder attacks. We find that the synthetic data preserve important distributional features from the post-editing confidential microdata, and have low risks for the various attacks.
97.
The estimation of mixtures of regression models is usually based on the normality assumption for the components, and maximum likelihood estimation of the normal components is sensitive to noise, outliers, and high-leverage points. Missing values are inevitable in many situations, and parameter estimates can be biased if the missing values are not handled properly. In this article, we propose mixtures of regression models for contaminated, incomplete, heterogeneous data. The proposed models provide robust estimates of regression coefficients varying across latent subgroups even in the presence of missing values. The methodology is illustrated through simulation studies and a real data analysis.
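The baseline model being robustified can be sketched as a two-component Gaussian mixture of regressions fitted by EM on complete data (without the contamination and missing-data machinery the article develops): the E-step computes component responsibilities, the M-step runs weighted least squares per component.

```python
import numpy as np

def em_mixture_regression(X, y, n_iter=50, seed=0):
    """EM for a two-component Gaussian mixture of linear regressions
    (complete-data baseline, not the article's robust extension)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(size=(2, p))
    sigma = np.array([y.std(), y.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities from component-wise Gaussian log-densities.
        resid = y[None, :] - beta @ X.T                     # shape (2, n)
        logd = (np.log(pi)[:, None] - np.log(sigma)[:, None]
                - 0.5 * (resid / sigma[:, None]) ** 2)
        logd -= logd.max(axis=0)
        r = np.exp(logd)
        r /= r.sum(axis=0)
        # M-step: weighted least squares and weighted variance per component.
        for k in range(2):
            w = r[k]
            Xw = X * w[:, None]
            beta[k] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(p), Xw.T @ y)
            sigma[k] = max(np.sqrt((w * (y - X @ beta[k]) ** 2).sum() / w.sum()), 1e-6)
        pi = r.mean(axis=1)
    return beta, sigma, pi, r
```

The article's point is precisely that this normal-likelihood M-step breaks down under outliers and missing entries, which motivates the contaminated-data formulation.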
98.
Recent developments have made model-based imputation of network data feasible in principle, but the extant literature provides few practical examples of its use. In this paper, we consider 14 schools from the widely used In-School Survey of Add Health (Harris et al., 2009), applying an ERGM-based estimation and simulation approach to impute the network missing data for each school. Add Health's complex study design leads to multiple types of missingness, and we introduce practical techniques for handling each. We also develop a cross-validation-based method – Held-Out Predictive Evaluation (HOPE) – for assessing this approach. Our results suggest that ERGM-based imputation of edge variables is a viable approach to the analysis of complex studies such as Add Health, provided that care is used in understanding and accounting for the study design.
99.
Focusing on the model selection problems in the family of Poisson mixture models (including the Poisson mixture regression model with random effects and zero‐inflated Poisson regression model with random effects), the current paper derives two conditional Akaike information criteria. The criteria are the unbiased estimators of the conditional Akaike information based on the conditional log‐likelihood and the conditional Akaike information based on the joint log‐likelihood, respectively. The derivation is free from the specific parametric assumptions about the conditional mean of the true data‐generating model and applies to different types of estimation methods. Additionally, the derivation is not based on the asymptotic argument. Simulations show that the proposed criteria have promising estimation accuracy. In addition, it is found that the criterion based on the conditional log‐likelihood demonstrates good model selection performance under different scenarios. Two sets of real data are used to illustrate the proposed method.
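The criteria derived in the paper require the conditional likelihood of the mixed model; as a minimal fixed-effects sketch of the underlying "-2 × log-likelihood + 2 × penalty" construction, the following fits a Poisson regression by Newton-Raphson and computes its AIC (the log y! constant is dropped, since it cancels when comparing models fitted to the same data; the data below are illustrative).

```python
import numpy as np

def fit_poisson(X, y, n_iter=50):
    """Poisson regression (log link) by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)                       # score
        H = X.T @ (X * mu[:, None])                 # Fisher information
        beta = beta + np.linalg.solve(H, grad)
    return beta

def aic_poisson(X, y, beta):
    """AIC = -2 * loglik + 2 * p, with the log(y!) term dropped because it
    cancels when comparing models on the same data."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.exp(eta))
    return -2.0 * loglik + 2.0 * X.shape[1]
```

The conditional criteria in the paper replace the fixed parameter count with an effective degrees-of-freedom term appropriate to random effects, which is the part the sketch omits.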
100.
It is known that when there is a break in the variance (unconditional heteroskedasticity) of the error term in linear regression models, routine application of the Lagrange multiplier (LM) test for autocorrelation can cause potentially significant size distortions. We propose a new test for autocorrelation that is robust in the presence of a break in variance. The proposed test is a modified LM test based on a generalized least squares regression. Monte Carlo simulations show that the new test performs well in finite samples: it is comparable to existing heteroskedasticity-robust tests in terms of size and much better in terms of power.
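The routine test whose size distortion motivates the article is the Breusch-Godfrey LM test: regress the OLS residuals on the regressors and their own lags, and take LM = n × R², asymptotically chi-square with one degree of freedom per lag. A sketch of that baseline (the article's robust version modifies it via a GLS regression, which is not reproduced here):

```python
import numpy as np

def lm_autocorrelation_test(X, y, lags=1):
    """Standard Breusch-Godfrey LM test for serial correlation:
    regress the OLS residuals on X and their lags; LM = n * R^2 is
    asymptotically chi-square with `lags` degrees of freedom under
    the null of no autocorrelation."""
    n = len(y)
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    lagged = np.column_stack([np.r_[np.zeros(k + 1), e[: n - k - 1]]
                              for k in range(lags)])
    Z = np.column_stack([X, lagged])
    fit = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
    r2 = 1 - ((e - fit) ** 2).sum() / ((e - e.mean()) ** 2).sum()
    return n * r2
```

Under strong AR(1) errors the statistic is roughly n times the squared autocorrelation, so it grows with the sample size; the article's concern is that a variance break inflates it even when the errors are serially uncorrelated.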