6,563 results found (search time: 46 ms)
231.
In this article, we propose a factor-adjusted multiple testing (FAT) procedure based on factor-adjusted p-values in a linear factor model involving both observable and unobservable factors, for the purpose of selecting skilled funds in empirical finance. The factor-adjusted p-values are obtained by extracting the latent common factors with the principal component method. Under mild conditions, the false discovery proportion can be consistently estimated even when the idiosyncratic errors are weakly correlated across units. Furthermore, by appropriately setting a sequence of threshold values approaching zero, the proposed FAT procedure enjoys model selection consistency. Extensive simulation studies and a real data analysis for selecting skilled funds in the U.S. financial market illustrate the practical utility of the proposed method. Supplementary materials for this article are available online.
232.
In confirmatory clinical trials, the prespecification of the primary analysis model is a universally accepted scientific principle that allows strict control of the type I error. Consequently, both the ICH E9 guideline and the European Medicines Agency (EMA) guideline on missing data in confirmatory clinical trials require that the primary analysis model be defined unambiguously. This requirement applies to mixed models for longitudinal data, which handle missing data implicitly. To evaluate compliance with the EMA guideline, we examined the model specifications in phase II and III clinical study protocols submitted between 2015 and 2018 to the Ethics Committee at Hannover Medical School under the German Medicinal Products Act that planned to use a mixed model for longitudinal data in the confirmatory testing strategy. Overall, 39 trials from different types of sponsors and a wide range of therapeutic areas were evaluated. While nearly all protocols specify the fixed and random effects of the analysis model (95%), only 77% give the structure of the covariance matrix used for modelling the repeated measurements. Moreover, the testing method (36%), the estimation method (28%), the computation method (3%), and the fallback strategy (18%) are each given in fewer than half of the study protocols. Subgroup analyses indicate that these findings are universal and not specific to trial phase or company size. Altogether, our results show that guideline compliance is poor to varying degrees, and consequently strict type I error rate control at the intended level is not guaranteed.
233.
Nonresponse is a very common phenomenon in survey sampling. Nonignorable nonresponse – that is, a response mechanism that depends on the values of the variable subject to nonresponse – is the most difficult type to handle. This article develops a robust estimation approach for estimating equations (EEs) that combines a model for the nonignorably missing data, the generalized method of moments (GMM), and imputation of the EEs via the observed data rather than imputed missing values. Based on a particular semiparametric logistic model for nonignorable missing responses, we propose modified EEs to compute the required conditional expectations under nonignorable missingness, and apply the GMM to infer the parameters. The advantage of our method is that it replaces nonparametric kernel smoothing with a parametric sampling importance resampling (SIR) procedure, avoiding the problems kernel smoothing faces with high-dimensional covariates. Simulations show the proposed method to be more robust than several existing approaches.
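A core ingredient above is the parametric sampling importance resampling (SIR) step used in place of kernel smoothing. A generic SIR routine is easy to sketch: draw from a proposal, weight each draw by the target-to-proposal density ratio, and resample proportionally to the weights. The target and proposal below are hypothetical stand-ins, not the paper's missingness model.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_sample(log_target, log_proposal, draw_proposal, m=20000, k=5000, rng=rng):
    """Sampling importance resampling: m proposal draws resampled down to k."""
    x = draw_proposal(m)
    logw = log_target(x) - log_proposal(x)      # importance log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(m, size=k, replace=True, p=w)
    return x[idx]

# Illustrative densities (up to constants): target N(0,1), proposal N(0,4).
log_target = lambda x: -0.5 * x**2
log_proposal = lambda x: -0.5 * (x / 2.0) ** 2
draw = lambda m: rng.normal(scale=2.0, size=m)

xs = sir_sample(log_target, log_proposal, draw)
print(xs.mean(), xs.var())
```

The resampled points behave approximately like draws from the target, so sample averages of them approximate the conditional expectations needed in the modified EEs.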
234.
This article focuses on the clustering problem based on Dirichlet process (DP) mixtures. To model both time-invariant and temporal patterns, and unlike other existing clustering methods, the proposed semiparametric model is flexible in that common and unique patterns are taken into account simultaneously. Furthermore, by jointly clustering subjects and the associated variables, the intrinsic complex patterns shared among subjects and among variables are expected to be captured. The number of clusters and the cluster assignments are inferred directly through the DP. Simulation studies illustrate the effectiveness of the proposed method. An application to wheal size data is discussed, with the aim of identifying novel temporal patterns among allergens within subject clusters.
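The role of the DP in letting the number of clusters be inferred rather than fixed can be illustrated with a draw from its partition prior, the Chinese restaurant process: each new subject joins an existing cluster with probability proportional to its size, or opens a new one with probability proportional to a concentration parameter. The value alpha = 2.0 below is an arbitrary choice for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)

def crp(n, alpha, rng):
    """One draw from the Chinese restaurant process over n subjects."""
    assignments = [0]          # first subject starts cluster 0
    counts = [1]               # current cluster sizes
    for _ in range(1, n):
        # Join cluster j w.p. counts[j]/(i+alpha); new cluster w.p. alpha/(i+alpha).
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        z = rng.choice(len(probs), p=probs)
        if z == len(counts):
            counts.append(1)
        else:
            counts[z] += 1
        assignments.append(z)
    return assignments, counts

assignments, counts = crp(500, alpha=2.0, rng=rng)
print(len(counts))             # number of clusters, random rather than prespecified
```

In a full DP mixture, this prior is combined with a likelihood and the assignments are resampled by MCMC; the sketch only shows the prior over partitions.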
235.
While excess zeros are often thought to cause over-dispersion (i.e. the variance exceeding the mean), this implication is not absolute. One should instead consider a flexible class of distributions that can address data dispersion along with excess zeros. This work develops a zero-inflated sum-of-Conway-Maxwell-Poissons (ZISCMP) regression as a flexible tool for modelling count data that exhibit significant dispersion and contain excess zeros. This class contains several zero-inflated regressions as special cases, including the zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), zero-inflated binomial (ZIB), and zero-inflated Conway-Maxwell-Poisson (ZICMP). Through simulated and real data examples, we demonstrate the flexibility and usefulness of the class, and we apply it to shark species data from Australia's Great Barrier Reef to assess the environmental impact of human activity on the numbers of various shark species.
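Since the ZIP model is the simplest special case of the ZISCMP family named above, a maximum-likelihood ZIP fit gives the flavour of the approach: a mixture of a point mass at zero (probability pi) and a Poisson(lambda) component. The simulated data and the logit/log parameterization are assumptions of this sketch, not the authors' model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(2)

# Simulate ZIP counts: structural zero with probability pi, else Poisson(lam).
pi_true, lam_true, n = 0.3, 4.0, 5000
is_zero = rng.random(n) < pi_true
y = np.where(is_zero, 0, rng.poisson(lam_true, size=n))

def zip_nll(theta, y):
    # Unconstrained parameterization: theta = (logit pi, log lam).
    pi = 1.0 / (1.0 + np.exp(-theta[0]))
    lam = np.exp(theta[1])
    logp_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),  # zero can come from either part
                  np.log(1 - pi) + logp_pois)
    return -ll.sum()

res = minimize(zip_nll, x0=np.array([0.0, 0.0]), args=(y,), method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
print(pi_hat, lam_hat)
```

The ZISCMP family generalizes the Poisson component here to sums of Conway-Maxwell-Poisson variables, which is what allows under- as well as over-dispersion.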
236.
Dimensionality reduction is an important preprocessing step in high-dimensional data analysis. In this paper we propose a supervised manifold learning method that uses the information in continuous dependent variables to distinguish the intrinsic and extrinsic neighbourhoods of data samples, and constructs two graphs according to these two kinds of neighbourhood. Following the idea of Laplacian eigenmaps, we show that the neighbourhood structure on the low-dimensional manifold can be preserved or even improved. Our approach has two important characteristics: (i) it uses the dependent variables to find an informative low-dimensional projection that is robust to noisy independent variables, and (ii) its objective function simultaneously enlarges the distance between dissimilar samples and pushes similar samples close to each other, according to the graphs constructed with the help of the continuous dependent variables. Our experiments demonstrate that the method is more effective than its traditional rivals.
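The two-graph idea can be sketched concretely: an "intrinsic" graph joins input-space neighbours whose responses are similar, an "extrinsic" graph joins neighbours whose responses differ, and a linear embedding direction comes from the resulting generalized eigenproblem (keep intrinsic neighbours close, push extrinsic ones apart). The kNN rule, the median split on response distances, and the small ridge term are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)

# Toy data: X is 5-D noise but the response depends only on the first coordinate.
n = 120
X = rng.normal(size=(n, 5))
y = X[:, 0] + 0.1 * rng.normal(size=n)

Dx = squareform(pdist(X))
Dy = np.abs(y[:, None] - y[None, :])
k = 10
W_int = np.zeros((n, n))
W_ext = np.zeros((n, n))
for i in range(n):
    nn = np.argsort(Dx[i])
    nn = nn[nn != i][:k]                               # k nearest neighbours in input space
    med = np.median(Dy[i, nn])
    close = nn[Dy[i, nn] <= med]                       # similar response -> intrinsic graph
    far = nn[Dy[i, nn] > med]                          # dissimilar response -> extrinsic graph
    W_int[i, close] = W_int[close, i] = 1.0
    W_ext[i, far] = W_ext[far, i] = 1.0

L_int = np.diag(W_int.sum(1)) - W_int                  # graph Laplacians
L_ext = np.diag(W_ext.sum(1)) - W_ext

# Direction a maximizing a'X'L_ext X a subject to a'X'L_int X a small.
A = X.T @ L_ext @ X
B = X.T @ L_int @ X + 1e-6 * np.eye(5)                 # ridge keeps B positive definite
vals, vecs = eigh(A, B)
a = vecs[:, -1]                                        # top generalized eigenvector
print(np.abs(a[0]) / np.linalg.norm(a))
```

Because only the first coordinate drives the response, the recovered direction concentrates on it, illustrating robustness to the noisy remaining coordinates.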
237.
This paper discusses regression analysis of clustered current status data under semiparametric additive hazards models. In particular, we consider the situation where cluster sizes can be informative about the correlated failure times from the same cluster. To address the problem, we present estimating equation-based estimation procedures and establish the asymptotic properties of the resulting estimates. The finite-sample performance of the proposed method is assessed through an extensive simulation study, which indicates that the procedure works well. The method is applied to a motivating data set from a lung tumorigenicity study.
238.
The paper examines to what extent a player's market value depends on his skills. To this end, a data set covering 28 performance measures and the market values of 493 players from the first and second German Bundesliga is analysed. Applying robust analysis techniques, we are able to robustly estimate the market values of soccer players. The results show (1) that there are significantly underrated and overrated players and (2) that a player's affiliation with a certain team may contribute to his market value. We conclude that a club's reputation affects the market values of its players and that star players tend to be overrated.
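A minimal example of the kind of robust estimation involved: fitting a value-on-performance regression with a Huber-type M-estimator, so that a handful of grossly overrated observations do not drag the fit the way they drag ordinary least squares. The toy covariates ("goals", "assists"), the true coefficients, and the contamination mechanism are invented for illustration; this is not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Hypothetical setup: value = 2*goals + 1*assists + noise, with high-scoring
# players systematically "overrated" (large positive contamination).
n = 300
Xp = rng.normal(size=(n, 2))                 # standardized goals, assists
beta_true = np.array([2.0, 1.0])
v = Xp @ beta_true + rng.normal(scale=0.5, size=n)
v[Xp[:, 0] > 1.2] += 20.0                    # overrated stars

def huber_loss(beta, X, y, delta=1.0):
    r = y - X @ beta
    small = np.minimum(np.abs(r), delta)     # quadratic part, capped at delta
    return np.sum(0.5 * small**2 + delta * (np.abs(r) - small))

beta_ols = np.linalg.lstsq(Xp, v, rcond=None)[0]
beta_rob = minimize(huber_loss, np.zeros(2), args=(Xp, v)).x
print(beta_ols, beta_rob)
```

Because the contamination sits at high-leverage points, OLS inflates the "goals" coefficient badly while the Huber fit stays near the truth, which is exactly how over- and underrated players become detectable as large robust residuals.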
239.
Q. F. Xu, C. Cai & X. Huang, Statistics, 2019, 53(1), 26-42
In recent decades, quantile regression has received increasing attention from academics and practitioners. However, most existing computational algorithms are only effective for small or moderately sized problems; they cannot solve quantile regression on large-scale data reliably and efficiently. To this end, we propose a new algorithm that implements quantile regression on large-scale data using the sparse exponential transform (SET) method. The algorithm constructs a well-conditioned basis and a sampling matrix to reduce the number of observations, then solves the quantile regression problem on the reduced matrix to obtain an approximate solution. Through simulation studies and an empirical analysis of a 5% sample of the US 2000 Census data, we demonstrate the efficiency of the SET-based algorithm. Numerical results indicate that the new algorithm is effective in terms of computation time and performs well for large-scale quantile regression.
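The reduce-then-solve idea can be sketched with plain uniform row sampling standing in for the SET-based sketch: sample rows, then solve the quantile regression on the sample as a linear program in the standard pinball-loss formulation (min tau*1'u + (1-tau)*1'v subject to Xb + u - v = y, u, v >= 0). The simulated design and the median case tau = 0.5 are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)

# A large data set that we reduce to m rows before solving.
N, p, tau = 20000, 3, 0.5
X = np.column_stack([np.ones(N), rng.normal(size=(N, p - 1))])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.standard_t(df=3, size=N)     # heavy-tailed noise

m = 500                                              # reduced number of observations
rows = rng.choice(N, size=m, replace=False)
Xs, ys = X[rows], y[rows]

# LP variables: [beta (free), u >= 0, v >= 0] with Xs beta + u - v = ys.
c = np.concatenate([np.zeros(p), tau * np.ones(m), (1 - tau) * np.ones(m)])
A_eq = np.hstack([Xs, np.eye(m), -np.eye(m)])
bounds = [(None, None)] * p + [(0, None)] * (2 * m)
res = linprog(c, A_eq=A_eq, b_eq=ys, bounds=bounds, method="highs")
beta_hat = res.x[:p]
print(beta_hat)
```

Solving the LP on 500 rows instead of 20,000 is the source of the speedup; a well-conditioned basis, as in the SET method, improves on uniform sampling by sampling rows proportionally to their importance.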
240.
Many research fields increasingly involve analyzing data with a complex structure. Models investigating the dependence of a response on a predictor have moved beyond ordinary scalar-on-vector regression. We propose a regression model for a scalar response and a surface (or bivariate function) predictor. The predictor has a random component, and the regression model falls within the framework of linear random effects models. We estimate the model parameters by maximizing the log-likelihood with the ECME (Expectation/Conditional Maximization Either) algorithm. We use the approach to analyze a data set in which the response is the neuroticism score and the predictor is a resting-state brain function image. In our simulations, the approach performed better than two alternatives: a functional principal component regression approach and a smooth scalar-on-image regression approach.
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号