Results by access type: paid full text 5,910; free 227; domestic free 113.
Results by subject: Management 385; Ethnology 13; Talent studies 1; Demography 109; Collected series 237; Theory and methodology 126; Interdisciplinary 1,517; Sociology 324; Statistics 3,538.
Results by publication year:
2024: 15; 2023: 99; 2022: 127; 2021: 145; 2020: 176; 2019: 264; 2018: 312; 2017: 379; 2016: 267; 2015: 223; 2014: 304; 2013: 1,031; 2012: 390; 2011: 248; 2010: 214; 2009: 213; 2008: 227; 2007: 232; 2006: 200; 2005: 207;
2004: 176; 2003: 139; 2002: 100; 2001: 110; 2000: 95; 1999: 64; 1998: 64; 1997: 44; 1996: 27; 1995: 31; 1994: 25; 1993: 16; 1992: 19; 1991: 11; 1990: 11; 1989: 6; 1988: 8; 1987: 7; 1986: 6; 1985: 6; 1984: 6; 1983: 5; 1980: 1.
A total of 6,250 results found (search time: 15 ms).
131.
Q. F. Xu, C. Cai, X. Huang. Statistics, 2019, 53(1): 26–42.
In recent decades, quantile regression has received increasing attention from academics and practitioners. However, most existing computational algorithms are effective only for problems of small or moderate size; they cannot solve quantile regression with large-scale data reliably and efficiently. To this end, we propose a new algorithm that implements quantile regression on large-scale data using the sparse exponential transform (SET) method. The algorithm constructs a well-conditioned basis and a sampling matrix to reduce the number of observations, then solves the quantile regression problem on the reduced matrix to obtain an approximate solution. Through simulation studies and an empirical analysis of a 5% sample of the US 2000 Census data, we demonstrate the efficiency of the SET-based algorithm. Numerical results indicate that the new algorithm is effective in terms of computation time and performs well for large-scale quantile regression.
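The SET construction itself is not detailed in the abstract. The sketch below illustrates only the general reduce-then-solve strategy: uniform row sampling stands in for the SET-based sampling matrix (an assumption on our part), and statsmodels solves the quantile regression on the reduced data.

```python
# Illustrative sketch of a reduce-then-solve approach to large-scale
# quantile regression. Uniform row sampling is a placeholder for the
# paper's SET-based sampling matrix, which is not reproduced here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulate a large-scale regression problem with heavy-tailed noise.
n, p = 200_000, 5
X = rng.standard_normal((n, p))
beta = np.arange(1, p + 1, dtype=float)
y = X @ beta + rng.standard_t(df=3, size=n)

# Reduce: sample m << n rows (stand-in for the SET sampling matrix).
m = 5_000
idx = rng.choice(n, size=m, replace=False)
X_s, y_s = X[idx], y[idx]

# Solve median regression (tau = 0.5) on the reduced problem.
model = sm.QuantReg(y_s, sm.add_constant(X_s))
fit = model.fit(q=0.5)
print(fit.params)  # approximate solution to the full-data problem
```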
132.
We propose novel parametric concentric multi-unimodal small-subsphere families of densities for (p − 1 ≥ 2)-dimensional spherical data. Their parameters describe a common axis for K small hypersubspheres, an array of K directional modes (one mode per subsphere), and K pairs of concentration parameters, each pair governing horizontal (within-subsphere) and vertical (orthogonal-to-subsphere) concentration. We introduce two kinds of distributions. In its one-subsphere version, the first kind coincides with a special case of the Fisher–Bingham distribution; the second kind is a novel adaptation that models independent horizontal and vertical variations. In its multisubsphere version, the second kind allows for correlation of horizontal variation across different subspheres. In medical imaging, the case p − 1 = 2 arises precisely in modeling the variation of a skeletally represented organ shape due to rotation, twisting, and bending. For both kinds, we provide new computationally feasible algorithms for simulation and estimation, and we propose several tests. To the best of the authors' knowledge, our proposed models are the first to treat the variation of directional data along several concentric small hypersubspheres, concentrated near modes on each subsphere, let alone horizontal dependence. Using several simulations, we show that our methods are more powerful than a recent nonparametric method and ad hoc methods. Using data from medical imaging, we demonstrate the advantage of our method and infer the dominating axis of rotation of the human knee joint at different walking phases.
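The parametric family itself is involved; purely as an illustrative toy, the sketch below simulates points on S² concentrated near small circles sharing a common axis, with separate horizontal (von Mises) and vertical (Gaussian) concentration parameters. The density used here is our own stand-in, not the paper's family.

```python
# Toy simulation of data concentrated near small circles (subspheres)
# of S^2 with directional modes, loosely in the spirit of concentric
# small-subsphere models; an illustrative density, not the paper's
# exact parametric family.
import numpy as np

rng = np.random.default_rng(1)

def sample_small_circle(n, polar, mode, kappa_h, kappa_v):
    """Sample n points on S^2 near the small circle at polar angle
    `polar` (common axis = north pole): von Mises longitude around
    `mode` (horizontal) and Gaussian jitter of the polar angle with
    precision kappa_v (vertical)."""
    lon = rng.vonmises(mu=mode, kappa=kappa_h, size=n)
    lat = polar + rng.normal(0.0, 1.0 / np.sqrt(kappa_v), n)
    return np.column_stack([
        np.sin(lat) * np.cos(lon),
        np.sin(lat) * np.sin(lon),
        np.cos(lat),
    ])

# Two concentric subspheres sharing the same axis, each with its own
# mode and pair of concentration parameters.
x1 = sample_small_circle(500, polar=np.pi / 4, mode=0.0, kappa_h=10, kappa_v=200)
x2 = sample_small_circle(500, polar=np.pi / 3, mode=1.0, kappa_h=5, kappa_v=100)
print(x1.mean(axis=0), x2.mean(axis=0))
```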
133.
This paper discusses regression analysis of clustered current status data under semiparametric additive hazards models. In particular, we consider the situation where cluster sizes can be informative about correlated failure times from the same cluster. To address the problem, we present estimating equation-based estimation procedures and establish the asymptotic properties of the resulting estimates. Finite-sample performance of the proposed method is assessed through an extensive simulation study, which indicates that the procedure works well. The method is applied to a motivating data set from a lung tumorigenicity study.
134.
Empirical Bayes is a versatile approach to "learn from a lot" in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss "formal" empirical Bayes methods, which maximize the marginal likelihood, as well as more informal approaches based on other data summaries. We contrast empirical Bayes with cross-validation and full Bayes, and we discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed "co-data". In particular, we present two novel examples that allow for co-data: first, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types and, second, a hybrid empirical Bayes–full Bayes ridge regression approach for estimation of the posterior predictive interval.
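As a minimal sketch of "formal" empirical Bayes (our simplification to the normal-means model, not the paper's ridge or spike-and-slab co-data setups), the example below estimates a prior variance by maximizing the marginal likelihood and plugs it into the posterior mean.

```python
# Minimal "formal" empirical Bayes in the normal-means model:
# y_i = theta_i + e_i with e_i ~ N(0, 1) and prior theta_i ~ N(0, tau^2).
# Marginally y_i ~ N(0, tau^2 + 1), so tau^2 is estimated by maximizing
# the marginal likelihood (closed form here) and plugged into the
# posterior mean. A deliberately simplified illustration.
import numpy as np

rng = np.random.default_rng(2)

p = 5_000                          # "learn from a lot": many variables
tau2_true = 2.0
theta = rng.normal(0.0, np.sqrt(tau2_true), p)
y = theta + rng.standard_normal(p)

# Marginal-likelihood maximizer for tau^2.
tau2_hat = max(np.mean(y**2) - 1.0, 0.0)

# Empirical Bayes posterior mean: shrink observations toward zero.
shrink = tau2_hat / (tau2_hat + 1.0)
theta_eb = shrink * y

print(f"tau2_hat = {tau2_hat:.3f}")
print("MSE raw:", np.mean((y - theta) ** 2))
print("MSE EB :", np.mean((theta_eb - theta) ** 2))
```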
135.
Many research fields increasingly involve analyzing data with a complex structure. Models investigating the dependence of a response on a predictor have moved beyond ordinary scalar-on-vector regression. We propose a regression model for a scalar response and a surface (bivariate function) predictor. The predictor has a random component, and the regression model falls within the framework of linear random effects models. We estimate the model parameters by maximizing the log-likelihood with the ECME (Expectation/Conditional Maximization Either) algorithm. We apply the approach to a data set in which the response is the neuroticism score and the predictor is a resting-state brain function image. In our simulations, the approach outperformed two alternatives: a functional principal component regression approach and a smooth scalar-on-image regression approach.
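The paper's ECME-based random-effects estimator is not reproduced here; instead, the sketch below implements the functional principal component regression comparator mentioned in the abstract, on simulated scalar-on-image data. All names and dimensions are illustrative.

```python
# Sketch of a functional principal component regression baseline for
# scalar-on-image data (one of the comparators in the abstract), not
# the paper's ECME-based random-effects estimator.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

n, h, w = 300, 20, 20                     # n subjects, h x w images
images = rng.standard_normal((n, h, w))
coef_img = np.zeros((h, w))
coef_img[5:10, 5:10] = 1.0                # a localized true effect
y = images.reshape(n, -1) @ coef_img.ravel() + rng.standard_normal(n)

# Flatten images and project onto leading principal components.
X = images.reshape(n, -1)
pca = PCA(n_components=10).fit(X)
scores = pca.transform(X)

# Regress the scalar response on the component scores.
reg = LinearRegression().fit(scores, y)
print("R^2 on training data:", reg.score(scores, y))

# Map fitted coefficients back to image space for interpretation.
beta_img = (pca.components_.T @ reg.coef_).reshape(h, w)
print("mean recovered effect in signal block:", beta_img[5:10, 5:10].mean())
```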
136.
In this paper, we consider parametric Bayesian inference for stochastic differential equations driven by a pure-jump stable Lévy process observed at high frequency. In most cases of practical interest, the likelihood function is not available; hence, we use a quasi-likelihood and place an associated prior on the unknown parameters. It is shown, under regularity conditions, that a Bernstein–von Mises theorem holds for the posterior. We then develop a Markov chain Monte Carlo algorithm for Bayesian inference and, assisted by the theoretical results, show how to scale Metropolis–Hastings proposals as the frequency of the data grows, in order to prevent the acceptance ratio from going to zero in the large-data limit. We illustrate the algorithm on numerical examples that help verify our theoretical findings.
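The effect of proposal scaling can be seen on a toy target: since the posterior contracts at rate 1/√n, a fixed proposal scale drives the acceptance rate to zero, while a 1/√n-scaled proposal keeps it stable. The Gaussian "posterior" below is our stand-in for the paper's quasi-likelihood posterior.

```python
# Random-walk Metropolis-Hastings with and without proposal scaling.
# A Gaussian posterior contracting like N(0, 1/n) stands in for the
# quasi-likelihood posterior of the paper (an assumption).
import numpy as np

rng = np.random.default_rng(4)

def rw_metropolis(log_post, x0, step, n_iter=5_000):
    """Run a 1-d random-walk MH chain; return the acceptance rate."""
    x, lp = x0, log_post(x0)
    accepts = 0
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepts += 1
    return accepts / n_iter

for n in (100, 10_000, 1_000_000):
    log_post = lambda th, n=n: -0.5 * n * th**2   # contracts at 1/sqrt(n)
    fixed = rw_metropolis(log_post, 0.0, step=0.5)
    scaled = rw_metropolis(log_post, 0.0, step=2.4 / np.sqrt(n))
    print(f"n={n:>9,}: acceptance fixed={fixed:.3f}, scaled={scaled:.3f}")
```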
137.
The combined model accounts for different forms of extra-variability and has traditionally been fitted in the likelihood framework, or in the Bayesian setting via Markov chain Monte Carlo. In this article, integrated nested Laplace approximation is investigated as an alternative estimation method for the combined model for count data and compared with the former estimation techniques. Longitudinal, spatial, and multi-hierarchical data scenarios are investigated in three case studies as well as a simulation study. In conclusion, integrated nested Laplace approximation provides fast and precise estimation while avoiding the convergence problems often seen with Markov chain Monte Carlo.
138.
In this article, we propose a factor-adjusted multiple testing (FAT) procedure based on factor-adjusted p-values in a linear factor model involving both observable and unobservable factors, for the purpose of selecting skilled funds in empirical finance. The factor-adjusted p-values are obtained after extracting the latent common factors by the principal component method. Under some mild conditions, the false discovery proportion can be consistently estimated even if the idiosyncratic errors are allowed to be weakly correlated across units. Furthermore, by appropriately setting a sequence of threshold values approaching zero, the proposed FAT procedure enjoys model selection consistency. Extensive simulation studies and a real data analysis for selecting skilled funds in the U.S. financial market are presented to illustrate the practical utility of the proposed method. Supplementary materials for this article are available online.
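The broad recipe — extract latent factors by principal components, adjust each unit's test for them, and control the false discovery rate — can be sketched as below. Benjamini–Hochberg stands in for the paper's exact FAT thresholding rule, and all data and parameter choices are illustrative.

```python
# Hedged sketch of factor-adjusted multiple testing for fund selection:
# extract latent common factors by principal components, regress each
# fund on them, t-test the intercepts ("skill"), and apply
# Benjamini-Hochberg. This follows the broad recipe in the abstract,
# not the paper's exact FAT thresholding rule.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(5)

T, N, k = 240, 500, 3                     # months, funds, latent factors
F = rng.standard_normal((T, k))           # unobserved common factors
load = rng.standard_normal((N, k))
alpha = np.zeros(N)
alpha[:25] = 0.3                          # 25 genuinely skilled funds
R = alpha + F @ load.T + rng.standard_normal((T, N))

# Extract latent factors by principal components of the return panel.
Rc = R - R.mean(axis=0)
_, _, Vt = np.linalg.svd(Rc, full_matrices=False)
F_hat = Rc @ Vt[:k].T                     # estimated factor series

# Regress each fund on the estimated factors; t-test each intercept.
X = np.column_stack([np.ones(T), F_hat])
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
resid = R - X @ coef
dof = T - X.shape[1]
s2 = (resid**2).sum(axis=0) / dof
xtx_inv00 = np.linalg.inv(X.T @ X)[0, 0]
t_alpha = coef[0] / np.sqrt(s2 * xtx_inv00)
pvals = 2 * stats.t.sf(np.abs(t_alpha), df=dof)

# Benjamini-Hochberg at FDR level 0.1.
reject, *_ = multipletests(pvals, alpha=0.1, method="fdr_bh")
print("funds flagged as skilled:", reject.sum())
```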
139.
In this paper, we investigate k-nearest neighbours (kNN) estimation of a nonparametric regression model for strong-mixing functional time series data. More precisely, we establish the uniform almost-complete convergence rate of the kNN estimator under some mild conditions. Furthermore, a simulation study and an empirical application to real sea surface temperature (SST) data are carried out to illustrate the finite-sample performance and usefulness of the kNN approach.
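The estimator itself is simple to sketch: treat each discretized curve as a predictor, measure proximity with an approximate L2 distance, and average the responses of the k closest curves. The simulated data below are our own illustration; none of the paper's mixing-condition theory is reproduced.

```python
# Sketch of a k-nearest-neighbours regression estimator for functional
# predictors: proximity between discretized curves is an approximate
# L2 distance, and the prediction averages the responses of the k
# closest training curves. A generic illustration of the estimator.
import numpy as np

rng = np.random.default_rng(6)

def knn_functional_predict(X_train, y_train, x_new, k):
    """X_train: (n, m) discretized curves; x_new: (m,) new curve."""
    d = np.sqrt(((X_train - x_new) ** 2).mean(axis=1))  # approx L2 norm
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Simulated functional regression: y = mean of curve^2 + noise.
n, m = 400, 50
grid = np.linspace(0, 1, m)
coef = rng.standard_normal((n, 2))
X = np.sin(2 * np.pi * np.outer(coef[:, 0], grid)) + coef[:, 1][:, None]
y = (X**2).mean(axis=1) + 0.1 * rng.standard_normal(n)

x_new = np.sin(2 * np.pi * 0.5 * grid) + 0.2
print("prediction:", knn_functional_predict(X, y, x_new, k=15))
print("truth     :", (x_new**2).mean())
```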
140.
In confirmatory clinical trials, prespecification of the primary analysis model is a universally accepted scientific principle that allows strict control of the type I error. Consequently, both the ICH E9 guideline and the European Medicines Agency (EMA) guideline on missing data in confirmatory clinical trials require that the primary analysis model be defined unambiguously. This requirement also applies to mixed models for longitudinal data, which handle missing data implicitly. To evaluate compliance with the EMA guideline, we examined the model specifications in phase II and III clinical study protocols submitted between 2015 and 2018 to the Ethics Committee at Hannover Medical School under the German Medicinal Products Act that planned to use a mixed model for longitudinal data in the confirmatory testing strategy. Overall, 39 trials from different types of sponsors and a wide range of therapeutic areas were evaluated. While nearly all protocols specify the fixed and random effects of the analysis model (95%), only 77% give the structure of the covariance matrix used for modeling the repeated measurements. Moreover, the testing method (36%), the estimation method (28%), the computation method (3%), and the fallback strategy (18%) are each given by fewer than half of the study protocols. Subgroup analyses indicate that these findings are universal and not specific to clinical trial phase or company size. Altogether, our results show that guideline compliance is poor to varying degrees; consequently, strict type I error rate control at the intended level is not guaranteed.