501.
Motivated by a recent tuberculosis (TB) study, this paper is concerned with covariates missing not at random (MNAR) and models the potential intracluster correlation by a frailty. We consider the regression analysis of right-censored event times from clustered subjects under a Cox proportional hazards frailty model and present the semiparametric maximum likelihood estimator (SPMLE) of the model parameters. An easy-to-implement pseudo-SPMLE is then proposed to accommodate more realistic situations, using readily available supplementary information on the missing covariates. Algorithms are provided to compute the estimators and their consistent variance estimators. We show that both the SPMLE and the pseudo-SPMLE are consistent and asymptotically normal, using arguments based on the theory of modern empirical processes. The proposed approach is examined numerically via simulation and illustrated with an analysis of data from the motivating TB study.
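A minimal sketch of the data structure this model targets, assuming a constant baseline hazard, a shared gamma frailty per cluster, and independent right censoring; it simulates clustered right-censored event times only and does not implement the authors' SPMLE or pseudo-SPMLE (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical design: 100 clusters of 5 subjects, one binary covariate.
n_clusters, cluster_size, beta, frailty_var = 100, 5, 0.7, 0.5

# Cluster-level gamma frailty with mean 1 and variance frailty_var,
# shared by all subjects in a cluster (the intracluster correlation).
frailty = np.repeat(
    rng.gamma(1 / frailty_var, frailty_var, size=n_clusters), cluster_size)

x = rng.binomial(1, 0.5, size=n_clusters * cluster_size)

# Conditional hazard lambda(t | w, x) = lambda0 * w * exp(beta * x) with a
# constant baseline lambda0 = 0.1, so event times are exponential given w, x.
rate = 0.1 * frailty * np.exp(beta * x)
event_time = rng.exponential(1 / rate)

# Independent right censoring.
censor_time = rng.exponential(10.0, size=event_time.size)
time = np.minimum(event_time, censor_time)
status = (event_time <= censor_time).astype(int)
print(f"observed event rate: {status.mean():.2%}")
```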
502.
The combined model accounts for different forms of extra-variability and has traditionally been fitted in the likelihood framework or, in the Bayesian setting, via Markov chain Monte Carlo. In this article, integrated nested Laplace approximation is investigated as an alternative estimation method for the combined model for count data and compared with the former estimation techniques. Longitudinal, spatial, and multi-hierarchical data scenarios are investigated in three case studies as well as a simulation study. In conclusion, integrated nested Laplace approximation provides fast and precise estimation while avoiding the convergence problems often seen with Markov chain Monte Carlo.
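For intuition, a small simulation of the hierarchy the combined model posits for counts: a normal random intercept for within-subject correlation plus a multiplicative gamma term for overdispersion. Estimation by integrated nested Laplace approximation is done in the R-INLA software, which this Python sketch (with illustrative parameter values) does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical longitudinal layout: 50 subjects observed at 6 visits each.
n_subj, n_visit = 50, 6
subj = np.repeat(np.arange(n_subj), n_visit)
t = np.tile(np.arange(n_visit), n_subj)

beta0, beta1, sigma_b, a = 1.0, 0.1, 0.6, 2.0

# Normal random intercept b_i: within-subject correlation.
b = rng.normal(0.0, sigma_b, size=n_subj)[subj]

# Gamma multiplicative term theta_ij with mean 1: extra overdispersion.
theta = rng.gamma(a, 1 / a, size=subj.size)

# Combined model: Y_ij | b_i, theta_ij ~ Poisson(theta_ij * exp(x_ij'beta + b_i)).
mu = np.exp(beta0 + beta1 * t + b)
y = rng.poisson(theta * mu)
print("mean:", y.mean(), "variance:", y.var())  # variance >> mean
```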
503.
In this paper, we discuss the usual stochastic and reversed hazard rate orders between series and parallel systems built from two sets of independent heterogeneous exponentiated Weibull components. We also obtain results on the convex transform order between parallel systems and give necessary and sufficient conditions under which the dispersive and usual stochastic orders, and the right-spread and increasing convex orders, between the lifetimes of the two systems are equivalent. Finally, in multiple-outlier exponentiated Weibull models, we establish characterization results for comparing the lifetimes of parallel and series systems based on weak majorization and p-larger orders between the vectors of scale and shape parameters, respectively. The results of this paper can be used in practice to derive bounds on important aging characteristics of these systems.
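A quick numerical companion: exponentiated Weibull lifetimes with distribution function F(x) = (1 - e^{-(λx)^k})^α can be drawn by inversion, and the empirical survival of the series (componentwise minimum) and parallel (componentwise maximum) systems compared at a few points. The two component sets below are hypothetical, and a pointwise comparison only hints at a usual stochastic order; it proves nothing:

```python
import numpy as np

rng = np.random.default_rng(1)

def rexpw(n, alpha, k, lam, rng):
    """Exponentiated Weibull draws by inverting
    F(x) = (1 - exp(-(lam * x)**k))**alpha."""
    u = rng.uniform(size=n)
    return (-np.log(1 - u ** (1 / alpha))) ** (1 / k) / lam

# Two hypothetical heterogeneous component sets sharing the shape parameters;
# the first scale vector majorizes the second.
alpha, k = 2.0, 1.5
lams_a, lams_b = [0.5, 1.0, 2.0], [0.9, 1.0, 1.6]

n = 200_000
comp_a = np.column_stack([rexpw(n, alpha, k, l, rng) for l in lams_a])
comp_b = np.column_stack([rexpw(n, alpha, k, l, rng) for l in lams_b])

# A series system fails at the first component failure, a parallel at the last.
series_a, series_b = comp_a.min(axis=1), comp_b.min(axis=1)
par_a, par_b = comp_a.max(axis=1), comp_b.max(axis=1)

for t in (0.5, 1.0, 2.0):  # empirical P(T > t) for the two parallel systems
    print(t, (par_a > t).mean(), (par_b > t).mean())
```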
504.
In this article, we propose a factor-adjusted multiple testing (FAT) procedure based on factor-adjusted p-values in a linear factor model involving both observable and unobservable factors, for the purpose of selecting skilled funds in empirical finance. The factor-adjusted p-values are obtained after extracting the latent common factors by the principal component method. Under mild conditions, the false discovery proportion can be consistently estimated even if the idiosyncratic errors are weakly correlated across units. Furthermore, by letting a sequence of threshold values approach zero appropriately, the proposed FAT procedure enjoys model selection consistency. Extensive simulation studies and a real data analysis for selecting skilled funds in the U.S. financial market illustrate the practical utility of the proposed method. Supplementary materials for this article are available online.
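A simplified sketch of the factor-adjustment idea, assuming a single latent factor extracted by principal components and a Benjamini-Hochberg cutoff standing in for the paper's threshold sequence and FDP estimator; the panel sizes, loadings, and "skilled" alphas are all illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical panel: T periods on N funds, one latent common factor,
# and a few genuinely "skilled" funds with a positive alpha.
N, T, n_skilled = 200, 500, 10
f = rng.normal(size=T)                      # latent common factor
lam = rng.normal(1.0, 0.3, size=N)          # factor loadings
alpha = np.zeros(N)
alpha[:n_skilled] = 0.3
x = alpha + lam[None, :] * f[:, None] + rng.normal(size=(T, N))

# Extract the latent factor by principal components (scaled left singular vector).
xc = x - x.mean(axis=0)
u, s, _ = np.linalg.svd(xc, full_matrices=False)
f_hat = u[:, 0] * s[0]

# Factor-adjusted p-values: t-test each fund's intercept after
# regressing its returns on the estimated common factor.
design = np.column_stack([np.ones(T), f_hat])
xtx_inv = np.linalg.inv(design.T @ design)
pvals = np.empty(N)
for j in range(N):
    coef, *_ = np.linalg.lstsq(design, x[:, j], rcond=None)
    resid = x[:, j] - design @ coef
    se = np.sqrt(resid @ resid / (T - 2) * xtx_inv[0, 0])
    pvals[j] = 2 * stats.t.sf(abs(coef[0] / se), df=T - 2)

# Benjamini-Hochberg at level 0.05 as a stand-in for the FAT thresholding.
order = np.argsort(pvals)
passed = np.nonzero(pvals[order] <= 0.05 * np.arange(1, N + 1) / N)[0]
print("discoveries:", np.sort(order[:passed[-1] + 1]) if passed.size else "none")
```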
505.
Predicting the time of default in a credit risk setting via survival analysis must take a high censoring rate into account: default never occurs for the majority of debtors. Mixture cure models allow the part of the loan population that is unsusceptible to default to be modeled separately from the time of default for the susceptible population. In this article, we extend the mixture cure model to include time-varying covariates. We illustrate the method via simulations and by incorporating macro-economic factors as predictors for an actual bank dataset.
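A minimal sketch of the mixture cure population survival function with time-fixed covariates (the paper's time-varying extension is not reproduced here); the logistic incidence part, the Weibull proportional-hazards latency part, and all parameter names are illustrative assumptions:

```python
import numpy as np

def mixture_cure_survival(t, x_inc, gamma, x_lat, beta, lam0=0.05, k=1.2):
    """Population survival of a mixture cure model,
    S(t | x) = pi(x) + (1 - pi(x)) * S_u(t | x),
    where pi(x) is a logistic probability of being 'cured' (never defaulting)
    and S_u is a Weibull proportional-hazards survival for susceptible debtors.
    Parameterization is illustrative, not the authors' notation."""
    pi = 1.0 / (1.0 + np.exp(-(x_inc @ gamma)))   # P(unsusceptible | x)
    h_ratio = np.exp(x_lat @ beta)                # PH effect on the latency part
    s_u = np.exp(-lam0 * t ** k * h_ratio)        # Weibull survival, susceptibles
    return pi + (1.0 - pi) * s_u

# Example: survival at month 24 for one debtor with hypothetical covariates.
print(mixture_cure_survival(t=24.0,
                            x_inc=np.array([1.0]), gamma=np.array([0.5]),
                            x_lat=np.array([0.3]), beta=np.array([0.8])))
```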
506.
This paper derives the Akaike information criterion (AIC), the corrected AIC, the Bayesian information criterion (BIC), and Hannan and Quinn's information criterion for approximate factor models assuming a large number of cross-sectional observations, and studies the consistency properties of these criteria. It also reports extensive simulation results comparing the performance of the existing and new procedures for selecting the number of factors. The simulations show the difficulty of determining which criterion performs best; in practice, it is advisable to consider several criteria at once, especially Hannan and Quinn's information criterion, Bai and Ng's ICp2 and BIC3, and the eigenvalue-based criteria of Onatski and of Ahn and Horenstein. The model-selection criteria considered in this paper are also applied to Stock and Watson's two macroeconomic data sets. The results differ considerably across criteria, but evidence pointing to five factors for the first data set and five to seven factors for the second can be obtained.
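As an illustration of criteria in this family, the following sketch computes Bai and Ng's ICp2, ln V(k) + k ((N+T)/(NT)) ln min(N, T), over candidate factor numbers k, where V(k) is the mean squared residual after extracting k principal components; the simulated panel with three true factors is illustrative, and the criteria derived in the paper itself are not reproduced:

```python
import numpy as np

def ic_p2(x, kmax=8):
    """Bai and Ng's ICp2 criterion for the number of factors in an
    approximate factor model; x is a demeaned T-by-N panel."""
    t_obs, n = x.shape
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    penalty = (n + t_obs) / (n * t_obs) * np.log(min(n, t_obs))
    ics = []
    for k in range(1, kmax + 1):
        fit = (u[:, :k] * s[:k]) @ vt[:k]    # rank-k principal-component fit
        v_k = np.mean((x - fit) ** 2)        # V(k): mean squared residual
        ics.append(np.log(v_k) + k * penalty)
    return np.argmin(ics) + 1, ics

# Hypothetical panel with 3 true factors plus idiosyncratic noise.
rng = np.random.default_rng(3)
t_obs, n, r = 200, 100, 3
x = rng.normal(size=(t_obs, r)) @ rng.normal(size=(r, n)) + rng.normal(size=(t_obs, n))
k_hat, _ = ic_p2(x - x.mean(axis=0))
print("selected number of factors:", k_hat)
```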
507.
In confirmatory clinical trials, prespecification of the primary analysis model is a universally accepted scientific principle, allowing strict control of the type I error. Consequently, both the ICH E9 guideline and the European Medicines Agency (EMA) guideline on missing data in confirmatory clinical trials require that the primary analysis model be defined unambiguously. This requirement applies to mixed models for longitudinal data, which handle missing data implicitly. To evaluate compliance with the EMA guideline, we examined the model specifications in those clinical study protocols from development phases II and III, submitted between 2015 and 2018 to the Ethics Committee at Hannover Medical School under the German Medicinal Products Act, that planned to use a mixed model for longitudinal data in the confirmatory testing strategy. Overall, 39 trials from different types of sponsors and a wide range of therapeutic areas were evaluated. While nearly all protocols specify the fixed and random effects of the analysis model (95%), only 77% give the structure of the covariance matrix used for modeling the repeated measurements. Moreover, the testing method (36%), the estimation method (28%), the computation method (3%), and the fallback strategy (18%) are each given by fewer than half of the study protocols. Subgroup analyses indicate that these findings are universal and not specific to clinical trial phase or company size. Altogether, our results show that guideline compliance is poor to varying degrees, and consequently strict type I error rate control at the intended level is not guaranteed.
508.
Portfolio evaluation is the evaluation of multiple projects with a common purpose. While logic models have been used in many ways to support evaluation, and data visualization has been widely used to present and communicate evaluation findings, the combined use of logic models and data visualization for portfolio evaluation is surprisingly limited in the literature. Using data from a sample portfolio of 209 projects aimed at improving the system of early care and education (ECE), this study illustrates how to use logic-model and data-visualization techniques to conduct a portfolio evaluation by answering two evaluation questions: "To what extent are the elements of a logic model (strategies, sub-strategies, activities, outcomes, and impacts) reflected in the sample portfolio?" and "Which dominant paths through the logic model are illuminated by the data visualization technique?" For the first question, the visualization technique illuminated several dominant strategies, sub-strategies, activities, and outcomes. For the second, it made critical paths through the logic model easy to identify. Implications for both program evaluation and program planning are discussed.
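A toy sketch of the path-counting step that would sit behind such a visualization, assuming a coded project sheet with one logic-model element per column; all column names and values are hypothetical, not the study's coding scheme:

```python
import pandas as pd

# Hypothetical coding sheet: one row per project, coded to logic-model elements.
projects = pd.DataFrame({
    "strategy": ["workforce", "workforce", "policy", "workforce", "policy"],
    "activity": ["training", "training", "advocacy", "coaching", "advocacy"],
    "outcome":  ["quality",  "quality",  "access",   "quality",  "access"],
})

# Counting projects per strategy -> activity -> outcome path surfaces the
# dominant paths that a Sankey-style visualization would then display.
paths = (projects.groupby(["strategy", "activity", "outcome"])
                 .size().sort_values(ascending=False))
print(paths)
```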
509.
This paper proposes the Bernstein-Dirichlet process prior for a new nonparametric approach to estimating the link function in the single-index model (SIM). The Bernstein-Dirichlet process prior has so far mainly been used for nonparametric density estimation; here we modify the approach to approximate the unknown link function. Instead of the usual Gaussian distribution, the error term is assumed to follow an asymmetric Laplace distribution, which increases the flexibility and robustness of the SIM. To automatically identify truly active predictors, spike-and-slab priors are used for Bayesian variable selection. Posterior computation is performed via a Metropolis-Hastings-within-Gibbs sampler using a truncation-based algorithm for stick-breaking priors. We compare the efficiency of the proposed approach with well-established techniques in an extensive simulation study and illustrate its practical performance with an application to nonparametric modelling of the power consumption of a sewage treatment plant.
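To illustrate the building block, the sketch below evaluates a Bernstein polynomial approximation g_N(u) = Σ_k w_k C(N,k) u^k (1-u)^{N-k} to a smooth link function on [0, 1]. In the paper's Bayesian treatment the weights would carry a Dirichlet-type prior; here they are fixed at the classical values w_k = g(k/N) purely for demonstration, and the target link is hypothetical:

```python
import numpy as np
from scipy.stats import binom

def bernstein_poly(x, weights):
    """Evaluate sum_k w_k * C(N, k) * x**k * (1 - x)**(N - k) on [0, 1];
    the binomial pmf supplies exactly the Bernstein basis functions."""
    n = len(weights) - 1
    basis = binom.pmf(np.arange(n + 1)[:, None], n, x[None, :])
    return weights @ basis

# Approximate a hypothetical smooth link g(u) = sin(pi * u) with degree N = 20;
# the classical weights g(k / N) converge uniformly to g as N grows.
n_deg = 20
g = lambda u: np.sin(np.pi * u)
w = g(np.arange(n_deg + 1) / n_deg)
x = np.linspace(0.0, 1.0, 5)
print(np.max(np.abs(bernstein_poly(x, w) - g(x))))  # worst-case error on the grid
```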
510.
In this paper, two new methods for detecting multiple influential observations in logistic regression, GCD.GSPR and mCD*, are introduced. The proposed diagnostic measures are compared with the generalized difference in fits (GDFFITS) and the generalized squared difference in beta (GSDFBETA), two existing multiple-influence diagnostics. A simulation study is conducted with logistic regression models containing one, two, and five independent variables. The performance of the diagnostic measures is examined when a single independent variable is contaminated in each model, and when all independent variables are contaminated at given contamination rates and intensities. In addition, the measures are compared in terms of correct identification rate and swamping rate on a data set frequently referenced in the literature.
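For orientation, a sketch of the single-case building blocks behind such group diagnostics, using statsmodels' influence measures for a fitted logistic regression; the planted high-leverage points and the model are illustrative, and the paper's GCD.GSPR and mCD* measures themselves are not implemented here:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Hypothetical data with a handful of contaminated x-values.
n = 100
x = rng.normal(size=n)
x[:3] += 6.0                                # planted high-leverage observations
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))
y = rng.binomial(1, p)

res = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()

# Single-case analogues of the group deletion diagnostics discussed above:
# approximate Cook's distances for each observation in the fitted model.
infl = res.get_influence()
cooks = infl.cooks_distance[0]
print("largest Cook's distances at observations:", np.argsort(cooks)[-3:])
```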