1.
This article highlights three dimensions of children's well-being during and after parental imprisonment that have not been fully explored in current research. A consideration of ‘time’ reveals the importance of children's past experiences and their anticipated futures. A focus on ‘space’ highlights the impact of new or altered environmental dynamics. A study of ‘agency’ illuminates how children cope within structural, material and social confines that intensify vulnerability and dependency. This integrated perspective reveals important differences in individual children's experiences, as well as commonalities in the broader systemic and social constraints on prisoners' children. The paper illustrates these arguments with data from a prospective longitudinal study of 35 prisoners' children during and after their (step)father's imprisonment.
2.
The financial stress index (FSI) is considered an important risk-management tool for quantifying financial vulnerabilities. This paper proposes a new framework based on a hybrid classifier model that integrates rough set theory (RST), the FSI, support vector regression (SVR) and a control chart to identify stressed periods. First, the RST method is applied to select variables; the outputs are then used as input data for the FSI–SVR computation. The empirical analysis is based on the monthly FSI of the Federal Reserve Bank of St. Louis from January 1992 to June 2011, and a comparison is performed between an FSI based on principal component analysis and the FSI–SVR. A control chart based on the FSI–SVR and extreme value theory is proposed to identify the extremely stressed periods. Our approach identified distinct stressed periods, including the Internet bubble, the subprime crisis and actual financial stress episodes, along with the calmest periods, agreeing with those given in Federal Reserve System reports.
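As a hedged illustration of the pipeline this abstract describes, the following Python sketch reconstructs a stress index with SVR and then sets an extreme-value-theory control limit via a peaks-over-threshold fit. The data, indicator weights and tuning constants are invented for the example, and the RST variable-selection step is assumed to have already produced the five inputs; this is not the authors' code.

```python
# Sketch of the FSI-SVR idea: fit an SVR to reproduce a stress index from
# selected indicators, then flag extreme periods with an EVT control limit.
import numpy as np
from sklearn.svm import SVR
from scipy.stats import genpareto

rng = np.random.default_rng(0)
X = rng.normal(size=(234, 5))  # 234 months, 5 hypothetical RST-selected indicators
fsi = X @ np.array([0.8, 0.5, 0.3, 0.2, 0.1]) + 0.1 * rng.normal(size=234)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, fsi)
fsi_svr = model.predict(X)  # reconstructed stress index

# EVT control limit: fit a generalized Pareto distribution to exceedances
# over a high empirical quantile (peaks-over-threshold method).
u = np.quantile(fsi_svr, 0.90)
exc = fsi_svr[fsi_svr > u] - u
shape, _, scale = genpareto.fit(exc, floc=0.0)
ucl = u + genpareto.ppf(0.99, shape, loc=0.0, scale=scale)  # 99% control limit

print("extremely stressed periods (indices):", np.where(fsi_svr > ucl)[0])
```

Periods whose reconstructed index exceeds the control limit are flagged as extremely stressed, mirroring the role of the control chart in the paper.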
3.
This study examined the prevalence of workplace flexibility among low-wage workers in South Korea and the mechanisms through which it influences turnover intentions via work–family and family–work conflicts and job satisfaction. Participants were 250 low-wage workers whose monthly salary was less than 2 million Korean won (approx. $1,900). The results indicate that low-wage workers have limited access to workplace flexibility and that flexibility plays a significant protective role in reducing turnover intention, indirectly, by decreasing work–family conflicts and enhancing job satisfaction. The article also discusses the implications of these findings for labor policy and social work practice.
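The indirect pathway described here (flexibility lowers conflict and raises satisfaction, which in turn lower turnover intention) follows the classic product-of-coefficients mediation logic. The sketch below illustrates that logic on simulated data with statsmodels; the coefficients and sample are fabricated and are not the study's estimates.

```python
# Schematic product-of-coefficients mediation on simulated data:
# flexibility -> conflict / satisfaction -> turnover intention.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 250
flex = rng.binomial(1, 0.3, n).astype(float)             # access to flexibility
conflict = 0.5 - 0.4 * flex + rng.normal(0, 1, n)        # mediator 1
satisfaction = 0.2 + 0.5 * flex + rng.normal(0, 1, n)    # mediator 2
turnover = 1.0 + 0.6 * conflict - 0.5 * satisfaction + rng.normal(0, 1, n)

a1 = sm.OLS(conflict, sm.add_constant(flex)).fit().params[1]
a2 = sm.OLS(satisfaction, sm.add_constant(flex)).fit().params[1]
M = sm.add_constant(np.column_stack([flex, conflict, satisfaction]))
fit = sm.OLS(turnover, M).fit()
b1, b2 = fit.params[2], fit.params[3]

print("indirect effect via conflict:    ", a1 * b1)  # expected negative
print("indirect effect via satisfaction:", a2 * b2)  # expected negative
```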
4.
5.
In this paper, we consider the deterministic trend model in which the error process is allowed to be weakly or strongly correlated and subject to non-stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt-type estimator (with some initial condition) can be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non-parametrically weighted Cochrane–Orcutt-type estimator is then proposed; its efficiency is uniform over weak or strong serial correlation and non-stationary volatility of unknown form. The feasible estimator relies on non-parametric estimation of the volatility function, and the asymptotic theory is provided. We use a data-dependent smoothing bandwidth that automatically adjusts for the strength of non-stationarity in the volatilities. The implementation requires neither pretesting the persistence of the process nor specifying the form of the non-stationary volatility. Finite-sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
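A textbook Cochrane–Orcutt construction helps make the comparison concrete. The sketch below simulates a linear trend with AR(1) errors and time-varying error volatility, then contrasts the OLS slope with a quasi-differenced (Cochrane–Orcutt-type) slope. It does not implement the paper's non-parametrically weighted estimator, and all parameter values are illustrative.

```python
# OLS vs. a Cochrane-Orcutt-type estimator of the slope b in
# y_t = a + b*t + u_t, with u_t = rho*u_{t-1} + e_t and varying volatility.
import numpy as np

rng = np.random.default_rng(2)
T, a, b, rho = 500, 1.0, 0.05, 0.7
e = rng.normal(size=T) * (1 + 0.5 * np.sin(np.linspace(0, np.pi, T)))
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]
t_idx = np.arange(1, T + 1, dtype=float)
y = a + b * t_idx + u

# OLS estimate of the trend slope.
X = np.column_stack([np.ones(T), t_idx])
coef_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_ols = coef_ols[1]

# Cochrane-Orcutt: estimate rho from OLS residuals, then quasi-difference.
res = y - X @ coef_ols
rho_hat = res[1:] @ res[:-1] / (res[:-1] @ res[:-1])
y_star = y[1:] - rho_hat * y[:-1]
X_star = np.column_stack([(1 - rho_hat) * np.ones(T - 1),
                          t_idx[1:] - rho_hat * t_idx[:-1]])
b_co = np.linalg.lstsq(X_star, y_star, rcond=None)[0][1]
print(f"true b={b}, OLS={b_ols:.4f}, Cochrane-Orcutt={b_co:.4f}")
```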
6.
Empirical Bayes is a versatile approach to “learn from a lot” in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss “formal” empirical Bayes methods that maximize the marginal likelihood but also more informal approaches based on other data summaries. We contrast empirical Bayes to cross-validation and full Bayes and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed “co-data”. In particular, we present two novel examples that allow for co-data: first, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types and, second, a hybrid empirical Bayes–full Bayes ridge regression approach for estimation of the posterior predictive interval.
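The “formal” empirical Bayes recipe mentioned above, maximizing the marginal likelihood, can be written down compactly in the linear-model setting. The sketch below selects a ridge prior variance by marginal likelihood in a p > n regression; treating the noise variance as known is a simplification of the sketch, not of the review.

```python
# Empirical Bayes ridge: pick the prior variance tau^2 by maximizing the
# marginal likelihood y ~ N(0, tau^2 * X X' + sigma^2 * I), sigma^2 known.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n, p, sigma2 = 100, 200, 1.0                 # p > n: "learn from a lot"
X = rng.normal(size=(n, p)) / np.sqrt(p)
beta = rng.normal(scale=0.5, size=p)
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

G = X @ X.T  # n x n Gram matrix

def neg_marginal_loglik(log_tau2):
    V = np.exp(log_tau2) * G + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(V)
    return 0.5 * (logdet + y @ np.linalg.solve(V, y))

opt = minimize_scalar(neg_marginal_loglik, bounds=(-10, 10), method="bounded")
tau2_eb = np.exp(opt.x)

# Posterior mean of beta under the EB-selected prior variance.
beta_hat = tau2_eb * X.T @ np.linalg.solve(tau2_eb * G + sigma2 * np.eye(n), y)
print(f"EB prior variance tau^2 = {tau2_eb:.3f}")
```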
7.
Strong orthogonal arrays (SOAs) were recently introduced and studied as a class of space-filling designs for computer experiments. An important problem that has not been addressed in the literature is that of design selection for such arrays. In this article, we conduct a systematic investigation into this problem, focusing on the most useful SOA(n, m, 4, 2+)s and SOA(n, m, 4, 2)s. The article first addresses design selection for SOAs of strength 2+ by examining their three-dimensional projections; both theoretical and computational results are presented. When SOAs of strength 2+ do not exist, we formulate a general framework for the selection of SOAs of strength 2 by looking at their two-dimensional projections. The approach is fruitful, as it is applicable when SOAs of strength 2+ do not exist and gives rise to them when they do. The Canadian Journal of Statistics 47: 302–314; 2019 © 2019 Statistical Society of Canada.
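Selection via low-dimensional projections can be illustrated with a small check of the two-dimensional stratification property associated with strength 2+: every pair of four-level columns should be balanced on both 4 x 2 and 2 x 4 grids. The function below performs that check; it is a simplified screening device under that stated property, not the article's full selection criteria.

```python
# Check whether every 2-d projection of a four-level array is balanced
# on 4x2 and 2x4 grids (each grid cell occupied equally often).
import numpy as np
from itertools import combinations

def stratifies_4x2_and_2x4(D):
    """D: (n, m) array with levels 0..3."""
    n, m = D.shape
    if n % 8:
        return False
    half = D // 2  # collapse levels {0,1,2,3} to {0,1}
    for i, j in combinations(range(m), 2):
        c42 = np.zeros((4, 2), dtype=int)
        c24 = np.zeros((2, 4), dtype=int)
        for a, b in zip(D[:, i], half[:, j]):
            c42[a, b] += 1
        for a, b in zip(half[:, i], D[:, j]):
            c24[a, b] += 1
        if not (np.all(c42 == n // 8) and np.all(c24 == n // 8)):
            return False
    return True

# Toy usage: the full 4^2 factorial is trivially stratified.
D = np.array([(a, b) for a in range(4) for b in range(4)])
print(stratifies_4x2_and_2x4(D))  # True
```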
8.
Proportional hazards is a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes a change in the hazard ratio while the trial is ongoing: at the beginning we do not observe any difference between treatment arms, and after some unknown time point the differences start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log-rank test allows weighting of early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment-arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs, and we make an empirical evaluation of the power and type-I error rate of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington weights. We also give some practical recommendations on which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
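The Fleming and Harrington class G(rho, gamma) weights each event time by S(t-)^rho * (1 - S(t-))^gamma, where S is the pooled Kaplan–Meier estimate, so gamma > 0 up-weights late differences, the relevant choice under a delayed effect. The self-contained sketch below computes that weighted log-rank statistic on toy data with a delayed treatment effect; it is illustrative and not the simulation code behind the article's evaluation.

```python
# Fleming-Harrington G(rho, gamma) weighted log-rank statistic.
import numpy as np
from scipy.stats import norm

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    time, event, group = map(np.asarray, (time, event, group))
    s_pool, num, den = 1.0, 0.0, 0.0
    for t in np.unique(time[event == 1]):       # sorted event times
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = s_pool ** rho * (1 - s_pool) ** gamma  # uses S(t-): update s_pool after
        num += w * (d1 - d * n1 / n)
        if n > 1:
            den += w ** 2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_pool *= 1 - d / n                        # pooled Kaplan-Meier step
    z = num / np.sqrt(den)
    return z, 2 * norm.sf(abs(z))                  # statistic, two-sided p-value

# Toy delayed effect: treatment hazard halves only after t = 0.5.
rng = np.random.default_rng(4)
t0 = rng.exponential(1.0, 100)
u = rng.exponential(1.0, 100)
t1 = np.where(u < 0.5, u, 0.5 + 2.0 * (u - 0.5))   # inverse cumulative-hazard trick
time = np.concatenate([t0, t1]); event = np.ones(200, int)
group = np.concatenate([np.zeros(100, int), np.ones(100, int)])
print(fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0))
```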
9.
Risk Analysis, 2018, 38(9): 1988–2009
Harbor seals in Iliamna Lake, Alaska, are a small, isolated population and one of only two freshwater populations of harbor seals in the world, yet little is known about their abundance or risk of extinction. Bayesian hierarchical models were used to estimate the abundance and trend of this population. Observational models were developed from aerial survey and harvest data, and they included effects of time of year and time of day on survey counts. The underlying models of abundance and trend were based on a Leslie matrix model that used prior information on vital rates from the literature. We developed three scenarios for variability in the priors and used them as part of a sensitivity analysis. The models were fitted using Markov chain Monte Carlo methods. The population production rate implied by the vital rate estimates was about 5% per year, very similar to the average annual harvest rate. After a period of growth in the 1980s, the population appears to be relatively stable at around 400 individuals. A population viability analysis assessed the risk of quasi-extinction, defined as any reduction to 50 animals or below in the next 100 years; the estimated risk ranged from 1% to 3%, depending on the prior scenario. Although this risk is moderately low, it does not account for genetic or catastrophic environmental events, which may have affected the population in the past, so our results should be applied cautiously.
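The quasi-extinction calculation can be mimicked with a simple forward simulation of a stage-structured Leslie matrix under stochastic vital rates and an approximately 5% annual harvest. All vital rates and starting abundances below are placeholders rather than the paper's posterior estimates, so the printed probability only demonstrates the mechanics.

```python
# Monte Carlo quasi-extinction probability from Leslie matrix projections:
# count 100-year trajectories that ever drop to 50 animals or fewer.
import numpy as np

rng = np.random.default_rng(5)
n_sims, horizon, threshold = 10_000, 100, 50
quasi_extinct = 0
for _ in range(n_sims):
    pop = np.array([100.0, 100.0, 200.0])  # pups, juveniles, adults (placeholders)
    for _ in range(horizon):
        fec = max(rng.normal(0.35, 0.05), 0.0)            # pups per adult
        s_p = np.clip(rng.normal(0.60, 0.05), 0.0, 1.0)   # pup survival
        s_j = np.clip(rng.normal(0.80, 0.04), 0.0, 1.0)   # juvenile survival
        s_a = np.clip(rng.normal(0.92, 0.02), 0.0, 1.0)   # adult survival
        L = np.array([[0.0, 0.0, fec],
                      [s_p, 0.0, 0.0],
                      [0.0, s_j, s_a]])
        pop = (L @ pop) * 0.95               # Leslie step plus ~5% annual harvest
        if pop.sum() <= threshold:
            quasi_extinct += 1
            break
print("quasi-extinction probability:", quasi_extinct / n_sims)
```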
10.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for statistical inference in constant-stress accelerated life testing. The EM algorithm is used to calculate the maximum likelihood estimates. The Fisher information matrix is derived using the missing information principle and is utilized to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure and the Metropolis–Hastings algorithm are utilized to compute Bayesian estimates. Furthermore, predictive estimates for the censored data and the related prediction intervals are obtained. We consider three optimality criteria to find the optimal stress level. A real data set is used to illustrate the value of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided, with discussion.
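For readers unfamiliar with the GHN model, the sketch below writes out its density and fits it by maximum likelihood to a complete simulated sample. It deliberately omits the article's progressive type-II censoring and EM machinery; the density and the inverse-transform sampler (X = theta * Z^(1/alpha) with Z standard half-normal) are standard facts about the GHN distribution, while the parameter values are illustrative.

```python
# Maximum likelihood for the generalized half-normal density
# f(x; a, t) = sqrt(2/pi) * (a/x) * (x/t)^a * exp(-0.5 * (x/t)^(2a)), x > 0.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    a, t = np.exp(params)                    # optimize on the log scale
    z = (x / t) ** a
    ll = (np.log(np.sqrt(2 / np.pi)) + np.log(a) - np.log(x)
          + np.log(z) - 0.5 * z ** 2)
    return -np.sum(ll)

# Simulate GHN data: if Z ~ |N(0,1)|, then X = t * Z^(1/a) is GHN(a, t).
rng = np.random.default_rng(6)
a_true, t_true = 1.5, 2.0
x = t_true * np.abs(rng.normal(size=300)) ** (1 / a_true)

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(x,), method="Nelder-Mead")
a_hat, t_hat = np.exp(fit.x)
print(f"alpha_hat={a_hat:.3f}, theta_hat={t_hat:.3f}")
```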