891.
Therese Saltkjel; Tone Alm Andreassen; Mirella Minkman 《International Journal of Social Welfare》2023,32(2):149-163
Conceptual frameworks are important for advancing systematic understanding in a field of research. Many conceptual models have been developed to study service integration, but few have addressed activation. Based on an outline of the literature on the integration of labour market services, we explored two complementary conceptual models from integrated care and assessed whether they could be transferred to the context of inclusive activation. The transfer of conceptual models is contingent on whether the significant features of inclusive activation are similar to those of health care, and on whether barriers to integrated labour market services are considered. We argue that the models facilitate a more analytical focus on service integration. Nevertheless, they must be adjusted to account for the significant position of workplaces and employers, the importance of frontline professionals' knowledge base, the co-production of service provision and the values characterising the service encounters.
892.
893.
Myungjin Kim; Gyuhyeong Goh 《Stat》2024,13(2):e678
Despite the increasing importance of high-dimensional varying coefficient models, the study of their Bayesian versions is still in its infancy. This paper contributes to the literature by developing a sparse empirical Bayes formulation that addresses the problem of high-dimensional model selection in the framework of Bayesian varying coefficient modelling under Gaussian process (GP) priors. To break the computational bottleneck of GP-based varying coefficient modelling, we introduce a low-cost computation strategy that incorporates linear algebra techniques and the Laplace approximation into the evaluation of the high-dimensional posterior model distribution. A simulation study demonstrates the superiority of the proposed Bayesian method over an existing high-dimensional varying coefficient modelling approach. In addition, its applicability to real data analysis is illustrated using yeast cell cycle data.
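The varying coefficient idea behind this abstract can be illustrated with a much simpler classical estimator than the paper's GP-based empirical Bayes method: a kernel-weighted least squares fit of the coefficient at a target index value. All data and parameter values below are synthetic assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated varying coefficient model: y_i = beta(u_i) * x_i + noise,
# with the coefficient beta(u) = 1 + u drifting in the index variable u.
n = 2000
u = rng.uniform(0.0, 1.0, n)
x = rng.normal(size=n)
y = (1.0 + u) * x + 0.1 * rng.normal(size=n)

def local_beta(u0, u, x, y, h=0.1):
    """Kernel-weighted least squares estimate of beta(u0), no intercept."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x * x)

beta_hat = local_beta(0.5, u, x, y)
print(round(beta_hat, 2))  # close to the true value beta(0.5) = 1.5
```

A Bayesian GP formulation replaces the fixed kernel bandwidth with a prior over smooth coefficient functions, which is what makes the posterior evaluation expensive in high dimensions.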
894.
We propose tractable symmetric exponential families of distributions for multivariate vectors of 0's and 1's (referred to in this paper as binary vectors) that allow for nontrivial amounts of variation around some central value. We note that more or less standard asymptotics provides likelihood-based inference in the one-sample problem. We then consider mixture models whose component distributions are of this form. Bayes analysis based on Dirichlet processes and Jeffreys priors for the exponential family parameters proves tractable and informative in problems where relevant distributions for a vector of binary variables are clearly not symmetric. We also extend the proposed Bayesian mixture model analysis to datasets with missing entries. Performance is illustrated through simulation studies and application to real datasets.
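One simple instance of a symmetric exponential family on binary vectors takes the Hamming distance to a central vector as its sufficient statistic; this is an assumption for illustration, and the paper's exact parametrization may differ. Independence across coordinates gives a closed-form normalizing constant.

```python
import itertools
import math

def prob(x, center, theta):
    """Density p(x) proportional to exp(-theta * hamming(x, center))."""
    d = len(center)
    h = sum(xi != ci for xi, ci in zip(x, center))
    # Coordinates flip independently, so the normalizer factorizes:
    # Z = sum_h C(d, h) * exp(-theta * h) = (1 + exp(-theta)) ** d
    Z = (1.0 + math.exp(-theta)) ** d
    return math.exp(-theta * h) / Z

center = (1, 0, 1)
theta = 1.5
total = sum(prob(x, center, theta)
            for x in itertools.product((0, 1), repeat=3))
print(round(total, 6))  # sums to 1 over all 2^3 binary vectors
```

Larger theta concentrates mass near the central vector; theta = 0 recovers the uniform distribution, so the single parameter controls the "amount of variation around the centre" the abstract describes.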
895.
Owen G. Ward; Zhen Huang; Andrew Davison; Tian Zheng 《Statistical Analysis and Data Mining》2021,14(1):5-17
Embedding nodes of a large network into a metric (e.g., Euclidean) space has become an area of active research in statistical machine learning, with applications in the natural and social sciences. Generally, a representation of a network object is learned in a Euclidean geometry and is then used for subsequent tasks regarding the nodes and/or edges of the network, such as community detection, node classification and link prediction. Network embedding algorithms have been proposed in multiple disciplines, often with domain-specific notations and details. In addition, different measures and tools have been adopted to evaluate and compare the methods proposed under different settings, often dependent on the downstream tasks. As a result, it is challenging to study these algorithms in the literature systematically. Motivated by the recently proposed PCS framework for Veridical Data Science, we propose a framework for network embedding algorithms and discuss how the principles of predictability, computability, and stability (PCS) apply in this context. The utilization of this framework in network embedding holds the potential to motivate and point to new directions for future research.
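As a minimal, assumed example of the kind of pipeline this framework covers, the sketch below embeds a toy graph via a spectral method (Laplacian eigenvectors) and uses the one-dimensional embedding for community detection; the graph and the method are illustrative, not from the paper.

```python
import numpy as np

# Two 4-node cliques joined by a single bridge edge (nodes 3 and 4).
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

deg = A.sum(axis=1)
L = np.diag(deg) - A                  # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
embedding = vecs[:, 1]                # Fiedler vector: a 1-D node embedding

labels = (embedding > 0).astype(int)  # sign split recovers the two cliques
print(labels)
```

Downstream evaluation (here, whether the split matches the planted communities) is exactly the kind of task-dependent comparison the abstract says makes systematic study difficult.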
896.
Joshua Hanson; Pavel Bochev; Biliana Paskaleva 《Statistical Analysis and Data Mining》2021,14(6):521-535
Radiation-induced photocurrent in semiconductor devices can be simulated using complex physics-based models, which are accurate but computationally expensive. This presents a challenge for implementing device characteristics in high-level circuit simulations, where it is computationally infeasible to evaluate detailed models for multiple individual circuit elements. In this work, we demonstrate a procedure for learning compact delayed photocurrent models that are efficient enough to implement in large-scale circuit simulations but remain faithful to the underlying physics. Our approach utilizes dynamic mode decomposition (DMD), a system identification technique for learning reduced-order discrete-time dynamical systems from time series data based on the singular value decomposition. To obtain physics-aware device models, we simulate the excess carrier density induced by radiation pulses by numerically solving the ambipolar diffusion equation, then use the simulated internal state as training data for the DMD algorithm. Our results show that the significantly reduced-order delayed photocurrent models obtained via this method accurately approximate the dynamics of the internal excess carrier density, which can be used to calculate the induced current at the device boundaries, while remaining compact enough to incorporate into larger circuit simulations.
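Exact DMD, the technique referenced here, can be sketched in a few lines: collect paired snapshot matrices, compute a truncated SVD, and form a reduced operator whose eigenvalues approximate the discrete-time dynamics. The toy linear system below is an assumption for illustration, not the device physics from the paper.

```python
import numpy as np

# Ground-truth linear dynamics x_{k+1} = A x_k with eigenvalues 0.9 and 0.5.
A = np.array([[0.9, 0.0],
              [0.0, 0.5]])
states = [np.array([1.0, 1.0])]
for _ in range(10):
    states.append(A @ states[-1])
data = np.column_stack(states)        # snapshot matrix, one column per step

X, Xp = data[:, :-1], data[:, 1:]     # time-shifted snapshot pairs
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2                                 # truncation rank
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :].T
A_tilde = Ur.T @ Xp @ Vr / sr         # reduced operator (exact DMD)
eigvals = np.sort(np.linalg.eigvals(A_tilde).real)
print(eigvals)  # recovers the spectrum [0.5, 0.9] of the true dynamics
```

Because the training data here come from an exactly linear system, DMD recovers the spectrum to machine precision; for PDE-generated carrier-density data the truncation rank r controls the accuracy/compactness trade-off the abstract describes.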
897.
The importance of providing explanations for predictions made by black-box models has led to the development of explainer model methods such as LIME (local interpretable model-agnostic explanations). LIME uses a surrogate model to explain the relationship between predictor variables and predictions from a black-box model in a local region around a prediction of interest. However, the quality of the resulting explanations relies on how well the explainer model captures the black-box model in the specified local region. Here we introduce three visual diagnostics to assess the quality of LIME explanations: (1) explanation scatterplots, (2) assessment metric plots, and (3) feature heatmaps. We apply the visual diagnostics to a forensic bullet matching dataset to show examples where LIME explanations depend on the tuning parameter values and the explainer model oversimplifies the black-box model. Our examples raise concerns about claims made for LIME that echo other criticisms in the literature.
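The core of a LIME-style explanation can be sketched as a weighted linear surrogate fitted to perturbations around the point of interest; the black-box function, kernel, and parameter values below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """Stand-in for an opaque model: f(x) = x0^2 + 3*x1."""
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

x0 = np.array([1.0, 2.0])                 # prediction we want to explain
n, scale = 500, 0.05
Z = x0 + scale * rng.normal(size=(n, 2))  # perturbations near x0
y = black_box(Z)

# Kernel weights emphasizing the samples closest to x0.
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * scale ** 2))
Zc = np.column_stack([np.ones(n), Z])     # design matrix with intercept
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(Zc * sw[:, None], y * sw, rcond=None)
print(coef[1:].round(1))  # near [2., 3.], the local gradient of f at x0
```

The tuning-parameter sensitivity the abstract examines shows up directly here: widening `scale` makes the linear surrogate average over more of the curvature of f, changing the reported coefficients.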
898.
This paper considers a feature screening method for ultrahigh-dimensional semiparametric linear models with longitudinal data. The C-statistic, which measures the rank concordance between predictors and outcomes, is generalized to the longitudinal setting. On the basis of the C-statistic and score equation theory, we propose a feature screening method named LCSIS. Built on a smoothing technique and the score equations, the proposed screening procedure is easy to compute and satisfies feature screening consistency. Furthermore, Monte Carlo simulation studies and a real data application are conducted to examine the finite sample performance of the proposed procedure.
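A generic, cross-sectional (not longitudinal) version of C-statistic-based screening can be sketched as follows; the data-generating model and the deviation-from-1/2 ranking rule are assumptions for illustration, not the LCSIS procedure itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def c_statistic(x, y):
    """Pairwise rank concordance between a predictor and the outcome."""
    conc = total = 0
    n = len(y)
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] != y[j]:
                total += 1
                conc += (x[i] - x[j]) * (y[i] - y[j]) > 0
    return conc / total

# Outcome driven by the first of five predictors only.
n, p = 200, 5
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * rng.normal(size=n)

# Screen by how far each predictor's concordance departs from the
# uninformative value 1/2 (inactive predictors hover near 0.5).
scores = [abs(c_statistic(X[:, k], y) - 0.5) for k in range(p)]
print(int(np.argmax(scores)))  # the active predictor ranks first: 0
```

Because the C-statistic depends only on ranks, this kind of screening is robust to monotone transformations of the outcome, which is one reason concordance-based utilities are popular in ultrahigh-dimensional settings.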
899.
Minghong Yao; Yuning Wang; Yan Ren; Yulong Jia; Kang Zou; Ling Li; Xin Sun 《Research Synthesis Methods》2023,14(5):689-706
Rare events meta-analyses of randomized controlled trials (RCTs) are often underpowered because the outcomes are infrequent. Real-world evidence (RWE) from non-randomized studies may provide valuable complementary evidence about the effects of rare events, and there is growing interest in including such evidence in the decision-making process. Several methods for combining RCTs and RWE studies have been proposed, but the comparative performance of these methods is not well understood. We describe a simulation study that aims to evaluate an array of alternative Bayesian methods for including RWE in rare events meta-analysis of RCTs: the naïve data synthesis, the design-adjusted synthesis, the use of RWE as prior information, the three-level hierarchical models, and the bias-corrected meta-analysis model. Percentage bias, root-mean-square error, mean 95% credible interval width, coverage probability, and power are used to measure performance. The various methods are illustrated using a systematic review evaluating the risk of diabetic ketoacidosis among patients using sodium/glucose co-transporter 2 inhibitors as compared with active comparators. Our simulations show that the bias-corrected meta-analysis model is comparable to or better than the other methods across all evaluated performance measures and simulation scenarios. Our results also demonstrate that data solely from RCTs may not be sufficiently reliable for assessing the effects of rare events. In summary, the inclusion of RWE could increase the certainty and comprehensiveness of the body of evidence on rare events from RCTs, and the bias-corrected meta-analysis model may be preferable.
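One of the listed strategies, design-adjusted synthesis, can be sketched as inverse-variance pooling in which the RWE variances are inflated to discount possible bias from non-randomized designs; the study estimates and the inflation factor below are purely illustrative assumptions, and the paper's actual implementation is Bayesian.

```python
import numpy as np

# Hypothetical log odds ratios and variances from RCTs and one RWE study.
rct_est, rct_var = np.array([0.40, 0.55]), np.array([0.10, 0.08])
rwe_est, rwe_var = np.array([0.90]), np.array([0.02])

def pool(est, var):
    """Fixed-effect inverse-variance pooled estimate."""
    w = 1.0 / var
    return np.sum(w * est) / np.sum(w)

naive = pool(np.concatenate([rct_est, rwe_est]),
             np.concatenate([rct_var, rwe_var]))

# Design-adjusted synthesis: inflate the RWE variance to down-weight
# the non-randomized evidence (the factor 4 is illustrative).
adjusted = pool(np.concatenate([rct_est, rwe_est]),
                np.concatenate([rct_var, rwe_var * 4.0]))

print(round(naive, 3), round(adjusted, 3))
```

Because the precise RWE study dominates the naïve pooling, inflating its variance pulls the combined estimate back toward the RCT evidence, which is the intended behaviour of design adjustment.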
900.
Zhenbang Wang; Emanuel Ben-David; Guoqing Diao; Martin Slawski 《Wiley Interdisciplinary Reviews: Computational Statistics》2022,14(4):e1570
Data are often collected from multiple heterogeneous sources and are combined subsequently. In combining data, record linkage is an essential task for linking records in datasets that refer to the same entity. Record linkage is generally not error-free; there is a possibility that records belonging to different entities are linked or that records belonging to the same entity are missed. It is not advisable to simply ignore such errors, because they can lead to data contamination and introduce bias in sample selection or estimation, which, in turn, can lead to misleading statistical results and conclusions. For a long while, this problem was not properly recognized, but in recent years a growing number of researchers have developed methodology for dealing with linkage errors in regression analysis with linked datasets. The main goal of this overview is to give an account of those developments, with an emphasis on recent approaches and their connection to the so-called “Broken Sample” problem. We also provide a short empirical study that illustrates the efficacy of corrective methods in different scenarios. This article is categorized under:
- Statistical Models > Model Selection
- Statistical and Graphical Methods of Data Analysis > Robust Methods
- Statistical and Graphical Methods of Data Analysis > Multivariate Analysis
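The attenuation caused by false links, and a simple correction when the mismatch rate is known, can be sketched as follows; the linear model, the mismatch mechanism, and the assumption of a known rate q are illustrative, not a method from the article.

```python
import numpy as np

rng = np.random.default_rng(3)

# True linked pairs: y = 2 x + noise.
n, q = 5000, 0.3                          # q = fraction of false links
x = rng.normal(size=n)
y = 2.0 * x + 0.5 * rng.normal(size=n)

# Corrupt the linkage: a random 30% of records get someone else's y.
idx = rng.choice(n, size=int(q * n), replace=False)
y_linked = y.copy()
y_linked[idx] = y[rng.permutation(idx)]

naive = np.sum(x * y_linked) / np.sum(x * x)   # OLS slope, no intercept
corrected = naive / (1.0 - q)                  # attenuation correction
print(round(naive, 2), round(corrected, 2))
```

Under random mismatching, the falsely linked responses are uncorrelated with the covariate, so the naive slope shrinks by roughly the factor (1 - q); rescaling by that factor is the simplest of the corrective ideas surveyed here.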