221.
High-content automated imaging platforms allow several targets to be multiplexed simultaneously, generating multi-parametric single-cell data sets over extended periods of time. Typically, simple summary measures such as the mean value across all cells at each time point are calculated to summarize the temporal process, discarding the time dynamics of the individual cells. Multiple experiments are performed, but the observation time points are not necessarily identical, which makes it difficult to integrate summary measures from different experiments. We used functional data analysis to analyze continuous curve data, where the temporal process of a response variable for each single cell is described by a smooth curve. This allows analyses to be performed on continuous functions rather than on the original discrete data points. Functional regression models were applied to determine common temporal characteristics of a set of single-cell curves, and random effects were employed in the models to explain variation between experiments. The aim of the multiplexing approach is to analyze the effects of a large number of compounds simultaneously, relative to control, in order to discriminate between their modes of action. Functional principal component analysis based on T-statistic curves for pairwise comparisons against control was used to study time-dependent compound effects.
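The core preprocessing step — turning each cell's discrete, irregularly timed readouts into a smooth curve evaluated on a common grid — can be sketched as follows. This is a minimal illustration using a polynomial basis; the function names, the basis choice, and the simulated signal are ours, not the platform's:

```python
import numpy as np

def smooth_curve(t, y, degree=5):
    """Fit a polynomial basis to one cell's discrete readouts and return a
    callable smooth curve (a stand-in for the spline bases typically used
    in functional data analysis)."""
    return np.poly1d(np.polyfit(t, y, degree))

# Two "experiments" observed on deliberately different time grids.
rng = np.random.default_rng(0)
t1, t2 = np.linspace(0, 10, 15), np.linspace(0, 10, 21)
truth = lambda t: np.sin(0.5 * t)
cells = [(t1, truth(t1) + 0.05 * rng.normal(size=t1.size)),
         (t2, truth(t2) + 0.05 * rng.normal(size=t2.size))]

# Evaluating every smoothed curve on one common grid makes the
# experiments directly comparable despite the mismatched time points.
grid = np.linspace(0, 10, 50)
curves = np.array([smooth_curve(t, y)(grid) for t, y in cells])
mean_curve = curves.mean(axis=0)   # functional mean across cells
```

Once every cell lives on the same grid, functional regression and functional PCA operate on the rows of `curves` rather than on the raw, misaligned observations.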
222.
Rodrigo R. Pescim, Edwin M. M. Ortega, Gauss M. Cordeiro & Morad Alizadeh, Journal of Applied Statistics, 2017, 44(2): 233-252
We introduce a log-linear regression model based on the odd log-logistic generalized half-normal distribution [7]. Some of its structural properties, including explicit expressions for the density function, the quantile and generating functions, and the ordinary moments, are derived. We estimate the model parameters by the maximum likelihood method. For different parameter settings, censoring proportions, and sample sizes, simulations are performed to investigate the behavior of the estimators. We derive the appropriate matrices for assessing local influence diagnostics on the parameter estimates under different perturbation schemes. We also define the martingale and modified deviance residuals to detect outliers and evaluate the model assumptions. In addition, we demonstrate that the extended regression model can be very useful in the analysis of real data, providing more realistic fits than other special regression models. The potential of the new regression model is illustrated by means of a real data set.
223.
Many kinds of directional data, such as wind directions, can be collected so easily that experiments typically yield a huge number of sequentially collected data points. For such big data, traditional nonparametric techniques rapidly become computationally prohibitive, and are therefore useless in practice when real-time or online forecasts are expected. In this paper, we propose a recursive kernel density estimator for directional data which (i) can be updated extremely easily when a new set of observations becomes available and (ii) asymptotically retains the nice features of the traditional kernel density estimator. Our methodology is based on Robbins–Monro stochastic approximation ideas. We show that our estimator outperforms traditional techniques in terms of computational time while remaining extremely competitive in terms of efficiency with respect to its competitors in the sequential context considered here. We obtain expressions for its asymptotic bias and variance, together with an almost sure convergence rate and an asymptotic normality result. Our technique is illustrated on a wind dataset collected in Spain. A Monte Carlo study confirms the nice properties of our recursive estimator with respect to its non-recursive counterpart.
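A recursive estimator of this kind can be sketched with a von Mises kernel on the circle and a Robbins–Monro step size of 1/n. The kernel concentration and grid size below are arbitrary choices of ours; a real implementation would also let the bandwidth shrink with n:

```python
import numpy as np

def von_mises_kernel(grid, theta, kappa=20.0):
    """von Mises kernel centred at theta: the circular analogue of a Gaussian."""
    return np.exp(kappa * np.cos(grid - theta)) / (2.0 * np.pi * np.i0(kappa))

# Recursive update f_n = (1 - g_n) f_{n-1} + g_n K(., X_n) with g_n = 1/n:
# each new angle costs O(len(grid)) work and old data is never revisited.
grid = np.linspace(-np.pi, np.pi, 512, endpoint=False)
f_hat = np.zeros_like(grid)
rng = np.random.default_rng(1)
for n, theta in enumerate(rng.vonmises(0.0, 2.0, 2000), start=1):
    gamma = 1.0 / n                      # Robbins-Monro step size
    f_hat = (1.0 - gamma) * f_hat + gamma * von_mises_kernel(grid, theta)
```

Because the update only blends the previous estimate with one new kernel, the per-observation cost is constant, which is what makes online forecasting feasible.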
224.
225.
The gist of the quickest change-point detection problem is to detect the presence of a change in the statistical behavior of a series of sequentially made observations, and to do so in a manner that optimally balances detection speed against the "false-positive" risk. When optimality is understood either in the generalized Bayesian sense or as defined in Shiryaev's multi-cyclic setup, the so-called Shiryaev–Roberts (SR) detection procedure is known to be the "best one can do", provided, however, that the observations' pre- and post-change distributions are both fully specified. We consider a more realistic setup, viz. one where the post-change distribution is known only up to a parameter, so that the latter may be misspecified. The question of interest is the sensitivity (or robustness) of the otherwise "best" SR procedure with respect to a possible misspecification of the post-change distribution parameter. To answer this question, we provide a case study in which, in a specific Gaussian scenario, we allow the SR procedure to be "out of tune" with respect to the post-change distribution parameter, and numerically assess the effect of the "mistuning" on Shiryaev's (multi-cyclic) Stationary Average Detection Delay delivered by the SR procedure. The comprehensive quantitative robustness characterization of the SR procedure obtained in the study can be used to develop the respective theory as well as to provide a rationale for the practical design of the SR procedure. The overall qualitative conclusion of the study is an expected one: the SR procedure is less (more) robust for less (more) contrast changes and for lower (higher) levels of the false alarm risk.
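For concreteness, the SR statistic admits the simple recursion R_n = (1 + R_{n-1}) Λ_n, where Λ_n is the likelihood ratio of the n-th observation. A sketch for a Gaussian mean-shift scenario like the one studied above (the threshold, shift size, and data are illustrative values of ours):

```python
import numpy as np

def sr_alarm(x, mu1, threshold, sigma=1.0):
    """Run the Shiryaev-Roberts recursion R_n = (1 + R_{n-1}) * LR_n for a
    mean shift 0 -> mu1 in N(., sigma^2) data; return the first n with
    R_n >= threshold, or None if no alarm is raised."""
    r = 0.0
    for n, xn in enumerate(x, start=1):
        log_lr = (mu1 * xn - 0.5 * mu1**2) / sigma**2  # Gaussian likelihood ratio
        r = (1.0 + r) * np.exp(log_lr)
        if r >= threshold:
            return n
    return None

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0.0, 1.0, 100),    # in-control segment
                       rng.normal(1.0, 1.0, 100)])   # mean shifts at n = 101
alarm = sr_alarm(data, mu1=1.0, threshold=5000.0)
```

"Mistuning" in the article's sense amounts to running `sr_alarm` with a `mu1` that differs from the true post-change mean and measuring how the detection delay degrades.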
226.
In this article, dichotomous variables are used to compare linear and nonlinear Bayesian structural equation models. Gibbs sampling is applied for estimation and model comparison. Statistical inference involving the estimation of parameters and their standard deviations, as well as residual analysis for testing the selected model, is discussed. A hidden continuous normal distribution (censored normal distribution) is used to handle the dichotomous variables. The proposed procedure is illustrated on simulated data generated in R, and the analyses are carried out using the R2WinBUGS package in R.
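The censored-normal device for binary indicators can be illustrated with an Albert–Chib style Gibbs sampler for a tiny probit model. This is our own minimal example, not the article's structural equation model, and the flat prior on the coefficients is also an assumption:

```python
import numpy as np

def sample_trunc(mean, positive, rng):
    """Draw from N(mean, 1) restricted to (0, inf) if positive else (-inf, 0],
    by rejection (fine for a sketch; real code would use a tail-safe sampler)."""
    while True:
        z = mean + rng.normal()
        if (z > 0) == positive:
            return z

# Simulate probit data: y_i = 1 exactly when the hidden normal
# variable z_i = x_i' beta + e_i is positive.
rng = np.random.default_rng(5)
n = 400
x = rng.uniform(-2.0, 2.0, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.3, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0)

# Gibbs sampler alternating between the hidden z (truncated normals)
# and beta (conjugate normal under a flat prior, since var(z) = 1).
XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)
beta = np.zeros(2)
draws = []
for it in range(600):
    mu = X @ beta
    z = np.array([sample_trunc(m, yi, rng) for m, yi in zip(mu, y)])
    beta = XtX_inv @ (X.T @ z) + chol @ rng.normal(size=2)
    if it >= 100:                        # discard burn-in
        draws.append(beta)
beta_hat = np.mean(draws, axis=0)
```

The truncated-normal draw is exactly the "hidden continuous normal" step: conditioning on the observed 0/1 outcome censors the latent variable to one side of zero.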
227.
Several probability distributions have been proposed in the literature, especially with the aim of obtaining models that are more flexible with respect to the behavior of the density and hazard rate functions. Recently, two generalizations of the Lindley distribution were proposed: the power Lindley distribution and the inverse Lindley distribution. In this article, a distribution is obtained by combining these two generalizations and named the inverse power Lindley distribution. Some properties of this distribution, together with a study of the behavior of the maximum likelihood estimators, are presented and discussed. The distribution is also fitted to two real datasets and compared with the fits obtained for well-known distributions. In these applications, the inverse power Lindley distribution proved to be a good alternative for modeling survival data.
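If Y follows a power Lindley distribution, the inverse power Lindley law is that of X = 1/Y; its density follows from the standard change-of-variable argument. The exact form written below is our reading of that transformation, not a quotation of the article's notation, and it can be sanity-checked numerically:

```python
import numpy as np

def inv_power_lindley_pdf(x, alpha, beta):
    """Density of X = 1/Y with Y ~ power Lindley(alpha, beta):
    f(x) = alpha*beta^2/(beta+1) * (1 + x^alpha) / x^(2*alpha+1) * exp(-beta / x^alpha)."""
    x = np.asarray(x, dtype=float)
    c = alpha * beta**2 / (beta + 1.0)
    return c * (1.0 + x**alpha) / x**(2.0 * alpha + 1.0) * np.exp(-beta / x**alpha)

# The density should integrate to (approximately) one; the tail decays
# like x^(-alpha - 1), so a finite upper limit suffices for alpha = 2.
x = np.linspace(1e-3, 500.0, 500001)
f = inv_power_lindley_pdf(x, alpha=2.0, beta=1.0)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoidal rule
```

The heavy polynomial tail (rather than the exponential tail of the power Lindley) is what makes the inverse form attractive for survival data with upside-down bathtub hazards.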
228.
In this article, we perform Bayesian estimation of stochastic volatility models with heavy-tailed distributions using the Metropolis-adjusted Langevin algorithm (MALA) and the Riemann manifold Langevin algorithm (MMALA). We provide analytical expressions for the application of these methods, assess their performance on simulated data, and illustrate their use on two financial time series datasets.
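The MALA update drifts the proposal along the gradient of the log-target before the usual Metropolis-Hastings correction. A self-contained one-dimensional sketch, targeting a standard normal purely for illustration (the stochastic volatility posterior itself is far more involved):

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, eps, n_iter, rng):
    """Metropolis-adjusted Langevin: the proposal drifts along the gradient
    of the log-target, then a Metropolis-Hastings step corrects the
    discretization error of the Langevin dynamics."""
    x, out = x0, np.empty(n_iter)
    for i in range(n_iter):
        fwd_mean = x + 0.5 * eps**2 * grad_log_pi(x)
        prop = fwd_mean + eps * rng.normal()
        bwd_mean = prop + 0.5 * eps**2 * grad_log_pi(prop)
        log_acc = (log_pi(prop) - log_pi(x)
                   - (x - bwd_mean)**2 / (2.0 * eps**2)
                   + (prop - fwd_mean)**2 / (2.0 * eps**2))
        if np.log(rng.uniform()) < log_acc:
            x = prop
        out[i] = x
    return out

# Illustrative target: standard normal, log pi(x) = -x^2/2 (up to a constant).
rng = np.random.default_rng(7)
samples = mala(lambda v: -0.5 * v**2, lambda v: -v, 0.0, 0.9, 20000, rng)
```

MMALA additionally preconditions both the drift and the noise with a position-dependent metric, which is where the article's analytical expressions come in.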
229.
The traditional Cobb–Douglas production function uses a compact mathematical form to describe the relationship between production output and production factors in a technological production process. In macro-economic production, however, multi-structured production is ubiquitous. To better capture such input–output relations, a composite production function model is proposed in this article. For parameter estimation, the artificial fish swarm algorithm is applied. The algorithm performs well at escaping local extrema and locating the global extremum; moreover, it does not require the gradient of the objective function, which makes it adaptive to the search space. With the improved artificial fish swarm algorithm, both the convergence rate and the precision are considerably improved. As for applications, the composite production function model is mainly used to calculate the contribution rates of economic growth factors, and a relatively more accurate calculation method is proposed. Finally, an empirical analysis of the contribution rates to China's economic growth is carried out.
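For reference, the classical two-factor Cobb–Douglas form Y = A·K^a·L^b becomes linear after taking logs, so its parameters can be recovered by ordinary least squares. This baseline sketch (with simulated data of our own) is what the article's composite model and fish-swarm estimation generalize beyond:

```python
import numpy as np

# Simulate output from Y = A * K^a * L^b with multiplicative noise.
rng = np.random.default_rng(3)
n = 500
K = rng.uniform(1.0, 100.0, n)           # capital input
L = rng.uniform(1.0, 100.0, n)           # labor input
A_true, a_true, b_true = 2.0, 0.4, 0.6
Y = A_true * K**a_true * L**b_true * np.exp(0.05 * rng.normal(size=n))

# Taking logs gives log Y = log A + a log K + b log L, a linear model.
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
logA_hat, a_hat, b_hat = coef
```

A composite (multi-structured) production function no longer log-linearizes this way, which is why gradient-free global optimizers such as the artificial fish swarm algorithm become attractive for fitting it.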
230.
In this article, we consider a linear model in which the covariates are measured with error. We propose a t-type corrected-loss estimator of the covariate effect when the measurement error follows a Laplace distribution. The proposed estimator is asymptotically normal. In practical studies, outliers that undermine the robustness of the estimation may occur. Simulation studies show that the estimators are resistant to vertical outliers, and an application to 6-minute walk test data shows that the proposed method performs well.
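To see why a correction is needed at all: regressing on an error-contaminated covariate attenuates the slope by the factor var(X)/(var(X) + var(U)). The quick simulation below, with Laplace measurement errors, illustrates the bias and a simple moment-based correction; it is not the article's t-type corrected-loss estimator, which is built to also resist outliers:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
x = rng.normal(0.0, 1.0, n)               # true covariate
w = x + rng.laplace(0.0, 0.5, n)          # observed with Laplace error
y = 2.0 * x + rng.normal(0.0, 0.5, n)     # true slope is 2

# Naive least squares on the noisy w is biased toward zero by the factor
# var(x) / (var(x) + var(u)); a Laplace(0, b) error has variance 2*b^2.
naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
var_u = 2.0 * 0.5**2
corrected = naive * np.var(w, ddof=1) / (np.var(w, ddof=1) - var_u)
```

Here the attenuation factor is 1/(1 + 0.5) = 2/3, so the naive slope sits near 1.33 instead of 2 until the error variance is accounted for.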