Full-text access type
Paid full text | 23754 articles |
Free | 791 articles |
Free within China | 245 articles |
Subject classification
Management | 1725 articles |
Labor Science | 3 articles |
Ethnology | 241 articles |
Talent Studies | 1 article |
Demography | 583 articles |
Collected Works and Series | 1931 articles |
Theory and Methodology | 1018 articles |
General | 14846 articles |
Sociology | 1622 articles |
Statistics | 2820 articles |
Publication year
2024 | 38 articles |
2023 | 188 articles |
2022 | 199 articles |
2021 | 263 articles |
2020 | 424 articles |
2019 | 465 articles |
2018 | 482 articles |
2017 | 585 articles |
2016 | 575 articles |
2015 | 662 articles |
2014 | 1175 articles |
2013 | 1968 articles |
2012 | 1525 articles |
2011 | 1663 articles |
2010 | 1400 articles |
2009 | 1285 articles |
2008 | 1437 articles |
2007 | 1659 articles |
2006 | 1544 articles |
2005 | 1406 articles |
2004 | 1279 articles |
2003 | 1170 articles |
2002 | 953 articles |
2001 | 793 articles |
2000 | 546 articles |
1999 | 236 articles |
1998 | 129 articles |
1997 | 129 articles |
1996 | 110 articles |
1995 | 79 articles |
1994 | 80 articles |
1993 | 68 articles |
1992 | 52 articles |
1991 | 31 articles |
1990 | 30 articles |
1989 | 36 articles |
1988 | 25 articles |
1987 | 23 articles |
1986 | 21 articles |
1985 | 14 articles |
1984 | 15 articles |
1983 | 7 articles |
1982 | 6 articles |
1981 | 9 articles |
1980 | 2 articles |
1979 | 1 article |
1978 | 1 article |
1977 | 1 article |
1975 | 1 article |
Sort order: 10,000 results found; search took 0 ms
121.
Main Problems and Contradictions in the Economic Development of Ethnic Minority Regions (Cited: 1 total; 0 self-citations; 1 by others)
Many favorable factors are present in the economic development of China's ethnic minority regions; in particular, the implementation of the Western Development strategy will greatly promote economic development there. But contradictions also remain: between the regions' level of social development and the demands of a socialist market economy, between resource exploitation and environmental protection, between equity and efficiency, and between small-scale production and large markets with large-scale circulation.
122.
Process regression methodology is underdeveloped relative to the frequency with which pertinent data arise. In this article, the response is a binary indicator process representing the joint event of being alive and remaining in a specific state. The process is indexed by time (e.g., time since diagnosis) and observed continuously. Data of this sort occur frequently in the study of chronic disease. A general area of application involves a recurrent event with non-negligible duration (e.g., hospitalization and associated length of hospital stay) and subject to a terminating event (e.g., death). We propose a semiparametric multiplicative model for the process version of the probability of being alive and in the (transient) state of interest. Under the proposed methods, the regression parameter is estimated through a procedure that does not require estimating the baseline probability. Unlike the majority of process regression methods, the proposed methods accommodate multiple sources of censoring. In particular, we derive a computationally convenient variant of inverse probability of censoring weighting based on the additive hazards model. We show that the regression parameter estimator is asymptotically normal, and that the baseline probability function estimator converges to a Gaussian process. Simulations demonstrate that our estimators have good finite sample performance. We apply our method to national end-stage liver disease data. The Canadian Journal of Statistics 48: 222–237; 2020 © 2019 Statistical Society of Canada
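As a rough illustration of the inverse-probability-of-censoring idea mentioned above, the sketch below reweights observed events by a Kaplan-Meier estimate of the censoring distribution to recover a survival probability. Note this is a simplified, hypothetical example: the article derives its weights from an additive hazards model, which is not attempted here, and all data-generating settings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
event_time = rng.exponential(1.0, n)              # latent event times, rate 1
cens_time = rng.exponential(2.0, n)               # independent censoring, rate 0.5
obs = np.minimum(event_time, cens_time)           # observed follow-up time
delta = (event_time <= cens_time).astype(float)   # 1 = event observed

# Kaplan-Meier estimate of the censoring survival G(t) = P(C > t)
order = np.argsort(obs)
t_sorted = obs[order]
d_cens = (1.0 - delta)[order]                     # censoring acts as the "event"
at_risk = np.arange(n, 0, -1)
G_steps = np.cumprod(1.0 - d_cens / at_risk)

def G_minus(x):
    """Left-continuous censoring survival G(x-)."""
    idx = np.searchsorted(t_sorted, x, side="left") - 1
    vals = G_steps[np.clip(idx, 0, n - 1)]
    return np.where(idx < 0, 1.0, vals)

# IPCW estimate of S(1) = P(T > 1): reweight events observed before t0 = 1
t0 = 1.0
ind = delta * (obs <= t0)
w = np.where(ind > 0, ind / np.maximum(G_minus(obs), 1e-10), 0.0)
S_hat = 1.0 - w.mean()
```

With the settings above the true value is exp(-1) ≈ 0.368, and the weighted estimate should land close to it despite roughly a third of subjects being censored.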
123.
Wolfgang Panny, Stochastic Models, 2016, 32(1):160-178
Two-periodic random walks have up-steps and down-steps of one unit as usual, but the probability of an up-step is α after an even number of steps and β = 1 − α after an odd number of steps, and reversed for down-steps. This concept was studied by Böhm and Hornik[2]. We complement this analysis by using methods from (analytic) combinatorics. By taking two steps at once, we can reduce the analysis to the study of Motzkin paths, with up-steps, down-steps, and level-steps. Using a proper substitution, we get the generating functions of interest in an explicit and neat form. The parameters discussed here are the (one-sided) maximum (already studied by Böhm and Hornik[2]) and the two-sided maximum. For the asymptotic evaluation of the average value of the two-sided maximum after n random steps, more sophisticated methods from complex analysis (Mellin transform, singularity analysis) are required. The approach of transferring the analysis to Motzkin paths is, of course, not restricted to the two parameters under consideration.
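The two-periodic walk itself is easy to simulate, and a Monte Carlo estimate of the average two-sided maximum gives a quick numerical check on any asymptotic formula. A minimal sketch (α = 0.3 and the other settings are arbitrary choices for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def two_periodic_walk(n_steps, alpha, rng):
    """Walk with P(up) = alpha after an even number of steps,
    P(up) = 1 - alpha after an odd number of steps."""
    p_up = np.where(np.arange(n_steps) % 2 == 0, alpha, 1.0 - alpha)
    steps = np.where(rng.random(n_steps) < p_up, 1, -1)
    return np.concatenate(([0], np.cumsum(steps)))

def mean_two_sided_max(n_steps, alpha, n_rep, rng):
    """Monte Carlo mean of the two-sided maximum max_k |S_k|."""
    total = 0.0
    for _ in range(n_rep):
        total += np.abs(two_periodic_walk(n_steps, alpha, rng)).max()
    return total / n_rep

m = mean_two_sided_max(n_steps=200, alpha=0.3, n_rep=2000, rng=rng)
```

Pairing consecutive steps (the Motzkin-path reduction) makes each two-step block a +2/0/−2 move with variance 4·2α(1−α), which is what governs the √n growth of the estimated maximum.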
124.
Bernard Sébastien, David Hoffman, Clémence Rigaux, Franck Pellissier, Jérôme Msihid, Pharmaceutical Statistics, 2016, 15(6):450-458
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we concluded that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates that are quite robust with respect to control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach, but we believe that, although both can be recommended in practice, the model averaging approach may be more appealing because of some deficiencies of the model selection approach pointed out in the literature. We think a model averaging or model selection approach should be systematically considered when conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
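The general flavor of frequentist model averaging over the three candidate families can be sketched with AIC-based Akaike weights. This weighting scheme is an assumption for illustration, not necessarily the one used in the article, and all data and parameter values below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# hypothetical concentration-dQTc data: true Emax relationship plus noise
conc = np.linspace(0, 10, 60)
true = 8.0 * conc / (2.0 + conc)               # Emax = 8 ms, EC50 = 2 (assumed)
dQTc = true + rng.normal(0, 2.0, conc.size)

models = {
    "linear":      (lambda c, a, b: a + b * c, [0.0, 1.0]),
    "exponential": (lambda c, a, b, k: a + b * (1 - np.exp(-k * c)), [0.0, 8.0, 0.5]),
    "emax":        (lambda c, a, emax, ec50: a + emax * c / (ec50 + c), [0.0, 8.0, 2.0]),
}

fits, aics = {}, {}
n = conc.size
for name, (f, p0) in models.items():
    p, _ = curve_fit(f, conc, dQTc, p0=p0, maxfev=10000)
    rss = np.sum((dQTc - f(conc, *p)) ** 2)
    k = len(p) + 1                              # parameters + error variance
    aics[name] = n * np.log(rss / n) + 2 * k
    fits[name] = f(conc, *p)

# Akaike weights and model-averaged prediction at each concentration
a = np.array(list(aics.values()))
w = np.exp(-0.5 * (a - a.min()))
w /= w.sum()
avg_pred = sum(wi * fits[name] for wi, name in zip(w, fits))
```

The averaged prediction borrows from all three fits in proportion to their relative AIC support, rather than committing to a single selected model.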
125.
A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo‐observations
Pseudo‐values have proven very useful in censored data analysis in complex settings such as multi‐state models. The approach was originally suggested by Andersen et al. (Biometrika, 90, 2003, 335), who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally by Graw et al. (Lifetime Data Anal., 15, 2009, 241), who derived some key results based on a second‐order von Mises expansion. However, results concerning the large sample properties of estimates based on regression models for pseudo‐values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U‐statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error estimates will typically be too large, although in many practical applications the difference will be of minor importance. We show how to correctly estimate the variability of the estimator, and study this further in simulation studies.
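The jackknife construction of pseudo-observations for a survival probability is simple to write down. A minimal sketch on simulated data, using the Kaplan-Meier estimator (all data-generating settings below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
T = rng.exponential(1.0, n)                # latent event times
C = rng.exponential(3.0, n)                # independent censoring
time = np.minimum(T, C)
event = (T <= C).astype(float)

def km_at(times, events, t0):
    """Kaplan-Meier survival estimate S(t0)."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    at_risk = np.arange(len(t), 0, -1)
    factors = np.where(t <= t0, 1.0 - d / at_risk, 1.0)
    return np.prod(factors)

t0 = 1.0
S_full = km_at(time, event, t0)

# i-th pseudo-observation: n * S_hat - (n - 1) * S_hat^{(-i)}
pseudo = np.empty(n)
mask = np.ones(n, dtype=bool)
for i in range(n):
    mask[i] = False
    pseudo[i] = n * S_full - (n - 1) * km_at(time[mask], event[mask], t0)
    mask[i] = True
```

Each `pseudo[i]` then serves as an uncensored response in a generalized linear model; the paper's point concerns the U-statistic structure of the resulting estimating function, which this sketch does not reproduce.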
126.
One particular recurrent events data scenario involves patients experiencing events according to a common intensity rate until a treatment is applied. The treatment might be effective for only a limited time, so the intensity rate would be expected to change abruptly when the effect of the treatment wears off. In particular, we allow post-treatment models for the intensity rate that are at first decreasing and then become increasing (and vice versa). Two estimators of the location of this change are proposed.
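To illustrate abrupt-change estimation in an intensity rate, the sketch below uses a simpler piecewise-constant intensity with a single jump, estimated by profile Poisson likelihood over a grid. The article's decreasing-then-increasing shapes and its two proposed estimators are not reproduced here; all settings are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical post-treatment intensity: rate 2.0 until tau = 5, then 0.5
tau_true, rate1, rate2, T_end = 5.0, 2.0, 0.5, 20.0
n_subj = 50

def sim_events(rng):
    """Simulate one subject's event times on [0, T_end]."""
    t1 = np.cumsum(rng.exponential(1 / rate1, 100))
    t2 = tau_true + np.cumsum(rng.exponential(1 / rate2, 100))
    return np.concatenate([t1[t1 < tau_true], t2[t2 < T_end]])

events = np.concatenate([sim_events(rng) for _ in range(n_subj)])

def profile_loglik(tau):
    """Poisson profile log-likelihood of a single rate change at tau."""
    n1 = np.sum(events < tau)
    n2 = events.size - n1
    l1 = n1 / (n_subj * tau)
    l2 = n2 / (n_subj * (T_end - tau))
    return (n1 * np.log(l1) + n2 * np.log(l2)
            - n_subj * (l1 * tau + l2 * (T_end - tau)))

grid = np.linspace(0.5, T_end - 0.5, 381)
tau_hat = grid[np.argmax([profile_loglik(t) for t in grid])]
```

With 50 subjects pooled, the profile likelihood is sharply peaked and the estimated change point lands close to the true value of 5.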
127.
After initiation of treatment, HIV viral load has multiphasic changes, which indicates that the viral decay rate is a time-varying process. Mixed-effects models with different time-varying decay rate functions have been proposed in the literature. However, there are two unresolved critical issues: (i) it is not clear which model is more appropriate for practical use, and (ii) the model random errors are commonly assumed to follow a normal distribution, which may be unrealistic and can obscure important features of within- and among-subject variations. Because asymmetry of HIV viral load data is still noticeable even after transformation, it is important to use a more general distribution family that allows the unrealistic normal assumption to be relaxed. We developed skew-elliptical (SE) Bayesian mixed-effects models by taking the model random errors to have an SE distribution. We compared the performance of five SE models that have different time-varying decay rate functions. For each model, we also contrasted performance under different random error assumptions: normal, Student-t, skew-normal, or skew-t distribution. Two AIDS clinical trial datasets were used to illustrate the proposed models and methods. The results indicate that the model with a time-varying viral decay rate that has two exponential components is preferred. Among the four distribution assumptions, the skew-t and skew-normal models provided a better fit to the data than the normal or Student-t models, suggesting that it is important to assume a model with a skewed distribution in order to achieve reasonable results when the data exhibit skewness.
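The preferred structure, a two-exponential decay curve with skewed measurement error, is easy to visualize by simulation. Everything below (parameters, sampling times, the skewness level) is a hypothetical illustration using scipy.stats.skewnorm, not fitted values from the article:

```python
import numpy as np
from scipy.stats import skewnorm, skew

rng = np.random.default_rng(5)

# two-exponential viral decay on the log10 scale, hypothetical parameters
t = np.linspace(0, 56, 15)                           # days since treatment start
log10_V = np.log10(np.exp(12 - 0.4 * t) + np.exp(7 - 0.03 * t))

# skew-normal measurement error; shape parameter a controls asymmetry
a = 4.0
err = skewnorm.rvs(a, loc=0, scale=0.3, size=(200, t.size), random_state=rng)
err -= skewnorm.mean(a, loc=0, scale=0.3)            # center errors at zero
data = log10_V + err                                 # 200 simulated subjects

sample_skew = skew(err.ravel())                      # clearly positive skewness
```

A normal-error model applied to such data would absorb the asymmetry into its variance estimate; the positive sample skewness is the feature the skew-normal and skew-t error models are designed to capture.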
128.
The aim of this study is to compare the performance of cointegration tests commonly used in the literature in terms of empirical power and type I error probability across various sample sizes. The study found that some tests are not appropriate for testing cointegration on either criterion. In the simulation study, the λmax test was found to be the most appropriate test for all values of ρ and all sample sizes considered.
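Although the λmax (Johansen) test requires more machinery, the simulation design behind this kind of power/size comparison can be sketched with a residual-based Engle-Granger statistic instead. The critical value −3.37 below is an approximate 5% value, and all data-generating settings are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

def eg_tstat(y, x):
    """Engle-Granger step: OLS residuals, then a Dickey-Fuller regression."""
    b = np.polyfit(x, y, 1)
    e = y - np.polyval(b, x)
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)
    resid = de - rho * lag
    se = np.sqrt((resid @ resid) / (len(de) - 1) / (lag @ lag))
    return rho / se

def rejection_rate(rho_err, n=100, n_rep=500, crit=-3.37):
    """Share of simulated pairs whose statistic falls below crit.
    rho_err = 1 gives no cointegration (size); rho_err < 1 gives power."""
    hits = 0
    for _ in range(n_rep):
        x = np.cumsum(rng.normal(size=n))        # random-walk regressor
        u = np.zeros(n)                          # equilibrium error process
        for t in range(1, n):
            u[t] = rho_err * u[t - 1] + rng.normal()
        y = 2.0 * x + u
        hits += eg_tstat(y, x) < crit
    return hits / n_rep

power = rejection_rate(rho_err=0.5)    # strongly cointegrated alternative
size = rejection_rate(rho_err=1.0)     # unit-root error: null is true
```

Repeating this over a grid of ρ values and sample sizes yields exactly the kind of power/size table the abstract describes, with `eg_tstat` swapped out for each competing test.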
129.
Two new nonparametric common principal component model selection procedures are proposed, based on bootstrap distributions of the vector correlations of all combinations of the eigenvectors from two groups. The performance of these methods is compared in a simulation study to the two parametric methods previously suggested by Flury in 1988, as well as to modified versions of two nonparametric methods proposed by Klingenberg in 1996 and by Klingenberg and McIntyre in 1998. The proposed bootstrap vector correlation distribution (BVD) method is shown to outperform all of the existing methods in most of the simulated situations considered.
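The core quantity, the bootstrap distribution of the vector correlation between group eigenvectors, can be sketched for the leading component as follows. Data and dimensions are hypothetical, and the actual BVD procedure considers all eigenvector combinations, not just the first:

```python
import numpy as np

rng = np.random.default_rng(7)

def leading_eigvec(X):
    """First principal axis of the sample covariance of X."""
    w, V = np.linalg.eigh(np.cov(X, rowvar=False))
    return V[:, np.argmax(w)]

# two hypothetical groups sharing the same principal axes
A = rng.multivariate_normal([0, 0, 0], np.diag([5.0, 2.0, 1.0]), 150)
B = rng.multivariate_normal([0, 0, 0], np.diag([8.0, 2.0, 0.5]), 150)

def boot_vector_corr(A, B, n_boot, rng):
    """Bootstrap distribution of |cos angle| between leading eigenvectors."""
    out = np.empty(n_boot)
    for b in range(n_boot):
        Ab = A[rng.integers(0, len(A), len(A))]    # resample group A
        Bb = B[rng.integers(0, len(B), len(B))]    # resample group B
        out[b] = abs(leading_eigvec(Ab) @ leading_eigvec(Bb))
    return out

vc = boot_vector_corr(A, B, 500, rng)
```

When the common principal component model holds, as in this simulation, the bootstrap distribution of the vector correlation concentrates near 1; departures from the model would pull the distribution away from 1.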
130.
Dorin Drignei, Communications in Statistics - Simulation and Computation, 2016, 45(9):3281-3293
Computer models with functional output are omnipresent throughout science and engineering. Most often the computer model is treated as a black box, and information about the underlying mathematical model is not exploited in statistical analyses. Consequently, general-purpose bases such as wavelets are typically used to describe the main characteristics of the functional output. In this article we advocate using information about the underlying mathematical model to choose a better basis for the functional output. To validate this choice, a simulation study is presented in the context of uncertainty analysis for a computer model from inverse Sturm-Liouville problems.
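The point about basis choice can be illustrated by comparing a model-informed basis (sine eigenfunctions, natural for Sturm-Liouville problems) against a generic basis of the same size. Polynomials stand in here for a general-purpose basis, since a full wavelet comparison is beyond a sketch, and the functional output below is hypothetical:

```python
import numpy as np

x = np.linspace(0, 1, 200)
# hypothetical functional output: a few Sturm-Liouville eigenmodes (sines)
f = 1.0 * np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x) + 0.2 * np.sin(5 * np.pi * x)

def recon_error(basis, f):
    """Relative L2 error of the least-squares projection of f onto the basis."""
    coef, *_ = np.linalg.lstsq(basis, f, rcond=None)
    return np.linalg.norm(f - basis @ coef) / np.linalg.norm(f)

k = 5
informed = np.column_stack([np.sin((j + 1) * np.pi * x) for j in range(k)])
generic = np.column_stack([x ** j for j in range(k)])   # polynomial stand-in

err_informed = recon_error(informed, f)
err_generic = recon_error(generic, f)
```

With the same number of basis functions, the model-informed basis captures the output essentially exactly, while the generic basis leaves a visible residual; this gap is the motivation for exploiting the underlying mathematical model.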