Full-Text Access Type
Paid full text | 2,393 articles |
Free | 57 articles |
Domestic free | 13 articles |
Subject Classification
Management Science | 114 articles |
Ethnology | 5 articles |
Demography | 20 articles |
Book Series and Collected Works | 70 articles |
Theory and Methodology | 21 articles |
General | 705 articles |
Sociology | 25 articles |
Statistics | 1,503 articles |
Publication Year
2024 | 1 article |
2023 | 8 articles |
2022 | 9 articles |
2021 | 13 articles |
2020 | 34 articles |
2019 | 59 articles |
2018 | 73 articles |
2017 | 117 articles |
2016 | 53 articles |
2015 | 52 articles |
2014 | 94 articles |
2013 | 621 articles |
2012 | 187 articles |
2011 | 81 articles |
2010 | 103 articles |
2009 | 75 articles |
2008 | 95 articles |
2007 | 90 articles |
2006 | 95 articles |
2005 | 84 articles |
2004 | 76 articles |
2003 | 70 articles |
2002 | 52 articles |
2001 | 58 articles |
2000 | 45 articles |
1999 | 38 articles |
1998 | 21 articles |
1997 | 26 articles |
1996 | 26 articles |
1995 | 19 articles |
1994 | 9 articles |
1993 | 15 articles |
1992 | 12 articles |
1991 | 7 articles |
1990 | 6 articles |
1989 | 8 articles |
1988 | 9 articles |
1987 | 4 articles |
1986 | 2 articles |
1985 | 3 articles |
1984 | 1 article |
1983 | 2 articles |
1982 | 1 article |
1981 | 1 article |
1980 | 3 articles |
1979 | 3 articles |
1978 | 2 articles |
2,463 results in total (search time: 687 ms)
141.
Level-set-based wavefront expansion algorithms, such as the Fast Marching Method (FMM) and the Group Marching Method (GMM), are widely used as an effective class of methods for computing wavefront traveltimes in complex media. These algorithms compute traveltimes from finite-difference schemes for the eikonal equation, so their accuracy degrades when the discretization cells of the medium are large. To improve the accuracy of the computed traveltimes, the traveltime at an arbitrary point inside a rectangular-box cell is expressed as an interpolation of the known traveltimes at the cell's nodes, and the traveltimes at unknown nodes are then determined from Fermat's principle. Combining this with the highly efficient GMM yields an effective algorithm for computing wavefront traveltimes in complex three-dimensional media. Numerical experiments show that, compared with the original GMM, the proposed algorithm greatly improves the accuracy of the computed traveltimes while remaining highly stable and adaptable.
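To make the wavefront-expansion idea concrete, here is a minimal first-order Fast Marching solver for the 2-D eikonal equation |∇T| = s(x) on a regular grid. This is a generic textbook sketch, not the article's interpolation-enhanced 3-D GMM; the grid spacing `h` and the slowness field are illustrative assumptions.

```python
import heapq
import math

def fast_marching(slowness, h=1.0, source=(0, 0)):
    """First-order Fast Marching solver for |grad T| = slowness
    on a regular 2-D grid, Dijkstra-like front expansion."""
    ny, nx = len(slowness), len(slowness[0])
    INF = math.inf
    T = [[INF] * nx for _ in range(ny)]
    accepted = [[False] * nx for _ in range(ny)]
    si, sj = source
    T[si][sj] = 0.0
    heap = [(0.0, si, sj)]
    while heap:
        t, i, j = heapq.heappop(heap)
        if accepted[i][j]:
            continue                      # stale heap entry
        accepted[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni][nj]:
                # Upwind traveltime along each grid axis
                a = min(T[ni - 1][nj] if ni > 0 else INF,
                        T[ni + 1][nj] if ni < ny - 1 else INF)
                b = min(T[ni][nj - 1] if nj > 0 else INF,
                        T[ni][nj + 1] if nj < nx - 1 else INF)
                hs = h * slowness[ni][nj]
                if abs(a - b) >= hs:      # one-sided update
                    t_new = min(a, b) + hs
                else:                     # two-sided quadratic update
                    t_new = 0.5 * (a + b + math.sqrt(2 * hs * hs - (a - b) ** 2))
                if t_new < T[ni][nj]:
                    T[ni][nj] = t_new
                    heapq.heappush(heap, (t_new, ni, nj))
    return T
```

For uniform slowness the solver is exact along grid axes but, being first order, overestimates oblique traveltimes on coarse grids, which is precisely the accuracy loss the article's interpolation step targets.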
142.
Alberto Luceño, Communications in Statistics - Simulation and Computation, 2013, 42(1): 235-245
A well-known process capability index is slightly modified in this article to provide a new measure of process capability that takes account of the process location and variability, and for which a point estimator and confidence intervals exist that are insensitive to departures from the assumption of normal variability. Two examples of applications based on real data are presented.
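For orientation, a minimal sketch of the classical C_pk index, which already combines process location and spread; the article's modified index and its distribution-robust intervals are not reproduced here, and the specification limits below are illustrative assumptions.

```python
import statistics

def cpk(data, lsl, usl):
    """Classical C_pk = min(USL - mean, mean - LSL) / (3 * s).
    Penalizes both off-center location and large variability.
    (Only the textbook index; not Luceño's modification.)"""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)        # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)
```

A value above roughly 1.33 is conventionally read as a capable process; the article's contribution is an index whose interval estimates stay valid when the data are not normal.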
143.
Ghazi Shukur, Communications in Statistics - Simulation and Computation, 2013, 42(2): 419-448
Using Monte Carlo methods, the properties of systemwise generalisations of the Breusch-Godfrey test for autocorrelated errors are studied in situations where the error terms follow either normal or non-normal distributions and are generated by either AR(1) or MA(1) processes. Edgerton and Shukur (1999) studied the properties of the test with normally distributed error terms following an AR(1) process. When the errors follow a non-normal distribution, the performance of the tests deteriorates, especially when the tails are very heavy. Performance improves, approaching the normal-error case, as the error distribution becomes less heavy-tailed.
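The single-equation Breusch-Godfrey LM statistic underlying these systemwise generalisations can be hand-coded in a few lines: regress OLS residuals on the original regressors plus lagged residuals, and use n times the auxiliary R². This sketch shows only the univariate version under assumed simulation settings, not the systemwise tests studied in the article.

```python
import numpy as np

def breusch_godfrey(y, X, p=1):
    """LM form of the Breusch-Godfrey test for order-p serial
    correlation in regression errors: n * R^2 of the auxiliary
    regression, asymptotically chi-squared with p d.o.f."""
    n = len(y)
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals
    # Lagged residuals, zero-padded at the start
    lags = np.column_stack([np.concatenate([np.zeros(k), e[:-k]])
                            for k in range(1, p + 1)])
    Z = np.hstack([X, lags])
    e_hat = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
    r2 = 1 - ((e - e_hat) ** 2).sum() / ((e - e.mean()) ** 2).sum()
    return n * r2
```

With strongly AR(1) errors the statistic is far above the chi-squared critical value, while with i.i.d. errors it stays small, mirroring the size/power comparison the Monte Carlo study makes.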
144.
145.
Ole Klungsøyr, Joe Sexton, Inger Sandanger, Jan F. Nygård, Journal of Applied Statistics, 2013, 40(4): 843-861
A substantial degree of uncertainty exists surrounding the reconstruction of events based on memory recall. This form of measurement error affects the performance of structured interviews such as the Composite International Diagnostic Interview (CIDI), an important tool to assess mental health in the community. Measurement error probably explains the discrepancy in estimates between longitudinal studies with repeated assessments (the gold-standard), yielding approximately constant rates of depression, versus cross-sectional studies which often find increasing rates closer in time to the interview. Repeated assessments of current status (or recent history) are more reliable than reconstruction of a person's psychiatric history based on a single interview. In this paper, we demonstrate a method of estimating a time-varying measurement error distribution in the age of onset of an initial depressive episode, as diagnosed by the CIDI, based on an assumption regarding age-specific incidence rates. High-dimensional non-parametric estimation is achieved by the EM-algorithm with smoothing. The method is applied to data from a Norwegian mental health survey in 2000. The measurement error distribution changes dramatically from 1980 to 2000, with increasing variance and greater bias further away in time from the interview. Some influence of the measurement error on already published results is found.
146.
Journal of Statistical Computation and Simulation, 2012, 82(12): 1939-1969
Uncertainty and sensitivity analysis is an essential ingredient of model development and applications. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated based on a relatively large sample to measure the importance of parameters in their contributions to uncertainties in model outputs. To statistically compare their importance, it is necessary that uncertainty and sensitivity analysis techniques provide standard errors of estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated based on the delta method were compared with those estimated based on 20 sample replicates. We found that the delta method can provide a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also proposed a method to determine a minimum sample size to achieve the desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make the FAST analysis computationally much more efficient for complex models.
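The delta method referred to here propagates the covariance of estimates through a first-order Taylor expansion of a nonlinear function. Sensitivity indices are ratios of variance components, so the generic ratio case gives the flavor; the function below is a sketch of that generic case, not the paper's FAST-specific estimator.

```python
import numpy as np

def delta_se_ratio(a, b):
    """Delta-method standard error of r = mean(a) / mean(b) from
    paired observations: se^2 = grad' Cov(means) grad, where grad
    is the gradient of the ratio at the sample means."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    ma, mb = a.mean(), b.mean()
    cov = np.cov(a, b)                          # 2x2 sample covariance
    grad = np.array([1.0 / mb, -ma / mb ** 2])  # d(ma/mb)/d(ma, mb)
    var = grad @ cov @ grad / n                 # covariance of the means
    return float(np.sqrt(var))
```

Replacing the replicate-based standard errors with such an analytic approximation is what lets the paper cut the FAST sample budget for complex models.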
147.
Clifford H. Spiegelman, The American Statistician, 2013, 67(3): 245-248
Modern exploratory data analysis produces models that are not based on physical theory but that are consistent with pictures of the data. When both X and Y have error, this can be risky, because important features are hidden. Two examples are given showing that systematic model departures and heteroscedasticity may not be detectable with standard regression diagnostics.
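A quick simulation of the risk described: when X carries measurement error, OLS attenuates the slope, yet ordinary residual diagnostics show nothing unusual. All parameter values below are illustrative assumptions, not taken from the article's examples.

```python
import numpy as np

# y depends on the true covariate x, but we only observe w = x + noise.
rng = np.random.default_rng(42)
n = 10_000
x = rng.normal(0.0, 1.0, n)                # true covariate
y = 2.0 * x + rng.normal(0.0, 0.5, n)      # true slope = 2
w = x + rng.normal(0.0, 1.0, n)            # error-prone measurement of x

# OLS slope of y on w is attenuated toward zero by the reliability
# ratio var(x) / (var(x) + var(meas. error)) = 1/2, so E[slope] ~ 1.
slope = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Because (w, y) are jointly normal, the residuals of this fit are
# homoscedastic and uncorrelated with w: standard residual plots look
# perfectly clean even though the fitted slope is badly biased.
```

This is the paper's point in miniature: the diagnostics are blind to the problem because the misfit is absorbed into a well-behaved error term.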
148.
This article considers the notion of the non-diagonal-type estimator (NDTE) under the prediction error sum of squares (PRESS) criterion. First, the optimal NDTE in the PRESS sense is derived theoretically and applied to the cosmetics sales data. Second, we make a further study to extend the NDTE to the general case of the covariance matrix of the model and then give a Bayesian explanation for this extension. Third, two remarks concerned with some potential shortcomings of the NDTE are presented and an alternative solution is provided and illustrated by means of simulations.
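The PRESS criterion itself can be computed for OLS without n refits, using the leverage identity e_(i) = e_i / (1 - h_ii) for leave-one-out residuals. A minimal sketch of that criterion only, not of the article's NDTE:

```python
import numpy as np

def press(X, y):
    """Prediction error sum of squares for OLS without refitting:
    PRESS = sum( (e_i / (1 - h_ii))^2 ), with h_ii the diagonal of
    the hat matrix H = X (X'X)^{-1} X'."""
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat (projection) matrix
    e = y - H @ y                           # ordinary residuals
    return float(np.sum((e / (1 - np.diag(H))) ** 2))
```

Estimators are then compared by their PRESS values, smaller being better at out-of-sample prediction, which is the sense in which the article's optimal NDTE is derived.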
149.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2014, 43(1): 72-89
In this article, we discuss how to identify longitudinal biomarkers in survival analysis under the accelerated failure time model, and also discuss the effectiveness of biomarkers under that model. Two methods proposed by Schemper et al. are deployed to measure the efficacy of biomarkers. We use simulations to explore how several factors influence the power of a score test to detect the association between a longitudinal biomarker and survival time: the functional form of the random effects from the longitudinal biomarkers, the number of individuals, and the number of time points per individual. The simulations are also used to explore how the number of individuals and the number of time points per individual influence the effectiveness of the biomarker in predicting survival at a given endpoint under the accelerated failure time model. We illustrate our methods using the prothrombin index as a predictor of survival in liver cirrhosis patients.
150.
Communications in Statistics - Theory and Methods, 2013, 42(6): 943-960
We study the estimation of a hazard rate function based on censored data by a nonlinear wavelet method. We provide an asymptotic formula for the mean integrated squared error (MISE) of nonlinear wavelet-based hazard rate estimators under randomly censored data. We show that this MISE formula has the same expansion as that of analogous kernel estimators even when the underlying hazard rate function and censoring distribution function are only piecewise smooth, a feature not available for the kernel estimators. In addition, we establish the asymptotic normality of the nonlinear wavelet estimator.
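The "nonlinear" in nonlinear wavelet estimation refers to thresholding the empirical detail coefficients, which is what lets the estimator adapt to piecewise-smooth targets. A one-level Haar sketch of that core thresholding step only; the hazard-rate setting with censoring, and the multilevel transform, are not reproduced here.

```python
import numpy as np

def haar_threshold(signal, thr):
    """One-level Haar decomposition of an even-length signal,
    soft-threshold the detail coefficients, then reconstruct.
    Small (noise-level) details are killed; jumps survive."""
    x = np.asarray(signal, float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft threshold
    out = np.empty_like(x)                 # inverse Haar transform
    out[0::2] = (s + d) / np.sqrt(2)
    out[1::2] = (s - d) / np.sqrt(2)
    return out
```

With threshold zero the transform is a perfect reconstruction; with a positive threshold, locally constant stretches pass through unchanged while small fluctuations are smoothed away, the mechanism behind the adaptivity claimed in the abstract.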