Full-text access type
Paid full text | 599 articles |
Free | 8 articles |
Subject classification
Management | 18 articles |
Ethnology | 2 articles |
Demography | 9 articles |
Series and collected works | 14 articles |
Theory and methodology | 6 articles |
General | 43 articles |
Sociology | 19 articles |
Statistics | 496 articles |
Publication year
2023 | 2 articles |
2022 | 1 article |
2021 | 4 articles |
2020 | 6 articles |
2019 | 17 articles |
2018 | 19 articles |
2017 | 29 articles |
2016 | 9 articles |
2015 | 12 articles |
2014 | 35 articles |
2013 | 203 articles |
2012 | 47 articles |
2011 | 21 articles |
2010 | 15 articles |
2009 | 16 articles |
2008 | 26 articles |
2007 | 25 articles |
2006 | 16 articles |
2005 | 12 articles |
2004 | 10 articles |
2003 | 3 articles |
2002 | 4 articles |
2001 | 8 articles |
2000 | 5 articles |
1999 | 9 articles |
1998 | 5 articles |
1997 | 6 articles |
1996 | 4 articles |
1995 | 2 articles |
1993 | 3 articles |
1992 | 4 articles |
1991 | 1 article |
1990 | 1 article |
1989 | 3 articles |
1988 | 4 articles |
1985 | 2 articles |
1984 | 3 articles |
1983 | 2 articles |
1982 | 3 articles |
1980 | 1 article |
1979 | 2 articles |
1978 | 5 articles |
1977 | 1 article |
1975 | 1 article |
Sort order: 607 results found; search took 63 ms
1.
In the era of the digital economy, social networks, as a key carrier of the digital platform economy, have attracted wide attention from scholars at home and abroad. In the big-data setting, the commercial application value of social networks is enormous, but because these networks are unprecedentedly large, traditional network-analysis methods are no longer applicable: their computational cost is too high. Obtaining a sample network through a network-sampling algorithm and then inferring properties of the whole network saves computational resources, so the quality of the sampling algorithm directly determines the accuracy of the conclusions of social-network analysis. Existing social-network sampling algorithms have several shortcomings: they ignore the internal topological structure of the network, easily become trapped in local subnetworks, and have low sampling efficiency. To remedy these shortcomings, this paper exploits the community structure of big-data social networks and proposes a clustered random-walk sampling algorithm. The method first partitions the nodes of the original network into communities using a community-clustering algorithm, yielding multiple community subnetworks, and then performs random-walk sampling within each community to obtain the sample network. Both numerical simulations and a case application show that the clustered random-walk sampling algorithm overcomes the drawbacks of traditional network-sampling algorithms and preserves the structural characteristics of the original network well while reducing network size. Moreover, the algorithm can be run in parallel, which effectively improves sampling efficiency, so it is of substantial practical significance for sampling large-scale social networks in the big-data setting.
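The two-stage scheme described above (partition the network into communities, then run a random walk inside each community) can be sketched as follows. The community partition is taken as given here, whereas the paper obtains it with a community-clustering algorithm; the toy graph, step counts, and function names are illustrative assumptions.

```python
import random
from collections import defaultdict

def random_walk_sample(adj, start, steps, rng):
    """Collect the set of nodes visited by a simple random walk."""
    visited = {start}
    node = start
    for _ in range(steps):
        neighbours = adj[node]
        if not neighbours:
            break
        node = rng.choice(neighbours)
        visited.add(node)
    return visited

def clustered_random_walk_sample(adj, communities, steps_per_community, seed=0):
    """Run one random walk inside each community and pool the visited nodes."""
    rng = random.Random(seed)
    sample = set()
    for nodes in communities:
        node_set = set(nodes)
        # restrict the walk to edges that stay inside this community
        local_adj = {u: [v for v in adj[u] if v in node_set] for u in nodes}
        start = rng.choice(nodes)
        sample |= random_walk_sample(local_adj, start, steps_per_community, rng)
    return sample

# toy network: two dense communities joined by one bridge edge (5, 6)
adj = defaultdict(list)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5), (2, 4),
         (5, 6), (6, 7), (6, 8), (7, 8), (8, 9), (9, 10), (7, 9), (8, 10)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
communities = [[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
sample = clustered_random_walk_sample(adj, communities, steps_per_community=20)
```

Because each community gets its own walk, no community is left out of the sample, and the per-community walks are independent, which is what makes the parallel execution mentioned above possible.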
2.
Michael P. Fay, Ji-Hyun Lee. Journal of the Royal Statistical Society, Series A (Statistics in Society), 2006, 169(1): 81-96
Summary. We detail a general method for measuring agreement between two statistics. An application is two ratios of directly standardized rates which differ only by the choice of the standard. If the statistics have a high value for the coefficient of agreement then the expected squared difference between the statistics is small relative to the variance of the average of the two statistics, and inferences vary little by changing statistics. The estimation of a coefficient of agreement between two statistics is not straightforward because there is only one pair of observed values, each statistic calculated from the data. We introduce estimators of the coefficient of agreement for two statistics and discuss their use, especially as applied to functions of standardized rates.
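The paper defines its coefficient of agreement precisely; as a rough, purely illustrative numerical sketch (not the authors' estimator), one can bootstrap the two ingredients the summary names, the expected squared difference between the statistics and the variance of their average. Here the two statistics are the sample mean and median rather than standardized rates, and all names are assumptions of this sketch.

```python
import random
import statistics

def bootstrap_agreement_quantities(data, stat1, stat2, n_boot=2000, seed=1):
    """Bootstrap estimates of E[(T1 - T2)^2] and Var((T1 + T2)/2) for two
    statistics computed on the same sample."""
    rng = random.Random(seed)
    diffs_sq, avgs = [], []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        t1, t2 = stat1(resample), stat2(resample)
        diffs_sq.append((t1 - t2) ** 2)
        avgs.append((t1 + t2) / 2)
    return statistics.fmean(diffs_sq), statistics.variance(avgs)

rng = random.Random(7)
data = [rng.gauss(0.0, 1.0) for _ in range(200)]
msd, var_avg = bootstrap_agreement_quantities(data, statistics.fmean,
                                              statistics.median)
```

A small ratio `msd / var_avg` corresponds to the high-agreement situation the summary describes: changing statistics barely changes the inference.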
3.
John D. Emerson, David C. Hoaglin, Frederick Mosteller. Statistical Methods and Applications, 1993, 2(3): 269-290
Summary. Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportional to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that available from weighting by ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results.
This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
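For reference, the classical DerSimonian-Laird procedure the paper critiques can be sketched as follows. The random-effects weights are 1/(v_i + tau^2), so they depend on the estimated within-study variances v_i, which is exactly the dependence on the observed risk differences that motivates the modification; the modified unconditional-variance weighting itself is not reproduced here, and the trial data are made up.

```python
import math

def dersimonian_laird(events_t, n_t, events_c, n_c):
    """Classical DerSimonian-Laird random-effects pooling of risk
    differences from k 2x2 tables."""
    k = len(n_t)
    d, v = [], []
    for et, nt, ec, nc in zip(events_t, n_t, events_c, n_c):
        pt, pc = et / nt, ec / nc
        d.append(pt - pc)                                  # risk difference
        v.append(pt * (1 - pt) / nt + pc * (1 - pc) / nc)  # within-study var
    w = [1.0 / vi for vi in v]                             # fixed-effect weights
    sw = sum(w)
    d_fixed = sum(wi * di for wi, di in zip(w, d)) / sw
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, d))
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))
    w_re = [1.0 / (vi + tau2) for vi in v]                 # random-effects weights
    d_re = sum(wi * di for wi, di in zip(w_re, d)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return d_re, tau2, se

# three hypothetical trials: event counts and arm sizes are illustrative
d_re, tau2, se = dersimonian_laird([12, 30, 8], [100, 250, 60],
                                   [20, 35, 15], [100, 250, 60])
```

The unweighted mean and the sample-size-weighted mean mentioned in the summary avoid this weight/outcome dependence entirely, at the cost of efficiency when heterogeneity is low.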
4.
Generalized additive models for location, scale and shape (total citations: 10; self-citations: 0; citations by others: 10)
R. A. Rigby, D. M. Stasinopoulos. Journal of the Royal Statistical Society, Series C (Applied Statistics), 2005, 54(3): 507-554
Summary. A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton-Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
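A minimal sketch of the GAMLSS idea: model the scale of y, not only its mean, as a function of an explanatory variable, and fit both predictors by maximum likelihood. This illustrative example uses a normal response with linear predictors and a generic optimizer, not the Newton-Raphson/Fisher-scoring and backfitting machinery of the paper; all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# simulate data where both location and scale depend on x
rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, n)
sigma_true = np.exp(0.2 + 0.8 * x)          # log-scale is linear in x
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma_true)

def negloglik(theta):
    """Negative log-likelihood of a normal model with mu = b0 + b1*x
    and log(sigma) = g0 + g1*x (additive constants dropped)."""
    b0, b1, g0, g1 = theta
    mu = b0 + b1 * x
    sigma = np.exp(g0 + g1 * x)
    return np.sum(np.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2)

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0, 0.0], method="BFGS")
b0, b1, g0, g1 = fit.x
```

Fitting the scale jointly with the mean is what distinguishes this from an ordinary regression; ignoring the heteroscedasticity would leave the mean estimate consistent but the inference wrong.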
5.
Journal of Policy Modeling, 2014, 36(6): 1048-1065
The stability of inflation differentials is an important condition for the smooth working of a currency area, such as the European Economic and Monetary Union. In the presence of stability, changes in national inflation rates, while holding Euro area inflation fixed contemporaneously, should be only transitory. If this is the case, the rate of inflation of the whole area can also be interpreted as a predictor, at least in the long-run, of the different national inflation rates. However, in this paper we show that this condition is satisfied only for a small number of countries, including France and Italy. Better convergence results for inflation differentials are, instead, found for the USA. Some policy implications are drawn for the Eurozone.
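Stability of an inflation differential in the sense above means that deviations are transitory, i.e., the national-minus-area inflation series is mean-reverting. A minimal stand-in for the paper's econometrics (the actual testing procedure is not specified in this summary) estimates the AR(1) persistence of a simulated differential and checks that it is well below one; the data-generating values are assumptions.

```python
import random

def ar1_coefficient(series):
    """OLS estimate of phi in x_t = phi * x_{t-1} + e_t (no intercept)."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

# simulated national-minus-area inflation differential: mean-reverting AR(1)
rng = random.Random(42)
phi_true = 0.5
diff = [0.0]
for _ in range(499):
    diff.append(phi_true * diff[-1] + rng.gauss(0.0, 1.0))
phi_hat = ar1_coefficient(diff)  # persistence well below 1: shocks die out
```

A phi_hat close to one would instead signal a near-unit-root differential, i.e., persistent national deviations of the kind the paper finds for most Eurozone countries.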
6.
The components of a reliability system subjected to a common random environment usually have dependent lifetimes. This paper studies the stochastic properties of such a system when the component lifetimes follow multivariate frailty models and multivariate mixed proportional reversed hazard rate (PRHR) models, respectively. Through stochastic comparisons, we shed new light on how the random environment affects the number of working components of a reliability system and on assessing the performance of a k-out-of-n system.
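The dependence induced by a common random environment can be illustrated with a multivariate frailty sketch: conditional on a gamma-distributed environment factor, component lifetimes are i.i.d. exponential, but unconditionally they are positively dependent. The gamma frailty, the exponential baseline, and the parameter values are illustrative assumptions, not the paper's models.

```python
import random

def simulate_shared_frailty(n_components, n_systems, k, shape=5.0, seed=3):
    """Conditional on a mean-one gamma environment factor z, component
    lifetimes are i.i.d. exponential with rate z; unconditionally the
    lifetimes within a system are positively dependent."""
    rng = random.Random(seed)
    pairs, system_lifetimes = [], []
    for _ in range(n_systems):
        z = rng.gammavariate(shape, 1.0 / shape)
        lifetimes = [rng.expovariate(z) for _ in range(n_components)]
        pairs.append((lifetimes[0], lifetimes[1]))
        # a k-out-of-n system works while at least k components work,
        # so it fails at the (n - k + 1)-th smallest component lifetime
        system_lifetimes.append(sorted(lifetimes)[n_components - k])
    return pairs, system_lifetimes

def correlation(pairs):
    """Sample correlation between the two recorded component lifetimes."""
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs) / n
    sx = (sum((a - mx) ** 2 for a, _ in pairs) / n) ** 0.5
    sy = (sum((b - my) ** 2 for _, b in pairs) / n) ** 0.5
    return cov / (sx * sy)

pairs, sys_life = simulate_shared_frailty(n_components=5, n_systems=20000, k=3)
```

The positive lifetime correlation is entirely an effect of the shared environment: given z the components are independent.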
7.
Journal of Statistical Computation and Simulation, 2012, 82(3): 225-238
In many industrial quality-control experiments and destructive stress testing, the only available data are successive minima (or maxima), i.e., record-breaking data. There are two sampling schemes used to collect record-breaking data: random sampling and inverse sampling. In random sampling, the total sample size is predetermined and the number of records is a random variable, while in inverse sampling the number of records to be observed is predetermined and the sample size is thus a random variable. The purpose of this paper is to determine, via simulations, which of the two schemes, if any, is more efficient. Since the two schemes are equivalent asymptotically, the simulations were carried out for small to moderate-sized record-breaking samples. Simulated biases and mean square errors of the maximum likelihood estimators of the parameters under the two sampling schemes were compared. In general, it was found that if the estimators were well behaved, then there was no significant difference between the mean square errors of the estimates for the two schemes. However, for certain distributions described by both a shape and a scale parameter, random sampling led to estimators that were inconsistent. On the other hand, the estimates obtained from inverse sampling were always consistent. Moreover, for moderate-sized record-breaking samples, the total sample size that needs to be observed is smaller for inverse sampling than for random sampling.
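The two sampling schemes can be sketched directly. Under random sampling with n draws, the expected number of (lower) records is the harmonic number H_n, roughly ln n, so records are rare; under inverse sampling the number of records is fixed and the total sample size is random. The uniform lifetimes here are only for illustration.

```python
import random

def records_random_sampling(n, rng):
    """Random sampling: fixed total sample size n, random number of
    (lower) records."""
    current_min = float("inf")
    records = []
    for _ in range(n):
        x = rng.random()
        if x < current_min:
            current_min = x
            records.append(x)
    return records

def records_inverse_sampling(r, rng):
    """Inverse sampling: fixed number of records r, random sample size."""
    current_min = float("inf")
    records, total = [], 0
    while len(records) < r:
        x = rng.random()
        total += 1
        if x < current_min:
            current_min = x
            records.append(x)
    return records, total

rng = random.Random(0)
mean_records = sum(len(records_random_sampling(100, rng))
                   for _ in range(2000)) / 2000  # close to H_100, about 5.19
```

The simulation makes the trade-off in the abstract concrete: a fixed-n experiment typically yields only a handful of records, while fixing the record count lets the experimenter stop as soon as the target is met.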
8.
Journal of Statistical Computation and Simulation, 2012, 82(3): 165-180
In this article, the validity of procedures for testing the significance of the slope in quantitative linear models with one explanatory variable and first-order autoregressive [AR(1)] errors is analyzed in a Monte Carlo study conducted in the time domain. Two cases are considered for the regressor: fixed and trended versus random and AR(1). In addition to the classical t-test using the Ordinary Least Squares (OLS) estimator of the slope and its standard error, we consider seven t-tests with n - 2 df built on the Generalized Least Squares (GLS) estimator or an estimated GLS estimator, three variants of the classical t-test with different variances of the OLS estimator, two asymptotic tests built on the Maximum Likelihood (ML) estimator, the F-test for fixed effects based on the Restricted Maximum Likelihood (REML) estimator in the mixed-model approach, two t-tests with n - 2 df based on first differences (FD) and first-difference ratios (FDR), and four modified t-tests using various corrections of the number of degrees of freedom. The FDR t-test, the REML F-test and the modified t-test using Dutilleul's effective sample size are the most valid among the testing procedures that do not assume complete knowledge of the covariance matrix of the errors. However, modified t-tests are not applicable and the FDR t-test suffers from a lack of power when the regressor is fixed and trended (i.e., FDR is the same as FD in this case when observations are equally spaced), whereas the REML algorithm fails to converge at small sample sizes. The classical t-test is valid when the regressor is fixed and trended and autocorrelation among errors is predominantly negative, and when the regressor is random and AR(1), like the errors, and autocorrelation is moderately negative or positive.
We discuss the results graphically, in terms of the circularity condition defined in repeated measures ANOVA and of the effective sample size used in correlation analysis with autocorrelated sample data. An example with environmental data is presented.
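The practical stake of the study, namely that the classical OLS t-test can be badly oversized under positively autocorrelated errors with a fixed trended regressor, is easy to reproduce by Monte Carlo. This sketch uses a nominal 5% level with the two-sided critical value 2.048 for 28 df; the AR(1) coefficient and sample size are illustrative choices, not the paper's design.

```python
import random

def ols_t_statistic(x, y):
    """Classical t-statistic for the OLS slope in simple linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    b0 = my - b1 * mx
    rss = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    se = (rss / (n - 2) / sxx) ** 0.5
    return b1 / se

rng = random.Random(123)
n, rho, n_rep = 30, 0.8, 2000
x = list(range(n))            # fixed, trended regressor
crit = 2.048                  # two-sided 5% critical value of t with 28 df
rejections = 0
for _ in range(n_rep):
    e = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        e.append(rho * e[-1] + rng.gauss(0.0, 1.0))
    y = e                     # true slope is zero: every rejection is an error
    if abs(ols_t_statistic(x, y)) > crit:
        rejections += 1
size = rejections / n_rep     # empirical type I error rate
```

The empirical size comes out far above the nominal 5%, which is why the corrected procedures compared in the paper (GLS, REML, effective-sample-size corrections) are needed.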
9.
Journal of Statistical Computation and Simulation, 2012, 82(1-2): 57-71
This paper considers the estimation problem when lifetimes are Weibull distributed and are collected under Type-II progressive censoring with random removals, where the number of units removed at each failure time follows a discrete uniform distribution. The expected duration of this censoring plan is discussed and compared numerically to that under Type-II censoring without removals. Maximum likelihood estimators of the parameters and their asymptotic variances are derived.
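The censoring scheme itself can be sketched by simulation: n units go on test, and after each of the first m - 1 failures a random number of survivors is withdrawn. The exact support chosen for the discrete uniform removal law below (ensuring m failures can still be observed) is an assumption of this sketch, as are the Weibull parameter values.

```python
import random

def progressive_type2_sample(n, m, shape, scale, seed=5):
    """Simulate a progressively Type-II censored Weibull sample with random
    removals: after the i-th failure (i < m), R_i surviving units are
    withdrawn, with R_i uniform on {0, ..., survivors - (m - i)}; the
    remaining survivors are withdrawn at the m-th failure."""
    rng = random.Random(seed)
    alive = sorted(rng.weibullvariate(scale, shape) for _ in range(n))
    observed = []
    for i in range(1, m + 1):
        observed.append(alive.pop(0))  # next failure is the smallest survivor
        if i < m:
            r = rng.randint(0, len(alive) - (m - i))
            for idx in sorted(rng.sample(range(len(alive)), r), reverse=True):
                del alive[idx]
    return observed

obs = progressive_type2_sample(n=20, m=8, shape=1.5, scale=2.0)
```

The test ends at the m-th observed failure, so the plan's expected duration is the expectation of `obs[-1]`, the quantity the paper compares with ordinary Type-II censoring.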
10.
The purpose was to assess RDS estimators in populations simulated with diverse connectivity characteristics, incorporating the putative influence of misreported degrees and transmission processes. Four populations were simulated using different random graph models, and each was "infected" using four different transmission processes. From each population × transmission combination, one thousand samples were obtained using an RDS-like sampling strategy. Three estimators were used to predict the population-level prevalence of the "infection". Several types of misreported degrees were simulated. Samples were also generated using standard random sampling, with prevalence estimated by the classical frequentist estimator. Estimation biases relative to the population parameters were assessed, as well as the variance. Variability was associated with the connectivity characteristics of each simulated population. Clustered populations yielded greater variability, and no RDS-based strategy could correct the estimation biases. Misreported degrees had modest effects, especially when RDS estimators were used. The best results for RDS-based samples were observed when the "infection" was randomly attributed, without any relation to the underlying network structure.
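The summary does not name the three estimators used. One widely used RDS estimator, the Volz-Heckathorn (RDS-II) estimator, weights each sampled individual by the inverse of the reported degree, since a random-walk-like referral chain reaches people roughly in proportion to their degree; this is also why degree misreporting feeds directly into the estimates. The sample values below are made-up illustrations.

```python
def rds_ii_estimate(infected, degrees):
    """Volz-Heckathorn (RDS-II) prevalence estimator: a degree-weighted
    proportion, with each respondent weighted by 1/degree."""
    num = sum(i / d for i, d in zip(infected, degrees))
    den = sum(1 / d for d in degrees)
    return num / den

# toy sample: infection indicators and reported network degrees
infected = [1, 0, 0, 1, 0, 1, 0, 0]
degrees = [2, 5, 8, 3, 10, 2, 6, 4]
prev = rds_ii_estimate(infected, degrees)
```

Here the unweighted sample proportion is 3/8 = 0.375, while the degree-weighted estimate is higher because the infected respondents report low degrees; inflating or deflating reported degrees shifts the estimate in the same way.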