686 results found (query time: 31 ms)
1.
贺建风  李宏煜 《统计研究》2021,38(4):131-144
In the era of the digital economy, social networks, as a key carrier of the digital platform economy, have attracted wide attention from scholars at home and abroad. In the big-data setting, social networks have great commercial application value, but because their scale is unprecedentedly large, traditional network analysis methods are no longer applicable due to excessive computational cost. Obtaining a sample network through a network sampling algorithm and then inferring the full network saves computational resources, so the quality of the sampling algorithm directly affects the accuracy of social network analysis conclusions. Existing social network sampling algorithms suffer from defects such as ignoring the internal topology of the network, easily getting trapped in local subnetworks, and low sampling efficiency. To remedy these defects, this paper exploits the community structure of large-scale social networks and proposes a clustered random walk sampling algorithm. The method first partitions the nodes of the original network into communities using a community clustering algorithm, yielding multiple community networks, and then performs random walk sampling within each community to obtain the sample network. Results from both numerical simulations and a case application show that the clustered random walk sampling algorithm overcomes the shortcomings of traditional network sampling algorithms and can reduce network size while preserving the structural features of the original network well. Moreover, the algorithm can be parallelized, effectively improving sampling efficiency, which is of great practical significance for sampling large-scale social networks in the big-data setting.
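The two-stage procedure described in the abstract can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: it assumes the community partition has already been computed (e.g., by a modularity-based method such as Louvain), represents the graph as an adjacency dictionary, and all function names are hypothetical.

```python
import random

def random_walk_sample(adj, start, steps, rng):
    """Collect the nodes visited by a simple random walk on adjacency dict `adj`."""
    visited = {start}
    node = start
    for _ in range(steps):
        nbrs = adj.get(node, [])
        if not nbrs:            # dead end: restart from the walk's start node
            node = start
            continue
        node = rng.choice(nbrs)
        visited.add(node)
    return visited

def clustered_random_walk_sample(adj, communities, steps_per_community, seed=0):
    """Run one random walk inside each community and pool the visited nodes."""
    rng = random.Random(seed)
    sample = set()
    for comm in communities:
        comm = set(comm)
        # Restrict the graph to this community so the walk cannot leave it
        # (this is what keeps the walk from getting trapped in one local region
        # of the full network: each community is sampled separately).
        sub = {u: [v for v in adj[u] if v in comm] for u in comm}
        start = rng.choice(sorted(comm))
        sample |= random_walk_sample(sub, start, steps_per_community, rng)
    return sample
```

Because each community is walked independently, the per-community walks could be dispatched to parallel workers, matching the parallelization noted in the abstract.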
2.
Summary.  We detail a general method for measuring agreement between two statistics. An application is two ratios of directly standardized rates which differ only by the choice of the standard. If the statistics have a high value for the coefficient of agreement then the expected squared difference between the statistics is small relative to the variance of the average of the two statistics, and inferences vary little by changing statistics. The estimation of a coefficient of agreement between two statistics is not straightforward because there is only one pair of observed values, each statistic calculated from the data. We introduce estimators of the coefficient of agreement for two statistics and discuss their use, especially as applied to functions of standardized rates.
3.
Summary Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportional to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that available from weighting by ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results. This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
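For reference, the standard DerSimonian-Laird procedure that the abstract modifies can be sketched as follows. This computes the method-of-moments between-study variance and the conventional conditional within-study variances; the paper's modified (unconditional-variance) weights are not reproduced here.

```python
def dersimonian_laird_rd(tables):
    """Standard DerSimonian-Laird random-effects pooling of risk differences.

    `tables` is a list of (events_treat, n_treat, events_ctrl, n_ctrl).
    Returns (pooled_rd, tau2).
    """
    d, v = [], []
    for e1, n1, e0, n0 in tables:
        p1, p0 = e1 / n1, e0 / n0
        d.append(p1 - p0)
        # conditional within-study variance of the risk difference
        v.append(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    w = [1.0 / vi for vi in v]                                # fixed-effect weights
    sw = sum(w)
    d_fe = sum(wi * di for wi, di in zip(w, d)) / sw          # fixed-effect pooled RD
    q = sum(wi * (di - d_fe) ** 2 for wi, di in zip(w, d))    # Cochran's Q
    k = len(tables)
    denom = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / denom)                    # method-of-moments tau^2
    w_re = [1.0 / (vi + tau2) for vi in v]                    # random-effects weights
    pooled = sum(wi * di for wi, di in zip(w_re, d)) / sum(w_re)
    return pooled, tau2
```

Note that the random-effects weights `1/(v_i + tau2)` depend on the same observed proportions that enter `d_i`; this dependence is exactly the source of bias the abstract's modification targets.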
4.
The maximum order of an element of A5 is 5. By Lagrange's theorem for finite groups, the non-identity elements of a subgroup of A5 of order 10 can only have order 2 or 5. Since the converse of Lagrange's theorem does not hold, however, whether A5 actually possesses a subgroup of order 10 remains a question. By computing and studying the powers of 5-cycles, this paper determines how the elements of the order-10 subgroups of A5 are constituted and, by a constructive method, exhibits the six subgroups of order 10 of the alternating group A5 of degree 5.
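The count of six order-10 subgroups can also be confirmed by brute force. The sketch below represents permutations as tuples and relies on a standard group-theoretic fact (noted in a comment) to cut closures off early; it is an independent check, not the paper's constructive method.

```python
from itertools import permutations

IDENT = tuple(range(5))

def compose(p, q):
    """(p * q)(i) = p[q[i]] -- apply q first, then p."""
    return tuple(p[i] for i in q)

def is_even(p):
    inv = sum(1 for i in range(5) for j in range(i + 1, 5) if p[i] > p[j])
    return inv % 2 == 0

def elem_order(p):
    x, n = p, 1
    while x != IDENT:
        x, n = compose(x, p), n + 1
    return n

A5 = [p for p in permutations(range(5)) if is_even(p)]  # 60 elements

def closure(gens, cap):
    """Subgroup generated by `gens`; give up (return None) past `cap` elements."""
    elems = {IDENT, *gens}
    grew = True
    while grew:
        grew = False
        for a in list(elems):
            for b in list(elems):
                c = compose(a, b)
                if c not in elems:
                    elems.add(c)
                    grew = True
                    if len(elems) > cap:
                        return None
    return frozenset(elems)

# Any order-10 subgroup contains an order-5 and an order-2 element, so it is
# generated by such a pair. A subgroup containing both with more than 10
# elements must be all of A5: its order would be a multiple of 10 dividing 60,
# and the simple group A5 has no subgroup of order 20 or 30. Hence closures
# may safely be abandoned once they pass 10 elements.
fives = [p for p in A5 if elem_order(p) == 5]
twos = [p for p in A5 if elem_order(p) == 2]
subgroups = set()
for a in fives:
    for b in twos:
        h = closure((a, b), cap=10)
        if h is not None and len(h) == 10:
            subgroups.add(h)
```

Each subgroup found is dihedral of order 10 (one cyclic group of order 5 plus five involutions), matching the element-order constraint stated in the abstract.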
5.
Generalized additive models for location, scale and shape
Summary.  A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton–Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
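A minimal two-step sketch illustrates the core GAMLSS idea of letting both the location and the scale of y depend on an explanatory variable. This is not the paper's penalized-likelihood/backfitting machinery: it simply fits the mean by OLS and then regresses the log squared residuals on x, whose slope estimates twice the log-scale coefficient (since E[log eps^2] = 2*log sigma + E[log chi2_1], with the constant absorbed in the intercept).

```python
import math
import random

def ols(x, y):
    """Simple-regression OLS: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    return my - b1 * mx, b1

rng = random.Random(42)
n = 4000
b0, b1 = 1.0, 2.0       # true location coefficients: mu(x) = b0 + b1*x
g0, g1 = -1.0, 0.5      # true log-scale coefficients: log sigma(x) = g0 + g1*x
x = [rng.random() for _ in range(n)]
y = [b0 + b1 * xi + math.exp(g0 + g1 * xi) * rng.gauss(0, 1) for xi in x]

# Step 1: model the location (mean) by OLS.
a0, a1 = ols(x, y)
resid = [yi - (a0 + a1 * xi) for xi, yi in zip(x, y)]

# Step 2: model the log-scale by regressing log(residual^2) on x;
# the fitted slope estimates 2*g1.
c0, c1 = ols(x, [math.log(ri ** 2) for ri in resid])
g1_hat = c1 / 2.0
```

GAMLSS generalizes this far beyond the normal case: any distribution parameter (location, scale, skewness, kurtosis) may get its own parametric or smooth predictor, all fitted jointly by (penalized) likelihood rather than in two ad hoc steps.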
6.
The components of a reliability system subjected to a common random environment usually have dependent lifetimes. This paper studies the stochastic properties of such a system with lifetimes of the components following multivariate frailty models and multivariate mixed proportional reversed hazard rate (PRHR) models, respectively. Through stochastic comparisons, we shed new light on how the random environment affects the number of working components of a reliability system and assess the performance of a k-out-of-n system.
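A small Monte Carlo sketch (assuming a gamma frailty and exponential baseline lifetimes, which is one concrete instance of the multivariate frailty setting, not the paper's general framework) shows one such environmental effect: a shared environment induces positive dependence among lifetimes, which raises the reliability of a series (n-out-of-n) system relative to independent environments with the same marginals.

```python
import random

def koon_reliability(n, k, t, n_sims, shared, seed=0):
    """Monte Carlo estimate of P(at least k of n components survive past t).

    Component lifetimes are exponential with rate Z, where Z is a gamma
    frailty (shape 2, scale 1). `shared=True` gives all components one
    common environment Z; `shared=False` draws an independent Z per
    component, leaving each marginal lifetime distribution unchanged.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        z_common = rng.gammavariate(2.0, 1.0)
        alive = 0
        for _ in range(n):
            z = z_common if shared else rng.gammavariate(2.0, 1.0)
            if rng.expovariate(z) > t:
                alive += 1
        if alive >= k:
            hits += 1
    return hits / n_sims

# Series system (k = n = 3): exact values are E[e^{-3Z}] = (1+3)^-2 = 1/16
# under a shared environment vs ((1+1)^-2)^3 = 1/64 under independence.
r_shared = koon_reliability(n=3, k=3, t=1.0, n_sims=20000, shared=True)
r_indep = koon_reliability(n=3, k=3, t=1.0, n_sims=20000, shared=False, seed=1)
```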
7.
In many industrial quality control experiments and destructive stress testing, the only available data are successive minima (or maxima), i.e., record-breaking data. There are two sampling schemes used to collect record-breaking data: random sampling and inverse sampling. For random sampling, the total sample size is predetermined and the number of records is a random variable, while in inverse sampling the number of records to be observed is predetermined; thus the sample size is a random variable. The purpose of this paper is to determine, via simulations, which of the two schemes, if any, is more efficient. Since the two schemes are equivalent asymptotically, the simulations were carried out for small to moderate sized record-breaking samples. Simulated biases and mean square errors of the maximum likelihood estimators of the parameters using the two sampling schemes were compared. In general, it was found that if the estimators were well behaved, then there was no significant difference between the mean square errors of the estimates for the two schemes. However, for certain distributions described by both a shape and a scale parameter, random sampling led to estimators that were inconsistent. On the other hand, the estimates obtained from inverse sampling were always consistent. Moreover, for moderate-sized record-breaking samples, the total sample size that needs to be observed is smaller for inverse sampling than for random sampling.
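The two schemes can be sketched directly; the generating distribution (standard exponential) and the function names below are illustrative assumptions.

```python
import random

def records_random_sampling(n, rng):
    """Random sampling: fixed total sample size n; the number of records
    (successive minima) observed along the way is random."""
    records = []
    current_min = float("inf")
    for _ in range(n):
        x = rng.expovariate(1.0)
        if x < current_min:       # a new record-breaking (lower) value
            current_min = x
            records.append(x)
    return records

def records_inverse_sampling(m, rng):
    """Inverse sampling: fixed number of records m; the total number of
    observations needed to see them is random."""
    records = []
    total = 0
    current_min = float("inf")
    while len(records) < m:
        x = rng.expovariate(1.0)
        total += 1
        if x < current_min:
            current_min = x
            records.append(x)
    return records, total
```

Under random sampling the expected number of records grows only like log n (the i-th observation is a record with probability 1/i), which is why inverse sampling can reach a target number of records with a smaller observed sample at moderate sizes.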
8.

In this article, the validity of procedures for testing the significance of the slope in quantitative linear models with one explanatory variable and first-order autoregressive [AR(1)] errors is analyzed in a Monte Carlo study conducted in the time domain. Two cases are considered for the regressor: fixed and trended versus random and AR(1). In addition to the classical t-test using the Ordinary Least Squares (OLS) estimator of the slope and its standard error, we consider seven t-tests with n − 2 df built on the Generalized Least Squares (GLS) estimator or an estimated GLS estimator, three variants of the classical t-test with different variances of the OLS estimator, two asymptotic tests built on the Maximum Likelihood (ML) estimator, the F-test for fixed effects based on the Restricted Maximum Likelihood (REML) estimator in the mixed-model approach, two t-tests with n − 2 df based on first differences (FD) and first-difference ratios (FDR), and four modified t-tests using various corrections of the number of degrees of freedom. The FDR t-test, the REML F-test and the modified t-test using Dutilleul's effective sample size are the most valid among the testing procedures that do not assume complete knowledge of the covariance matrix of the errors. However, modified t-tests are not applicable and the FDR t-test suffers from a lack of power when the regressor is fixed and trended (i.e., FDR is the same as FD in this case when observations are equally spaced), whereas the REML algorithm fails to converge at small sample sizes. The classical t-test is valid when the regressor is fixed and trended and autocorrelation among errors is predominantly negative, and when the regressor is random and AR(1), like the errors, and autocorrelation is moderately negative or positive.
We discuss the results graphically, in terms of the circularity condition defined in repeated measures ANOVA and of the effective sample size used in correlation analysis with autocorrelated sample data. An example with environmental data is presented.
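The size distortion that motivates this study is easy to reproduce. The sketch below (an illustration, not the article's full design) estimates the empirical size of the classical OLS t-test when both the regressor and the errors are random AR(1) with rho = 0.8: the nominal 5% level is badly exceeded.

```python
import math
import random

def slope_t_stat(x, y):
    """Classical OLS t statistic for H0: slope = 0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    b0 = my - b1 * mx
    sse = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(sse / (n - 2) / sxx)
    return b1 / se

def ar1(n, rho, rng):
    """AR(1) series with unit innovation variance and a stationary start."""
    z = rng.gauss(0, 1) / math.sqrt(1 - rho * rho)
    out = [z]
    for _ in range(n - 1):
        z = rho * z + rng.gauss(0, 1)
        out.append(z)
    return out

def empirical_size(n=30, rho=0.8, reps=2000, crit=2.048, seed=0):
    """Rejection rate of the nominal 5% classical t-test (crit = t_{0.975, 28})
    when the true slope is 0 but both regressor and errors are AR(1)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        x = ar1(n, rho, rng)       # random AR(1) regressor
        e = ar1(n, rho, rng)       # AR(1) errors; H0 (zero slope) is true
        if abs(slope_t_stat(x, e)) > crit:
            rejections += 1
    return rejections / reps
```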
9.
Peto and Peto (1972) have studied rank invariant tests to compare two survival curves for right censored data. We apply their tests, including the logrank test and the generalized Wilcoxon test, to left truncated and interval censored data. The significance levels of the tests are approximated by Monte Carlo permutation tests. Simulation studies are conducted to show their size and power under different distributional differences. In particular, the logrank test works well under the Cox proportional hazards alternatives, as for the usual right censored data. The methods are illustrated by the analysis of the Massachusetts Health Care Panel Study dataset.
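A Monte Carlo permutation version of the two-sample logrank test can be sketched from scratch for the simpler right-censored case; this illustrates the permutation approach, not the authors' code, and does not handle the left-truncated, interval-censored setting of the abstract.

```python
import random

def logrank_z(times, events, groups):
    """Two-sample logrank Z statistic for right-censored data.

    times: observed times; events: 1 = event, 0 = censored; groups: 0/1.
    """
    data = sorted(zip(times, events, groups))
    n = len(data)
    n1 = sum(g for _, _, g in data)   # group-1 subjects currently at risk
    num = var = 0.0
    i = 0
    while i < n:
        t = data[i][0]
        d = d1 = 0                    # deaths (total, group 1) at time t
        j = i
        while j < n and data[j][0] == t:
            if data[j][1]:
                d += 1
                d1 += data[j][2]
            j += 1
        atrisk = n - i                # everyone with time >= t is at risk
        if d and atrisk > 1:
            p1 = n1 / atrisk
            num += d1 - d * p1        # observed minus expected in group 1
            var += d * p1 * (1 - p1) * (atrisk - d) / (atrisk - 1)
        for m in range(i, j):         # all tied at t now leave the risk set
            n1 -= data[m][2]
        i = j
    return num / var ** 0.5 if var > 0 else 0.0

def permutation_logrank(times, events, groups, n_perm=999, seed=0):
    """Two-sided permutation p-value obtained by shuffling group labels."""
    rng = random.Random(seed)
    observed = abs(logrank_z(times, events, groups))
    g = list(groups)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(g)
        if abs(logrank_z(times, events, g)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Shuffling the group labels is valid under the null hypothesis that the two survival distributions (and censoring patterns) are exchangeable between groups, which is the basis of the Monte Carlo approximation described in the abstract.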
10.
In this article, time to immune recovery during antiretroviral therapy was estimated and compared between HIV-infected children with and without tuberculosis (TB). CD4+ T-cell restoration was used as a criterion for determining immune recovery. The median residual lifetime function, which is more intuitive and robust compared to the frequently used measures of lifetime data, was used to estimate time to CD4+ T-cell restoration. The median residual lifetime is not influenced by extreme observations and heavy-tailed distributions which are commonly encountered in clinical studies. Permutation-based methods were used to compare the CD4+ T-cell restoration times between the two groups of patients. Our results indicate that children with TB had uniformly higher median residual lifetimes to immune recovery compared to those without TB. Although TB was associated with slower CD4+ T-cell restoration, the differences between the restoration times of the two groups were not statistically significant.
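For complete (uncensored) data, the empirical median residual lifetime at time t is simply the median of the remaining lifetimes among subjects still "alive" at t. A minimal sketch follows; the article's censored-data setting would instead require, e.g., a Kaplan-Meier-based estimate.

```python
def median_residual_life(t, lifetimes):
    """Empirical median residual lifetime at t: median of {x - t : x > t}.

    Returns None when no subject survives past t. Being a median, the
    estimate is unaffected by how extreme the largest lifetimes are,
    which is the robustness property highlighted in the abstract.
    """
    residuals = sorted(x - t for x in lifetimes if x > t)
    if not residuals:
        return None
    m = len(residuals)
    mid = m // 2
    return residuals[mid] if m % 2 else (residuals[mid - 1] + residuals[mid]) / 2
```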

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  ICP license: 京ICP备09084417号