Full-text access type
Paid full text | 971 articles |
Free | 38 articles |
Free (domestic) | 2 articles |
Subject category
Management | 50 articles |
Demography | 18 articles |
Collected volumes | 4 articles |
Theory and methodology | 5 articles |
General | 33 articles |
Sociology | 9 articles |
Statistics | 892 articles |
Publication year
2023 | 5 articles |
2022 | 5 articles |
2021 | 13 articles |
2020 | 15 articles |
2019 | 36 articles |
2018 | 33 articles |
2017 | 64 articles |
2016 | 31 articles |
2015 | 23 articles |
2014 | 37 articles |
2013 | 301 articles |
2012 | 84 articles |
2011 | 25 articles |
2010 | 33 articles |
2009 | 28 articles |
2008 | 32 articles |
2007 | 28 articles |
2006 | 13 articles |
2005 | 26 articles |
2004 | 21 articles |
2003 | 9 articles |
2002 | 16 articles |
2001 | 10 articles |
2000 | 20 articles |
1999 | 12 articles |
1998 | 12 articles |
1997 | 14 articles |
1996 | 9 articles |
1995 | 2 articles |
1994 | 6 articles |
1993 | 3 articles |
1992 | 2 articles |
1991 | 4 articles |
1990 | 5 articles |
1989 | 3 articles |
1988 | 4 articles |
1987 | 1 article |
1986 | 4 articles |
1985 | 5 articles |
1983 | 2 articles |
1982 | 1 article |
1981 | 4 articles |
1980 | 4 articles |
1979 | 2 articles |
1978 | 1 article |
1976 | 1 article |
1975 | 2 articles |
Sort order: 1,011 results found (search time: 15 ms)
1.
Modeling spatial overdispersion requires point process models with finite-dimensional distributions that are overdispersed relative to the Poisson distribution. Fitting such models usually relies heavily on the properties of stationarity, ergodicity, and orderliness. Moreover, although processes based on negative binomial finite-dimensional distributions have been widely considered, they typically fail to satisfy all three required properties simultaneously. Indeed, Diggle and Milne conjectured that no negative binomial model can satisfy all three. In light of this, we change perspective and construct a new process based on a different overdispersed count model, namely the generalized Waring (GW) distribution. While comparably tractable and flexible relative to negative binomial processes, the GW process is shown to possess all required properties and, additionally, to span the negative binomial and Poisson processes as limiting cases. In this sense, the GW process provides an approximate resolution to the conundrum highlighted by Diggle and Milne.
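The count model behind such GW processes can be sketched via a classical mixture construction: a negative binomial whose success probability is itself beta-distributed gives the beta-negative binomial, one standard route to generalized Waring counts. The sketch below is illustrative only (the parameter names `r`, `alpha`, `beta` and their values are assumptions, not taken from the paper); it checks empirically that the resulting counts are overdispersed:

```python
import random
import statistics

def rnegbin(rng, r, p):
    """Number of failures before the r-th success in Bernoulli(p) trials."""
    failures = successes = 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

def rgw(rng, r, alpha, beta):
    """One draw from a beta-negative binomial: a negative binomial whose
    success probability is itself Beta(alpha, beta).  This mixture is one
    standard construction of generalized Waring counts (the parameter
    naming here is an illustrative convention)."""
    p = rng.betavariate(alpha, beta)
    return rnegbin(rng, r, p)

rng = random.Random(42)
draws = [rgw(rng, 2, 6, 3) for _ in range(20000)]
m = statistics.fmean(draws)       # theoretical mean here is r*beta/(alpha-1) = 1.2
v = statistics.pvariance(draws)   # overdispersion: variance well above the mean
```

With these (assumed) parameters the sample variance comes out several times the sample mean, illustrating the overdispersion the abstract refers to.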
2.
3.
We propose testing procedures for the hypothesis that a given set of discrete observations may be formulated as a particular time series of counts with a specific conditional law. The new test statistics incorporate the empirical probability-generating function computed from the observations. Special emphasis is given to the popular models of integer autoregression and Poisson autoregression. The asymptotic properties of the proposed test statistics are studied under the null hypothesis as well as under alternatives. A Monte Carlo power study of bootstrap versions of the new methods is included, along with real-data examples.
4.
This article proposes several estimators for estimating the ridge parameter k based on the Poisson ridge regression (RR) model. These estimators are evaluated by means of Monte Carlo simulations. As performance criteria, we calculate the mean squared error (MSE), the mean value, and the standard deviation of k. The first criterion is commonly used, while the other two have not previously been used in analyzing Poisson RR. These additional criteria are very informative because, if several estimators have an equal estimated MSE, those with a low mean value and standard deviation of k should be preferred. Based on the simulation results, we recommend some biasing parameters that may be useful for practitioners in the health, social, and physical sciences.
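The evaluation logic above — MSE alone cannot separate estimators, so the mean and standard deviation of the estimated k also matter — can be illustrated with two hypothetical ridge-parameter estimators. Everything below (the two estimators, the true k, all numbers) is invented for illustration and is not taken from the article:

```python
import random
import statistics

def criteria(estimates, k_true):
    """The three performance criteria for draws of a ridge-parameter
    estimator: MSE, mean value, and standard deviation."""
    mse = statistics.fmean((e - k_true) ** 2 for e in estimates)
    return mse, statistics.fmean(estimates), statistics.pstdev(estimates)

rng = random.Random(7)
k_true = 0.5
# Two hypothetical estimators with comparable MSE:
# A is nearly unbiased but noisy; B is biased upward but very stable.
est_a = [rng.gauss(k_true, 0.20) for _ in range(5000)]
est_b = [rng.gauss(k_true + 0.19, 0.05) for _ in range(5000)]
mse_a, mean_a, sd_a = criteria(est_a, k_true)
mse_b, mean_b, sd_b = criteria(est_b, k_true)
```

The two MSEs are nearly tied, yet the mean and standard deviation cleanly distinguish the estimators, which is precisely the argument the abstract makes for reporting all three criteria.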
5.
Abstract. In general, survival data are time-to-event data, such as time to death, time to appearance of a tumor, or time to recurrence of a disease. Models for survival data have frequently been based on the proportional hazards model proposed by Cox. The Cox model has seen extensive application in the social, medical, behavioral, and public health sciences. In this paper we propose a more efficient sampling method for recruiting subjects for survival analysis: a Moving Extreme Ranked Set Sampling (MERSS) scheme with ranking based on an easy-to-evaluate baseline auxiliary variable known to be associated with survival time. We demonstrate that this approach provides a more powerful testing procedure, as well as a more efficient estimate of the hazard ratio, than one based on simple random sampling (SRS). Theoretical derivations and simulation studies are provided. The Iowa 65+ Rural study data are used to illustrate the methods developed in this paper.
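One common convention for MERSS measures only the unit with the extreme auxiliary value within sets of increasing size. The sketch below assumes that convention; the set sizes, the use of maxima, and all names are assumptions for illustration, not the paper's exact scheme:

```python
import random

def merss_sample(population, aux, m, rng):
    """Sketch of a moving-extremes ranked set sample of size m: for
    i = 1..m, draw a simple random set of size i and keep the unit whose
    auxiliary value is the set maximum.  Only the kept unit's survival
    time would then be measured.  Assumed conventions, not the paper's
    exact scheme."""
    chosen = []
    for i in range(1, m + 1):
        candidates = rng.sample(population, i)
        chosen.append(max(candidates, key=lambda u: aux[u]))
    return chosen

# toy population with an invented auxiliary variable
population = list(range(10))
aux = {u: (u * 7) % 10 for u in population}
chosen = merss_sample(population, aux, 3, random.Random(5))
```

Only the auxiliary variable is evaluated for every drawn unit; the expensive outcome (survival time) is observed for just the m selected units, which is the efficiency argument the abstract makes.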
6.
In the era of the digital economy, social networks, as a key carrier of the digital platform economy, have drawn wide attention from scholars both in China and abroad. In the big-data setting, the commercial value of social networks is enormous, but because their scale is unprecedentedly large, traditional network-analysis methods are no longer applicable owing to prohibitive computational cost. Drawing a sample network with a sampling algorithm and then making inferences about the whole network saves computing resources, so the quality of the sampling algorithm directly affects the accuracy of conclusions drawn from social-network analysis. Existing social-network sampling algorithms suffer from defects such as ignoring the internal topology of the network, becoming trapped in local neighborhoods, and low sampling efficiency. To remedy these defects, this paper exploits the community structure of large social networks and proposes a clustered random-walk sampling algorithm. The method first partitions the nodes of the original network into communities using a community-clustering algorithm, and then performs random-walk sampling within each community to obtain the sample network. Results from both numerical simulations and a case study show that the clustered random-walk sampling algorithm overcomes the drawbacks of traditional network sampling algorithms and preserves the structural features of the original network well while reducing its scale. Moreover, the algorithm can be parallelized, which markedly improves sampling efficiency, and it is of considerable practical significance for sampling large-scale social networks in the big-data era.
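A minimal version of the two-stage idea — walk within each community, then pool the visited nodes — might look as follows. The community partition is assumed to be already available (the paper obtains it from a community-clustering algorithm; here it is hand-specified for a toy graph), and all names are illustrative:

```python
import random

def random_walk_sample(adj, start, n_steps, rng):
    """Simple random walk on an adjacency dict; returns the visited node set."""
    node, visited = start, {start}
    for _ in range(n_steps):
        nbrs = adj.get(node, [])
        if not nbrs:          # dead end within the subgraph: stop this walk
            break
        node = rng.choice(nbrs)
        visited.add(node)
    return visited

def clustered_rw_sample(adj, communities, steps_per_community, rng):
    """Restrict a random walk to each (pre-computed) community, then pool."""
    sample = set()
    for comm in communities:
        comm_set = set(comm)
        # keep only edges internal to the community
        sub = {u: [v for v in adj[u] if v in comm_set] for u in comm}
        sub = {u: nbrs for u, nbrs in sub.items() if nbrs}
        if not sub:
            continue
        start = rng.choice(sorted(sub))
        sample |= random_walk_sample(sub, start, steps_per_community, rng)
    return sample

# toy graph: two 4-cliques joined by a single bridge edge 0 -- 4
adj = {
    0: [1, 2, 3, 4], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2],
    4: [5, 6, 7, 0], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6],
}
communities = [[0, 1, 2, 3], [4, 5, 6, 7]]   # assumed given by clustering
sample = clustered_rw_sample(adj, communities, 20, random.Random(0))
```

Because each community contributes its own walk, the sample cannot get trapped on one side of the bridge — the defect of plain random-walk sampling that the abstract describes. The per-community loop is also trivially parallelizable.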
7.
Chin-Tsang Chiang, Mei-Cheng Wang, Chiung-Yu Huang. Scandinavian Journal of Statistics, 2005, 32(1): 77-91
Abstract. Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is shown that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks occurring at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
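A bare-bones moment-type estimator — a kernel-smoothed event count divided by the number of subjects still under observation — can be sketched as follows. The Gaussian kernel, the toy data, and the handling of an empty risk set are assumptions for illustration, not the paper's exact estimator:

```python
import math

def gauss_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def rate_estimate(t, event_times, censor_times, h):
    """Moment-type kernel estimate of the occurrence rate at time t:
    smoothed event count over the number of subjects still observed at t.
    A simplified sketch, not the paper's exact estimator."""
    at_risk = sum(1 for c in censor_times if c >= t)
    if at_risk == 0:
        return float("nan")   # no information beyond the last censoring time
    smoothed = sum(gauss_kernel((t - s) / h) / h
                   for times in event_times for s in times)
    return smoothed / at_risk

# invented recurrent-event data: per-subject event times and censoring times
event_times = [[0.5, 1.2, 2.0], [0.8, 1.5], [1.1]]
censor_times = [2.5, 1.8, 1.3]
r_mid = rate_estimate(1.0, event_times, censor_times, h=0.5)
```

The jumps in the at-risk denominator at the censoring times are exactly what creates the "nicks" the abstract attributes to the unresmoothed moment method.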
8.
Yue Fang. Journal of Statistical Planning and Inference, 2003, 110(1-2): 55-73
Generalized method of moments (GMM) is used to develop tests for discriminating discrete distributions among the two-parameter family of Katz distributions. Relationships involving moments are exploited to obtain identifying and over-identifying restrictions. The asymptotic relative efficiencies of tests based on GMM are analyzed using the local power approach and the approximate Bahadur efficiency. The paper also gives results of Monte Carlo experiments designed to check the validity of the theoretical findings and to shed light on the small sample properties of the proposed tests. Extensions of the results to compound Poisson alternative hypotheses are discussed.
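The identifying restrictions can be illustrated by inverting the Katz moments, mean = a/(1-b) and variance = a/(1-b)^2, for (a, b); since the negative binomial is the Katz member with 0 < b < 1, simulated NB data should recover its implied (a, b). This is only the method-of-moments core — the full GMM test would add over-identifying restrictions on higher moments:

```python
import random
import statistics

def rnegbin(rng, r, p):
    """Failures before the r-th success in Bernoulli(p) trials."""
    failures = successes = 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

def katz_moment_estimates(xs):
    """Invert the Katz moment restrictions mean = a/(1-b) and
    variance = a/(1-b)**2 for (a, b); b = 0 recovers the Poisson case."""
    m = statistics.fmean(xs)
    v = statistics.pvariance(xs)
    b_hat = 1.0 - m / v
    a_hat = m * m / v
    return a_hat, b_hat

rng = random.Random(3)
# NB(r=3, p=0.5): mean 3, variance 6, i.e. Katz parameters a = 1.5, b = 0.5
xs = [rnegbin(rng, 3, 0.5) for _ in range(20000)]
a_hat, b_hat = katz_moment_estimates(xs)
```

A test of b = 0 against b > 0 built on these restrictions discriminates the Poisson member from the overdispersed members of the family.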
9.
Cédric Béguin, Beat Hulliger. Journal of the Royal Statistical Society, Series A (Statistics in Society), 2004, 167(2): 275-294
Summary. As a part of the EUREDIT project new methods to detect multivariate outliers in incomplete survey data have been developed. These methods are the first to work with sampling weights and to be able to cope with missing values. Two of these methods are presented here. The epidemic algorithm simulates the propagation of a disease through a population and uses extreme infection times to find outlying observations. Transformed rank correlations are robust estimates of the centre and the scatter of the data. They use a geometric transformation that is based on the rank correlation matrix. The estimates are used to define a Mahalanobis distance that reveals outliers. The two methods are applied to a small data set and to one of the evaluation data sets of the EUREDIT project.
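In the spirit of the transformed-rank-correlation idea (much simplified, and not the EUREDIT implementation), one can combine robust marginal location and scale (median and MAD) with the Spearman rank correlation to form Mahalanobis-type distances that flag the observation breaking the joint pattern:

```python
import statistics

def ranks(xs):
    """Ranks 1..n of a sequence (ties broken by position)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rk, i in enumerate(order, start=1):
        r[i] = float(rk)
    return r

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) *
           sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den

def mad_scale(xs):
    """Median absolute deviation, scaled for consistency at the normal."""
    med = statistics.median(xs)
    return 1.4826 * statistics.median(abs(x - med) for x in xs)

def rank_mahalanobis(xs, ys):
    """Squared Mahalanobis-type distances built from robust marginals and
    the Spearman rank correlation -- a simplified sketch inspired by
    transformed rank correlations, not the paper's method."""
    rho = pearson(ranks(xs), ranks(ys))      # Spearman correlation
    cx, cy = statistics.median(xs), statistics.median(ys)
    sx, sy = mad_scale(xs), mad_scale(ys)
    d2 = []
    for x, y in zip(xs, ys):
        zx, zy = (x - cx) / sx, (y - cy) / sy
        d2.append((zx * zx - 2 * rho * zx * zy + zy * zy) / (1 - rho * rho))
    return d2

xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 20]
ys = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]   # last point breaks the positive trend
d2 = rank_mahalanobis(xs, ys)
```

Because the centre, scale, and correlation are all rank- or median-based, the outlier barely influences the fitted ellipse and therefore stands out in the distances.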
10.
Impacts of complex emergencies or relief interventions have often been evaluated by comparing absolute mortality with internationally standardized mortality rates. A better evaluation would compare against the local baseline mortality of the affected populations. Projecting population-based survival data into the period of the emergency or intervention, based on information from before the emergency, can create such a local baseline reference. We find that a log-transformed Gaussian time series model in which the standard errors of the estimated rates are included in the variance has the best forecasting capacity. However, if time-at-risk during the forecast period is known, forecasting may instead be done using a Poisson time series model with overdispersion. In either case, the standard error of the estimated rates must be included in the variance of the model, either additively in the Gaussian model or multiplicatively, through overdispersion, in the Poisson model. The data on which the forecast is based must be modelled carefully, with respect not only to calendar-time trends but also to periods with an excessive frequency of events (epidemics) and to seasonal variation, in order to eliminate residual autocorrelation and to produce a proper reference for comparison that reflects changes over time during the emergency. Hence, when modelled properly, it is possible to predict a reference for an emergency-affected population based on local conditions. We predicted childhood mortality during the 1998-1999 war in Guinea-Bissau and found increased mortality in the first half-year of the war and mortality corresponding to the expected level in the last half-year.
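A minimal log-Gaussian version of the forecasting step — ordinary least squares on log rates with a calendar-time trend, with the sampling standard error of the rates added to the forecast variance — might look like this. Seasonality and epidemic periods are omitted, and all data and names are invented for the sketch:

```python
import math

def fit_log_trend(rates):
    """OLS of log(rate) on calendar time: returns intercept, slope, and the
    residual variance of the fit (a stand-in for the model variance)."""
    y = [math.log(r) for r in rates]
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
    inter = ybar - slope * tbar
    s2 = sum((yi - inter - slope * ti) ** 2 for ti, yi in zip(t, y)) / (n - 2)
    return inter, slope, s2

def forecast(inter, slope, s2, se_log_rate, t_new):
    """Point forecast on the log scale; the sampling standard error of the
    observed rate enters the variance additively, as the abstract describes
    for the Gaussian model."""
    mu = inter + slope * t_new
    var = s2 + se_log_rate ** 2
    return mu, var

# invented half-yearly mortality rates with a mild downward trend
rates = [10.0, 9.2, 8.5, 8.0, 7.4, 7.0]
inter, slope, s2 = fit_log_trend(rates)
mu, var = forecast(inter, slope, s2, se_log_rate=0.1, t_new=6)
```

Exponentiating `mu` with an interval built from `var` would give the baseline reference band against which emergency-period mortality is judged.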