Search results: 345 records (showing items 101–110).
101.
In this study, Bayesian hierarchical models are evaluated in simulation scenarios that compare single-stage and multi-stage Bayesian estimation. Simulated datasets of lung cancer disease counts for men aged 65 and older across 44 wards in the London Health Authority were analysed using a range of spatially structured random effect components. The goals of this study are to determine which of these single-stage models performs best given a certain simulating model; how the estimation methods (single- vs. multi-stage) compare in yielding posterior estimates of fixed effects in the presence of spatially structured random effects; and which of two spatial prior models, the Leroux or the ICAR model, performs best in a multi-stage context under different assumptions concerning spatial correlation. Among the fitted single-stage models without covariates, we found that when there is a low amount of variability in the distribution of disease counts, the BYM model is relatively robust to misspecification in terms of DIC, while the Leroux model is the least robust to misspecification. When these models were fit to data generated from models with covariates, we found that when there was a single set of covariates, whether spatially correlated or not, changing the values of the fixed coefficients affected the ability of either the Leroux or the ICAR model to fit the data well in terms of DIC. When there were multiple sets of spatially correlated covariates in the simulating model, however, we could not distinguish the goodness of fit between these single-stage models. We found that the multi-stage modelling process via the Leroux and ICAR models generally reduced the variance of the posterior fixed-effect estimates for data generated from models with covariates and a UH (uncorrelated heterogeneity) term, compared to analogous single-stage models. Finally, we found that the multi-stage Leroux model compares favourably to the multi-stage ICAR model in terms of DIC. We conclude that the multi-stage Leroux model should be seriously considered in applications of Bayesian disease mapping when an investigator wishes to fit a model with both fixed effects and spatially structured random effects to Poisson count data.
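A compact sketch of the model class being compared may help fix ideas; this is the generic form, not necessarily the paper's exact parameterisation. For ward $i$ with observed count $y_i$ and expected count $E_i$,

$$
y_i \sim \mathrm{Poisson}\!\left(E_i\, e^{\eta_i}\right), \qquad \eta_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + b_i .
$$

In the BYM model, $b_i = u_i + v_i$, where $\mathbf{u}$ carries an ICAR prior (spatially structured) and $v_i \sim N(0, \tau_v^{-1})$ is unstructured (the UH component). In the Leroux model, $\mathbf{b} \sim N\!\left(\mathbf{0}, \tau^{-1}\left[(1-\rho)\mathbf{I} + \rho\,\mathbf{Q}\right]^{-1}\right)$, where $\mathbf{Q}$ is the ICAR structure matrix built from the ward adjacencies and $\rho \in [0,1]$ interpolates between independent random effects ($\rho = 0$) and the pure ICAR model ($\rho = 1$).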
102.
Two leading camps for studying social complexity are case-based methods (CBM) and agent-based modelling (ABM). Despite the potential epistemological links between ‘cases’ and ‘agents,’ neither camp has leveraged their combined strengths. A bridge can be built, however, by drawing on Abbott’s insight that ‘agents are cases doing things’, Byrne’s suggestion that ‘cases are complex systems with agency’, and by viewing CBM and ABM within the broader trend towards computational modelling of cases. To demonstrate the utility of this bridge, we describe how CBM can utilise ABM to identify case-based trends; explore the interactions and collective behaviour of cases; and study different scenarios. We also describe how ABM can utilise CBM to identify agent types; construct agent behaviour rules; and link these to outcomes to calibrate and validate model results. To further demonstrate the bridge, we review a public health study that made initial steps in combining CBM and ABM.
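As a purely illustrative sketch of the CBM-to-ABM direction described above (not the reviewed study's implementation), one might cluster case profiles into agent types and seed behaviour-rule parameters from the cluster centroids; the variable names, two-attribute case profiles and toy update rule below are all assumptions.

```python
# Hypothetical sketch: cases -> agent types -> behaviour-rule parameters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cases = rng.normal(size=(200, 2))          # 200 cases, 2 measured attributes

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(cases)
agent_types = kmeans.labels_               # each case becomes an agent of one type
rule_params = kmeans.cluster_centers_      # per-type behaviour-rule parameters

# One toy ABM step: each agent drifts toward its type's rule parameters.
agents = cases.copy()
agents += 0.1 * (rule_params[agent_types] - agents)
```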
103.
104.
The present investigation was undertaken to study the gillnet catch efficiency of sardines in the coastal waters of Sri Lanka using commercial catch and effort data. Commercial catch and effort data for the small-mesh gillnet fishery were collected in five fisheries districts during the period May 1999–August 2002. Gillnet catch efficiency of sardines was investigated by developing catch-rate prediction models from data on the commercial fisheries and environmental variables. Three statistical techniques [multiple linear regression, generalized additive model and regression tree model (RTM)] were employed to predict the catch rates of trenched sardine Amblygaster sirm (the key target species of the small-mesh gillnet fishery) and other sardines (Sardinella longiceps, S. gibbosa, S. albella and S. sindensis). The data collection programme was continued for another six months and the models were tested on the new data. RTMs were found to be the strongest in terms of reliability and accuracy of the predictions. The two operational characteristics used for model formulation (i.e. depth of fishing and number of gillnet pieces used per fishing operation) were more useful as predictor variables than the environmental variables. The study revealed that catch rates of A. sirm increase rapidly with sea depth up to around 32 m.
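As an illustration of the RTM approach (not the paper's fitted model), the sketch below grows a regression tree for catch rates from the two operational characteristics named in the abstract; the data are synthetic and the variable names are assumptions.

```python
# Illustrative RTM-style catch-rate predictor on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
depth = rng.uniform(5, 60, 500)                      # fishing depth (m)
pieces = rng.integers(5, 40, 500)                    # gillnet pieces per operation
# Toy response: catch rate rises with depth up to ~32 m, then levels off.
catch_rate = 2.0 * np.minimum(depth, 32) + 0.5 * pieces + rng.normal(0, 5, 500)

X = np.column_stack([depth, pieces])
X_train, X_test, y_train, y_test = train_test_split(X, catch_rate, random_state=0)

rtm = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20).fit(X_train, y_train)
print("held-out R^2:", round(rtm.score(X_test, y_test), 2))
```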
105.
An extended Gaussian max-stable process model for spatial extremes (total citations: 1; self-citations: 0; citations by others: 1)
The extremes of environmental processes are often of interest due to the damage that can be caused by extreme levels of the processes. These processes are often spatial in nature and modelling the extremes jointly at many locations can be important. In this paper, an extension of the Gaussian max-stable process is developed, enabling data from a number of locations to be modelled under a more flexible framework than in previous applications. The model is applied to annual maximum rainfall data from five sites in South-West England. For estimation we employ a pairwise likelihood within a Bayesian analysis, incorporating informative prior information.
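For reference, the baseline Gaussian (Smith) max-stable process that such extensions and pairwise likelihoods typically start from has, on unit Fréchet margins, the bivariate distribution

$$
\Pr\{Z(s_1)\le z_1,\, Z(s_2)\le z_2\}
= \exp\!\left[-\frac{1}{z_1}\,\Phi\!\left(\frac{a}{2}+\frac{1}{a}\log\frac{z_2}{z_1}\right)
           -\frac{1}{z_2}\,\Phi\!\left(\frac{a}{2}+\frac{1}{a}\log\frac{z_1}{z_2}\right)\right],
\qquad a^2 = (s_1-s_2)^{\top}\Sigma^{-1}(s_1-s_2),
$$

and the pairwise log-likelihood $\ell_p(\theta)=\sum_{i<j}\log f(z_i,z_j;\theta)$, summed over site pairs (and years), stands in for the intractable full joint likelihood inside the Bayesian analysis. This is the standard starting point, not the paper's extension itself.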
106.
André Robert Dabrowski, Professor of Mathematics and Dean of the Faculty of Sciences at the University of Ottawa, died October 7, 2006, after a short battle with cancer. The author of the present paper, a long‐term friend and collaborator of André Dabrowski, gives a survey of André's work on weak dependence and limit theorems in probability theory. The Canadian Journal of Statistics 37: 307–326; 2009 © 2009 Statistical Society of Canada
107.
Summary. We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences, and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any improvement in mixing will both speed up the methods and increase confidence in the estimates produced. The MCMC methodological literature is full of alternative algorithms designed to improve the mixing of chains, and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
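The data expansion mentioned above recasts each subject as one row per discrete period at risk, with a binary response indicating whether the event occurred in that period, after which a (multilevel) binary response model can be fitted. A minimal sketch with assumed column names:

```python
# Person-period expansion of discrete-time survival data (column names assumed).
import pandas as pd

surv = pd.DataFrame({
    "id":      [1, 2, 3],
    "time":    [2, 3, 1],        # last discrete period observed
    "event":   [1, 0, 1],        # 1 = event occurred at `time`, 0 = censored
    "cluster": ["A", "A", "B"],  # higher-level unit for the multilevel model
})

rows = []
for _, s in surv.iterrows():
    for t in range(1, int(s["time"]) + 1):
        rows.append({
            "id": s["id"],
            "period": t,
            "cluster": s["cluster"],
            "y": int(t == s["time"] and s["event"] == 1),
        })
expanded = pd.DataFrame(rows)
print(expanded)   # ready for a multilevel logistic / cloglog binary model
```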
108.
We examine the relationships between electoral socio‐demographic characteristics and two‐party preferences in the six Australian federal elections held between 2001 and 2016. Socio‐demographic information is derived from the Australian Census, which occurs every 5 years. Since a census is not directly available for each election, an imputation method is employed to estimate census data for the electorates at the time of each election. This accounts for both spatial and temporal changes in electoral characteristics between censuses. To capture any spatial heterogeneity, a spatial error model is estimated for each election, which incorporates a spatially structured random effect vector. Over time, the impact of most socio‐demographic characteristics that affect electoral two‐party preference does not vary, with age distribution, industry of work, incomes, household mobility and relationships having strong effects in each of the six elections. Education and unemployment are among those that have varying effects. All data featured in this study have been contributed to the eechidna R package (available on CRAN).
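In its standard form (our notation, not taken from the paper), a spatial error model for electorate-level two-party preference can be written as

$$
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{u}, \qquad
\mathbf{u} = \lambda \mathbf{W}\mathbf{u} + \boldsymbol{\varepsilon}, \qquad
\boldsymbol{\varepsilon} \sim N(\mathbf{0}, \sigma^2\mathbf{I}),
$$

where $\mathbf{W}$ is a spatial weights matrix over electorates and $\lambda$ captures residual spatial dependence; equivalently $\mathbf{u} = (\mathbf{I} - \lambda\mathbf{W})^{-1}\boldsymbol{\varepsilon}$, one common way of writing a spatially structured random effect vector.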
109.
We introduce a point source model that may be useful for estimating point sources in spatial data. It may also be useful for modelling general spatial data, providing a simple explanatory model in some cases and a parsimonious representation in others. The model assumes that there are point sources (or sinks), usually at unknown positions, and that the mean value at a site depends on the distance from these sources. We discuss the general form of the model and some methods for estimating the sources and the regression parameters. We demonstrate the methodology using a simulation study, and apply the model to two real data sets. Some possibilities for further research are outlined.
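A hedged sketch of one plausible parameterisation of such a model, with a single source whose location is estimated jointly with the regression parameters, is given below; the exponential decay form, variable names and fitting routine are assumptions for illustration, not the paper's specification.

```python
# Toy single-source model: mean(s) = beta0 + beta1 * exp(-||s - c|| / phi),
# with source location c and parameters estimated by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
sites = rng.uniform(0, 10, size=(300, 2))
true_src, b0, b1, phi = np.array([4.0, 6.0]), 1.0, 5.0, 1.5
d = np.linalg.norm(sites - true_src, axis=1)
y = b0 + b1 * np.exp(-d / phi) + rng.normal(0, 0.3, 300)

def resid(p):
    cx, cy, beta0, beta1, scale = p
    dist = np.linalg.norm(sites - np.array([cx, cy]), axis=1)
    return beta0 + beta1 * np.exp(-dist / scale) - y

fit = least_squares(
    resid,
    x0=[5.0, 5.0, 0.0, 1.0, 1.0],
    bounds=([0.0, 0.0, -np.inf, -np.inf, 0.1], [10.0, 10.0, np.inf, np.inf, np.inf]),
)
print("estimated source location:", fit.x[:2].round(2))
```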
110.
The last decade saw enormous progress in the development of causal inference tools to account for noncompliance in randomized clinical trials. With survival outcomes, structural accelerated failure time (SAFT) models enable causal estimation of the effects of observed treatments without making direct assumptions on the compliance selection mechanism. The traditional proportional hazards model has, however, rarely been used for causal inference. The estimator proposed by Loeys and Goetghebeur (2003, Biometrics vol. 59, pp. 100–105) is limited to the setting of all-or-nothing exposure. In this paper, we propose an estimation procedure for more general causal proportional hazards models linking the distribution of potential treatment-free survival times to the distribution of observed survival times via observed (time-constant) exposures. Specifically, we first build models for the observed exposure-specific survival times. Next, using the proposed causal proportional hazards model, the exposure-specific survival distributions are backtransformed to their treatment-free counterparts to obtain, after proper mixing, the unconditional treatment-free survival distribution. Estimation of the parameter(s) in the causal model is then based on minimizing a test statistic for equality of the backtransformed survival distributions between randomized arms.
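In one common notation (ours, assuming a hazard ratio of $\exp(\psi z)$ between observed and treatment-free times within exposure level $z$), the backtransformation and mixing steps described above read

$$
S_{T_0}(t \mid Z = z) = S_T(t \mid Z = z)^{\exp(-\psi z)}, \qquad
S_{T_0}(t) = \sum_{z} \Pr(Z = z)\, S_{T_0}(t \mid Z = z),
$$

with $\hat{\psi}$ chosen to minimise a test statistic for equality of the backtransformed treatment-free survival curves between the randomized arms.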