Search results: 33 articles (10 shown below).
1.
We show how the Hamiltonian Monte Carlo algorithm can sometimes be speeded up by “splitting” the Hamiltonian in a way that allows much of the movement around the state space to be done at low computational cost. One context where this is possible is when the log density of the distribution of interest (the potential energy function) can be written as the log of a Gaussian density, which is a quadratic function, plus a slowly-varying function. Hamiltonian dynamics for quadratic energy functions can be analytically solved. With the splitting technique, only the slowly-varying part of the energy needs to be handled numerically, and this can be done with a larger stepsize (and hence fewer steps) than would be necessary with a direct simulation of the dynamics. Another context where splitting helps is when the most important terms of the potential energy function and its gradient can be evaluated quickly, with only a slowly-varying part requiring costly computations. With splitting, the quick portion can be handled with a small stepsize, while the costly portion uses a larger stepsize. We show that both of these splitting approaches can reduce the computational cost of sampling from the posterior distribution for a logistic regression model, using either a Gaussian approximation centered on the posterior mode, or a Hamiltonian split into a term that depends on only a small number of critical cases, and another term that involves the larger number of cases whose influence on the posterior distribution is small.
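A minimal sketch of the first splitting scheme, under the assumption that the Gaussian part of the potential has been whitened to a standard normal (so its Hamiltonian flow is an exact rotation of (q, p)); `U1` and `grad_U1` stand for the slowly-varying residual and are illustrative placeholders, not the paper's implementation:

```python
import numpy as np

def split_hmc_step(q, U1, grad_U1, eps, n_steps, rng):
    """One split-HMC update (illustrative sketch).

    Assumes the target potential is U(q) = 0.5*q@q + U1(q): the exact
    rotation handles the Gaussian part, and only the slowly-varying
    residual U1 is integrated numerically, permitting a large stepsize.
    """
    p = rng.standard_normal(q.shape)                 # resample momentum
    H0 = 0.5 * q @ q + U1(q) + 0.5 * p @ p
    qn, pn = q.copy(), p.copy()
    for _ in range(n_steps):
        pn = pn - 0.5 * eps * grad_U1(qn)            # half kick on residual
        c, s = np.cos(eps), np.sin(eps)              # exact Gaussian flow
        qn, pn = c * qn + s * pn, c * pn - s * qn
        pn = pn - 0.5 * eps * grad_U1(qn)            # half kick on residual
    H1 = 0.5 * qn @ qn + U1(qn) + 0.5 * pn @ pn
    return qn if np.log(rng.uniform()) < H0 - H1 else q  # Metropolis step
```

Because the Gaussian flow is handled exactly, the stepsize `eps` is constrained only by the smoothness of the residual `U1`, not by the curvature of the Gaussian approximation.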
2.
Non-Gaussian spatial responses are usually modeled using a spatial generalized linear mixed model with spatial random effects. The likelihood function of this model usually cannot be given in closed form, so the maximum likelihood approach is very challenging. There are numerical ways to maximize the likelihood function, such as the Monte Carlo Expectation Maximization and Quadrature Pairwise Expectation Maximization algorithms, but in such cases they can be computationally very slow or even prohibitive. The Gauss–Hermite quadrature approximation is only suitable for low-dimensional latent variables, and its accuracy depends on the number of quadrature points. Here, we propose a new approximate pairwise maximum likelihood method for inference in the spatial generalized linear mixed model. This approximate method is fast and deterministic, using no sampling-based strategies. The performance of the proposed method is illustrated through two simulation examples, and practical aspects are investigated through a case study on a rainfall data set.
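For contrast with the pairwise approach, here is a sketch of the Gauss–Hermite quadrature approximation mentioned above, for a single one-dimensional Gaussian random effect with an assumed Poisson response; the function name and response family are illustrative:

```python
import numpy as np
from scipy.stats import poisson

def gh_marginal_loglik(y, mu, sigma, n_points=20):
    """Gauss-Hermite approximation of a marginal log-likelihood (sketch).

    Integrates a Poisson likelihood over one Gaussian random effect,
        L = integral of prod_j Poisson(y_j | exp(mu + b)) N(b; 0, sigma^2) db,
    via the substitution b = sqrt(2)*sigma*x. Accuracy grows with
    n_points, and the approach does not scale to high-dimensional
    latent fields -- the limitation the pairwise method sidesteps.
    """
    x, w = np.polynomial.hermite.hermgauss(n_points)  # physicists' rule
    b = np.sqrt(2.0) * sigma * x                      # change of variables
    like = np.array([poisson.pmf(y, np.exp(mu + bi)).prod() for bi in b])
    return np.log((w * like).sum() / np.sqrt(np.pi))
```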
3.
Assuming that MNCs face a much more complex environment than local enterprises, the paper begins by discussing what the economist Beckerman called psychic distance. After a historical discussion of this concept, I also discuss what O'Grady and Lane called the psychic distance paradox. Then, I argue that these two concepts have a great deal of relevance beyond their original context of international trade: they are also relevant to FDI and all other forms of international production and exchange. Next, I argue, as I have done in several previous papers, that behavioral economics has a great deal of relevance to FDI and international productive activity; I also argue that behavioral economics can be used to describe the concepts of psychic distance and the psychic distance paradox. Then, I develop a behavioral economics-based model that can explain these two concepts and their relevance to the modes of entry of MNCs in international markets. In doing so, I argue that FDI and MNC behavior in general need not be explained outside of economics, since, in contrast to neo-classical economics, behavioral economics is capable of capturing the complexities of global markets.
4.
Spatial generalised linear mixed models are commonly used for modelling non-Gaussian discrete spatial responses. In these models, the spatial correlation structure of the data is modelled by spatial latent variables. Most users are satisfied with a normal distribution for these variables, but in many applications it is unclear whether or not the normal assumption holds. This assumption is relaxed in the present work by using a closed skew normal distribution for the spatial latent variables, which is more flexible and includes the normal and skew normal distributions. Parameter estimates and spatial predictions are calculated using the Markov chain Monte Carlo method. Finally, the performance of the proposed model is analysed via two simulation studies, followed by a case study in which practical aspects are dealt with. The proposed model appears to give a smaller cross-validation mean square error of spatial prediction than the normal prior in modelling the temperature data set.
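For intuition about the relaxed prior, a sketch of the stochastic representation of the univariate skew normal, the simplest special case of the closed skew normal family used here; this is not the paper's full multivariate construction:

```python
import numpy as np

def rskewnormal(n, loc=0.0, scale=1.0, alpha=0.0, rng=None):
    """Draw skew-normal variates (illustrative sketch).

    Uses the representation X = delta*|Z0| + sqrt(1 - delta^2)*Z1 with
    delta = alpha / sqrt(1 + alpha^2); alpha controls the skewness and
    alpha = 0 gives back the ordinary normal distribution being relaxed.
    """
    rng = rng or np.random.default_rng()
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    z0 = np.abs(rng.standard_normal(n))   # half-normal component
    z1 = rng.standard_normal(n)           # symmetric component
    return loc + scale * (delta * z0 + np.sqrt(1.0 - delta ** 2) * z1)
```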
5.
Assuming the division of behavioral economics into old and new, the paper begins by arguing that old behavioral economics began with the works of two giants, George Katona and Herbert Simon, during the 1950s and early 1960s. The contributions of Herbert Simon are well established, thanks to the popularity of bounded rationality and satisficing and to his being awarded the Nobel Prize in economics. However, economists are much less familiar with the contributions of George Katona, who can be viewed as the father of behavioral economics. Furthermore, the author argues that Katona was misunderstood by various economists when he was attempting to create a psychologically based economics that rejected the mechanistic psychology of neoclassical economics, and when he introduced to economic research the survey method he had previously been using in his experimental psychology research. He also influenced various economists during their debates in the 1950s without being given credit for it. Many historians of behavioral economics limit Katona's contributions to the start of behavioral economics to his contributions to macroeconomics. However, the paper demonstrates that Katona's behavioral economics included contributions to macroeconomics (bringing realism to the Keynesian consumption function and consumption behavior), microeconomics (business behavior, the rationality assumption, etc.), public finance and economic policy, and the introduction of the survey method. To demonstrate these contributions, the author argues that Katona attempted to bring realism to economic analysis, through psychological concepts, beginning with his early days of research in Germany, which coincided with the German hyperinflation, and continuing through his work at the New School for Social Research, the University of Chicago's Cowles Commission, the U.S. Department of Agriculture, and the University of Michigan's Survey Research Center. The author also argues that Katona's contributions went through stages, depending upon what economic problem persisted at the time, what adversities he was facing, and what institution or organization he was associated with.
6.
Resource scheduling for emergency relief operations is complex, as it involves many constraints. Effective allocation and sequencing of resources are nevertheless crucial for minimizing completion times in emergency relief operations. Despite the importance of such decisions, only a few mathematical models of emergency relief operations have been studied. This article presents a bi-objective mixed integer programming (MIP) model that minimizes both the total weighted completion time of the demand points and the makespan of the total emergency relief operation. A two-phase method is developed to solve the bi-objective MIP problem. Additionally, a case study of a hospital network in the Melbourne metropolitan area is used to evaluate the model. The results indicate that the model can successfully support the decisions required for optimal resource scheduling of emergency relief operations.
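The abstract does not spell out the two-phase method; the sketch below shows one standard first phase, a dichotomic weighted-sum search that enumerates supported non-dominated points of a bi-objective MIP. The tiny binary model, its data, and the "serve at least 5 tasks" constraint are hypothetical stand-ins, not the hospital-network formulation:

```python
import pulp

# Hypothetical toy data: contributions of 8 relief tasks to two objectives.
c1 = [4, 7, 2, 9, 5, 6, 3, 8]
c2 = [8, 3, 9, 2, 6, 4, 7, 1]
n = len(c1)

def weighted_solve(w1, w2):
    """Minimize w1*f1 + w2*f2 over a toy binary feasible set."""
    prob = pulp.LpProblem("relief_toy", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(n)]
    prob += pulp.lpSum(x) >= 5                       # serve >= 5 tasks
    f1 = pulp.lpSum(c1[j] * x[j] for j in range(n))
    f2 = pulp.lpSum(c2[j] * x[j] for j in range(n))
    prob += w1 * f1 + w2 * f2                        # scalarized objective
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(f1), pulp.value(f2)

def supported(zA, zB, out):
    """Recursively search between two points on the Pareto front."""
    w1, w2 = zA[1] - zB[1], zB[0] - zA[0]            # normal to segment AB
    zC = weighted_solve(w1, w2)
    if w1 * zC[0] + w2 * zC[1] < w1 * zA[0] + w2 * zA[1] - 1e-6:
        supported(zA, zC, out)
        out.append(zC)
        supported(zC, zB, out)

zA = weighted_solve(1.0, 1e-4)    # near-lexicographic extreme for f1
zB = weighted_solve(1e-4, 1.0)    # near-lexicographic extreme for f2
front = [zA]
supported(zA, zB, front)
front.append(zB)
print(front)                      # supported non-dominated points
```

A second phase would then close the gaps between consecutive supported points (for example with epsilon-constraints or branch-and-bound) to recover any unsupported non-dominated solutions.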
7.
Objective: To investigate the association between serum levels of testosterone and biomarkers of subclinical atherosclerosis, based on data from 119 middle-aged men from the general population.

Methods: Testosterone, apolipoprotein A-1 (ApoA-1), apolipoprotein B (ApoB), the apolipoprotein B-to-apolipoprotein A-1 ratio (ApoB-to-ApoA-1), high-sensitivity C-reactive protein (hsCRP), and fibrinogen levels were measured. Data were also gathered on age, BMI, waist circumference, smoking, alcohol consumption, and family history of cardiovascular disease. Men were classified into two groups based on testosterone levels: hypogonadal (testosterone ≤ 12 nmol/L) and eugonadal (testosterone > 12 nmol/L).

Results: Compared to eugonadal men, the hypogonadal men were significantly older (56 vs. 55 years, p = .03), had a greater BMI (28 vs. 26 kg/m², p = .01), and had a larger waist circumference (104 vs. 100 cm, p = .01). Moreover, ApoB, the ApoB-to-ApoA-1 ratio, and hsCRP were significantly higher in hypogonadal than in eugonadal men (1.1 vs. 1.0 g/L, p = .03; 0.8 vs. 0.7, p = .03; and 3.3 vs. 2.0 mg/L, p = .01, respectively). On the other hand, ApoA-1 and fibrinogen levels did not differ significantly between groups (p > .05). In an adjusted multivariate regression model, only ApoB showed a significant negative association with testosterone levels (β = −0.01; 95% CI: −0.02, −1.50; p = .04).

Conclusion: Testosterone levels showed an inverse relation to ApoB, a biomarker implicated in subclinical atherosclerosis. These findings support the hypothesis that low testosterone levels play a role in atherosclerosis.
8.
Data on the weights and heights of children 2–18 years old in Iran were obtained in a National Health Survey of 10 660 families in 1990–92. Data were 'cleaned' in one-year age groups. After excluding gross outliers by inspection of bivariate scatter plots, Box-Cox power transformations were used to normalize the distributions of height and weight. If a multivariate Box-Cox power transformation to normality exists, then it is equivalent to normalizing the data variable by variable. After excluding gross outliers, exclusions based on the Mahalanobis distance were almost identical to those identified by Hadi's iterative procedure, because the percentages of outliers were small. In all, 1% of the observations were gross outliers and a further 0.4% were identified by multivariate analysis. Review of the records showed that the outliers identified by multivariate analysis resulted from data-processing errors. After transformation and 'cleaning', the data quality was excellent and suitable for the construction of growth charts.
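A sketch of the two-stage cleaning described above for one age group; the chi-square cutoff is an illustrative assumption, not the survey's actual threshold:

```python
import numpy as np
from scipy import stats

def clean_group(height, weight, chi2_q=0.999):
    """Screen one age group for multivariate outliers (sketch).

    1. Box-Cox transform each variable toward normality (gross outliers
       are assumed to have been removed from scatter plots already).
    2. Flag points whose squared Mahalanobis distance exceeds a
       chi-square quantile with 2 degrees of freedom.
    Returns a boolean mask of observations to keep.
    """
    h, _ = stats.boxcox(np.asarray(height, dtype=float))
    w, _ = stats.boxcox(np.asarray(weight, dtype=float))
    X = np.column_stack([h, w])
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # Mahalanobis^2
    return d2 <= stats.chi2.ppf(chi2_q, df=2)
```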
9.
State-of-the-art market segmentation often involves simultaneous consideration of multiple and overlapping variables. These variables are studied to assess their relationships, select a subset of variables which best represent the subgroups (segments) within a market, and determine the likelihood of membership of a given individual in a particular segment. Such information, obtained in the exploratory phase of a multivariate market segmentation study, leads to the construction of more parsimonious models. These models have less stringent data requirements while facilitating substantive evaluation to aid marketing managers in formulating more effective targeting and positioning strategies within different market segments. This paper utilizes the information-theoretic (IT) approach to address several issues in multivariate market segmentation studies. A marketing data set analyzed previously is employed to examine the suitability and usefulness of the proposed approach [12]. Some useful extensions of the IT framework and its applications are also discussed.
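The abstract leaves the IT machinery unspecified; purely as a generic illustration of one information-theoretic ingredient, the sketch below ranks candidate categorical segmentation variables by their mutual information with segment membership (this is not the paper's framework):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def rank_segmentation_variables(X, segment):
    """Rank coded categorical variables by mutual information (sketch).

    X: (n, p) integer-coded categorical variables; segment: (n,) labels.
    Returns column indices sorted from most to least informative about
    segment membership -- one crude route to a parsimonious subset.
    """
    mi = np.array([mutual_info_score(segment, X[:, j])
                   for j in range(X.shape[1])])
    order = np.argsort(mi)[::-1]
    return order, mi[order]
```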
10.
For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity of the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function that approximates the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm that converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions, such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms than existing state-of-the-art methods.
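A sketch of the surrogate idea under simple assumptions: fit a random-basis expansion of the potential by ridge least squares on points where the true (expensive) potential has been evaluated, then hand HMC the surrogate's analytic gradient. The cosine basis and the fitting scheme are illustrative choices, not the paper's construction:

```python
import numpy as np

def fit_surrogate(Z, U_vals, n_bases=200, lam=1e-3, rng=None):
    """Fit U~(q) = sum_k c_k * cos(W_k . q + b_k) to (Z, U_vals) (sketch).

    Z: (m, d) points where the true potential was evaluated; U_vals: (m,)
    potential values. Ridge least squares gives the coefficients c; the
    surrogate and its exact gradient are cheap to evaluate inside HMC.
    """
    rng = rng or np.random.default_rng()
    W = rng.standard_normal((n_bases, Z.shape[1]))   # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, n_bases)       # random phases
    Phi = np.cos(Z @ W.T + b)                        # (m, n_bases) design
    c = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_bases), Phi.T @ U_vals)

    def U_tilde(q):                                  # surrogate potential
        return np.cos(W @ q + b) @ c

    def grad_U_tilde(q):                             # analytic gradient
        return -(np.sin(W @ q + b) * c) @ W

    return U_tilde, grad_U_tilde
```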