Similar Documents
20 similar documents retrieved.
1.
Urbanization has reshaped China's social and economic structure, and its effect on crime has been changing as well; the standing assumption of a linear relationship between urbanization and the crime rate therefore needs to be revisited. Using the MS-VAR (Markov-switching vector autoregression) methodology, we study the nonlinear relationship between China's crime rate and urbanization rate over 1978–2011. The results show that this relationship displays a pronounced two-regime pattern: in regime 2 (rapid urbanization), urbanization significantly drives up the crime rate, whereas in regime 1 (moderately paced urbanization), no significant relationship between the two is found.
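A regime-switching analysis of this kind can be sketched with statsmodels. The snippet below is a simplified, hedged stand-in: a single-equation two-regime Markov-switching regression rather than the paper's full MS-VAR, with hypothetical `crime_rate` and `urban_rate` series for 1978–2011.

```python
# A minimal two-regime Markov-switching sketch (a simplified stand-in for the
# paper's MS-VAR): the crime rate is regressed on the urbanisation rate with
# regime-dependent coefficients and variances. All series are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

years = pd.period_range("1978", "2011", freq="Y")
rng = np.random.default_rng(0)
urban_rate = pd.Series(np.linspace(18.0, 52.0, len(years)), index=years)
# Synthetic crime series whose response to urbanisation strengthens mid-sample.
slope = np.where(np.arange(len(years)) < 20, 0.01, 0.04)
crime_rate = pd.Series(0.5 + slope * urban_rate.values
                       + rng.normal(scale=0.05, size=len(years)), index=years)

mod = sm.tsa.MarkovRegression(
    crime_rate, k_regimes=2, exog=urban_rate,
    switching_variance=True,  # allow the error variance to differ across regimes
)
res = mod.fit()
print(res.summary())
# Smoothed probability of the second regime at each date:
print(res.smoothed_marginal_probabilities[1])
```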

2.
This study uses recent advances in time-series econometrics to investigate the non-stationarity and co-integration properties of violent crime series in England and Wales. In particular, we estimate the long-run impact of economic conditions, beer consumption and various deterrents on different categories of recorded violent crime. The results suggest that a long-run causal model exists for only minor crimes of violence, with beer consumption being a predominant factor.
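As a hedged sketch of this kind of long-run analysis, the snippet below runs unit-root and Engle–Granger cointegration tests with statsmodels on synthetic stand-ins for the violent-crime and beer-consumption series (the variables are hypothetical, not the study's data).

```python
# Unit-root checks and an Engle-Granger cointegration test between a recorded
# violent-crime series and beer consumption (synthetic, cointegrated stand-ins).
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(1)
n = 120
beer = np.cumsum(rng.normal(size=n))                    # I(1) proxy series
violence = 0.8 * beer + rng.normal(scale=0.5, size=n)   # cointegrated by construction

print("ADF p-value, violence:", adfuller(violence)[1])  # non-stationarity check
print("ADF p-value, beer:    ", adfuller(beer)[1])
t_stat, p_value, _ = coint(violence, beer)              # Engle-Granger two-step test
print("cointegration test p-value:", p_value)
```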

3.
Usual fitting methods for the nested error linear regression model are known to be very sensitive to the effect of even a single outlier. Robust approaches for the unbalanced nested error model with proved robustness and efficiency properties, such as M-estimators, are typically obtained through iterative algorithms. These algorithms are often computationally intensive and require robust estimates of the same parameters to start the algorithms, but so far no robust starting values have been proposed for this model. This paper proposes computationally fast robust estimators for the variance components under an unbalanced nested error model, based on a simple robustification of the fitting-of-constants method or Henderson method III. These estimators can be used as starting values for other iterative methods. Our simulations show that they are highly robust to various types of contamination of different magnitude.

4.
Summary. This study examines household and area effects on the incidence of total property crimes and burglaries and thefts. It uses data from the 2000 British Crime Survey and the 1991 UK census small area statistics. Results are obtained from estimated random-effects multilevel models, with an assumed negative binomial distribution of the dependent variable. Both household and area characteristics, as well as selected interactions, explain a significant portion of the variation in property crimes. There are also a large number of significant between-area random variances and covariances of household characteristics. The estimated fixed and random effects may assist in advancing victimization theory. The methods have potential for developing a better understanding of factors that give rise to crime and so assist in framing crime prevention policy.
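The random-effects multilevel negative binomial model itself is not available off the shelf in statsmodels; as a simplified, hedged stand-in, the sketch below fits a single-level negative binomial GLM for household property-crime counts and uses area-clustered standard errors to acknowledge between-area variation. All data and variable names are hypothetical.

```python
# A single-level negative binomial GLM with area-clustered standard errors,
# used here as a rough stand-in for a random-effects multilevel model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_areas, per_area = 50, 40
area_id = np.repeat(np.arange(n_areas), per_area)
income = rng.normal(size=n_areas * per_area)
area_effect = rng.normal(scale=0.3, size=n_areas)[area_id]    # unobserved area heterogeneity
mu = np.exp(-0.5 + 0.3 * income + area_effect)
crime_count = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))  # mean equals mu

df = pd.DataFrame({"crime_count": crime_count, "income": income, "area_id": area_id})
model = smf.glm("crime_count ~ income", data=df,
                family=sm.families.NegativeBinomial(alpha=0.5))
# Clustering by area approximates the between-area variation that the paper
# captures with random effects.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["area_id"]})
print(result.summary())
```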

5.
Error rate is a popular criterion for assessing the performance of an allocation rule in discriminant analysis. Training samples which involve missing values cause problems for those error rate estimators that require all variables to be observed at all data points. This paper explores imputation algorithms, their effects on, and problems of implementing them with, eight commonly used error rate estimators (three parametric and five non-parametric) in linear discriminant analysis. The results indicate that imputation should not be based on the way error rate estimators are calculated, and that imputed values may underestimate error rates.
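A minimal sketch of the workflow discussed above, under stated assumptions: mean imputation of missing training values followed by a cross-validated (non-parametric) error-rate estimate for linear discriminant analysis, using scikit-learn on synthetic two-class data.

```python
# Mean imputation + LDA, with a cross-validated error-rate estimate.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, size=(100, 4)), rng.normal(1, 1, size=(100, 4))])
y = np.repeat([0, 1], 100)
X[rng.random(X.shape) < 0.1] = np.nan     # inject roughly 10% missing values

clf = make_pipeline(SimpleImputer(strategy="mean"), LinearDiscriminantAnalysis())
accuracy = cross_val_score(clf, X, y, cv=10)
print("cross-validated error rate:", 1 - accuracy.mean())
```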

6.
7.
Summary. We consider the association between victimization and offending behaviour by using data from the Youth Lifestyles Survey. We examine the effect of violent and non-violent offending on the probability of being a victim of violent and non-violent crime and find a positive association between the two using univariate probit estimates. However, taking into account the endogenous nature of offending and victimization via a bivariate probit model, we find that the univariate estimates understate the association. We suggest that policy recommendations should only be informed by the bivariate analysis of the association between offending and victimization.
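statsmodels does not ship a bivariate probit, so the hedged sketch below shows only the univariate step of such an analysis: a probit for the probability of violent victimisation with an offending indicator as a regressor. Variable names are hypothetical; the joint model the paper recommends would require a custom bivariate likelihood.

```python
# Univariate probit for victimisation on an offending indicator (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
offender = rng.binomial(1, 0.2, size=n)
age = rng.integers(12, 31, size=n)
latent = -1.0 + 0.6 * offender - 0.02 * age + rng.normal(size=n)
victim = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([offender, age]))
probit_res = sm.Probit(victim, X).fit()
print(probit_res.summary())
```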

8.
We introduce two types of graphical log-linear models: label- and level-invariant models for triangle-free graphs. These models generalise symmetry concepts in graphical log-linear models and provide a tool with which to model symmetry in the discrete case. A label-invariant model is category-invariant and is preserved after permuting some of the vertices according to transformations that maintain the graph, whereas a level-invariant model equates expected frequencies according to a given set of permutations. These new models can both be seen as instances of a new type of graphical log-linear model termed the restricted graphical log-linear model, or RGLL, in which equality restrictions on subsets of main effects and first-order interactions are imposed. Their likelihood equations and graphical representation can be obtained from those derived for the RGLL models.

9.
This paper concerns the geometric treatment of graphical models using Bayes linear methods. We introduce Bayes linear separation as a second order generalised conditional independence relation, and Bayes linear graphical models are constructed using this property. A system of interpretive and diagnostic shadings is given, which summarises the analysis over the associated moral graph. Principles of local computation are outlined for the graphical models, and an algorithm for implementing such computation over the junction tree is described. The approach is illustrated with two examples. The first concerns sales forecasting using a multivariate dynamic linear model. The second concerns inference for the error variance matrices of the model for sales, and illustrates the generality of our geometric approach by treating the matrices directly as random objects. The examples are implemented using a freely available set of object-oriented programming tools for Bayes linear local computation and graphical diagnostic display.

10.
Recent research has demonstrated that information learned from building a graphical model on the predictor set of a regularized linear regression model can be leveraged to improve prediction of a continuous outcome. In this article, we present a new model that encourages sparsity at both the level of the regression coefficients and the level of individual contributions in a decomposed representation. This model provides parameter estimates with a finite sample error bound and exhibits robustness to errors in the input graph structure. Through a simulation study and the analysis of two real data sets, we demonstrate that our model provides a predictive benefit when compared to previously proposed models. Furthermore, it is a highly flexible model that provides a unified framework for the fitting of many commonly used regularized regression models. The Canadian Journal of Statistics 47: 729–747; 2019 © 2019 Statistical Society of Canada
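The proposed estimator is not part of standard libraries; as a hedged illustration of the general idea (learn a sparse graph on the predictors, then fit a sparse regression on the same predictors), the sketch below combines scikit-learn's GraphicalLassoCV and LassoCV on synthetic data. It is a simplified stand-in, not the paper's decomposed-representation model.

```python
# Step 1: estimate a sparse predictor graph; step 2: fit a sparse regression.
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n, p = 200, 20
X = rng.normal(size=(n, p))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n)   # correlated predictors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

graph = GraphicalLassoCV().fit(X)          # sparse precision matrix = predictor graph
edges = np.sum(np.abs(graph.precision_) > 1e-6) - p
print("estimated number of predictor edges:", edges // 2)

lasso = LassoCV(cv=5).fit(X, y)            # sparse regression on the same predictors
print("nonzero regression coefficients:", np.flatnonzero(lasso.coef_))
```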

11.
Statistical model learning problems are traditionally solved using either heuristic greedy optimization or stochastic simulation, such as Markov chain Monte Carlo or simulated annealing. Recently, there has been an increasing interest in the use of combinatorial search methods, including those based on computational logic. Some of these methods are particularly attractive since they can also be successful in proving the global optimality of solutions, in contrast to stochastic algorithms that only guarantee optimality at the limit. Here we improve and generalize a recently introduced constraint-based method for learning undirected graphical models. The new method combines perfect elimination orderings with various strategies for solution pruning and offers a dramatic improvement both in terms of time and memory complexity. We also show that the method is capable of efficiently handling a more general class of models, called stratified/labeled graphical models, which have an astronomically larger model space.
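A key ingredient named in the abstract, the perfect elimination ordering, can be computed by maximum cardinality search. The self-contained sketch below implements just that step (not the full constraint-based learning method) for a small chordal graph using networkx.

```python
# Maximum cardinality search: the reverse of the visiting order is a perfect
# elimination ordering when the graph is chordal.
import networkx as nx

def maximum_cardinality_search(G):
    """Return a perfect elimination ordering of G if G is chordal."""
    weights = {v: 0 for v in G}
    order = []
    while weights:
        v = max(weights, key=weights.get)   # vertex with most already-visited neighbours
        order.append(v)
        del weights[v]
        for u in G.neighbors(v):
            if u in weights:
                weights[u] += 1
    return list(reversed(order))

G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])  # a small chordal (decomposable) graph
print("chordal:", nx.is_chordal(G))
print("perfect elimination ordering:", maximum_cardinality_search(G))
```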

12.
The comparison of an estimated parameter to its standard error, the Wald test, is a well known procedure of classical statistics. Here we discuss its application to graphical Gaussian model selection. First we derive the Fisher information matrix and its inverse about the parameters of any graphical Gaussian model. Both the covariance matrix and its inverse are considered and a comparative analysis of the asymptotic behaviour of their maximum likelihood estimators (m.l.e.s) is carried out. Then we give an example of model selection based on the standard errors. The method is shown to produce almost identical inference to likelihood ratio methods in the example considered.

13.
Parametric and non-parametric lifetime data analyses in practical applications require sensitive tools if non-monotonic ageing properties (trend changes) are to be examined. The well known bathtub-shaped hazard rate is a special model for describing a trend change in ageing properties over time. The identification of trend changes in the hazard rate can be supported by graphical tools. This paper discusses the combined application of graphical tools and parametric estimation in the flexible mixed gamma distribution family to identify trend changes and model bathtub-shaped hazard rates.
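One widely used graphical tool for spotting a bathtub shape is the scaled TTT (total time on test) plot: a curve that is first convex and then concave, crossing the diagonal once, suggests a bathtub-shaped hazard. The sketch below is generic and hedged; it does not reproduce the paper's mixed-gamma analysis, and the lifetime sample is hypothetical.

```python
# Scaled TTT plot for a complete lifetime sample.
import numpy as np
import matplotlib.pyplot as plt

def scaled_ttt(times):
    """Return (i/n, scaled TTT) for a complete (uncensored) lifetime sample."""
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    # Total time on test at the i-th failure, i = 1..n
    ttt = np.array([t[:i].sum() + (n - i) * t[i - 1] for i in range(1, n + 1)])
    return np.arange(1, n + 1) / n, ttt / ttt[-1]

rng = np.random.default_rng(6)
lifetimes = rng.weibull(1.5, size=200)        # hypothetical lifetime sample
u, phi = scaled_ttt(lifetimes)

plt.plot(u, phi, label="scaled TTT")
plt.plot([0, 1], [0, 1], "k--", label="exponential reference")
plt.xlabel("i/n")
plt.ylabel("scaled TTT")
plt.legend()
plt.show()
```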

14.
We investigate the estimation of dynamic models of criminal activity, when there is significant under-recording of crime. We give a theoretical analysis and use simulation techniques to investigate the resulting biases in conventional regression estimates. We find the biases to be of little practical significance. We develop and apply a new simulated maximum likelihood procedure that estimates simultaneously the measurement error and crime processes, using extraneous survey data. This also confirms that measurement error biases are small. Our estimation results for data from England and Wales imply a significant response of crime to both the economic and the enforcement environment.

15.
The problem of selecting a graphical model is treated as one of performing multiple tests simultaneously. The overall Type I error on the selected graph is controlled using the well-known Holm procedure. We prove that when a consistent edge-exclusion test is used, the selected graph is asymptotically equal to the true graph with probability at least a fixed level 1 − α. The method is then used to select mixed concentration graph models by performing the χ2 edge-exclusion test. We also apply the method to two classical examples and to simulated data, and compare the overall error of the selected model with that obtained using the stepwise method, establishing that the control is better under the Holm procedure.
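The multiple-testing step can be sketched directly: given p-values from the χ2 edge-exclusion tests (hypothetical values below), Holm's step-down procedure decides which edges to retain while controlling the overall Type I error.

```python
# Holm's step-down procedure applied to edge-exclusion p-values.
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values of the chi-squared edge-exclusion test, one per edge.
edges = [("X1", "X2"), ("X1", "X3"), ("X2", "X3"), ("X3", "X4")]
p_values = np.array([0.001, 0.20, 0.03, 0.45])

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for edge, keep, p_adj in zip(edges, reject, p_adjusted):
    # An edge is retained in the selected graph when its exclusion is rejected.
    print(edge, "kept" if keep else "removed", f"(adjusted p = {p_adj:.3f})")
```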

16.
Semiparametric regression models that use spline basis functions with penalization have graphical model representations. This link is more powerful than previously established mixed model representations of semiparametric regression, as a larger class of models can be accommodated. Complications such as missingness and measurement error are more naturally handled within the graphical model architecture. Directed acyclic graphs, also known as Bayesian networks, play a prominent role. Graphical model-based Bayesian 'inference engines', such as BUGS and VIBES, facilitate fitting and inference. Underlying these are Markov chain Monte Carlo schemes and recent developments in variational approximation theory and methodology.

17.
In this paper, we consider two well-known parametric long-term survival models, namely, the Bernoulli cure rate model and the promotion time (or Poisson) cure rate model. Assuming the long-term survival probability to depend on a set of risk factors, the main contribution is the development of a stochastic expectation maximization (SEM) algorithm to determine the maximum likelihood estimates of the model parameters. We carry out a detailed simulation study to demonstrate the performance of the proposed SEM algorithm, assuming the lifetimes due to each competing cause to follow a two-parameter generalized exponential distribution, and we compare the results with those obtained from the well-known expectation maximization (EM) algorithm. Furthermore, we investigate a simplified estimation procedure for both the SEM and EM algorithms that allows the objective function being maximized to be split into simpler functions of lower dimension in the model parameters. We also present examples where the EM algorithm fails to converge but the SEM algorithm still works. For illustrative purposes, we analyze breast cancer survival data and use a graphical method to assess the goodness of fit of the model with generalized exponential lifetimes.

18.
The graphical belief model is a versatile tool for modeling complex systems. The graphical structure and its implicit probabilistic and logical independence conditions define the relationships between many of the variables of the problem. The graphical model is composed of a collection of local models: models of both interactions between the variables sharing a common hyperedge and information about single variables. These local models can be constructed with either probability distributions or belief functions. This paper takes the latter approach and describes simple models for univariate and multivariate belief functions. The examples are taken from both reliability and knowledge representation problems.

19.
The feasibility of a new clinical trial may be increased by incorporating historical data from previous trials. In the particular case where data from only a single historical trial are available, there is no clear recommendation in the literature regarding the most favorable approach. A main problem of incorporating historical data is the possible inflation of the type I error rate. One way to control this type of error is the so-called power prior approach. This Bayesian method does not "borrow" the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two-armed trials with binary outcome, while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditionally on these characteristics, an increase in power compared to a trial without borrowing may result. Similarly, we propose methods for reducing the required sample size. The results are discussed and compared to those obtained in a Bayesian framework. Application is illustrated by a clinical trial example.
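For a single binomial arm the power prior has a closed form under a conjugate Beta prior: the historical likelihood is raised to the power δ and combined with a Beta(a, b) initial prior, which by conjugacy gives another Beta distribution. The sketch below illustrates this; the counts and the value of δ are purely illustrative, not taken from any trial.

```python
# Power prior for a binomial response rate with a conjugate Beta initial prior.
from scipy import stats

a, b = 1.0, 1.0                 # initial Beta prior
x_hist, n_hist = 30, 100        # historical trial: responders / patients (illustrative)
x_curr, n_curr = 24, 80         # current trial (illustrative)
delta = 0.5                     # fraction of historical information borrowed

# Posterior: Beta(a + delta*x_hist + x_curr, b + delta*(n_hist - x_hist) + (n_curr - x_curr))
post = stats.beta(a + delta * x_hist + x_curr,
                  b + delta * (n_hist - x_hist) + (n_curr - x_curr))
print("posterior mean response rate:", post.mean())
print("95% credible interval:", post.interval(0.95))
```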

20.
Multivariate Gaussian graphical models are defined in terms of Markov properties, i.e., conditional independences, corresponding to missing edges in the graph. Thus model selection can be accomplished by testing these independences, which are equivalent to zero values of corresponding partial correlation coefficients. For concentration graphs, acyclic directed graphs, and chain graphs (both LWF and AMP classes), we apply Fisher's z-transform, Šidák's correlation inequality, and Holm's step-down procedure to simultaneously test the multiple hypotheses specified by these zero values. This simple method for model selection controls the overall error rate for incorrect edge inclusion. Prior information about the presence and/or absence of particular edges can be readily incorporated.
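A compact sketch of this testing strategy for a concentration graph, under stated assumptions: partial correlations are read off the inverse sample covariance matrix, Fisher's z-transform gives approximately normal test statistics, and Holm's step-down procedure (used here in place of the Šidák adjustment alone) controls the overall error rate for edge inclusion.

```python
# Edge selection for a concentration graph via Fisher's z and Holm's procedure.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
n, p = 500, 4
X = rng.multivariate_normal(np.zeros(p),
                            [[1, .5, 0, 0], [.5, 1, .4, 0],
                             [0, .4, 1, 0], [0, 0, 0, 1]], size=n)

K = np.linalg.inv(np.cov(X, rowvar=False))               # sample concentration matrix
d = np.sqrt(np.diag(K))
partial_corr = -K / np.outer(d, d)                       # off-diagonal partial correlations

i, j = np.triu_indices(p, k=1)
z = np.arctanh(partial_corr[i, j]) * np.sqrt(n - p - 1)  # Fisher z statistics
p_values = 2 * stats.norm.sf(np.abs(z))

reject, _, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for a, b, keep in zip(i, j, reject):
    print(f"edge ({a}, {b}):", "included" if keep else "excluded")
```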
