Similar Articles
20 similar articles found (search took 15 ms)
1.
This article assesses, over time, the incidence of newspaper-reported Internet-initiated sexual assaults among U.S. adolescents undergoing adjudication from 1996 to 2007. Of 812 newspaper reports of adjudicated Internet-initiated sexual assault, most victims (79.2%) were female, and the median victim age was 14 years. The incidence rate of these reports increased over the 12-year period for female victims but remained steady for males. The frequency of these assaults was much lower than that reported for other types of sexual assault in this age group. It is hoped that these estimates will assist in a greater understanding of these assaults, aid interventions to decrease their occurrence, and guide effective policymaking to reduce all types of sexual assault among adolescents.

2.
Social network data usually contain different types of errors. One of them is missing data due to actor non-response, which can seriously jeopardize the results of analyses if not appropriately treated. The impact of missing data may be more severe in valued networks, where not only the presence of a tie is recorded but also its magnitude or strength. Blockmodeling is a technique for delineating network structure; we focus on an indirect approach suitable for valued networks. Little is known about the sensitivity of valued networks to different types of measurement error. As it is reasonable to expect that blockmodeling, with its positional outcomes, is vulnerable to the presence of non-respondents, such errors require treatment. We examine the impacts of seven actor non-response treatments on the positions obtained when indirect blockmodeling is used. The starting point for our simulations is a set of networks whose structure is known. Three structures were considered: cohesive subgroups, core-periphery, and hierarchy. The results show that the number of non-respondents, the type of underlying blockmodel structure, and the employed treatment all affect the determined partitions of actors in complex ways. Recommendations for best practice are provided.
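One family of non-response treatments for valued networks can be sketched in a few lines: when an actor fails to report their outgoing ties, replace that row with the mean of what respondents reported about each partner. This is a minimal illustration of the idea, not the abstract's seven treatments; the function name and the column-mean rule are assumptions for the sketch.

```python
import numpy as np

def impute_nonresponse_rows(Y, nonrespondents):
    """One simple treatment for actor non-response in a valued network:
    replace a non-respondent's missing outgoing row with the column means
    computed over respondents' rows.  Other treatments (zeros, medians,
    model-based imputation) slot into the same place."""
    Z = Y.astype(float).copy()
    resp = np.setdiff1d(np.arange(Y.shape[0]), nonrespondents)
    col_means = Z[resp, :].mean(axis=0)     # how respondents rate each partner
    for i in nonrespondents:
        Z[i, :] = col_means
        Z[i, i] = 0.0                       # no self-ties
    return Z

# Valued network: Y[i, j] = strength of the tie i reports to j.
# Row 3 holds placeholder values because actor 3 did not respond.
Y = np.array([[0, 3, 1, 0],
              [2, 0, 4, 1],
              [1, 2, 0, 5],
              [9, 9, 9, 9]], dtype=float)
Z = impute_nonresponse_rows(Y, nonrespondents=[3])
```

The respondents' rows are left untouched; only the non-respondent's outgoing ties are filled in.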

3.
There will be occasions in which a researcher wants to ignore some dyads in the computation of centrality in order to avoid biased or misleading results. This paper presents a principled way of computing eigenvector-like centrality scores when some dyads are not included in the calculations.
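One straightforward reading of "ignoring" a dyad is to zero it out of the adjacency matrix before extracting the leading eigenvector; the sketch below does exactly that. The paper's actual estimator may differ, and the function name is an assumption for illustration.

```python
import numpy as np

def masked_eigenvector_centrality(A, include):
    """Eigenvector-style centrality with some dyads excluded.

    A       -- (n, n) nonnegative symmetric adjacency matrix
    include -- (n, n) boolean mask; False marks dyads to ignore
    """
    B = np.where(include, A, 0.0)
    B = (B + B.T) / 2                  # keep the matrix symmetric for eigh
    vals, vecs = np.linalg.eigh(B)
    return np.abs(vecs[:, -1])         # leading (Perron) eigenvector, sign-fixed

# Toy example: a 4-node path 0-1-2-3, ignoring the (0, 1) dyad.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
include = np.ones_like(A, dtype=bool)
include[0, 1] = include[1, 0] = False
scores = masked_eigenvector_centrality(A, include)
```

With the (0, 1) dyad excluded, node 0 is cut off and scores to zero, while node 2 becomes the most central node of the remaining path.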

4.
In the field of social network analysis, there are situations in which researchers want to ignore certain dyads in the computation of centrality to avoid biased or misleading results, but simply deleting these dyads will lead to wrong conclusions. Little work has considered this particular problem apart from the eigenvector-like centrality method presented in 2015. In this paper, we revisit the problem and present a new degree-like centrality method that also allows some dyads to be excluded from the calculations. The new method adopts the technique of weighted symmetric nonnegative matrix factorization (WSNMF), and we show that it can be seen as a generalized version of the existing eigenvector-like centrality. We test the new method's efficiency by applying it to several data sets.
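The core idea of weighted symmetric NMF can be sketched at rank 1: find a nonnegative score vector h minimising the weighted fit of h h^T to the adjacency matrix, where a zero weight excludes a dyad from the fit. This is a minimal sketch with a damped multiplicative update (in the style of symmetric-NMF updates); the paper's WSNMF formulation may use more factors and a different update rule.

```python
import numpy as np

def wsnmf_degree_centrality(A, W, n_iter=500, eps=1e-12, seed=0):
    """Rank-1 weighted symmetric NMF: find h >= 0 minimising
    || W * (A - h h^T) ||_F^2, where W_ij = 0 excludes dyad (i, j).
    h then acts as a degree-like centrality score."""
    rng = np.random.default_rng(seed)
    h = rng.random(A.shape[0]) + 0.5
    WA = W * A
    for _ in range(n_iter):
        numer = WA @ h
        denom = (W * np.outer(h, h)) @ h + eps
        h = h * (0.5 + 0.5 * numer / denom)   # damping avoids oscillation
    return h

# Star network: node 0 tied to nodes 1-3; every dyad observed (W = 1).
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1.0
W = np.ones_like(A)
h = wsnmf_degree_centrality(A, W)
```

On the star, the hub receives the largest score, as a degree-like measure should.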

5.
    
Research on measurement error in network data has typically focused on missing data. We embed missing data, which we term false negative nodes and edges, in a broader classification of error scenarios. This includes false positive nodes and edges and falsely aggregated and disaggregated nodes. We simulate these six measurement errors using an online social network and a publication citation network, reporting their effects on four node-level measures: degree centrality, clustering coefficient, network constraint, and eigenvector centrality. Our results suggest that in networks with more positively skewed degree distributions and higher average clustering, these measures tend to be less resistant to most forms of measurement error. In addition, we argue that the sensitivity of a given measure to an error scenario depends on the idiosyncrasies of the measure's calculation, thus revising the general claim from past research that the more ‘global’ a measure, the less resistant it is to measurement error. Finally, we anchor our discussion to commonly used networks in past research that suffer from these different forms of measurement error, and make recommendations for correction strategies.
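One of the six error scenarios, false positive edges, is easy to simulate: flip randomly chosen absent dyads to present and recompute a node-level measure such as the clustering coefficient. This is a minimal sketch of that simulation loop (function names are assumptions), not the paper's full design.

```python
import numpy as np

def avg_clustering(A):
    """Mean local clustering coefficient of a binary undirected network."""
    deg = A.sum(axis=1)
    tri = np.diag(A @ A @ A) / 2.0          # triangles through each node
    possible = deg * (deg - 1) / 2.0
    cc = np.divide(tri, possible, out=np.zeros_like(tri), where=possible > 0)
    return cc.mean()

def add_false_positive_edges(A, n_errors, rng):
    """Error scenario: flip n_errors randomly chosen absent dyads to present."""
    B = A.copy()
    n = B.shape[0]
    absent = [(i, j) for i in range(n) for j in range(i + 1, n) if B[i, j] == 0]
    for k in rng.choice(len(absent), size=min(n_errors, len(absent)), replace=False):
        i, j = absent[k]
        B[i, j] = B[j, i] = 1.0
    return B

# A 6-cycle has no triangles, so clustering starts at zero; spurious
# edges can only move the measure away from its true value.
n = 6
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
noisy = add_false_positive_edges(ring, 3, np.random.default_rng(0))
```

Repeating the perturbation many times and averaging gives the kind of error-sensitivity estimate the abstract describes.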

6.
    
Missing data is an important, but often ignored, aspect of a network study. Measurement validity is affected by missing data, but the level of bias can be difficult to gauge. Here, we describe the effect of missing data on network measurement across widely different circumstances. In Part I of this study (Smith and Moody, 2013), we explored the effect of measurement bias due to randomly missing nodes. Here, we drop the assumption that data are missing at random: what happens to estimates of key network statistics when central nodes are more or less likely to be missing? We answer this question using a wide range of empirical networks and network measures. We find that bias is worse when more central nodes are missing. With respect to network measures, Bonacich centrality is highly sensitive to the loss of central nodes, while closeness centrality is not; distance and bicomponent size are more affected than triad summary measures, and behavioral homophily is more robust than degree homophily. With respect to types of networks, larger, directed networks tend to be more robust, but the relation is weak. We end the paper with a practical application, showing how researchers can use our results (translated into a publicly available Java application) to gauge the bias in their own data.
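The central claim, that losing central nodes biases estimates more than losing peripheral ones, can be shown on a toy star network where the arithmetic is exact. This sketch is an illustration of the phenomenon, not the paper's simulation design.

```python
import numpy as np

def mean_degree_after_removal(A, drop):
    """Mean degree of the subgraph induced by deleting the nodes in `drop`."""
    keep = np.setdiff1d(np.arange(A.shape[0]), drop)
    sub = A[np.ix_(keep, keep)]
    return sub.sum(axis=1).mean()

# Star network: node 0 is the hub; nodes 1-5 are leaves.
n = 6
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0
true_mean = A.sum(axis=1).mean()                                # 10/6
bias_hub = abs(mean_degree_after_removal(A, [0]) - true_mean)   # central node missing
bias_leaf = abs(mean_degree_after_removal(A, [5]) - true_mean)  # peripheral node missing
```

Dropping the hub leaves six isolates (bias 5/3), while dropping a leaf barely moves the estimate (bias 1/15), mirroring the non-random-missingness result in the abstract.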

7.
Structural effects of network sampling coverage I: Nodes missing at random
Network measures assume a census of a well-bounded population. This level of coverage is rarely achieved in practice, however, and we have only limited information on the robustness of network measures to incomplete coverage. This paper examines the effect of node-level missingness on four classes of network measures: centrality, centralization, topology, and homophily, across a diverse sample of 12 empirical networks. We use a Monte Carlo simulation process to generate data with known levels of missingness and compare the resulting network scores to their known starting values. As with past studies, we find that measurement bias generally increases with more missing data. The exact rate and nature of this increase, however, varies systematically across network measures. For example, betweenness and Bonacich centralization are quite sensitive to missing data, while closeness and in-degree are robust. Similarly, while the tau statistic and distance are difficult to capture with missing data, transitivity shows little bias even with very high levels of missingness. The results are also clearly dependent on the features of the network. Larger, more centralized networks are generally more robust to missing data, but this is especially true for centrality and centralization measures. More cohesive networks are robust to missing data when measuring topological features but not when measuring centralization. Overall, the results suggest that missing data may have quite large or quite small effects on network measurement, depending on the type of network and the question being posed.
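The Monte Carlo procedure described here, deleting nodes at known rates and comparing the resulting scores to their true values, can be sketched in a few lines. The bias summary used below (Pearson correlation between true and observed degree on the retained nodes) is an assumption for the sketch; the paper tracks many more measures.

```python
import numpy as np

def missingness_bias_curve(A, fractions, n_reps=50, seed=0):
    """For each missingness level, repeatedly delete a random node fraction
    and record how well observed degree tracks true degree (Pearson r on
    the retained nodes), averaged over replications."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    true_deg = A.sum(axis=1)
    out = []
    for f in fractions:
        rs = []
        for _ in range(n_reps):
            keep = rng.choice(n, size=max(3, int(round(n * (1 - f)))), replace=False)
            sub = A[np.ix_(keep, keep)]
            obs = sub.sum(axis=1)
            if obs.std() == 0 or true_deg[keep].std() == 0:
                continue                      # degenerate draw; skip it
            rs.append(np.corrcoef(true_deg[keep], obs)[0, 1])
        out.append(float(np.mean(rs)))
    return out

# A random undirected network with mean degree ~6.
rng = np.random.default_rng(42)
n = 80
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1)
A = A + A.T
curve = missingness_bias_curve(A, fractions=[0.1, 0.3, 0.5])
```

As the abstract reports, measurement quality degrades as the missing fraction grows: the correlation at 10% missing exceeds that at 50% missing.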

8.
    
ABSTRACT

Social science datasets usually have missing cases and missing values. All such missing data have the potential to bias future research findings. However, many research reports ignore the issue of missing data, consider only some aspects of it, or do not report how it is handled. This paper rehearses the damage caused by missing data, then briefly considers eight approaches to handling missing data so as to minimise that damage, along with their underlying assumptions and likely costs and benefits. These approaches are complete case analysis, complete variable analysis, single imputation, multiple imputation, maximum likelihood estimation, default replacement values, weighting, and sensitivity analyses. Using only complete cases should be avoided wherever possible. The paper suggests that the more complex, modelling approaches to replacing missing data rest on questionable methodological and philosophical assumptions, and may in any case offer no clear advantage over simpler approaches such as default replacements. It makes sense to report all possible forms of missing data, report everything that is known about the characteristics of cases missing values, conduct simple sensitivity analyses of the potential impact of missing data on the substantive results, and retain the knowledge of missingness when using any form of replacement value.
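Two of the eight approaches, complete case analysis and single (mean) imputation, can be contrasted on a toy dataset where values are missing not at random. The missingness mechanism below (only top-quartile incomes go missing) is invented for the sketch; it shows why deletion biases the estimate and why mean imputation fixes nothing while understating the spread.

```python
import numpy as np

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10, sigma=0.5, size=1000)   # right-skewed outcome
true_mean = income.mean()

# Values are missing *not* at random: only top-quartile incomes go missing.
missing = (income > np.quantile(income, 0.75)) & (rng.random(1000) < 0.6)
observed = np.where(missing, np.nan, income)

complete_case = np.nanmean(observed)        # listwise deletion: biased low here
imputed = np.where(np.isnan(observed), np.nanmean(observed), observed)
mean_imputed = imputed.mean()               # same point estimate as deletion...
sd_complete = np.nanstd(observed, ddof=1)
sd_imputed = imputed.std(ddof=1)            # ...but an artificially shrunken spread
```

This is exactly the pattern the paper warns about: the simple sensitivity check (compare estimates under different treatments) reveals the problem even when no single treatment solves it.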

9.
10.
Abstract

Communities are looking for community-building responses to the issue of crime. Traditional social and political discourses have presented only two responses to crime: “get tough” or rehabilitate offenders. An alternative view has begun to emerge in community criminal justice discourse and practice. Restorative justice emphasizes the restoration of relationships and community peace damaged by the harm of crime, and the repair of these social injuries. This article presents the findings of a pilot study conducted to assess a community's openness to restorative justice principles.

11.
Comparing surveys of victims with police statistics illustrates the differences between lay and professional views of crime. Victims’ expectations and the police handling of cases do not always match. The determinants of victims’ decisions to report incidents to the police are briefly summarised, and the ways the police classify victims’ accounts and handle their complaints are examined. This analysis is based on two sets of data: a victimisation survey of a sample of 10,504 persons aged 15 and older, drawn from the Île-de-France Region, and the police statistics for the same area.

12.
Discerning the essential structure of social networks is a major task. Yet social network data usually contain different types of errors, including missing data that can wreak havoc during analyses. Blockmodeling is one technique for delineating network structure. While we know little about its vulnerability to missing data, it is reasonable to expect that it is vulnerable, given its positional nature. We focus on actor non-response and treatments for it, and examine their impacts on blockmodeling results using simulated and real networks. A set of ‘known’ networks is used; errors due to actor non-response are introduced and then treated in different ways. Blockmodels are fitted to these treated networks and compared to those for the known networks. The outcome indicators are the correspondence of both position memberships and identified blockmodel structures. The amount and type of non-response, and the considered treatments, all have an impact on the delineated blockmodel structures.

13.
    
Social network analysis identifies social ties, and perceptual measures identify peer norms. The social relations model (SRM) can decompose interval-level perceptual measures among all dyads in a network into multiple person- and dyad-level components. This study demonstrates how to accommodate missing round-robin data using Bayesian data augmentation, including how to incorporate partially observed covariates as auxiliary correlates or as substantive predictors. We discuss how data augmentation opens the possibility of fitting the SRM to network ties (potentially without boundaries) rather than round-robin data. An illustrative application explores the relationship between sorority members’ self-reported body comparisons and perceptions of friends’ body talk.
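The SRM's person- and dyad-level decomposition can be sketched with simple moment estimators on a complete round-robin matrix: a grand mean, an actor effect (how i rates others), a partner effect (how i is rated), and a relationship residual. This ignores the bias corrections of the full SRM and the Bayesian handling of missing entries that the paper is actually about; it only shows what gets decomposed.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 8
X = rng.normal(size=(n, n))      # X[i, j]: actor i's perception of partner j
np.fill_diagonal(X, np.nan)      # round-robin data have no self-ratings

grand = np.nanmean(X)                       # network-level mean
actor = np.nanmean(X, axis=1) - grand       # how i sees others, on average
partner = np.nanmean(X, axis=0) - grand     # how i is seen by others
relationship = X - grand - actor[:, None] - partner[None, :]

# The four components reconstruct every observed dyad exactly.
recon = grand + actor[:, None] + partner[None, :] + relationship
```

Actor and partner effects are centred at zero by construction, so all systematic dyad-specific perception ends up in the relationship term, which is what the SRM then models further.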

14.
ABSTRACT

The present study is the first to examine empirically whether required fields in online surveys impair reliability or response patterns, since participants forced to respond to all items may provide arbitrary answers. Two hundred and thirteen participants completed a survey consisting of six questionnaires on personal and social issues and perceptions. They were randomly assigned to one of two versions of the survey: optional fields (N = 104) or required fields (N = 109). Comparison of the Cronbach’s alpha of the two versions revealed identical reliability values for all questionnaires, save for somatization, where a minor difference was found. Confirmatory factor analysis showed no difference in the factor structure of the two versions, and no differences were found by a Bayesian t-test or Levene’s test for equality of variances. The findings suggest that required fields do not impair reliability or change response patterns, and can therefore be used in online surveys to prevent missing data.
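The reliability statistic compared across the two survey versions, Cronbach's alpha, is a short formula: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch on simulated questionnaire data (the simulation itself is an assumption, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Five noisy indicators of one latent trait -> high internal consistency.
rng = np.random.default_rng(3)
trait = rng.normal(size=200)
items = trait[:, None] + 0.5 * rng.normal(size=(200, 5))
alpha = cronbach_alpha(items)
```

Comparing this value between an optional-fields and a required-fields sample is exactly the check the study performs questionnaire by questionnaire.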

15.
16.
In the UK, particularly in England, youth crime is perceived as a serious social problem that is always near the top of the political agenda. Since the early 1990s, ‘populist punitiveness’ (Bottoms, 1995), amounting to varying degrees of punishment and control, has been the key approach to the problem, culminating in New Labour's flagship Crime and Disorder Act 1998 and thereafter an increasing concern with anti-social behaviour. The Conservative-led coalition is continuing in this vein. It is a ‘get tough’ approach in which the role of social work has been sidelined. In this article, I argue that such an approach is counterproductive, as evidenced by the riots of August 2011 in London and other major cities. Rather than notions of punishment and control being to the fore, attention should be paid to the social and economic conditions that shape young people's lives and behaviour. For social workers, this involves relationship building with young offenders and their families, and this is where a radical/critical social work practice comes in. It is an emancipatory practice that resists the neoliberal present and holds some vision of a more socially just and equal future world.

17.
    
Recent developments have made model-based imputation of network data feasible in principle, but the extant literature provides few practical examples of its use. In this paper, we consider 14 schools from the widely used In-School Survey of Add Health (Harris et al., 2009), applying an ERGM-based estimation and simulation approach to impute the missing network data for each school. Add Health's complex study design leads to multiple types of missingness, and we introduce practical techniques for handling each. We also develop a cross-validation-based method, Held-Out Predictive Evaluation (HOPE), for assessing this approach. Our results suggest that ERGM-based imputation of edge variables is a viable approach to the analysis of complex studies such as Add Health, provided that care is taken in understanding and accounting for the study design.
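The cross-validation logic behind a held-out predictive evaluation can be sketched with a deliberately trivial "model": hide a random fraction of dyads, predict them from the density of what remains, and score the predictions on the held-out set. The real HOPE procedure uses a fitted ERGM in place of the density stand-in; the function name and scoring rule here are assumptions for the sketch.

```python
import numpy as np

def hope_style_score(A, holdout_frac=0.2, seed=0):
    """Hold out a random fraction of dyads, 'impute' them from the density
    of the remaining observed dyads, and return the mean Bernoulli
    log-likelihood of the true held-out values under that prediction."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(A.shape[0], 1)
    vals = A[iu]
    m = vals.size
    held = rng.choice(m, size=int(m * holdout_frac), replace=False)
    mask = np.zeros(m, dtype=bool)
    mask[held] = True
    p = vals[~mask].mean()                  # density of the observed part
    y = vals[mask]
    return float(np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Example on a random undirected network with density ~0.3.
rng = np.random.default_rng(1)
n = 30
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T
score = hope_style_score(A)
```

A better imputation model earns a higher (less negative) held-out log-likelihood, which is how such a score ranks competing approaches.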

18.
    
Most quantitative studies in the social sciences suffer from missing data. However, despite the wide availability of documentation and software for treating such data, it appears that many social scientists do not apply good practices regarding missing data. We analyzed quantitative papers published in 2017 in six top-level social science journals. Item-level missing data were found in at least 69.5% of the papers, but their presence was explicitly reported in only 44.4% of all analyzed papers. Moreover, in the majority of cases, the treatments applied to missing data were incorrect, with many uses of deletion methods that are known to produce biased results and reduce statistical power. The impact of missing data and of their treatment on results was barely discussed. The results show that social scientists underestimate the impact of missing data on their research and should pay more attention to the way such data are treated.

19.
    
Multiple imputation (MI), a two-stage process whereby missing data are imputed multiple times and the resulting estimates of the parameter(s) of interest are combined across the completed datasets, is becoming increasingly popular for handling missing data. However, MI can result in biased inference if not carried out appropriately or if the underlying assumptions are not justifiable. Despite this, there remains a scarcity of guidelines for carrying out MI. In this paper we provide a tutorial on the main issues involved in employing MI, highlight some common pitfalls and misconceptions, and identify areas requiring further development. When contemplating MI, we must first consider whether it is likely to offer gains (reduced bias or increased precision) over alternative methods of analysis. Once it has been decided to use MI, a number of decisions must be made during the imputation process; we discuss the extent to which these decisions can be guided by the current literature. Finally, we highlight the importance of checking the fit of the imputation model. The process is illustrated using a case study in which we impute missing outcome data in a five-wave longitudinal study comparing extremely preterm individuals with term-born controls.
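The two-stage structure of MI can be sketched end to end: impute m times, estimate on each completed dataset, then pool with Rubin's rules (combined estimate, within- and between-imputation variance). The imputation model below is a deliberately crude stand-in that does not propagate parameter uncertainty, so this is a sketch of the mechanics rather than a proper MI procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(loc=50, scale=10, size=500)
y_obs = y.copy()
y_obs[rng.random(500) < 0.3] = np.nan          # ~30% missing completely at random

obs = y_obs[~np.isnan(y_obs)]
n_miss = int(np.isnan(y_obs).sum())
m = 20                                          # number of imputations
q, u = [], []
for _ in range(m):
    # Stage 1: fill the gaps with draws from a normal fitted to the observed
    # values (a proper MI model would also draw the fitted parameters).
    completed = y_obs.copy()
    completed[np.isnan(y_obs)] = rng.normal(obs.mean(), obs.std(ddof=1), size=n_miss)
    q.append(completed.mean())                          # estimate of interest
    u.append(completed.var(ddof=1) / completed.size)    # its sampling variance
# Stage 2: pool with Rubin's rules.
q_bar = np.mean(q)                                      # combined estimate
within = np.mean(u)                                     # within-imputation variance
between = np.var(q, ddof=1)                             # between-imputation variance
total_var = within + (1 + 1 / m) * between
```

The between-imputation term is what distinguishes MI from single imputation: it inflates the total variance to reflect the uncertainty caused by the missing values themselves.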

20.
We consider partially observed network data as defined in Handcock and Gile (2010). More specifically, we introduce an elaboration of the Bayesian data augmentation scheme of Koskinen et al. (2010) that uses the exchange algorithm (Caimo and Friel, 2011) for inference in the exponential random graph model (ERGM) when tie variables are partly observed. We illustrate the generation of posteriors and unobserved tie variables with empirical network data in which 74% of the tie variables are unobserved, assuming some standard assumptions hold. One of these assumptions is that covariates are fixed and completely observed. A likely scenario is that covariates, too, are only partially observed, and we propose a further extension of the data augmentation algorithm for missing attributes. We provide an illustrative example of parameter inference with nearly 30% of dyads affected by missing attributes (e.g. homophily effects). Another assumption liable to be violated is that all actors are known, so that there are “covert actors”. We briefly discuss various aspects of this problem with reference to the Sageman (2004) data set on suspected terrorists. We conclude by identifying some areas in need of further research.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号