Property / Value
?:abstract
  • To quickly identify hate speech online, communication research offers a useful tool in the form of automatic content analysis. However, the combined methods of standardized manual content analysis and supervised text classification demand different quality criteria. This chapter shows that a more substantial examination of validity is necessary, since models often learn from spurious correlations or biases and researchers run the risk of drawing wrong inferences. To investigate the overlap of theoretical concepts with technological operationalization, explainability methods are evaluated for their ability to explain what a model has learned. These methods proved to be of limited use for testing the validity of a model when the generated explanations aim at sense-making rather than faithfulness to the model. The chapter ends with recommendations for the further interdisciplinary development of automatic content analysis. (xsd:string)
    (An illustrative sketch of such an explainability check follows after this record.)
?:dateModified
  • 2023 (xsd:gyear)
?:datePublished
  • 2023 (xsd:gyear)
?:doi
  • 10.48541/dcr.v12.23
?:hasFulltext
  • true (xsd:boolean)
?:inLanguage
  • en (xsd:string)
?:isbn
  • 978-3-945681-12-1
?:issn
  • 2198-7610
?:name
  • The right kind of explanation: Validity in automated hate speech detection (xsd:string)
?:publicationType
  • Edited volume contribution (Sammelwerksbeitrag) (xsd:string)
  • in_proceedings (en)
?:sourceCollection
  • Challenges and perspectives of hate speech research (xsd:string)
?:sourceInfo
  • GESIS-SSOAR (xsd:string)
  • In: Challenges and perspectives of hate speech research, Berlin, 2023, 383-402 (xsd:string)
?:volumeNumber
  • 12 (xsd:string)
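
The abstract describes evaluating explainability methods to check whether a hate speech classifier has learned the intended concept or merely spurious correlations. As a minimal, purely illustrative sketch of that idea (not the chapter's method, data, or models), the snippet below trains a toy scikit-learn text classifier and reads its learned token weights as a simple, faithful-by-construction global explanation; all sentences, labels, and token names are invented for illustration.

```python
# Minimal, illustrative sketch only (not the chapter's pipeline): a linear
# bag-of-words classifier whose coefficients serve as a simple global
# explanation of what the model has learned. Training sentences and labels
# below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "those people are vermin and should be thrown out",    # hate (toy)
    "i hate waiting for the bus in the rain",               # not hate (toy)
    "refugees are ruining everything, send them all away",  # hate (toy)
    "the new refugee support centre opens on monday",       # not hate (toy)
]
labels = [1, 0, 1, 0]  # 1 = hate speech, 0 = no hate speech

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Tokens with the largest positive coefficients push predictions towards the
# hate class. If topic words (e.g., group names) dominate instead of hateful
# expressions, the model has likely picked up a spurious correlation.
vectorizer = model.named_steps["tfidfvectorizer"]
classifier = model.named_steps["logisticregression"]
ranked = sorted(
    zip(vectorizer.get_feature_names_out(), classifier.coef_[0]),
    key=lambda pair: pair[1],
    reverse=True,
)
for token, weight in ranked[:10]:
    print(f"{token:>12s}  {weight:+.3f}")
```

If topic markers (here, for instance, a word like "refugees") rather than genuinely hateful expressions receive the largest weights towards the hate class, the explanation points to a spurious correlation and hence to the kind of validity problem the chapter discusses.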