Property / Value
?:abstract
  • The advent of social media has increased digital content - and, with it, hate speech. Advances in machine learning help detect online hate speech at scale, but scale is only one part of the moderation problem. Machines do not decide what constitutes hate speech; that determination is a matter of societal norms. Power relations establish such norms and thus determine who gets to say what counts as hate speech. Without taking this data-generation process into account, a fair automated hate speech detection system cannot be built. This chapter first examines the relationship between power, hate speech, and machine learning. It then examines how an intersectional lens - one that focuses on power dynamics between and within social groups - helps identify bias in the data sets used to build automated hate speech detection systems. (xsd:string)
?:dateModified
  • 2023 (xsd:gyear)
?:datePublished
  • 2023 (xsd:gyear)
?:doi
  • 10.48541/dcr.v12.21
?:hasFulltext
  • true (xsd:boolean)
?:inLanguage
  • en (xsd:string)
?:isbn
  • 978-3-945681-12-1
?:issn
  • 2198-7610
?:name
  • Machines do not decide hate speech: Machine learning, power, and the intersectional approach (xsd:string)
?:publicationType
  • Contribution to an edited volume (Sammelwerksbeitrag) (xsd:string)
  • in_proceedings (en)
?:sourceCollection
  • Challenges and perspectives of hate speech research (xsd:string)
?:sourceInfo
  • GESIS-SSOAR (xsd:string)
  • In: Challenges and perspectives of hate speech research, Berlin, 2023, 355-369 (xsd:string)
?:volumeNumber
  • 12 (xsd:string)
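
Below is a minimal sketch of how this property/value listing could be rebuilt as typed RDF in Python with rdflib. The schema.org expansion of the "?:" prefix, the use of the DOI as the subject URI, and the exact property names are assumptions for illustration, not part of the source record.

    # Sketch: rebuild the record above as typed RDF literals (assumed mapping).
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import XSD

    SCHEMA = Namespace("http://schema.org/")                 # assumed expansion of "?:"
    record = URIRef("https://doi.org/10.48541/dcr.v12.21")   # assumed subject URI

    g = Graph()
    g.bind("schema", SCHEMA)
    g.add((record, SCHEMA.name, Literal(
        "Machines do not decide hate speech: Machine learning, power, "
        "and the intersectional approach", datatype=XSD.string)))
    g.add((record, SCHEMA.datePublished, Literal("2023", datatype=XSD.gYear)))
    g.add((record, SCHEMA.inLanguage, Literal("en", datatype=XSD.string)))
    g.add((record, SCHEMA.isbn, Literal("978-3-945681-12-1")))
    g.add((record, SCHEMA.issn, Literal("2198-7610")))
    g.add((record, SCHEMA.volumeNumber, Literal("12", datatype=XSD.string)))

    # Serialize back to Turtle to inspect the typed values, e.g. "2023"^^xsd:gYear.
    print(g.serialize(format="turtle"))

Running the sketch prints a Turtle serialization in which each value carries the datatype shown in parentheses in the record above (xsd:string, xsd:gyear, xsd:boolean).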