Property | Value
?:abstract
  • We present a multi-modal approach to speaker characterization using acoustic, visual, and linguistic features. Full realism is provided by evaluation on a database of real-life web videos and by automatic feature extraction, including face and eye detection and automatic speech recognition. Different segmentations are evaluated for the audio and video streams, and the statistical relevance of Linguistic Inquiry and Word Count (LIWC) features is confirmed. Overall, late multimodal fusion delivers 73%, 92%, and 73% average recall in binary age, gender, and race classification on unseen test subjects, outperforming the best single modalities for age and race. (xsd:string)
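The late (decision-level) fusion mentioned in the abstract can be sketched as combining the class posteriors of independently trained per-modality classifiers. This is an illustrative sketch only; the function names, weights, and example probabilities below are hypothetical and not taken from the paper.

```python
# Minimal sketch of late multimodal fusion: each modality (acoustic, visual,
# linguistic) yields class posteriors; fusion averages them (optionally
# weighted) and the fused maximum decides the class. All numbers are made up.

def late_fusion(posteriors_per_modality, weights=None):
    """Fuse per-class posteriors from several modalities by weighted averaging.

    posteriors_per_modality: list of dicts mapping class label -> probability,
    one dict per modality.
    """
    if weights is None:
        weights = [1.0] * len(posteriors_per_modality)
    fused = {}
    for posteriors, w in zip(posteriors_per_modality, weights):
        for label, p in posteriors.items():
            fused[label] = fused.get(label, 0.0) + w * p
    total = sum(weights)
    return {label: score / total for label, score in fused.items()}

def decide(fused):
    """Pick the class with the highest fused score."""
    return max(fused, key=fused.get)

# Example: a binary gender decision from three modality classifiers.
acoustic   = {"female": 0.70, "male": 0.30}
visual     = {"female": 0.55, "male": 0.45}
linguistic = {"female": 0.40, "male": 0.60}
fused = late_fusion([acoustic, visual, linguistic])
print(decide(fused))  # "female": fused score 0.55 vs. 0.45
```

With equal weights this reduces to simple posterior averaging; modality-specific weights could be tuned on a development set.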
?:dateModified
  • 2013 (xsd:gyear)
?:datePublished
  • 2013 (xsd:gyear)
?:doi
  • 10.1109/ICASSP.2013.6638338
?:hasFulltext
  • true (xsd:boolean)
?:inLanguage
  • en (xsd:string)
?:isbn
  • 978-1-4799-0356-6
?:issn
  • 2379-190X
?:name
  • Speaker trait characterization in web videos: Uniting speech, language, and facial features (xsd:string)
?:publicationType
  • Conference paper (xsd:string)
?:sourceCollection
  • Proceedings of the 38th International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013) (xsd:string)
?:sourceInfo
  • GESIS-SSOAR (xsd:string)
?:urn
  • urn:nbn:de:0168-ssoar-66084-2