PropertyValue
?:about
?:abstract
  • In this article I describe a series of test runs to examine the contribution that the AI-based program ChatGPT, in both Versions 3.5 and 4, can make to a qualitative content analysis of interview texts. A short sample text with my sample solution is presented for this purpose. Rough inputs for a rather naive use ("Conduct a qualitative content analysis!") as well as differentiated specifications with questions and more precise coding instructions (prompts) led in both versions at most to rough approximations of the sample solution, with a large number of gross errors. The program did not react, or reacted incorrectly, to different content analysis concepts (BRAUN & CLARKE, 2006; KUCKARTZ, 2014; MAYRING, 2022a; SCHREIER, 2012), did not recognize hidden text content, and failed to check for coding agreement. Regardless of the specifications given, the software's results mostly amounted to a rough, superficial summary in the sense of a list of topics and thus appear to be less suitable for the qualitative content analysis methods I developed (MAYRING, 2022a, 2022b). (xsd:string)
?:contributor
?:dateModified
  • 2025 (xsd:gyear)
?:datePublished
  • 2025 (xsd:gyear)
?:doi
  • 10.17169/fqs-26.1.4252 ()
?:hasFulltext
  • true (xsd:boolean)
is ?:hasPart of
?:inLanguage
  • en (xsd:string)
?:isPartOf
?:issn
  • 1438-5627 ()
?:issueNumber
  • 1 (xsd:string)
?:linksDOI
?:name
  • Qualitative Content Analysis With ChatGPT: Pitfalls, Rough Approximations and Gross Errors - A Field Report (xsd:string)
?:provider
?:publicationType
  • Zeitschriftenartikel (xsd:string)
  • journal_article (en)
?:sourceInfo
  • GESIS-SSOAR (xsd:string)
  • In: Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 26, 2025, 1 (xsd:string)
rdf:type
?:url
?:volumeNumber
  • 26 (xsd:string)