117,99 €
incl. VAT
Free shipping*
Expected release: 20 September 2025
  • Hardcover

Product description
This book provides an in-depth exploration of the effectiveness of transfer learning approaches in detecting deceptive content (i.e., fake news) and inappropriate content (i.e., hate speech). The author first addresses the issue of insufficient labeled data by reusing knowledge gained from other natural language processing (NLP) tasks, such as language modeling. He then examines the connection between harmful content and emotional signals in text, integrating emotional cues into the classification models to evaluate their impact on model performance. Additionally, since pre-processing plays an essential role in NLP tasks by enriching raw data, and is especially critical for tasks with limited data such as fake news detection, the book analyzes various pre-processing strategies in a transfer learning context to enhance the detection of fake stories online. Optimal settings for transferring knowledge from pre-trained models across subtasks, including claim extraction and check-worthiness assessment, are also investigated.

The findings indicate that incorporating these features into check-worthy claim models can improve overall model performance, although integrating emotional signals did not significantly affect classifier results. Finally, the experiments highlight the importance of pre-processing for enhancing input text, particularly in social media contexts where content is often ambiguous and lacks context, leading to notable performance improvements.

* Explores the effectiveness of transfer learning approaches in detecting deceptive and inappropriate content
* Analyzes pre-processing strategies in a transfer learning context to enhance the detection of fake stories online
* Investigates optimal settings for transferring knowledge from pre-trained models across subtasks
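As a rough illustration of the transfer-learning setup summarized above (reusing a model pre-trained on language modeling and fine-tuning it on a small labeled set), the following minimal sketch fine-tunes a pre-trained encoder as a binary fake news classifier. The checkpoint name, labels, and example texts are placeholder assumptions and are not taken from the book.

```python
# Minimal sketch, assuming the Hugging Face transformers library and PyTorch:
# fine-tune a pre-trained language model as a binary fake news classifier.
# Checkpoint name, labels, and example texts are illustrative placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"                      # assumed pre-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["placeholder news article one", "placeholder news article two"]
labels = torch.tensor([0, 1])                         # 0 = genuine, 1 = fake (placeholder)

# Tokenize the raw text into the input format the pre-trained encoder expects.
batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                                    # a few fine-tuning steps on the tiny batch
    outputs = model(**batch, labels=labels)           # cross-entropy loss is computed internally
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: classify a new, unseen text with the fine-tuned model.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("another placeholder text", return_tensors="pt")).logits
    print("predicted label:", logits.argmax(dim=-1).item())
```

The book's experiments build on this general pattern by additionally varying pre-processing strategies and incorporating emotional cues as features; the sketch above shows only the basic fine-tuning step.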
About the author
Salar Mohtaj is a Research Scientist at the German Research Center for Artificial Intelligence (DFKI) and a postdoctoral researcher in the Speech & Language Technology group. He completed his PhD at Technische Universität Berlin, focusing on fake news and hate speech detection, and holds a Master's degree in Information Technology from Tehran Polytechnic (Amirkabir University of Technology), specializing in natural language processing. Previously, he led the development of a Persian plagiarism detection system at the ICT Research Institute in Tehran. With over 40 publications in journals and conferences, Salar has contributed to a range of natural language processing tasks, publishing research and creating datasets for tasks ranging from plagiarism detection and German text readability assessment to fake news detection.