Taking a machine learning approach with less focus on linguistic details, this natural language processing textbook introduces the fundamental mathematical and deep learning models for NLP in a unified framework. An invaluable, accessible, and up-to-date resource for upper-undergraduate and graduate students, with sample code available online.
Yue Zhang is an associate professor at Westlake University. Before joining Westlake, he worked as a research associate at the University of Cambridge and then as a faculty member at Singapore University of Technology and Design. His research interests lie in fundamental algorithms for NLP, syntax, semantics, information extraction, text generation, and machine translation. He serves as an action editor for TACL and as an area chair for ACL, EMNLP, COLING, and NAACL. He has given several tutorials at ACL, EMNLP, and NAACL, and won a best paper award at COLING in 2018.
Table of Contents
Part I. Basics: 1. Introduction; 2. Counting relative frequencies; 3. Feature vectors; 4. Discriminative linear classifiers; 5. A perspective from information theory; 6. Hidden variables; Part II. Structures: 7. Generative sequence labelling; 8. Discriminative sequence labelling; 9. Sequence segmentation; 10. Predicting tree structures; 11. Transition-based methods for structured prediction; 12. Bayesian models; Part III. Deep Learning: 13. Neural network; 14. Representation learning; 15. Neural structured prediction; 16. Working with two texts; 17. Pre-training and transfer learning; 18. Deep latent variable models; Index.
Reviews
'An amazingly compact, and at the same time comprehensive, introduction and reference to natural language processing (NLP). It describes the NLP basics, then employs this knowledge to solve typical NLP problems. It achieves very high coverage of NLP through a clever abstraction to typical high-level tasks, such as sequence labelling. Finally, it explains the topics in deep learning. The book captivates through its simple elegance, depth, and accessibility to a wide range of readers from undergrads to experienced researchers.' Iryna Gurevych, Technical University of Darmstadt, Germany