About this product
Product Identifiers
Publisher: O'Reilly Media, Incorporated
ISBN-10: 1098136799
ISBN-13: 9781098136796
eBay Product ID (ePID): 17057249513
Product Key Features
Number of Pages: 406
Publication Name: Natural Language Processing with Transformers, Revised Edition
Language: English
Subject: Intelligence (AI) & Semantics, Natural Language Processing, Data Processing
Publication Year: 2022
Type: Textbook
Author: Lewis Tunstall, Leandro von Werra, Thomas Wolf
Subject Area: Computers
Format: Trade Paperback
Dimensions
Item Height: 0.9 in
Item Weight: 24.2 oz
Item Length: 9.1 in
Item Width: 7.4 in
Additional Product Features
LCCN: 2023-275986
Dewey Edition: 23
Illustrated: Yes
Dewey Decimal: 006.3/5
Synopsis: Since their introduction in 2017, transformers have quickly become the dominant architecture for achieving state-of-the-art results on a variety of natural language processing tasks. If you're a data scientist or coder, this practical book, now revised in full color, shows you how to train and scale these large models using Hugging Face Transformers, a Python-based deep learning library. Transformers have been used to write realistic news stories, improve Google Search queries, and even create chatbots that tell corny jokes. In this guide, authors Lewis Tunstall, Leandro von Werra, and Thomas Wolf, who are among the creators of Hugging Face Transformers, use a hands-on approach to teach you how transformers work and how to integrate them into your applications. You'll quickly learn a variety of tasks they can help you solve:
- Build, debug, and optimize transformer models for core NLP tasks, such as text classification, named entity recognition, and question answering
- Learn how transformers can be used for cross-lingual transfer learning
- Apply transformers in real-world scenarios where labeled data is scarce
- Make transformer models efficient for deployment using techniques such as distillation, pruning, and quantization
- Train transformers from scratch and learn how to scale to multiple GPUs and distributed environments
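For a flavor of the library the book teaches, here is a minimal sketch of a text-classification call using the Hugging Face Transformers pipeline API; the example sentence and the default pretrained model the pipeline downloads are illustrative assumptions, not details taken from this listing.

from transformers import pipeline

# Load a ready-made sentiment-analysis pipeline; a default pretrained
# model is downloaded on first use (which model is chosen depends on
# the installed library version, an assumption of this sketch).
classifier = pipeline("sentiment-analysis")

# Classify a piece of text; the result is a list of {label, score} dicts.
print(classifier("Transformers make state-of-the-art NLP remarkably accessible."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

The book goes well beyond this pipeline shortcut, covering how to fine-tune, debug, and deploy such models for the tasks listed above.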