Fine Tune BERT for Text Classification with TensorFlow

4.6

165 ratings

Offered by

10,855 already enrolled

In this free Guided Project, you will:
2.5 hours
Intermediate
No download needed
Split-screen video
English
Desktop only

This is a guided project on fine-tuning a Bidirectional Encoder Representations from Transformers (BERT) model for text classification with TensorFlow. In this 2.5-hour-long project, you will learn to preprocess and tokenize data for BERT classification, build TensorFlow input pipelines for text data with the tf.data API, and train and evaluate a fine-tuned BERT model for text classification with TensorFlow 2 and TensorFlow Hub. Prerequisites: In order to successfully complete this project, you should be competent in the Python programming language, be familiar with deep learning for Natural Language Processing (NLP), and have trained models with TensorFlow and its Keras API. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.


Skills you will develop

  • natural-language-processing

  • Tensorflow

  • machine-learning

  • deep-learning

  • BERT

Learn step-by-step

In a video that plays in a split-screen with your work area, your instructor will guide you through each step:

How Guided Projects work

Your workspace is a cloud desktop right in your browser; no download is required.

In a split-screen video, your instructor guides you step by step.
