English | MP4 | AVC 1920×1080 | AAC 44 kHz 2ch | 85 lectures (9h 39m) | 5.29 GB
Deploy ML Model with BERT, DistilBERT, FastText NLP Models in Production with Flask, uWSGI, and NGINX at AWS EC2
Welcome to “Deploy ML Model with BERT, DistilBERT, FastText NLP Models in Production with Flask, uWSGI, and NGINX at AWS EC2”! In this course, you will learn how to deploy natural language processing (NLP) models, including state-of-the-art transformer models such as BERT and DistilBERT as well as FastText, in a production environment.
You will learn how to use Flask, uWSGI, and NGINX to create a web application that serves your machine learning models. You will also learn how to deploy your application on AWS EC2, allowing you to easily scale it as needed.
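To make the serving step concrete, here is a minimal sketch of the kind of Flask prediction API the course builds. It assumes a fine-tuned ktrain predictor was saved earlier; the distilbert_model folder name and the /predict route are illustrative placeholders, not the course’s exact code.

```python
import ktrain
from flask import Flask, jsonify, request

app = Flask(__name__)  # in production, uWSGI is pointed at this "app" callable

# Load the fine-tuned model once at startup rather than on every request
predictor = ktrain.load_predictor("distilbert_model")  # placeholder folder name

@app.route("/predict", methods=["POST"])
def predict():
    text = request.json.get("text", "")
    label = predictor.predict(text)
    return jsonify({"text": text, "sentiment": label})

if __name__ == "__main__":
    # Flask's built-in server is for local testing only;
    # in production, uWSGI serves this app behind NGINX
    app.run(host="0.0.0.0", port=5000)
```

You could then test it locally with a POST request such as curl -X POST http://localhost:5000/predict -H "Content-Type: application/json" -d '{"text": "I loved it"}'.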
Throughout the course, you will gain hands-on experience in setting up and configuring an end-to-end machine learning production pipeline. You will learn how to optimize and fine-tune your NLP models for production use, and how to handle scaling and performance issues.
By the end of this course, you will have the skills and knowledge needed to deploy your own NLP models in a production environment using the latest techniques and technologies. Whether you’re a data scientist, machine learning engineer, or developer, this course will provide you with the tools and skills you need to take your machine learning projects to the next level.
So, don’t wait any longer: enroll today and learn how to deploy ML models with BERT, DistilBERT, and FastText in production with Flask, uWSGI, and NGINX at AWS EC2!
This course is suitable for the following individuals:
Data scientists who want to learn how to deploy their machine learning models in a production environment.
Machine learning engineers who want to gain hands-on experience in setting up and configuring an end-to-end machine learning production pipeline.
Developers who are interested in using technologies such as NGINX, Flask, uWSGI, FastText, TensorFlow, and ktrain to deploy machine learning models in production.
Individuals who want to learn how to optimize and fine-tune machine learning models for production use.
Professionals who want to learn how to handle scaling and performance issues when deploying machine learning models in production.
Anyone who wants to make a career in machine learning and wants to learn about production deployment.
Anyone who wants to learn about the end-to-end pipeline of machine learning models, from training to deployment.
Anyone who wants to learn about best practices and techniques for deploying machine learning models in a production environment.
What you will learn in this course
You will learn how to deploy machine learning models using NGINX as a web server, Flask as a web framework, and uWSGI as the bridge between the two.
You will learn how to use FastText for natural language processing tasks in production and integrate it with TensorFlow for more advanced machine learning models.
You will learn how to use ktrain, a library built on top of TensorFlow, to easily train and deploy models in a production environment (a minimal training sketch follows this section).
You will gain hands-on experience in setting up and configuring an end-to-end machine learning production pipeline using the aforementioned technologies.
You will learn how to optimize and fine-tune machine learning models for production use, and how to handle scaling and performance issues.
All of this will be done on Google Colab, which means it doesn’t matter what processor or computer you have. It is super easy to use, and a plus point is that you get a free GPU to use in your notebook.
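As a taste of the training side, here is a minimal sketch of the ktrain BERT fine-tuning flow, runnable in a Colab notebook. It assumes train_texts/train_labels and test_texts/test_labels are Python lists already loaded from your dataset; the hyperparameters and the bert_model folder name are illustrative, not the course’s exact code.

```python
import ktrain
from ktrain import text

# Preprocess raw texts the way BERT expects (WordPiece tokens, special tokens).
# train_texts, train_labels, test_texts, test_labels are assumed to be
# Python lists already loaded from your dataset.
trn, val, preproc = text.texts_from_array(
    x_train=train_texts, y_train=train_labels,
    x_test=test_texts, y_test=test_labels,
    preprocess_mode="bert", maxlen=128)

# Build a BERT classifier and wrap it in a ktrain Learner
model = text.text_classifier("bert", train_data=trn, preproc=preproc)
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)

# 2e-5 is within the learning-rate range the BERT authors recommend for fine-tuning
learner.fit_onecycle(2e-5, 1)

# Bundle the model and its preprocessing into a reusable predictor and save it
predictor = ktrain.get_predictor(learner.model, preproc)
predictor.save("bert_model")  # placeholder folder name
```

The saved predictor folder is what you later transfer to the EC2 server and reload with ktrain.load_predictor for serving.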
What you’ll learn
- You will learn how to deploy machine learning models on AWS EC2 using NGINX as a web server, Flask as a web framework, and uWSGI as the bridge between the two.
- You will learn how to use FastText for natural language processing tasks in production, and integrate it with TensorFlow for more advanced machine learning models.
- You will learn how to use ktrain, a library built on top of TensorFlow, to easily train and deploy models in a production environment.
- You will gain hands-on experience in setting up and configuring an end-to-end machine learning production pipeline using the aforementioned technologies.
- You will learn how to optimize and fine-tune machine learning models for production use, and how to handle scaling and performance issues.
- Complete End-to-End NLP Application
- How to work with BERT in Google Colab
- How to use BERT for Text Classification
- Deploy Production Ready ML Model
- Fine-Tune and Deploy ML Model with Flask
- Deploy ML Model in Production at AWS
- Deploy ML Model on Ubuntu and Windows Servers
- DistilBERT vs BERT
- You will learn how to develop and deploy a FastText model on AWS (see the sketch after this list)
- Learn Multi-Label and Multi-Class classification in NLP
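As a preview of the multi-label FastText material, here is a minimal sketch using the official fasttext Python package. The training-file format is FastText’s standard supervised format; the file names, hyperparameters, and threshold are illustrative assumptions, not the course’s exact code.

```python
import fasttext

# FastText's supervised format puts labels first, one example per line, e.g.
#   __label__python __label__flask how do I deploy a flask app on ec2
# loss="ova" (one-vs-all) trains an independent binary classifier per label,
# which is what multi-label classification needs.
model = fasttext.train_supervised(
    input="train.txt", lr=0.1, epoch=25, wordNgrams=2, loss="ova")

# k=-1 returns every label; the threshold keeps only the confident ones
labels, probs = model.predict(
    "how to deploy a flask app on ec2", k=-1, threshold=0.5)
print(labels, probs)

model.save_model("fasttext_model.bin")  # placeholder filename
```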
Table of Contents
BERT Sentiment Prediction: A Multi-Class Prediction Problem
1 Welcome
2 Introduction
3 DO NOT SKIP IT: Download Working Files
4 What is BERT?
5 What is ktrain?
6 Going Deep Inside the ktrain Package
7 Notebook Setup
8 Must Read This
9 Installing ktrain
10 Loading Dataset
11 Train-Test Split and Preprocessing with BERT
12 BERT Model Training
13 Testing Fine Tuned BERT Model
14 Saving and Loading Fine Tuned Model
Fine-Tuning BERT for Disaster Tweets Classification
15 Resources Folder
16 BERT Intro and Disaster Tweets Dataset Understanding
17 Download Dataset
18 Target Class Distribution
19 Number of Characters Distribution in Tweets
20 Number of Words, Average Word Length, and Stop Words Distribution in Tweets
21 Most and Least Common Words
22 One-Shot Data Cleaning
23 Disaster Words Visualization with Word Cloud
24 Classification with TF-IDF and SVM
25 Classification with Word2Vec and SVM
26 Word Embeddings and Classification with Deep Learning Part 1
27 Word Embeddings and Classification with Deep Learning Part 2
28 BERT Model Building and Training
29 BERT Model Evaluation
DistilBERT: A Faster and Cheaper BERT Model from Hugging Face
30 What is DistilBERT?
31 Notebook Setup
32 Data Preparation
33 DistilBERT Model Training
34 Save Model at Google Drive
35 Model Evaluation
36 Download Fine Tuned DistilBERT Model
37 Flask App Preparation
38 Run Your First Flask Application
39 Predict Sentiment at Your Local Machine
40 Build Predict API
41 Deploy DistilBERT Model at Your Local Machine
Deploy Your DistilBERT ML Model on an AWS EC2 Windows Machine with Flask
42 Create AWS Account
43 Create Free Windows EC2 Instance
44 Connect EC2 Instance from Windows 10
45 Install Python on EC2 Windows 10
46 Must Read This
47 Install TensorFlow 2 and ktrain
48 Run Your First Flask Application on AWS EC2
49 Transfer DistilBERT Model to EC2 Flask Server
50 Deploy ML Model on EC2 Server
51 Make Your ML Model Accessible to the World
Deploy Your DistilBERT ML Model on an AWS Ubuntu Linux Machine with Flask
52 Install Git Bash and Commander Terminal on Local Computer
53 Create AWS Account
54 Launch Ubuntu Machine on EC2
55 Connect AWS Ubuntu Linux from Windows Computer
56 Install PIP3 on AWS Ubuntu
57 Update and Upgrade Your Ubuntu Packages
58 Must Read This
59 Install TensorFlow 2 and ktrain, and Upload DistilBERT Model
60 Create Extra RAM from SSD by Memory Swapping
61 Deploy DistilBERT ML Model on EC2 Ubuntu Machine
Deploy a Robust and Secure Production Server with NGINX, uWSGI, and Flask
62 NGINX Introduction
63 Virtual Environment Setup
64 Setting Up Flask Server
65 NGINX Running Flask Application
66 NGINX Running uWSGI Application
67 Configuring uWSGI Server
68 Start API Services at System Startup
69 Configuring NGINX with uWSGI and Flask Server
70 Congrats, You Have Deployed an ML Model in Production
Multi-Label Classification: Deploy Facebook’s FastText NLP Model in Production
71 What is Multi-Label Classification?
72 FastText Research Paper Review
73 Notebook Setup
74 Data Preparation
75 FastText Model Training
76 FastText Model Evaluation and Saving at Google Drive
77 Creating Fresh Ubuntu Machine
78 Setting Python3 and PIP3 Alias
79 Creating 4 GB of Extra RAM by Memory Swapping
80 Making Your Server Ready
81 Preparing Prediction APIs
82 Testing Prediction API at Local Machine
83 Testing Prediction API at AWS Ubuntu Machine
84 Configuring uWSGI Server
85 Deploy FastText Model in Production with NGINX uWSGI and Flask