Starting a new Lecture Notes Series on Building LLMs from scratch
YouTube Lecture Playlist Credits
Channel Name: edureka!
So let us start this journey of learning to build LLMs from scratch, through lecture notes, together!
Lecture 4: What are transformers?
Lecture 5: How does GPT-3 really work?
Lecture 10: What are token embeddings?
Lecture 14: Simplified Attention Mechanism - Coded from scratch in Python | No trainable weights
Lecture 21: GELU Activation Function in the LLM Architecture
Lecture 22: Shortcut connections in the LLM Architecture
Lecture 23: Coding the entire LLM Transformer Block
Lecture 24: Coding the 124 million parameter GPT-2 model
Lecture 25: Coding GPT-2 to predict the next token
Lecture 26: Measuring the LLM loss function
Lecture 28: Coding the entire LLM Pre-training Loop
Lecture 30: Top-k sampling in Large Language Models
Lecture 32: Loading pre-trained weights from OpenAI GPT-2
Lecture 39: Dataloaders in Instruction Fine-tuning
Lecture 42: Evaluating fine-tuned LLM using Ollama
Lecture 43: Build LLMs from scratch 20 minutes summary
