Final Showcase
You'll present one small project you've built this year. The point isn't to dazzle. It's to see the year in retrospect: what you covered, what you did, where you might go next.
This is the end-of-year celebration. Faculty, parents, and friends are invited. There will be food.
Format. Each person gets 4 to 5 minutes: a 3-minute presentation, then 1 to 2 minutes of Q&A.
Project scope
Projects should be small. Goal: "I built and understood something," not "recreate AlphaGo."
Pick one of these formats:
A trained model. Predicting club attendance from features. Predicting your next month's Spotify listening from past listening. Sentiment classifier for tweets. Movie rating predictor.
A built application. Web app or notebook demonstrating an ML concept. "Draw a digit, see what the model sees." An attention visualizer. An interactive overfitting explorer.
A presentation about a paper or system. If you don't want to code: pick a recent ML paper or system, present what it does, why it matters, what surprised you. AlphaFold. Anthropic's interpretability research. The latest open-source model release.
An ethical analysis. Pick an AI deployment in the news (a hiring system, court risk score, content moderation, a generative AI controversy). Explain how it works, what's gone wrong, what you'd change.
Project planning timeline
You'll pitch your project idea at the end of Chapter 14. Two weeks before showcase, there's a 5-minute check-in. By then you should have:
- A clear question or goal
- A dataset (if applicable), or a chosen paper or system
- A baseline result (even if weak), if it's a coding project
- One slide describing what you've done so far
If you don't have a clear project by check-in time, you'll be paired with someone and given a starter idea. Nobody fails this. Everyone presents.
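A "baseline result (even if weak)" can be simpler than you think: for a classifier, just always predict the most common label and measure accuracy. A minimal sketch (the tiny label lists are made up for illustration; swap in your own data):

```python
from collections import Counter

# Toy labels standing in for a real train/test split (made up for illustration)
train_labels = ["pos", "pos", "neg", "pos", "neg", "pos"]
test_labels = ["pos", "neg", "pos", "pos"]

# Majority-class baseline: always predict the most frequent training label
majority = Counter(train_labels).most_common(1)[0][0]
predictions = [majority] * len(test_labels)

accuracy = sum(p == y for p, y in zip(predictions, test_labels)) / len(test_labels)
print(majority, accuracy)  # any real model you build should beat this number
```

Bringing a number like this to the check-in makes the rest of the project concrete: now you know what "better than guessing" means for your dataset.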
Presentation structure
Aim for roughly this structure for your 3 minutes:
- The question (15 seconds). "I wanted to know if I could predict X from Y."
- The data (30 seconds). Where it came from, what it looks like, how big.
- The approach (45 seconds). What model, what features, how trained.
- The result (45 seconds). What worked, what didn't. A chart or two.
- What I learned (45 seconds). One thing you want the room to take away.
Slide count: 5 to 8 max. Don't bury the audience in text. One idea per slide.
Questions you'll be asked
A few prompts that might come up during your Q&A:
- "Why did you pick this project?"
- "What was the hardest part?"
- "If you had another month, what would you do next?"
- "What surprised you most?"
- "What would you tell a 10th grader thinking about doing this?"
Have rough answers in mind. The Q&A is short and friendly, not adversarial.
Year in review
After everyone has presented, the last 10 minutes recap what you've covered. Read back in textbook form:
- Phase 1 asked "why care." You saw what ML is, the three flavors of learning, and why data is everything.
- Phase 2 was your first real model. You derived gradient descent from y = mx + b. Some variant of that algorithm now trains nearly every system you use.
- Phase 3 was classification. You derived sigmoid and log loss, implemented logistic regression from scratch, and built the Titanic project.
- Phase 4 was neural networks. You derived backprop using the chain rule, implemented it in numpy, and trained a real neural network on spirals.
- Phase 5 was the modern era. You saw transformers and attention. You can now read "Attention Is All You Need" and follow it.
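The Phase 2 recap above fits in a few lines of numpy: fit y = mx + b by descending the gradient of the mean squared error. This is a sketch with synthetic data (true slope 2, intercept 1, and the learning rate are chosen for illustration):

```python
import numpy as np

# Synthetic data from y = 2x + 1 plus a little noise (values chosen for illustration)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2 * x + 1 + rng.normal(0, 0.1, 100)

m, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    error = (m * x + b) - y
    # Gradients of MSE = mean((m*x + b - y)^2) with respect to m and b
    grad_m = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    m -= lr * grad_m
    b -= lr * grad_b

print(round(m, 2), round(b, 2))  # converges close to the true slope 2 and intercept 1
```

The same loop, with more parameters and the chain rule to route the gradients, is what backprop in Phase 4 generalized.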
What's next
Three concrete recommendations for staying with ML after the year:
- Andrew Ng's Machine Learning Specialization on Coursera, if you want the full math treatment, taught by one of the field's clearest explainers.
- Fast.ai's Practical Deep Learning for Coders, if you want code-first depth. Top-down approach: build big things first, understand them later.
- Build a project of your own. Find a dataset that interests you. Try. The best way to keep learning is to keep doing.
Six months ago, most of you couldn't have explained "machine learning." Now you can derive its core algorithms, implement them in code, and explain what's inside ChatGPT. Whatever you do next, you understand the technology reshaping your world. That's a permanent change.
What you can do now
By the end of this book, if you worked through it consistently, you can:
- Derive linear regression's MSE and its gradient from y = mx + b
- Implement gradient descent in numpy from scratch and watch it converge
- Derive sigmoid and log loss, and explain why they fit together
- Implement logistic regression from scratch
- Detect overfitting and apply L2 regularization
- Implement backpropagation for a 2-layer network
- Use scikit-learn and PyTorch for real projects
- Explain the math of self-attention and outline a transformer
- Articulate the limits of ML (bias, overfitting, hallucinations) with examples
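As one worked example from that list: sigmoid and log loss "fit together" because the gradient of log loss through the sigmoid simplifies to (prediction − label). A minimal numpy sketch on a toy 1-D problem (the data, learning rate, and iteration count are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy 1-D data: the label is 1 exactly when x > 0 (made up for illustration)
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = (x > 0).astype(float)

w, b = 0.0, 0.0
lr = 0.5
for _ in range(1000):
    p = sigmoid(w * x + b)
    # Log loss L = -mean(y*log(p) + (1-y)*log(1-p));
    # its gradient through the sigmoid simplifies to (p - y)
    grad_w = np.mean((p - y) * x)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(w * x + b) > 0.5) == (y == 1))
print(accuracy)
```

That (p − y) simplification is exactly why the pairing is natural: swap in squared error instead and the gradient picks up an extra sigmoid-derivative factor that slows learning when the model is confidently wrong.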
That's a real foundation. It's also a foundation for the rest of your life, whichever way you go. Good luck.
End of textbook.