Build a Real Model with Google Teachable Machine
So far you've learned about machine learning. Now it's time to build. By the end of this lesson, you will have trained your own ML model that recognizes things from your webcam — no code, no math, no fees. The tool is Google Teachable Machine, a free browser-based platform that's been used in over 100 countries to teach the basics of ML.
This is the lesson where ML stops being abstract and becomes something you can actually point at and say: "I made that."
What You'll Learn
- How to train an image classifier in 10 minutes using Google Teachable Machine
- Why this teaches you the same workflow professional ML engineers follow
- How to export and embed your model in real apps (still no code)
- Three project ideas you can complete this week and add to your portfolio
Step 1: Open Teachable Machine
Go to teachablemachine.withgoogle.com. No sign-up required. You'll see three project types:
- Image Project — train on photos (we'll use this)
- Audio Project — train on sounds
- Pose Project — train on body positions
Click Image Project, then Standard image model.
Step 2: Define Your Classes
A "class" is a category your model will learn to recognize. By default Teachable Machine gives you two — Class 1 and Class 2 — but you can add more.
For your first project, let's build a "Pen vs No Pen" detector (or something equally silly that you can demo to friends). Rename:
- Class 1 → Holding pen
- Class 2 → Empty hand
The trick is to pick classes you can actually capture with your webcam right now.
Step 3: Capture Training Data
For each class, click Webcam to capture image samples. Aim for about 30–50 images per class to start (you can always add more).
Best practices:
- Vary the angle, lighting, and background slightly
- Move your hand to different positions in the frame
- Capture a few "borderline" cases (e.g., a pen barely visible)
You can also upload images instead of using your webcam. This is useful if you want to train on categories you can't easily put in front of your webcam (food types, plant species, brand logos).
Step 4: Train the Model
Click Train Model. This takes about 30 seconds. Behind the scenes, Google's servers are training a small neural network (a transfer-learning model based on MobileNet) on your images. You don't have to know what those words mean yet — just notice how fast it is.
You'll see a progress bar, then the model is ready.
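Teachable Machine hides the code, but the idea behind transfer learning can be sketched in a few lines of plain Python: a frozen "feature extractor" stands in for MobileNet, and only a tiny classifier on top of it gets trained. Everything below (the fake two-number features, the toy data) is invented for illustration; it is not what Google's servers actually run, just the shape of the technique.

```python
import math

def frozen_features(image):
    # Stand-in for MobileNet: turns an "image" into a fixed feature vector.
    # In real transfer learning these layers are pre-trained on millions of
    # photos and are NOT updated when you click Train Model.
    return [image["brightness"], image["edges"]]

# Toy labeled data: 1 = "Holding pen", 0 = "Empty hand" (values invented)
data = [
    ({"brightness": 0.9, "edges": 0.8}, 1),
    ({"brightness": 0.8, "edges": 0.9}, 1),
    ({"brightness": 0.2, "edges": 0.1}, 0),
    ({"brightness": 0.1, "edges": 0.2}, 0),
]

# Only this tiny classification "head" is trained -- which is why
# Teachable Machine finishes in seconds rather than days.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for img, label in data:
        x = frozen_features(img)
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))  # sigmoid
        err = p - label
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# Predict on an "image" the model has never seen
x = frozen_features({"brightness": 0.85, "edges": 0.75})
p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
print(f"P(holding pen) = {p:.2f}")
```

The design point to notice: the expensive part (the feature extractor) is reused, and only a small head is fit to your 30–50 images. That's the whole trick behind the 30-second training time.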
Step 5: Test It Live
The right side of the screen shows a live webcam preview with confidence bars under each class. Hold the pen and the bar for "Holding pen" should jump to nearly 100%. Drop the pen and the other bar should win.
Notice this is inference in action. The model is making predictions in real time on data it has never seen. This is the moment where ML clicks for most people.
If it gets confused, that's a learning opportunity:
- Add more diverse training images
- Add a third class for ambiguous cases
- Make sure your training data wasn't all in identical lighting
You're now doing what's called iterative model improvement — the core loop of every ML project.
Step 6: Export and Use Your Model
Click Export Model in the top right. You have several free options:
- TensorFlow.js — run the model in any web page
- TensorFlow Lite — run on mobile devices
- TensorFlow — for desktop / server use
If you click Upload (shareable link), Google hosts your model and gives you a URL. You can drop that URL into:
- Glitch (glitch.com) — copy a Teachable Machine starter project, paste your URL, and you have a live web app in minutes
- Scratch — MIT's block-based coding tool for kids; community-built extensions add Teachable Machine support
- AppSheet — Google's no-code app builder, can call your model
You've now exported a real ML model and can deploy it in a real app. This is what some companies pay engineers to do.
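Under the hood, the shareable link is just a folder with two files next to it: model.json (the network) and metadata.json (your class names). Apps load your model by appending those filenames to the link. A small sketch of that URL construction (the model ID below is made up):

```python
from urllib.parse import urljoin

def model_files(share_url):
    # Teachable Machine's hosted export serves model.json and metadata.json
    # directly under the shareable link.
    base = share_url if share_url.endswith("/") else share_url + "/"
    return urljoin(base, "model.json"), urljoin(base, "metadata.json")

# Hypothetical shareable link (the ID "AbC123xYz" is invented)
model_url, metadata_url = model_files(
    "https://teachablemachine.withgoogle.com/models/AbC123xYz")
print(model_url)
print(metadata_url)
```

This is why pasting your URL into a starter project is enough: the app already knows which two filenames to fetch.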
What You Just Did (in ML Vocabulary)
Here's the same workflow in the words a recruiter or hiring manager will recognize:
- Defined the problem as image classification with two classes
- Collected and labeled training data via the webcam
- Trained a transfer-learning model (MobileNet under the hood)
- Validated the model on live test inputs
- Iterated to improve performance on edge cases
- Exported the model for deployment in a downstream app
That's a complete supervised ML pipeline. You can put this on your resume.
Three Project Ideas to Try This Week
Pick one and actually finish it. Working projects beat half-finished ones every time.
1. Recycling Sorter (Sustainability)
Classes: paper, plastic, metal, glass. Use it to demo how AI could help in waste sorting. Bonus: deploy it as a web app friends can try.
2. Posture Detector (Health)
Use Pose Project instead. Classes: good posture, slouching, leaning forward. Have it run continuously and remind you to sit up straight.
3. Custom "Yes / No / Maybe" Hand Gesture App
Train classes for thumbs up, thumbs down, flat hand. Use it as a fun voting tool, presentation controller, or party trick.
For each idea, capture diverse training images and deliberately try to fool your model — that's how you find weaknesses.
Why This Project Matters
Three reasons it's worth more than a typical tutorial:
- It proves ML is approachable. You went from zero to deployed model in under an hour.
- It builds intuition. Watching the confidence bars move teaches you something equations can't.
- It's resume-worthy. "Built and deployed an image classification model using transfer learning on Google Teachable Machine" is real, true, and impressive on a beginner's CV.
Cross-Reference with AI Tools
Try this prompt in ChatGPT or Claude:
"I just trained an image classifier on Google Teachable Machine using 50 webcam images per class. The underlying model is MobileNet via transfer learning. Explain in 5 short paragraphs:
- What 'transfer learning' means in plain English
- Why 50 images is enough (it would not have been 10 years ago)
- What MobileNet is and why it's fast
- What I should be skeptical about with such a small training set
- What I'd need to do differently for a production deployment"
You'll fill in the technical context behind your hands-on experience — and that pairing is how real understanding grows.
Key Takeaways
- Google Teachable Machine lets you train a real image / audio / pose classifier in your browser, free, with no code
- The workflow you used (define classes → collect data → train → test → iterate → export) is the same workflow professional ML engineers follow
- You can export your model and embed it in real web apps via tools like Glitch
- Three classroom-ready projects: recycling sorter, posture detector, hand-gesture app
- Pair the hands-on work with AI-tool follow-up prompts to build intuition for the underlying ML concepts
Next lesson: predictions inside spreadsheets you already use. Google Sheets and Excel have hidden AI features that let you predict, classify, and forecast — also without writing code.

