Google Teachable Machine Basketball Science Fair Project

January 18, 2026

Can AI Recognize Good Basketball Shooting Form?

What if a computer could see the difference between strong basketball shooting form… and form that needs work?


That was the problem statement behind my youngest son's science fair project this year. And as his mom, watching him turn basketball practice into a real research question was one of those parenting moments you don’t forget.


I’m also writing this as the author of Saving Curiosity, a forthcoming book about helping kids stay curious (and confident) in the AI era. This project was a perfect example of what I mean when I say that AI doesn’t have to replace learning; it can create it.


We didn’t start with complicated technology. We started with a simple curiosity:


Can a simple AI model learn to recognize good shooting form using labeled images?


My husband helped coach the form, and that mattered. He’s a former Nebraska high school basketball state champion, so he knows what “good form” really looks like: balanced feet, elbow in, steady release, and a follow-through that stays consistent. So while my son brought the curiosity… Dad brought the championship-level details.


We used Google Teachable Machine, a free tool that lets kids train a mini AI model without coding.


It works like this:


1. You show the computer examples

2. You label them clearly

3. The AI learns patterns and makes predictions


Simple enough for elementary school, but still real machine learning.
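For readers who want to peek under the hood, the three steps above can be sketched in a few lines of Python. This is not Teachable Machine’s actual algorithm (it uses a neural network behind the scenes); it’s just a toy nearest-centroid classifier on made-up “pose” numbers, to show the show-examples, label-them, predict loop.

```python
# Toy illustration of the "show examples, label, predict" loop.
# Each "image" is simplified to two made-up numbers (say, elbow angle
# and knee bend). NOT Teachable Machine's real method -- just a minimal
# nearest-centroid classifier to show the idea.

def train(examples):
    """examples: list of (features, label). Returns label -> average features."""
    groups = {}
    for features, label in examples:
        groups.setdefault(label, []).append(features)
    return {label: [sum(col) / len(col) for col in zip(*rows)]
            for label, rows in groups.items()}

def predict(model, features):
    """Return the label whose average example is closest to these features."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=dist)

# Steps 1 & 2: show the computer examples, with clear labels
training_data = [
    ([85, 120], "good form"),    # elbow in, knees bent
    ([88, 118], "good form"),
    ([140, 170], "needs work"),  # elbow flared, legs stiff
    ([135, 165], "needs work"),
]
model = train(training_data)

# Step 3: the model predicts on a new example it has never seen
print(predict(model, [86, 119]))  # -> good form
```

The real tool does the same thing at much larger scale: it averages out what the labeled examples of each category have in common, then asks which category a new image most resembles.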


For the experiment, we picked clear shooting-form categories the AI could learn. Next, we collected training examples by turning this into an “AI practice drill”: my son recorded short video clips while keeping conditions as consistent as possible, using the same background, similar lighting, a full-body view, one pose at a time, and correct labels every time. The biggest lesson we learned was that AI is only as good as the data you feed it. Finally, we trained the model (which took about a minute) and tested it live with slightly different angles, tiny stance changes, and even a different location to see how well it could recognize shooting form in more realistic conditions.


What we learned was that the AI didn’t magically get smarter; instead, it revealed the real science fair takeaway: data quality matters more than people think. When we controlled the environment, the model improved. When lighting or angles changed, the AI got confused. That wasn’t a failure; it was learning, because that’s exactly how real-world AI behaves: inconsistent data leads to unreliable predictions, while controlled data leads to stronger accuracy.
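You can even simulate this lesson with a toy experiment. The sketch below (synthetic numbers, a simple nearest-centroid classifier, not Teachable Machine’s real model) trains on consistent “poses,” then tests twice: once under matched conditions and once with a systematic shift standing in for changed lighting or camera angle. Accuracy drops on the shifted data, just like it did for us.

```python
import random

random.seed(0)  # reproducible synthetic data

def centroids(examples):
    """label -> average feature vector (a toy 'trained model')."""
    groups = {}
    for features, label in examples:
        groups.setdefault(label, []).append(features)
    return {lbl: [sum(col) / len(col) for col in zip(*rows)]
            for lbl, rows in groups.items()}

def predict(model, features):
    return min(model, key=lambda lbl: sum((a - b) ** 2
               for a, b in zip(features, model[lbl])))

def samples(center, label, n, wobble=2.0):
    """n noisy copies of one pose, taken under 'consistent conditions'."""
    return [([v + random.uniform(-wobble, wobble) for v in center], label)
            for _ in range(n)]

def accuracy(model, data):
    return sum(predict(model, f) == lbl for f, lbl in data) / len(data)

# Train under controlled conditions (same background, lighting, framing).
train_data = samples([85, 120], "good form", 20) + samples([140, 170], "needs work", 20)
model = centroids(train_data)

# Test 1: conditions match training.
clean = samples([85, 120], "good form", 20) + samples([140, 170], "needs work", 20)

# Test 2: a systematic shift stands in for changed lighting/camera angle.
shifted = [([a + 40, b + 30], lbl)
           for (a, b), lbl in samples([85, 120], "good form", 20)] \
        + samples([140, 170], "needs work", 20)

print(f"matched conditions: {accuracy(model, clean):.0%}")
print(f"changed conditions: {accuracy(model, shifted):.0%}")
```

The model hasn’t changed between the two tests; only the data has. That’s the whole point: inconsistent input, unreliable output.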


Why does this project matter beyond basketball? Although basketball was the theme, the skills were much bigger. My son practiced asking a strong research question, building categories and hypotheses, collecting consistent data, testing variables, and using AI as a tool rather than a shortcut. And as a parent, I loved seeing him realize that he can test, adjust, and improve the same way real researchers do.


A quick note from me as the author of Saving Curiosity: this project is exactly what I’m trying to protect in kids, curiosity and the confidence to ask, “What if?”, “How can I test it?”, and “What happens if I change one thing?” That mindset will matter more than ever in the AI era. Our kids won’t just use AI someday. They will need to understand it, challenge it, improve it, and ultimately lead with it.


If this post resonates with you, I would love for you to follow along for more kid-friendly science fair ideas and AI learning projects, and to stay connected as my book Saving Curiosity releases in 2026 and focuses on raising curious thinkers in the age of AI.


Maharlika Connor

Author of Saving Curiosity
