Why AI Still Needs Future Inventors

Demystifying AI for Today's STEM Education Leaders to Inspire Future Inventors

It was 2AM when I sat down to write the email that had been stressing me out for the last week, the cursor blinking in the blue glow of a blank window, almost as though it were waiting, expectantly.

More accurately, this was my tenth attempt at the email, each effort punctuated by a sigh of frustration as I contemplated the exact nuance of one exclamation point, or how a single comma might be interpreted, when all I had managed to type was one solitary sentence.

Every attempt had ended the same way: with me giving up in frustration and wandering downstairs to find Cheez-Its.

But this time, with a deadline looming, I had a different plan. 

(Totally Normal) Feelings and Fears Surrounding AI

With a few keystrokes, ChatGPT popped up. One simple contextualizing prompt later, I had a ready-made email, and after a few quick tweaks that took a grand total of five minutes, I triumphantly hit “send”.

It was a simple brush with artificial intelligence (“AI”), a little boost for a beleaguered human at 2AM. And it was only the latest in a long line of technologies created to make our lives easier: from physically writing a letter by candlelight and mailing it, to sending an email, to now having a first draft of that email written for you.

But AI feels different.

What Exactly is Under the Umbrella of AI?

While it’s easy to understand how you go from written words on paper to typed words on a screen, how a machine “creates” words (sometimes better than you can) feels much murkier. In fact, at around 3AM, while eating Cheez-Its and watching Mission: Impossible – Dead Reckoning, it can feel downright scary.

But in an age of naming fears, it’s hard to even put our fingers on what scares us about AI — maybe in part because AI serves as a large umbrella term for many things.

Some AI systems are like Swiss Army knives (versatile capability-based AI) — they can do a little bit of everything, whether it’s answering your questions, helping with tasks, or playing games.

Then there are other AI systems (functionality-based AI) that are more like a hammer or a screwdriver — great at one specific job, like recognizing faces or translating languages.

Add some of the different branches on top — machine learning (systems that learn patterns from data rather than following hand-written rules), neural networks (a special kind of machine learning model), and large language models (neural networks that specialize in understanding and generating language) — and it can be overwhelming.

But all these pieces work together to create the AI tools we use today.

If AI Should Scare Us, What Precisely Should We Be More Afraid Of?

Siri using machine learning to analyze voice snippets of millions of other people asking for directions home to recognize that I, too, want directions home? Microsoft Copilot using generative AI to create graphics that would have taken a skilled human graphic designer hours to make? Where would more innocuously useful things — like your email filter recognizing that email from a Nigerian prince is probably fake — fit in?

Ultimately, in a world where we feed things into a black box and get out something useful, should we trust something we don’t understand? And maybe the biggest question of all — what do we need to know about AI to use it?

We have enough to worry about nowadays, so adding “figuring out what the next generation of inventors needs to worry about in a world of AI” to the list feels like a step too far.

But that definitely doesn’t stop me — like so many educators and parents out there — from worrying as I fitfully drift off to sleep at 4:30AM amidst a mess of Cheez-It crumbs…

Future Inventors Need Human Skills to Take On AI

So, in honor of the struggle many of us face when we envision the next generation of inventors in a world of AI, here are a few (of many) human skills that working with AI will require us to hone:

understanding when to use AI (and when not!)

If you see AI as a handy technological friend, then almost everything you do can be better with a friend’s help.

From the simple things, like writing a 2AM email, to the more complex things, like helping create startups, figuring out where AI can help can be incredibly valuable.

But there are times when it’s best not to rely on AI, especially when the situation calls for empathy, ethical judgment, or a deep understanding of context. For example, when making decisions that could seriously impact someone’s life, offering emotional support, or dealing with complex issues, AI just can’t replace the human touch.

AI doesn’t understand emotions or the nuances of human values, so using it in these situations could lead to unintended or even harmful consequences. It’s important to know when to keep human oversight front and center, especially when people’s well-being is on the line.

how to ask questions well

My 2AM prompt to ChatGPT may have been simple (“write me an email to a colleague to say hello”) but, as AI-driven companies crop up around the world, crafting those questions well becomes ever more important.

It turns out that, much like the difference between asking a friend who has heard you complain about this colleague’s excessive love of musical theater and asking a random stranger, some assistance is more nuanced than others: the more context you provide, the better the help you get.

In a fun twist, figuring out exactly how to ask for help has become a valuable skill in its own right, one that many companies now hire for under the job title of “prompt engineer”.
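
To make this concrete, here is a minimal sketch of my 2AM request asked two ways, using the openai Python package (the model name and the prompts here are purely illustrative assumptions, not a recommendation):

```python
# The same request, asked vaguely and then with context.
# Uses the openai Python package; assumes OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()

# Bare-bones: the model has to guess at tone, length, and audience.
vague = "Write me an email to a colleague to say hello."

# Contextual: who you are, who they are, and what tone fits.
contextual = (
    "Write a short, warm email (3-4 sentences) to a colleague I haven't "
    "spoken with in a few months. We worked together on a STEM education "
    "project. Keep it casual and end by offering to catch up over coffee."
)

for prompt in (vague, contextual):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

The second prompt does up front what a good friend already knows: who the email is for, what tone fits, and how long it should be. Try both; the difference in the drafts usually speaks for itself.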

understanding biases in AI

One of my favorite phrases in computer science is “garbage in, garbage out” (GIGO): the idea that if your inputs are bad, your outputs will be too. This holds especially true for AI — if a model is trained on incorrect or biased data, it will “learn” to reinforce those biases.

If a tech company trains an AI to make hiring choices using mostly resumes from male applicants, the AI “learns” to select for male applicants, passing over other talented candidates.

In high-stakes worlds like law enforcement and healthcare, historical human biases in training sets can be dangerously amplified by AI.
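
Here is a toy sketch of GIGO in action, using scikit-learn and entirely synthetic, deliberately simplified data (the features, numbers, and setup are all made up for illustration):

```python
# A toy illustration of "garbage in, garbage out": a model trained on skewed
# historical hiring data learns to favor the over-represented group.
# Everything here is synthetic and deliberately simplified.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

skill = rng.normal(0, 1, n)     # a made-up "skill score"
group = rng.integers(0, 2, n)   # group membership: 0 or 1
X = np.column_stack([skill, group])

# Historical labels: past decisions gave group 1 a large built-in advantage,
# independent of skill. This is the "garbage in".
hired = (skill + 2.0 * group + rng.normal(0, 1, n) > 1.0).astype(int)

model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill who differ only in group membership:
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate gets a far higher predicted "hire" probability,
# even though the skill scores are identical. Garbage out.
```

The model was never given a rule saying “prefer group 1”; it simply absorbed the pattern baked into its training data. That is exactly how real-world hiring, lending, and policing systems can quietly amplify historical bias.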

what is real?

Navigating reality gets trickier with AI around.

These days, AI can whip up hyper-realistic images, texts, and even voices that are almost impossible to tell apart from the real thing. This means it's getting harder to figure out what's genuine and what's just been manipulated.

With deepfakes and misinformation spreading like wildfire, even experts are struggling to keep up.

So, as AI keeps advancing, developing the skills to critically assess and discern what's real has become more important than ever.

Teaching and Learning AI with Play Piper

How precisely can we teach these skills, you ask?

Ultimately, this is a question we think about often at Play Piper, and one we’ve been working on for a while.

Curious? Stay tuned to learn more!