
What Happens When AI Stops Being a Tool and Starts Being a Teammate?

[Photo: A person in black stands beside a red TED AI San Francisco banner, next to the Oct 22 event schedule with panels from 1-7 PM.]

That question came alive during the TED AI panel discussions on day 2 of the conference (see my take on day one here), where founders, CEOs, and futurists wrestled with what it means to design, lead, and govern in a world of “alien intelligence,” per Manoj Saxena's definition.


The discussion wasn’t about hype - it was about responsibility, creativity, and adaptation.

Below are the insights that resonated most with me. And because it is about AI, I had to use the tools at hand. 


So I took my handwritten notes (image below), captured with the Nebo app, exported them as a PDF, uploaded the file to ChatGPT, and asked it to pull out the key insights.



I was blown away by its ability to read my handwritten notes, laid out as a mind map, and make sense of them. Of course, the output still required a lot of editing (ChatGPT took 10 seconds; I spent an hour, because some of its work was inaccurate, vague, or missing key insights).


1️⃣ AI is moving from single tasks to systems that become autonomous and learn over time. Claude, for example, can work autonomously for 20 hours. Progress is undeniable, but so are the stakes around risk, alignment, and accountability. It’s not about locking AI down; it’s about ensuring it aligns with human intent and values.


2️⃣ Can governance evolve as fast as innovation? In the panel discussion renamed “Will AI kill us? Should we govern and how?”, opinions ranged from worry that things may “break all loose within two years” (Manoj Saxena) to confidence that we can design systems where levels of risk and trust are matched, because we cannot afford to slow down in this race with other countries (particularly China).


Manoj Saxena's (CEO, Trustwise) perspective is that “AI isn’t a security problem; it’s a safety problem,” and he compares it to a “nuclear facility with no dome on top.”

Eva Nahari emphasized that regulation should be built into AI systems, not added afterward: “Build the control in, make it easier to do the right thing.” Rajeev Ronanki (CEO, Lyric) asked the critical question: “How do we design systems where trust grows faster than risks?”


While China is the first country with a domestic AI governance framework and is pushing for international governance, other countries like the US remain fragmented, with each state taking a different approach, and there is a feeling that the race cannot be slowed down for the sake of legislation. Overall, the panelists made the case for continuous “intent audits,” real-time monitoring, and human oversight as the backbone of safe AI.


3️⃣ The workforce is being reshaped, not replaced. Anthony Abbatiello (Future of Work Leader, PwC) emphasized that, given the pace of AI adoption, “there’s no time to sit back.”


While human creativity, empathy, and communication will matter more than ever, human roles will need to bring clear value to the business. As one speaker warned, “Talent with AI skills will take your job.” It’s not AI replacing people; it’s people who know how to partner with AI who will thrive. And there were shared concerns that entry-level jobs may disappear, making it harder for young people to get the training they need.

Interestingly, some job functions may merge, and intelligence “is not a criteria” anymore (Aparna Chennapragada), since many technical skills can be outsourced to AI.


At the team level, the most effective teams will be those that can orchestrate AI agents to do their best work.


4️⃣ Leadership mindset is the biggest lever. AI transformation isn’t just about technology - it’s about culture. Paul Baier and Prem Natarajan both highlighted the power of experimentation and transparency. Prem posed a simple but piercing question: “Do you want your kids to do this job? If not, AI should.” It’s a call for leaders to focus AI where it can elevate, not erode, human work.


And leaders have a critical role in managing the fear of the unknown and in articulating “the long-term value proposition of the human” (Robin Braun).


5️⃣ The best AI organizations are learning through experimentation. From eBay offering users a faster, AI-powered way to list their products to Oracle’s creative agent platforms, the most successful stories came from teams that experiment constantly. As Paul Baier put it, “Be imaginative with AI - give your people a safe space to play.”


We’re entering an era where human and machine intelligence will co-evolve. As Manoj Saxena put it, “AI isn’t artificial - it’s alien.” It demands humility, curiosity, and collaboration. As Paul Stathacopoulos mentioned, “we should assume there should always be a human in the loop with AI, like with humans.”


Personally, I hope there will be humans in the loop and that we can find a way to collaborate with AI without being excluded, but I wonder how far and how fast this new “species” can grow, and how humans will stay relevant.


What do you think? 


To your creativity,
Helene
