How AI is changing innovation is a lens through which I view everything these days.
While I do not have the answer (nobody does), I hope my recently published book, Fire Up Innovation, will inspire you to stay with the question. By understanding the frameworks and concepts of innovation, you may be better equipped for the exponential pace of change we are facing.
In my everyday discussions, I see many people either ignoring AI altogether or describing how the tools help them get started in their work. I do the same and use AI for research or as a starting point for some of my writing.
However, I am conscious that this is just “baby AI” and that the technology is growing rapidly. We do not know what it will look like in the near future (and it may not resemble something so human).
Here is what I have learned in the past six months:
In April 2023, I attended the TED conference and was shocked by what I heard about AI. Top experts and executives from leading AI companies all shared that the introduction of ChatGPT (and now all the competitors and related applications) was creating the fastest pace of change in the history of humanity. They had no clear idea of where we were headed, nor of the possible impact, both good and bad, it might have on the world.
Six months later, I attended TED AI in San Francisco, where the talks had shifted from AI to AGI (Artificial General Intelligence). While there are still many uncertainties, all speakers agreed that we are going through the biggest change ever. While engineers are excited by the machines and the technology, some voices (mostly women) are raising issues of ethics, fairness, legal concerns, and safety risks.
Here are a few highlights from TED AI:
In the highly optimistic camp, Andrew Ng viewed AI as the universal solution to all current challenges. He said, “AI is not the problem; it is the solution,” and continued suggesting that this is how we will solve global warming.
On the more cautious side, Percy Liang emphasized the importance of all of us having a voice early on. This enables us to create transparency and define the values we want AI to embody (instead of decisions being made solely by “the owners of the castle”). He stressed the importance of attribution so that our work and data are not freely used or discarded.
On the very concerned side, Max Tegmark suggests we need to stop training “larger models we do not understand” and “stop obsessively trying to fly to the sun.” Instead, he recommends a program synthesis approach in which “humans write a spec…and create the tool and proof,” making it easier to verify. Liv Boeree pleaded for “leaders who are willing to flip the Moloch playbook,” refusing to engage in a race she describes as a lose-lose approach. She advocates for organizations taking the time to include enough safety testing and to embrace regulations, even if doing so may not be immediately beneficial to them.
My take is that while AI holds many amazing possibilities, continuing the race with little or no regulation and an insufficient understanding of the impact of these changes raises critical questions for humanity.
There is no going back once AI becomes part of everything beyond tech, from health to utilities and transportation...
The question becomes how we can team up with AI in a way that acknowledges the critical role of humans, and do so in a fair and balanced manner.
While I do not know what this world will look like, I do believe that team collaboration is more critical than ever (or at least for a while) as teams evolve to include AI. Highly functional and diverse teams will be essential to ensure all perspectives are taken into consideration when innovating.
Several chapters of my new book, Fire Up Innovation: Sparking and Sustaining Innovation Teams, are devoted to understanding team diversity broadly and giving teams the frameworks and tools to foster better and more efficient collaboration. You can order a copy here.
How do you think AI will change the way you tackle your innovation projects and teamwork?