The Duality of AI: Co-Intelligence and Co-Dependence

AI is no longer a distant frontier—it’s here, in our offices, classrooms, and even our geopolitical conflicts. The question is no longer whether artificial intelligence will reshape our world, but how far it will go and how quickly.

To explore these facets, DevRev’s Effortless Bay Area 2024 invited Ethan Mollick, an Associate Professor at the Wharton School and one of the most influential voices on AI today, to elaborate on “co-intelligence”—the seamless partnership between human creativity and machine learning.

Joining him was Anja Manuel, co-founder of Rice, Hadley, Gates & Manuel LLC and a foreign policy expert, who discussed “co-dependence”—how nations rely on one another for critical advancements and resources in trade, technology, and security—and how AI is viewed from a national security perspective.

Together, Mollick and Manuel painted a complex portrait of AI’s duality: a future where innovation and risk walk hand-in-hand.

Missed DevRev Effortless Bay Area? Watch the event here.

Steam power of our times: How AI is outperforming past industrial revolutions

Mollick, who was named one of TIME’s Most Influential People in AI and authored the bestselling book Co-Intelligence: Living and Working with AI, opened on an amusing yet sobering note. A few minutes into his presentation, he played two videos of himself speaking fluent Italian and Hindi, which the real Mollick later revealed to be AI-generated fakes created from 30-second audio and video samples.

“You can’t trust anything you see ever again,” he declared as he went on to illustrate how the use of AI has become undetectable, ubiquitous, and transformative. “Our world is AI-contaminated. We’re not going to go back from that.”

But it wasn’t all doom and gloom: Mollick went on to discuss an experiment he ran with Boston Consulting Group (BCG) to gauge AI’s positive impact on work and productivity.

In an experiment involving roughly 8% of BCG’s global workforce, employees were given 18 business tasks to complete—half with the help of “plain vanilla ChatGPT,” and the other half using GPT-4. The results were staggering: those with access to GPT-4 saw a 40% improvement in the quality of their work, completed tasks 26% faster, and produced 12.5% more work.

He then compared these productivity gains to those seen in the early days of the Industrial Revolution. “To put this into context, when steam power was put to a factory in the early 1800s, it improved performance by 18-22%,” Mollick noted, contrasting that with the 40% jump in work quality seen in the experiment.

“So, these are historic numbers. We don’t even know what to do with them right now,” he remarked.

AI’s creative edge: Outsmarting humans in creativity

Mollick next delved into how AI is impacting creative fields, an area once thought to be the exclusive domain of human intellect.

At Wharton, for instance, students were asked to generate 200 startup ideas using traditional techniques. AI was given the same task. “Out of the top 40 ideas, 35 came from AI, and only 5 from the humans in the room,” Mollick said, underscoring just how far AI has come in fields that were once considered beyond its reach.

“GPT-4 beats almost all humans in terms of creativity,” Mollick added. In one study, participants were asked to debate either a human or an AI. “If you survey people and ask them to debate either the AI or an average human, they’re 81.7% more likely to change their view to the AI’s view than the average human’s,” he revealed.

Mollick was clear that these results have profound implications for industries that rely on creativity, innovation, and persuasion. “We’re changing people’s deeply held beliefs with just a short interaction,” he said. And while this can be positive—AI has been shown to reduce belief in conspiracy theories after just one debate—Mollick acknowledged that it raises ethical and philosophical questions about the role of human agency in an AI-driven world.

The future of work: AI agents and autonomous systems

Looking ahead, Mollick predicted that AI would soon play an even larger role in how we work. He pointed to the development of AI agents, autonomous systems that can perform complex tasks without human intervention.

Mollick described an experiment he conducted with an AI system named “Devin,” which was tasked with creating a website. “What I did was give it an instruction: Create a website that analyzes 10-Ks. It goes ahead and comes up with a plan to build a website, builds the website, checks back in with me, and then builds the front end, the back end, and builds the site for me—without me asking for any more details,” Mollick explained.

He also shared a humorous anecdote about how Devin went so far as to post an ad on Reddit offering its web development services, even setting its own rate—$50 to $100 an hour—without consulting him.

These AI agents are still in the experimental phase, but Mollick predicted that by 2025, they would be commonplace in many industries. “GPT-4 is not powerful enough yet to make all that happen, but this is the explicit goal for 2025—for the AI companies to have these agents that are out there doing work on our behalf.”

Mollick’s four principles for using AI

Mollick ended his talk with practical advice for how individuals and organizations should approach AI.

“Nobody has all the answers for AI. There’s no hidden rulebook you don’t have access to. Nobody knows what’s coming next. Nobody knows how to use it in your industry or your job,” he emphasized, adding that even AI leaders like Anthropic, Google, and OpenAI don’t really know what the best use cases are.

He laid out four principles mentioned in his book Co-Intelligence that he believes are essential for navigating the rapidly changing landscape of AI.

Invite AI to everything


Mollick’s first principle is to experiment with AI as much as possible—to use AI for everything one legally and ethically can. “The way to figure out what’s good and bad for AI is just to use it,” he said. By integrating AI into every task where it makes sense, individuals and companies can learn its strengths and weaknesses.

Be the human-in-the-loop


Mollick stressed that people should focus on what they’re best at—the areas where their human skills are most valuable. “Whatever you’re in the top percentiles for, the AI won’t beat you on that,” he said. The key is to delegate routine tasks to AI so that humans can focus on higher-level work.

Tell it who it is (and treat it like a person)


According to Mollick, one of the biggest mistakes people make is treating AI as traditional software. “Software shouldn’t refuse to work for you or argue with you,” he said. “Treat it like a person, and you’ll be more effective at working with it.”

This is the worst AI you will use


Mollick’s final principle was a reminder that AI is still in its early stages, and what we’re seeing today is just the beginning. “Everything I’m showing you is already obsolete,” he said. As AI continues to evolve, the systems we use today will look primitive in comparison to what’s coming next.

To keep pace with, and gain some control over, the accelerating development of AI, Mollick urged companies to incentivize people inside the organization to experiment and innovate with different AI models so they can identify the use cases specific to them.

He cited the DevRev use cases mentioned in the preceding sessions as an example of this approach. “I got to hear some of the DevRev launches that just happened, and it was really interesting because that’s an example of taking these generalized applications and putting them to a specific use that’s useful to customers,” he said.

Mollick concluded with an appeal to not just the participants of Effortless Bay Area, but to everyone using AI: “I’d urge you to think about how you can use these tools to improve performance in a way that helps everybody and shows a path forward for human thriving and success.”

AI at the crossroads: Of rising geopolitical tensions and the need for regulation

Following Mollick, the Effortless Bay Area stage was set for a fireside chat on co-dependence between Anja Manuel and DevRev’s CEO Dheeraj Pandey. Drawing on her experience as a member of the Defense Policy Board of the U.S. Department of Defense (her views do not represent those of the department), Anja laid out a sobering perspective on the current geopolitical landscape and the broader implications of rapid, large-scale AI adoption.

“Geopolitics right now is not effortless,” she quipped, as she talked about the intricate web of alliances and adversarial relationships among nations, including the roles of China and Russia.

The strangulation strategy: China’s slow move on Taiwan

Anja offered an assessment of China’s potential approach to Taiwan. She dismissed the notion of an immediate military invasion, explaining instead that China is more likely to employ what she calls a “strangulation strategy.”

Since 2022, China has been steadily increasing its military presence around Taiwan. “They’ve circled the island and occasionally done military exercises, which makes it hard to land at the Taipei airport, hard to get ships in and out,” Anja said.

This creeping tactic, she suggested, is designed to slowly isolate Taiwan, shifting flight hubs and supply chains elsewhere, which would weaken the island’s economy and make it easier for China to exert control over time.

Anja made it clear that this slow-burn approach could have enormous consequences for the global tech industry. “As you know, the vast majority of the world’s advanced chips are made there [in Taiwan],” she said. Any disruption in Taiwan’s semiconductor production could send shockwaves through the global economy, given how reliant the tech sector is on the island’s manufacturing capabilities.

A world of conflict and interdependence

Anja also explained how the U.S. and its allies have strengthened their ties in the face of Russian aggression.

“The Russia-Ukraine conflict has pushed us and our allies closer together. The U.S. and Europe are closer than ever,” Manuel said.

She also pointed out that the competition with China is doing something similar, bringing together countries like Japan and Korea—nations with historically fraught relations—under a common goal of countering China’s growing influence.

Anja highlighted the growing importance of alliances in this environment, particularly the Quad, which includes the United States, Japan, India, and Australia. “Fifteen, even 10 years ago, we would’ve said that’s too aggressive vis-à-vis China,” she said. But as China continues to adopt a more adversarial stance, these alliances are becoming more critical to maintaining regional and global stability.

AI regulation: Why we need it now

Perhaps the most urgent part of Anja’s message came when she turned to the topic of AI regulation. “In my world, the national security world, my colleagues kind of see the dark side of the moon,” she said, noting that rapid strides in AI innovation cannot continue unchecked amid the prevailing optimism about its capabilities.

She compared the lack of AI regulation to the drug industry, which requires extensive testing before a product can be released.

“You don’t put a dangerous drug on the market without having testing. You probably don’t want the most advanced AI on the market without basic safety testing,” she said.

Anja highlighted a key concern: AI’s double-edged role in cyber warfare. The technology’s potential to enhance cyberattacks presents a significant challenge. AI systems, while beneficial for business growth, can simultaneously amplify the efficiency and scale of malicious cyber activities.

This duality poses a complex dilemma for governments worldwide, as they struggle to harness AI’s benefits while mitigating its risks in the cyber domain.

In addition to cyber risks, Anja also warned about the intersection of AI with biological and chemical weapons. “You still need a wet lab to create those things, but now it’s like having a PhD student on your shoulder as you’re trying to come up with the next anthrax,” she said, highlighting the ease with which AI could accelerate harmful developments in these fields.

Anja expressed cautious optimism about the steps some governments are taking. She pointed to the United Kingdom’s efforts in establishing an AI safety institute as a positive step forward, but stressed that these kinds of safety checks should be mandatory, not voluntary. The testing, according to Anja, can be done quickly without slowing down innovation: “The safety institutes are doing that testing in days or weeks, so it doesn’t slow you down.”

While the tech community often debates whether open-source or closed models pose the greater threat, Anja believes that regulation should apply across the board.

“It should be on open-source as well as closed models,” she said, making the case that even open-access tools can be misused if proper safeguards aren’t in place.

In their discussions at Effortless Bay Area 2024, Ethan Mollick and Anja Manuel presented AI as a dual force reshaping both business and global dynamics. Mollick’s vision of “co-intelligence” demonstrated how AI is accelerating productivity and creativity in unprecedented ways, offering businesses an opportunity to work alongside AI and achieve more with less effort.

In contrast, Anja addressed AI’s growing role in “co-dependence” on the geopolitical stage, particularly how AI is entwining the fates of nations. She cautioned that, while powerful, AI also lends itself to cyber warfare, strategic military use, and economic manipulation, making regulation urgent. She argued that even adversarial nations are bound by a reliance on shared technologies and resources—a co-dependence that AI both magnifies and complicates.

Together, Mollick and Manuel’s perspectives underscore AI’s duality: as a tool that enhances human potential and as a force requiring thoughtful oversight to prevent conflict. This dual approach to “co-intelligence” and “co-dependence” presents AI as both an asset and a responsibility, urging us to navigate its growth with care and cooperation.

Here’s the blog on Effortless Bay Area 2024 in a nutshell.

Akileish Ramanathan
Marketing at DevRev

A content marketer with a journalist's heart, Akileish enjoys crafting valuable content that helps the audience separate signal from noise.