Google CEO Sundar Pichai was recently put in the awkward position of explaining to Congress why photos of Donald Trump appeared when people searched for the word “idiot” on Google’s world-famous search engine.

On Tuesday, December 11, 2018, responding to Rep. Zoe Lofgren’s (D-Calif.) tongue-in-cheek suggestion that “a little man sitting behind a curtain” was manipulating Google’s SERPs, or search engine results pages, Pichai said that Google doesn’t “manually intervene” in searches.

Pichai explained:

“We provide search today. Any time you type in a keyword, we, as Google, have gone out and crawled and stored copies of billions of [websites’] pages in our index, and we take the keyword and match it against the pages and rank them based on over 200 signals.

“Things like relevance, freshness, popularity, how other people are using it. And based on that, you know, at any given time, we try to find the best results for that query.”
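To make that description concrete, here is a minimal, purely illustrative sketch of keyword matching combined with signal-based ranking. The signals, weights, and scoring function below are invented for illustration; this is not Google’s actual algorithm, and it bears no resemblance to a system built on more than 200 signals.

```python
from dataclasses import dataclass


@dataclass
class Page:
    """A toy indexed page with two hypothetical precomputed signals."""
    url: str
    text: str
    freshness: float   # hypothetical: 0.0 (stale) to 1.0 (just published)
    popularity: float  # hypothetical: 0.0 (obscure) to 1.0 (widely linked)


def relevance(page: Page, keyword: str) -> float:
    """Crude keyword-match score: the share of words on the page that match."""
    words = page.text.lower().split()
    return words.count(keyword.lower()) / len(words) if words else 0.0


def rank(pages: list[Page], keyword: str) -> list[Page]:
    """Combine the toy signals with invented weights and sort best-first."""
    def score(page: Page) -> float:
        return (0.6 * relevance(page, keyword)
                + 0.2 * page.freshness
                + 0.2 * page.popularity)
    return sorted(pages, key=score, reverse=True)


if __name__ == "__main__":
    index = [
        Page("https://example.com/a", "fresh election news and analysis", 0.9, 0.4),
        Page("https://example.com/b", "news news news about old news", 0.3, 0.8),
    ]
    for page in rank(index, "news"):
        print(page.url)
```

The point of the sketch is only the shape of the process Pichai described: match a keyword against stored pages, score each page on several signals, and return the results in ranked order.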

Pichai has also warned that humanity should be concerned about the potential downsides of artificial intelligence.

In an interview with The Washington Post, Pichai said that AI tools will need a set of ethical guardrails and will require companies to think deeply about how the technology can be abused; otherwise, we’re all in for a world of trouble.

“I think tech has to realize it just can’t build it and then fix it,” Pichai said. “I think that doesn’t work.” He added that tech giants in positions of power must ensure that artificial intelligence with “agency of its own” doesn’t harm humankind.

The tech executive, who himself holds a position of power and runs a company that uses AI in many of its products, including its powerful search engine, said he’s optimistic about the long-term benefits of technology. However, his assessment of AI’s potential downsides echoes that of critics who have warned about the potential for misuse and abuse.

Advocates and technologists have warned that the power of artificial intelligence could be used to embolden authoritarian regimes, enable mass surveillance, and spread misinformation, among other possibilities.

In fact, SpaceX and Tesla founder Elon Musk once said that AI could one day prove to be “far more dangerous than nukes.”

Pichai told the Post: “Sometimes I worry people underestimate the scale of change that’s possible in the mid- to long-term, and I think the questions are actually pretty complex.” He added that AI, if handled properly, could have “tremendous benefits,” including helping doctors detect issues such as eye disease and other ailments through automated scans of health data.

“Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he told the newspaper. “This is why we’ve tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”

Pichai, who joined Google in 2004 and became the company’s CEO just 11 years later, called AI “one of the most important things that humanity is working on” and said the technology could be “more profound” for human society than “electricity or fire.”

However, Pichai acknowledged that even with scientists’ good intentions, the race to build machines that can operate without human intervention has rekindled fears that Silicon Valley’s disruption culture could someday result in technology that harms people and eliminates jobs.
