This could be humanity's greatest threat – and it's not climate change

Once confined to science fiction, artificial intelligence is today transforming how firms do business in ways never before thought possible. The revolution is already underway, with thousands, if not millions, of firms deploying AI and machine learning to enter new markets, learn more about their customers and reduce operating costs.

While visions of terminators and androids dominate the popular imagination, the reality is, for the most part, less outlandish, with chatbots, language and voice-recognition tools and predictive customer service forming the basis of a new way to do global business.

The upside of AI’s rapid rise is clear, but dangers, often unforeseen, also exist. These include more automated weapons, supercharged state surveillance, highly realistic fake news, and privacy concerns from governments scraping data off citizens’ digital devices.

“People have conscience, empathy and a sense of right and wrong; machines don’t.” – Mohan Thite

For businesses there are also risks in rushing to AI, starkly illustrated by the case of UK political consulting firm Cambridge Analytica, which became engulfed in a global scandal after harvesting the Facebook data of millions of people without their consent to fuel its AI-driven profiling.

AI may be ‘humanity’s greatest threat’

Rob LoCascio, Founder and CEO of tech giant LivePerson, concedes that AI could become “humanity’s greatest threat”, but nonetheless remains upbeat about its potential.

The New York native is especially positive about its role in business, particularly its application to so-called “conversational commerce” – the trend towards consumers interacting with businesses through messaging and chat apps.

When it comes to the ethics of this type of artificial intelligence-driven tool, LoCascio says the centrality of humans in its deployment acts as a bulwark against machine-led misuse. “The internet could go away very quickly and be replaced with conversation-based user experiences, but that would need designers – the people who are behind the technology,” he tells The CEO Magazine.

“We’re going to need people creating those experiences until we get to the point where all those conversational experiences are created and we get on to automating.” But even with people designing and overseeing artificial intelligence, LoCascio admits there are issues that must be addressed if the technology is to remain ethical.

One such issue is what he terms “biased AI”, that is, the tendency for humans to import their own prejudices, whether racial, gendered or ideological, into the programs they build.
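To make the mechanism concrete, here is a minimal, hypothetical sketch – not drawn from LoCascio or EqualAI, with invented feature names and synthetic data – showing how a model trained on historically skewed hiring decisions reproduces that skew even when the protected attribute is withheld, because a correlated proxy leaks it back in:

# Hypothetical sketch only: synthetic data, illustrative feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)               # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)             # genuinely job-relevant signal
postcode = group + rng.normal(0.0, 0.3, n)  # proxy correlated with group

# Historical decisions rewarded skill but penalised group 1.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

# Train only on the "neutral" features; the proxy leaks the bias back in.
X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
# Group 1's predicted hire rate stays well below group 0's:
# the historical prejudice has been imported into the program.

The point of the sketch is that prejudice rarely arrives as an explicit rule; it rides in on the training data, which is why removing a sensitive field from a model’s inputs is not, by itself, enough.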

It’s for this reason that the LivePerson boss founded EqualAI, an organisation that brings together experts, influencers, technology providers and businesses to write standards for creating unbiased AI that brands can fully trust. “I want to democratise AI,” LoCascio explains.

“I want to put it in the hands of every company in the world and I want people within those companies to use it to better the communications they’re having with their customers.”

LivePerson is far from the only tech heavyweight that knows artificial intelligence needs ethics for consumers to fully embrace it. Google, for instance, recently announced it was launching a global advisory council to consider ethical issues around AI, such as racial bias, while Facebook was forced to address the issue head-on when a prototype of the firm’s video-chat device, Portal, had trouble identifying people with darker skin.

“The internet could go away very quickly and be replaced with conversation-based user experiences, but that would need designers – the people who are behind the technology.” – Rob LoCascio

In addition to technical fixes, solutions to these problems are likely to involve changing the face of workplaces in the tech space. That’s because, according to a recent report from New York University’s AI Now Institute, the tech industry’s mainly white, male coding workforce – only 15% of AI research staff at Facebook and 10% at Google are women – is driving much of the current bias in algorithms.

In addition to tech behemoths working on AI ethics, some of the globe’s wealthiest individuals are also doing their part. Stephen Schwarzman, US billionaire and Co-Founder of private equity firm Blackstone, recently gave Oxford University US$188 million (A$278 million) for a new institute to explore the ethics of AI, while eBay Founder Pierre Omidyar has set up London-based organisation Luminate, which works to support data and digital rights.

Indeed, philanthropy targeted at making AI ethical kicked off in a major way back in 2015, when Tesla billionaire Elon Musk and US entrepreneur Sam Altman established OpenAI, a not-for-profit promoting “safe artificial general intelligence”.

Issues go beyond ‘AI bias’

For Griffith University’s Mohan Thite, bias is just one of the headaches facing businesses keen to embrace AI. He identifies three further dangers the technology poses: AI-engineered social media campaigns, data manipulation and information security breaches.

Thite says to combat these risks, CEOs – including those of leading global tech firms like Google, Apple, Microsoft, Facebook and Amazon – need to start partnering with independent “civic liberty agencies” to shape the future of “ethically driven AI tools”.

It’s essential, he says, that these partnerships keep humans, not machines, at their core. “AI is designed by people but run by machines,” he tells The CEO Magazine.

“People have conscience, empathy and a sense of right and wrong; machines don’t. Therefore, leadership plays a major role in mandating that people keep ethical considerations at the forefront of their design, coding and deployment of AI.”

Stephen Molloy, author of How Apps Are Changing The World, echoes Thite’s sentiments, arguing that good AI ethics boils down to good business leadership – by real people.

“I don’t believe AI will ever fully develop as a separate thing from people because I don’t think we’ll ever allow it to.” – Stephen Molloy

Molloy, who also owns Australian app development firm LOMAH Studios, says artificial intelligence “loses its integrity” when lead designers “program bad culture” into what they build. “People only lessen their integrity when senior counterparts have programmed bad culture or something goes wrong. This is the same with programming AI,” he tells The CEO Magazine.

Along with suboptimal culture, a lack of budget and time and the absence of a business case for considering the ethical issues around AI also contribute, he says.

Despite the pitfalls, Molloy cautions against entertaining “nightmarish scenarios” around artificial intelligence. “What everyone fails to appreciate in these fever dreams is that human beings are the most adaptable, clever and aggressive predators in the known universe,” he says.
