
Ethical innovation in AI: Why is it so elusive?

Do you remember the sensation around the launch of Microsoft's chatbot, Tay? It was an exciting breakthrough in AI, and people across the globe logged on to chat with Tay and test her knowledge. Within a matter of hours, however, Tay began parroting racist remarks, fed by inputs from the very users she was learning from. The Tay incident illustrates both the inherent fears people have about AI and one of its key flaws: AI systems learn from humans and inevitably reflect our own biases, be it around race, gender or socio-economic status. The problem is that the inferences and results an AI algorithm generates depend entirely on the data fed into and consumed by it. This poses a significant risk of bias, whereby results can be skewed to suit any one person's goals. Is this cause for concern? It could be, particularly if those goals are malicious.
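To make the mechanism concrete, here is a minimal, purely illustrative sketch of how a model trained on skewed historical data reproduces that skew in its decisions. All data, feature names and numbers are invented for illustration, not drawn from any real system:

```python
# A minimal sketch of how sampling bias leaks into a model's decisions.
# The dataset, feature names and numbers here are all invented for
# illustration, not taken from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# "group" stands in for a protected attribute; "skill" is the trait we
# actually want the model to reward.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Biased historical labels: group 1 was approved less often than its
# skill alone would justify.
labels = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([group, skill]), labels)

# The model has learned to penalise group 1: same skill, lower odds.
applicants = [[0, 1.0], [1, 1.0]]
print(model.predict_proba(applicants)[:, 1])  # group 0 scores higher
```

The point of the sketch is that nothing in the algorithm itself is prejudiced; the bias arrives entirely through the training labels, which is exactly what happened with Tay.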

So, even as businesses evaluate how they can use automation, robotics, machine learning, and artificial intelligence to drive business growth, it is important to be aware of the risks and prepare for them.

Man versus machine: re-skilling human resources

In the UK alone, nearly 30% of jobs will be at risk from automation by 2030, particularly in the transportation, retail and manufacturing sectors. Many of the jobs taken over by automation, robots and AI will be repetitive ones, so the key question on everyone's mind is: what will the displaced workforce do? It falls to organisations to re-skill or up-skill their human resources and find new roles for such employees. In fact, companies such as Boxed are already setting an example of how to achieve this. Boxed, an online retail shopping company, recently automated its entire fulfilment centre, but instead of downsizing, it created new roles in customer service, troubleshooting and equipment servicing for all displaced employees. Thus, just as organisations aggressively find ways to adopt AI, they also need to take proactive steps, through relevant training, to ensure their employees are ready for other roles and prepared for the disruption AI will bring.

Killer robots and AI-based viruses: reviewing security

Some leaders and thinkers of our time have publicly voiced their fears of AI-led destruction through killer robots. Despite the benefits of AI, any tool in the wrong hands can be wielded to cause more harm than good. This threat also holds true for cyber security: used maliciously, AI products can take cyber crime to the next level, posing serious concerns for industries. In fact, Symantec has already warned of the possibility of increasingly sophisticated AI-based viruses and attacks designed to steal personal data and compromise networks.

So, organisations will need to review their approach to security and take appropriate measures to safeguard their data from such attacks. The recent attack on Equifax, in which hackers accessed the personal data of nearly 143 million customers, highlights how pressing this need is. Despite being aware of a vulnerability within its applications – the flaw the hackers exploited – Equifax was unable to patch the issue in time, leading to a massive data breach. Robust security solutions that use machine learning algorithms can accelerate the rollout of security patches and help prevent such attacks. The need for intelligent protocols will only grow as cyber crime becomes more sophisticated, with AI used to mask malware or even to train machines to hack other machines.
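To illustrate the kind of machine-learning defence described above, here is a hedged sketch of an anomaly detector flagging unusual network activity so it can be investigated and patched first. The feature columns and values are synthetic placeholders of my own choosing, not any vendor's actual product or telemetry:

```python
# A sketch of ML-assisted security monitoring: an anomaly detector
# trained on normal traffic flags outliers for priority investigation.
# All feature values are synthetic placeholders, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Columns: requests per minute, bytes transferred, failed logins.
normal_traffic = rng.normal([50, 2_000, 1], [10, 400, 1], size=(1_000, 3))

detector = IsolationForest(contamination=0.01, random_state=1)
detector.fit(normal_traffic)

# A burst of failed logins and heavy transfer should stand out.
suspicious = [[55, 9_000, 40]]
print(detector.predict(suspicious))  # -1 flags an anomaly
```

A detector like this does not replace patching; it shortens the window between an exploit appearing and a human noticing it, which is precisely the window Equifax missed.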

Regulation versus innovation: defining the rules

As a nascent technology, AI continues to confound regulators from a compliance perspective. What kind of controls are needed, and how can they be enforced? Will regulations be global or industry-specific? For instance, will the requirements for manufacturers using AI differ from those for healthcare providers? While governments and regulatory bodies face pressure to institute regulations, others deem them unnecessary, arguing that enforcing rules at this stage will stifle AI-led innovation.

In my opinion, AI-led innovation must be nurtured while ensuring fairness. A step in the right direction is the formation of the Partnership on AI to Benefit People and Society. Comprising some of the leading minds, companies and institutions of our age, this body aims to ensure that AI is developed and applied in a safe and ethical manner.

Beyond these concerns, AI holds immense promise. It has the potential to address some of the most pressing problems we face today, such as poverty, inequality and climate change. From an industrial standpoint, it can revolutionise the way we do business and truly amplify human potential. Many Silicon Valley companies, such as Apple and Facebook, are already investing in AI innovation to improve their products and enhance customer experience. Recently, Amazon Web Services released several new machine learning features that help its customers easily and affordably build in-house AI algorithms and applications. Such general-purpose AI tools can be customised to suit the needs of different organisations, expanding the reach of AI across industries.
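As one concrete illustration of such general-purpose tools, a managed AWS machine-learning service can be called with just a few lines of code. The choice of service (Amazon Comprehend) and the sample input below are my own example, not a claim about any particular customer's deployment:

```python
# A sketch of calling a managed AWS machine-learning service from
# application code. Assumes boto3 is installed and AWS credentials are
# configured; the feedback text is an invented example.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

feedback = "The new self-service portal is fast and easy to use."
result = comprehend.detect_sentiment(Text=feedback, LanguageCode="en")

print(result["Sentiment"])       # e.g. POSITIVE
print(result["SentimentScore"])  # per-class confidence scores
```

The appeal for most organisations is exactly this: no models to train or host in-house, just an API call that can be folded into existing customer-facing systems.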

Ultimately, it is up to us – the users, developers and innovators – to ensure AI's ethical development and use, so that it remains a tool for the growth and betterment of humanity rather than a threat to it.
