Artificial Intelligence: how to bring it into your strategy without the risks


Technology is evolving at an immense pace compared with the past. Self-driving cars, instant translation, mobile phones that do whatever we ask with a simple voice command, autonomous things such as drones, robots, ships and tools, and human augmentation are just some examples of Artificial Intelligence (AI) at work.

To remain competitive, organisations often adopt new technologies very quickly, without fully understanding the risks involved.

It is essential to encourage organisations to learn more about AI – how to use it within the company and how to get the most out of it – as well as the risks it carries when it is not used properly.

To make sure the risks associated with AI are well understood, the World Economic Forum, together with Accenture’s Center for the Fourth Industrial Revolution Network Fellows, BBVA, IBM and Suntory Holdings, has spent the past year working with more than 100 companies and technology experts to create the Empowering AI Toolkit.

The toolkit is a framework for understanding and evaluating the risks of Artificial Intelligence against a company’s priorities and objectives. It does not offer ready-made solutions or alternatives, but it helps senior management understand how to use Artificial Intelligence more effectively. This allows decisions to be taken faster and reduces the need to hire external consultants, which can be a significant additional cost.

The toolkit is organised into four pillars that analyse the impact of AI on technology, organisation, brand and governance. The framework helps businesses decide which AI solutions to adopt in their marketing strategies and understand the full potential of AI to make a business even more successful.
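As a purely illustrative aid – not part of the WEF toolkit itself – the four-pillar idea can be sketched as a simple checklist that a team might walk through when screening an AI initiative. The pillar names follow the article; the questions and the scoring are hypothetical assumptions:

```python
# Hypothetical sketch of a four-pillar AI risk screening checklist.
# The pillar names (technology, organisation, brand, governance) follow the
# article; the questions and the "review needed" rule are illustrative only.

PILLARS = {
    "technology": [
        "Is the model's behaviour explainable to non-developers?",
        "Has the training data been audited for quality and bias?",
    ],
    "organisation": [
        "Is there a named owner accountable for this AI system?",
        "Are staff trained on its limitations?",
    ],
    "brand": [
        "Could a failure of this system damage customer trust?",
        "Is its use of personal data clearly communicated?",
    ],
    "governance": [
        "Is there a documented escalation path when the system errs?",
        "Does the system comply with applicable data-protection rules?",
    ],
}

def screen_initiative(answers: dict[str, list[bool]]) -> dict[str, float]:
    """Return, per pillar, the share of questions answered 'yes'."""
    scores = {}
    for pillar, questions in PILLARS.items():
        replies = answers.get(pillar, [])
        scores[pillar] = sum(replies) / len(questions) if questions else 0.0
    return scores

if __name__ == "__main__":
    example = {
        "technology": [True, False],
        "organisation": [True, True],
        "brand": [True, True],
        "governance": [False, False],
    }
    for pillar, score in screen_initiative(example).items():
        flag = "ok" if score == 1.0 else "review needed"
        print(f"{pillar:12s} {score:.0%}  ({flag})")
```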

Artificial Intelligence promises to help solve some of the most urgent issues our society will face in the near future: guaranteeing fairer trade, reducing consumer waste, forecasting natural disasters, speeding up diagnosis for cancer patients, and more. Yet a shadow hangs over it because of the recent scandals involving privacy violations and the uncontrolled use of data. Situations like these can severely damage people’s trust and have a very negative impact.

The 2018 crisis involving Facebook and Cambridge Analytica, for example, made all of us aware of the risks of personal data being handled by private organisations. The London-based analytics company, which most of us would never have heard of had it not been for the scandal, managed to collect data on 50 million users of the social network without the explicit consent of the people that data belonged to. The affair caused a huge public outcry, and Facebook lost $50 billion in value in just one week.

Adding to the doubts, there have also been high-profile reports that some AI systems used by governments and businesses were never thoroughly vetted and ended up negatively influencing decisions people made about their lives. One case in particular happened at Amazon: its own AI-based recruiting system skewed the hiring process, generating bias around the ethnicity and gender of candidates.
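To make the idea of detecting such bias concrete, here is a minimal, hypothetical sketch – not Amazon’s system, and not described in the article – of the kind of check an organisation could run on its own screening tool: comparing selection rates across candidate groups through a disparate-impact ratio, with the common 0.8 rule of thumb used only as an illustrative threshold:

```python
# Hypothetical bias check: compare selection rates across candidate groups.
# The data, group labels and the 0.8 threshold (the "four-fifths" rule of
# thumb) are illustrative assumptions, not taken from the article.

from collections import Counter

def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, was_selected) pairs -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(records: list[tuple[str, bool]]) -> float:
    """Ratio of the lowest to the highest group selection rate (0..1)."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Toy screening outcomes: (candidate group, passed automated screen)
    outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
             + [("B", True)] * 20 + [("B", False)] * 80
    ratio = disparate_impact(outcomes)
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # illustrative threshold only
        print("warning: screening outcomes differ sharply between groups")
```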

There is some awareness that new technologies can behave unpredictably when they are not used with due care, but it can also be very hard to foresee where and when a mistake will occur.

“It cannot happen to us” is still a widespread attitude, even when all the warning signs are there. Genesys, a customer experience company, recently published a survey of 5,000 employers in six countries about AI, and the results show that 54% of them were quite “relaxed about a non-ethical use of Artificial Intelligence in their companies”.

Considering the enormous range of possibilities available, and the fact that it keeps expanding, AI needs to be studied thoroughly; after all, we rely on far more of these technological solutions every day than we might realise.

In recent years many companies have created their own internal AI working groups, ethics commissions and special committees to evaluate policies on its use, the possible risks and the strategies to adopt. A recent KPMG survey found that 44% of businesses have an ethical code for Artificial Intelligence and about 30% are working to put one in place. AI is an emerging technology and the risks are everywhere: every company should have a roadmap in place.

One of the biggest risks for businesses today is the use of “Inscrutable Black Box Algorithms”, as the World Economic Forum calls them. Most algorithms work in a way that only the developers who created them understand. Such algorithms are often treated as highly valuable intellectual property, which reinforces the urge to keep their inner workings secret, and they therefore escape control and governance.
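One partial, widely used countermeasure is model-agnostic inspection. The sketch below – a hypothetical example, not taken from the WEF material – uses scikit-learn’s permutation importance on a synthetic dataset to show which inputs a trained black-box model actually relies on, the kind of evidence a governance committee could reasonably ask for:

```python
# Minimal sketch: inspecting a black-box model with permutation importance.
# The synthetic dataset and the RandomForest model are illustrative
# assumptions; the point is that even an opaque model can be asked
# "which inputs drive your decisions?" in a model-agnostic way.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a proprietary decision system's inputs.
X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {importance:.3f}")
```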

To help deal with such issues, some non-profit organisations have been created, such as the Partnership on AI. It was founded by technology giants including Amazon, DeepMind, Facebook, Google, IBM and Microsoft to develop best practices that can ensure AI systems actually benefit society.

Last year, the Belfer Center for Science and International Affairs at Harvard Kennedy School hosted the inaugural meeting on the “Responsible Use of Artificial Intelligence”. Representatives of governments and businesses, together with others from academia and civil society, took part in the event to analyse policies on how AI should be used.

Despite all these efforts, the widespread availability and rapid evolution of Artificial Intelligence make it genuinely difficult to set effective rules and identify possible risks. Regulations should be flexible to change and easily accessible. The new Empowering AI Toolkit by the WEF is available for free, and it can be of immediate value worldwide to those who are about to define their own policies on AI use.

