If you trust a machine to make your decisions, you’d better be prepared, writes Nigel Phair.

Artificial Intelligence (AI) is the science and engineering of making intelligent machines, especially intelligent computer programs. Many movies have sensationalised the topic, but AI is real, and it’s already part of our lives.

How prepared are we for the possible consequences of relying on AI to make our decisions for us? Because to a certain extent, that’s just what AI does.

As a director, you need to ask yourself some tough questions about the risks this poses to your company.

What is AI?

Artificial intelligence is a broad term covering many approaches to making computers mimic how humans think and work on problems that normally require human intellect. A key goal of AI is to develop systems capable of operating, at least to some degree, independently of human control.

For example, DeepMind, owned by Google’s parent company, Alphabet, has taken AI a step further by developing a neural network that can not only learn, but also remember and recall facts. The system can then make choices based on memory and logic.

In a paper published in October 2016, DeepMind reported that the system could navigate the London Underground without human input.

Making mistakes faster than ever before

The biggest risks associated with AI are:

  • Although AI may speed up our processes and increase productivity, it will also speed up the negatives associated with business, exposing or magnifying errors and losses.
  • There is potential for mismanagement of AI by human agents.
  • AI may malfunction, with negative consequences.
  • The systems are new and continually evolving, so it is almost impossible for computer engineers to keep up with understanding and managing the risks.

What might an AI that can make its own decisions do to your business?

Let’s not lose sight of the humans. Many of your clients will be wary of AI, knowing little about it beyond the Hollywood depictions, and there’s every possibility they won’t accept non-human contact. While AI such as Apple’s Siri has captured the public imagination, we also know that robotic voices at the end of the telephone support line are unpopular.

How AI will challenge the board

Boards may be comfortable with traditional areas such as human resources or finance, but few directors are as comfortable dealing with technology. Understanding AI systems is yet another challenge, as is preparing for possible attacks or malfunctions without knowing how they might occur.

How can a board manage a risk when they don’t understand enough about it to pinpoint danger zones?

The other question is who holds ultimate responsibility for AI.

“Paralysis occurs because AI does not fit neatly into any specific area of responsibility; it is not exclusively for the chief information officer, the chief marketing officer or even the chief operating officer.”
Josh Sutton, global head of AI practice, Publicis.Sapient

Organisations that are “agile” can adapt quickly, yet a 2015 McKinsey and Co survey of 1,000 organisations found that only 20 percent were truly agile.

Perhaps the board’s focus should be on building agile companies that can incorporate and take advantage of AI and other technology, rather than on the technology itself.

Preparing directors for AI

In 2014, Hong Kong-based venture capital firm Deep Knowledge Ventures responded to the AI challenge by appointing an algorithm to its board of directors. The AI, named Vital, was tasked with sifting through large amounts of data to identify the best vehicles for VC funding.

While that could be the way of the future, plenty of groundwork needs to be done first.

To minimise risk, directors need to consider the composition and structure of the board itself, to ensure it has the skills and competency to deal with AI and other technological advances.

The application of AI is broad, so it may take two or three specialist directors to fill the knowledge gap.

It’s important to understand, however, that these directors must be involved in the regular work of the board, so their expertise is applied at the right time and place. By working together, the directors can minimise the risk to people, jobs and systems.

Never forget that AI must integrate with the organisation’s overall business strategy, rather than interfere with it. The responsibility will fall to directors to develop a sound strategy for incorporating AI, so that it enhances the organisation’s ability to reach its overall goals.

The strategy must also include a risk management plan, to identify and minimise risk.

It may be that the board needs to create a sub-committee with a special focus on technology and AI. This will enable the board to be alerted to critical issues early enough to act on them, and will help embed AI into the organisation’s processes with minimal disruption.

However you approach the topic, it’s critical that every board member is able to understand the possible impact of AI on your organisation, its productivity and its market competitiveness.

Without this, the board will be ineffective, unable to pinpoint the questions, let alone answer them.

Updated 26 October 2017.