Understanding the applications of artificial intelligence in business

The rapid growth of AI has astonished the world, but building a business case for it requires going beyond the basics to understand how it can help organisations, whether by streamlining day-to-day operations or transforming them entirely.

As everyone has heard by now, artificial intelligence (AI) is here – and it is here to stay. Indeed, while the hype can feel overwhelming, the truth is that we are really only in the early days of understanding the impact that AI will have.

Wild predictions, from mass job losses to nascent sentience, have tended to cloud the reality of AI, a reality that, while significantly less apocalyptic, is no less dramatic.

Put simply, AI is one of the most disruptive and transformative technologies of our time. It has the potential to impact every aspect of our lives, from how we communicate and learn, to how we work and play. And there is more to come: AI will soon become a key driver of innovation and competitiveness for businesses across various sectors and industries.

Which is all very well, but the question is: what can it do for you? In order to answer that question, however, it is necessary to develop an understanding of the nature and capabilities of AI. Only then will it be possible for organisations to harness the power and benefits of AI and work out how it can be applied to their specific needs and goals.

In this first article, part of a series of five, I want to briefly tell the story of how we got here, as well as pose the question of where AI is likely to take us.

From Dartmouth to Deep Learning

As long as machines have existed, humans have been putting them to work doing things that, previously, we had to do for ourselves. It’s little wonder then that computers, when they arrived, seemed to point inexorably toward a day when machines would think. The dream stayed with us as we progressed from mechanical calculating devices to analogue electronic and, finally, digital computers, but cognition always remained the next step. It seemed that thinking machines were forever on the horizon, just out of reach.

As a technology, AI had its genesis at Dartmouth College in New Hampshire in the 1950s. It was there and then that cognitive scientists Marvin Minsky and John McCarthy, along with mathematician Claude Shannon and computer scientist Nathaniel Rochester, organised the ‘Summer Research Project on Artificial Intelligence’, the first ever AI workshop.

The group’s proposals were anything but modest. McCarthy, for instance, is known for his statement: “Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions, and concepts, solve kinds of problems now reserved for humans, and improve themselves”.

Despite the confidence, undoubtedly buoyed by a century of impressive technological developments, progress was slow. Certainly, Minsky’s 1967 claim that the “problem of creating ‘artificial intelligence’ will substantially be solved” inside a generation proved overly optimistic.

And yet, AI did arrive – and not in late 2022 when OpenAI unleashed ChatGPT to a stunned public.

Interest – and investment – in AI ebbed and flowed over the decades, with particularly fallow periods coming to be known as ‘AI winters’. In fact, such was the disappointment at AI’s failure to deliver that, in the 1980s, some businesses and researchers working in the field attempted to rebrand their work as ‘expert systems’, fearing that using the term AI led to funding droughts.

Work continued nonetheless, both in the lab and in business. Notable milestones included IBM’s Deep Blue defeating world chess champion Garry Kasparov in a six-game match in 1997, demonstrating the power of AI in game-playing. In 2012, AlexNet, a deep convolutional neural network, won the ImageNet Large Scale Visual Recognition Challenge, achieving a breakthrough in computer vision and deep learning.

Skip forward a decade and AI is neither a toy for playing games, nor is it confined to the university research lab.

Today, AI is everywhere. It powers our smartphones, it is used in e-commerce and information security, and it is our first point of contact for online customer services. It is also at the point where it can enable new applications and solutions that were previously unimaginable or impractical, such as virtual assistants, facial recognition, natural language generation and more.

Its impact is already widely felt, and it is only going to increase: AI is now transforming a range of domains and industries, from healthcare to education and from finance to manufacturing.

What are the Types of AI?

There are three key types of AI today: machine learning (ML), deep learning and generative AI.

  • ML is a form of statistical analysis in which algorithms use historical data as input to predict new output values. This allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. ML is used in a wide range of applications, including recommendation engines, fraud detection, spam filtering, malware detection, process automation (including tasks such as customer service and inventory management), and predictive maintenance. A minimal code sketch follows this list.
  • Deep learning is a subset of machine learning built on artificial neural networks: systems inspired by the human brain and composed of layers of interconnected nodes that learn to recognise patterns in data. Deep learning is used for applications including image recognition, natural language processing (such as translating or summarising text), speech recognition, medical diagnosis and financial trading.
  • The newest form of AI, and the one with the greatest potential to transform a wide range of business applications, processes and activities, is generative AI. Generative AI is a branch of artificial intelligence that creates, or ‘generates’, new and original content, such as images, text, music, or code. Put simply, generative AI uses algorithms that learn from data and then generate outputs that are not part of the existing data.
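
To make the machine-learning bullet concrete, here is a minimal sketch, using scikit-learn and invented fraud-detection data, of an algorithm learning from historical inputs to predict new output values:

```python
# A minimal illustrative sketch: the model learns from historical data
# (features and known outcomes) and predicts values for unseen inputs.
# The data below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Historical observations: [transaction amount, hour of day] -> fraud flag
X = [[120, 14], [9800, 3], [45, 11], [7600, 2], [60, 16], [8900, 4]]
y = [0, 1, 0, 1, 0, 1]  # 0 = legitimate, 1 = fraudulent

model = DecisionTreeClassifier().fit(X, y)  # learn rules from historical data

# Predict an outcome for a transaction the model has never seen
print(model.predict([[9200, 2]]))  # -> [1], i.e. flagged as likely fraudulent
```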

Generative AI has already been put to use for a wide range of purposes, including translation, artistic expression and web searching. For example, Microsoft has added OpenAI’s ChatGPT technology to its Bing search engine, pointing to an entirely new way to access and sort information online.

This is not the only application of generative AI, though. Microsoft has used the same technology to develop Microsoft 365 Copilot, an intelligent assistant for workplace activities. In addition, its launch of Azure OpenAI has allowed developers (including us here at TEKenable) to develop AI solutions for the business market, as well as integrate with tools such as GitHub Copilot and Power BI.
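
As an illustration of how developers typically consume such services, here is a minimal sketch using OpenAI’s Python client (v1+); the model name and prompt are placeholders, and an Azure OpenAI deployment would swap in the AzureOpenAI client with its own endpoint and credentials:

```python
# A minimal sketch of calling a generative AI model via OpenAI's Python
# client. The model name and prompt are placeholders chosen for illustration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful business assistant."},
        {"role": "user", "content": "Summarise the benefits of AI for retail "
                                    "in three bullet points."},
    ],
)

print(response.choices[0].message.content)  # the generated text
```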

Why you need to level up

Clearly we live in exciting times: AI is not only changing the world around us but also creating new opportunities and challenges for businesses.

On a macroeconomic scale, the significance of AI appears clear. According to a report by McKinsey, AI could potentially deliver “16% higher cumulative GDP”. And that was in 2018, before we saw the explosion of generative AI applications in 2022. This year, Goldman Sachs predicted AI would “drive a 7% (or almost US$7 trillion) increase in global GDP and lift productivity growth by 1.5 percentage points over a 10-year period”.

Similarly, in June, the Official Monetary and Financial Institutions Forum (OMFIF) think-tank, while warning of potential negative effects on labour bargaining power, said the wider impact of AI was “potentially significant” and could result in “improved productivity growth after decades of sluggishness”.

What this means for individual businesses is less clear, of course. All that can be said with certainty is that AI will give some businesses an edge over others. What precise form this takes is no small question, however.

In order to capture value and gain a competitive edge in the market, businesses need to go beyond the basics of AI and build an understanding of how it can help them achieve their objectives and solve their problems. 

Needless to say, this is not a one-size-fits-all situation, but in all cases it will require a strategic approach that considers the following aspects:

  • The business case: Organisations need to identify specific use cases and applications of AI that are relevant to their goals. They need to assess the feasibility and viability of implementing AI solutions based on data availability and quality, their technical capabilities and those of their partners, ethical standards, legal compliance, organisational culture, and expected return on investment.
  • The implementation: Organisations need to develop AI solutions that are effective and efficient for these use cases. They need to choose the right type and form of AI, the right tools and platforms for deployment (e.g. cloud services vs. open source frameworks), and the right methods and processes.
  • The impact: Organisations need to be able to measure and evaluate the impact of AI on their performance and outcomes. Key metrics and indicators (such as accuracy, speed, cost, revenue, customer satisfaction, and employee engagement) must be defined in order to assess success and progress. They also need to monitor and manage the potential risks and challenges (such as bias, privacy, security, transparency, accountability, or regulation) that may arise from using AI. A minimal sketch of tracking such metrics follows this list.
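
By way of illustration, here is a minimal sketch, with invented data, of tracking one agreed success metric (prediction accuracy) for a deployed AI solution; metrics such as cost, revenue and customer satisfaction would typically come from business telemetry instead:

```python
# A minimal sketch, with invented data, of evaluating a deployed AI solution
# against one agreed success metric: prediction accuracy.
from sklearn.metrics import accuracy_score

y_true = [0, 1, 0, 1, 1, 0, 1, 0]  # actual outcomes observed in production
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]  # what the model predicted (invented data)

# 6 of 8 predictions match the observed outcomes
print(f"Accuracy: {accuracy_score(y_true, y_pred):.0%}")  # -> Accuracy: 75%
```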

Rapid Development

One thing that businesses need to get to grips with is the pace of change AI is delivering, not least in relation to its own capabilities.

While AI has not exceeded general human capabilities across a broad range of tasks, it has already exceeded them in specific tasks, such as image recognition, language understanding and text generation.

We can expect it to keep growing, too. AI’s rate of growth has surpassed Moore’s law, which observed that the number of transistors in an integrated circuit doubles about every two years, thus making computers faster and cheaper over time. According to a report by OpenAI, the amount of computational power (‘compute’) used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time since 2012.
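
To put that figure in perspective, here is a quick back-of-the-envelope comparison of the two doubling rates over a single two-year period:

```python
# Back-of-the-envelope comparison of the two doubling rates over 24 months.
months = 24

moore_growth = 2 ** (months / 24)     # Moore's law: doubling every ~24 months
compute_growth = 2 ** (months / 3.4)  # AI training compute: doubling every 3.4 months

print(f"Moore's law over {months} months:         ~{moore_growth:.0f}x")    # ~2x
print(f"AI training compute over {months} months: ~{compute_growth:.0f}x")  # ~133x
```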

Nor should we expect that pace to slacken. With semiconductor manufacturers now designing chips specifically for AI workloads, these trends indicate that AI is advancing at an unprecedented pace and scale.

This will create even greater possibilities and opportunities for businesses to leverage AI for their benefit. However, it will also pose fresh challenges and questions about how to cope with the complexity and uncertainty of AI, and how to keep up with the fast-changing landscape.

Over the next four articles, I will look in detail at the issues that businesses need to address as they adopt and apply AI in their operations, starting with the next article in this series, in which I will take a deep dive into generative AI.
