categories:
tags:
- blog
- artificial intelligence
- events
author: gosko
---

Last week I was lucky enough to head along with nearly 100 other people to AI Live, an event aiming to bring together thinkers and doers involved in the world of Artificial Intelligence. Run by Chief Disruptor, this London event took us on a journey from 1,000-year nuclear strategies to how AI could be used to better manage office space, via a detour into the benefits of baking bread at 3.00pm in Waitrose.

Clearly there was a huge amount of content, and some really interesting topics and people, and I’ve been reflecting on the themes, ideas and issues that jumped out at me.

### AI isn’t coming for you

In a fascinating slot, the Nuclear Decommissioning Authority's Carl Dalby responded to fears that AI is going to put us all out of work. So many people are sharing scary stories about how, within a few years, most of us will be replaced by AI, when the truth is far less intimidating.

It feels very much like this sort of doomsday message may sell copies of the red-top tabloids, but it is very wide of the mark. AI, at least as it currently stands, is nowhere near ready to do any of the things the scaremongers attribute to it. As Carl said, AI is far less Artificial Intelligence and much closer to Augmented Intelligence. The AI tools being built and rolled out now are great at augmenting the skills, knowledge and experience of staff in all sorts of roles, but they remain tools to be used rather than some sort of magical answer to unlock and deliver the future. This is a topic that some of my Scott Logic colleagues addressed in a recent episode of the [Beyond the Hype podcast](https://blog.scottlogic.com/2023/05/02/beyond-the-hype-is-generative-ai-coming-for-programming-jobs.html).

AI simply doesn’t think for itself. It needs training and teaching and work, and still makes mistakes all the time. Everything spat out by AI still needs a skilled and experienced human brain to provide a sense check and a sniff test to make sure it’s actually usable and true, and this will likely be the case for many years to come. Plus, ChatGPT is rubbish at building solid brickwork and sorting out my plumbing.

### Ignore the AI hype - it’s all about data

AI is huge news at the moment. From self-driving cars, to ChatGPT, to Elon Musk telling everyone that we need to pause AI development (probably just so his own companies can catch up), AI is all over the place and grabbing enough headlines that decision makers are taking notice. AI projects are getting more funding and support than ever before, and more than most digital projects have in the past.

But, when it’s broken down, the work being done isn’t really AI at all; it’s all about data. Whether it’s data engineering, science, cleansing, reporting, linking or anything else, the actual work is about how we can identify, collate, cleanse, combine and use data to drive better decisions. Calling it “AI” is unlocking budgets and senior backing, but in reality the overwhelming majority of work being undertaken under that banner is nothing of the sort. It is, however, helping to highlight and address challenges such as how data can securely and legally be moved and used around the globe (a particular challenge facing people like David McHugh).

It may feel a little Machiavellian, but it could well be worth exploring how this hype train could be used to unlock projects and programmes that have been neglected or deprioritised for far too long. Are there ways of fixing our data foundations now by talking about AI, knowing that doing the work will massively help when real AI rolls round in a few years’ time?

### 85% of AI projects fail

This was a shocking statistic for me. How can this be? How is it allowed?! If you translated that success rate to anywhere else in an organisation, it would be under intense scrutiny and under pressure to buck its ideas up and start showing results.

This is where the newness of AI comes into play. It is an emerging field, full of unknowns and without centuries of prior work to look back on and build from. Yes, AI has sort-of been around since John McCarthy’s work in the mid-50s, but in reality it’s only with recent advances in technology that we’ve been able to deliver on the ambitions of our forebears. This means we’re making things up as we go along. We’re doing something that’s only ever been dreamed of, and that means we’re going to make a lot of mistakes along the way.

Failure is multi-faceted, of course. As James Tomkins from the Met Office pointed out, even in those failures we are learning huge amounts, which makes success more likely next time. The scale of ambition he and his team have is breathtaking, and they’ve got plans to test things out which could change the world (as grandiose as that sounds). But they don’t know that it will definitely work, or even whether, if it does, it will be a good thing. For me this shows immense bravery and is really quite exciting.

### So, what next?

What next for AI? What will it hold for us over the coming months, years, decades and centuries? Will we see chatbots taking all of our jobs, leaving us to live like the WALL-E-style people on hover chairs, with a life free of work and responsibilities? Will it be the end of civilisation in a Terminator-style apocalyptic scenario? Or will it actually end up simply providing some more powerful tools that help us do our jobs better? And what data work needs doing before anything can truly be classed as AI?

And how will legislation support or hinder this journey? What do we need to consider building into our AI projects now in order to prepare for inevitable legislation? What role does the public sector play, both in making the most of this new technology as a step change in our efforts to improve people’s lives, and in making sure it is used ethically, for good and not to do harm?

It’s sparking a whole load of thinking for me and tonnes of discussions back at Scott Logic Towers, and I suspect I may end up tapping out a load more random thoughts over the coming weeks.
