Merge pull request #58 from nhsengland/aib-improve-tag-consistency
unifying tag formats, and combining duplicate projects
amaiaita authored Feb 9, 2024
2 parents e4aa020 + 45a0dd7 commit f6bcc23
Showing 53 changed files with 1,045 additions and 1,098 deletions.
2 changes: 2 additions & 0 deletions docs/our_work/Publications.md
@@ -2,6 +2,8 @@
layout: base
title: Connected Publications
permalink: publications.html
summary: Our work has produced a number of publications
tags: ['PUBLICATIONS']
---

List of pre-releases and publications connected to our work
72 changes: 70 additions & 2 deletions docs/our_work/adrenal-lesions.md
@@ -3,14 +3,82 @@ title: 'Using deep learning to detect adrenal lesions in CT scans'
summary: 'This project explored whether applying AI and deep learning can augment the detection of adrenal incidentalomas in patients’ CT scans.'
category: 'Projects'
origin: 'Skunkworks'
tags: ['classification','lesion detection','vision AI']
tags: ['CLASSIFICATION','LESION DETECTION','COMPUTER VISION','AI']
---

<figure markdown>
![Adrenal flow of transfer](../images/Flow_of_transfer.width-800.png)
</figure>


Many cases of adrenal lesions, known as adrenal incidentalomas, are discovered incidentally on CT scans performed for other medical conditions. These lesions can be malignant, and so early detection is crucial for patients to receive the correct treatment and allow the public health system to target resources efficiently. Traditionally, the detection of adrenal lesions on CT scans relies on manual analysis by radiologists, which can be time-consuming and unsystematic.

The main aim of this study was to examine whether or not using AI can improve the detection of adrenal incidentalomas in CT scans. Previous studies have suggested that AI has the potential to distinguish different types of adrenal lesions. In this study, we specifically focused on detecting the presence of any type of adrenal lesion in CT scans. To demonstrate this proof-of-concept, we investigated the potential of applying deep learning techniques to predict the likelihood of a CT abdominal scan presenting as ‘normal’ or ‘abnormal’, the latter implying the presence of an adrenal lesion.
## Results

# Case Study

This is a backup of the case study published [here](https://transform.england.nhs.uk/ai-lab/explore-all-resources/develop-ai/using-deep-learning-to-detect-adrenal-lesions-in-ct-scans/) on the NHS England Transformation Directorate website.

Many cases of adrenal lesions, known as adrenal incidentalomas, are discovered incidentally on CT scans performed for other medical conditions. These lesions can be malignant, and so early detection is crucial for patients to receive the correct treatment and allow the public health system to target resources efficiently. Traditionally, the detection of adrenal lesions on CT scans relies on manual analysis by radiologists, which can be time-consuming and unsystematic.


**The challenge**
Can applying AI and deep learning augment the detection of adrenal incidentalomas in patients’ CT scans?


### Overview
Autopsy studies reveal that as many as 6% of all natural deaths displayed a previously undiagnosed adrenal lesion. Such lesions are also found incidentally (and are therefore referred to as adrenal incidentalomas) in approximately 1% of chest or abdominal CT scans. These lesions affect approximately 50,000 patients annually in the United Kingdom, with significant impact on patient health: 10% to 15% of cases involve excess hormone production, and 1% to 5% of cases involve cancer.

It is a significant challenge for the health care system to, in a standardised way, promptly reassure the majority of patients, who have no abnormalities, whilst effectively focusing on those with hormone excess or cancers. Issues include over-reporting (false positives), causing patient anxiety and unnecessary investigations (wasting resources of the health care system), and under-reporting (missed cases), with potentially fatal outcomes. This has major impacts on patient well-being and clinical outcomes, as well as cost-effectiveness.

The main aim of this study was to examine whether or not using Artificial Intelligence (AI) can improve the detection of adrenal incidentalomas in CT scans. Previous studies have suggested that AI has the potential to distinguish different types of adrenal lesions. In this study, we specifically focused on detecting the presence of any type of adrenal lesion in CT scans. To demonstrate this proof-of-concept, we investigated the potential of applying deep learning techniques to predict the likelihood of a CT abdominal scan presenting as ‘normal’ or ‘abnormal’, the latter implying the presence of an adrenal lesion.

### What we did
Using the data provided by University Hospitals of North Midlands NHS Trust, we developed a 2.5D deep learning model to detect adrenal lesions in patients’ CT scans (binary classification of normal and abnormal adrenal glands). The entire dataset is completely anonymised and does not contain any personal or identifiable patient information. The only clinical information taken was the binary label for adrenal lesions (‘normal’ or ‘abnormal’) for the pseudonymised patients and their CT scans.

#### 2.5D images
A 2.5D image is a type of image that lies between a typical 2D and 3D image. It can retain some level of 3D features and can potentially be processed as a 2D image by deep learning models. A greyscale 2D image is two dimensional with a size of x × y, where x and y are the length and width of the 2D image. For a greyscale 3D image (e.g., a CT scan), with a size of x × y × n, it can be considered as a combination of a stack of n number of greyscale 2D images. In other words, a CT scan is a 3D image consisting of multiple 2D images layered on top of each other. The size of a 2.5D image is x × y × 3, and it represents a stack of 3 greyscale 2D images.

Typically, an extra dimension of pixel information is required to record and display 2D colour images in electronic systems, such as the three RGB (red, green, and blue) colour channels. This increases the size of a 2D image to x × y × 3, where the 3 represents the three RGB channels. Many commonly used families of 2D deep learning algorithms (e.g., VGG, ResNet, and EfficientNet) have taken colour images into account and can process images with the extra three channels. Taking advantage of the fact that 2D colour images and 2.5D images have the same pixel dimensions, converting our 3-dimensional CT scan data to 2.5D images allows us to apply 2D deep learning models to our images.
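
As an illustration only (not the project's released code), a 3D CT volume could be converted to 2.5D images by stacking each axial slice with its two neighbouring slices into the three channels that a 2D network expects; the function and array names below are hypothetical.

```python
import numpy as np

def volume_to_25d(volume: np.ndarray) -> np.ndarray:
    """Convert a greyscale 3D volume of shape (x, y, n) into (n - 2) 2.5D images.

    Each 2.5D image has shape (x, y, 3): an axial slice stacked with its two
    neighbouring slices, mirroring the three RGB channels expected by common
    2D CNN architectures.
    """
    x, y, n = volume.shape
    images = np.empty((n - 2, x, y, 3), dtype=volume.dtype)
    for i in range(1, n - 1):
        images[i - 1] = np.stack(
            [volume[:, :, i - 1], volume[:, :, i], volume[:, :, i + 1]], axis=-1
        )
    return images

# Example: a cropped CT scan with 60 axial slices yields 58 2.5D images.
scan = np.random.rand(224, 224, 60).astype(np.float32)
print(volume_to_25d(scan).shape)  # (58, 224, 224, 3)
```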

#### Why use a 2.5D model
Due to the intrinsic nature of CT scans (e.g., a high operating cost, a limited number of available CT scanners, and patients’ exposure to radiation), acquiring a sufficient number of CT scans to train 3D deep learning models is challenging. In many cases, the performance of 3D deep learning models is limited by small, non-diverse datasets. Training, validating, and testing a model with a small dataset has many disadvantages, for example a high risk of overfitting the training-validation set (low prediction ability on an unseen test set), and evaluation of model performance based on small-number statistics (an unrepresentative test set can make the test accuracy much lower or higher than the underlying model performance).

To overcome some of the disadvantages of training a 3D deep learning model, we took a 2.5D deep learning approach in this case study. Training the model on 2.5D images enables our deep learning model to still learn from the 3D features of the CT scans, while increasing the number of training and testing data points in this study. Moreover, we can apply 2D deep learning models to the set of 2.5D images, which allows us to use transfer learning to train our own model further on the knowledge learned by other deep learning applications (e.g., ImageNet, and the NHS AI Lab’s National COVID-19 Chest Imaging Database).

![Adrenal flow of transfer](../images/Flow_of_transfer.width-800.png)
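
A minimal transfer-learning sketch, assuming a TensorFlow/Keras setup with an ImageNet-pretrained 2D backbone (the project's actual architecture, weights and hyperparameters may differ):

```python
import tensorflow as tf

# 2.5D images share the (x, y, 3) shape of RGB images, so an ImageNet-pretrained
# 2D backbone can be reused directly.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
backbone.trainable = False  # first train only the new classification head

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. abnormal 2.5D image
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# train_ds / val_ds would be hypothetical tf.data.Dataset objects of (2.5D image, label) pairs:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```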

#### Classification of 3D CT scans
To perform the binary classification on the overall CT scan (instead of a single 2.5D image), the classification results from each of the individual 2.5D images that make up the CT scan are considered.

To connect the classification predictions for the 2.5D images to the CT scan, we introduce an operating value at which our model provides the final classification. A CT scan is classified as normal if the number of its 2.5D images classified as abnormal is lower than this operating value. For example, if the operating value is defined to be X, a CT scan will be considered normal if fewer than X of its 2.5D images are classified as abnormal by our model.
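
To make the scan-level aggregation concrete, here is a small illustrative sketch (the function name, probability threshold and operating value are hypothetical):

```python
import numpy as np

def classify_scan(image_probs: np.ndarray, operating_value: int, threshold: float = 0.5) -> str:
    """Aggregate per-2.5D-image abnormality probabilities into a scan-level label.

    A scan is reported 'normal' when fewer than `operating_value` of its 2.5D
    images are individually classified as abnormal, and 'abnormal' otherwise.
    """
    n_abnormal = int(np.sum(image_probs >= threshold))
    return "normal" if n_abnormal < operating_value else "abnormal"

# Example: 3 of 58 images flagged as abnormal, with an operating value of 5.
probs = np.zeros(58)
probs[[10, 11, 12]] = 0.9
print(classify_scan(probs, operating_value=5))  # normal
```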

#### Processing the CT scans to focus on the adrenal glands

To prepare the CT scans for this case study (with the region of interest focused on the adrenal glands), we also developed a manual 3D cropping tool for CT scans. The cropping was applied in all three dimensions, comprising a 1D crop to select the appropriate axial slices and a 2D crop on each axial slice. The final cropped 3D image covered the whole adrenal gland on both sides, with some extra margin on each side.

![Adrenal cropping](../images/Cropping_process.width-800.png)
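
As a rough sketch of the cropping step (the project's tool is manual and interactive; the ranges below are purely illustrative), assuming the volume is stored as a (rows, cols, slices) NumPy array:

```python
import numpy as np

def crop_to_roi(volume: np.ndarray,
                axial_range: tuple,
                row_range: tuple,
                col_range: tuple) -> np.ndarray:
    """Crop a 3D CT volume (rows, cols, slices) to a region of interest.

    `axial_range` is the 1D crop selecting the axial slices; `row_range` and
    `col_range` apply the 2D crop within each selected slice. In practice the
    ranges are chosen so both adrenal glands sit inside the crop with a margin.
    """
    r0, r1 = row_range
    c0, c1 = col_range
    z0, z1 = axial_range
    return volume[r0:r1, c0:c1, z0:z1]

scan = np.random.rand(512, 512, 300).astype(np.float32)
roi = crop_to_roi(scan, axial_range=(140, 200), row_range=(180, 340), col_range=(120, 392))
print(roi.shape)  # (160, 272, 60)
```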

### Outcomes and lessons learned

The resulting code, [released as open source on our Github](https://github.com/nhsx/skunkworks-adrenal-lesions-detection) (available to anyone to re-use), enables users to:

- Process CT scans to focus on the region of interest (e.g., adrenal glands),
- Transform 3D CT scans to sets of 2.5D images,
- Train a deep learning model with the 2.5D images for adrenal lesion detection (classification: normal vs. abnormal),
- Evaluate the trained deep learning model on an independent test set.

This proof-of-concept model demonstrates the ability and potential of applying such deep learning techniques in the detection of adrenal lesions on CT scans. It also shows an opportunity to detect adrenal incidentalomas using deep learning.

> An AI solution will allow for lesions to be detected more systematically and flagged for the reporting radiologist. In addition to enhanced patient safety, through minimising missed cases and variability in reporting, this is likely to be a cost-effective solution, saving clinician time.
– Professor Fahmy Hanna, Professor of Endocrinology and Metabolism, Keele Medical School and University Hospitals of North Midlands NHS Trust


### Who was involved?

This project was a collaboration between the NHS AI Lab Skunkworks, within the Transformation Directorate at NHS England and NHS Improvement, and University Hospitals of North Midlands NHS Trust.

## Links

Output|Link
---|---
86 changes: 86 additions & 0 deletions docs/our_work/ai-deep-dive.md
@@ -3,8 +3,11 @@ title: 'AI Deep Dive'
summary: 'The NHS AI Lab Skunkworks team have developed and delivered a series of workshops to improve confidence working with AI.'
category: 'Playbooks'
origin: 'Skunkworks'
tags: ['AI', 'GUIDANCE', 'BEST PRACTICE']
---

## Playbook

### Motivation

A series of practical workshops designed to increase confidence, trust and capability of implementing AI within the NHS and Social Care sector, based on the experience of the AI Lab Skunkworks team.
@@ -144,5 +147,88 @@ To understand the next steps in launching your AI Experiment

If you'd like to arrange an AI Deep Dive with your team, please [get in touch](mailto:[email protected]?subject=AI%20Deep%20Dive%20enquiry).

# Case Study

## Info
This is a backup of the case study published [here](https://transform.england.nhs.uk/ai-lab/explore-all-resources/understand-ai/sharing-ai-skills-and-experience-through-deep-dive-workshops/) on the NHS England Transformation Directorate website.

### Case Study Overview
The NHS AI Lab Skunkworks team provides public sector health and social care organisations with artificial intelligence (AI) support and technical expertise. The team of data scientists and AI project specialists has been helping others with their explorations with AI solutions for a range of problems, [from supporting hospital bed allocation to detecting CT scan anomalies](https://transform.england.nhs.uk/ai-lab/ai-lab-programmes/skunkworks/ai-skunkworks-projects/).

It became clear from engaging with organisations in these projects that the NHS AI Lab has an important role to play in increasing the trust and confidence of healthcare staff in AI tools, both in their creation and in their everyday use. By exploring the possibilities for AI together with the organisations who will use them, the AI Skunkworks team aims to bring some clarity to the potential of AI and diminish some of the hype.

### The challenge
Despite increasing interest in the use of AI technologies within the NHS, it is difficult for busy teams to develop the skills and experience necessary to start new experimentation with AI or manage a successful AI project. Even when a potential use for AI is identified, ideas are often thwarted by the complexity of using AI technologies, lack of suitable data, concerns about patient data security or the burden of achieving regulatory approvals of AI as a medical device.

The Digital team at University Hospital Southampton (UHS) approached us with a request to explore the ethical and safety considerations of applying AI in their work. It is especially critical with AI in health and care that the people affected by its use are confident the tools are robust, and any support for decisions is fair for all patients.

UHS Digital has been exploring different applications of AI in healthcare for some time. Previously, UHS had applied to the AI Lab Skunkworks Team to see if AI could assist in prioritising patients for endoscopy procedures. With a clear interest in the topic, UHS wanted to learn more about AI key terms and fundamentals. They also wanted to increase confidence in the organisation with regard to identifying the kind of problems that AI could support, and confirm the practicalities and considerations when launching and running an AI experiment.

The Skunkworks programme aims to test the development and, when appropriate, the adoption of AI technologies in all areas of health and care. In addition to supporting the development of proof-of-concept AI solutions, and providing [open source code on Github](https://nhsx.github.io/skunkworks) for others to re-purpose, we are also providing teams like UHS with a series of AI deep dive workshops.

> This was a great opportunity to get a frontline NHS IT team thinking about applied AI inside the system. It has certainly served to inspire the team to try new things.
– Matt Stammers, Clinical Lead, University Hospital Southampton Data Science

### AI deep dive workshops
The workshops provide organisations with the relevant knowledge and tools to understand how to safely launch an AI experiment in healthcare. We provide guidance on identifying a potential real application of AI and use this idea to create a problem statement and identify AI solutions. We also consider the practicalities of running the experiment and the ethical and information governance considerations that are so vital for producing safe and effective technologies.

No previous experience or knowledge of AI is necessary, as the series of workshops provides an introduction to the key terms, types and applications of AI in healthcare. Hence, this workshop series is open to anyone who is interested in how AI can support their organisation, including clinicians, technology teams, operations teams and senior stakeholders.

The 5-part series of virtual and interactive workshops covers:

- an introduction to AI and healthcare case studies
- how to identify potential applications of AI and write up a patient and user focused problem statement
- practicalities when starting and running an AI experiment, including who needs to be involved in an AI experiment for example ethics, information governance and medical device regulation
- agile ways of working to ensure the problem and the solution are always patient and user focused
- innovation methodologies, for example the Amazon ‘working backward’ press release

### What we did
Having piloted the deep dive process with colleagues across the NHS Transformation Directorate, we arranged calls with the Southampton team to understand their needs.

We began by establishing a workshop group of up to 12 participants who would reflect the likely members of staff to be involved in running a data-driven digital transformation initiative. UHS provided a diverse group of participants from teams including electronic patient record (EPR), business intelligence, database/IT, APEX development, clinicians and research data science.

In particular, the group wanted support with:

- being more confident in discussion about AI in healthcare
- embracing the idea of experimentation with AI in healthcare
- understanding the practical steps required to start experimenting
- creating a detailed plan for an AI project.

We set up weekly workshops, delivered online over a period of 5 weeks. The workshops looked to identify one problem that was worked on through the series.

The running order for this weekly series was:

- Workshop 1: AI fundamentals - establish a baseline understanding of AI and the art of the possible.
- Workshop 2: Problem Discovery - develop skills to identify and communicate problems.
- Workshop 3: Solution Discovery - identify solutions and potential AI technologies to solve problems.
- Workshop 4: Practicalities - understand the practical aspects of AI projects.
- Workshop 5: Launching your AI experiment - understand the next steps in launching an AI project.

You can read more about the [deep dive workshop agenda on our Github website](https://nhsx.github.io/skunkworks/ai-deep-dive).

We involved the group in interactive elements using tools such as Mentimeter and Google Jamboard, allowing the groups to collaborate, share ideas and aid discussions.

We included a number of innovation approaches such as the Amazon ‘working backward’ press release product development approach, which helps to imagine what the desired end result will look like. We also introduced the [lean canvas method](https://leanstack.com/lean-canvas) to clearly capture what the problem and potential solution could be, including identifying alternative solutions that may already exist.


### Outcomes and lessons learned
The workshops provided valuable insights to the NHS AI Lab Skunkworks team about the importance of group engagement. Having a diverse AI project team that includes people from technical, governance and frontline backgrounds is important to ensure you fully understand the problem you’re trying to solve.

The experience also demonstrated the value of discussing topics such as “build or buy?” The team at UHS were keen to invest wisely in any AI developments and to learn how to find out about existing tools. With so many AI applications already in existence, there may be tools you can use “off the shelf” or valuable lessons to learn from previous investigations into similar issues.

The workshops gave us a good opportunity to stress the importance of using AI safely and ethically. The data you use and the testing and governance processes you apply must all result in AI that benefits all patients safely and ethically.

As a result of the deep dive sessions:

- 67% of participants felt more confident in their baseline understanding of AI and Machine Learning.
- 71% of participants felt more confident in identifying potential solutions.
- 60% of participants felt more confident in identifying the data needs of an AI project.
- 75% of participants felt more confident conducting business and technical due diligence.

The team at University Hospital Southampton also reported:

- a need for additional support when identifying and launching AI experiments
- the importance of diverse groups who represent different roles and teams in order to help the group explore the problem from different perspectives.

[comment]: <> (The below header stops the title from being rendered (as mkdocs adds it to the page from the "title" attribute) - this way we can add it in the main.html, along with the summary.)
#
