Merge pull request #62 from nhsengland/hs_fix_spelling
HS Spellcheck and markdownlint
amaiaita authored Feb 14, 2024
2 parents 92d0834 + 7383f51 commit 494a6da
Showing 52 changed files with 508 additions and 451 deletions.
23 changes: 12 additions & 11 deletions CONTRIBUTE.md
Original file line number Diff line number Diff line change
@@ -17,7 +17,7 @@ If you want to contribute to our resources:
5. Check how your change looks on our website by hosting the website locally (follow [the steps below](#contribute-to-nhs-england-data-science-website) on how to do this)
6. Push to your fork and [submit a pull request][pr]

Your pull request will then be reviewed. You may receive some feedback and suggested changes before it can be approved and your pull request merged.

To increase the likelihood of your pull request being accepted:

@@ -48,28 +48,29 @@ To host the website locally to view the live changes, run the command:
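The command itself is elided in this diff view; for a Material for MkDocs site it is typically the following (a sketch assuming the standard MkDocs toolchain - check the repository README for the project's actual instructions):

```shell
# Install the theme and plugins (hypothetical requirements file name),
# then serve the site locally with live reload at http://127.0.0.1:8000
pip install -r requirements.txt
mkdocs serve
```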

To add a new file to the repository and website:

- Add any files for new pages to the relevant folder in [`docs`](./docs/).
- Add any images you'll use in the [`docs/images`](./docs/images/) folder.
- Because this website uses the `awesome-pages` MkDocs plugin, we don't need to update the 'nav' in `mkdocs.yml` - it happens automatically when the website is built.
- Don't forget to check that the links, images, headings, and contents are all working correctly on both the website and in the GitHub repo.

The website currently uses the [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/getting-started/) theme. This sets the layout, colour, font, search bar, header, footer, navigation bar and contents. You can follow the documentation to make any changes (e.g. change the [colour scheme](https://squidfunk.github.io/mkdocs-material/setup/changing-the-colors/)), as it is simple to use and easy to override. There is a separate stylesheet, [extra.css](./docs/stylesheets/extra.css), which overrides the colours, fonts and some of the sizing for some elements.
Here is a good [cheat sheet](https://yakworks.github.io/docmark/cheat-sheet/) of the features that can be used in MkDocs; there are also interesting features in [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/reference/).

#### Blog / Article

Creating new articles / blog posts is easy:
- Add a markdown file under the [`docs/articles/posts`](./docs/articles/posts) folder.
- Note that you do not have to add the blog pages to `mkdocs.yml` - they get added to the nav bar automatically.
- Add yourself to the [`docs/articles/posts/.authors.yml`], so your face and info appear next to the article.
- The markdown file should have some metadata at the start, like the example below. For more info on these parameters, see the [mkdocs material blog plugin guidance](https://squidfunk.github.io/mkdocs-material/plugins/blog/).

```markdown
---
title: Why we’re getting our data teams to RAP
authors: [SamHollings]
date: 2023-01-05
categories:
- RAP
- Python
links:
13 changes: 6 additions & 7 deletions docs/about.md
@@ -9,12 +9,11 @@ hide:

![Image title](../images/DS_team_photo_smaller.jpeg){ width="450" alt="Picture of the Data Science team stood on some steps in London." align=right }

We are the [NHS England](https://www.england.nhs.uk/) Data Science Team.

**Our vision is**:

> For NHS England to lead on embracing data science in the NHS for the betterment of all our patients and all our staff.
<br/>

@@ -23,7 +22,6 @@ We are the [NHS England](https://www.england.nhs.uk/) Data Science Team.

<br/>


</div>

## How are we different from analytical teams?
@@ -52,7 +50,7 @@ We are the [NHS England](https://www.england.nhs.uk/) Data Science Team.

---

We have the remit to be open and collaborative and have the aim of sharing products with the wider healthcare community.

</div>

@@ -74,6 +72,7 @@ We are the [NHS England](https://www.england.nhs.uk/) Data Science Team.
<h3 style="text-align: center;"> **Devise a great place to work where group work solves great problems.​**</h3>

## Our Members

| Name | Role |Team | Github |
| -------- | ------- | ------ | ------ |
|Sarah Culkin | Deputy Director |Central Data Science Team|[SCulkin-code](https://github.com/SCulkin-code)|
@@ -131,4 +130,4 @@ We are the [NHS England](https://www.england.nhs.uk/) Data Science Team.
|Scarlett Kynoch | Data Science Officer | Central Data Science Team | [scarlett-k-nhs](https://github.com/scarlett-k-nhs) |
|Jennifer Struthers | Data Science Officer | Central Data Science Team | [jenniferstruthers1-nhs](https://github.com/jenniferstruthers1-nhs) |
|Matthew Taylor | Data Science Officer | Central Data Science Team | [mtaylor57](https://github.com/mtaylor57) |
|Elizabeth Kelly | Data Science Officer | National SDE Team | [ejkcode](https://github.com/ejkcode) |
2 changes: 1 addition & 1 deletion docs/articles/.authors.yml
@@ -4,4 +4,4 @@ authors:
description: Principal Data Scientist # Author description
avatar: https://avatars.githubusercontent.com/u/52575338?v=4 # Author avatar
slug: SamHollings # Author profile slug
url: https://github.com/SamHollings # Author website URL
2 changes: 1 addition & 1 deletion docs/codebases.md
@@ -3,4 +3,4 @@ hide:
- navigation
---

# Codebases
4 changes: 2 additions & 2 deletions docs/meta_page.md
@@ -5,9 +5,9 @@ hide:

# Meta Page


## Contribution

This uses:

- mkdocs-material
- https://github.com/lukasgeiter/mkdocs-awesome-pages-plugin
4 changes: 2 additions & 2 deletions docs/our_work/Publications.md
@@ -10,11 +10,11 @@ List of pre-releases and publications connected to our work

[4] [https://doi.org/10.1101/2023.08.31.23294903](https://doi.org/10.1101/2023.08.31.23294903) - (Pre-Print)

**Representing Multimorbid Disease Progressions using directed hypergraphs**

**Jamie Burke**, Ashley Akbari, Rowena Bailey, **Kevin Fasusi**, Ronan A.Lyons, **Jonathan Pearson**, James Rafferty, and **Daniel Schofield**

*To introduce directed hypergraphs as a novel tool for assessing the temporal relationships between coincident diseases, addressing the need for a more accurate representation of multimorbidity and leveraging the growing availability of electronic healthcare databases and improved computational resources.*

---
25 changes: 13 additions & 12 deletions docs/our_work/adrenal-lesions.md
@@ -10,7 +10,6 @@ tags: ['CLASSIFICATION','LESION DETECTION','COMPUTER VISION','AI']
![Adrenal flow of transfer](../images/Flow_of_transfer.width-800.png) </a>
</figure>


Many cases of adrenal lesions, known as adrenal incidentalomas, are discovered incidentally on CT scans performed for other medical conditions. These lesions can be malignant, and so early detection is crucial for patients to receive the correct treatment and allow the public health system to target resources efficiently. Traditionally, the detection of adrenal lesions on CT scans relies on manual analysis by radiologists, which can be time-consuming and unsystematic.

The main aim of this study was to examine whether using AI can improve the detection of adrenal incidentalomas in CT scans. Previous studies have suggested that AI has the potential to distinguish between different types of adrenal lesions. In this study, we specifically focused on detecting the presence of any type of adrenal lesion in CT scans. To demonstrate this proof-of-concept, we investigated the potential of applying deep learning techniques to predict the likelihood of a CT abdominal scan presenting as ‘normal’ or ‘abnormal’, the latter implying the presence of an adrenal lesion.
@@ -21,35 +20,38 @@ This is a backup of the case study published [here](https://transform.england.nh

Many cases of adrenal lesions, known as adrenal incidentalomas, are discovered incidentally on CT scans performed for other medical conditions. These lesions can be malignant, and so early detection is crucial for patients to receive the correct treatment and allow the public health system to target resources efficiently. Traditionally, the detection of adrenal lesions on CT scans relies on manual analysis by radiologists, which can be time-consuming and unsystematic.


**The challenge**
Can applying AI and deep learning augment the detection of adrenal incidentalomas in patients’ CT scans?


### Overview

Autopsy studies reveal that as many as 6% of all natural deaths displayed a previously undiagnosed adrenal lesion. Such lesions are also found incidentally (and are therefore referred to as adrenal incidentalomas) in approximately 1% of chest or abdominal CT scans. These lesions affect approximately 50,000 patients annually in the United Kingdom, with significant impact on patient health, including 10% to 15% of cases of excess hormone production, or 1% to 5% of cases of cancer.

It is a significant challenge for the health care system to, in a standardised way, promptly reassure the majority of patients, who have no abnormalities, whilst effectively focusing on those with hormone excess or cancers. Issues include over-reporting (false positives), causing patient anxiety and unnecessary investigations (wasting resources of the health care system), and under-reporting (missed cases), with potentially fatal outcomes. This has major impacts on patient well-being and clinical outcomes, as well as cost-effectiveness.

The main aim of this study was to examine whether using Artificial Intelligence (AI) can improve the detection of adrenal incidentalomas in CT scans. Previous studies have suggested that AI has the potential to distinguish between different types of adrenal lesions. In this study, we specifically focused on detecting the presence of any type of adrenal lesion in CT scans. To demonstrate this proof-of-concept, we investigated the potential of applying deep learning techniques to predict the likelihood of a CT abdominal scan presenting as ‘normal’ or ‘abnormal’, the latter implying the presence of an adrenal lesion.

### What we did

Using the data provided by University Hospitals of North Midlands NHS Trust, we developed a 2.5D deep learning model to perform detection of adrenal lesions in patients’ CT scans (binary classification of normal and abnormal adrenal glands). The entire dataset is completely anonymised and does not contain any personal or identifiable information of patients. The only clinical information taken were the binary labels for adrenal lesions (‘normal’ or ‘abnormal’) for the pseudo-labelled patients and their CT scans.

#### 2.5D images

A 2.5D image is a type of image that lies between a typical 2D and 3D image. It can retain some level of 3D features and can potentially be processed as a 2D image by deep learning models. A greyscale 2D image is two dimensional with a size of x × y, where x and y are the length and width of the 2D image. For a greyscale 3D image (e.g., a CT scan), with a size of x × y × n, it can be considered as a combination of a stack of n number of greyscale 2D images. In other words, a CT scan is a 3D image consisting of multiple 2D images layered on top of each other. The size of a 2.5D image is x × y × 3, and it represents a stack of 3 greyscale 2D images.

Typically, an extra dimension of pixel information is required to record and display 2D colour images in electronic systems, such as the three RGB (red, green, and blue) colour channels. This increases the size of a 2D image to x × y × 3, where the 3 represents the three RGB channels. Many commonly used families of 2D deep learning algorithms (e.g., VGG, ResNet, and EfficientNet) have taken colour images into account and can process images with the extra three channels. Taking advantage of the fact that pixel volumes have the same size between 2D colour images and 2.5D images, converting our 3-dimensional CT scan data to 2.5D images allows us to apply 2D deep learning models to our images.
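The slice-stacking described above can be sketched in a few lines of NumPy (`to_25d_stacks` is a hypothetical helper for illustration, not the study's released code):

```python
import numpy as np

def to_25d_stacks(scan: np.ndarray) -> np.ndarray:
    """Split a 3D CT volume of shape (x, y, n) into 2.5D images of shape (x, y, 3).

    Each 2.5D image stacks three consecutive greyscale slices, so a 2D
    network that expects RGB input can consume it unchanged.
    """
    x, y, n = scan.shape
    # One 2.5D image per run of three consecutive slices: n - 2 in total.
    return np.stack([scan[:, :, i:i + 3] for i in range(n - 2)], axis=0)

volume = np.zeros((64, 64, 10))  # toy 3D "CT scan" with 10 slices
stacks = to_25d_stacks(volume)
print(stacks.shape)              # (8, 64, 64, 3)
```

Each (x, y, 3) stack then slots into the channel dimension that a pretrained 2D backbone already expects, which is what makes transfer learning possible here.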

#### Why use a 2.5D model

Due to the intrinsic nature of CT scans (e.g., a high operating cost, limited number of available CT scanners, and patients’ exposure to radiation), the acquisition of a sufficient amount of CT scans for 3D deep learning models training is challenging. In many cases, the performance of 3D deep learning models is limited by the small and non-diversified dataset. Training, validating, and testing the model with a small dataset can lead to many disadvantages, for example, a high risk of overfitting the training-validation set (low prediction ability on an unseen test set), and evaluating the model performance within the ambit of a small number statistic (under-represented test set results in the test accuracy much lower/higher than the underlying model performance).

To overcome some of the disadvantages of training a 3D deep learning model, we took a 2.5D deep learning model approach in this case study. Training the model using 2.5D images enables our deep learning model to still learn from the 3D features of the CT scans, while increasing the number of training and testing data points in this study. Moreover, we can apply 2D deep learning models to the set of 2.5D images, which allows us to train our own model further via transfer learning, based on the knowledge learned by other deep learning applications (e.g., ImageNet, and the NHS AI Lab’s National COVID-19 Chest Imaging Database).

![Adrenal flow of transfer](../images/Flow_of_transfer.width-800.png)

#### Classification of 3D CT scans

To perform the binary classification on the overall CT scans (instead of a single 2.5D image), the classification results from each individual 2.5D image that make up a CT scan are considered.

To connect the classification prediction results from the 2.5D images to the CT scan, we introduce an operating value for our model to provide the final classification. The CT scans are classified as normal if the number of abnormal 2.5D images is lower than the threshold operating value. For example, if the operating value is defined to be X, a CT scan will be considered normal if fewer than X of its 2.5D images are classified as abnormal by our model.
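As a concrete sketch of this aggregation rule (hypothetical illustrative code, not the released implementation):

```python
def classify_scan(image_preds, operating_value):
    """Aggregate per-2.5D-image labels ('normal'/'abnormal') into a single
    scan-level label: the scan is normal when the count of abnormal
    2.5D images falls below the operating value."""
    n_abnormal = sum(pred == "abnormal" for pred in image_preds)
    return "normal" if n_abnormal < operating_value else "abnormal"

# A scan with 2 abnormal images out of 20, using an operating value of 5:
print(classify_scan(["normal"] * 18 + ["abnormal"] * 2, operating_value=5))   # normal
print(classify_scan(["normal"] * 10 + ["abnormal"] * 10, operating_value=5))  # abnormal
```

Raising the operating value tolerates more abnormal-looking images before a scan is flagged, trading sensitivity for specificity.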

@@ -63,17 +65,16 @@ To prepare the CT scans for this case study (region of interest focus on the adr

The resulting code, [released as open source on our GitHub](https://github.com/nhsx/skunkworks-adrenal-lesions-detection) (available for anyone to re-use), enables users to:

- Process CT scans to focus on the region of interest (e.g., adrenal glands),
- Transform 3D CT scans to sets of 2.5D images,
- Train a deep learning model with the 2.5D images for adrenal lesion detection (classification: normal vs. abnormal),
- Evaluate the trained deep learning model on an independent test set.

This proof-of-concept model demonstrates the ability and potential of applying such deep learning techniques in the detection of adrenal lesions on CT scans. It also shows an opportunity to detect adrenal incidentalomas using deep learning.

> An AI solution will allow for lesions to be detected more systematically and flagged for the reporting radiologist. In addition to enhanced patient safety, through minimising missed cases and variability in reporting, this is likely to be a cost-effective solution, saving clinician time.
> – Professor Fahmy Hanna, Professor of Endocrinology and Metabolism, Keele Medical School and University Hospitals of North Midlands NHS Trust


### Who was involved?

This project was a collaboration between the NHS AI Lab Skunkworks, within the Transformation Directorate at NHS England and NHS Improvement, and University Hospitals of North Midlands NHS Trust.
@@ -87,4 +88,4 @@ Case Study|[Case Study](https://transform.england.nhs.uk/ai-lab/explore-all-reso
Technical report|[medRxiv](https://www.medrxiv.org/content/10.1101/2023.02.22.23286184v1)

[comment]: <> (The below header stops the title from being rendered (as mkdocs adds it to the page from the "title" attribute) - this way we can add it in the main.html, along with the summary.)
#
