Capitalising bullet point entries
harrietrs authored Sep 2, 2024
1 parent e36aadd commit 2978790
Showing 1 changed file with 10 additions and 10 deletions.
docs/our_work/ai-ethics.md (10 additions, 10 deletions)
@@ -24,25 +24,25 @@ This paper proposes that data scientists specifically focus on trying to ensure

Our practical suggestions of how we can work towards embedding these characteristics in working practices include:

-* trialling tools and frameworks on live and emerging projects and to share learnings with the wider community. This will eventually constitute a portfolio of real examples that can inform future projects, similar to the use cases features in the [OECD's Catalogue of Tools & Metrics for Trustworthy AI](https://oecd.ai/en/catalogue/tool-use-cases).
-* coordinating a series of interactive workshops to build awareness of ethical risks, and to create and sustain a shared vocabulary to document and help mitigate these risks effectively along each project’s lifecycle.
-* developing standardised resources that can be flexible to different types of data science projects, but ensure a minimum level of consideration and proportionate action.
-* mapping the development processes and involved actors of AI at the NHS and growing a platform to share knowledge and experiences. This will help us to identify how to incorporate ethical considerations into the data science lifecycle.
+* Trialling tools and frameworks on live and emerging projects and sharing learnings with the wider community. This will eventually constitute a portfolio of real examples that can inform future projects, similar to the use cases featured in the [OECD's Catalogue of Tools & Metrics for Trustworthy AI](https://oecd.ai/en/catalogue/tool-use-cases).
+* Coordinating a series of interactive workshops to build awareness of ethical risks, and to create and sustain a shared vocabulary to document and help mitigate these risks effectively throughout each project’s lifecycle.
+* Developing standardised resources that are flexible across different types of data science projects but ensure a minimum level of consideration and proportionate action.
+* Mapping the development processes and actors involved in AI at the NHS, and growing a platform to share knowledge and experiences. This will help us to identify how to incorporate ethical considerations into the data science lifecycle.

## Outputs

We have:

-* developed a [Model Card Template](https://github.com/nhsengland/model-card).
-* coordinated the publication of a record for the [Automatic Moderation of Ratings & Reviews project](./ratings-and-reviews.md) on the government's [Algorithmic Transparency Recording Standard](https://www.gov.uk/algorithmic-transparency-records/nhs-england-nhs-dot-uk-reviews-automoderation-tool).
-* written a (currently internal) White Paper defining the scope of operationalising AI Ethics in NHS England.
+* Developed a [Model Card Template](https://github.com/nhsengland/model-card).
+* Coordinated the publication of a record for the [Automatic Moderation of Ratings & Reviews project](./ratings-and-reviews.md) on the government's [Algorithmic Transparency Recording Standard](https://www.gov.uk/algorithmic-transparency-records/nhs-england-nhs-dot-uk-reviews-automoderation-tool).
+* Written a (currently internal) White Paper defining the scope of operationalising AI Ethics in NHS England.

## In progress

We are currently exploring:

-* how we can use the [Data Hazards project](https://datahazards.com/) to communicate potential harms of our work
-* the development of a generic statement detailing how projects have taken ethical considerations into account
-* supporting our information governance teams on whether additional instruments (such as model cards) can help inform a Data Protection Impact Assessment for AI use cases
+* Using the [Data Hazards project](https://datahazards.com/) to communicate potential harms of our work.
+* Developing a generic statement detailing how projects have taken ethical considerations into account.
+* Supporting our information governance teams in assessing whether additional instruments (such as model cards) can help inform a Data Protection Impact Assessment for AI use cases.

