From 2978790c610d1df81b16a1f9dbaf30d710834d0e Mon Sep 17 00:00:00 2001
From: Harriet Sands
Date: Mon, 2 Sep 2024 15:47:38 +0000
Subject: [PATCH] Capitalising bullet point entries

---
 docs/our_work/ai-ethics.md | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs/our_work/ai-ethics.md b/docs/our_work/ai-ethics.md
index c48008a7..0cb7efcb 100644
--- a/docs/our_work/ai-ethics.md
+++ b/docs/our_work/ai-ethics.md
@@ -24,25 +24,25 @@ This paper proposes that data scientists specifically focus on trying to ensure
 Our practical suggestions of how we can work towards embedding these characteristics in working practices include:
 
-* trialling tools and frameworks on live and emerging projects and to share learnings with the wider community. This will eventually constitute a portfolio of real examples that can inform future projects, similar to the use cases features in the [OECD's Catalogue of Tools & Metrics for Trustworthy AI](https://oecd.ai/en/catalogue/tool-use-cases).
-* coordinating a series of interactive workshops to build awareness of ethical risks, and to create and sustain a shared vocabulary to document and help mitigate these risks effectively along each project’s lifecycle.
-* developing standardised resources that can be flexible to different types of data science projects, but ensure a minimum level of consideration and proportionate action.
-* mapping the development processes and involved actors of AI at the NHS and growing a platform to share knowledge and experiences. This will help us to identify how to incorporate ethical considerations into the data science lifecycle.
+* Trialling tools and frameworks on live and emerging projects and to share learnings with the wider community. This will eventually constitute a portfolio of real examples that can inform future projects, similar to the use cases features in the [OECD's Catalogue of Tools & Metrics for Trustworthy AI](https://oecd.ai/en/catalogue/tool-use-cases).
+* Coordinating a series of interactive workshops to build awareness of ethical risks, and to create and sustain a shared vocabulary to document and help mitigate these risks effectively along each project’s lifecycle.
+* Developing standardised resources that can be flexible to different types of data science projects, but ensure a minimum level of consideration and proportionate action.
+* Mapping the development processes and involved actors of AI at the NHS and growing a platform to share knowledge and experiences. This will help us to identify how to incorporate ethical considerations into the data science lifecycle.
 
 ## Outputs
 
 We have:
 
-* developed a [Model Card Template](https://github.com/nhsengland/model-card).
-* coordinated the publication of a record for the [Automatic Moderation of Ratings & Reviews project](./ratings-and-reviews.md) on the government's [Algorithmic Transparency Recording Standard](https://www.gov.uk/algorithmic-transparency-records/nhs-england-nhs-dot-uk-reviews-automoderation-tool).
-* written a (currently internal) White Paper defining the scope of operationalising AI Ethics in NHS England.
+* Developed a [Model Card Template](https://github.com/nhsengland/model-card).
+* Coordinated the publication of a record for the [Automatic Moderation of Ratings & Reviews project](./ratings-and-reviews.md) on the government's [Algorithmic Transparency Recording Standard](https://www.gov.uk/algorithmic-transparency-records/nhs-england-nhs-dot-uk-reviews-automoderation-tool).
+* Written a (currently internal) White Paper defining the scope of operationalising AI Ethics in NHS England.
 
 ## In progress
 
 We are currently exploring:
 
-* how we can use the [Data Hazards project](https://datahazards.com/) to communicate potential harms of our work
-* the development of a generic statement detailing how projects have taken ethical considerations into account
-* supporting our information governance teams on whether additional instruments (such as model cards) can help inform a Data Protection Impact Assessment for AI use cases
+* Using the [Data Hazards project](https://datahazards.com/) to communicate potential harms of our work.
+* Developing a generic statement detailing how projects have taken ethical considerations into account.
+* Supporting our information governance teams on whether additional instruments (such as model cards) can help inform a Data Protection Impact Assessment for AI use cases.