From 1b5212388fa5a216216645c8bd0bf578e2b1cdde Mon Sep 17 00:00:00 2001 From: Denys Zariaiev Date: Sat, 1 Jun 2019 15:42:46 +0200 Subject: [PATCH 1/4] Prepare a WG application draft --- documents/wg-application.md | 93 +++++++++++++++++++++++++++++++++++++ 1 file changed, 93 insertions(+) create mode 100644 documents/wg-application.md diff --git a/documents/wg-application.md b/documents/wg-application.md new file mode 100644 index 0000000..9d072c0 --- /dev/null +++ b/documents/wg-application.md @@ -0,0 +1,93 @@ +# CUDA WG - draft + +## What value do you want to bring to the project? + +The Working Group is an attempt to combine efforts to bring reliable support of writing safe GPGPU CUDA code with Rust. +The work that will be done is supposed to clear the path for other GPGPU platforms in future. + +## Why should it be put into place? + +First steps to basic support of the feature have already been made by individual developers. +Regardless of recent progress, there are still a lot of questions about safety and soundness of SIMT (Single Instruction, Multiple Threads) code and they require a lot of collaboration to find solutions. + +## What is the working group about? + +We want to work together on getting a solid foundation for writing CUDA code. +Getting a safe and production-ready development experience on Stable Rust is our primary goal! + +Also, currently a major obstacle for developing GPGPU applications and algorithms in Rust is a lack of learning resources. +We plan to solve the "documentation debt" with a broad range of tutorials, examples and references. + +## What is the working group not about? + +The WG is not focused on promoting or developing "standard" frameworks. +Instead, we want to provide basic and reliable support of the feature and inspire the community to start using it. +This should lead to experimenting with different approaches on how to use it and creating awesome tooling. + +## Is your WG long-running or temporary? 
+ +In our current vision, the WG should live until we fulfill our goals. + +In the end, we hope the WG will evolve into another one to cover similar topics: +to support other GPGPU platforms or to create higher-level frameworks to improve end-to-end experience based on community feedback. + +## What is your long-term vision? + +Having a reliable and safe CUDA development experience is our ultimate goal. +This should include: + +* Getting `nvptx64-nvidia-cuda` to a [Tier 2](https://forge.rust-lang.org/platform-support.html) support state. +* Test infrastructure on a real hardware and running the tests during Rust CI process. +* Rich documentation, references, tutorials and examples. +* A broad set of new compile errors, warnings and lints for SIMT execution model to avoid pitfalls and ensure code soundness. + +## How do you expect the relationship to the project be? + +The WG will be responsible for CUDA related issues and user requests. +This includes [already reported issues](https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Aissue+label%3AO-NVPTX) and those that will be opened due to the work made by the WG. +An important aspect is that the WG will take care of the state of the `rust-ptx-linker` and will do its best to avoid blocking someone else's work (like it, unfortunately, happened in [rust#59752](https://github.com/rust-lang/rust/pull/59752)). + +### How do you want to establish accountability? + +For that purpose, we will publish an agenda and made decisions from our meetings. +Once we achieve important milestones we plan to make public announces. + +## Which other WGs do you expect to have close contact with? + +We would like to cooperate with Core or Compiler Teams in discussions about safety in SIMT code. +Additionally, it would be very important to discuss a strategy of reliable deployment of `rust-ptx-linker` (likely as a `rustup` component). 
+ +On the other hand, proposed [Machine Learning WG](https://internals.rust-lang.org/t/enabling-the-formation-of-new-working-groups/10218/11) could leverage CUDA accelerated computing capabilities, so we can discuss their use cases. + +## What are your short-term goals? + +Mainly, short-term goals are about evaluating and discussing how can we achieve long-term goals: + +* Make a poll about community experiences and use cases for CUDA. +* Deploy the `rust-ptx-linker` as a `rustup` component. +* Collect soundness and safety issues that can happen in SIMT execution model. +* Decide testing approach on a real hardware. + +## Who is the initial leadership? + +> TBD... + +## How do you intend to make your work accessible to outsiders? + +Excessive learning materials and retrospective about made decisions should help to get more people involved in either discussions or further experimenting. + +> TBD... something else? + +## Everything that is already decided upon + +We already have a [`rust-cuda`](https://github.com/rust-cuda) Github organization and a [`rust-cuda`](https://rust-cuda.zulipchat.com) Zulip server. + +> TBD... would it make sense to move to a `rust-lang` Zulip server? + +## Where do you need help? + +> TBD... + +## Preferred way of contact/discussion + +> TBD... From ba685d41a2909f7df675bc7752516c16856aa296 Mon Sep 17 00:00:00 2001 From: Denys Zariaiev Date: Mon, 3 Jun 2019 22:11:03 +0200 Subject: [PATCH 2/4] Update documents/wg-application.md Co-Authored-By: gnzlbg --- documents/wg-application.md | 25 ++++++++++++++++++------- 1 file changed, 18 insertions(+), 7 deletions(-) diff --git a/documents/wg-application.md b/documents/wg-application.md index 9d072c0..74fdece 100644 --- a/documents/wg-application.md +++ b/documents/wg-application.md @@ -2,17 +2,28 @@ ## What value do you want to bring to the project? -The Working Group is an attempt to combine efforts to bring reliable support of writing safe GPGPU CUDA code with Rust. 
+The Working Group's goal is to deliver first-class CUDA support to Rust users - see our [Roadmap]. + +[Roadmap]: https://github.com/rust-cuda/wg/blob/master/documents/roadmap.md The work that will be done is supposed to clear the path for other GPGPU platforms in future. ## Why should it be put into place? -First steps to basic support of the feature have already been made by individual developers. +The WG is already in place as an unofficial WG, see our [Github], [Roadmap], [Zulip], and some PRs by WG members: [rust@denzp], [rust@peterhj], [stdsimd], [libc], etc. + +Being an official WG would allow us to use the official communication channels like the rust-lang zulip, which would give us more visibility and allow us to attract more contributors. + +[Github]: https://github.com/rust-cuda +[Zulip]: https://rust-cuda.zulipchat.com/ +[libc]: https://github.com/rust-lang/libc/pull/1126 +[stdsimd]: https://github.com/rust-lang-nursery/stdsimd/pulls?q=is%3Apr+nvptx+is%3Aclosed +[rust@denzp]: https://github.com/rust-lang/rust/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+author%3Adenzp +[rust@peterhj]: https://github.com/rust-lang/rust/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+author%3Apeterhj Regardless of recent progress, there are still a lot of questions about safety and soundness of SIMT (Single Instruction, Multiple Threads) code and they require a lot of collaboration to find solutions. ## What is the working group about? -We want to work together on getting a solid foundation for writing CUDA code. +Right now, this WG is about making the already-existing Rust CUDA support minimally reliable. Getting a safe and production-ready development experience on Stable Rust is our primary goal! Also, currently a major obstacle for developing GPGPU applications and algorithms in Rust is a lack of learning resources. 
@@ -22,11 +33,11 @@ We plan to solve the "documentation debt" with a broad range of tutorials, examp The WG is not focused on promoting or developing "standard" frameworks. Instead, we want to provide basic and reliable support of the feature and inspire the community to start using it. -This should lead to experimenting with different approaches on how to use it and creating awesome tooling. +This WG is only about CUDA support - other GPGPU targets are out-of-scope. Our focus is on making the current CUDA target more reliable. Everything that goes beyond that (e.g. higher-level CUDA libraries, CUDA frameworks, etc.) is also out-of-scope. ## Is your WG long-running or temporary? -In our current vision, the WG should live until we fulfill our goals. +The CUDA WG is long-running. We have a [Roadmap] for an MVP, but there are many issues worth solving once that MVP is achieved (e.g. foundational libraries). In the end, we hope the WG will evolve into another one to cover similar topics: to support other GPGPU platforms or to create higher-level frameworks to improve end-to-end experience based on community feedback. @@ -80,7 +91,7 @@ Excessive learning materials and retrospective about made decisions should help ## Everything that is already decided upon -We already have a [`rust-cuda`](https://github.com/rust-cuda) Github organization and a [`rust-cuda`](https://rust-cuda.zulipchat.com) Zulip server. +We work in the open, see our [Github]. > TBD... would it make sense to move to a `rust-lang` Zulip server? @@ -90,4 +101,4 @@ We already have a [`rust-cuda`](https://github.com/rust-cuda) Github organizatio ## Preferred way of contact/discussion -> TBD... +[Github] issues or [Zulip]. 
From bbaff9c9854a10838ee930be4e996992a34f1c36 Mon Sep 17 00:00:00 2001 From: Denys Zariaiev Date: Mon, 3 Jun 2019 22:42:13 +0200 Subject: [PATCH 3/4] More suggestions from @gnzlbg --- documents/wg-application.md | 16 ++++++---------- 1 file changed, 6 insertions(+), 10 deletions(-) diff --git a/documents/wg-application.md b/documents/wg-application.md index 74fdece..bf306bc 100644 --- a/documents/wg-application.md +++ b/documents/wg-application.md @@ -5,11 +5,12 @@ The Working Group's goal is to deliver first-class CUDA support to Rust users - see our [Roadmap]. [Roadmap]: https://github.com/rust-cuda/wg/blob/master/documents/roadmap.md + The work that will be done is supposed to clear the path for other GPGPU platforms in future. ## Why should it be put into place? -The WG is already in place as an unofficial WG, see our [Github], [Roadmap], [Zulip], and some PRs by WG members: [rust@denzp], [rust@peterhj], [stdsimd], [libc], etc. +The WG is already in place as an unofficial WG, see our [Github], [Roadmap], [Zulip], and some PRs by WG members: [rust@denzp], [rust@peterhj], [stdsimd], [libc], etc. Being an official WG would allow us to use the official communication channels like the rust-lang zulip, which would give us more visibility and allow us to attract more contributors. @@ -19,11 +20,10 @@ Being an official WG would allow us to use the official communication channels l [stdsimd]: https://github.com/rust-lang-nursery/stdsimd/pulls?q=is%3Apr+nvptx+is%3Aclosed [rust@denzp]: https://github.com/rust-lang/rust/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+author%3Adenzp [rust@peterhj]: https://github.com/rust-lang/rust/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+author%3Apeterhj -Regardless of recent progress, there are still a lot of questions about safety and soundness of SIMT (Single Instruction, Multiple Threads) code and they require a lot of collaboration to find solutions. ## What is the working group about? 
-Right now, this WG is about making the already-existing Rust CUDA support minimally reliable. +Right now, this WG is about making the already-existing Rust CUDA support minimally reliable. Getting a safe and production-ready development experience on Stable Rust is our primary goal! Also, currently a major obstacle for developing GPGPU applications and algorithms in Rust is a lack of learning resources. @@ -37,7 +37,7 @@ This WG is only about CUDA support - other GPGPU targets are out-of-scope. Our f ## Is your WG long-running or temporary? -The CUDA WG is long-running. We have a [Roadmap] for an MVP, but there are many issues worth solving once that MVP is achieved (e.g. foundational libraries). +The CUDA WG is long-running. We have a [Roadmap] for an MVP, but there are many issues worth solving once that MVP is achieved (e.g. foundational libraries). In the end, we hope the WG will evolve into another one to cover similar topics: to support other GPGPU platforms or to create higher-level frameworks to improve end-to-end experience based on community feedback. @@ -65,8 +65,8 @@ Once we achieve important milestones we plan to make public announces. ## Which other WGs do you expect to have close contact with? -We would like to cooperate with Core or Compiler Teams in discussions about safety in SIMT code. -Additionally, it would be very important to discuss a strategy of reliable deployment of `rust-ptx-linker` (likely as a `rustup` component). +We would like to cooperate with *Language Team* in discussions about safety in SIMT code. +Additionally, it would be very important to discuss with *Infra team* a strategy of reliable deployment of `rust-ptx-linker` (likely as a `rustup` component). On the other hand, proposed [Machine Learning WG](https://internals.rust-lang.org/t/enabling-the-formation-of-new-working-groups/10218/11) could leverage CUDA accelerated computing capabilities, so we can discuss their use cases. 
@@ -95,10 +95,6 @@ We work in the open, see our [Github]. > TBD... would it make sense to move to a `rust-lang` Zulip server? -## Where do you need help? - -> TBD... - ## Preferred way of contact/discussion [Github] issues or [Zulip]. From fc552b93e723ac5ec91a2a18e1c5fbf09f32b163 Mon Sep 17 00:00:00 2001 From: Denys Zariaiev Date: Wed, 5 Jun 2019 23:37:51 +0200 Subject: [PATCH 4/4] Several WG applications edits --- documents/wg-application.md | 45 +++++++++++++++---------------------- 1 file changed, 18 insertions(+), 27 deletions(-) diff --git a/documents/wg-application.md b/documents/wg-application.md index bf306bc..7c4c16f 100644 --- a/documents/wg-application.md +++ b/documents/wg-application.md @@ -6,8 +6,6 @@ The Working Group's goal is to deliver first-class CUDA support to Rust users - [Roadmap]: https://github.com/rust-cuda/wg/blob/master/documents/roadmap.md -The work that will be done is supposed to clear the path for other GPGPU platforms in future. - ## Why should it be put into place? The WG is already in place as an unofficial WG, see our [Github], [Roadmap], [Zulip], and some PRs by WG members: [rust@denzp], [rust@peterhj], [stdsimd], [libc], etc. @@ -33,7 +31,10 @@ We plan to solve the "documentation debt" with a broad range of tutorials, examp The WG is not focused on promoting or developing "standard" frameworks. Instead, we want to provide basic and reliable support of the feature and inspire the community to start using it. -This WG is only about CUDA support - other GPGPU targets are out-of-scope. Our focus is on making the current CUDA target more reliable. Everything that goes beyond that (e.g. higher-level CUDA libraries, CUDA frameworks, etc.) is also out-of-scope. + +This WG is only about CUDA support - other GPGPU targets are out-of-scope. +Our focus is on making the current CUDA target more reliable. +Everything that goes beyond that (e.g. higher-level CUDA libraries, CUDA frameworks, etc.) is also out-of-scope. 
## Is your WG long-running or temporary?

@@ -45,39 +46,33 @@ to support other GPGPU platforms or to create higher-level frameworks to improve

## What is your long-term vision?

Having a reliable and safe CUDA development experience is our ultimate goal.
-This should include:
-
-* Getting `nvptx64-nvidia-cuda` to a [Tier 2](https://forge.rust-lang.org/platform-support.html) support state.
-* Test infrastructure on a real hardware and running the tests during Rust CI process.
-* Rich documentation, references, tutorials and examples.
-* A broad set of new compile errors, warnings and lints for SIMT execution model to avoid pitfalls and ensure code soundness.
+To get there, we will first need to achieve all the milestones in our [Roadmap].

## How do you expect the relationship to the project be?

-The WG will be responsible for CUDA related issues and user requests.
-This includes [already reported issues](https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Aissue+label%3AO-NVPTX) and those that will be opened due to the work made by the WG.
-An important aspect is that the WG will take care of the state of the `rust-ptx-linker` and will do its best to avoid blocking someone else's work (like it, unfortunately, happened in [rust#59752](https://github.com/rust-lang/rust/pull/59752)).
+The Working Group will mainly create RFCs and send PRs to various Rust components.
+
+An important aspect of the WG is active participation in discussions related to the NVPTX target and the tools involved.
+This also includes [already reported issues](https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Aissue+label%3AO-NVPTX) and those that will be opened as a result of the WG's work.

### How do you want to establish accountability?

-For that purpose, we will publish an agenda and made decisions from our meetings.
-Once we achieve important milestones we plan to make public announces. 
+Currently, as an unofficial WG, we work completely in the open, and we intend to keep it that way.
+Most discussions happen either in our GitHub issues or in our public [Zulip], so that they can be read by anyone.
+
+Once we start having regular meetings, we will publish an agenda and a summary for each of them.

## Which other WGs do you expect to have close contact with?

-We would like to cooperate with *Language Team* in discussions about safety in SIMT code.
+We would like to cooperate with the *Language Team* (and perhaps additionally with the *Unsafe Code Guidelines WG*) in discussions about safety in SIMT code.
Additionally, it would be very important to discuss with *Infra team* a strategy of reliable deployment of `rust-ptx-linker` (likely as a `rustup` component).

-On the other hand, proposed [Machine Learning WG](https://internals.rust-lang.org/t/enabling-the-formation-of-new-working-groups/10218/11) could leverage CUDA accelerated computing capabilities, so we can discuss their use cases.
+On the other hand, the proposed [Machine Learning WG](https://internals.rust-lang.org/t/enabling-the-formation-of-new-working-groups/10218/11) could leverage CUDA-accelerated computing capabilities, so we can collaborate on their use cases.

## What are your short-term goals?

-Mainly, short-term goals are about evaluating and discussing how can we achieve long-term goals:
-
-* Make a poll about community experiences and use cases for CUDA.
-* Deploy the `rust-ptx-linker` as a `rustup` component.
-* Collect soundness and safety issues that can happen in SIMT execution model.
-* Decide testing approach on a real hardware.
+Our short-term goals are mainly defined in our [Roadmap] for an MVP.
+Additionally, we plan to run a poll about use cases and expectations, and to raise community awareness that it is already possible to experiment with CUDA on Nightly Rust today.

## Who is the initial leadership? 
@@ -85,16 +80,12 @@ Mainly, short-term goals are about evaluating and discussing how can we achieve

> TBD...

## How do you intend to make your work accessible to outsiders?

-Excessive learning materials and retrospective about made decisions should help to get more people involved in either discussions or further experimenting.
-
-> TBD... something else?
+Blog posts, announcements, and retrospectives on the decisions we make should help get more people involved in discussions and further experimentation.

## Everything that is already decided upon

We work in the open, see our [Github].

-> TBD... would it make sense to move to a `rust-lang` Zulip server?
-
## Preferred way of contact/discussion

[Github] issues or [Zulip].