From a978d2e4619c67110e37367595bb53cb8d5adb26 Mon Sep 17 00:00:00 2001 From: njlyon0 Date: Thu, 22 Aug 2024 17:31:49 -0400 Subject: [PATCH] feat!: replaced 'workshops' page with dropdown menu of direct links to our content (and updated related/linked facets of our site) --- .../best_practices/execute-results/html.json | 15 ----- .../file-paths/execute-results/html.json | 15 ----- .../pkg-loading/execute-results/html.json | 15 ----- _quarto.yml | 8 ++- index.qmd | 2 +- wg_services.qmd | 22 +++++--- workshops.qmd | 55 ------------------- 7 files changed, 23 insertions(+), 109 deletions(-) delete mode 100644 _freeze/best_practices/execute-results/html.json delete mode 100644 _freeze/modules_best-practices/file-paths/execute-results/html.json delete mode 100644 _freeze/modules_best-practices/pkg-loading/execute-results/html.json delete mode 100644 workshops.qmd diff --git a/_freeze/best_practices/execute-results/html.json b/_freeze/best_practices/execute-results/html.json deleted file mode 100644 index 6ef8c7f..0000000 --- a/_freeze/best_practices/execute-results/html.json +++ /dev/null @@ -1,15 +0,0 @@ -{ - "hash": "e72fec71ea017db319c588e31a59697e", - "result": { - "engine": "knitr", - "markdown": "---\ntitle: \"Coding Tips\"\n---\n\n\n\n\n### Welcome!\n\nThis page contains the collected best practice tips of our team. More will be added over time and feel free to post [an issue](https://github.com/lter/scicomp/issues) if you have a specific request for a section to add to this document. 
Please feel free to reach out to [our team](https://lter.github.io/scicomp/staff.html) if you have any questions about this best practices manual and/or need help implementing some of this content.\n\nCheck the headings below or in the table of contents on the right of this page to see which tips and tricks we have included so far and we hope this page is a useful resource to you and your team!\n\n## R Scripts versus R Markdowns\n\n\nWhen coding in R, either R scripts (.R files) or R markdowns (.Rmd files) are viable options but they have different advantages and disadvantages that we will cover below.\n\n### R Scripts - Positives\n\nR scripts' greatest strength is their flexibility. They allow you to format a file in whatever way is most intuitive to you. Additionally, R scripts can be cleaner for `for` loops insofar as they need not be concerned with staying within a given code chunk (as would be the case for a .Rmd). Developing a new workflow can be swiftly accomplished in an R script as some or all of the code in a script can be run by simply selecting the desired lines rather than manually running the desired chunks in a .Rmd file. Finally, R scripts can also be a better home for custom functions that can be `source`d by another file (even a .Rmd!) for making repeated operations simpler to read.\n\n### R Scripts - Potential Weaknesses\n\nThe benefit of extreme flexibility in R scripts can sometimes be a disadvantage however. We've all seen (and written) R scripts that have few or no comments or where lines of code are densely packed without spacing or blank lines to help someone new to the code understand what is being done. R scripts can certainly be written in a way that is accessible to those without prior knowledge of what the script accomplishes but they do not *enforce* such structure. 
This can make it easy, especially when we're feeling pressed for time, to exclude structure that helps our code remain reproducible and understandable.\n\n### R Markdowns - Positives\n\nR markdown files' ability to \"knit\" as HTML or PDF documents makes them extremely useful in creating outward-facing reports. This is particularly the case when the specific code is less important to communicate than visualizations and/or analyses of the data but .Rmd files do facilitate `echo`ing the code so that report readers can see how background operations were accomplished. The code chunk structure of these files can also nudge users towards including valuable comments (both between chunks and within them) though of course .Rmd files do not enforce such non-code content.\n\n### R Markdowns - Potential Weaknesses\n\nR markdowns can fail to knit due to issues even when the code within the chunks works as desired. Duplicate code chunk names or a failure to install LaTeX can be a frustrating hurdle to overcome between functioning code and a knit output file. When code must be re-run repeatedly (as is often the case when developing a new workflow) the stop-and-start nature of running each code chunk separately can also be a small irritation.\n\n### Script vs. Markdown Summary\n\nTaken together, both R scripts and R markdown files can empower users to write reproducible, transparent code. However, both file types have some key limitations that should be taken into consideration when choosing which to use as you set out to create a new code product.\n\n\n\n

\n\"Photo\n

\n\n## File Paths\n\n\nThis section contains our recommendations for handling **file paths**. When you code collaboratively (e.g., with GitHub), accounting for the difference between your folder structure and those of your colleagues becomes critical. Ideally your code should be completely agnostic about (1) the operating system of the computer it is running on (i.e., Windows vs. Mac) and (2) the folder structure of the computer. We can--fortunately--handle these two considerations relatively simply.\n\nThis may seem somewhat dry but it is worth mentioning that failing to use relative file paths is a significant hindrance to reproducibility (see [Trisovic et al. 2022](https://www.nature.com/articles/s41597-022-01143-6)).\n\n### 1. Preserve File Paths as Objects Using `file.path`\n\nDepending on the operating system of the computer, the slashes between folder names are different (`\\` versus `/`). The `file.path` function automatically detects the computer operating system and inserts the correct slash. We recommend using this function and assigning your file path to an object.\n\n::: {.cell}\n\n```{.r .cell-code}\nmy_path <- file.path(\"path\", \"to\", \"my\", \"file\")\nmy_path\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n[1] \"path/to/my/file\"\n```\n\n\n:::\n:::\n\nOnce you have that path object, you can use it everywhere you import or export information to/from the code (with another use of `file.path` to get the right type of slash!).\n\n::: {.cell}\n\n```{.r .cell-code}\n# Import\nmy_raw_data <- read.csv(file = file.path(my_path, \"raw_data.csv\"))\n\n# Export\nwrite.csv(x = data_object, file = file.path(my_path, \"tidy_data.csv\"))\n```\n:::\n\n### 2. Create Necessary Sub-Folders in the Code with `dir.create`\n\nUsing `file.path` guarantees that your code will work regardless of the upstream folder structure but what about the folders that you need to export or import things to/from? 
For example, say your `graphs.R` script saves a couple of useful exploratory graphs to the \"Plots\" folder. How would you guarantee that everyone running `graphs.R` *has* a \"Plots\" folder? You can use the `dir.create` function to create the folder in the code (and include your path object from step 1!).\n\n::: {.cell}\n\n```{.r .cell-code}\n# Create needed folder\ndir.create(path = file.path(my_path, \"Plots\"), showWarnings = FALSE)\n\n# Then export to that folder\nggplot2::ggsave(filename = file.path(my_path, \"Plots\", \"my_plot.png\"))\n```\n:::\n\nThe `showWarnings` argument of `dir.create` controls whether you are warned when the folder you're creating already exists. There is no harm in \"creating\" a folder that already exists (nothing is overwritten!) but the warning can be confusing so we silence it ahead of time.\n\n### File Paths Summary\n\nWe strongly recommend following these guidelines so that your scripts work regardless of (1) the operating system, (2) folders \"upstream\" of the working directory, and (3) folders within the project. This will help your code be flexible and reproducible when others are attempting to re-run your scripts!\n\nAlso, for more information on how to read files in cloud storage locations such as Google Drive, Box, Dropbox, etc., please refer to our [Other Tutorials](https://nceas.github.io/scicomp.github.io/tutorials.html).\n\n\n\n

\n\"Photo\n

\n\n## Good Naming Conventions\n\nWhen you first start working on a project with your group members, figuring out what to name your folders/files may not be at the top of your priority list. However, following a good naming convention will allow team members to quickly locate files and figure out what they contain. The organized naming structure will also allow new members of the group to be onboarded more easily! \n\nHere is a summary of some naming tips that we recommend. These were taken from the [Reproducibility Best Practices module](https://lter.github.io/ssecr/mod_reproducibility.html#naming-tips) in the LTER's SSECR course. Please feel free to refer to the aforementioned link for more information.\n\n- Names should be informative\n - An ideal file name should give some information about the file’s contents, purpose, and relation to other project files.\n - For example, if you have a bunch of scripts that need to be run in order, consider adding step numbers to the start of each file name (e.g., \"01_harmonize_data.R\" or \"step01_harmonize_data.R\"). \n- Names should avoid spaces and special characters\n - Spaces and special characters (e.g., é, ü, etc.) in folder/file names may cause errors when someone with a Windows computer tries to read those file paths. You can replace spaces with delimiters like underscores or hyphens to increase machine readability. \n- Follow a consistent naming convention throughout!\n - If you and your group members find a naming convention that works, stick with it! Having a consistent naming convention is key to getting new collaborators to follow it. \n \n\n\n\n## Package Loading\n\n\nLoading packages / libraries in R can be cumbersome when working collaboratively because there is no guarantee that you all have the same packages installed. 
While you could comment-out an `install.packages()` line for every package you need for a given script, we recommend using the R package `librarian` to greatly simplify this process!\n\n`librarian::shelf()` accepts the names of all of the packages--either CRAN or GitHub--installs those that are missing in that particular R session and then attaches all of them. See below for an example:\n\nTo load packages typically you'd have something like the following in your script:\n\n::: {.cell}\n\n```{.r .cell-code}\n## Install packages (if needed)\n# install.packages(\"tidyverse\")\n# install.packages(\"devtools\")\n# devtools::install_github(\"NCEAS/scicomptools\")\n\n# Load libraries\nlibrary(tidyverse); library(scicomptools)\n```\n:::\n\nWith `librarian::shelf()` however this becomes *much* cleaner! In addition to being fewer lines, using `librarian` also removes the possibility that someone running your code misses one of the packages that your script depends on and then the script breaks for them later on. 
`librarian::shelf()` automatically detects whether a package is installed, installs it if necessary, and then attaches the package.\n\nIn essence, `librarian::shelf()` wraps `install.packages()`, `devtools::install_github()`, and `library()` into a single, human-readable function.\n\n::: {.cell}\n\n```{.r .cell-code}\n# Install and load packages!\nlibrarian::shelf(tidyverse, NCEAS/scicomptools)\n```\n:::\n\nWhen using `librarian::shelf()`, package names do not need to be quoted and GitHub packages can be installed without the additional steps of installing the `devtools` package and using `devtools::install_github()` instead of `install.packages()`.\n\n", - "supporting": [], - "filters": [ - "rmarkdown/pagebreak.lua" - ], - "includes": {}, - "engineDependencies": {}, - "preserve": {}, - "postProcess": true - } -} \ No newline at end of file diff --git a/_freeze/modules_best-practices/file-paths/execute-results/html.json b/_freeze/modules_best-practices/file-paths/execute-results/html.json deleted file mode 100644 index c939b54..0000000 --- a/_freeze/modules_best-practices/file-paths/execute-results/html.json +++ /dev/null @@ -1,15 +0,0 @@ -{ - "hash": "0b74bcb0ee0f9019d9c30ea7a67b3169", - "result": { - "engine": "knitr", - "markdown": "\nThis section contains our recommendations for handling **file paths**. When you code collaboratively (e.g., with GitHub), accounting for the difference between your folder structure and those of your colleagues becomes critical. Ideally your code should be completely agnostic about (1) the operating system of the computer it is running on (i.e., Windows vs. Mac) and (2) the folder structure of the computer. We can--fortunately--handle these two considerations relatively simply.\n\nThis may seem somewhat dry but it is worth mentioning that failing to use relative file paths is a significant hindrance to reproducibility (see [Trisovic et al. 2022](https://www.nature.com/articles/s41597-022-01143-6)).\n\n### 1. 
Preserve File Paths as Objects Using `file.path`\n\nDepending on the operating system of the computer, the slashes between folder names are different (`\\` versus `/`). The `file.path` function automatically detects the computer operating system and inserts the correct slash. We recommend using this function and assigning your file path to an object.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nmy_path <- file.path(\"path\", \"to\", \"my\", \"file\")\nmy_path\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n[1] \"path/to/my/file\"\n```\n\n\n:::\n:::\n\n\nOnce you have that path object, you can use it everywhere you import or export information to/from the code (with another use of `file.path` to get the right type of slash!).\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Import\nmy_raw_data <- read.csv(file = file.path(my_path, \"raw_data.csv\"))\n\n# Export\nwrite.csv(x = data_object, file = file.path(my_path, \"tidy_data.csv\"))\n```\n:::\n\n\n### 2. Create Necessary Sub-Folders in the Code with `dir.create`\n\nUsing `file.path` guarantees that your code will work regardless of the upstream folder structure but what about the folders that you need to export or import things to/from? For example, say your `graphs.R` script saves a couple of useful exploratory graphs to the \"Plots\" folder. How would you guarantee that everyone running `graphs.R` *has* a \"Plots\" folder? You can use the `dir.create` function to create the folder in the code (and include your path object from step 1!).\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Create needed folder\ndir.create(path = file.path(my_path, \"Plots\"), showWarnings = FALSE)\n\n# Then export to that folder\nggplot2::ggsave(filename = file.path(my_path, \"Plots\", \"my_plot.png\"))\n```\n:::\n\n\nThe `showWarnings` argument of `dir.create` controls whether you are warned when the folder you're creating already exists. There is no harm in \"creating\" a folder that already exists (nothing is overwritten!) 
but the warning can be confusing so we can silence it ahead of time.\n\n### File Paths Summary\n\nWe strongly recommend following these guidelines so that your scripts work regardless of (1) the operating system, (2) folders \"upstream\" of the working directory, and (3) folders within the project. This will help your code be flexible and reproducible when others are attempting to re-run your scripts!\n\nAlso, for more information on how to read files in cloud storage locations such as Google Drive, Box, Dropbox, etc., please refer to our [Other Tutorials](https://nceas.github.io/scicomp.github.io/tutorials.html).", - "supporting": [], - "filters": [ - "rmarkdown/pagebreak.lua" - ], - "includes": {}, - "engineDependencies": {}, - "preserve": {}, - "postProcess": true - } -} \ No newline at end of file diff --git a/_freeze/modules_best-practices/pkg-loading/execute-results/html.json b/_freeze/modules_best-practices/pkg-loading/execute-results/html.json deleted file mode 100644 index 8f5c582..0000000 --- a/_freeze/modules_best-practices/pkg-loading/execute-results/html.json +++ /dev/null @@ -1,15 +0,0 @@ -{ - "hash": "ea1d300bfc65af384c36509a735900c2", - "result": { - "engine": "knitr", - "markdown": "\nLoading packages / libraries in R can be cumbersome when working collaboratively because there is no guarantee that you all have the same packages installed. While you could comment-out an `install.packages()` line for every package you need for a given script, we recommend using the R package `librarian` to greatly simplify this process!\n\n`librarian::shelf()` accepts the names of all of the packages--either CRAN or GitHub--installs those that are missing in that particular R session and then attaches all of them. 
See below for an example:\n\nTo load packages typically you'd have something like the following in your script:\n\n\n::: {.cell}\n\n```{.r .cell-code}\n## Install packages (if needed)\n# install.packages(\"tidyverse\")\n# install.packages(\"devtools\")\n# devtools::install_github(\"NCEAS/scicomptools\")\n\n# Load libraries\nlibrary(tidyverse); library(scicomptools)\n```\n:::\n\n\nWith `librarian::shelf()` however this becomes *much* cleaner! In addition to being fewer lines, using `librarian` also removes the possibility that someone running your code misses one of the packages that your script depends on and then the script breaks for them later on. `librarian::shelf()` automatically detects whether a package is installed, installs it if necessary, and then attaches the package.\n\nIn essence, `librarian::shelf()` wraps `install.packages()`, `devtools::install_github()`, and `library()` into a single, human-readable function.\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Install and load packages!\nlibrarian::shelf(tidyverse, NCEAS/scicomptools)\n```\n:::\n\n\nWhen using `librarian::shelf()`, package names do not need to be quoted and GitHub packages can be installed without the additional steps of installing the `devtools` package and using `devtools::install_github()` instead of `install.packages()`.\n", - "supporting": [], - "filters": [ - "rmarkdown/pagebreak.lua" - ], - "includes": {}, - "engineDependencies": {}, - "preserve": {}, - "postProcess": true - } -} \ No newline at end of file diff --git a/_quarto.yml b/_quarto.yml index 554db59..8cbef51 100644 --- a/_quarto.yml +++ b/_quarto.yml @@ -26,7 +26,13 @@ website: - text: "Facilitation" href: wg_facilitation.qmd - text: "Workshops" - href: workshops.qmd + menu: + - text: "Collaborative Coding with GitHub" + href: https://lter.github.io/workshop-github/ + - text: "Coding in the Tidyverse" + href: https://lter.github.io/workshop-tidyverse/ + - text: "Shiny Apps for Sharing Science" + href: 
https://njlyon0.github.io/asm-2022_shiny-workshop/ - text: "Tutorials" menu: - text: "Use NCEAS' Server" diff --git a/index.qmd b/index.qmd index 2c97142..d8ce61a 100644 --- a/index.qmd +++ b/index.qmd @@ -34,7 +34,7 @@ For members of our working groups, this website is a centralized hub of resource ### Other Visitors -Welcome! Even if you're not a part of a working group, we hope that you find the various resources on this website helpful! If you're interested in learning more about open, reproducible science, please feel free to take a look at our [**Workshops**](https://lter.github.io/scicomp/workshops.html) pages and [**Coding Tips**](https://lter.github.io/scicomp/best_practices.html) pages for other useful tips in R. +Welcome! Even if you're not a part of a working group, we hope that you find the various resources on this website helpful! If you're interested in learning more about open, reproducible science, please feel free to take a look at the items under our **Workshops** and **Tips** dropdown menus. :::: diff --git a/wg_services.qmd b/wg_services.qmd index 6a5fa4f..5523a17 100644 --- a/wg_services.qmd +++ b/wg_services.qmd @@ -13,7 +13,7 @@ The categories below are not exhaustive so if you think your needs will fall bet ## Tasks -**This level of collaboration is the core of our value to working groups!** When your group identifies a data-related need (e.g., designing an analytical workflow, creating a website, writing an R Shiny app, etc.), you reach out to [our team](https://lter.github.io/scicomp/staff.html) and get the conversation started. During that time we will work closely with you to define the scope of the work and get a clear picture of what "success" looks like in this context. 
+**This level of collaboration is the core of our value to working groups!** When your group identifies a data-related need (e.g., designing an analytical workflow, creating a website, writing an R Shiny app, etc.), you reach out to our team and get the conversation started. During that time we will work closely with you to define the scope of the work and get a clear picture of what "success" looks like in this context. Once the task is appropriately defined, the conversation moves on to how independently you'd like us to work. This varies dramatically between tasks even within a single working group and _there is no single right answer!_ For some tasks, we are capable of working completely independently and returning to your team with a finished product in hand for review but we are equally comfortable working closely with you throughout the life-cycle of a task. @@ -25,14 +25,22 @@ We are excited that these sprints are a part of our menu of offerings to you all ## Weekly Office Hours -Each of [our staff members](https://lter.github.io/scicomp/staff.html) offers a one-hour block weekly as a standing office hour each week. This is a great time to join us with small hurdles or obstacles you're experiencing in a given week. For example, previous office hours have dealt with topics like refreshing on Git/GitHub vocabulary, authenticating the `googledrive` R package, or solving a specific error in a new R script. +Each of our staff members offers a standing one-hour office hour block each week. This is a great time to join us with small hurdles or obstacles you're experiencing in a given week. For example, previous office hours have dealt with topics like refreshing on Git/GitHub vocabulary, authenticating the `googledrive` R package, or solving a specific error in a new R script. 
-## Workshops & Trainings +## Trainings -With those workshops, our team aims to help your group further develop skills in reproducible data science to enable your team to better collaborate and efficiently tackle data and analytical challenges. It is sometimes the case that your working group wants to become more familiar (or get a refresher) on a tool you'd like to include in your group's workflow. To that end **we can offer workshops on a selection of data science tools.** +We have three primary methods for helping your group further develop skills in reproducible data science to enable your team to better collaborate and efficiently tackle data and analytical challenges. **To access a category of our skill development content, click the corresponding dropdown menu in the navbar at the top of this website.** -For our current workshop catalog, see [here](https://lter.github.io/scicomp/workshops.html). +We are also happy to design new workshops, tutorials, or tips if your group wants training on something within our knowledge base for which we haven't yet built something. -We are also happy to offer new workshops if your group wants training on something within our knowledge base for which we haven't (yet) built a workshop. These workshops are typically done remotely (we can also accommodate time during one of your meetings if desired) and last 2-3 hours but we can be flexible with that timing depending on your group's needs. +### Workshops -Similarly, we also have been creating more **'go at your own pace'-style tutorials** that can be accessed [here](https://lter.github.io/scicomp/tutorials.html). These tutorials are usually smaller in scope than workshops but are still built to maximize value to your group either as review or first contact with a given subject. As with the workshops, we are happy to create new tutorials if something comes up for your team so please reach out and we can discuss further! 
+Workshops are typically done remotely and last 2-3 hours but we can be flexible with that timing depending on your group's needs. We can also accommodate time during one of your meetings if desired. + +### Tutorials + +The tutorials tend to be smaller in scope than the full workshops and more 'go at your own pace'-style. Despite that, they are still built to maximize value to your group either as review or first contact with a given subject. + +### Tips + +Finally, we've also curated a set of 'code tips' that are even smaller in scale than the tutorials. These are often just a short summary of our team's opinion on a given subject. diff --git a/workshops.qmd b/workshops.qmd deleted file mode 100644 index bae2f92..0000000 --- a/workshops.qmd +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: "Workshops" ---- - -In addition to the specific task-based support we offer, we can also create and run interactive workshops on data or coding topics. The specific goals of these workshops can be modified to best suit your team and meet all attendees where they are to ensure no one is left behind. While we are always happy to discuss developing new workshops we do have some materials that have already been designed (and tested by other working groups!) and are happy to offer any of these workshops to your group if it is of interest. - -## Collaborative Coding with GitHub - -### [Workshop Website](https://lter.github.io/workshop-github/) - -GitHub logo - -In synthesis science, collaboration on code products is often integral to the productivity of the group. However, learning to use the software and graphical user interfaces that support this kind of teamwork can be a significant hurdle for teams that are already experts in their subject areas. This workshop is aimed at helping participants gain an understanding of the fundamental purpose and functioning of "version control" systems--specifically [GitHub](https://github.com/)--to help teams more effectively code collaboratively. 
- -The GitHub {{< fa brands github >}} repository for the workshop can be found [here](https://github.com/lter/workshop-github). - -## Coding in the Tidyverse - -### [Workshop Website](https://lter.github.io/workshop-tidyverse/) - -Hex logo for the 'tidyverse' R package - -For teams that code using the R programming language, the most familiar tools are often part of "base R" meaning that those functions and packages come pre-loaded when R is installed. Relatively recently the Tidyverse has emerged as a comprehensive suite of packages that can complement base R or serve as an alternative for some tasks. This includes packages like `dplyr` and `tidyr` as well as the perhaps infamous pipe operator (`%>%`) among many other tools. This workshop is aimed at helping participants use the Tidyverse equivalents of fundamental data wrangling tasks that learners may be used to performing with base R. - -The GitHub {{< fa brands github >}} repository for the workshop can be found [here](https://github.com/lter/workshop-tidyverse). - -## R Shiny Apps for Sharing Science - -### [Workshop Website](https://njlyon0.github.io/asm-2022_shiny-workshop/) - -Hex logo for the 'shiny' R package - -One of our team members--Nick Lyon--created a workshop on learning to create R Shiny apps. R Shiny includes a suite of R packages (primarily `shiny`) that allow R users to create interactive apps that can be subsequently deployed to a URL. These apps are most commonly used for data visualization purposes but can also be a neat way of accomplishing other outward-facing tasks without needing to learn a new programming language. This workshop was offered at the 2022 LTER All Scientists' Meeting (ASM) and is aimed at an audience with moderate R capability but limited prior exposure to Shiny. - -The GitHub {{< fa brands github >}} repository for the workshop can be found [here](https://github.com/njlyon0/asm-2022_shiny-workshop). 
- -This workshop also includes a second GitHub {{< fa brands github >}} repository that contains several example Shiny apps. See [here](https://github.com/njlyon0/asm-2022_shiny-workshop-examples). - -## Other Training Resources - -### NCEAS Learning Hub - -### [Training Catalog](https://learning.nceas.ucsb.edu/) - -NCEAS logo - -In addition to the workshops described above, NCEAS offers a variety of other workshops and trainings that may be of interest to you or your group via the [Learning Hub](https://www.nceas.ucsb.edu/learning-hub). While these trainings can be very helpful, it is *important to note that our team may or may not be involved with teaching them.* Also, workshops we create will be hosted on this website rather than on the Learning Hub. - -### The Carpentries - -Logo for 'The Carpentries' - -[The Carpentries](https://carpentries.org/) is another great place to find workshops and tutorials on various data and programming topics. All of their materials are publicly available so even if a workshop isn't being offered, you can visit that site and review the content at your own pace! This can be a nice way of refreshing yourself on the fundamentals of something you have prior experience with or teaching yourself something totally new! For example, the Carpentries include helpful workshops on [using R for ecologists](https://datacarpentry.org/R-ecology-lesson/), [using the "shell" or command line](https://swcarpentry.github.io/shell-novice/), or [handling geospatial data in R](https://datacarpentry.org/r-intro-geospatial/). - -For the set of lessons that are most likely to be helpful to your groups, explore the [Data Carpentry](https://datacarpentry.org/lessons/) and [Software Carpentry](https://software-carpentry.org/lessons/) lesson lists.