search-doc-1732590897931.json
{"searchDocs":[{"title":"CIROH Cloud User Success Story","type":0,"sectionRef":"#","url":"/blog/August Monthly Blog Update","content":"","keywords":"","version":null},{"title":"1. ngen-datastream and NGIAB","type":1,"pageTitle":"CIROH Cloud User Success Story","url":"/blog/August Monthly Blog Update#1-ngen-datastream-and-ngiab","content":" ","version":null,"tagName":"h2"},{"title":"Overview:","type":1,"pageTitle":"CIROH Cloud User Success Story","url":"/blog/August Monthly Blog Update#overview","content":" CIROH’s cloud computing resources have allowed for the development of ngen-datastream, which automates the process of collecting and formatting input data for NextGen, orchestrating the NextGen run through NextGen In a Box (NGIAB), and handling outputs. This software allows users to run NextGen in an efficient, relatively painless, and reproducible fashion, increasing community access to the NextGen framework. ngen-datastream is already community accessible (https://github.com/CIROH-UA/ngen-datastream/tree/main) and making an impact on research. A major component of this software is the Amazon Web Services (AWS) cloud-based research datastream (https://github.com/CIROH-UA/ngen-datastream/tree/main/research_datastream). The research datastream is a CONUS-wide recurring NextGen simulation configured by the community. The terraform to build the AWS infrastructure exists in the ngen-datastream repository and current development focuses on CI/CD and enabling community contribution to the research datastream via edits to the NextGen configuration. Ultimately, these tools help distribute access throughout the community to cutting edge hydrologic research, maximizing the pace of progress of research to operations in hydrology. ","version":null,"tagName":"h3"},{"title":"Contribution to CIROH:","type":1,"pageTitle":"CIROH Cloud User Success Story","url":"/blog/August Monthly Blog Update#contribution-to-ciroh","content":" Automation: It automates the process of collecting, formatting, and validating input data for NextGen, streamlining model preparation.Flexibility: It allows users to provide their own input files to run NextGen.Scalable Infrastructure: It utilizes AWS state machine to provide access to high-performance computing (HPC) resources. ","version":null,"tagName":"h3"},{"title":"Infrastructure Utilized:","type":1,"pageTitle":"CIROH Cloud User Success Story","url":"/blog/August Monthly Blog Update#infrastructure-utilized","content":" Elastic Compute Cloud (EC2)Simple Storage Service (S3)AWS Lamda and Step Functions ","version":null,"tagName":"h3"},{"title":"2. TEEHR","type":1,"pageTitle":"CIROH Cloud User Success Story","url":"/blog/August Monthly Blog Update#2-teehr","content":" PI : Katie van WekhovenCo-PI : Matt Denno (Development Lead)Developer : Sam Lamont ","version":null,"tagName":"h2"},{"title":"Project Overview:","type":1,"pageTitle":"CIROH Cloud User Success Story","url":"/blog/August Monthly Blog Update#project-overview","content":" The goal of this project is to investigate, design, and build a prototype hydrologic model/forecast evaluation system (TEEHR) that will significantly improve our ability to evaluate continental-scale datasets and will provide a robust and consistent evaluation tool for CIROH and OWP research. 
Design priorities include easy integration into common research workflows, rapid execution of large-scale evaluations, simplified exploration of performance trends and drivers, inclusion of common and emergent evaluation methods, efficient data structures, open-source and community development, and easy extensibility. ","version":null,"tagName":"h3"},{"title":"Contribution to CIROH:","type":1,"pageTitle":"CIROH Cloud User Success Story","url":"/blog/August Monthly Blog Update#contribution-to-ciroh-1","content":" TEEHR-HUB: It is a JupyterHub environment, running the TEEHR image, with AWS services (EFS and S3) to provide a scalable platform for hydrologic research.Data Processing: TEEHR-HUB has successfully processed the AORC (v3.0 retrospective) gridded precipitation data to the MERIT basins, as well as the CONUS 40-year retrospective (v3.0 and USGS).Testbed Integration: TEEHR-HUB’s compatibility with various testbeds allows researchers to experiment with different hydrologic models and datasets.Evaluation - TEEHR is being used (or is planned for use) by several CIROH research teams to evaluate large scale model results. ","version":null,"tagName":"h3"},{"title":"Infrastructure Utilized:","type":1,"pageTitle":"CIROH Cloud User Success Story","url":"/blog/August Monthly Blog Update#infrastructure-utilized-1","content":" Elastic Kubernetes Service (EKS) (including supporting AWS services) - Scalable computing resources to host JupyterHub Dask and SparkElastic File System (EFS) - Shared data drive for cached data and shared documents (notebooks, etc.)Simple Storage Service (S3) - Bucket storage for large public and private datasets ","version":null,"tagName":"h3"},{"title":"AWRA 2024 Spring Conference","type":0,"sectionRef":"#","url":"/blog/AWRA 2024 Spring Conference","content":"AWRA 2024 Spring Conference The CIROH CyberInfrastructure team recently participated in the AWRA 2024 Spring Conference, co-hosted by the Alabama Water Institute at the University of Alabama. Themed "Water Risk and Resilience: Research and Sustainable Solutions," the conference brought together a diverse group of water professionals to exchange knowledge and explore cutting-edge research in the field. CIROH CyberInfrastructure team presented on these topics: Accelerating Community Contribution to the Next Generation Water Resources Modeling FrameworkCreating a community dataset for high-speed national water model data accessModel structure selection for the flood and drought predictions using the NextGen Framework based on the extreme event simulations CIROH team member James Halgren presented the work on "Accelerating Community Contribution to the Next Generation Water Resources Modeling Framework." The presentation focused on building and sharing a continuous research data stream using the NextGen Water Resources Modeling Framework with NextGen IN A Box (NGIAB). This project, a collaboration with Lynker members, showcases the potential for open-source tools and community-driven efforts to advance water resources modeling and research. CIROH team member Sepehr Karimi presented the work on "Creating a community dataset for high-speed national water model data access" CIROH team member Shahab Alam presented the work on "Model structure selection for the flood and drought predictions using the NextGen Framework based on the extreme event simulations" These presentations showcased CIROH's expertise in open-source tools, community-driven efforts, and water resources modeling. 
The team's contributions sparked insightful discussions and potential collaborations for future projects. Call to Action: To learn more about CIROH's work or connect with the team, visit our website at CIROH-website. Conference Website: AWRA 2024 Spring Conference website link","keywords":"","version":null},{"title":"CIROH Developers Conference 2024","type":0,"sectionRef":"#","url":"/blog/CIROH Developers Conference 2024","content":"CIROH Developers Conference 2024 The CIROH team recently participated in the 2nd Annual CIROH Developers Conference (DevCon24), held from May 29th to June 1st,2024. The conference brought together a diverse group of water professionals to exchange knowledge and explore cutting-edge research in the field of hydrological forecasting. Reflecting CIROH's current research focus, the conference explored topics including hydrological modeling (NextGen), flood inundation mapping, hydroinformatics, social science, and community engagement. Attendees got the opportunity to delve deeper into specific areas through its well-structured training track. This year the tracks were: NextGenFlood Inundation Mapping (FIM)Hydrological Applications of Machine Learning (ML)HydroinformaticsCross-cutting This year, various workshops leveraged cloud technologies. Notably, we provided access to the 2i2c JupyterHub environment, a cloud-based platform for interactive computing, for ten workshops. This facilitated seamless access to powerful computing resources for participants. Additionally, we provided AWS instances to support four workshops. Presentation Slides: You can find the presentation slides here. To learn more about CIROH's work or connect with the team, visit our website at CIROH-website. Conference Website: Learn More","keywords":"","version":null},{"title":"Monthly News Update - February 2024","type":0,"sectionRef":"#","url":"/blog/Community NextGen Updates Feb 2024","content":"Welcome to the February edition of the CIROH DocuHub blog, where we bring you the latest updates and news about the Community NextGen project and CIROH's Cloud and on-premise Infrastructure. Our team has been hard at work enhancing CIROH's Infrastructure and Community NextGen tools. Here are some highlights from February 2024: We successfully launched our new On-premises Infrastructure, which is now fully operational. You can find documentation for it here. For NGIAB, we've made improvements to the CI pipeline for pull requests submitted with forked repositories. Now, we automatically build and test these submissions using the CI pipeline. We've added documentation for the NWMURL python package, which offers utility functions tailored for accessing National Water Model (NWM) data URLs. This library streamlines the process of accessing NWM data for various purposes, including analysis, modeling, and visualization. You can explore the documentation here. We're thrilled to announce the NextGen Track for DevCon24. The schedule is now available at: DevCon24 Schedule. Thank you for your ongoing interest and support in the Community NextGen project. Stay tuned for more exciting updates and developments next month. 😊 Click Here to Visit Community NextGen and NGIAB News from Feb 2024","keywords":"","version":null},{"title":"Monthly News Update - March 2024","type":0,"sectionRef":"#","url":"/blog/Community NextGen Updates March 2024","content":"Accelerating Innovation: CIROH's March 2024 Update The CIROH team has been diligently accelerating research cyberinfrastructure capabilities this month. 
We're thrilled to share key milestones achieved in enhancing the Community NextGen project and our cloud/on-premises platforms. A significant highlight was the successful launch of our new fully operational on-premises infrastructure. Comprehensive documentation is now available here, ensuring seamless access and utilization. Additionally, we've fortified the NextGen in a Box (NGIAB) ecosystem with bug fixes, repository enhancements, and initiated work on automating the CI pipeline for the Singularity Repo Empowering our community remains a top priority. We've expanded the DocuHub knowledge base with dedicated sections on on-premises access guidelines, as well as policies and best practices for optimized infrastructure usage here . Furthermore, our team represented CIROH at the AWRA Geospatial Water Technology Conference in Orlando, sharing insights on leveraging geospatial data for water research. Refer here As we continue driving advancements, we extend our gratitude for your unwavering support of the Community NextGen project and CIROH's cyberinfrastructure endeavors. Be on the lookout for more exciting updates next month as we strive to unlock new frontiers in water science through robust computing capabilities. Click Here to Visit Community NextGen and NGIAB News from March 2024","keywords":"","version":null},{"title":"Google Cloud Next '24: A Flood of Innovation and Inspiration","type":0,"sectionRef":"#","url":"/blog/Google Cloud Next 2024","content":"Google Cloud Next '24 Hello everyone, and thanks for stopping by! I recently had the incredible opportunity to attend Google Cloud Next 2024 in person for the first time, and it was truly an amazing experience. From insightful keynote presentations and workshops to vibrant booths buzzing with connections, the event was a whirlwind of innovation and inspiration. One of the highlights was undoubtedly the abundance of AI announcements and advancements. Google continues to push the boundaries of what's possible, and it was exciting to witness the future of technology unfold. Among the many highlights, CIROH achieved a significant milestone with its first-ever session at Google Cloud Next. The presentation, titled "Channel the Flood Data Deluge: Unlocking the American National Water Model," link led by Kel Markert (Google), Dr. Dan Ames (BYU), and Michael Ames(SADA) was a resounding success. The session link shed light on the immense potential of the National Water Model and its ability to revolutionize water resource management. The conference was a truly enjoyable experience, especially collaborating with Dan, Kel, Michael and others. We had a great time together and sharing our insights. The energy and enthusiasm throughout the event were contagious, and I left feeling incredibly motivated and inspired. I connected with numerous individuals from diverse backgrounds, fostering new collaborations and sparking exciting ideas for the future of water research and technology. If you're curious to see more about my Google Cloud Next experience, head over to my LinkedIn post link where I've shared pictures from all three days. Thank you for reading and stay tuned for more updates on the exciting advancements in water research and technology! Want to delve deeper into the insights and announcements from Google Cloud Next? 
Check out these valuable resources: SADA Live: Recap Key Cloud Technology Insights from Google Cloud Next '24: linkThis LinkedIn event offers a comprehensive overview of the key takeaways and technological advancements unveiled at the conference. Day 2 Google Blog Recap: Dive into the specifics of Day 2 at Google Cloud Next with this insightful blog post, covering topics ranging from AI and data analytics to infrastructure and security. link AI Takes Center Stage: Gemini for Google Cloud: The introduction of Gemini 1.5 Pro, integrated with various Google Cloud services, promises enhanced functionality, security, and AI performance across diverse applications.AI Infrastructure Advancements: The AI Hypercomputer provides exceptional computational power for complex AI tasks, while Gemini API now offers models tailored for various scales, enriching the development environment.Vertex AI Enhancements: New tools for low-latency applications and improved Gemini integration empower developers to build more efficient and sophisticated AI-driven applications.Secure AI Framework (SAIF): Establishes rigorous security standards for AI implementations, ensuring secure and responsible AI integrations.AI Database Assistant: Leverages Gemini to simplify complex queries and deepen AI integration into database management.Google Vids: This innovative Workspace feature utilizes Gemini and Vertex AI to enhance digital storytelling and collaboration, revolutionizing workplace communication. Infrastructure and Development: Google Axion Processor: This cutting-edge processor boasts significant performance and energy efficiency improvements compared to traditional x86 instances, setting a new standard for computational efficiency.link Google Distributed Cloud (GDC) Sandbox: Enables developers to build and test services for GDC within a Google Cloud environment, simplifying the development process.link Migrate to Containers (M2C) CLI: This new tool facilitates seamless migration of applications to containers, supporting deployment on GKE or Cloud Run.link Security and Data Analytics: AI Cyber Defense Initiative: Revolutionizes cybersecurity by leveraging AI for innovative solutions against cyber threats.BigQuery as a Unified Platform: Transforms BigQuery into a comprehensive platform for managing multimodal data and executing AI tasks, seamlessly integrated with Gemini. Check out all the announcements: link","keywords":"","version":null},{"title":"CIROH Research CyberInfrastructure Update","type":0,"sectionRef":"#","url":"/blog/July Monthly IT Update","content":"","keywords":"","version":null},{"title":"Cloud Infrastructure","type":1,"pageTitle":"CIROH Research CyberInfrastructure Update","url":"/blog/July Monthly IT Update#cloud-infrastructure","content":" CIROH's Google Cloud Account is now fully operational and managed by our team. You can find more information here.We're in the process of migrating our 2i2c JupyterHub to CIROH's Google Cloud account.We've successfully deployed the Google BigQuery API (developed by BYU and Google) for NWM data in our cloud. To access this API, please contact us at [email protected]. Please refer to NWM BigQuery API to learn more. ","version":null,"tagName":"h3"},{"title":"Support and Services","type":1,"pageTitle":"CIROH Research CyberInfrastructure Update","url":"/blog/July Monthly IT Update#support-and-services","content":" Monthly AWS office hours are ongoing. 
For more details on how to join, email us at [email protected] provided IT support for the Summer Institute 2024, REU students, and team leads this summer. ","version":null,"tagName":"h3"},{"title":"Security Enhancements","type":1,"pageTitle":"CIROH Research CyberInfrastructure Update","url":"/blog/July Monthly IT Update#security-enhancements","content":" New security features have been added to our CIROH-UA GitHub repository to prevent commits containing sensitive information.We've updated our AWS best practices, particularly regarding key management. If your project uses CIROH AWS resources, please review these updates at AWS Best Practices. ","version":null,"tagName":"h3"},{"title":"Resources and Access","type":1,"pageTitle":"CIROH Research CyberInfrastructure Update","url":"/blog/July Monthly IT Update#resources-and-access","content":" For external IT resources needed for your projects, check out NSF Access Allocations here.GPU allocation is now available on CIROH's 2i2c JupyterHub. To request access, please fill out this form. For more information on our services, please refer to our services page. We're continually working to improve our IT infrastructure and support. If you have any questions or need assistance, don't hesitate to reach out to us at [email protected]. ","version":null,"tagName":"h3"},{"title":"NextGen Framework Forcings","type":0,"sectionRef":"#","url":"/blog/NextGen Forcings","content":"","keywords":"","version":null},{"title":"NextGen Framework Forcings","type":1,"pageTitle":"NextGen Framework Forcings","url":"/blog/NextGen Forcings#nextgen-framework-forcings","content":" A new forcing processor tool has been made public. This tool converts any National Water Model based forcing files into ngen forcing files. This process can be an intensive operation in compute, memory, and IO, so this tool facilitates generating ngen input and ultimately makes running ngen more accessible. ","version":null,"tagName":"h2"},{"title":"Read more","type":1,"pageTitle":"NextGen Framework Forcings","url":"/blog/NextGen Forcings#read-more","content":" Visit Github ","version":null,"tagName":"h3"},{"title":"NextGen In A Box v1.1.0 Release","type":0,"sectionRef":"#","url":"/blog/NextGen In A Box Release Notes","content":"We've release NGIAB v1.1.0! This release fixes issues: #21#67#44 More info: https://github.com/CIROH-UA/NGIAB-CloudInfra/releases/tag/v1.1.0 Visit NGIAB News","keywords":"","version":null},{"title":"NextGen Monthly News Update - December 2023","type":0,"sectionRef":"#","url":"/blog/NextGen Updates Dec 2023","content":"Happy New Year!!! We are back from SFO after attending AGU last month. We are excited to share the latest updates for NGIAB, NextGen, T-route, Hydrofabric, NextGen forcings, and Community Support from December 2023. Visit NGIAB News","keywords":"","version":null},{"title":"NextGen Monthly News Update - January 2024","type":0,"sectionRef":"#","url":"/blog/NextGen Updates Jan 2024","content":"Welcome to the January edition of the CIROH DocuHub blog, where we share the latest updates and news about the Community NextGen project monthly. NextGen is a cutting-edge hydrologic modeling framework that aims to advance the science and practice of hydrology and water resources management. In this month's blog, we will highlight some of the recent achievements and developments of the Community NextGen team. First, we are excited to announce that NextGen In A Box (NGIAB) is now available with Singularity support. 
This means that you can run NGIAB on any HPC system that does not support Docker, using Singularity containers. Singularity is a popular tool for creating and running portable and reproducible computational environments. To learn how to use NGIAB with Singularity, please visit our GitHub repository: Ngen-Singularity. Second, we have made several improvements and enhancements to NGIAB, such as updating the sample input data, upgrading the Boost library, adding auto mode run, and supporting geopackage format. You can find more details about these updates on our GitHub repository: NGIAB-CloudInfra. Third, we would like to share with you is the development of NextGen Datastream, a tool that automates the process of collecting and formatting input data for NextGen, orchestrating the NextGen run through NextGen In a Box (NGIAB), and handling outputs. The NextGen Datastream is a shell script that orchestrates each step in the process, using a configuration file that specifies the data sources, parameters, and options for the NextGen run. The NextGen Datastream can also generate its own internal configs and modify the configuration file as needed. You can find more details and instructions on how to use the NextGen Datastream on our GitHub repository: ngen-datastream. We hope you enjoyed this blog and found it informative and useful. If you have any questions, comments, or feedback, please feel free to contact us at [email protected]. Thank you for your interest and support in the Community NextGen project. Stay tuned for more exciting news and developments in the next month. 😊 Visit NGIAB News","keywords":"","version":null},{"title":"NextGen In A Box Updates","type":0,"sectionRef":"#","url":"/blog/NextGen-In-A-Box Release Notes","content":"We've introduced a fresh addition within the NGIAB, offering the most up-to-date insights on NGIAB and NextGen monthly updates. Visit NGIAB Release Notes Page","keywords":"","version":null},{"title":"NextGen Monthly News Update - November 2023","type":0,"sectionRef":"#","url":"/blog/NextGen Updates Nov 2023","content":"We are excited to share the latest updates for NGIAB, NextGen, T-route, Hydrofabric, NextGen forcings and Community Support. Visit NGIAB News","keywords":"","version":null},{"title":"CIROH Science Meeting 2024","type":0,"sectionRef":"#","url":"/blog/October Monthly Blog Update","content":"","keywords":"","version":null},{"title":"CIROH 2024 Science Meeting: Fostering Innovation in Hydrology","type":1,"pageTitle":"CIROH Science Meeting 2024","url":"/blog/October Monthly Blog Update#ciroh-2024-science-meeting-fostering-innovation-in-hydrology","content":" The third annual CIROH Science Meeting, held at The University of Alabama from October 14-17, 2024, brought together our vibrant community of researchers, practitioners, and stakeholders. This four-day event showcased the remarkable progress in our mission to advance national water prediction capabilities. 
","version":null,"tagName":"h2"},{"title":"Meeting Highlights","type":1,"pageTitle":"CIROH Science Meeting 2024","url":"/blog/October Monthly Blog Update#meeting-highlights","content":" Over 300 participants gathered to share knowledge and forge new collaborations Engaging discussions spanning critical topics in hydrology Special focus on CIROH's four key research themes: Next Generation Water Prediction Community Modeling Hydroinformatics Social Science in Water Resources ","version":null,"tagName":"h2"},{"title":"Community Engagement","type":1,"pageTitle":"CIROH Science Meeting 2024","url":"/blog/October Monthly Blog Update#community-engagement","content":" The meeting featured presentations from our consortium members, research partners, and key stakeholders including NOAA/NWS. The diverse agenda included keynotes, project presentations, roundtable discussions, poster sessions, and HIF tour that fostered meaningful discussions about the future of water prediction. Representatives from our member institutions and various partnering organizations contributed to rich dialogues about advancing the National Water Model and related technologies. The collaborative atmosphere exemplified CIROH's commitment to building a stronger hydroinformatics community. ","version":null,"tagName":"h2"},{"title":"Key Outcomes","type":1,"pageTitle":"CIROH Science Meeting 2024","url":"/blog/October Monthly Blog Update#key-outcomes","content":" Strengthened partnerships across the consortium Shared progress on ongoing research initiatives Identified new opportunities for cross-institutional collaboration Advanced discussions on improving national water prediction capabilities Enhanced understanding of community needs and priorities ","version":null,"tagName":"h2"},{"title":"Looking Forward","type":1,"pageTitle":"CIROH Science Meeting 2024","url":"/blog/October Monthly Blog Update#looking-forward","content":" The success of the 2024 Science Meeting reinforces CIROH's role in shaping the future of water prediction. The connections made and ideas shared during these four days will continue to influence our work throughout the year as we strive to improve water prediction capabilities for the nation. ","version":null,"tagName":"h2"},{"title":"Photo Gallery","type":1,"pageTitle":"CIROH Science Meeting 2024","url":"/blog/October Monthly Blog Update#photo-gallery","content":" ","version":null,"tagName":"h2"},{"title":"Conclusion","type":1,"pageTitle":"CIROH Science Meeting 2024","url":"/blog/October Monthly Blog Update#conclusion","content":" The 2024 CIROH Science Meeting demonstrated the power of collaboration in advancing hydrology. As we move forward, the energy and insights from this gathering will fuel our continued efforts to enhance water prediction capabilities and serve our communities better. ","version":null,"tagName":"h2"},{"title":"Accessing National Water Model (NWM) Data via Google Cloud BigQuery API","type":0,"sectionRef":"#","url":"/blog/September Monthly Blog Update","content":"","keywords":"","version":null},{"title":"Public-Private Partnership: Advancing Water Resource Management","type":1,"pageTitle":"Accessing National Water Model (NWM) Data via Google Cloud BigQuery API","url":"/blog/September Monthly Blog Update#public-private-partnership-advancing-water-resource-management","content":" The National Water Model (NWM) BigQuery project exemplifies a successful collaboration between public and private sectors, uniting government-generated data with modern, cutting-edge cloud technology. 
This collaboration addresses several key aspects: Improved Data Access: By leveraging Google Cloud BigQuery, a private sector platform, the project dramatically improves access to public NWM data. This partnership makes valuable water resource information more readily available to researchers, policymakers, and the public.Technological Innovation: The integration of NWM data with BigQuery showcases how private sector technology can enhance the utility of public sector data. This synergy promotes innovation in data analysis and visualization techniques.Cost-Effective Solutions: The CIROH DevOps team's commitment to covering query costs demonstrates how public funding can be strategically used to make private sector tools accessible to a wider audience, particularly in the academic and research communities.Capacity Building: This initiative helps build capacity across sectors by providing researchers and organizations with powerful tools to analyze water resource data, potentially leading to better-informed decision-making in water management.Scalability and Efficiency: By utilizing Google's cloud infrastructure, the project ensures that the growing volume of NWM data can be efficiently stored, accessed, and analyzed, addressing the scalability challenges often faced by public sector IT resources.Cross-Sector Collaboration: This project fosters collaboration between government agencies, academic institutions, and private technology companies, creating a model for future partnerships in environmental and resource management.Open Science Promotion: By making NWM data more accessible, this partnership supports the principles of open science, encouraging transparency and reproducibility in water resource research. This public-private partnership not only enhances the value of the National Water Model but also sets a precedent for future collaborations that can drive innovation in environmental data management and analysis. ","version":null,"tagName":"h2"},{"title":"Welcome DocuHub's Blog","type":0,"sectionRef":"#","url":"/blog/Welcome-DocuHub-Blog","content":"","keywords":"","version":null},{"title":"Adding posts","type":1,"pageTitle":"Welcome DocuHub's Blog","url":"/blog/Welcome-DocuHub-Blog#adding-posts","content":" ","version":null,"tagName":"h2"},{"title":"What file name to use?","type":1,"pageTitle":"Welcome DocuHub's Blog","url":"/blog/Welcome-DocuHub-Blog#what-file-name-to-use","content":" DocuHub will extract a YYYY-MM-DD date from many patterns such as YYYY-MM-DD-my-blog-post-title.md or YYYY/MM/DD/my-blog-post-title.md. This enables you to easily group blog posts by year, by month, or to use a flat structure. ","version":null,"tagName":"h3"},{"title":"Example (with Metadata/Front matter)","type":1,"pageTitle":"Welcome DocuHub's Blog","url":"/blog/Welcome-DocuHub-Blog#example-with-metadatafront-matter","content":" To publish in the blog, create a Markdown file within the blog directory. For example, create a file at /blog/2019-09-05-hello-docuhub.md e.g. --- title: Welcome DocuHub description: This is my first post on DocuHub. slug: welcome-DocuHub authors: - name: John Doe title: Co-creator of Product 1 url: <Youe github product or external article link> image_url: <Author pic url> - name: Jane Doe title: Co-creator of Product 2 url: <Youe github product or external article link> image_url: <Author pic url> tags: [hello, docuhub, nextgen] hide_table_of_contents: false --- Welcome to this blog. This blog is created with [**DocuHub 2**](https://docs.ciroh.org/). 
<!-- truncate --> This is my first post on DocuHub. A whole bunch of exploration to follow. ","version":null,"tagName":"h3"},{"title":"What is Metadata/Front Matter","type":1,"pageTitle":"Welcome DocuHub's Blog","url":"/blog/Welcome-DocuHub-Blog#what-is-metadatafront-matter","content":" The front matter is useful to add more metadata to your blog post, for example, author information, but Docusaurus will be able to infer all necessary metadata without the front matter. Front matter is provided at the very top of the file, enclosed by three dashes ---. The content is parsed as YAML. --- title: My Doc Title more_data: - Can be provided - as: objects or: arrays --- ","version":null,"tagName":"h2"},{"title":"BYU Hydroinformatics Courses: Enhance Your Skills in Water Resources Management","type":0,"sectionRef":"#","url":"/docs/education/BYUHydroinformaticsLab/","content":"BYU Hydroinformatics Courses: Enhance Your Skills in Water Resources Management Online courses powered by Open edX Available at : https://byu-hydroinformatics.edunext.io/ Advance your knowledge and skills in water resources management with online courses from Brigham Young University's Hydroinformatics Program. These courses are designed for professionals, students, and anyone interested in deepening their understanding of water-related challenges and solutions. Courses cover a wide range of topics, including: Hydrologic modelingWater quality analysisWater resources planning and managementGIS and remote sensing for water resourcesData science for water resources Whether you are new to the field or looking to enhance your existing knowledge, our online courses offer a flexible and convenient way to learn from experts in the field. Benefits of Taking BYU Hydroinformatics Courses: Learn from experienced faculty and researchers Access high-quality course materials and resources Engage with a global community of learners Earn certificates of completion to demonstrate your skills Get Started Today: Visit website at https://byu-hydroinformatics.edunext.io/ to explore our course offerings and enroll in the courses that align with your interests and career goals. 
Note: These courses are offered through Open edX, a non-profit organization dedicated to providing open and affordable online education.","keywords":"","version":"Next"},{"title":"Education and Training","type":0,"sectionRef":"#","url":"/docs/education/","content":"","keywords":"","version":"Next"},{"title":"Featured Education Resources","type":1,"pageTitle":"Education and Training","url":"/docs/education/#featured-education-resources","content":" NextGen FrameworkHydrofabricBYU HydroInformatics Lab ","version":"Next","tagName":"h2"},{"title":"Hydrofabric: An Introduction","type":0,"sectionRef":"#","url":"/docs/education/hydrofabric/","content":"","keywords":"","version":"Next"},{"title":"Getting Started with Hydrofabric","type":1,"pageTitle":"Hydrofabric: An Introduction","url":"/docs/education/hydrofabric/#getting-started-with-hydrofabric","content":" To get started with Hydrofabric, you can follow these steps: Install hydrofabric from Github using remotes:remotes::install_github("NOAA-OWP/hydrofabric") Attach the package into your R script:library(hydrofabric) Explore the Hydrofabric documentation to learn more about its features and how to use them: Hydrofabric Documentation ","version":"Next","tagName":"h2"},{"title":"Hydrofabric Resources","type":1,"pageTitle":"Hydrofabric: An Introduction","url":"/docs/education/hydrofabric/#hydrofabric-resources","content":" Hydrofabric Code RepositoryHydrofabric Data ","version":"Next","tagName":"h2"},{"title":"HydroLearn: An Introduction","type":0,"sectionRef":"#","url":"/docs/education/Hydrolearn/","content":"","keywords":"","version":"Next"},{"title":"HydroLearn Resources","type":1,"pageTitle":"HydroLearn: An Introduction","url":"/docs/education/Hydrolearn/#hydrolearn-resources","content":" HydroLearn Official WebsiteHydroLearn CoursesMore information on HydroLearn 101 ","version":"Next","tagName":"h2"},{"title":"NextGen Framework: Building Novel and Mimic Model Formulations","type":0,"sectionRef":"#","url":"/docs/education/NextGenFramework/nextgen","content":"","keywords":"","version":"Next"},{"title":"Speaker: Fred L. Ogden, Ph.D., P.E., Chief Scientist (ST), NOAA/NWS Office of Water Prediction","type":1,"pageTitle":"NextGen Framework: Building Novel and Mimic Model Formulations","url":"/docs/education/NextGenFramework/nextgen#speaker--fred-l-ogden-phd-pe-chief-scientist-st-noaanws-office-of-water-prediction","content":" The Basic Model Interface (BMI) standard employed in the Next Generation Water Resources Modeling (NextGen) framework enables the construction of both mimic and novel model formulations. This involves the ordered execution of modules designed to simulate individual processes or fluxes and the integration of these over time on various control volumes to simulate the temporal evolution of model states. These are referred to as "multi-BMI" formulations. Currently, the order of execution of modules within a multi-BMI model formulation is determined by the ordering of modules in the run realization file. This presentation showcases model/module developments undertaken to date in this regard and discusses knowledge gaps and needs. Additionally, it proposes potential coding standards for modules that allow for the monitoring of conservation law enforcement at the framework level for different model formulations using various internal discretizations. 
","version":"Next","tagName":"h2"},{"title":"Building Novel & Mimic Model Formulations Using NextGen Framework:","type":1,"pageTitle":"NextGen Framework: Building Novel and Mimic Model Formulations","url":"/docs/education/NextGenFramework/nextgen#building-novel--mimic-model-formulations-using-nextgen-framework","content":" ","version":"Next","tagName":"h2"},{"title":"What is National Water Model (NWM)?","type":0,"sectionRef":"#","url":"/docs/education/NextGenFramework/","content":"","keywords":"","version":"Next"},{"title":"Features - NWM3.0","type":1,"pageTitle":"What is National Water Model (NWM)?","url":"/docs/education/NextGenFramework/#features---nwm30","content":" First time provision of NWM Total Water Level guidance for coastal areas of the Continental United States (CONUS), Hawaii and Puerto Rico / U.S. Virgin Island domains. This is accomplished via use of the Semi-implicit Cross-scale Hydroscience Integrated System Model (SCHISM) integrated within the NWM, to couple NWM freshwater discharge estimates with oceanic forcing from the Surge and Tide Operational Forecast System (STOFS) and Probabilistic Tropical Storm Surge (P-SURGE) model. Output will be provided in both NetCDF as well as Standard Hydrometeorological Exchange Format (SHEF) format. Each NetCDF file contains full TWL domain output for one output time step, while each SHEF file contains timeseries station output for the full length of each simulation. NWM Domain expansion to south-central Alaska (Cook Inlet, Copper River Basin, and Prince William Sound regions), enabling provision of NWM operational hydrologic model forecast guidance to this region. Addition of the National Blend of Models (NBM) as a forcing source for NWM CONUS medium-range forecasts and Alaska short-range and medium-range forecasts. Use of Multi-Radar Multi-Sensor (MRMS) precipitation as forcing for the NWM Analysis and Assimilation configuration over the Puerto Rico / U.S. Virgin Island domain. Ingest of RFC-supplied reservoir outflow forecasts at 77 additional locations, bringing the total of such sites to 392. Enhancements to the treatment of reservoirs, land surface parameters and calibration/regionalization approach leading to improvements in model skill. ","version":"Next","tagName":"h2"},{"title":"Features - NWM2.1","type":1,"pageTitle":"What is National Water Model (NWM)?","url":"/docs/education/NextGenFramework/#features---nwm21","content":" The NWM currently runs in four configurations: Analysis and assimilation provides a snapshotof current hydrologic conditionsShort-Range produces hourly forecasts of streamflow and hydrologic states out to 15 hoursMedium-Range produces 3-hourly forecasts out to 10 daysLong-Range generates 30-day ensemble forecasts. Source : https://water.noaa.gov/about/nwm ","version":"Next","tagName":"h2"},{"title":"Conference Material","type":0,"sectionRef":"#","url":"/docs/education/Conferences/","content":"","keywords":"","version":"Next"},{"title":"Slides and recordings from the various conferences can be found in the below section.","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#slides-and-recordings-from-the-various-conferences-can-be-found-in-the-below-section","content":" ","version":"Next","tagName":"h2"},{"title":"CIROH Developers Conference - May 2024","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#ciroh-developers-conference---may-2024","content":" Presenter: James Halgren, Dr. 
Jeffrey Carver, Arpita Patel, Dan Ames, Zach Wills, Matt Denno, Yusuf Sermet, Mohamed Abdelkader, Ryan Johnson, Jeff Horsburgh CIROH CyberInfrastructure And Case Studies Presenter: Arpita Patel, Zach Wills, Giovanni Romero NGIAB Workshop: Running NextGen In a Box Presenter: Jordan Laser, Josh Cunningham NextGen Simulation Development Tools Presenter: Kel Markert Slides: Google AI Modeling Workshop - Vertex AIVideo Link: Google AI Modeling Workshop - Vertex AI Presenter: Patrick Clemins Video Link: Data Workflows 101: Acquisition, Manipulation, and Visualization Presenter: Kyle Klenk, Raymond Spiteri Using Actors for Parallelization in Hydrology Presenter: Scott Hendrickson, Savalan Naser Neisary, Eric Christensen, Arpita Patel Working with National Water Model data in Amazon SageMaker Presenter: Keith Jennings, Nels Frazier Get your model ready for NextGen with BMI ","version":"Next","tagName":"h3"},{"title":"AWRA 2024 Spring - Tuscaloosa, AL (April 2024)","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#awra-2024-spring---tuscaloosa-al-april-2024","content":" Presenter: Shahab Alam Model structure selection for the flood and drought predictions using the NextGen Framework based on the extreme event simulations) Presenter: James Halgren Presenter: Sepehr Karimi ","version":"Next","tagName":"h3"},{"title":"AWRA 2024 Geospatial Water Technology Conference - Orlando, FL (Mar 25-27)","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#awra-2024-geospatial-water-technology-conference----orlando-fl-mar-25-27","content":" Presenter: James HalgrenData to Decisions: Managing and Modeling Water Challenges ","version":"Next","tagName":"h3"},{"title":"UF Water Institute Symposium - Feb 2024","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#uf-water-institute-symposium---feb-2024","content":" Presenter: Josh Cunningham Community Contribution to NWM - Poster Presenter: Sepehr Karimi Creating Community Dataset for High-Speed National Water Model Data Access - Poster ","version":"Next","tagName":"h3"},{"title":"AMS 2024 - Jan 2024","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#ams-2024---jan-2024","content":" Presenter: Sepehr KarimiAbstractTitle : Creating a Community Dataset for High-Speed National Water Model Data AccessAMS24 Slides ","version":"Next","tagName":"h3"},{"title":"AGU 2023 - Dec 2023","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#agu-2023---dec-2023","content":" Presenter: Arpita PatelAbstractTitle : H34D-02 NextGen In A Box: Advancing Collaborative Modeling for Enhanced Water Resource ManagementAGU23 NextGen In A Box Slides ","version":"Next","tagName":"h3"},{"title":"Community NextGen Advancement Lightning Talk - Nov 2023","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#community-nextgen-advancement-lightning-talk---nov-2023","content":" Presenter: Arpita PatelCommunity NextGen and Infrastructure Enhancements SlidesYouTube Recording Details: Video Link: CIROH YouTubeTimestamp: 52:56 ","version":"Next","tagName":"h3"},{"title":"CIROH Science Meeting - October 2023","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#ciroh-science-meeting---october-2023","content":" Presenter: Arpita PatelNextGen In A Box Poster - Poster ","version":"Next","tagName":"h3"},{"title":"Summer Institute - June 2023","type":1,"pageTitle":"Conference 
Material","url":"/docs/education/Conferences/#summer-institute---june-2023","content":" Presenter: Arpita PatelNextGen In A Box Slides ","version":"Next","tagName":"h3"},{"title":"CIROHDevCon1 - May 2023","type":1,"pageTitle":"Conference Material","url":"/docs/education/Conferences/#cirohdevcon1---may-2023","content":" Presenter: Arpita PatelNextGen In A Box Slides ","version":"Next","tagName":"h3"},{"title":"ngen Model Framework","type":0,"sectionRef":"#","url":"/docs/education/NextGenFramework/ngen","content":"","keywords":"","version":"Next"},{"title":"Key Features of ngen:","type":1,"pageTitle":"ngen Model Framework","url":"/docs/education/NextGenFramework/ngen#key-features-of-ngen","content":" Model Agnostic: ngen is not a specific model but rather a framework for building and integrating different models.Data-Centric: ngen emphasizes a data-centric approach, ensuring that data is central to the modeling process.Flexible and Standardized: ngen provides a standardized approach to model creation and integration, allowing for greater flexibility and ease of use. ","version":"Next","tagName":"h2"},{"title":"Resources:","type":1,"pageTitle":"ngen Model Framework","url":"/docs/education/NextGenFramework/ngen#resources","content":" Public GitHub Repository: NOAA-OWP/ngenDocumentation: ngen documentation ","version":"Next","tagName":"h2"},{"title":"t-route: Tree-Based Channel Routing","type":0,"sectionRef":"#","url":"/docs/education/NextGenFramework/troute","content":"","keywords":"","version":"Next"},{"title":"Key Features of t-route:","type":1,"pageTitle":"t-route: Tree-Based Channel Routing","url":"/docs/education/NextGenFramework/troute#key-features-of-t-route","content":" Tree-Based Approach: t-route uses a tree-based algorithm to identify and connect channels, ensuring hydrologically consistent channel networks.High-Resolution DEMs: t-route can generate channel networks from high-resolution DEMs, capturing detailed channel morphology.Flexible and Extensible: t-route provides a flexible framework for customizing the channel generation process and extending its capabilities. ","version":"Next","tagName":"h2"},{"title":"Resources:","type":1,"pageTitle":"t-route: Tree-Based Channel Routing","url":"/docs/education/NextGenFramework/troute#resources","content":" Public GitHub Repository: NOAA-OWP/t-route ","version":"Next","tagName":"h2"},{"title":"Data, Code Sharing and Infrastructure Policies","type":0,"sectionRef":"#","url":"/docs/policies/intro","content":"Data, Code Sharing and Infrastructure Policies In the following sections, we provide some practical guidance for CIROH researchers designed to help them meet the terms and conditions of CIROH’s Data, Code, and Infrastructure Policy, as discussed above, for different types of research products. Each section is focused on providing guidlines and recommendations for data, code, and infrastructure use.","keywords":"","version":"Next"},{"title":"Data and Code Sharing","type":0,"sectionRef":"#","url":"/docs/policies/DataAndCodeSharingPolicy/","content":"Data and Code Sharing The Cooperative Institute for Research to Operations in Hydrology (CIROH) is committed to an open data policy that will maximize the impact and broad use of data and research products produced by CIROH projects and will also ensure that Federal data sharing requirements are met. This policy document is intended to assist CIROH investigators in creating and sharing high-quality data and research products. 
We begin with guiding principles, after which the specific policy and recommendations are stated. This document also provides guidance and instructions that may be useful to CIROH investigators in meeting the terms of this policy. Finally, we also include an appendix with further details about the specific data sharing requirements of CIROH’s partner agencies. We consider this policy to be a living document that will be revised as the needs of CIROH investigators and CIROH partner agencies evolve. 📄️ Policy and Guidance Policy and Guidance for Data and Code Sharing Policy 📄️ Recommendations Recommendations for Data and Code Sharing Policy","keywords":"","version":"Next"},{"title":"Recommendations for Data and Code Sharing Policy","type":0,"sectionRef":"#","url":"/docs/policies/DataAndCodeSharingPolicy/Recommendations","content":"","keywords":"","version":"Next"},{"title":"Recommendations for Sharing Data","type":1,"pageTitle":"Recommendations for Data and Code Sharing Policy","url":"/docs/policies/DataAndCodeSharingPolicy/Recommendations#recommendations-for-sharing-data","content":" Depending on the type and size of data you are producing and using, we recommend the following options for data archiving and sharing data: HydroShare (www.hydroshare.org) When to use: Use for datasets under 1GB (increases are possible)Datasets that require spatial data services (THREDDS, WMS, etc)Datasets that need to be accessed from applications through APIsDatasets that are linked to other datasetsDatasets that require formal publishing with a DOIFor links and pointers to external datasetsConsider using a Creative Commons License for releasing data When not to use: Very large datasetsRapidly changing datasetsData with extensive sharing and license restrictions Cost of use: Free for researchers up to 20 GB per userFree for permanently published data Where to go for help: HydroShare HelpEmail [email protected] to reach the CUAHSI HydroShare teamHydroShare short videos on CUAHSI YouTube channel CIROH Cloud Amazon S3 storage via CIROH’s AWS account and Google Buckets, Azure, On-Premise When to use: Use for large datasetsData that is part of the NWM workflows (https://console.cloud.google.com/storage/browser/national-water-model)Cloud computing linked dataConsider linking to cloud share from HydroShare for discoverability When not to use: Smaller datasets you want to formally publish with a DOI (May complicate formal publication with DOI) Cost of use: Some uses may be covered by CIROH core funds (contact CIROH Cloud Team to start a request)Extensive uses may be charged to individual CIROH projects How to get access to CIROH AWS: More information on obtaining accesss to CIROH's AWS resources are available at this link: https://docs.ciroh.org/docs/services/cloudservices/aws/ Where to go for help: Email [email protected] UA CIROH Cloud TeamCIROH Cloud Slack Channel - #ciroh-ua-it-adminAWS support Slack Channel - #aws-ciroh-support Water Prediction Node (https://waternode.ciroh.org/) Who to contact for help: [email protected] ","version":"Next","tagName":"h2"},{"title":"Recommendations for Sharing Code","type":1,"pageTitle":"Recommendations for Data and Code Sharing Policy","url":"/docs/policies/DataAndCodeSharingPolicy/Recommendations#recommendations-for-sharing-code","content":" GitHub (www.github.com) When to use: Generally post your code on your institution’s GitHub organization - some projects may be appropriately hosted on CIROH organizationWe suggest forking the CIROH template for structured readme files, 
etc.Example: https://github.com/NOAA-OWP/owp-open-source-project-templateWhen multiple developers are actively developing software or other productsConsider using the three clause BSD3 or MIT licenseConsider linking to Zenodo to snapshot and get a DOI for your code When not to use: Not recommended for proprietary code (although private repositories are available in GitHub at cost) Cost of use: Free for public repositoriesFree for private repositories (limited functionality - e.g. no GitHub Actions/Runners, Wiki and other features) Where to go for help: GitHub discussion forumsCIROH Slack channels e.g., #ciroh-hydroinformatics-working-group Jupyter notebooks in HydroShare When to use: Sharing code as Jupyter notebooks that you want to be launchable into a computational environment like CIROH JupyterHubWhen you want your code to accompany data in one citable resource for reproducibility purposes When not to use: When code is rapidly changingWhen you want your code to be under formal version control Cost of use: Free for researchers to store up to 20 GB of content in HydroShareCUAHSI JupyterHub is free to useCIROH 2i2c JupyterHub is free to use (cost covered by CIROH core funds) Where to go for help: Email [email protected] for help with sharing notebooks in HydroShare and/or launching notebooks into the CUAHSI JupyterHub instanceHow to get access to CIROH 2i2c: https://docs.ciroh.org/docs/services/cloudservices/google/How to get access to 2i2c using Hydroshare: https://docs.ciroh.org/docs/services/cloudservices/google/hydroshareintegration ","version":"Next","tagName":"h2"},{"title":"Recommendations for Sharing Models","type":1,"pageTitle":"Recommendations for Data and Code Sharing Policy","url":"/docs/policies/DataAndCodeSharingPolicy/Recommendations#recommendations-for-sharing-models","content":" Model sharing can be viewed as “code sharing” or “data sharing,” and many of the suggested methods above can be adopted for model sharing. Consider the following options for sharing models: GitHub - Supports sharing of model source codesHydroShare - Supports sharing of model programs and models instancesNextGen In A Box (NGIAB) - Use cloud computing or local machine to modify and execute NextGen based models in a docker containerCIROH Web Sites - Downloadable executables, model instances, installers, etc can be shared on the CIROH portal web site. 
","version":"Next","tagName":"h2"},{"title":"Recommendations for Sharing Workflows","type":1,"pageTitle":"Recommendations for Data and Code Sharing Policy","url":"/docs/policies/DataAndCodeSharingPolicy/Recommendations#recommendations-for-sharing-workflows","content":" GitHub gists (e.g., to show how to use certain modules)Post materials on the CIROH DocuHubJupyterNotebooks in HydroShare - launch into CIROH JupyterHub environment or CUAHSI JupyterHub ","version":"Next","tagName":"h2"},{"title":"Recommendations for Sharing Published Manuscripts","type":1,"pageTitle":"Recommendations for Data and Code Sharing Policy","url":"/docs/policies/DataAndCodeSharingPolicy/Recommendations#recommendations-for-sharing-published-manuscripts","content":" GitHub (see https://github.com/NOAA-OWP/OWP-Presentations)Per NOAA - Don’t share preprints prior to peer review ","version":"Next","tagName":"h2"},{"title":"Recommendations for Sharing Educational Materials","type":1,"pageTitle":"Recommendations for Data and Code Sharing Policy","url":"/docs/policies/DataAndCodeSharingPolicy/Recommendations#recommendations-for-sharing-educational-materials","content":" HydroLearn - We recommend using www.HydroLearn.org which allows for and supports learning module sharing and dissemination of educational materials. Hydrolearn modules be linked from the CIROH docuhub. Supplementary materials, tutorials, and guideline materials may also be posted directly on the CIROH educational portal at https://docs.ciroh.org/docs/education/. When to use: Broadly applicable learning modules related to hydrology and NWM When not to use: Highly specific, localized, tailored learning materials for your specific university or departmental coursesMaterial that requires specific and inaccessible data, software, etc. Cost to use: Free for open access learning modules Where to go for help: HydroLearn Contact Us ","version":"Next","tagName":"h2"},{"title":"Policy and Guidance for Data and Code Sharing Policy","type":0,"sectionRef":"#","url":"/docs/policies/DataAndCodeSharingPolicy/Policies","content":"","keywords":"","version":"Next"},{"title":"Guiding Principles","type":1,"pageTitle":"Policy and Guidance for Data and Code Sharing Policy","url":"/docs/policies/DataAndCodeSharingPolicy/Policies#guiding-principles","content":" We provide the following principles that guide CIROH’s activities and associated data sharing: Science is reproducible.Reproducibility of scientific work is enabled through openness.Open science is enabled through open access to data, source code, accessible computational resources, and sufficient metadata for interpretation/use.Products of CIROH research are produced at public expense and should be broadly accessible to the public. ","version":"Next","tagName":"h2"},{"title":"Policy Statement","type":1,"pageTitle":"Policy and Guidance for Data and Code Sharing Policy","url":"/docs/policies/DataAndCodeSharingPolicy/Policies#policy-statement","content":" CIROH follows NOAA’s Data Sharing Directive, which is included in the Terms and Conditions of CIROH’s Cooperative Agreement with NOAA and is also available here (Version 3.0 at the time of this writing). CIROH is responsible for implementing these conditions and ensuring that they are also met by CIROH sub-recipients and subcontractors. The Data Management Plan submitted with the original CIROH proposal is included as an Appendix to this document. 
The specific wording included in the CIROH Cooperative Agreement is as follows: Data Sharing: Environmental data collected or created under this Grant, Cooperative Agreement, or Contract must be made publicly visible and accessible in a timely manner, free of charge or at minimal cost that is no more than the cost of distribution to the user, except where limited by law, regulation, policy, or national security requirements. Data are to be made available in a form that would permit further analysis or reuse: data must be encoded in a machine-readable format, preferably using existing open format standards; data must be sufficiently documented, preferably using open metadata standards, to enable users to independently read and understand the data. The location (internet address) of the data should be included in the final report. Pursuant to NOAA Information Quality Guidelines, data should undergo quality control (QC), and a description of the QC process and results should be referenced in the metadata.Timeliness: Data accessibility must occur no later than publication of a peer-reviewed article based on the data, or two years after the data are collected and verified, or two years after the original end date of the grant (not including any extensions or follow-on funding), whichever is soonest unless a delay has been authorized by the NOAA funding program.Disclaimer: Data produced under this award and made available to the public must be accompanied by the following statement: "These data and related items of information have not been formally disseminated by NOAA, and do not represent any agency determination, view, or policy."Failure to Share Data: Failing or delaying to make environmental data accessible in accordance with the submitted Data Management Plan, unless authorized by the NOAA Program, may lead to enforcement actions and will be considered by NOAA when making future award decisions. Funding recipients are responsible for ensuring these conditions are also met by sub-recipients and subcontractors.Funding acknowledgment: Federal funding sources shall be identified in all scholarly publications. An Acknowledgements section shall be included in the body of the publication stating the relevant Grant Programs and Award Numbers. In addition, funding sources shall be reported during the publication submission process using the FundRef mechanism (http://www.crossref.org/fundref/) if supported by the Publisher.Manuscript submission: The final pre-publication manuscripts of scholarly publications produced with NOAA funding shall be submitted to the NOAA Institutional Repository at http://library.noaa.gov/repository after acceptance and no later than upon publication of the paper by a journal. NOAA will produce a publicly-visible catalog entry directing users to the published version of the article. After an embargo period of one year after publication, NOAA shall make the manuscript itself publicly visible, free of charge, while continuing to direct users to the published version of record.Data Citation: Publications based on data, and new products derived from source data, must cite the data used according to the conventions of the Publisher, using unambiguous labels such as Digital Object Identifiers (DOIs). All data and derived products that are used to support the conclusions of a peer-reviewed publication must be made available in a form that permits verification and reproducibility of the results. 
","version":"Next","tagName":"h2"},{"title":"Important Definitions","type":1,"pageTitle":"Policy and Guidance for Data and Code Sharing Policy","url":"/docs/policies/DataAndCodeSharingPolicy/Policies#important-definitions","content":" There are several definitions in NOAA’s Data and Publication Sharing Directive that we provide here for interpretation of the above text. For the full list and for the exact statement of these definitions, refer to the full text of NOAA’s Data Sharing Directive (Version 3.0) at the link in the section above. Research Results: Defined as environmental data and peer-reviewed publications under NOAA’s Data Sharing Directive.Environmental Data: Defined by NOAA Administrative Order (NAO) 212-15 as: Recorded and derived observations and measurements of: Physical, chemical, biological, geological, and geophysical properties and conditions of: Oceans, atmosphere, space environment, sun, and solid earth.Correlative data such as socio-economic data, related documentation, and metadata. Includes digital audio or video recordings of environmental phenomena and numerical model outputs used to support peer-reviewed publications.Data collected in a laboratory or other controlled environment, including measurements of animals and chemical processes. Data Sharing Directive: Defines "data" specifically as environmental data.Sharing Data: Making data publicly visible and accessible in a timely manner at no cost or minimal cost, in a machine-readable format based on open standards, along with necessary metadata.Timeliness: Data accessibility must occur no later than publication of a peer-reviewed article based on the data or within two years of data collection or grant end date, whichever is soonest, unless authorized delay by NOAA.Applicability: Applies to new data created by extramural funding recipients; internally produced NOAA data or collaborative research data are subject to the NOAA Data Access Directive.Exclusions: Laboratory notebooks, preliminary analyses, drafts of scientific papers, plans for future research, peer review reports, communications with colleagues, or physical objects are not covered under NOAA’s Data Sharing Directive. ","version":"Next","tagName":"h2"},{"title":"Technical guidance for the inclusion of models/modules in the NextGen Water Resources Modeling Framework","type":0,"sectionRef":"#","url":"/docs/policies/NextGen/","content":"","keywords":"","version":"Next"},{"title":"Introduction","type":1,"pageTitle":"Technical guidance for the inclusion of models/modules in the NextGen Water Resources Modeling Framework","url":"/docs/policies/NextGen/#introduction","content":" This document provides technical guidance for including models and modules in the Next Generation Water Resources Modeling Framework (NextGen). It covers essential aspects of model integration, best practices, and framework requirements. 
","version":"Next","tagName":"h2"},{"title":"Full Document","type":1,"pageTitle":"Technical guidance for the inclusion of models/modules in the NextGen Water Resources Modeling Framework","url":"/docs/policies/NextGen/#full-document","content":" For the complete technical guidance, please refer to the PDF document below: Download Technical Guidance PDF ","version":"Next","tagName":"h2"},{"title":"Key Points","type":1,"pageTitle":"Technical guidance for the inclusion of models/modules in the NextGen Water Resources Modeling Framework","url":"/docs/policies/NextGen/#key-points","content":" NextGen is a model-agnostic, standards-based framework for water resources modelingIt allows for flexible experimentation with hydrologic cycle representationsThe framework supports explicit coupling of models through sharing of computed states and fluxesDesign requirements include maximum flexibility, open-source development, and multi-language support ","version":"Next","tagName":"h2"},{"title":"Community Hydrologic Modeling Framework","type":0,"sectionRef":"#","url":"/docs/products/Community Hydrologic Modeling Framework/","content":"Community Hydrologic Modeling Framework Community NextGen is a community-accessible verison of NextGen Water Resources Modeling Framework (NextGen). It simplifies deployment with a one-click containerization. NGIAB offers both Docker-based cloud infrastructure (NGIAB-CloudInfra) and Singularity-based high-performance computing (NGIAB-HPCInfra). NGIAB includes essential tools such as Data Preprocess, which simplifies data preparation with its interactive map, and Data Visualizer, which provides geospatial and time series visualization. With NextGen Datastream you can build and validate NextGen input packages, execute NextGen through NGIAB and version the entire run for reproducibility. 📄️ NGIAB Data Preprocess NextGen Data Preprocess 🗃️ NGIAB-CloudInfra 2 items 📄️ NGIAB-HPCInfra NextGen In a Box (NGIAB) 🗃️ NextGen Datastream 1 item 📄️ Community NextGen Office Hours Community NextGen Office Hours","keywords":"","version":"Next"},{"title":"NextGen Datastream","type":0,"sectionRef":"#","url":"/docs/products/Community Hydrologic Modeling Framework/nextgenDatastream/","content":"NextGen Datastream info The NextGen DataStream forcing files and associated metadata are available through our AWS S3 Explorer. You can browse and access these regularly updated resources at datastream.ciroh.org/index.html. In addition to NextGen forcings, daily NextGen simulation outputs via Datastream will soon be available.","keywords":"","version":"Next"},{"title":"Forcing Processor","type":0,"sectionRef":"#","url":"/docs/products/Community Hydrologic Modeling Framework/nextgenDatastream/forcingprocessor/","content":"Forcing Processor","keywords":"","version":"Next"},{"title":"NGIAB End-to-End Setup Guide","type":0,"sectionRef":"#","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow","content":"","keywords":"","version":"Next"},{"title":"Overview","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#overview","content":" This guide provides step-by-step instructions for setting up and running the complete NGIAB workflow, including data preprocessor, model execution, TEEHR evaluation and Tethys visualization. 
","version":"Next","tagName":"h2"},{"title":"Prerequisites","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#prerequisites","content":" Python 3.xGitDockerTerminal access ","version":"Next","tagName":"h2"},{"title":"Video Tutorial","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#video-tutorial","content":" ","version":"Next","tagName":"h2"},{"title":"Quick Start Guide (5 minutes)","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#quick-start-guide-5-minutes","content":" You can watch our step-by-step video guide here: ","version":"Next","tagName":"h3"},{"title":"Video Contents Timeline","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#video-contents-timeline","content":" (System: Using Mac M1 Max Laptop) 0:00 - Introduction0:10 - Data Preprocessing Setup2:32 - NGIAB Installation3:05 - Running TEEHR Evaluation4:30 - Visualization using Tethys Portal5:05 - Verify Output Plot and Conclusion ","version":"Next","tagName":"h3"},{"title":"Step 1: Data Preprocessing","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#step-1-data-preprocessing","content":" ","version":"Next","tagName":"h2"},{"title":"Clone and Configure Data Preprocess","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#clone-and-configure-data-preprocess","content":" # Clone the repository git clone https://github.com/CIROH-UA/NGIAB_data_preprocess # Navigate to project directory cd NGIAB_data_preprocess # Create and activate virtual environment python3 -m venv .venv source .venv/bin/activate # Install required package pip install 'ngiab_data_preprocess' ","version":"Next","tagName":"h3"},{"title":"Run Data Preprocess","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#run-data-preprocess","content":" # Execute preprocess for sample catchment python -m ngiab_data_cli -i cat-2863657 --subset --start 2010-01-01 --end 2010-01-02 --forcings --realization ","version":"Next","tagName":"h3"},{"title":"Parameters Explained","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#parameters-explained","content":" Parameter\tDescription-i cat-2863657\tSpecifies the catchment ID --subset\tCreates a subset of the data --start\tStart date for the analysis --end\tEnd date for the analysis --forcings\tIncludes forcing data --realization\tGenerates realization data ","version":"Next","tagName":"h3"},{"title":"Step 2: NGIAB Setup and Execution","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#step-2-ngiab-setup-and-execution","content":" ","version":"Next","tagName":"h2"},{"title":"Clone and Run NGIAB","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#clone-and-run-ngiab","content":" # Return to home directory cd .. 
# Clone NGIAB repository git clone https://github.com/CIROH-UA/NGIAB-CloudInfra.git # Navigate to NGIAB directory cd NGIAB-CloudInfra # Execute the setup and run script ./guide.sh ","version":"Next","tagName":"h3"},{"title":"Automated Process","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#automated-process","content":" The guide.sh script automatically: Sets up the NGIAB environmentLaunches the NextGen modelInitializes and runs TEEHR analysis toolsStarts the visualization interface ","version":"Next","tagName":"h3"},{"title":"Accessing the Tools","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#accessing-the-tools","content":" After successful execution, you can access: Visualizer: http://localhost/apps/ngiab ","version":"Next","tagName":"h2"},{"title":"Troubleshooting","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#troubleshooting","content":" Common issues and solutions: Docker Issues Ensure Docker is runningVerify Docker configurationCheck Docker permissions Virtual Environment Confirm activation statusCheck Python version compatibility Installation Problems Verify prerequisitesCheck system requirementsReview error logs ","version":"Next","tagName":"h2"},{"title":"Additional Resources","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#additional-resources","content":" ","version":"Next","tagName":"h2"},{"title":"Documentation","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#documentation","content":" NGIAB Data PreprocessorNGIAB Cloud InfrastructureTEEHR DocumentationTethys Documentation ","version":"Next","tagName":"h3"},{"title":"Support","type":1,"pageTitle":"NGIAB End-to-End Setup Guide","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/workflow#support","content":" For technical support: Create an issue in the respective GitHub repositoryContact CIROH support teamJoin community discussions on CIROH Slack ","version":"Next","tagName":"h3"},{"title":"NGIAB-CloudInfra","type":0,"sectionRef":"#","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/","content":"NGIAB-CloudInfra","keywords":"","version":"Next"},{"title":"NGIAB-HPCInfra","type":0,"sectionRef":"#","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxSingularity/","content":"NGIAB-HPCInfra","keywords":"","version":"Next"},{"title":"Monthly Community NextGen Office Hours","type":0,"sectionRef":"#","url":"/docs/products/Community Hydrologic Modeling Framework/ngiabOfficeHours","content":"","keywords":"","version":"Next"},{"title":"About Community NextGen","type":1,"pageTitle":"Monthly Community NextGen Office Hours","url":"/docs/products/Community Hydrologic Modeling Framework/ngiabOfficeHours#about-community-nextgen","content":" Community NextGen is part of our Community Hydrologic Modeling Framework, designed to advance collaborative efforts in hydrologic science and modeling. These office hours provide a platform for discussion, problem-solving, and knowledge sharing among consortium members. 
","version":"Next","tagName":"h2"},{"title":"How to Join","type":1,"pageTitle":"Monthly Community NextGen Office Hours","url":"/docs/products/Community Hydrologic Modeling Framework/ngiabOfficeHours#how-to-join","content":" If you’re interested in participating in the Community NextGen Office Hours, please send us an email expressing your interest at [email protected]. We’ll provide you further details. ","version":"Next","tagName":"h2"},{"title":"NGIAB Data Preprocess","type":0,"sectionRef":"#","url":"/docs/products/Community Hydrologic Modeling Framework/ngiabpreprocessor/","content":"NGIAB Data Preprocess","keywords":"","version":"Next"},{"title":"Data Management and Access Tools","type":0,"sectionRef":"#","url":"/docs/products/Data Management and Access Tools/","content":"Data Management and Access Tools 🗃️ Data Access 1 item 📄️ Water Prediction Node Water Node Website 📄️ HydroServer HydroServer Portal 🗃️ NETWA 1 item 📄️ HydroShare HydroShare 📄️ NWM BigQuery API REST API backed by National Water Model data, developed on Google Cloud Platform","keywords":"","version":"Next"},{"title":"NGIAB-Prerequisites","type":0,"sectionRef":"#","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/ngiabprerequisite","content":"","keywords":"","version":"Next"},{"title":"Windows:","type":1,"pageTitle":"NGIAB-Prerequisites","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/ngiabprerequisite#windows","content":" Install WSL: Head over to Microsoft's official documentation and follow their comprehensive guide on installing WSL: https://learn.microsoft.com/en-us/windows/wsl/installInstall Docker Desktop: Begin by downloading and installing Docker Desktop from the official website: https://docs.docker.com/desktop/install/windows-install/#install-docker-desktop-on-windowsStart Docker Desktop: After installation, launch the Docker Desktop application.Open WSL as Admin: Right-click on the WSL icon and select "Run as Administrator".Verify Installation: In the WSL window, type the command docker ps -a to check if Docker is running correctly. This command should display a list of Docker containers. ","version":"Next","tagName":"h3"},{"title":"Mac:","type":1,"pageTitle":"NGIAB-Prerequisites","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/ngiabprerequisite#mac","content":" Install Docker Desktop: Download and install Docker Desktop for Mac from: https://docs.docker.com/desktop/install/mac-install/Start Docker Desktop: Launch the Docker Desktop application once the installation is complete.Open Terminal: Open the Terminal application on your Mac.Verify Installation: Similar to Windows, use the command docker ps -a in the Terminal to verify Docker is functioning as expected. ","version":"Next","tagName":"h3"},{"title":"Linux:","type":1,"pageTitle":"NGIAB-Prerequisites","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/ngiabprerequisite#linux","content":" Install Docker: The installation process for Linux varies depending on your distribution. Refer to the official documentation for detailed instructions: https://docs.docker.com/desktop/install/linux-install/Start Docker and Verify: Follow the same steps as described for Mac to start Docker and verify its installation using the docker ps -a command in the terminal. 
Input Data: Download Sample Data: Use the provided commands to download sample data for the Sipsey Fork case study.To generate your own data: Refer to the NGIAB-datapreprocessor for instructions on generating custom input data.To generate your own data and run using NGIAB: Refer to the ngen-datastream repository for instructions on generating custom input data. This section guides you through downloading and preparing the sample input data for the NextGen In A Box project. ","version":"Next","tagName":"h3"},{"title":"Step 1: Create Project Directory","type":1,"pageTitle":"NGIAB-Prerequisites","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/ngiabprerequisite#step-1-create-project-directory","content":" Linux/Mac users: Open your terminal, navigate to the folder where you want to check out the repository and create the ngen-data folder, and run the following commands: mkdir -p NextGen/ngen-data cd NextGen/ngen-data Windows users: WSL (Right-click and run as Admin): Open WSL with administrator privileges and execute: cd /mnt/c/Users/<Folder> mkdir -p NextGen/ngen-data cd NextGen/ngen-data ","version":"Next","tagName":"h3"},{"title":"Step 2: Download Sample Data","type":1,"pageTitle":"NGIAB-Prerequisites","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/ngiabprerequisite#step-2-download-sample-data","content":" Linux/Mac/Windows WSL: Use wget to download the compressed data file: wget --no-parent https://ciroh-ua-ngen-data.s3.us-east-2.amazonaws.com/AWI-006/AWI_16_2853886_006.tar.gz ","version":"Next","tagName":"h3"},{"title":"Step 3: Extract and Rename","type":1,"pageTitle":"NGIAB-Prerequisites","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/ngiabprerequisite#step-3-extract-and-rename","content":" All Platforms: Extract the downloaded file and optionally rename the folder: tar -xf AWI_16_2853886_006.tar.gz ","version":"Next","tagName":"h3"},{"title":"Rename the folder","type":1,"pageTitle":"NGIAB-Prerequisites","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/ngiabprerequisite#rename-the-folder","content":" mv AWI_16_2853886_006 my_data Now you have successfully downloaded and prepared the sample input data in the NextGen/ngen-data directory. ","version":"Next","tagName":"h3"},{"title":"Step 4: Pull images (Optional)","type":1,"pageTitle":"NGIAB-Prerequisites","url":"/docs/products/Community Hydrologic Modeling Framework/nextgeninaboxDocker/ngiabprerequisite#step-4-pull-images-optional","content":" For Mac: docker pull awiciroh/ciroh-ngen-image:latest docker pull docker.osgeo.org/geoserver:2.25.x docker pull awiciroh/tethys-ngiab:main For Windows/Linux: docker pull awiciroh/ciroh-ngen-image:latest-x86 docker pull docker.osgeo.org/geoserver:2.25.x docker pull awiciroh/tethys-ngiab:main ","version":"Next","tagName":"h3"},{"title":"National Water Model (NWM) BigQuery API","type":0,"sectionRef":"#","url":"/docs/products/Data Management and Access Tools/bigqeury-api/","content":"National Water Model (NWM) BigQuery API info More details about the “Design and implementation of a BigQuery dataset and application programmer interface (API) for the U.S. National Water Model” paper can be found here. Please submit the form below to request access to the NWM BigQuery API. 
NWM BigQuery API Access Request Form","keywords":"","version":"Next"},{"title":"HydroShare","type":0,"sectionRef":"#","url":"/docs/products/Data Management and Access Tools/hydroshare/","content":"","keywords":"","version":"Next"},{"title":"Software and Technologies","type":1,"pageTitle":"HydroShare","url":"/docs/products/Data Management and Access Tools/hydroshare/#software-and-technologies","content":" HydroShare is an operational repository at https://www.hydroshare.org. HydroShare includes the following functionality: A user-oriented web application for creation of "resources" within which you can share data, models, computational notebooks, and other content files.A flexible, file-based data model for storing content created within HydroShare resources.A REST application programming interface (API) for programmatic access to HydroShare resources. You can automate and script almost everything through HydroShare's API that you can do through the web user interface.A Python client package called "hsclient" that enables easier interaction with HydroShare's REST APIAbility to link to and launch computational notebooks, code, and content files into linked JupyterHub environments, including the CIROH JupyterHub, CUAHSI JupyterHub, and CyberGIS JupyterHub. ","version":"Next","tagName":"h2"},{"title":"Access","type":1,"pageTitle":"HydroShare","url":"/docs/products/Data Management and Access Tools/hydroshare/#access","content":" Anyone can access HydroShare by navigating to https://www.hydroshare.org and creating a user account. All users are automatically allocated a 20GB quota for content within HydroShare, but if you need more space you can make a request to CUAHSI. ","version":"Next","tagName":"h2"},{"title":"Open-Source Code Development","type":1,"pageTitle":"HydroShare","url":"/docs/products/Data Management and Access Tools/hydroshare/#open-source-code-development","content":" HydroShare is an open-source software development project, with repositories and source code available at https://github.com/hydroshare. HydroShare is developed as open-source software using the BSD 3-clause open-source license. ","version":"Next","tagName":"h2"},{"title":"Bugs and Issues","type":1,"pageTitle":"HydroShare","url":"/docs/products/Data Management and Access Tools/hydroshare/#bugs-and-issues","content":" Bugs, issues, and feature requests related to HydroShare can be reported via the main HydroShare GitHub repository at: https://github.com/hydroshare/hydroshare/issues ","version":"Next","tagName":"h2"},{"title":"Development Team","type":1,"pageTitle":"HydroShare","url":"/docs/products/Data Management and Access Tools/hydroshare/#development-team","content":" HydroShare is the work of many individuals and organizations who have contributed to its design and development over many years. For details, see https://github.com/orgs/hydroshare/people. ","version":"Next","tagName":"h2"},{"title":"How to cite HydroShare","type":1,"pageTitle":"HydroShare","url":"/docs/products/Data Management and Access Tools/hydroshare/#how-to-cite-hydroshare","content":" The following citations should be used when citing HydroShare: Tarboton, D. G., Ames, D. P., Horsburgh, J. S., Goodall, J. L., Couch, A., Hooper, R., Bales, J., Wang, S., Castronova, A., Seul, M., Idaszak, R., Li, Z., Dash, P., Black, S., Ramirez, M., Yi, H., Calloway, C., Cogswell, C. (2024). 
HydroShare Retrospective: A Review of Science and Technology Advances of a Comprehensive Data and Model Publication Environment for the Water Science Domain, Environmental Modelling & Software, 172, 105902, https://doi.org/10.1016/j.envsoft.2023.105902. Horsburgh, J. S., M. M. Morsy, A. M. Castronova, J. L. Goodall, T. Gan, H. Yi, M. J. Stealey, and D. G. Tarboton (2016). HydroShare: Sharing diverse environmental data types and models as social objects with application to the hydrology domain, JAWRA Journal of the American Water Resources Association, 52(4), 873-889, https://doi.org/10.1111/1752-1688.12363. Tarboton, D. G., R. Idaszak, J. S. Horsburgh, J. Heard, D. Ames, J. L. Goodall, L. Band, V. Merwade, A. Couch, J. Arrigo, R. Hooper, D. Valentine and D. Maidment (2014). HydroShare: Advancing Collaboration through Hydrologic Data and Model Sharing, in D. P. Ames, N. W. T. Quinn and A. E. Rizzoli (eds.), Proceedings of the 7th International Congress on Environmental Modelling and Software, San Diego, California, USA, International Environmental Modelling and Software Society (iEMSs), ISBN: 978-88-9035-744-2, https://scholarsarchive.byu.edu/iemssconference/2014/Stream-A/7/. ","version":"Next","tagName":"h2"},{"title":"nwmurl","type":0,"sectionRef":"#","url":"/docs/products/Data Management and Access Tools/dataaccess/NWMURL Library","content":"","keywords":"","version":"Next"},{"title":"Installation","type":1,"pageTitle":"nwmurl","url":"/docs/products/Data Management and Access Tools/dataaccess/NWMURL Library#installation","content":" You can install nwmurl using pip: pip install nwmurl ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"nwmurl","url":"/docs/products/Data Management and Access Tools/dataaccess/NWMURL Library#usage","content":" In the code, you can modify the input parameters, such as start_date, end_date, fcst_cycle, lead_time, varinput, geoinput, and runinput, to customize the NWM data retrieval. The code will generate a list of JSON header URLs tailored to your specified parameters using the generate_urls function. ","version":"Next","tagName":"h2"},{"title":"Customize Your Data Retrieval for Operational Dataset","type":1,"pageTitle":"nwmurl","url":"/docs/products/Data Management and Access Tools/dataaccess/NWMURL Library#customize-your-data-retrieval-for-operational-dataset","content":" start_date: A string representing the starting date in the format "YYYYMMDDHHMM".end_date: A string representing the ending date in the same format.fcst_cycle: A list of integers specifying forecast cycle numbers, e.g., [0, 1, 2, 3, 4]. These cycles represent specific points in time for which URLs will be generated.lead_time: A list of integers indicating lead times in hours for forecasts. It determines the time ahead of the forecast start, e.g., [1, 2, 3, 4].varinput: An integer or string representing the variable of interest within the NWM data. Available options include: 1 or \\"channel_rt\\" for channel routing data.2 or \\"land\\" for land data.3 or \\"reservoir\\" for reservoir data.4 or \\"terrain_rt\\" for terrain routing data.5 or \\"forcing\\" for forcing data.geoinput: An integer or string specifying the geographic region of interest. Options include:1 or \\"conus\\" for the continental United States.2 or \\"hawaii\\" for Hawaii.3 or \\"puertorico\\" for Puerto Rico. runinput: An integer or string representing the NWM run configuration. 
Available options include: 1 or \\"short_range\\" for short-range forecasts.2 or \\"medium_range\\" for medium-range forecasts.3 or \\"medium_range_no_da\\" for medium-range forecasts without data assimilation.4 or \\"long_range\\" for long-range forecasts.5 or \\"analysis_assim\\" for analysis-assimilation runs.6 or \\"analysis_assim_extend\\" for extended analysis-assimilation runs.7 or \\"analysis_assim_extend_no_da\\" for extended analysis-assimilation runs without data assimilation.8 or \\"analysis_assim_long\\" for long analysis-assimilation runs.9 or \\"analysis_assim_long_no_da\\" for long analysis-assimilation runs without data assimilation.10 or \\"analysis_assim_no_da\\" for analysis-assimilation runs without data assimilation.11 or \\"short_range_no_da\\" for short-range forecasts without data assimilation. urlbaseinput : An integer representing the NWM dataset. Available options include: 1: "https://nomads.ncep.noaa.gov/pub/data/nccf/com/nwm/prod/".2: "https://nomads.ncep.noaa.gov/pub/data/nccf/com/nwm/post-processed/WMS/".3: "https://storage.googleapis.com/national-water-model/".4: "https://storage.cloud.google.com/national-water-model/".5: "gs://national-water-model/".6: "gcs://national-water-model/".7: "https://noaa-nwm-pds.s3.amazonaws.com/".8: "s3://noaa-nwm-pds/".9: "https://ciroh-nwm-zarr-copy.s3.amazonaws.com/national-water-model/". meminput : An integer representing the ensemble member designation ranging from 0 to 7write_to_file: A Boolean variable that saves the output urls into a .txt file if set 'True' ","version":"Next","tagName":"h2"},{"title":"Customize Your Data Retrieval for Retrospective Dataset","type":1,"pageTitle":"nwmurl","url":"/docs/products/Data Management and Access Tools/dataaccess/NWMURL Library#customize-your-data-retrieval-for-retrospective-dataset","content":" start_date: A string representing the starting date in the format "YYYYMMDDHHMM".end_date: A string representing the ending date in the same format.urlbaseinput : An integer representing the NWM dataset. Available options include: 1: "https://noaa-nwm-retrospective-2-1-pds.s3.amazonaws.com/".2: "s3://noaa-nwm-retrospective-2-1-pds/model_output/".3: "https://ciroh-nwm-zarr-retrospective-data-copy.s3.amazonaws.com/noaa-nwm-retrospective-2-1-zarr-pds/".4: "https://noaa-nwm-retrospective-3-0-pds.s3.amazonaws.com/CONUS/netcdf/". selected_object_types: An integer representing the object type. Available options include: 1 for forcing data2 for model_output selected_var_types: An integer or string representing the variable of interest within the NWM data. 
Available options include: 1: ".CHRTOUT_DOMAIN1.comp"2: ".GWOUT_DOMAIN1.comp"3: ".LAKEOUT_DOMAIN1.comp"4: ".LDASOUT_DOMAIN1.comp"5: ".RTOUT_DOMAIN1.comp"6: ".LDASIN_DOMAIN1.comp" write_to_file: A Boolean A Boolean variable that saves the output urls into a .txt file if set True ","version":"Next","tagName":"h2"},{"title":"Examples of how to use","type":1,"pageTitle":"nwmurl","url":"/docs/products/Data Management and Access Tools/dataaccess/NWMURL Library#examples-of-how-to-use","content":" For Operational dataset: import nwmurl start_date = "202201120000" end_date = "202201130000" fcst_cycle = [0,8] lead_time = [1,18] varinput = 1 geoinput = 1 runinput = 1 urlbaseinput = 2 meminput = 1 write_to_file = False file_list = nwmurl.generate_urls_operational( start_date, end_date, fcst_cycle, lead_time, varinput, geoinput, runinput, urlbaseinput, meminput, write_to_file ) For Retrospective dataset: import nwmurl start_date = "200701010000" end_date = "200701030800" urlbaseinput = 2 selected_var_types = [1, 2] selected_object_types = [1] write_to_file = True file_list = nwmurl.generate_urls_retro( start_date, end_date, urlbaseinput, selected_object_types, selected_var_types, write_to_file ) ","version":"Next","tagName":"h2"},{"title":"How to Contribute","type":1,"pageTitle":"nwmurl","url":"/docs/products/Data Management and Access Tools/dataaccess/NWMURL Library#how-to-contribute","content":" We welcome contributions to nwmurl! To contribute to the development of this library, please follow these steps: Fork the repository on GitHub. Clone your fork to your local machine:` git clone https://github.com/CIROH-UA/nwmurl.git Create a new branch for your contribution:` git checkout -b feature/your-feature-name Make your code changes and improvements. Before submitting a pull request, make sure to update the package version in setup.py if necessary. Commit your changes with descriptive commit messages. Push your changes to your fork:`` Open a pull request on the main repository, describing your changes and why they should be merged. We appreciate your contributions and will review your pull request as soon as possible. Thank you for helping improve nwmurl! 
","version":"Next","tagName":"h2"},{"title":"HydroServer","type":0,"sectionRef":"#","url":"/docs/products/Data Management and Access Tools/hydroserver/","content":"","keywords":"","version":"Next"},{"title":"Software and Technologies","type":1,"pageTitle":"HydroServer","url":"/docs/products/Data Management and Access Tools/hydroserver/#software-and-technologies","content":" The HydroServer software stack includes: A user-oriented web application for creation of monitorings sites, site metadata, information about observed variables, etc.A Python package and desktop app for loading time series data from monitoring sites into HydroServerApplication Programming Interfaces (APIs) for data ingest into HydroServer, data querying and retrieval, and data and metadata managementA highly performant time series database for storing and managing time series data Additional planned tools include: A Python client package for retrieving time series data from HydroServerAutomated archival of time series data to the HydroShare repositoryIntegration of data quality control functionalityWeb app(s) for data visualization The HydroServer software stack is being build using the following technologies: Vue.js - A JavaScript framework for building web user interfacesPython/Django - A Python web framework for backend web developmentOpen Geospatial Consortium SensorThings API - An API specification and data model for managing and retrieving observations and metadata from sensor systems.Timescale Cloud - A cloud native implementation of PostgreSQL and its Timescale extension for storing and managing time series dataAmazon Web Services (AWS) - The HydroServer web application and APIs are deployed using AWS. ","version":"Next","tagName":"h2"},{"title":"Access","type":1,"pageTitle":"HydroServer","url":"/docs/products/Data Management and Access Tools/hydroserver/#access","content":" As of August 28, 2023, we are currently working on setting up domains, associated security certificates, and additional settings, but we anticipate that the CIROH HydroServer instances will be: https://hydroserver.ciroh.org - Production instance of HydroServer for CIROHhttps://hydroserver-dev.ciroh.org - Development instance for internal development and testinghttps://hydroserver-beta.ciroh.org - Beta instance for testing and demonstration of latest functionality ","version":"Next","tagName":"h2"},{"title":"Open-Source Code Development","type":1,"pageTitle":"HydroServer","url":"/docs/products/Data Management and Access Tools/hydroserver/#open-source-code-development","content":" The HydroServer software stack is developed as open-source software using the BSD3 open source license. All code development is hosted in our GitHub repositories hosted under the HydroServer GitHub Organization https://github.com/hydroserver2/ ","version":"Next","tagName":"h2"},{"title":"Bugs and Issues","type":1,"pageTitle":"HydroServer","url":"/docs/products/Data Management and Access Tools/hydroserver/#bugs-and-issues","content":" Bugs, issues, and feature requests related to HydroServer applications can be reported via their respective GitHub repositories at: https://github.com/orgs/hydroserver2/repositories ","version":"Next","tagName":"h2"},{"title":"Development Team","type":1,"pageTitle":"HydroServer","url":"/docs/products/Data Management and Access Tools/hydroserver/#development-team","content":" The HydroServer software stack is under development at the Utah Water Research Laboratory at Utah State University. 
The main contributors include: Jeff Horsburgh - Associate Professor, Utah Water Research Laboratory and Civil and Environmental Engineering, Utah State UniversityKen Lippold - Software Engineer, Utah Water Research Laboratory, Utah State UniversityDaniel Slaugh - Software Engineer, Utah Water Research Laboratory, Utah State UniversityMaurier Ramirez - Software Engineer, Utah Water Research Laboratory, Utah State University ","version":"Next","tagName":"h2"},{"title":"NWM Data Access","type":0,"sectionRef":"#","url":"/docs/products/Data Management and Access Tools/dataaccess/","content":"","keywords":"","version":"Next"},{"title":"Input and Output Data of the National Water Model","type":1,"pageTitle":"NWM Data Access","url":"/docs/products/Data Management and Access Tools/dataaccess/#input-and-output-data-of-the-national-water-model","content":" Here, you will find resources that grant access to the input data used and the output data produced by the operational National Water Model. ","version":"Next","tagName":"h2"},{"title":"Official NOMADS Resource","type":1,"pageTitle":"NWM Data Access","url":"/docs/products/Data Management and Access Tools/dataaccess/#official-nomads-resource","content":" The official NWM meteorological inputs and hydrology and routing outputs are accessible through both HTTP and FTP. These resources are provided by the National Centers for Environmental Prediction (NCEP) at the following locations: NOMADS - NOAA Operational Model Archive and Distribution System HTTPFTP As of October 24, 2023, these resources include the following directories: para_post-processed/ 22-Sep-2023 20:37 - post-processed/ 02-Nov-2020 14:31 - prod/ 24-Oct-2023 00:18 - v3.0/ 24-Oct-2023 00:18 - The para_post-processed directory lacks specific documentation, although the "para" designation suggests it is a "parallel" execution, indicating a candidate production run under testing for operational use. In the post-processed dataset, you will find the following subdirectories: NOMADS post-processed RFC: Outputs filtered down to RFC locations.WMS: Contains re-indexed/reformatted outputs in per-forecast netCDFs suitable for rapid querying and responsive for graph visualizations on the water.noaa.gov/map site.IMAGES: .png-formatted renderings of NWM output for various domains and variables.logs: Logs. :) ","version":"Next","tagName":"h3"},{"title":"NODD - NOAA Open Data Dissemination Program","type":1,"pageTitle":"NWM Data Access","url":"/docs/products/Data Management and Access Tools/dataaccess/#nodd---noaa-open-data-dissemination-program","content":" "The NOAA Open Data Dissemination (NODD) Program provides public access to NOAA's open data on commercial cloud platforms through public-private partnerships. These partnerships remove obstacles to public use of NOAA data, help avoid costs and risks associated with federal data access services, and leverage operational public-private partnerships with the cloud computing and information services industries." (For more information, visit NODD) The NODD datasets made available through several public cloud vendors are an incredible resource for accessing NWM data for research and evaluative purposes. The NWS NODD datasets are listed on this page and include the following: AWS AWS hosts two repositories as part of their sustainability data initiative. 
The first repository contains the operational data (it now hosts a 4-week rolling collection of all output; it previously held only short-range output, and the registry entry retains the description for the short_range data here; alternatively, the same resource is described on the sustainability initiative page here.) The catalog of AWS-hosted operational NWM data can be browsed here. The second (and more useful) AWS repository contains several versions of the retrospective dataset, each described on the main page under the open data registry here. (The same information is also on the AWS sustainability initiative webpage here.) The different catalogs of those [currently] five versions of that resource are linked below: Two versions of NWM v2.1 retrospective netCDF, herezarr, here Two versions of NWM v2.0 retrospective netCDF, herezarr, here NWM v1.2 retrospective data netCDF, here The AWS retrospective resource is the primary publicly available source for version 1.0 of the “AORC” Analysis of Record for Calibration dataset, a 40-year best-available estimate of the most common meteorological parameters required for hydrological modeling. Version 1.1 of the dataset will accompany the release of the NWM model version 3.0 retrospective (or 2.2 version??), hopefully in the next few weeks. Jupyter notebook instructions for processing NWM Zarr and NetCDF output formats are available here An example of pulling data from the channel output zarr 2.1 archive and writing the results to csv follows: ''' #install these libraries if they aren't already installed !pip install zarr !pip install xarray !pip install s3fs !pip install numpy ''' # Import needed libraries import xarray as xr import numpy as np import s3fs from datetime import datetime, timedelta # open the zarr store url = "s3://noaa-nwm-retrospective-2-1-zarr-pds/chrtout.zarr" fs = s3fs.S3FileSystem(anon=True) store = xr.open_zarr(s3fs.S3Map(url, s3=fs)) # Function to get the time series for a specified reach id and time range # then write it out to a csv file. def GetAndWriteTimeSeriesAtReach(reach_id, start_time_index, end_time_index): flows = streamflow_array.where(feature_id_array==reach_id, drop=True) df_flows = flows[start_time_index:end_time_index].to_dataframe() df_flows.to_csv(f'flows_{reach_id}.csv') # get an xarray array of the various values time_array = store['time'] feature_id_array = store['feature_id'] streamflow_array = store['streamflow'] # Define the feature IDs to check for feature_ids = [5781221, 5781223, 5781703] # Specify the start and end times of interest start_time = datetime(2015, 5, 23, 0, 0, 0) end_time = datetime(2015, 6, 24, 0, 0, 0) # Get the indices for the needed dates (the v2.1 retrospective record begins 1979-02-01) zero_start_time = datetime(1979, 2, 1, 0, 0, 0) start_time_index = int((start_time - zero_start_time).total_seconds() / 3600) end_time_index = int((end_time - zero_start_time).total_seconds() / 3600) for reach_id in feature_ids: GetAndWriteTimeSeriesAtReach(reach_id, start_time_index, end_time_index) ''' Simple Script for Retrieving Retrospective NWM Data from AWS Store Dan Ames, 11/17/2023 [email protected] ''' Google – Operational NWM Data Google hosts the most complete operational data archive of inputs and outputs from the National Water Model, with nearly every file since August 2018. The Google open data registry provides additional explanations here. Operational data can be browsed here. Google also hosts a copy of the NWM v1.2 retrospective here. 
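For the Google-hosted operational archive, a minimal sketch of programmatic access is shown below. It assumes the public gs://national-water-model/ bucket listed among the nwmurl urlbaseinput options above and a typical nwm.YYYYMMDD/short_range/... object layout; the specific date, cycle, file name, and feature IDs are placeholders for illustration, so check the bucket listing for objects that actually exist for your period of interest.

```python
# Sketch: open one operational channel-routing file from the Google-hosted
# NWM archive (gs://national-water-model/). The object path below follows
# the usual nwm.YYYYMMDD/short_range/... layout but is illustrative only.
# Requires: pip install xarray gcsfs h5netcdf
import gcsfs
import xarray as xr

fs = gcsfs.GCSFileSystem(token="anon")  # public bucket, anonymous access

# Placeholder path: substitute a date/cycle/lead time that exists in the bucket
path = ("national-water-model/nwm.20230101/short_range/"
        "nwm.t00z.short_range.channel_rt.f001.conus.nc")

with fs.open(path, "rb") as f:
    ds = xr.open_dataset(f, engine="h5netcdf")
    # Streamflow for a handful of reaches (feature_id values are examples)
    flows = ds["streamflow"].sel(feature_id=[5781221, 5781223, 5781703])
    print(flows.to_dataframe())
```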
Coming soon: Big Query Efforts are underway to make some of the datasets from the NWM operational and retrospective simulations available on BigQuery for ultra-high-bandwidth access. Stay tuned... Azure/Planetary Computer Microsoft hosts the NWM input and output datasets in Azure Blob Storage, associated with the Microsoft Planetary Computer.Microsoft Planetary ComputerTom Augspurger of Microsoft has a series of notebooks providing examples of how to use this data from his workshop at the first CIROH developers conference.Tom Augspurger's Notebooks ","version":"Next","tagName":"h3"},{"title":"CIROH Resources","type":1,"pageTitle":"NWM Data Access","url":"/docs/products/Data Management and Access Tools/dataaccess/#ciroh-resources","content":" More detailed information and example usage will be available soon. Kerchunk Retro (points to AWS 2.1 NetCDF Retro) Kerchunk Retro - Forcing complete; model output 2011-2020 Kerchunk Operational (points to Google assets – a simple text change can point to AWS short range, if desired) Kerchunk Operational ","version":"Next","tagName":"h3"},{"title":"Other resources","type":1,"pageTitle":"NWM Data Access","url":"/docs/products/Data Management and Access Tools/dataaccess/#other-resources","content":" ESRI Living Atlas ESRI Living Atlas provides a map-enabled version of the NWM output, which can be accessed here. Description of WRF-Hydro code: A detailed description of various aspects of the WRF-Hydro code, which produces the current NWM, can be found here. 📄️ nwmurl nwmurl is a Python library developed by CIROH in 2023. It provides utility functions specifically designed to subset and generate National Water Model (NWM) data URLs. This library simplifies the process of accessing NWM data for various purposes such as analysis, modeling, and visualization. ","version":"Next","tagName":"h3"},{"title":"Welcome to the Water Prediction Node","type":0,"sectionRef":"#","url":"/docs/products/Data Management and Access Tools/waternode/","content":"","keywords":"","version":"Next"},{"title":"Data Catalog","type":1,"pageTitle":"Welcome to the Water Prediction Node","url":"/docs/products/Data Management and Access Tools/waternode/#data-catalog","content":" The WPN has a STAC data catalog. The catalog can be accessed via the graphical browser or programmatically via R or Python (see the example sketch below). The WPN has a tutorial demonstrating how to download WPN data via Python. More tutorials for working with WPN data will be created at the tutorials page. The two main initial focuses of the data catalog will be: Flood maps created using satellite data. Maps created by NESDIS STAR using VIIRS, Sentinel 1/2, Landsat, and GOES ABI will be made available.ET-related data. The WPN will catalog NESDIS STAR soil moisture products as well as remotely sensed vegetation indices that have the potential to improve hydrological model validation and evaluation efforts.Baseline inundated extents and river widths. Baseline inundated extents are already being produced by the National Water Center and have the potential to inform the flood maps in the catalog. Multi-year baselines can also be informative when evaluating drought-stricken regions. River widths have the potential to be assimilated into the National Water Model to improve synthetic rating curves as well as model discharge estimates. 
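As a sketch of the programmatic route mentioned above, the snippet below uses pystac-client to browse and search a STAC catalog. The catalog URL and collection id are placeholders (the WPN's actual endpoint is published through its catalog browser and tutorials), and the search call assumes the catalog exposes a STAC API search endpoint.

```python
# Sketch: programmatic access to a STAC catalog such as the WPN's.
# The catalog URL and collection id below are placeholders; substitute the
# endpoint published in the WPN catalog browser/tutorials.
# Requires: pip install pystac-client
from pystac_client import Client

catalog = Client.open("https://example.com/wpn/stac")  # placeholder URL

# List available collections (e.g., satellite-derived flood maps)
for collection in catalog.get_collections():
    print(collection.id, "-", collection.title)

# Search a hypothetical flood-map collection by time range and bounding box
search = catalog.search(
    collections=["flood-maps"],          # placeholder collection id
    bbox=[-88.5, 32.5, -87.0, 34.0],     # lon/lat box (example values)
    datetime="2023-01-01/2023-12-31",
)
for item in search.items():
    print(item.id, list(item.assets))
```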
","version":"Next","tagName":"h2"},{"title":"Current WPN projects","type":1,"pageTitle":"Welcome to the Water Prediction Node","url":"/docs/products/Data Management and Access Tools/waternode/#current-wpn-projects","content":" The Water Prediction Node first project is making it easier to perform qualitative comparisons between flood maps created from satellite imagery and flood maps created using the inundation models used by the National Water Center (NWC). The WPN will enable these comparisons by focusing on: Exposing WPN data catalog assets as a web mapping service or web mapping tile service. This will allow stakeholders to easily import WPN satellite derived flood maps into their GIS viewer of choice. This satellite derived flood map layer can then be quickly compared to modeled inundation.Creating easy to access collections of satellite imagery of notable floods so that retrospective evaluation is easier.Creating a web processing service that allows for the creation of agreement maps in the style of gval. These agreement maps will highlight areas of agreement and disagreement between the modelled and remotely sensed flood maps and will allow the computation of agreement metrics. ","version":"Next","tagName":"h2"},{"title":"Website repository","type":1,"pageTitle":"Welcome to the Water Prediction Node","url":"/docs/products/Data Management and Access Tools/waternode/#website-repository","content":" The source for the website implementation can be found here ","version":"Next","tagName":"h2"},{"title":"Forecast-Workflow","type":0,"sectionRef":"#","url":"/docs/products/Data Management and Access Tools/netwa/","content":"","keywords":"","version":"Next"},{"title":"About","type":1,"pageTitle":"Forecast-Workflow","url":"/docs/products/Data Management and Access Tools/netwa/#about","content":" The forecast-workflow repository was initally created to implement the AEM3D model to generate 7-day forecasts of Harmful Algal Blooms (HABs) in Lake Champlain. However, we've created some handy data grabbers along the way that we've modified in order to make them more useful for other CIORH researchers. The scripts for these data grabbers can be found in the data/ folder within the repository. This page will include documentation on how to use some of these data grabber tools we've created. ","version":"Next","tagName":"h2"},{"title":"Cloning the repository","type":1,"pageTitle":"Forecast-Workflow","url":"/docs/products/Data Management and Access Tools/netwa/#cloning-the-repository","content":" The first thing you need to do in order to be able to use the tools in forecast-workflow is clone the repository into your user space using git. Once logged on the testbed, open a new terminal and navigate to the directroy in which you'd like to store the repoThen, run git clone https://github.com/CIROH-UVM/forecast-workflow.gitThat's it! You now have the repo on your own personal user space. Be sure to checkout the repo on GitHub every now and then to make sure you have the most recent version of the repo.If you need to pull any updates to your local repo, simply run git pull from your forecast-workflow/ directory ","version":"Next","tagName":"h2"},{"title":"Using Jupyter Notebooks","type":1,"pageTitle":"Forecast-Workflow","url":"/docs/products/Data Management and Access Tools/netwa/#using-jupyter-notebooks","content":" We recommmend using Jupyter notebooks for interactive computing, but you can also import forecast-workflow tools in a plain old python script as well (see code blocks below). 
Log onto the CIROH VM using Remote Desktop Viewer or similar softwareOpen a new terminal and cd to your forecast-workflow directoryActivate the forecast mamba environment with mamba activate /data/condaEnvs/forecastRun jupyter lab to launch JupyterIf starting a new notebook, click on the kernel called "forecast" underneath the "Notebooks" bannerOr if you have a notebook you're working on, simply open that one and pick up where you left off! Add the repo to your Python path variable Every time you start or restart a Jupyter kernel, you will need to add the forecast-workflow directory to your sys.path variable so that Python knows where to look for forecast-workflow code. You can do that with the following code block: import sys sys.path.append("/absolute/path/to/your/forecast-workflow") This cell should be at the top of your notebook, but you only need to run it once; comment out these lines after running so that you do not add the same path to your sys.path over and over again. Now, you can import the data grabbers into your notebook like any other module! import data.nwm_fc as nwm import data.gfs_fc as gfs Etc. ","version":"Next","tagName":"h2"},{"title":"Data Grabber Demo","type":1,"pageTitle":"Forecast-Workflow","url":"/docs/products/Data Management and Access Tools/netwa/#data-grabber-demo","content":" There is a neat demo notebook that includes more in-depth instructions on how to use our data grabber tools. You can find that notebook at forecast-workflow/examples/get_data_demo.ipynb ","version":"Next","tagName":"h2"},{"title":"Getting Started","type":0,"sectionRef":"#","url":"/docs/products/Data Management and Access Tools/netwa/gettingstarted","content":"","keywords":"","version":"Next"},{"title":"Accessing the Testbed","type":1,"pageTitle":"Getting Started","url":"/docs/products/Data Management and Access Tools/netwa/gettingstarted#accessing-the-testbed","content":" ","version":"Next","tagName":"h2"},{"title":"Remote Desktop Software","type":1,"pageTitle":"Getting Started","url":"/docs/products/Data Management and Access Tools/netwa/gettingstarted#remote-desktop-software","content":" One of the easiest ways to access the NETWA is through remote desktop software, such as Remote Desktop Connection, which comes pre-installed on Windows. Microsoft also has a version available for macOS, but any software that allows remote desktop access will do. Below are steps to access your own virtual desktop on the testbed: Launch Remote Desktop Connection or similar software to connect to your virtual desktopIn the "computer:" field, enter "ciroh-testbed.uvm.edu"Click "connect" and enter your university username and password when promptedCongratulations! You've just logged on to your very own NETWA virtual desktop ","version":"Next","tagName":"h3"},{"title":"SSH","type":1,"pageTitle":"Getting Started","url":"/docs/products/Data Management and Access Tools/netwa/gettingstarted#ssh","content":" Another way to access the testbed is through the Secure Shell Protocol, more commonly known as SSH. This option is more appropriate for users who are familiar with using the Linux terminal or other command-line interfaces. 
Open a terminal application on your machine (there are built-in terminals on macOS and Windows, or you can use a third-party app such as PuTTY or MobaXterm)Type the command ssh [email protected], using your actual netIDEnter your password when promptedNow you should be logged in to the testbed and be able to navigate via the command-line ","version":"Next","tagName":"h3"},{"title":"Data Storage","type":1,"pageTitle":"Getting Started","url":"/docs/products/Data Management and Access Tools/netwa/gettingstarted#data-storage","content":" The /netfiles/ciroh/ directory on the NETWA has over 40 terabytes of disk space that researchers can use to store data - that's equivalent to 40 million megabytes! Needless to say, there's plenty of space for data. If you have data you'd like to store here, please create your own directory named after your netID. For example, "John Smith" would store their data under /netfiles/ciroh/jsmith. ","version":"Next","tagName":"h2"},{"title":"Setting up Mamba","type":1,"pageTitle":"Getting Started","url":"/docs/products/Data Management and Access Tools/netwa/gettingstarted#setting-up-mamba","content":" Mamba is a fast, lightweight reimplementation of Conda, a popular package manager for a variety of programming languages. It is used on the testbed to set up virtual environments that contain all of the relevant packages and dependencies for a given software repository or workflow. There are a few existing mamba environments on the testbed already, such as one that houses all of the packages necessary to run the forecast-workflow repo (more specific instructions for accessing that here). This section will demonstrate how to initialize mamba for a new user and how to see what virtual environments are available. On the testbed, open a terminal and run the following command: /usr/local/mambaforge/bin/mamba init Note: you only have to do this once, not every time you open a new terminal or want to use mambaIf the command ran successfully, you should see something like this now at the command line: (base) [jsmith@ciroh-testbed ~]$ The (base) text indicates that you are in the mamba base environment. To see the list of packages in said environment (or any environment you happen to be in), run mamba list Note: from here on out, you can use mamba and conda at the command line interchangeably, but for simplicity's sake we will stick with using mamba To see a list of available environments, run mamba env list Most relevant mamba environments will be located in /data/condaEnvs/ To activate an environment, simply run mamba activate /data/condaEnvs/env_name replacing env_name with the actual name of the environment ","version":"Next","tagName":"h2"},{"title":"Evaluation Tools","type":0,"sectionRef":"#","url":"/docs/products/Evaluation Tools/","content":"Evaluation Tools 📄️ CSES Community Streamflow Evaluation System 📄️ TEEHR Tools for Exploratory Evaluation in Hydrologic Research","keywords":"","version":"Next"},{"title":"CSES","type":0,"sectionRef":"#","url":"/docs/products/Evaluation Tools/cses/","content":"CSES Community Streamflow Evaluation System (CSES) is a Python-based, user-friendly, fast, and model-agnostic streamflow evaluation tool. This tool can be used to evaluate any hydrological model that uses the NHDPlus dataset. It allows a user to evaluate the performance of a hydrological model at the collocated USGS gauges and NHDPlus stream reaches. This Python-based tool helps visualize the results and investigate the model performance interactively. 
The current version of the tool is available on GitHub and can be accessed using the following link.","keywords":"","version":"Next"},{"title":"Products Overview","type":0,"sectionRef":"#","url":"/docs/products/intro","content":"Products Overview At CIROH, our team of researchers, hydrologists, and engineers is committed to advancing our understanding of hydrologic processes, improving operational hydrologic forecasting techniques and workflows, collaborating on community water modeling, converting forecasts into practical solutions, and utilizing water predictions to help guide decision-making processes.","keywords":"","version":"Next"},{"title":"NWM-ML","type":0,"sectionRef":"#","url":"/docs/products/Machine Learning and AI Tools/nwm_ml/","content":"NWM-ML","keywords":"","version":"Next"},{"title":"TEEHR","type":0,"sectionRef":"#","url":"/docs/products/Evaluation Tools/rtiteehr/","content":"TEEHR TEEHR (pronounced "tier") is a Python tool set for loading, storing, processing and visualizing hydrologic data, particularly National Water Model data, for the purpose of exploring and evaluating the datasets to assess their skill and performance.","keywords":"","version":"Next"},{"title":"Portal","type":0,"sectionRef":"#","url":"/docs/products/Portal/","content":"Portal 📄️ CIROH Research Portal CIROH Research Portal","keywords":"","version":"Next"},{"title":"Advancing Snow Modeling","type":0,"sectionRef":"#","url":"/docs/products/Machine Learning and AI Tools/sweml/","content":"Advancing Snow Modeling The Snow Water Equivalent Machine Learning (SWEML) model incorporates ground-based snow measuring sites, remotely-sensed snow cover information, and an Artificial Neural Network to provide point estimates of Snow Water Equivalent. The network was trained on historical data from NASA’s ASO missions, divided into regions, and then a Light Gradient Boosting model was used to perform recursive feature elimination to produce an efficient feature selection and region-specific model. The class contains the required functions for downloading data, pre-processing, running inference, and for producing visualizations.","keywords":"","version":"Next"},{"title":"Machine Learning and AI Tools","type":0,"sectionRef":"#","url":"/docs/products/Machine Learning and AI Tools/","content":"Machine Learning and AI Tools 📄️ SWEML Snow Water Equivalent Machine Learning 📄️ NWM-ML National Water Model - Machine Learning","keywords":"","version":"Next"},{"title":"Visualization and Analysis Tools","type":0,"sectionRef":"#","url":"/docs/products/Visualization and Analysis Tools/","content":"Visualization and Analysis Tools 📄️ Tethys-CSES Community Streamflow Evaluation System (CSES) - Tethys Web Application","keywords":"","version":"Next"},{"title":"Tethys-CSES","type":0,"sectionRef":"#","url":"/docs/products/Visualization and Analysis Tools/tethys-cses/","content":"Tethys-CSES","keywords":"","version":"Next"},{"title":"CIROH Research Portal","type":0,"sectionRef":"#","url":"/docs/products/Portal/research-portal/","content":"","keywords":"","version":"Next"},{"title":"Technologies","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#technologies","content":" The proposed catalog is being built using Django CMS, Tethys Platform, HydroShare, HydroLearn, Amazon AWS cloud computing infrastructure, Google BigQuery data services, and an agile, continuous integration development and deployment approach. 
Use the following links to learn more about the technologies powering the NWM Research Apps Portal: Django CMS - A Content Management System built on the most popular open-source Python-based web framework: Django. Tethys Platform - An open-source web framework for building and hosting geospatial and scientific web applications. HydroShare - CUAHSI's online collaboration environment for sharing data, models, and code. HydroLearn - A platform for instructors to find and collaborate on developing learning modules, and for students to discover and learn from authentic problems. Amazon AWS - The NWM Research Apps Portal is hosted on AWS cloud resources. Google BigQuery - BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. Use built-in ML/AI and BI for insights at scale. ","version":"Next","tagName":"h2"},{"title":"Research Applications","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#research-applications","content":" The portal includes applications developed using the Tethys Platform framework, as well as external applications. Please use the following information to learn more about the applications hosted in the portal: ","version":"Next","tagName":"h2"},{"title":"Native Applications","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#native-applications","content":" Native applications are applications developed using the Tethys Platform framework. The following applications are hosted in the CIROH Research Portal: Water Data Explorer: Open-source web application which allows users to import, visualize, and interact with water data management services such as CUAHSI WaterOneFlow and the SensorThings API used by HydroServer. Met Data Explorer: Open-source web application for visualizing meteorological gridded data. Utilizing TDS to serve the data, the application allows you to organize and save data files with the specific variables and dimensions that you need, visualize the data in a Leaflet-based map viewer, animate the data across a time series, and extract a time series over a specified area. HydroCompute && HydroLang Tethys Application Demo: This application demonstrates two projects developed by the University of Iowa Hydroinformatics Lab (UIHI Lab): HydroLang and HydroCompute. This application serves as the solution for the HydroCompute Case Study 3: Dashboard for Station Statistical Analysis tutorial. This tutorial was developed by the University of Iowa Hydroinformatics Lab for the 2023 CIROH Developer Conference. SWE: Open-source web application designed for the visualization of Standardized Snow Water Equivalent (SWE). This application provides users with a comprehensive platform to view and interpret snow water equivalent data, which is crucial for understanding snowpack dynamics, water resource management, and environmental monitoring. Ground Water Mapper Application: The Groundwater Data Mapper (GWDM) is an open-source web application and an associated set of Python scripts. The GWDM app can be used to host and visualize groundwater data in one or more regions. The groundwater data is a set of wells organized by aquifers. For each well, the user can upload a set of measurements (typically groundwater elevations) taken at different points in time. The wells are then displayed on a map in the web interface and the measurement time series can be visualized by clicking on the wells. 
Grace Groundwater Subsetting Tool: The Grace Groundwater Subsetting Tool (GGST) app uses GRACE data to generate time series and animated maps of groundwater storage changes. GRACE provides monthly estimates of water storage anomalies in equivalent water height and has provided monthly gravity field solutions since April 2002. Estimates of mass variability and associated observational errors are available on a global 300 km grid. GRACE has proved an effective tool for characterizing groundwater storage changes in large regions. Snow Inspector: Open-source web tool designed for visualizing and analyzing snow cover data using imagery from the MODIS_Terra satellite. It leverages interactive maps and time-series plots to provide insights into snow coverage and depth over specified periods. The application fetches data from NASA's GIBS WMTS service, processes the imagery to extract snow percentage values, and presents the information through a user-friendly interface. This tool is particularly valuable for researchers, hydrologists, and environmental scientists studying snow dynamics and their impact on water resources. OWP Experimental App: Open-source application designed to provide analysis and assimilation, short-term, medium-term, and long-term forecasts for any river ID that appears in the anomaly layer of the National Water Model. It also integrates with the Google BigQuery API to perform the same analysis and forecasts for groups of reaches. Additionally, the app allows users to create reach groups from a CSV list of reaches, geometry data, or HydroShare resources. Community Streamflow Evaluation System (CSES): Open-source web application designed to evaluate hydrological model performance using the National Water Model (NWM) and USGS monitoring data. It provides interactive visualizations for over 5,000 monitoring sites, enabling users to analyze streamflow data and assess model accuracy. The application supports various evaluation methods, including state, reach, and HUC (Hydrologic Unit Code) evaluations, and leverages data from the Alabama Water Institute hosted on Amazon AWS S3. Researchers can also use a Python-based package for hands-on model evaluation and are encouraged to contribute to the development and integration of additional hydrological modeling components. ","version":"Next","tagName":"h3"},{"title":"Proxy Applications","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#proxy-applications","content":" Proxy applications are applications developed using frameworks other than the Tethys platform and are deployed outside the CIROH research portal. However, they are integrated into the portal through Tethys' capability to incorporate external applications. This allows users to access and utilize these applications seamlessly within the CIROH research portal environment. OWP NWM Map Viewer: Experimental geospatial services depicting forecasts from the River Forecast Centers and the National Water Model. Services are available via the prototype NWS National Map Viewer, or directly via URLs hosted on the Hydrologic Visualization and Inundation Services (HydroVIS) cloud resource. CIROH JupyterHub: Cloud computing service that enables users to execute scientific code and explore, modify, and interact with data inside a remote execution environment using Python and/or R programming languages. 
Our JupyterHub is closely integrated with CUAHSI’s HydroShare and Hydrologic Information System data repositories, making it easy to leverage community datasets, collaborate, and disseminate research workflows. This platform provides all users with a modest amount of persistent disk space to store files as well as free cloud computing, making it ideal for water science education, data analysis, and research dissemination. To access this system you must become a member of CUAHSI HydroShare and join our Cloud Computing group. HydroShare: Online, collaborative system for sharing, publishing, and discovering water-related data and models. It is developed and maintained by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). You can develop an application using the Tethys platform, which can later be installed in the portal for greater visibility. Similarly, if you already have an application developed and deployed using another framework, it can be added to the portal as a proxy application. ","version":"Next","tagName":"h3"},{"title":"Access","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#access","content":" The developed NWM Research Apps Portal is hosted at https://portal.ciroh.org and serves as a platform that links and organizes together a curated selection of publicly-facing NWM visualization and analysis tools developed by CIROH and NOAA. ","version":"Next","tagName":"h2"},{"title":"Contribute","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#contribute","content":" There are many ways to contribute to the NWM Research Apps Portal. You can contribute applications, datasets, learning modules, and publications, which will gain greater visibility within the CIROH community. ","version":"Next","tagName":"h2"},{"title":"Applications","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#applications","content":" If you would like any current application that you have to appear on the portal application page, you can create a HydroShare resource containing your application information, and it will be displayed automatically in the portal. Follow these steps: Log in to HydroShare or create an account. Create a HydroShare App Resource. Add the following keyword: nwm_portal_app. Fill in the following metadata: app icon, help page url, app home page url, source code url. Note: You need to provide a URL for the app_icon, for example: http://portal.ciroh.org/t/static/water_data_explorer/images/wde.png Make the resource public. After following these steps, you will be able to see your app as shown below: ","version":"Next","tagName":"h3"},{"title":"Data","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#data","content":" If you would like any current data set that you have to appear on the portal data access page, you can create a HydroShare resource containing your dataset information, and it will be displayed automatically in the portal. Follow these steps: Log in to HydroShare or create an account. Create a HydroShare Resource. Add the following keyword: nwm_portal_data. Fill in the following metadata in the additional metadata portion: app_icon, help_page_url, app_home_page_url, source_code_url. Note: You need to provide a URL for the app_icon, for example: http://portal.ciroh.org/t/static/water_data_explorer/images/wde.png Make the resource public. 
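If you prefer to script the resource creation instead of using the web interface, the sketch below uses the hsclient Python package; the specific calls, metadata fields, and URLs shown are assumptions based on the steps above rather than an official recipe, so treat it as a starting point and adapt the placeholder values.
from hsclient import HydroShare
# sign in with your HydroShare credentials (placeholders)
hs = HydroShare(username="your_username", password="your_password")
# create a new resource and fill in the metadata the portal looks for
res = hs.create()
res.metadata.title = "My NWM dataset"
res.metadata.subjects = ["nwm_portal_data"]  # keyword that makes the dataset appear in the portal
res.metadata.additional_metadata = {
    "app_icon": "http://portal.ciroh.org/t/static/water_data_explorer/images/wde.png",
    "help_page_url": "https://example.org/help",          # placeholder
    "app_home_page_url": "https://example.org",           # placeholder
    "source_code_url": "https://github.com/example/repo", # placeholder
}
res.save()
# finally, make the resource public from the HydroShare web interface (last step above)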
After following these steps, you will be able to see your dataset: ","version":"Next","tagName":"h3"},{"title":"Publications","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#publications","content":" If you would like any current or previous publication to appear on the portal publications page, you can create a Zotero item in the Zotero CIROH Library Group for your publications, and it will be displayed automatically in the portal. Follow these steps: Request access to the Zotero CIROH Library Group.Return to your Zotero library.Sync your Zotero account to see your group folder.Click on a group folder, then add new citations using the Zotero icon in the URL bar or drag and drop items from other folders into your group folders. After following these steps, you will be able to see your publications: ","version":"Next","tagName":"h3"},{"title":"Learning Modules","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#learning-modules","content":" If you would like any HydroLearn learning module to appear on the portal learning modules page, you can create a HydroShare resource containing your learning module information, and it will be displayed automatically in the portal. Follow these steps: Log in to HydroShare or create an account.Create a HydroShare Resource.Add the following keyword: nwm_portal_module.Make sure the name of the resource is the same as the name of the learning module.Make the resource public. After following these steps, you will be able to see your learning module ","version":"Next","tagName":"h3"},{"title":"Resources","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#resources","content":" Making Resources PublicHydroShare ResourcesUploading and Publishing Data and Models ","version":"Next","tagName":"h3"},{"title":"Code","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#code","content":" The source code for the NWM Research Apps Portal can be found on GitHub: https://github.com/CIROH-UA/tethysportal-ciroh ","version":"Next","tagName":"h2"},{"title":"Bugs and Issues","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#bugs-and-issues","content":" Please report bugs and issues on the GitHub Issues page: https://github.com/CIROH-UA/tethysportal-ciroh/issues ","version":"Next","tagName":"h2"},{"title":"Team","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#team","content":" Brigham Young University - Roja Najafi, Iman Maghami, Dan Ames, Jim NelsonUniversity of Iowa - Ibrahim DemirAquaveo Contractors - Gio Romero, Michael Souffront, Nathan Swain ","version":"Next","tagName":"h2"},{"title":"Develop","type":1,"pageTitle":"CIROH Research Portal","url":"/docs/products/Portal/research-portal/#develop","content":" If you would like to develop a native Tethys application or integrate an existing application about your research into the CIROH Research Portal, please contact the Aquaveo team: Nathan SwainMichael SouffrontJacob JohnsonGiovanni Romero ","version":"Next","tagName":"h2"},{"title":"CIROH AWS Account","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/aws/","content":"","keywords":"","version":"Next"},{"title":"How to get access to CIROH AWS Account?","type":1,"pageTitle":"CIROH AWS Account","url":"/docs/services/cloudservices/aws/#how-to-get-access-to-ciroh-aws-account","content":" CIROH Cloud Hosting services include: 
Creation of AWS subaccounts for CIROH consortium members and partners. Project PI contact identity creation and access (AWS IAM). We encourage the project PI to start here: (select Public Cloud and fill out details for AWS) Infrastructure Request Form The CIROH consortium member or partner is responsible for: Management of the CIROH subaccount assigned to them. Project-specific software and environment configuration. Creation of and/or access for additional project contacts. note Please refer to this link for references to submitted forms. ","version":"Next","tagName":"h2"},{"title":"Cost of use:","type":1,"pageTitle":"CIROH AWS Account","url":"/docs/services/cloudservices/aws/#cost-of-use","content":" Some uses may be covered by CIROH Infrastructure project funds. Extensive uses may be charged to individual CIROH projects. Unleashing Research Potential with AWS Cloud Services Here's how AWS empowers your research: Enhanced Data Accessibility and Analysis: AWS provides scalable storage and computing resources, allowing researchers to readily access, analyze, and manipulate vast datasets efficiently. Specialized Solutions at Your Fingertips: The AWS Marketplace offers a wealth of pre-built solutions and tools specifically designed for hydrological research. This eliminates the need for time-consuming development and allows researchers to focus on scientific discovery. World-Class IT Infrastructure for Research Excellence: AWS offers a robust and secure cloud infrastructure that delivers the best possible IT foundation for your research projects. This translates to increased efficiency, reduced costs, and faster time-to-results. Accelerated Research Timelines: By leveraging the on-demand scalability and elasticity of AWS, researchers can dynamically scale their computing resources to meet the specific needs of their projects. This translates to faster analysis and completion of research endeavors. ","version":"Next","tagName":"h2"},{"title":"Where to go for help:","type":1,"pageTitle":"CIROH AWS Account","url":"/docs/services/cloudservices/aws/#where-to-go-for-help","content":" Email [email protected] UA CIROH Cloud Team CIROH Cloud Slack Channel - #ciroh-ua-it-admin CIROH AWS support Slack Channel - #aws-ciroh-support 📄️ CIROH AWS Office Hours CIROH AWS Office Hours 📄️ AWS News Blog AWS News Blog 🗃️ Documentation and Tutorial 4 items ","version":"Next","tagName":"h2"},{"title":"Infrastructure Access Guide","type":0,"sectionRef":"#","url":"/docs/services/access","content":"","keywords":"","version":"Next"},{"title":"Public Cloud","type":1,"pageTitle":"Infrastructure Access Guide","url":"/docs/services/access#public-cloud","content":" CIROH has partnered with AWS, 2i2c, and Google Cloud to provide access to their cloud computing services. ","version":"Next","tagName":"h2"},{"title":"How to Request Access","type":1,"pageTitle":"Infrastructure Access Guide","url":"/docs/services/access#how-to-request-access","content":" Submit a GitHub template request detailing your project requirements and specifications. Our team will review your request and assist you in obtaining the necessary access. 
Cloud Infrastructure Request Form ","version":"Next","tagName":"h3"},{"title":"Cost of Use","type":1,"pageTitle":"Infrastructure Access Guide","url":"/docs/services/access#cost-of-use","content":" Some uses may be covered by CIROH core funds (contact CIROH CyberInfrastructure Team at [email protected] to start a request)Extensive uses may be charged to individual CIROH projects Note: The Cloud Infrastructure Request Form must be submitted by the Principal Investigator (PI) of the project. ","version":"Next","tagName":"h3"},{"title":"CIROH JupyterHub","type":1,"pageTitle":"Infrastructure Access Guide","url":"/docs/services/access#ciroh-jupyterhub","content":" CIROH JupyterHub provides both CPU and GPU capabilities. ","version":"Next","tagName":"h2"},{"title":"How to Get Access","type":1,"pageTitle":"Infrastructure Access Guide","url":"/docs/services/access#how-to-get-access","content":" Submit one of the following forms to get access to CIROH JupyterHub environments: JupyterHub (2i2c) CPU Access Request Form JupyterHub (2i2c) GPU Access Request Form ","version":"Next","tagName":"h3"},{"title":"Requesting Custom Images","type":1,"pageTitle":"Infrastructure Access Guide","url":"/docs/services/access#requesting-custom-images","content":" To request custom images: Create an environment.yml file by exporting your conda environment.Fill out the JupyterHub (2i2c) Software Install form. JupyterHub (2i2c) Software Install Form ","version":"Next","tagName":"h3"},{"title":"On-Premises Infrastructure","type":1,"pageTitle":"Infrastructure Access Guide","url":"/docs/services/access#on-premises-infrastructure","content":" CIROH operates an on-premises infrastructure that includes high-performance computing (HPC) resources and specialized software. ","version":"Next","tagName":"h2"},{"title":"How to Request Access","type":1,"pageTitle":"Infrastructure Access Guide","url":"/docs/services/access#how-to-request-access-1","content":" Submit a GitHub template request detailing your project requirements and specifications.Our team will review your request and assist you in obtaining the necessary access. Note: The On-Premises Request Form must be submitted by the Principal Investigator (PI) of the project. On-premises Infrastructure Request Form ","version":"Next","tagName":"h3"},{"title":"Accessing Pantarhei or Wukong","type":1,"pageTitle":"Infrastructure Access Guide","url":"/docs/services/access#accessing-pantarhei-or-wukong","content":" Step 1: Submit the On-premise Access Request form for individual user accounts: On-Premise Access Request Form Note: For UA users, please submit the On-Premises Access Request Form.For users outside UA, please start with the VPN Access Request Form followed by the On-Premises Access Request Form. ","version":"Next","tagName":"h3"},{"title":"Public Cloud Services","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/","content":"Public Cloud Services In tandem with the power of the public cloud, our team of researchers, hydrologists, and engineers at CIROH is committed to advancing our understanding of hydrologic processes, improving operational hydrologic forecasting techniques and workflows, collaborating on community water modeling, converting forecasts into practical solutions, and utilizing water predictions to help guide decision-making processes. By leveraging the scalability and flexibility of public cloud platforms like AWS and Google Cloud, CIROH Cloud empowers our team to conduct groundbreaking research in hydrology. 
This translates into a robust and efficient computing environment that accelerates discovery and innovation. 🗃️ CIROH AWS Account 3 items 📄️ CIROH Google Account Google Research Cloud 🗃️ CIROH JupyterHub 1 item 📄️ HydroShare and CIROH JupyterHub Integration HydroShare and CIROH JupyterHub Integration 📄️ CUAHSI JupyterHub cuahsi jupyterhub","keywords":"","version":"Next"},{"title":"AWS News Blog","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/aws/aws_news","content":"AWS News Blog Stay up-to-date on the latest AWS news and announcements by visiting the official AWS News Blog: AWS News Blog Here, you'll find: Product launches and updates, customer success stories, industry insights, technical deep dives, and much more! Whether you're a seasoned AWS user or just getting started, the AWS News Blog is a valuable resource for staying informed about the latest developments in the cloud.","keywords":"","version":"Next"},{"title":"Documentation and Tutorial","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/aws/documentation/","content":"Documentation and Tutorial 📄️ AWS Best Practices As the main account administrator for a CIROH subaccount, here are some best practices to follow within your subaccount: 📄️ Tag Resources on AWS AWS tags for cost tracking 📄️ AWS Data Science Tools AWS Data Science Tools 📄️ AWS S3 Sync with Different Access Keys AWS S3 Sync with Different Access Keys","keywords":"","version":"Next"},{"title":"AWS Best Practices for CIROH AWS Users","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/aws/documentation/aws-best-practice/","content":"","keywords":"","version":"Next"},{"title":"Security:","type":1,"pageTitle":"AWS Best Practices for CIROH AWS Users","url":"/docs/services/cloudservices/aws/documentation/aws-best-practice/#security","content":" MFA: Require Multi-Factor Authentication (MFA) for all subaccount users and admins to enhance account security. IAM roles for resources: Instead of individual access keys, utilize IAM roles for accessing resources within subaccounts. This simplifies access management and eliminates the need for storing long-lived credentials. Regularly review and update permissions: Regularly review and update user and role permissions within subaccounts to ensure they remain aligned with their current needs. Utilize git-secrets: git-secrets is a client tool that prohibits unwanted commits containing secret data such as API keys, passwords, and tokens. You can integrate it into your CI/CD pipelines to prevent sensitive information from being added to your GitHub repositories. For more information, refer to the AWS documentation and the git-secrets GitHub repository. Use AWS Secrets Manager: Use AWS Secrets Manager, or another secrets management solution, so you don’t have to hardcode keys in plaintext. The application or client can then retrieve secrets when needed. For more information, see What is AWS Secrets Manager? ","version":"Next","tagName":"h2"},{"title":"Access Key Management:","type":1,"pageTitle":"AWS Best Practices for CIROH AWS Users","url":"/docs/services/cloudservices/aws/documentation/aws-best-practice/#access-key-management-","content":" Never store your access key in plain text, in a code repository, or in code. Never check an access key into a public repository. Disable or delete access keys when no longer needed. Enable least-privilege permissions. Rotate access keys regularly, preferably every 90 days, as shown in the example below. 
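As a minimal sketch of that rotation routine using boto3 (the user name jsmith and the key ID are placeholders, and it assumes your subaccount credentials allow IAM access), you can audit, replace, and retire keys programmatically:
import boto3
iam = boto3.client("iam")
# Audit: list existing keys with their status and creation date
for key in iam.list_access_keys(UserName="jsmith")["AccessKeyMetadata"]:
    print(key["AccessKeyId"], key["Status"], key["CreateDate"])
# Rotate: create a replacement key, then deactivate the old one
new_key = iam.create_access_key(UserName="jsmith")["AccessKey"]
iam.update_access_key(UserName="jsmith", AccessKeyId="AKIAOLDKEYEXAMPLE", Status="Inactive")
# Once nothing depends on the old key, delete it for good
# iam.delete_access_key(UserName="jsmith", AccessKeyId="AKIAOLDKEYEXAMPLE")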
","version":"Next","tagName":"h2"},{"title":"Resource Management:","type":1,"pageTitle":"AWS Best Practices for CIROH AWS Users","url":"/docs/services/cloudservices/aws/documentation/aws-best-practice/#resource-management","content":" Tagging: Implement a consistent tagging strategy for resources in all linked accounts. This allows for better cost allocation, resource identification, and easier filtering when managing resources across multiple accounts. Follow How to tag resources on AWS. Cost allocation: Allowed limit for new subaccount is $500/project per month. Monitor the usage throughout the month and if it reaches above $500/project, notify admin of the subaccount to take necessary actions. For projects expecting more than $500 per month usage, please email [email protected] in advance to get the approval from higher management. Effective Sept 2024, we transitioned to a new budgeting model (for existing users) that provides your CIROH AWS subaccount with a $10,000 budget for every 6-month period and monthly max limit of $3000. This change will give you more flexibility to plan and execute your research workloads without the constraints of a monthly cap. Resource quotas: Set resource quotas for subaccounts to limit their spending and resource usage. This helps prevent accidental overspending and ensures efficient resource allocation. Monitor resource usage: Encourage subaccount admins to monitor their resource usage regularly to identify potential cost optimization opportunities. Data Locality: Always consider the location of your data when selecting a region for deploying resources. Deploying resources in the same region as your data minimizes data transfer costs and latency, leading to improved performance and cost-efficiency. Region Selection: Carefully evaluate the available AWS regions and select the one that best aligns with your data residency requirements, compliance needs, and desired performance characteristics. EBS: EBS Volume Management: Avoiding Unnecessary Costs: Terminate EBS Volumes with Terminated Instances: When terminating an EC2 instance, ensure that you also delete any associated EBS volumes that are no longer needed. EBS volumes incur charges even if they are not attached to a running instance. Regularly Review EBS Volume Usage: Periodically review your EBS volumes using the EC2 Management Console or AWS CLI. Identify any unattached volumes that are no longer required and delete them to avoid ongoing charges. EFS: Data Lifecycle Management: Evaluate your data access patterns. For infrequently accessed files, consider migrating data from Amazon EFS to Amazon S3 to leverage its cost-efficient storage classes, such as S3 Standard-IA or S3 Glacier. Tiered Storage Strategy: Implement a tiered storage strategy where frequently accessed data resides on EFS for high performance, while infrequently accessed or archival data is moved to S3 for cost-effective long-term storage. ","version":"Next","tagName":"h2"},{"title":"Governance and Compliance:","type":1,"pageTitle":"AWS Best Practices for CIROH AWS Users","url":"/docs/services/cloudservices/aws/documentation/aws-best-practice/#governance-and-compliance","content":" Standardized configurations: Establish and enforce standardized configurations for resources across linked accounts. This ensures consistency and simplifies management. Compliance policies: Implement compliance policies for subaccounts to ensure they adhere to relevant regulations and internal standards. 
Logging and auditing: Enable logging and auditing for all activities within linked accounts to track resource usage, identify potential security threats, and maintain compliance. Regular security audits: Conduct regular security audits of linked accounts to identify and address any vulnerabilities. ","version":"Next","tagName":"h2"},{"title":"Additional Recommendations:","type":1,"pageTitle":"AWS Best Practices for CIROH AWS Users","url":"/docs/services/cloudservices/aws/documentation/aws-best-practice/#additional-recommendations","content":" Centralized documentation: Use CIROH DocuHub (docs.ciroh.org) as a central location for documenting procedures, best practices, and resource usage guidelines for linked accounts. Training and awareness: Offer training and awareness programs to subaccount admins on secure practices, compliance requirements, and resource management best practices through CIROH AWS Office hours. Regular communication: Maintain regular communication with subaccount admins to address their concerns, answer questions, and share updates regarding policies and procedures via Slack Channel and also available through CIROH AWS Office hours. ","version":"Next","tagName":"h2"},{"title":"Application Deployment:","type":1,"pageTitle":"AWS Best Practices for CIROH AWS Users","url":"/docs/services/cloudservices/aws/documentation/aws-best-practice/#application-deployment","content":" Use terraforms or any Infrastructure as Code if possible for your application deployment. ","version":"Next","tagName":"h2"},{"title":"AWS S3 Sync with Different Access Keys","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/aws/documentation/aws-s3-sync/","content":"","keywords":"","version":"Next"},{"title":"Step 1: Configure AWS CLI Profiles","type":1,"pageTitle":"AWS S3 Sync with Different Access Keys","url":"/docs/services/cloudservices/aws/documentation/aws-s3-sync/#step-1-configure-aws-cli-profiles","content":" First, set up two different profiles in your AWS CLI configuration, one for the source bucket and one for the destination bucket. # Configure source profile aws configure --profile source-profile AWS Access Key ID [None]: [Enter source bucket access key] AWS Secret Access Key [None]: [Enter source bucket secret key] Default region name [None]: [Enter region, e.g., us-west-2] Default output format [None]: json # Configure destination profile aws configure --profile destination-profile AWS Access Key ID [None]: [Enter destination bucket access key] AWS Secret Access Key [None]: [Enter destination bucket secret key] Default region name [None]: [Enter region, e.g., us-east-1] Default output format [None]: json ","version":"Next","tagName":"h3"},{"title":"Step 2: Verify Profiles","type":1,"pageTitle":"AWS S3 Sync with Different Access Keys","url":"/docs/services/cloudservices/aws/documentation/aws-s3-sync/#step-2-verify-profiles","content":" Verify that your profiles are set up correctly: aws configure list --profile source-profile aws configure list --profile destination-profile ","version":"Next","tagName":"h3"},{"title":"Step 3: Use AWS S3 Sync Command","type":1,"pageTitle":"AWS S3 Sync with Different Access Keys","url":"/docs/services/cloudservices/aws/documentation/aws-s3-sync/#step-3-use-aws-s3-sync-command","content":" Now, you can use the aws s3 sync command with these profiles. 
Here's the basic syntax: aws s3 sync s3://source-bucket s3://destination-bucket \\ --source-profile source-profile \\ --profile destination-profile Examples #Sync all contents: aws s3 sync s3://my-source-bucket s3://my-destination-bucket \\ --source-profile source-profile \\ --profile destination-profile #Sync with specific options: aws s3 sync s3://my-source-bucket s3://my-destination-bucket \\ --source-profile source-profile \\ --profile destination-profile \\ --delete \\ --exclude "*.tmp" \\ --include "*.jpg" #Dry run (no actual changes): aws s3 sync s3://my-source-bucket s3://my-destination-bucket \\ --source-profile source-profile \\ --profile destination-profile \\ --dryrun ","version":"Next","tagName":"h3"},{"title":"Important Notes:","type":1,"pageTitle":"AWS S3 Sync with Different Access Keys","url":"/docs/services/cloudservices/aws/documentation/aws-s3-sync/#important-notes","content":" The --source-profile is used for reading from the source bucket. The --profile (or --dest-profile in some AWS CLI versions) is used for writing to the destination bucket. Ensure both profiles have the necessary permissions on their respective buckets. Be cautious with the --delete flag as it removes files in the destination that don't exist in the source. Always test with --dryrun first to ensure the operation will do what you expect. ","version":"Next","tagName":"h3"},{"title":"AWS Data Science Tools","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/aws/documentation/data-science-tools/","content":"AWS Data Science Tools A collection of Amazon Web Services (AWS) scripts supporting Water Data Science. This repository can serve as a resource for those looking to connect to and leverage the power of AWS products, specifically AWS S3 storage. info AWS Data Science Tools","keywords":"","version":"Next"},{"title":"How to Tag Resources on AWS","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/aws/documentation/tagging/","content":"Tags in AWS are essential for organizing resources based on their purpose, owner, or environment, and can also aid in cost tracking when unique key-value pairs are assigned. How to Tag Resources on AWS Using AWS Console: Navigate to the desired resource, such as an EC2 instance, and follow these steps: Select the instance from the list view. Go to the Tags tab and click on the Manage tags button. Add a new tag with a unique Key and Value. Save the changes. Using AWS CLI: Use the following command-line example to create a tag for an EC2 instance: aws ec2 create-tags \\ --resources i-1234567890abcdef0 \\ --tags Key=webserver,Value=dev For each project, tag all its resources with: Project=project_name (e.g., ciroh-hydroshare, ciroh-fim) Double-check the tag name with the AWS main account admin to make sure it fits well with our naming scheme.","keywords":"","version":"Next"},{"title":"CIROH AWS Office Hours","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/aws/officehours","content":"CIROH AWS Office Hours We have scheduled monthly CIROH AWS Office Hour sessions, an opportunity to discuss AWS-related inquiries with direct responses from AWS experts. These sessions cover various AWS services, projects, and other topics of interest. AWS technical staff is available to address the community's AWS questions and have discussions around best practices. We encourage CIROH members to participate to learn more about how to effectively leverage AWS tools and resources for their projects and to share knowledge and experience with fellow CIROH members. 
info Email: [email protected] to subscribe to monthly CIROH Office Hour Sessions.","keywords":"","version":"Next"},{"title":"CIROH JupyterHub","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/ciroh jupyterhub/","content":"","keywords":"","version":"Next"},{"title":"Powered by 2i2c JupyterHub on Google Cloud","type":1,"pageTitle":"CIROH JupyterHub","url":"/docs/services/cloudservices/ciroh jupyterhub/#powered-by-2i2c-jupyterhub-on-google-cloud","content":" CIROH, in collaboration with 2i2c, offers a dedicated JupyterHub environment on Google Cloud specifically designed for hydrological researchers. 2i2c is a cloud service provider specializing in open-source infrastructure for research and development. ","version":"Next","tagName":"h2"},{"title":"How to get access to these environments?","type":1,"pageTitle":"CIROH JupyterHub","url":"/docs/services/cloudservices/ciroh jupyterhub/#how-to-get-access-to-these-environments","content":" CIROH JupyterHub provides both CPU and GPU capabilities. Please submit the form below to get access to the CIROH JupyterHub environments: JupyterHub (2i2c) CPU Access Request Form JupyterHub (2i2c) GPU Access Request Form note You will need to submit a GitHub username for this request. If you do not currently have a GitHub account, follow the instructions at GitHub account ","version":"Next","tagName":"h3"},{"title":"How to request new software install on 2i2c?","type":1,"pageTitle":"CIROH JupyterHub","url":"/docs/services/cloudservices/ciroh jupyterhub/#how-to-request-new-software-install-on-2i2c","content":" Please refer to the Dockerfile for the list of software currently deployed on CIROH JupyterHub. If your software is not listed in this file, please submit the form below to request a new software installation on 2i2c JupyterHub. JupyterHub (2i2c) Software Install Form ","version":"Next","tagName":"h3"},{"title":"CIROH JupyterHub Environments:","type":1,"pageTitle":"CIROH JupyterHub","url":"/docs/services/cloudservices/ciroh jupyterhub/#ciroh-jupyterhub-environments","content":" Click on the button below to access the CIROH JupyterHub environments (production and staging environments): CIROH Production JupyterHub CIROH Staging JupyterHub note Please remember to stop the server when you're not actively using it ","version":"Next","tagName":"h3"},{"title":"Software currently deployed on CIROH JupyterHub","type":1,"pageTitle":"CIROH JupyterHub","url":"/docs/services/cloudservices/ciroh jupyterhub/#softwares-currently-deployed-on-ciroh-jupyterhub","content":" Please refer to the Dockerfile of the CIROH JupyterHub for the list of software deployed on CIROH JupyterHub. 
","version":"Next","tagName":"h3"},{"title":"Server Options","type":1,"pageTitle":"CIROH JupyterHub","url":"/docs/services/cloudservices/ciroh jupyterhub/#server-options","content":" Small - 5GB RAM, 2 CPUs Medium - 11GB RAM, 4 CPUs Large - 24GB RAM, 8 CPUs Huge - 52GB RAM, 16 CPUs ","version":"Next","tagName":"h3"},{"title":"Cost of Use:","type":1,"pageTitle":"CIROH JupyterHub","url":"/docs/services/cloudservices/ciroh jupyterhub/#cost-of-use","content":" CIROH 2i2c JupyterHub is free to use (cost covered by CIROH Infrastructure project funds) ","version":"Next","tagName":"h3"},{"title":"Benefits of CIROH Cloud's 2i2c JupyterHub:","type":1,"pageTitle":"CIROH JupyterHub","url":"/docs/services/cloudservices/ciroh jupyterhub/#benefits-of-ciroh-clouds-2i2c-jupyterhub","content":" Managed JupyterHub as a Service: CIROH Cloud takes care of the entire JupyterHub infrastructure, allowing researchers to focus on their scientific endeavors. Open Source Powerhouse: Built on open-source tools, 2i2c JupyterHub offers flexibility, scalability, and a collaborative environment that fosters research advancement. Leveraging Google Cloud: 2i2c utilizes Google Cloud's robust infrastructure to deliver a powerful and reliable platform for your computational needs. ","version":"Next","tagName":"h2"},{"title":"Documentation and Tutorial","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/","content":"","keywords":"","version":"Next"},{"title":"2i2c JupyterHub details:","type":1,"pageTitle":"Documentation and Tutorial","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/#2i2c-jupyterhub-details","content":" info 2i2c JupyterHub ","version":"Next","tagName":"h3"},{"title":"2i2c JupyterHub Documentation and Tutorials:","type":1,"pageTitle":"Documentation and Tutorial","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/#2i2c-jupyterhub-documentation-and-tutorials","content":" info 2i2c Docs ","version":"Next","tagName":"h3"},{"title":"2i2c JupyterHub Infrastructure Documentation:","type":1,"pageTitle":"Documentation and Tutorial","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/#2i2c-jupyterhub-infrastructure-documentation","content":" info 2i2c Infrastructure Docs ","version":"Next","tagName":"h3"},{"title":"awi-ciroh-image repository on CIROH GitHub:","type":1,"pageTitle":"Documentation and Tutorial","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/#awi-ciroh-image-repository-on-ciroh-github","content":" info Github URL for AWI-CIROH 2i2c Hub Image note For any other questions, please email us at [email protected] with detailed information. 📄️ JupyterHub User Directory 2i2c JupyterHub File System 📄️ Manage files in GCP bucket 2i2c JupyterHub Google Cloud Buckets 📄️ Push and Pull to GitHub 2i2c JupyterHub is a cloud-based JupyterHub environment specifically designed for hydrological researchers. It is powered by 2i2c JupyterHub, a cloud-based JupyterHub environment specifically on Google Cloud 📄️ Request custom images 2i2c JupyterHub is a cloud-based JupyterHub environment specifically designed for hydrological researchers. 
It is powered by 2i2c JupyterHub, a cloud-based JupyterHub environment specifically on Google Cloud 📄️ Persistent Conda Environment 2i2c JupyterHub is a cloud-based JupyterHub environment specifically designed for hydrological researchers. It is powered by 2i2c JupyterHub, a cloud-based JupyterHub environment specifically on Google Cloud ","version":"Next","tagName":"h3"},{"title":"JupyterHub User Directory","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/directory/","content":"","keywords":"","version":"Next"},{"title":"1. /home/jovyan","type":1,"pageTitle":"JupyterHub User Directory","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/directory/#1-homejovyan","content":" This is your home directory and is the same for all JupyterHub users. Only you can access files in your home directory. Any files you place in your home directory persist between sessions. It is recommended to use it only for notebooks and code since it is not suitable for large datasets. ","version":"Next","tagName":"h3"},{"title":"2. /home/jovyan/shared","type":1,"pageTitle":"JupyterHub User Directory","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/directory/#2-homejovyanshared","content":" This is the shared read-only directory. All users can access and read from the shared directory. Only the hub admins can add and delete data from this directory. ","version":"Next","tagName":"h3"},{"title":"3. /tmp","type":1,"pageTitle":"JupyterHub User Directory","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/directory/#3-tmp","content":" This is a non-persistent directory. This means any files you add under the /tmp directory will be deleted once you log out. This directory can be used to store temporary data. ","version":"Next","tagName":"h3"},{"title":"Persistent Conda Environments on CIROH 2i2c Server","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/conda/","content":"","keywords":"","version":"Next"},{"title":"1. Create a directory for Conda environments:","type":1,"pageTitle":"Persistent Conda Environments on CIROH 2i2c Server","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/conda/#1-create-a-directory-for-conda-environments","content":" You can set up a directory within your home folder to store all your Conda environments. This prevents them from being removed when the server is restarted. For example: mkdir -p ~/conda_envs ","version":"Next","tagName":"h3"},{"title":"2. Create a new environment in that directory:","type":1,"pageTitle":"Persistent Conda Environments on CIROH 2i2c Server","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/conda/#2-create-a-new-environment-in-that-directory","content":" Use the --prefix option with conda create to specify the location where you want to create your environment. For example, to create an environment called my_env in ~/conda_envs: conda create --prefix ~/conda_envs/my_env python=3.9 ","version":"Next","tagName":"h3"},{"title":"3. Activate the environment:","type":1,"pageTitle":"Persistent Conda Environments on CIROH 2i2c Server","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/conda/#3-activate-the-environment","content":" You can activate the environment as usual, using the path to where you created it: conda activate ~/conda_envs/my_env ","version":"Next","tagName":"h3"},{"title":"4. 
Automatically activate the environment on restart:","type":1,"pageTitle":"Persistent Conda Environments on CIROH 2i2c Server","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/conda/#4-automatically-activate-the-environment-on-restart","content":" If you want this environment to be activated every time you log in or the server restarts, you can add the following to your .bashrc or .bash_profile file: conda activate ~/conda_envs/my_env By creating your environments in your home folder (e.g., ~/conda_envs/), they will persist across server restarts, ensuring that you don't have to recreate them every time. ","version":"Next","tagName":"h3"},{"title":"A Step-by-Step Guide: Requesting custom images","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/custom-images/","content":"","keywords":"","version":"Next"},{"title":"1. Create an environment.yml file:","type":1,"pageTitle":"A Step-by-Step Guide: Requesting custom images","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/custom-images/#1-create-an-environmentyml-file","content":" Open your terminal or command prompt. Make sure you have conda installed and activated in the environment that contains the packages you want to use for creating custom images. Learn more here. Run the following command, replacing ENVNAME with the actual name of your environment. conda env export -n ENVNAME > environment.yml ","version":"Next","tagName":"h3"},{"title":"2. Submit a Request Form:","type":1,"pageTitle":"A Step-by-Step Guide: Requesting custom images","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/custom-images/#2-submit-a-request-form","content":" Click on the link below to access the Jupyterhub (2i2c) Software Install form.Select Install Software on CIROH 2i2c JupyterHub as a reason for request.Fill out remaining sections of the form and submit it. JupyterHub (2i2c) Software Install Form ","version":"Next","tagName":"h3"},{"title":"3. 
Share your environment.yml file with CIROH-IT support","type":1,"pageTitle":"A Step-by-Step Guide: Requesting custom images","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/custom-images/#3-share-your-environmentyml-file-with-ciroh-it-support","content":" After submitting the request form, attach the environment.yml file you created in step 1 to an email and send it to [email protected] ","version":"Next","tagName":"h3"},{"title":"A Step-by-Step Guide: Push Code to GitHub from 2i2c JupyterHub","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/github-push/","content":"","keywords":"","version":"Next"},{"title":"You can also follow along with this video tutorial that walks you through the same process visually.","type":1,"pageTitle":"A Step-by-Step Guide: Push Code to GitHub from 2i2c JupyterHub","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/github-push/#you-can-also-follow-along-with-this-video-tutorial-that-walks-you-through-the-same-process-visually","content":" ","version":"Next","tagName":"h3"},{"title":"CUAHSI JupyterHub","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/cuahsi/","content":"","keywords":"","version":"Next"},{"title":"An Introduction","type":1,"pageTitle":"CUAHSI JupyterHub","url":"/docs/services/cloudservices/cuahsi/#an-introduction","content":" The CUAHSI JupyterHub is a free cloud computing environment that enables researchers to execute scientific code and explore, modify, and interact with data inside a remote execution environment using Python and/or R programming languages. It is integrated with HydroShare and the Hydrologic Information System data repositories, making it easy to leverage community datasets, collaborate, and disseminate research workflows. NOTE: Below content is rendered from https://github.com/CUAHSI/jupyterhub/blob/main/docs/getting-started.md Getting Started ","version":"Next","tagName":"h2"},{"title":"Access","type":1,"pageTitle":"CUAHSI JupyterHub","url":"/docs/services/cloudservices/cuahsi/#access","content":" To access the CUAHSI JupyterHub platform, you must be a member of the CUAHSI JupyterHub Group. Group membership limits system interruptions and ensures that resources are effectively curated and managed. When first accessing the application, you will be directed to the CUAHSI JupyterHub Group landing page. Request to join the group, and after admission has been granted you will be able to access the computational environment. To expedite the approval process, please ensure that your HydroShare user profile is complete and up-to-date. Contact [email protected] if you have any questions regarding this process. ","version":"Next","tagName":"h2"},{"title":"Launching JupyterHub","type":1,"pageTitle":"CUAHSI JupyterHub","url":"/docs/services/cloudservices/cuahsi/#launching-jupyterhub","content":" There are multiple ways to access the CUAHSI JupyterHub platform which are listed below. All of these methods require that you register your HydroShare account with the CUAHSI JupyterHub Group (see Gaining Access) ","version":"Next","tagName":"h2"},{"title":"HydroShare Web Application","type":1,"pageTitle":"CUAHSI JupyterHub","url":"/docs/services/cloudservices/cuahsi/#hydroshare-web-application","content":" The simplest way to get started with the CUAHSI JupyterHub is by launching it directly from the HydroShare Apps library (hydroshare.org/apps) by clicking on the CUAHSI JupyterHub icon. 
This redirects you to the CUAHSI JupyterHub server where you will be asked to log in using your HydroShare credentials. Once authenticated, you will be asked to choose a Profile Environment which will be used to launch an isolated cloud computing environment for you to work in. In this space, you can create files and execute code from within your web browser. Any data you upload, download, and create is associated with your HydroShare account and will persist between sessions, meaning that it will be there next time you log in. Prior to gaining access, you will be asked to join the CUAHSI JupyterHub HydroShare group (see the Access and Authentication section for details). ","version":"Next","tagName":"h3"},{"title":"HydroShare Open-With Menu","type":1,"pageTitle":"CUAHSI JupyterHub","url":"/docs/services/cloudservices/cuahsi/#hydroshare-open-with-menu","content":" Another common way of accessing the CUAHSI JupyterHub environment is by using the HydroShare Open with functionality. This button can be found in the top right corner of any HydroShare resource landing page. After selecting "CUAHSI JupyterHub", a computing environment will be prepared and the content of the current HydroShare resource will be placed inside of it. This is a convenient method for executing code, data, and workflows that have been published in the HydroShare repository. ","version":"Next","tagName":"h3"},{"title":"Direct URL","type":1,"pageTitle":"CUAHSI JupyterHub","url":"/docs/services/cloudservices/cuahsi/#direct-url","content":" Once you are familiar with this environment, it's often useful to access it directly rather than navigating through HydroShare. This can be done by simply navigating to https://jupyterhub.cuahsi.org. ","version":"Next","tagName":"h3"},{"title":"CIROH Google Account","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/google cloud/","content":"","keywords":"","version":"Next"},{"title":"How to get access to CIROH Google Account?","type":1,"pageTitle":"CIROH Google Account","url":"/docs/services/cloudservices/google cloud/#how-to-get-access-to-ciroh-google-account","content":" CIROH Cloud Hosting services include: Creation of Google Cloud subaccounts for CIROH consortium members and partners. Project PI contact identity creation and access (Google IAM). We encourage the project PI to start here: (select Public Cloud and fill out details for Google Cloud) Infrastructure Request Form The CIROH consortium member or partner is responsible for: Management of the CIROH subaccount assigned to them. Project-specific software and environment configuration. Creation of and/or access for additional project contacts. note Please refer to this link for references to submitted forms. ","version":"Next","tagName":"h2"},{"title":"Cost of use:","type":1,"pageTitle":"CIROH Google Account","url":"/docs/services/cloudservices/google cloud/#cost-of-use","content":" Some uses may be covered by CIROH Infrastructure project funds. Extensive uses may be charged to individual CIROH projects. Unleashing Research Potential with Google Cloud Services Here are some services and tools offered by Google Cloud: Compute Engine: It is a highly scalable computing and hosting service that provides on-demand, high-performance computing resources. It lets you create and run virtual machines on Google infrastructure. Cloud Storage: Google Cloud provides fast, low-cost, highly durable archive and backup storage, allowing researchers to readily access, analyze, and manipulate vast datasets efficiently. 
BigQuery: Using its serverless architecture, researchers can use SQL queries to analyze huge datasets. It lets you manage all data types across clouds with fine-grained access controls. Google Earth Engine: It is a cloud-based geospatial analysis platform to analyze earth observation data. Google Cloud offers Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more. ","version":"Next","tagName":"h2"},{"title":"Where to go for help:","type":1,"pageTitle":"CIROH Google Account","url":"/docs/services/cloudservices/google cloud/#where-to-go-for-help","content":" Email [email protected] UA CIROH Cloud Team CIROH Cloud Slack Channel - #ciroh-ua-it-admin ","version":"Next","tagName":"h2"},{"title":"External Resources","type":0,"sectionRef":"#","url":"/docs/services/external-resources/","content":"External Resources CIROH recognizes the importance of leveraging external resources to enhance our research capabilities. This section provides information on external computing resources available to CIROH consortium members. Explore options beyond our on-premise and cloud-based services to expand your research horizons. Learn about NSF Access ACCESS is a program established and funded by the National Science Foundation to help researchers and educators, with or without supporting grants, to utilize the nation’s advanced computing systems and services – at no cost.","keywords":"","version":"Next"},{"title":"Manage files in GCP bucket","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/gcp-object-storage/","content":"","keywords":"","version":"Next"},{"title":"1. Overview","type":1,"pageTitle":"Manage files in GCP bucket","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/gcp-object-storage/#1-overview","content":" CIROH JupyterHub uses Google Cloud Storage object storage to store data in buckets (containers for objects). Currently, there are two buckets available to use on CIROH JupyterHub. Scratch Bucket: It is intended for storing temporary files, since any files in the scratch bucket get deleted after seven days. Open a terminal in CIROH JupyterHub and run this command to display your scratch bucket name: echo $SCRATCH_BUCKET gs://awi-ciroh-scratch/<username> Note: In the above command output, the name of the bucket is 'awi-ciroh-scratch' and <username> is the folder in the bucket. Persistent Bucket: It is recommended for storing files that you will be using for a longer period of time. Open a terminal in CIROH JupyterHub and run this command to display your persistent bucket name: echo $PERSISTENT_BUCKET gs://awi-ciroh-persistent/<username> ","version":"Next","tagName":"h3"},{"title":"2. Copying file to a bucket","type":1,"pageTitle":"Manage files in GCP bucket","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/gcp-object-storage/#2-copying-file-to-a-bucket","content":" You can copy files on your CIROH JupyterHub to an available bucket using the following command. gcloud storage cp <filepath> $PERSISTENT_BUCKET/<filepath> ","version":"Next","tagName":"h3"},{"title":"3. Copying file from a bucket to CIROH JupyterHub","type":1,"pageTitle":"Manage files in GCP bucket","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/gcp-object-storage/#3-copying-file-from-a-bucket-to-ciroh-jupyterhub","content":" You can copy files from an accessible bucket to your CIROH JupyterHub using the following command. 
gcloud storage cp $PERSISTENT_BUCKET/<filepath> <destination-filepath> ","version":"Next","tagName":"h3"},{"title":"4. Listing files in a bucket","type":1,"pageTitle":"Manage files in GCP bucket","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/gcp-object-storage/#4-listing-files-in-a-bucket","content":" You can list all files/folders in a bucket using the following command. gcloud storage ls $PERSISTENT_BUCKET Note: The above command will list all files/folders in the folder <username>. It won't list files in the sub-folders of folder <username>. To list all files including the files in the sub-folders of the root folder <username>, use the following command. gcloud storage ls --recursive $PERSISTENT_BUCKET ","version":"Next","tagName":"h3"},{"title":"5. Deleting file from a bucket","type":1,"pageTitle":"Manage files in GCP bucket","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/gcp-object-storage/#5-deleting-file-from-a-bucket","content":" You can delete a file in a bucket with the following command: gcloud storage rm $PERSISTENT_BUCKET/<filepath> ","version":"Next","tagName":"h3"},{"title":"6. User permissions on buckets","type":1,"pageTitle":"Manage files in GCP bucket","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/gcp-object-storage/#6-user-permssions-on-buckets","content":" All users have read/write permissions on both the scratch and persistent buckets. note Anyone can access each other's files in buckets on the hub. Please be careful not to delete other users' files. Using the environment variables ($SCRATCH_BUCKET & $PERSISTENT_BUCKET) to access buckets in commands helps prevent accidentally deleting any other user's files. Your actions impact the entire organization's storage. If unsure, consult with the team lead or CIROH IT support. ","version":"Next","tagName":"h3"},{"title":"7. Accessing buckets in Python","type":1,"pageTitle":"Manage files in GCP bucket","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/gcp-object-storage/#7-accessing-buckets-in-python","content":" You can find information on how to access buckets in Python code here. ","version":"Next","tagName":"h3"},{"title":"Where to go for help:","type":1,"pageTitle":"Manage files in GCP bucket","url":"/docs/services/cloudservices/ciroh jupyterhub/documentation/gcp-object-storage/#where-to-go-for-help","content":" Email [email protected] UA CIROH Cloud Team CIROH Cloud Slack Channel - #ciroh-ua-it-admin CIROH Infrastructure Support Slack Channel - #ciroh-infrastructure-support ","version":"Next","tagName":"h2"},{"title":"NSF Access","type":0,"sectionRef":"#","url":"/docs/services/external-resources/nsf-access/","content":"","keywords":"","version":"Next"},{"title":"JetStream2","type":1,"pageTitle":"NSF Access","url":"/docs/services/external-resources/nsf-access/#jetstream2","content":" Jetstream2 is a powerful hybrid-cloud platform designed for researchers and educators. It offers a range of flexible, on-demand, and programmable infrastructure tools, from interactive virtual machines (VMs) to advanced infrastructure and orchestration services. The primary resource consists of AMD Milan 7713 CPUs with 128 cores per node and 512 GB RAM per node, all connected by a high-speed 100 Gbps Ethernet network. 
Image Source: https://docs.jetstream-cloud.org/ ","version":"Next","tagName":"h2"},{"title":"Use cases:","type":1,"pageTitle":"NSF Access","url":"/docs/services/external-resources/nsf-access/#use-cases","content":" Jetstream2 is ideal for researchers with diverse needs: On-demand virtual machines: It is ideal for research that requires on-demand virtual machine services. It is also best for researchers needing to create their own customized virtual machine environment for specific software needs. Always-on research infrastructure: It can host research-supporting infrastructure services that require continuous operation. Educational support: It can be used to provide virtual machines for student use in research or coursework. For information about available instance sizes, visit the JetStream2 VM Sizes page. ","version":"Next","tagName":"h3"},{"title":"To get started with JetStream2:","type":1,"pageTitle":"NSF Access","url":"/docs/services/external-resources/nsf-access/#to-get-started-with-jetstream2","content":" Create an Account: Follow the detailed instructions on the Get Started with JetStream2 page to set up your account and request access. Request Access: After completing the setup, proceed by following the steps outlined on the Accessing JetStream2 page. Log In and Start: Once you’ve gained access, visit the Logging in to JetStream2 page to begin using JetStream2. info For more detailed information on JetStream2, visit the official NSF ACCESS Jetstream2 website here. ","version":"Next","tagName":"h3"},{"title":"Anvil","type":1,"pageTitle":"NSF Access","url":"/docs/services/external-resources/nsf-access/#anvil","content":" Anvil is a powerful supercomputer offering computing power for demanding research problems. Purdue's Anvil cluster consists of 1,000 nodes with two 64-core AMD EPYC "Milan" processors each and delivers over one billion CPU core hours each year. With a peak performance of 5.1 petaflops and a 100 Gbps interconnect, Anvil ensures rapid data transfer and processing for efficient research workflows. Standard Anvil nodes have 256GB of DDR4-3200 memory each, ideal for most research tasks. Image Source: https://www.rcac.purdue.edu/anvil ","version":"Next","tagName":"h2"},{"title":"Use cases:","type":1,"pageTitle":"NSF Access","url":"/docs/services/external-resources/nsf-access/#use-cases-1","content":" General-Purpose CPU Power: Anvil's powerful CPUs (with 128 cores per node) are ideal for computationally intensive modeling and simulation tasks across scientific and engineering fields. Memory-Intensive Workloads: The dedicated large-memory nodes (with 1TB of DDR4-3200 memory per node) work best for research that demands significant memory resources. Composable Subsystem: It is a private cloud built on Kubernetes and consists of both CPU and GPU nodes and S3 data storage. It is suitable for applications such as model inference services (via NVIDIA Triton), specialized LLMs, dataset hosting, science gateways and web application hosting, and classroom and training applications via interactive access interfaces. Anvil Documentation info For more detailed information on Anvil, visit the official NSF ACCESS website here. 
","version":"Next","tagName":"h3"},{"title":"CIROH JupyterHub via HydroShare","type":0,"sectionRef":"#","url":"/docs/services/cloudservices/HydroShare/","content":"","keywords":"","version":"Next"},{"title":"HydroShare and CIROH JupyterHub Integration","type":1,"pageTitle":"CIROH JupyterHub via HydroShare","url":"/docs/services/cloudservices/HydroShare/#hydroshare-and-ciroh-jupyterhub-integration","content":" Users can now directly launch and execute computational notebooks from HydroShare resources in the CIROH JupyterHub environment. Wondering how to do it? Here are the steps to follow: First, confirm that you have access to the CIROH JupyterHub. If not, follow the steps to request access. CIROH JupyterHub is now an approved app and appears on https://www.hydroshare.org/apps/. Navigate to this page to access it directly, or select it from the "Open with" list on any resource that you have access to containing a Jupyter notebook. A new browser tab will open for the CIROH JupyterHub. Click on the "Login to continue" button and select one of the server options appropriate for the analysis you need to run (small, medium, large, or huge) and the image: New Pangeo Notebook base image. Then, click the "Start" button to launch your server. You will now be inside the CIROH JupyterHub. All of the files from your HydroShare resource will appear in the file browser on the left, including any notebooks that were in your resource. Double-click a notebook to open it and then run it. ","version":"Next","tagName":"h2"},{"title":"Accessing JetStream2","type":0,"sectionRef":"#","url":"/docs/services/external-resources/nsf-access/access","content":"","keywords":"","version":"Next"},{"title":"How to get access to JetStream2?","type":1,"pageTitle":"Accessing JetStream2","url":"/docs/services/external-resources/nsf-access/access#how-to-get-access-to-jetstream2","content":" Step 1: Submit the Infrastructure Request Form below to get access to JetStream2: We encourage the PI of the project to start here: (select Infrastructure Request Form and fill out details) Infrastructure Request Form Step 2: Submit the JetStream2 Access Request form for individual user accounts on JetStream2: JetStream2 Access Request Form Note: If you are unable to access the JetStream2 forms, please contact the CIROH team at [email protected] for assistance. ","version":"Next","tagName":"h3"},{"title":"Overview","type":0,"sectionRef":"#","url":"/docs/services/on-prem/","content":"","keywords":"","version":"Next"},{"title":"What are On-Premises services?","type":1,"pageTitle":"Overview","url":"/docs/services/on-prem/#what-is-on-premises-services","content":" At CIROH, On-Premises services establish a comprehensive platform that facilitates the exchange of research data and access to computational resources, and enables collaborative partnerships with academic peers locally and globally. A team of engineers and developers at the University of Alabama operates these services. Our overarching objective is to construct a dynamic "network of services" tailored to enable efficient organization, analysis, and dissemination of research data. 
Within On-Premises Research Computing, we structure our services around the following fundamental domains: Hydroinformatics: This domain integrates hydrology with information technology, focusing on the development and application of computational tools and techniques for data management, analysis, modeling, and decision support in hydrological studies. Data Science and Big Data: Utilizing data-driven approaches and big data analytics to process and analyze large volumes of hydrological data from various sources, including remote sensing, sensors, and numerical models, to extract meaningful insights and patterns. Numerical Modeling and Simulation: Developing and implementing computational models and simulation techniques to simulate hydrological processes and phenomena, such as rainfall-runoff modeling, groundwater flow, and water quality modeling, to support scientific research and water resources management. Geographic Information Systems (GIS): Applying GIS technology to hydrological research by integrating spatial data with hydrological models, analyzing spatial patterns, and visualizing hydrological processes to understand spatial relationships and make informed decisions in water management. Machine Learning and Artificial Intelligence: Employing machine learning algorithms and AI techniques to enhance hydrological modeling, prediction, and decision-making by learning from data patterns, optimizing model parameters, and improving the accuracy of hydrological forecasts and simulations. These domains reflect the interdisciplinary nature of hydrology and highlight the critical role of On-Premises high-performance computing (HPC) in advancing our understanding of water systems and addressing complex hydrological challenges. In essence, CIROH's Research Computing services are a cornerstone for fostering interdisciplinary collaboration, enabling data-driven research, and advancing scientific discovery in hydrology and related domains. ","version":"Next","tagName":"h2"},{"title":"Available clusters","type":1,"pageTitle":"Overview","url":"/docs/services/on-prem/#available-clusters","content":" 🗃️ Pantarhei 4 items 🗃️ Wukong 3 items ","version":"Next","tagName":"h2"},{"title":"Pantarhei","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Pantarhei/","content":"","keywords":"","version":"Next"},{"title":"Navigational Resources","type":1,"pageTitle":"Pantarhei","url":"/docs/services/on-prem/Pantarhei/#navigational-resources","content":" 📄️ System Architecture System Architecture of Pantarhei 📄️ Obtaining an Account Obtain an account on Pantarhei 📄️ Accessing the System Access of On-Premises Cluster Pantarhei 🗃️ Running Jobs 1 item ","version":"Next","tagName":"h3"},{"title":"Obtaining an Account","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Pantarhei/obtain","content":"","keywords":"","version":"Next"},{"title":"How to get access to Pantarhei?","type":1,"pageTitle":"Obtaining an Account","url":"/docs/services/on-prem/Pantarhei/obtain#how-to-get-access-to-pantarhei","content":" Submit the On-Premise Infrastructure Request Form below to get access to Pantarhei: We encourage the PI of the project to start here: (select On-Premises Infrastructure Request Form and fill out details) Infrastructure Request Form Submit the On-premise Access Request form for individual user accounts on Pantarhei: On-Premise Access Request Form note For UA users, please submit the On-Premises Access Request Form. 
For users outside UA, please start with the VPN Access Request Form, followed by the On-Premises Access Request Form. Note: If you are unable to access the On-Premise forms, please contact the CIROH team at [email protected] for assistance. ","version":"Next","tagName":"h3"},{"title":"Running Jobs","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Pantarhei/RunningJobs/","content":"","keywords":"","version":"Next"},{"title":"Navigational Resources","type":1,"pageTitle":"Running Jobs","url":"/docs/services/on-prem/Pantarhei/RunningJobs/#navigational-resources","content":" 📄️ Accessing the Compute Nodes ","version":"Next","tagName":"h3"},{"title":"CIROH CyberInfrastructure: Unleashing Potential in Hydrological Research","type":0,"sectionRef":"#","url":"/docs/services/intro","content":"CIROH CyberInfrastructure: Unleashing Potential in Hydrological Research The CIROH CyberInfrastructure empowers CIROH consortium members by providing a scalable, efficient, and user-friendly computing platform. We understand the challenges researchers face in managing computational resources, and the CIROH CyberInfrastructure alleviates these burdens by offering a suite of pre-configured environments and resources. Our team of engineers and developers meticulously optimizes both cloud-based (AWS and Google Cloud) and on-premise infrastructure (Pantarhei HPC cluster) to ensure unparalleled flexibility and scalability. This translates into a powerful platform that includes: Simplified Access: The CIROH CyberInfrastructure streamlines access to computational environments, eliminating the need for time-consuming installations and maintenance. Unmatched Flexibility: Our multi-cloud and on-premise infrastructure provides a diverse range of options to suit your specific research needs. Scalable Resources: The CIROH CyberInfrastructure readily scales to accommodate your growing data analysis and computational demands. Pre-Installed Software: Leverage pre-installed hydrological software packages to jumpstart your research endeavors. Streamlined Development: Benefit from the secure and rapid application development and deployment capabilities offered by the cloud. The CIROH CyberInfrastructure is meticulously designed to empower CIROH researchers and innovators to achieve groundbreaking results in hydrology. Join us and unlock the full potential of your research today! CIROH CyberInfrastructure Goals Promote reproducible hydrologic computing experiments with the NextGen Water Resource Modeling Framework; Provide support for, and reduce the barrier to entry for, performing NextGen-related experiments at various scales; and Accelerate the interconnection and integration of research products and hydroinformatics innovations from the various ongoing CIROH experiments. CIROH JupyterHub Service CIROH AWS Services Pantarhei HPC CIROH Google Cloud Services","keywords":"","version":"Next"},{"title":"Accessing the System","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Pantarhei/access","content":"","keywords":"","version":"Next"},{"title":"General overview","type":1,"pageTitle":"Accessing the System","url":"/docs/services/on-prem/Pantarhei/access#general-overview","content":" To connect to Pantarhei using SSH, you must follow two high-level steps: connect to the University of Alabama (UA) network, then connect via Secure Shell (SSH). Obtain Pantarhei Access: If you do not yet have access to the Pantarhei system, please follow the instructions on Obtaining an Account. 
","version":"Next","tagName":"h3"},{"title":"Connect to the Network","type":1,"pageTitle":"Accessing the System","url":"/docs/services/on-prem/Pantarhei/access#connect-to-the-network","content":" The University of Alabama (UA) requires users to use the Virtual Private Network (VPN) to connect to the UA campus network in order to connect to the Pantarhei cluster. tip For more information on setting up a VPN, please visit the Office of Information Technology (OIT) website. ","version":"Next","tagName":"h3"},{"title":"Connect to the SSH","type":1,"pageTitle":"Accessing the System","url":"/docs/services/on-prem/Pantarhei/access#connect-to-the-ssh","content":" MacOS and Linux / Windows Once you are connected to the VPN, follow these steps to access Pantarhei: Open a Terminal: Find Terminal on your local machine and open it. tip In MacOS, use Spotlight search (Command + Spacebar) and type Terminal to open a new terminal window. Connect via SSH: In the terminal, use the SSH command to connect to Pantarhei. ssh <USERNAME>@pantarhei.ua.edu note Replace <USERNAME> with your actual Pantarhei username. Enter your Pantarhei password. We hope this guide helps you efficiently utilize the Pantarhei HPC system for your research needs. Happy computing! ","version":"Next","tagName":"h3"},{"title":"Accessing the Compute Nodes","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Pantarhei/RunningJobs/computenode","content":"Accessing the Compute Nodes Pantarhei employs the Slurm Workload Manager for job scheduling and management. Using Slurm, a user requests resources and submits a job to a designated queue. The system then takes jobs from the queues, assigns the requisite compute nodes, and executes the submitted tasks. Although users typically access the Slurm job scheduler by SSH-ing to a Pantarhei login node, the recommended practice is to submit work through Slurm as a job rather than executing computationally intensive tasks directly on a login node. Given that all users share the login nodes, running anything beyond minimal test jobs can adversely affect the collective ability of users to effectively utilize Pantarhei resources. Pantarhei's framework is tailored to accommodate the moderate-scale computational and data requirements of the majority of CIROH users. Users with allocations can submit tasks to a diverse array of queues, each featuring distinct job size and walltime constraints. Dedicated sets of queues are allocated for CPU, GPU, and FPGA nodes, with typically shorter walltime and smaller job size limits translating to expedited turnaround times. Several additional considerations regarding Pantarhei queues merit attention: Pantarhei facilitates shared jobs, whereby multiple tasks can be executed on a single node. This approach enhances job throughput, maximizes overall system utilization, and fosters increased user accessibility to Pantarhei resources. Pantarhei accommodates long-running jobs, with run times extendable up to seven days for tasks utilizing up to 6 full nodes. The maximum permissible job size on Pantarhei is 240 cores. For tasks exceeding this threshold, users are advised to open a consulting ticket to discuss their needs further with Pantarhei support personnel.
","keywords":"","version":"Next"},{"title":"Wukong","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Wukong/","content":"","keywords":"","version":"Next"},{"title":"Navigational Resources","type":1,"pageTitle":"Wukong","url":"/docs/services/on-prem/Wukong/#navigational-resources","content":" 📄️ System Architecture System Architecture of Wukong 📄️ Obtaining an Account Obtain an account on Wukong 📄️ Accessing the System Access of On-Premises Cluster Wukong ","version":"Next","tagName":"h3"},{"title":"Hardware Specifications","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Pantarhei/sysinfo","content":"","keywords":"","version":"Next"},{"title":"Network","type":1,"pageTitle":"Hardware Specifications","url":"/docs/services/on-prem/Pantarhei/sysinfo#network","content":" All nodes are interconnected by a Mellanox InfiniBand switch with FDR 56 Gb/s links. ","version":"Next","tagName":"h3"},{"title":"CIROH Subdomain Request Form","type":0,"sectionRef":"#","url":"/docs/services/subdomain","content":"CIROH Subdomain Request Form To request a ciroh.org subdomain, please fill out the following form. CIROH Subdomain Request Form note Your request will be handled by DevOps staff at CIROH, and if approved, you will be given access to the research cloud or on-premise infrastructure.","keywords":"","version":"Next"},{"title":"Obtaining an Account","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Wukong/obtain","content":"","keywords":"","version":"Next"},{"title":"General overview","type":1,"pageTitle":"Obtaining an Account","url":"/docs/services/on-prem/Wukong/obtain#general-overview","content":" To obtain an account, users will need to follow these steps: Submit the On-Premises Infrastructure Request Form and describe full project information and resource requirements. warning This GitHub issue must be submitted by the Principal Investigator (PI) of the project. Hint In the GitHub issue, the PI should mention the following details: PI's Full Name, PI's Affiliated Institute, PI's Affiliated Email Address, and a list of students who will need access; please also provide the following details for each student: Full Name of student, GitHub User Name of student, Affiliated Email Address of student. Submit the On-premise Access Request form in CIROH Service Request. tip The administration of the Wukong cluster falls under the purview of the CIROH IT Computing group at the University of Alabama (UA). Consequently, individuals lacking UA credentials (MyBama ID, VPN User Name, and CWID) are obliged to complete the VPN Access Request form, followed by the On-premise Access Request form within CIROH Service Request. Account Creation Creation of accounts on the Wukong system necessitates submission of individual forms by each respective user. For instance, if a Principal Investigator (PI) submits a GitHub issue pertaining to a project and three students require access to the Wukong system for project-related work, three separate account creation forms must be submitted during this process. 
","version":"Next","tagName":"h3"},{"title":"System Architecture","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Wukong/sysinfo","content":"","keywords":"","version":"Next"},{"title":"Compute Node","type":1,"pageTitle":"System Architecture","url":"/docs/services/on-prem/Wukong/sysinfo#comupute-node","content":" Compute Node Specifications Model	Intel(R) Xeon(R) Platinum 8470 Number of nodes	1 Sockets per node	2 Cores per socket	52 Cores per node	208 Hardware threads per core	2 Hardware threads per node	416 Clock rate	2.00GHz (3.80GHz max boost) RAM	1024 GB DDR5-4800 Cache L1d cache: 4.9 MiB (104 instances) L1i cache: 3.3 MiB (104 instances) L2 cache: 208 MiB (104 instances) L3 cache: 210 MiB (2 instances) Local storage per node	56 TB Number of GPUs per node	8 GPU model	NVIDIA A100 SXM4 Memory per GPU	80 GB info Presently, Wukong operates as a stand-alone, self-contained server, meaning the compute node is also the login node. ","version":"Next","tagName":"h3"},{"title":"Network","type":1,"pageTitle":"System Architecture","url":"/docs/services/on-prem/Wukong/sysinfo#network","content":" All of Wukong's GPUs are fully interconnected with NVIDIA NVLink technology. ","version":"Next","tagName":"h3"},{"title":"Accessing the System","type":0,"sectionRef":"#","url":"/docs/services/on-prem/Wukong/access","content":"","keywords":"","version":"Next"},{"title":"General overview","type":1,"pageTitle":"Accessing the System","url":"/docs/services/on-prem/Wukong/access#general-overview","content":" To connect to Wukong using SSH, you must follow two high-level steps: connect to the University of Alabama (UA) network, then connect via Secure Shell (SSH). Obtain Wukong Access: If you do not yet have access to the Wukong system, please follow the instructions on Obtaining an Account. ","version":"Next","tagName":"h3"},{"title":"Connect to the Network","type":1,"pageTitle":"Accessing the System","url":"/docs/services/on-prem/Wukong/access#connect-to-the-network","content":" The University of Alabama (UA) requires users to use the Virtual Private Network (VPN) to connect to the UA campus network in order to connect to the Wukong cluster. tip For more information on setting up a VPN, please visit the Office of Information Technology (OIT) website. ","version":"Next","tagName":"h3"},{"title":"Connect to the SSH","type":1,"pageTitle":"Accessing the System","url":"/docs/services/on-prem/Wukong/access#connect-to-the-ssh","content":" MacOS and Linux / Windows Once you are connected to the VPN, follow these steps to access Wukong: Open a Terminal: Find Terminal on your local machine and open it. tip In MacOS, use Spotlight search (Command + Spacebar) and type Terminal to open a new terminal window. Connect via SSH: In the terminal, use the SSH command to connect to Wukong. ssh <USERNAME>@Wukong.ua.edu note Replace <USERNAME> with your actual Wukong username. Enter your Wukong password. We hope this guide helps you efficiently utilize the Wukong HPC system for your research needs. Happy computing! ","version":"Next","tagName":"h3"}],"options":{"languages":["en"],"id":"default"}}