
Rename master branch to main
tbennun committed Oct 24, 2024
1 parent 4f56553 · commit 2bf537a
Showing 21 changed files with 50 additions and 50 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/fpga-ci.yml
@@ -2,11 +2,11 @@ name: FPGA Tests
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
 jobs:
   test-fpga:
6 changes: 3 additions & 3 deletions .github/workflows/general-ci.yml
@@ -2,11 +2,11 @@ name: General Tests
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
 jobs:
   test:
6 changes: 3 additions & 3 deletions .github/workflows/gpu-ci.yml
@@ -2,11 +2,11 @@ name: GPU Tests
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
 env:
   CUDACXX: /usr/local/cuda/bin/nvcc
6 changes: 3 additions & 3 deletions .github/workflows/heterogeneous-ci.yml
@@ -2,11 +2,11 @@ name: Heterogeneous Tests
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
 env:
   CUDA_HOME: /usr/local/cuda
6 changes: 3 additions & 3 deletions .github/workflows/pyFV3-ci.yml
@@ -2,11 +2,11 @@ name: NASA/NOAA pyFV3 repository build test
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
 defaults:
   run:
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -47,7 +47,7 @@ For automatic styling, we use the [yapf](https://github.com/google/yapf) file fo
 We use [pytest](https://www.pytest.org/) for our testing infrastructure. All tests under the `tests/` folder
 (and any subfolders within) are automatically read and run. The files must be under the right subfolder
 based on the component being tested (e.g., `tests/sdfg/` for IR-related tests), and must have the right
-suffix: either `*_test.py` or `*_cudatest.py`. See [pytest.ini](https://github.com/spcl/dace/blob/master/pytest.ini)
+suffix: either `*_test.py` or `*_cudatest.py`. See [pytest.ini](https://github.com/spcl/dace/blob/main/pytest.ini)
 for more information, and for the markers we use to specify software/hardware requirements.
 
 The structure of the test file must follow `pytest` standards (i.e., free functions called `test_*`), and
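Editorial illustration (not part of this diff): a minimal sketch of the test convention described above. The file name and contents are hypothetical, not code from the repository; the convention itself (a `*_test.py` file with free `test_*` functions, also runnable as a script) is what the excerpt documents.

```python
# Hypothetical tests/sdfg/scaling_test.py following the convention above.
import dace
import numpy as np


def test_scaling():
    @dace.program
    def scale(A: dace.float64[64], B: dace.float64[64]):
        B[:] = 2 * A

    a = np.random.rand(64)
    b = np.zeros(64)
    scale(a, b)
    assert np.allclose(b, 2 * a)


if __name__ == '__main__':
    test_scaling()
```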
18 changes: 9 additions & 9 deletions README.md
@@ -3,15 +3,15 @@
 [![FPGA Tests](https://github.com/spcl/dace/actions/workflows/fpga-ci.yml/badge.svg)](https://github.com/spcl/dace/actions/workflows/fpga-ci.yml)
 [![Documentation Status](https://readthedocs.org/projects/spcldace/badge/?version=latest)](https://spcldace.readthedocs.io/en/latest/?badge=latest)
 [![PyPI version](https://badge.fury.io/py/dace.svg)](https://badge.fury.io/py/dace)
-[![codecov](https://codecov.io/gh/spcl/dace/branch/master/graph/badge.svg)](https://codecov.io/gh/spcl/dace)
+[![codecov](https://codecov.io/gh/spcl/dace/branch/main/graph/badge.svg)](https://codecov.io/gh/spcl/dace)
 
 
 ![D](dace.svg)aCe - Data-Centric Parallel Programming
 =====================================================
 
 _Decoupling domain science from performance optimization._
 
-DaCe is a [fast](https://nbviewer.org/github/spcl/dace/blob/master/tutorials/benchmarking.ipynb) parallel programming
+DaCe is a [fast](https://nbviewer.org/github/spcl/dace/blob/main/tutorials/benchmarking.ipynb) parallel programming
 framework that takes code in Python/NumPy and other programming languages, and maps it to high-performance
 **CPU, GPU, and FPGA** programs, which can be optimized to achieve state-of-the-art. Internally, DaCe
 uses the Stateful DataFlow multiGraph (SDFG) *data-centric intermediate

@@ -61,13 +61,13 @@ be used in any C ABI compatible language (C/C++, FORTRAN, etc.).
 
 For more information on how to use DaCe, see the [samples](samples) or tutorials below:
 
-* [Getting Started](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/getting_started.ipynb)
-* [Benchmarks, Instrumentation, and Performance Comparison with Other Python Compilers](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/benchmarking.ipynb)
-* [Explicit Dataflow in Python](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/explicit.ipynb)
-* [NumPy API Reference](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/numpy_frontend.ipynb)
-* [SDFG API](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/sdfg_api.ipynb)
-* [Using and Creating Transformations](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/transformations.ipynb)
-* [Extending the Code Generator](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/codegen.ipynb)
+* [Getting Started](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/getting_started.ipynb)
+* [Benchmarks, Instrumentation, and Performance Comparison with Other Python Compilers](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/benchmarking.ipynb)
+* [Explicit Dataflow in Python](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/explicit.ipynb)
+* [NumPy API Reference](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/numpy_frontend.ipynb)
+* [SDFG API](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/sdfg_api.ipynb)
+* [Using and Creating Transformations](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/transformations.ipynb)
+* [Extending the Code Generator](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/codegen.ipynb)
 
 Publication
 -----------
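Editorial illustration (not part of this diff): a minimal sketch of the programming model the README excerpt describes, using the documented `@dace.program` decorator; the function, sizes, and values below are made up for illustration.

```python
# A tiny DaCe program: annotated NumPy-style Python, JIT-compiled on first call.
import dace
import numpy as np

N = dace.symbol('N')


@dace.program
def saxpy(a: dace.float64, x: dace.float64[N], y: dace.float64[N]):
    y[:] = a * x + y


x = np.random.rand(1024)
y = np.random.rand(1024)
saxpy(2.0, x, y)          # compiles to a CPU binary and runs
sdfg = saxpy.to_sdfg()    # the SDFG intermediate representation mentioned above
```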
2 changes: 1 addition & 1 deletion dace/frontend/python/README.md
@@ -4,7 +4,7 @@ The Python-Frontend aims to assist users in creating SDFGs from Python code
 relatively quickly. You may read a list of supported Python features
 [here](python_supported_features.md). The frontend supports also operations
 among DaCe arrays, in a manner similar to NumPy. A short tutorial can be bound
-[here](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/numpy_frontend.ipynb).
+[here](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/numpy_frontend.ipynb).
 Please note that the Python-Frontend is still in an early version. For any issues
 and feature requests, you can create an issue in the main DaCe project. You can
 also address any questions you have to [email protected]
6 changes: 3 additions & 3 deletions doc/codegen/codegen.rst
@@ -32,8 +32,8 @@ There are many features that are enabled by generating code from SDFGs:
 
 .. note::
 
-   You can also extend the code generator with new backends externally, see the `Customizing Code Generation tutorial <https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/codegen.ipynb>`_
-   and the `Tensor Core sample <https://github.com/spcl/dace/blob/master/samples/codegen/tensor_cores.py>`_ for more information.
+   You can also extend the code generator with new backends externally, see the `Customizing Code Generation tutorial <https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/codegen.ipynb>`_
+   and the `Tensor Core sample <https://github.com/spcl/dace/blob/main/samples/codegen/tensor_cores.py>`_ for more information.
 
 
 After the code is generated, ``compiler.py`` will invoke CMake on the build folder (e.g., ``.dacecache/<program>/build``)

@@ -145,7 +145,7 @@ necessary headers. The runtime is used for:
 match Python interfaces. This is especially useful to generate matching code when calling functions such as ``range``
 inside Tasklets.
 
-The folder also contains other files and helper functions, refer to its contents `on GitHub <https://github.com/spcl/dace/tree/master/dace/runtime/include/dace>`_
+The folder also contains other files and helper functions, refer to its contents `on GitHub <https://github.com/spcl/dace/tree/main/dace/runtime/include/dace>`_
 for more information.
 
 
8 changes: 4 additions & 4 deletions doc/extensions/extensions.rst
@@ -17,10 +17,10 @@ The three key mechanisms of extensibility are class inheritance, :ref:`replaceme
 
 For more examples of how to extend DaCe, see the following resources:
 
-* Library nodes: `Einsum specialization library node <https://github.com/spcl/dace/blob/master/dace/libraries/blas/nodes/einsum.py>`_
-* Transformations: `Using and Creating Transformations <https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/transformations.ipynb>`_
-* Code generators: `Extending the Code Generator <https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/codegen.ipynb>`_
-* Frontend extensions (enumerations and replacements): `Tensor Core code sample <https://github.com/spcl/dace/blob/master/samples/codegen/tensor_cores.py>`_
+* Library nodes: `Einsum specialization library node <https://github.com/spcl/dace/blob/main/dace/libraries/blas/nodes/einsum.py>`_
+* Transformations: `Using and Creating Transformations <https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/transformations.ipynb>`_
+* Code generators: `Extending the Code Generator <https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/codegen.ipynb>`_
+* Frontend extensions (enumerations and replacements): `Tensor Core code sample <https://github.com/spcl/dace/blob/main/samples/codegen/tensor_cores.py>`_
 
 .. .. toctree
 .. :maxdepth: 1
4 changes: 2 additions & 2 deletions doc/frontend/daceprograms.rst
@@ -9,7 +9,7 @@ This includes standard Python code (loops, functions, context managers, etc.), b
 and (most) functions.
 
 .. note::
-    For more examples, see the `Getting Started <https://nbviewer.org/github/spcl/dace/blob/master/tutorials/getting_started.ipynb>`_
+    For more examples, see the `Getting Started <https://nbviewer.org/github/spcl/dace/blob/main/tutorials/getting_started.ipynb>`_
     Jupyter Notebook tutorial.
 
 Usage

@@ -349,7 +349,7 @@ Explicit Dataflow Mode
 
 
 The DaCe Python frontend allows users to write SDFG tasklets and memlets directly in Python code.
-For more example uses, see the `Explicit Dataflow <https://nbviewer.org/github/spcl/dace/blob/master/tutorials/explicit.ipynb>`_
+For more example uses, see the `Explicit Dataflow <https://nbviewer.org/github/spcl/dace/blob/main/tutorials/explicit.ipynb>`_
 tutorial.
 
 Memlets
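Editorial illustration (not part of this diff): a minimal sketch of the explicit dataflow mode referenced above, using the documented `dace.map`, `dace.tasklet`, `<<`, and `>>` constructs; the program itself is hypothetical.

```python
# A map scope containing a tasklet that reads A[i] via an input memlet
# and writes B[i] via an output memlet.
import dace
import numpy as np

N = dace.symbol('N')


@dace.program
def double(A: dace.float64[N], B: dace.float64[N]):
    for i in dace.map[0:N]:
        with dace.tasklet:
            a << A[i]   # input memlet
            b >> B[i]   # output memlet
            b = 2 * a


A = np.random.rand(32)
B = np.zeros(32)
double(A, B)
assert np.allclose(B, 2 * A)
```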
2 changes: 1 addition & 1 deletion doc/ide/cli.rst
@@ -123,4 +123,4 @@ nothing is given, the tool will time the entire execution of each program using
 +---------------------------+--------------+-----------------------------------------------------------+
 
 For a more detailed guide on how to profile SDFGs and work with the resulting data, see :ref:`profiling` and
-`this tutorial <https://nbviewer.org/github/spcl/dace/blob/master/tutorials/benchmarking.ipynb#Benchmarking-and-Instrumentation-API>`_.
+`this tutorial <https://nbviewer.org/github/spcl/dace/blob/main/tutorials/benchmarking.ipynb#Benchmarking-and-Instrumentation-API>`_.
4 changes: 2 additions & 2 deletions doc/optimization/gpu.rst
@@ -170,7 +170,7 @@ Optimizing GPU SDFGs
 
 When optimizing GPU SDFGs, there are a few things to keep in mind. Below is a non-exhaustive list of common GPU optimization
 practices and how DaCe helps achieve them. To see some of these optimizations in action, check out the ``optimize_for_gpu``
-function in the `Matrix Multiplication optimization example <https://github.com/spcl/dace/blob/master/samples/optimization/matmul.py>`_.
+function in the `Matrix Multiplication optimization example <https://github.com/spcl/dace/blob/main/samples/optimization/matmul.py>`_.
 
 * **Minimize host<->GPU transfers**: It is important to keep as much data as possible on the GPU across the application.
   This is especially true for data that is accessed frequently, such as data that is used in a loop.

@@ -234,7 +234,7 @@ function in the `Matrix Multiplication optimization example <https://github.com/
 
 * **Specialized hardware**: Specialized hardware, such as NVIDIA Tensor Cores or AMD's matrix instructions, can
   significantly improve performance. DaCe will not automatically emit such instructions, but you can use such operations
-  in your code. See the `Tensor Core code sample <https://github.com/spcl/dace/blob/master/samples/codegen/tensor_cores.py>`_
+  in your code. See the `Tensor Core code sample <https://github.com/spcl/dace/blob/main/samples/codegen/tensor_cores.py>`_
   to see how to make use of such units.
 
 * **Advanced GPU Map schedules**: DaCe provides two additional built-in map schedules: :class:`~dace.dtypes.ScheduleType.GPU_ThreadBlock_Dynamic`
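Editorial illustration (not part of this diff): a minimal sketch, assuming a CUDA-capable machine, of moving a program to the GPU before applying the optimizations listed above; the toy program is made up.

```python
# Turn a toy program's SDFG into a GPU version; schedule tuning would follow.
import dace

N = dace.symbol('N')


@dace.program
def add_one(A: dace.float64[N], B: dace.float64[N]):
    B[:] = A + 1


sdfg = add_one.to_sdfg()
sdfg.apply_gpu_transformations()  # places maps and transient arrays on the GPU
```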
6 changes: 3 additions & 3 deletions doc/optimization/optimization.rst
@@ -36,9 +36,9 @@ tunes the data layout of arrays.
 
 The following resources are available to help you optimize your SDFG:
 
-* Using transformations: `Using and Creating Transformations <https://nbviewer.org/github/spcl/dace/blob/master/tutorials/transformations.ipynb>`_
-* Creating optimized schedules that can match optimized libraries: `Matrix multiplication CPU and GPU optimization example <https://github.com/spcl/dace/blob/master/samples/optimization/matmul.py>`_
-* Auto-tuning and instrumentation: `Tuning data layouts sample <https://github.com/spcl/dace/blob/master/samples/optimization/tuning.py>`_
+* Using transformations: `Using and Creating Transformations <https://nbviewer.org/github/spcl/dace/blob/main/tutorials/transformations.ipynb>`_
+* Creating optimized schedules that can match optimized libraries: `Matrix multiplication CPU and GPU optimization example <https://github.com/spcl/dace/blob/main/samples/optimization/matmul.py>`_
+* Auto-tuning and instrumentation: `Tuning data layouts sample <https://github.com/spcl/dace/blob/main/samples/optimization/tuning.py>`_
 
 The following subsections provide more information on the different types of optimization methods:
 
4 changes: 2 additions & 2 deletions doc/optimization/profiling.rst
@@ -5,7 +5,7 @@ Profiling and Instrumentation
 
 .. note::
 
-   For more information and examples, see the `Benchmarking and Instrumentation <https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/benchmarking.ipynb>`_ tutorial.
+   For more information and examples, see the `Benchmarking and Instrumentation <https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/benchmarking.ipynb>`_ tutorial.
 
 Simple profiling
 ----------------

@@ -120,7 +120,7 @@ There are more instrumentation types available, such as fine-grained GPU kernel
 Instrumentation can also collect performance counters on CPUs and GPUs using `LIKWID <https://github.com/RRZE-HPC/likwid>`_.
 The :class:`~dace.dtypes.InstrumentationType.LIKWID_Counters` instrumentation type can be configured to collect
 a wide variety of performance counters on CPUs and GPUs. An example use can be found in the
-`LIKWID instrumentation code sample <https://github.com/spcl/dace/blob/master/samples/instrumentation/matmul_likwid.py>`_.
+`LIKWID instrumentation code sample <https://github.com/spcl/dace/blob/main/samples/instrumentation/matmul_likwid.py>`_.
 
 
 Instrumentation file format
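Editorial illustration (not part of this diff): a minimal sketch of the instrumentation workflow the page above documents, attaching a timer to an SDFG so that a report is written under `.dacecache/<program>/perf`; the program and sizes are illustrative.

```python
# Attach a coarse timer to a whole SDFG and run it once to produce a report.
import dace
import numpy as np

N = dace.symbol('N')


@dace.program
def square(A: dace.float64[N], B: dace.float64[N]):
    B[:] = A * A


sdfg = square.to_sdfg()
sdfg.instrument = dace.InstrumentationType.Timer

A = np.random.rand(1024)
B = np.zeros(1024)
sdfg(A=A, B=B, N=1024)  # timing report written to .dacecache/<program>/perf
```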
2 changes: 1 addition & 1 deletion doc/optimization/vscode.rst
@@ -145,5 +145,5 @@ transformations |add-xform-by-folder-btn|. The latter recursively traverses the
 for any Python source code files and attempts to load each one as a transformation.
 
 For more information on how to use and author data-centric transformations,
-see :ref:`transforming` and the `Using and Creating Transformations <https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/transformations.ipynb>`_
+see :ref:`transforming` and the `Using and Creating Transformations <https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/transformations.ipynb>`_
 tutorial.
2 changes: 1 addition & 1 deletion doc/sdfg/ir.rst
@@ -627,7 +627,7 @@ override default implementations for a library node type, or for an entire libra
 Internally, an expansion is a subclass of :class:`~dace.transformation.transformation.ExpandTransformation`. It is
 responsible for creating a new SDFG that implements the library node, and for connecting the inputs and outputs of the
 library node to the new SDFG. An example of such an expansion is Einstein summation specialization
-(`see full file <https://github.com/spcl/dace/blob/master/dace/libraries/blas/nodes/einsum.py>`_):
+(`see full file <https://github.com/spcl/dace/blob/main/dace/libraries/blas/nodes/einsum.py>`_):
 
 .. code-block:: python
2 changes: 1 addition & 1 deletion doc/sdfg/transformations.rst
@@ -23,7 +23,7 @@ All transformations extend the :class:`~dace.transformation.transformation.Trans
 
 Transformations can have properties and those can be used when applying them: for example, tile sizes in :class:`~dace.transformation.dataflow.tiling.MapTiling`.
 
-For more information on how to use and author data-centric transformations, see the `Using and Creating Transformations <https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/transformations.ipynb>`_
+For more information on how to use and author data-centric transformations, see the `Using and Creating Transformations <https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/transformations.ipynb>`_
 tutorial.
 
 
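Editorial illustration (not part of this diff): a hedged sketch of applying a transformation with properties, as mentioned above for `MapTiling`; the program and the tile sizes are illustrative values, not a recommendation.

```python
# Apply MapTiling to a 2D map, passing its tile_sizes property as an option.
import dace
from dace.transformation.dataflow import MapTiling

N = dace.symbol('N')


@dace.program
def scale2d(A: dace.float64[N, N], B: dace.float64[N, N]):
    B[:] = 2 * A


sdfg = scale2d.to_sdfg()
applied = sdfg.apply_transformations(MapTiling, options={'tile_sizes': (32, 32)})
print(f'MapTiling applied {applied} time(s)')
```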
2 changes: 1 addition & 1 deletion doc/setup/integration.rst
@@ -79,7 +79,7 @@ you to call the SDFG's entry point function, perform basic type checking, and ar
 Python callback to function pointer, etc.).
 
 Since the compiled SDFG is a low-level interface, it is much faster to call than the Python interface.
-`We show this behavior in the Benchmarking tutorial <https://nbviewer.org/github/spcl/dace/blob/master/tutorials/benchmarking.ipynb>`_.
+`We show this behavior in the Benchmarking tutorial <https://nbviewer.org/github/spcl/dace/blob/main/tutorials/benchmarking.ipynb>`_.
 However, it requires caution as opposed to calling the ``@dace.program`` or the ``SDFG`` object because:
 
 * Each array return value is represented internally as a single array (not reallocated every call) and will be
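Editorial illustration (not part of this diff): a hedged sketch of obtaining and calling a compiled SDFG directly, subject to the caveats listed above; names, sizes, and values are illustrative.

```python
# Compile once, then call the low-level entry point repeatedly with keyword arguments.
import dace
import numpy as np

N = dace.symbol('N')


@dace.program
def triple(A: dace.float64[N], B: dace.float64[N]):
    B[:] = 3 * A


csdfg = triple.to_sdfg().compile()   # compiled SDFG object
A = np.random.rand(1024)
B = np.zeros(1024)
csdfg(A=A, B=B, N=1024)              # fewer safety checks; caller must match types and shapes
```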
4 changes: 2 additions & 2 deletions doc/setup/quickstart.rst
@@ -36,5 +36,5 @@ From here on out, you can optimize (:ref:`interactively <vscode>`, :ref:`program
 your code.
 
 
-For more examples of how to use DaCe, see the `samples <https://github.com/spcl/dace/tree/master/samples>`_ and
-`tutorials <https://github.com/spcl/dace/tree/master/tutorials>`_ folders on GitHub.
+For more examples of how to use DaCe, see the `samples <https://github.com/spcl/dace/tree/main/samples>`_ and
+`tutorials <https://github.com/spcl/dace/tree/main/tutorials>`_ folders on GitHub.
2 changes: 1 addition & 1 deletion tutorials/benchmarking.ipynb
@@ -1260,7 +1260,7 @@
 "source": [
 "### Instrumentation API\n",
 "\n",
-"The Instrumentation API allows more fine-grained control over measuring program metrics. It creates a JSON report in `.dacecache/<program>/perf`, which can be obtained with the API or viewed with any Chrome Tracing capable viewer. More usage information and how to use the API to tune programs can be found in the [program tuning sample](https://github.com/spcl/dace/blob/master/samples/optimization/tuning.py)."
+"The Instrumentation API allows more fine-grained control over measuring program metrics. It creates a JSON report in `.dacecache/<program>/perf`, which can be obtained with the API or viewed with any Chrome Tracing capable viewer. More usage information and how to use the API to tune programs can be found in the [program tuning sample](https://github.com/spcl/dace/blob/main/samples/optimization/tuning.py)."
 ]
 },
 {
