diff --git a/README.md b/README.md
index be0c8de..d1616c6 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@

Official implementation of Conflict-Free Inverse Gradients Method

-Towards Conflict-free Training for everything!
+Towards Conflict-free Training for Everything and Everyone!

[📄 Research Paper]•[📖 Documentation & Examples]

@@ -14,7 +14,7 @@

 * **What is the ConFIG method?**

-​ The conFIG method is a generic method for optimization problems involving **multiple loss terms** (e.g., Multi-task Learning, Continuous Learning, and Physics Informed Neural Networks). It prevents the optimization from getting stuck into a local minimum of a specific loss term due to the conflict between losses. On the contrary, it leads the optimization to the **shared minimal of all losses** by providing a **conflict-free update direction.**
+​ The ConFIG method is a generic method for optimization problems involving **multiple loss terms** (e.g., Multi-task Learning, Continual Learning, and Physics-Informed Neural Networks). It prevents the optimization from getting stuck in a local minimum of a specific loss term due to conflicts between losses. Instead, it guides the optimization to the **shared minimum of all losses** by providing a **conflict-free update direction.**

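For intuition, the conflict-free direction described in the README text above can be sketched numerically: find a vector whose dot product with every *normalized* loss gradient is equal and positive (via the pseudoinverse of the stacked unit gradients), then rescale it by the summed projections. This is a hedged NumPy sketch of that idea, not the implementation shipped in this repository; `conflict_free_direction` is a hypothetical helper name.

```python
import numpy as np

def conflict_free_direction(grads):
    """Sketch of a conflict-free update direction for a list of 1-D gradients."""
    # Stack the unit-length gradients into an (m, n) matrix.
    units = np.stack([g / np.linalg.norm(g) for g in grads])
    # Solve units @ d = 1 with the pseudoinverse: d then has the SAME
    # (positive) dot product with every normalized gradient.
    d = np.linalg.pinv(units) @ np.ones(len(grads))
    d /= np.linalg.norm(d)
    # Rescale by the summed projections so the step keeps a useful magnitude.
    return sum(g @ d for g in grads) * d
```

With two conflicting gradients, e.g. `g1 = (1, 0)` and `g2 = (-0.5, 1)`, the returned direction forms an equal positive angle with both normalized gradients, so neither loss increases to first order.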
@@ -35,7 +35,7 @@

 Then the dot product between $\boldsymbol{g}_{ConFIG}$ and each loss-specific gradient is always positive and equal, i.e., $`\boldsymbol{g}_{i}^{\top}\boldsymbol{g}_{ConFIG}=\boldsymbol{g}_{j}^{\top}\boldsymbol{g}_{ConFIG}> 0 \quad \forall i,j \in [1,m]`$​.

-* **Is the ConFIG Computationally expensive?**
+* **Is the ConFIG method computationally expensive?**

 ​ Like many other gradient-based methods, ConFIG needs to calculate each loss's gradient in every optimization iteration, which could be computationally expensive when the number of losses increases. However, we also introduce a **momentum-based method** where we can reduce the computational cost **close to or even lower than a standard optimization procedure** with a slight degeneration in accuracy. This momentum-based method is also applied to another gradient-based method.

diff --git a/docs/assets/config_white.png b/docs/assets/config_white.png
new file mode 100644
index 0000000..7684ca7
Binary files /dev/null and b/docs/assets/config_white.png differ
diff --git a/docs/assets/config_white.svg b/docs/assets/config_white.svg
index 9d3cb3f..3c897d9 100644
--- a/docs/assets/config_white.svg
+++ b/docs/assets/config_white.svg
@@ -6,9 +6,9 @@
    version="1.1"
    id="svg171"
    sodipodi:docname="config_white.svg"
-   inkscape:version="1.2.2 (732a01da63, 2022-12-09)"
+   inkscape:version="1.3.2 (1:1.3.2+202311252150+091e20ef0f)"
    xml:space="preserve"
-   inkscape:export-filename="config.png"
+   inkscape:export-filename="config_white.png"
    inkscape:export-xdpi="96"
    inkscape:export-ydpi="96"
    xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
@@ -27,14 +27,14 @@
    inkscape:pagecheckerboard="0"
    inkscape:deskcolor="#d1d1d1"
    inkscape:document-units="pt"
-   inkscape:zoom="1.2245469"
-   inkscape:cx="331.95952"
-   inkscape:cy="12.249429"
-   inkscape:window-width="2560"
-   inkscape:window-height="1494"
-   inkscape:window-x="-11"
-   inkscape:window-y="-11"
-   inkscape:window-maximized="1"
+   inkscape:zoom="2.4490938"
+   inkscape:cx="115.55294"
+   inkscape:cy="91.666559"
+   inkscape:window-width="1464"
+   inkscape:window-height="773"
+   inkscape:window-x="1472"
+   inkscape:window-y="449"
+   inkscape:window-maximized="0"
    inkscape:current-layer="svg171"
    showgrid="false" />
+ + + + + + + + + + + + + + + + + + + + + Open Locally + Open Locally + +
diff --git a/docs/examples/mtl_toy.ipynb b/docs/examples/mtl_toy.ipynb
index 1a6100b..9896612 100644
--- a/docs/examples/mtl_toy.ipynb
+++ b/docs/examples/mtl_toy.ipynb
@@ -8,9 +8,8 @@
     "\n",
     "Here, we would like to show a classic and interesting toy example of multi-task learning (MTL). \n",
     "\n",
-    "\n",
-    " \"Open\n",
-    "\n",
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/ConFIG/blob/main/docs/examples/mtl_toy.ipynb)\n",
+    "[![Open Locally](../assets/download.svg)](https://github.com/tum-pbs/ConFIG/blob/main/docs/examples/mtl_toy.ipynb)\n",
     "\n",
     "In this example, there are two tasks represented by two loss functions, which are"
    ]
@@ -439,7 +438,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The results are similar to ConFIG, but it needs more iterations to converge. You may notice that we give an additional 1000 optimization iterations for the momentum version. This is because we only update a single gradient direction every iteration, so it usually requires more iterations to get a similar or better performance than the ConFIG method. You can have a try by yourself to see the optimization trajectory. The acceleration of the momentum version is not so significant in this case since the backpropagation of gradients is not the main bottleneck of the optimization."
+    "The results are similar to ConFIG, but the momentum version needs more iterations to converge. You may notice that we give it an additional 1000 optimization iterations. This is because only a single loss gradient is updated in each iteration, so it usually takes more iterations to reach similar or better performance than the ConFIG method. You can try it yourself and inspect the optimization trajectory. The acceleration of the momentum version is not significant in this case, since backpropagation of the gradients is not the main bottleneck of the optimization.\n",
+    "\n",
+    "Check out the MTL experiment from our research paper [here](https://github.com/tum-pbs/ConFIG/tree/main/experiments/MTL)."
    ]
   }
 ],
diff --git a/docs/examples/pinn_burgers.ipynb b/docs/examples/pinn_burgers.ipynb
index 83cec47..f70b07a 100644
--- a/docs/examples/pinn_burgers.ipynb
+++ b/docs/examples/pinn_burgers.ipynb
@@ -9,9 +9,8 @@
     "\n",
     "In this example, we would like to show you another example of how to use ConFIG method to train a physics informed neural network (PINN) for solving a PDE. \n",
     "\n",
-    "\n",
-    " \"Open\n",
-    "\n",
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/ConFIG/blob/main/docs/examples/pinn_burgers.ipynb)\n",
+    "[![Open Locally](../assets/download.svg)](https://github.com/tum-pbs/ConFIG/blob/main/docs/examples/pinn_burgers.ipynb)\n",
     "\n",
     "In this example, we will solve the 1D Burgers' equation:\n",
     "\n",
@@ -471,7 +470,9 @@
    "id": "bb56fffe",
    "metadata": {},
    "source": [
-    "As the result shows, both the training speed and test accuracy are improved by using the momentum version of the ConFIG method. Please note that the momentum version does not always guarantee a better performance than the non-momentum version. The main feature of the momentum version is the acceleration, as it only requires a single gradient update in each iteration. We usually will just give the momentum version more training epochs to improve the performance further."
+    "As the results show, both the training speed and the test accuracy are improved by the momentum version of the ConFIG method. Please note that the momentum version does not always guarantee better performance than the non-momentum version. Its main feature is acceleration, as it only requires a single gradient update in each iteration. We usually just give the momentum version more training epochs to improve the performance further.\n",
+    "\n",
+    "Check out the PINN experiment from our research paper [here](https://github.com/tum-pbs/ConFIG/tree/main/experiments/PINN)."
    ]
   }
 ],
diff --git a/docs/index.md b/docs/index.md
index b8f7a23..75a7359 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -8,7 +8,7 @@ hide:

-Towards Conflict-free Training for everything!
+Towards Conflict-free Training for Everything and Everyone!
[ 📄 Research Paper ]•[ GitHub Repository ]

@@ -20,7 +20,7 @@ hide:

 * **What is the ConFIG method?**

-​ The conFIG method is a generic method for optimization problems involving **multiple loss terms** (e.g., Multi-task Learning, Continuous Learning, and Physics Informed Neural Networks). It prevents the optimization from getting stuck into a local minimum of a specific loss term due to the conflict between losses. On the contrary, it leads the optimization to the **shared minimal of all losses** by providing a **conflict-free update direction.**
+​ The ConFIG method is a generic method for optimization problems involving **multiple loss terms** (e.g., Multi-task Learning, Continual Learning, and Physics-Informed Neural Networks). It prevents the optimization from getting stuck in a local minimum of a specific loss term due to conflicts between losses. Instead, it guides the optimization to the **shared minimum of all losses** by providing a **conflict-free update direction.**

@@ -41,7 +41,7 @@

 $$

 Then the dot product between $\mathbf{g}_{ConFIG}$ and each loss-specific gradient is always positive and equal, i.e., $\mathbf{g}_{i}^{\top}\mathbf{g}_{ConFIG}=\mathbf{g}_{j}^{\top}\mathbf{g}_{ConFIG} > 0 \quad \forall i,j \in [1,m]$​.

-* **Is the ConFIG Computationally expensive?**
+* **Is the ConFIG method computationally expensive?**

 ​ Like many other gradient-based methods, ConFIG needs to calculate each loss's gradient in every optimization iteration, which could be computationally expensive when the number of losses increases. However, we also introduce a **momentum-based method** where we can reduce the computational cost **close to or even lower than a standard optimization procedure** with a slight degeneration in accuracy. This momentum-based method is also applied to another gradient-based method.

diff --git a/mkdocs.yml b/mkdocs.yml
index a08bdc8..79d5ab9 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -15,6 +15,7 @@ theme:
     - toc.integrate # Table of contents is integrated on the left; does not appear separately on the right.
     - header.autohide # header disappears as you scroll
     - navigation.top
+    - navigation.footer
   palette:
     - scheme: default
       primary: brown
@@ -30,7 +31,7 @@ theme:
         name: Switch to light mode
   icon:
     repo: fontawesome/brands/github # GitHub logo in top right
-  logo: assets/config_white.svg
+  logo: assets/config_white.png
   favicon: assets/config_colorful.svg
 
 extra:
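The momentum-based variant discussed in the notebook text above — backpropagating only one loss per iteration and combining exponential moving averages of all loss gradients into a conflict-free direction — can be sketched as follows. This is a hedged illustration, not the API of this repository; `momentum_config_step` is a hypothetical helper name, and the combination rule is the same equal-dot-product construction stated in the FAQ.

```python
import numpy as np

def momentum_config_step(grad_fns, momenta, step, beta=0.9):
    """Sketch of a momentum-style conflict-free step.

    grad_fns: one gradient-evaluating callable per loss.
    momenta:  running EMA estimate of each loss gradient (mutated in place).
    """
    # Backpropagate only ONE loss per iteration (round-robin): this is where
    # the cost saving over evaluating all m gradients comes from.
    i = step % len(grad_fns)
    momenta[i] = beta * momenta[i] + (1.0 - beta) * grad_fns[i]()
    # Combine the (partly stale) estimates into one direction that has equal
    # positive dot products with all normalized momentum estimates.
    units = np.stack([m / (np.linalg.norm(m) + 1e-12) for m in momenta])
    d = np.linalg.pinv(units) @ np.ones(len(momenta))
    d /= np.linalg.norm(d) + 1e-12
    return sum(m @ d for m in momenta) * d
```

Because each call evaluates only one of the `m` gradients, the per-iteration cost stays close to a standard single-loss optimization step, at the price of combining slightly stale gradient estimates — which matches the trade-off described in the FAQ and the notebook discussions.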