From f58c59f131ac82fe259432d38d6b939ecd490543 Mon Sep 17 00:00:00 2001
From: Da Yin <42200725+WadeYin9712@users.noreply.github.com>
Date: Wed, 8 Nov 2023 15:14:29 -0800
Subject: [PATCH] Update index.html

---
 docs/index.html | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/docs/index.html b/docs/index.html
index 33ba7d9..37ee39a 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -3,7 +3,7 @@
 <head>
   <meta charset="utf-8">
   <meta name="viewport" content="width=device-width, initial-scale=1">
-  <title>Lumos: Language Agents with Unified Data Formats, Modular Design, and Open-Source LLMs</title>
+  <title>🪄 Lumos: Language Agents with Unified Data Formats, Modular Design, and Open-Source LLMs</title>
 
   <!-- Global site tag (gtag.js) - Google Analytics -->
   <script async src="https://www.googletagmanager.com/gtag/js?id=G-PYVRSFMDRL"></script>
@@ -369,7 +369,7 @@ <h2 class="title is-3">Comparison with Baseline Formulations</h2>
           <img src="static/images/lumos_results_2.png" class="center">
           </p>
           <p>
-            We compare <strong>Lumos</strong> formulation with other baseline formulations to train open-source agents. The baseline formulations are Vanilla Training, Chain-of-Thought Training, 
+            We compare the <strong>Lumos</strong> formulation with other baseline formulations for training open-source agents. The baseline formulations are Chain-of-Thought Training 
             and Integrated Agent Training. 
           </p>
           <p>
@@ -399,8 +399,7 @@ <h2 class="title is-3">Generalizability of Lumos</h2>
           </p>
           <p>
             We find that after the unified training, <strong>Lumos</strong> would have slightly higher
-            performance on web and complex QA tasks. We also observe that <strong>Lumos</strong> can bring an improvement over domain-specific 
-            agents 5-10 reward improvement, and also better performance than larger agents with 13B and 30B sizes.
+            performance on web and complex QA tasks. We also observe that <strong>Lumos</strong> brings a 5-10 reward improvement over domain-specific agents, and performs better than larger agents with 13B and 30B parameters.
           </p>
         </div>
       </div>
@@ -425,7 +424,7 @@ <h2 class="title is-3">Further Analysis on Annotations</h2>
             We also conduct deeper analysis about annotation quality and the choice of annotation formats. We answer the following questions: 
             <ul>
             <li><strong>Q1: How good is our converted training annotations?</strong></li>
-            <li><strong>Q2: Would it be better if we adopt low-level subgoals instead of our proposed high-level subgoals? </strong></li>
+            <li><strong>Q2: Would it be better to adopt low-level subgoals instead of our proposed high-level subgoals?</strong></li>
             </ul>
           </p>
           <p>