diff --git a/.DS_Store b/.DS_Store index f0ba327..413c605 100644 Binary files a/.DS_Store and b/.DS_Store differ diff --git a/404.html b/404.html index aa7ee22..fd7cdbd 100644 --- a/404.html +++ b/404.html @@ -5,13 +5,13 @@ Page Not Found | Symphony - +
Skip to main content

Page Not Found

We could not find what you were looking for.

Please contact the owner of the site that linked you to the original URL and let them know their link is broken.

- + \ No newline at end of file diff --git a/api/client.html b/api/client.html index 5ce1d0b..fa89219 100644 --- a/api/client.html +++ b/api/client.html @@ -5,13 +5,13 @@ Symphony Client | Symphony - +
Skip to main content

Symphony Client

The Symphony client provides a set of intuitive APIs to interact with the Symphony runtime.


Getting Started

Install the Symphony client:

npm install @symphony-rtc/client

Require the client module in your application:

import { SymphonyClient } from '@symphony-rtc/client'

Instantiate a new Symphony Client that can be used to connect to a Room:

const client = new SymphonyClient(websocketUrl)

API

SymphonyClient

enter(roomId)

Enters the room with the specified id and returns a Room object.

leave(roomId)

Leaves the room with the specified id.
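
As a rough sketch of how these calls fit together with the client created above; the room id is a placeholder, and the synchronous return shown simply follows the description of enter():

const room = client.enter('design-room')
// ... collaborate through the returned Room object ...
client.leave('design-room')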


Room

A Room object is returned when calling Client.enter().

bundle(callback)

Merges operations in the callback function into a single operation.
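
For example, a sketch of merging several changes into a single operation; the todos list uses newList, documented below:

const todos = room.newList('todos')

room.bundle(() => {
  todos.push('write docs')
  todos.push('review docs')
})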

getClientId()

Returns the id of the client.

getOthers()

Returns all other users in the Room.

getRoomId()

Returns the Room id.

newList(id)

Returns a new top-level SyncedList.

newMap(id)

Returns a new top-level SyncedMap.

newNestedList()

Returns a new SyncedList that can be nested within another synced type.

newNestedMap()

Returns a new SyncedMap that can be nested within another synced type.
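
A sketch of composing nested types inside a top-level container; the ids and keys are illustrative:

const board = room.newMap('board')
const column = room.newNestedList()

column.push('first card')
board.set('todo', column)   // nest the list inside the map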

subscribe(subscribedItem, callback)

Subscribes to updates for an item. If the subscribedItem is a SyncedList or SyncedMap, the provided callback is executed whenever that shared type changes. If the subscribedItem is 'others', the provided callback is executed whenever another client's presence changes.
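
For example, a sketch covering both forms; render and renderPresence stand in for application code, and the shapes list is illustrative:

const shapes = room.newList('shapes')

room.subscribe(shapes, () => {
  // re-render whenever the shared list changes
  render(shapes.toArray())
})

room.subscribe('others', () => {
  // redraw avatars/cursors whenever another client's presence changes
  renderPresence(room.getOthers())
})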

unsubscribe(subscribedItem)

Stops subscribing to the SyncedList or SyncedMap passed as an argument.

updatePresence(props)

Updates the presence of the client. Properties passed as arguments will be updated, while other properties of presence will remain unchanged.
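
A sketch of broadcasting a cursor position as presence; the cursor property is an application-level choice rather than part of the API:

window.addEventListener('mousemove', (event) => {
  // only the cursor property is updated; other presence properties remain unchanged
  room.updatePresence({ cursor: { x: event.clientX, y: event.clientY } })
})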


History

canRedo()

Checks whether there are any operations to redo, and returns a boolean.

canUndo()

Checks whether there are any operations to undo, and returns a boolean.

clear()

Removes all operations from the history.

mergeAll()

Merges all subsequent operations into a single operation until stopMergingAll is called.

redo()

Redoes the last operation by the client.

stopCaptureTimeout()

Prevents the next operation from being merged with the previous one based on captureTimeout.

stopMergingAll()

Stops merging operations; subsequent operations will be treated as separate.

undo()

Undoes the last operation by the client.
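
Putting the History API together: a sketch that assumes a SyncedList created with room.newList (newHistory itself is documented under SyncedList below):

const notes = room.newList('notes')
const history = notes.newHistory()     // captureTimeout defaults to 0, so operations are not merged

notes.push('first draft')
notes.push('second draft')

if (history.canUndo()) history.undo()  // undoes the most recent push
if (history.canRedo()) history.redo()  // reapplies it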


SyncedList

The SyncedList is a shared type that is similar to the JavaScript Array.

length

Returns the number of elements of the SyncedList.

clear()

Removes all elements from the SyncedList.

delete(index, length)

Removes length elements from the SyncedList starting at the specified index.

every(callback)

Checks whether all elements in the SyncedList pass the test implemented by the provided function, and returns a Boolean value.

filter(callback)

Returns a new array containing all elements in the SyncedList that pass the test implemented by the provided function.

find(callback)

Returns the first element in the SyncedList that satisfies the provided testing function.

forEach(callback)

Calls the provided function once for each element of the SyncedList.

get(index)

Returns the element at the specified index of the SyncedList.

indexOf(element)

Returns the first index at which a given element can be found in the SyncedList, or -1 if not present.

insert(index, ...elements)

Inserts one or more elements at the specified index.

lastIndexOf(element)

Returns the index of the last occurrence of the specified element in the SyncedList, or -1 if not present.

map(callback)

Returns a new array containing the results of calling the provided function on every element of the SyncedList.

move(oldIndex, newIndex)

Moves the element at a specified index of the SyncedList to a new index.

newHistory(captureTimeout=0)

Returns a new History object that can be used to undo/redo the current client's changes.

push(...elements)

Adds one or more elements to the SyncedList and returns the new length of the SyncedList.

set(index, element)

Replaces the element at the specified index of the SyncedList with the provided element.

slice(start, end)

Returns an array containing the elements of the SyncedList from start to end (non-inclusive).

some(callback)

Checks whether at least one element in the SyncedList passes the test implemented by the provided function, and returns a Boolean value.

toArray()

Returns an array containing all the elements of the SyncedList.

toJSON()

Returns a JSON representation of the SyncedList.

unshift(...elements)

Adds one or more elements to the beginning of the SyncedList and returns the new length of the SyncedList.
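
Taken together, a short sketch of a SyncedList in use; the list id and values are illustrative:

const tasks = room.newList('tasks')

tasks.push('write spec', 'review spec')   // length is now 2
tasks.insert(1, 'draft diagram')          // ['write spec', 'draft diagram', 'review spec']
tasks.set(0, 'finalise spec')             // replace the first element
tasks.delete(2, 1)                        // remove 1 element starting at index 2

console.log(tasks.toArray())              // ['finalise spec', 'draft diagram']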


SyncedMap

The SyncedMap is a shared type that is similar to the JavaScript Map.

clear()

Removes all elements from the SyncedMap.

copy()

Returns a new SyncedMap with the same values as the caller.

delete(key)

Removes the specified entry from the SyncedMap by key. Returns true if the entry existed and has been removed, or false if it did not exist.

entries()

Returns a new Iterator object of [key, value] pairs for each entry in the SyncedMap.

forEach(callback)

Calls the provided function once for each [key, value] pair of the SyncedMap.

get(key)

Returns a specified entry from the SyncedMap.

has(key)

Returns a Boolean indicating whether the SyncedMap contains an entry with the specified key or not.

keys()

Returns a new Iterator object containing the keys for each entry in the SyncedMap.

newHistory()

Returns a new History object that can be used to undo/redo the current client's changes.

set(key, value)

Adds or updates an entry in the SyncedMap with a specified key and a value.

size()

Returns the number of elements in the SyncedMap.

toJSON()

Returns a JSON representation of the SyncedMap.

values()

Returns a new Iterator object that contains the values for each entry in the SyncedMap.
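
And a corresponding sketch for a SyncedMap; the keys and values are illustrative, and iterating entries() with for...of assumes the returned iterator is iterable, as with native Map iterators:

const settings = room.newMap('settings')

settings.set('theme', 'dark')
settings.set('gridSize', 8)

console.log(settings.get('theme'))   // 'dark'
console.log(settings.has('zoom'))    // false
console.log(settings.size())         // 2

for (const [key, value] of settings.entries()) {
  console.log(key, value)
}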

- + \ No newline at end of file diff --git a/assets/images/manual-750de189201c10a7a95ca8845a464856.png b/assets/images/conflict-comparison-750de189201c10a7a95ca8845a464856.png similarity index 100% rename from assets/images/manual-750de189201c10a7a95ca8845a464856.png rename to assets/images/conflict-comparison-750de189201c10a7a95ca8845a464856.png diff --git a/assets/images/conflict-comparison-9fd64e64f630a4a9b2b2b3dd21d8065a.png b/assets/images/manual-9fd64e64f630a4a9b2b2b3dd21d8065a.png similarity index 100% rename from assets/images/conflict-comparison-9fd64e64f630a4a9b2b2b3dd21d8065a.png rename to assets/images/manual-9fd64e64f630a4a9b2b2b3dd21d8065a.png diff --git a/assets/images/relational-model-3a055499946dc54fb030a4c5440a8ba3.png b/assets/images/relational-model-3a055499946dc54fb030a4c5440a8ba3.png deleted file mode 100644 index 3addb5f..0000000 Binary files a/assets/images/relational-model-3a055499946dc54fb030a4c5440a8ba3.png and /dev/null differ diff --git a/assets/images/relational-model-6839055da94556f7d7fde7a1a63f3eb3.png b/assets/images/relational-model-6839055da94556f7d7fde7a1a63f3eb3.png new file mode 100644 index 0000000..b24fed6 Binary files /dev/null and b/assets/images/relational-model-6839055da94556f7d7fde7a1a63f3eb3.png differ diff --git a/assets/js/6fdce000.08cfe3f0.js b/assets/js/6fdce000.08cfe3f0.js new file mode 100644 index 0000000..007616e --- /dev/null +++ b/assets/js/6fdce000.08cfe3f0.js @@ -0,0 +1 @@ +"use strict";(self.webpackChunksymphony_collaboration=self.webpackChunksymphony_collaboration||[]).push([[210],{3905:(e,t,a)=>{a.d(t,{Zo:()=>d,kt:()=>m});var o=a(7294);function i(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function n(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);t&&(o=o.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,o)}return a}function r(e){for(var t=1;t=0||(i[a]=e[a]);return i}(e,t);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);for(o=0;o=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(i[a]=e[a])}return i}var l=o.createContext({}),c=function(e){var t=o.useContext(l),a=t;return e&&(a="function"==typeof e?e(t):r(r({},t),e)),a},d=function(e){var t=c(e.components);return o.createElement(l.Provider,{value:t},e.children)},h="mdxType",u={inlineCode:"code",wrapper:function(e){var t=e.children;return o.createElement(o.Fragment,{},t)}},p=o.forwardRef((function(e,t){var a=e.components,i=e.mdxType,n=e.originalType,l=e.parentName,d=s(e,["components","mdxType","originalType","parentName"]),h=c(a),p=i,m=h["".concat(l,".").concat(p)]||h[p]||u[p]||n;return a?o.createElement(m,r(r({ref:t},d),{},{components:a})):o.createElement(m,r({ref:t},d))}));function m(e,t){var a=arguments,i=t&&t.mdxType;if("string"==typeof e||i){var n=a.length,r=new Array(n);r[0]=p;var s={};for(var l in t)hasOwnProperty.call(t,l)&&(s[l]=t[l]);s.originalType=e,s[h]="string"==typeof e?e:i,r[1]=s;for(var c=2;c{a.r(t),a.d(t,{assets:()=>d,contentTitle:()=>l,default:()=>m,frontMatter:()=>s,metadata:()=>c,toc:()=>h});var o=a(7462),i=a(7294),n=a(3905);const r=e=>{let{center:t}=e;return 1==t?i.createElement("div",{className:"mx-auto mb-8 h-[2px] max-w-sm bg-gradient-to-r from-transparent via-[#65147c]"}):i.createElement("div",{className:"h-[2px] mb-8 max-w-sm bg-gradient-to-r from-[#c15bde] via-[#65147c]"})},s={title:"Case Study",description:"Symphony Technical Case Study - Challenges, System Design, and 
Engineering Decisions"},l="Case Study",c={unversionedId:"case-study",id:"case-study",title:"Case Study",description:"Symphony Technical Case Study - Challenges, System Design, and Engineering Decisions",source:"@site/docs/case-study.mdx",sourceDirName:".",slug:"/case-study",permalink:"/case-study",draft:!1,tags:[],version:"current",frontMatter:{title:"Case Study",description:"Symphony Technical Case Study - Challenges, System Design, and Engineering Decisions"}},d={},h=[{value:"Introduction",id:"introduction",level:2},{value:"Collaboration",id:"collaboration",level:2},{value:"Evolution of Web Applications",id:"evolution-of-web-applications",level:2},{value:"Introducing Real-Time",id:"introducing-real-time",level:3},{value:"WebRTC",id:"webrtc",level:4},{value:"WebSocket",id:"websocket",level:4},{value:"Conflict",id:"conflict",level:3},{value:"Methods of Conflict Resolution & Maintaining Distributed Consistency",id:"methods-of-conflict-resolution--maintaining-distributed-consistency",level:3},{value:"Operational Transformation (OT)",id:"operational-transformation-ot",level:4},{value:"Conflict Free Replicated Data Types (CRDTs)",id:"conflict-free-replicated-data-types-crdts",level:4},{value:"Custom Conflict Resolution Mechanisms (Not sure whether to include)",id:"custom-conflict-resolution-mechanisms-not-sure-whether-to-include",level:4},{value:"Choosing a Method of Conflict Resolution",id:"choosing-a-method-of-conflict-resolution",level:3},{value:"Manually Building a Real-time Collaborative Application",id:"manually-building-a-real-time-collaborative-application",level:2},{value:"Existing Solutions",id:"existing-solutions",level:3},{value:"DIY Solutions",id:"diy-solutions",level:4},{value:"Commercial Solutions",id:"commercial-solutions",level:4},{value:"A Solution for Our Use Case",id:"a-solution-for-our-use-case",level:2},{value:"Symphony",id:"symphony",level:2},{value:"Overview",id:"overview",level:3},{value:"Using Symphony",id:"using-symphony",level:3},{value:"Architecture Overview",id:"architecture-overview",level:3},{value:"Terminology",id:"terminology",level:4},{value:"Fundamental Requirements",id:"fundamental-requirements",level:4},{value:"Design Philosophy",id:"design-philosophy",level:4},{value:"Core Architecture",id:"core-architecture",level:4},{value:"Implementing the Core Architecture",id:"implementing-the-core-architecture",level:3},{value:"Conflict Resolution",id:"conflict-resolution",level:4},{value:"State Change Propagation",id:"state-change-propagation",level:4},{value:"Persisting Room Data",id:"persisting-room-data",level:4},{value:"Storing Document Data",id:"storing-document-data",level:4},{value:"Front-end Client API",id:"front-end-client-api",level:4},{value:"Load Testing",id:"load-testing",level:2},{value:"Constructing a Test Environment",id:"constructing-a-test-environment",level:3},{value:"Scaling",id:"scaling",level:2},{value:"Looking to Existing Solutions",id:"looking-to-existing-solutions",level:3},{value:"Redis Pub/Sub",id:"redis-pubsub",level:3},{value:"Querying for Documents",id:"querying-for-documents",level:4},{value:"Adding and Removing Instances",id:"adding-and-removing-instances",level:4},{value:"Evaluating the Current Scaling Solution",id:"evaluating-the-current-scaling-solution",level:4},{value:"A Better Scaling Solution",id:"a-better-scaling-solution",level:2},{value:"Implementation",id:"implementation",level:3},{value:"Isolating Room Processes",id:"isolating-room-processes",level:4},{value:"Orchestrating and Scaling Room 
Processes",id:"orchestrating-and-scaling-room-processes",level:4},{value:"Serverless",id:"serverless",level:4},{value:"Proxying Requests",id:"proxying-requests",level:4},{value:"Overview of the Final Architecture",id:"overview-of-the-final-architecture",level:3},{value:"Additional Improvements",id:"additional-improvements",level:3},{value:"Monitoring and Visibility",id:"monitoring-and-visibility",level:4},{value:"Reducing Pod Cold Start Time",id:"reducing-pod-cold-start-time",level:4},{value:"Securing the Deployment",id:"securing-the-deployment",level:4},{value:"Snapshotting",id:"snapshotting",level:4},{value:"Future Work",id:"future-work",level:2},{value:"References",id:"references",level:2}],u={toc:h},p="wrapper";function m(e){let{components:t,...i}=e;return(0,n.kt)(p,(0,o.Z)({},u,i,{components:t,mdxType:"MDXLayout"}),(0,n.kt)("h1",{id:"case-study"},"Case Study"),(0,n.kt)(r,{mdxType:"HeaderLine"}),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("em",{parentName:"p"},"\u201cAlone we can do so little; together we can do so much.\u201d - Helen Keller"))),(0,n.kt)("h2",{id:"introduction"},"Introduction"),(0,n.kt)("p",null,"Symphony is an open source framework designed to make it easy for developers to build collaborative web applications. Symphony handles the complexities of implementing collaboration, including conflict resolution and real-time infrastructure, freeing developers to focus on creating unique and engaging features for their applications."),(0,n.kt)("video",{loop:!0,playsInline:!0,muted:!0,autoPlay:!0,className:"max-w-full"},(0,n.kt)("source",{src:"/img/symphony.mp4",type:"video/mp4"})),(0,n.kt)("p",null,"In this case study, we\u2019ll discuss the challenges that arise when building collaborative experiences on the web, the limitations of traditional approaches in solving these problems, and how we designed Symphony to overcome them."),(0,n.kt)("h2",{id:"collaboration"},"Collaboration"),(0,n.kt)("p",null,"Real-time collaboration, where multiple users can concurrently work together on a common task, has been a notable feature since the earliest days of the internet. It\u2019s origin can be traced back to the 1960s, when Douglas Engelbart in his famous ",(0,n.kt)("em",{parentName:"p"},"Mother of All Demos"),", demonstrated the first real-time collaborative editor, built on the oN-Line System (NLS), that allowed users to create and edit documents, link them together, and share them with others.",(0,n.kt)("sup",{parentName:"p",id:"fnref-1"},(0,n.kt)("a",{parentName:"sup",href:"#fn-1",className:"footnote-ref"},"1"))),(0,n.kt)("p",null,"However, for much of the web\u2019s history, the majority of applications have notably been non-collaborative. 
Without the ability to work together on a common task in real-time, users have to instead enter into a tedious cycle of changing, exporting, and manually syncing or emailing copies of files."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Modify-Export-Send feedback loop",src:a(1561).Z,width:"847",height:"197"})),(0,n.kt)("p",null,"This slow feedback loop harms productivity.",(0,n.kt)("sup",{parentName:"p",id:"fnref-2"},(0,n.kt)("a",{parentName:"sup",href:"#fn-2",className:"footnote-ref"},"2"))," In other words, this workflow is sub-optimal and restrictive."),(0,n.kt)("p",null,"With the rise of remote work where users are geographically separated, the need to improve this workflow has become even more acute."),(0,n.kt)("p",null,"As noted by industry leaders, the optimal solution is for applications to allow multiple users to collaborate online in real-time."),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("strong",{parentName:"p"},(0,n.kt)("em",{parentName:"strong"},'"',"[Real-time collaboration]",' eliminates the need to export, sync, or email copies of files and allows more people to take part in the design process." - Evan Wallace, Figma')))),(0,n.kt)("p",null,"Popular products such as\xa0",(0,n.kt)("a",{parentName:"p",href:"https://www.figma.com/"},"Figma"),",\xa0",(0,n.kt)("a",{parentName:"p",href:"https://www.google.co.uk/docs/about/"},"Google Docs"),", and\xa0",(0,n.kt)("a",{parentName:"p",href:"https://code.visualstudio.com/"},"Visual Studio Code"),", incorporate this as a defining feature, allowing multiple users to concurrently modify the same state."),(0,n.kt)("p",null,"The problem is that building these types of applications is non-trivial. To understand why, we need to consider the characteristics of traditional web applications."),(0,n.kt)("h2",{id:"evolution-of-web-applications"},"Evolution of Web Applications"),(0,n.kt)("p",null,"Traditionally, the architecture of most web applications have conformed to the client-server model, where client and server communicate in a request-response cycle."),(0,n.kt)("p",null,"When a user makes a change to the client state, the change is propagated to the application server via a HTTP request, which in turn updates the database i.e. the true application state and confirms the change to the client via a response."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Three-tier Architecture",src:a(4619).Z,width:"903",height:"209"})),(0,n.kt)("p",null,"This architecture is fine for applications that are designed to be used by only one user at a time. However, for applications that seek to provide a multiplayer experience, the stateless nature of HTTP is problematic."),(0,n.kt)("p",null,"Since each state change by a given client is scoped to the request-response cycle, other users who wish to view the change must first request the data from the server, usually by refreshing the page."),(0,n.kt)("p",null,"In situations where multiple users are frequently modifying the same state, the need for each client to constantly send requests can quickly become burdensome and inefficient."),(0,n.kt)("h3",{id:"introducing-real-time"},"Introducing Real-Time"),(0,n.kt)("p",null,"As companies began wanting to create applications that allowed multiple users to interact in realtime, the stateless nature of HTTP request-response cycle became a limitation. These applications such as online games, chat rooms, and social media platforms, needed to maintain updated state without requiring the user to take any specific action such as a page refresh. 
In other words, a different approach to data transmission was needed- one that allowed data to be shared bi-directionally between clients and/or a server in real-time."),(0,n.kt)("p",null,"In response, new web protocols were developed to help facilitate this. Two of the most popular include WebRTC and WebSocket."),(0,n.kt)("h4",{id:"webrtc"},"WebRTC"),(0,n.kt)("p",null,"Web Real-Time Communication (WebRTC) is an open-source technology that enables real-time communication between web browsers over the internet.",(0,n.kt)("sup",{parentName:"p",id:"fnref-3"},(0,n.kt)("a",{parentName:"sup",href:"#fn-3",className:"footnote-ref"},"3"))," The protocol uses a combination of JavaScript APIs and peer-to-peer networking to establish direct communication channels between browsers, without the need for a permanent, central server. UDP is used as the primary transport protocol for real-time data transmission. This makes WebRTC an especially attractive choice for collaborative applications that require very low-latency communication at the expense of reduced reliability and error correction, such as video conferencing, online gaming, and live streaming."),(0,n.kt)("h4",{id:"websocket"},"WebSocket"),(0,n.kt)("p",null,"WebSocket is a web protocol that provides a persistent, bi-directional communication channel between a client and a server over a single, long-lived TCP connection.",(0,n.kt)("sup",{parentName:"p",id:"fnref-4"},(0,n.kt)("a",{parentName:"sup",href:"#fn-4",className:"footnote-ref"},"4"))," The connection is established via a handshake between client and server. Since TCP is used as the primary transport protocol, WebSocket is a suitable choice for collaborative applications that require stronger guarantees on the reliability and security of the communication channel at the expense of higher latency, such as real-time dashboards, stock price tickers, and live chat."),(0,n.kt)("p",null,"Using technologies such as WebRTC and WebSocket, clients and/or servers are able to maintain persistent, stateful communication channels, no longer bound by the limits of the request-response cycle. As such, it permitted the development of so-called real-time applications to be built, where state updates are perceived to be received instantaneously without page refresh."),(0,n.kt)("p",null,"It may initially seem that the addition of real-time solves the collaboration problem since multiple users can now see changes immediately."),(0,n.kt)("p",null,"This is not the case."),(0,n.kt)("p",null,"The problem is that many real-time applications such as chat applications have the implicit constraint that each piece of state can only have a single mutable reference to it. In other words, the same piece of state cannot be modified concurrently by multiple users. 
For example, in a chat application, a given message is owned by a single user and they alone can edit it at any given time."),(0,n.kt)("p",null,"For an application to be truly collaborative, it must allow users to work together in real-time on shared state, where multiple users can modify the same piece of state ",(0,n.kt)("em",{parentName:"p"},"at the same time, without conflicts or inconsistencies.")),(0,n.kt)("p",null,"The possibility of conflict radically increases the complexity of implementing collaborative applications."),(0,n.kt)("h3",{id:"conflict"},"Conflict"),(0,n.kt)("p",null,"In the context of real-time collaborative applications, conflict refers to a situation where two or more users attempt to modify the same piece of state, without knowledge of one another (concurrently), resulting in conflicting versions of that data."),(0,n.kt)("p",null,"For example, multiple users working on a shared task or document may make changes to the same part of the document at the same time. Alternatively, network delays could cause state to diverge between different users which must be reconciled."),(0,n.kt)("p",null,"We can concretely demonstrate how conflict arises using the following examples."),(0,n.kt)("p",null,"Suppose that Alice and Bob are collaborating on a text document, when both Bob and Alice attempt to write at the same spot:"),(0,n.kt)("video",{loop:!0,playsInline:!0,muted:!0,autoPlay:!0,className:"max-w-full"},(0,n.kt)("source",{src:"/img/case-study/text-editor-conflict.mp4",type:"video/mp4"})),(0,n.kt)("p",null,"When conflicts arise, Alice and Bob\u2019s modifications can be seen as branching off from the previous state of the system, creating a parallel version of the application state."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Branching",src:a(5939).Z,width:"723",height:"570"})),(0,n.kt)("p",null,"For a collaborative application, we need a method of reconciling such conflicts and enforcing distributed consistency across clients."),(0,n.kt)("figure",{className:"mb-5"},(0,n.kt)("img",{src:"/img/case-study/merge.png",alt:"merging"}),(0,n.kt)("figcaption",{className:"italic"},"The role of a conflict resolution mechanism is to merge branches in a deterministic way, until all branches have converged to a single, consistent state that all parties agree upon.")),(0,n.kt)("p",null,"In other words, after applying all state changes, the application should deterministically converge to an eventually consistent state across the whole system that all parties agree upon."),(0,n.kt)("h3",{id:"methods-of-conflict-resolution--maintaining-distributed-consistency"},"Methods of Conflict Resolution & Maintaining Distributed Consistency"),(0,n.kt)("p",null,"Over the years, there have been multiple solutions that have been proposed to the problem of conflict resolution."),(0,n.kt)("p",null,"The simplest strategy, as mentioned previously, is to prevent conflicts from occurring in the first place. This can be implemented via locking. When a given user is making edits, the document is locked, becoming read-only to other users. In other words, we impose the constraint that only a single user can have a mutable reference to the document at any given time."),(0,n.kt)("p",null,"Thanks to its simplicity, this approach is widely used even today. 
For example, Basecamp, a web-based project management tool, employs locking to prevent conflicts:"),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Basecamp locking",src:a(1191).Z,width:"2235",height:"1057"})),(0,n.kt)("p",null,"However, as noted previously, this approach provides a very limited workflow since it solely facilitates asynchronous collaboration, where users have to implicitly arrange times when they can edit the document or work on separate documents and then merge changes."),(0,n.kt)("p",null,"For real-time, ",(0,n.kt)("em",{parentName:"p"},"synchronous")," collaboration, more advanced conflict resolution mechanisms are required."),(0,n.kt)("h4",{id:"operational-transformation-ot"},"Operational Transformation (OT)"),(0,n.kt)("p",null,"One possible approach is to use the operational transformation (OT) algorithm, famously used by Google Docs ",(0,n.kt)("sup",{parentName:"p",id:"fnref-5"},(0,n.kt)("a",{parentName:"sup",href:"#fn-5",className:"footnote-ref"},"5")),"."),(0,n.kt)("p",null,"OT represent each user\u2019s edits as a sequence of operations that can be applied to the shared application state. For example, in the case of a collaborative text editor, where the sequence of characters is zero-indexed, the operation to insert the character ",(0,n.kt)("inlineCode",{parentName:"p"},"'a'")," at the beginning of the first sentence may be represented as ",(0,n.kt)("inlineCode",{parentName:"p"},"insert('a', 0)"),"."),(0,n.kt)("p",null,"When a client makes an edit to the state, the corresponding operation is transmitted to the server, which broadcasts it to all other collaborating clients."),(0,n.kt)("p",null,"In cases where multiple users attempt to modify the same piece of state concurrently, the OT algorithm defines a set of rules, which encode how conflicting operations should be ",(0,n.kt)("em",{parentName:"p"},"transformed")," such that the operations can be applied in any order, without causing conflict."),(0,n.kt)("p",null,"For example, in the case of the collaborative text editor, two clients may attempt to concurrently insert text at the start of the document i.e. ",(0,n.kt)("inlineCode",{parentName:"p"},"O1 = insert('a', 0, 1)")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"O2 = insert('b', 0, 2)"),", where the third argument represents the client id. The transform rule may be to shift one of the insertions to the right by the length of the other insertion i.e. ",(0,n.kt)("inlineCode",{parentName:"p"},"insert('a', 0, 1)")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"T(O1) = insert('b', 1, 2)"),"."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Operational Transform",src:a(408).Z,width:"1424",height:"541"})),(0,n.kt)("p",null,"This ensures that both insertions can be applied whilst still capturing user intent and not modifying the intended meaning of the document."),(0,n.kt)("p",null,"Since OT only requires operations to be incrementally broadcast, the algorithm is efficient and has low memory overhead."),(0,n.kt)("p",null,"The problem is that OT is very complex to implement correctly. The OT algorithm assumes that every state change is captured, which in modern rich browser environments, can be difficult to guarantee. Further, since operations have a finite transit time to the server, the states of clients naturally diverge over time from one another. The larger the divergence, the larger the number of possible combinations of operations that result in conflict, each of which must be accounted for by the transform rules. 
Since many of these conflicting combinations are very difficult to foresee, formally proving the correctness of OT is complicated and error-prone, even for the simplest of OT algorithms."),(0,n.kt)("p",null,"This sentiment is widely shared by practitioners in the field, as highlighted by Joseph Gentle, a former Google Wave engineer, and author of the ShareJS OT library, who said:"),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},"Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. ","[\u2026]"," Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time.")),(0,n.kt)("p",null,"In fact, 4 out of 8 different implementations of OT from the original 1989 paper to 2006 were found to be incorrect, missing subtle edge cases. The consequence of this incorrectness was that client state would irrevocably diverge, with no way to return to a consistent state."),(0,n.kt)("p",null,"The complexity of OT led researchers to find alternatives, the most promising of which are conflict-free replicated data types, or CRDTs."),(0,n.kt)("h4",{id:"conflict-free-replicated-data-types-crdts"},"Conflict Free Replicated Data Types (CRDTs)"),(0,n.kt)("p",null,"A conflict-free replicated data type (CRDT) is an abstract data type designed to be replicated at multiple processes.",(0,n.kt)("sup",{parentName:"p",id:"fnref-6"},(0,n.kt)("a",{parentName:"sup",href:"#fn-6",className:"footnote-ref"},"6"))," By definition, CRDTs have the following properties:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},(0,n.kt)("strong",{parentName:"li"},"Independent-")," Any replica can be modified without coordinating with other replicas."),(0,n.kt)("li",{parentName:"ul"},(0,n.kt)("strong",{parentName:"li"},"Strongly eventually consistent-")," When any two replicas have received the same set of updates (in any order), the mathematical properties of CRDTs guarantee that both replicas will deterministically converge to the same state.")),(0,n.kt)("p",null,"By imposing these mathematical properties on the CRDT and it\u2019s associated algorithms, clients can optimistically update their own state locally and broadcast their updates to all other remote, state replicas. Since CRDTs are strongly eventually consistent, upon a given remote replica receiving all updates, the remote replica is guaranteed to converge to the same state as the local replica without conflict."),(0,n.kt)("p",null,"The advantage of CRDTs is that they are guaranteed to be conflict-free, as long as the required mathematical properties are imposed. Since these mathematical properties are well-defined, it is easier to prove the correctness of a CRDT than any corresponding OT implementation. Further, since each replica is independent and that CRDTs make no assumption about the network topology, CRDTS are partition tolerant by default and can be used in a variety of network topologies including client-server and P2P. This property also means they are offline-capable by default."),(0,n.kt)("p",null,"However, the mathematical constraints of CRDTs, in particular that operations should be commutative adds some unavoidable overhead. Most commonly-used data structures do not have commutative operations by default. For example, the ",(0,n.kt)("inlineCode",{parentName:"p"},"add")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"remove")," operations of a Set are not naturally commutative. 
To ensure commutativity, the CRDT must retain additional metadata.",(0,n.kt)("sup",{parentName:"p",id:"fnref-7"},(0,n.kt)("a",{parentName:"sup",href:"#fn-7",className:"footnote-ref"},"7"))),(0,n.kt)("p",null,"For example, in the case of the ",(0,n.kt)("inlineCode",{parentName:"p"},"add")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"remove")," operations of a Set, tombstones are typically used as placeholders for removed entries- if a replica receives a ",(0,n.kt)("inlineCode",{parentName:"p"},"remove")," operation for an element before it receives the ",(0,n.kt)("inlineCode",{parentName:"p"},"add")," operation that actually added the element, the tombstone ensures that the ",(0,n.kt)("inlineCode",{parentName:"p"},"remove")," operation is still correctly processed. Since the metadata must be retained for the required mathematical properties to be upheld, the use of CRDTs inevitably results in additional memory overhead, which can become significant for large state. As noted by Jospeh Gentle:"),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("strong",{parentName:"p"},(0,n.kt)("em",{parentName:"strong"},'"Because of how CRDTs work, documents grow without bound. \u2026 Can you ever delete that data? Probably not. And that data can\u2019t just sit on disk. It needs to be loaded into memory to handle edits." - Joseph Gentle, former Google Wave engineer')))),(0,n.kt)("p",null,"While recent research has sought to introduce garbage-collection methods to reduce the amount of metadata, there is still significant additional memory overhead when using CRDTs to represent a data model."),(0,n.kt)("h4",{id:"custom-conflict-resolution-mechanisms-not-sure-whether-to-include"},"Custom Conflict Resolution Mechanisms (Not sure whether to include)"),(0,n.kt)("p",null,"Whilst OT and CRDTs represent the most popular approaches to conflict-resolution, the complexity of OT and the memory overhead of CRDTs can sometimes be unacceptable for certain use-cases. As such, some choose to create custom, proprietary data models that are inspired by the OT and CRDT approaches and are highly specialised to a particular use-case."),(0,n.kt)("p",null,"For example, Figma relax many of the constraints imposed by CRDTs by adopting much simpler conflict-resolution semantics. In particular, they use simple last-write wins (LWW) semantics when two clients try to modify a value of a Figma object concurrently. This works great for Figma objects where changes are mutually exclusive i.e. a single value must be chosen, but would fail if used for text editing. In Figma\u2019s case, this was a valid tradeoff for their use case but would not be a suitable model for other applications.",(0,n.kt)("sup",{parentName:"p",id:"fnref-8"},(0,n.kt)("a",{parentName:"sup",href:"#fn-8",className:"footnote-ref"},"8"))),(0,n.kt)("p",null,"The advantage of implementing a custom conflict-free data model is that the mechanism can be made highly-specialised to the target use-case. This can mean that many of the constraints that come with OT and CRDTs can be relaxed which may result in a simpler and efficient data representation. However, developing a custom model can be potentially risky since it requires a number of assumptions to be made about the use-case. 
In Figma\u2019s case, for example, introducing text-editing may require significant changes to their current conflict-resolution semantics."),(0,n.kt)("h3",{id:"choosing-a-method-of-conflict-resolution"},"Choosing a Method of Conflict Resolution"),(0,n.kt)("p",null,"When choosing a conflict-resolution mechanism, there is no single best, one-size fits all solution. Each conflict-resolution mechanism has it\u2019s own set of tradeoffs and choosing a particular approach requires a deep understanding of the usage pattern of the target application."),(0,n.kt)("p",null,"Some aspects of the target application that should be considered include:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"What CAP (Consistency, Availability, Partition-tolerance) properties should the system have?"),(0,n.kt)("li",{parentName:"ul"},"What is the application architecture? Client-server? P2P?"),(0,n.kt)("li",{parentName:"ul"},"Is the system required to operate offline?"),(0,n.kt)("li",{parentName:"ul"},"Are there any system-level constraints including CPU/memory limits?"),(0,n.kt)("li",{parentName:"ul"},"Is the data model generic or highly specialised to a particular use-case?")),(0,n.kt)("p",null,"Answering these questions influences the suitability of each conflict resolution mechanism to a specific use-case."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Conflict resolution mechanisms comparison",src:a(4102).Z,width:"1179",height:"1063"})),(0,n.kt)("h2",{id:"manually-building-a-real-time-collaborative-application"},"Manually Building a Real-time Collaborative Application"),(0,n.kt)("p",null,"Building a collaborative application from scratch can be time-consuming and difficult, particularly when dealing with the intricacies of real-time infrastructure and conflict-resolution mechanisms. It means that creating rich, collaborative experiences on the web has traditionally only been open to companies with the human and financial resources to roll their own solutions."),(0,n.kt)("p",null,"For smaller teams of modest means, who may lack familiarity with these specialised topics, implementing such systems has remained out of reach."),(0,n.kt)("p",null,"Provided below is a sample list of tasks involved in creating a production-ready real-time collaborative web application:"),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Manually building collaborative application",src:a(7234).Z,width:"1140",height:"762"})),(0,n.kt)("p",null,"As a result, solutions have started to emerge that lower this barrier."),(0,n.kt)("h3",{id:"existing-solutions"},"Existing Solutions"),(0,n.kt)("p",null,"Existing solutions typically fall into two categories: DIY solutions and commercial solutions."),(0,n.kt)("h4",{id:"diy-solutions"},"DIY Solutions"),(0,n.kt)("p",null,"For organisations who have complex, specialised requirements for their collaborative functionality or want to tightly integrate with existing infrastructure, a DIY solution might be the best fit. This involves manually synthesising the various components required for a real-time collaborative application."),(0,n.kt)("p",null,"There are numerous open-source libraries providing implementations of popular conflict-resolution algorithms- teams would likely need to research, choose, and integrate the solution that best fits their use case. 
Alternatively, a bespoke solution may be best suited for highly specialised applications."),(0,n.kt)("p",null,"For the real-time network and persistence layer which handles the propagation of updates to collaborating clients and/or server(s) and storing of state, one could use a backend-as-a-service such as ",(0,n.kt)("a",{parentName:"p",href:"https://ably.com/"},"Ably"),", ",(0,n.kt)("a",{parentName:"p",href:"https://pusher.com/"},"Pusher"),", or ",(0,n.kt)("a",{parentName:"p",href:"https://www.pubnub.com/"},"PubNub")," or provision a custom implementation using open-source libraries like ws or ",(0,n.kt)("a",{parentName:"p",href:"https://peerjs.com/"},"PeerJS")," on cloud infrastructure."),(0,n.kt)("p",null,"Whilst the DIY approach offers a high degree of customisation, it does require developers to have a high-level of proficiency in the relevant technologies. Thus, less experienced teams might reach for a Software-as-a-Service (SaaS) product to help manage their collaborative functionality needs."),(0,n.kt)("h4",{id:"commercial-solutions"},"Commercial Solutions"),(0,n.kt)("p",null,"The advent of commercial offerings providing Collaboration-as-a-Service is a relatively recent phenomenon."),(0,n.kt)("p",null,"One of the most popular solutions, released in 2021, is ",(0,n.kt)("a",{parentName:"p",href:"https://liveblocks.io/"},"Liveblocks"),". Whilst not as flexible as the DIY approach, Liveblocks provides a great developer experience, exposing all the components required for adding real-time collaboration to an application through an intuitive client API. This includes a collection of custom CRDT-like data types, autoscaling real-time infrastructure with persistence, and a developer dashboard for easily monitoring usage patterns. However, this convenience comes at a cost, with Liveblocks charging $299 per month for an application with up to 2000 monthly active users (MAU), valid as of September 2023."),(0,n.kt)("p",null,"A compelling alternative is ",(0,n.kt)("a",{parentName:"p",href:"https://fluidframework.com/"},"Fluid Framework")," developed by Microsoft. Fluid provides a collection of client libraries that also expose custom CRDT-like distributed data structures. The client libraries connect to an implementation of the Fluid service, a runtime which handles the complexities of propagating updates in real-time and persisting state. Whilst Fluid is open-source, it provides a very limited implementation of the Fluid service by default, capable of handling only 100s of concurrent users. For larger applications, developers are forced to use either the Azure Managed Service or write a custom scaled implementation."),(0,n.kt)("h2",{id:"a-solution-for-our-use-case"},"A Solution for Our Use Case"),(0,n.kt)("p",null,"Looking at the above solutions, it is clear that until now, developers who want to incorporate collaboration into their products have been to partially or fully roll their own solutions or turn to a closed-source, managed provider."),(0,n.kt)("p",null,"The first option has significant implementation cost, particularly given that the expertise require to develop collaborative functionality is often orthogonal to the businesses\u2019 core offering. 
The latter option suffers from vendor lock-in and can attract considerable expense, as noted with Liveblocks."),(0,n.kt)("p",null,"Following this, we wanted to build a tool for small teams that want to add collaborative functionality to their applications without having to spend time implementing and deploying their own conflict resolution and real-time infrastructure."),(0,n.kt)("p",null,"Further, we want to make our framework open-source, scalable and fully self-hosted so that developers have complete control of code and data ownership."),(0,n.kt)("p",null,"With globalisation and the rise of remote work, providing seamless web-native collaboration is no longer the preserve of the largest companies. Smaller teams increasingly want to reap the benefits of fast collaborative feedback loops in their products."),(0,n.kt)("p",null,"An example of this is Propellor Aero, who wanted the ability to collaborate with their customers on 3D interactive site survey maps."),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("strong",{parentName:"p"},"\u201cWe started looking at building a service ourselves\u2026 We really didn't want to because it's a whole lot of work and it's a really difficult problem. This was a very new problem to us, our engineering team had different levels of experience in synchronisation in real-time as a whole.\u201d ",(0,n.kt)("em",{parentName:"strong"},"- Jye Lewis, Engineering Manager, Propellor Aero")))),(0,n.kt)("p",null,"We sought to assist companies with similar profiles in adding collaborative functionality to their web application."),(0,n.kt)("p",null,"The availability of an open-source tool which handles the complexities of implementing collaboration, including conflict resolution and real-time infrastructure, would free Propellor Aero developers to focus on creating features that have direct business value, whilst still retaining control over all their data."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Comparing existing solutions",src:a(8997).Z,width:"982",height:"349"})),(0,n.kt)("h2",{id:"symphony"},"Symphony"),(0,n.kt)("h3",{id:"overview"},"Overview"),(0,n.kt)("p",null,"Symphony is an open-source runtime designed to make it easy for developers to add collaborative functionality to their applications."),(0,n.kt)("p",null,"It comes with a client library that provides an intuitive API to a collection of conflict-free data types that are composed to construct a distributed data model. Symphony automatically provisions the required network infrastructure to propagate state changes to all collaborating clients in real-time and persist state between users sessions. It also provides real-time application- and system-level monitoring via a developer dashboard that exposes pertinent metrics including the number of active users, the size of persisted state (bytes), and the CPU/memory usage of each collaborative session."),(0,n.kt)("h3",{id:"using-symphony"},"Using Symphony"),(0,n.kt)("p",null,"Symphony has been designed with ease-of-use in mind. In three simple steps, developers can create and deploy a real-time collaborative application."),(0,n.kt)("p",null,"After installing the required dependencies stated in the documentation, and globally downloading the Symphony CLI tool via ",(0,n.kt)("inlineCode",{parentName:"p"},"npm"),":"),(0,n.kt)("ol",null,(0,n.kt)("li",{parentName:"ol"},"Run ",(0,n.kt)("inlineCode",{parentName:"li"},"symphony compose "),". 
This command creates a new ",(0,n.kt)("inlineCode",{parentName:"li"},"projectName")," directory, initializes a new Node project with the required ",(0,n.kt)("inlineCode",{parentName:"li"},"package.json"),", and scaffolds some initial starter files including the Symphony configuration file, ",(0,n.kt)("inlineCode",{parentName:"li"},"symphony.config.js"),"."),(0,n.kt)("li",{parentName:"ol"},"Write and deploy the front-end client code by composing the collection of conflict-free data types provided by the Symphony client."),(0,n.kt)("li",{parentName:"ol"},"Run ",(0,n.kt)("inlineCode",{parentName:"li"},"symphony deploy "),", which deploys the application on Google Cloud Platform (GCP). After provisioning is complete, developers can run ",(0,n.kt)("inlineCode",{parentName:"li"},"symphony dashboard")," to view the developer monitoring dashboard.")),(0,n.kt)("p",null,"Following these steps, developers can enhance existing their web applications with collaborative functionality using Symphony."),(0,n.kt)("p",null,"To illustrate this, here\u2019s a simple whiteboard application where users can draw lines, shapes, and change colours. In it\u2019s current form, the whiteboard is single-user and non-collaborative."),(0,n.kt)("div",{id:"singleplayer-demo"},(0,n.kt)("iframe",{id:"singleplayer-demo-iframe",width:"100%",height:"600",frameBorder:"0"})),(0,n.kt)("p",null,"To make this whiteboard multiplayer, we modify the whiteboard code to make use of the conflict-free data types provided by the Symphony client. After deploying the application to GCP, user\u2019s can now work together in the same collaborative space and see what others are doing in real-time."),(0,n.kt)("div",{id:"multiplayer-demo",className:"flex justify-between max-w-full mb-3"},(0,n.kt)("iframe",{id:"multiplayer-demo-iframe-1",width:"45%",height:"600",frameBorder:"5"}),(0,n.kt)("iframe",{id:"multiplayer-demo-iframe-2",width:"45%",height:"600",frameBorder:"5"})),(0,n.kt)("p",null,"We\u2019ll now turn to how we built Symphony and the technical challenges we faced."),(0,n.kt)("h3",{id:"architecture-overview"},"Architecture Overview"),(0,n.kt)("p",null,"We\u2019ll being by outlining the fundamental requirements we had to address and a description our design philosophy. We\u2019ll then provide a high-level overview of our core architecture and discuss important design decisions, tradeoffs and improvements that were made."),(0,n.kt)("h4",{id:"terminology"},"Terminology"),(0,n.kt)("p",null,"In order to express the system requirements accurately, we introduce some useful terminology:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},(0,n.kt)("strong",{parentName:"li"},"Document-")," refers to the shared state that clients modify during a session."),(0,n.kt)("li",{parentName:"ul"},(0,n.kt)("strong",{parentName:"li"},"Room-")," a collaboration session in which one or more clients connect to in order to modify the room document. A given room has a single document i.e. 
shared state that clients modify."),(0,n.kt)("li",{parentName:"ul"},(0,n.kt)("strong",{parentName:"li"},"Presence-")," represents the ephemeral state of a room which defines user\u2019s movements and actions inside a room including cursor positions, user avatars, online/offline indicators, or any other visual representation that reflects the real-time activity or availability of users within the collaborative session.")),(0,n.kt)("h4",{id:"fundamental-requirements"},"Fundamental Requirements"),(0,n.kt)("p",null,"When building our initial prototype, we focussed on the fundamental problems that needed to be solved in order to build the core of a real-time collaborative framework. These included:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Deciding how to model the shared state of a room i.e. document, and selecting a suitable mechanism to resolve conflicts and understanding the constraints that such a choice would impose on the rest of our architecture."),(0,n.kt)("li",{parentName:"ul"},"Determining how ephemeral and persistent state changes on one client would be propagated in real-time to all other subscribed clients and/or servers."),(0,n.kt)("li",{parentName:"ul"},"Constructing a suitable persistence layer, where state can be stored between collaborative sessions and system metadata can be retained.")),(0,n.kt)("h4",{id:"design-philosophy"},"Design Philosophy"),(0,n.kt)("p",null,"Symphony is designed with the principle that developers should be able to include collaboration into their products without having to radically modify their existing workflow and tools. With this as our guiding principle, we explain our choice of architecture and how it attempts to meet the fundamental requirements of a real-time collaborative framework."),(0,n.kt)("h4",{id:"core-architecture"},"Core Architecture"),(0,n.kt)("p",null,"After some initial prototyping, we arrived at the following high-level flow on how a collaboration session involving multiple users starts, progresses and terminates."),(0,n.kt)("p",null,"A client connects to a server via WebSocket. The clients specifies the room to connect to by specifying the room ID in the URL path. The server extracts the room ID and queries the database to check if a room with that id already exists. If the id exists i.e. the room has been used before, the server retrieves the associated room document from storage and loads it into memory; otherwise, a new document is created in memory and a new room record created in the database."),(0,n.kt)("p",null,"Additional clients can connect to the active room and modify the state. Each update is propagated to the server which in turn updates the document state in memory and broadcasts it to all the other collaborating clients. Upon receiving updates, clients update their local state. When the last remaining client disconnects from the room, the document is serialized and written to storage. 
The document and room metadata is subsequently purged from memory, and the room is marked as closed in the database."),(0,n.kt)("p",null,"With an overall direction in mind, we then explored different options for each component of our core architecture."),(0,n.kt)("h3",{id:"implementing-the-core-architecture"},"Implementing the Core Architecture"),(0,n.kt)("h4",{id:"conflict-resolution"},"Conflict Resolution"),(0,n.kt)("p",null,"As mentioned previously, a key component of implementing real-time collaboration is the ability to deterministically reconcile conflicts, which arise as a result of multiple users concurrently modifying the same piece of state."),(0,n.kt)("p",null,"While we found that the performance and low memory overhead of OT was attractive, it\u2019s complexity and the fact that it\u2019s most suited to editing large text documents, made it less applicable to supporting generic data models."),(0,n.kt)("p",null,"For Symphony, we instead decided to use CRDTs as the primary conflict resolution mechanism. Their strong eventual consistency guarantees mean that client changes can be optimistically applied resulting in a faster user experience. In addition, they are highly available and fault-tolerant which means that the users can continue to change state even during network failure or disconnection- the state will simply synchronise with other clients upon reconnection."),(0,n.kt)("p",null,"Although CRDTs have traditionally suffered from inadequate performance and very large memory overhead, they have become exponentially faster and more memory efficient in recent years, thanks to an active research effort.",(0,n.kt)("sup",{parentName:"p",id:"fnref-9"},(0,n.kt)("a",{parentName:"sup",href:"#fn-9",className:"footnote-ref"},"9"))," To ensure suitable performance, we decided to use an operation-based CRDT, which unlike state-based CRDTs, only propagate operations over the wire instead of the entire state. The tradeoff is that operation-based CRDTs require a reliable network channel which could be easily included given our chosen network topology (see below)."),(0,n.kt)("p",null,"For our collection of CRDTs, we chose to use ",(0,n.kt)("a",{parentName:"p",href:"https://github.com/yjs/yjs"},"Yjs"),", a library which provides a collection of generic, operation-based CRDT implementations based on the YATA algorithm. We chose Yjs since it had strong community support, has a very efficient linked-list data model with optimisations such as a garbage collector, making it one of the most memory-efficient and performant implementations. It also provided defined synchronisation and awareness protocols to propagate across persistent and ephemeral updates across a generic network layer."),(0,n.kt)("p",null,"We also considered using ",(0,n.kt)("a",{parentName:"p",href:"https://automerge.github.io/"},"Automerge"),", the other leading open-source offering in this space. Whilst equally performant, it is less mature and was 2x less memory efficient than Yjs in recent benchmarks."),(0,n.kt)("h4",{id:"state-change-propagation"},"State Change Propagation"),(0,n.kt)("p",null,"Since we now have a collection of conflict-free data types that can be used to construct a distributed data model, we need to consider how to propagate state updates to all collaborating clients in real-time."),(0,n.kt)("p",null,"CRDTs have strong eventual consistency, they can theoretically support any network layer capable of propagating updates from one replica to another. 
Given our use-case is for web applications, we are constrained to technologies supported by modern browsers- the two primary choices being WebSocket and WebRTC."),(0,n.kt)("p",null,"WebRTC is primarily used in peer-to-peer (P2P) topologies. Whilst WebRTC is scalable and minimises infrastructure requirements since it does not require the use of a central server, it lacks suitability for our use case."),(0,n.kt)("p",null,"Firstly, the majority of modern web applications already use a centralised client-server model. Companies want to retain control of data and enforce security measures such as authentication across all users, which is difficult in a P2P topology. Additionally, traversing firewalls and Network Address Translation (NAT) devices is not trivial with WebRTC- a consequence of this is that the applications will fail to propagate updates in geographies with national firewalls e.g. China."),(0,n.kt)("p",null,"As a result of these limitations, we chose WebSocket as the underlying protocol for our real-time infrastructure. It's support for the client-server model and stability across all major browsers made it a natural choice for us. Since WebSocket provides a bidirectional communication channel over TCP, the reliable network channel required for operation-based CRDTs is inherently provided."),(0,n.kt)("h4",{id:"persisting-room-data"},"Persisting Room Data"),(0,n.kt)("p",null,"When a collaboration session ends, we need to persist room data so that room documents are not lost and users can recreate the room in the future to continue working on it."),(0,n.kt)("p",null,"To do this, we need to construct a data model which allows us to represent created rooms and their associated metadata. The model consists of a single Room entity:"),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Relational Model",src:a(1730).Z,width:"490",height:"466"})),(0,n.kt)("p",null,"We chose to store this data in a Postgres relational database since we have a ready-heavy system and each room has a fixed schema. It also the permits analytical queries to be more easily executed. We rely on the Prisma ORM which provides a high-level, type-safe abstraction for schema creation and database interaction."),(0,n.kt)("h4",{id:"storing-document-data"},"Storing Document Data"),(0,n.kt)("p",null,"In line with Yjs best practice, we serialize room documents into a highly compressed binary format. This has the benefit of significantly reducing the amount of storage space required per document, faster data transmission and minimised bandwidth consumption across the network."),(0,n.kt)("p",null,"We initially thought of storing these binary blobs in the Postgres database. However, we realised that this was suboptimal."),(0,n.kt)("p",null,"Firstly, document sizes can become very large, particularly after lengthy collaboration sessions which can result in a large amount of accumulated CRDT metadata. Storing these documents in Postgres would affect the scalability of the database."),(0,n.kt)("p",null,"Secondly, Postgres is not optimized for large scale writes- the number of writes scales linearly with the number of rooms and can become particularly problematics if large documents are saved multiple times during a collaborative session. Implementing other useful features such as document versioning also becomes tricky."),(0,n.kt)("p",null,"One potential solution is to use a NoSQL database like AWS DynamoDB. 
(0,n.kt)("p",null,"We initially thought of storing these binary blobs in the Postgres database. However, we realised that this was suboptimal."),(0,n.kt)("p",null,"Firstly, document sizes can become very large, particularly after lengthy collaboration sessions, which can result in a large amount of accumulated CRDT metadata. Storing these documents in Postgres would affect the scalability of the database."),(0,n.kt)("p",null,"Secondly, Postgres is not optimized for large-scale writes- the number of writes scales linearly with the number of rooms and can become particularly problematic if large documents are saved multiple times during a collaborative session. Implementing other useful features such as document versioning also becomes tricky."),(0,n.kt)("p",null,"One potential solution is to use a NoSQL database like AWS DynamoDB. However, these often have limits on the size of a single database item (DynamoDB has a 400 KB limit), which is impractical for use cases like ours where document size can potentially be unbounded."),(0,n.kt)("p",null,"Considering these limitations, we decided to store documents in object storage, namely AWS Simple Storage Service (S3). Object storage is highly scalable and optimized to handle large amounts of unstructured data, making it ideal for persisting schemaless room documents. It\u2019s also cheaper than alternative NoSQL solutions like DynamoDB and supports large-scale read and write operations, making it suitable for scenarios where there is a large number of concurrent rooms and documents need to be ingested and retrieved at high volumes. Further, our use case only requires documents to be persisted as atomic binary blobs- we do not need to query ",(0,n.kt)("em",{parentName:"p"},"within")," a document, making object storage more suitable than a NoSQL database."),(0,n.kt)("p",null,"Integrating Postgres and S3 object storage, we are now able to persist room data between collaboration sessions. When a user connects to a room, we query Postgres to determine if an existing room exists. If it does, we retrieve the associated document from S3 and load it into memory for editing; otherwise we create a new Room record and initialize an empty document. After the last user leaves the room, we serialize the in-memory document, store it in object storage, and purge the document from server memory, returning memory resources to the system."),(0,n.kt)("h4",{id:"front-end-client-api"},"Front-end Client API"),(0,n.kt)("p",null,"Whilst the conflict-free data types provided by Yjs come with a primitive API, they require the developer to have some knowledge of the underlying data model to use them optimally."),(0,n.kt)("p",null,"In line with our design philosophy of seamlessly integrating into developers\u2019 existing workflow, we created a JavaScript client API wrapper with sensible defaults and intuitive abstractions, through which a developer interacts with Symphony\u2019s components."),(0,n.kt)("p",null,"The client exposes the conflict-free data structures, including a ",(0,n.kt)("inlineCode",{parentName:"p"},"SyncedList")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"SyncedMap"),", which are composed to form a distributed document model. Importantly, the underlying communication and persistence infrastructure is abstracted away, allowing the application developer to remain at a familiar level of abstraction."),(0,n.kt)("p",null,"The client internally implements additional quality-of-life improvements for the developer, providing an enhanced developer experience. These include:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Implementing performance optimizations such as automatic bulk-insertion of updates, which significantly reduces memory consumption."),(0,n.kt)("li",{parentName:"ul"},"Automatically converting between CRDT and plain JS objects where it is logical to do so, so that developers do not need to convert manually."),(0,n.kt)("li",{parentName:"ul"},"Providing undo/redo functionality with a History API. This allows undo/redo functionality to be manually paused and resumed."),(0,n.kt)("li",{parentName:"ul"},"Convenience iterator methods on ",(0,n.kt)("inlineCode",{parentName:"li"},"SyncedList"),", including ",(0,n.kt)("inlineCode",{parentName:"li"},"filter"),", ",(0,n.kt)("inlineCode",{parentName:"li"},"map"),", and ",(0,n.kt)("inlineCode",{parentName:"li"},"find"),", allowing it to be used more like a regular JavaScript Array.")),
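(0,n.kt)("p",null,"Putting these pieces together, a minimal integration might look something like the sketch below (abridged and illustrative- the WebSocket URL, room name, and write method shown are placeholders rather than a definitive reference):"),(0,n.kt)("pre",null,(0,n.kt)("code",{parentName:"pre",className:"language-js"},"import { SymphonyClient } from '@symphony-rtc/client'\n\nconst client = new SymphonyClient('ws://localhost:8080') // placeholder URL\nconst room = client.enter('design-review')\n\n// Top-level shared state for the room.\nconst shapes = room.newList('shapes')\nconst settings = room.newMap('settings')\n\n// React to changes made by any collaborator.\nroom.subscribe(shapes, () => render(shapes.map((shape) => shape.type))) // render() is the host app's own function\nroom.subscribe('others', () => renderCursors(room.getOthers()))\n\nshapes.push({ type: 'rectangle', x: 0, y: 0 }) // assumes a push-style insert; see the API reference for the full list API\nroom.updatePresence({ cursor: { x: 120, y: 80 } })\n")),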
(0,n.kt)("p",null,"The full feature set provided by the Symphony client is described in our ",(0,n.kt)("a",{parentName:"p",href:"/api/client"},"API documentation"),"."),(0,n.kt)("h2",{id:"load-testing"},"Load Testing"),(0,n.kt)("p",null,"Once Symphony\u2019s core functionality was operational, developers were able to easily create real-time collaborative applications."),(0,n.kt)("p",null,"However, the current architecture is limited."),(0,n.kt)("p",null,"The responsibility for creating, maintaining and updating state in memory for all rooms, handling user WebSocket connections, and serializing/deserializing state all falls to a single server. In other words, the system has a single point of failure."),(0,n.kt)("p",null,"Also, since the single server is responsible for handling all collaborative sessions and supporting the additional memory overhead resulting from our use of CRDTs, we hypothesised that whilst this architecture is suitable for a small number of rooms, it would not suffice in real-world applications that would typically have thousands of concurrent users."),(0,n.kt)("p",null,"To verify this empirically, we turned to load testing the system. This would also allow us to determine the system\u2019s service level objectives (SLOs), including the concurrent user limit, and identify potential bottlenecks such as compute or memory, which would later inform our scaling strategy."),(0,n.kt)("h3",{id:"constructing-a-test-environment"},"Constructing a Test Environment"),(0,n.kt)("p",null,"We first needed a way to establish a large number of virtual user connections to the server, each of which sends state updates and broadcasts presence."),(0,n.kt)("p",null,"To do this, we wrote a program which spawned N separate processes, where each process modelled a virtual user connecting to the server. Since creating a large number of virtual users and propagating updates proved to be CPU intensive, we provisioned multiple EC2 instances to execute the script concurrently."),(0,n.kt)("p",null,"For the test itself, we selected the following load parameters."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Load testing parameters",src:a(5778).Z,width:"845",height:"526"})),(0,n.kt)("p",null,"A single room server with 1 vCPU and 4GB of memory handled 240 virtual users with 4 users per room, resulting in a total of 60 rooms, with one state update per second and 5 presence updates per second, for a period of 30 minutes."),(0,n.kt)("p",null,"While the rates of document and presence updates would vary widely depending on the specific use case, we felt that these were reasonable values to model real-world usage (in comparison, Liveblocks\u2019 default settings throttle user updates to 10 per second)."),(0,n.kt)("p",null,"Using AWS CloudWatch, we instrumented our server to extract application-level and system-level metrics, including the total number of WebSocket connections and CPU/memory usage."),(0,n.kt)("p",null,"We observed CPU usage steadily increase as a function of the number of connected virtual users. Once all connections were established, CPU usage had reached 92%. 
As the in-memory document size grew as a result of user updates, CPU usage peaked at 94% before we detected performance degradation in the form of dropped connections."),(0,n.kt)("p",null,"The results confirmed our hypothesis- our current architecture could only handle a few hundred concurrent users for 30 minutes of real-world usage before failing."),(0,n.kt)("p",null,"It would be possible to vertically scale the server with greater compute and memory. However, this approach is not optimal. Firstly, the architecture would continue to have a single point of failure. Secondly, scaling would be hard-capped by the maximum instance size offered by AWS."),(0,n.kt)("p",null,"For these reasons, we decided to explore horizontal scaling, which means increasing the number rather than the size of our servers. This would make our system capable of handling more users, while also being more resilient to server failures."),(0,n.kt)("h2",{id:"scaling"},"Scaling"),(0,n.kt)("h3",{id:"looking-to-existing-solutions"},"Looking to Existing Solutions"),(0,n.kt)("p",null,"Horizontally scaling the Symphony room server is not trivial. Unlike stateless services, which can be scaled simply by adding more instances, clients connect to the room server via persistent WebSocket connections, which are stateful. This means that clients who connect to the same room may be connected to different room server instances. This raises two problems."),(0,n.kt)("p",null,"The first problem is that if a client connected to a given server instance makes an update to the document of a particular room on that server, then this update must be propagated to the other servers which hold that room document in memory; otherwise, those replicas will not receive the update and the state will diverge."),(0,n.kt)("p",null,"The second problem arises when a client attempts to connect to an already active room. It\u2019s possible that the connecting client may be routed to a server instance which does not have the document in memory- in which case the server needs a way of retrieving the most recently updated document from another server."),(0,n.kt)("h3",{id:"redis-pubsub"},"Redis Pub/Sub"),(0,n.kt)("p",null,"The first problem is not unique to the Symphony room server. One common pattern to ensure updates on one server are propagated to other servers is to add a backplane, a shared component that facilitates the synchronization of data across multiple server instances."),(0,n.kt)("p",null,"A popular backplane is a Redis node, where each server connects to Redis channels, i.e. to a \u2018publish\u2019 channel to send all updates received by the server from connected clients and to a \u2018subscribe\u2019 channel to receive all updates published by other servers. This publish-subscribe mechanism ensures that when a client updates a room document on a particular server, the update is broadcast to all other servers- if a receiving server has the corresponding room document in memory, it can apply the update locally, ensuring that the document replicas of a given room remain synchronised."),
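(0,n.kt)("p",null,"As a sketch (using ioredis and a hypothetical in-memory room registry- not Symphony\u2019s exact implementation), the backplane pattern looks roughly like this:"),(0,n.kt)("pre",null,(0,n.kt)("code",{parentName:"pre",className:"language-js"},"import Redis from 'ioredis'\nimport * as Y from 'yjs'\n\nconst pub = new Redis()\nconst sub = new Redis()\nsub.subscribe('room-updates')\n\n// Broadcast updates produced by clients connected to this server...\nfunction onLocalUpdate(roomId, update) {\n  const payload = { roomId, update: Buffer.from(update).toString('base64') }\n  pub.publish('room-updates', JSON.stringify(payload))\n}\n\n// ...and apply updates produced elsewhere, if this server holds the room in memory.\nsub.on('message', (channel, message) => {\n  const { roomId, update } = JSON.parse(message)\n  const doc = roomsInMemory.get(roomId) // hypothetical Map of roomId -> Y.Doc\n  if (doc) Y.applyUpdate(doc, Buffer.from(update, 'base64'))\n})\n")),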
(0,n.kt)("h4",{id:"querying-for-documents"},"Querying for Documents"),(0,n.kt)("p",null,"One way of solving the second problem, namely that the document of an active room may be missing from the particular server instance that a client connects to, is to retain copies of every document on each server. However, this nullifies the benefit of scaling since the memory demands on each server are not reduced."),(0,n.kt)("p",null,"Instead, we implemented a system where a server could query another server instance that had the required document in memory. For this, we maintain a key-value mapping of room IDs to room server IP addresses, which defines which room documents are present on which room servers. We chose AWS DynamoDB, a NoSQL key-value database, to store this data."),(0,n.kt)("p",null,"When a client connects to a room and is routed to a server that does not have the corresponding document in memory, the server queries DynamoDB for the list of server IP addresses that are handling that room."),
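(0,n.kt)("p",null,"A sketch of that lookup with the AWS SDK (the table and attribute names here are illustrative, not Symphony\u2019s actual schema):"),(0,n.kt)("pre",null,(0,n.kt)("code",{parentName:"pre",className:"language-js"},"import { DynamoDBClient, GetItemCommand } from '@aws-sdk/client-dynamodb'\n\nconst db = new DynamoDBClient({})\n\n// Returns the IP addresses of the room servers currently holding this room, if any.\nasync function lookUpRoomServers(roomId) {\n  const { Item } = await db.send(new GetItemCommand({\n    TableName: 'room-locations',\n    Key: { roomId: { S: roomId } },\n  }))\n  return Item?.serverIps?.SS ?? [] // e.g. ['10.0.1.12', '10.0.3.7'] when the room is active\n}\n")),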
(0,n.kt)("p",null,"If one or more IP addresses are returned, it means that the room is active and thus the latest version of the document is the one currently being edited on one or more other servers. Using one of the returned IP addresses, the server retrieves the document from the corresponding server. If no IP addresses are returned, the room is not active and the latest version of the document is simply retrieved from object storage. Once the querying server has retrieved the document, it subscribes to Redis to receive all future document updates."),(0,n.kt)("p",null,"This solution ensured that clients could access a room document via any server instance, without having to replicate all active room documents on every server."),(0,n.kt)("h4",{id:"adding-and-removing-instances"},"Adding and Removing Instances"),(0,n.kt)("p",null,"Since the single-server load test had identified CPU utilisation as a notable bottleneck, we set our scaling policy to target 50% CPU utilisation. This means that the system will scale out when CPU usage of any server exceeds that limit and scale in when it falls below that number."),(0,n.kt)("h4",{id:"evaluating-the-current-scaling-solution"},"Evaluating the Current Scaling Solution"),(0,n.kt)("p",null,"The chosen scaling solution represents a significant improvement over the single-server approach. It can support a larger number of concurrent users by elastically deploying room server instances. However, while this architecture has historically been the most commonly prescribed approach for scaling WebSocket-based stateful services, we found a number of significant limitations specific to our use case during load testing."),(0,n.kt)("p",null,"When multiple clients attempted to join a particular room, they were often routed to different server instances. When the number of users in each room approached the number of server instances, it would invariably lead to copies of the document being present on every server. This nullified the benefits of scaling, since the intended decrease in memory overhead never materialised. This duplication was also expensive, since it led to extraneous CPU usage as a result of updates having to be broadcast and applied at every replica. This in turn resulted in more server instances being provisioned and additional load on the Redis node. In fact, the Redis node approached 90% CPU utilisation at a few thousand concurrent users and represented a single point of failure."),(0,n.kt)("p",null,"These findings led us to rethink the suitability of our current architecture for our use case."),(0,n.kt)("h2",{id:"a-better-scaling-solution"},"A Better Scaling Solution"),(0,n.kt)("p",null,"Upon reflection, there are two primary problems with the Pub-Sub architecture."),(0,n.kt)("p",null,"The first is that there is unnecessary duplication of documents across multiple server instances. The second is that the Redis node constitutes a single point of failure."),(0,n.kt)("p",null,"To overcome these limitations, we took inspiration from Figma."),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("strong",{parentName:"p"},"\u201cOur servers currently spin up a separate process for each multiplayer document which everyone editing that document connects to.\u201d - ",(0,n.kt)("em",{parentName:"strong"},"Evan Wallace, CTO, Figma")))),(0,n.kt)("p",null,"This approach has the advantage of keeping document state confined to a single process. This means that there is no longer a need for distributed document state, eliminating the difficulties in horizontally scaling a stateful service. Further, each process/room can be scaled independently of the others, resulting in minimised cost and efficient utilisation of system resources."),(0,n.kt)("p",null,"This improved architecture has the following requirements:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Isolating each process/room from other rooms running on the same host."),(0,n.kt)("li",{parentName:"ul"},"Dynamically orchestrating process creation, execution, and termination. Processes should also automatically be restarted in case of crashes."),(0,n.kt)("li",{parentName:"ul"},"Autoscaling processes according to a specified scaling metric- in our case, this would likely be CPU or memory utilisation."),(0,n.kt)("li",{parentName:"ul"},"Proxying requests to the correct service.")),(0,n.kt)("h3",{id:"implementation"},"Implementation"),(0,n.kt)("p",null,"We arrived at the following high-level architecture."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Architecture overview",src:a(1029).Z,width:"1611",height:"587"})),(0,n.kt)("p",null,"A client sends a request to connect to a room via WebSocket. As before, the client specifies the room to connect to by including the room ID in the URL path. The request is intercepted by a proxy server. The proxy server extracts the room ID and queries a database to check if a room process with that ID is active. If there isn\u2019t one, the server requests that a process, uniquely identified by the room ID, be started. Once a process with the requested ID is running and ready to accept requests, a key-value record mapping the room ID to the IP address of the process is added to the database, the server proxies the client request to the relevant process, and the standard collaboration session described earlier can begin."),(0,n.kt)("p",null,"When the last remaining client disconnects from the room, the process waits for a predefined grace period, after which the process is terminated. 
The corresponding process record is removed from the database."),(0,n.kt)("p",null,"With an overall direction in mind, we then explored different options for each component of our core architecture."),(0,n.kt)("h4",{id:"isolating-room-processes"},"Isolating Room Processes"),(0,n.kt)("p",null,"To execute isolated room server processes, we had two potential choices of infrastructure: containers or virtual machines."),(0,n.kt)("p",null,"Since rooms should be ephemeral and rapidly scalable, we chose to use containers. Containers are more lightweight, resulting in shorter cold start times and faster scaling. While they are less secure than virtual machines due to having a shared kernel and not providing full hardware virtualisation, this is an acceptable tradeoff for our use case since we are running trusted code."),(0,n.kt)("p",null,"We now needed a way of efficiently orchestrating room containers."),(0,n.kt)("h4",{id:"orchestrating-and-scaling-room-processes"},"Orchestrating and Scaling Room Processes"),(0,n.kt)("p",null,"One solution was to use the AWS-native way of orchestrating containers, namely AWS Elastic Container Service (ECS), as we did in our original architecture. However, we found that this suffered from considerable vendor lock-in and would make supporting multi-cloud deployment difficult in the future. Since many developers may use other cloud providers, this went against our philosophy of integrating into existing developer workflows."),(0,n.kt)("p",null,"Instead, we chose to use ",(0,n.kt)("a",{parentName:"p",href:"https://kubernetes.io/"},"Kubernetes"),", an open-source container orchestration tool, thanks to its large community, extensive tooling, and flexibility."),(0,n.kt)("h4",{id:"serverless"},"Serverless"),(0,n.kt)("p",null,"Our next decision was whether to run containers in a serverless fashion or to have direct access to the virtual machines hosting the containers. In line with our design philosophy, we wanted to make it as easy as possible for developers to create real-time collaborative web applications without having to manage the underlying infrastructure. Moreover, we wanted our solution to be cost effective. Given these requirements, we chose a serverless model with usage-based billing, i.e. per K8s pod- this means that a developer will only be charged for the number of active rooms."),(0,n.kt)("p",null,"For hosting the cluster, we initially turned to ",(0,n.kt)("a",{parentName:"p",href:"https://aws.amazon.com/eks/"},"AWS Elastic Kubernetes Service (EKS) with Fargate"),". However, we found a number of drawbacks to it. The most significant drawback is that EKS does not provide a fully managed option- while automated cluster creation tools such as ",(0,n.kt)("inlineCode",{parentName:"p"},"eksctl")," give the illusion of a fully-managed service, they simply auto-generate the required resources and do not abstract away their existence. This means that the developer is still implicitly responsible for maintaining them and may mistakenly modify the cluster configuration."),(0,n.kt)("p",null,"EKS also has less flexibility than other solutions. For example, EKS insists that namespaces that require Fargate compute profiles must be specified before cluster creation. If namespaces are modified in the future, it means the infrastructure configuration also needs to be changed and the cluster recreated. 
Thirdly, upgrading EKS clusters can be difficult- to upgrade the Kubernetes version, service pods need to be deleted so that the underlying node is destroyed and a new one with the correct Kubernetes version is created. The lack of zero-downtime upgrades adds a further burden on developers."),(0,n.kt)("p",null,"Instead, we found that a better solution for our Kubernetes deployment was ",(0,n.kt)("a",{parentName:"p",href:"https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview"},"Google Kubernetes Engine (GKE) Autopilot"),". GKE Autopilot provides faster cluster creation, global serverless compute across all namespaces by default, and abstracts away the underlying components, such as provisioning node pools, from the developer, providing a cleaner developer experience."),(0,n.kt)("h4",{id:"proxying-requests"},"Proxying Requests"),(0,n.kt)("p",null,"When a client request to connect to a particular room is received via the Kubernetes Ingress, it is intercepted by the Symphony proxy service. This service has two requirements:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Find or create the requested room service"),(0,n.kt)("li",{parentName:"ul"},"Proxy the request to the requested room service")),(0,n.kt)("p",null,"To satisfy the first requirement, we query etcd to check if a service with a name corresponding to the room ID exists. If it doesn\u2019t, we send a request to the K8s API server to create a new room deployment where the service name is the room ID. We then poll service endpoints in etcd until the service is marked as ready. In this case, polling was justified over a more complex mechanism such as using Kubernetes Watch, since pods typically spin up within a few seconds, so polling does not add much additional load. Each service has been configured with K8s readiness and liveness probes to ensure that it is not prematurely added to the list of available service endpoints and marked as healthy before the room server is ready to accept requests."),(0,n.kt)("p",null,"As implied by the above, we decided to use etcd as the source of truth on the existence and status of services instead of keeping a service registry cached locally- this ensures the proxy service remains stateless. Since etcd is strongly consistent, it is guaranteed to represent the true state of the system when queried. By keeping the proxy service stateless, we can horizontally scale it by simply adding additional replicas without having to worry about state synchronisation. Whilst this does introduce additional latency, since we need to make network calls to etcd, we decided this was a valid tradeoff as having a stateful service would radically increase complexity."),(0,n.kt)("p",null,"Once the required room service is ready to accept requests, the server proxies the client request to it."),(0,n.kt)("h3",{id:"overview-of-the-final-architecture"},"Overview of the Final Architecture"),(0,n.kt)("p",null,"Ultimately, we settled on the following implementation for our final architecture:"),(0,n.kt)("ol",null,(0,n.kt)("li",{parentName:"ol"},"A client requests to connect to a room. 
The request is intercepted by the Symphony proxy."),(0,n.kt)("li",{parentName:"ol"},"The proxy extracts the room ID from the URL pathname and queries etcd to check if a service with that name exists."),(0,n.kt)("li",{parentName:"ol"},"If the service does not exist, a request is sent to the K8s API server to create a new room deployment where the service name is the room ID."),(0,n.kt)("li",{parentName:"ol"},"The proxy polls etcd to check if the service is ready to accept requests. Once it is, the client request is proxied to the service."),(0,n.kt)("li",{parentName:"ol"},"If the number of connections to the room remains at 0 for a specified grace period (by default 30s), the room sends a request to the K8s API server to terminate the room, returning resources back to the system.")),(0,n.kt)("p",null,"The creation of the K8s infrastructure and the required services is automated using Terraform. We use a K8s job to automate the initialization of the database schema."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Final Architecture",src:a(7770).Z,width:"3280",height:"1682"})),(0,n.kt)("h3",{id:"additional-improvements"},"Additional Improvements"),(0,n.kt)("p",null,"With our final architecture in place, there were a few additional considerations and features remaining for us to review. We wanted to make Symphony more performant, scalable, and secure. We also wanted to add features that would make it easier for developers to monitor the state of the system."),(0,n.kt)("h4",{id:"monitoring-and-visibility"},"Monitoring and Visibility"),(0,n.kt)("p",null,"In production applications, it\u2019s imperative that developers have the ability to observe the usage patterns and condition of the system."),(0,n.kt)("p",null,"To integrate observability into Symphony, we first needed a way to scrape metrics from Symphony services, particularly room servers. We sought a flexible system that would allow us to expose and inspect large volumes of custom metrics. We chose Prometheus, an open-source, industry-standard monitoring tool that provides a variety of integrations to instrument applications and a powerful query language to query and analyze scraped metrics."),(0,n.kt)("p",null,"For each room, we expose pertinent application- and system-level metrics, such as the number of active WebSocket connections, CPU usage, and memory usage, via the Prometheus client for Node.js. After provisioning the Prometheus server and configuring it to dynamically detect rooms, we deployed the Prometheus UI, which allowed us to query scraped room metrics using the Prometheus Query Language (PromQL)."),(0,n.kt)("p",null,"Whilst this provided satisfactory visibility, PromQL has a small learning curve. In line with our design philosophy of creating a developer-friendly experience, we wanted the ability to visualise these metrics in an intuitive manner."),(0,n.kt)("p",null,"To achieve this, we integrated Prometheus with Grafana, an open-source tool that is widely used for creating interactive and customizable dashboards."),(0,n.kt)("p",null,"As a final touch, we created an intuitive developer dashboard UI which provides a centralised location for the developer to monitor the system. In particular, the UI provides a visualisation of room metrics that are scraped and aggregated by Prometheus in real-time, as a collection of pre-configured Grafana dashboards."),
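(0,n.kt)("p",null,"Under the hood, the room-level metrics feeding these dashboards are exposed with the Prometheus client for Node.js- roughly along the lines of the sketch below (metric and variable names are illustrative, not Symphony\u2019s exact instrumentation):"),(0,n.kt)("pre",null,(0,n.kt)("code",{parentName:"pre",className:"language-js"},"import express from 'express'\nimport client from 'prom-client'\n\nconst connections = new client.Gauge({\n  name: 'symphony_room_active_connections',\n  help: 'Number of open WebSocket connections for this room',\n})\n\n// wss is the room server's WebSocket server instance.\nwss.on('connection', (socket) => {\n  connections.inc()\n  socket.on('close', () => connections.dec())\n})\n\n// Endpoint scraped by Prometheus.\nconst app = express()\napp.get('/metrics', async (_req, res) => {\n  res.set('Content-Type', client.register.contentType)\n  res.end(await client.register.metrics())\n})\n")),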
(0,n.kt)("p",null,"The dashboard also exposes historical metadata about each room, queried from the Cloud SQL Postgres database, such as the last time the room was active, the size of room state (bytes) per room, and the total number of rooms created (inactive + active rooms)."),(0,n.kt)("h4",{id:"reducing-pod-cold-start-time"},"Reducing Pod Cold Start Time"),(0,n.kt)("p",null,"When clients attempt to connect to a room which does not exist, the proxy must wait for the K8s scheduler to match a pod to a node and the node kubelet to run it before proxying can begin."),(0,n.kt)("p",null,"In certain cases, we noticed that room deployment took as long as 2 minutes. This was surprising, since K8s guarantees that \u201c99% of pods (with pre-pulled images) start within 5 seconds\u201d ",(0,n.kt)("sup",{parentName:"p",id:"fnref-10"},(0,n.kt)("a",{parentName:"sup",href:"#fn-10",className:"footnote-ref"},"10")),". After some investigation, we realised that the delay was introduced when the K8s scheduler had no available node to schedule the pod on. This resulted in a lengthy autoscaling operation until a new node was provisioned."),(0,n.kt)("p",null,"To mitigate this, we provisioned spare capacity using balloon pods ",(0,n.kt)("sup",{parentName:"p",id:"fnref-11"},(0,n.kt)("a",{parentName:"sup",href:"#fn-11",className:"footnote-ref"},"11")),". A balloon pod is a low-priority pod (defined using a K8s ",(0,n.kt)("inlineCode",{parentName:"p"},"PriorityClass")," resource) which reserves extra node capacity. When a room is scheduled, the balloon pod is evicted so that the room can immediately start booting. The balloon pod is then re-scheduled, continuing to reserve capacity for the next room pod."),(0,n.kt)("figure",{className:"mb-5 text-center"},(0,n.kt)("img",{src:"/img/case-study/balloon-pods.png",alt:"balloon pods"}),(0,n.kt)("figcaption",{className:"italic"},"Image from ",(0,n.kt)("a",{href:"https://wdenniss.com/gke-autopilot-spare-capacity"},"William Denniss"))),(0,n.kt)("p",null,"This reduced pod start-up times by 10x. Whilst this solution eliminated the problem of prolonged cold-start times, it is more expensive, and the \u2018always-on\u2019 balloon pods reduce the benefit of a serverless compute layer. To minimise this disadvantage, we provision only 3 balloon pods by default, where the size of each balloon pod is equal to the size of the smallest room pod."),(0,n.kt)("h4",{id:"securing-the-deployment"},"Securing the Deployment"),(0,n.kt)("p",null,"To ensure our infrastructure conformed to security best practice, we added the following configurations."),(0,n.kt)("p",null,"Firstly, we regulated access to all K8s services in line with the principle of least privilege using Role-based access control (RBAC). We also configured Workload Identity with Google Cloud Platform (GCP), which ensures that each K8s service has least privilege when accessing GCP services external to the cluster, including the database and object storage. Additionally, all non-public-facing services, including the Postgres database, were added to private subnets to prevent direct network access."),(0,n.kt)("h4",{id:"snapshotting"},"Snapshotting"),(0,n.kt)("p",null,"Currently, documents are only persisted to object storage once, immediately preceding room termination. 
This means that a process or system failure during a collaboration session could lead to irrecoverable data loss, particularly given that pods are ephemeral in K8s."),(0,n.kt)("p",null,"To mitigate this risk, we implemented checkpointing, where the in-memory document is periodically serialized and persisted to object storage. This approach does, however, lead to increased costs, since cloud storage has an operation-billing component, where developers are charged per use of the API. In order to balance the need to snapshot against the associated additional costs, we set the default snapshot interval to 30s, i.e. in the worst case, a user could lose 30s of work. We felt this was reasonable since a client also has a local copy which could be used to replay the state- in combination with snapshotting, this makes the system adequately fault-tolerant."),(0,n.kt)("h2",{id:"future-work"},"Future Work"),(0,n.kt)("p",null,"Going forward, there are additional features that we think would enhance Symphony:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Integrating authentication so that users can only interact with rooms they have access to."),(0,n.kt)("li",{parentName:"ul"},"Expanding deployment targets beyond Google Kubernetes Engine (GKE). Since Symphony is built on Kubernetes and provisioned with Terraform, we can easily add support for other providers of K8s services, including AWS EKS and Azure AKS."),(0,n.kt)("li",{parentName:"ul"},"Developing a set of React hooks and providers enabling Symphony to be used declaratively.")),(0,n.kt)("h2",{id:"references"},"References"),(0,n.kt)("div",{className:"footnotes"},(0,n.kt)("hr",{parentName:"div"}),(0,n.kt)("ol",{parentName:"div"},(0,n.kt)("li",{parentName:"ol",id:"fn-1"},(0,n.kt)("a",{parentName:"li",href:"https://en.wikipedia.org/wiki/The_Mother_of_All_Demos"},"https://en.wikipedia.org/wiki/The_Mother_of_All_Demos"),(0,n.kt)("a",{parentName:"li",href:"#fnref-1",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-2"},(0,n.kt)("a",{parentName:"li",href:"https://erikbern.com/2017/07/06/optimizing-for-iteration-speed.html"},"https://erikbern.com/2017/07/06/optimizing-for-iteration-speed.html"),(0,n.kt)("a",{parentName:"li",href:"#fnref-2",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-3"},(0,n.kt)("a",{parentName:"li",href:"https://webrtc.org/"},"https://webrtc.org/"),(0,n.kt)("a",{parentName:"li",href:"#fnref-3",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-4"},(0,n.kt)("a",{parentName:"li",href:"https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API"},"https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API"),(0,n.kt)("a",{parentName:"li",href:"#fnref-4",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-5"},(0,n.kt)("a",{parentName:"li",href:"https://svn.apache.org/repos/asf/incubator/wave/whitepapers/operational-transform/operational-transform.html"},"https://svn.apache.org/repos/asf/incubator/wave/whitepapers/operational-transform/operational-transform.html"),(0,n.kt)("a",{parentName:"li",href:"#fnref-5",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-6"},(0,n.kt)("a",{parentName:"li",href:"https://crdt.tech/"},"https://crdt.tech/"),(0,n.kt)("a",{parentName:"li",href:"#fnref-6",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-7"},(0,n.kt)("a",{parentName:"li",href:"https://arxiv.org/pdf/1805.06358"},"https://arxiv.org/pdf/1805.06358"),(0,n.kt)("a",{parentN
ame:"li",href:"#fnref-7",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-8"},(0,n.kt)("a",{parentName:"li",href:"https://www.figma.com/blog/how-figmas-multiplayer-technology-works"},"https://www.figma.com/blog/how-figmas-multiplayer-technology-works"),(0,n.kt)("a",{parentName:"li",href:"#fnref-8",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-9"},(0,n.kt)("a",{parentName:"li",href:"https://www.bartoszsypytkowski.com/crdt-optimizations/"},"https://www.bartoszsypytkowski.com/crdt-optimizations/"),(0,n.kt)("a",{parentName:"li",href:"#fnref-9",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-10"},(0,n.kt)("a",{parentName:"li",href:"https://kubernetes.io/blog/2015/09/kubernetes-performance-measurements-and/#:~:text=%E2%80%9CPod"},"https://kubernetes.io/blog/2015/09/kubernetes-performance-measurements-and/#:~:text=\u201cPod"),(0,n.kt)("a",{parentName:"li",href:"#fnref-10",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-11"},(0,n.kt)("a",{parentName:"li",href:"https://wdenniss.com/gke-autopilot-spare-capacity"},"https://wdenniss.com/gke-autopilot-spare-capacity"),(0,n.kt)("a",{parentName:"li",href:"#fnref-11",className:"footnote-backref"},"\u21a9")))))}m.isMDXComponent=!0},1029:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/architecture-overview-2d79de1ef288a1cbfe2e891d4f78a3cc.png"},1191:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/basecamp-locking-90594eff8ddab973e2e4993399111964.png"},5939:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/branch-96f9faa44059d9f43e8c2bc3c1a95054.png"},8997:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/comparing-solutions-8ad521b0cfa33d59083166ecf925ab7b.png"},4102:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/conflict-comparison-750de189201c10a7a95ca8845a464856.png"},1561:(e,t,a)=>{a.d(t,{Z:()=>o});const 
o="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA08AAADFCAIAAAADn15uAAAheklEQVR4nO3dB3Bc12Hu8b3bd7G7KIveCBCFAEECLCBIsHexyxRNiYpk1WTs2E7s5I2dOO89T16SeXEyGT87Tsaxx45s2ZYs0bYsiSoUqyiwobCAHewkQBCNqAts33eABZcrFpCQSC5x8P8NhrN79+7FWfCee75z7rl3lY6ePhUAAAAkpY50AQAAAPAAkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJCZNtIFkJzb7e5y9Ho9vkgXBACAR5eiqMxGY1SUSa1mHOr+I+3dZyLeNTa3XWlqbWxuFv/29PRGukQAAIwMGrU6IT42NTEhJSk+JSnBHhutiBiIz03p6OmLdBkk4fZ4qg4d33+g1u3xRrosAACMeKnJCfNmTB2TnhLpgox4pL37IKBSHa87t6OisqeXPyYAAPdTTlbG0rnTo23WSBdkBCPtfV4ut/u9bRV1Zy/e9lW9TmuzWLQ6zUMuFQAAI0ggoPQ6+xyOPr/ff+urOp1u+cJZ4/OyH37B5EDa+1z6XO7X3/qwubUtfKHJaBibmZaZlpKZkRJLXwQAgHvj9foam1su1jdeuNzY0NgU+PSri2aXTZtUFJmSjXBc+fLZ+fz+t97fFh71tDrtzNKSP39+/eql80qK8ol6AADcO61Wk5GaPLts8rPrVnxp/er0lKTwV7dXVJ48cyFCRRvZSHuf3Y7d1ZcaroaeJiXEv7Th8bkzpuh1ugiWCgAACaQmxYvMt2TeDI12cDZUQKV6b2tF67WOyBZsJCLtfUbnLzXUHD4WepqZlvKldcvjom0RLBIAAJKZOrHwqdVLQoHP4/W889Eu3+3m9mEIpL3Pwuv1bt65NzSfINZmfWLFQq2WmxcCAHCfZaalLJ8/M/S0ubWt+tCxIdbHrUh7n0VN7cmOru7gY41avWbZAqNBH9kiAQAgqwkFueIn9HR31eE+pyuC5RlxSHvD5vX5Kg8dCT2dWjI+JdEewfIAACC9xXOmm03G4GO3x1NTeyKy5RlZOPk4bGfOX3b0OoOP9TrdzNKSyJYHAADpGQ36GVOKt++uDD49fKJu1rQSvlftHjG2N2x1527cSLm4MI9zuAAAPASTivJ02sG7XnR3Oxqb24ZeHyGkvWG7dOXGXVcK8rIiVxAAAEYRvV6fPebGd+bWhzXHGBppb3jcbndPT2/wsVqtTk6Mj2x5AAAYPdKTk0OPW9u58d69Iu0Nj6PPGXpsiTJpNXwBLgAAD0lMtCX0ODSHHndF2hueQODGt/ZpNVzjAgDAw6NWk1s+C/5qwxN++U948gMAAHg0kfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJlpI12AEaDP6ao9Xufx+sRjp8sVttxZUXko+NhmiZpQmKtWlMgUEQAA4A5Ie3e3t+Zw5cFjty53utwVlQdDT6NtUWPSUx9iuQAAAO6OM7l3V5g79q7raNTqlMT4h1AYAACAYSHt3V1KUvzyBTOHWEGn1b349ON6vf6hFQkAAOAekfbuSUnRuOLxebd9SVFUKxfPjo+NechFAgAAuBekvXu1dG55cqL91uXTJhUV5GY99OIAAADcE9LevdJqNWuXLTQZDeELM1KT55eXRqpIAAAAd0XaG4Zom2X10nnK9dusWCzmLyxboFbzNwQAAI8uksrwjM1Mm102WTzQaDQi6kWZjZEuEQAAo4WiunFfW4V73N4z0t6wzSwtXjp/xpOrl6QnJ0a6LAAAjCJpKYkmw+AdMHLHpEe2MCOI5m//7n9FugwjjOhMpCQmxNiskS4IAACji1arKcrPMZmMM6YWF+RlM7h3j/guDQAAMGJYrVEzS4sjXYoRhjO5AAAAMiPtAQAAyIy0BwAAILNhz9vzOfzOS+6eI30PojQYEdQGxZCiM2bqdQlaRcMc2ZHH3+d31nscR/v8nkCky4LIULSKPllrytTrk3XicaSLg2HzuwKuRo/jeJ+vxx/psiBCFJU+SSdqsSFNp+juUouHl/Z83f7mP3SI3atzv+NzFBAjm9qkFlEvarwxpjzKNi1KRUsxovh6/W3vd3Uf7u2qdIgGI9LFQWSItsGYoY8aZ7BNj4qdZ6UWjyx+p799Z09XVW9XtcPb6Yt0cRAhisqQro/KM1hLzfZlNkU9VDVWOnrueZQuoGr9oPPyD5p10VpLvlEXw/W8o1JA9Cn9ffXuvivuqGJTxtcSDKm6SJcJw9D+cfflH7UofpUl16i365jNMToFPAFno7vnnNOQoc/+TrJxjD7SJcIwiJzX8JMWT6svKttgSGR0drTyq1zNnq4TvRqrOuef0sx5hiHWHU7a86tO/Pmlvjpn+ob4uOlWfRxpb5Ty9fm7T/Rd/aDd3eVNecFuf8wW6RJhGM7+zyudlY7kZbHxs62GJL1C2huV/J5A73nnlXev9Zx1pj5nT3wyNtIlwjBc/mFz20ddcWXWhPk2U7pBfbezeJBSwK9yXnFfebvtWmVPyrP21JfsQwzSD+NIH1CpnOdcKkWJn2sj6o1mGpM6KsdgyTOK2Odu9ES6OBge52V3wBuIm24xJhP1Ri+RD8xjDNFFZrEziF0i0sXB8LgaPf5ef/REszmTqDd6iQO4KV0fO8WiKKo+Ec+GnJgzvIN9/5xuRaWN0nyuAmLkUzSKyHxi3xJNRaTLguEJiFocUGnMaqZqjXaKohEH84DKTy0eaQK+QEDUYqOa6+SgsfQHOb/nLhfr0LUHAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJlpI12AR0tAFXB7PL3OviijWa/TRbo4AB4Gj9fT3euIMp
kNOn2kywIA95/MaS8QCLR0tP3HxlfTE5P/7PGnFUUJveT1eY+erXt715bJ+ePXzF0ilvj9/oaWpg/27jxbf8Hj9ZqMxryM7KXT56TGJwU3daW16ce//3Xw7WJTZqMxOzVjXObYibmFWo3GHwg0NF99ZdPGkrzCx+cu6el1iE0dPn3ipiJFW6wLS8trT59s7+5cv2hVRlJKcPm1ro43t73ndLmeX7Eu1hb9MP46kMjRU2evNrfNmzFZJ10XRXS9/u3XP3V7PTctT4y1f+2Lz/3qw7fO1l98eumawqzcYAU/f+XyL97bmByX8OUnnnW6nN979ceiagffolYrsbaYgjE5j02fq1YPntZo7Wh/95Otpy6ddXncep0+Kzl95eyFmUmpqoFav2n3tv1HDw3+SkVl1BvTE5JmT5qWm54lXq1vvvqTt35za5lFQRJj4//iyRceyF8EkMKug/srDlf1Op03LV8xc/7UgmJ/wP+Tt17z+jwvrnrSHh0bvsKFxvqNW99LjIt/YsEyqzkqtFxsbeeBfZPzixaWzjQZjKHl26t3766tWb9oZV56ljiS/MfGX4oNPrVkdZTR9EA/4KNG6rSnUon++pb9nyTZ4xdMnZmXkRV6qc/lem/3drETGPQ6kfbEgbvpWqvYCQ6fOS4eZySmHDtfd/DUsYuN9V9f/4I9OkYs7Ozp3la122w0JcTG+fw+8XTvkQMp9sSlM+auW7BcrNDR07mjZo9OqxVpT7QctWdO7jywNyHWbtQZQr/XHhPr9ni8ft/mfbtEqhP7sWpgXOHUxXObKrZNyBlnDNtHgXu0acsu8e+ZC5fWr15ij5GqtyCq0vaaPaJ+iboW3mET9SigCljNlu3Ve0TD8H/+7K9Fp0ssf2PLu6LKf3ntM8rAe7dVVYgqmSzeq1I8Pm+no3tPbU1Le9tzK9apBqLkv7/x31Unat0et+jdHT9/5sDJI6frz3/nua+Kmiu2f/zc6a1VFcn2BJPeKJ62drbrNdqaU0f/8skXROAT2xdZUzVwTkCkRkefI2lgTVFM0WOM1F8MGBHONVwS4Uyv1UWZzGrlxqSyLkePqNEilu2prW5qb51SUDyreGr4oLt410eVu8Zn5a2ctTA87X2472ORIC9ebZg+YXJ42jt9+cKOmr3zp8zIScsUNX3XoUrRxD8xf5mKtCcR0UYExH7T0d0lDtmhtCeWNLe3in5AYGAFscTpdounlccPZqVk/OnjG6Ittobmq6999LbIcyKBiTAX3JZWqynOLXhuxRP+QKC9u0s0DG/v+kjslOUTpoijfCAQ3Frg+u/226KsL616Mj0xJVQgvU6XlpgsmqgP9uysOFy9fOaC5LgEsXOLHCkap6XT5pgMhps/BHBvOjq7X33z3ZWL5ubnZEa6LPeTqFbZqelf++LzWs2N45U4mmvUmoWl5e9+snXf0YMH646VFkw8fv60aAnSk1KXlc8X0TBYI7NTM7++/nnRnIhOWkNL03/+7pdvbN20vHyB6Lb1DwYc3JcYa//OC1+Nj45r7+78xaaN+48d+mDvzmAcFO8X23lu+brs1AxxuGjv6hRHkl0HK0Vf8ZsbXk6NT/z2l74iVhNbfmPLpj1Ha555bG1u+pj+4hnptgFD8Q9Uz9VzFk/KG68PC3Ppicki2wW7c06Xa2fNnpLcAkP04Aqiudx9uMrR1xdQ3WhthcbW5rqL58S76i6dq29qjLXYNAPdP1UwBwy2zgPv6X8UUI0+cqe9QaILXnGoasOS1bHW/mEPl9stYlx7V4deP7gDOd3OTw5VirZk3cIVMydOFcd3cch2ul3/9pufir5CMO0JosGwx8ROHjdBNXB8H5OcerbhouhJ1J45ucSecOvvFbtsYXZubnrWzS8EVHMnT/9g744dNfvWL1whtrD/+KG8jOxpRSUP7G8Amel1WrenfzDJ5fa89eG26VMmzp0xVR02EjbSiY5TSf54vfbm89SiRj/92Jp/+NkPX9/8x+Kcca9tfrujp+vLa59JiLWHvdci6mzwrzExx115/NDWyoqTF88kxJZtqawQR4MnF6+eU1IWTIc+n/9v/vOfP9z3cTDtqfrP3yoFWTnjs/PEY5/PlxKf+MmhKpEvxVOz0TR5XJFqYGbI9uo94leMyxxbnFfwcP4mkIwIIB6PZ7TNF89KyRBV26i/4zCH6H1daWmOsdpE7048rT5RK/psov29abWqE7Wi7mckpjS2NVefrM3LyIoymR9s0UeaUZH2/H5/07WW3YdrVs1eqBo4ffPR/l1K2NCxx+s9U39RtApzSqYFzxaJoDY5v0i0JWLHEuvfui+KPS8tIXnBlPKfvv36+SuXh1Ueizlq/pQZonn4+MDessLiHTV7A37/4rLZ4YPSI1dPT2+v06nTjYpd6xERCChhj1X7ao40NrU+/th8s0n+EaZZxaVTxhUdrDv20z++LkJY0dj8+VNn3CnpGvT6oux8kfZEnZ0zqezEhTN6nX5J2exgrRf/lhZOTI5LqG9q7OnrNd8yPqfRaLJTM4wGQ0v7NZH8QoMHwOe3a2915eHji2eVTZ5Ih+GGju6uPUeqs1LTrGaLP+DfWlXh6Ou7aR3RT9tdW+32ep9asvqVTW/uqa3+wtylpL2byN8k67Ta3PTsi431m/d9vKx8rtgtjp6tq2++OjG3QPTvg+uIjkJ3b0+KPSHaYg0uEcf9KJMpISa2paO9p89x256HTqvLSk0XUbLT0XWn3x4cPb6JVqPJTE6dXVK6rXr3b7e8IzorWakZM4un3o+PG2GdXT0VVYeOnDgd6YKMdhfrG1958521jy1ITb7NqPOI4/Z4Oru7dNfH9tRqxRY1WFUtJvMzy9Z++0f/951Ptva6+p5dtjbGYrvTdkS/7vj5OtXAYKGom9e6Ouy2mOCQf5DJYEy2J1y91tLt6Lk17fW6nBWHKl1uV2p8Uug6D+C+OHyszuf1bf5479XWtiVzZmi1o6Iv4ehzdPZ0O3Wu4FPRlTLo9KEZuvmZYxtbm7ZX710xc4HFFFV36fypi+fyM7PEg/CNXG5uPFt/MSs5bcaEyRWHq4+cPXmp6UqMNVpLfyyM/GlPo9ZkpaTHWm2nLp0TO0FOWtaWyk9EBFw1a2Eo7flFl8EfMH16zqbY4QaWXLvThGvR5Ii2QbQZd1pBtCXff+1nlus9DKs5auWsRVMLJ6oGzi4tmjZrR83enQf2iTZMPB6iiRpBuh0Oot4jorvb8btNW7724gaNZsTnkrMNF//+Zz8Ijdjpdfp/+frfhqbxTcofX5idd/j08XGZY6cVliifHthrbG1+c+smsbDP1Xfywtmak0cSY+1l40v8/edtfbcOAIhunvg3dBWwODa8sunNWGuMeNzS0dbQfFVU+acWr1YkOlGOR4HVaul19oceEfuaW9rWLl9os1oiXagHbuP297dW7dZc7zstmjZ7SdnsUK0UXa/0xOTqE7WHTh+Pj4nbUb23o6frT5auEZU6fCNVxw93OboXlfY3o9OLJh09e7LqeK04GmgZ3gsjf9pT9Z85Nc+dXHbg5z94f8/O9QtXHDh1d
MLY/Ak548JWuf2cTWVwUucQ2x7qiO/z918g4nQP9lo8Pq/b6w4+Fg1VVnL6zOKpH+7dOSEnc97k6cP5QI+uKPPousrpERdvj5MjkygqRa/VKerBD3PTWHtPr6PpWosIYc3tbZ09XaK+h7/a2Nb8m81/VAYqoNPlys/IfmrJatGKDD1TOzQqLx7UXTovfqPX52touWo1Rf3V03+6uGzWff18wKc0Nre98sY7a5bOy85Mi3RZHiytRqPX6UIj5TqtNrwfpVVrlpfPP3HhzNbKCpHe9h87mJaQNCm/yKB/J7SOqKF7+k/jeqZPmGQ0GMsnTnn9o7fFknULlnEyN9yoSHs6jXZCTn5+ZrbYA/w+v8frWTFzYXiD0X/5t6Jye9zh7xL7kMvjVvrffvvRYLGC2+MRu2b4pYLhYizWF1evzxi4d5dqYLdOjI0PvRplNk8tmLijZq9ofuJsMZ/3Qz4aoq2WBbOmXapvDF40gIejobHJf0unpLR4/ILZ0+Q44ZiZnPqVdc/qrlc0jVoTnLId9Nst77S0t41NzbzQWP/rD9/61rNfDv/UogI+v/KL6v42RLFGWWKstqyUdLFC8Hpb1/XOWIjT3X8cCDtrrH5h5frs1HS31/uPP/9hl6OnOLcgdB5ZYr1OV83hYx2d3ZEuyGjR1d0T/rTP6dq4acvsssnlpSVSdNlub+n0udMKS0LXpojqeVNfLi8ja1L++P1HD73+0Tv1zVefW/FEfExceCIUtf78lXpxcDhx/vTV1mbRKxOvioUXrjZwMjfcqEh7KkWJjrKJhPcvr/644nBVdmpmWdGkbseNqqXRaGxmS3tXZ5/LGbpPjwiFTddaRZITLcRtt+rxekVfXzQGtjusIBqMsWmZt7kmd4B4o2XgPkMy9T/Eh5o+eYL4iXRBRpfv/+RX4fFaq9UuXzirKH9sBIt0f5mNpuzUjFuvyRWOnD35wd6dyXEJ337uK//6q//aVrV7QenMaYXFofbAHh27YEq50n8aWAk/9IsVRC+rrauj19lnvj6Lwx/wX2lpErUyJjSFV6WMGzN2fHaez+9fNWvRL97/3eZ9H+dlZD/gTxx523btO1Z3LtKlGNX8/sCufQeuNLV+Ydl8WVNLYmx8Vmr6ENfk6rT6ZTPmV5+o3XWwMsZqnTFhyk1zrvYfO9jd2yPa7o3b3g/OWhFNucvjrjp2uGBMjkWi5vVzkqHffy9E10F0IMakpHl93sdmzL3p6lfx6sScgvbuzvcqtgeXiF1nR80+0Y/Pz8w23e6Ox+LQX9/c+Mddm416fdHYvIfxGYB7EBdte379Spmi3hDcXs8r72681tX5pRXrinMLX1y13uHs/e933hABLrSOiHki/oqu/63tZWlhscvtfnPrpuDTgEq1ed+u5vY2UaNvvWGeRq1eM2exCIgf7vv4clPjA/1cj4IzF4Z3qwE8IGfOX+p29Ea6FJFUnFc4bkyO2+ueN2VGsj1eEzaw5/f79x45KJrjl9dseHH1+udWfFH8/OVTL9qjY/bUVvc5b756dzQbHWN7A/34xDj7N5566Wpbi+jo3/SqiP9PLl5Zd+ncax+9fe7KpZT4xItXG2pOHkmIiXtm2drgjbhUA3fVOnnhzE/ees3n87V0tF28Wl/f1Dh/6owp4yZG4jMBg0IDe3ljM1ctnmPQy/ZlryJd/fzt34afvY21RT+xYLkIZwfrjpWNL1kwtVyksdkl08TPniMH3v1k61NLVt91s8+vWFd75uTG7e9faW3OSklvaLm67+hBk8Hw5bXPqBW1P+C/af1Ee/yqWYte/eD3v9/x/jc3vHyfP+Qjpig/58DRk5EuBVSzyybF2KSdObCtuuJs/YXw2VDlxVMLxnyqs2o2GEVVLZ8wpWz8pP5TYWGTVs42XLp0taEgM2dJ2ezQF6z5/YHqE7VVxw9faKy/7TeRXmlp+uX7vzeFDSjGWG2r5yy+7ciONOROe0rwbI56YHK3XqsTXXmPxzN45lTpfzk4v0fsauOz815a89Smiq0fH9wnXlMrSlZqxoqZ8wuzckLbEiHvYmNDS/sHqoGTv2kJyWvmLlkzZ4ktyiL6Ftdv2aW+vn7/XMDwL4S5uXDB8ikqOWZWIYJio23tnV2zyybPKpsk5RSf5va2tz7+KPyjjUlNX1g687XNf9Ro1C+tfjI4m8JsNL24+snaMyde3/LO4rLZwfvhqdV3PAWWnZbx9fXP/2HH+7trq3bXVot4l56Q/CePPV6cWxhcYfAAcr0Wi009Pm/Jpt3btuz/5OmljyfFhabhKsGKL8HlzyEzS0uSEuzeW25jiwdkX01td8+nxvCMBv3qJfNystIjVaQHamAqrarqeO2huuNK2PWOCbH2nLTM683p4IwM0UDnpI0RaUw0lz6/r39ahrr/teoTh3v6HHMml8XZYsKz2tzJ0w+cOlpz8mhhdl5YLe5/n9L/7djX3qvYFj75LyM5den0uaS9kUrsC6nxSd//5neT4gZvrC8CX2jqj+gH/L9vfNceM9gbEP/NovEozMrt6Olq62gXO1ycLToxzh7sc4h9bkxy2o++9Y/BlZWBtGc1R0VHWe0xcaqBQJmdmvn9b/7vYPfCFmV9cdX6nr7elPjEOxVPbHliTsG//sXfhV+6AXwGz6xd7nS54u2xd191pBG1TFSrW69BERVWvNR/QYaiHp+dHzpw56Vnfe/r3xGdOqvZIrLXv/+Pv4+2WO90s2Xx3tnFpdmp6Z093Y2tLaK+x1psyfYEnVYbfHXD0jWLps3KTB680EoZuCXEP3/1bzxeT+jenKqBk7xPLV65sLQ8K0WehtliMZcU5Ue6FKNI7fHT4WlPRO21yxbEREs7qvfYjHkl+eO9vpu7E5lJqQa9QR/Qf/flb5iNxuAVUaK51JoG44oIfN99+ZsGvV68JFLduDE5onU2fvpLR2cVl4rWP9YWbdQbVs1eNG18ydi0TK1GE2Uy/9NXvhW6v1KISW+QaQL9bcmc9lQDTcLUgttfMWDQ6aeEvSRaC4vJnJeR1X+lrdfTf7uHsBZCdAjErjCtsPhOv0isIN4+ZdzgBkVrIfatocsmth9jtU22Fg3j8wC3Ixpm8RPpUjwQ4ig/teCOMyVCNS5EdMNKro/MqQZm5g29fdFI5KSNCfQPHrjCb+saJFoR8RO+RETA4tybv+pAvCszOS0zWfKbZeCB8vlvzByYUJC7bH65VitzA50SnzjEaIiq/3sOx912uWhtQy+lJSSLn1vXEZ2xSfnjb11HBL6SvMJb1x8NZN6ZPhtx4DboZJv2BGAIyi338AMeMqfTGXzw2LxyvjwN9508s0wAABihls4rT0qIf+6Lq4h6eBAY2wMAIMLyx44RP5EuBaTF2B4AAIDMSHsAAAAyI+0BAADIjLQHAAAgM9IeAACAzEh7AAAAMiPtAQAAyIy0BwAAIDPSHgAAgMxIewAAADIj7QEAAMiMtAcAACAz0h4AAIDMSHsAAAAyI+0BAADIjLQHAAAgM9IeAACAzEh7AAAAMiPtAQAAyIy0BwAAIDPSHgAAgMxIewAAADIj7QEAAMiM
tAcAACAz0h4AAIDMSHsAAAAyI+0BAADIjLQHAAAgs+GlPUWsHlAFvIEHVBqMGAGVP7gbqJVIFwXDFPwf80W4FHgUBIK7gUItHmGC/2MBv2iPI10URJrfc09tsfbetyi2ZMzUOy+5r+3vsc+yfp7CYaTzdPq6T/apTWp94jB2ITwKDGk6d7O3/YAjcaFNbWB0f/Tyu/wdB3sUrWJM00W6LBgeXaJOHH67TvSZMw26GA7Co1p7ZU8goDJl6lVD5r3h7CWKKnFD7MXvNV14tfna/m5DEgeIUcrn8DvOOZ0tHkuJKbo8KtLFwfDEr47urXM1/KG1s9ZhTNErmkgXCJEQcPt7zrn6GtyGVF3cYluki4PhiZ1vdRzra97S0VPXZ0o3qPWMzo5K/oDjnMtx0aU2KQlrou9n2rMvsnlbfFd+da2ztpcpf6NXQKVoVdZJ5tSX7bo4upUjTMwsi/ear/HVa90n+7pP9UW6OIiQQP8h3ZRtSP9aAiP0I451sin5GXvjr6/1N/bnXEM385CZP6CL12X+daI+VT/0isOr5IpOSdoQG7fc5jjh9LR4P0cBMYJpTGrTWL0xU99/HpCjzEijaJT4VdEx8yy9p1zuq56AP9IFQiQoWsWUrTeNNahN1OKRR9Ti2AUW2zRz71mXu8EzOHMLo4+oxeZcgzpKo9xtAG7YXToR+PR2rW5mFJNDRy9FpagVWoiRS7T0ulitrUyrCjDLe7QSNVhROEUzconAp43WWCeZVSXU4lFMrdw15wV9pgF8pX8/+yxvBPDIGDhGUJGBEYxajHtEzw4AAEBmpD0AAACZkfYAAABkRtoDAACQ2f8HUnOioWERWj0AAAAddEVYdFNvZnR3YXJlAEBsdW5hcGFpbnQvcG5nLWNvZGVj9UMZHgAAAABJRU5ErkJggg=="},7770:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/final-architecture-a62cd0c314c95be3960ac11adf5c1948.png"},5778:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/load-testing-9d89941d1c804a828252e999f5f9a50d.png"},7234:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/manual-9fd64e64f630a4a9b2b2b3dd21d8065a.png"},408:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/ot-96add64dc7ceed68eaea7e7083b21a7f.png"},1730:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/relational-model-6839055da94556f7d7fde7a1a63f3eb3.png"},4619:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/three-tier-b4304aedaa428b4ff30442eca531fb50.png"}}]); \ No newline at end of file diff --git a/assets/js/6fdce000.57fb553f.js b/assets/js/6fdce000.57fb553f.js deleted file mode 100644 index 15166d6..0000000 --- a/assets/js/6fdce000.57fb553f.js +++ /dev/null @@ -1 +0,0 @@ -"use strict";(self.webpackChunksymphony_collaboration=self.webpackChunksymphony_collaboration||[]).push([[210],{3905:(e,t,a)=>{a.d(t,{Zo:()=>d,kt:()=>m});var o=a(7294);function i(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function n(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);t&&(o=o.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,o)}return a}function r(e){for(var t=1;t=0||(i[a]=e[a]);return i}(e,t);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);for(o=0;o=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(i[a]=e[a])}return i}var l=o.createContext({}),c=function(e){var t=o.useContext(l),a=t;return e&&(a="function"==typeof e?e(t):r(r({},t),e)),a},d=function(e){var t=c(e.components);return o.createElement(l.Provider,{value:t},e.children)},h="mdxType",u={inlineCode:"code",wrapper:function(e){var t=e.children;return o.createElement(o.Fragment,{},t)}},p=o.forwardRef((function(e,t){var a=e.components,i=e.mdxType,n=e.originalType,l=e.parentName,d=s(e,["components","mdxType","originalType","parentName"]),h=c(a),p=i,m=h["".concat(l,".").concat(p)]||h[p]||u[p]||n;return a?o.createElement(m,r(r({ref:t},d),{},{components:a})):o.createElement(m,r({ref:t},d))}));function m(e,t){var a=arguments,i=t&&t.mdxType;if("string"==typeof e||i){var n=a.length,r=new Array(n);r[0]=p;var s={};for(var l in t)hasOwnProperty.call(t,l)&&(s[l]=t[l]);s.originalType=e,s[h]="string"==typeof e?e:i,r[1]=s;for(var c=2;c{a.r(t),a.d(t,{assets:()=>d,contentTitle:()=>l,default:()=>m,frontMatter:()=>s,metadata:()=>c,toc:()=>h});var o=a(7462),i=a(7294),n=a(3905);const 
r=e=>{let{center:t}=e;return 1==t?i.createElement("div",{className:"mx-auto mb-8 h-[2px] max-w-sm bg-gradient-to-r from-transparent via-[#65147c]"}):i.createElement("div",{className:"h-[2px] mb-8 max-w-sm bg-gradient-to-r from-[#c15bde] via-[#65147c]"})},s={title:"Case Study",description:"Symphony Technical Case Study - Challenges, System Design, and Engineering Decisions"},l="Case Study",c={unversionedId:"case-study",id:"case-study",title:"Case Study",description:"Symphony Technical Case Study - Challenges, System Design, and Engineering Decisions",source:"@site/docs/case-study.mdx",sourceDirName:".",slug:"/case-study",permalink:"/case-study",draft:!1,tags:[],version:"current",frontMatter:{title:"Case Study",description:"Symphony Technical Case Study - Challenges, System Design, and Engineering Decisions"}},d={},h=[{value:"Introduction",id:"introduction",level:2},{value:"Collaboration",id:"collaboration",level:2},{value:"Evolution of Web Applications",id:"evolution-of-web-applications",level:2},{value:"Introducing Real-Time",id:"introducing-real-time",level:3},{value:"WebRTC",id:"webrtc",level:4},{value:"WebSocket",id:"websocket",level:4},{value:"Conflict",id:"conflict",level:3},{value:"Methods of Conflict Resolution & Maintaining Distributed Consistency",id:"methods-of-conflict-resolution--maintaining-distributed-consistency",level:3},{value:"Operational Transformation (OT)",id:"operational-transformation-ot",level:4},{value:"Conflict Free Replicated Data Types (CRDTs)",id:"conflict-free-replicated-data-types-crdts",level:4},{value:"Custom Conflict Resolution Mechanisms (Not sure whether to include)",id:"custom-conflict-resolution-mechanisms-not-sure-whether-to-include",level:4},{value:"Choosing a Method of Conflict Resolution",id:"choosing-a-method-of-conflict-resolution",level:3},{value:"Manually Building a Real-time Collaborative Application",id:"manually-building-a-real-time-collaborative-application",level:2},{value:"Existing Solutions",id:"existing-solutions",level:3},{value:"DIY Solutions",id:"diy-solutions",level:4},{value:"Commercial Solutions",id:"commercial-solutions",level:4},{value:"A Solution for Our Use Case",id:"a-solution-for-our-use-case",level:2},{value:"Symphony",id:"symphony",level:2},{value:"Overview",id:"overview",level:3},{value:"Using Symphony",id:"using-symphony",level:3},{value:"Architecture Overview",id:"architecture-overview",level:3},{value:"Terminology",id:"terminology",level:4},{value:"Fundamental Requirements",id:"fundamental-requirements",level:4},{value:"Design Philosophy",id:"design-philosophy",level:4},{value:"Core Architecture",id:"core-architecture",level:4},{value:"Implementing the Core Architecture",id:"implementing-the-core-architecture",level:3},{value:"Conflict Resolution",id:"conflict-resolution",level:4},{value:"State Change Propagation",id:"state-change-propagation",level:4},{value:"Persisting Room Data",id:"persisting-room-data",level:4},{value:"Storing Document Data",id:"storing-document-data",level:4},{value:"Front-end Client API",id:"front-end-client-api",level:4},{value:"Load Testing",id:"load-testing",level:2},{value:"Constructing a Test Environment",id:"constructing-a-test-environment",level:3},{value:"Scaling",id:"scaling",level:2},{value:"Looking to Existing Solutions",id:"looking-to-existing-solutions",level:3},{value:"Redis Pub/Sub",id:"redis-pubsub",level:3},{value:"Querying for Documents",id:"querying-for-documents",level:4},{value:"Adding and Removing Instances",id:"adding-and-removing-instances",level:4},{value:"Evaluating the 
Current Scaling Solution",id:"evaluating-the-current-scaling-solution",level:4},{value:"A Better Scaling Solution",id:"a-better-scaling-solution",level:2},{value:"Implementation",id:"implementation",level:3},{value:"Isolating Room Processes",id:"isolating-room-processes",level:4},{value:"Orchestrating and Scaling Room Processes",id:"orchestrating-and-scaling-room-processes",level:4},{value:"Serverless",id:"serverless",level:4},{value:"Proxying Requests",id:"proxying-requests",level:4},{value:"Overview of the Final Architecture",id:"overview-of-the-final-architecture",level:3},{value:"Additional Improvements",id:"additional-improvements",level:3},{value:"Monitoring and Visibility",id:"monitoring-and-visibility",level:4},{value:"Reducing Pod Cold Start Time",id:"reducing-pod-cold-start-time",level:4},{value:"Securing the Deployment",id:"securing-the-deployment",level:4},{value:"Snapshotting",id:"snapshotting",level:4},{value:"Future Work",id:"future-work",level:2},{value:"References",id:"references",level:2}],u={toc:h},p="wrapper";function m(e){let{components:t,...i}=e;return(0,n.kt)(p,(0,o.Z)({},u,i,{components:t,mdxType:"MDXLayout"}),(0,n.kt)("h1",{id:"case-study"},"Case Study"),(0,n.kt)(r,{mdxType:"HeaderLine"}),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("em",{parentName:"p"},"\u201cAlone we can do so little; together we can do so much.\u201d - Helen Keller"))),(0,n.kt)("h2",{id:"introduction"},"Introduction"),(0,n.kt)("p",null,"Symphony is an open source framework designed to make it easy for developers to build collaborative web applications. Symphony handles the complexities of implementing collaboration, including conflict resolution and real-time infrastructure, freeing developers to focus on creating unique and engaging features for their applications."),(0,n.kt)("video",{loop:!0,playsInline:!0,muted:!0,autoPlay:!0,className:"max-w-full"},(0,n.kt)("source",{src:"/img/symphony.mp4",type:"video/mp4"})),(0,n.kt)("p",null,"In this case study, we\u2019ll discuss the challenges that arise when building collaborative experiences on the web, the limitations of traditional approaches in solving these problems, and how we designed Symphony to overcome them."),(0,n.kt)("h2",{id:"collaboration"},"Collaboration"),(0,n.kt)("p",null,"Real-time collaboration, where multiple users can concurrently work together on a common task, has been a notable feature since the earliest days of the internet. It\u2019s origin can be traced back to the 1960s, when Douglas Engelbart in his famous ",(0,n.kt)("em",{parentName:"p"},"Mother of All Demos"),", demonstrated the first real-time collaborative editor, built on the oN-Line System (NLS), that allowed users to create and edit documents, link them together, and share them with others.",(0,n.kt)("sup",{parentName:"p",id:"fnref-1"},(0,n.kt)("a",{parentName:"sup",href:"#fn-1",className:"footnote-ref"},"1"))),(0,n.kt)("p",null,"However, for much of the web\u2019s history, the majority of applications have notably been non-collaborative. 
Without the ability to work together on a common task in real-time, users have to instead enter into a tedious cycle of changing, exporting, and manually syncing or emailing copies of files."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Modify-Export-Send feedback loop",src:a(1561).Z,width:"847",height:"197"})),(0,n.kt)("p",null,"This slow feedback loop harms productivity.",(0,n.kt)("sup",{parentName:"p",id:"fnref-2"},(0,n.kt)("a",{parentName:"sup",href:"#fn-2",className:"footnote-ref"},"2"))," In other words, this workflow is sub-optimal and restrictive."),(0,n.kt)("p",null,"With the rise of remote work where users are geographically separated, the need to improve this workflow has become even more acute."),(0,n.kt)("p",null,"As noted by industry leaders, the optimal solution is for applications to allow multiple users to collaborate online in real-time."),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("strong",{parentName:"p"},(0,n.kt)("em",{parentName:"strong"},'"',"[Real-time collaboration]",' eliminates the need to export, sync, or email copies of files and allows more people to take part in the design process." - Evan Wallace, Figma')))),(0,n.kt)("p",null,"Popular products such as\xa0",(0,n.kt)("a",{parentName:"p",href:"https://www.figma.com/"},"Figma"),",\xa0",(0,n.kt)("a",{parentName:"p",href:"https://www.google.co.uk/docs/about/"},"Google Docs"),", and\xa0",(0,n.kt)("a",{parentName:"p",href:"https://code.visualstudio.com/"},"Visual Studio Code"),", incorporate this as a defining feature, allowing multiple users to concurrently modify the same state."),(0,n.kt)("p",null,"The problem is that building these types of applications is non-trivial. To understand why, we need to consider the characteristics of traditional web applications."),(0,n.kt)("h2",{id:"evolution-of-web-applications"},"Evolution of Web Applications"),(0,n.kt)("p",null,"Traditionally, the architecture of most web applications have conformed to the client-server model, where client and server communicate in a request-response cycle."),(0,n.kt)("p",null,"When a user makes a change to the client state, the change is propagated to the application server via a HTTP request, which in turn updates the database i.e. the true application state and confirms the change to the client via a response."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Three-tier Architecture",src:a(4619).Z,width:"903",height:"209"})),(0,n.kt)("p",null,"This architecture is fine for applications that are designed to be used by only one user at a time, this architecture is fine. However, for applications that seek to provide a multiplayer experience, the stateless nature of HTTP is problematic."),(0,n.kt)("p",null,"Since each state change by a given client is scoped to the request-response cycle, other user\u2019s who wish to view the change must first request the data from the server, usually by refreshing the page."),(0,n.kt)("p",null,"In situations where multiple users are frequently modifying the same state, the need for each client to constantly send requests can quickly become burdensome and inefficient."),(0,n.kt)("h3",{id:"introducing-real-time"},"Introducing Real-Time"),(0,n.kt)("p",null,"As companies began wanting to create applications that allowed multiple users to interact in realtime, the stateless nature of HTTP request-response cycle became a limitation. 
These applications such as online games, chat rooms, and social media platforms, needed to maintain updated state without requiring the user to take any specific action such as a page refresh. In other words, a different approach to data transmission was needed- one that allowed data to be shared bi-directionally between clients and/or a server in real-time."),(0,n.kt)("p",null,"In response, new web protocols were developed to help facilitate this. Two of the most popular include WebRTC and WebSocket."),(0,n.kt)("h4",{id:"webrtc"},"WebRTC"),(0,n.kt)("p",null,"Web Real-Time Communication (WebRTC) is an open-source technology that enables real-time communication between web browsers over the internet.",(0,n.kt)("sup",{parentName:"p",id:"fnref-3"},(0,n.kt)("a",{parentName:"sup",href:"#fn-3",className:"footnote-ref"},"3"))," The protocol uses a combination of JavaScript APIs and peer-to-peer networking to establish direct communication channels between browsers, without the need for a permanent, central server. UDP is used as the primary transport protocol for real-time data transmission. This makes WebRTC an especially attractive choice for collaborative applications that require very low-latency communication at the expense of reduced reliability and error correction, such as video conferencing, online gaming, and live streaming."),(0,n.kt)("h4",{id:"websocket"},"WebSocket"),(0,n.kt)("p",null,"WebSocket is a web protocol that provides a persistent, bi-directional communication channel between a client and a server over a single, long-lived TCP connection.",(0,n.kt)("sup",{parentName:"p",id:"fnref-4"},(0,n.kt)("a",{parentName:"sup",href:"#fn-4",className:"footnote-ref"},"4"))," The connection is established via a handshake between client and server. Since TCP is used as the primary transport protocol, WebSocket is a suitable choice for collaborative applications that require stronger guarantees on the reliability and security of the communication channel at the expense of higher latency, such as real-time dashboards, stock price tickers, and live chat."),(0,n.kt)("p",null,"Using technologies such as WebRTC and WebSocket, clients and/or servers are able to maintain persistent, stateful communication channels, no longer bound by the limits of the request-response cycle. As such, it permitted the development of so-called real-time applications to be built, where state updates are ",(0,n.kt)("em",{parentName:"p"},"perceived")," to be received instantaneously without page refresh."),(0,n.kt)("p",null,"Whilst it may initially seem that the addition of real-time solves the collaboration problem since multiple users can now see changes immediately, this is not the case."),(0,n.kt)("p",null,"The problem is that many real-time applications such as chat applications have the implicit constraint that each piece of state can only have a single mutable reference to it. In other words, the same piece of state cannot be modified concurrently by multiple users. 
For example, in a chat application, a given message is owned by a single user and they alone can edit it at any given time."),(0,n.kt)("p",null,"For an application to be truly collaborative, it must allow users to work together in real-time on shared state, where multiple users can modify the same piece of state ",(0,n.kt)("em",{parentName:"p"},"at the same time, without conflicts or inconsistencies.")),(0,n.kt)("p",null,"The possibility of conflict radically increases the complexity of implementing collaborative applications."),(0,n.kt)("h3",{id:"conflict"},"Conflict"),(0,n.kt)("p",null,"In the context of real-time collaborative applications, conflict refers to a situation where two or more users attempt to modify the same piece of state, without knowledge of one another (concurrently), resulting in conflict versions of that data."),(0,n.kt)("p",null,"For example, multiple users working on a shared task or document may make changes to the same part of the document at the same time. Alternatively, network delays could cause state to diverge between different users which must be reconciled."),(0,n.kt)("p",null,"We can concretely demonstrate how conflict arises using the following examples."),(0,n.kt)("p",null,"Suppose that Alice and Bob are collaborating on a text document, when both Bob and Alice attempt to write at the same spot:"),(0,n.kt)("video",{loop:!0,playsInline:!0,muted:!0,autoPlay:!0,className:"max-w-full"},(0,n.kt)("source",{src:"/img/case-study/text-editor-conflict.mp4",type:"video/mp4"})),(0,n.kt)("p",null,"When conflicts arise, Alice and Bob\u2019s modifications can be seen as branching off from the previous state of the system, creating a parallel version of the application state."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Branching",src:a(5939).Z,width:"723",height:"570"})),(0,n.kt)("p",null,"For a collaborative application, we need a method of reconciling such conflicts and enforcing distributed consistency across clients."),(0,n.kt)("figure",{className:"mb-5"},(0,n.kt)("img",{src:"/img/case-study/merge.png",alt:"merging"}),(0,n.kt)("figcaption",{className:"italic"},"The role of a conflict resolution mechanism is to merge branches in a deterministic way, until all branches have converged to a single, consistent state that all parties agree upon.")),(0,n.kt)("p",null,"In other words, after applying all user state changes, the application should deterministically converge to an eventually consistent state across the whole system that all parties agree upon."),(0,n.kt)("h3",{id:"methods-of-conflict-resolution--maintaining-distributed-consistency"},"Methods of Conflict Resolution & Maintaining Distributed Consistency"),(0,n.kt)("p",null,"Over the years, there have been multiple solutions that have been proposed to the problem of conflict resolution."),(0,n.kt)("p",null,"The simplest strategy, as mentioned previously, is to prevent conflicts from occurring in the first place. This can be implemented via locking. When a given user is making edits, the document is locked, becoming read-only to other users. In other words, we impose the constraint that only a single user can have a mutable reference to the document at any given time."),(0,n.kt)("p",null,"Thanks to its simplicity, this approach is widely used even today. 
For example, Basecamp, a web-based project management tool, employs locking to prevent conflicts:"),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Basecamp locking",src:a(1191).Z,width:"2235",height:"1057"})),(0,n.kt)("p",null,"However, as noted previously, this approach provides a very limited workflow since it solely facilitates asynchronous collaboration, where users have to implicitly arrange times when they can edit the document or work on separate documents and then merge changes."),(0,n.kt)("p",null,"For real-time, ",(0,n.kt)("em",{parentName:"p"},"synchronous")," collaboration, more advanced conflict resolution mechanisms are required."),(0,n.kt)("h4",{id:"operational-transformation-ot"},"Operational Transformation (OT)"),(0,n.kt)("p",null,"One possible approach is to use the operational transformation (OT) algorithm, famously used by Google Docs ",(0,n.kt)("sup",{parentName:"p",id:"fnref-5"},(0,n.kt)("a",{parentName:"sup",href:"#fn-5",className:"footnote-ref"},"5")),"."),(0,n.kt)("p",null,"OT represents each user\u2019s edits as a sequence of operations that can be applied to the shared application state. For example, in the case of a collaborative text editor, where the sequence of characters is zero-indexed, the operation to insert the character ",(0,n.kt)("inlineCode",{parentName:"p"},"'a'")," at the beginning of the first sentence may be represented as ",(0,n.kt)("inlineCode",{parentName:"p"},"insert('a', 0)"),"."),(0,n.kt)("p",null,"When a client makes an edit to the state, the corresponding operation is transmitted to the server, which broadcasts it to all other collaborating clients."),(0,n.kt)("p",null,"In cases where multiple users attempt to modify the same piece of state concurrently, the OT algorithm defines a set of rules, which encode how conflicting operations should be ",(0,n.kt)("em",{parentName:"p"},"transformed")," such that the operations can be applied in any order, without causing conflict."),(0,n.kt)("p",null,"For example, in the case of the collaborative text editor, two clients may attempt to concurrently insert text at the start of the document i.e. ",(0,n.kt)("inlineCode",{parentName:"p"},"O1 = insert('a', 0, 1)")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"O2 = insert('b', 0, 2)"),", where the third argument represents the client id. The transform rule may be to shift one of the insertions to the right by the length of the other insertion i.e. ",(0,n.kt)("inlineCode",{parentName:"p"},"insert('a', 0, 1)")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"T(O2) = insert('b', 1, 2)"),"."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Operational Transform",src:a(408).Z,width:"1424",height:"541"})),(0,n.kt)("p",null,"This ensures that both insertions can be applied whilst still capturing user intent and not modifying the intended meaning of the document."),(0,n.kt)("p",null,"Since OT only requires operations to be incrementally broadcast, the algorithm is efficient and has low memory overhead."),(0,n.kt)("p",null,"The problem is that OT is very complex to implement correctly. The OT algorithm assumes that every state change is captured, which, in modern rich browser environments, can be difficult to guarantee. Further, since operations have a finite transit time to the server, the states of clients naturally diverge over time from one another. The larger the divergence, the larger the number of possible combinations of operations that result in conflict, each of which must be accounted for by the transform rules. 
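To make the transform rule above concrete, here is a minimal, illustrative sketch of how two concurrent insertions can be transformed so that both clients converge. It is a toy example rather than Symphony's or Google Docs' production algorithm; the operation shape and the client-id tie-break are our own assumptions.

```js
// Toy insert-vs-insert transform rule, illustrative only.
// An operation is { char, index, clientId }.
function transformInsert(op, other) {
  // If the other insertion lands at an earlier index, shift right by its length (1 char).
  // Ties at the same index are broken deterministically by client id so that
  // both sides transform the pair in the same way.
  if (other.index < op.index || (other.index === op.index && other.clientId < op.clientId)) {
    return { ...op, index: op.index + 1 };
  }
  return op;
}

// O1 = insert('a', 0, 1) and O2 = insert('b', 0, 2) from the example above.
const O1 = { char: 'a', index: 0, clientId: 1 };
const O2 = { char: 'b', index: 0, clientId: 2 };

// Client 1 applies O1 then T(O2); client 2 applies O2 then T(O1).
console.log(transformInsert(O2, O1)); // { char: 'b', index: 1, clientId: 2 }
console.log(transformInsert(O1, O2)); // { char: 'a', index: 0, clientId: 1 }
```

Both clients end up with 'a' at index 0 and 'b' at index 1, matching the transformed operations described above.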
Since many of these conflicting combinations are very difficult to foresee, formally proving the correctness of OT is complicated and error-prone, even for the simplest of OT algorithms."),(0,n.kt)("p",null,"This sentiment is widely shared by practitioners in the field, as highlighted by Joseph Gentle, a former Google Wave engineer, and author of the ShareJS OT library, who said:"),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},"Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. ","[\u2026]"," Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time.")),(0,n.kt)("p",null,"In fact, 4 out of 8 different implementations of OT from the original 1989 paper to 2006 were found to be incorrect, missing subtle edge cases. The consequence of this incorrectness was that client state would irrevocably diverge, with no way to return to a consistent state ","[ref: CRDTS the hard parts]","."),(0,n.kt)("p",null,"The complexity of OT led researchers to find alternatives, the most promising of which are conflict-free replicated data types, or CRDTs."),(0,n.kt)("h4",{id:"conflict-free-replicated-data-types-crdts"},"Conflict Free Replicated Data Types (CRDTs)"),(0,n.kt)("p",null,"A conflict-free replicated data type (CRDT) is an abstract data type designed to be replicated at multiple processes.",(0,n.kt)("sup",{parentName:"p",id:"fnref-6"},(0,n.kt)("a",{parentName:"sup",href:"#fn-6",className:"footnote-ref"},"6"))," By definition, CRDTs have the following properties:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},(0,n.kt)("strong",{parentName:"li"},"Independent-")," Any replica can be modified without coordinating with other replicas."),(0,n.kt)("li",{parentName:"ul"},(0,n.kt)("strong",{parentName:"li"},"Strongly eventually consistent-")," When any two replicas have received the same set of updates (in any order), the mathematical properties of CRDTs guarantee that both replicas will deterministically converge to the same state ","[footnote- explain what these mathematical properties are]",".")),(0,n.kt)("p",null,"By imposing these mathematical properties on the CRDT and it\u2019s associated algorithms, clients can optimistically update their own state locally and broadcast their updates to all other remote, state replicas ","[footnote explain difference between state and operation based]",". Since CRDTs are strongly eventually consistent, upon a given remote replica receiving all updates, the remote replica is guaranteed to converge to the same state as the local replica without conflict."),(0,n.kt)("p",null,"The advantage of CRDTs is that they are guaranteed to be conflict-free, as long as the required mathematical properties are imposed. Since these mathematical properties are well-defined, it is easier to prove the correctness of a CRDT than any corresponding OT implementation. Further, since each replica is independent and that CRDTs make no assumption about the network topology, CRDTS are partition tolerant by default and can be used in a variety of network topologies including client-server and P2P. This property also means they are offline-capable by default."),(0,n.kt)("p",null,"However, the mathematical constraints of CRDTs, in particular that operations should be commutative adds some unavoidable overhead. Most commonly-used data structures do not have commutative operations by default. 
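For intuition, the simplest CRDTs are built from operations that already commute. A grow-only counter is the canonical toy example- each replica only ever increments its own slot, and merging takes an element-wise maximum, so replicas that have seen the same updates converge regardless of order. The sketch below is purely illustrative and is not part of Symphony's data model.

```js
// Toy state-based grow-only counter (G-Counter), illustrative only.
class GCounter {
  constructor(replicaId) {
    this.replicaId = replicaId;
    this.counts = {}; // replicaId -> count
  }
  increment() {
    this.counts[this.replicaId] = (this.counts[this.replicaId] || 0) + 1;
  }
  // Merge is commutative, associative, and idempotent: element-wise max.
  merge(other) {
    for (const [id, count] of Object.entries(other.counts)) {
      this.counts[id] = Math.max(this.counts[id] || 0, count);
    }
  }
  value() {
    return Object.values(this.counts).reduce((sum, n) => sum + n, 0);
  }
}

const a = new GCounter('alice');
const b = new GCounter('bob');
a.increment(); a.increment(); // alice: 2
b.increment();                // bob: 1
a.merge(b); b.merge(a);       // merge in either order
console.log(a.value(), b.value()); // 3 3
```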
For example, the ",(0,n.kt)("inlineCode",{parentName:"p"},"add")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"remove")," operations of a Set are not naturally commutative. To ensure commutativity, the CRDT must retain additional metadata.",(0,n.kt)("sup",{parentName:"p",id:"fnref-7"},(0,n.kt)("a",{parentName:"sup",href:"#fn-7",className:"footnote-ref"},"7"))),(0,n.kt)("p",null,"For example, in the case of the ",(0,n.kt)("inlineCode",{parentName:"p"},"add")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"remove")," operations of a Set, tombstones are typically used as placeholders for removed entries- if a replica receives a ",(0,n.kt)("inlineCode",{parentName:"p"},"remove")," operation for an element before it receives the ",(0,n.kt)("inlineCode",{parentName:"p"},"add")," operation that actually added the element, the tombstone ensures that the ",(0,n.kt)("inlineCode",{parentName:"p"},"remove")," operation is still correctly processed. Since the metadata must be retained for the required mathematical properties to be upheld, the use of CRDTs inevitably results in additional memory overhead, which can become significant for large state. As noted by Jospeh Gentle:"),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("strong",{parentName:"p"},(0,n.kt)("em",{parentName:"strong"},'"Because of how CRDTs work, documents grow without bound. \u2026 Can you ever delete that data? Probably not. And that data can\u2019t just sit on disk. It needs to be loaded into memory to handle edits." - Joseph Gentle, former Google Wave engineer')))),(0,n.kt)("p",null,"While recent research has sought to introduce garbage-collection methods to reduce the amount of metadata, there is still significant additional memory overhead when using CRDTs to represent a data model."),(0,n.kt)("h4",{id:"custom-conflict-resolution-mechanisms-not-sure-whether-to-include"},"Custom Conflict Resolution Mechanisms (Not sure whether to include)"),(0,n.kt)("p",null,"Whilst OT and CRDTs represent the most popular approaches to conflict-resolution, the complexity of OT and the memory overhead of CRDTs can sometimes be unacceptable for certain use-cases. As such, some choose to create custom, proprietary data models that are inspired by the OT and CRDT approaches and are highly specialised to a particular use-case."),(0,n.kt)("p",null,"For example, Figma relax many of the constraints imposed by CRDTs by adopting much simpler conflict-resolution semantics. In particular, they use simple last-write wins semantics when two clients try to modify a value of a Figma object concurrently. This works great for Figma objects where changes are mutually exclusive i.e. a single value must be chosen, but would fail if used for text editing. In Figma\u2019s case, this was a valid tradeoff for their use case but would not be a suitable model for other applications.",(0,n.kt)("sup",{parentName:"p",id:"fnref-8"},(0,n.kt)("a",{parentName:"sup",href:"#fn-8",className:"footnote-ref"},"8"))),(0,n.kt)("p",null,"The advantage of implementing a custom conflict-free data model is that the mechanism can be made highly-specialised to the target use-case. This can mean that many of the constraints that come with OT and CRDTs can be relaxed which may result in a simpler and efficient data representation. However, developing a custom model can be potentially risky since it requires a number of assumptions to be made about the use-case. 
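To illustrate how simple such last-write-wins semantics can be, here is a hypothetical sketch of an LWW register in which the write carrying the highest timestamp wins and ties are broken deterministically by client id. This is not Figma's actual implementation; the class, field, and client names are our own.

```js
// Hypothetical last-write-wins (LWW) register, illustrative only.
class LWWRegister {
  constructor(clientId) {
    this.clientId = clientId;
    this.value = undefined;
    this.timestamp = 0;
    this.writer = ''; // client id of the last accepted write
  }
  set(value, timestamp) {
    this.apply({ value, timestamp, writer: this.clientId });
  }
  // Called for local writes and for writes received from other clients.
  apply(write) {
    const isNewer =
      write.timestamp > this.timestamp ||
      (write.timestamp === this.timestamp && write.writer > this.writer);
    if (isNewer) {
      this.value = write.value;
      this.timestamp = write.timestamp;
      this.writer = write.writer;
    }
  }
}

const reg = new LWWRegister('client-a');
reg.set('#ff0000', 1); // local write
reg.apply({ value: '#00ff00', timestamp: 1, writer: 'client-b' }); // concurrent remote write
console.log(reg.value); // '#00ff00' -- 'client-b' wins the deterministic tie-break
```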
In Figma\u2019s case, for example, introducing text-editing may require significant changes to their current conflict-resolution semantics."),(0,n.kt)("h3",{id:"choosing-a-method-of-conflict-resolution"},"Choosing a Method of Conflict Resolution"),(0,n.kt)("p",null,"When choosing a conflict-resolution mechanism, there is no single best, one-size fits all solution. Each conflict-resolution mechanism has it\u2019s own set of tradeoffs and choosing a particular approach requires a deep understanding of the usage pattern of the target application."),(0,n.kt)("p",null,"Some aspects of the target application that should be considered include:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"What CAP (Consistency, Availability, Partition-tolerance) properties should the system have?"),(0,n.kt)("li",{parentName:"ul"},"What is the application architecture? Client-server? P2P?"),(0,n.kt)("li",{parentName:"ul"},"Is the system required to operate offline?"),(0,n.kt)("li",{parentName:"ul"},"Are there any system-level constraints including CPU/memory limits?"),(0,n.kt)("li",{parentName:"ul"},"Is the data model generic or highly specialised to a particular use-case?")),(0,n.kt)("p",null,"Answering these questions influences the suitability of each conflict resolution mechanism to a specific use-case."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Conflict resolution mechanisms comparison",src:a(4102).Z,width:"1140",height:"762"})),(0,n.kt)("h2",{id:"manually-building-a-real-time-collaborative-application"},"Manually Building a Real-time Collaborative Application"),(0,n.kt)("p",null,"Building a collaborative application from scratch can be time-consuming and difficult, particularly when dealing with the intricacies of real-time infrastructure and conflict-resolution mechanisms. It means that creating rich, collaborative experiences on the web has traditionally only been open to companies with the human and financial resources to roll their own solutions. For smaller teams of modest means, who may lack familiarity with these specialised topics, implementing such systems has remained out of reach. Provided below is a sample list of tasks involved in creating a production-ready real-time collaborative web application:"),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Manually building collaborative application",src:a(7234).Z,width:"1179",height:"1063"})),(0,n.kt)("p",null,"As a result, solutions have started to emerge that lower this barrier."),(0,n.kt)("h3",{id:"existing-solutions"},"Existing Solutions"),(0,n.kt)("p",null,"Existing solutions typically fall into two categories: DIY solutions and commercial solutions."),(0,n.kt)("h4",{id:"diy-solutions"},"DIY Solutions"),(0,n.kt)("p",null,"For organisations who have complex, specialised requirements for their collaborative functionality or want to tightly integrate with existing infrastructure, a DIY solution might be the best fit. This involves manually synthesising the various components required for a real-time collaborative application."),(0,n.kt)("p",null,"There are numerous open-source libraries providing implementations of popular conflict-resolution algorithms- teams would likely need to research, choose, and integrate the solution that best fits their use case. 
Alternatively, a bespoke solution may be best suited for highly specialised applications."),(0,n.kt)("p",null,"For the real-time network and persistence layer which handles the propagation of updates to collaborating clients and/or server(s) and storing of state, one could use a backend-as-a-service such as ",(0,n.kt)("a",{parentName:"p",href:"https://ably.com/"},"Ably"),", ",(0,n.kt)("a",{parentName:"p",href:"https://pusher.com/"},"Pusher"),", or ",(0,n.kt)("a",{parentName:"p",href:"https://www.pubnub.com/"},"PubNub")," or provision a custom implementation using open-source libraries like ws or ",(0,n.kt)("a",{parentName:"p",href:"https://peerjs.com/"},"PeerJS")," on cloud infrastructure."),(0,n.kt)("p",null,"Whilst the DIY approach offers a high degree of customisation, it does require developers to have a high-level of proficiency in the relevant technologies. Thus, less experienced teams might reach for a Software-as-a-Service (SaaS) product to help manage their collaborative functionality needs."),(0,n.kt)("h4",{id:"commercial-solutions"},"Commercial Solutions"),(0,n.kt)("p",null,"The advent of commercial offerings providing Collaboration as a Service is a relatively recent phenomenon."),(0,n.kt)("p",null,"One of the most popular solutions, released in 2021, is ",(0,n.kt)("a",{parentName:"p",href:"https://liveblocks.io/"},"Liveblocks"),". Whilst not as flexible as the DIY approach, Liveblocks provides a great developer experience, exposing all the components required for adding real-time collaboration to an application through an intuitive client API. This includes a collection of custom CRDT-like data types, autoscaling real-time infrastructure with persistence, and a developer dashboard for easily monitoring usage patterns. However, this convenience comes at a cost, with Liveblocks charging $299 per month for an application with up to 2000 monthly active users (MAU) ","[ref: valid as of May 2023]","."),(0,n.kt)("p",null,"A compelling alternative is ",(0,n.kt)("a",{parentName:"p",href:"https://fluidframework.com/"},"Fluid Framework")," developed by Microsoft. Fluid provides a collection of client libraries that also expose custom CRDT-like distributed data structures. The client libraries connect to an implementation of the Fluid service, a runtime which handles the complexities of propagating updates in real-time and persisting state. Whilst Fluid is open-source, it provides a very limited implementation of the Fluid service by default, capable of handling only 100s of concurrent users. For larger applications, developers are forced to use either the Azure Managed Service or write a custom scaled implementation."),(0,n.kt)("h2",{id:"a-solution-for-our-use-case"},"A Solution for Our Use Case"),(0,n.kt)("p",null,"Looking at the above solutions, it is clear that until now, developers who want to incorporate collaboration into their products have been to partially or fully roll their own solutions or turn to a closed-source, managed provider."),(0,n.kt)("p",null,"The first option has significant implementation cost, particularly given that the expertise require to develop collaborative functionality is often orthogonal to the businesses\u2019 core offering. 
The latter option suffers from vendor lock-in and can attract considerable expense, as noted with Liveblocks."),(0,n.kt)("p",null,"Following this, we wanted to build a tool for small teams that want to add collaborative functionality to their applications without having to spend time implementing and deploying their own conflict resolution and real-time infrastructure."),(0,n.kt)("p",null,"Further, we want to make our framework open-source, scalable and fully self-hosted so that developers have complete control of code and data ownership."),(0,n.kt)("p",null,"With globalisation and the rise of remote work, providing seamless web-native collaboration is no longer the preserve of the largest companies. Smaller teams increasingly want to reap the benefits of fast collaborative feedback loops in their products."),(0,n.kt)("p",null,"An example of this is Propellor Aero, who wanted the ability to collaborate with their customers on 3D interactive site survey maps."),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("strong",{parentName:"p"},"\u201cWe started looking at building a service ourselves\u2026 We really didn't want to because it's a whole lot of work and it's a really difficult problem. This was a very new problem to us, our engineering team had different levels of experience in synchronisation in real-time as a whole.\u201d ",(0,n.kt)("em",{parentName:"strong"},"- Jye Lewis, Engineering Manager, Propellor Aero")))),(0,n.kt)("p",null,"We sought to assist companies with similar profiles in adding collaborative functionality to their web application."),(0,n.kt)("p",null,"The availability of an open-source tool which handles the complexities of implementing collaboration, including conflict resolution and real-time infrastructure, would free Propellor Aero developers to focus on creating features that have direct business value, whilst still retaining control over all their data."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Comparing existing solutions",src:a(8997).Z,width:"982",height:"349"})),(0,n.kt)("h2",{id:"symphony"},"Symphony"),(0,n.kt)("h3",{id:"overview"},"Overview"),(0,n.kt)("p",null,"Symphony is an open-source framework designed to make it easy for developers to add collaborative functionality to their applications. It comes with a client library that provides an intuitive API to a collection of conflict-free data types that are composed to construct a distributed data model. Symphony automatically provisions the required network infrastructure to propagate state changes to all collaborating clients in real-time and persist state between users sessions. It also provides real-time application- and system-level monitoring via a developer dashboard that exposes pertinent metrics including the number of active users, the size of persisted state (bytes), and the CPU/memory usage of each collaborative session."),(0,n.kt)("h3",{id:"using-symphony"},"Using Symphony"),(0,n.kt)("p",null,"Symphony has been designed with ease-of-use in mind. In three simple steps, developers can create and deploy a real-time collaborative application."),(0,n.kt)("p",null,"After installing the required dependencies stated in the documentation, and globally downloading the Symphony CLI tool via ",(0,n.kt)("inlineCode",{parentName:"p"},"npm"),":"),(0,n.kt)("ol",null,(0,n.kt)("li",{parentName:"ol"},"Run ",(0,n.kt)("inlineCode",{parentName:"li"},"symphony compose "),". 
This command creates a new ",(0,n.kt)("inlineCode",{parentName:"li"},"projectName")," directory, initializes a new Node project with the required ",(0,n.kt)("inlineCode",{parentName:"li"},"package.json"),", and scaffolds some initial starter files including the Symphony configuration file, ",(0,n.kt)("inlineCode",{parentName:"li"},"symphony.config.js"),"."),(0,n.kt)("li",{parentName:"ol"},"Write and deploy the front-end client code by composing the collection of conflict-free data types provided by the Symphony client."),(0,n.kt)("li",{parentName:"ol"},"Run ",(0,n.kt)("inlineCode",{parentName:"li"},"symphony deploy "),", which deploys the application on Google Cloud Platform (GCP). After provisioning is complete, developer\u2019s can run ",(0,n.kt)("inlineCode",{parentName:"li"},"symphony dashboard")," to view the developer monitoring dashboard.")),(0,n.kt)("p",null,"Following these steps, developers can also enhance existing web applications with collaborative functionality using Symphony."),(0,n.kt)("p",null,"To illustrate this, here\u2019s a simple whiteboard application where users can draw lines, shapes, and change colours. In it\u2019s current form, the whiteboard is single-user and non-collaborative."),(0,n.kt)("div",{id:"singleplayer-demo"},(0,n.kt)("iframe",{id:"singleplayer-demo-iframe",width:"100%",height:"600",frameBorder:"0"})),(0,n.kt)("p",null,"To make this whiteboard multiplayer, we modify the whiteboard code to make use of the conflict-free data types provided by the Symphony client. After deploying the application to GCP, user\u2019s can now work together in the same collaborative space and see what others are doing in real-time."),(0,n.kt)("div",{id:"multiplayer-demo",className:"flex justify-between max-w-full mb-3"},(0,n.kt)("iframe",{id:"multiplayer-demo-iframe-1",width:"45%",height:"600",frameBorder:"5"}),(0,n.kt)("iframe",{id:"multiplayer-demo-iframe-2",width:"45%",height:"600",frameBorder:"5"})),(0,n.kt)("p",null,"We\u2019ll now turn to how we built Symphony and the technical challenges we faced."),(0,n.kt)("h3",{id:"architecture-overview"},"Architecture Overview"),(0,n.kt)("p",null,"We\u2019ll being by outlining the fundamental requirements we had to address and a description our design philosophy. We\u2019ll then provide a high-level overview of our core architecture and discuss important design decisions, tradeoffs and improvements that were made."),(0,n.kt)("h4",{id:"terminology"},"Terminology"),(0,n.kt)("p",null,"In order to express the system requirements accurately, we introduce some useful terminology:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Document- refers to the shared state that clients modify during a session."),(0,n.kt)("li",{parentName:"ul"},"Room- a collaboration session in which one or more clients connect to in order to modify the room document. A given room has a single document i.e. shared state that clients modify."),(0,n.kt)("li",{parentName:"ul"},"Presence- represents the ephemeral state of a room which defines user\u2019s movements and actions inside a room including cursor positions, user avatars, online/offline indicators, or any other visual representation that reflects the real-time activity or availability of users within the collaborative session.")),(0,n.kt)("h4",{id:"fundamental-requirements"},"Fundamental Requirements"),(0,n.kt)("p",null,"When building our initial prototype, we focussed on the fundamental problems that needed to be solved in order to build the core of a real-time collaborative framework. 
These included:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Deciding how to model the shared state of a room i.e. document, and selecting a suitable mechanism to resolve conflicts and understanding the constraints that such a choice would impose on the rest of our architecture."),(0,n.kt)("li",{parentName:"ul"},"Determining how ephemeral and persistent state changes on one client would be propagated in real-time to all other subscribed clients and/or servers."),(0,n.kt)("li",{parentName:"ul"},"Constructing a suitable persistence layer, where state can be stored between collaborative sessions and system metadata can be retained.")),(0,n.kt)("h4",{id:"design-philosophy"},"Design Philosophy"),(0,n.kt)("p",null,"Symphony is designed with the principle that developers should be able to include collaboration into their products without having to radically modify their existing workflow and tools. With this as our guiding principle, we explain our choice of architecture and how it attempts to meet the fundamental requirements of a real-time collaborative framework."),(0,n.kt)("h4",{id:"core-architecture"},"Core Architecture"),(0,n.kt)("p",null,"After some initial prototyping, we arrived at the following high-level flow on how a collaboration session involving multiple users starts, progresses and terminates."),(0,n.kt)("p",null,"A client connects to a server via WebSocket. The clients specifies the room to connect to by specifying the room id in the URL path. The server extracts the room id and queries the database to check if a room with that id already exists. If the id exists i.e. the room has been used before, the server retrieves the associated room document from storage and loads it into memory; otherwise, a new document is created in memory and a new room record created in the database."),(0,n.kt)("p",null,"Additional clients can connect to the active room and modify the state. Each update is propagated to the server which in turn updates the document state in memory and broadcasts it to all the other collaborating clients. Upon receiving updates, clients update their local state. When the last remaining client disconnects from the room, the document is serialized and written to storage. The document and room metadata is subsequently purged from memory, and the room is marked as closed in the database."),(0,n.kt)("p",null,"With an overall direction in mind, we then explored different options for each component of our core architecture."),(0,n.kt)("h3",{id:"implementing-the-core-architecture"},"Implementing the Core Architecture"),(0,n.kt)("h4",{id:"conflict-resolution"},"Conflict Resolution"),(0,n.kt)("p",null,"As mentioned previously, a key component of implementing real-time collaboration is the ability to deterministically reconcile conflicts, which arise as a result of multiple users concurrently modifying the same piece of state."),(0,n.kt)("p",null,"While we found that the performance and low memory overhead of OT was attractive, it\u2019s complexity and the fact that it\u2019s most suited to editing large text documents, made it less applicable to supporting generic data models."),(0,n.kt)("p",null,"For Symphony, we instead decided to use CRDTs as the primary conflict resolution mechanism. Their strong eventual consistency guarantees mean that client changes can be optimistically applied resulting in a faster user experience. 
In addition, they are highly available and fault-tolerant which means that the users can continue to change state even during network failure or disconnection- the state will simply synchronise with other clients upon reconnection."),(0,n.kt)("p",null,"Although CRDTs have traditionally suffered from inadequate performance and very large memory overhead, they have become exponentially faster and more memory efficient in recent years, thanks to an active research effort.",(0,n.kt)("sup",{parentName:"p",id:"fnref-9"},(0,n.kt)("a",{parentName:"sup",href:"#fn-9",className:"footnote-ref"},"9"))," To ensure suitable performance, we decided to use an operation-based CRDT, which unlike state-based CRDTs, only propagate operations over the wire instead of the entire state. The tradeoff is that operation-based CRDTs require a reliable network channel which could be easily included given our chosen network topology (see below)."),(0,n.kt)("p",null,"For our collection of CRDTs, we chose to use ",(0,n.kt)("a",{parentName:"p",href:"https://github.com/yjs/yjs"},"Yjs"),", a library which provides a collection of generic, operation-based CRDT implementations based on the YATA algorithm. We chose Yjs since it had strong community support, has a very efficient linked-list data model with optimisations such as a garbage collector, making it one of the most memory-efficient and performant implementations, and since it provided defined synchronisation and awareness protocols to propagate across persistent and ephemeral updates across a generic network layer."),(0,n.kt)("p",null,"We also considered using ",(0,n.kt)("a",{parentName:"p",href:"https://automerge.github.io/"},"Automerge"),", the other leading open-source offering in this space. Whilst equally performant, it is less mature and was 2x less memory efficient than Yjs in recent benchmarks."),(0,n.kt)("h4",{id:"state-change-propagation"},"State Change Propagation"),(0,n.kt)("p",null,"Since we now have a collection of conflict-free data types that can be used to construct a distributed data model, we need to consider how to propagate state updates to all collaborating clients in real-time."),(0,n.kt)("p",null,"Since CRDTs have strong eventual consistency, they can theoretically support any network layer capable of propagating updates from one replica to another. However, since our use-case is for web applications, we can only use technologies supported by modern browsers- the two primary choices being WebSocket and WebRTC."),(0,n.kt)("p",null,"WebRTC is primarily used in peer-to-peer (P2P) topologies. Whilst WebRTC is scalable and minimises infrastructure requirements since it does not require the use of a central server, it has lacks suitability for our use case."),(0,n.kt)("p",null,"Firstly, the majority of modern web applications already use a centralised client-server model. Companies want to retain control of data and enforce security measures such as authentication across all users, which is difficult in a P2P topology. Additionally, traversing firewalls and Network Address Translation (NAT) devices is not trivial with WebRTC- a consequence of this is that the applications will fail to propagate updates in geographies with national firewalls e.g. China."),(0,n.kt)("p",null,"As a result of these limitations, we chose WebSocket as the underlying protocol for our real-time infrastructure. Their support for the client-server model and stability across all major browsers and basis made them a natural choice for us. 
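Bringing these two choices together, the sketch below shows how a Yjs document can be kept in sync over WebSocket using the open-source y-websocket provider. It is a generic illustration of the pattern rather than Symphony's internal wiring; the server URL, room name, and shared-type names are placeholders.

```js
// Generic Yjs-over-WebSocket sketch, illustrative only.
import * as Y from 'yjs';
import { WebsocketProvider } from 'y-websocket';

const doc = new Y.Doc();
const provider = new WebsocketProvider('wss://example.com', 'room-42', doc);

// Shared types attached to the document are replicated to every peer.
const shapes = doc.getMap('shapes');
shapes.observe((event) => {
  console.log('shapes changed', event.changes.keys);
});

// Local, optimistic update; the provider broadcasts the operation to others.
shapes.set('rect-1', { x: 10, y: 20, colour: '#ff0000' });

// Ephemeral presence (cursor positions etc.) travels over the awareness protocol.
provider.awareness.setLocalStateField('cursor', { x: 100, y: 60 });
```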
Since WebSocket provides a bidirectional communication channel over TCP, the reliable network channel required for operation-based CRDTs is inherently provided."),(0,n.kt)("h4",{id:"persisting-room-data"},"Persisting Room Data"),(0,n.kt)("p",null,"When a collaboration session ends, we need to persist room data so that room documents are not lost and users can recreate the room in the future to continue working on it."),(0,n.kt)("p",null,"To do this, we need to construct a data model which allows us to represent created rooms and their associated metadata. The model consists of a single Room entity:"),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Relational Model",src:a(1730).Z,width:"1198",height:"534"})),(0,n.kt)("p",null,"We chose to store this data in a Postgres relational database since we have a ready-heavy system and each room has a fixed schema. It also the permits analytical queries to be more easily executed. We rely on the Prisma ORM which provides a high-level, type-safe abstraction for schema creation and database interaction."),(0,n.kt)("h4",{id:"storing-document-data"},"Storing Document Data"),(0,n.kt)("p",null,"In line with Yjs best practice, we serialize room documents into a highly compressed binary format. This has the benefit of significantly reducing the amount of storage space required per document, faster data transmission and minimised bandwidth consumption across the network."),(0,n.kt)("p",null,"We initially thought of storing these binary blobs in the Postgres database. However, we realised that this was suboptimal."),(0,n.kt)("p",null,"Firstly, document sizes can become very large, particularly after lengthy collaboration sessions which can result in a large amount of accumulated CRDT metadata. Storing these documents in Postgres would affect the scalability of the database. Secondly, Postgres is not optimized for large scale writes- the number of writes scales linearly with the number of rooms and can become particularly problematics if large documents are saved multiple times during a collaborative session. Implementing other useful features such as document versioning also becomes tricky."),(0,n.kt)("p",null,"One potential solution is to use a NoSQL database like AWS DynamoDB. However, these often have limits on the size of a single database item (DynamoDB has a 400kb limit), which is impractical for use cases like ours where document size can potentially be unbounded."),(0,n.kt)("p",null,"Considering these limitations, we decided to store documents in object storage, namely AWS Simple Storage Service (S3). Object storage is highly scalable, optimized to handle large amounts of unstructured data making it ideal for persisting schemaless room documents. It\u2019s also cheaper than alternative NoSQL solutions like DynamoDB and supports large-scale read and write operations, making it suitable for scenarios where there is a large number of concurrent rooms and documents needs to be ingested and retrieved at high volumes. Further, our use case only requires documents to be persisted as atomic binary blobs- we do not need to query ",(0,n.kt)("em",{parentName:"p"},"within")," a document making object storage more suitable than a NoSQL database ","[ref Sam Broner]","."),(0,n.kt)("p",null,"Integrating Postgres and S3 object storage, we are now able to persist room data between collaboration sessions. When a user connects to a room, we query Postgres to determine if an existing room exists. 
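Concretely, this lookup drives the load-or-create path described next. The following is a rough sketch assuming Prisma, the AWS SDK, and Yjs; the bucket, model, and function names are illustrative rather than Symphony's actual code.

```js
// Rough sketch of the load-or-create and persist paths, illustrative only.
const AWS = require('aws-sdk');
const Y = require('yjs');
const { PrismaClient } = require('@prisma/client');

const s3 = new AWS.S3();
const prisma = new PrismaClient();
const BUCKET = 'symphony-documents'; // illustrative bucket name

async function loadRoomDocument(roomId) {
  const doc = new Y.Doc();
  const room = await prisma.room.findUnique({ where: { id: roomId } });
  if (room) {
    // Existing room: hydrate the in-memory document from the serialized binary blob.
    const object = await s3.getObject({ Bucket: BUCKET, Key: roomId }).promise();
    Y.applyUpdate(doc, new Uint8Array(object.Body));
  } else {
    // New room: record it and start from an empty document.
    await prisma.room.create({ data: { id: roomId } });
  }
  return doc;
}

async function persistRoomDocument(roomId, doc) {
  // Serialize the whole document as a compact binary update and store it.
  const update = Buffer.from(Y.encodeStateAsUpdate(doc));
  await s3.putObject({ Bucket: BUCKET, Key: roomId, Body: update }).promise();
}
```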
If it does, we can retrieve the associated document from S3 and load it into memory for editing; otherwise we create a new Room record and initalize an empty document. After the last user leaves the room, we serialize the in-memory document, store it in object storage and purge the document from server memory, returning memory resource to the system."),(0,n.kt)("h4",{id:"front-end-client-api"},"Front-end Client API"),(0,n.kt)("p",null,"Whilst the conflict-free data types provided by Yjs come with a primitive API, it requires the developer to have some knowledge of the underlying data model to use optimally."),(0,n.kt)("p",null,"In line with our design philosophy of seamlessly integrating into developers\u2019 existing workflow, we created a JavaScript client API wrapper with sensible defaults and intuitive abstractions, through which a developer interacts with Symphony\u2019s components."),(0,n.kt)("p",null,"The client exposes the conflict-free data structures including a ",(0,n.kt)("inlineCode",{parentName:"p"},"SyncedList")," and ",(0,n.kt)("inlineCode",{parentName:"p"},"SyncedMap"),", which are composed to form a distributed document model. Importantly, the underlying communication and persistence infrastructure, allowing the application developer to remain at a familiar level of abstraction."),(0,n.kt)("p",null,"The client internally implements additional quality of life improvements for the developer, provide an enhanced developer experience. These include:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Implementing performance optimizations such as auto bulk-insertion of updates which significantly reduces memory consumption."),(0,n.kt)("li",{parentName:"ul"},"Automatically converting between CRDT and plain JS objects when logical to do so such that developers do not need to keep manually converting."),(0,n.kt)("li",{parentName:"ul"},"Providing undo/redo functionality with a History API. This allows undo/redo functionality to be manually paused and resumed."),(0,n.kt)("li",{parentName:"ul"},"Convenience iterator methods on ",(0,n.kt)("inlineCode",{parentName:"li"},"SyncedList")," including ",(0,n.kt)("inlineCode",{parentName:"li"},"filter"),", ",(0,n.kt)("inlineCode",{parentName:"li"},"map"),", and ",(0,n.kt)("inlineCode",{parentName:"li"},"find"),", allowing it to be used more like a regular JavaScript Array.")),(0,n.kt)("p",null,"The full feature set provided by the Symphony client is described in our ",(0,n.kt)("a",{parentName:"p",href:"/api/client"},"API documentation"),"."),(0,n.kt)("h2",{id:"load-testing"},"Load Testing"),(0,n.kt)("p",null,"Once Symphony\u2019s core functionality was operational, developers were able to easily create real-time collaborative applications."),(0,n.kt)("p",null,"However, the current architecture is limited."),(0,n.kt)("p",null,"The responsibility for creating, maintaining and updating state in memory for all rooms, handling user WebSocket connections, and serializing/deserializing state all fall to a single server. In other words, the system has a single point of failure."),(0,n.kt)("p",null,"Also, since the single server is responsible for handling all collaborative sessions and supporting the additional memory overhead resulting from our use of CRDTs, we hypothesised that whilst this architecture is suitable for a small number of rooms, it would not suffice in real-world applications that would typically have thousands of concurrent users ","[ref typical app user count e.g. 
miro]","."),(0,n.kt)("p",null,"To empirically verify this, we turned to load testing the system. This would also allow us to determine the system\u2019s service level objectives (SLOs) including the concurrent user limit and identify potential bottlenecks such as compute or memory, which would later inform our scaling strategy."),(0,n.kt)("h3",{id:"constructing-a-test-environment"},"Constructing a Test Environment"),(0,n.kt)("p",null,"We first needed a way to establish a large number of virtual user connections to the server which each send state updates and broadcasted presence."),(0,n.kt)("p",null,"To do this, we wrote a program which spawned N separate processes, where each process modelled a virtual user connecting to the server. Since creating a large number of virtual users and propagating updates proved to be CPU intensive, we provisioned multiple EC2 instances to execute the script concurrently."),(0,n.kt)("p",null,"For the test itself, we selected the following load parameters."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Load testing parameters",src:a(5778).Z,width:"845",height:"526"})),(0,n.kt)("p",null,"A single room server with 1vCPU and 4GB of memory, handling 240 virtual users with 4 users per room, resulting in a total of 60 rooms, propagating one state update per second and 5 presence updates per second, for a period of 30 minutes."),(0,n.kt)("p",null,"While the rates of document and presence updates would vary widely depending on the specific use case, we felt that these were reasonable values to model real-world usage (in comparison Liveblocks\u2019 default settings throttle user updates to 10 per second)."),(0,n.kt)("p",null,"Using AWS CloudWatch, we instrumented our server to extract application-level and system-level metrics including total number of WebSocket connections and CPU/memory usage."),(0,n.kt)("p",null,"We observed CPU usage steadily increase as a function of the number of connected virtual users. Once all connections were established, CPU usage had reached 92%. As the in-memory document size grew as a result of user updates, CPU usage peaked at 94% before we detected performance degradation in the form of dropped connections."),(0,n.kt)("p",null,"The results confirmed our hypothesis- that our current architecture could only handle a few hundred concurrent users for 30 minutes of real-world usage before failing."),(0,n.kt)("p",null,"It would be possible to vertically scale the server with greater compute and memory. However, this approach is not optimal. Firstly, the architecture would continue to have a single point of failure. Secondly, scaling would be hard-capped by the maximum instance size offered by the AWS."),(0,n.kt)("p",null,"For these reasons, we decided to explore horizontal scaling, which means increasing the number rather than the size of our servers. This would make our system capable of handling more users, while also being more resilient to server failures."),(0,n.kt)("h2",{id:"scaling"},"Scaling"),(0,n.kt)("h3",{id:"looking-to-existing-solutions"},"Looking to Existing Solutions"),(0,n.kt)("p",null,"Horizontally scaling the Symphony room server is not trivial. Unlike stateless services which can be scaled simply by adding more instances, clients connect to the room server via persistent WebSocket connections which are stateful. This means that clients who connect to the same room may be connected to different room server instances. 
This raises two problems."),(0,n.kt)("p",null,"The first problem is that if a client connected to a given server instance makes an update to document of a particular room on that server, then this update must be propagated to other servers which have that room document in memory; otherwise, the update will not be received by the other servers which have that room document and the state will diverge."),(0,n.kt)("p",null,"The second problem arises when a client attempts to connect to an already active room. It\u2019s possible that the connecting client may be routed to a server instance which does not have the document in-memory- while the server needs a way of retrieving the most recently updated document from another server."),(0,n.kt)("h3",{id:"redis-pubsub"},"Redis Pub/Sub"),(0,n.kt)("p",null,"The first problem is not unique to the Symphony room server. One common pattern to ensure updates on one server are propagated to other server is by adding a backplane, a shared component that facilitates the synchronization of data across multiple server instances."),(0,n.kt)("p",null,"A popular backplane is a Redis node, where each server connects to Redis channels i.e. to a \u2018publish\u2019 channel to send all updates received by the server from connected clients and to a \u2018subscribe\u2019 channel to receive all updates published by other servers. This publisher-subscribe mechanism ensures that when a client updates a room document on a particular server, the update is broadcast to all other servers- if a receiving server has the corresponding room document in memory, it can apply the update locally, ensuring that the document replicas of a given room maintain synchronised."),(0,n.kt)("h4",{id:"querying-for-documents"},"Querying for Documents"),(0,n.kt)("p",null,"One way of solving the second problem, namely that the document of an active room is missing in the particular server instance that a client connect to is to retain copies of every document on each server. However, this nullifies the benefit of scaling since the memory demands on each server is not reduced."),(0,n.kt)("p",null,"Instead, we implemented a system where a server could query another server instance, that had the required document in memory. For this, we maintain a key-value mapping of room id\u2019s to room server IP addresses which defines which room documents are present in which room servers. We chose AWS DynamoDB, a NoSQL key-value database to store this data."),(0,n.kt)("p",null,"When a client connects to a room and is routed to a server that does not have the corresponding document in memory, the server queries DynamoDB for the list of server IP addresses that are handling that room."),(0,n.kt)("p",null,"If one or more IP addresses are returned, it means that the room is active and thus the latest version of the document is one that is being currently edited on one or more other servers. Using one of the returned IP addresses, the server retrieves the document from the corresponding server. If no IP addresses are returned, the room is not active and the latest version of the document is simply retrieved from object storage. 
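A minimal sketch of the backplane pattern described above, using the ioredis client (the channel naming scheme is illustrative; this is not Symphony's production code): each server publishes updates received from its own clients and applies updates published by other servers. Because CRDT updates are idempotent, a server harmlessly re-applies updates it published itself.

```js
// Minimal Redis pub/sub backplane sketch, illustrative only.
const Redis = require('ioredis');
const Y = require('yjs');

const pub = new Redis(); // connection used for publishing
const sub = new Redis(); // separate connection required for subscribing

const docs = new Map();  // roomId -> Y.Doc held in memory on this server

function joinBackplane(roomId) {
  return sub.subscribe(`room:${roomId}`);
}

// Update received from a client connected to *this* server.
function onLocalUpdate(roomId, update) {
  Y.applyUpdate(docs.get(roomId), update);
  pub.publish(`room:${roomId}`, Buffer.from(update).toString('base64'));
}

// Update broadcast by another server via Redis.
sub.on('message', (channel, message) => {
  const roomId = channel.slice('room:'.length);
  const doc = docs.get(roomId);
  if (doc) {
    Y.applyUpdate(doc, Buffer.from(message, 'base64'));
  }
});
```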
Once the querying server has retrieved the document, it subscribes to Redis to receive all future document updates."),(0,n.kt)("p",null,"This solution ensured that clients could access a room document via any server instance, without having to replicate all active room documents on every server."),(0,n.kt)("h4",{id:"adding-and-removing-instances"},"Adding and Removing Instances"),(0,n.kt)("p",null,"Since the single-server load test had identified CPU utilisation as a notable bottleneck, we set our scaling policy to target 50% CPU utilisation. This means that the system will scale out when CPU usage of any server exceeds that limit and scale in when it falls below that number."),(0,n.kt)("h4",{id:"evaluating-the-current-scaling-solution"},"Evaluating the Current Scaling Solution"),(0,n.kt)("p",null,"The chosen scaling solution represents a significant improvement over the single-server approach. It can support a larger number of concurrent users by elastically deploying room server instances. However, while this architecture has historically been the most commonly prescribed approach for scaling WebSocket-based stateful services, we found a number of significant limitations specific to our use case during load testing."),(0,n.kt)("p",null,"When multiple clients attempted to join a particular room, they were often routed to different server instances. When the number of users in each room approached the number of server instances, it would invariably lead to copies of the document being present on every server. This nullified the benefits of scaling since the intended decrease in per-server memory overhead never materialised. This additional overhead was also expensive since it would lead to extraneous CPU usage as a result of updates having to be broadcast and applied at every replica. This in turn resulted in more server instances being provisioned and additional load on the Redis node. In fact, the Redis node approached 90% CPU utilisation at a few thousand concurrent users and represented a single point of failure."),(0,n.kt)("p",null,"These findings led us to rethink the suitability of our current architecture for our use case."),(0,n.kt)("h2",{id:"a-better-scaling-solution"},"A Better Scaling Solution"),(0,n.kt)("p",null,"Upon reflection, there are two primary problems with the pub-sub architecture."),(0,n.kt)("p",null,"The first is that there is unnecessary duplication of documents across multiple server instances. The second is that the Redis node constitutes a single point of failure."),(0,n.kt)("p",null,"To overcome these limitations, we took inspiration from Figma."),(0,n.kt)("blockquote",null,(0,n.kt)("p",{parentName:"blockquote"},(0,n.kt)("strong",{parentName:"p"},"\u201cOur servers currently spin up a separate process for each multiplayer document which everyone editing that document connects to.\u201d - ",(0,n.kt)("em",{parentName:"strong"},"Evan Wallace, CTO, Figma")))),(0,n.kt)("p",null,"This approach has the advantage of keeping document state confined to a single process. This means that there is no longer a need for distributed document state, eliminating the difficulties in horizontally scaling a stateful service. 
Further, each process/room can be scaled independently of others resulting in minimised cost and efficient utilisation of system resources."),(0,n.kt)("p",null,"This improved architecture has the following requirements:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Isolating each process/room from other rooms running on the same host."),(0,n.kt)("li",{parentName:"ul"},"Dynamically orchestrating process creation, execution, and termination. Processes should also automatically be restarted in case of crashes."),(0,n.kt)("li",{parentName:"ul"},"Autoscaling processes according to a specified scaling metric- in our case, this would likely be CPU or memory utilisation."),(0,n.kt)("li",{parentName:"ul"},"Proxying requests to the correct service")),(0,n.kt)("h3",{id:"implementation"},"Implementation"),(0,n.kt)("p",null,"We arrived at the following high-level architecture."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Architecture overview",src:a(1029).Z,width:"1611",height:"587"})),(0,n.kt)("p",null,"A client sends a request to connect to a room via WebSocket. As before, the client specifies the room to connect to by specifying the room id in the URL path. The request is intercepted by a proxy server. The proxy server extracts the room id and queries a database to check if a room process with that id is active. If there isn\u2019t, the server requests a process, uniquely identified by room id to be started. Once a process with the requested id is running and ready to accept requests, a key-value record mapping the room id to the IP address of the process is added to the database and the server proxies the client request to the relevant process and the standard collaboration session, described in Section 1 can begin. When the last remaining client disconnects from the room, the process waits for a predefined grace period after which the process is terminated. The corresponding process record is removed from the database."),(0,n.kt)("p",null,"With an overall direction in mind, we then explored different options for each component of our core architecture."),(0,n.kt)("h4",{id:"isolating-room-processes"},"Isolating Room Processes"),(0,n.kt)("p",null,"To execute isolated room server processes, we had two potential choices of infrastructure: containers or virtual machines."),(0,n.kt)("p",null,"Since rooms should be ephemeral and rapidly scalable, we chose to use containers. Containers are more lightweight resulting in shorter cold start times and faster scaling. While they are less secure than virtual machine due to having a shared kernel and not providing full hardware virtualisation, this is an acceptable tradeoff for our use case since we are running trusted code."),(0,n.kt)("p",null,"We now needed a way of efficiently orchestrating room containers."),(0,n.kt)("h4",{id:"orchestrating-and-scaling-room-processes"},"Orchestrating and Scaling Room Processes"),(0,n.kt)("p",null,"One solution was to use the AWS-native way of orchestrating containers, namely AWS Elastic Container Service (ECS), as we did in our original architecture. However, we found that this suffered from considerable vendor lock-in and would make supporting multi-cloud deployment difficult in the future. 
Since many developers may use other cloud providers, this went against our philosophy of integrating into existing developer workflows."),(0,n.kt)("p",null,"Instead, we chose to use ",(0,n.kt)("a",{parentName:"p",href:"https://kubernetes.io/"},"Kubernetes"),", a open-source container orchestration tool thanks to it\u2019s large community, extensive tooling, and flexibility."),(0,n.kt)("h4",{id:"serverless"},"Serverless"),(0,n.kt)("p",null,"Our next decision was whether to run containers in a serverless fashion or to have direct access to the virtual machines hosting the containers. In line with our design philosophy, we wanted to make it as easy as possible for developers to create real-time collaborative web applications without having to manage the underlying infrastructure. Moreover, we wanted our solution to be cost effective. Given these requirements, we chose a serverless model, where usage-based billing model i.e. per K8s pod is employed- this means that a developer will only be charged for the number of active rooms."),(0,n.kt)("p",null,"For hosting the cluster, we initially turned to ",(0,n.kt)("a",{parentName:"p",href:"https://aws.amazon.com/eks/"},"AWS Elastic Kubernetes Service (EKS) with Fargate"),". However, we found a number of drawbacks to it. The most significant drawback is that EKS does not provide a fully managed option- while automated cluster creation tools such as ",(0,n.kt)("inlineCode",{parentName:"p"},"eksctl")," give the illusion of a fully-managed service, it simple auto generates the required resources and does not abstract away their existence. This means that the developer is still implicitly responsible for maintaining them and may mistakenly modify the cluster configuration."),(0,n.kt)("p",null,"EKS also has less flexibility than other solutions. For example, EKS insists that namespaces that require Fargate compute profiles must be specified before cluster creation. If namespaces are modified in the future, it means the infrastructure configuration also needs to be changed and the cluster recreated. Thirdly, upgrading EKS clusters can be difficult- to upgrade Kubernetes version, service pods needs to be deleted so that the underlying node is destroyed and a new one with the correct Kubernetes version is created. The lack of zero-downtime upgrades adds further burden on developers."),(0,n.kt)("p",null,"Instead, we found that a better solution for our Kubernetes deployment was ",(0,n.kt)("a",{parentName:"p",href:"https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview"},"Google Kubernetes Engine (GKE) Autopilot"),". GKE Autopilot provides faster cluster creation, global serverless compute across all namespaces by default, and abstracts away all the underlying components such as provisioning node pools etc. from the developer, providing a cleaner developer experience."),(0,n.kt)("h4",{id:"proxying-requests"},"Proxying Requests"),(0,n.kt)("p",null,"When a client request to connect to a particular room is received via the Kubernetes Ingress, it is intercepted by the Symphony proxy service. This service has two requirements:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Find or create the requested room service"),(0,n.kt)("li",{parentName:"ul"},"Proxy the request to the requested room service")),(0,n.kt)("p",null,"To satisfy the first requirement, we query etcd to check if a service with name corresponding to the room id exists. 
If it doesn\u2019t, we send a request to the K8s API server to create a new room deployment where the service name is the room id. We then poll service endpoints in etcd until the service is marked as ready. In this case, polling was justified over a more complex mechanism such as using Kubernetes Watch since pods typically spin up within a few seconds so polling does not add much additional load. Each service has been configured with K8s readiness and liveness probes to ensure that it is not prematurely added to the list of available service endpoints and marked as healthy before the room server is ready to accept requests."),(0,n.kt)("p",null,"As implied by the above, we decided to use etcd as the source of truth on the existence and status of services instead of keeping a service registry cached locally- this ensures the proxy services remains stateless. Since etcd is strongly consistent, it is guaranteed to represent the true state of the system when queried. By keeping the proxy service stateless, we can horizontally scale by simply adding additional replicas without having to worry about state synchronisation. Whilst this does introduce additional latency since we need to make network calls to etcd, we decided this was a valid tradeoff as having a stateful service would radically increase complexity."),(0,n.kt)("p",null,"Once the required room service is ready to accept requests, the server proxies the client request to it."),(0,n.kt)("h3",{id:"overview-of-the-final-architecture"},"Overview of the Final Architecture"),(0,n.kt)("p",null,"Ultimately, we settled on the following implementation for our final architecture:"),(0,n.kt)("ol",null,(0,n.kt)("li",{parentName:"ol"},"A client requests to connect to a room. The request is intercepted by the Symphony proxy."),(0,n.kt)("li",{parentName:"ol"},"The proxy extracts the room id from the URL pathname and queries etcd to check if a service with that name exists."),(0,n.kt)("li",{parentName:"ol"},"If the service does not exists, a request is sent to the K8s API server to create a new room deployment where the service name is the room id."),(0,n.kt)("li",{parentName:"ol"},"The proxy polls etcd to check if the service is ready to accept requests. Once it is, the client request is proxied to the service."),(0,n.kt)("li",{parentName:"ol"},"If the number of connections to the room remains at 0 for a specified grace period (by default 30s), the room sends a request to the K8s API server to terminate the room, returning resources back to the system.")),(0,n.kt)("p",null,"The creation of the K8s infrastructure and the required services is automated using Terraform. We use a K8s job to automate the initialization of the database schema."),(0,n.kt)("p",null,(0,n.kt)("img",{alt:"Final Architecture",src:a(7770).Z,width:"3280",height:"1682"})),(0,n.kt)("h3",{id:"additional-improvements"},"Additional Improvements"),(0,n.kt)("p",null,"With our final architecture in place, there were a few additional considerations and features remaining for us to review. We wanted to make Symphony more performant, scalable, and secure. 
We also wanted to add features that would make it easier for developers to monitor the state of the system."),(0,n.kt)("h4",{id:"monitoring-and-visibility"},"Monitoring and Visibility"),(0,n.kt)("p",null,"In production applications, it\u2019s imperative that developers have the ability to observe the usage patterns and condition of the system."),(0,n.kt)("p",null,"To integrate observability into Symphony, we first needed a way to scrape metrics from Symphony services, particularly room servers. We sought a flexible system that would allow us to expose and inspect large volumes of custom metrics. We chose Prometheus, an open-source, industry-standard monitoring tool that provides a variety of integrations to instrument applications and a powerful query language to querying and analyze scraped metrics."),(0,n.kt)("p",null,"For each room, we expose pertinent application- and system-level metrics such as the number of active WebSocket connections CPU usage and memory usage via the Prometheus client for Node.js. After provisioning the Prometheus server and configuring it to dynamically detect rooms, we deployed the Prometheus UI which allowed us to query scraped room metrics Prometheus Query Language (PromQL)."),(0,n.kt)("p",null,"Whilst this provided satisfactory visibility, using PromQL has a small learning curve. In line with our design philosophy of creating a developer-friendly experience, we wanted the ability to visualise these metrics in an intuitive manner."),(0,n.kt)("p",null,"To achieve this, we integrated Prometheus with Grafana, an open-source tool that is widely used for creating interactive and customizable dashboards."),(0,n.kt)("p",null,"As a final touch, we created an intuitive developer dashboard UI which provides a centralised location for the developer to monitor the system. In particular, the UI provides a visualisation of room metrics that are scraped and aggregated by Prometheus in real-time as a collection of pre-configured Grafana dashboards. It also exposes historical metadata about each room by querying the Cloud SQL Postgres database such as the last time the room was active, the size of room state (bytes) per room, and the total number of rooms created (inactive + active rooms)."),(0,n.kt)("h4",{id:"reducing-pod-cold-start-time"},"Reducing Pod Cold Start Time"),(0,n.kt)("p",null,"When clients attempt to connect to a room which does not exist, the proxy must wait for the K8s scheduler to match a pod to a node and the node kubelet to run it before proxying can begin."),(0,n.kt)("p",null,"In certain cases, we noticed that when room deployment took as long as 2 minutes. This was surprising since K8s guarantees that \u201c99% of pods (with pre-pulled images) start within 5 seconds\u201d ",(0,n.kt)("sup",{parentName:"p",id:"fnref-10"},(0,n.kt)("a",{parentName:"sup",href:"#fn-10",className:"footnote-ref"},"10")),". After some investigation, we realised that the delay was introduced when the K8s scheduler has no available node to schedule the pod on. This resulted in a lengthy autoscaling operation until a new node was provisioned."),(0,n.kt)("p",null,"To mitigate this, we provisioned spare capacity using balloon pods ",(0,n.kt)("sup",{parentName:"p",id:"fnref-11"},(0,n.kt)("a",{parentName:"sup",href:"#fn-11",className:"footnote-ref"},"11")),". A balloon pod is a low priority (defined using a K8s ",(0,n.kt)("inlineCode",{parentName:"p"},"PriorityClass")," resource) pod, which reserves extra node capacity. 
When a room is scheduled, the balloon pod is evicted so that the room can immediately start booting. The balloon pod is also then re-scheduled continuing to reserve capacity for the next room pod."),(0,n.kt)("figure",{className:"mb-5 text-center"},(0,n.kt)("img",{src:"/img/case-study/balloon-pods.png",alt:"balloon pods"}),(0,n.kt)("figcaption",{className:"italic"},"Image from ",(0,n.kt)("a",{href:"https://wdenniss.com/gke-autopilot-spare-capacity"},"William Denniss"))),(0,n.kt)("p",null,"This reduced pod-startup times by 10x. Whilst this solution eliminated the problem of prolonged cold-start times, it is more expensive and the \u2018always-on\u2019 balloon pods reduces the benefit of a serverless compute layer. To minimise this disadvantage, we provision only 3 balloon pods by default, where the size of each balloon pod is equal to the size of the smallest room pod."),(0,n.kt)("h4",{id:"securing-the-deployment"},"Securing the Deployment"),(0,n.kt)("p",null,"To ensure our infrastructure conformed to security best practice, we added the following configurations."),(0,n.kt)("p",null,"Firstly, we regulated access to all K8s services in line with the principle of least privilege using Role-based access control (RBAC). We also configured Workload Identity with Google Cloud Platform (GCP) which ensures that each K8s service has least privilege when accessing GCP services external to the cluster including the database and object storage. Additionally, all non-public facing services including the Postgres database were added to private subnets to prevent direct network access."),(0,n.kt)("h4",{id:"snapshotting"},"Snapshotting"),(0,n.kt)("p",null,"Currently, documents are only persisted to object storage once, immediately preceding room termination. This means that a process or system failure during a collaboration session would lead to irrevocable data loss, particularly given that pods are ephemeral in K8s."),(0,n.kt)("p",null,"To mitigate this occurrence, we implemented checkpointing, where the in-memory document is periodically serialized and persisted to object storage. This approach does, however, lead to increased costs since cloud storage has an operation-billing component, where developers are charged per use of the API. In order to balance the need to snapshot with the associated additional costs, we set the default snapshot interval to 30s i.e. in the worst-case, a user could lose 30s of work. We felt this was reasonable since a client also has a local copy which could be used to replay the state- in combination with snapshotting, this makes the system adequately fault-tolerant."),(0,n.kt)("h2",{id:"future-work"},"Future Work"),(0,n.kt)("p",null,"Going forward, there are additional features that we think would enhance Symphony:"),(0,n.kt)("ul",null,(0,n.kt)("li",{parentName:"ul"},"Integrating authentication so that users can only interact with rooms they have access to."),(0,n.kt)("li",{parentName:"ul"},"Expanding deployment targets beyond Google Kubernetes Engine (GKE). 
Since Symphony is built on Kubernetes and provisioned with Terraform, we can easily add support for other providers of K8s services include AWS EKS and Azure AKS."),(0,n.kt)("li",{parentName:"ul"},"Develop a set of React hooks and providers enabling Symphony to be used declaratively.")),(0,n.kt)("h2",{id:"references"},"References"),(0,n.kt)("div",{className:"footnotes"},(0,n.kt)("hr",{parentName:"div"}),(0,n.kt)("ol",{parentName:"div"},(0,n.kt)("li",{parentName:"ol",id:"fn-1"},(0,n.kt)("a",{parentName:"li",href:"https://en.wikipedia.org/wiki/The_Mother_of_All_Demos"},"https://en.wikipedia.org/wiki/The_Mother_of_All_Demos"),(0,n.kt)("a",{parentName:"li",href:"#fnref-1",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-2"},(0,n.kt)("a",{parentName:"li",href:"https://erikbern.com/2017/07/06/optimizing-for-iteration-speed.html"},"https://erikbern.com/2017/07/06/optimizing-for-iteration-speed.html"),(0,n.kt)("a",{parentName:"li",href:"#fnref-2",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-3"},(0,n.kt)("a",{parentName:"li",href:"https://webrtc.org/"},"https://webrtc.org/"),(0,n.kt)("a",{parentName:"li",href:"#fnref-3",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-4"},(0,n.kt)("a",{parentName:"li",href:"https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API"},"https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API"),(0,n.kt)("a",{parentName:"li",href:"#fnref-4",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-5"},(0,n.kt)("a",{parentName:"li",href:"https://svn.apache.org/repos/asf/incubator/wave/whitepapers/operational-transform/operational-transform.html"},"https://svn.apache.org/repos/asf/incubator/wave/whitepapers/operational-transform/operational-transform.html"),(0,n.kt)("a",{parentName:"li",href:"#fnref-5",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-6"},(0,n.kt)("a",{parentName:"li",href:"https://crdt.tech/"},"https://crdt.tech/"),(0,n.kt)("a",{parentName:"li",href:"#fnref-6",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-7"},(0,n.kt)("a",{parentName:"li",href:"https://arxiv.org/pdf/1805.06358"},"https://arxiv.org/pdf/1805.06358"),(0,n.kt)("a",{parentName:"li",href:"#fnref-7",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-8"},(0,n.kt)("a",{parentName:"li",href:"https://www.figma.com/blog/how-figmas-multiplayer-technology-works"},"https://www.figma.com/blog/how-figmas-multiplayer-technology-works"),(0,n.kt)("a",{parentName:"li",href:"#fnref-8",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-9"},(0,n.kt)("a",{parentName:"li",href:"https://www.bartoszsypytkowski.com/crdt-optimizations/"},"https://www.bartoszsypytkowski.com/crdt-optimizations/"),(0,n.kt)("a",{parentName:"li",href:"#fnref-9",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-10"},(0,n.kt)("a",{parentName:"li",href:"https://kubernetes.io/blog/2015/09/kubernetes-performance-measurements-and/#:~:text=%E2%80%9CPod"},"https://kubernetes.io/blog/2015/09/kubernetes-performance-measurements-and/#:~:text=\u201cPod"),(0,n.kt)("a",{parentName:"li",href:"#fnref-10",className:"footnote-backref"},"\u21a9")),(0,n.kt)("li",{parentName:"ol",id:"fn-11"},(0,n.kt)("a",{parentName:"li",href:"https://wdenniss.com/gke-autopilot-spare-capacity"},"https://wdenniss.com/gke-autopilot-spare-capacity"),(0,n.kt)("a",{parentName:"li",href:"#fnref-11",className:"fo
otnote-backref"},"\u21a9")))))}m.isMDXComponent=!0},1029:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/architecture-overview-2d79de1ef288a1cbfe2e891d4f78a3cc.png"},1191:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/basecamp-locking-90594eff8ddab973e2e4993399111964.png"},5939:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/branch-96f9faa44059d9f43e8c2bc3c1a95054.png"},8997:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/comparing-solutions-8ad521b0cfa33d59083166ecf925ab7b.png"},4102:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/conflict-comparison-9fd64e64f630a4a9b2b2b3dd21d8065a.png"},1561:(e,t,a)=>{a.d(t,{Z:()=>o});const o="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA08AAADFCAIAAAADn15uAAAheklEQVR4nO3dB3Bc12Hu8b3bd7G7KIveCBCFAEECLCBIsHexyxRNiYpk1WTs2E7s5I2dOO89T16SeXEyGT87Tsaxx45s2ZYs0bYsiSoUqyiwobCAHewkQBCNqAts33eABZcrFpCQSC5x8P8NhrN79+7FWfCee75z7rl3lY6ePhUAAAAkpY50AQAAAPAAkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJCZNtIFkJzb7e5y9Ho9vkgXBACAR5eiqMxGY1SUSa1mHOr+I+3dZyLeNTa3XWlqbWxuFv/29PRGukQAAIwMGrU6IT42NTEhJSk+JSnBHhutiBiIz03p6OmLdBkk4fZ4qg4d33+g1u3xRrosAACMeKnJCfNmTB2TnhLpgox4pL37IKBSHa87t6OisqeXPyYAAPdTTlbG0rnTo23WSBdkBCPtfV4ut/u9bRV1Zy/e9lW9TmuzWLQ6zUMuFQAAI0ggoPQ6+xyOPr/ff+urOp1u+cJZ4/OyH37B5EDa+1z6XO7X3/qwubUtfKHJaBibmZaZlpKZkRJLXwQAgHvj9foam1su1jdeuNzY0NgU+PSri2aXTZtUFJmSjXBc+fLZ+fz+t97fFh71tDrtzNKSP39+/eql80qK8ol6AADcO61Wk5GaPLts8rPrVnxp/er0lKTwV7dXVJ48cyFCRRvZSHuf3Y7d1ZcaroaeJiXEv7Th8bkzpuh1ugiWCgAACaQmxYvMt2TeDI12cDZUQKV6b2tF67WOyBZsJCLtfUbnLzXUHD4WepqZlvKldcvjom0RLBIAAJKZOrHwqdVLQoHP4/W889Eu3+3m9mEIpL3Pwuv1bt65NzSfINZmfWLFQq2WmxcCAHCfZaalLJ8/M/S0ubWt+tCxIdbHrUh7n0VN7cmOru7gY41avWbZAqNBH9kiAQAgqwkFueIn9HR31eE+pyuC5RlxSHvD5vX5Kg8dCT2dWjI+JdEewfIAACC9xXOmm03G4GO3x1NTeyKy5RlZOPk4bGfOX3b0OoOP9TrdzNKSyJYHAADpGQ36GVOKt++uDD49fKJu1rQSvlftHjG2N2x1527cSLm4MI9zuAAAPASTivJ02sG7XnR3Oxqb24ZeHyGkvWG7dOXGXVcK8rIiVxAAAEYRvV6fPebGd+bWhzXHGBppb3jcbndPT2/wsVqtTk6Mj2x5AAAYPdKTk0OPW9u58d69Iu0Nj6PPGXpsiTJpNXwBLgAAD0lMtCX0ODSHHndF2hueQODGt/ZpNVzjAgDAw6NWk1s+C/5qwxN++U948gMAAHg0kfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJlpI12AEaDP6ao9Xufx+sRjp8sVttxZUXko+NhmiZpQmKtWlMgUEQAA4A5Ie3e3t+Zw5cFjty53utwVlQdDT6NtUWPSUx9iuQAAAO6OM7l3V5g79q7raNTqlMT4h1AYAACAYSHt3V1KUvzyBTOHWEGn1b349ON6vf6hFQkAAOAekfbuSUnRuOLxebd9SVFUKxfPjo+NechFAgAAuBekvXu1dG55cqL91uXTJhUV5GY99OIAAADcE9LevdJqNWuXLTQZDeELM1KT55eXRqpIAAAAd0XaG4Zom2X10nnK9dusWCzmLyxboFbzNwQAAI8uksrwjM1Mm102WTzQaDQi6kWZjZEuEQAAo4WiunFfW4V73N4z0t6wzSwtXjp/xpOrl6QnJ0a6LAAAjCJpKYkmw+AdMHLHpEe2MCOI5m//7n9FugwjjOhMpCQmxNiskS4IAACji1arKcrPMZmMM6YWF+RlM7h3j/guDQAAMGJYrVEzS4sjXYoRhjO5AAAAMiPtAQAAyIy0BwAAILNhz9vzOfzOS+6eI30PojQYEdQGxZCiM2bqdQlaRcMc2ZHH3+d31nscR/v8nkCky4LIULSKPllrytTrk3XicaSLg2HzuwKuRo/jeJ+vxx/psiBCFJU+SSdqsSFNp+juUouHl/Z83f7mP3SI3atzv+NzFBAjm9qkFlEvarwxpjzKNi1KRUsxovh6/W3vd3Uf7u2qdIgGI9LFQWSItsGYoY8aZ7BNj4qdZ6UWjyx+p799Z09XVW9XtcPb6Yt0cRAhisqQro/KM1hLzfZlNkU9VDVWOnrueZQuoGr9oPPyD5p10VpLvlEXw/W8o1JA9Cn9ffXuvivuqGJTxtcSDKm6SJcJw9D+cfflH7UofpUl16i365jNMToFPAFno7vnnNOQoc/+TrJxjD7SJcIwiJzX8JMWT6svKttgSGR0drTyq1zNnq4TvRqrOuef0sx5hiHWHU7a86tO/Pmlvjpn+ob4uOlWfRxpb5Ty9fm7T/Rd
/aDd3eVNecFuf8wW6RJhGM7+zyudlY7kZbHxs62GJL1C2huV/J5A73nnlXev9Zx1pj5nT3wyNtIlwjBc/mFz20ddcWXWhPk2U7pBfbezeJBSwK9yXnFfebvtWmVPyrP21JfsQwzSD+NIH1CpnOdcKkWJn2sj6o1mGpM6KsdgyTOK2Odu9ES6OBge52V3wBuIm24xJhP1Ri+RD8xjDNFFZrEziF0i0sXB8LgaPf5ef/REszmTqDd6iQO4KV0fO8WiKKo+Ec+GnJgzvIN9/5xuRaWN0nyuAmLkUzSKyHxi3xJNRaTLguEJiFocUGnMaqZqjXaKohEH84DKTy0eaQK+QEDUYqOa6+SgsfQHOb/nLhfr0LUHAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJmR9gAAAGRG2gMAAJAZaQ8AAEBmpD0AAACZkfYAAABkRtoDAACQGWkPAABAZqQ9AAAAmZH2AAAAZEbaAwAAkBlpDwAAQGakPQAAAJlpI12AR0tAFXB7PL3OviijWa/TRbo4AB4Gj9fT3euIMpkNOn2kywIA95/MaS8QCLR0tP3HxlfTE5P/7PGnFUUJveT1eY+erXt715bJ+ePXzF0ilvj9/oaWpg/27jxbf8Hj9ZqMxryM7KXT56TGJwU3daW16ce//3Xw7WJTZqMxOzVjXObYibmFWo3GHwg0NF99ZdPGkrzCx+cu6el1iE0dPn3ipiJFW6wLS8trT59s7+5cv2hVRlJKcPm1ro43t73ndLmeX7Eu1hb9MP46kMjRU2evNrfNmzFZJ10XRXS9/u3XP3V7PTctT4y1f+2Lz/3qw7fO1l98eumawqzcYAU/f+XyL97bmByX8OUnnnW6nN979ceiagffolYrsbaYgjE5j02fq1YPntZo7Wh/95Otpy6ddXncep0+Kzl95eyFmUmpqoFav2n3tv1HDw3+SkVl1BvTE5JmT5qWm54lXq1vvvqTt35za5lFQRJj4//iyRceyF8EkMKug/srDlf1Op03LV8xc/7UgmJ/wP+Tt17z+jwvrnrSHh0bvsKFxvqNW99LjIt/YsEyqzkqtFxsbeeBfZPzixaWzjQZjKHl26t3766tWb9oZV56ljiS/MfGX4oNPrVkdZTR9EA/4KNG6rSnUon++pb9nyTZ4xdMnZmXkRV6qc/lem/3drETGPQ6kfbEgbvpWqvYCQ6fOS4eZySmHDtfd/DUsYuN9V9f/4I9OkYs7Ozp3la122w0JcTG+fw+8XTvkQMp9sSlM+auW7BcrNDR07mjZo9OqxVpT7QctWdO7jywNyHWbtQZQr/XHhPr9ni8ft/mfbtEqhP7sWpgXOHUxXObKrZNyBlnDNtHgXu0acsu8e+ZC5fWr15ij5GqtyCq0vaaPaJ+iboW3mET9SigCljNlu3Ve0TD8H/+7K9Fp0ssf2PLu6LKf3ntM8rAe7dVVYgqmSzeq1I8Pm+no3tPbU1Le9tzK9apBqLkv7/x31Unat0et+jdHT9/5sDJI6frz3/nua+Kmiu2f/zc6a1VFcn2BJPeKJ62drbrNdqaU0f/8skXROAT2xdZUzVwTkCkRkefI2lgTVFM0WOM1F8MGBHONVwS4Uyv1UWZzGrlxqSyLkePqNEilu2prW5qb51SUDyreGr4oLt410eVu8Zn5a2ctTA87X2472ORIC9ebZg+YXJ42jt9+cKOmr3zp8zIScsUNX3XoUrRxD8xf5mKtCcR0UYExH7T0d0lDtmhtCeWNLe3in5AYGAFscTpdounlccPZqVk/OnjG6Ittobmq6999LbIcyKBiTAX3JZWqynOLXhuxRP+QKC9u0s0DG/v+kjslOUTpoijfCAQ3Frg+u/226KsL616Mj0xJVQgvU6XlpgsmqgP9uysOFy9fOaC5LgEsXOLHCkap6XT5pgMhps/BHBvOjq7X33z3ZWL5ubnZEa6LPeTqFbZqelf++LzWs2N45U4mmvUmoWl5e9+snXf0YMH646VFkw8fv60aAnSk1KXlc8X0TBYI7NTM7++/nnRnIhOWkNL03/+7pdvbN20vHyB6Lb1DwYc3JcYa//OC1+Nj45r7+78xaaN+48d+mDvzmAcFO8X23lu+brs1AxxuGjv6hRHkl0HK0Vf8ZsbXk6NT/z2l74iVhNbfmPLpj1Ha555bG1u+pj+4hnptgFD8Q9Uz9VzFk/KG68PC3Ppicki2wW7c06Xa2fNnpLcAkP04Aqiudx9uMrR1xdQ3WhthcbW5rqL58S76i6dq29qjLXYNAPdP1UwBwy2zgPv6X8UUI0+cqe9QaILXnGoasOS1bHW/mEPl9stYlx7V4deP7gDOd3OTw5VirZk3cIVMydOFcd3cch2ul3/9pufir5CMO0JosGwx8ROHjdBNXB8H5OcerbhouhJ1J45ucSecOvvFbtsYXZubnrWzS8EVHMnT/9g744dNfvWL1whtrD/+KG8jOxpRSUP7G8Amel1WrenfzDJ5fa89eG26VMmzp0xVR02EjbSiY5TSf54vfbm89SiRj/92Jp/+NkPX9/8x+Kcca9tfrujp+vLa59JiLWHvdci6mzwrzExx115/NDWyoqTF88kxJZtqawQR4MnF6+eU1IWTIc+n/9v/vOfP9z3cTDtqfrP3yoFWTnjs/PEY5/PlxKf+MmhKpEvxVOz0TR5XJFqYGbI9uo94leMyxxbnFfwcP4mkIwIIB6PZ7TNF89KyRBV26i/4zCH6H1daWmOsdpE7048rT5RK/psov29abWqE7Wi7mckpjS2NVefrM3LyIoymR9s0UeaUZH2/H5/07WW3YdrVs1eqBo4ffPR/l1K2NCxx+s9U39RtApzSqYFzxaJoDY5v0i0JWLHEuvfui+KPS8tIXnBlPKfvv36+SuXh1Ueizlq/pQZonn4+MDessLiHTV7A37/4rLZ4YPSI1dPT2+v06nTjYpd6xERCChhj1X7ao40NrU+/th8s0n+EaZZxaVTxhUdrDv20z++LkJY0dj8+VNn3CnpGvT6oux8kfZEnZ0zqezEhTN6nX5J2exgrRf/lhZOTI5LqG9q7OnrNd8yPqfRaLJTM4wGQ0v7NZH8QoMHwOe3a2915eHji2eVTZ5Ih+GGju6uPUeqs1LTrGaLP+DfWlXh6Ou7aR3RT9tdW+32ep9asvqVTW/uqa3+wtylpL2byN8k67Ta3PTsi431m/d9vKx8rtgtjp6tq2++OjG3QPTvg+uIjkJ3b0+KPSHaYg0uEcf9KJMpISa2paO9p89x256HTqvLSk0XUbLT0XWn3x4cPb6JVqPJTE6dXVK6rXr3b7e8IzorWakZM4un3o+PG2GdXT0VVYeOnDgd6YKMdhfrG1958521jy1ITb7NqPOI4/Z4Oru7dNfH9tRqxRY1WFUtJvMzy9Z++0f/951Ptva6+p5dtjbGYrvTdkS/7vj5OtXAYKG
om9e6Ouy2mOCQf5DJYEy2J1y91tLt6Lk17fW6nBWHKl1uV2p8Uug6D+C+OHyszuf1bf5479XWtiVzZmi1o6Iv4ehzdPZ0O3Wu4FPRlTLo9KEZuvmZYxtbm7ZX710xc4HFFFV36fypi+fyM7PEg/CNXG5uPFt/MSs5bcaEyRWHq4+cPXmp6UqMNVpLfyyM/GlPo9ZkpaTHWm2nLp0TO0FOWtaWyk9EBFw1a2Eo7flFl8EfMH16zqbY4QaWXLvThGvR5Ii2QbQZd1pBtCXff+1nlus9DKs5auWsRVMLJ6oGzi4tmjZrR83enQf2iTZMPB6iiRpBuh0Oot4jorvb8btNW7724gaNZsTnkrMNF//+Zz8Ijdjpdfp/+frfhqbxTcofX5idd/j08XGZY6cVliifHthrbG1+c+smsbDP1Xfywtmak0cSY+1l40v8/edtfbcOAIhunvg3dBWwODa8sunNWGuMeNzS0dbQfFVU+acWr1YkOlGOR4HVaul19oceEfuaW9rWLl9os1oiXagHbuP297dW7dZc7zstmjZ7SdnsUK0UXa/0xOTqE7WHTh+Pj4nbUb23o6frT5auEZU6fCNVxw93OboXlfY3o9OLJh09e7LqeK04GmgZ3gsjf9pT9Z85Nc+dXHbg5z94f8/O9QtXHDh1dMLY/Ak548JWuf2cTWVwUucQ2x7qiO/z918g4nQP9lo8Pq/b6w4+Fg1VVnL6zOKpH+7dOSEnc97k6cP5QI+uKPPousrpERdvj5MjkygqRa/VKerBD3PTWHtPr6PpWosIYc3tbZ09XaK+h7/a2Nb8m81/VAYqoNPlys/IfmrJatGKDD1TOzQqLx7UXTovfqPX52touWo1Rf3V03+6uGzWff18wKc0Nre98sY7a5bOy85Mi3RZHiytRqPX6UIj5TqtNrwfpVVrlpfPP3HhzNbKCpHe9h87mJaQNCm/yKB/J7SOqKF7+k/jeqZPmGQ0GMsnTnn9o7fFknULlnEyN9yoSHs6jXZCTn5+ZrbYA/w+v8frWTFzYXiD0X/5t6Jye9zh7xL7kMvjVvrffvvRYLGC2+MRu2b4pYLhYizWF1evzxi4d5dqYLdOjI0PvRplNk8tmLijZq9ofuJsMZ/3Qz4aoq2WBbOmXapvDF40gIejobHJf0unpLR4/ILZ0+Q44ZiZnPqVdc/qrlc0jVoTnLId9Nst77S0t41NzbzQWP/rD9/61rNfDv/UogI+v/KL6v42RLFGWWKstqyUdLFC8Hpb1/XOWIjT3X8cCDtrrH5h5frs1HS31/uPP/9hl6OnOLcgdB5ZYr1OV83hYx2d3ZEuyGjR1d0T/rTP6dq4acvsssnlpSVSdNlub+n0udMKS0LXpojqeVNfLi8ja1L++P1HD73+0Tv1zVefW/FEfExceCIUtf78lXpxcDhx/vTV1mbRKxOvioUXrjZwMjfcqEh7KkWJjrKJhPcvr/644nBVdmpmWdGkbseNqqXRaGxmS3tXZ5/LGbpPjwiFTddaRZITLcRtt+rxekVfXzQGtjusIBqMsWmZt7kmd4B4o2XgPkMy9T/Eh5o+eYL4iXRBRpfv/+RX4fFaq9UuXzirKH9sBIt0f5mNpuzUjFuvyRWOnD35wd6dyXEJ337uK//6q//aVrV7QenMaYXFofbAHh27YEq50n8aWAk/9IsVRC+rrauj19lnvj6Lwx/wX2lpErUyJjSFV6WMGzN2fHaez+9fNWvRL97/3eZ9H+dlZD/gTxx523btO1Z3LtKlGNX8/sCufQeuNLV+Ydl8WVNLYmx8Vmr6ENfk6rT6ZTPmV5+o3XWwMsZqnTFhyk1zrvYfO9jd2yPa7o3b3g/OWhFNucvjrjp2uGBMjkWi5vVzkqHffy9E10F0IMakpHl93sdmzL3p6lfx6sScgvbuzvcqtgeXiF1nR80+0Y/Pz8w23e6Ox+LQX9/c+Mddm416fdHYvIfxGYB7EBdte379Spmi3hDcXs8r72681tX5pRXrinMLX1y13uHs/e933hABLrSOiHki/oqu/63tZWlhscvtfnPrpuDTgEq1ed+u5vY2UaNvvWGeRq1eM2exCIgf7vv4clPjA/1cj4IzF4Z3qwE8IGfOX+p29Ea6FJFUnFc4bkyO2+ueN2VGsj1eEzaw5/f79x45KJrjl9dseHH1+udWfFH8/OVTL9qjY/bUVvc5b756dzQbHWN7A/34xDj7N5566Wpbi+jo3/SqiP9PLl5Zd+ncax+9fe7KpZT4xItXG2pOHkmIiXtm2drgjbhUA3fVOnnhzE/ees3n87V0tF28Wl/f1Dh/6owp4yZG4jMBg0IDe3ljM1ctnmPQy/ZlryJd/fzt34afvY21RT+xYLkIZwfrjpWNL1kwtVyksdkl08TPniMH3v1k61NLVt91s8+vWFd75uTG7e9faW3OSklvaLm67+hBk8Hw5bXPqBW1P+C/af1Ee/yqWYte/eD3v9/x/jc3vHyfP+Qjpig/58DRk5EuBVSzyybF2KSdObCtuuJs/YXw2VDlxVMLxnyqs2o2GEVVLZ8wpWz8pP5TYWGTVs42XLp0taEgM2dJ2ezQF6z5/YHqE7VVxw9faKy/7TeRXmlp+uX7vzeFDSjGWG2r5yy+7ciONOROe0rwbI56YHK3XqsTXXmPxzN45lTpfzk4v0fsauOz815a89Smiq0fH9wnXlMrSlZqxoqZ8wuzckLbEiHvYmNDS/sHqoGTv2kJyWvmLlkzZ4ktyiL6Ftdv2aW+vn7/XMDwL4S5uXDB8ikqOWZWIYJio23tnV2zyybPKpsk5RSf5va2tz7+KPyjjUlNX1g687XNf9Ro1C+tfjI4m8JsNL24+snaMyde3/LO4rLZwfvhqdV3PAWWnZbx9fXP/2HH+7trq3bXVot4l56Q/CePPV6cWxhcYfAAcr0Wi009Pm/Jpt3btuz/5OmljyfFhabhKsGKL8HlzyEzS0uSEuzeW25jiwdkX01td8+nxvCMBv3qJfNystIjVaQHamAqrarqeO2huuNK2PWOCbH2nLTM683p4IwM0UDnpI0RaUw0lz6/r39ahrr/teoTh3v6HHMml8XZYsKz2tzJ0w+cOlpz8mhhdl5YLe5/n9L/7djX3qvYFj75LyM5den0uaS9kUrsC6nxSd//5neT4gZvrC8CX2jqj+gH/L9vfNceM9gbEP/NovEozMrt6Olq62gXO1ycLToxzh7sc4h9bkxy2o++9Y/BlZWBtGc1R0VHWe0xcaqBQJmdmvn9b/7vYPfCFmV9cdX6nr7elPjEOxVPbHliTsG//sXfhV+6AXwGz6xd7nS54u2xd191pBG1TFSrW69BERVWvNR/QYaiHp+dHzpw56Vnfe/r3xGdOqvZIrLXv/+Pv4+2WO90s2Xx3tnFpdmp6Z093Y2tLaK+x1psyfYEnVYbfHXD0jWLps3KTB680EoZuCXEP3/1bzxeT+jenKqBk7xPLV65sLQ8K0WehtliMZcU5Ue6FKNI7fHT4WlPRO21yxbEREs7qvfYjHkl+eO9vpu7E5lJqQa9QR/Qf/flb5iNxuAVUa
K51JoG44oIfN99+ZsGvV68JFLduDE5onU2fvpLR2cVl4rWP9YWbdQbVs1eNG18ydi0TK1GE2Uy/9NXvhW6v1KISW+QaQL9bcmc9lQDTcLUgttfMWDQ6aeEvSRaC4vJnJeR1X+lrdfTf7uHsBZCdAjErjCtsPhOv0isIN4+ZdzgBkVrIfatocsmth9jtU22Fg3j8wC3Ixpm8RPpUjwQ4ig/teCOMyVCNS5EdMNKro/MqQZm5g29fdFI5KSNCfQPHrjCb+saJFoR8RO+RETA4tybv+pAvCszOS0zWfKbZeCB8vlvzByYUJC7bH65VitzA50SnzjEaIiq/3sOx912uWhtQy+lJSSLn1vXEZ2xSfnjb11HBL6SvMJb1x8NZN6ZPhtx4DboZJv2BGAIyi338AMeMqfTGXzw2LxyvjwN9508s0wAABihls4rT0qIf+6Lq4h6eBAY2wMAIMLyx44RP5EuBaTF2B4AAIDMSHsAAAAyI+0BAADIjLQHAAAgM9IeAACAzEh7AAAAMiPtAQAAyIy0BwAAIDPSHgAAgMxIewAAADIj7QEAAMiMtAcAACAz0h4AAIDMSHsAAAAyI+0BAADIjLQHAAAgM9IeAACAzEh7AAAAMiPtAQAAyIy0BwAAIDPSHgAAgMxIewAAADIj7QEAAMiMtAcAACAz0h4AAIDMSHsAAAAyI+0BAADIjLQHAAAgs+GlPUWsHlAFvIEHVBqMGAGVP7gbqJVIFwXDFPwf80W4FHgUBIK7gUItHmGC/2MBv2iPI10URJrfc09tsfbetyi2ZMzUOy+5r+3vsc+yfp7CYaTzdPq6T/apTWp94jB2ITwKDGk6d7O3/YAjcaFNbWB0f/Tyu/wdB3sUrWJM00W6LBgeXaJOHH67TvSZMw26GA7Co1p7ZU8goDJl6lVD5r3h7CWKKnFD7MXvNV14tfna/m5DEgeIUcrn8DvOOZ0tHkuJKbo8KtLFwfDEr47urXM1/KG1s9ZhTNErmkgXCJEQcPt7zrn6GtyGVF3cYluki4PhiZ1vdRzra97S0VPXZ0o3qPWMzo5K/oDjnMtx0aU2KQlrou9n2rMvsnlbfFd+da2ztpcpf6NXQKVoVdZJ5tSX7bo4upUjTMwsi/ear/HVa90n+7pP9UW6OIiQQP8h3ZRtSP9aAiP0I451sin5GXvjr6/1N/bnXEM385CZP6CL12X+daI+VT/0isOr5IpOSdoQG7fc5jjh9LR4P0cBMYJpTGrTWL0xU99/HpCjzEijaJT4VdEx8yy9p1zuq56AP9IFQiQoWsWUrTeNNahN1OKRR9Ti2AUW2zRz71mXu8EzOHMLo4+oxeZcgzpKo9xtAG7YXToR+PR2rW5mFJNDRy9FpagVWoiRS7T0ulitrUyrCjDLe7QSNVhROEUzconAp43WWCeZVSXU4lFMrdw15wV9pgF8pX8/+yxvBPDIGDhGUJGBEYxajHtEzw4AAEBmpD0AAACZkfYAAABkRtoDAACQ2f8HUnOioWERWj0AAAAddEVYdFNvZnR3YXJlAEBsdW5hcGFpbnQvcG5nLWNvZGVj9UMZHgAAAABJRU5ErkJggg=="},7770:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/final-architecture-a62cd0c314c95be3960ac11adf5c1948.png"},5778:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/load-testing-9d89941d1c804a828252e999f5f9a50d.png"},7234:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/manual-750de189201c10a7a95ca8845a464856.png"},408:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/ot-96add64dc7ceed68eaea7e7083b21a7f.png"},1730:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/relational-model-3a055499946dc54fb030a4c5440a8ba3.png"},4619:(e,t,a)=>{a.d(t,{Z:()=>o});const o=a.p+"assets/images/three-tier-b4304aedaa428b4ff30442eca531fb50.png"}}]); \ No newline at end of file diff --git a/assets/js/runtime~main.7282ba85.js b/assets/js/runtime~main.947fd201.js similarity index 79% rename from assets/js/runtime~main.7282ba85.js rename to assets/js/runtime~main.947fd201.js index 4d8ac9f..b4451a9 100644 --- a/assets/js/runtime~main.7282ba85.js +++ b/assets/js/runtime~main.947fd201.js @@ -1 +1 @@ -(()=>{"use strict";var e,t,r,o,n,a={},i={};function f(e){var t=i[e];if(void 0!==t)return t.exports;var r=i[e]={id:e,loaded:!1,exports:{}};return a[e].call(r.exports,r,r.exports,f),r.loaded=!0,r.exports}f.m=a,f.c=i,e=[],f.O=(t,r,o,n)=>{if(!r){var a=1/0;for(d=0;d=n)&&Object.keys(f.O).every((e=>f.O[e](r[l])))?r.splice(l--,1):(i=!1,n0&&e[d-1][2]>n;d--)e[d]=e[d-1];e[d]=[r,o,n]},f.n=e=>{var t=e&&e.__esModule?()=>e.default:()=>e;return f.d(t,{a:t}),t},r=Object.getPrototypeOf?e=>Object.getPrototypeOf(e):e=>e.__proto__,f.t=function(e,o){if(1&o&&(e=this(e)),8&o)return e;if("object"==typeof e&&e){if(4&o&&e.__esModule)return e;if(16&o&&"function"==typeof e.then)return e}var n=Object.create(null);f.r(n);var a={};t=t||[null,r({}),r([]),r(r)];for(var i=2&o&&e;"object"==typeof i&&!~t.indexOf(i);i=r(i))Object.getOwnPropertyNames(i).forEach((t=>a[t]=()=>e[t]));return a.default=()=>e,f.d(n,a),n},f.d=(e,t)=>{for(var r in 
t)f.o(t,r)&&!f.o(e,r)&&Object.defineProperty(e,r,{enumerable:!0,get:t[r]})},f.f={},f.e=e=>Promise.all(Object.keys(f.f).reduce(((t,r)=>(f.f[r](e,t),t)),[])),f.u=e=>"assets/js/"+({53:"935f2afb",210:"6fdce000",237:"1df93b7f",514:"1be78505",574:"15d8c5b7",655:"89dd2556",918:"17896441",935:"88b961c4"}[e]||e)+"."+{53:"33f94d69",210:"57fb553f",237:"7835cd9b",514:"d088d80c",574:"4a60292b",655:"663dbdb8",918:"1c696f69",935:"a96e4ad2",972:"b8a5bc0d"}[e]+".js",f.miniCssF=e=>{},f.g=function(){if("object"==typeof globalThis)return globalThis;try{return this||new Function("return this")()}catch(e){if("object"==typeof window)return window}}(),f.o=(e,t)=>Object.prototype.hasOwnProperty.call(e,t),o={},n="symphony-collaboration:",f.l=(e,t,r,a)=>{if(o[e])o[e].push(t);else{var i,l;if(void 0!==r)for(var c=document.getElementsByTagName("script"),d=0;d{i.onerror=i.onload=null,clearTimeout(b);var n=o[e];if(delete o[e],i.parentNode&&i.parentNode.removeChild(i),n&&n.forEach((e=>e(r))),t)return t(r)},b=setTimeout(s.bind(null,void 0,{type:"timeout",target:i}),12e4);i.onerror=s.bind(null,i.onerror),i.onload=s.bind(null,i.onload),l&&document.head.appendChild(i)}},f.r=e=>{"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},f.p="/",f.gca=function(e){return e={17896441:"918","935f2afb":"53","6fdce000":"210","1df93b7f":"237","1be78505":"514","15d8c5b7":"574","89dd2556":"655","88b961c4":"935"}[e]||e,f.p+f.u(e)},(()=>{var e={303:0,532:0};f.f.j=(t,r)=>{var o=f.o(e,t)?e[t]:void 0;if(0!==o)if(o)r.push(o[2]);else if(/^(303|532)$/.test(t))e[t]=0;else{var n=new Promise(((r,n)=>o=e[t]=[r,n]));r.push(o[2]=n);var a=f.p+f.u(t),i=new Error;f.l(a,(r=>{if(f.o(e,t)&&(0!==(o=e[t])&&(e[t]=void 0),o)){var n=r&&("load"===r.type?"missing":r.type),a=r&&r.target&&r.target.src;i.message="Loading chunk "+t+" failed.\n("+n+": "+a+")",i.name="ChunkLoadError",i.type=n,i.request=a,o[1](i)}}),"chunk-"+t,t)}},f.O.j=t=>0===e[t];var t=(t,r)=>{var o,n,a=r[0],i=r[1],l=r[2],c=0;if(a.some((t=>0!==e[t]))){for(o in i)f.o(i,o)&&(f.m[o]=i[o]);if(l)var d=l(f)}for(t&&t(r);c{"use strict";var e,t,r,o,n,a={},i={};function f(e){var t=i[e];if(void 0!==t)return t.exports;var r=i[e]={id:e,loaded:!1,exports:{}};return a[e].call(r.exports,r,r.exports,f),r.loaded=!0,r.exports}f.m=a,f.c=i,e=[],f.O=(t,r,o,n)=>{if(!r){var a=1/0;for(u=0;u=n)&&Object.keys(f.O).every((e=>f.O[e](r[l])))?r.splice(l--,1):(i=!1,n0&&e[u-1][2]>n;u--)e[u]=e[u-1];e[u]=[r,o,n]},f.n=e=>{var t=e&&e.__esModule?()=>e.default:()=>e;return f.d(t,{a:t}),t},r=Object.getPrototypeOf?e=>Object.getPrototypeOf(e):e=>e.__proto__,f.t=function(e,o){if(1&o&&(e=this(e)),8&o)return e;if("object"==typeof e&&e){if(4&o&&e.__esModule)return e;if(16&o&&"function"==typeof e.then)return e}var n=Object.create(null);f.r(n);var a={};t=t||[null,r({}),r([]),r(r)];for(var i=2&o&&e;"object"==typeof i&&!~t.indexOf(i);i=r(i))Object.getOwnPropertyNames(i).forEach((t=>a[t]=()=>e[t]));return a.default=()=>e,f.d(n,a),n},f.d=(e,t)=>{for(var r in 
t)f.o(t,r)&&!f.o(e,r)&&Object.defineProperty(e,r,{enumerable:!0,get:t[r]})},f.f={},f.e=e=>Promise.all(Object.keys(f.f).reduce(((t,r)=>(f.f[r](e,t),t)),[])),f.u=e=>"assets/js/"+({53:"935f2afb",210:"6fdce000",237:"1df93b7f",514:"1be78505",574:"15d8c5b7",655:"89dd2556",918:"17896441",935:"88b961c4"}[e]||e)+"."+{53:"33f94d69",210:"08cfe3f0",237:"7835cd9b",514:"d088d80c",574:"4a60292b",655:"663dbdb8",918:"1c696f69",935:"a96e4ad2",972:"b8a5bc0d"}[e]+".js",f.miniCssF=e=>{},f.g=function(){if("object"==typeof globalThis)return globalThis;try{return this||new Function("return this")()}catch(e){if("object"==typeof window)return window}}(),f.o=(e,t)=>Object.prototype.hasOwnProperty.call(e,t),o={},n="symphony-collaboration:",f.l=(e,t,r,a)=>{if(o[e])o[e].push(t);else{var i,l;if(void 0!==r)for(var d=document.getElementsByTagName("script"),u=0;u{i.onerror=i.onload=null,clearTimeout(b);var n=o[e];if(delete o[e],i.parentNode&&i.parentNode.removeChild(i),n&&n.forEach((e=>e(r))),t)return t(r)},b=setTimeout(s.bind(null,void 0,{type:"timeout",target:i}),12e4);i.onerror=s.bind(null,i.onerror),i.onload=s.bind(null,i.onload),l&&document.head.appendChild(i)}},f.r=e=>{"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},f.p="/",f.gca=function(e){return e={17896441:"918","935f2afb":"53","6fdce000":"210","1df93b7f":"237","1be78505":"514","15d8c5b7":"574","89dd2556":"655","88b961c4":"935"}[e]||e,f.p+f.u(e)},(()=>{var e={303:0,532:0};f.f.j=(t,r)=>{var o=f.o(e,t)?e[t]:void 0;if(0!==o)if(o)r.push(o[2]);else if(/^(303|532)$/.test(t))e[t]=0;else{var n=new Promise(((r,n)=>o=e[t]=[r,n]));r.push(o[2]=n);var a=f.p+f.u(t),i=new Error;f.l(a,(r=>{if(f.o(e,t)&&(0!==(o=e[t])&&(e[t]=void 0),o)){var n=r&&("load"===r.type?"missing":r.type),a=r&&r.target&&r.target.src;i.message="Loading chunk "+t+" failed.\n("+n+": "+a+")",i.name="ChunkLoadError",i.type=n,i.request=a,o[1](i)}}),"chunk-"+t,t)}},f.O.j=t=>0===e[t];var t=(t,r)=>{var o,n,a=r[0],i=r[1],l=r[2],d=0;if(a.some((t=>0!==e[t]))){for(o in i)f.o(i,o)&&(f.m[o]=i[o]);if(l)var u=l(f)}for(t&&t(r);d Case Study | Symphony - +
-

Case Study

“Alone we can do so little; together we can do so much.” - Helen Keller

Introduction

Symphony is an open source framework designed to make it easy for developers to build collaborative web applications. Symphony handles the complexities of implementing collaboration, including conflict resolution and real-time infrastructure, freeing developers to focus on creating unique and engaging features for their applications.

In this case study, we’ll discuss the challenges that arise when building collaborative experiences on the web, the limitations of traditional approaches in solving these problems, and how we designed Symphony to overcome them.

Collaboration

Real-time collaboration, where multiple users can concurrently work together on a common task, has been a notable feature since the earliest days of the internet. Its origin can be traced back to the 1960s, when Douglas Engelbart, in his famous Mother of All Demos, demonstrated the first real-time collaborative editor, built on the oN-Line System (NLS), which allowed users to create and edit documents, link them together, and share them with others.1

However, for much of the web’s history, the majority of applications have notably been non-collaborative. Without the ability to work together on a common task in real-time, users have to instead enter into a tedious cycle of changing, exporting, and manually syncing or emailing copies of files.

Modify-Export-Send feedback loop

This slow feedback loop harms productivity.2 In other words, this workflow is sub-optimal and restrictive.

With the rise of remote work where users are geographically separated, the need to improve this workflow has become even more acute.

As noted by industry leaders, the optimal solution is for applications to allow multiple users to collaborate online in real-time.

"[Real-time collaboration] eliminates the need to export, sync, or email copies of files and allows more people to take part in the design process." - Evan Wallace, Figma

Popular products such as Figma, Google Docs, and Visual Studio Code incorporate this as a defining feature, allowing multiple users to concurrently modify the same state.

The problem is that building these types of applications is non-trivial. To understand why, we need to consider the characteristics of traditional web applications.

Evolution of Web Applications

Traditionally, the architecture of most web applications has conformed to the client-server model, where client and server communicate in a request-response cycle.

When a user makes a change to the client state, the change is propagated to the application server via an HTTP request, which in turn updates the database (i.e. the true application state) and confirms the change to the client via a response.

Three-tier Architecture
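
In code, a state change in this model is a single, self-contained exchange. As a rough, hypothetical sketch (the /documents/42 endpoint and payload are purely illustrative, not part of Symphony), a client-side update might look like this:

    // One request, one response: the server persists the change and confirms it.
    // Other clients will not see the new title until they re-fetch the document.
    const response = await fetch("/documents/42", {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ title: "Quarterly report" }),
    });
    const confirmed = await response.json(); // the server's view of the true state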

This architecture is fine for applications that are designed to be used by only one user at a time. However, for applications that seek to provide a multiplayer experience, the stateless nature of HTTP is problematic.

Since each state change by a given client is scoped to the request-response cycle, other users who wish to view the change must first request the data from the server, usually by refreshing the page.

In situations where multiple users are frequently modifying the same state, the need for each client to constantly send requests can quickly become burdensome and inefficient.

Introducing Real-Time

As companies began wanting to create applications that allowed multiple users to interact in real-time, the stateless nature of the HTTP request-response cycle became a limitation. These applications, such as online games, chat rooms, and social media platforms, needed to maintain updated state without requiring the user to take any specific action such as a page refresh. In other words, a different approach to data transmission was needed: one that allowed data to be shared bi-directionally between clients and/or a server in real-time.

In response, new web protocols were developed to help facilitate this. Two of the most popular include WebRTC and WebSocket.

WebRTC

Web Real-Time Communication (WebRTC) is an open-source technology that enables real-time communication between web browsers over the internet.3 The protocol uses a combination of JavaScript APIs and peer-to-peer networking to establish direct communication channels between browsers, without the need for a permanent, central server. UDP is used as the primary transport protocol for real-time data transmission. This makes WebRTC an especially attractive choice for collaborative applications that require very low-latency communication at the expense of reduced reliability and error correction, such as video conferencing, online gaming, and live streaming.
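
As a rough sketch of the browser APIs involved (the signalling step, where peers exchange session descriptions, is application-specific and omitted), a WebRTC data channel can be set up roughly as follows:

    // Minimal WebRTC data-channel sketch (browser). How the offer/answer and ICE
    // candidates are exchanged between peers ("signalling") is left to the application.
    const peer = new RTCPeerConnection();
    const channel = peer.createDataChannel("collab");

    channel.onopen = () => channel.send(JSON.stringify({ type: "cursor", x: 10, y: 20 }));
    channel.onmessage = (event) => console.log("peer update:", event.data);

    const offer = await peer.createOffer();
    await peer.setLocalDescription(offer);
    // Send peer.localDescription to the other peer via your signalling channel,
    // then apply their answer with peer.setRemoteDescription(...).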

WebSocket

WebSocket is a web protocol that provides a persistent, bi-directional communication channel between a client and a server over a single, long-lived TCP connection.4 The connection is established via a handshake between client and server. Since TCP is used as the primary transport protocol, WebSocket is a suitable choice for collaborative applications that require stronger guarantees on the reliability and security of the communication channel at the expense of higher latency, such as real-time dashboards, stock price tickers, and live chat.
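
For instance, using the open-source ws library on the server, a minimal sketch of a WebSocket relay that broadcasts every message it receives to all other connected clients might look like this (illustrative only, not Symphony's actual implementation):

    // server.ts - broadcast every incoming message to all other connected clients
    import { WebSocketServer, WebSocket } from "ws";

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (socket) => {
      socket.on("message", (data) => {
        for (const client of wss.clients) {
          if (client !== socket && client.readyState === WebSocket.OPEN) {
            client.send(data.toString());
          }
        }
      });
    });

    // client.ts (browser) - a single long-lived, bi-directional connection
    const ws = new WebSocket("ws://localhost:8080");
    ws.onopen = () => ws.send(JSON.stringify({ type: "edit", text: "hello" }));
    ws.onmessage = (event) => console.log("update from another client:", event.data);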

Using technologies such as WebRTC and WebSocket, clients and/or servers are able to maintain persistent, stateful communication channels, no longer bound by the limits of the request-response cycle. This permitted so-called real-time applications to be built, where state updates are perceived to be received instantaneously, without a page refresh.

Whilst it may initially seem that the addition of real-time solves the collaboration problem since multiple users can now see changes immediately, this is not the case.

The problem is that many real-time applications such as chat applications have the implicit constraint that each piece of state can only have a single mutable reference to it. In other words, the same piece of state cannot be modified concurrently by multiple users. For example, in a chat application, a given message is owned by a single user and they alone can edit it at any given time.

For an application to be truly collaborative, it must allow users to work together in real-time on shared state, where multiple users can modify the same piece of state at the same time, without conflicts or inconsistencies.

The possibility of conflict radically increases the complexity of implementing collaborative applications.

Conflict

In the context of real-time collaborative applications, conflict refers to a situation where two or more users attempt to modify the same piece of state without knowledge of one another (i.e. concurrently), resulting in conflicting versions of that data.

For example, multiple users working on a shared task or document may make changes to the same part of the document at the same time. Alternatively, network delays could cause state to diverge between different users which must be reconciled.

We can concretely demonstrate how conflict arises using the following examples.

Suppose that Alice and Bob are collaborating on a text document, when both Bob and Alice attempt to write at the same spot:

When conflicts arise, Alice and Bob’s modifications can be seen as branching off from the previous state of the system, creating a parallel version of the application state.

Branching

For a collaborative application, we need a method of reconciling such conflicts and enforcing distributed consistency across clients.

Merging

The role of a conflict resolution mechanism is to merge branches in a deterministic way, until all branches have converged to a single, consistent state that all parties agree upon.

In other words, after all user state changes have been applied, the application should deterministically converge to a single, eventually consistent state across the whole system.

Methods of Conflict Resolution & Maintaining Distributed Consistency

Over the years, there have been multiple solutions that have been proposed to the problem of conflict resolution.

The simplest strategy, as mentioned previously, is to prevent conflicts from occurring in the first place. This can be implemented via locking. When a given user is making edits, the document is locked, becoming read-only to other users. In other words, we impose the constraint that only a single user can have a mutable reference to the document at any given time.
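
A minimal sketch of this idea, assuming a hypothetical in-memory lock table keyed by document id (a real system would keep the lock in shared storage and expire it), might look like this:

    // Naive document-level locking (illustrative only).
    const locks = new Map<string, string>(); // documentId -> userId currently editing

    function acquireLock(documentId: string, userId: string): boolean {
      const holder = locks.get(documentId);
      if (holder && holder !== userId) return false; // someone else holds it: read-only
      locks.set(documentId, userId);
      return true;
    }

    function releaseLock(documentId: string, userId: string): void {
      if (locks.get(documentId) === userId) locks.delete(documentId);
    }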

Thanks to its simplicity, this approach is widely used even today. For example, Basecamp, a web-based project management tool, employs locking to prevent conflicts:

Basecamp locking

However, as noted previously, this approach provides a very limited workflow since it solely facilitates asynchronous collaboration, where users have to implicitly arrange times when they can edit the document or work on separate documents and then merge changes.

For real-time, synchronous collaboration, more advanced conflict resolution mechanisms are required.

Operational Transformation (OT)

One possible approach is to use the operational transformation (OT) algorithm, famously used by Google Docs 5.

OT represents each user’s edits as a sequence of operations that can be applied to the shared application state. For example, in the case of a collaborative text editor, where the sequence of characters is zero-indexed, the operation to insert the character 'a' at the beginning of the first sentence may be represented as insert('a', 0).

When a client makes an edit to the state, the corresponding operation is transmitted to the server, which broadcasts it to all other collaborating clients.

In cases where multiple users attempt to modify the same piece of state concurrently, the OT algorithm defines a set of rules, which encode how conflicting operations should be transformed such that the operations can be applied in any order, without causing conflict.

For example, in the case of the collaborative text editor, two clients may attempt to concurrently insert text at the start of the document, i.e. O1 = insert('a', 0, 1) and O2 = insert('b', 0, 2), where the third argument represents the client id. The transform rule may be to shift one of the insertions to the right by the length of the other insertion, yielding insert('a', 0, 1) and T(O2) = insert('b', 1, 2).

Operational Transform

This ensures that both insertions can be applied whilst still capturing user intent and not modifying the intended meaning of the document.
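
To make the transform rule concrete, the following is a minimal sketch of transforming one concurrent single-character insert against another (illustrative only, not a complete OT implementation):

    interface Insert {
      char: string;
      index: number;
      clientId: number;
    }

    // Transform `op` against a concurrent insert `other` that has already been applied.
    // If `other` inserted at or before op's index, op shifts right by the length of the
    // inserted text (one character here). Equal indices are tie-broken by client id so
    // that every replica orders the two inserts identically.
    function transformInsert(op: Insert, other: Insert): Insert {
      if (other.index < op.index || (other.index === op.index && other.clientId < op.clientId)) {
        return { ...op, index: op.index + 1 };
      }
      return op;
    }

    const o1: Insert = { char: "a", index: 0, clientId: 1 };
    const o2: Insert = { char: "b", index: 0, clientId: 2 };

    transformInsert(o2, o1); // { char: "b", index: 1, clientId: 2 } - matches T(O2) above
    transformInsert(o1, o2); // unchanged - client 1 wins the tie, so 'a' stays at index 0

Applying O1 then the transformed O2 on one client, and O2 then the (unchanged) O1 on the other, leaves both documents reading "ab".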

Since OT only requires operations to be incrementally broadcast, the algorithm is efficient and has low memory overhead.

The problem is that OT is very complex to implement correctly. The OT algorithm assumes that every state change is captured, which in modern rich browser environments, can be difficult to guarantee. Further, since operations have a finite transit time to the server, the states of clients naturally diverge over time from one another. The larger the divergence, the larger the number of possible combinations of operations that result in conflict, each of which must be accounted for by the transform rules. Since many of these conflicting combinations are very difficult to foresee, formally proving the correctness of OT is complicated and error-prone, even for the simplest of OT algorithms.

This sentiment is widely shared by practitioners in the field, as highlighted by Joseph Gentle, a former Google Wave engineer and author of the ShareJS OT library, who said:

Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. […] Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time.

In fact, 4 out of 8 different implementations of OT published between the original 1989 paper and 2006 were found to be incorrect, missing subtle edge cases. The consequence of this incorrectness was that client state could irrevocably diverge, with no way to return to a consistent state [ref: CRDTs: The Hard Parts].

The complexity of OT led researchers to find alternatives, the most promising of which are conflict-free replicated data types, or CRDTs.

Conflict Free Replicated Data Types (CRDTs)

A conflict-free replicated data type (CRDT) is an abstract data type designed to be replicated at multiple processes.6 By definition, CRDTs have the following properties:

  • Independent- Any replica can be modified without coordinating with other replicas.
  • Strongly eventually consistent- When any two replicas have received the same set of updates (in any order), the mathematical properties of CRDTs guarantee that both replicas will deterministically converge to the same state [footnote- explain what these mathematical properties are].

By imposing these mathematical properties on the CRDT and its associated algorithms, clients can optimistically update their own state locally and broadcast their updates to all other remote state replicas [footnote explain difference between state and operation based]. Since CRDTs are strongly eventually consistent, once a given remote replica has received all updates, it is guaranteed to converge to the same state as the local replica without conflict.

The advantage of CRDTs is that they are guaranteed to be conflict-free, as long as the required mathematical properties are upheld. Since these mathematical properties are well-defined, it is easier to prove the correctness of a CRDT than any corresponding OT implementation. Further, since each replica is independent and CRDTs make no assumptions about the network topology, CRDTs are partition-tolerant by default and can be used in a variety of network topologies, including client-server and P2P. This property also means they are offline-capable by default.

However, the mathematical constraints of CRDTs, in particular that operations should be commutative, add some unavoidable overhead. Most commonly-used data structures do not have commutative operations by default. For example, the add and remove operations of a Set are not naturally commutative. To ensure commutativity, the CRDT must retain additional metadata.7

For example, in the case of the add and remove operations of a Set, tombstones are typically used as placeholders for removed entries: if a replica receives a remove operation for an element before it receives the add operation that actually added the element, the tombstone ensures that the remove operation is still correctly processed. Since this metadata must be retained for the required mathematical properties to be upheld, the use of CRDTs inevitably results in additional memory overhead, which can become significant for large state. As noted by Joseph Gentle:

"Because of how CRDTs work, documents grow without bound. … Can you ever delete that data? Probably not. And that data can’t just sit on disk. It needs to be loaded into memory to handle edits." - Joseph Gentle, former Google Wave engineer

While recent research has sought to introduce garbage-collection methods to reduce the amount of metadata, there is still significant additional memory overhead when using CRDTs to represent a data model.
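
To make the tombstone idea concrete, here is a minimal sketch of a two-phase set (2P-Set), one of the simplest CRDT set designs, in which removed elements are remembered forever so that add and remove commute across replicas:

    // Minimal two-phase set (2P-Set) sketch. Applying the same adds/removes in any
    // order on any replica converges to the same result; the cost is that tombstones
    // are never discarded and an element can never be re-added after removal.
    class TwoPhaseSet<T> {
      private added = new Set<T>();
      private removed = new Set<T>(); // tombstones

      add(value: T): void {
        this.added.add(value);
      }

      remove(value: T): void {
        this.removed.add(value); // safe even if the corresponding add hasn't arrived yet
      }

      has(value: T): boolean {
        return this.added.has(value) && !this.removed.has(value);
      }

      // State-based merge with another replica: take the union of both sets.
      merge(other: TwoPhaseSet<T>): void {
        other.added.forEach((v) => this.added.add(v));
        other.removed.forEach((v) => this.removed.add(v));
      }
    }

The tombstone set only ever grows, which is precisely the unbounded metadata growth alluded to in the quote above.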

Custom Conflict Resolution Mechanisms

Whilst OT and CRDTs represent the most popular approaches to conflict-resolution, the complexity of OT and the memory overhead of CRDTs can sometimes be unacceptable for certain use-cases. As such, some choose to create custom, proprietary data models that are inspired by the OT and CRDT approaches and are highly specialised to a particular use-case.

For example, Figma relaxes many of the constraints imposed by CRDTs by adopting much simpler conflict-resolution semantics. In particular, they use last-write-wins semantics when two clients try to modify a value of a Figma object concurrently. This works well for Figma objects where changes are mutually exclusive, i.e. a single value must be chosen, but would fail if used for text editing. In Figma’s case, this was a valid tradeoff for their use case but would not be a suitable model for other applications.8
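
A last-write-wins register can be sketched in a few lines. Assuming each write carries a timestamp and a client id for tie-breaking (an assumption made here for illustration, not Figma's actual implementation), concurrent writes resolve deterministically to a single value:

    interface Write<T> {
      value: T;
      timestamp: number; // e.g. a logical clock or wall-clock time
      clientId: string;  // tie-breaker for identical timestamps
    }

    // Given two concurrent writes to the same field, every replica picks the same winner.
    function mergeLww<T>(a: Write<T>, b: Write<T>): Write<T> {
      if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
      return a.clientId > b.clientId ? a : b; // deterministic tie-break
    }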

The advantage of implementing a custom conflict-free data model is that the mechanism can be made highly specialised to the target use-case. This can mean that many of the constraints that come with OT and CRDTs are relaxed, which may result in a simpler and more efficient data representation. However, developing a custom model can be risky since it requires a number of assumptions to be made about the use-case. In Figma’s case, for example, introducing text editing may require significant changes to their current conflict-resolution semantics.

Choosing a Method of Conflict Resolution

When choosing a conflict-resolution mechanism, there is no single best, one-size-fits-all solution. Each conflict-resolution mechanism has its own set of tradeoffs, and choosing a particular approach requires a deep understanding of the usage pattern of the target application.

Some aspects of the target application that should be considered include:

  • What CAP (Consistency, Availability, Partition-tolerance) properties should the system have?
  • What is the application architecture? Client-server? P2P?
  • Is the system required to operate offline?
  • Are there any system-level constraints including CPU/memory limits?
  • Is the data model generic or highly specialised to a particular use-case?

Answering these questions influences the suitability of each conflict resolution mechanism to a specific use-case.

Conflict resolution mechanisms comparison

Manually Building a Real-time Collaborative Application

Building a collaborative application from scratch can be time-consuming and difficult, particularly when dealing with the intricacies of real-time infrastructure and conflict-resolution mechanisms. It means that creating rich, collaborative experiences on the web has traditionally only been open to companies with the human and financial resources to roll their own solutions. For smaller teams of modest means, who may lack familiarity with these specialised topics, implementing such systems has remained out of reach. Provided below is a sample list of tasks involved in creating a production-ready real-time collaborative web application:

Manually building collaborative application

As a result, solutions have started to emerge that lower this barrier.

Existing Solutions

Existing solutions typically fall into two categories: DIY solutions and commercial solutions.

DIY Solutions

For organisations who have complex, specialised requirements for their collaborative functionality or want to tightly integrate with existing infrastructure, a DIY solution might be the best fit. This involves manually synthesising the various components required for a real-time collaborative application.

There are numerous open-source libraries providing implementations of popular conflict-resolution algorithms- teams would likely need to research, choose, and integrate the solution that best fits their use case. Alternatively, a bespoke solution may be best suited for highly specialised applications.

For the real-time network and persistence layer, which handles the propagation of updates to collaborating clients and/or server(s) and the storing of state, one could use a backend-as-a-service such as Ably, Pusher, or PubNub, or provision a custom implementation using open-source libraries like ws or PeerJS on cloud infrastructure.

Whilst the DIY approach offers a high degree of customisation, it does require developers to have a high-level of proficiency in the relevant technologies. Thus, less experienced teams might reach for a Software-as-a-Service (SaaS) product to help manage their collaborative functionality needs.

Commercial Solutions

The advent of commercial offerings providing Collaboration as a Service is a relatively recent phenomenon.

One of the most popular solutions, released in 2021, is Liveblocks. Whilst not as flexible as the DIY approach, Liveblocks provides a great developer experience, exposing all the components required for adding real-time collaboration to an application through an intuitive client API. This includes a collection of custom CRDT-like data types, autoscaling real-time infrastructure with persistence, and a developer dashboard for easily monitoring usage patterns. However, this convenience comes at a cost, with Liveblocks charging $299 per month for an application with up to 2000 monthly active users (MAU) [ref: valid as of May 2023].

A compelling alternative is Fluid Framework developed by Microsoft. Fluid provides a collection of client libraries that also expose custom CRDT-like distributed data structures. The client libraries connect to an implementation of the Fluid service, a runtime which handles the complexities of propagating updates in real-time and persisting state. Whilst Fluid is open-source, it provides a very limited implementation of the Fluid service by default, capable of handling only 100s of concurrent users. For larger applications, developers are forced to use either the Azure Managed Service or write a custom scaled implementation.

A Solution for Our Use Case

Looking at the above solutions, it is clear that until now, developers who want to incorporate collaboration into their products have had to partially or fully roll their own solutions or turn to a closed-source, managed provider.

The first option has significant implementation cost, particularly given that the expertise required to develop collaborative functionality is often orthogonal to the business's core offering. The latter option suffers from vendor lock-in and can attract considerable expense, as noted with Liveblocks.

Following this, we wanted to build a tool for small teams that want to add collaborative functionality to their applications without having to spend time implementing and deploying their own conflict resolution and real-time infrastructure.

Further, we wanted to make our framework open-source, scalable and fully self-hosted so that developers have complete control of code and data ownership.

With globalisation and the rise of remote work, providing seamless web-native collaboration is no longer the preserve of the largest companies. Smaller teams increasingly want to reap the benefits of fast collaborative feedback loops in their products.

An example of this is Propeller Aero, who wanted the ability to collaborate with their customers on 3D interactive site survey maps.

“We started looking at building a service ourselves… We really didn't want to because it's a whole lot of work and it's a really difficult problem. This was a very new problem to us, our engineering team had different levels of experience in synchronisation in real-time as a whole.” - Jye Lewis, Engineering Manager, Propeller Aero

We sought to assist companies with similar profiles in adding collaborative functionality to their web applications.

The availability of an open-source tool which handles the complexities of implementing collaboration, including conflict resolution and real-time infrastructure, would free Propeller Aero's developers to focus on creating features that have direct business value, whilst still retaining control over all their data.

Comparing existing solutions

Symphony

Overview

Symphony is an open-source framework designed to make it easy for developers to add collaborative functionality to their applications. It comes with a client library that provides an intuitive API to a collection of conflict-free data types that are composed to construct a distributed data model. Symphony automatically provisions the required network infrastructure to propagate state changes to all collaborating clients in real-time and persist state between user sessions. It also provides real-time application- and system-level monitoring via a developer dashboard that exposes pertinent metrics including the number of active users, the size of persisted state (bytes), and the CPU/memory usage of each collaborative session.

Using Symphony

Symphony has been designed with ease-of-use in mind. In three simple steps, developers can create and deploy a real-time collaborative application.

After installing the required dependencies stated in the documentation, and installing the Symphony CLI tool globally via npm:

  1. Run symphony compose <projectName>. This command creates a new projectName directory, initializes a new Node project with the required package.json, and scaffolds some initial starter files including the Symphony configuration file, symphony.config.js.
  2. Write and deploy the front-end client code by composing the collection of conflict-free data types provided by the Symphony client.
  3. Run symphony deploy <domainName>, which deploys the application on Google Cloud Platform (GCP). After provisioning is complete, developers can run symphony dashboard to view the developer monitoring dashboard.

Following these steps, developers can also enhance existing web applications with collaborative functionality using Symphony.

To illustrate this, here’s a simple whiteboard application where users can draw lines, shapes, and change colours. In its current form, the whiteboard is single-user and non-collaborative.

To make this whiteboard multiplayer, we modify the whiteboard code to make use of the conflict-free data types provided by the Symphony client. After deploying the application to GCP, users can now work together in the same collaborative space and see what others are doing in real-time.
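
As a rough illustration, here is how the whiteboard's drawing logic might be wired up with the Symphony client. The Symphony calls mirror those in our client API documentation; the whiteboard-specific names (the strokes list, renderStrokes, onStrokeDrawn, and the stroke objects) are hypothetical, and methods such as push and toArray are assumed to behave like their Array counterparts.

```js
// Illustrative sketch only: the whiteboard helpers are hypothetical.
import { SymphonyClient } from '@symphony-rtc/client';

const client = new SymphonyClient('wss://whiteboard.example.com');
const room = client.enter('whiteboard-demo');

// A top-level shared list holding every stroke drawn so far.
const strokes = room.newList('strokes');

// Re-render whenever any collaborator adds or removes a stroke.
room.subscribe(strokes, () => renderStrokes(strokes.toArray()));

// Broadcast the local cursor position as ephemeral presence.
const canvas = document.getElementById('whiteboard');
canvas.addEventListener('pointermove', (event) =>
  room.updatePresence({ cursor: { x: event.offsetX, y: event.offsetY } })
);

// Append the local user's stroke; Symphony propagates it to everyone else.
function onStrokeDrawn(points, colour) {
  strokes.push({ points, colour });
}

function renderStrokes(allStrokes) {
  // Clear the canvas and redraw every stroke (omitted for brevity).
}
```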

We’ll now turn to how we built Symphony and the technical challenges we faced.

Architecture Overview

We’ll begin by outlining the fundamental requirements we had to address and a description of our design philosophy. We’ll then provide a high-level overview of our core architecture and discuss important design decisions, tradeoffs and improvements that were made.

Terminology

In order to express the system requirements accurately, we introduce some useful terminology:

  • Document- refers to the shared state that clients modify during a session.
  • Room- a collaboration session to which one or more clients connect in order to modify the room document. A given room has a single document i.e. shared state that clients modify.
  • Presence- represents the ephemeral state of a room, capturing users’ movements and actions inside the room, including cursor positions, user avatars, online/offline indicators, or any other visual representation that reflects the real-time activity or availability of users within the collaborative session.

Fundamental Requirements

When building our initial prototype, we focussed on the fundamental problems that needed to be solved in order to build the core of a real-time collaborative framework. These included:

  • Deciding how to model the shared state of a room i.e. the document, selecting a suitable mechanism to resolve conflicts, and understanding the constraints that such a choice would impose on the rest of our architecture.
  • Determining how ephemeral and persistent state changes on one client would be propagated in real-time to all other subscribed clients and/or servers.
  • Constructing a suitable persistence layer, where state can be stored between collaborative sessions and system metadata can be retained.

Design Philosophy

Symphony is designed with the principle that developers should be able to incorporate collaboration into their products without having to radically modify their existing workflow and tools. With this as our guiding principle, we explain our choice of architecture and how it attempts to meet the fundamental requirements of a real-time collaborative framework.

Core Architecture

After some initial prototyping, we arrived at the following high-level flow on how a collaboration session involving multiple users starts, progresses and terminates.

A client connects to a server via WebSocket. The client specifies the room to connect to by including the room id in the URL path. The server extracts the room id and queries the database to check if a room with that id already exists. If the id exists i.e. the room has been used before, the server retrieves the associated room document from storage and loads it into memory; otherwise, a new document is created in memory and a new room record is created in the database.

Additional clients can connect to the active room and modify the state. Each update is propagated to the server, which in turn updates the document state in memory and broadcasts it to all the other collaborating clients. Upon receiving updates, clients update their local state. When the last remaining client disconnects from the room, the document is serialized and written to storage. The document and room metadata are subsequently purged from memory, and the room is marked as closed in the database.
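
A minimal sketch of this flow, assuming a Node.js room server built on the ws library and Yjs; the persistence helpers at the top are stand-ins for the database and object-storage calls described above.

```js
import { WebSocketServer } from 'ws';
import * as Y from 'yjs';

// Stand-ins for the real Postgres and object-storage calls (hypothetical helpers).
const db = new Map();    // roomId -> room metadata record
const blobs = new Map(); // roomId -> serialized document
const findRoom = async (id) => db.get(id);
const createRoom = async (id) => db.set(id, { id, createdAt: Date.now() });
const loadDocument = async (id) => blobs.get(id);
const saveDocument = async (id, bytes) => blobs.set(id, bytes);

const rooms = new Map(); // roomId -> { doc, conns } currently held in memory
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', async (ws, req) => {
  const roomId = new URL(req.url, 'http://localhost').pathname.slice(1);

  // Load the room document into memory on first connection, otherwise reuse it.
  let room = rooms.get(roomId);
  if (!room) {
    const doc = new Y.Doc();
    if (await findRoom(roomId)) {
      Y.applyUpdate(doc, await loadDocument(roomId));
    } else {
      await createRoom(roomId);
    }
    room = { doc, conns: new Set() };
    rooms.set(roomId, room);
  }
  room.conns.add(ws);

  // Apply each incoming update and broadcast it to the other collaborators.
  ws.on('message', (update) => {
    Y.applyUpdate(room.doc, new Uint8Array(update));
    for (const conn of room.conns) if (conn !== ws) conn.send(update);
  });

  // When the last client leaves, persist the document and free the memory.
  ws.on('close', async () => {
    room.conns.delete(ws);
    if (room.conns.size === 0) {
      await saveDocument(roomId, Y.encodeStateAsUpdate(room.doc));
      rooms.delete(roomId);
    }
  });
});
```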

With an overall direction in mind, we then explored different options for each component of our core architecture.

Implementing the Core Architecture

Conflict Resolution

As mentioned previously, a key component of implementing real-time collaboration is the ability to deterministically reconcile conflicts, which arise as a result of multiple users concurrently modifying the same piece of state.

While we found the performance and low memory overhead of OT attractive, its complexity, and the fact that it is best suited to editing large text documents, made it less applicable to supporting generic data models.

For Symphony, we instead decided to use CRDTs as the primary conflict resolution mechanism. Their strong eventual consistency guarantees mean that client changes can be optimistically applied, resulting in a faster user experience. In addition, they are highly available and fault-tolerant, which means that users can continue to change state even during network failure or disconnection- the state will simply synchronise with other clients upon reconnection.

Although CRDTs have traditionally suffered from inadequate performance and very large memory overhead, they have become dramatically faster and more memory efficient in recent years, thanks to an active research effort.9 To ensure suitable performance, we decided to use an operation-based CRDT, which, unlike a state-based CRDT, only propagates operations over the wire instead of the entire state. The tradeoff is that operation-based CRDTs require a reliable network channel, which could easily be provided given our chosen network topology (see below).

For our collection of CRDTs, we chose to use Yjs, a library which provides a collection of generic, operation-based CRDT implementations based on the YATA algorithm. We chose Yjs because it has strong community support, a very efficient linked-list data model with optimisations such as garbage collection (making it one of the most memory-efficient and performant implementations), and well-defined synchronisation and awareness protocols for propagating persistent and ephemeral updates across a generic network layer.

We also considered Automerge, the other leading open-source offering in this space. Whilst comparable in performance, it is less mature and used roughly twice as much memory as Yjs in recent benchmarks.
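
The following minimal sketch shows the operation-based model in action: two Yjs documents exchange incremental updates and converge to the same state regardless of which replica an edit originated on.

```js
// Two in-memory replicas converging via Yjs's operation-based updates.
import * as Y from 'yjs';

const docA = new Y.Doc();
const docB = new Y.Doc();

// Forward each replica's incremental updates to the other, as the
// network layer would do over WebSocket.
docA.on('update', (update) => Y.applyUpdate(docB, update));
docB.on('update', (update) => Y.applyUpdate(docA, update));

// Edits made on either replica...
docA.getMap('shape').set('fill', 'red');
docB.getMap('shape').set('stroke', 'blue');

// ...deterministically converge on both replicas.
console.log(docA.getMap('shape').toJSON()); // { fill: 'red', stroke: 'blue' }
console.log(docB.getMap('shape').toJSON()); // { fill: 'red', stroke: 'blue' }
```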

State Change Propagation

With a collection of conflict-free data types that can be composed into a distributed data model, we next needed to consider how to propagate state updates to all collaborating clients in real-time.

Since CRDTs have strong eventual consistency, they can theoretically support any network layer capable of propagating updates from one replica to another. However, since our use-case is for web applications, we can only use technologies supported by modern browsers- the two primary choices being WebSocket and WebRTC.

WebRTC is primarily used in peer-to-peer (P2P) topologies. Whilst WebRTC is scalable and minimises infrastructure requirements since it does not require the use of a central server, it is less suitable for our use case.

Firstly, the majority of modern web applications already use a centralised client-server model. Companies want to retain control of data and enforce security measures such as authentication across all users, which is difficult in a P2P topology. Additionally, traversing firewalls and Network Address Translation (NAT) devices is not trivial with WebRTC- a consequence of this is that applications may fail to propagate updates in regions with national firewalls, e.g. China.

As a result of these limitations, we chose WebSocket as the underlying protocol for our real-time infrastructure. Its support for the client-server model and its stability across all major browsers made it a natural choice for us. Since WebSocket provides a bidirectional communication channel over TCP, the reliable network channel required for operation-based CRDTs is inherently provided.

Persisting Room Data

When a collaboration session ends, we need to persist room data so that room documents are not lost and users can recreate the room in the future to continue working on it.

To do this, we need to construct a data model which allows us to represent created rooms and their associated metadata. The model consists of a single Room entity:

Relational Model

We chose to store this data in a Postgres relational database since we have a read-heavy system and each room has a fixed schema. It also permits analytical queries to be executed more easily. We rely on the Prisma ORM, which provides a high-level, type-safe abstraction for schema creation and database interaction.
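
As a sketch, the room lookup performed when a client connects might look like the following with Prisma Client; the exact Room model fields used here (id and active) are illustrative assumptions rather than our final schema.

```js
// Hedged sketch: the Room fields shown here are illustrative assumptions.
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Reuse the existing room record if the room has been used before,
// otherwise create a new one marked as active.
async function findOrCreateRoom(roomId) {
  return prisma.room.upsert({
    where: { id: roomId },
    update: { active: true },
    create: { id: roomId, active: true },
  });
}
```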

Storing Document Data

In line with Yjs best practice, we serialize room documents into a highly compressed binary format. This significantly reduces the storage space required per document, speeds up data transmission, and minimises bandwidth consumption across the network.

We initially thought of storing these binary blobs in the Postgres database. However, we realised that this was suboptimal.

Firstly, document sizes can become very large, particularly after lengthy collaboration sessions which can result in a large amount of accumulated CRDT metadata. Storing these documents in Postgres would affect the scalability of the database. Secondly, Postgres is not optimized for large-scale writes- the number of writes scales linearly with the number of rooms and can become particularly problematic if large documents are saved multiple times during a collaborative session. Implementing other useful features such as document versioning also becomes tricky.

One potential solution is to use a NoSQL database like AWS DynamoDB. However, these often have limits on the size of a single database item (DynamoDB has a 400 KB limit), which is impractical for use cases like ours where document size can potentially be unbounded.

Considering these limitations, we decided to store documents in object storage, namely AWS Simple Storage Service (S3). Object storage is highly scalable and optimized to handle large amounts of unstructured data, making it ideal for persisting schemaless room documents. It is also cheaper than alternative NoSQL solutions like DynamoDB and supports large-scale read and write operations, making it suitable for scenarios where a large number of concurrent rooms and documents need to be ingested and retrieved at high volume. Further, our use case only requires documents to be persisted as atomic binary blobs- we do not need to query within a document, making object storage more suitable than a NoSQL database [ref Sam Broner].

Integrating Postgres and S3 object storage, we are now able to persist room data between collaboration sessions. When a user connects to a room, we query Postgres to determine if the room already exists. If it does, we retrieve the associated document from S3 and load it into memory for editing; otherwise we create a new Room record and initialize an empty document. After the last user leaves the room, we serialize the in-memory document, store it in object storage and purge the document from server memory, returning memory resources to the system.
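
A sketch of the persistence path, assuming the AWS SDK v3 S3 client and a bucket/key layout of our own choosing:

```js
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import * as Y from 'yjs';

const s3 = new S3Client({});
const BUCKET = 'symphony-room-documents'; // hypothetical bucket name

// Serialize the in-memory document to a compact binary update and upload it.
async function persistDocument(roomId, doc) {
  const body = Y.encodeStateAsUpdate(doc);
  await s3.send(new PutObjectCommand({ Bucket: BUCKET, Key: roomId, Body: body }));
}

// Download the stored blob and load it into a fresh in-memory document.
async function loadDocument(roomId) {
  const res = await s3.send(new GetObjectCommand({ Bucket: BUCKET, Key: roomId }));
  const doc = new Y.Doc();
  Y.applyUpdate(doc, await res.Body.transformToByteArray());
  return doc;
}
```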

Front-end Client API

Whilst the conflict-free data types provided by Yjs come with a primitive API, they require the developer to have some knowledge of the underlying data model to use them optimally.

In line with our design philosophy of seamlessly integrating into developers’ existing workflow, we created a JavaScript client API wrapper with sensible defaults and intuitive abstractions, through which a developer interacts with Symphony’s components.

The client exposes conflict-free data structures, including a SyncedList and SyncedMap, which are composed to form a distributed document model. Importantly, the client abstracts away the underlying communication and persistence infrastructure, allowing the application developer to remain at a familiar level of abstraction.

The client internally implements additional quality-of-life improvements for the developer, providing an enhanced developer experience (a short sketch follows the list below). These include:

  • Implementing performance optimizations such as auto bulk-insertion of updates which significantly reduces memory consumption.
  • Automatically converting between CRDT and plain JS objects when logical to do so such that developers do not need to keep manually converting.
  • Providing undo/redo functionality with a History API. This allows undo/redo functionality to be manually paused and resumed.
  • Convenience iterator methods on SyncedList including filter, map, and find, allowing it to be used more like a regular JavaScript Array.
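
For instance, bundling related operations and using the familiar iterator helpers might look like this; the push method and the shape of the list items are assumptions made for illustration.

```js
// Illustrative sketch; item shape and push() semantics are assumed.
import { SymphonyClient } from '@symphony-rtc/client';

const client = new SymphonyClient('wss://tasks.example.com');
const room = client.enter('team-board');
const tasks = room.newList('tasks');

// bundle() merges the two inserts into a single operation,
// so a subsequent undo removes both tasks at once.
room.bundle(() => {
  tasks.push({ title: 'Write docs', done: false });
  tasks.push({ title: 'Run load tests', done: false });
});

// Iterator conveniences let the shared list be used like a plain Array.
const remaining = tasks.filter((task) => !task.done);
console.log(`${remaining.length} tasks remaining`);
```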

The full feature set provided by the Symphony client is described in our API documentation.

Load Testing

Once Symphony’s core functionality was operational, developers were able to easily create real-time collaborative applications.

However, the current architecture is limited.

The responsibility for creating, maintaining and updating state in memory for all rooms, handling user WebSocket connections, and serializing/deserializing state all fall to a single server. In other words, the system has a single point of failure.

Also, since the single server is responsible for handling all collaborative sessions and bearing the additional memory overhead resulting from our use of CRDTs, we hypothesised that whilst this architecture is suitable for a small number of rooms, it would not suffice for real-world applications, which typically have thousands of concurrent users [ref typical app user count e.g. miro].

To empirically verify this, we turned to load testing the system. This would also allow us to determine the system’s service level objectives (SLOs) including the concurrent user limit and identify potential bottlenecks such as compute or memory, which would later inform our scaling strategy.

Constructing a Test Environment

We first needed a way to establish a large number of virtual user connections to the server, each of which sends state updates and broadcasts presence.

To do this, we wrote a program which spawned N separate processes, where each process modelled a virtual user connecting to the server. Since creating a large number of virtual users and propagating updates proved to be CPU intensive, we provisioned multiple EC2 instances to execute the script concurrently.
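
Each virtual-user process was conceptually similar to the following sketch, which connects to a room over WebSocket and emits synthetic document and presence updates at a fixed rate; the message shapes are simplified for illustration.

```js
import WebSocket from 'ws';
import * as Y from 'yjs';

const ROOM_ID = process.env.ROOM_ID ?? 'load-test-room';
const ws = new WebSocket(`ws://localhost:8080/${ROOM_ID}`);
const doc = new Y.Doc();

ws.on('open', () => {
  // Forward every incremental document update to the server.
  doc.on('update', (update) => ws.send(update));

  // One synthetic document edit per second.
  setInterval(() => doc.getText('notes').insert(0, 'x'), 1000);

  // Five presence updates per second.
  setInterval(
    () => ws.send(JSON.stringify({ presence: { x: Math.random(), y: Math.random() } })),
    200
  );
});
```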

For the test itself, we selected the following load parameters.

Load testing parameters

A single room server with 1vCPU and 4GB of memory, handling 240 virtual users with 4 users per room, resulting in a total of 60 rooms, propagating one state update per second and 5 presence updates per second, for a period of 30 minutes.

While the rates of document and presence updates would vary widely depending on the specific use case, we felt that these were reasonable values to model real-world usage (in comparison Liveblocks’ default settings throttle user updates to 10 per second).

Using AWS CloudWatch, we instrumented our server to extract application-level and system-level metrics, including the total number of WebSocket connections and CPU/memory usage.

We observed CPU usage steadily increase as a function of the number of connected virtual users. Once all connections were established, CPU usage had reached 92%. As the in-memory document size grew as a result of user updates, CPU usage peaked at 94% before we detected performance degradation in the form of dropped connections.

The results confirmed our hypothesis- that our current architecture could only handle a few hundred concurrent users for 30 minutes of real-world usage before failing.

It would be possible to vertically scale the server with greater compute and memory. However, this approach is not optimal. Firstly, the architecture would continue to have a single point of failure. Secondly, scaling would be hard-capped by the maximum instance size offered by AWS.

For these reasons, we decided to explore horizontal scaling, which means increasing the number rather than the size of our servers. This would make our system capable of handling more users, while also being more resilient to server failures.

Scaling

Looking to Existing Solutions

Horizontally scaling the Symphony room server is not trivial. Unlike stateless services, which can be scaled simply by adding more instances, the room server holds document state in memory and clients connect to it via persistent WebSocket connections. With multiple instances, clients who connect to the same room may be connected to different room server instances. This raises two problems.

The first problem is that if a client connected to a given server instance makes an update to the document of a particular room, then this update must be propagated to the other servers which hold that room document in memory; otherwise, those replicas will not receive the update and the state will diverge.

The second problem arises when a client attempts to connect to an already active room. It’s possible that the connecting client may be routed to a server instance which does not have the document in memory- in which case the server needs a way of retrieving the most recently updated document from another server.

Redis Pub/Sub

The first problem is not unique to the Symphony room server. One common pattern to ensure updates on one server are propagated to the other servers is to add a backplane, a shared component that facilitates the synchronization of data across multiple server instances.

A popular backplane is a Redis node: each server publishes every update it receives from its connected clients to a Redis channel, and subscribes to the same channel to receive updates published by other servers. This publish-subscribe mechanism ensures that when a client updates a room document on a particular server, the update is broadcast to all other servers- if a receiving server has the corresponding room document in memory, it can apply the update locally, ensuring that the document replicas of a given room remain synchronised.
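
A sketch of this backplane using the ioredis client; the channel naming scheme is our own, and re-applying a server's own published update is harmless because Yjs updates are idempotent.

```js
import Redis from 'ioredis';
import * as Y from 'yjs';

const pub = new Redis();
const sub = new Redis(); // a connection in subscriber mode cannot also publish

const docs = new Map(); // roomId -> Y.Doc held in memory on this server

// Receive updates published by other room servers.
sub.psubscribe('room:*');
sub.on('pmessageBuffer', (_pattern, channel, update) => {
  const roomId = channel.toString().split(':')[1];
  const doc = docs.get(roomId);
  // Only apply the update if this server holds the room document in memory.
  if (doc) Y.applyUpdate(doc, new Uint8Array(update));
});

// Called whenever a locally connected client sends an update for a room.
function onClientUpdate(roomId, update) {
  Y.applyUpdate(docs.get(roomId), update);            // update the local replica
  pub.publish(`room:${roomId}`, Buffer.from(update)); // fan out to other servers
}
```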

Querying for Documents

One way of solving the second problem, namely that the document of an active room may be missing from the particular server instance that a client connects to, is to retain copies of every document on each server. However, this nullifies the benefit of scaling since the memory demands on each server are not reduced.

Instead, we implemented a system where a server could query another server instance that had the required document in memory. For this, we maintain a key-value mapping of room ids to room server IP addresses, which records which room documents are present on which room servers. We chose AWS DynamoDB, a NoSQL key-value database, to store this data.

When a client connects to a room and is routed to a server that does not have the corresponding document in memory, the server queries DynamoDB for the list of server IP addresses that are handling that room.

If one or more IP addresses are returned, the room is active and the latest version of the document is the one currently being edited on one or more other servers. Using one of the returned IP addresses, the server retrieves the document from the corresponding server. If no IP addresses are returned, the room is not active and the latest version of the document is simply retrieved from object storage. Once the querying server has retrieved the document, it subscribes to Redis to receive all future document updates.
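
The lookup itself is a simple key-value read, sketched below with the AWS SDK v3 DynamoDB client; the table and attribute names are illustrative assumptions.

```js
import { DynamoDBClient, GetItemCommand } from '@aws-sdk/client-dynamodb';

const dynamo = new DynamoDBClient({});

// Returns the IP addresses of servers currently holding the room document,
// or an empty list if the room is inactive (fall back to object storage).
async function locateRoom(roomId) {
  const res = await dynamo.send(new GetItemCommand({
    TableName: 'room-locations',      // hypothetical table name
    Key: { roomId: { S: roomId } },
  }));
  return res.Item?.servers?.SS ?? [];
}
```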

This solution ensured that clients could access a room document via any server instance, without having to replicate all active room documents on every server.

Adding and Removing Instances

Since the single-server load test had identified CPU utilisation as a notable bottleneck, we set our scaling policy to target 50% CPU utilisation. This means that the system will scale out when CPU usage of any server exceeds that limit and scale in when it falls below that number.

Evaluating the Current Scaling Solution

The chosen scaling solution represents a significant improvement over the single-server approach. It can support a larger number of concurrent users by elastically deploying room server instances. However, while this architecture has historically been the most commonly prescribed approach for scaling WebSocket-based stateful services, we found a number of significant limitations specific to our use case during load testing.

When several clients attempted to join a particular room, they were often routed to different server instances. When the number of users in each room approached the number of server instances, copies of the document would invariably end up on every server. This nullified the benefits of scaling since there was no net decrease in memory overhead. The duplication was also expensive, leading to extraneous CPU usage as updates had to be broadcast and applied at every replica. This in turn resulted in more server instances being provisioned and additional load on the Redis node. In fact, the Redis node approached 90% CPU utilisation at a few thousand concurrent users and represented a single point of failure.

These findings led us to rethink the suitability of our current architecture for our use case.

A Better Scaling Solution

Upon reflection, there are two primary problems with the pub-sub architecture.

The first is that there is unnecessary duplication of documents across multiple server instances. The second is that the Redis node constitutes a single point of failure.

To overcome these limitations, we took inspiration from Figma.

“Our servers currently spin up a separate process for each multiplayer document which everyone editing that document connects to.” - Evan Wallace, CTO, Figma

This approach has the advantage of keeping document state confined to a single process. This means that there is no longer a need for distributed document state, eliminating the difficulties in horizontally scaling a stateful service. Further, each process/room can be scaled independently of others resulting in minimised cost and efficient utilisation of system resources.

This improved architecture has the following requirements:

  • Isolating each process/room from other rooms running on the same host.
  • Dynamically orchestrating process creation, execution, and termination. Processes should also automatically be restarted in case of crashes.
  • Autoscaling processes according to a specified scaling metric- in our case, this would likely be CPU or memory utilisation.
  • Proxying requests to the correct service.

Implementation

We arrived at the following high-level architecture.

Architecture overview

A client sends a request to connect to a room via WebSocket. As before, the client specifies the room to connect to by including the room id in the URL path. The request is intercepted by a proxy server, which extracts the room id and queries a database to check if a room process with that id is active. If there isn’t, the proxy requests that a process, uniquely identified by the room id, be started. Once a process with the requested id is running and ready to accept requests, a key-value record mapping the room id to the IP address of the process is added to the database, the proxy forwards the client request to the relevant process, and the standard collaboration session described earlier can begin. When the last remaining client disconnects from the room, the process waits for a predefined grace period, after which it is terminated. The corresponding process record is removed from the database.

With an overall direction in mind, we then explored different options for each component of this improved architecture.

Isolating Room Processes

To execute isolated room server processes, we had two potential choices of infrastructure: containers or virtual machines.

Since rooms should be ephemeral and rapidly scalable, we chose to use containers. Containers are more lightweight, resulting in shorter cold start times and faster scaling. While they are less secure than virtual machines due to having a shared kernel and not providing full hardware virtualisation, this is an acceptable tradeoff for our use case since we are running trusted code.

We now needed a way of efficiently orchestrating room containers.

Orchestrating and Scaling Room Processes

One solution was to use the AWS-native way of orchestrating containers, namely AWS Elastic Container Service (ECS), as we did in our original architecture. However, we found that this suffered from considerable vendor lock-in and would make supporting multi-cloud deployment difficult in the future. Since many developers may use other cloud providers, this went against our philosophy of integrating into existing developer workflows.

Instead, we chose to use Kubernetes, an open-source container orchestration tool, thanks to its large community, extensive tooling, and flexibility.

Serverless

Our next decision was whether to run containers in a serverless fashion or to have direct access to the virtual machines hosting the containers. In line with our design philosophy, we wanted to make it as easy as possible for developers to create real-time collaborative web applications without having to manage the underlying infrastructure. Moreover, we wanted our solution to be cost effective. Given these requirements, we chose a serverless model with usage-based billing i.e. per K8s pod- this means that a developer will only be charged for the number of active rooms.

For hosting the cluster, we initially turned to AWS Elastic Kubernetes Service (EKS) with Fargate. However, we found a number of drawbacks to it. The most significant is that EKS does not provide a fully managed option- while automated cluster creation tools such as eksctl give the illusion of a fully-managed service, they simply auto-generate the required resources and do not abstract away their existence. This means that the developer is still implicitly responsible for maintaining them and may mistakenly modify the cluster configuration.

EKS also has less flexibility than other solutions. For example, EKS insists that namespaces requiring Fargate compute profiles must be specified before cluster creation. If namespaces are modified in the future, the infrastructure configuration also needs to be changed and the cluster recreated. Thirdly, upgrading EKS clusters can be difficult- to upgrade the Kubernetes version, service pods need to be deleted so that the underlying node is destroyed and a new one with the correct Kubernetes version is created. The lack of zero-downtime upgrades adds further burden on developers.

Instead, we found that a better solution for our Kubernetes deployment was Google Kubernetes Engine (GKE) Autopilot. GKE Autopilot provides faster cluster creation, global serverless compute across all namespaces by default, and abstracts away all the underlying components such as provisioning node pools etc. from the developer, providing a cleaner developer experience.

Proxying Requests

When a client request to connect to a particular room is received via the Kubernetes Ingress, it is intercepted by the Symphony proxy service. This service has two requirements:

  • Find or create the requested room service
  • Proxy the request to the requested room service

To satisfy the first requirement, we query etcd to check if a service with a name corresponding to the room id exists. If it doesn’t, we send a request to the K8s API server to create a new room deployment where the service name is the room id. We then poll service endpoints in etcd until the service is marked as ready. In this case, polling was justified over a more complex mechanism such as a Kubernetes Watch since pods typically spin up within a few seconds, so polling does not add much additional load. Each service is configured with K8s readiness and liveness probes to ensure that it is not added to the list of available service endpoints or marked as healthy before the room server is ready to accept requests.

As implied by the above, we decided to use etcd as the source of truth on the existence and status of services instead of keeping a service registry cached locally- this ensures the proxy service remains stateless. Since etcd is strongly consistent, it is guaranteed to represent the true state of the system when queried. By keeping the proxy service stateless, we can horizontally scale it by simply adding additional replicas without having to worry about state synchronisation. Whilst this does introduce additional latency, since we need to make network calls to etcd, we decided this was a valid tradeoff as having a stateful service would radically increase complexity.

Once the required room service is ready to accept requests, the server proxies the client request to it.
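
Sketched below is the proxy's WebSocket-upgrade handling using Node's http module and the http-proxy library; the etcd lookup and Kubernetes API calls are hidden behind hypothetical stand-in helpers rather than real client code.

```js
import http from 'node:http';
import httpProxy from 'http-proxy';

// Hypothetical stand-ins for the etcd lookup and Kubernetes API calls.
const registry = new Map(); // roomId -> 'host:port' of a ready room pod
const lookupRoomService = async (roomId) => registry.get(roomId);
const createRoomDeployment = async (roomId) => registry.set(roomId, '10.8.0.5:4000');
const waitUntilReady = async (roomId) => registry.get(roomId); // poll etcd endpoints

const proxy = httpProxy.createProxyServer({ ws: true });
const server = http.createServer();

server.on('upgrade', async (req, socket, head) => {
  const roomId = new URL(req.url, 'http://proxy').pathname.slice(1);

  // Find the room service, creating the deployment if it does not exist yet.
  let endpoint = await lookupRoomService(roomId);
  if (!endpoint) {
    await createRoomDeployment(roomId);
    endpoint = await waitUntilReady(roomId);
  }

  // Hand the WebSocket connection over to the room pod.
  proxy.ws(req, socket, head, { target: `ws://${endpoint}` });
});

server.listen(8080);
```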

Overview of the Final Architecture

Ultimately, we settled on the following implementation for our final architecture:

  1. A client requests to connect to a room. The request is intercepted by the Symphony proxy.
  2. The proxy extracts the room id from the URL pathname and queries etcd to check if a service with that name exists.
  3. If the service does not exist, a request is sent to the K8s API server to create a new room deployment where the service name is the room id.
  4. The proxy polls etcd to check if the service is ready to accept requests. Once it is, the client request is proxied to the service.
  5. If the number of connections to the room remains at 0 for a specified grace period (by default 30s), the room sends a request to the K8s API server to terminate the room, returning resources back to the system.

The creation of the K8s infrastructure and the required services is automated using Terraform. We use a K8s job to automate the initialization of the database schema.

Final Architecture

Additional Improvements

With our final architecture in place, there were a few additional considerations and features remaining for us to review. We wanted to make Symphony more performant, scalable, and secure. We also wanted to add features that would make it easier for developers to monitor the state of the system.

Monitoring and Visibility

In production applications, it’s imperative that developers have the ability to observe the usage patterns and condition of the system.

To integrate observability into Symphony, we first needed a way to scrape metrics from Symphony services, particularly room servers. We sought a flexible system that would allow us to expose and inspect large volumes of custom metrics. We chose Prometheus, an open-source, industry-standard monitoring tool that provides a variety of integrations to instrument applications and a powerful query language for querying and analyzing scraped metrics.

For each room, we expose pertinent application- and system-level metrics, such as the number of active WebSocket connections, CPU usage, and memory usage, via the Prometheus client for Node.js. After provisioning the Prometheus server and configuring it to dynamically detect rooms, we deployed the Prometheus UI, which allowed us to query scraped room metrics using the Prometheus Query Language (PromQL).
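
For illustration, per-room instrumentation with the prom-client library might look like the following; the metric name and scrape port are our own choices.

```js
import http from 'node:http';
import client from 'prom-client';

client.collectDefaultMetrics(); // process-level CPU, memory and event-loop metrics

const activeConnections = new client.Gauge({
  name: 'room_active_websocket_connections',
  help: 'Number of WebSocket connections currently open for this room',
});

// Call these from the room server's connection/close handlers.
export const onConnect = () => activeConnections.inc();
export const onDisconnect = () => activeConnections.dec();

// Expose the endpoint that Prometheus scrapes.
http
  .createServer(async (req, res) => {
    if (req.url === '/metrics') {
      res.setHeader('Content-Type', client.register.contentType);
      res.end(await client.register.metrics());
    } else {
      res.writeHead(404).end();
    }
  })
  .listen(9100);
```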

Whilst this provided satisfactory visibility, PromQL has a learning curve. In line with our design philosophy of creating a developer-friendly experience, we wanted the ability to visualise these metrics in an intuitive manner.

To achieve this, we integrated Prometheus with Grafana, an open-source tool that is widely used for creating interactive and customizable dashboards.

As a final touch, we created an intuitive developer dashboard UI which provides a centralised location for the developer to monitor the system. In particular, the UI visualises the room metrics scraped and aggregated by Prometheus in real-time as a collection of pre-configured Grafana dashboards. It also exposes historical metadata about each room, queried from the Cloud SQL Postgres database, such as the last time the room was active, the size of room state (bytes) per room, and the total number of rooms created (inactive + active rooms).

Reducing Pod Cold Start Time

When clients attempt to connect to a room which does not exist, the proxy must wait for the K8s scheduler to match a pod to a node and the node kubelet to run it before proxying can begin.

In certain cases, we noticed that room deployment took as long as 2 minutes. This was surprising since K8s guarantees that “99% of pods (with pre-pulled images) start within 5 seconds” 10. After some investigation, we realised that the delay was introduced when the K8s scheduler had no available node to schedule the pod on. This resulted in a lengthy autoscaling operation until a new node was provisioned.

To mitigate this, we provisioned spare capacity using balloon pods 11. A balloon pod is a low-priority pod (defined using a K8s PriorityClass resource) which reserves extra node capacity. When a room is scheduled, the balloon pod is evicted so that the room can immediately start booting. The balloon pod is then re-scheduled, continuing to reserve capacity for the next room pod.

balloon pods
Image from William Denniss

This reduced pod-startup times by 10x. Whilst this solution eliminated the problem of prolonged cold-start times, it is more expensive and the ‘always-on’ balloon pods reduce the benefit of a serverless compute layer. To minimise this disadvantage, we provision only 3 balloon pods by default, where the size of each balloon pod is equal to the size of the smallest room pod.

Securing the Deployment

To ensure our infrastructure conformed to security best practice, we added the following configurations.

Firstly, we regulated access to all K8s services in line with the principle of least privilege using Role-based access control (RBAC). We also configured Workload Identity with Google Cloud Platform (GCP) which ensures that each K8s service has least privilege when accessing GCP services external to the cluster including the database and object storage. Additionally, all non-public facing services including the Postgres database were added to private subnets to prevent direct network access.

Snapshotting

Currently, documents are only persisted to object storage once, immediately before room termination. This means that a process or system failure during a collaboration session would lead to irrevocable data loss, particularly given that pods are ephemeral in K8s.

To mitigate this risk, we implemented checkpointing, where the in-memory document is periodically serialized and persisted to object storage. This approach does, however, lead to increased costs since cloud storage has an operation-billing component, where developers are charged per use of the API. To balance the need to snapshot against the associated additional costs, we set the default snapshot interval to 30s i.e. in the worst case, a user could lose 30s of work. We felt this was reasonable since a client also has a local copy which could be used to replay the state- in combination with snapshotting, this makes the system adequately fault-tolerant.
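
A sketch of the checkpointing loop, assuming the S3 persistence helper outlined earlier and snapshotting only when the document has actually changed:

```js
import * as Y from 'yjs';

const SNAPSHOT_INTERVAL_MS = 30_000; // default: at most 30s of work at risk

// Stand-in for the S3 PutObject helper sketched earlier.
const persistDocument = async (roomId, update) => { /* upload to object storage */ };

function startSnapshotting(roomId, doc) {
  let dirty = false;
  doc.on('update', () => { dirty = true; });

  return setInterval(async () => {
    if (!dirty) return; // skip the write (and its cost) if nothing changed
    dirty = false;
    await persistDocument(roomId, Y.encodeStateAsUpdate(doc));
  }, SNAPSHOT_INTERVAL_MS);
}
```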

Future Work

Going forward, there are additional features that we think would enhance Symphony:

  • Integrating authentication so that users can only interact with rooms they have access to.
  • Expanding deployment targets beyond Google Kubernetes Engine (GKE). Since Symphony is built on Kubernetes and provisioned with Terraform, we can easily add support for other providers of K8s services, including AWS EKS and Azure AKS.
  • Developing a set of React hooks and providers enabling Symphony to be used declaratively.

References


Case Study

“Alone we can do so little; together we can do so much.” - Helen Keller

Introduction

Symphony is an open source framework designed to make it easy for developers to build collaborative web applications. Symphony handles the complexities of implementing collaboration, including conflict resolution and real-time infrastructure, freeing developers to focus on creating unique and engaging features for their applications.

In this case study, we’ll discuss the challenges that arise when building collaborative experiences on the web, the limitations of traditional approaches in solving these problems, and how we designed Symphony to overcome them.

Collaboration

Real-time collaboration, where multiple users can concurrently work together on a common task, has been a notable feature since the earliest days of the internet. Its origin can be traced back to the 1960s, when Douglas Engelbart, in his famous Mother of All Demos, demonstrated the first real-time collaborative editor, built on the oN-Line System (NLS), which allowed users to create and edit documents, link them together, and share them with others.1

However, for much of the web’s history, the majority of applications have notably been non-collaborative. Without the ability to work together on a common task in real-time, users have to instead enter into a tedious cycle of changing, exporting, and manually syncing or emailing copies of files.

Modify-Export-Send feedback loop

This slow feedback loop harms productivity.2 In other words, this workflow is sub-optimal and restrictive.

With the rise of remote work where users are geographically separated, the need to improve this workflow has become even more acute.

As noted by industry leaders, the optimal solution is for applications to allow multiple users to collaborate online in real-time.

"[Real-time collaboration] eliminates the need to export, sync, or email copies of files and allows more people to take part in the design process." - Evan Wallace, Figma

Popular products such as Figma, Google Docs, and Visual Studio Code incorporate this as a defining feature, allowing multiple users to concurrently modify the same state.

The problem is that building these types of applications is non-trivial. To understand why, we need to consider the characteristics of traditional web applications.

Evolution of Web Applications

Traditionally, the architecture of most web applications has conformed to the client-server model, where client and server communicate in a request-response cycle.

When a user makes a change to the client state, the change is propagated to the application server via an HTTP request, which in turn updates the database i.e. the true application state, and confirms the change to the client via a response.

Three-tier Architecture

This architecture is fine for applications that are designed to be used by only one user at a time. However, for applications that seek to provide a multiplayer experience, the stateless nature of HTTP is problematic.

Since each state change by a given client is scoped to the request-response cycle, other users who wish to view the change must first request the data from the server, usually by refreshing the page.

In situations where multiple users are frequently modifying the same state, the need for each client to constantly send requests can quickly become burdensome and inefficient.

Introducing Real-Time

As companies began wanting to create applications that allowed multiple users to interact in real time, the stateless nature of the HTTP request-response cycle became a limitation. These applications, such as online games, chat rooms, and social media platforms, needed to maintain updated state without requiring the user to take any specific action such as a page refresh. In other words, a different approach to data transmission was needed- one that allowed data to be shared bi-directionally between clients and/or a server in real-time.

In response, new web protocols were developed to help facilitate this. Two of the most popular include WebRTC and WebSocket.

WebRTC

Web Real-Time Communication (WebRTC) is an open-source technology that enables real-time communication between web browsers over the internet.3 The protocol uses a combination of JavaScript APIs and peer-to-peer networking to establish direct communication channels between browsers, without the need for a permanent, central server. UDP is used as the primary transport protocol for real-time data transmission. This makes WebRTC an especially attractive choice for collaborative applications that require very low-latency communication at the expense of reduced reliability and error correction, such as video conferencing, online gaming, and live streaming.

WebSocket

WebSocket is a web protocol that provides a persistent, bi-directional communication channel between a client and a server over a single, long-lived TCP connection.4 The connection is established via a handshake between client and server. Since TCP is used as the primary transport protocol, WebSocket is a suitable choice for collaborative applications that require stronger guarantees on the reliability and security of the communication channel at the expense of higher latency, such as real-time dashboards, stock price tickers, and live chat.

Using technologies such as WebRTC and WebSocket, clients and/or servers are able to maintain persistent, stateful communication channels, no longer bound by the limits of the request-response cycle. This permitted so-called real-time applications to be built, where state updates appear to be received instantaneously, without a page refresh.

It may initially seem that the addition of real-time solves the collaboration problem since multiple users can now see changes immediately.

This is not the case.

The problem is that many real-time applications such as chat applications have the implicit constraint that each piece of state can only have a single mutable reference to it. In other words, the same piece of state cannot be modified concurrently by multiple users. For example, in a chat application, a given message is owned by a single user and they alone can edit it at any given time.

For an application to be truly collaborative, it must allow users to work together in real-time on shared state, where multiple users can modify the same piece of state at the same time, without conflicts or inconsistencies.

The possibility of conflict radically increases the complexity of implementing collaborative applications.

Conflict

In the context of real-time collaborative applications, conflict refers to a situation where two or more users attempt to modify the same piece of state, without knowledge of one another (concurrently), resulting in conflicting versions of that data.

For example, multiple users working on a shared task or document may make changes to the same part of the document at the same time. Alternatively, network delays could cause state to diverge between different users which must be reconciled.

We can concretely demonstrate how conflict arises using the following examples.

Suppose that Alice and Bob are collaborating on a text document, when both Bob and Alice attempt to write at the same spot:

When conflicts arise, Alice and Bob’s modifications can be seen as branching off from the previous state of the system, creating a parallel version of the application state.

Branching

For a collaborative application, we need a method of reconciling such conflicts and enforcing distributed consistency across clients.

merging
The role of a conflict resolution mechanism is to merge branches in a deterministic way, until all branches have converged to a single, consistent state that all parties agree upon.

In other words, after applying all state changes, the application should deterministically converge to an eventually consistent state across the whole system that all parties agree upon.

Methods of Conflict Resolution & Maintaining Distributed Consistency

Over the years, multiple solutions have been proposed to the problem of conflict resolution.

The simplest strategy, as mentioned previously, is to prevent conflicts from occurring in the first place. This can be implemented via locking. When a given user is making edits, the document is locked, becoming read-only to other users. In other words, we impose the constraint that only a single user can have a mutable reference to the document at any given time.

Thanks to its simplicity, this approach is widely used even today. For example, Basecamp, a web-based project management tool, employs locking to prevent conflicts:

Basecamp locking

However, as noted previously, this approach provides a very limited workflow since it solely facilitates asynchronous collaboration, where users have to implicitly arrange times when they can edit the document or work on separate documents and then merge changes.

For real-time, synchronous collaboration, more advanced conflict resolution mechanisms are required.

Operational Transformation (OT)

One possible approach is to use the operational transformation (OT) algorithm, famously used by Google Docs 5.

OT represents each user’s edits as a sequence of operations that can be applied to the shared application state. For example, in the case of a collaborative text editor, where the sequence of characters is zero-indexed, the operation to insert the character 'a' at the beginning of the first sentence may be represented as insert('a', 0).

When a client makes an edit to the state, the corresponding operation is transmitted to the server, which broadcasts it to all other collaborating clients.

In cases where multiple users attempt to modify the same piece of state concurrently, the OT algorithm defines a set of rules, which encode how conflicting operations should be transformed such that the operations can be applied in any order, without causing conflict.

For example, in the case of the collaborative text editor, two clients may attempt to concurrently insert text at the start of the document i.e. O1 = insert('a', 0, 1) and O2 = insert('b', 0, 2), where the third argument represents the client id. The transform rule may be to shift one of the insertions to the right by the length of the other insertion, i.e. O1 is applied as-is and T(O2) = insert('b', 1, 2).

Operational Transform

This ensures that both insertions can be applied whilst still capturing user intent and not modifying the intended meaning of the document.
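
A toy version of this transform rule, with ties at the same position broken by client id, shows how applying the two operations in either order converges to the same document:

```js
// Minimal sketch of the transform rule described above for two concurrent inserts.
function transform(op, against) {
  // Shift `op` right when the other insertion lands at or before it.
  if (
    against.position < op.position ||
    (against.position === op.position && against.clientId < op.clientId)
  ) {
    return { ...op, position: op.position + against.text.length };
  }
  return op;
}

function apply(doc, op) {
  return doc.slice(0, op.position) + op.text + doc.slice(op.position);
}

const o1 = { text: 'a', position: 0, clientId: 1 };
const o2 = { text: 'b', position: 0, clientId: 2 };

// Applying the operations in either order converges to the same document.
console.log(apply(apply('', o1), transform(o2, o1))); // "ab"
console.log(apply(apply('', o2), transform(o1, o2))); // "ab"
```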

Since OT only requires operations to be incrementally broadcast, the algorithm is efficient and has low memory overhead.

The problem is that OT is very complex to implement correctly. The OT algorithm assumes that every state change is captured, which in modern rich browser environments, can be difficult to guarantee. Further, since operations have a finite transit time to the server, the states of clients naturally diverge over time from one another. The larger the divergence, the larger the number of possible combinations of operations that result in conflict, each of which must be accounted for by the transform rules. Since many of these conflicting combinations are very difficult to foresee, formally proving the correctness of OT is complicated and error-prone, even for the simplest of OT algorithms.

This sentiment is widely shared by practitioners in the field, as highlighted by Joseph Gentle, a former Google Wave engineer, and author of the ShareJS OT library, who said:

Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. […] Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time.

In fact, 4 out of 8 different implementations of OT from the original 1989 paper to 2006 were found to be incorrect, missing subtle edge cases. The consequence of this incorrectness was that client state would irrevocably diverge, with no way to return to a consistent state.

The complexity of OT led researchers to find alternatives, the most promising of which are conflict-free replicated data types, or CRDTs.

Conflict Free Replicated Data Types (CRDTs)

A conflict-free replicated data type (CRDT) is an abstract data type designed to be replicated at multiple processes.6 By definition, CRDTs have the following properties:

  • Independent- Any replica can be modified without coordinating with other replicas.
  • Strongly eventually consistent- When any two replicas have received the same set of updates (in any order), the mathematical properties of CRDTs guarantee that both replicas will deterministically converge to the same state.

By imposing these mathematical properties on the CRDT and its associated algorithms, clients can optimistically update their own state locally and broadcast their updates to all other remote state replicas. Since CRDTs are strongly eventually consistent, once a given remote replica has received all updates, it is guaranteed to converge to the same state as the local replica without conflict.

The advantage of CRDTs is that they are guaranteed to be conflict-free, as long as the required mathematical properties are upheld. Since these mathematical properties are well-defined, it is easier to prove the correctness of a CRDT than of a corresponding OT implementation. Further, since each replica is independent and CRDTs make no assumptions about the network topology, CRDTs are partition-tolerant by default and can be used in a variety of network topologies including client-server and P2P. This property also means they are offline-capable by default.

However, the mathematical constraints of CRDTs, in particular that operations should be commutative, add some unavoidable overhead. Most commonly-used data structures do not have commutative operations by default. For example, the add and remove operations of a Set are not naturally commutative. To ensure commutativity, the CRDT must retain additional metadata.7

For example, in the case of the add and remove operations of a Set, tombstones are typically used as placeholders for removed entries- if a replica receives a remove operation for an element before it receives the add operation that actually added the element, the tombstone ensures that the remove operation is still correctly processed. Since this metadata must be retained for the required mathematical properties to be upheld, the use of CRDTs inevitably results in additional memory overhead, which can become significant for large state. As noted by Joseph Gentle:

"Because of how CRDTs work, documents grow without bound. … Can you ever delete that data? Probably not. And that data can’t just sit on disk. It needs to be loaded into memory to handle edits." - Joseph Gentle, former Google Wave engineer

While recent research has sought to introduce garbage-collection methods to reduce the amount of metadata, there is still significant additional memory overhead when using CRDTs to represent a data model.
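
To make the tombstone idea concrete, here is a toy two-phase set in JavaScript: removals are recorded rather than erased, so replicas can merge their histories in any order and still agree. (Real CRDT sets such as OR-Sets are more sophisticated, e.g. they tag elements so a removed value can later be re-added.)

```js
// Minimal sketch of a tombstone-based ("two-phase") set.
class TwoPhaseSet {
  constructor() {
    this.added = new Set();
    this.tombstones = new Set();
  }
  add(value) { this.added.add(value); }
  remove(value) { this.tombstones.add(value); } // works even if the add hasn't arrived yet
  has(value) { return this.added.has(value) && !this.tombstones.has(value); }
  // Merging two replicas is a union of both sets, which is commutative.
  merge(other) {
    other.added.forEach((v) => this.added.add(v));
    other.tombstones.forEach((v) => this.tombstones.add(v));
  }
}

const a = new TwoPhaseSet();
const b = new TwoPhaseSet();
a.add('x');
b.remove('x'); // the remove reaches replica b before the add does
a.merge(b);
b.merge(a);
console.log(a.has('x'), b.has('x')); // false false -- both replicas agree
```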

Custom Conflict Resolution Mechanisms

Whilst OT and CRDTs represent the most popular approaches to conflict-resolution, the complexity of OT and the memory overhead of CRDTs can sometimes be unacceptable for certain use-cases. As such, some choose to create custom, proprietary data models that are inspired by the OT and CRDT approaches and are highly specialised to a particular use-case.

For example, Figma relaxes many of the constraints imposed by CRDTs by adopting much simpler conflict-resolution semantics. In particular, it uses simple last-write-wins (LWW) semantics when two clients try to modify a value of a Figma object concurrently. This works well for Figma objects where changes are mutually exclusive i.e. a single value must be chosen, but would fail if used for text editing. In Figma’s case, this was a valid tradeoff for their use case but would not be a suitable model for other applications.8

The advantage of implementing a custom conflict-free data model is that the mechanism can be made highly specialised to the target use-case. Many of the constraints that come with OT and CRDTs can therefore be relaxed, which may result in a simpler and more efficient data representation. However, developing a custom model can be risky since it requires a number of assumptions to be made about the use-case. In Figma’s case, for example, introducing text editing may require significant changes to their current conflict-resolution semantics.

Choosing a Method of Conflict Resolution

When choosing a conflict-resolution mechanism, there is no single best, one-size-fits-all solution. Each conflict-resolution mechanism has its own set of tradeoffs, and choosing a particular approach requires a deep understanding of the usage pattern of the target application.

Some aspects of the target application that should be considered include:

  • What CAP (Consistency, Availability, Partition-tolerance) properties should the system have?
  • What is the application architecture? Client-server? P2P?
  • Is the system required to operate offline?
  • Are there any system-level constraints including CPU/memory limits?
  • Is the data model generic or highly specialised to a particular use-case?

Answering these questions influences the suitability of each conflict resolution mechanism to a specific use-case.

Conflict resolution mechanisms comparison

Manually Building a Real-time Collaborative Application

Building a collaborative application from scratch can be time-consuming and difficult, particularly when dealing with the intricacies of real-time infrastructure and conflict-resolution mechanisms. As a result, creating rich, collaborative experiences on the web has traditionally only been open to companies with the human and financial resources to roll their own solutions.

For smaller teams of modest means, who may lack familiarity with these specialised topics, implementing such systems has remained out of reach.

Provided below is a sample list of tasks involved in creating a production-ready real-time collaborative web application:

Manually building collaborative application

As a result, solutions have started to emerge that lower this barrier.

Existing Solutions

Existing solutions typically fall into two categories: DIY solutions and commercial solutions.

DIY Solutions

For organisations who have complex, specialised requirements for their collaborative functionality or want to tightly integrate with existing infrastructure, a DIY solution might be the best fit. This involves manually synthesising the various components required for a real-time collaborative application.

There are numerous open-source libraries providing implementations of popular conflict-resolution algorithms- teams would likely need to research, choose, and integrate the solution that best fits their use case. Alternatively, a bespoke solution may be best suited for highly specialised applications.

For the real-time network and persistence layer which handles the propagation of updates to collaborating clients and/or server(s) and storing of state, one could use a backend-as-a-service such as Ably, Pusher, or PubNub or provision a custom implementation using open-source libraries like ws or PeerJS on cloud infrastructure.

Whilst the DIY approach offers a high degree of customisation, it does require developers to have a high level of proficiency in the relevant technologies. Thus, less experienced teams might reach for a Software-as-a-Service (SaaS) product to help manage their collaborative functionality needs.

Commercial Solutions

The advent of commercial offerings providing Collaboration-as-a-Service is a relatively recent phenomenon.

One of the most popular solutions, released in 2021, is Liveblocks. Whilst not as flexible as the DIY approach, Liveblocks provides a great developer experience, exposing all the components required for adding real-time collaboration to an application through an intuitive client API. This includes a collection of custom CRDT-like data types, autoscaling real-time infrastructure with persistence, and a developer dashboard for easily monitoring usage patterns. However, this convenience comes at a cost, with Liveblocks charging $299 per month for an application with up to 2000 monthly active users (MAU), valid as of September 2023.

A compelling alternative is Fluid Framework, developed by Microsoft. Fluid provides a collection of client libraries that also expose custom CRDT-like distributed data structures. The client libraries connect to an implementation of the Fluid service, a runtime which handles the complexities of propagating updates in real-time and persisting state. Whilst Fluid is open-source, it provides only a very limited implementation of the Fluid service by default, capable of handling just hundreds of concurrent users. For larger applications, developers are forced to either use the Azure managed service or write a custom scaled implementation.

A Solution for Our Use Case

Looking at the above solutions, it is clear that until now, developers who want to incorporate collaboration into their products have had to either partially or fully roll their own solutions, or turn to a closed-source, managed provider.

The first option has significant implementation cost, particularly given that the expertise required to develop collaborative functionality is often orthogonal to the business's core offering. The latter option suffers from vendor lock-in and can attract considerable expense, as noted with Liveblocks.

Following this, we wanted to build a tool for small teams that want to add collaborative functionality to their applications without having to spend time implementing and deploying their own conflict resolution and real-time infrastructure.

Further, we wanted to make our framework open-source, scalable, and fully self-hosted, so that developers have complete control of code and data ownership.

With globalisation and the rise of remote work, providing seamless web-native collaboration is no longer the preserve of the largest companies. Smaller teams increasingly want to reap the benefits of fast collaborative feedback loops in their products.

An example of this is Propeller Aero, which wanted the ability to collaborate with their customers on 3D interactive site survey maps.

“We started looking at building a service ourselves… We really didn't want to because it's a whole lot of work and it's a really difficult problem. This was a very new problem to us, our engineering team had different levels of experience in synchronisation in real-time as a whole.” - Jye Lewis, Engineering Manager, Propeller Aero

We sought to assist companies with similar profiles in adding collaborative functionality to their web applications.

The availability of an open-source tool which handles the complexities of implementing collaboration, including conflict resolution and real-time infrastructure, would free Propeller Aero's developers to focus on creating features that have direct business value, whilst still retaining control over all their data.

Comparing existing solutions

Symphony

Overview

Symphony is an open-source runtime designed to make it easy for developers to add collaborative functionality to their applications.

It comes with a client library that provides an intuitive API to a collection of conflict-free data types that are composed to construct a distributed data model. Symphony automatically provisions the required network infrastructure to propagate state changes to all collaborating clients in real-time and to persist state between user sessions. It also provides real-time application- and system-level monitoring via a developer dashboard that exposes pertinent metrics, including the number of active users, the size of persisted state (bytes), and the CPU/memory usage of each collaborative session.

Using Symphony

Symphony has been designed with ease-of-use in mind. In three simple steps, developers can create and deploy a real-time collaborative application.

After installing the required dependencies stated in the documentation and installing the Symphony CLI tool globally via npm:

  1. Run symphony compose <projectName>. This command creates a new projectName directory, initializes a new Node project with the required package.json, and scaffolds some initial starter files including the Symphony configuration file, symphony.config.js.
  2. Write and deploy the front-end client code by composing the collection of conflict-free data types provided by the Symphony client.
  3. Run symphony deploy <domainName>, which deploys the application on Google Cloud Platform (GCP). After provisioning is complete, developers can run symphony dashboard to view the developer monitoring dashboard.

Following these steps, developers can enhance their existing web applications with collaborative functionality using Symphony.

To illustrate this, here’s a simple whiteboard application where users can draw lines, shapes, and change colours. In its current form, the whiteboard is single-user and non-collaborative.

To make this whiteboard multiplayer, we modify the whiteboard code to make use of the conflict-free data types provided by the Symphony client. After deploying the application to GCP, users can now work together in the same collaborative space and see what others are doing in real-time.
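
For instance, the whiteboard's local array of shapes can be replaced with a SyncedList from the Symphony client. The sketch below is simplified: the WebSocket URL, the canvas element, the renderCanvas/renderCursors helpers, and list methods such as push and toArray are illustrative assumptions.

    import { SymphonyClient } from '@symphony-rtc/client';

    // connect to the Symphony runtime and enter (or create) a room
    const client = new SymphonyClient('wss://whiteboard.example.com'); // illustrative URL
    const room = client.enter('whiteboard-demo');

    // shared document state: a list of drawn shapes
    const shapes = room.newList('shapes');

    // drawing code appends to the shared list instead of a plain array
    function addShape(shape) {
      shapes.push(shape); // assumed insertion method
    }

    // re-render whenever any collaborator changes the shared list
    room.subscribe(shapes, () => renderCanvas(shapes.toArray()));

    // broadcast ephemeral presence (e.g. cursor position) to other users...
    canvas.addEventListener('pointermove', (e) => {
      room.updatePresence({ cursor: { x: e.offsetX, y: e.offsetY } });
    });

    // ...and draw other users' cursors as their presence changes
    room.subscribe('others', () => renderCursors(room.getOthers()));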

We’ll now turn to how we built Symphony and the technical challenges we faced.

Architecture Overview

We’ll begin by outlining the fundamental requirements we had to address and a description of our design philosophy. We’ll then provide a high-level overview of our core architecture and discuss important design decisions, tradeoffs, and improvements that were made.

Terminology

In order to express the system requirements accurately, we introduce some useful terminology:

  • Document- refers to the shared state that clients modify during a session.
  • Room- a collaboration session that one or more clients connect to in order to modify the room document. A given room has a single document, i.e. the shared state that clients modify.
  • Presence- the ephemeral state of a room which captures users’ movements and actions inside a room, including cursor positions, user avatars, online/offline indicators, or any other visual representation of the real-time activity or availability of users within the collaborative session.

Fundamental Requirements

When building our initial prototype, we focussed on the fundamental problems that needed to be solved in order to build the core of a real-time collaborative framework. These included:

  • Deciding how to model the shared state of a room (the document), selecting a suitable mechanism to resolve conflicts, and understanding the constraints that such a choice would impose on the rest of our architecture.
  • Determining how ephemeral and persistent state changes on one client would be propagated in real-time to all other subscribed clients and/or servers.
  • Constructing a suitable persistence layer, where state can be stored between collaborative sessions and system metadata can be retained.

Design Philosophy

Symphony is designed with the principle that developers should be able to include collaboration into their products without having to radically modify their existing workflow and tools. With this as our guiding principle, we explain our choice of architecture and how it attempts to meet the fundamental requirements of a real-time collaborative framework.

Core Architecture

After some initial prototyping, we arrived at the following high-level flow on how a collaboration session involving multiple users starts, progresses and terminates.

A client connects to a server via WebSocket. The client specifies the room to connect to by including the room ID in the URL path. The server extracts the room ID and queries the database to check if a room with that ID already exists. If the ID exists i.e. the room has been used before, the server retrieves the associated room document from storage and loads it into memory; otherwise, a new document is created in memory and a new room record is created in the database.

Additional clients can connect to the active room and modify the state. Each update is propagated to the server, which in turn updates the document state in memory and broadcasts it to all the other collaborating clients. Upon receiving updates, clients update their local state. When the last remaining client disconnects from the room, the document is serialized and written to storage. The document and room metadata are subsequently purged from memory, and the room is marked as closed in the database.

With an overall direction in mind, we then explored different options for each component of our core architecture.

Implementing the Core Architecture

Conflict Resolution

As mentioned previously, a key component of implementing real-time collaboration is the ability to deterministically reconcile conflicts, which arise as a result of multiple users concurrently modifying the same piece of state.

While we found the performance and low memory overhead of OT attractive, its complexity, and the fact that it is best suited to editing large text documents, made it less applicable to supporting generic data models.

For Symphony, we instead decided to use CRDTs as the primary conflict resolution mechanism. Their strong eventual consistency guarantees mean that client changes can be optimistically applied, resulting in a faster user experience. In addition, they are highly available and fault-tolerant, which means that users can continue to change state even during network failure or disconnection- the state will simply synchronise with other clients upon reconnection.

Although CRDTs have traditionally suffered from inadequate performance and very large memory overhead, they have become dramatically faster and more memory efficient in recent years, thanks to an active research effort.9 To ensure suitable performance, we decided to use an operation-based CRDT, which, unlike a state-based CRDT, propagates only operations over the wire instead of the entire state. The tradeoff is that operation-based CRDTs require a reliable network channel, which could easily be provided given our chosen network topology (see below).

For our collection of CRDTs, we chose to use Yjs, a library which provides a collection of generic, operation-based CRDT implementations based on the YATA algorithm. We chose Yjs because it has strong community support and a very efficient linked-list data model with optimisations such as garbage collection, making it one of the most memory-efficient and performant implementations. It also defines synchronisation and awareness protocols for propagating persistent and ephemeral updates across a generic network layer.
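
To give a flavour of the primitives we build on, here is a minimal sketch using the public yjs API (the shared type name and the broadcast helper are illustrative):

    import * as Y from 'yjs';

    // each client holds its own replica of the document
    const doc = new Y.Doc();
    const shapes = doc.getArray('shapes'); // a shared, conflict-free sequence

    // local edits emit compact binary updates that can be sent over any transport
    doc.on('update', (update) => {
      broadcast(update); // illustrative helper: send to the server / other peers
    });

    // remote updates are applied commutatively, so replicas converge
    // regardless of the order in which updates arrive
    function onRemoteUpdate(update) {
      Y.applyUpdate(doc, update);
    }

    shapes.push([{ type: 'line', points: [0, 0, 10, 10] }]);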

We also considered Automerge, the other leading open-source offering in this space. Whilst comparably performant, it is less mature and used roughly twice as much memory as Yjs in recent benchmarks.

State Change Propagation

Since we now have a collection of conflict-free data types that can be used to construct a distributed data model, we need to consider how to propagate state updates to all collaborating clients in real-time.

Because CRDTs provide strong eventual consistency, they can theoretically support any network layer capable of propagating updates from one replica to another. Given that our use-case is web applications, we are constrained to technologies supported by modern browsers- the two primary choices being WebSocket and WebRTC.

WebRTC is primarily used in peer-to-peer (P2P) topologies. Whilst WebRTC is scalable and minimises infrastructure requirements since it does not require a central server, it is less suitable for our use case.

Firstly, the majority of modern web applications already use a centralised client-server model. Companies want to retain control of data and enforce security measures such as authentication across all users, which is difficult in a P2P topology. Additionally, traversing firewalls and Network Address Translation (NAT) devices is not trivial with WebRTC- a consequence of this is that applications may fail to propagate updates in geographies with national firewalls e.g. China.

As a result of these limitations, we chose WebSocket as the underlying protocol for our real-time infrastructure. Its support for the client-server model and its stability across all major browsers made it a natural choice for us. Since WebSocket provides a bidirectional communication channel over TCP, the reliable network channel required for operation-based CRDTs is inherently provided.
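
A stripped-down version of the server-side relay looks roughly as follows (a sketch built on the ws package; the production server additionally speaks the Yjs sync and awareness protocols and handles persistence):

    import { WebSocketServer } from 'ws';
    import * as Y from 'yjs';

    const rooms = new Map(); // roomId -> { doc, clients }
    const wss = new WebSocketServer({ port: 8080 });

    wss.on('connection', (socket, request) => {
      const roomId = request.url.slice(1); // room ID is carried in the URL path
      let room = rooms.get(roomId);
      if (!room) {
        room = { doc: new Y.Doc(), clients: new Set() };
        rooms.set(roomId, room);
      }
      room.clients.add(socket);

      socket.on('message', (data) => {
        // apply the update to the server's replica...
        Y.applyUpdate(room.doc, new Uint8Array(data));
        // ...and broadcast it to every other client in the room
        for (const client of room.clients) {
          if (client !== socket) client.send(data);
        }
      });

      socket.on('close', () => room.clients.delete(socket));
    });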

Persisting Room Data

When a collaboration session ends, we need to persist room data so that room documents are not lost and users can recreate the room in the future to continue working on it.

To do this, we need to construct a data model which allows us to represent created rooms and their associated metadata. The model consists of a single Room entity:

Relational Model

We chose to store this data in a Postgres relational database since we have a read-heavy system and each room has a fixed schema. It also permits analytical queries to be executed more easily. We rely on the Prisma ORM, which provides a high-level, type-safe abstraction for schema creation and database interaction.

Storing Document Data

In line with Yjs best practice, we serialize room documents into a highly compressed binary format. This significantly reduces the amount of storage space required per document, speeds up data transmission, and minimises bandwidth consumption across the network.

We initially thought of storing these binary blobs in the Postgres database. However, we realised that this was suboptimal.

Firstly, document sizes can become very large, particularly after lengthy collaboration sessions which can result in a large amount of accumulated CRDT metadata. Storing these documents in Postgres would affect the scalability of the database.

Secondly, Postgres is not optimized for large-scale writes- the number of writes scales linearly with the number of rooms and can become particularly problematic if large documents are saved multiple times during a collaborative session. Implementing other useful features such as document versioning also becomes tricky.

One potential solution is to use a NoSQL database like AWS DynamoDB. However, these often have limits on the size of a single database item (DynamoDB has a 400 KB limit), which is impractical for use cases like ours where document size can potentially be unbounded.

Considering these limitations, we decided to store documents in object storage, namely AWS Simple Storage Service (S3). Object storage is highly scalable and optimized to handle large amounts of unstructured data, making it ideal for persisting schemaless room documents. It’s also cheaper than alternative NoSQL solutions like DynamoDB and supports large-scale read and write operations, making it suitable for scenarios where a large number of concurrent rooms and documents need to be ingested and retrieved at high volume. Further, our use case only requires documents to be persisted as atomic binary blobs- we do not need to query within a document, making object storage more suitable than a NoSQL database.

Integrating Postgres and S3 object storage, we are now able to persist room data between collaboration sessions. When a user connects to a room, we query Postgres to determine if the room already exists. If it does, we retrieve the associated document from S3 and load it into memory for editing; otherwise we create a new Room record and initialize an empty document. After the last user leaves the room, we serialize the in-memory document, store it in object storage, and purge the document from server memory, returning memory resources to the system.
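
In simplified form, the load-and-persist flow looks something like the sketch below (the Prisma model, bucket name, and key scheme are illustrative assumptions):

    import { PrismaClient } from '@prisma/client';
    import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
    import * as Y from 'yjs';

    const prisma = new PrismaClient();
    const s3 = new S3Client({});
    const BUCKET = 'symphony-documents'; // illustrative bucket name

    async function loadRoom(roomId) {
      const room = await prisma.room.findUnique({ where: { id: roomId } }); // assumed Room model
      const doc = new Y.Doc();
      if (room) {
        const res = await s3.send(new GetObjectCommand({ Bucket: BUCKET, Key: roomId }));
        Y.applyUpdate(doc, await res.Body.transformToByteArray()); // hydrate the replica
      } else {
        await prisma.room.create({ data: { id: roomId } });
      }
      return doc;
    }

    async function persistRoom(roomId, doc) {
      // serialize the whole document into Yjs' compressed binary format
      const update = Y.encodeStateAsUpdate(doc);
      await s3.send(new PutObjectCommand({ Bucket: BUCKET, Key: roomId, Body: update }));
    }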

Front-end Client API

Whilst the conflict-free data types provided by Yjs come with a primitive API, using it optimally requires the developer to have some knowledge of the underlying data model.

In line with our design philosophy of seamlessly integrating into developers’ existing workflow, we created a JavaScript client API wrapper with sensible defaults and intuitive abstractions, through which a developer interacts with Symphony’s components.

The client exposes the conflict-free data structures, including a SyncedList and SyncedMap, which are composed to form a distributed document model. Importantly, the client abstracts away the underlying communication and persistence infrastructure, allowing the application developer to remain at a familiar level of abstraction.

The client internally implements additional quality-of-life improvements that provide an enhanced developer experience. These include:

  • Implementing performance optimizations such as automatic bulk-insertion of updates, which significantly reduces memory consumption.
  • Automatically converting between CRDT and plain JS objects where it is logical to do so, so that developers do not need to convert manually.
  • Providing undo/redo functionality with a History API. This allows undo/redo functionality to be manually paused and resumed.
  • Convenience iterator methods on SyncedList including filter, map, and find, allowing it to be used more like a regular JavaScript Array.

The full feature set provided by the Symphony client is described in our API documentation.
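
As a brief illustration of these conveniences, continuing the whiteboard sketch above (the history accessor and the push method are assumptions; see the API documentation for the definitive surface):

    // convenience iterators make a SyncedList feel like a regular Array
    const circles = shapes.filter((s) => s.type === 'circle');
    const firstRed = shapes.find((s) => s.colour === 'red');

    // group a multi-step change into a single undoable operation
    room.bundle(() => {
      shapes.delete(0, shapes.length);                      // clear the canvas...
      shapes.push({ type: 'line', points: [0, 0, 5, 5] });  // ...and start afresh
    });

    // undo/redo the last operation made by this client
    const history = room.history; // assumed accessor
    if (history.canUndo()) history.undo();
    if (history.canRedo()) history.redo();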

Load Testing

Once Symphony’s core functionality was operational, developers were able to easily create real-time collaborative applications.

However, the current architecture is limited.

The responsibility for creating, maintaining and updating state in memory for all rooms, handling user WebSocket connections, and serializing/deserializing state all fall to a single server. In other words, the system has a single point of failure.

Also, since the single server is responsible for handling all collaborative sessions and supporting the additional memory overhead resulting from our use of CRDTs, we hypothesised that whilst this architecture is suitable for a small number of rooms, it would not suffice in real-world applications that would typically have thousands of concurrent users.

To empirically verify this, we turned to load testing the system. This would also allow us to determine the system’s service level objectives (SLOs) including the concurrent user limit and identify potential bottlenecks such as compute or memory, which would later inform our scaling strategy.

Constructing a Test Environment

We first needed a way to establish a large number of virtual user connections to the server, each of which sends state updates and broadcasts presence.

To do this, we wrote a program which spawned N separate processes, where each process modelled a virtual user connecting to the server. Since creating a large number of virtual users and propagating updates proved to be CPU intensive, we provisioned multiple EC2 instances to execute the script concurrently.
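
The virtual-user spawner looked roughly like the sketch below (simplified: presence traffic is omitted, and the server address, room assignment, and update rate are illustrative):

    import { fork } from 'node:child_process';
    import WebSocket from 'ws';
    import * as Y from 'yjs';

    if (process.env.VIRTUAL_USER) {
      // child process: one virtual user with its own document replica and connection
      const doc = new Y.Doc();
      const socket = new WebSocket(`${process.env.SERVER_URL}/${process.env.ROOM_ID}`);
      doc.on('update', (update) => socket.send(update)); // propagate each local edit
      socket.on('open', () => {
        setInterval(() => doc.getArray('shapes').push([Date.now()]), 1000); // 1 update/s
      });
    } else {
      // parent process: spawn N virtual users, four per room
      const USERS = Number(process.argv[2] ?? 240);
      for (let i = 0; i < USERS; i += 1) {
        fork(process.argv[1], [], {
          env: {
            ...process.env,
            VIRTUAL_USER: '1',
            SERVER_URL: 'ws://room-server.internal', // illustrative address
            ROOM_ID: `room-${Math.floor(i / 4)}`,
          },
        });
      }
    }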

For the test itself, we selected the following load parameters.

Load testing parameters

A single room server with 1vCPU and 4GB of memory, handling 240 virtual users with 4 users per room, resulting in a total of 60 rooms, propagating one state update per second and 5 presence updates per second, for a period of 30 minutes.

While the rates of document and presence updates would vary widely depending on the specific use case, we felt that these were reasonable values to model real-world usage (for comparison, Liveblocks’ default settings throttle user updates to 10 per second).

Using AWS CloudWatch, we instrumented our server to extract application-level and system-level metrics including total number of WebSocket connections and CPU/memory usage.

We observed CPU usage steadily increase as a function of the number of connected virtual users. Once all connections were established, CPU usage had reached 92%. As the in-memory document size grew as a result of user updates, CPU usage peaked at 94% before we detected performance degradation in the form of dropped connections.

The results confirmed our hypothesis- that our current architecture could only handle a few hundred concurrent users for 30 minutes of real-world usage before failing.

It would be possible to vertically scale the server with greater compute and memory. However, this approach is not optimal. Firstly, the architecture would continue to have a single point of failure. Secondly, scaling would be hard-capped by the maximum instance size offered by AWS.

For these reasons, we decided to explore horizontal scaling, which means increasing the number rather than the size of our servers. This would make our system capable of handling more users, while also being more resilient to server failures.

Scaling

Looking to Existing Solutions

Horizontally scaling the Symphony room server is not trivial. Unlike stateless services, which can be scaled simply by adding more instances, clients connect to the room server via persistent, stateful WebSocket connections. Because these connections are distributed across instances, clients who connect to the same room may be connected to different room server instances. This raises two problems.

The first problem is that if a client connected to a given server instance makes an update to the document of a particular room, then this update must be propagated to the other servers which hold that room document in memory; otherwise, those servers will never receive the update and the state will diverge.

The second problem arises when a client attempts to connect to an already active room. The connecting client may be routed to a server instance which does not have the document in memory- in which case, the server needs a way of retrieving the most recently updated document from another server.

Redis Pub/Sub

The first problem is not unique to the Symphony room server. One common pattern to ensure that updates on one server are propagated to the other servers is to add a backplane, a shared component that facilitates the synchronization of data across multiple server instances.

A popular backplane is a Redis node: each server publishes the updates it receives from its connected clients to a Redis channel, and subscribes to that channel to receive the updates published by other servers. This publish-subscribe mechanism ensures that when a client updates a room document on a particular server, the update is broadcast to all other servers- if a receiving server has the corresponding room document in memory, it can apply the update locally, ensuring that the document replicas of a given room remain synchronised.
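
A sketch of this backplane using the ioredis client is shown below (the channel name is illustrative, and rooms refers to the in-memory map of documents held by this server instance):

    import Redis from 'ioredis';
    import * as Y from 'yjs';

    // a Redis connection in subscriber mode cannot publish, so use two connections
    const pub = new Redis();
    const sub = new Redis();
    const CHANNEL = 'room-updates'; // illustrative channel name

    // publish updates received from this server's own clients
    function onLocalUpdate(roomId, update) {
      pub.publish(CHANNEL, JSON.stringify({
        roomId,
        update: Buffer.from(update).toString('base64'),
      }));
    }

    // apply updates published by other server instances; re-applying our own
    // echoed updates is harmless because Yjs updates are idempotent
    sub.subscribe(CHANNEL);
    sub.on('message', (_channel, message) => {
      const { roomId, update } = JSON.parse(message);
      const room = rooms.get(roomId); // only relevant if this instance holds the room
      if (room) {
        Y.applyUpdate(room.doc, Buffer.from(update, 'base64'));
      }
    });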

Querying for Documents

One way of solving the second problem, namely that the document of an active room may be missing from the particular server instance that a client connects to, is to retain copies of every document on every server. However, this nullifies the benefit of scaling since the memory demands on each server are not reduced.

Instead, we implemented a system where a server could query another server instance that has the required document in memory. For this, we maintain a key-value mapping of room IDs to room server IP addresses, which records which room documents are present on which room servers. We chose AWS DynamoDB, a NoSQL key-value database, to store this data.

When a client connects to a room and is routed to a server that does not have the corresponding document in memory, the server queries DynamoDB for the list of server IP addresses that are handling that room.

If one or more IP addresses are returned, the room is active and the latest version of the document is the one currently being edited on one or more other servers. Using one of the returned IP addresses, the server retrieves the document from the corresponding server. If no IP addresses are returned, the room is not active and the latest version of the document is simply retrieved from object storage. Once the querying server has retrieved the document, it subscribes to Redis to receive all future document updates.

This solution ensured that clients could access a room document via any server instance, without having to replicate all active room documents on every server.
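
A simplified version of the lookup is sketched below (the table name, attribute layout, and peer-fetch helper are illustrative):

    import { DynamoDBClient, GetItemCommand } from '@aws-sdk/client-dynamodb';

    const db = new DynamoDBClient({});

    // returns the IP addresses of server instances currently holding this room
    async function findActiveServers(roomId) {
      const res = await db.send(new GetItemCommand({
        TableName: 'room-locations',        // illustrative table name
        Key: { roomId: { S: roomId } },
      }));
      return res.Item ? res.Item.servers.SS : []; // string set of IP addresses
    }

    async function loadDocument(roomId) {
      const servers = await findActiveServers(roomId);
      if (servers.length > 0) {
        // room is active elsewhere: fetch the live document from a peer server
        return fetchDocumentFromPeer(servers[0], roomId); // illustrative helper
      }
      // room is inactive: fall back to the latest snapshot in object storage
      return loadRoom(roomId); // see the persistence sketch above
    }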

Adding and Removing Instances

Since the single-server load test had identified CPU utilisation as a notable bottleneck, we set our scaling policy to target 50% CPU utilisation. This means that the system will scale out when CPU usage of any server exceeds that limit and scale in when it falls below that number.

Evaluating the Current Scaling Solution

The chosen scaling solution represents a significant improvement over the single-server approach. It can support a larger number of concurrent users by elastically deploying room server instances. However, while this architecture has historically been the standard prescription for scaling WebSocket-based stateful services, we found a number of significant limitations specific to our use case during load testing.

When multiple clients attempted to join a particular room, they were often routed to different server instances. When the number of users in each room approached the number of server instances, this would invariably lead to copies of the document being present on every server. This nullified the benefits of scaling, since the intended decrease in memory overhead never materialised. The duplication was also expensive, leading to extraneous CPU usage as updates had to be broadcast and applied at every replica. This in turn resulted in more server instances being provisioned and additional load on the Redis node. In fact, the Redis node approached 90% CPU utilisation at a few thousand concurrent users and represented a single point of failure.

These findings led us to rethink the suitability of our current architecture for our use case.

A Better Scaling Solution

Upon reflection, there are two primary problems with the Pub-Sub architecture.

The first is that there is unnecessary duplication of documents across multiple server instances. The second is that the Redis node constitutes a single point of failure.

To overcome these limitations, we took inspiration from Figma.

“Our servers currently spin up a separate process for each multiplayer document which everyone editing that document connects to.” - Evan Wallace, CTO, Figma

This approach has the advantage of keeping document state confined to a single process. This means that there is no longer a need for distributed document state, eliminating the difficulties in horizontally scaling a stateful service. Further, each process/room can be scaled independently of others resulting in minimised cost and efficient utilisation of system resources.

This improved architecture has the following requirements:

  • Isolating each process/room from other rooms running on the same host.
  • Dynamically orchestrating process creation, execution, and termination. Processes should also automatically be restarted in case of crashes.
  • Autoscaling processes according to a specified scaling metric- in our case, this would likely be CPU or memory utilisation.
  • Proxying requests to the correct service.

Implementation

We arrived at the following high-level architecture.

Architecture overview

A client sends a request to connect to a room via WebSocket. As before, the client specifies the room to connect to by including the room ID in the URL path. The request is intercepted by a proxy server. The proxy server extracts the room ID and queries a database to check if a room process with that ID is active. If there isn’t, the proxy requests that a new process, uniquely identified by the room ID, be started. Once a process with the requested ID is running and ready to accept requests, a key-value record mapping the room ID to the IP address of the process is added to the database, the proxy forwards the client request to the relevant process, and the standard collaboration session described in Section 1 can begin.

When the last remaining client disconnects from the room, the process waits for a predefined grace period after which the process is terminated. The corresponding process record is removed from the database.

With an overall direction in mind, we then explored different options for each component of this improved architecture.

Isolating Room Processes

To execute isolated room server processes, we had two potential choices of infrastructure: containers or virtual machines.

Since rooms should be ephemeral and rapidly scalable, we chose to use containers. Containers are more lightweight, resulting in shorter cold start times and faster scaling. While they are less secure than virtual machines due to having a shared kernel and not providing full hardware virtualisation, this is an acceptable tradeoff for our use case since we are running trusted code.

We now needed a way of efficiently orchestrating room containers.

Orchestrating and Scaling Room Processes

One solution was to use the AWS-native way of orchestrating containers, namely AWS Elastic Container Service (ECS), as we did in our original architecture. However, we found that this suffered from considerable vendor lock-in and would make supporting multi-cloud deployment difficult in the future. Since many developers may use other cloud providers, this went against our philosophy of integrating into existing developer workflows.

Instead, we chose Kubernetes, an open-source container orchestration tool, thanks to its large community, extensive tooling, and flexibility.

Serverless

Our next decision was whether to run containers in a serverless fashion or to have direct access to the virtual machines hosting the containers. In line with our design philosophy, we wanted to make it as easy as possible for developers to create real-time collaborative web applications without having to manage the underlying infrastructure. Moreover, we wanted our solution to be cost effective. Given these requirements, we chose a serverless model with usage-based billing, i.e. per K8s pod- this means that a developer is only charged for the number of active rooms.

For hosting the cluster, we initially turned to AWS Elastic Kubernetes Service (EKS) with Fargate. However, we found a number of drawbacks. The most significant is that EKS does not provide a fully managed option- while automated cluster creation tools such as eksctl give the illusion of a fully-managed service, eksctl simply auto-generates the required resources and does not abstract away their existence. This means that the developer is still implicitly responsible for maintaining them and may mistakenly modify the cluster configuration.

EKS also has less flexibility than other solutions. For example, EKS insists that namespaces requiring Fargate compute profiles be specified before cluster creation. If namespaces are modified in the future, the infrastructure configuration also needs to be changed and the cluster recreated. Thirdly, upgrading EKS clusters can be difficult- to upgrade the Kubernetes version, service pods need to be deleted so that the underlying node is destroyed and a new one with the correct Kubernetes version is created. The lack of zero-downtime upgrades adds a further burden on developers.

Instead, we found that a better solution for our Kubernetes deployment was Google Kubernetes Engine (GKE) Autopilot. GKE Autopilot provides faster cluster creation, global serverless compute across all namespaces by default, and abstracts away the underlying components, such as node pool provisioning, from the developer, providing a cleaner developer experience.

Proxying Requests

When a client request to connect to a particular room is received via the Kubernetes Ingress, it is intercepted by the Symphony proxy service. This service has two requirements:

  • Find or create the requested room service
  • Proxy the request to the requested room service

To satisfy the first requirement, we query etcd to check if a service with a name corresponding to the room ID exists. If it doesn’t, we send a request to the K8s API server to create a new room deployment, where the service name is the room ID. We then poll the service endpoints in etcd until the service is marked as ready. In this case, polling was justified over a more complex mechanism such as a Kubernetes Watch, since pods typically spin up within a few seconds, so polling does not add much additional load. Each service is configured with K8s readiness and liveness probes to ensure that it is not added to the list of available service endpoints or marked as healthy before the room server is ready to accept requests.

As implied above, we decided to use etcd as the source of truth on the existence and status of services instead of keeping a service registry cached locally- this ensures the proxy service remains stateless. Since etcd is strongly consistent, it is guaranteed to represent the true state of the system when queried. By keeping the proxy service stateless, we can horizontally scale it by simply adding additional replicas without having to worry about state synchronisation. Whilst this does introduce additional latency, since we need to make network calls to etcd, we decided this was a valid tradeoff as having a stateful service would radically increase complexity.

Once the required room service is ready to accept requests, the server proxies the client request to it.
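
Putting the pieces together, the proxy's WebSocket handling looks roughly like the sketch below. We use the @kubernetes/client-node and http-proxy packages here for illustration; note that this sketch reads service state through the Kubernetes API server (which is itself backed by etcd), and the namespace, port, and helper functions are assumptions.

    import http from 'node:http';
    import httpProxy from 'http-proxy';
    import * as k8s from '@kubernetes/client-node';

    const kc = new k8s.KubeConfig();
    kc.loadFromDefault();
    const core = kc.makeApiClient(k8s.CoreV1Api);

    const proxy = httpProxy.createProxyServer({});
    const server = http.createServer();

    // WebSocket upgrade requests carry the room ID in the URL path
    server.on('upgrade', async (req, socket, head) => {
      const roomId = req.url.slice(1);

      // 1. find or create the room service
      let service;
      try {
        service = await core.readNamespacedService(roomId, 'rooms'); // assumed namespace
      } catch {
        await createRoomDeployment(roomId);     // illustrative helper: Deployment + Service
        service = await waitUntilReady(roomId); // illustrative helper: polls endpoints
      }

      // 2. proxy the WebSocket connection to the room service
      const target = `ws://${service.body.spec.clusterIP}:8080`; // assumed port
      proxy.ws(req, socket, head, { target });
    });

    server.listen(80);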

Overview of the Final Architecture

Ultimately, we settled on the following implementation for our final architecture:

  1. A client requests to connect to a room. The request is intercepted by the Symphony proxy.
  2. The proxy extracts the room ID from the URL pathname and queries etcd to check if a service with that name exists.
  3. If the service does not exist, a request is sent to the K8s API server to create a new room deployment where the service name is the room ID.
  4. The proxy polls etcd to check if the service is ready to accept requests. Once it is, the client request is proxied to the service.
  5. If the number of connections to the room remains at 0 for a specified grace period (by default 30s), the room sends a request to the K8s API server to terminate the room, returning resources back to the system.

The creation of the K8s infrastructure and the required services is automated using Terraform. We use a K8s job to automate the initialization of the database schema.

Final Architecture

Additional Improvements

With our final architecture in place, there were a few additional considerations and features remaining for us to review. We wanted to make Symphony more performant, scalable, and secure. We also wanted to add features that would make it easier for developers to monitor the state of the system.

Monitoring and Visibility

In production applications, it’s imperative that developers have the ability to observe the usage patterns and condition of the system.

To integrate observability into Symphony, we first needed a way to scrape metrics from Symphony services, particularly room servers. We sought a flexible system that would allow us to expose and inspect large volumes of custom metrics. We chose Prometheus, an open-source, industry-standard monitoring tool that provides a variety of integrations to instrument applications and a powerful query language for querying and analyzing scraped metrics.

For each room, we expose pertinent application- and system-level metrics such as the number of active WebSocket connections, CPU usage, and memory usage via the Prometheus client for Node.js. After provisioning the Prometheus server and configuring it to dynamically detect rooms, we deployed the Prometheus UI, which allowed us to query scraped room metrics using the Prometheus Query Language (PromQL).
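
Inside a room server this instrumentation is only a few lines (a sketch using the prom-client package; the metric name, scrape port, and the wss WebSocket server object are illustrative):

    import http from 'node:http';
    import client from 'prom-client';

    // default Node.js process metrics (CPU, memory, event loop lag, ...)
    client.collectDefaultMetrics();

    // custom application-level metric: active WebSocket connections in this room
    const activeConnections = new client.Gauge({
      name: 'symphony_room_active_connections', // illustrative metric name
      help: 'Number of active WebSocket connections in this room',
    });

    wss.on('connection', (socket) => {
      activeConnections.inc();
      socket.on('close', () => activeConnections.dec());
    });

    // expose the metrics endpoint for the Prometheus server to scrape
    http.createServer(async (req, res) => {
      if (req.url === '/metrics') {
        res.setHeader('Content-Type', client.register.contentType);
        res.end(await client.register.metrics());
      } else {
        res.writeHead(404).end();
      }
    }).listen(9100);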

Whilst this provided satisfactory visibility, using PromQL has a small learning curve. In line with our design philosophy of creating a developer-friendly experience, we wanted the ability to visualise these metrics in an intuitive manner.

To achieve this, we integrated Prometheus with Grafana, an open-source tool that is widely used for creating interactive and customizable dashboards.

As a final touch, we created an intuitive developer dashboard UI which provides a centralised location for the developer to monitor the system. In particular, the UI provides a visualisation of room metrics that are scraped and aggregated by Prometheus in real-time as a collection of pre-configured Grafana dashboards. It also exposes historical metadata about each room by querying the Cloud SQL Postgres database such as the last time the room was active, the size of room state (bytes) per room, and the total number of rooms created (inactive + active rooms).

Reducing Pod Cold Start Time

When clients attempt to connect to a room which does not exist, the proxy must wait for the K8s scheduler to match a pod to a node and the node kubelet to run it before proxying can begin.

In certain cases, we noticed that room deployment took as long as 2 minutes. This was surprising, since K8s guarantees that “99% of pods (with pre-pulled images) start within 5 seconds” 10. After some investigation, we realised that the delay was introduced when the K8s scheduler had no available node to schedule the pod on. This resulted in a lengthy autoscaling operation while a new node was provisioned.

To mitigate this, we provisioned spare capacity using balloon pods 11. A balloon pod is a low-priority pod (defined using a K8s PriorityClass resource) which reserves extra node capacity. When a room is scheduled, the balloon pod is evicted so that the room can immediately start booting. The balloon pod is then re-scheduled, continuing to reserve capacity for the next room pod.

balloon pods
Image from William Denniss

This reduced pod-startup times by 10x. Whilst this solution eliminated the problem of prolonged cold-start times, it is more expensive, and the ‘always-on’ balloon pods reduce the benefit of a serverless compute layer. To minimise this disadvantage, we provision only 3 balloon pods by default, where the size of each balloon pod is equal to the size of the smallest room pod.

Securing the Deployment

To ensure our infrastructure conformed to security best practice, we added the following configurations.

Firstly, we regulated access to all K8s services in line with the principle of least privilege using Role-based access control (RBAC). We also configured Workload Identity with Google Cloud Platform (GCP) which ensures that each K8s service has least privilege when accessing GCP services external to the cluster including the database and object storage. Additionally, all non-public facing services including the Postgres database were added to private subnets to prevent direct network access.

Snapshotting

Currently, documents are only persisted to object storage once, immediately preceding room termination. This means that a process or system failure during a collaboration session could lead to irrevocable data loss, particularly given that pods are ephemeral in K8s.

To mitigate this occurrence, we implemented checkpointing, where the in-memory document is periodically serialized and persisted to object storage. This approach does, however, lead to increased costs since cloud storage has an operation-billing component, where developers are charged per use of the API. In order to balance the need to snapshot with the associated additional costs, we set the default snapshot interval to 30s i.e. in the worst-case, a user could lose 30s of work. We felt this was reasonable since a client also has a local copy which could be used to replay the state- in combination with snapshotting, this makes the system adequately fault-tolerant.
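
The checkpointing itself is small; below is a sketch reusing the persistRoom helper from the persistence section (the interval and error handling are illustrative):

    const SNAPSHOT_INTERVAL_MS = 30_000; // default: at most ~30s of work is at risk

    // periodically persist the in-memory document to object storage
    const snapshotTimer = setInterval(() => {
      persistRoom(roomId, doc).catch((err) => console.error('snapshot failed', err));
    }, SNAPSHOT_INTERVAL_MS);

    // on room termination, stop snapshotting and persist one final time
    async function shutDownRoom() {
      clearInterval(snapshotTimer);
      await persistRoom(roomId, doc);
    }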

Future Work

Going forward, there are additional features that we think would enhance Symphony:

  • Integrating authentication so that users can only interact with rooms they have access to.
  • Expanding deployment targets beyond Google Kubernetes Engine (GKE). Since Symphony is built on Kubernetes and provisioned with Terraform, we can easily add support for other providers of K8s services, including AWS EKS and Azure AKS.
  • Developing a set of React hooks and providers enabling Symphony to be used declaratively.

References


Collaboration. Made Simple.

Symphony is an open-source runtime for building collaborative web applications.

Powered by Kubernetes on GKE

Symphony is a cloud-native solution built on Google Cloud Platform using GKE, Cloud SQL, Cloud Storage, Prometheus, and Grafana.
