University project implementing a basic prototype of a distributed file storage system with multi-replica storage and redistribution on failover. Any node can die and the system will keep working. The master node stores no data persistently; it generates the file mesh from the nodes' data.
A TS refactor of the project was made here: file-mesh-ts.
Project is written in Node.js (so npm and node are requirements). Installing npm dependencies:
$ npm i
Starting the master:
$ node runner MASTER
Starting nodes:
$ node runner NODE node1 &
$ node runner NODE node2 &
$ node runner NODE node3
$ # .. and more
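The runner decides between the two roles from its command-line arguments. A minimal sketch of that dispatch, assuming a `parseRunnerArgs` helper (the name and exact shape are illustrative, not the actual project code):

```javascript
// Hypothetical argument parsing for the runner entry point.
// `node runner MASTER` starts the master; `node runner NODE <name>`
// starts a storage node identified by <name>.
function parseRunnerArgs(argv) {
  const [role, name] = argv;
  if (role === 'MASTER') return { role: 'MASTER' };
  if (role === 'NODE' && name) return { role: 'NODE', name };
  throw new Error('usage: node runner MASTER | node runner NODE <name>');
}

// Example: parseRunnerArgs(process.argv.slice(2))
```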
The master starts an HTTP server for clients to manage files, and a socket.io server for the nodes to connect to and store files through.
Each node connects to the master at startup, identifying itself with its configured name. The node checks for a memory file at startup and uses it to resume its previous state, if there is any. It then starts sending a ping/heartbeat to the master every few seconds.
The master iterates through the connected nodes (keeping track of their connection states and pings) and periodically sends a request for metadata, which each node answers with metadata for its file list. The master then generates the entire file mesh and makes it available through an HTTP API readable by the web client.
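The mesh-generation step amounts to inverting the per-node metadata into a per-file view. A hedged sketch, where the metadata shape (`name`, `size`) and the mesh layout are assumptions rather than the project's actual structures:

```javascript
// Given a map of node name -> list of file metadata reported by
// that node, build a mesh mapping each file to the replica nodes
// that hold it.
function buildMesh(metaByNode) {
  const mesh = {};
  for (const [nodeName, files] of Object.entries(metaByNode)) {
    for (const file of files) {
      if (!mesh[file.name]) mesh[file.name] = { size: file.size, nodes: [] };
      mesh[file.name].nodes.push(nodeName);
    }
  }
  return mesh;
}
```

Because the mesh is rebuilt from whatever the live nodes report, a dead node simply drops out of the `nodes` lists on the next metadata round, which is what lets the master stay stateless.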
Alert after adding a file.
File list and metadata for file background.jpg
(stored on nodes: 4, 5 and 6).
Node statuses page.
Logs from the master after node2 was killed.