diff --git a/src/SUMMARY.md b/src/SUMMARY.md index beb80287..962847ab 100644 --- a/src/SUMMARY.md +++ b/src/SUMMARY.md @@ -71,4 +71,4 @@ - [SQLite API](./apis/sqlite.md) - [Terminal API](./apis/terminal.md) - [VFS API](./apis/vfs.md) -- [Websocket API](./apis/websocket_authentication.md) +- [WebSocket API](./apis/websocket_authentication.md) diff --git a/src/apis/vfs.md b/src/apis/vfs.md index dfc1b5f8..d9aec540 100644 --- a/src/apis/vfs.md +++ b/src/apis/vfs.md @@ -8,11 +8,11 @@ Every request takes a path and a corresponding action. ## Drives -VFS paths are normal relative paths within the directory `/your_node_home/vfs/`, but to be valid they need to be within a drive. -A drive is just a directory within your vfs, consisting of 2 parts: `/package_id/drive_name/`. +VFS paths are normal relative paths within the directory `your_node_home/vfs/`, but to be valid they need to be within a drive. +A drive is just a directory within your VFS, consisting of 2 parts: `package_id/drive_name/`. -For example: `/your_package:publisher.os/pkg/`. -This directory is usually filled with files put into the `/pkg` directory when installing with app_store. +For example: `your_package:publisher.os/pkg/`. +This directory is usually filled with files put into the `pkg/` directory when installing with `app_store`. [Capabilities](../process-capabilities.md) are checked on the drive part of the path. When calling `create_drive()` you'll be given "read" and "write" caps that you can share with other processes. 
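The drive layout described above (`package_id/drive_name/` inside `your_node_home/vfs/`) can be sketched in plain Rust. This is an illustrative parser only, not the runtime's actual implementation, and `split_drive` is a hypothetical helper name:

```rust
// Illustrative sketch: split a VFS path into its drive prefix
// ("package_id/drive_name") and the remainder of the path.
// Mirrors the layout described above; not the actual runtime code.
fn split_drive(path: &str) -> Option<(String, String)> {
    let mut parts = path.splitn(3, '/');
    let package_id = parts.next()?; // e.g. "your_package:publisher.os"
    let drive_name = parts.next()?; // e.g. "pkg"
    let rest = parts.next().unwrap_or("").to_string();
    // a package_id has the form "package_name:publisher", so require a ':'
    if !package_id.contains(':') || drive_name.is_empty() {
        return None;
    }
    Some((format!("{package_id}/{drive_name}"), rest))
}

fn main() {
    let (drive, rest) = split_drive("your_package:publisher.os/pkg/ui/index.html").unwrap();
    println!("drive: {drive}");
    println!("rest:  {rest}");
}
```

Capabilities are checked against the first two segments (the drive), which is why a path outside any drive is invalid.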
diff --git a/src/apis/websocket_authentication.md b/src/apis/websocket_authentication.md index b7abb943..2a32692b 100644 --- a/src/apis/websocket_authentication.md +++ b/src/apis/websocket_authentication.md @@ -15,7 +15,7 @@ const api = new KinodeEncryptorApi({ nodeId: window.our.node, // this is set if the /our.js script is present in index.html processId: "my_package:my_package:template.os", onOpen: (_event, api) => { - console.log('Connected to kinode node') + console.log('Connected to Kinode') // Send a message to the node via WebSocket api.send({ data: 'Hello World' }) }, diff --git a/src/chess_app/chess_engine.md b/src/chess_app/chess_engine.md index 8ff2d1ce..f33d191d 100644 --- a/src/chess_app/chess_engine.md +++ b/src/chess_app/chess_engine.md @@ -15,7 +15,7 @@ Once you have the template app installed and can see it running on your testing # Chess Engine -Chess is a good example for an Kinode application walk-through because: +Chess is a good example for a Kinode application walk-through because: 1. The basic game logic is already readily available. There are thousands of high-quality chess libraries across many languages that can be imported into a Wasm app that runs on Kinode. 
We'll be using [pleco](https://github.com/pleco-rs/Pleco) @@ -157,11 +157,11 @@ lto = true anyhow = "1.0" base64 = "0.13" bincode = "1.3.3" +kinode_process_lib = { git = "ssh://git@github.com/uqbar-dao/process_lib.git", tag = "v0.5.4-alpha" } pleco = "0.5" serde = { version = "1.0", features = ["derive"] } serde_json = "1.0" url = "*" -kinode_process_lib = { git = "ssh://git@github.com/uqbar-dao/process_lib.git", rev = "a2d3e9e" } wit-bindgen = { git = "https://github.com/bytecodealliance/wit-bindgen", rev = "efcc759" } [lib] @@ -183,7 +183,7 @@ use kinode_process_lib::{ extern crate base64; -// Boilerplate: generate the Wasm bindings for an Kinode app +// Boilerplate: generate the Wasm bindings for a Kinode app wit_bindgen::generate!({ path: "wit", world: "process", diff --git a/src/chess_app/frontend.md b/src/chess_app/frontend.md index 3b6844a3..e23786a9 100644 --- a/src/chess_app/frontend.md +++ b/src/chess_app/frontend.md @@ -1,21 +1,24 @@ # Adding a Frontend -Here, we'll add a web frontend to the code from the [previous section](./chess_engine.md). +Here, you'll add a web frontend to the code from the [previous section](./chess_engine.md). Creating a web frontend has two parts: 1. Altering the process code to serve and handle HTTP requests 2. Writing a webpage to interact with the process. Here, you'll use React to make a single-page app that displays your current games and allows you to: create new games, resign from games, and make moves on the chess board. -JavaScript and React development aren't in the scope of this tutorial, so we'll provide that code [here](https://github.com/uqbar-dao/chess-ui). +JavaScript and React development aren't in the scope of this tutorial, so you can find that code [here](https://github.com/uqbar-dao/chess-ui). -The important part of the frontend for the purpose of this tutorial is the build, specifically the `pkg/ui` directory that will be loaded into the VFS during installation. 
-Serve these as static files, [which you can get here](https://github.com/uqbar-dao/chess-ui/tree/tutorial/tutorial_build) if you don't want to build them yourself. +The important part of the frontend for the purpose of this tutorial is how to set up those pre-existing files to be built and installed by `kit`. +When files are found in the `ui/` directory and a `package.json` file with a `build:copy` field in `scripts` is present, `kit` will run that script to build the UI (see [here](https://github.com/uqbar-dao/chess-ui/blob/82419ea0e53e6d86d6dc6c8ed7f656c3ab51fdc8/package.json#L10)). +The `build:copy` in that file builds the UI and then places the resulting files into the `pkg/ui/` directory where they will be installed by `kit start-package`. +This allows your process to fetch them from the virtual filesystem, as all files in `pkg/` are mounted. +See the [VFS API overview](../apis/vfs.md) for how to use files mounted in `pkg/`. -Run `npm run build` in the `chess-ui` repo and copy the output `dist` folder into the `pkg` folder in your app, so it'll be ingested on-install. -This allows your process to fetch them from the virtual filesystem, as all files in `pkg` are mounted. -Rename it to `ui` so that you have the files in `pkg/ui`. -See the [VFS API overview](../apis/vfs.md) to see how to use files mounted in `pkg`. +Get the chess UI files and put them in the proper location (next to `pkg/`): +```bash +git clone https://github.com/uqbar-dao/chess-ui ui +``` Chess will use the `http_server` runtime module to serve a static frontend and receive HTTP requests from it. You'll also use a WebSocket connection to send updates to the frontend when the game state changes. @@ -162,14 +165,14 @@ fn handle_http_request( state: &mut ChessState, http_request: &http::IncomingHttpRequest, ) -> anyhow::Result<()> { - if http_request.path()? 
!= "/games" { return http::send_response( http::StatusCode::NOT_FOUND, None, "Not Found".to_string().as_bytes().to_vec(), ); } - match http_request.method.as_str() { + match http_request.method()?.as_str() { // on GET: give the frontend all of our active games "GET" => http::send_response( http::StatusCode::OK, @@ -355,7 +358,7 @@ fn send_ws_update( ) -> anyhow::Result<()> { for channel in open_channels { Request::new() - .target((&our.node, "http_server", "sys", "kinode")) + .target((&our.node, "http_server", "distro", "sys")) .body(serde_json::to_vec( &http::HttpServerAction::WebSocketPush { channel_id: *channel, diff --git a/src/cookbook/file_transfer.md b/src/cookbook/file_transfer.md index 93b9352d..3504d435 100644 --- a/src/cookbook/file_transfer.md +++ b/src/cookbook/file_transfer.md @@ -1,7 +1,7 @@ # File Transfer This entry will teach you to build a simple file transfer app, allowing nodes to download files from a public directory. -It will use the [vfs](../apis/vfs.md) to read and write files, and will spin up worker processes for the transfer. +It will use the [VFS](../apis/vfs.md) to read and write files, and will spin up worker processes for the transfer. This guide assumes a basic understanding of Kinode process building, some familiarity with [`kit`](../kit/kit.md), requests and responses, and some knowledge of Rust syntax. 
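The request/response dispatch pattern this guide builds on can be stripped down to plain, standalone Rust. The `Message` enum below is a simplified stand-in for the `kinode_process_lib` type, which also carries source addresses, capabilities, and a lazily loaded blob:

```rust
// Simplified stand-in for the process_lib Message type.
enum Message {
    Request { body: Vec<u8> },
    Response { body: Vec<u8> },
}

fn handle_request(body: &[u8]) -> Result<String, String> {
    Ok(format!("request: {}", String::from_utf8_lossy(body)))
}

fn handle_response(body: &[u8]) -> Result<String, String> {
    Ok(format!("response: {}", String::from_utf8_lossy(body)))
}

// Same shape as the guide's handle_message(): match on the variant and
// delegate; in the real main loop, `?` bubbles errors up the same way.
fn handle_message(message: &Message) -> Result<String, String> {
    match message {
        Message::Request { body } => handle_request(body),
        Message::Response { body } => handle_response(body),
    }
}

fn main() {
    let msg = Message::Request { body: b"ListFiles".to_vec() };
    println!("{}", handle_message(&msg).unwrap());
}
```

The real loop calls `await_message()?` each iteration, so any error returned here surfaces in the process's main loop.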
@@ -19,20 +19,22 @@ This guide assumes a basic understanding of Kinode process building, some famili First, initialize a new project with ``` kit new file_transfer +cd file_transfer ``` Here's a clean template so you have a complete fresh start: -This guide will use the following `kinode_process_lib` version in `Cargo.toml` for this app: +This guide will use the following `kinode_process_lib` version in `file_transfer/Cargo.toml`: ``` -kinode_process_lib = { git = "ssh://git@github.com/uqbar-dao/process_lib.git", rev = "64d2856" } +kinode_process_lib = { git = "ssh://git@github.com/uqbar-dao/process_lib.git", tag = "v0.5.4-alpha" } ``` +Replace the `file_transfer/src/lib.rs` with: ```rust use serde::{Deserialize, Serialize}; use std::str::FromStr; -use kinode_process_lib::{await_message, println, Address, Message, ProcessId, Request, Response}; +use kinode_process_lib::{await_message, println, Address, Message, Response}; wit_bindgen::generate!({ path: "wit", @@ -67,7 +69,8 @@ impl Guest for Component { } ``` -Before delving into the code, you can handle the capabilities you need to request at spawn, these will be messaging capabilities to `"net:distro:sys"` (as you'll want to talk to other nodes), and one to `"vfs:distro:sys"` as you'll want to talk to the filesystem. +Before delving into the code, you can handle the capabilities you need to request at spawn. +These will be messaging capabilities to `"net:distro:sys"` (as you'll want to talk to other nodes), and one to `"vfs:distro:sys"` as you'll want to talk to the filesystem. `pkg/manifest.json` @@ -88,18 +91,20 @@ Before delving into the code, you can handle the capabilities you need to reques ] ``` -Now, start by creating a [drive](../apis/vfs.md#drives) in your vfs and opening it, where files will be downloaded by other nodes. -You can add a whitelist a bit later! - -Also, import some vfs functions from the `process_lib`. - +Now, look at `file_transfer/src/lib.rs`. 
+First, add an import of some VFS functions from the `process_lib`: ```rust use kinode_process_lib::vfs::{create_drive, metadata, open_dir, Directory, FileType}; - +``` +Then, in `init()`, create a [drive](../apis/vfs.md#drives) in your VFS and open it. +This is where files will be downloaded by other nodes. +You can add a whitelist a bit later! +```rust let drive_path = create_drive(our.package_id(), "files").unwrap(); ``` -To start, this will be an app without UI, so to upload files into your public directory, simply copy them into the "files" folder located in `your_node/vfs/file_transfer:file_transfer:template.uq/files` +At first, this will be an app without UI. +To upload files into your public directory, simply copy them into the "files" directory located in `your_node/vfs/file_transfer:template.os/files`. You now need to let other nodes know what files they can download from you, so add some message types. @@ -121,14 +126,15 @@ pub struct FileInfo { } ``` -You can handle these messages cleanly by modifying the handle message function slightly. +You can handle these messages cleanly by modifying the `handle_message()` function slightly. It will match on whether a message is a request or a response; errors get thrown to the main loop automatically by the `?` after the `await_message()` call. +The skeleton of `file_transfer/src/lib.rs` ends up looking like: ```rust use kinode_process_lib::{ await_message, println, vfs::{create_drive, metadata, open_dir, Directory, FileType}, - Address, Message, ProcessId, Request, Response, + Address, Message, Response, }; use serde::{Deserialize, Serialize}; use std::str::FromStr; @@ -161,20 +167,10 @@ fn handle_message(our: &Address, file_dir: &Directory) -> anyhow::Result<()> { let message = await_message()?; match message { - Message::Response { - ref source, - ref body, - .. 
- } => { - handle_transfer_response(our, source, body, file_dir)?; - } - Message::Request { - ref source, - ref body, - .. - } => { - handle_transfer_request(&our, source, body, file_dir)?; - } + Message::Response { ref source, ref body, .. } => + handle_transfer_response(source, body)?, + Message::Request { ref source, ref body, .. } => + handle_transfer_request(&our, source, body, file_dir)?, }; Ok(()) @@ -202,12 +198,12 @@ impl Guest for Component { } ``` -You can then add the `handle_transfer_request` and `handle_transfer_response` functions. +You can then add the `handle_transfer_request()` and `handle_transfer_response()` functions. ```rust fn handle_transfer_request( - our: &Address, - source: &Address, + _our: &Address, + _source: &Address, body: &Vec<u8>, files_dir: &Directory, ) -> anyhow::Result<()> { @@ -235,13 +231,13 @@ fn handle_transfer_request( .send()?; } } + + Ok(()) } fn handle_transfer_response( - our: &Address, source: &Address, body: &Vec<u8>, - file_dir: &Directory, ) -> anyhow::Result<()> { let transfer_response = serde_json::from_slice::<TransferResponse>(body)?; @@ -249,30 +245,48 @@ fn handle_transfer_response( TransferResponse::ListFiles(files) => { println!("got files from node: {:?} ,files: {:?}", source, files); } + _ => {} } Ok(()) } ``` -Now try this out by booting two nodes (fake or real), placing files in the /files folder of one of them, and sending a request. +Now try this out by [booting two nodes](../kit/boot-fake-node.md#example-usage), i.e., +``` +kit f + +# In another terminal +kit f --home /tmp/kinode-fake-node-2 -p 8081 -f fake2.os +``` + +and then placing files in the `/files` directory of the second (the `--home` dir path is specified as an argument to `boot-fake-node`), and sending a request from the first: ``` -/m node2.os@file_transfer:file_transfer:template.uq "ListFiles" +/m fake2.os@file_transfer:file_transfer:template.os "ListFiles" ``` You should see a printed response. 
```md -Thu 1/11 13:14 response from node2.os@file_transfer:file_transfer:template.os: {"ListFiles":[{"name":"file_transfer:template.os/files/barry-lyndon.mp4","size":8760244}, {"name":"file_transfer:template.os/files/blue-danube.mp3","size":9668359}]} +Thu 1/11 13:14 response from fake2.os@file_transfer:file_transfer:template.os: {"ListFiles":[{"name":"file_transfer:template.os/files/barry-lyndon.mp4","size":8760244}, {"name":"file_transfer:template.os/files/blue-danube.mp3","size":9668359}]} ``` ### Transfer -Now the fun part, downloading/sending files! +Now the fun part: downloading/sending files! -You could handle all of this within the `file_transfer` process, but you can also spin up another process, a worker, that handles the downloading/sending and then sends progress updates back to the main `file_transfer`. -This way you can download several files downloading at the same time without waiting for one to finish. +In the following, you'll create a child process to handle the downloading/sending and send progress updates to the parent `file_transfer` process. +Why the complicated architecture? + +The `file_transfer` application must be able to handle multiple up/downloads simultaneously. +There are two ways to accomplish this. +The first is to add `context` to Requests sent so that different up/downloads can be disambiguated as they come in. +The second is to spawn a child "worker" to handle each up/download. +Using a child process also allows Requests to await the corresponding Response. +For further reading, see discussion on [`contexts`](../processes.md#please-respond), [awaiting](../processes.md#awaiting-a-response), [spawning children](../processes.md#spawning-child-processes), and more on the [parent-child pattern](../cookbook/manage_child_processes.md). + +#### The main process: `file_transfer` Start by defining some types. You'll need a request that tells your main process to spin up a worker, requesting the node you're downloading from to do the same. 
@@ -319,20 +333,58 @@ pub enum WorkerRequest { } ``` +Some notes: + - Workers will take an `Initialize` request from their own node, which tells them whether they're a receiver or a sender based on whether they have a target worker `Option<Address>`. -- Progress reports are sent back to the main process, which you can then pipe them through as websocket updates to the frontend. -- To enable spawning, import the `spawn` function from the `process_lib`. -- The only additional part you need to handle in the transfer app is the Download request you've added. +- Progress reports are sent back to the main process; if adding a frontend, these could be sent to it via WebSocket updates. +The only additional part you need to handle in the transfer app is the Download request you've added. `TransferRequest::Download` will handle 2 cases: +1. An incoming download request: spawn a worker, which sends chunks to the remote `target_worker` given in the request. +2. An outgoing download request: spawn a worker, which sends its address to the remote node hosting the file. + -1. A node sent us a download request, you spawn a worker, and tell it to send chunks to the `target_worker` you got in the request. -2. You want to download a file from another node, you send yourself a download request, you spin up a worker and send it's address to the remote node. 
+To enable spawning and other features, change the imports in `file_transfer/src/lib.rs` to: +```rust +use kinode_process_lib::{ + await_message, our_capabilities, println, spawn, + vfs::{create_drive, metadata, open_dir, Directory, FileType}, + Address, Message, OnExit, Request, Response, +}; +use serde::{Deserialize, Serialize}; +use std::str::FromStr; +``` +and change `handle_transfer_request()` to: ```rust +fn handle_transfer_request( + our: &Address, + source: &Address, + body: &Vec<u8>, + files_dir: &Directory, +) -> anyhow::Result<()> { + let transfer_request = serde_json::from_slice::<TransferRequest>(body)?; + match transfer_request { TransferRequest::ListFiles => { - // like before + let entries = files_dir.read()?; + let files: Vec<FileInfo> = entries + .iter() + .filter_map(|file| match file.file_type { + FileType::File => match metadata(&file.path) { + Ok(metadata) => Some(FileInfo { + name: file.path.clone(), + size: metadata.len, + }), + Err(_) => None, + }, + _ => None, + }) + .collect(); + + Response::new() + .body(serde_json::to_vec(&TransferResponse::ListFiles(files))?) + .send()?; } TransferRequest::Progress { name, progress } => { // for now, progress reports are just printed @@ -387,19 +439,72 @@ pub enum WorkerRequest { } } } + + Ok(()) +} ``` There you go. As you can see, the main transfer doesn't actually do much — it only handles a handshake. This makes adding more features later on very simple. +#### The `worker` + Now, the actual worker. -Add this bit by bit: +The worker is its own process, just like the `file_transfer` process. +Therefore, you need to create a new process directory, `worker`, next to the `file_transfer` process, inside the `file_transfer` package. +E.g., +```bash +cp -r file_transfer worker +``` +and change the `worker/Cargo.toml` `name` to `worker`. -First, because when you spawn your worker you give it `our_capabilities()` (i.e. 
it has the same capabilities as the parent process), the worker will have the ability to message both `"net:distro:sys"` and `"vfs:distro:sys"`. -As it's also within the same package, you can simply open the `files_dir` without issue. +First, it's worth noting that, because you give `worker` `our_capabilities()` when you spawn it (i.e. it has the same capabilities as the parent process), the worker will have the ability to message both `"net:distro:sys"` and `"vfs:distro:sys"`. +Since `worker` is in the same package as `file_transfer`, it has the capability to open the `files` directory; see the discussion of [VFS drives](../apis/vfs.md) for more details. + +Overwrite the copied-in `worker/src/lib.rs` with the skeleton of `worker`, including imports and `init()`: ```rust +use serde::{Deserialize, Serialize}; +use std::str::FromStr; + +use kinode_process_lib::{ + await_message, get_blob, println, + vfs::{open_dir, open_file, Directory, File, SeekFrom}, + Address, Message, ProcessId, Request, Response, +}; + +wit_bindgen::generate!({ + path: "wit", + world: "process", + exports: { + world: Component, + }, +}); + +const CHUNK_SIZE: u64 = 1048576; // 1MB + +#[derive(Serialize, Deserialize, Debug)] +pub enum WorkerRequest { + Initialize { + name: String, + target_worker: Option<Address>, + }, + Chunk { + name: String, + offset: u64, + length: u64, + }, + Size(u64), +} + +#[derive(Serialize, Deserialize, Debug)] +pub enum TransferRequest { + ListFiles, + Download { name: String, target: Address }, + Progress { name: String, progress: u64 }, +} + struct Component; impl Guest for Component { fn init(our: String) { @@ -424,7 +529,7 @@ impl Guest for Component { You'll also need a bit of state for the receiving worker. This is not persisted (you'll add that soon!), but when different chunks arrive, you need to know what file to write to and how long that file should eventually become to generate progress updates. -This is not known at the point of spawning (`init` takes just an `our: String`), but you've created a `WorkerRequest::Initialize` precisely for this reason. +This is not known at the point of spawning (`init()` takes just an `our: String`), but is instead received via `WorkerRequest::Initialize`. The state you'll initialize at the start of the worker will look like this: @@ -433,7 +538,7 @@ let mut file: Option<File> = None; let mut size: Option<u64> = None; -And then in the main loop we pass it to `handle_message`: +And then in the main loop we pass it to `handle_message()`: ```rust struct Component; @@ -460,12 +565,11 @@ impl Guest for Component { } ``` -The `handle_message` function will handle three `WorkerRequest` variants: the requests `Initialize`, `Chunk` and `Size`. +The `handle_message()` function will handle three `WorkerRequest` variants: the requests `Initialize`, `Chunk` and `Size`. `WorkerRequest::Initialize` runs once, received from the spawner: ```rust - fn handle_message( our: &Address, file: &mut Option<File>, @@ -538,9 +642,6 @@ fn handle_message( } } } - _ => { - println!("Chunk and Size next!") - } } } _ => { @@ -551,15 +652,14 @@ } ``` -So upon `Initialize`, you open the existing file or create an empty one. 
Then, depending on whether the worker is a sender or receiver, you take one of two options: - +So upon `Initialize`, you open the existing file or create an empty one. +Then: - if receiver, save the `File` to your state, and then send a Started response to the parent. -- if sender, get the file's length, send it as `Size` to the `target_worker`, and then chunk the data, loop, read into a buffer and send to `target_worker`. +- if sender, get the file's length, send it as `Size` to the `target_worker`, and then iteratively send chunks to `target_worker`. -`WorkerRequest::Chunk` will look like this: +The `WorkerRequest::Chunk` branch of the `handle_message()` `match` will look like this: ```rust -// someone sending a chunk to us! WorkerRequest::Chunk { name, offset, @@ -611,7 +711,7 @@ WorkerRequest::Chunk { } ``` -And `WorkerRequest::Size` is easy: +And the `WorkerRequest::Size` branch is easy: ```rust WorkerRequest::Size(incoming_size) => { @@ -620,8 +720,7 @@ WorkerRequest::Size(incoming_size) => { ``` One more thing: once you're done sending, you can exit the process; the worker is not needed anymore. -Change your `handle_message` function to return a `Result` instead telling the main loop whether it should exit or not. - +Change your `handle_message()` function to return a `Result<bool>` instead, telling the main loop whether it should exit or not. As a bonus, we can add a print when it exits of how long it took to send/receive! ```rust @@ -633,7 +732,8 @@ fn handle_message( ) -> anyhow::Result<bool> { ``` -Changing the main loop and the places we return `Ok(())` appropriately. +Change the return values within `handle_message()` to `Ok(exit)` as appropriate. 
+Finally, change the main loop to: ```rust struct Component; @@ -710,6 +810,8 @@ pub enum WorkerRequest { #[derive(Serialize, Deserialize, Debug)] pub enum TransferRequest { + ListFiles, + Download { name: String, target: Address }, Progress { name: String, progress: u64 }, } @@ -723,7 +825,6 @@ fn handle_message( match message { Message::Request { - ref source, ref body, .. } => { @@ -1026,24 +1127,14 @@ fn handle_transfer_response(source: &Address, body: &Vec<u8>) -> anyhow::Result<()> Ok(()) } -fn handle_message(our: &Address, files_dir: &Directory) -> anyhow::Result<()> { +fn handle_message(our: &Address, file_dir: &Directory) -> anyhow::Result<()> { let message = await_message()?; match message { - Message::Response { - ref source, - ref body, - .. - } => { - handle_transfer_response(source, body)?; - } - Message::Request { - ref source, - ref body, - .. - } => { - handle_transfer_request(&our, source, body, files_dir)?; - } + Message::Response { ref source, ref body, .. } => + handle_transfer_response(source, body)?, + Message::Request { ref source, ref body, .. } => + handle_transfer_request(&our, source, body, file_dir)?, }; Ok(()) @@ -1078,7 +1169,7 @@ There you have it! Try and run it; you can download a file with the command ``` -/m our@file_transfer:file_transfer:template.os {"Download": {"name": "dawg.jpeg", "target": "buenosaires.os@file_transfer:file_transfer:template.os"}} +/m our@file_transfer:file_transfer:template.os {"Download": {"name": "dawg.jpeg", "target": "fake2.os@file_transfer:file_transfer:template.os"}} ``` replacing the node name and file name! 
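The offset/length arithmetic the sending worker performs can be sketched in standalone Rust. The chunk size matches the `CHUNK_SIZE` constant above, but this `chunks` helper is illustrative only; the real worker seeks into a VFS file at each offset and ships the bytes in a `Request`:

```rust
const CHUNK_SIZE: u64 = 1_048_576; // 1MB, as in the worker's constant

// Illustrative: compute the (offset, length) pairs for a file of `size`
// bytes. The real worker seeks to each offset in the VFS file, reads
// `length` bytes, and sends them to its counterpart worker.
fn chunks(size: u64) -> Vec<(u64, u64)> {
    let mut out = Vec::new();
    let mut offset = 0u64;
    while offset < size {
        let length = CHUNK_SIZE.min(size - offset);
        out.push((offset, length));
        offset += length;
    }
    out
}

fn main() {
    // a 2.5MB file becomes two full chunks and one partial final chunk
    for (offset, length) in chunks(2_621_440) {
        println!("send offset={offset} length={length}");
    }
}
```

The receiver can reconstruct progress from the `Size` it was sent up front: bytes written so far divided by the total size.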
diff --git a/src/cookbook/websocket_authentication.md b/src/cookbook/websocket_authentication.md index abc2f7d3..0fb00573 100644 --- a/src/cookbook/websocket_authentication.md +++ b/src/cookbook/websocket_authentication.md @@ -1 +1 @@ -# Websocket Authentication +# WebSocket Authentication diff --git a/src/files.md b/src/files.md index bf41245e..3ca9ac01 100644 --- a/src/files.md +++ b/src/files.md @@ -33,7 +33,7 @@ For example, part of the VFS might look like: ## Usage -To access files in the vfs, you need to create or open a [drive](./apis/vfs.md#drives), this can be done with the function `create_drive` from the [standard library](./process_stdlib/overview.md): +To access files in the VFS, you need to create or open a [drive](./apis/vfs.md#drives); this can be done with the function `create_drive` from the [standard library](./process_stdlib/overview.md): ```rust let drive_path: String = create_drive(our.package_id(), "drive_name")?; diff --git a/src/identity_system.md b/src/identity_system.md index d78f5601..2220a7a4 100644 --- a/src/identity_system.md +++ b/src/identity_system.md @@ -6,7 +6,7 @@ Kinode OS uses a domain system similar to [ENS](https://ens.domains/) to achieve It should be noted that, in our system, the concepts of `domain`, `identity`, and `username` are identical and interchangeable. Like ENS, Kinode domains (managed by our KNS) are registered by a wallet and owned in the form of an NFT. -However, unlike ENS, Kinode domains never expire. Additionally, they contain metadata.osessary to both: +However, unlike ENS, Kinode domains never expire. Additionally, they contain metadata necessary to both: - demonstrate the provenance of a given identity. - route messages to the identity on the Kinode network. @@ -18,7 +18,7 @@ Instead, it is designed to easily extend and wrap existing NFTs, enabling users What does this look like in practice? It's easy enough to check for provenance of a given identity. 
-If you have an Kinode domain, you can prove ownership by signing a message with the wallet that owns the domain. +If you have a Kinode domain, you can prove ownership by signing a message with the wallet that owns the domain. However, to essentially use your Kinode identity as a domain name for your personal server, KNS domains have routing information, similar to a DNS record, that points to an IP address. A KNS domain can either be `direct` or `indirect`. diff --git a/src/intro.md b/src/intro.md index e0e7cff3..76ffc805 100644 --- a/src/intro.md +++ b/src/intro.md @@ -28,7 +28,7 @@ Kinode's kernel handles the startup and teardown of processes, as well as messag Processes are programs compiled to Wasm, which export a single `init()` function. They can be started once and complete immediately, or they can run "forever". -Peers in Kinode OS are identified by their onchain username in the "KNS": Kinode Domain Name System, which is modeled after ENS. +Peers in Kinode OS are identified by their onchain username in the "KNS": Kinode Name System, which is modeled after ENS. The modular architecture of the KNS allows for any Ethereum NFT, including ENS names themselves, to generate a unique Kinode identity once it is linked to a KNS entry. Data persistence and blockchain access, as fundamental primitives for p2p apps, are built directly into the kernel. @@ -38,4 +38,4 @@ Accessing global state in the form of the Ethereum blockchain is now trivial, wi Several other I/O primitives also come with the kernel: an HTTP server and client framework, as well as a simple key-value store. Together, these tools can be used to build performant and self-custodied full-stack applications. -Finally, by the end of this book, you will learn how to deploy applications to the Kinode network, where they will be discoverable and installable by any user with an Kinode. 
+Finally, by the end of this book, you will learn how to deploy applications to the Kinode network, where they will be discoverable and installable by any user with a Kinode. diff --git a/src/kit/new.md b/src/kit/new.md index 90c53bde..786ec7b8 100644 --- a/src/kit/new.md +++ b/src/kit/new.md @@ -70,7 +70,7 @@ Must be URL-safe. ### `--publisher` -Name of the publisher; defaults to `template.uq`. +Name of the publisher; defaults to `template.os`. Must be URL-safe. ### `--language` diff --git a/src/kit/remove-package.md b/src/kit/remove-package.md index 6b49471b..3d0beed9 100644 --- a/src/kit/remove-package.md +++ b/src/kit/remove-package.md @@ -13,7 +13,7 @@ kit remove-package foo or ```bash -kit remove-package -package foo --publisher template.uq +kit remove-package -package foo --publisher template.os ``` ## Discussion diff --git a/src/login.md b/src/login.md index 7eee5382..fed8a8a5 100644 --- a/src/login.md +++ b/src/login.md @@ -5,26 +5,24 @@ Let's get onto the live network! These directions are particular to the Kinode OS alpha release. Joining the network will become significantly easier on subsequent releases. -Note: While Kinode will eventually post identities to Optimism, the alpha release uses the Ethereum Sepolia testnet. +Kinode has two live networks: mainnet on Optimism and a testnet on Ethereum Sepolia. +Identities created on one are unrelated to identities on the other, and nodes cannot communicate across networks. +This document discusses how to get on to either. ## Creating an Alchemy Account -Alchemy is used as an [Ethereum RPC endpoint](#acquiring-an-rpc-api-key) and as a [faucet for Sepolia testnet ETH](#aside-acquiring-sepolia-testnet-eth). -An Ethereum RPC endpoint and Sepolia ETH are required to send and receive Ethereum transactions that support the Kinode identity system. -If you do not already have one, register an [Alchemy account](https://www.alchemy.com/). 
+Alchemy is used as an [Ethereum RPC endpoint](#acquiring-an-rpc-api-key) and, for the testnet, as a [faucet for Sepolia testnet ETH](#aside-acquiring-sepolia-testnet-eth). +An Ethereum RPC endpoint and either Optimism or Sepolia ETH are required to send and receive Ethereum transactions that support the Kinode identity system. +If you do not already have an Alchemy account, [register one](https://www.alchemy.com/). The account is free and requires only an email address for registration. ## Starting the Kinode -Start an Kinode using the binary acquired in the [previous section](./install.md). -Locating the binary on your system, run: +Start a Kinode using the binary acquired in the [previous section](./install.md). +After locating the binary on your system, print out the arguments it expects: -```bash -$ ./kinode --help ``` -This will print the arguments expected by the binary: - -```bash +$ ./kinode --help A General Purpose Sovereign Cloud Computing Platform Usage: kinode [OPTIONS] --rpc <RPC> <HOME> Arguments: <HOME> Path to home directory Options: - --port <PORT> First port to try binding - --testnet Use Sepolia testnet + --port <PORT> Port to bind [default: first unbound at or above 8080] + --testnet If set, use Sepolia testnet --rpc <RPC> Ethereum RPC endpoint (must be wss://) -h, --help Print help -V, --version Print version @@ -43,11 +41,15 @@ Options: A home directory must be supplied — where the node will store its files. The binary also takes a required `--rpc` flag. The `--rpc` flag is a `wss://` WebSocket link to an Ethereum RPC, allowing the Kinode to send and receive Ethereum transactions — used in the [identity system](./identity_system.md) as mentioned [above](#creating-an-alchemy-account). -Finally, by default, the node will bind to port 8080; this can be modified with the `--port` flag. +If the `--port` flag is supplied, Kinode will attempt to bind that port and will exit if that port is already taken. 
+If no `--port` flag is supplied, Kinode will bind to `8080` if it is available, or the first port above `8080` if not. + +By default, the binary will connect to the Optimism mainnet. +To connect to the Sepolia testnet instead, supply the `--testnet` flag. ### Acquiring an RPC API Key -Create a new "app" on [Alchemy](https://dashboard.alchemy.com/apps) on the Ethereum Sepolia network. +Create a new "app" on [Alchemy](https://dashboard.alchemy.com/apps) for either Optimism Mainnet or Ethereum Sepolia. ![Alchemy Create App](./assets/alchemy-create-app.png) @@ -58,9 +60,12 @@ Copy the WebSocket API key from the API Key button: ### Running the Binary Replace the `--rpc` field below with the WebSocket API key link copied from [the previous step](#acquiring-an-rpc-api-key), and start the node with: - ```bash -./kinode home --rpc wss://eth-sepolia.g.alchemy.com/v2/ --testnet +# For Optimism mainnet +./kinode home --rpc wss://opt-mainnet.g.alchemy.com/v2/ + +# For Sepolia testnet +./kinode home --rpc wss://eth-sepolia.g.alchemy.com/v2/ --testnet ``` -(See runtime README if you wish to boot on Optimism mainnet) @@ -99,6 +104,11 @@ After registering a username, click through until you reach `Connect Wallet` and ![Register connect wallet](./assets/register-connect-wallet.png) +### Aside: Bridging ETH to Optimism + +Bridge ETH to Optimism using the [official bridge](https://app.optimism.io/bridge). +Many exchanges also allow sending ETH directly to Optimism wallets. + ### Aside: Acquiring Sepolia Testnet ETH Using the Alchemy account [registered above](#creating-an-alchemy-account), use the [Sepolia faucet](https://sepoliafaucet.com/) to acquire Sepolia ETH if you do not already have some in your wallet. @@ -112,7 +122,7 @@ To do this, simply leave the box below name registration unchecked.
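The default port selection described above — bind `8080` if free, otherwise the first unbound port above it — can be sketched as follows. This is only an illustration of the documented behavior, not the runtime's actual code:

```rust
use std::net::TcpListener;

// Find the first bindable port at or above `start`, mirroring the
// documented default when no --port flag is given. Illustrative only.
fn first_free_port(start: u16) -> Option<u16> {
    (start..=u16::MAX).find(|&p| TcpListener::bind(("127.0.0.1", p)).is_ok())
}

fn main() {
    match first_free_port(8080) {
        Some(port) => println!("would bind port {port}"),
        None => eprintln!("no free port at or above 8080"),
    }
}
```

With `--port` supplied, the node instead tries exactly that one port and exits on failure, which is the behavior you want when a reverse proxy or firewall rule expects a fixed port.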
![Register select name](./assets/register-select-name.png) -Am indirect node connects to the network through a router, which is a direct node that serves as an intermediary, passing packets from sender to receiver. +An indirect node connects to the network through a router, which is a direct node that serves as an intermediary, passing packets from sender to receiver. Routers make connecting to the network convenient, and so are the default. If you are connecting from a laptop that isn't always on, or that changes WiFi networks, use an indirect node. diff --git a/src/my_first_app/chapter_1.md b/src/my_first_app/chapter_1.md index 55532ef2..719666fe 100644 --- a/src/my_first_app/chapter_1.md +++ b/src/my_first_app/chapter_1.md @@ -11,8 +11,8 @@ The `$ ` should not be copied into the terminal. # Environment Setup -In this chapter, you'll walk through setting up an Kinode development environment. -By the end, you will have created an Kinode application, or package, composed of one or more processes that run on a live Kinode. +In this chapter, you'll walk through setting up a Kinode development environment. +By the end, you will have created a Kinode application, or package, composed of one or more processes that run on a live Kinode. The application will be a simple chat interface: `my_chat_app`. The following assumes a Unix environment — macOS or Linux. @@ -31,7 +31,7 @@ cargo install --git https://github.com/uqbar-dao/kit ## Creating a New Kinode Package Template The `kit` toolkit has a [variety of features](../kit/kit.md). -One of those tools is `new`, which creates a template for an Kinode package. +One of those tools is `new`, which creates a template for a Kinode package. 
The `new` tool takes two arguments: a path to create the template directory and a name for the package: ``` @@ -47,7 +47,7 @@ Options: -a, --package Name of the package [default: DIR] -u, --publisher Name of the publisher [default: template.os] -l, --language Programming language of the template [default: rust] [possible values: rust, python, javascript] - -t, --template