From 9a83304f7b25d38d0c8fc46d69667c1034f95a6f Mon Sep 17 00:00:00 2001
From: Sergei Lavrentev <23312691+lavrd@users.noreply.github.com>
Date: Sat, 16 Dec 2023 00:07:28 +0400
Subject: [PATCH] Staex IoD Application

# StaexIoD

- **Team Name:** Staex
- **Payment address**: Fiat 24.12.1971, 11:59
- **[Level](https://github.com/w3f/Grants-Program/tree/master#level_slider-levels):** 3

## Project Overview :page_facing_up:

### Overview

Staex Internet of Data.

The goal of this project is to create a Web3 IoT data infrastructure with a stable economy. In simple terms, we want to simplify the Web3 onboarding process: more IoT device owners should be able to share useful data from their devices at a profit, and other people should be able to find and use that data transparently, securely, and easily for their daily life or research.

We want to bring a Web3 data access infrastructure to your ecosystem. As part of the Web3 Foundation ecosystem, we want to use the Shiden parachain for our smart contracts and the ecosystem's tokens for the project economy.

Our team is interested in creating this project because Staex is already working on several distributed solutions for IoT devices, so we are keen to propose such topics to the community and to improve the IoT field with new solutions.

### Project Details

#### Architecture

![architecture](https://raw.githubusercontent.com/lavrd/cdn/main/image.png)

### Milestone 1 - IoT device indexing

#### DID smart contract

To give users the possibility to find a particular type of data or a particular IoT device from the user interface, we need to index devices somehow.
We want to implement the indexing part through a DID smart contract in **ink!**.

- `create` - method to create a DID on-chain record.

This method is required to start working with a DID for an IoT device owner. It must be executed before adding or updating DID attributes, as it contains several required fields which should be present in every DID: type of data, location, on-chain id, and prices.

- `update_attribute` - add or update a required or additional DID attribute.
- `delete_attribute` - delete a DID attribute.

The `delete_attribute` smart contract method will refuse to delete required attributes, which are predefined at smart contract deployment (initialization).

All these methods will emit smart contract events, which are stored in the blockchain state and can be indexed by us later at any time.

As smart contract events are stored in the blockchain permanently, the DID smart contract doesn't store anything in its own state: indexing the events is enough to recover the required data.

#### Indexer

To index and retrieve IoT devices (DIDs), we want to implement a custom indexer as part of the provisioner (which is described later).

The indexer will be optionally embedded into the provisioner as a module. It will scan smart contract events using the [subxt library](https://github.com/paritytech/subxt) sequentially, block by block, then parse and index DID attributes into a local database (SQLite at the moment). We have already done [research](https://github.com/staex-mcc/staex-iod/tree/main/provisioner/src) on how to retrieve blocks and parse the events in them.

"Optionally embedded" means we will use conditional compilation to either embed the indexer into the provisioner or leave it out. As some devices don't need to index on-chain data, we can produce a smaller binary for them by not including the indexer functionality at all. Also, even if the indexer is compiled into the provisioner, it can be turned off by a field in the config file.
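A minimal sketch of the contract logic described above in plain Rust (an illustrative model, not the final ink! API; the `DidRegistry` type, event shapes, and error strings are our own assumptions):

```rust
use std::collections::{HashMap, HashSet};

/// Events the contract would emit; an off-chain indexer reconstructs
/// the full DID state from these events alone.
#[derive(Debug, Clone, PartialEq)]
enum Event {
    Created { device: String },
    AttributeUpdated { device: String, name: String, value: String },
    AttributeDeleted { device: String, name: String },
}

struct DidRegistry {
    /// Attribute names every DID must keep (set at deployment).
    required: HashSet<String>,
    /// Devices that have already called `create`.
    devices: HashSet<String>,
    events: Vec<Event>,
}

impl DidRegistry {
    fn new(required: &[&str]) -> Self {
        Self {
            required: required.iter().map(|s| s.to_string()).collect(),
            devices: HashSet::new(),
            events: Vec::new(),
        }
    }

    /// `create` must come first and must provide all required attributes.
    fn create(&mut self, device: &str, attrs: HashMap<String, String>) -> Result<(), String> {
        for name in &self.required {
            if !attrs.contains_key(name) {
                return Err(format!("missing required attribute: {name}"));
            }
        }
        self.devices.insert(device.to_string());
        self.events.push(Event::Created { device: device.into() });
        for (name, value) in attrs {
            self.events.push(Event::AttributeUpdated { device: device.into(), name, value });
        }
        Ok(())
    }

    fn update_attribute(&mut self, device: &str, name: &str, value: &str) -> Result<(), String> {
        if !self.devices.contains(device) {
            return Err("call create first".into());
        }
        self.events.push(Event::AttributeUpdated {
            device: device.into(),
            name: name.into(),
            value: value.into(),
        });
        Ok(())
    }

    /// Required attributes cannot be deleted.
    fn delete_attribute(&mut self, device: &str, name: &str) -> Result<(), String> {
        if self.required.contains(name) {
            return Err(format!("attribute {name} is required"));
        }
        self.events.push(Event::AttributeDeleted { device: device.into(), name: name.into() });
        Ok(())
    }
}
```

Since the contract keeps no attribute state of its own, everything above reduces to validation plus event emission, which matches the "index events, store nothing" design.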
The indexer will have an HTTP API with several routes so the user interface can retrieve indexed information with pagination:

- `/devices?search=&limit=&offset=` - get devices filtered by attributes and paginated
- `/devices/{on-chain-id}` - get particular device information

Users can use a public indexer or start their own.

#### DID sync

To make the process of setting up or updating a DID for an IoT device easier, we will implement this logic in our provisioner. The user will only need to set some fields in the config file, and the provisioner will sync the DID attributes with the network through smart contract calls.

Attributes we want to store in DIDs (required):

- Type of data - the type of data the IoT device produces; it can be temperature, humidity, or something like CCTV camera output, etc.
- Location - coordinates or the name of the place.
- On-chain id - a Substrate account id. We will need it later to create a shared secret and to know the exact file locations in IPFS.
- Price to access data, per KB.
- Price to pin data, per hour.

A device can also include other attributes, but they are optional.

To interact with the Substrate node we will use the [subxt](https://github.com/paritytech/subxt) library.
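The filtering and pagination semantics of the indexer's `/devices` route can be sketched as follows (an in-memory model with assumed field names; the real indexer would run an equivalent query against SQLite):

```rust
#[derive(Debug, Clone, PartialEq)]
struct Device {
    chain_id: String,
    data_type: String,
    location: String,
}

/// Filter by a substring over the indexed attributes, then paginate,
/// mirroring `/devices?search=&limit=&offset=`.
fn search_devices(devices: &[Device], search: &str, limit: usize, offset: usize) -> Vec<Device> {
    devices
        .iter()
        .filter(|d| {
            search.is_empty()
                || d.data_type.contains(search)
                || d.location.contains(search)
        })
        .skip(offset)
        .take(limit)
        .cloned()
        .collect()
}
```

Note that `offset` is applied to the filtered set, so a UI can page through search results consistently.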
Config file example

Example:

```toml
[did.attributes]
type = ""
location = ""
price_access = ""
price_pin = ""
```
+ +
Synchronization flow diagram

The synchronization part consists of the following steps:

```mermaid
sequenceDiagram
    autonumber
    participant p as Provisioner
    participant i as Indexer
    participant s as Substrate
    Note over i: Indexer can be local or remote.
    p ->> i: Gets current DID state
    activate p
    activate p
    activate p
    alt If indexer is not available
        Note over i: Indexer can be unavailable because it is not started locally or the remote is not active.
        p -x p: Print warning log and skip synchronization
        deactivate p
    end
    i -->> p: Returns current DID state
    p ->> p: Check for any differences
    alt If there are no differences
        p -x p: Print info log and skip synchronization
        deactivate p
    end
    p ->> s: Call smart contract method to sync
    deactivate p
```
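The flow in the diagram reduces to a small decision function; here is a sketch under assumed types (`Attributes`, `SyncAction`, and `plan_sync` are our own names, not the provisioner's real API):

```rust
use std::collections::HashMap;

type Attributes = HashMap<String, String>;

#[derive(Debug, PartialEq)]
enum SyncAction {
    /// Indexer unreachable: print a warning log and skip this round.
    Skip,
    /// On-chain state already matches the config: print an info log, skip.
    UpToDate,
    /// Call the smart contract with the attributes that differ.
    Sync(Attributes),
}

fn plan_sync(config: &Attributes, on_chain: Option<&Attributes>) -> SyncAction {
    // The provisioner asks the (local or remote) indexer for the current
    // DID state; `None` models an unavailable indexer.
    let Some(on_chain) = on_chain else {
        return SyncAction::Skip;
    };
    // Compare the config attributes against the indexed on-chain state.
    let diff: Attributes = config
        .iter()
        .filter(|(k, v)| on_chain.get(*k) != Some(*v))
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect();
    if diff.is_empty() {
        SyncAction::UpToDate
    } else {
        SyncAction::Sync(diff)
    }
}
```

Only the differing attributes are submitted, so an unchanged config produces no extrinsic and no transaction fees.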
#### UI

After that, we will implement a UI where users can see devices and find the necessary one through filters. The UI will be quite simple: header/footer plus a table with the device list, where each row shows the information from a DID. We also need an input or select field for the filters.

The UI will be embedded into the provisioner binary, so the user can use one binary for all interaction with the project. It can be disabled by conditional compilation in the same way as the indexer.

### Milestone 2 - Provisioner & Applications

The provisioner is a background daemon written in Rust. It is responsible for:

- Managing the applications' lifecycle through Docker containers.
- Syncing local DID attributes from the config file with the smart contract.
- Managing the local IPFS node lifecycle and distributing data through it.
- Listening for on-chain events to distribute data and index IoT devices.

Applications are small programs that read data from different types of sensors. Through the config file, a device owner can simply start them and see how data from the sensors is securely distributed to users on request.

In the current version, applications are Docker containers, because it is simple to build them for different IoT devices, and the provisioner acts as an orchestrator responsible for their lifecycle. With Docker containers, applications can also be written in any language. In the future we will add the possibility to use Linux system processes and runC containers as a lightweight alternative.

#### Applications lifecycle management

Applications are Docker containers, so we will write a module which maintains their lifecycle on the device owner's request: start, stop, delete, etc.

To start a new application, the device owner provides a config file and makes a request using our provisioner CLI.

After that, the provisioner will start the Docker container and watch for Docker daemon events.
On a provisioner restart, it can also restart or re-create dependent containers in case they were stopped for some reason.
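The restart reconciliation can be sketched as a small desired-vs-actual comparison (a hypothetical helper; in the real provisioner the `running` list would come from the Docker daemon, e.g. via bollard):

```rust
use std::collections::HashSet;

/// Compare the applications the owner configured with the containers
/// Docker reports as running, and return the ones that must be
/// (re)started after a provisioner restart.
fn to_restart(configured: &[&str], running: &[&str]) -> Vec<String> {
    let running: HashSet<&str> = running.iter().copied().collect();
    configured
        .iter()
        .filter(|app| !running.contains(**app))
        .map(|app| app.to_string())
        .collect()
}
```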
Application configuration in JSON format

Configuration:

```json
{
  "image": "example.registry.io/project/name",
  "command": "/bin/sh",
  "args": ["1", "2"],
  "envs": ["key=val"]
}
```
To interact with Docker containers, we are planning to use the [bollard](https://github.com/fussybeaver/bollard) library, as we have experience with it.

#### Data exchange

As the provisioner is responsible for delivering data to consumers through the IPFS node, it needs to get this data from the sensors, and the applications are responsible for reading the sensors. To send data from an application to the provisioner we want to use Unix sockets.

The provisioner starts a Unix socket server, and when starting applications it creates a volume for each application containing the Unix socket. It also creates and passes an environment variable with the path to the Unix socket inside the container. Every application can read this environment variable and connect to the Unix socket server.

At the moment, we chose JSON as the data transmission format for its simplicity and its availability in any programming language.
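One round trip over this channel can be sketched with the standard library (the socket path and newline-delimited framing here are illustrative assumptions, not the final wire protocol):

```rust
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::thread;

/// Provisioner listens on a Unix socket; an application connects and
/// sends one newline-delimited JSON message. In the real setup the
/// socket lives in a per-application volume and its path reaches the
/// container via an environment variable.
fn round_trip() -> std::io::Result<String> {
    let path = std::env::temp_dir().join("staex-iod-demo.sock");
    let _ = std::fs::remove_file(&path);

    // Provisioner side: accept one connection and read one message.
    let listener = UnixListener::bind(&path)?;
    let server = thread::spawn(move || -> std::io::Result<String> {
        let (stream, _) = listener.accept()?;
        let mut line = String::new();
        BufReader::new(stream).read_line(&mut line)?;
        Ok(line.trim_end().to_string())
    });

    // Application side: send a "save data" message.
    let mut stream = UnixStream::connect(&path)?;
    stream.write_all(b"{\"method\":1,\"data\":{\"seq_num\":44,\"blob\":[0,1,2,3]}}\n")?;
    drop(stream);

    server.join().expect("server thread panicked")
}
```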
The provisioner RPC server will have several RPC endpoints

Endpoints:

1. Save data

Request:

```json
{
  "method": 1,
  "data": {
    "seq_num": 44,
    "blob": [0, 1, 2, 3]
  }
}
```

2. Get latest sequence number

Request:

```json
{
  "method": 2
}
```

Response:

```json
{
  "seq_num": 44
}
```
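The two methods can be modeled as a small dispatcher (Rust types mirroring the JSON above; treating `seq_num` as an idempotency key is our reading of the sequence-number design, and the in-memory `rows` vector stands in for the SQLite `data` table):

```rust
/// The two RPC methods from the protocol above, modeled as Rust types.
#[derive(Debug)]
enum Request {
    SaveData { seq_num: u64, blob: Vec<u8> },
    GetLatestSeqNum,
}

#[derive(Debug, PartialEq)]
enum Response {
    Ok,
    SeqNum(u64),
}

#[derive(Default)]
struct DataStore {
    rows: Vec<(u64, Vec<u8>)>, // (seq_num, blob)
}

impl DataStore {
    fn handle(&mut self, req: Request) -> Response {
        match req {
            Request::SaveData { seq_num, blob } => {
                // Idempotency: ignore a sequence number we already stored,
                // so an application can safely retry after a failure.
                if !self.rows.iter().any(|(s, _)| *s == seq_num) {
                    self.rows.push((seq_num, blob));
                }
                Response::Ok
            }
            Request::GetLatestSeqNum => {
                // Lets an application resume numbering after a restart.
                Response::SeqNum(self.rows.iter().map(|(s, _)| *s).max().unwrap_or(0))
            }
        }
    }
}
```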
#### IPFS application

The provisioner will have at least one always-running application: an IPFS node. It is required because the provisioner needs it to encrypt and deliver data to consumers (as a device) or to retrieve and decrypt data from a device (as a consumer). The user can't delete this application via the CLI or by any other means.

This application will be based on the [Kubo](https://github.com/ipfs/kubo) implementation. In general, the provisioner will run this application (a Docker container) on start and maintain its lifecycle. The provisioner will operate on and synchronize data in IPFS through Kubo RPC requests. The configuration for this application will be embedded into the provisioner binary, so everything starts automatically without user interaction.

#### Example application

An example application is required to demonstrate how to write applications for this project and how to interact with the provisioner, and also for demo purposes. This application will be written in Go and will produce random mock data on start, as an IoT sensor simulation.

#### Provisioner CLI

The provisioner CLI is responsible for interacting with the provisioner and has the following commands:

- `applications create <path to config file>` - create a new application using the configuration from a file.
- `applications get <id>` - get an exact application with its status (running, stopped) and the last time data was inserted.
- `applications list` - get all applications.
- `applications delete <id>` - delete an application.
- `consumers` - get information about consumers and the synchronization process; show the requested and consumed amounts of data.
- `consumers <id>` - the same as above, but for an exact consumer.
- `data <id>` - information about how much data is already saved in the database.

To interact with the provisioner, the CLI uses a predefined Unix socket like `/tmp/staex-iod.sock`.

### Milestone 3 - ACL - 1
### Milestone 4 - ACL - 2

#### ACL

The access control layer (ACL) restricts data usage by users who didn't pay the IoT device owner for the data.
As the goal of the platform is to let IoT device owners share data from their devices, and this data should not be readable by everyone, we need a smart contract (implemented in **ink!**) that restricts data access and opens it only to particular users. The ACL will be implemented in the following way: the provisioner encrypts data before saving it into IPFS, and only the paying user can decrypt the data using a shared secret key.

Here is the flow:

1. The user puts some tokens into the smart contract for an exact owner and points out which type of data they need.
2. The provisioner on the IoT device sees an event from the smart contract that there is a new data access request.
3. The provisioner checks whether it can produce such data.
4. The provisioner gets the tokens from the smart contract.
5. The provisioner starts to distribute data to this user.

##### Data access smart contract

To implement the flow above, we will implement a kind of multisig **ink!** smart contract with three methods:

- `request` - the user requests access to particular data and sends the required amount of tokens, also specifying for how long the device should pin the data.
- `accept` - the device accepts the request and gets the required amount of tokens.
- `reject` - the device rejects the request and the tokens are returned to the user.

A device can reject a request because:

- The requested data is not available at the moment.
- The device has a high number of requests (high load).

This contract will use the DID smart contract to check that the user sent the required amount of tokens, otherwise it will reject automatically.

We call it "a kind of multisig" because on request the user puts some tokens for an exact device owner into the smart contract, and only the device owner can withdraw these tokens from the smart contract.

##### Data distribution

To distribute data to an exact user we want to use IPFS: it is fully open source and distributed, and IoT devices don't need a public IP.

To make the device's root folder static in the IPFS network we will use IPNS.
With IPNS, a user who requested data can find the exact place in IPFS where it is stored. The IPFS structure will be the following:

```shell
/ipfs///
```

With such a structure, every user can know where the requested data is located.

When the provisioner gets tokens from the smart contract, it starts the distribution process. To prevent users who didn't pay from reading the data from IPFS, the provisioner encrypts the data before upload. To generate the encryption key we will use a Diffie-Hellman key exchange. With this algorithm we can generate an encryption/decryption key without transferring it over the network: device private key + user public key and device public key + user private key generate exactly the same shared secret key, which the provisioner uses for encryption.

After getting data from the database and encrypting it, the provisioner saves and pins the data in IPFS and notifies the user through the IPFS pub/sub protocol.

##### Data accessing

To access decrypted data automatically, the user can start the provisioner on their own laptop or server. The provisioner will start a local IPFS node, subscribe to IPFS events, and start syncing files from IPFS into the local database. It will also generate the shared secret key and decrypt all data during the sync process.

The provisioner will also start a simple HTTP server to access the data from a UI. The UI will be simple: the user sees a table with the synced data, where each row has some metadata like the IoT device public key, the data creation time, and the data itself. The same UI will also be available on the device to see the produced data.

##### Data access prolongation

For example, suppose data costs 1 token per 1 KB. Once the device has saved 1 KB of data to IPFS, it stops data distribution. If the user doesn't want data access to stop, they should put more tokens into the smart contract with a data request. When the provisioner is almost done with the first 1 KB, it tries to get more tokens from the smart contract and, on success, continues to distribute data.
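The shared-secret derivation mentioned in the data distribution section can be illustrated with textbook Diffie-Hellman over toy numbers (illustrative parameters only; a production implementation would use an established group or a modern construction such as X25519):

```rust
/// Modular exponentiation: base^exp mod modulus, by squaring.
fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

/// Derive the secret on both sides: each party combines its own
/// private key with the other's public key, and both arrive at the
/// same value without ever sending the secret over the network.
fn derive_shared_secrets() -> (u64, u64) {
    // Toy public parameters: a Mersenne prime (2^31 - 1) and generator.
    let (p, g) = (2_147_483_647u64, 5u64);
    let (device_priv, user_priv) = (123_456_789u64, 987_654_321u64);
    let device_pub = mod_pow(g, device_priv, p);
    let user_pub = mod_pow(g, user_priv, p);
    (
        mod_pow(user_pub, device_priv, p),  // computed on the device
        mod_pow(device_pub, user_priv, p),  // computed by the user
    )
}
```

The resulting value is the key the provisioner would use to encrypt data before pinning it to IPFS.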
For prolongation, the user can simply use the `request` method of the data access smart contract again.

##### Data access expiration

Through the data access smart contract, the user can request data pinning for an exact time interval. During this interval the provisioner keeps the data in IPFS, and it notifies the user through IPFS pub/sub when the data has expired.

#### Provisioner

The provisioner will also have an HTTP server to serve user interfaces. For example, the provisioner will have a UI to see running applications, data, and data access subscribers. It will be a simple UI with a table for each type of data and a table with data consumers.

When the provisioner sees a data request on-chain event, it makes a new record in the "requests" table, and via an SQLite trigger the "consumers" table fields are updated to start or continue the synchronization process.

To make parallel writes to SQLite from applications possible, the provisioner will run migrations for each application while starting it, and we will set up SQLite to use WAL mode.

We will also implement a command line interface to start/stop applications via the provisioner. We chose a CLI instead of a UI because it gives the same experience as kubectl for Kubernetes: it is easier to manage an application's lifecycle with just a few commands.

#### Applications

Applications should write to the local SQLite database to give the provisioner access to the sensor data for its further work.

#### SQLite database structure
Table "applications" - to store the applications' state.

| Field | Type | Description |
| ----- | ---- | ----------- |
| id | serial | - |
| name | text | Application name. |
| data | text | Stored as JSON. Configuration: image, command, args, volumes, envs, etc. State: is the application running, error, events, restart count, etc. |

Table "consumers" - to store data access requests and the amount of data consumed by consumers.

| Field | Type | Description |
| ----- | ---- | ----------- |
| id | serial | - |
| chain_id | text | On-chain consumer identification, like a Substrate account id. |
| requested | integer | Overall requested amount of data in KB. |
| consumed | integer | Overall consumed amount of data in KB. |

Table "requests" - to store on-chain requests for the data.

| Field | Type | Description |
| ----- | ---- | ----------- |
| id | serial | - |
| chain_id | text | For idempotency: %block_number%_%event_index_in_block%. |
| requested | integer | Requested amount of data in KB. |

Table "data" - to store data produced by applications.

| Field | Type | Description |
| ----- | ---- | ----------- |
| id | serial | - |
| app_id | integer | Link to the applications table by id: applications.id. |
| seq_num | blob | Sequence number for idempotency. |
| data | blob | The data itself. |

Table "data_sync" - to track synchronization with IPFS.

| Field | Type | Description |
| ----- | ---- | ----------- |
| id | serial | - |
| data_id | integer | Link to the data table by id: data.id. |
| consumer_id | integer | Link to the consumers table by id: consumers.id. |
| synced_at | integer | Timestamp. |
| expired_at | integer | Timestamp. |

Table "devices" - to store and index decentralized identity attributes, to find a particular data type or device.

| Field | Type | Description |
| ----- | ---- | ----------- |
| id | serial | - |
| chain_id | text | On-chain id: a Substrate account id. |
| data_type | text | Data type like temperature, speed, etc. |
| location | text | Location as coordinates or a place name. |
| price_access | text | Price to access the data. |
| price_pin | text | Price to pin the data for some time. |
| additional | text | Additional attributes, stored as JSON. |

Table "system" - to store additional data needed for the provisioner to work properly. For example, here we can store the last indexed block number.

| Field | Type | Description |
| ----- | ---- | ----------- |
| id | serial | - |
| key | text | - |
| val | text | - |
#### Technologies

- IPFS - for decentralized data distribution and notifications.
- Rust - to implement the provisioner efficiently.
- Docker - to ship applications and manage their lifecycle.
- **ink!** - smart contracts as a transparent and honest way to implement the data access economy.
- Vue - for fast and comfortable implementation of user interfaces.
- SQLite - to share data between applications, IPFS, and the provisioner.

### Ecosystem Fit

Help us locate your project in the Polkadot/Substrate/Kusama landscape and what problems it tries to solve by answering each of these questions:

- Where and how does your project fit into the ecosystem? - Our project fits into the ecosystem as a distributed data access infrastructure.
- Who is your target audience? - Our target audience is IoT device owners and data researchers. Moreover, data like the real-time temperature at a particular place can be interesting to a lot of people, as can other types of data.
- What need(s) does your project meet? - Our project meets the need for easier, more secure, and honest data access.
- Are there any other projects similar to yours in the Substrate / Polkadot / Kusama ecosystem? - We didn't find similar projects.

## Team :busts_in_silhouette:

### Team members

- **Alexandra Mikityuk, CEO.** PhD in Computer Science. Previously co-founder of IT companies, startup consultant, 17 years in the software industry.
- **Paksy Plackis-Cheng, CSO.** Award-winning entrepreneur, 20 years of executive management experience in (deep) tech startups and renowned global corporations. IPO and M&A experience. Strategic partnerships, BizDev, GTM.
- **Ivan Gankevich, Product lead / system architect.** PhD in Computer Science. Previously a researcher in the field of distributed systems with 10 years of experience.
- **Maksim Sukhotin, Senior software engineer.** 13 years of experience in various corporate projects as a senior software engineer. Specialization in industrial process control and automation.
- **Sergei Lavrentev, Senior software engineer.** M.Sc. in Computer Science and 6 years of experience in software development in different areas.

### Contact

- **Contact Name:** Sergei Lavrentev
- **Contact Email:** sergei@staex.io
- **Website:** https://staex.io

### Legal Structure

- **Registered Address:** Staex GmbH, c/o Unicorn, Am Neuen Markt 9 e-f, 14467 Potsdam, HRB 36981 P
- **Registered Legal Entity:** Staex GmbH

### Team's experience

Our team has extensive experience in the IoT infrastructure field. As part of projects at T-Labs (Deutsche Telekom) and Staex, we have implemented orchestrators for blockchain networks and distributed networks, and we are currently building a VPN with an IoT focus. At https://staex.io/product you can find more about our benchmarks, which show we are able to implement this solution:

- https://staex.io/product/features/low-data-usage
- https://staex.io/product/features/high-performance

Our solution at Staex is written in Rust.

To further demonstrate our team's experience, we would also like to mention our patents: https://staex.io/patent.

### Team Code Repos

- https://github.com/staex-mcc/staex-iod (this project)
- https://github.com/staex-mcc/rise-next - dRone InfraStructure paymEnts - our solution for drone infrastructure payments.

Please also provide the GitHub accounts of all team members. If they contain no activity, references to projects hosted elsewhere or live are also fine.

- https://github.com/igankevich
- https://github.com/lavrd

### Team LinkedIn Profiles (if available)

- https://www.linkedin.com/in/dr-alexandra-mikityuk/
- https://www.linkedin.com/in/paksy/
- https://www.linkedin.com/in/ivan-gankevich/
- https://www.linkedin.com/in/max-sukhotin/
- https://www.linkedin.com/in/lavrdx/

## Development Status :open_book:

We have started this project in this repository: https://github.com/staex-mcc/staex-iod.
We have also done research on a possible implementation, which is described in this document.

## Development Roadmap :nut_and_bolt:

### Overview

- **Total Estimated Duration:** 4 months
- **Full-Time Equivalent (FTE):** 2
- **Total Costs:** 52,000 USD

### Milestone 1 — IoT device indexing

- **Estimated duration:** 1 month
- **FTE:** 2
- **Costs:** 13,000 USD

| Number | Deliverable | Specification |
| -----: | ----------- | ------------- |
| **0a.** | License | MIT |
| **0b.** | Documentation | 1. How to use the DID smart contract.<br>2. How to start and set up an indexer node.<br>3. How to configure the provisioner to sync a DID with the chain.<br>4. Attribute definitions and their meaning.<br>5. How to start the UI and use it.<br>6. How to start a local Substrate-based node for a completely local setup.<br>7. How to use ink! for our local development.<br>8. General description of the first milestone and its rationale.<br>9. Current architecture diagram.<br>10. Instructions on how to test with local development.<br>11. How to start end-to-end tests.<br>12. How to use conditional compilation to include or exclude the indexer in the provisioner.<br>13. Indexer OpenAPI specification. |
| **0c.** | Testing and Testing Guide | The ink! smart contract will be covered by unit tests. Provisioner DID logic will be covered by Rust end-to-end tests. |
| **0d.** | Docker | We will provide a docker compose file to set up local development. |
| **1.** | DID smart contract | DID smart contract description and implementation with the following methods: `create`, `update_attribute`, `delete_attribute` and corresponding events. |
| **2.** | Indexer | Index on-chain events produced by the DID smart contract and save them to the database. Provide the possibility to exclude the indexer from the provisioner by conditional compilation. HTTP API. |
| **3.** | DID sync | Sync DID attributes from the config file by the provisioner automatically on start using the subxt library. |
| **4.** | UI | Embedded UI to get data from the indexer and show filtered indexed IoT devices. |

### Milestone 2 — Provisioner & Applications

- **Estimated Duration:** 1 month
- **FTE:** 2
- **Costs:** 13,000 USD

| Number | Deliverable | Specification |
| -----: | ----------- | ------------- |
| **0a.** | License | MIT |
| **0b.** | Documentation | 1. How to run and use the provisioner CLI.<br>2. How to start end-to-end tests.<br>3. How to write a new application.<br>4. How the data exchange process works. |
| **0c.** | Testing and Testing Guide | 1. End-to-end tests in Rust for lifecycle management.<br>2. End-to-end tests in Rust for data exchange.<br>3. End-to-end tests in Rust to check the IPFS application integration.<br>4. Provisioner RPC documentation. |
| **0d.** | Docker | We will provide a docker compose file to set up local development. |
| **1.** | Applications lifecycle management | Source code in Rust which is responsible for managing the Docker containers' lifecycle through configuration files: start, get, delete. |
| **2.** | IPFS application | Embedded application (Docker container and provisioner configuration) with an IPFS node which can be run by the provisioner on start. |
| **3.** | Example application | Application (written in Go) which simulates reading from an IoT device sensor and sends data to the provisioner through the data exchange. |
| **4.** | Data exchange | Implementation of the data exchange protocol and RPC server. |
| **5.** | Provisioner CLI | CLI to manage the applications' lifecycle and to get information about consumers and stored data. |

### Milestone 3 — ACL 1

- **Estimated Duration:** 1 month
- **FTE:** 2
- **Costs:** 13,000 USD

| Number | Deliverable | Specification |
| -----: | ----------- | ------------- |
| **0a.** | License | MIT |
| **0b.** | Documentation | 1. How to use the data access smart contract.<br>2. How data distribution works.<br>3. How data accessing works. |
| **0c.** | Testing and Testing Guide | 1. Data access smart contract unit tests.<br>2. Data distribution and access end-to-end tests in Rust. |
| **0d.** | Docker | We will provide a docker compose file to set up local development. |
| **1.** | Data access smart contract | Smart contract to implement data access requests and the token flow. |
| **2.** | Smart contract integration | Use the data access smart contract from the provisioner. |
| **3.** | Data distribution | The provisioner syncs the local database with IPFS, encrypting data for a particular user. |
| **4.** | Data accessing | The provisioner syncs the local database with IPFS, decrypting data from a particular device. |

### Milestone 4 — ACL 2

- **Estimated Duration:** 1 month
- **FTE:** 2
- **Costs:** 13,000 USD

| Number | Deliverable | Specification |
| -----: | ----------- | ------------- |
| **0a.** | License | MIT |
| **0b.** | Documentation | 1. Describe the new features of our data access smart contract.<br>2. Describe how prolongation and expiration work.<br>3. Describe how to use our new user interface features. |
| **0c.** | Testing and Testing Guide | 1. End-to-end tests in Rust for prolongation and expiration cases.<br>2. Unit tests for the new features in the smart contract. |
| **0d.** | Docker | We will provide a docker compose file to set up local development. |
| **0e.** | Article | We will write an article that explains the work done as part of the grant. We will publish this article to our dev.to organization and to our website and blog (https://blog.staex.io). We also want to post a non-technical article on LinkedIn. |
| **1.** | Data access smart contract | Improve the data access smart contract to manage data expiration. Improve the smart contract to keep user tokens for prolongation. |
| **2.** | Data access prolongation | Improve the provisioner to see tokens on the smart contract and prolong data access. |
| **3.** | Data access expiration | Improve the provisioner to remove data from IPFS based on data expiration. |
| **4.** | Accessing data interface | Improve the database structure and UI to see decrypted data from a device on the user's machine. |

## Future Plans

We already have a lot of improvements and plans for the future:

1. At the moment we will implement historical or near-real-time data access, but sometimes devices want to distribute their data in real time. This is a large task and we really want to implement it. It would be implemented with MQTT and some kind of mesh network solution which provides fast and secure access to an IoT device's data stream (possibly using StaexMCC). We already have some draft documentation on how to implement it.
2. It is crucial to implement some kind of oracle to check that data was delivered to the user and that this data is not garbage.
3. Some kind of data indexing within a folder, so the user can apply filters or sort the data.
4. Marketplace - some kind of public marketplace on IPFS to distribute applications and reward application developers.
5. Sometimes it is not possible to start so many services on a single IoT device or to store a lot of data, so we want to give devices the possibility to use a remote IPFS node for data distribution or data pinning.
6.
Support small boards like STM32/ESP32.
7. Provide the user the possibility to request historical data, for example for the past year.
8. Auto-detect sensors on an IoT device and start applications accordingly.
9. A new type of data consumption accounting. In this grant we will implement accounting by amount of data, but it is also possible to request data not in bytes but by event count.
10. A more user-friendly pricing system for mass adoption. An IoT device can ask for its data in some token, for example DOT, but on the website we can ask for data access in, for example, USDT. So we can automatically make an exchange or teleport tokens to a parachain to make it easier to sell or buy data access.
11. To avoid the provisioner scanning every on-chain block to see which user it should send data to on request, provide the possibility for the provisioner to use an external indexer and sync with it.

For project promotion we want to use our community on LinkedIn and our partners. As part of our content marketing we will publish articles in our own blog on our website and through our dev.to organisation. We have also almost launched our podcast.

## Referral Program (optional) :moneybag:

You can find more information about the program [here](../README.md#moneybag-referral-program).

- **Referrer:** [David Hawig](https://github.com/Noc2)

## Additional Information :heavy_plus_sign:

**How did you hear about the Grants Program?** Web3 Foundation Website / Medium / Twitter / Element / Announcement by another team / personal recommendation / etc.

We found this program through a suggestion after winning the [Web3 Foundation hackathon](https://staex.io/paris-blockchain-week-2023). [External article](https://www.essec.edu/en/news/unlocking-future-web3-key-highlights-after-paris-blockchain-week-exclusive-talk-michael-amar-chairman-pbw/).
+ +[Our YouTube channel with podcast and videos.](https://www.youtube.com/@staex5585) + +**Staex itself doesn't collect any revenue or pay to IoT device owners or to users which request data from devices or to any other entity.**