ARC is a transaction processor for Bitcoin that keeps track of the life cycle of a transaction as it is processed by the Bitcoin network. In addition to the mining status of a transaction, ARC also keeps track of the various states that a transaction can be in.
- Documentation
- Configuration
- How to run ARC
- Microservices
- Message Queue
- K8s-Watcher
- Broadcaster-cli
- Tests
- Monitoring
- Building ARC
- Acknowledgements
- Contribution Guidelines
- Support & Contacts
- Find full documentation at https://bitcoin-sv.github.io/arc
Settings for ARC are defined in a configuration file. The default configuration is shown in `config/example_config.yaml`. Each setting is documented in the file itself.
If you want to load `config.yaml` from a different location, you can specify it on the command line using the `-config=<path>` flag.
ARC also has a default configuration specified in code (`config/defaults.go`); therefore, in your `config.yaml` you can specify only the values that you want to override. Example:
```yaml
---
logLevel: INFO
logFormat: text
network: mainnet
tracing:
  dialAddr: http://tracing:1234
```
The rest of the settings will be taken from defaults.
Each setting in the file `config.yaml` can be overridden with an environment variable. The environment variable needs to have the prefix `ARC_`. Sub-settings are separated using an underscore character. For example, the following config setting could be overridden by the environment variable `ARC_METAMORPH_LISTENADDR`:
```yaml
metamorph:
  listenAddr:
```
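For instance, the listen address could be overridden for a single run like this (the address value is only an illustration):

```bash
export ARC_METAMORPH_LISTENADDR=localhost:8011
go run main.go
```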
To run all the microservices in one process (during development), use the `main.go` file in the root directory:

```bash
go run main.go
```
The `main.go` file accepts the following flags (`main.go --help`):
```
usage: main [options]
where options are:

    -api=<true|false>
          whether to start ARC api server (default=true)

    -metamorph=<true|false>
          whether to start metamorph (default=true)

    -blocktx=<true|false>
          whether to start block tx (default=true)

    -k8s-watcher=<true|false>
          whether to start k8s-watcher (default=true)

    -config=/location
          directory to look for config.yaml (default='')

    -dump_config=/file.yaml
          dump config to specified file and exit (default='config/dumped_config.yaml')
```
Each microservice can also be started individually, e.g. `go run main.go -api=true`.
NOTE: If you start `main.go` with one of the microservice flags set to true, the other services will not be started. For example, if you run `go run main.go -api=true`, only the API server will be started. You can, however, start multiple services by specifying several flags on the command line.
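For example, to start the API server and Metamorph together in one process (any combination of the service flags works the same way):

```bash
go run main.go -api=true -metamorph=true
```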
In order to run ARC there needs to be a Postgres database available. The connection to the database is defined in the `config.yaml` file. The database needs to be created before running ARC. The migrations for the database can be found in the `internal/metamorph/store/postgresql/migrations` folder. The migrations can be executed using the go-migrate tool (see sections Metamorph stores and BlockTx stores).
Additionally, ARC relies on a message queue to communicate between Metamorph and BlockTx (see section Message Queue). The message queue can be started as a docker container. The docker image can be found here. The message queue can be started like this:

```bash
docker run -p 4222:4222 nats
```
The docker-compose file additionally shows how ARC can be run with the message queue, the Postgres database and the db migrations. You can run ARC with all components with the following command:

```bash
docker-compose -f deployments/docker-compose.yml up
```
ARC can be run as a docker container. The docker image can be built using the provided `Dockerfile` (see section Building ARC). The latest docker image of ARC can be found here.
API is the REST API microservice for interacting with ARC. See the API documentation for more information.
The API takes care of authentication, validation, and sending transactions to Metamorph. The API talks to one or more Metamorph instances using client-based, round-robin load balancing.
To register a callback, the client must add the `X-CallbackUrl` header to the request. The callbacker will then send a POST request to the URL specified in the header, with the transaction ID in the body. See the API documentation for more information.
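For illustration, a submission with a callback registered might look like this (host, callback URL and payload are placeholders; the exact endpoint and accepted content types are described in the API documentation):

```bash
curl -X POST "https://arc.example.com/v1/tx" \
  -H "X-CallbackUrl: https://my-service.example.com/arc-callback" \
  -H "Content-Type: text/plain" \
  -d "<raw transaction hex>"
```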
You can run the API like this:

```bash
go run main.go -api=true
```

or via its dedicated entry point:

```bash
go run cmd/api/main.go
```

The only difference between the two is that the generic `main.go` starts the Go profiler, while the specific `cmd/api/main.go` command does not.
If you want to integrate the ARC API into an existing echo server, check out the examples folder in the GitHub repo.
Metamorph is a microservice that is responsible for processing transactions sent by the API to the Bitcoin network. It takes care of re-sending transactions if they are not acknowledged by the network within a certain time period (60 seconds by default).
Metamorph is designed to be horizontally scalable, with each instance operating independently. As a result, they do not communicate with each other and remain unaware of each other's existence.
You can run metamorph like this:

```bash
go run main.go -metamorph=true
```
Metamorph keeps track of the lifecycle of a transaction, and assigns it a status. The following statuses are available:
| Code | Status | Description |
|------|--------|-------------|
| 0 | UNKNOWN | The transaction has been sent to metamorph, but no processing has taken place. This should never be the case, unless something goes wrong. |
| 1 | QUEUED | The transaction has been queued for processing. |
| 2 | RECEIVED | The transaction has been properly received by the metamorph processor. |
| 3 | STORED | The transaction has been stored in the metamorph store. This should ensure the transaction will be processed and retried if not picked up immediately by a mining node. |
| 4 | ANNOUNCED_TO_NETWORK | The transaction has been announced (INV message) to the Bitcoin network. |
| 5 | REQUESTED_BY_NETWORK | The transaction has been requested from metamorph by a Bitcoin node. |
| 6 | SENT_TO_NETWORK | The transaction has been sent to at least 1 Bitcoin node. |
| 7 | ACCEPTED_BY_NETWORK | The transaction has been accepted by a connected Bitcoin node on the ZMQ interface. If metamorph is not connected to ZMQ, this status will never be set. |
| 8 | SEEN_ON_NETWORK | The transaction has been seen on the Bitcoin network and propagated to other nodes. This status is set when metamorph receives an INV message for the transaction from a node other than the one it was sent to. |
| 9 | MINED | The transaction has been mined into a block by a mining node. |
| 10 | SEEN_IN_ORPHAN_MEMPOOL | The transaction has been sent to at least 1 Bitcoin node, but its parent transaction was not found. |
| 108 | CONFIRMED | The transaction is marked as confirmed when it is in a block with 100 blocks built on top of that block. (Currently this status is not maintained.) |
| 109 | REJECTED | The transaction has been rejected by the Bitcoin network. |
This status is returned in the `txStatus` field whenever the transaction is queried.
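For example, assuming an ARC instance at a placeholder host, the current status could be fetched like this (the response contains the `txStatus` field; see the API documentation for the exact endpoint):

```bash
curl "https://arc.example.com/v1/tx/<txid>"
```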
Currently, Metamorph only offers one storage implementation, which is Postgres.
Migrations have to be executed prior to starting Metamorph. For this you'll need the go-migrate tool. Once `go-migrate` has been installed, the migrations can be executed as follows:
```bash
migrate -database "postgres://<username>:<password>@<host>:<port>/<db-name>?sslmode=<ssl-mode>" -path internal/metamorph/store/postgresql/migrations up
```
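For example, against a hypothetical local development database named `metamorph` (credentials are illustrative only):

```bash
migrate -database "postgres://arc:arc@localhost:5432/metamorph?sslmode=disable" -path internal/metamorph/store/postgresql/migrations up
```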
Metamorph can connect to multiple Bitcoin nodes, and will use a subset of the nodes to send transactions to. The other nodes will be used to listen for transaction INV messages, which trigger the SEEN_ON_NETWORK status of a transaction.
The Bitcoin nodes can be configured in the settings file.
Metamorph talks to the Bitcoin nodes over the p2p network. If metamorph sends invalid transactions to a Bitcoin node, it will be banned by that node. Either make sure not to send invalid or double-spend transactions through metamorph, or make sure that all metamorph servers are whitelisted on the Bitcoin nodes they are connecting to.
Although not required, ZMQ can be used to listen for transaction messages (`hashtx`, `invalidtx`, `discardedfrommempool`).
This is especially useful if you are not connecting to multiple Bitcoin nodes, and therefore are not receiving `INV` messages for your transactions. Currently, ARC can only detect whether a transaction was rejected (e.g. due to double spending) if ZMQ is connected to at least one node.
If you want to use ZMQ, you can set the `host.port.zmq` setting for the respective `peers` entry in the configuration file.
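A sketch of what this could look like, inferred from the setting path above (the actual structure and port numbers should be checked against `config/example_config.yaml`):

```yaml
peers:
  - host: localhost
    port:
      p2p: 18333  # p2p port of the Bitcoin node
      zmq: 28332  # ZMQ port of the Bitcoin node (illustrative value)
```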
ZMQ does seem to be a bit faster than the p2p network, so it is recommended to turn it on, if available.
BlockTx is a microservice that is responsible for processing blocks mined on the Bitcoin network, and for propagating the status of transactions to Metamorph. The communication between BlockTx and Metamorph is asynchronous and happens through a message queue. More details about that message queue can be found here.
The main purpose of BlockTx is to de-duplicate the processing of (large) blocks. As an incoming block is processed by BlockTx, each Metamorph instance is notified of the transactions it has registered an interest in. BlockTx does not store the transaction data, but instead stores only the transaction IDs and the block height in which they were mined. Metamorph is responsible for storing the transaction data.
You can run BlockTx like this:

```bash
go run main.go -blocktx=true
```
Currently, BlockTx only offers one storage implementation, which is Postgres.
Migrations have to be executed prior to starting BlockTx. For this you'll need the go-migrate tool. Once `go-migrate` has been installed, the migrations can be executed as follows:
```bash
migrate -database "postgres://<username>:<password>@<host>:<port>/<db-name>?sslmode=<ssl-mode>" -path internal/blocktx/store/postgresql/migrations up
```
For the communication between Metamorph and BlockTx a message queue is used. Currently, the only available implementation of that message queue uses NATS. A message queue of this type has to be running in order for ARC to run.
Metamorph publishes new transactions to the message queue; BlockTx subscribes to the message queue, receives the transactions and stores them. Once BlockTx finds that these transactions have been mined in a block, it updates the block information and publishes it to the message queue. Metamorph subscribes to the message queue, receives the block information and updates the status of the transactions.
The K8s-Watcher is a service which is needed for a special use case. If ARC runs on a Kubernetes cluster, the K8s-Watcher can be run as a safety measure. Due to the centralisation of the `metamorph` storage, each `metamorph` pod has to ensure the exclusive processing of records by locking them. If `metamorph` shuts down gracefully, it will unlock all the records it holds in memory. A graceful shutdown is not guaranteed though. For this eventuality the K8s-Watcher can be run in a separate pod. It detects when `metamorph` pods are terminated and additionally calls the `metamorph` service to unlock the records of the terminated pod. This ensures that no records stay in a locked state.
The K8s-Watcher can be started as follows:

```bash
go run main.go -k8s-watcher=true
```
The `broadcaster-cli` provides a set of functions which allow interaction with any instance of ARC. It also provides functions for key sets.
The broadcaster-cli can be installed using the following command:

```bash
go install github.com/bitcoin-sv/arc/cmd/broadcaster-cli@latest
```
If the ARC repository is checked out, it can also be installed from that local repository like this:

```bash
go install ./cmd/broadcaster-cli/
```
`broadcaster-cli` uses flags for adding the context needed to run it. The available flags and commands can be shown by running `broadcaster-cli` with the `--help` flag.
As there can be a lot of flags, you can also define them in a `.env` file. For example:

```
keyfile=./cmd/broadcaster-cli/arc-0.key
testnet=true
```
If a `.env` file is present in either the folder where `broadcaster-cli` is run or the folder `./cmd/broadcaster-cli/`, then these values will be used as flags (if available to the command). You can still provide the flags; in that case the value provided in the flag will override the value provided in `.env`.
These instructions provide the steps needed in order to use `broadcaster-cli` to send transactions to ARC (a condensed walkthrough follows the list).
1. Create a new key set by running `broadcaster-cli keyset new`
   - You can give a path where the key set should be stored as a file using the `--filename` flag
     - Existing files will not be overwritten
     - Omitting the `--filename` flag will create the file using the file name `./cmd/broadcaster-cli/arc-{i}.key`, where `i` is an iterator counting up until an available file name is found
   - The keyfile flag `--keyfile=<path to key file>` and the `--testnet` flag have to be given in all commands except `broadcaster-cli keyset new`
2. Add funds to the funding address
   - Show the funding address by running `broadcaster-cli keyset address`
   - In case of testnet (using the `--testnet` flag), funds can be added using the WoC faucet. For that you can use the command `broadcaster-cli keyset topup --testnet`
   - You can view the balance of the key set using the command `broadcaster-cli keyset balance`
3. Create a utxo set
   - A sufficient utxo set must be available so that `broadcaster-cli` can broadcast a reasonable number of transactions in batches
   - First look at the existing utxo set using `broadcaster-cli keyset utxos`
   - In order to create more outputs, use the command `broadcaster-cli utxos create --outputs=<number of outputs> --satoshis=<number of satoshis per output>`
     - This command will send transactions creating the requested outputs to ARC. There are more flags needed for this command; please see `go run cmd/broadcaster-cli/main.go utxos -h` for more details
   - See the new distribution of utxos using `broadcaster-cli keyset utxos`
4. Broadcast transactions to ARC
   - Now `broadcaster-cli` can be used to broadcast transactions to ARC at a given rate using the command `broadcaster-cli utxos broadcast --rate=<txs per second> --batchsize=<nr of txs per batch>`
   - The limit flag `--limit=<nr of transactions at which broadcasting stops>` is optional. If it is not given, `broadcaster-cli` will only stop when aborted, e.g. using `CTRL+C`
   - The optional `--store` flag will store all the responses of each request to ARC in a folder `results/` as json files
   - In order to broadcast a large number of transactions in parallel, multiple key sets can be given as a comma-separated list using the keyfile flag, e.g. `--keyfile=./cmd/broadcaster-cli/arc-0.key,./cmd/broadcaster-cli/arc-1.key,./cmd/broadcaster-cli/arc-2.key`
     - Each concurrently running broadcasting process will broadcast at the given rate
     - For example: if a rate of `--rate=100` is given with 3 key files `--keyfile=arc-1.key,arc-2.key,arc-3.key`, then the final rate will be 300 transactions per second
5. Consolidate outputs
   - If not enough outputs are available for another test run, it is best to consolidate the outputs so that only one output remains, using `broadcaster-cli utxos consolidate`
   - After this step you can continue with step 4
     - Before continuing with step 4 it is advisable to wait until all consolidation transactions have been mined
     - The command `broadcaster-cli keyset balance` shows both the amount of satoshis in the balance that has been confirmed and the amount which has not yet been confirmed
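Putting the steps together, a minimal testnet session could look like the sketch below. The output counts, satoshi amounts and rates are illustrative only, and further flags may be required; consult `broadcaster-cli --help` for details.

```bash
# 1. create a key set
broadcaster-cli keyset new --filename=./cmd/broadcaster-cli/arc-0.key

# 2. fund the key set via the WoC faucet and check the balance
broadcaster-cli keyset address --keyfile=./cmd/broadcaster-cli/arc-0.key --testnet
broadcaster-cli keyset topup   --keyfile=./cmd/broadcaster-cli/arc-0.key --testnet
broadcaster-cli keyset balance --keyfile=./cmd/broadcaster-cli/arc-0.key --testnet

# 3. split the funds into outputs for batched broadcasting (illustrative values)
broadcaster-cli utxos create --outputs=100 --satoshis=1000 \
  --keyfile=./cmd/broadcaster-cli/arc-0.key --testnet

# 4. broadcast at 10 txs per second in batches of 10 (illustrative values)
broadcaster-cli utxos broadcast --rate=10 --batchsize=10 \
  --keyfile=./cmd/broadcaster-cli/arc-0.key --testnet

# 5. consolidate the remaining outputs for the next run
broadcaster-cli utxos consolidate --keyfile=./cmd/broadcaster-cli/arc-0.key --testnet
```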
In order to run the unit tests, do the following:

```bash
make test
```
Integration tests of the postgres database need Docker installed in order to run them. If the `colima` implementation of Docker is being used on macOS, the `DOCKER_HOST` environment variable may need to be set as follows:

```bash
DOCKER_HOST=unix:///Users/<username>/.colima/default/docker.sock make test
```
These integration tests can be excluded from execution with `go test ./...` by adding the `-short` flag, like this: `go test -short ./...`.
The end-to-end tests are located in the folder `test`. Docker needs to be installed in order to run them. End-to-end tests can be run locally together with ARC and 3 nodes using the provided docker-compose file.
The tests can be executed like this:

```bash
make clean_restart_e2e_test
```
The docker-compose file also shows the minimum setup that is needed for ARC to run.
Prometheus can collect ARC metrics. It improves observability in production and enables debugging during development and deployment. As Prometheus is a very standard tool for monitoring, complementary tools such as Grafana can be added for better data analysis.
Prometheus periodically polls the system data by querying specific URLs.
ARC can expose a Prometheus endpoint that can be used to monitor the metamorph servers. Set the `prometheusEndpoint` setting in the settings file to activate Prometheus. Normally you would want to set this to `/metrics`.
Enabling monitoring consists of setting the `prometheusEndpoint` property in the `config.yaml` file:

```yaml
prometheusEndpoint: /metrics # endpoint for prometheus metrics
```
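On the Prometheus side, a minimal scrape configuration for this endpoint could look like the following sketch (job name, target host and port are assumptions for illustration):

```yaml
scrape_configs:
  - job_name: arc              # illustrative job name
    metrics_path: /metrics     # must match prometheusEndpoint above
    static_configs:
      - targets: ['localhost:9090']  # replace with the actual ARC host:port
```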
Each service runs an HTTP profiler server if it is configured in `config.yaml`. In order to access it, a connection can be created using the Go `pprof` tool. For example, to investigate the memory usage:

```bash
go tool pprof http://localhost:9999/debug/pprof/allocs
```
Then type `top` to see the functions which consume the most memory. Find more information here.
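The profiler also serves the other standard Go `pprof` endpoints, so (assuming the same profiler address as above) CPU and goroutine profiles can be fetched in the same way:

```bash
go tool pprof http://localhost:9999/debug/pprof/heap       # in-use heap memory
go tool pprof http://localhost:9999/debug/pprof/profile    # 30-second CPU profile
go tool pprof http://localhost:9999/debug/pprof/goroutine  # current goroutines
```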
In order to enable tracing for each service, the respective setting in the service has to be set in `config.yaml`:
```yaml
tracing:
  enabled: true # is tracing enabled
  dialAddr: http://localhost:4317 # address where traces are exported to
```
Currently the traces are exported only in the OpenTelemetry protocol (OTLP) on the gRPC endpoint. The endpoint URL of the receiving tracing backend (e.g. Jaeger, Grafana Tempo, etc.) can be configured with the respective `tracing.dialAddr` setting.
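For local testing, one option is to run a Jaeger all-in-one container as the OTLP/gRPC receiver on the port configured above (the image and flags are an example, not an ARC requirement; recent Jaeger versions enable OTLP by default):

```bash
docker run --rm \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 4317:4317 \
  -p 16686:16686 \
  jaegertracing/all-in-one
```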
For building the ARC binary, there is a make target available. ARC can be built for Linux OS and amd64 architecture using:

```bash
make build_release
```
Once this is done, a docker image can additionally be built using:

```bash
make build_docker
```
gRPC code is generated from protobuf definitions. In order to generate it, the necessary tools need to be installed first by running:

```bash
make install_gen
```
Additionally, protoc needs to be installed.
Once that is done, gRPC code can be generated by running:

```bash
make gen
```
The REST API is defined in a yaml file following the OpenAPI 3.0.0 specification. Before the REST API code can be generated, install the necessary tools by running:

```bash
make install_gen
```
Once that is done, the API code can be generated by running:

```bash
make api
```
Before the documentation can be generated, swagger-cli and widdershins need to be installed. Once that is done, the documentation can be created by running:

```bash
make docs
```
Special thanks to rloadd for his inputs to the documentation of ARC.
We're always looking for contributors to help us improve the project. Whether it's bug reports, feature requests, or pull requests - all contributions are welcome.
- Fork & Clone: Fork this repository and clone it to your local machine.
- Set Up: Run `make deps` to install all dependencies.
- Make Changes: Create a new branch and make your changes.
- Test: Ensure all tests pass by running `make test` and `make clean_restart_e2e_test`.
- Commit: Commit your changes and push to your fork.
- Pull Request: Open a pull request from your fork to this repository.
For more details, check the contribution guidelines.
For information on past releases, check out the changelog.
For questions, bug reports, or feature requests, please open an issue on GitHub.