diff --git a/README.md b/README.md index fa4d13e2..38d4d41b 100644 --- a/README.md +++ b/README.md @@ -1,16 +1,30 @@ -# LNDg -Lite GUI web interface to analyze lnd data and manage your node with automation. +# LNDg - GUI for LND Data Analysis and Node Management -Start by choosing one of the following installation methods: [Docker Installation](https://github.com/cryptosharks131/lndg#docker-installation-requires-docker-and-docker-compose-be-installed) | [Manual Installation](https://github.com/cryptosharks131/lndg#manual-installation) +Welcome to LNDg, an advanced web interface designed for analyzing LND data and automating node management tasks. -LNDg can also be found directly on popular apps like Umbrel and Citadel with a 1-click install from the GUI. +Choose your preferred installation method: -## Docker Installation (requires docker and docker-compose be installed) -### Build and deploy -1. Clone respository `git clone https://github.com/cryptosharks131/lndg.git` -2. Change directory into the repo `cd lndg` -3. Copy and replace the contents (adjust custom volume paths to LND and LNDg folders) of the `docker-compose.yaml` with the below: `nano docker-compose.yaml` +- **1-Click Installation**: Easily install LNDg directly from popular platforms like Umbrel, Citadel, Start9 and RaspiBlitz. +- [Docker Installation](https://github.com/cryptosharks131/lndg#docker-installation-requires-docker-and-docker-compose-be-installed): Ideal for users familiar with Docker and Docker Compose. +- [Manual Installation](https://github.com/cryptosharks131/lndg#manual-installation): If you prefer a hands-on approach to set up LNDg. + + +## Docker Installation (requires Docker and Docker Compose) + +### Prepare Install + +```bash +# Clone the repository +git clone https://github.com/cryptosharks131/lndg.git + +# Change directory to the repository +cd lndg + +# Customize the docker-compose.yaml file +nano docker-compose.yaml ``` +**Replace the contents of `docker-compose.yaml` with your desired volume paths and settings. An example configuration is shown below.** +```yaml services: lndg: build: . @@ -23,13 +37,19 @@ services: - python initialize.py -net 'mainnet' -server '127.0.0.1:10009' -d && supervisord && python manage.py runserver 0.0.0.0:8889 network_mode: "host" ``` -4. Deploy your docker image: `docker-compose up -d` -5. LNDg should now be available on port `http://localhost:8889` -6. Open and copy the password from output file: `nano data/lndg-admin.txt` -7. Use the password from the output file and the username `lndg-admin` to login +### Build and Deploy +```bash +# Deploy the Docker image +docker-compose up -d -### Updating +# Retrieve the admin password for login +nano data/lndg-admin.txt ``` +- **This example configuration will host LNDg at http://0.0.0.0:8889. Use the machine IP to reach the LNDg instance.** +- **Log in to LNDg using the provided password and the username `lndg-admin`.** + +### Updating +```bash docker-compose down docker-compose build --no-cache docker-compose up -d @@ -37,49 +57,55 @@ docker-compose up -d # OPTIONAL: remove unused builds and objects docker system prune -f ``` - ## Manual Installation -### Step 1 - Install lndg -1. Clone respository `git clone https://github.com/cryptosharks131/lndg.git` -2. Change directory into the repo `cd lndg` -3. Make sure you have python virtualenv installed `sudo apt install virtualenv` -4. Setup a python3 virtual environment `virtualenv -p python3 .venv` -5. Install required dependencies `.venv/bin/pip install -r requirements.txt` -6. 
Initialize some settings for your django site (see notes below) `.venv/bin/python initialize.py`
-7. The initial login user is `lndg-admin` and the password is output here: `data/lndg-admin.txt`
-8. Generate some initial data for your dashboard `.venv/bin/python jobs.py`
-9. Run the server via a python development server `.venv/bin/python manage.py runserver 0.0.0.0:8889`
-Tip: If you plan to only use the development server, you will need to setup whitenoise (see note below).
-
-### Step 2 - Setup Backend Data, Automated Rebalancing and HTLC Stream Data
-The files `jobs.py`, `rebalancer.py` and `htlc_stream.py` inside lndg/gui/ serve to update the backend database with the most up to date information, rebalance any channels based on your lndg dashboard settings and to listen for any failure events in your htlc stream. A refresh interval of at least 10-20 seconds is recommended for the `jobs.py` and `rebalancer.py` files for the best user experience.
-
-Recommend Setup With Supervisord (least setup) or Systemd (most compatible)
-1. Supervisord
-   a) Setup supervisord config `.venv/bin/python initialize.py -sd`
-   b) Install Supervisord `.venv/bin/pip install supervisor`
-   c) Start Supervisord `supervisord`
-
-2. Systemd (2 options)
-   Option 1 - Bash script install (requires install at ~/lndg) `sudo bash systemd.sh`
-   Option 2 - [Manual Setup Instructions](https://github.com/cryptosharks131/lndg/blob/master/systemd.md)
-
-Alternatively, you may also make your own task for these files with your preferred tool (task scheduler/cronjob/etc).
+
+### Step 1 - Install LNDg
+
+1. Clone the repository: `git clone https://github.com/cryptosharks131/lndg.git`
+2. Change the directory into the repository: `cd lndg`
+3. Ensure you have Python virtualenv installed: `sudo apt install virtualenv`
+4. Set up a Python 3 virtual environment: `virtualenv -p python3 .venv`
+5. Install the required dependencies: `.venv/bin/pip install -r requirements.txt`
+6. Initialize necessary settings for your Django site (refer to notes below): `.venv/bin/python initialize.py`
+7. The initial login user is `lndg-admin`, and the password can be found here: `data/lndg-admin.txt`
+8. Generate initial data for your dashboard: `.venv/bin/python jobs.py`
+9. Run the server using a Python development server: `.venv/bin/python manage.py runserver 0.0.0.0:8889`
+
+*Note: If you plan to use the development server exclusively, you will need to set up whitenoise (see note below).*
+
+### Step 2 - Setup Backend Controller For Data, Automated Rebalancing, and HTLC Stream Data
+
+The file `controller.py` at the root of the repository orchestrates the services needed to update the backend database with the most up-to-date information, rebalance any channels based on your LNDg dashboard settings, and listen for any failure events in your HTLC stream.
+
+**Recommended Setup with Supervisord (least setup) or Systemd (most compatible):**
+
+1. **Systemd (2 options)**
+   - Option 1 - Bash script install (requires LNDg at `~/lndg`): `sudo bash systemd.sh`
+   - Option 2 - [Manual Setup Instructions](https://github.com/cryptosharks131/lndg/blob/master/systemd.md)
+
+2. **Supervisord**
+   - Configure Supervisord by running: `.venv/bin/python initialize.py -sd`
+   - Install Supervisord: `.venv/bin/pip install supervisor`
+   - Start Supervisord: `supervisord`
+
+Alternatively, you may create your own task for `controller.py` using your preferred tool (task scheduler, cron job, etc).
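If you want to verify the backend controller before handing it to Supervisord or Systemd, it can be run directly in the foreground from the repository root (this assumes the `.venv` created in Step 1):

```bash
# Runs jobs.py, rebalancer.py, htlc_stream.py and p2p.py as child processes
# of a single controller process; stop with Ctrl+C
.venv/bin/python controller.py
```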
### Updating

-1. Make sure you are in the lndg folder `cd lndg`
-2. Pull the new files `git pull`
-3. Migrate any database changes `.venv/bin/python manage.py migrate`
+
+1. Make sure you are in the LNDg folder: `cd lndg`
+2. Pull the new files from the repository: `git pull`
+3. Migrate any database changes: `.venv/bin/python manage.py migrate`

### Notes

-1. If you are not using the default settings for LND or you would like to run a LND instance on a network other than `mainnet` you can use the correct flags in step 6 (see `initialize.py --help`) or you can edit the variables directly in `lndg/lndg/settings.py`.
-2. Some systems have a hard time serving static files (docker/macOs) and installing whitenoise and configuring it can help solve this issue.
-   You can use the following to install and setup whitenoise:
-   `.venv/bin/pip install whitenoise && rm lndg/settings.py && .venv/bin/python initialize.py -wn`
-4. If you want to recreate a settings file, delete it from `lndg/lndg/settings.py` and rerun. `initialize.py`
-5. If you plan to run this site continuously, consider setting up a proper web server to host it (see Nginx below).
-6. You can manage your login credentials from the admin page. Example: `lndg.local/lndg-admin`
-7. If you have issues reaching the site, verify the firewall is open on port 8889 where LNDg is running
+
+1. If not using default settings for LND or would like to run on a network other than `mainnet`, use the correct flags in step 6 (see `initialize.py --help`) or edit the variables directly in `lndg/lndg/settings.py`.
+2. You cannot run the development server outside of DEBUG mode due to static file issues. To address this, install and configure Whitenoise by running the following command:
+```bash
+.venv/bin/pip install whitenoise && rm lndg/settings.py && .venv/bin/python initialize.py -wn
+```
+3. If you need to recreate a settings file, delete `lndg/lndg/settings.py` and rerun `initialize.py`.
+4. If you plan to run this site continuously, it's advisable to set up a proper web server to host it (see Nginx below).
+5. You can manage your login credentials from the admin page, accessible at `/lndg-admin`.
+6. If you encounter issues accessing the site, ensure that any firewall is open on port 8889, where LNDg is running.

### Setup lndg initialize.py options
1. `-ip` or `--nodeip` - Accepts only this host IP to serve the LNDg page - default: `*`
@@ -93,37 +119,50 @@ Alternatively, you may also make your own task for these files with your preferr
9. `-d` or `--docker` - Single option for docker container setup (supervisord + whitenoise) - default: `False`
10. `-dx` or `--debug` - Setup the django site in debug mode - default: `False`
11. `-u` or `--adminuser` Setup a custom admin username - default: `lndg-admin`
-10. `-pw` or `--adminpw` Setup a custom admin password - default: `Randomized`
-10. `-csrf` or `--csrftrusted` Set trusted CSRF origins - default: `None`
-10. `-tls` or `--tlscert` Set a custom path to the tls cert - default: `--lnddir used`
-10. `-mcrn` or `--macaroon` Set a custom path to the macroon file - default: `--lnddir used`
-10. `-lnddb` or `--lnddatabase` Set a custom path to the channel.db for monitoring - default: `--lnddir used`
-10. `-nologin` or `--nologinrequired` Remove authentication requirements from LNDg - default: `False`
+12. `-pw` or `--adminpw` Setup a custom admin password - default: `Randomized`
+13. `-csrf` or `--csrftrusted` Set trusted CSRF origins - default: `None`
+14. `-tls` or `--tlscert` Set a custom path to the tls cert - default: `--lnddir used`
+15. `-mcrn` or `--macaroon` Set a custom path to the macaroon file - default: `--lnddir used`
+16. `-lnddb` or `--lnddatabase` Set a custom path to the channel.db for monitoring - default: `--lnddir used`
+17. `-nologin` or `--nologinrequired` Remove authentication requirements from LNDg - default: `False`
+18. `-f` or `--force` Force the replacement of an existing settings file - default: `False`
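As a purely illustrative combination of the options above (the network, server address, and macaroon path are placeholders for your own setup):

```bash
# Initialize against a testnet LND with a custom macaroon path and whitenoise enabled
.venv/bin/python initialize.py -net 'testnet' -server '127.0.0.1:10009' -mcrn '/path/to/admin.macaroon' -wn
```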
### Using A Webserver
-You can serve the dashboard at all times using a webserver instead of the development server. Using a webserver will serve your static files and installing whitenoise is not required when running in this manner. Any webserver can be used to host the site if configured properly. A bash script has been included to help aide in the setup of a nginx webserver.
-`sudo bash nginx.sh`
+You can serve the dashboard at all times using a webserver instead of the development server. Using a webserver will serve your static files, and installing whitenoise is not required when running in this manner. Any webserver can be used to host the site if configured properly. A bash script has been included to help aid in the setup of an nginx webserver.
+
+To set up the nginx webserver, run the following command:
+
+```bash
+sudo bash nginx.sh
+```

+### When Updating
+When updating your LNDg installation, follow the same steps as described above. However, after updating, you will also need to restart the uWSGI service to apply the changes to the user interface (UI).
-When updating, follow the same steps as above. You will also need to restart the uwsgi service for changes to take affect on the UI.
-`sudo systemctl restart uwsgi.service`
+To restart the uWSGI service, use the following command:
+
+```bash
+sudo systemctl restart uwsgi.service
+```

## Key Features
+
### Track Peer Events
LNDg will track the changes your peers make to channel policies you have in open channels and any connection events that may happen with those channels.

### Batch Opens
-You can use LNDg to batch open up to 10 channels at a time with a single transaction. This can help to signicantly reduce the channel open fees incurred when opening multiple channels.
+You can use LNDg to batch open up to 10 channels at a time with a single transaction. This can help to significantly reduce the channel open fees incurred when opening multiple channels.

### Watch Tower Management
-You can use LNDg to add, monitor or remove watch towers from the lnd node.
+You can use LNDg to add, monitor, or remove watch towers from the LND node.

### Suggests Fee Rates
LNDg will make suggestions on an adjustment to the current set outbound fee rate for each channel. This uses historical payment and forwarding data over the last 7 days to drive suggestions. You can use the Auto-Fees feature in order to automatically act upon the suggestions given.

-You may see another adjustment right after setting the new suggested fee rate on some channels. This is normal and you should wait ~24 hours before changing the fee rate again on any given channel.
+You may see another adjustment right after setting the new suggested fee rate on some channels. This is normal, and you should wait ~24 hours before changing the fee rate again on any given channel.

### Suggests New Peers
-LNDg will make suggestions for new peers to open channels to based on your node's successful routing history.
+LNDg will make suggestions for new peers to open channels to based on your node's successful routing history.
+
#### There are two unique values in LNDg:
1. 
Volume Score - A score based upon both the count of transactions and the volume of transactions routed through the peer
2. Savings By Volume (ppm) - The amount of sats you could have saved during rebalances if you were peered directly with this node over the total amount routed through the peer
@@ -131,7 +170,7 @@ LNDg will make suggestions for new peers to open channels to based on your node'
### Channel Performance Metrics
#### LNDg will aggregate your payment and forwarding data to provide the following metrics:
1. Outbound Flow Details - This shows the amount routed outbound next to the amount rebalanced in
-2. Revenue Details - This shows the revenue earned on the left, the profit (revenue - cost) in the middle and the assisted revenue (amount earned due to this channel's inbound flow) on the right
+2. Revenue Details - This shows the revenue earned on the left, the profit (revenue - cost) in the middle, and the assisted revenue (amount earned due to this channel's inbound flow) on the right
3. Inbound Flow Details - This shows the amount routed inbound next to the amount rebalanced out
4. Updates - This is the number of updates the channel has had and is directly correlated to the space it takes up in channel.db

@@ -179,7 +218,7 @@ AF Notes:
## Auto-Rebalancer - [Quick Start Guide](https://github.com/cryptosharks131/lndg/blob/master/quickstart.md)
### Here are some additional notes to help you better understand the Auto-Rebalancer (AR).
-The objective of the Auto-Rebalancer is to "refill" the liquidity on the local side (i.e. OUTBOUND) of profitable and lucarative channels. So that, when a forward comes in from another node there is always enough liquidity to route the payment and in return collect the desired routing fees.
+The objective of the Auto-Rebalancer is to "refill" the liquidity on the local side (i.e. OUTBOUND) of profitable and lucrative channels, so that when a forward comes in from another node there is always enough liquidity to route the payment and, in return, collect the desired routing fees.
1. The AR variable `AR-Enabled` must be set to 1 (enabled) in order to start looking for new rebalance opportunities. (default=0)
2. The AR variable `AR-Target%` defines the % size of the channel capacity you would like to use for rebalance attempts. Example: If a channel size is 1M Sats and AR-Target% = 5, LNDg will select an amount of 5% of 1M = 50K for rebalancing. (default=3)
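To make the `AR-Target%` arithmetic concrete, here is a minimal sketch of the per-channel target calculation (it mirrors the `ar_amt_target` logic in the `gui/models.py` hunk further down this diff; the function name is illustrative):

```python
def ar_amt_target(capacity_sats: int, ar_target_pct: float) -> int:
    # AR-Target% is stored as a percentage, so divide by 100 before
    # applying it to the channel capacity.
    return int((ar_target_pct / 100) * capacity_sats)

# A 1M sat channel with AR-Target% = 5 selects 50,000 sats per attempt.
assert ar_amt_target(1_000_000, 5) == 50_000
```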
diff --git a/gui/af.py b/af.py
similarity index 84%
rename from gui/af.py
rename to af.py
index bc21b595..84164608 100644
--- a/gui/af.py
+++ b/af.py
@@ -45,7 +45,7 @@ def main(channels):
     if LocalSettings.objects.filter(key='AF-LowLiqLimit').exists():
         lowliq_limit = int(LocalSettings.objects.filter(key='AF-LowLiqLimit').get().value)
     else:
-        LocalSettings(key='AF-LowLiqLimit', value='5').save()
-        lowliq_limit = 5
+        LocalSettings(key='AF-LowLiqLimit', value='15').save()
+        lowliq_limit = 15
     if LocalSettings.objects.filter(key='AF-ExcessLimit').exists():
         excess_limit = int(LocalSettings.objects.filter(key='AF-ExcessLimit').get().value)
@@ -57,12 +57,18 @@ def main(channels):
         lowliq_limit = 5
         excess_limit = 95
     forwards = Forwards.objects.filter(forward_date__gte=filter_7day, amt_out_msat__gte=1000000)
+    forwards_1d = forwards.filter(forward_date__gte=filter_1day)
+    if forwards_1d.exists():
+        forwards_df_in_1d_sum = DataFrame.from_records(forwards_1d.values('chan_id_in').annotate(amt_out_msat=Sum('amt_out_msat'), fee=Sum('fee')), 'chan_id_in')
+    else:
+        forwards_df_in_1d_sum = DataFrame()
     if forwards.exists():
         forwards_df_in_7d_sum = DataFrame.from_records(forwards.values('chan_id_in').annotate(amt_out_msat=Sum('amt_out_msat'), fee=Sum('fee')), 'chan_id_in')
         forwards_df_out_7d_sum = DataFrame.from_records(forwards.values('chan_id_out').annotate(amt_out_msat=Sum('amt_out_msat'), fee=Sum('fee')), 'chan_id_out')
     else:
         forwards_df_in_7d_sum = DataFrame()
         forwards_df_out_7d_sum = DataFrame()
+    channels_df['amt_routed_in_1day'] = channels_df.apply(lambda row: int(forwards_df_in_1d_sum.loc[row.chan_id].amt_out_msat/1000) if (forwards_df_in_1d_sum.index == row.chan_id).any() else 0, axis=1)
     channels_df['amt_routed_in_7day'] = channels_df.apply(lambda row: int(forwards_df_in_7d_sum.loc[row.chan_id].amt_out_msat/1000) if (forwards_df_in_7d_sum.index == row.chan_id).any() else 0, axis=1)
     channels_df['amt_routed_out_7day'] = channels_df.apply(lambda row: int(forwards_df_out_7d_sum.loc[row.chan_id].amt_out_msat/1000) if (forwards_df_out_7d_sum.index == row.chan_id).any() else 0, axis=1)
     channels_df['net_routed_7day'] = channels_df.apply(lambda row: round((row['amt_routed_out_7day']-row['amt_routed_in_7day'])/row['capacity'], 1), axis=1)
@@ -79,13 +85,13 @@ def main(channels):
     failed_htlc_df = failed_htlc_df[(failed_htlc_df['wire_failure']==15) & (failed_htlc_df['failure_detail']==6) & (failed_htlc_df['amount']>failed_htlc_df['chan_out_liq']+failed_htlc_df['chan_out_pending'])]
     lowliq_df['failed_out_1day'] = 0 if failed_htlc_df.empty else lowliq_df.apply(lambda row: len(failed_htlc_df[failed_htlc_df['chan_id_out']==row.chan_id]), axis=1)
     # INCREASE IF (failed htlc > threshold) && (flow in == 0)
-    lowliq_df['new_rate'] = lowliq_df.apply(lambda row: row['local_fee_rate']+(5*multiplier) if row['failed_out_1day']>failed_htlc_limit and row['amt_routed_in_7day'] == 0 else row['local_fee_rate'], axis=1)
+    lowliq_df['new_rate'] = lowliq_df.apply(lambda row: row['local_fee_rate']+(5*multiplier) if row['failed_out_1day']>failed_htlc_limit and row['amt_routed_in_1day'] == 0 else row['local_fee_rate'], axis=1)

     # Balanced 
Liquidity balanced_df = channels_df[(channels_df['out_percent'] > lowliq_limit) & (channels_df['out_percent'] < excess_limit)].copy() # IF NO FLOW THEN DECREASE FEE AND IF HIGH FLOW THEN SLOWLY INCREASE FEE - balanced_df['new_rate'] = balanced_df.apply(lambda row: row['local_fee_rate']+((2*multiplier)*(1+(row['net_routed_7day']/row['capacity']))) if row['net_routed_7day'] > row['capacity'] else row['local_fee_rate'], axis=1) - balanced_df['new_rate'] = balanced_df.apply(lambda row: row['local_fee_rate']-(3*multiplier) if (row['amt_routed_in_7day']+row['amt_routed_out_7day']) == 0 else row['local_fee_rate'], axis=1) + balanced_df['new_rate'] = balanced_df.apply(lambda row: row['local_fee_rate']+((2*multiplier)*(1+(row['net_routed_7day']/row['capacity']))) if row['net_routed_7day'] > 1 else row['local_fee_rate'], axis=1) + balanced_df['new_rate'] = balanced_df.apply(lambda row: row['local_fee_rate']-(3*multiplier) if (row['amt_routed_in_7day']+row['amt_routed_out_7day']) == 0 else row['new_rate'], axis=1) # Excess Liquidity excess_df = channels_df[channels_df['out_percent'] >= excess_limit].copy() @@ -93,7 +101,7 @@ def main(channels): excess_df['revenue_assist_7day'] = excess_df.apply(lambda row: int(forwards_df_in_7d_sum.loc[row.chan_id].fee) if forwards_df_in_7d_sum.empty == False and (forwards_df_in_7d_sum.index == row.chan_id).any() else 0, axis=1) # DECREASE IF (assisting channel or stagnant liq) excess_df['new_rate'] = excess_df.apply(lambda row: row['local_fee_rate']-(5*multiplier) if row['net_routed_7day'] < 0 and row['revenue_assist_7day'] > (row['revenue_7day']*10) else row['local_fee_rate'], axis=1) - excess_df['new_rate'] = excess_df.apply(lambda row: row['local_fee_rate']-(5*multiplier) if (row['amt_routed_in_7day']+row['amt_routed_out_7day']) == 0 else row['local_fee_rate'], axis=1) + excess_df['new_rate'] = excess_df.apply(lambda row: row['local_fee_rate']-(5*multiplier) if (row['amt_routed_in_7day']+row['amt_routed_out_7day']) == 0 else row['new_rate'], axis=1) #Merge back results result_df = concat([lowliq_df, balanced_df, excess_df]) diff --git a/controller.py b/controller.py new file mode 100644 index 00000000..913b1cbe --- /dev/null +++ b/controller.py @@ -0,0 +1,23 @@ +import multiprocessing +import jobs, rebalancer, htlc_stream, p2p + +def run_task(task): + task() + +def main(): + tasks = [jobs.main, rebalancer.main, htlc_stream.main, p2p.main] + print('Controller is starting...') + + processes = [] + for task in tasks: + process = multiprocessing.Process(target=run_task, args=(task,)) + processes.append(process) + process.start() + + for process in processes: + process.join() + + print('Controller is stopping...') + +if __name__ == '__main__': + main() diff --git a/gui/forms.py b/gui/forms.py index 9e9d3c98..d3ac9aee 100644 --- a/gui/forms.py +++ b/gui/forms.py @@ -1,32 +1,6 @@ from django import forms from .models import Channels -class RebalancerModelChoiceIterator(forms.models.ModelChoiceIterator): - def choice(self, obj): - return (self.field.prepare_value(obj), - (str(obj.chan_id) + ' | ' + obj.alias + ' | ' + "{:,}".format(obj.local_balance) + ' | ' + obj.remote_pubkey)) - -class RebalancerModelChoiceField(forms.models.ModelMultipleChoiceField): - def _get_choices(self): - if hasattr(self, '_choices'): - return self._choices - return RebalancerModelChoiceIterator(self) - choices = property(_get_choices, - forms.MultipleChoiceField._set_choices) - -class ChanPolicyModelChoiceIterator(forms.models.ModelChoiceIterator): - def choice(self, obj): - return 
(self.field.prepare_value(obj), - (str(obj.chan_id) + ' | ' + obj.alias + ' | ' + str(obj.local_base_fee) + ' | ' + str(obj.local_fee_rate) + ' | ' + str(obj.local_cltv))) - -class ChanPolicyModelChoiceField(forms.models.ModelMultipleChoiceField): - def _get_choices(self): - if hasattr(self, '_choices'): - return self._choices - return ChanPolicyModelChoiceIterator(self) - choices = property(_get_choices, - forms.MultipleChoiceField._set_choices) - class OpenChannelForm(forms.Form): peer_pubkey = forms.CharField(label='peer_pubkey', max_length=66) local_amt = forms.IntegerField(label='local_amt') @@ -34,7 +8,7 @@ class OpenChannelForm(forms.Form): class CloseChannelForm(forms.Form): chan_id = forms.CharField(label='chan_id') - target_fee = forms.IntegerField(label='target_fee') + target_fee = forms.IntegerField(label='target_fee', required=False) force = forms.BooleanField(widget=forms.CheckboxSelectMultiple, required=False) class ConnectPeerForm(forms.Form): @@ -59,7 +33,7 @@ class Meta: fields = [] value = forms.IntegerField(label='value') fee_limit = forms.IntegerField(label='fee_limit') - outgoing_chan_ids = RebalancerModelChoiceField(widget=forms.CheckboxSelectMultiple, queryset=Channels.objects.filter(is_open=1, is_active=1).order_by('-local_balance'), required=False) + outgoing_chan_ids = forms.ModelMultipleChoiceField(queryset=Channels.objects.filter(is_open=1, is_active=1), required=False) last_hop_pubkey = forms.CharField(label='funding_txid', max_length=66, required=False) duration = forms.IntegerField(label='duration') diff --git a/gui/lnd_deps/lnd_connect.py b/gui/lnd_deps/lnd_connect.py index ecef6f86..cac664d7 100644 --- a/gui/lnd_deps/lnd_connect.py +++ b/gui/lnd_deps/lnd_connect.py @@ -1,7 +1,7 @@ import os, codecs, grpc from lndg import settings -def creds(): +def get_creds(): #Open connection with lnd via grpc with open(os.path.expanduser(settings.LND_MACAROON_PATH), 'rb') as f: macaroon_bytes = f.read() @@ -15,11 +15,12 @@ def metadata_callback(context, callback): creds = grpc.composite_channel_credentials(cert_creds, auth_creds) return creds +creds = get_creds() def lnd_connect(): - return grpc.secure_channel(settings.LND_RPC_SERVER, creds(), options=[('grpc.max_send_message_length', int(settings.LND_MAX_MESSAGE)*1000000), ('grpc.max_receive_message_length', int(settings.LND_MAX_MESSAGE)*1000000),]) + return grpc.secure_channel(settings.LND_RPC_SERVER, creds, options=[('grpc.max_send_message_length', int(settings.LND_MAX_MESSAGE)*1000000), ('grpc.max_receive_message_length', int(settings.LND_MAX_MESSAGE)*1000000),]) def async_lnd_connect(): - return grpc.aio.secure_channel(settings.LND_RPC_SERVER, creds(), options=[('grpc.max_send_message_length', int(settings.LND_MAX_MESSAGE)*1000000), ('grpc.max_receive_message_length', int(settings.LND_MAX_MESSAGE)*1000000),]) + return grpc.aio.secure_channel(settings.LND_RPC_SERVER, creds, options=[('grpc.max_send_message_length', int(settings.LND_MAX_MESSAGE)*1000000), ('grpc.max_receive_message_length', int(settings.LND_MAX_MESSAGE)*1000000),]) def main(): pass diff --git a/gui/migrations/0037_tradesales.py b/gui/migrations/0037_tradesales.py new file mode 100644 index 00000000..f39b1e52 --- /dev/null +++ b/gui/migrations/0037_tradesales.py @@ -0,0 +1,28 @@ +# Generated by Django 4.2.6 on 2023-11-08 20:52 + +from django.db import migrations, models +import django.utils.timezone + + +class Migration(migrations.Migration): + + dependencies = [ + ('gui', '0036_peers'), + ] + + operations = [ + migrations.CreateModel( + 
name='TradeSales', + fields=[ + ('id', models.CharField(max_length=64, primary_key=True, serialize=False)), + ('creation_date', models.DateTimeField(default=django.utils.timezone.now)), + ('expiry', models.DateTimeField(null=True)), + ('description', models.CharField(max_length=100)), + ('price', models.BigIntegerField()), + ('sale_type', models.IntegerField()), + ('secret', models.CharField(max_length=1000, null=True)), + ('sale_limit', models.IntegerField(null=True)), + ('sale_count', models.IntegerField(default=0)), + ], + ), + ] diff --git a/gui/models.py b/gui/models.py index 8d8ced6f..adcdb8f0 100644 --- a/gui/models.py +++ b/gui/models.py @@ -129,15 +129,15 @@ def save(self, *args, **kwargs): if LocalSettings.objects.filter(key='AR-Inbound%').exists(): inbound_setting = int(LocalSettings.objects.filter(key='AR-Inbound%')[0].value) else: - LocalSettings(key='AR-Inbound%', value='100').save() - inbound_setting = 100 + LocalSettings(key='AR-Inbound%', value='90').save() + inbound_setting = 90 self.ar_in_target = inbound_setting if not self.ar_amt_target: if LocalSettings.objects.filter(key='AR-Target%').exists(): amt_setting = float(LocalSettings.objects.filter(key='AR-Target%')[0].value) else: - LocalSettings(key='AR-Target%', value='5').save() - amt_setting = 5 + LocalSettings(key='AR-Target%', value='3').save() + amt_setting = 3 self.ar_amt_target = int((amt_setting/100) * self.capacity) if not self.ar_max_cost: if LocalSettings.objects.filter(key='AR-MaxCost%').exists(): @@ -325,4 +325,15 @@ class HistFailedHTLC(models.Model): other_count = models.IntegerField() class Meta: app_label = 'gui' - unique_together = (('date', 'chan_id_in', 'chan_id_out'),) \ No newline at end of file + unique_together = (('date', 'chan_id_in', 'chan_id_out'),) + +class TradeSales(models.Model): + id = models.CharField(max_length=64, primary_key=True) + creation_date = models.DateTimeField(default=timezone.now) + expiry = models.DateTimeField(null=True) + description = models.CharField(max_length=100) + price = models.BigIntegerField() + sale_type = models.IntegerField() + secret = models.CharField(null=True, max_length=1000) + sale_limit = models.IntegerField(null=True) + sale_count = models.IntegerField(default=0) \ No newline at end of file diff --git a/gui/serializers.py b/gui/serializers.py index e9bc48f3..041ce8bd 100644 --- a/gui/serializers.py +++ b/gui/serializers.py @@ -1,16 +1,18 @@ from rest_framework import serializers from rest_framework.relations import PrimaryKeyRelatedField -from .models import LocalSettings, Payments, PaymentHops, Invoices, Forwards, Channels, Rebalancer, Peers, Onchain, PendingHTLCs, FailedHTLCs, Closures, Resolutions, PeerEvents +from .models import LocalSettings, Payments, PaymentHops, Invoices, Forwards, Channels, Rebalancer, Peers, Onchain, PendingHTLCs, FailedHTLCs, Closures, Resolutions, PeerEvents, TradeSales, Autofees ##FUTURE UPDATE 'exclude' TO 'fields' class PaymentSerializer(serializers.HyperlinkedModelSerializer): + id = serializers.IntegerField(source='index') payment_hash = serializers.ReadOnlyField() class Meta: model = Payments exclude = [] class InvoiceSerializer(serializers.HyperlinkedModelSerializer): + id = serializers.IntegerField(source='index') r_hash = serializers.ReadOnlyField() creation_date = serializers.ReadOnlyField() settle_date = serializers.ReadOnlyField() @@ -100,6 +102,9 @@ class Meta: class ConnectPeerSerializer(serializers.Serializer): peer_id = serializers.CharField(label='peer_pubkey', max_length=200) +class 
DisconnectPeerSerializer(serializers.Serializer): + peer_id = serializers.CharField(label='peer_pubkey', max_length=66) + class OpenChannelSerializer(serializers.Serializer): peer_pubkey = serializers.CharField(label='peer_pubkey', max_length=66) local_amt = serializers.IntegerField(label='local_amt') @@ -119,6 +124,27 @@ class BumpFeeSerializer(serializers.Serializer): class BroadcastTXSerializer(serializers.Serializer): raw_tx = serializers.CharField(label='raw_tx') +class CreateTradeSerializer(serializers.Serializer): + description = serializers.CharField(max_length=100) + price = serializers.IntegerField() + type = serializers.IntegerField() + secret = serializers.CharField(max_length=100, required=False, default=None) + expiry = serializers.DateTimeField(required=False, default=None) + sale_limit = serializers.IntegerField(required=False, default=None) + +class TradeSalesSerializer(serializers.HyperlinkedModelSerializer): + id = serializers.ReadOnlyField() + creation_date = serializers.ReadOnlyField() + sale_type = serializers.ReadOnlyField() + secret = serializers.ReadOnlyField() + sale_count = serializers.ReadOnlyField() + class Meta: + model = TradeSales + exclude = [] + +class SignMessageSerializer(serializers.Serializer): + message = serializers.CharField(label='message') + class AddInvoiceSerializer(serializers.Serializer): value = serializers.IntegerField(label='value') @@ -190,8 +216,17 @@ def get_out_liq_percent(self, obj): capacity = Channels.objects.filter(chan_id=obj.chan_id).get().capacity return int(round((obj.out_liq/capacity)*100, 1)) +class FeeLogSerializer(serializers.HyperlinkedModelSerializer): + id = serializers.ReadOnlyField() + class Meta: + model = Autofees + exclude = [] + class FailedHTLCSerializer(serializers.HyperlinkedModelSerializer): id = serializers.ReadOnlyField() class Meta: model = FailedHTLCs - exclude = [] \ No newline at end of file + exclude = [] + +class ResetSerializer(serializers.Serializer): + table = serializers.CharField(max_length=20) \ No newline at end of file diff --git a/gui/static/api.js b/gui/static/api.js index afdad3ca..d8f9305c 100644 --- a/gui/static/api.js +++ b/gui/static/api.js @@ -23,7 +23,7 @@ async function DELETE(url, {method = 'DELETE'} = {}){ async function call({url, method, data, body, headers = {'Content-Type':'application/json'}}){ if(url.charAt(url.length-1) != '/') url += '/' if(method != 'GET') headers['X-CSRFToken'] = document.getElementById('api').dataset.token - const result = await fetch(`api/${url}${data ? '?': ''}${new URLSearchParams(data).toString()}`, {method, body: JSON.stringify(body), headers}) + const result = await fetch(`/api/${url}${data ? '?': ''}${new URLSearchParams(data).toString()}`, {method, body: JSON.stringify(body), headers}) return result.json() } diff --git a/gui/static/helpers.js b/gui/static/helpers.js index e3b16c90..7e7d5d30 100644 --- a/gui/static/helpers.js +++ b/gui/static/helpers.js @@ -2,7 +2,6 @@ function byId(id){ return document.getElementById(id) } String.prototype.toInt = function(){ return parseInt(this.replace(/,/g,''))} String.prototype.toBool = function(if_false = 0){ return this && /^true$/i.test(this) ? 1 : if_false} -String.prototype.default = function(value){ return (this || '').length === 0 ? 
value : this} Number.prototype.intcomma = function(){ return parseInt(this).toLocaleString() } HTMLElement.prototype.defaultCloneNode = HTMLElement.prototype.cloneNode HTMLElement.prototype.cloneNode = function(attrs){ @@ -32,7 +31,7 @@ function adjustTZ(datetime){ } async function toggle(button){ try{ - button.children[0].style.visibility = 'collapse'; + button.children[0].style.visibility = 'hidden'; button.children[1].style.visibility = 'visible'; navigator.clipboard.writeText(button.getAttribute('data-value')) await sleep(1000) @@ -42,7 +41,7 @@ async function toggle(button){ } finally{ button.children[0].style.visibility = 'visible'; - button.children[1].style.visibility = 'collapse' + button.children[1].style.visibility = 'hidden' } } function use(template){ @@ -105,43 +104,82 @@ function formatDate(start, end = new Date().getTime() + new Date().getTimezoneOf end = new Date(end) if (start == null) return '---' difference = (end - new Date(start))/1000 - if (difference < 0) return 'Just now' - if (difference < 60) { + if (difference > 0) { + if (difference < 60) { if (Math.floor(difference) == 1){ return `a second ago`; }else{ return `${Math.floor(difference)} seconds ago`; } - } else if (difference < 3600) { - if (Math.floor(difference / 60) == 1){ - return `a minute ago`; + } else if (difference < 3600) { + if (Math.floor(difference / 60) == 1){ + return `a minute ago`; + }else{ + return `${Math.floor(difference / 60)} minutes ago`; + } + } else if (difference < 86400) { + if (Math.floor(difference / 3600) == 1){ + return `an hour ago`; + }else{ + return `${Math.floor(difference / 3600)} hours ago`; + } + } else if (difference < 2620800) { + if (Math.floor(difference / 86400) == 1){ + return `a day ago`; + }else{ + return `${Math.floor(difference / 86400)} days ago`; + } + } else if (difference < 31449600) { + if (Math.floor(difference / 2620800) == 1){ + return `a month ago`; + }else{ + return `${Math.floor(difference / 2620800)} months ago`; + } + } else { + if (Math.floor(difference / 31449600) == 1){ + return `a year ago`; + }else{ + return `${Math.floor(difference / 31449600)} years ago`; + } + } + } else if (difference < 0) { + if (-difference < 60) { + if (Math.floor(-difference) == 1){ + return `in a second`; }else{ - return `${Math.floor(difference / 60)} minutes ago`; + return `in ${Math.floor(-difference)} seconds`; } - } else if (difference < 86400) { - if (Math.floor(difference / 3600) == 1){ - return `an hour ago`; - }else{ - return `${Math.floor(difference / 3600)} hours ago`; - } - } else if (difference < 2620800) { - if (Math.floor(difference / 86400) == 1){ - return `a day ago`; - }else{ - return `${Math.floor(difference / 86400)} days ago`; - } - } else if (difference < 31449600) { - if (Math.floor(difference / 2620800) == 1){ - return `a month ago`; - }else{ - return `${Math.floor(difference / 2620800)} months ago`; - } - } else { - if (Math.floor(difference / 31449600) == 1){ - return `a year ago`; - }else{ - return `${Math.floor(difference / 31449600)} years ago`; - } - } + } else if (-difference < 3600) { + if (Math.floor(-difference / 60) == 1){ + return `in a minute`; + }else{ + return `in ${Math.floor(-difference / 60)} minutes`; + } + } else if (-difference < 86400) { + if (Math.floor(-difference / 3600) == 1){ + return `in an hour`; + }else{ + return `in ${Math.floor(-difference / 3600)} hours`; + } + } else if (-difference < 2620800) { + if (Math.floor(-difference / 86400) == 1){ + return `in a day`; + }else{ + return `in ${Math.floor(-difference / 
86400)} days`; + } + } else if (-difference < 31449600) { + if (Math.floor(-difference / 2620800) == 1){ + return `in a month`; + }else{ + return `in ${Math.floor(-difference / 2620800)} months`; + } + } else { + if (Math.floor(-difference / 31449600) == 1){ + return `in a year`; + }else{ + return `in ${Math.floor(-difference / 31449600)} years`; + } + } + } else { return 'Just now' } } //END: COMMON FUNCTIONS & VARIABLES \ No newline at end of file diff --git a/gui/static/w3style.css b/gui/static/w3style.css index dde29a9a..6a98ba3b 100644 --- a/gui/static/w3style.css +++ b/gui/static/w3style.css @@ -260,3 +260,13 @@ input:not([type=submit]):not([type=checkbox]):focus, select:focus {outline: none .input {position:relative;float:left;padding:0px 4px}.input::after{position:absolute;right: 25px;top:2px;opacity:.8;content:attr(data-unit);color:#555599} .encrypted td:not(th){-webkit-text-security:circle;}.encrypted thead th:not(a){-webkit-text-security:circle;} .soft-encrypted table td, .soft-encrypted table thead{color: transparent;text-shadow: 0 0 5px rgba(0,0,0,0.1);} +.switch{position:relative;display:inline-block;width:45px;height:25.5px} +.switch input {opacity:0;width:0;height:0} +.slider {position:absolute;cursor:pointer;top:0;left:0;right:0;bottom:0;background-color:#ccc;-webkit-transition:.4s;transition:.4s} +.slider:before {position:absolute;content:"";height:19.5px;width:19.5px;left:3px;bottom:3px;background-color:white;-webkit-transition:.4s;transition:.4s} +input:checked + .slider {background-color:#2196F3} +input:focus + .slider {box-shadow:0 0 1px #2196F3} +input:checked + .slider:before {-webkit-transform:translateX(19.5px);-ms-transform:translateX(19.5px);transform:translateX(19.5px)} +.slider.round {border-radius:25.5px} +.slider.round:before {border-radius:50%} +pre {background-color: #f0f0f5;} .dark-mode pre{ background-color: #30363d; color:white } \ No newline at end of file diff --git a/gui/templates/advanced.html b/gui/templates/advanced.html index 8ec53ee4..6f982834 100644 --- a/gui/templates/advanced.html +++ b/gui/templates/advanced.html @@ -19,7 +19,7 @@

Advanced Channel Settings

{% csrf_token %} - +
@@ -131,7 +131,7 @@

Advanced Channel Settings

{% csrf_token %} - +
diff --git a/gui/templates/balances.html b/gui/templates/balances.html index 55443788..a3253567 100644 --- a/gui/templates/balances.html +++ b/gui/templates/balances.html @@ -32,18 +32,26 @@

Pending Sweeps

Total Attempts Next Attempt Force + Bump Fee {% for sweep in pending_sweeps %} {{ sweep.txid_str }} {{ sweep.amount_sat|intcomma }} - {% if sweep.witness_type == 0 %}Unknown{% elif sweep.witness_type == 1 %}Commitment Time Lock{% elif sweep.witness_type == 13 %}Commitment Anchor{% else %}{{ sweep.witness_type }}{% endif %} + {% if sweep.witness_type == 0 %}Unknown{% elif sweep.witness_type == 1 %}Commitment Time Lock{% elif sweep.witness_type == 2 %}Commitment No Delay{% elif sweep.witness_type == 3 %}Commitment Revoke{% elif sweep.witness_type == 4 %}HTLC Offered Revoke{% elif sweep.witness_type == 5 %}HTLC Accepted Revoke{% elif sweep.witness_type == 6 %}HTLC Offered Timeout Second Level{% elif sweep.witness_type == 8 %}HTLC Offered Remote Timeout{% elif sweep.witness_type == 13 %}Commitment Anchor{% else %}{{ sweep.witness_type }}{% endif %} {% if sweep.requested_sat_per_vbyte == 0 %}---{% else %}{{ sweep.requested_sat_per_vbyte|intcomma }}{% endif %} {{ sweep.sat_per_vbyte|intcomma }} {{ sweep.requested_conf_target|intcomma }} {{ sweep.broadcast_attempts|intcomma }} {{ sweep.next_broadcast_height|intcomma }} {% if sweep.force %}Yes{% else %}No{% endif %} + +
+ + + +
+ {% endfor %} @@ -70,7 +78,7 @@

Balances

{% if utxo.confirmations == 0 %}
- +
{% else %} @@ -152,4 +160,4 @@

Transactions

byId('estimated').style.display = "block"; } -{% endblock %} \ No newline at end of file +{% endblock %} diff --git a/gui/templates/base.html b/gui/templates/base.html index 4b3b53d5..f6e34113 100644 --- a/gui/templates/base.html +++ b/gui/templates/base.html @@ -19,25 +19,55 @@ } const routed_template = { "forward_date": f => ({innerHTML: formatDate(f.forward_date), title: adjustTZ(f.forward_date) }), - "amt_in": f => ({innerHTML: f.amt_in > 1000 ? f.amt_in.intcomma() : f.amt_in}), - "amt_out": f => ({innerHTML: f.amt_out > 1000 ? f.amt_out.intcomma() : f.amt_out}), - "chan_in_alias": f => ({innerHTML: f.chan_in_alias.default(f.chan_id_in)}), - "chan_out_alias": f => ({innerHTML: f.chan_out_alias.default(f.chan_id_out)}), "chan_id_in": f => ({innerHTML: `${(BigInt(f.chan_id_in)>>40n)+'x'+(BigInt(f.chan_id_in)>>16n & BigInt('0xFFFFFF'))+'x'+(BigInt(f.chan_id_in) & BigInt('0xFFFF'))}`}), + "chan_in_alias": f => ({innerHTML: f.chan_in_alias || f.chan_id_in}), + "amt_in": f => ({innerHTML: f.amt_in > 1000 ? f.amt_in.intcomma() : f.amt_in, style: {paddingRight: "0px", width: "160px"} }), + "symbolIn": f => ({innerHTML: "●━━━", style: {paddingRight: "0px", paddingLeft: "0px", textAlign: 'right'} }), + "symbolOut": f => ({innerHTML: "━━━○", style: {paddingLeft: "0px", paddingRight: "0px", textAlign: 'left'} }), + "amt_out": f => ({innerHTML: (f.amt_out > 1000 ? f.amt_out.intcomma() : f.amt_out) + ` +${f.fee.intcomma().toLocaleString()}`, style: {paddingLeft: "0px", width: "160px"} }), + "chan_out_alias": f => ({innerHTML: f.chan_out_alias || f.chan_id_out}), "chan_id_out": f => ({innerHTML: `${(BigInt(f.chan_id_out)>>40n)+'x'+(BigInt(f.chan_id_out)>>16n & BigInt('0xFFFFFF'))+'x'+(BigInt(f.chan_id_out) & BigInt('0xFFFF'))}`}), "fee": f => ({innerHTML: parseFloat(f.fee.toFixed(3)).toLocaleString()}), "ppm": f => ({innerHTML: parseInt(f.ppm.toFixed(0)).toLocaleString()}), } + let {fee, ppm} = routed_template + const payments_template = Object.assign({}, { + "creation_date": p => ({innerHTML: formatDate(p.creation_date), title: adjustTZ(p.creation_date) }), + "value": p => ({innerHTML: p.value.intcomma()}), + "fee": p => ({innerHTML: parseFloat(p.fee.toFixed(3)).toLocaleString()}) + }, {ppm}, { + "status": p => ({innerHTML: ['Unknown', 'In-Flight', 'Succeeded', 'Failed'][p.status]}), + "chan_out_alias": p => ({innerHTML: p.chan_out_alias||'---'}), + "chan_out": p => ({innerHTML: p.status == 2 && p.chan_out !== 'MPP' ? `${(BigInt(p.chan_out)>>40n)+'x'+(BigInt(p.chan_out)>>16n & BigInt('0xFFFFFF'))+'x'+(BigInt(p.chan_out) & BigInt('0xFFFF'))}` : p.chan_out || '---' }), + "payment_hash": p => ({innerHTML: `${p.payment_hash.substring(0,7)}`}), + "rebal_chan": p => ({innerHTML: (p.rebal_chan || '').length > 0 ? 'Yes' : 'No'}), + "keysend_preimage": p => ({innerHTML: p.keysend_preimage ? 'Yes' : 'No'}) + } + ) + let {creation_date, value, keysend_preimage} = payments_template + const invoices_template = Object.assign({}, {creation_date}, { + "settle_date": i => ({innerHTML: formatDate(i.settle_date), title: adjustTZ(i.settle_date)}) + }, {value}, { + "amt_paid": i => ({innerHTML: i.amt_paid.intcomma()}), + "state": i => ({innerHTML: ['Open', 'Settled', 'Cancelled', 'Accepted'][i.state]}), + "chan_in_alias": i => ({innerHTML: i.chan_in_alias||'---'}), + "chan_in": i => ({innerHTML: i.chan_in? 
`${(BigInt(i.chan_in)>>40n)+'x'+(BigInt(i.chan_in)>>16n & BigInt('0xFFFFFF'))+'x'+(BigInt(i.chan_in) & BigInt('0xFFFF'))}` : '---'}), + "r_hash": i => ({innerHTML: `${i.r_hash.substring(0,7)}`}), + "keysend_preimage": i => ({innerHTML: keysend_preimage(i).innerHTML, title: i.message }), + } + ) const wire_failures = "0|1|2|3|4|5|6|Expiry Too Soon|8|9|10|Amount Below Minimum|Fee Insufficient|13|14|Temporary Channel Failure|16|17|Unknown Next Peer|19|20|21|Expiry Too Far".split('|') - const failure_details = "0|---|2|3|4|HTLC Exceeds Max|Insufficient Balance|7|8|9|10|11|12|Invoice Not Open|14|15|16|17|18|19|Invalid Keysend|21|Circular Route".split('|') + const failure_details = "0|---|2|Link Not Eligible|4|HTLC Exceeds Max|Insufficient Balance|7|8|9|10|11|12|Invoice Not Open|14|15|16|17|18|19|Invalid Keysend|21|Circular Route".split('|') const failedHTLCs_template = { "timestamp": htlc => ({innerHTML: formatDate(htlc.timestamp), title: adjustTZ(htlc.timestamp) }), "chan_id_in": htlc => ({innerHTML: `${(BigInt(htlc.chan_id_in)>>40n)+'x'+(BigInt(htlc.chan_id_in)>>16n & BigInt('0xFFFFFF'))+'x'+(BigInt(htlc.chan_id_in) & BigInt('0xFFFF'))}`}), - "chan_id_out": htlc => ({innerHTML: `${(BigInt(htlc.chan_id_out)>>40n)+'x'+(BigInt(htlc.chan_id_out)>>16n & BigInt('0xFFFFFF'))+'x'+(BigInt(htlc.chan_id_out) & BigInt('0xFFFF'))}`}), "chan_in_alias": htlc => ({innerHTML: `${htlc.chan_in_alias || htlc.chan_id_in}`}), + "fw_amount": htlc => ({innerHTML: htlc.amount.intcomma() + ` +${htlc.missed_fee.intcomma()}`, style: {paddingRight: "4px", width: "160px"} }), + "symbolIn" : _ => ({innerHTML: "●━━━", style: {paddingRight: "0px", paddingLeft: "0px", textAlign: 'right'} }), + "symbolOut" : _ => ({innerHTML: "━━━▏", style: {paddingLeft: "0px", paddingRight: "0px", textAlign: 'left'} }), + "chan_out_liq": htlc => ({innerHTML: `${(htlc.chan_out_liq || 0).intcomma()} ${htlc.chan_out_pending}`, style: {paddingLeft: "0px", width: "160px"} }), "chan_out_alias": htlc => ({innerHTML: `${htlc.chan_out_alias || htlc.chan_id_out}`}), - "amount": htlc => ({innerHTML: htlc.amount.intcomma()}), - "chan_out_liq": htlc => ({innerHTML: `${(htlc.chan_out_liq || 0).intcomma()} ${htlc.chan_out_pending}`}), + "chan_id_out": htlc => ({innerHTML: `${(BigInt(htlc.chan_id_out)>>40n)+'x'+(BigInt(htlc.chan_id_out)>>16n & BigInt('0xFFFFFF'))+'x'+(BigInt(htlc.chan_id_out) & BigInt('0xFFFF'))}`}), "missed_fee": htlc => ({innerHTML: parseFloat(htlc.missed_fee.toFixed(3)).toLocaleString()}), "wire_failure": htlc => ({innerHTML: htlc.wire_failure > wire_failures.length ? htlc.wire_failure : wire_failures[htlc.wire_failure]}), "failure_detail": htlc => ({innerHTML: htlc.failure_detail > failure_details.length ? htlc.failure_detail : failure_details[htlc.failure_detail]}), @@ -58,7 +88,6 @@ {% block meta %}{% endblock %} {% block title %}LNDg{% endblock %} - {% load qr_code %} @@ -69,11 +98,6 @@
X

{{ message.message }}

- {% with sliced_msg=message.message|slice:":17" %} - {% if sliced_msg == "Deposit Address: " or sliced_msg == "Invoice created! " %} - {% qr_from_text message.message|slice:"17:" size="s" image_format="png" error_correction="L" %} - {% endif %} - {% endwith %}
{% endfor %} @@ -87,19 +111,23 @@

My Lnd Overview

diff --git a/gui/templates/channel.html b/gui/templates/channel.html index d756e4cf..f28b7274 100644 --- a/gui/templates/channel.html +++ b/gui/templates/channel.html @@ -325,18 +325,18 @@

Last Payments Routed

- - + + + - - @@ -344,90 +344,77 @@

Last Payments Routed

TimestampAmount InAmount OutChannel In Id Channel In AliasAmount InAmount Out Channel Out AliasChannel In Id Channel Out Id Fees Earned PPM Earned
+ Load More
- {% include 'rebalances_table.html' with count=5 last_hop_pubkey=channel.remote_pubkey load_count=5 title='Rebalance Requests' %} +{% autoescape off %} + {% with rebal_url=''|add:channel.alias|add:' Rebalances'%} + {% include 'rebalances_table.html' with count=5 last_hop_pubkey=channel.remote_pubkey load_count=5 title=rebal_url %} + {% endwith %} +{% endautoescape %}
-{% if payments %} -
+

Last 5 Payments Sent

- +
- - - - - - - - - - - {% for payment in payments %} - - - - - - - - - - - + + + + + + + + + - {% endfor %} + + + + +
TimestampHashValueFee PaidPPM PaidStatusChan Out AliasChan Out IDRouteKeysend
{{ payment.creation_date|naturaltime }}{{ payment.payment_hash }}{{ payment.value|add:"0"|intcomma }}{{ payment.fee|intcomma }}{{ payment.ppm|intcomma }}{% if payment.status == 1 %}In-Flight{% elif payment.status == 2 %}Succeeded{% elif payment.status == 3 %}Failed{% else %}{{ payment.status }}{% endif %}{% if payment.status == 2 %}{% if payment.chan_out_alias == '' %}---{% else %}{{ payment.chan_out_alias }}{% endif %}{% else %}---{% endif %}{% if payment.status == 2 %}{{ payment.chan_out }}{% else %}---{% endif %}{% if payment.status == 2 %}Open{% else %}---{% endif %}{% if payment.keysend_preimage != None %}Yes{% else %}No{% endif %}ValueFee PaidPPM PaidStatusChan Out AliasChan Out IDHashRebalanceKeysend
+ Load More +
-{% endif %} -{% if invoices %} -
+

Last 5 Payments Received

- +
- - + + - {% for invoice in invoices %} - - - - - - - - - - - - {% endfor %} + + + + +
Created SettledPayment Hash Value Amount Paid State Channel In AliasChannel InChannel InHash Keysend
{{ invoice.creation_date|naturaltime }}{% if invoice.state == 1 %}{{ invoice.settle_date|naturaltime }}{% else %}---{% endif %}{{ invoice.r_hash }}{{ invoice.value|add:"0"|intcomma }}{% if invoice.state == 1 %}{{ invoice.amt_paid|intcomma }}{% else %}---{% endif %}{% if invoice.state == 0 %}Open{% elif invoice.state == 1 %}Settled{% elif invoice.state == 2 %}Canceled{% else %}{{ invoice.state }}{% endif %}{% if invoice.state == 1 %}{% if invoice.chan_in_alias == '' %}---{% else %}{{ invoice.chan_in_alias }}{% endif %}{% else %}---{% endif %}{% if invoice.state == 1 and invoice.chan_in != None %}{{ invoice.chan_in }}{% else %}---{% endif %}{% if invoice.keysend_preimage != None %}Yes{% else %}No{% endif %}
+ Load More +
-{% endif %}

Last Failed HTLCs

- + + - - + - @@ -455,7 +442,7 @@

Last Peer Log Events

- @@ -475,14 +462,20 @@

Channel Notes

{{ node_info.alias }} | {{ node_info.identity_pubkey }}

- LND - {% for info in node_info.chains %}{{ info }}{% endfor %} | {{ node_info.block_height }} #{{ node_info.block_hash }} + LND + ---
Public Capacity: 0 - Active Channels: {{node_info.num_active_channels}} / {{node_info.num_active_channels|add:node_info.num_inactive_channels}} - Peers: {{ node_info.num_peers }} - DB Size: {% if db_size > 0 %}{{ db_size }} GB{% else %}---{% endif %} + Active Channels: 0 / 0 + 0 + --- Total States Updates: --- Sign a message
-
+ Addresses: {% for addr in node_info.uris %}
Timestamp Chan In IDChan Out ID Chan In AliasForward AmountActual Outbound Chan Out AliasForward AmountActual OutboundChan Out ID Potential Fee HTLC Failure Failure Detail
+ Load More
+ Load More
- - - - - - + + + + + + - + @@ -145,9 +153,11 @@
RebalancingTowersBatching + TradesAdvanced Settings + Logs -
+
Balance: {{total_balance}}Onchain: {{ balances.total_balance|intcomma }}Confirmed: {{ balances.confirmed_balance|intcomma }}Unconfirmed: {{ balances.unconfirmed_balance|intcomma }}Limbo: {{ limbo_balance|intcomma }} Pending HTLCs: 0{{ pending_htlc_count }}Balance: 0Onchain: 0Confirmed: 0Unconfirmed: 0Limbo: 0Pending HTLCs: 0
{% csrf_token %} @@ -106,7 +114,7 @@
Onchain fees Cost Profit/OutboundOutbound UtilizationOutbound Utilization
1-Day
@@ -158,14 +168,15 @@
- - - - - - - - + + + + + + + + + @@ -174,7 +185,7 @@
Capacity Inbound Liquidity UnsettledoRateoBaseo1Di1Do7Di7DiRateiBaseoRateoBaseo1Di1Do7Di7DiRateiBaseoTarget% iTarget% AR
-
+ -
+ -{% if pending_open %} -
-

Pending Open Channels

- - - - - - - - - - - - - - - - - - {% for channel in pending_open %} - - - - - - - - - - - - - - - - - {% endfor %} -
Peer AliasChannel PointCapacityLocal BalanceRemote BalanceAFFee RateBase FeeCLTVAmtMax Cost%oTarget%iTarget%AR
{% if channel.alias == '' %}{{ channel.remote_node_pub|slice:":12" }}{% else %}{{ channel.alias }}{% endif %}{{ channel.channel_point }}{{ channel.capacity|intcomma }}{{ channel.local_balance|intcomma }}{{ channel.remote_balance|intcomma }} - - {% csrf_token %} - - - - - - - -
- {% csrf_token %} - - - - -
-
-
- {% csrf_token %} - - - - -
-
-
- {% csrf_token %} - - - - -
-
-
- {% csrf_token %} - - - - -
-
-
- {% csrf_token %} - - - - -
-
-
- {% csrf_token %} - - - - -
-
-
- {% csrf_token %} - - - - -
-
-
- {% csrf_token %} - - - - - -
-
+ -{% endif %} -{% if pending_closed %} -
-

Pending Close Channels

+ -{% endif %} -{% if pending_force_closed %} -
-

Pending Force Close Channels

+ -{% endif %} -{% if waiting_for_close %} -
-

Channels Waiting To Close

+ -{% endif %} -
- +
- - + + + - @@ -443,7 +328,7 @@

Channels Waiting To Close

- @@ -451,8 +336,8 @@

Channels Waiting To Close

TimestampAmount InAmount OutChannel In Id Channel In AliasAmount InAmount Out Channel Out AliasChannel In Id Channel Out Id Fees Earned PPM Earned
+ Load More
{% include 'rebalances_table.html' with count=10 load_count=10 title='Rebalance Requests' %} -
- +
@@ -468,10 +353,17 @@

Channels Waiting To Close

+ + + + +
Timestamp
+ Load More +
-
- +
@@ -486,19 +378,26 @@

Channels Waiting To Close

+ + + + +
Created
+ Load More +
-
+ + {% endif %} {% if not peers %}
diff --git a/gui/templates/rebalances.html b/gui/templates/rebalances.html index d11e0403..f7ff2003 100644 --- a/gui/templates/rebalances.html +++ b/gui/templates/rebalances.html @@ -1,6 +1,8 @@ {% extends "base.html" %} {% block title %} {{ block.super }} - Rebalances{% endblock %} {% block content %} +{% include 'rebalances_table.html' with count=20 status=1 load_count=20 title='Rebalance Requests' %} +{% include 'rebalances_table.html' with count=20 status=0 load_count=20 title='Rebalance Requests' %} {% include 'rebalances_table.html' with count=20 status=2 load_count=20 title='Rebalance Requests' %} {% include 'rebalances_table.html' with count=50 load_count=50 title='Rebalance Requests' %} {% endblock %} diff --git a/gui/templates/rebalances_table.html b/gui/templates/rebalances_table.html index 57abffc9..1e5c3614 100644 --- a/gui/templates/rebalances_table.html +++ b/gui/templates/rebalances_table.html @@ -1,6 +1,6 @@ {% load humanize %}
-

{% if status %}Successful{% else %}Last{% endif %} {{title}}

+

{% if status == 2 %}Successful{% elif status == 0 %}Pending{% elif status == 1 %}In-Flight{% else %}Last{% endif %} {{title}}

@@ -53,8 +53,12 @@

{% if status %}Successful{% else %}Last{% endif %} {{title}}

input.innerHTML = "🔂" input.title="Repeat this request" input.onclick = async function(){ - const {value, fee_limit, outgoing_chan_ids, last_hop_pubkey, duration, target_alias} = rebalance - const new_rebal = {value, fee_limit, outgoing_chan_ids, last_hop_pubkey, duration, target_alias} + const {value, fee_limit, outgoing_chan_ids, last_hop_pubkey, duration, target_alias, manual} = rebalance + if (last_hop_pubkey.length === 0){ + var new_rebal = {value, fee_limit, outgoing_chan_ids, duration, manual} + }else{ + var new_rebal = {value, fee_limit, outgoing_chan_ids, last_hop_pubkey, duration, target_alias, manual} + } const response = await POST('rebalancer', {body: new_rebal}) const table = input.parentElement.parentElement.parentElement table.prepend(use(transformations).render(response)) @@ -75,17 +79,23 @@

{% if status %}Successful{% else %}Last{% endif %} {{title}}

return {innerHTML: input} } }; - document.addEventListener('DOMContentLoaded', async () => { - const res = await GET('rebalancer', {data: {limit: '{{count}}', status: '{{status}}', payment_hash: '{{payment_hash}}', last_hop_pubkey:'{{last_hop_pubkey}}' }}) + async function buildTable(){ + const res = await GET('rebalancer', {data: {limit: '{{count}}', status: '{{status}}', payment_hash: '{{payment_hash}}', last_hop_pubkey: '{% if last_hop_pubkey %}{{last_hop_pubkey}}{%else%}{{request.GET.last_hop_pubkey}}{%endif%}' }}) if(res.errors) return if(res.results.length == 0){ - document.getElementById("rebalances_div_{{status}}").innerHTML = `

You dont have any rebalances yet.

` + document.getElementById("rebalances_div_{{status}}").innerHTML = `

You don't have any ${'{{status}}'=='' ? '' : statuses['{{status}}'].toLowerCase()} rebalance requests

` return } const table = document.getElementById("rebalances_{{status}}"), template = use(transformations) + table.innerHTML = null res.results.forEach(rebalance => table.append(template.render(rebalance))); - }); + + await auto_refresh(buildTable) + } + + document.addEventListener('DOMContentLoaded', buildTable); + async function loadRebals_{{status}}(){ const status = "{{ status }}" const rebalsTable = byId('rebalsTable_'+status) diff --git a/gui/templates/rebalancing.html b/gui/templates/rebalancing.html index 34ab5a8b..dc981b72 100644 --- a/gui/templates/rebalancing.html +++ b/gui/templates/rebalancing.html @@ -50,15 +50,15 @@
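For reference, the `GET('rebalancer', ...)` call `buildTable` makes is a plain filtered list request against the rebalancer viewset. A rough Python equivalent; the host and pagination behaviour are assumptions, while the numeric status codes follow the header logic above (0 = Pending, 1 = In-Flight, 2 = Successful):

```python
# Rough Python equivalent of the template's GET('rebalancer', ...) call.
import requests

def fetch_rebalances(status=None, limit=20, last_hop_pubkey=''):
    params = {'limit': limit}
    if status is not None:
        params['status'] = status
    if last_hop_pubkey:
        params['last_hop_pubkey'] = last_hop_pubkey
    resp = requests.get('http://localhost:8889/api/rebalancer/', params=params)
    return resp.json().get('results', [])

in_flight = fetch_rebalances(status=1)   # feeds the new In-Flight table
pending = fetch_rebalances(status=0)     # feeds the new Pending table
```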
diff --git a/gui/templates/rebalancing.html b/gui/templates/rebalancing.html
index 34ab5a8b..dc981b72 100644
--- a/gui/templates/rebalancing.html
+++ b/gui/templates/rebalancing.html
@@ -50,15 +50,15 @@ Manual Rebalance R
[hunk body lost in extraction; only the closing line survives]
+{% endblock %}
diff --git a/gui/urls.py b/gui/urls.py
index fe910219..d4d9eddc 100644
--- a/gui/urls.py
+++ b/gui/urls.py
@@ -18,6 +18,8 @@
 router.register(r'pendinghtlcs', views.PendingHTLCViewSet)
 router.register(r'failedhtlcs', views.FailedHTLCViewSet)
 router.register(r'peerevents', views.PeerEventsViewSet)
+router.register(r'trades', views.TradeSalesViewSet)
+router.register(r'feelog', views.FeeLogViewSet)

 urlpatterns = [
     path('', views.home, name='home'),
@@ -28,6 +30,7 @@
     path('closures', views.closures, name='closures'),
     path('towers', views.towers, name='towers'),
     path('batch', views.batch, name='batch'),
+    path('trades', views.trades, name='trades'),
     path('batchopen/', views.batch_open, name='batch-open'),
     path('resolutions', views.resolutions, name='resolutions'),
     path('channel', views.channel, name='channel'),
@@ -66,11 +69,13 @@
     path('autofees/', views.autofees, name='autofees'),
     path('peerevents', views.peerevents, name='peerevents'),
     path('advanced/', views.advanced, name='advanced'),
-    path('sign_message/', views.sign_message, name='sign-message'),
+    path('logs/', views.logs, name='logs'),
     path('addresses/', views.addresses, name='addresses'),
+    path('reset/', views.reset, name='reset'),
     path('api/', include(router.urls), name='api-root'),
     path('api-auth/', include('rest_framework.urls'), name='api-auth'),
     path('api/connectpeer/', views.connect_peer, name='connect-peer'),
+    path('api/disconnectpeer/', views.disconnect_peer, name='disconnect-peer'),
     path('api/rebalance_stats/', views.rebalance_stats, name='rebalance-stats'),
     path('api/openchannel/', views.open_channel, name='open-channel'),
     path('api/closechannel/', views.close_channel, name='close-channel'),
@@ -85,5 +90,10 @@
     path('api/chart/', views.chart, name='chart'),
     path('api/chanpolicy/', views.chan_policy, name='chan-policy'),
     path('api/broadcast_tx/', views.broadcast_tx, name='broadcast-tx'),
+    path('api/node_info/', views.node_info, name='node-info'),
+    path('api/createtrade/', views.create_trade, name='create-trade'),
+    path('api/forwards_summary/', views.forwards_summary, name='forwards-summary'),
+    path('api/sign_message/', views.sign_message, name='sign-message'),
+    path('api/reset/', views.reset_api, name='reset-api'),
     path('lndg-admin/', admin.site.urls),
 ]
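With these route changes, `sign_message` moves from a form-posting page under `/sign_message/` to a JSON endpoint under `/api/sign_message/` (implemented in the views.py changes below). A hedged usage sketch; the `message` field name comes from the serializer read in the new view, while the host and auth handling are assumptions:

```python
# Hedged usage sketch of the relocated sign_message API.
import requests

resp = requests.post('http://localhost:8889/api/sign_message/',
                     json={'message': 'proof of node ownership'})
body = resp.json()
# The view returns {'message': 'Success', 'data': '<signature>'} on success,
# or {'error': ...} on failure.
signature = body.get('data')
```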
diff --git a/gui/views.py b/gui/views.py
index f5ade84d..d134d138 100644
--- a/gui/views.py
+++ b/gui/views.py
@@ -1,7 +1,7 @@
 from django.contrib import messages
 from django.shortcuts import get_object_or_404, render, redirect
 from django.db.models import Sum, IntegerField, Count, Max, F, Q, Case, When, Value, FloatField
-from django.db.models.functions import Round, TruncDay
+from django.db.models.functions import Round, TruncDay, Coalesce
 from django.contrib.auth.decorators import login_required
 from django_filters import FilterSet, CharFilter, DateTimeFilter, NumberFilter
 from datetime import datetime, timedelta
@@ -9,9 +9,9 @@
 from rest_framework.response import Response
 from rest_framework.decorators import api_view, permission_classes
 from rest_framework.permissions import IsAuthenticated
-from .forms import OpenChannelForm, CloseChannelForm, ConnectPeerForm, AddInvoiceForm, RebalancerForm, UpdateChannel, UpdateSetting, LocalSettingsForm, AddTowerForm, RemoveTowerForm, DeleteTowerForm, BatchOpenForm, UpdatePending, UpdateClosing, UpdateKeysend, AddAvoid, RemoveAvoid
-from .models import Payments, PaymentHops, Invoices, Forwards, Channels, Rebalancer, LocalSettings, Peers, Onchain, Closures, Resolutions, PendingHTLCs, FailedHTLCs, Autopilot, Autofees, PendingChannels, AvoidNodes, PeerEvents
-from .serializers import ConnectPeerSerializer, FailedHTLCSerializer, LocalSettingsSerializer, OpenChannelSerializer, CloseChannelSerializer, AddInvoiceSerializer, PaymentHopsSerializer, PaymentSerializer, InvoiceSerializer, ForwardSerializer, ChannelSerializer, PendingHTLCSerializer, RebalancerSerializer, UpdateAliasSerializer, PeerSerializer, OnchainSerializer, ClosuresSerializer, ResolutionsSerializer, BumpFeeSerializer, UpdateChanPolicy, NewAddressSerializer, BroadcastTXSerializer, PeerEventsSerializer
+from .forms import *
+from .serializers import *
+from .models import Payments, PaymentHops, Invoices, Forwards, Channels, Rebalancer, LocalSettings, Peers, Onchain, Closures, Resolutions, PendingHTLCs, FailedHTLCs, Autopilot, Autofees, PendingChannels, AvoidNodes, PeerEvents, HistFailedHTLC, TradeSales
 from gui.lnd_deps import lightning_pb2 as ln
 from gui.lnd_deps import lightning_pb2_grpc as lnrpc
 from gui.lnd_deps import router_pb2 as lnr
@@ -25,14 +25,16 @@
 from os import path
 from pandas import DataFrame, merge
 from requests import get
-from . import af
+from secrets import token_bytes
+from trade import create_trade_details
+import af

 def graph_links():
     if LocalSettings.objects.filter(key='GUI-GraphLinks').exists():
         graph_links = str(LocalSettings.objects.filter(key='GUI-GraphLinks')[0].value)
     else:
-        LocalSettings(key='GUI-GraphLinks', value='https://amboss.space').save()
-        graph_links = 'https://amboss.space'
+        LocalSettings(key='GUI-GraphLinks', value='https://mempool.space/lightning').save()
+        graph_links = 'https://mempool.space/lightning'
     return graph_links

 def network_links():
@@ -70,117 +72,24 @@ def __call__(self, func):

 @is_login_required(login_required(login_url='/lndg-admin/login/?next=/'), settings.LOGIN_REQUIRED)
 def home(request):
-    if request.method == 'GET':
-        try:
-            stub = lnrpc.LightningStub(lnd_connect())
-            #Get balance and general node information
-            node_info = stub.GetInfo(ln.GetInfoRequest())
-            balances = stub.WalletBalance(ln.WalletBalanceRequest())
-            pending_channels = stub.PendingChannels(ln.PendingChannelsRequest())
-            limbo_balance = pending_channels.total_limbo_balance
-            pending_open = None
-            pending_closed = None
-            pending_force_closed = None
-            waiting_for_close = None
-            pending_open_balance = 0
-            pending_closing_balance = 0
-            if pending_channels.pending_open_channels:
-                target_resp = pending_channels.pending_open_channels
-                peers = Peers.objects.all()
-                pending_changes = PendingChannels.objects.all()
-                pending_open = []
-                inbound_setting = int(LocalSettings.objects.filter(key='AR-Inbound%')[0].value) if LocalSettings.objects.filter(key='AR-Inbound%').exists() else 100
-                outbound_setting = int(LocalSettings.objects.filter(key='AR-Outbound%')[0].value) if LocalSettings.objects.filter(key='AR-Outbound%').exists() else 75
-                amt_setting = float(LocalSettings.objects.filter(key='AR-Target%')[0].value) if LocalSettings.objects.filter(key='AR-Target%').exists() else 5
-                cost_setting = int(LocalSettings.objects.filter(key='AR-MaxCost%')[0].value) if LocalSettings.objects.filter(key='AR-MaxCost%').exists() else 65
-                auto_fees = int(LocalSettings.objects.filter(key='AF-Enabled')[0].value) if LocalSettings.objects.filter(key='AF-Enabled').exists() else 0
-                for i in range(0,len(target_resp)):
-                    item = {}
-                    pending_open_balance += target_resp[i].channel.local_balance
-                    funding_txid = target_resp[i].channel.channel_point.split(':')[0]
-                    output_index = target_resp[i].channel.channel_point.split(':')[1]
-                    updated = pending_changes.filter(funding_txid=funding_txid,output_index=output_index).exists()
-                    item['alias'] = peers.filter(pubkey=target_resp[i].channel.remote_node_pub)[0].alias if peers.filter(pubkey=target_resp[i].channel.remote_node_pub).exists() else ''
-                    item['remote_node_pub'] = target_resp[i].channel.remote_node_pub
-                    item['channel_point'] = target_resp[i].channel.channel_point
-                    item['funding_txid'] = funding_txid
-                    item['output_index'] = output_index
-                    item['capacity'] = target_resp[i].channel.capacity
-                    item['local_balance'] = target_resp[i].channel.local_balance
-                    item['remote_balance'] = target_resp[i].channel.remote_balance
-                    item['local_chan_reserve_sat'] = target_resp[i].channel.local_chan_reserve_sat
-                    item['remote_chan_reserve_sat'] = target_resp[i].channel.remote_chan_reserve_sat
-                    item['initiator'] = target_resp[i].channel.initiator
-                    item['commitment_type'] = target_resp[i].channel.commitment_type
-                    item['commit_fee'] = target_resp[i].commit_fee
-                    item['commit_weight'] = target_resp[i].commit_weight
-                    item['fee_per_kw'] = target_resp[i].fee_per_kw
-                    item['local_base_fee'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].local_base_fee if updated else ''
-                    item['local_fee_rate'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].local_fee_rate if updated else ''
-                    item['local_cltv'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].local_cltv if updated else ''
-                    item['auto_rebalance'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].auto_rebalance if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].auto_rebalance != None else False
-                    item['ar_amt_target'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_amt_target if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_amt_target != None else int((amt_setting/100) * target_resp[i].channel.capacity)
-                    item['ar_in_target'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_in_target if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_in_target != None else inbound_setting
-                    item['ar_out_target'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_out_target if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_out_target != None else outbound_setting
-                    item['ar_max_cost'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_max_cost if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_max_cost != None else cost_setting
-                    item['auto_fees'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].auto_fees if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].auto_fees != None else (False if auto_fees == 0 else True)
-                    pending_open.append(item)
-            if pending_channels.pending_closing_channels:
-                target_resp = pending_channels.pending_closing_channels
-                pending_closed = []
-                for i in range(0,len(target_resp)):
-                    pending_item = {'remote_node_pub':target_resp[i].channel.remote_node_pub,'channel_point':target_resp[i].channel.channel_point,'capacity':target_resp[i].channel.capacity,'local_balance':target_resp[i].channel.local_balance,'remote_balance':target_resp[i].channel.remote_balance,'local_chan_reserve_sat':target_resp[i].channel.local_chan_reserve_sat,
-                        'remote_chan_reserve_sat':target_resp[i].channel.remote_chan_reserve_sat,'initiator':target_resp[i].channel.initiator,'commitment_type':target_resp[i].channel.commitment_type, 'local_commit_fee_sat': target_resp[i].commitments.local_commit_fee_sat,'limbo_balance':target_resp[i].limbo_balance,'closing_txid':target_resp[i].closing_txid}
-                    pending_item.update(pending_channel_details(target_resp[i].channel.channel_point))
-                    pending_closed.append(pending_item)
-            if pending_channels.pending_force_closing_channels:
-                target_resp = pending_channels.pending_force_closing_channels
-                pending_force_closed = []
-                for i in range(0,len(target_resp)):
-                    pending_item = {'remote_node_pub':target_resp[i].channel.remote_node_pub,'channel_point':target_resp[i].channel.channel_point,'capacity':target_resp[i].channel.capacity,'local_balance':target_resp[i].channel.local_balance,'remote_balance':target_resp[i].channel.remote_balance,'initiator':target_resp[i].channel.initiator,
-                        'commitment_type':target_resp[i].channel.commitment_type,'closing_txid':target_resp[i].closing_txid,'limbo_balance':target_resp[i].limbo_balance,'maturity_height':target_resp[i].maturity_height,'blocks_til_maturity':target_resp[i].blocks_til_maturity if target_resp[i].blocks_til_maturity > 0 else find_next_block_maturity(target_resp[i]),
-                        'maturity_datetime':(datetime.now()+timedelta(minutes=(10*target_resp[i].blocks_til_maturity if target_resp[i].blocks_til_maturity > 0 else 10*find_next_block_maturity(target_resp[i]) )))}
-                    pending_item.update(pending_channel_details(target_resp[i].channel.channel_point))
-                    pending_force_closed.append(pending_item)
-            if pending_channels.waiting_close_channels:
-                target_resp = pending_channels.waiting_close_channels
-                waiting_for_close = []
-                for i in range(0,len(target_resp)):
-                    pending_closing_balance += target_resp[i].limbo_balance
-                    pending_item = {'remote_node_pub':target_resp[i].channel.remote_node_pub,'channel_point':target_resp[i].channel.channel_point,'capacity':target_resp[i].channel.capacity,'local_balance':target_resp[i].channel.local_balance,'remote_balance':target_resp[i].channel.remote_balance,'local_chan_reserve_sat':target_resp[i].channel.local_chan_reserve_sat,
-                        'remote_chan_reserve_sat':target_resp[i].channel.remote_chan_reserve_sat,'initiator':target_resp[i].channel.initiator,'commitment_type':target_resp[i].channel.commitment_type, 'local_commit_fee_sat': target_resp[i].commitments.local_commit_fee_sat, 'limbo_balance':target_resp[i].limbo_balance,'closing_txid':target_resp[i].closing_txid}
-                    pending_item.update(pending_channel_details(target_resp[i].channel.channel_point))
-                    waiting_for_close.append(pending_item)
-            limbo_balance -= pending_closing_balance
-            try:
-                db_size = round(path.getsize(path.expanduser(settings.LND_DATABASE_PATH))*0.000000001, 3)
-            except:
-                db_size = 0
-            #Build context for front-end and render page
-            context = {
-                'node_info': node_info,
-                'balances': balances,
-                'total_balance': balances.total_balance + pending_open_balance + limbo_balance,
-                'limbo_balance': limbo_balance,
-                'pending_open': pending_open,
-                'pending_closed': pending_closed,
-                'pending_force_closed': pending_force_closed,
-                'waiting_for_close': waiting_for_close,
-                'local_settings': get_local_settings('AR-'),
-                'network': 'testnet/' if settings.LND_NETWORK == 'testnet' else '',
-                'graph_links': graph_links(),
-                'network_links': network_links(),
-                'db_size': db_size,
-            }
-            return render(request, 'home.html', context)
-        except Exception as e:
-            try:
-                error = str(e.code())
-            except:
-                error = str(e)
-            return render(request, 'error.html', {'error': error})
-    else:
+    if request.method != 'GET':
         return redirect('home')
+    try:
+        stub = lnrpc.LightningStub(lnd_connect())
+        node_info = stub.GetInfo(ln.GetInfoRequest())
+    except Exception as e:
+        try:
+            error = str(e.code())
+        except:
+            error = str(e)
+        return render(request, 'error.html', {'error': error})
+    return render(request, 'home.html', {
+        'node_info': {'color': node_info.color, 'alias': node_info.alias, 'version': node_info.version, 'identity_pubkey': node_info.identity_pubkey, 'uris': node_info.uris},
+        'local_settings': get_local_settings('AR-'),
+        'network': 'testnet/' if settings.LND_NETWORK == 'testnet' else '',
+        'graph_links': graph_links(),
+        'network_links': network_links(),
+    })

 @is_login_required(login_required(login_url='/lndg-admin/login/?next=/'), settings.LOGIN_REQUIRED)
 def channels(request):
@@ -317,6 +226,26 @@ def advanced(request):
     else:
         return redirect('home')

+@is_login_required(login_required(login_url='/lndg-admin/login/?next=/'), settings.LOGIN_REQUIRED)
+def logs(request):
+    if request.method == 'GET':
+        try:
+            count = request.GET.get('tail', 20)
+            logfile = '/var/log/lndg-controller.log' if path.isfile('/var/log/lndg-controller.log') else 'data/lndg-controller.log'
+            file_size = path.getsize(logfile)-2
+            if file_size == 0:
+                logs = ['Logs are empty....']
+            else:
+                target_size = 128*int(count)
+                read_size = min(target_size, file_size)
+                with open(logfile, "rb") as reader:
+                    reader.seek(-read_size, 2)
+                    logs = [line.decode('utf-8') for line in reader.readlines()]
+            return render(request, 'logs.html', {'logs': logs})
+        except Exception as e:
+            return render(request, 'error.html', {'error': str(e)})
+    return redirect('home')
+
 @is_login_required(login_required(login_url='/lndg-admin/login/?next=/'), settings.LOGIN_REQUIRED)
 def route(request):
     if request.method == 'GET':
@@ -381,7 +310,8 @@ def balances(request):
             sweeps = []
             for pending_sweep in pending_sweeps:
                 sweep = {}
-                sweep['txid_str'] = pending_sweep.outpoint.txid_bytes.hex()
+                sweep['txid_str'] = pending_sweep.outpoint.txid_bytes[::-1].hex()
+                sweep['txid_index'] = pending_sweep.outpoint.output_index
                 sweep['amount_sat'] = pending_sweep.amount_sat
                 sweep['witness_type'] = pending_sweep.witness_type
                 sweep['requested_sat_per_vbyte'] = pending_sweep.requested_sat_per_vbyte
@@ -1089,8 +1019,6 @@
         'channel': [] if channels_df.empty else channels_df.to_dict(orient='records')[0],
         'incoming_htlcs': PendingHTLCs.objects.filter(chan_id=chan_id).filter(incoming=True).order_by('hash_lock'),
         'outgoing_htlcs': PendingHTLCs.objects.filter(chan_id=chan_id).filter(incoming=False).order_by('hash_lock'),
-        'payments': [] if payments_df.empty else payments_df.sort_values(by=['creation_date'], ascending=False).to_dict(orient='records')[:5],
-        'invoices': [] if invoices_df.empty else invoices_df.sort_values(by=['settle_date'], ascending=False).to_dict(orient='records')[:5],
         'peer_info': [] if peer_info_df.empty else peer_info_df.to_dict(orient='records')[0],
         'network': 'testnet/' if settings.LND_NETWORK == 'testnet' else '',
         'graph_links': graph_links(),
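A note on the pending-sweep change above: `txid_bytes` as returned by LND is the transaction hash in internal (little-endian) order, while block explorers and lncli display the byte-reversed, big-endian form, hence the new `[::-1]`. A self-contained illustration using the well-known genesis coinbase transaction:

```python
# LND's txid_bytes is the hash in internal (little-endian) order; reversing it
# yields the familiar display form, the same trick as the sweep fix above.
internal_order = bytes.fromhex(
    '3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a')
display_txid = internal_order[::-1].hex()
assert display_txid == '4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b'
```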
@@ -1117,7 +1045,7 @@
     exlcude_list = AvoidNodes.objects.values_list('pubkey')
     filter_60day = datetime.now() - timedelta(days=60)
     payments_60day = Payments.objects.filter(creation_date__gte=filter_60day, status=2).values_list('payment_hash')
-    open_list = PaymentHops.objects.filter(payment_hash__in=payments_60day).exclude(node_pubkey=self_pubkey).exclude(node_pubkey__in=current_peers).exclude(node_pubkey__in=exlcude_list).values('node_pubkey').annotate(ppm=(Sum('fee')/Sum('amt'))*1000000, score=Round((Round(Count('id')/5, output_field=IntegerField())+Round(Sum('amt')/500000, output_field=IntegerField()))/10, output_field=IntegerField()), count=Count('id'), amount=Sum('amt'), fees=Sum('fee'), sum_cost_to=Sum('cost_to')/(Sum('amt')/1000000), alias=Max('alias')).exclude(score=0).order_by('-score', 'ppm')[:21]
+    open_list = PaymentHops.objects.filter(payment_hash__in=payments_60day).exclude(node_pubkey=self_pubkey).exclude(node_pubkey__in=current_peers).exclude(node_pubkey__in=exlcude_list).values('node_pubkey').annotate(ppm=(Sum('fee')/Sum('amt'))*1000000, score=Round((Round(Count('id')/1, output_field=IntegerField())+Round(Sum('amt')/100000, output_field=IntegerField()))/10, output_field=IntegerField()), count=Count('id'), amount=Sum('amt'), fees=Sum('fee'), sum_cost_to=Sum('cost_to')/(Sum('amt')/1000000), alias=Max('alias')).exclude(score=0).order_by('-score', 'ppm')[:21]
     context = {
         'open_list': open_list,
         'avoid_list': AvoidNodes.objects.all(),
@@ -1139,6 +1067,7 @@ def actions(request):
             result = {}
             result['chan_id'] = channel.chan_id
             result['short_chan_id'] = channel.short_chan_id
+            result['remote_pubkey'] = channel.remote_pubkey
             result['alias'] = channel.alias
             result['capacity'] = channel.capacity
             result['local_balance'] = channel.local_balance + channel.pending_outbound
@@ -1268,6 +1197,47 @@ def batch(request):
     else:
         return redirect(request.META.get('HTTP_REFERER'))

+@is_login_required(login_required(login_url='/lndg-admin/login/?next=/'), settings.LOGIN_REQUIRED)
+def trades(request):
+    if request.method == 'GET':
+        stub = lnrpc.LightningStub(lnd_connect())
+        context = {
+            'trade_link': create_trade_details(stub)
+        }
+        return render(request, 'trades.html', context)
+    else:
+        return redirect(request.META.get('HTTP_REFERER'))
+
+@is_login_required(login_required(login_url='/lndg-admin/login/?next=/'), settings.LOGIN_REQUIRED)
+def reset(request):
+    if request.method == 'GET':
+        context = {
+            'tables':[
+                {'name':'Forwards', 'count': Forwards.objects.count()},
+                {'name':'Payments', 'count': Payments.objects.count()},
+                {'name':'PaymentHops', 'count': PaymentHops.objects.count()},
+                {'name':'Invoices', 'count': Invoices.objects.count()},
+                {'name':'Rebalancer', 'count': Rebalancer.objects.count()},
+                {'name':'Closures', 'count': Closures.objects.count()},
+                {'name':'Resolutions', 'count': Resolutions.objects.count()},
+                {'name':'Peers', 'count': Peers.objects.count()},
+                {'name':'Channels', 'count': Channels.objects.count()},
+                {'name':'PendingChannels', 'count': PendingChannels.objects.count()},
+                {'name':'Onchain', 'count': Onchain.objects.count()},
+                {'name':'PendingHTLCs', 'count': PendingHTLCs.objects.count()},
+                {'name':'FailedHTLCs', 'count': FailedHTLCs.objects.count()},
+                {'name':'HistFailedHTLC', 'count': HistFailedHTLC.objects.count()},
+                {'name':'Autopilot', 'count': Autopilot.objects.count()},
+                {'name':'Autofees', 'count': Autofees.objects.count()},
+                {'name':'AvoidNodes', 'count': AvoidNodes.objects.count()},
+                {'name':'PeerEvents', 'count': PeerEvents.objects.count()},
+                {'name':'LocalSettings', 'count': LocalSettings.objects.count()}
+            ]
+        }
+        return render(request, 'reset.html', context)
+    else:
+        return redirect(request.META.get('HTTP_REFERER'))
+
 @is_login_required(login_required(login_url='/lndg-admin/login/?next=/'), settings.LOGIN_REQUIRED)
 def addresses(request):
     if request.method == 'GET':
@@ -1557,13 +1527,16 @@
             target_channel = target_channel.get()
             funding_txid = target_channel.funding_txid
             output_index = target_channel.output_index
+            force_close = form.cleaned_data['force']
             target_fee = form.cleaned_data['target_fee']
+            if not force_close and not target_fee:
+                messages.error(request, 'Expected a fee rate for graceful closure. Please try again.')
             channel_point = ln.ChannelPoint()
             channel_point.funding_txid_bytes = bytes.fromhex(funding_txid)
             channel_point.funding_txid_str = funding_txid
             channel_point.output_index = output_index
             stub = lnrpc.LightningStub(lnd_connect())
-            if form.cleaned_data['force']:
+            if force_close:
                 for response in stub.CloseChannel(ln.CloseChannelRequest(channel_point=channel_point, force=True)):
                     messages.success(request, 'Channel force closed! Closing TXID: ' + str(response.close_pending.txid[::-1].hex()) + ':' + str(response.close_pending.output_index))
                     break
@@ -1687,33 +1660,33 @@ def get_local_settings(*prefixes):
     if 'AR-' in prefixes:
         form.append({'unit': '', 'form_id': 'update_channels', 'id': 'update_channels'})
         form.append({'unit': '', 'form_id': 'enabled', 'value': 0, 'label': 'AR Enabled', 'id': 'AR-Enabled', 'title':'This enables or disables the auto-scheduling function', 'min':0, 'max':1},)
-        form.append({'unit': '%', 'form_id': 'target_percent', 'value': 5, 'label': 'AR Target Amount', 'id': 'AR-Target%', 'title': 'The percentage of the total capacity to target as the rebalance amount', 'min':0.1, 'max':100})
-        form.append({'unit': 'min', 'form_id': 'target_time', 'value': 5, 'label': 'AR Target Time', 'id': 'AR-Time', 'title': 'The time spent per individual rebalance attempt', 'min':1, 'max':60})
-        form.append({'unit': 'ppm', 'form_id': 'fee_rate', 'value': 100, 'label': 'AR Max Fee Rate', 'id': 'AR-MaxFeeRate', 'title': 'The max rate we can ever use to refill a channel with outbound', 'min':1, 'max':5000})
-        form.append({'unit': '%', 'form_id': 'outbound_percent', 'value': 75, 'label': 'AR Target Out Above', 'id': 'AR-Outbound%', 'title': 'When a channel is not enabled for targeting; the minimum outbound a channel must have to be a source for refilling another channel', 'min':1, 'max':100})
-        form.append({'unit': '%', 'form_id': 'inbound_percent', 'value': 100, 'label': 'AR Target In Above', 'id': 'AR-Inbound%', 'title': 'When a channel is enabled for targeting; the maximum inbound a channel can have before selected for auto rebalance', 'min':1, 'max':100})
-        form.append({'unit': '%', 'form_id': 'max_cost', 'value': 65, 'label': 'AR Max Cost', 'id': 'AR-MaxCost%', 'title': 'The ppm to target which is the percentage of the outbound fee rate for the channel being refilled', 'min':1, 'max':100})
-        form.append({'unit': '%', 'form_id': 'variance', 'value': 0, 'label': 'AR Variance', 'id': 'AR-Variance', 'title': 'The percentage of the target amount to be randomly varied with every rebalance attempt', 'min':0, 'max':100})
-        form.append({'unit': 'min', 'form_id': 'wait_period', 'value': 30, 'label': 'AR Wait Period', 'id': 'AR-WaitPeriod', 'title': 'The minutes we should wait after a failed attempt before trying again', 'min':1, 'max':10080})
-        form.append({'unit': '', 'form_id': 'autopilot', 'value': 0, 'label': 'Autopilot', 'id': 'AR-Autopilot', 'title': 'This enables or disables the Autopilot function which automatically acts upon suggestions on this page: /actions', 'min':0, 'max':1})
-        form.append({'unit': 'days', 'form_id': 'autopilotdays', 'value': 7, 'label': 'Autopilot Days', 'id': 'AR-APDays', 'title': 'Number of days to consider for autopilot. Default 7', 'min':0, 'max':100})
-        form.append({'unit': '', 'form_id': 'workers', 'value': 1, 'label': 'Workers', 'id': 'AR-Workers', 'title': 'Number of workers', 'min':1, 'max':12})
+        form.append({'unit': '%', 'form_id': 'target_percent', 'value': 3.0, 'label': 'AR Target Amount', 'id': 'AR-Target%', 'title': 'The percentage of the total capacity to target as the rebalance amount. Default 3', 'min':0.1, 'max':100})
+        form.append({'unit': 'min', 'form_id': 'target_time', 'value': 5, 'label': 'AR Target Time', 'id': 'AR-Time', 'title': 'The time spent in minutes for each individual rebalance attempt. Default 5', 'min':1, 'max':60})
+        form.append({'unit': 'ppm', 'form_id': 'fee_rate', 'value': 500, 'label': 'AR Max Fee Rate', 'id': 'AR-MaxFeeRate', 'title': 'The max rate we can ever use to refill a channel with outbound. Default 500', 'min':1, 'max':5000})
+        form.append({'unit': '%', 'form_id': 'outbound_percent', 'value': 75, 'label': 'AR Target Out Above', 'id': 'AR-Outbound%', 'title': 'Default oTarget% for new channels. When a channel is not AR enabled; the oTarget% is the minimum outbound a channel must have to be a source for refilling another channel. Default 75', 'min':1, 'max':100})
+        form.append({'unit': '%', 'form_id': 'inbound_percent', 'value': 90, 'label': 'AR Target In Above', 'id': 'AR-Inbound%', 'title': 'Default iTarget% for new channels. When a channel is AR enabled; the iTarget% is the minimum inbound a channel must have before selected for auto rebalance. Default 90', 'min':1, 'max':100})
+        form.append({'unit': '%', 'form_id': 'max_cost', 'value': 65, 'label': 'AR Max Cost', 'id': 'AR-MaxCost%', 'title': 'The ppm to target which is the percentage of the outbound fee rate for the channel being refilled. Default 65', 'min':1, 'max':100})
+        form.append({'unit': '%', 'form_id': 'variance', 'value': 0, 'label': 'AR Variance', 'id': 'AR-Variance', 'title': 'The percentage of the target amount to be randomly varied with every rebalance attempt. Default 0', 'min':0, 'max':100})
+        form.append({'unit': 'min', 'form_id': 'wait_period', 'value': 30, 'label': 'AR Wait Period', 'id': 'AR-WaitPeriod', 'title': 'The minutes we should wait after a failed attempt before trying again. Default 30', 'min':1, 'max':10080})
+        form.append({'unit': '', 'form_id': 'autopilot', 'value': 0, 'label': 'Autopilot', 'id': 'AR-Autopilot', 'title': 'This enables or disables the Auto-Rebalance function for individual channels based on flow (automatically acts upon suggestions on this page: /actions)', 'min':0, 'max':1})
+        form.append({'unit': 'days', 'form_id': 'autopilotdays', 'value': 7, 'label': 'Autopilot Days', 'id': 'AR-APDays', 'title': 'Number of days to consider for autopilot calculations. Default 7', 'min':0, 'max':100})
+        form.append({'unit': '', 'form_id': 'workers', 'value': 1, 'label': 'Workers', 'id': 'AR-Workers', 'title': 'Number of concurrent rebalance workers to run at once (use a proper value for your hardware, this will increase the load on the lnd server). Default 1', 'min':1, 'max':12})
     if 'AF-' in prefixes:
         form.append({'unit': '', 'form_id': 'af_enabled', 'value': 0, 'label': 'Autofee', 'id': 'AF-Enabled', 'title': 'Enable/Disable Auto-fee functionality', 'min':0, 'max':1})
-        form.append({'unit': 'ppm', 'form_id': 'af_maxRate', 'value': 2500, 'label': 'AF Max Rate', 'id': 'AF-MaxRate', 'title': 'Maximum Rate', 'min':0, 'max':5000})
-        form.append({'unit': 'ppm', 'form_id': 'af_minRate', 'value': 0, 'label': 'AF Min Rate', 'id': 'AF-MinRate', 'title': 'Minimum Rate', 'min':0, 'max':5000})
-        form.append({'unit': 'ppm', 'form_id': 'af_increment', 'value': 5, 'label': 'AF Increment', 'id': 'AF-Increment', 'title': 'Amount to increment on each interaction', 'min':1, 'max':100})
-        form.append({'unit': 'x', 'form_id': 'af_multiplier', 'value': 5, 'label': 'AF Multiplier', 'id': 'AF-Multiplier', 'title': 'Multiplier to be applied to Auto-Fee', 'min':1, 'max':100})
-        form.append({'unit': '', 'form_id': 'af_failedHTLCs', 'value': 25, 'label': 'AF FailedHTLCs', 'id': 'AF-FailedHTLCs', 'title': 'Failed HTLCs', 'min':1, 'max':100})
-        form.append({'unit': 'hours', 'form_id': 'af_updateHours', 'value': 24, 'label': 'AF Update', 'id': 'AF-UpdateHours', 'title': 'Number of hours to consider to update fees. Default 24', 'min':1, 'max':100})
-        form.append({'unit': '%', 'form_id': 'af_lowliq', 'value': 15, 'label': 'AF LowLiq', 'id': 'AF-LowLiqLimit', 'title': 'Limit for low liq AF rules. Default 5', 'min':0, 'max':100})
-        form.append({'unit': '%', 'form_id': 'af_excess', 'value': 90, 'label': 'AF Excess', 'id': 'AF-ExcessLimit', 'title': 'Limit for excess liq AF rules. Default 95', 'min':0, 'max':100})
+        form.append({'unit': 'ppm', 'form_id': 'af_maxRate', 'value': 2500, 'label': 'AF Max Rate', 'id': 'AF-MaxRate', 'title': 'Maximum Rate that can be adjusted to. Default 2500', 'min':0, 'max':5000})
+        form.append({'unit': 'ppm', 'form_id': 'af_minRate', 'value': 0, 'label': 'AF Min Rate', 'id': 'AF-MinRate', 'title': 'Minimum Rate that can be adjusted to. Default 0', 'min':0, 'max':5000})
+        form.append({'unit': 'ppm', 'form_id': 'af_increment', 'value': 5, 'label': 'AF Increment', 'id': 'AF-Increment', 'title': 'Target fee rate will always be a multiple of this value. Default 5', 'min':1, 'max':100})
+        form.append({'unit': 'x', 'form_id': 'af_multiplier', 'value': 5, 'label': 'AF Multiplier', 'id': 'AF-Multiplier', 'title': 'Multiplier to be applied to Auto-Fee adjustments. Default 5', 'min':1, 'max':100})
+        form.append({'unit': '', 'form_id': 'af_failedHTLCs', 'value': 25, 'label': 'AF FailedHTLCs', 'id': 'AF-FailedHTLCs', 'title': 'Failed HTLCs required since last fee update to trigger a fee increase (when chan liq% is below AR-LowLiq). Default 25', 'min':1, 'max':100})
+        form.append({'unit': 'hours', 'form_id': 'af_updateHours', 'value': 24, 'label': 'AF Update', 'id': 'AF-UpdateHours', 'title': 'Minimum number of hours between fee updates for an individual channel. Default 24', 'min':1, 'max':100})
+        form.append({'unit': '%', 'form_id': 'af_lowliq', 'value': 15, 'label': 'AF LowLiq', 'id': 'AF-LowLiqLimit', 'title': 'Limit for running low liq AF rules (increase when failed htlcs + no inbound). Default 15', 'min':0, 'max':100})
+        form.append({'unit': '%', 'form_id': 'af_excess', 'value': 95, 'label': 'AF Excess', 'id': 'AF-ExcessLimit', 'title': 'Limit for running excess liq AF rules (decrease for stagnant channels and those with assisting revenues). Default 95', 'min':0, 'max':100})
     if 'GUI-' in prefixes:
-        form.append({'unit': '', 'form_id': 'gui_graphLinks', 'value': 'https://amboss.space', 'label': 'Graph URL', 'id': 'GUI-GraphLinks', 'title': 'Preferred Graph URL'})
-        form.append({'unit': '', 'form_id': 'gui_netLinks', 'value': 'https://mempool.space', 'label': 'NET URL', 'id': 'GUI-NetLinks', 'title': 'Preferred NET URL'})
+        form.append({'unit': '', 'form_id': 'gui_graphLinks', 'value': 'https://mempool.space/lightning', 'label': 'Graph URL', 'id': 'GUI-GraphLinks', 'title': 'Preferred Graph URL. Default https://mempool.space/lightning'})
+        form.append({'unit': '', 'form_id': 'gui_netLinks', 'value': 'https://mempool.space', 'label': 'NET URL', 'id': 'GUI-NetLinks', 'title': 'Preferred NET URL. Default https://mempool.space'})
     if 'LND-' in prefixes:
-        form.append({'unit': '', 'form_id': 'lnd_cleanPayments', 'value': 0, 'label': 'LND Clean Payments', 'id': 'LND-CleanPayments', 'title': 'Clean LND Payments', 'min':0, 'max':1})
-        form.append({'unit': 'days', 'form_id': 'lnd_retentionDays', 'value': 30, 'label': 'LND Retention', 'id': 'LND-RetentionDays', 'title': 'LND Retention days', 'min':1, 'max':1000})
+        form.append({'unit': '', 'form_id': 'lnd_cleanPayments', 'value': 0, 'label': 'LND Clean Payments', 'id': 'LND-CleanPayments', 'title': 'Clean LND Payments (toggles failed payment clean-up routine)', 'min':0, 'max':1})
+        form.append({'unit': 'days', 'form_id': 'lnd_retentionDays', 'value': 30, 'label': 'LND Retention', 'id': 'LND-RetentionDays', 'title': 'LND Retention days for failed payment data', 'min':1, 'max':1000})

     for prefix in prefixes:
         ar_settings = LocalSettings.objects.filter(key__contains=prefix).values('key', 'value').order_by('key')
@@ -1727,18 +1700,18 @@ def get_local_settings(*prefixes):
 @is_login_required(login_required(login_url='/lndg-admin/login/?next=/'), settings.LOGIN_REQUIRED)
 def update_settings(request):
     if request.method == 'POST':
-        template = [{'form_id': 'enabled', 'value': 0, 'parse': lambda x: x,'id': 'AR-Enabled'},
-            {'form_id': 'target_percent', 'value': 5, 'parse': lambda x: float(x),'id': 'AR-Target%'},
-            {'form_id': 'target_time', 'value': 5, 'parse': lambda x: x,'id': 'AR-Time'},
-            {'form_id': 'fee_rate', 'value': 100, 'parse': lambda x: x,'id': 'AR-MaxFeeRate'},
+        template = [{'form_id': 'enabled', 'value': 0, 'parse': lambda x: int(x),'id': 'AR-Enabled'},
+            {'form_id': 'target_percent', 'value': 3.0, 'parse': lambda x: float(x),'id': 'AR-Target%'},
+            {'form_id': 'target_time', 'value': 5, 'parse': lambda x: int(x),'id': 'AR-Time'},
+            {'form_id': 'fee_rate', 'value': 500, 'parse': lambda x: int(x),'id': 'AR-MaxFeeRate'},
             {'form_id': 'outbound_percent', 'value': 75, 'parse': lambda x: int(x),'id': 'AR-Outbound%'},
-            {'form_id': 'inbound_percent', 'value': 100, 'parse': lambda x: int(x),'id': 'AR-Inbound%'},
+            {'form_id': 'inbound_percent', 'value': 90, 'parse': lambda x: int(x),'id': 'AR-Inbound%'},
             {'form_id': 'max_cost', 'value': 65, 'parse': lambda x: int(x),'id': 'AR-MaxCost%'},
-            {'form_id': 'variance', 'value': 0, 'parse': lambda x: x,'id': 'AR-Variance'},
-            {'form_id': 'wait_period', 'value': 30, 'parse': lambda x: x,'id': 'AR-WaitPeriod'},
-            {'form_id': 'autopilot', 'value': 0, 'parse': lambda x: x,'id': 'AR-Autopilot'},
-            {'form_id': 'autopilotdays', 'value': 7, 'parse': lambda x: x,'id': 'AR-APDays'},
-            {'form_id': 'workers', 'value': 1, 'parse': lambda x: x,'id': 'AR-Workers'},
+            {'form_id': 'variance', 'value': 0, 'parse': lambda x: int(x),'id': 'AR-Variance'},
+            {'form_id': 'wait_period', 'value': 30, 'parse': lambda x: int(x),'id': 'AR-WaitPeriod'},
+            {'form_id': 'autopilot', 'value': 0, 'parse': lambda x: int(x),'id': 'AR-Autopilot'},
+            {'form_id': 'autopilotdays', 'value': 7, 'parse': lambda x: int(x),'id': 'AR-APDays'},
+            {'form_id': 'workers', 'value': 1, 'parse': lambda x: int(x),'id': 'AR-Workers'},
             #AF
             {'form_id': 'af_enabled', 'value': 0, 'parse': lambda x: int(x),'id': 'AF-Enabled'},
             {'form_id': 'af_maxRate', 'value': 2500, 'parse': lambda x: int(x),'id': 'AF-MaxRate'},
@@ -1748,13 +1721,13 @@ def update_settings(request):
             {'form_id': 'af_failedHTLCs', 'value': 25, 'parse': lambda x: int(x),'id': 'AF-FailedHTLCs'},
             {'form_id': 'af_updateHours', 'value': 24, 'parse': lambda x: int(x),'id': 'AF-UpdateHours'},
             {'form_id': 'af_lowliq', 'value': 15, 'parse': lambda x: int(x),'id': 'AF-LowLiqLimit'},
-            {'form_id': 'af_excess', 'value': 90, 'parse': lambda x: int(x),'id': 'AF-ExcessLimit'},
+            {'form_id': 'af_excess', 'value': 95, 'parse': lambda x: int(x),'id': 'AF-ExcessLimit'},
             #GUI
-            {'form_id': 'gui_graphLinks', 'value': 'https://amboss.space', 'parse': lambda x: x,'id': 'GUI-GraphLinks'},
-            {'form_id': 'gui_netLinks', 'value': 'https://mempool.space', 'parse': lambda x: x,'id': 'GUI-NetLinks'},
+            {'form_id': 'gui_graphLinks', 'value': 'https://mempool.space/lightning', 'parse': lambda x: str(x),'id': 'GUI-GraphLinks'},
+            {'form_id': 'gui_netLinks', 'value': 'https://mempool.space', 'parse': lambda x: str(x),'id': 'GUI-NetLinks'},
             #LND
-            {'form_id': 'lnd_cleanPayments', 'value': 0, 'parse': lambda x: x, 'id': 'LND-CleanPayments'},
-            {'form_id': 'lnd_retentionDays', 'value': 30, 'parse': lambda x: x, 'id': 'LND-RetentionDays'},
+            {'form_id': 'lnd_cleanPayments', 'value': 0, 'parse': lambda x: int(x), 'id': 'LND-CleanPayments'},
+            {'form_id': 'lnd_retentionDays', 'value': 30, 'parse': lambda x: int(x), 'id': 'LND-RetentionDays'},
         ]
         form = LocalSettingsForm(request.POST)
@@ -2106,23 +2079,30 @@ def get_fees(request):
                 return redirect(request.META.get('HTTP_REFERER'))
     return redirect(request.META.get('HTTP_REFERER'))

+@api_view(['POST'])
 @is_login_required(login_required(login_url='/lndg-admin/login/?next=/'), settings.LOGIN_REQUIRED)
 def sign_message(request):
-    if request.method == 'POST':
-        msg = request.POST.get("msg")
-        stub = lnrpc.LightningStub(lnd_connect())
-        req = ln.SignMessageRequest(msg=msg.encode('utf-8'), single_hash=False)
-        response = stub.SignMessage(req)
-        messages.success(request, "Signed message: " + str(response.signature))
+    serializer = SignMessageSerializer(data=request.data)
+    if serializer.is_valid():
+        message = serializer.validated_data['message']
+        try:
+            stub = lnrpc.LightningStub(lnd_connect())
+            response = stub.SignMessage(ln.SignMessageRequest(msg=message.encode('utf-8'), single_hash=False))
+            return Response({'message': 'Success', 'data': str(response.signature)})
+        except Exception as e:
+            error = str(e)
+            details_index = error.find('details =') + 11
+            debug_error_index = error.find('debug_error_string =') - 3
+            error_msg = error[details_index:debug_error_index]
+            return Response({'error': f'Sign message failed! Error: {error_msg}'})
     else:
-        messages.error(request, 'Invalid Request. Please try again.')
-        return redirect(request.META.get('HTTP_REFERER'))
+        return Response({'error': 'Invalid request!'})

 class PaymentsViewSet(viewsets.ReadOnlyModelViewSet):
     permission_classes = [IsAuthenticated] if settings.LOGIN_REQUIRED else []
     queryset = Payments.objects.all().order_by('-creation_date')
     serializer_class = PaymentSerializer
-    filterset_fields = {'status':['exact','lt','gt'], 'creation_date':['lte','gte']}
+    filterset_fields = {'status':['exact','lt','gt'], 'creation_date':['lte','gte'], 'chan_out': ['exact'], 'index': ['lt']}

 class PaymentHopsViewSet(viewsets.ReadOnlyModelViewSet):
     permission_classes = [IsAuthenticated] if settings.LOGIN_REQUIRED else []
@@ -2133,7 +2113,7 @@ class InvoicesViewSet(viewsets.ReadOnlyModelViewSet):
     permission_classes = [IsAuthenticated] if settings.LOGIN_REQUIRED else []
     queryset = Invoices.objects.all().order_by('-creation_date')
     serializer_class = InvoiceSerializer
-    filterset_fields = {'state': ['exact','lt', 'gt'], 'is_revenue': ['exact'], 'settle_date': ['gte']}
+    filterset_fields = {'state': ['exact','lt', 'gt'], 'is_revenue': ['exact'], 'settle_date': ['gte'], 'chan_in': ['exact'], 'index': ['lt']}

     def update(self, request, pk=None):
         setting = get_object_or_404(Invoices.objects.all(), pk=pk)
@@ -2200,6 +2180,25 @@ class PeerEventsViewSet(viewsets.ReadOnlyModelViewSet):
     serializer_class = PeerEventsSerializer
     filterset_fields = {'chan_id': ['exact'], 'id': ['lt']}

+class TradeSalesViewSet(viewsets.ReadOnlyModelViewSet):
+    permission_classes = [IsAuthenticated] if settings.LOGIN_REQUIRED else []
+    queryset = TradeSales.objects.all()
+    serializer_class = TradeSalesSerializer
+
+    def update(self, request, pk):
+        rebalance = get_object_or_404(self.queryset, pk=pk)
+        serializer = self.get_serializer(rebalance, data=request.data, context={'request': request}, partial=True)
+        if serializer.is_valid():
+            serializer.save()
+            return Response(serializer.data)
+        return Response(serializer.errors)
+
+class FeeLogViewSet(viewsets.ReadOnlyModelViewSet):
+    permission_classes = [IsAuthenticated] if settings.LOGIN_REQUIRED else []
+    queryset = Autofees.objects.all().order_by('-id')
+    serializer_class = FeeLogSerializer
+    filterset_fields = {'chan_id': ['exact'], 'id': ['lt']}
+
 class FailedHTLCFilter(FilterSet):
     chan_in_or_out = CharFilter(method='filter_chan_in_or_out', label='Chan In Or Out')
     chan_id_in = CharFilter(field_name='chan_id_in', lookup_expr='exact')
@@ -2273,6 +2272,160 @@ def update(self, request, pk):
         return Response(serializer.data)
     return Response(serializer.errors)

+@api_view(['GET'])
+@is_login_required(permission_classes([IsAuthenticated]), settings.LOGIN_REQUIRED)
+def forwards_summary(request):
+    filter_1day = datetime.now() - timedelta(days=1)
+    filter_7day = datetime.now() - timedelta(days=7)
+    summary_out = Forwards.objects.values(chan_id=F('chan_id_out')).annotate(
+        count_outgoing_1day=Count('id', filter=Q(forward_date__gte=filter_1day)),
+        sum_outgoing_1day=Coalesce(Sum('amt_out_msat', filter=Q(forward_date__gte=filter_1day)), 0),
+        count_outgoing_7day=Count('id', filter=Q(forward_date__gte=filter_7day)),
+        sum_outgoing_7day=Coalesce(Sum('amt_out_msat', filter=Q(forward_date__gte=filter_7day)), 0),
+        sum_fees_1day=Coalesce(Sum('fee', filter=Q(forward_date__gte=filter_1day)), 0.0),
+        sum_fees_7day=Coalesce(Sum('fee', filter=Q(forward_date__gte=filter_7day)), 0.0),
+        count_incoming_1day=Value(0),
+        sum_incoming_1day=Value(0),
+        count_incoming_7day=Value(0),
+        sum_incoming_7day=Value(0)
+    ).filter(
+        Q(count_outgoing_1day__gt=0) |
+        Q(sum_outgoing_1day__gt=0) |
+        Q(count_outgoing_7day__gt=0) |
+        Q(sum_outgoing_7day__gt=0) |
+        Q(sum_fees_1day__gt=0) |
+        Q(sum_fees_7day__gt=0)
+    )
+
+    summary_in = Forwards.objects.values(chan_id=F('chan_id_in')).annotate(
+        count_outgoing_1day=Value(0),
+        sum_outgoing_1day=Value(0),
+        count_outgoing_7day=Value(0),
+        sum_outgoing_7day=Value(0),
+        sum_fees_1day=Value(0),
+        sum_fees_7day=Value(0),
+        count_incoming_1day=Count('id', filter=Q(forward_date__gte=filter_1day)),
+        sum_incoming_1day=Coalesce(Sum('amt_in_msat', filter=Q(forward_date__gte=filter_1day)), 0),
+        count_incoming_7day=Count('id', filter=Q(forward_date__gte=filter_7day)),
+        sum_incoming_7day=Coalesce(Sum('amt_in_msat', filter=Q(forward_date__gte=filter_7day)), 0)
+    ).filter(
+        Q(count_incoming_1day__gt=0) |
+        Q(sum_incoming_1day__gt=0) |
+        Q(count_incoming_7day__gt=0) |
+        Q(sum_incoming_7day__gt=0)
+    )
+
+    return Response({'results': summary_out.union(summary_in)})
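Because `forwards_summary` unions an outgoing-keyed queryset with an incoming-keyed one, a channel that forwarded in both directions arrives as two rows whose unused columns are zero-filled. A minimal, assumed client-side merge:

```python
# Sketch of merging the unioned forwards_summary rows per channel.
# Field handling is inferred from the annotations above; the zero-filled side
# of each row simply drops out of the sums.
from collections import defaultdict

def merge_summary(rows):
    merged = defaultdict(lambda: defaultdict(float))
    for row in rows:
        totals = merged[row['chan_id']]
        for key, value in row.items():
            if key != 'chan_id':
                totals[key] += value
    return merged

# rows = requests.get('http://localhost:8889/api/forwards_summary/').json()['results']
```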
+@api_view(['GET'])
+@is_login_required(permission_classes([IsAuthenticated]), settings.LOGIN_REQUIRED)
+def node_info(request):
+    stub = lnrpc.LightningStub(lnd_connect())
+    node_info = stub.GetInfo(ln.GetInfoRequest())
+    balances = stub.WalletBalance(ln.WalletBalanceRequest())
+    pending_channels = stub.PendingChannels(ln.PendingChannelsRequest())
+
+    limbo_balance = pending_channels.total_limbo_balance
+    pending_open = None
+    pending_closed = None
+    pending_force_closed = None
+    waiting_for_close = None
+    pending_open_balance = 0
+    pending_closing_balance = 0
+    if pending_channels.pending_open_channels:
+        target_resp = pending_channels.pending_open_channels
+        peers = Peers.objects.all()
+        pending_changes = PendingChannels.objects.all()
+        pending_open = []
+        inbound_setting = int(LocalSettings.objects.filter(key='AR-Inbound%')[0].value) if LocalSettings.objects.filter(key='AR-Inbound%').exists() else 90
+        outbound_setting = int(LocalSettings.objects.filter(key='AR-Outbound%')[0].value) if LocalSettings.objects.filter(key='AR-Outbound%').exists() else 75
+        amt_setting = float(LocalSettings.objects.filter(key='AR-Target%')[0].value) if LocalSettings.objects.filter(key='AR-Target%').exists() else 3
+        cost_setting = int(LocalSettings.objects.filter(key='AR-MaxCost%')[0].value) if LocalSettings.objects.filter(key='AR-MaxCost%').exists() else 65
+        auto_fees = int(LocalSettings.objects.filter(key='AF-Enabled')[0].value) if LocalSettings.objects.filter(key='AF-Enabled').exists() else 0
+        for i in range(0,len(target_resp)):
+            item = {}
+            pending_open_balance += target_resp[i].channel.local_balance
+            funding_txid = target_resp[i].channel.channel_point.split(':')[0]
+            output_index = target_resp[i].channel.channel_point.split(':')[1]
+            updated = pending_changes.filter(funding_txid=funding_txid,output_index=output_index).exists()
+            item['alias'] = peers.filter(pubkey=target_resp[i].channel.remote_node_pub)[0].alias if peers.filter(pubkey=target_resp[i].channel.remote_node_pub).exists() else ''
+            item['remote_node_pub'] = target_resp[i].channel.remote_node_pub
+            item['channel_point'] = target_resp[i].channel.channel_point
+            item['funding_txid'] = funding_txid
+            item['output_index'] = output_index
+            item['capacity'] = target_resp[i].channel.capacity
+            item['local_balance'] = target_resp[i].channel.local_balance
+            item['remote_balance'] = target_resp[i].channel.remote_balance
+            item['local_chan_reserve_sat'] = target_resp[i].channel.local_chan_reserve_sat
+            item['remote_chan_reserve_sat'] = target_resp[i].channel.remote_chan_reserve_sat
+            item['initiator'] = target_resp[i].channel.initiator
+            item['commitment_type'] = target_resp[i].channel.commitment_type
+            item['commit_fee'] = target_resp[i].commit_fee
+            item['commit_weight'] = target_resp[i].commit_weight
+            item['fee_per_kw'] = target_resp[i].fee_per_kw
+            item['local_base_fee'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].local_base_fee if updated else ''
+            item['local_fee_rate'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].local_fee_rate if updated else ''
+            item['local_cltv'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].local_cltv if updated else ''
+            item['auto_rebalance'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].auto_rebalance if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].auto_rebalance != None else False
+            item['ar_amt_target'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_amt_target if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_amt_target != None else int((amt_setting/100) * target_resp[i].channel.capacity)
+            item['ar_in_target'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_in_target if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_in_target != None else inbound_setting
+            item['ar_out_target'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_out_target if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_out_target != None else outbound_setting
+            item['ar_max_cost'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_max_cost if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].ar_max_cost != None else cost_setting
+            item['auto_fees'] = pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].auto_fees if updated and pending_changes.filter(funding_txid=funding_txid,output_index=output_index)[0].auto_fees != None else (False if auto_fees == 0 else True)
+            pending_open.append(item)
+    if pending_channels.pending_closing_channels:
+        target_resp = pending_channels.pending_closing_channels
+        pending_closed = []
+        for i in range(0,len(target_resp)):
+            pending_item = {'remote_node_pub':target_resp[i].channel.remote_node_pub,'channel_point':target_resp[i].channel.channel_point,'capacity':target_resp[i].channel.capacity,'local_balance':target_resp[i].channel.local_balance,'remote_balance':target_resp[i].channel.remote_balance,'local_chan_reserve_sat':target_resp[i].channel.local_chan_reserve_sat,
+                'remote_chan_reserve_sat':target_resp[i].channel.remote_chan_reserve_sat,'initiator':target_resp[i].channel.initiator,'commitment_type':target_resp[i].channel.commitment_type, 'local_commit_fee_sat': target_resp[i].commitments.local_commit_fee_sat,'limbo_balance':target_resp[i].limbo_balance,'closing_txid':target_resp[i].closing_txid}
+            pending_item.update(pending_channel_details(target_resp[i].channel.channel_point))
+            pending_closed.append(pending_item)
+    if pending_channels.pending_force_closing_channels:
+        target_resp = pending_channels.pending_force_closing_channels
+        pending_force_closed = []
+        for i in range(0,len(target_resp)):
+            pending_item = {'remote_node_pub':target_resp[i].channel.remote_node_pub,'channel_point':target_resp[i].channel.channel_point,'capacity':target_resp[i].channel.capacity,'local_balance':target_resp[i].channel.local_balance,'remote_balance':target_resp[i].channel.remote_balance,'initiator':target_resp[i].channel.initiator,
+                'commitment_type':target_resp[i].channel.commitment_type,'closing_txid':target_resp[i].closing_txid,'limbo_balance':target_resp[i].limbo_balance,'maturity_height':target_resp[i].maturity_height,'blocks_til_maturity':target_resp[i].blocks_til_maturity if target_resp[i].blocks_til_maturity > 0 else find_next_block_maturity(target_resp[i]),
+                'maturity_datetime':(datetime.now()+timedelta(minutes=(10*target_resp[i].blocks_til_maturity if target_resp[i].blocks_til_maturity > 0 else 10*find_next_block_maturity(target_resp[i]) )))}
+            pending_item.update(pending_channel_details(target_resp[i].channel.channel_point))
+            pending_force_closed.append(pending_item)
+    if pending_channels.waiting_close_channels:
+        target_resp = pending_channels.waiting_close_channels
+        waiting_for_close = []
+        for i in range(0,len(target_resp)):
+            pending_closing_balance += target_resp[i].limbo_balance
+            pending_item = {'remote_node_pub':target_resp[i].channel.remote_node_pub,'channel_point':target_resp[i].channel.channel_point,'capacity':target_resp[i].channel.capacity,'local_balance':target_resp[i].channel.local_balance,'remote_balance':target_resp[i].channel.remote_balance,'local_chan_reserve_sat':target_resp[i].channel.local_chan_reserve_sat,
+                'remote_chan_reserve_sat':target_resp[i].channel.remote_chan_reserve_sat,'initiator':target_resp[i].channel.initiator,'commitment_type':target_resp[i].channel.commitment_type, 'local_commit_fee_sat': target_resp[i].commitments.local_commit_fee_sat, 'limbo_balance':target_resp[i].limbo_balance,'closing_txid':target_resp[i].closing_txid}
+            pending_item.update(pending_channel_details(target_resp[i].channel.channel_point))
+            waiting_for_close.append(pending_item)
+    limbo_balance -= pending_closing_balance
+    try:
+        db_size = round(path.getsize(path.expanduser(settings.LND_DATABASE_PATH))*0.000000001, 3)
+    except:
+        db_size = 0
+    return Response({
+        'num_peers': node_info.num_peers,
+        'synced_to_graph': node_info.synced_to_graph,
+        'synced_to_chain': node_info.synced_to_chain,
+        'num_active_channels': node_info.num_active_channels,
+        'num_inactive_channels': node_info.num_inactive_channels,
+        'chains': [chain.chain+"-"+chain.network for chain in node_info.chains],
+        'block': {'hash': node_info.block_hash, 'height': node_info.block_height},
+        'balance': {
+            'limbo': limbo_balance,
+            'onchain': balances.total_balance,
+            'confirmed': balances.confirmed_balance,
+            'unconfirmed': balances.unconfirmed_balance,
+            'total': balances.total_balance + pending_open_balance + limbo_balance,
+        },
+        'pending_open': pending_open,
+        'pending_closed': pending_closed,
+        'pending_force_closed': pending_force_closed,
+        'waiting_for_close': waiting_for_close,
+        'db_size': db_size
+    })
+
 @api_view(['POST'])
 @is_login_required(permission_classes([IsAuthenticated]), settings.LOGIN_REQUIRED)
 def connect_peer(request):
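The heavy pending-channel and balance payload that the old home view assembled server-side is now fetched on demand from the new endpoint. A minimal consumption sketch before continuing with the connect_peer changes; the URL and auth are assumptions:

```python
# Sketch of consuming /api/node_info/; field names mirror the Response above.
import requests

info = requests.get('http://localhost:8889/api/node_info/').json()
print(info['block']['height'], info['balance']['total'], info['db_size'])
for chan in info.get('pending_open') or []:
    print(chan['alias'], chan['capacity'], chan['ar_amt_target'])
```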
@@ -2288,10 +2441,37 @@
         elif peer_id.count('@') == 1 and len(peer_id.split('@')[0]) == 66:
             peer_pubkey, host = peer_id.split('@')
         else:
-            raise Exception('Invalid peer pubkey or connection string.')
+            return Response({'error': 'Invalid peer pubkey or connection string.'})
         ln_addr = ln.LightningAddress(pubkey=peer_pubkey, host=host)
-        response = stub.ConnectPeer(ln.ConnectPeerRequest(addr=ln_addr))
-        return Response({'message': 'Connection successful!' + str(response)})
+        stub.ConnectPeer(ln.ConnectPeerRequest(addr=ln_addr))
+        return Response({'message': 'Connection successful!'})
+    except Exception as e:
+        error = str(e)
+        details_index = error.find('details =') + 11
+        debug_error_index = error.find('debug_error_string =') - 3
+        error_msg = error[details_index:debug_error_index]
+        return Response({'error': 'Connection request failed! Error: ' + error_msg})
+    else:
+        return Response({'error': 'Invalid request!'})
+
+@api_view(['POST'])
+@is_login_required(permission_classes([IsAuthenticated]), settings.LOGIN_REQUIRED)
+def disconnect_peer(request):
+    serializer = DisconnectPeerSerializer(data=request.data)
+    if serializer.is_valid():
+        try:
+            stub = lnrpc.LightningStub(lnd_connect())
+            peer_id = serializer.validated_data['peer_id']
+            if len(peer_id) == 66:
+                peer_pubkey = peer_id
+            else:
+                return Response({'error': 'Invalid peer pubkey.'})
+            stub.DisconnectPeer(ln.DisconnectPeerRequest(pub_key=peer_pubkey))
+            if Peers.objects.filter(pubkey=peer_id).exists():
+                db_peer = Peers.objects.filter(pubkey=peer_id)[0]
+                db_peer.connected = False
+                db_peer.save()
+            return Response({'message': 'Disconnection successful!'})
     except Exception as e:
         error = str(e)
         details_index = error.find('details =') + 11
@@ -2697,7 +2877,7 @@ def broadcast_tx(request):
         if response.publish_error == '':
             return Response({'message': f'Successfully broadcast tx!'})
         else:
-            return Response({'message': f'Error while broadcasting TX: {response.publish_error}'})
+            return Response({'error': f'Error while broadcasting TX: {response.publish_error}'})
     except Exception as e:
         error = str(e)
@@ -2705,4 +2885,63 @@
         error_msg = error[details_index:debug_error_index]
         return Response({'error': f'TX broadcast failed! Error: {error_msg}'})
     else:
-        return Response({'error': 'Invalid request!'})
\ No newline at end of file
+        return Response({'error': 'Invalid request!'})
+
+@api_view(['POST'])
+@is_login_required(permission_classes([IsAuthenticated]), settings.LOGIN_REQUIRED)
+def create_trade(request):
+    serializer = CreateTradeSerializer(data=request.data)
+    if serializer.is_valid():
+        description = serializer.validated_data['description']
+        price = serializer.validated_data['price']
+        sale_type = serializer.validated_data['type']
+        secret = serializer.validated_data['secret']
+        expiry = serializer.validated_data['expiry']
+        sale_limit = serializer.validated_data['sale_limit']
+        trade_id = token_bytes(32).hex()
+        try:
+            new_trade = TradeSales(id=trade_id, description=description, price=price, secret=secret, expiry=expiry, sale_type=sale_type, sale_limit=sale_limit)
+            new_trade.save()
+            return Response({'message': f'Created trade: {description}', 'id': new_trade.id, 'description': new_trade.description, 'price': new_trade.price, 'expiry': new_trade.expiry, 'sale_type': new_trade.sale_type, 'secret': new_trade.secret, 'sale_count': new_trade.sale_count, 'sale_limit': new_trade.sale_limit})
+        except Exception as e:
+            error = str(e)
+            return Response({'error': f'Error creating trade: {error}'})
+    else:
+        return Response({'error': serializer.error_messages})
+
+@api_view(['POST'])
+@is_login_required(permission_classes([IsAuthenticated]), settings.LOGIN_REQUIRED)
+def reset_api(request):
+    serializer = ResetSerializer(data=request.data)
+    if serializer.is_valid():
+        table = serializer.validated_data['table']
+        tables = {
+            'Forwards': Forwards.objects.all(),
+            'Payments': Payments.objects.all(),
+            'PaymentHops': PaymentHops.objects.all(),
+            'Invoices': Invoices.objects.all(),
+            'Rebalancer': Rebalancer.objects.all(),
+            'Closures': Closures.objects.all(),
+            'Resolutions': Resolutions.objects.all(),
+            'Peers': Peers.objects.all(),
+            'Channels': Channels.objects.all(),
+            'PendingChannels': PendingChannels.objects.all(),
+            'Onchain': Onchain.objects.all(),
+            'PendingHTLCs': PendingHTLCs.objects.all(),
+            'FailedHTLCs': FailedHTLCs.objects.all(),
+            'HistFailedHTLC': HistFailedHTLC.objects.all(),
+            'Autopilot': Autopilot.objects.all(),
+            'Autofees': Autofees.objects.all(),
+            'AvoidNodes': AvoidNodes.objects.all(),
+            'PeerEvents': PeerEvents.objects.all(),
+            'LocalSettings': LocalSettings.objects.all()
+        }
+        try:
+            target_table = tables[table]
+            target_table.delete()
+            return Response({'message': f'Successfully deleted table: {table}'})
+        except Exception as e:
+            error = str(e)
+            return Response({'error': f'Error deleting table: {error}'})
+    else:
+        return Response({'error': serializer.error_messages})
\ No newline at end of file
Channels.objects.filter(chan_id=in_chan_id)[0] if Channels.objects.filter(chan_id=in_chan_id).exists() else None - out_chan = Channels.objects.filter(chan_id=out_chan_id)[0] if Channels.objects.filter(chan_id=out_chan_id).exists() else None - in_chan_alias = in_chan.alias if in_chan is not None else None - out_chan_alias = out_chan.alias if out_chan is not None else None - out_chan_liq = max(0, (out_chan.local_balance - out_chan.local_chan_reserve)) if out_chan is not None else None - out_chan_pending = out_chan.pending_outbound if out_chan is not None else None - amount = int(response.link_fail_event.info.outgoing_amt_msat/1000) - wire_failure = response.link_fail_event.wire_failure - failure_detail = response.link_fail_event.failure_detail - missed_fee = (response.link_fail_event.info.incoming_amt_msat - response.link_fail_event.info.outgoing_amt_msat)/1000 - FailedHTLCs(amount=amount, chan_id_in=in_chan_id, chan_id_out=out_chan_id, chan_in_alias=in_chan_alias, chan_out_alias=out_chan_alias, chan_out_liq=out_chan_liq, chan_out_pending=out_chan_pending, wire_failure=wire_failure, failure_detail=failure_detail, missed_fee=missed_fee).save() - elif response.event_type == 3 and str(response.forward_event) != '': - # Add forward_event - key = str(response.incoming_channel_id) + str(response.outgoing_channel_id) + str(response.incoming_htlc_id) + str(response.outgoing_htlc_id) - all_forwards[key] = response.forward_event - elif response.event_type == 3 and str(response.settle_event) != '': - # Delete forward_event - key = str(response.incoming_channel_id) + str(response.outgoing_channel_id) + str(response.incoming_htlc_id) + str(response.outgoing_htlc_id) - if key in all_forwards.keys(): - del all_forwards[key] - elif response.event_type == 3 and str(response.forward_fail_event) == '': - key = str(response.incoming_channel_id) + str(response.outgoing_channel_id) + str(response.incoming_htlc_id) + str(response.outgoing_htlc_id) - if key in all_forwards.keys(): - forward_event = all_forwards[key] + while True: + try: + print(f"{datetime.now().strftime('%c')} : [HTLC] : Starting failed HTLC stream...") + connection = lnd_connect() + routerstub = lnrouter.RouterStub(connection) + all_forwards = {} + for response in routerstub.SubscribeHtlcEvents(lnr.SubscribeHtlcEventsRequest()): + if response.event_type == 3 and str(response.link_fail_event) != '': in_chan_id = response.incoming_channel_id out_chan_id = response.outgoing_channel_id in_chan = Channels.objects.filter(chan_id=in_chan_id)[0] if Channels.objects.filter(chan_id=in_chan_id).exists() else None @@ -49,15 +26,41 @@ def main(): out_chan_alias = out_chan.alias if out_chan is not None else None out_chan_liq = max(0, (out_chan.local_balance - out_chan.local_chan_reserve)) if out_chan is not None else None out_chan_pending = out_chan.pending_outbound if out_chan is not None else None - amount = int(forward_event.info.incoming_amt_msat/1000) - wire_failure = 99 - failure_detail = 99 - missed_fee = (forward_event.info.incoming_amt_msat - forward_event.info.outgoing_amt_msat)/1000 + amount = int(response.link_fail_event.info.outgoing_amt_msat/1000) + wire_failure = response.link_fail_event.wire_failure + failure_detail = response.link_fail_event.failure_detail + missed_fee = (response.link_fail_event.info.incoming_amt_msat - response.link_fail_event.info.outgoing_amt_msat)/1000 FailedHTLCs(amount=amount, chan_id_in=in_chan_id, chan_id_out=out_chan_id, chan_in_alias=in_chan_alias, chan_out_alias=out_chan_alias, chan_out_liq=out_chan_liq, 
chan_out_pending=out_chan_pending, wire_failure=wire_failure, failure_detail=failure_detail, missed_fee=missed_fee).save() - del all_forwards[key] - except Exception as e: - print('Error while running failed HTLC stream: ' + str(e)) - sleep(20) + elif response.event_type == 3 and str(response.forward_event) != '': + # Add forward_event + key = str(response.incoming_channel_id) + str(response.outgoing_channel_id) + str(response.incoming_htlc_id) + str(response.outgoing_htlc_id) + all_forwards[key] = response.forward_event + elif response.event_type == 3 and str(response.settle_event) != '': + # Delete forward_event + key = str(response.incoming_channel_id) + str(response.outgoing_channel_id) + str(response.incoming_htlc_id) + str(response.outgoing_htlc_id) + if key in all_forwards.keys(): + del all_forwards[key] + elif response.event_type == 3 and str(response.forward_fail_event) == '': + key = str(response.incoming_channel_id) + str(response.outgoing_channel_id) + str(response.incoming_htlc_id) + str(response.outgoing_htlc_id) + if key in all_forwards.keys(): + forward_event = all_forwards[key] + in_chan_id = response.incoming_channel_id + out_chan_id = response.outgoing_channel_id + in_chan = Channels.objects.filter(chan_id=in_chan_id)[0] if Channels.objects.filter(chan_id=in_chan_id).exists() else None + out_chan = Channels.objects.filter(chan_id=out_chan_id)[0] if Channels.objects.filter(chan_id=out_chan_id).exists() else None + in_chan_alias = in_chan.alias if in_chan is not None else None + out_chan_alias = out_chan.alias if out_chan is not None else None + out_chan_liq = max(0, (out_chan.local_balance - out_chan.local_chan_reserve)) if out_chan is not None else None + out_chan_pending = out_chan.pending_outbound if out_chan is not None else None + amount = int(forward_event.info.incoming_amt_msat/1000) + wire_failure = 99 + failure_detail = 99 + missed_fee = (forward_event.info.incoming_amt_msat - forward_event.info.outgoing_amt_msat)/1000 + FailedHTLCs(amount=amount, chan_id_in=in_chan_id, chan_id_out=out_chan_id, chan_in_alias=in_chan_alias, chan_out_alias=out_chan_alias, chan_out_liq=out_chan_liq, chan_out_pending=out_chan_pending, wire_failure=wire_failure, failure_detail=failure_detail, missed_fee=missed_fee).save() + del all_forwards[key] + except Exception as e: + print(f"{datetime.now().strftime('%c')} : [HTLC] : Error while running failed HTLC stream: {str(e)}") + sleep(20) if __name__ == '__main__': main() \ No newline at end of file diff --git a/initialize.py b/initialize.py index 2b939bdf..2793a5c0 100644 --- a/initialize.py +++ b/initialize.py @@ -1,11 +1,11 @@ -import secrets, argparse, django, os +import os, secrets, argparse, django from pathlib import Path from django.core.management import call_command from django.contrib.auth import get_user_model from django.conf import settings BASE_DIR = Path(__file__).resolve().parent -def write_settings(node_ip, lnd_tls_path, lnd_macaroon_path, lnd_database_path, lnd_network, lnd_rpc_server, lnd_max_message, whitenoise, debug, csrftrusted, nologinrequired): +def write_settings(node_ip, lnd_tls_path, lnd_macaroon_path, lnd_database_path, lnd_network, lnd_rpc_server, lnd_max_message, whitenoise, debug, csrftrusted, nologinrequired, force_new, cookie_age): #Generate a unique secret to be used for your django site secret = secrets.token_urlsafe(64) if whitenoise: @@ -15,7 +15,7 @@ def write_settings(node_ip, lnd_tls_path, lnd_macaroon_path, lnd_database_path, wnl = '' if csrftrusted: csrf = """ -CSRF_TRUSTED_ORIGINS = [%s] 
+CSRF_TRUSTED_ORIGINS = ['%s']
""" % (csrftrusted)
    else:
        csrf = ''
@@ -77,7 +77,6 @@ def write_settings(node_ip, lnd_tls_path, lnd_macaroon_path, lnd_database_path,
    'gui',
    'rest_framework',
    'django_filters',
-    'qr_code',
]

MIDDLEWARE = [
@@ -171,19 +170,16 @@ def write_settings(node_ip, lnd_tls_path, lnd_macaroon_path, lnd_database_path,
STATIC_URL = 'static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'gui/static/')
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
-''' % (secret, debug, node_ip, csrf, lnd_tls_path, lnd_macaroon_path, lnd_database_path, lnd_network, lnd_rpc_server, lnd_max_message, not nologinrequired, wnl, api_login)
-    try:
-        f = open("lndg/settings.py", "x")
-        f.close()
-    except:
-        print('A settings file may already exist, skipping creation...')
+SESSION_COOKIE_AGE = %s
+''' % (secret, debug, node_ip, csrf, lnd_tls_path, lnd_macaroon_path, lnd_database_path, lnd_network, lnd_rpc_server, lnd_max_message, not nologinrequired, wnl, api_login, cookie_age)
+    if not force_new and Path("lndg/settings.py").exists():
+        print('A settings file already exists, skipping creation...')
        return
    try:
-        f = open("lndg/settings.py", "w")
-        f.write(settings_file)
-        f.close()
+        with open("lndg/settings.py", "w") as f:
+            f.write(settings_file)
    except Exception as e:
-        print('Error creating the settings file:', e)
+        print('Error creating the settings file: ', str(e))

def write_supervisord_settings(sduser):
    supervisord_secret = secrets.token_urlsafe(16)
@@ -212,94 +208,26 @@ def write_supervisord_settings(sduser):
[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface

-[program:jobs]
-command = sh -c "python jobs.py && sleep 15"
-process_name = lndg-jobs
-directory = %s
-autorestart = true
-redirect_stderr = true
-stdout_logfile = /var/log/lndg-jobs.log
-stdout_logfile_maxbytes = 150MB
-stdout_logfile_backups = 15
-
-[program:rebalancer]
-command = sh -c "python rebalancer.py && sleep 15"
-process_name = lndg-rebalancer
-directory = %s
-autorestart = true
-redirect_stderr = true
-stdout_logfile = /var/log/lndg-rebalancer.log
-stdout_logfile_maxbytes = 150MB
-stdout_logfile_backups = 15
-
-[program:htlc-stream]
-command = sh -c "python htlc_stream.py && sleep 15"
-process_name = lndg-htlc-stream
+[program:controller]
+command = sh -c "python controller.py && sleep 15"
+process_name = lndg-controller
directory = %s
autorestart = true
redirect_stderr = true
-stdout_logfile = /var/log/lndg-htlc-stream.log
+stdout_logfile = %s/data/lndg-controller.log
stdout_logfile_maxbytes = 150MB
stdout_logfile_backups = 15
-''' % (sduser, supervisord_secret, supervisord_secret, BASE_DIR, BASE_DIR, BASE_DIR)
-    try:
-        f = open("/usr/local/etc/supervisord.conf", "x")
-        f.close()
-    except:
-        print('A supervisord settings file may already exist, skipping creation...')
+''' % (sduser, supervisord_secret, supervisord_secret, BASE_DIR, BASE_DIR)
+    if Path("/usr/local/etc/supervisord.conf").exists():
+        print('A supervisord settings file already exists, skipping creation...')
        return
    try:
-        f = open("/usr/local/etc/supervisord.conf", "w")
-        f.write(supervisord_settings_file)
-        f.close()
+        with open("/usr/local/etc/supervisord.conf", "w") as f:
+            f.write(supervisord_settings_file)
    except Exception as e:
-        print('Error creating the settings file:', str(e))
+        print('Error creating the settings file: ', str(e))

-def main():
-    help_msg = "LNDg Initializer"
-    parser = argparse.ArgumentParser(description = help_msg)
-    parser.add_argument('-ip', '--nodeip',help = 'IP that will be used 
to access the LNDg page', default='*') - parser.add_argument('-dir', '--lnddir',help = 'LND Directory for tls cert and admin macaroon paths', default=None) - parser.add_argument('-net', '--network', help = 'Network LND will run over', default='mainnet') - parser.add_argument('-server', '--rpcserver', help = 'Server address to use for rpc communications with LND', default='localhost:10009') - parser.add_argument('-maxmsg', '--maxmessage', help = 'Maximum message size for grpc communications (MB)', default='35') - parser.add_argument('-sd', '--supervisord', help = 'Setup supervisord to run jobs/rebalancer background processes', action='store_true') - parser.add_argument('-sdu', '--sduser', help = 'Configure supervisord with a non-root user', default='root') - parser.add_argument('-wn', '--whitenoise', help = 'Add whitenoise middleware (docker requirement for static files)', action='store_true') - parser.add_argument('-d', '--docker', help = 'Single option for docker container setup (supervisord + whitenoise)', action='store_true') - parser.add_argument('-dx', '--debug', help = 'Setup the django site in debug mode', action='store_true') - parser.add_argument('-u', '--adminuser', help = 'Setup a custom admin username', default='lndg-admin') - parser.add_argument('-pw', '--adminpw', help = 'Setup a custom admin password', default=None) - parser.add_argument('-csrf', '--csrftrusted', help = 'Set trusted CSRF origins', default=None) - parser.add_argument('-tls', '--tlscert', help = 'Set the path to a custom tls cert', default=None) - parser.add_argument('-mcrn', '--macaroon', help = 'Set the path to a custom macroon file', default=None) - parser.add_argument('-lnddb', '--lnddatabase', help = 'Set the path to the channel.db for monitoring', default=None) - parser.add_argument('-nologin', '--nologinrequired', help = 'By default, force all connections to be authenticated', action='store_true') - args = parser.parse_args() - node_ip = args.nodeip - lnd_dir_path = args.lnddir if args.lnddir else '~/.lnd' - lnd_network = args.network - lnd_rpc_server = args.rpcserver - lnd_max_message = args.maxmessage - setup_supervisord = args.supervisord - sduser = args.sduser - whitenoise = args.whitenoise - docker = args.docker - debug = args.debug - adminuser = args.adminuser - adminpw = args.adminpw - csrftrusted = args.csrftrusted - nologinrequired = args.nologinrequired - lnd_tls_path = args.tlscert if args.tlscert else lnd_dir_path + '/tls.cert' - lnd_macaroon_path = args.macaroon if args.macaroon else lnd_dir_path + '/data/chain/bitcoin/' + lnd_network + '/admin.macaroon' - lnd_database_path = args.lnddatabase if args.lnddatabase else lnd_dir_path + '/data/graph/' + lnd_network + '/channel.db' - if docker: - setup_supervisord = True - whitenoise = True - write_settings(node_ip, lnd_tls_path, lnd_macaroon_path, lnd_database_path, lnd_network, lnd_rpc_server, lnd_max_message, whitenoise, debug, csrftrusted, nologinrequired) - if setup_supervisord: - print('Supervisord setup requested...') - write_supervisord_settings(sduser) +def initialize_django(adminuser, adminpw): try: DATA_DIR = os.path.join(BASE_DIR, 'data') try: @@ -353,5 +281,56 @@ def main(): except Exception as e: print('Error initializing django:', str(e)) +def main(): + help_msg = "LNDg Initializer" + parser = argparse.ArgumentParser(description = help_msg) + parser.add_argument('-ip', '--nodeip',help = 'IP that will be used to access the LNDg page', default='*') + parser.add_argument('-dir', '--lnddir',help = 'LND Directory for tls cert and admin 
macaroon paths', default=None)
+    parser.add_argument('-net', '--network', help = 'Network LND will run over', default='mainnet')
+    parser.add_argument('-server', '--rpcserver', help = 'Server address to use for rpc communications with LND', default='localhost:10009')
+    parser.add_argument('-maxmsg', '--maxmessage', help = 'Maximum message size for grpc communications (MB)', default='35')
+    parser.add_argument('-sd', '--supervisord', help = 'Setup supervisord to run jobs/rebalancer background processes', action='store_true')
+    parser.add_argument('-sdu', '--sduser', help = 'Configure supervisord with a non-root user', default='root')
+    parser.add_argument('-wn', '--whitenoise', help = 'Add whitenoise middleware (docker requirement for static files)', action='store_true')
+    parser.add_argument('-d', '--docker', help = 'Single option for docker container setup (supervisord + whitenoise)', action='store_true')
+    parser.add_argument('-dx', '--debug', help = 'Setup the django site in debug mode', action='store_true')
+    parser.add_argument('-u', '--adminuser', help = 'Setup a custom admin username', default='lndg-admin')
+    parser.add_argument('-pw', '--adminpw', help = 'Setup a custom admin password', default=None)
+    parser.add_argument('-csrf', '--csrftrusted', help = 'Set trusted CSRF origins', default=None)
+    parser.add_argument('-tls', '--tlscert', help = 'Set the path to a custom tls cert', default=None)
+    parser.add_argument('-mcrn', '--macaroon', help = 'Set the path to a custom macaroon file', default=None)
+    parser.add_argument('-lnddb', '--lnddatabase', help = 'Set the path to the channel.db for monitoring', default=None)
+    parser.add_argument('-nologin', '--nologinrequired', help = 'By default, force all connections to be authenticated', action='store_true')
+    parser.add_argument('-sessionage', '--sessioncookieage', help = 'Number of seconds before the login session expires', default='1209600')
+    parser.add_argument('-f', '--force', help = 'Force a new settings file to be created', action='store_true')
+    args = parser.parse_args()
+    node_ip = args.nodeip
+    lnd_dir_path = args.lnddir if args.lnddir else '~/.lnd'
+    lnd_network = args.network
+    lnd_rpc_server = args.rpcserver
+    lnd_max_message = args.maxmessage
+    setup_supervisord = args.supervisord
+    sduser = args.sduser
+    whitenoise = args.whitenoise
+    docker = args.docker
+    debug = args.debug
+    adminuser = args.adminuser
+    adminpw = args.adminpw
+    csrftrusted = args.csrftrusted
+    nologinrequired = args.nologinrequired
+    force_new = args.force
+    lnd_tls_path = args.tlscert if args.tlscert else lnd_dir_path + '/tls.cert'
+    lnd_macaroon_path = args.macaroon if args.macaroon else lnd_dir_path + '/data/chain/bitcoin/' + lnd_network + '/admin.macaroon'
+    lnd_database_path = args.lnddatabase if args.lnddatabase else lnd_dir_path + '/data/graph/' + lnd_network + '/channel.db'
+    cookie_age = int(args.sessioncookieage)
+    if docker:
+        setup_supervisord = True
+        whitenoise = True
+    write_settings(node_ip, lnd_tls_path, lnd_macaroon_path, lnd_database_path, lnd_network, lnd_rpc_server, lnd_max_message, whitenoise, debug, csrftrusted, nologinrequired, force_new, cookie_age)
+    if setup_supervisord:
+        print('Supervisord setup requested...')
+        write_supervisord_settings(sduser)
+    initialize_django(adminuser, adminpw)
+
if __name__ == '__main__':
-    main()
\ No newline at end of file
+    main()
diff --git a/jobs.py b/jobs.py
index 0f248eaa..e685543a 100644
--- a/jobs.py
+++ b/jobs.py
@@ -1,4 +1,5 @@
import django
+from time import sleep
from django.db.models import 
Max, Sum, Avg, Count from django.db.models.functions import TruncDay from datetime import datetime, timedelta @@ -9,12 +10,11 @@ from gui.lnd_deps.lnd_connect import lnd_connect from lndg import settings from os import environ -from pandas import DataFrame from requests import get environ['DJANGO_SETTINGS_MODULE'] = 'lndg.settings' django.setup() from gui.models import Payments, PaymentHops, Invoices, Forwards, Channels, Peers, Onchain, Closures, Resolutions, PendingHTLCs, LocalSettings, FailedHTLCs, Autofees, PendingChannels, HistFailedHTLC, PeerEvents -from gui import af +import af def update_payments(stub): self_pubkey = stub.GetInfo(ln.GetInfoRequest()).identity_pubkey @@ -30,13 +30,12 @@ def update_payments(stub): last_index = Payments.objects.aggregate(Max('index'))['index__max'] if Payments.objects.exists() else 0 payments = stub.ListPayments(ln.ListPaymentsRequest(include_incomplete=True, index_offset=last_index, max_payments=100)).payments for payment in payments: - #print(f"{datetime.now().strftime('%c')} : Processing New {payment.payment_index=} {payment.status=} {payment.payment_hash=}") try: new_payment = Payments(creation_date=datetime.fromtimestamp(payment.creation_date), payment_hash=payment.payment_hash, value=round(payment.value_msat/1000, 3), fee=round(payment.fee_msat/1000, 3), status=payment.status, index=payment.payment_index) new_payment.save() except Exception as e: #Error inserting, try to update instead - print(f"{datetime.now().strftime('%c')} : Error processing {new_payment=} : {str(e)=}") + print(f"{datetime.now().strftime('%c')} : [Data] : Error processing {new_payment}: {str(e)}") update_payment(stub, payment, self_pubkey) def update_payment(stub, payment, self_pubkey): @@ -67,8 +66,6 @@ def update_payment(stub, payment, self_pubkey): if hop_count == total_hops: # Add additional HTLC information in last hop alias alias += f'[ {payment.status}-{attempt.status}-{attempt.failure.code}-{attempt.failure.failure_source_index} ]' - #if hop_count == total_hops: - #print(f"{datetime.now().strftime('%c')} : Debug Hop {attempt.attempt_id=} {attempt.route.total_amt=} {hop.mpp_record.payment_addr.hex()=} {hop.mpp_record.total_amt_msat=} {hop.amp_record=} {db_payment.payment_hash=}") if attempt.status == 1 or attempt.status == 0 or (attempt.status == 2 and attempt.failure.code in (1,2,12)): PaymentHops(payment_hash=db_payment, attempt_id=attempt.attempt_id, step=hop_count, chan_id=hop.chan_id, alias=alias, chan_capacity=hop.chan_capacity, node_pubkey=hop.pub_key, amt=round(hop.amt_to_forward_msat/1000, 3), fee=round(fee, 3), cost_to=round(cost_to, 3)).save() cost_to += fee @@ -91,7 +88,6 @@ def update_payment(stub, payment, self_pubkey): def update_invoices(stub): open_invoices = Invoices.objects.filter(state=0).order_by('index') for open_invoice in open_invoices: - #print(f"{datetime.now().strftime('%c')} : Processing open invoice {open_invoice.index=} {open_invoice.state=} {open_invoice.r_hash=}") invoice_data = stub.ListInvoices(ln.ListInvoiceRequest(index_offset=open_invoice.index-1, num_max_invoices=1)).invoices if len(invoice_data) > 0 and open_invoice.r_hash == invoice_data[0].r_hash.hex(): update_invoice(stub, invoice_data[0], open_invoice) @@ -119,7 +115,7 @@ def update_invoice(stub, invoice, db_invoice): try: valid = signerstub.VerifyMessage(lns.VerifyMessageReq(msg=(records[34349339]+bytes.fromhex(self_pubkey)+records[34349343]+records[34349334]), signature=records[34349337], pubkey=records[34349339])).valid except: - print(f"{datetime.now().strftime('%c')} : 
Unable to validate signature on invoice: {invoice.r_hash.hex()}") + print(f"{datetime.now().strftime('%c')} : [Data] : Unable to validate signature on invoice: {invoice.r_hash.hex()}") valid = False sender = records[34349339].hex() if valid == True else None try: @@ -159,11 +155,21 @@ def update_forwards(stub): outgoing_peer_alias = Channels.objects.filter(chan_id=forward.chan_id_out)[0].remote_pubkey[:12] if outgoing_peer_alias == '' else outgoing_peer_alias Forwards(forward_date=datetime.fromtimestamp(forward.timestamp), chan_id_in=forward.chan_id_in, chan_id_out=forward.chan_id_out, chan_in_alias=incoming_peer_alias, chan_out_alias=outgoing_peer_alias, amt_in_msat=forward.amt_in_msat, amt_out_msat=forward.amt_out_msat, fee=round(forward.fee_msat/1000, 3)).save() +def disconnectpeer(stub, peer): + try: + stub.DisconnectPeer(ln.DisconnectPeerRequest(pub_key=peer.pubkey)) + print(f"{datetime.now().strftime('%c')} : [Data] : Disconnected peer {peer.alias} {peer.pubkey}") + peer.connected = False + peer.save() + except Exception as e: + print(f"{datetime.now().strftime('%c')} : [Data] : Error disconnecting peer {peer.alias} {peer.pubkey}: {str(e)}") + def update_channels(stub): counter = 0 chan_list = [] channels = stub.ListChannels(ln.ListChannelsRequest()).channels PendingHTLCs.objects.all().delete() + block_height = stub.GetInfo(ln.GetInfoRequest()).block_height for channel in channels: if Channels.objects.filter(chan_id=channel.chan_id).exists(): #Update the channel record with the most current data @@ -220,6 +226,15 @@ def update_channels(stub): else: pending_out += htlc.amount htlc_counter += 1 + if htlc.expiration_height - block_height <= 13: # If htlc is expiring within 13 blocks, disconnect peer to help resolve the stuck htlc + peer = Peers.objects.filter(pubkey=channel.remote_pubkey)[0] if Peers.objects.filter(pubkey=channel.remote_pubkey).exists() else None + if peer and (not peer.last_reconnected or (int((datetime.now() - peer.last_reconnected).total_seconds() / 60) > 10)): + print(f"{datetime.now().strftime('%c')} : [Data] : HTLC expiring at {htlc.expiration_height} and within 13 blocks of {block_height}, disconnecting peer {channel.remote_pubkey} to resolve HTLC: {htlc.hash_lock.hex()} ") + disconnectpeer(stub, peer) + peer.last_reconnected = datetime.now() + peer.save() + else: + print(f"{datetime.now().strftime('%c')} : [Data] : Could not find peer {channel.remote_pubkey} with expiring HTLC: {htlc.hash_lock.hex()}") db_channel.pending_outbound = pending_out db_channel.pending_inbound = pending_in db_channel.htlc_count = htlc_counter @@ -290,20 +305,25 @@ def update_channels(stub): if db_channel.remote_max_htlc_msat != remote_policy.max_htlc_msat: PeerEvents(chan_id=db_channel.chan_id, peer_alias=db_channel.alias, event='MaxHTLC', old_value=db_channel.remote_max_htlc_msat, new_value=remote_policy.max_htlc_msat, out_liq=(db_channel.local_balance + db_channel.pending_outbound)).save() db_channel.remote_max_htlc_msat = remote_policy.max_htlc_msat - except: - old_fee_rate = None - db_channel.local_base_fee = -1 if db_channel.local_base_fee is None else db_channel.local_base_fee - db_channel.local_fee_rate = -1 if db_channel.local_fee_rate is None else db_channel.local_fee_rate - db_channel.local_cltv = -1 if db_channel.local_cltv is None else db_channel.local_cltv - db_channel.local_disabled = False if db_channel.local_disabled is None else db_channel.local_disabled - db_channel.local_min_htlc_msat = -1 if db_channel.local_min_htlc_msat is None else db_channel.local_min_htlc_msat 
- db_channel.local_max_htlc_msat = -1 if db_channel.local_max_htlc_msat is None else db_channel.local_max_htlc_msat - db_channel.remote_base_fee = -1 if db_channel.remote_base_fee is None else db_channel.remote_base_fee - db_channel.remote_fee_rate = -1 if db_channel.remote_fee_rate is None else db_channel.remote_fee_rate - db_channel.remote_cltv = -1 if db_channel.remote_cltv is None else db_channel.remote_cltv - db_channel.remote_disabled = False if db_channel.remote_disabled is None else db_channel.remote_disabled - db_channel.remote_min_htlc_msat = -1 if db_channel.remote_min_htlc_msat is None else db_channel.remote_min_htlc_msat - db_channel.remote_max_htlc_msat = -1 if db_channel.remote_max_htlc_msat is None else db_channel.remote_max_htlc_msat + except Exception as e: # LND has not found the channel on the graph + print(f"{datetime.now().strftime('%c')} : [Data] : Error getting graph data for channel {db_channel.chan_id}: {str(e)}") + if pending_channel: # skip adding new channel to the list, LND may not have added to the graph yet + print(f"{datetime.now().strftime('%c')} : [Data] : Waiting for pending channel {db_channel.chan_id} to be added to the graph...") + continue + else: + old_fee_rate = None + db_channel.local_base_fee = -1 if db_channel.local_base_fee is None else db_channel.local_base_fee + db_channel.local_fee_rate = -1 if db_channel.local_fee_rate is None else db_channel.local_fee_rate + db_channel.local_cltv = -1 if db_channel.local_cltv is None else db_channel.local_cltv + db_channel.local_disabled = False if db_channel.local_disabled is None else db_channel.local_disabled + db_channel.local_min_htlc_msat = -1 if db_channel.local_min_htlc_msat is None else db_channel.local_min_htlc_msat + db_channel.local_max_htlc_msat = -1 if db_channel.local_max_htlc_msat is None else db_channel.local_max_htlc_msat + db_channel.remote_base_fee = -1 if db_channel.remote_base_fee is None else db_channel.remote_base_fee + db_channel.remote_fee_rate = -1 if db_channel.remote_fee_rate is None else db_channel.remote_fee_rate + db_channel.remote_cltv = -1 if db_channel.remote_cltv is None else db_channel.remote_cltv + db_channel.remote_disabled = False if db_channel.remote_disabled is None else db_channel.remote_disabled + db_channel.remote_min_htlc_msat = -1 if db_channel.remote_min_htlc_msat is None else db_channel.remote_min_htlc_msat + db_channel.remote_max_htlc_msat = -1 if db_channel.remote_max_htlc_msat is None else db_channel.remote_max_htlc_msat # Check for pending settings to be applied if pending_channel: if pending_channel.local_base_fee or pending_channel.local_fee_rate or pending_channel.local_cltv: @@ -333,7 +353,7 @@ def update_channels(stub): db_channel.auto_fees = pending_channel.auto_fees pending_channel.delete() if old_fee_rate is not None and old_fee_rate != local_policy.fee_rate_milli_msat: - print(f"{datetime.now().strftime('%c')} : Ext Fee Change Detected {db_channel.chan_id=} {db_channel.alias=} {old_fee_rate=} {db_channel.local_fee_rate=}") + print(f"{datetime.now().strftime('%c')} : [Data] : Ext fee change detected on {db_channel.chan_id} for peer {db_channel.alias}: fee updated from {old_fee_rate} to {db_channel.local_fee_rate}") #External Fee change detected, update auto fee log db_channel.fees_updated = datetime.now() Autofees(chan_id=db_channel.chan_id, peer_alias=db_channel.alias, setting=(f"Ext"), old_value=old_fee_rate, new_value=db_channel.local_fee_rate).save() @@ -407,7 +427,7 @@ def get_tx_fees(txid): request_data = get(base_url + txid).json() fee = 
request_data['fee'] except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error getting closure fees {txid=} {str(e)=}") + print(f"{datetime.now().strftime('%c')} : [Data] : Error getting closure fees for {txid}: {str(e)}") fee = 0 return fee @@ -427,7 +447,7 @@ def update_closures(stub): try: db_closure.save() except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error inserting closure: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Data] : Error inserting closure: {str(e)}") Closures.objects.filter(funding_txid=txid,funding_index=index).delete() return if resolution_count > 0: @@ -447,25 +467,17 @@ def reconnect_peers(stub): if peers.filter(pubkey=inactive_peer).exists(): peer = peers.filter(pubkey=inactive_peer)[0] if peer.last_reconnected == None or (int((datetime.now() - peer.last_reconnected).total_seconds() / 60) > 2): - print(f"{datetime.now().strftime('%c')} : Reconnecting peer {peer.alias} {peer.pubkey=} {peer.last_reconnected=}") + print(f"{datetime.now().strftime('%c')} : [Data] : Reconnecting peer {peer.alias} {peer.pubkey}, last reconnected at {peer.last_reconnected}") if peer.connected == True: - print(f"{datetime.now().strftime('%c')} : ... Inactive channel is still connected to peer, disconnecting peer. {peer.alias=} {inactive_peer=}") - try: - response = stub.DisconnectPeer(ln.DisconnectPeerRequest(pub_key=inactive_peer)) - print(f"{datetime.now().strftime('%c')} : .... Disconnected peer {peer.alias} {inactive_peer=} {response=}") - peer.connected = False - peer.save() - except Exception as e: - print(f"{datetime.now().strftime('%c')} : .... Error disconnecting peer {peer.alias} {inactive_peer=} {str(e)=}") - + print(f"{datetime.now().strftime('%c')} : [Data] : Inactive channel is still connected to peer, disconnecting peer {peer.alias} {inactive_peer}") + disconnectpeer(stub, peer) try: node = stub.GetNodeInfo(ln.NodeInfoRequest(pub_key=inactive_peer, include_channels=False)).node host = node.addresses[0].addr except Exception as e: - print(f"{datetime.now().strftime('%c')} : ... Unable to find node info on graph, using last known value for {peer.alias} {peer.pubkey=} {peer.address=} {str(e)=}") + print(f"{datetime.now().strftime('%c')} : [Data] : Unable to find node info on graph, using last known value for {peer.alias} {peer.pubkey} at {peer.address}: {str(e)}") host = peer.address - #address = ln.LightningAddress(pubkey=inactive_peer, host=host) - print(f"{datetime.now().strftime('%c')} : ... Attempting connection to {peer.alias} {inactive_peer=} {host=}") + print(f"{datetime.now().strftime('%c')} : [Data] : Attempting connection to {peer.alias} {inactive_peer} at {host}") try: #try both the graph value and last know value stub.ConnectPeer(request = ln.ConnectPeerRequest(addr=ln.LightningAddress(pubkey=inactive_peer, host=host), perm=True, timeout=5)) @@ -476,7 +488,7 @@ def reconnect_peers(stub): details_index = error.find('details =') + 11 debug_error_index = error.find('debug_error_string =') - 3 error_msg = error[details_index:debug_error_index] - print(f"{datetime.now().strftime('%c')} : .... 
Error reconnecting {peer.alias} {inactive_peer=} {error_msg=}") + print(f"{datetime.now().strftime('%c')} : [Data] : Error reconnecting {peer.alias} {inactive_peer}: {error_msg}") peer.last_reconnected = datetime.now() peer.save() @@ -485,7 +497,6 @@ def clean_payments(stub): enabled = int(LocalSettings.objects.filter(key='LND-CleanPayments')[0].value) else: LocalSettings(key='LND-CleanPayments', value='0').save() - LocalSettings(key='LND-RetentionDays', value='30').save() enabled = 0 if enabled == 1: if LocalSettings.objects.filter(key='LND-RetentionDays').exists(): @@ -505,7 +516,7 @@ def clean_payments(stub): details_index = error.find('details =') + 11 debug_error_index = error.find('debug_error_string =') - 3 error_msg = error[details_index:debug_error_index] - print(f"{datetime.now().strftime('%c')} : Error {payment.index=} {payment.status=} {payment.payment_hash=} {error_msg=}") + print(f"{datetime.now().strftime('%c')} : [Data] : Error cleaning payment {payment.payment_hash} at index {payment.index} with payment status {payment.status}: {error_msg}") finally: payment.cleaned = True payment.save() @@ -526,7 +537,7 @@ def auto_fees(stub): update_df = update_df[update_df['adjustment']!=0] if not update_df.empty: for target_channel in update_df.to_dict(orient='records'): - print(f"{datetime.now().strftime('%c')} : Updating fees for channel {str(target_channel['chan_id'])} to a value of: {str(target_channel['new_rate'])}") + print(f"{datetime.now().strftime('%c')} : [Data] : Updating fees for channel {str(target_channel['chan_id'])} to a value of: {str(target_channel['new_rate'])}") channel = Channels.objects.filter(chan_id=target_channel['chan_id'])[0] channel_point = ln.ChannelPoint() channel_point.funding_txid_bytes = bytes.fromhex(channel.funding_txid) @@ -538,7 +549,7 @@ def auto_fees(stub): channel.save() Autofees(chan_id=channel.chan_id, peer_alias=channel.alias, setting=(f"AF [ {target_channel['net_routed_7day']}:{target_channel['in_percent']}:{target_channel['out_percent']} ]"), old_value=target_channel['local_fee_rate'], new_value=target_channel['new_rate']).save() except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error processing auto_fees: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Data] : Error processing auto_fees: {str(e)}") def agg_htlcs(target_htlcs, category): @@ -569,7 +580,7 @@ def agg_htlcs(target_htlcs, category): htlc_itm.save() FailedHTLCs.objects.filter(id__in=target_ids, chan_id_in=htlc['chan_id_in'], chan_id_out=htlc['chan_id_out']).annotate(day=TruncDay('timestamp')).filter(day=htlc['day']).delete() except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error processing agg_htlcs: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Data] : Error processing agg_htlcs: {str(e)}") def agg_failed_htlcs(): time_filter = datetime.now() - timedelta(days=30) @@ -578,22 +589,26 @@ def agg_failed_htlcs(): agg_htlcs(FailedHTLCs.objects.filter(timestamp__lte=time_filter).exclude(failure_detail__in=[6, 99])[:100], 'other') def main(): - try: - stub = lnrpc.LightningStub(lnd_connect()) - #Update data - update_peers(stub) - update_channels(stub) - update_invoices(stub) - update_payments(stub) - update_forwards(stub) - update_onchain(stub) - update_closures(stub) - reconnect_peers(stub) - clean_payments(stub) - auto_fees(stub) - agg_failed_htlcs() - except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error processing background data: {str(e)}") + while True: + print(f"{datetime.now().strftime('%c')} : [Data] : 
Starting data execution...") + try: + stub = lnrpc.LightningStub(lnd_connect()) + #Update data + update_peers(stub) + update_channels(stub) + update_invoices(stub) + update_payments(stub) + update_forwards(stub) + update_onchain(stub) + update_closures(stub) + reconnect_peers(stub) + clean_payments(stub) + auto_fees(stub) + agg_failed_htlcs() + except Exception as e: + print(f"{datetime.now().strftime('%c')} : [Data] : Error processing background data: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Data] : Data execution completed...sleeping for 20 seconds") + sleep(20) if __name__ == '__main__': - main() \ No newline at end of file + main() diff --git a/keysend.py b/keysend.py index febc9aeb..9c7a9ac0 100644 --- a/keysend.py +++ b/keysend.py @@ -1,3 +1,4 @@ +import argparse import secrets, time, struct from hashlib import sha256 from gui.lnd_deps import lightning_pb2 as ln @@ -52,25 +53,19 @@ def keysend(target_pubkey, msg, amount, fee_limit, timeout, sign): error_msg = error[details_index:debug_error_index] print('Error while sending keysend payment! Error: ' + error_msg) -def main(): - #Ask user for variables - try: - target_pubkey = input('Enter the pubkey of the node you want to send a keysend payment to: ') - amount = int(input('Enter an amount in sats to be sent with the keysend payment (defaults to 1 sat): ') or '1') - fee_limit = int(input('Enter an amount in sats to be used as a max fee limit for sending (defaults to 1 sat): ') or '1') - msg = input('Enter an optional message to be included (leave this blank for no message): ') - if len(msg) > 0: - sign = input('Self sign the message? (defaults to sending anonymously) [y/N]: ') - sign = True if sign.lower() == 'yes' or sign.lower() == 'y' else False - else: - sign = False - except: - print('Invalid data entered, please try again.') - timeout = 10 - print('Sending keysend payment of %s to: %s' % (amount, target_pubkey)) +def main(pubkey, amount, fee, msg, sign): + print('Sending a %s sats payment to: %s with %s sats max-fee' % (amount, pubkey, fee)) if len(msg) > 0: - print('Attaching this message to the keysend payment:', msg) - keysend(target_pubkey, msg, amount, fee_limit, timeout, sign) + print('MESSAGE: %s' % msg) + timeout = 10 + keysend(pubkey, msg, amount, fee, timeout, sign) if __name__ == '__main__': - main() \ No newline at end of file + argParser = argparse.ArgumentParser(prog="python keysend.py") + argParser.add_argument("-pk", "--pubkey", help='Target public key', required=True) + argParser.add_argument("-a", "--amount", help='Amount in sats (default: 1)', nargs='?', default=1, type=int) + argParser.add_argument("-f", "--fee", help='Max fee to send this keysend (default: 1)', nargs='?', default=1, type=int) + argParser.add_argument("-m", "--msg", help='Message to be sent (default: "")', nargs='?', default='', type=str) + argParser.add_argument("--sign", help='Sign this message (default: send anonymously) - if [MSG] is provided', action='store_true') + args = vars(argParser.parse_args()) + main(**args) \ No newline at end of file diff --git a/lndg/urls.py b/lndg/urls.py index 77164087..d4962945 100644 --- a/lndg/urls.py +++ b/lndg/urls.py @@ -16,8 +16,10 @@ from django.urls import path, include from django.views.generic.base import RedirectView from django.contrib.staticfiles.storage import staticfiles_storage +from django.contrib.auth.views import LogoutView urlpatterns = [ path('favicon.ico', RedirectView.as_view(url=staticfiles_storage.url("favicon.ico"))), + path('logout/', 
LogoutView.as_view(next_page='/'), name='logout'), path('', include('gui.urls')), ] \ No newline at end of file diff --git a/nginx.sh b/nginx.sh index 100410bf..31a5c7dc 100644 --- a/nginx.sh +++ b/nginx.sh @@ -1,76 +1,89 @@ +#!/bin/bash + INSTALL_USER=${SUDO_USER:-${USER}} NODE_IP=$(hostname -I | cut -d' ' -f1) -RED='\033[0;31m' -NC='\033[0m' -if [ "$INSTALL_USER" == 'root' ] >/dev/null 2>&1; then +if [ "$INSTALL_USER" == 'root' ]; then HOME_DIR='/root' else HOME_DIR="/home/$INSTALL_USER" fi +LNDG_DIR="$HOME_DIR/lndg" + +function check_path() { + if [ -e $LNDG_DIR/lndg/wsgi.py ]; then + echo "Using LNDg installation found at $LNDG_DIR" + else + echo "LNDg installation not found at $LNDG_DIR/, please provide the correct path:" + read USR_DIR + if [ -e $USR_DIR/lndg/wsgi.py ]; then + LNDG_DIR=$USR_DIR + echo "Using LNDg installation found at $LNDG_DIR" + else + echo "LNDg installation still not found, exiting..." + exit 1 + fi + fi +} + +# Ensure the lndg directory exists +mkdir -p $LNDG_DIR +mkdir -p $LNDG_DIR/.venv/bin +mkdir -p /var/log/uwsgi function install_deps() { - apt install -y python3-dev >/dev/null 2>&1 - apt install -y build-essential python >/dev/null 2>&1 - apt install -y uwsgi >/dev/null 2>&1 - apt install -y nginx >/dev/null 2>&1 - $HOME_DIR/lndg/.venv/bin/python -m pip install uwsgi >/dev/null 2>&1 + apt-get update + apt-get install -y python3-dev build-essential python3-pip uwsgi nginx + python3 -m pip install uwsgi } function setup_uwsgi() { - cat << EOF > $HOME_DIR/lndg/lndg.ini + # Creating the lndg.ini file + cat << EOF > $LNDG_DIR/lndg.ini # lndg.ini file [uwsgi] - # Django-related settings -# the base directory (full path) -chdir = $HOME_DIR/lndg -# Django's wsgi file +chdir = $LNDG_DIR module = lndg.wsgi -# the virtualenv (full path) -home = $HOME_DIR/lndg/.venv -#location of log files +home = $LNDG_DIR/.venv logto = /var/log/uwsgi/%n.log # process-related settings -# master master = true -# maximum number of worker processes processes = 1 -# the socket (use the full path to be safe -socket = $HOME_DIR/lndg/lndg.sock -# ... 
with appropriate permissions - may be needed +socket = $LNDG_DIR/lndg.sock chmod-socket = 660 -# clear environment on exit vacuum = true EOF - cat <<\EOF > $HOME_DIR/lndg/uwsgi_params - -uwsgi_param QUERY_STRING $query_string; -uwsgi_param REQUEST_METHOD $request_method; -uwsgi_param CONTENT_TYPE $content_type; -uwsgi_param CONTENT_LENGTH $content_length; - -uwsgi_param REQUEST_URI "$request_uri"; -uwsgi_param PATH_INFO "$document_uri"; -uwsgi_param DOCUMENT_ROOT "$document_root"; -uwsgi_param SERVER_PROTOCOL "$server_protocol"; -uwsgi_param REQUEST_SCHEME "$scheme"; -uwsgi_param HTTPS "$https if_not_empty"; - -uwsgi_param REMOTE_ADDR "$remote_addr"; -uwsgi_param REMOTE_PORT "$remote_port"; -uwsgi_param SERVER_PORT "$server_port"; -uwsgi_param SERVER_NAME "$server_name"; + + # Creating the uwsgi_params file + cat << EOF > $LNDG_DIR/uwsgi_params +uwsgi_param QUERY_STRING \$query_string; +uwsgi_param REQUEST_METHOD \$request_method; +uwsgi_param CONTENT_TYPE \$content_type; +uwsgi_param CONTENT_LENGTH \$content_length; + +uwsgi_param REQUEST_URI "\$request_uri"; +uwsgi_param PATH_INFO "\$document_uri"; +uwsgi_param DOCUMENT_ROOT "\$document_root"; +uwsgi_param SERVER_PROTOCOL "\$server_protocol"; +uwsgi_param REQUEST_SCHEME "\$scheme"; +uwsgi_param HTTPS "\$https if_not_empty"; + +uwsgi_param REMOTE_ADDR "\$remote_addr"; +uwsgi_param REMOTE_PORT "\$remote_port"; +uwsgi_param SERVER_PORT "\$server_port"; +uwsgi_param SERVER_NAME "\$server_name"; EOF + + # Creating the uwsgi.service systemd unit file cat << EOF > /etc/systemd/system/uwsgi.service [Unit] Description=Lndg uWSGI app After=syslog.target [Service] -ExecStart=$HOME_DIR/lndg/.venv/bin/uwsgi \ ---ini $HOME_DIR/lndg/lndg.ini +ExecStart=$LNDG_DIR/.venv/bin/uwsgi --ini $LNDG_DIR/lndg.ini User=$INSTALL_USER Group=www-data Restart=on-failure @@ -80,87 +93,62 @@ StandardError=syslog NotifyAccess=all [Install] -WantedBy=sockets.target +WantedBy=multi-user.target EOF usermod -a -G www-data $INSTALL_USER } function setup_nginx() { - cat << EOF > /etc/nginx/sites-enabled/lndg -# the below setting can sometimes help resolve permission issues -# user $INSTALL_USER - + # Creating the Nginx configuration file + cat << EOF > /etc/nginx/sites-available/lndg upstream django { - server unix://$HOME_DIR/lndg/lndg.sock; # for a file socket + server unix://$LNDG_DIR/lndg.sock; # for a file socket } server { - # the port your site will be served on, use port 80 unless setting up ssl certs, then 443 listen 8889; - # optional settings for ssl setup - #ssl on; - #ssl_certificate //fullchain.pem; - #ssl_certificate_key //privkey.pem; - # the domain name it will serve for - server_name _; # you can substitute your node IP address or a custom domain like lndg.local (just make sure to update your local hosts file) + server_name _; charset utf-8; - - # max upload size - client_max_body_size 75M; # adjust to taste - - # max wait for django time + client_max_body_size 75M; proxy_read_timeout 180; - # Django media location /static { - alias $HOME_DIR/lndg/gui/static; # your Django project's static files - amend as required + alias $LNDG_DIR/gui/static; # your Django project's static files - amend as required } - # Finally, send all non-media requests to the Django server. 
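+    # Send all non-media requests to the Django app through the uWSGI socket.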
location / { uwsgi_pass django; - include $HOME_DIR/lndg/uwsgi_params; # the uwsgi_params file + include $LNDG_DIR/uwsgi_params; # the uwsgi_params file } } EOF + + # Remove the default site and link the new site rm /etc/nginx/sites-enabled/default + ln -sf /etc/nginx/sites-available/lndg /etc/nginx/sites-enabled/ } function start_services() { touch /var/log/uwsgi/lndg.log - touch $HOME_DIR/lndg/lndg.sock + touch $LNDG_DIR/lndg.sock chgrp www-data /var/log/uwsgi/lndg.log - chgrp www-data $HOME_DIR/lndg/lndg.sock + chgrp www-data $LNDG_DIR/lndg.sock chmod 660 /var/log/uwsgi/lndg.log systemctl start uwsgi systemctl restart nginx - sudo systemctl enable uwsgi - sudo systemctl enable nginx + systemctl enable uwsgi + systemctl enable nginx } function report_information() { - echo -e "" - echo -e "================================================================================================================================" - echo -e "Nginx service setup using user account $INSTALL_USER and an address of $NODE_IP:8889." - echo -e "You can update the IP or port used by modifying this configuration file and restarting nginx: /etc/nginx/sites-enabled/lndg" - echo -e "" - echo -e "uWSGI Status: ${RED}sudo systemctl status uwsgi.service${NC}" - echo -e "Nginx Status: ${RED}sudo systemctl status nginx.service${NC}" - echo -e "" - echo -e "To disable your webserver, use the following commands." - echo -e "Disable uWSGI: ${RED}sudo systemctl disable uwsgi.service${NC}" - echo -e "Disable Nginx: ${RED}sudo systemctl disable nginx.service${NC}" - echo -e "Stop uWSGI: ${RED}sudo systemctl stop uwsgi.service${NC}" - echo -e "Stop Nginx: ${RED}sudo systemctl stop nginx.service${NC}" - echo -e "" - echo -e "To re-enable these services, simply replace the disable/stop commands with enable/start." - echo -e "================================================================================================================================" + echo "Nginx and uWSGI have been set up with user $INSTALL_USER at $NODE_IP:8889." } ##### Main ##### echo -e "Setting up, this may take a few minutes..." 
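+# Main sequence: locate the LNDg install, install the uwsgi/nginx
+# dependencies, write their config files, then start and enable both services.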
+check_path install_deps setup_uwsgi setup_nginx start_services -report_information +report_information \ No newline at end of file diff --git a/p2p.py b/p2p.py new file mode 100644 index 00000000..e09abe5e --- /dev/null +++ b/p2p.py @@ -0,0 +1,50 @@ +import django, multiprocessing +from datetime import datetime +from gui.lnd_deps import lightning_pb2_grpc as lnrpc +from gui.lnd_deps.lnd_connect import lnd_connect +from os import environ +from time import sleep +environ['DJANGO_SETTINGS_MODULE'] = 'lndg.settings' +django.setup() +from gui.models import LocalSettings +from trade import serve_trades + +def trade(): + stub = lnrpc.LightningStub(lnd_connect()) + serve_trades(stub) + +def check_setting(): + if LocalSettings.objects.filter(key='LND-ServeTrades').exists(): + return int(LocalSettings.objects.filter(key='LND-ServeTrades')[0].value) + else: + LocalSettings(key='LND-ServeTrades', value='0').save() + return 0 + +def main(): + while True: + current_value = None + try: + while True: + db_value = check_setting() + if current_value != db_value: + if db_value == 1: + print(f"{datetime.now().strftime('%c')} : [P2P] : Starting the p2p service...") + django.db.connections.close_all() + p2p_thread = multiprocessing.Process(target=trade) + p2p_thread.start() + else: + if 'p2p_thread' in locals() and p2p_thread.is_alive(): + print(f"{datetime.now().strftime('%c')} : [P2P] : Stopping the p2p service...") + p2p_thread.terminate() + current_value = db_value + sleep(2) # polling interval + except Exception as e: + print(f"{datetime.now().strftime('%c')} : [P2P] : Error running p2p service: {str(e)}") + finally: + if 'p2p_thread' in locals() and p2p_thread.is_alive(): + print(f"{datetime.now().strftime('%c')} : [P2P] : Removing any remaining processes...") + p2p_thread.terminate() + sleep(20) + +if __name__ == '__main__': + main() \ No newline at end of file diff --git a/postgres.md b/postgres.md new file mode 100644 index 00000000..b7bf47c0 --- /dev/null +++ b/postgres.md @@ -0,0 +1,64 @@ +# LNDg Postgres Setup + +- **You will need to fill in the proper password below in the paths marked ``.** + +## Postgres Installation +Install postgres +`sudo apt install postgresql` + +Switch to postgres user and open the client +`sudo su - postgres` +`psql` + +Create the LNDg Database - Be sure to replace the password with your own +`create user lndg;create database lndg LOCALE 'en_US.UTF-8';alter role lndg with password '';grant all privileges on database lndg to lndg;alter database lndg owner to lndg;` + +Exit the postgres client and user +`\q` +`exit` + +## Setting up LNDg with postgres +Enter the LNDg installation folder +`cd lndg` + +Upgrade setuptools and get required packages +`.venv/bin/pip install --upgrade setuptools` +`sudo apt install gcc python3-dev libpq-dev` + +Build psycopg2 for postgres connection +`.venv/bin/pip install psycopg2` + +Update the `DATABASES` section of `lndg/lndg/settings.py`, be sure to replace the password +``` +DATABASES = { + 'default': { + 'ENGINE': 'django.db.backends.postgresql_psycopg2', + 'NAME': 'lndg', + 'USER': 'lndg', + 'PASSWORD': '', + 'HOST': 'localhost', + 'PORT': '5432', + } +} +``` +Initialize the postgres database +`.venv/bin/python manage.py migrate` + +## OPTIONAL: Migrating An Existing Database +Stop the LNDg services controller and web service +`sudo systemctl stop lndg-controller.service` +`sudo systemctl stop uwsgi.service` + +Dump current data to json format +`.venv/bin/python manage.py dumpdata gui.channels gui.peers gui.rebalancer gui.localsettings 
gui.autopilot gui.autofees gui.failedhtlcs gui.avoidnodes gui.peerevents > dump.json` + +Import the dump +`.venv/bin/python manage.py loaddata dump.json` + +## Finishing Setup +Recreate a login user - any username and password may be used +`.venv/bin/python manage.py createsuperuser` + +Restart the LNDg controller and web service +`sudo systemctl restart lndg-controller.service` +`sudo systemctl restart uwsgi.service` \ No newline at end of file diff --git a/rebalancer.py b/rebalancer.py index 7773295a..dd0e7d9d 100644 --- a/rebalancer.py +++ b/rebalancer.py @@ -1,4 +1,5 @@ import django, json, secrets, asyncio +from time import sleep from asgiref.sync import sync_to_async from django.db.models import Sum, F from datetime import datetime, timedelta @@ -19,21 +20,21 @@ def get_out_cans(rebalance, auto_rebalance_channels): try: return list(auto_rebalance_channels.filter(auto_rebalance=False, percent_outbound__gte=F('ar_out_target')).exclude(remote_pubkey=rebalance.last_hop_pubkey).values_list('chan_id', flat=True)) except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error getting outbound cands: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error getting outbound cands: {str(e)}") @sync_to_async def save_record(record): try: record.save() except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error saving database record: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error saving database record: {str(e)}") @sync_to_async def inbound_cans_len(inbound_cans): try: return len(inbound_cans) except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error getting inbound cands: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error getting inbound cands: {str(e)}") async def run_rebalancer(rebalance, worker): try: @@ -41,7 +42,7 @@ async def run_rebalancer(rebalance, worker): auto_rebalance_channels = Channels.objects.filter(is_active=True, is_open=True, private=False).annotate(percent_outbound=((Sum('local_balance')+Sum('pending_outbound')-rebalance.value)*100)/Sum('capacity')).annotate(inbound_can=(((Sum('remote_balance')+Sum('pending_inbound'))*100)/Sum('capacity'))/Sum('ar_in_target')) outbound_cans = await get_out_cans(rebalance, auto_rebalance_channels) if len(outbound_cans) == 0 and rebalance.manual == False: - print(f"{datetime.now().strftime('%c')} : No outbound_cans") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : No outbound_cans") rebalance.status = 406 rebalance.start = datetime.now() rebalance.stop = datetime.now() @@ -57,9 +58,8 @@ async def run_rebalancer(rebalance, worker): chan_ids = json.loads(rebalance.outgoing_chan_ids) timeout = rebalance.duration * 60 invoice_response = stub.AddInvoice(ln.Invoice(value=rebalance.value, expiry=timeout)) - print(f"{datetime.now().strftime('%c')} : {worker} starting rebalance for: {rebalance.target_alias} {rebalance.last_hop_pubkey=} {rebalance.value=} {rebalance.duration=} {chan_ids=}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : {worker} starting rebalance for {rebalance.target_alias} {rebalance.last_hop_pubkey} for {rebalance.value} sats and duration {rebalance.duration}, using {len(chan_ids)} outbound channels") async for payment_response in routerstub.SendPaymentV2(lnr.SendPaymentRequest(payment_request=str(invoice_response.payment_request), fee_limit_msat=int(rebalance.fee_limit*1000), outgoing_chan_ids=chan_ids, last_hop_pubkey=bytes.fromhex(rebalance.last_hop_pubkey), timeout_seconds=(timeout-5), 
allow_self_payment=True), timeout=(timeout+60)):
-            #print(f"{datetime.now().strftime('%c')} : DEBUG {worker} got a payment response: {payment_response.status=} {payment_response.failure_reason=} {payment_response.payment_hash=}")
            if payment_response.status == 1 and rebalance.status == 0:
                #IN-FLIGHT
                rebalance.payment_hash = payment_response.payment_hash
@@ -94,11 +94,11 @@
                rebalance.status = 408
            else:
                rebalance.status = 400
-            print(f"{datetime.now().strftime('%c')} : Error while sending payment: {str(e)}")
+            print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error while sending payment: {str(e)}")
    finally:
        rebalance.stop = datetime.now()
        await save_record(rebalance)
-        print(f"{datetime.now().strftime('%c')} : {worker} completed payment attempts for: {rebalance.payment_hash=}")
+        print(f"{datetime.now().strftime('%c')} : [Rebalancer] : {worker} completed payment attempts for: {rebalance.payment_hash}")
    original_alias = rebalance.target_alias
    inc=1.21
    dec=2
@@ -111,7 +111,7 @@
            if await inbound_cans_len(inbound_cans) > 0 and len(outbound_cans) > 0:
                next_rebalance = Rebalancer(value=int(rebalance.value*inc), fee_limit=round(rebalance.fee_limit*inc, 3), outgoing_chan_ids=str(outbound_cans).replace('\'', ''), last_hop_pubkey=rebalance.last_hop_pubkey, target_alias=original_alias, duration=1)
                await save_record(next_rebalance)
-                print(f"{datetime.now().strftime('%c')} : RapidFire up {next_rebalance.target_alias=} {next_rebalance.value=} {rebalance.value=}")
+                print(f"{datetime.now().strftime('%c')} : [Rebalancer] : RapidFire increase for {next_rebalance.target_alias} from {rebalance.value} to {next_rebalance.value}")
            else:
                next_rebalance = None
            # For failed rebalances, try in rapid fire with reduced balances until give up.
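The RapidFire follow-up sizing in this hunk and the next one is easy to misread in diff form. Below is a minimal sketch of the policy as it reads here: `inc` and `dec` mirror the constants above, while the `floor` cutoff, the exact step-down schedule, and the helper name are illustrative assumptions, not values taken from LNDg.

```python
from typing import Optional

# Sketch of the RapidFire sizing policy as read from this diff.
def next_rapidfire_value(last_value: int, succeeded: bool, inc: float = 1.21,
                         dec: float = 2.0, floor: int = 10_000) -> Optional[int]:
    if succeeded:
        # Successful attempts are retried at a larger size; the caller also
        # scales the fee limit by the same factor (fee_limit * inc).
        return int(last_value * inc)
    # Failed attempts are retried smaller; dividing by dec and the give-up
    # floor are assumptions here, the exact schedule is outside this excerpt.
    next_value = last_value / dec
    return int(next_value) if next_value >= floor else None
```

Scaling `fee_limit` in proportion to the value (`fee_limit*inc` on the way up, `fee_limit/(rebalance.value/next_value)` on the way down) keeps the permitted fee rate roughly constant across attempts.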
@@ -129,14 +129,14 @@ async def run_rebalancer(rebalance, worker):
            if await inbound_cans_len(inbound_cans) > 0 and len(outbound_cans) > 0:
                next_rebalance = Rebalancer(value=int(next_value), fee_limit=round(rebalance.fee_limit/(rebalance.value/next_value), 3), outgoing_chan_ids=str(outbound_cans).replace('\'', ''), last_hop_pubkey=rebalance.last_hop_pubkey, target_alias=original_alias, duration=1)
                await save_record(next_rebalance)
-                print(f"{datetime.now().strftime('%c')} : RapidFire Down {next_rebalance.target_alias=} {next_rebalance.value=} {rebalance.value=}")
+                print(f"{datetime.now().strftime('%c')} : [Rebalancer] : RapidFire decrease for {next_rebalance.target_alias} from {rebalance.value} to {next_rebalance.value}")
            else:
                next_rebalance = None
        else:
            next_rebalance = None
        return next_rebalance
    except Exception as e:
-        print(f"{datetime.now().strftime('%c')} : Error running rebalance attempt: {str(e)}")
+        print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error running rebalance attempt: {str(e)}")

@sync_to_async
def estimate_liquidity( payment ):
@@ -146,16 +146,12 @@ def estimate_liquidity( payment ):
        attempt = None
        for attempt in payment.htlcs:
            total_hops=len(attempt.route.hops)
-            #Failure Codes https://github.com/lightningnetwork/lnd/blob/9f013f5058a7780075bca393acfa97aa0daec6a0/lnrpc/lightning.proto#L4200
-            #print(f"{datetime.now().strftime('%c')} : DEBUG Liquidity Estimation {attempt.attempt_id=} {attempt.status=} {attempt.failure.code=} {attempt.failure.failure_source_index=} {total_hops=} {attempt.route.total_amt=} {payment.value_msat/1000=} {estimated_liquidity=} {payment.payment_hash=}")
            if attempt.failure.failure_source_index == total_hops:
                #Failure from last hop indicating liquidity available
                estimated_liquidity = attempt.route.total_amt if attempt.route.total_amt > estimated_liquidity else estimated_liquidity
-                chan_id=attempt.route.hops[len(attempt.route.hops)-1].chan_id
-                #print(f"{datetime.now().strftime('%c')} : DEBUG Liquidity Estimation {attempt.attempt_id=} {attempt.status=} {attempt.failure.code=} {chan_id=} {attempt.route.total_amt=} {payment.value_msat/1000=} {estimated_liquidity=} {payment.payment_hash=}")
-        print(f"{datetime.now().strftime('%c')} : Estimated Liquidity {estimated_liquidity=} {payment.payment_hash=} {payment.status=} {payment.failure_reason=}")
+        print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Estimated Liquidity {estimated_liquidity} for payment {payment.payment_hash} with status {payment.status} and reason {payment.failure_reason}")
    except Exception as e:
-        print(f"{datetime.now().strftime('%c')} : Error estimating liquidity: {str(e)}")
+        print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error estimating liquidity: {str(e)}")
        estimated_liquidity = 0
    return estimated_liquidity
@@ -176,7 +172,7 @@ def update_channels(stub, incoming_channel, outgoing_channel):
            db_channel.remote_balance = channel.remote_balance
            db_channel.save()
    except Exception as e:
-        print(f"{datetime.now().strftime('%c')} : Error updating channel balances: {str(e)}")
+        print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error updating channel balances: {str(e)}")

@sync_to_async
def auto_schedule() -> List[Rebalancer]:
@@ -198,7 +194,7 @@ def auto_schedule() -> List[Rebalancer]:
    if not LocalSettings.objects.filter(key='AR-Outbound%').exists():
        LocalSettings(key='AR-Outbound%', value='75').save()
    if not LocalSettings.objects.filter(key='AR-Inbound%').exists():
-        LocalSettings(key='AR-Inbound%', value='100').save()
+        LocalSettings(key='AR-Inbound%', 
value='90').save() outbound_cans = list(auto_rebalance_channels.filter(auto_rebalance=False, percent_outbound__gte=F('ar_out_target')).values_list('chan_id', flat=True)) already_scheduled = Rebalancer.objects.exclude(last_hop_pubkey='').filter(status=0).values_list('last_hop_pubkey') inbound_cans = auto_rebalance_channels.filter(auto_rebalance=True, inbound_can__gte=1).exclude(remote_pubkey__in=already_scheduled).order_by('-inbound_can') @@ -208,8 +204,8 @@ def auto_schedule() -> List[Rebalancer]: if LocalSettings.objects.filter(key='AR-MaxFeeRate').exists(): max_fee_rate = int(LocalSettings.objects.filter(key='AR-MaxFeeRate')[0].value) else: - LocalSettings(key='AR-MaxFeeRate', value='100').save() - max_fee_rate = 100 + LocalSettings(key='AR-MaxFeeRate', value='500').save() + max_fee_rate = 500 if LocalSettings.objects.filter(key='AR-Variance').exists(): variance = int(LocalSettings.objects.filter(key='AR-Variance')[0].value) else: @@ -221,7 +217,7 @@ def auto_schedule() -> List[Rebalancer]: LocalSettings(key='AR-WaitPeriod', value='30').save() wait_period = 30 if not LocalSettings.objects.filter(key='AR-Target%').exists(): - LocalSettings(key='AR-Target%', value='5').save() + LocalSettings(key='AR-Target%', value='3').save() if not LocalSettings.objects.filter(key='AR-MaxCost%').exists(): LocalSettings(key='AR-MaxCost%', value='65').save() for target in inbound_cans: @@ -242,17 +238,15 @@ def auto_schedule() -> List[Rebalancer]: last_rebalance = Rebalancer.objects.filter(last_hop_pubkey=target.remote_pubkey).exclude(status=0).order_by('-id')[0] if not (last_rebalance.status == 2 or (last_rebalance.status > 2 and (int((datetime.now() - last_rebalance.stop).total_seconds() / 60) > wait_period)) or (last_rebalance.status == 1 and ((int((datetime.now() - last_rebalance.start).total_seconds() / 60) - last_rebalance.duration) > wait_period))): continue - print(f"{datetime.now().strftime('%c')} : Creating Auto Rebalance Request for: {target.chan_id}") - print(f"{datetime.now().strftime('%c')} : Request routing through: {outbound_cans}") - print(f"{datetime.now().strftime('%c')} : {target_value} / {target.ar_amt_target}") - print(f"{datetime.now().strftime('%c')} : {target_fee}") - print(f"{datetime.now().strftime('%c')} : {target_time}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Creating Auto Rebalance Request for: {target.chan_id}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Value: {target_value} / {target.ar_amt_target} | Fee: {target_fee} | Duration: {target_time}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Request routing outbound via: {outbound_cans}") new_rebalance = Rebalancer(value=target_value, fee_limit=target_fee, outgoing_chan_ids=str(outbound_cans).replace('\'', ''), last_hop_pubkey=target.remote_pubkey, target_alias=target.alias, duration=target_time) new_rebalance.save() to_schedule.append(new_rebalance) return to_schedule except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error scheduling rebalances: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error scheduling rebalances: {str(e)}") return to_schedule @sync_to_async @@ -282,10 +276,9 @@ def auto_enable(): iapD = 0 if routed_in_apday == 0 else int(forwards.filter(chan_id_in__in=chan_list).aggregate(Sum('amt_in_msat'))['amt_in_msat__sum']/10000000)/100 oapD = 0 if routed_out_apday == 0 else int(forwards.filter(chan_id_out__in=chan_list).aggregate(Sum('amt_out_msat'))['amt_out_msat__sum']/10000000)/100 for peer_channel in 
lookup_channels.filter(chan_id__in=chan_list): - #print('Processing: ', peer_channel.alias, ' : ', peer_channel.chan_id, ' : ', oapD, " : ", iapD, ' : ', outbound_percent, ' : ', inbound_percent) if peer_channel.ar_out_target == 100 and peer_channel.auto_rebalance == True: #Special Case for LOOP, Wos, etc. Always Auto Rebalance if enabled to keep outbound full. - print(f"{datetime.now().strftime('%c')} : Skipping AR enabled and 100% oTarget channel... {peer_channel.alias=} {peer_channel.chan_id=}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Skipping AR enabled and 100% oTarget channel: {peer_channel.alias} {peer_channel.chan_id}") pass elif oapD > (iapD*1.10) and outbound_percent > 75: #print('Case 1: Pass') @@ -295,13 +288,13 @@ def auto_enable(): peer_channel.auto_rebalance = True peer_channel.save() Autopilot(chan_id=peer_channel.chan_id, peer_alias=peer_channel.alias, setting='Enabled', old_value=0, new_value=1).save() - print(f"{datetime.now().strftime('%c')} : Auto Pilot Enabled: {peer_channel.alias=} {peer_channel.chan_id=} {oapD=} {iapD=}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Auto Pilot Enabled for {peer_channel.alias} {peer_channel.chan_id}: oapD={oapD} iapD={iapD}") elif oapD < (iapD*1.10) and outbound_percent > 75 and peer_channel.auto_rebalance == True: #print('Case 3: Disable AR - o7D < i7D AND Outbound Liq > 75%') peer_channel.auto_rebalance = False peer_channel.save() Autopilot(chan_id=peer_channel.chan_id, peer_alias=peer_channel.alias, setting='Enabled', old_value=1, new_value=0).save() - print(f"{datetime.now().strftime('%c')} : Auto Pilot Disabled (3): {peer_channel.alias=} {peer_channel.chan_id=} {oapD=} {iapD=}" ) + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Auto Pilot Disabled for {peer_channel.alias} {peer_channel.chan_id}: oapD={oapD} iapD={iapD}") elif oapD < (iapD*1.10) and inbound_percent > 75: #print('Case 4: Pass') pass @@ -309,7 +302,7 @@ def auto_enable(): #print('Case 5: Pass') pass except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error during auto channel enabling: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error during auto channel enabling: {str(e)}") @sync_to_async def get_pending_rebals(): @@ -317,50 +310,49 @@ def get_pending_rebals(): rebalances = Rebalancer.objects.filter(status=0).order_by('id') return rebalances, len(rebalances) except Exception as e: - print(f"{datetime.now().strftime('%c')} : Error getting pending rebalances: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Error getting pending rebalances: {str(e)}") -shutdown_rebalancer = False -scheduled_rebalances = [] -active_rebalances = [] async def async_queue_manager(rebalancer_queue): global scheduled_rebalances, active_rebalances, shutdown_rebalancer - print(f"{datetime.now().strftime('%c')} : Queue manager is starting...") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Queue manager is starting...") try: while True: - print(f"{datetime.now().strftime('%c')} : Queue currently has {rebalancer_queue.qsize()} items...") - print(f"{datetime.now().strftime('%c')} : There are currently {len(active_rebalances)} tasks in progress...") - print(f"{datetime.now().strftime('%c')} : Queue manager is checking for more work...") + if shutdown_rebalancer == True: + return + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Queue currently has {rebalancer_queue.qsize()} items...") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : There are currently {len(active_rebalances)}
tasks in progress...") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Queue manager is checking for more work...") pending_rebalances, rebal_count = await get_pending_rebals() if rebal_count > 0: for rebalance in pending_rebalances: if rebalance.id not in (scheduled_rebalances + active_rebalances): - print(f"{datetime.now().strftime('%c')} : Found a pending job to schedule with id: {rebalance.id}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Found a pending job to schedule with id: {rebalance.id}") scheduled_rebalances.append(rebalance.id) await rebalancer_queue.put(rebalance) await auto_enable() scheduled = await auto_schedule() if len(scheduled) > 0: - print(f"{datetime.now().strftime('%c')} : Scheduling {len(scheduled)} more jobs...") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Scheduling {len(scheduled)} more jobs...") for rebalance in scheduled: scheduled_rebalances.append(rebalance.id) await rebalancer_queue.put(rebalance) elif rebalancer_queue.qsize() == 0 and len(active_rebalances) == 0: - print(f"{datetime.now().strftime('%c')} : Queue is still empty, stoping the rebalancer...") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Queue is still empty, stopping the rebalancer...") shutdown_rebalancer = True return await asyncio.sleep(30) except Exception as e: - print(f"{datetime.now().strftime('%c')} : Queue manager exception: {str(e)}") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Queue manager exception: {str(e)}") shutdown_rebalancer = True finally: - print(f"{datetime.now().strftime('%c')} : Queue manager has shut down...") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Queue manager has shut down...") async def async_run_rebalancer(worker, rebalancer_queue): + global scheduled_rebalances, active_rebalances, shutdown_rebalancer while True: - global scheduled_rebalances, active_rebalances, shutdown_rebalancer - if not rebalancer_queue.empty(): + if not rebalancer_queue.empty() and not shutdown_rebalancer: rebalance = await rebalancer_queue.get() - print(f"{datetime.now().strftime('%c')} : {worker} is starting a new request...") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : {worker} is starting a new request...") active_rebalance_id = None if rebalance != None: active_rebalance_id = rebalance.id @@ -370,7 +362,7 @@ async def async_run_rebalancer(worker, rebalancer_queue): rebalance = await run_rebalancer(rebalance, worker) if active_rebalance_id != None: active_rebalances.remove(active_rebalance_id) - print(f"{datetime.now().strftime('%c')} : {worker} completed its request...") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : {worker} completed its request...") else: if shutdown_rebalancer == True: return @@ -381,22 +373,48 @@ async def start_queue(worker_count=1): manager = asyncio.create_task(async_queue_manager(rebalancer_queue)) workers = [asyncio.create_task(async_run_rebalancer("Worker " + str(worker_num+1), rebalancer_queue)) for worker_num in range(worker_count)] await asyncio.gather(manager, *workers) - print(f"{datetime.now().strftime('%c')} : Manager and workers have stopped...") + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Manager and workers have stopped...") + +@sync_to_async +def get_worker_count(): + if LocalSettings.objects.filter(key='AR-Workers').exists(): + return int(LocalSettings.objects.filter(key='AR-Workers')[0].value) + else: + return 1 + +async def update_worker_count(): + global worker_count, shutdown_rebalancer + while True: +
updated_worker_count = await get_worker_count() + if updated_worker_count != worker_count: + worker_count = updated_worker_count + shutdown_rebalancer = True + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : New worker count detected...restarting rebalancer") + await asyncio.sleep(20) def main(): - if Rebalancer.objects.filter(status=1).exists(): - unknown_errors = Rebalancer.objects.filter(status=1) - for unknown_error in unknown_errors: - unknown_error.status = 400 - unknown_error.stop = datetime.now() - unknown_error.save() + global scheduled_rebalances, active_rebalances, shutdown_rebalancer, worker_count if LocalSettings.objects.filter(key='AR-Workers').exists(): worker_count = int(LocalSettings.objects.filter(key='AR-Workers')[0].value) else: LocalSettings(key='AR-Workers', value='1').save() worker_count = 1 - asyncio.run(start_queue(worker_count)) - print(f"{datetime.now().strftime('%c')} : Rebalancer successfully shutdown...") + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + loop.create_task(update_worker_count()) + while True: + shutdown_rebalancer = False + scheduled_rebalances = [] + active_rebalances = [] + if Rebalancer.objects.filter(status=1).exists(): + unknown_errors = Rebalancer.objects.filter(status=1) + for unknown_error in unknown_errors: + unknown_error.status = 400 + unknown_error.stop = datetime.now() + unknown_error.save() + loop.run_until_complete(start_queue(worker_count)) + print(f"{datetime.now().strftime('%c')} : [Rebalancer] : Rebalancer successfully exited...sleeping for 20 seconds") + sleep(20) if __name__ == '__main__': - main() + main() \ No newline at end of file diff --git a/requirements.txt b/requirements.txt index 5f1505c7..4f5c5ca4 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,10 +1,11 @@ Django djangorestframework django-filter -django-qr-code grpcio protobuf pytz pandas requests -asyncio \ No newline at end of file +asyncio +bech32 +cryptography \ No newline at end of file diff --git a/systemd.md b/systemd.md index b8f2dc2b..642657fc 100644 --- a/systemd.md +++ b/systemd.md @@ -1,123 +1,32 @@ -# Systemd Setup For Backend Refreshes And Rebalancer Tools +# Systemd Setup For Backend Tools -You will need to fill in the proper username below in the paths marked ``. This assumes you have installed lndg on the home directory of the user. For example, user `mynode` will have a home directory of `/home/mynode/`. +- **You will need to fill in the proper username below in the paths marked ``.** +- **This assumes you have installed lndg on the home directory of the user. For example, user `mynode` will have a home directory of `/home/mynode/`.** -## Backend Refreshes -Create a simple bash file to call `jobs.py`, copying the contents below to the file and filling in your user. -`nano /home//lndg/jobs.sh` -``` -#!/bin/bash - -/home//lndg/.venv/bin/python /home//lndg/jobs.py -``` -Create a service file for `jobs.sh`, copying the contents below to the file and filling in the user you would like this to run under. -`nano /etc/systemd/system/jobs-lndg.service` -``` -[Unit] -Description=Run Jobs For Lndg -[Service] -User= -Group= -ExecStart=/usr/bin/bash /home//lndg/jobs.sh -StandardError=append:/var/log/lnd_jobs_error.log -``` - -Create a timer file for `jobs.sh`, copying the contents below to the file and change the 20s refresh if you like.. 
-`nano /etc/systemd/system/jobs-lndg.timer` -``` -[Unit] -Description=Run Lndg Jobs Every 20 Seconds -[Timer] -OnBootSec=300 -OnUnitActiveSec=20 -AccuracySec=1 -[Install] -WantedBy=timers.target -``` -Enable the timer to run the jobs service file at the specified interval. -`sudo systemctl enable jobs-lndg.timer` -`sudo systemctl start jobs-lndg.timer` - -## Rebalancer Runs -Create a simple bash file to call `rebalancer.py`, copying the contents below to the file and filling in your user. -`nano /home//lndg/rebalancer.sh` -``` -#!/bin/bash - -/home//lndg/.venv/bin/python /home//lndg/rebalancer.py -``` -Create a service file for `rebalancer.sh`, copying the contents below to the file and filling in the user you would like this to run under. -`nano /etc/systemd/system/rebalancer-lndg.service` +## Backend Controller Setup +Create a service file for `controller.py`, copying the contents below to the file and filling in the user you would like this to run under. +`nano /etc/systemd/system/lndg-controller.service` ``` [Unit] -Description=Run Rebalancer For Lndg +Description=Backend Controller For Lndg [Service] +Environment=PYTHONUNBUFFERED=1 User= Group= -ExecStart=/usr/bin/bash /home//lndg/rebalancer.sh -StandardError=append:/var/log/lnd_rebalancer_error.log -RuntimeMaxSec=3600 -``` - -Create a timer file for `rebalancer.sh`, copying the contents below to the file and change the 20s refresh if you like. -`nano /etc/systemd/system/rebalancer-lndg.timer` -``` -[Unit] -Description=Run Lndg Rebalancer Every 20 Seconds -[Timer] -OnBootSec=315 -OnUnitActiveSec=20 -AccuracySec=1 -[Install] -WantedBy=timers.target -``` -Enable and start the timer to run the rebalancer service file at the specified interval. -`sudo systemctl enable rebalancer-lndg.timer` -`sudo systemctl start rebalancer-lndg.timer` - -## HTLC Failure Stream Data -Create a simple bash file to call `htlc_stream.py`, copying the contents below to the file and filling in your user. -`nano /home//lndg/htlc_stream.sh` -``` -#!/bin/bash - -/home//lndg/.venv/bin/python /home//lndg/htlc_stream.py -``` -Create a service file for `htlc_stream.sh`, copying the contents below to the file and filling in the user you would like this to run under. -`nano /etc/systemd/system/htlc-stream-lndg.service` -``` -[Unit] -Description=Run HTLC Stream For Lndg -[Service] -User= -Group= -ExecStart=/usr/bin/bash /home//lndg/htlc_stream.sh -StandardError=append:/var/log/lnd_htlc_stream_error.log +ExecStart=/home//lndg/.venv/bin/python /home//lndg/controller.py +StandardOutput=append:/var/log/lndg-controller.log +StandardError=append:/var/log/lndg-controller.log Restart=always RestartSec=60s [Install] WantedBy=multi-user.target ``` -Enable and start the service to run the htlc failure stream service file. -`sudo systemctl enable htlc-stream-lndg.service` -`sudo systemctl start htlc-stream-lndg.service` - -## Status Checks -You can check on the status of your timers. -`sudo systemctl status jobs-lndg.timer` -`sudo systemctl status rebalancer-lndg.timer` - -You can also check to make sure the last run of the service files triggered by the timers were successful. -`sudo systemctl status jobs-lndg.service` -`sudo systemctl status rebalancer-lndg.service` - -You can disable your timers as well if you would like them to stop. -`sudo systemctl disable jobs-lndg.timer` -`sudo systemctl stop jobs-lndg.timer` -`sudo systemctl disable rebalancer-lndg.timer` -`sudo systemctl stop rebalancer-lndg.timer` +Enable and start the service to run the backend controller.
+`sudo systemctl enable lndg-controller.service` +`sudo systemctl start lndg-controller.service` -You can also check on and disable your htlc stream data. -`sudo systemctl status htlc-stream-lndg.service` -`sudo systemctl disable htlc-stream-lndg.service` -`sudo systemctl stop htlc-stream-lndg.service` +## Additional Commands +You can also check on the status, disable or stop the backend controller. +`sudo systemctl status lndg-controller.service` +`sudo systemctl disable lndg-controller.service` +`sudo systemctl stop lndg-controller.service` diff --git a/systemd.sh b/systemd.sh index 51c0da51..c774df67 100644 --- a/systemd.sh +++ b/systemd.sh @@ -8,81 +8,35 @@ if [ "$INSTALL_USER" == 'root' ] >/dev/null 2>&1; then else HOME_DIR="/home/$INSTALL_USER" fi +LNDG_DIR="$HOME_DIR/lndg" -function configure_jobs() { - cat << EOF > $HOME_DIR/lndg/jobs.sh -#!/bin/bash - -$HOME_DIR/lndg/.venv/bin/python $HOME_DIR/lndg/jobs.py -EOF - - cat << EOF > /etc/systemd/system/jobs-lndg.service -[Unit] -Description=Run Jobs For Lndg -[Service] -User=$INSTALL_USER -Group=$INSTALL_USER -ExecStart=/usr/bin/bash $HOME_DIR/lndg/jobs.sh -StandardError=append:/var/log/lnd_jobs_error.log -EOF - - cat << EOF > /etc/systemd/system/jobs-lndg.timer -[Unit] -Description=Run Lndg Jobs Every $REFRESH Seconds -[Timer] -OnBootSec=300 -OnUnitActiveSec=$REFRESH -AccuracySec=1 -[Install] -WantedBy=timers.target -EOF -} - -function configure_rebalancer() { - cat << EOF > $HOME_DIR/lndg/rebalancer.sh -#!/bin/bash - -$HOME_DIR/lndg/.venv/bin/python $HOME_DIR/lndg/rebalancer.py -EOF - - cat << EOF > /etc/systemd/system/rebalancer-lndg.service -[Unit] -Description=Run Rebalancer For Lndg -[Service] -User=$INSTALL_USER -Group=$INSTALL_USER -ExecStart=/usr/bin/bash $HOME_DIR/lndg/rebalancer.sh -StandardError=append:/var/log/lnd_rebalancer_error.log -RuntimeMaxSec=3600 -EOF - - cat << EOF > /etc/systemd/system/rebalancer-lndg.timer -[Unit] -Description=Run Lndg Rebalancer Every $REFRESH Seconds -[Timer] -OnBootSec=315 -OnUnitActiveSec=$REFRESH -AccuracySec=1 -[Install] -WantedBy=timers.target -EOF +function check_path() { + if [ -e $LNDG_DIR/lndg/wsgi.py ]; then + echo "Using LNDg installation found at $LNDG_DIR" + else + echo "LNDg installation not found at $LNDG_DIR/, please provide the correct path:" + read USR_DIR + if [ -e $USR_DIR/lndg/wsgi.py ]; then + LNDG_DIR=$USR_DIR + echo "Using LNDg installation found at $LNDG_DIR" + else + echo "LNDg installation still not found, exiting..." 
+ exit 1 + fi + fi } -function configure_htlc_stream() { - cat << EOF > $HOME_DIR/lndg/htlc_stream.sh -#!/bin/bash - -$HOME_DIR/lndg/.venv/bin/python $HOME_DIR/lndg/htlc_stream.py -EOF - - cat << EOF > /etc/systemd/system/htlc-stream-lndg.service +function configure_controller() { + cat << EOF > /etc/systemd/system/lndg-controller.service [Unit] -Description=Run HTLC Stream For Lndg +Description=Run Backend Controller For Lndg [Service] +Environment=PYTHONUNBUFFERED=1 User=$INSTALL_USER Group=$INSTALL_USER -ExecStart=/usr/bin/bash $HOME_DIR/lndg/htlc_stream.sh -StandardError=append:/var/log/lnd_htlc_stream_error.log +ExecStart=$LNDG_DIR/.venv/bin/python $LNDG_DIR/controller.py +StandardOutput=append:/var/log/lndg-controller.log +StandardError=append:/var/log/lndg-controller.log Restart=always RestartSec=60s [Install] @@ -93,12 +47,8 @@ EOF function enable_services() { systemctl daemon-reload sleep 3 - systemctl start jobs-lndg.timer - systemctl enable jobs-lndg.timer >/dev/null 2>&1 - systemctl start rebalancer-lndg.timer - systemctl enable rebalancer-lndg.timer >/dev/null 2>&1 - systemctl start htlc-stream-lndg.service - systemctl enable htlc-stream-lndg.service >/dev/null 2>&1 + systemctl start lndg-controller.service + systemctl enable lndg-controller.service >/dev/null 2>&1 } function report_information() { @@ -106,20 +56,11 @@ function report_information() { echo -e "================================================================================================================================" echo -e "Backend services and rebalancer setup for LNDg via systemd using user account $INSTALL_USER and a refresh interval of $REFRESH seconds." echo -e "" - echo -e "Jobs Timer Status: ${RED}sudo systemctl status jobs-lndg.timer${NC}" - echo -e "Rebalancer Timer Status: ${RED}sudo systemctl status rebalancer-lndg.timer${NC}" - echo -e "HTLC Stream Status: ${RED}sudo systemctl status htlc-stream-lndg.service${NC}" - echo -e "" - echo -e "Last Jobs Status: ${RED}sudo systemctl status jobs-lndg.service${NC}" - echo -e "Last Rebalancer Status: ${RED}sudo systemctl status rebalancer-lndg.service${NC}" + echo -e "Backend Controller Status: ${RED}sudo systemctl status lndg-controller.service${NC}" echo -e "" echo -e "To disable your backend services, use the following commands." - echo -e "Disable Jobs Timer: ${RED}sudo systemctl disable jobs-lndg.timer${NC}" - echo -e "Disable Rebalancer Timer: ${RED}sudo systemctl disable rebalancer-lndg.timer${NC}" - echo -e "Disable HTLC Stream: ${RED}sudo systemctl disable htlc-stream-lndg.service${NC}" - echo -e "Stop Jobs Timer: ${RED}sudo systemctl stop jobs-lndg.timer${NC}" - echo -e "Stop Rebalancer Timer: ${RED}sudo systemctl stop rebalancer-lndg.timer${NC}" - echo -e "Stop HTLC Stream: ${RED}sudo systemctl stop htlc-stream-lndg.service${NC}" + echo -e "Disable Backend Controller: ${RED}sudo systemctl disable lndg-controller.service${NC}" + echo -e "Stop Backend Controller: ${RED}sudo systemctl stop lndg-controller.service${NC}" echo -e "" echo -e "To re-enable these services, simply replace the disable/stop commands with enable/start." echo -e "================================================================================================================================" @@ -127,8 +68,7 @@ function report_information() { ##### Main ##### echo -e "Setting up, this may take a few minutes..." 
-configure_jobs -configure_rebalancer -configure_htlc_stream +check_path +configure_controller enable_services report_information \ No newline at end of file diff --git a/trade.py b/trade.py new file mode 100644 index 00000000..5ee64756 --- /dev/null +++ b/trade.py @@ -0,0 +1,1059 @@ +import django, base64, secrets, re, asyncio, json +from django.db.models import Sum, IntegerField, Count, F, Q +from django.db.models.functions import Round +from time import time +from bech32 import bech32_decode, bech32_encode, convertbits +from datetime import datetime, timedelta +from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes +from cryptography.hazmat.backends import default_backend +from decimal import Decimal +from lndg import settings +from gui.lnd_deps import lightning_pb2 as ln +from gui.lnd_deps import lightning_pb2_grpc as lnrpc +from gui.lnd_deps import signer_pb2 as lns +from gui.lnd_deps import signer_pb2_grpc as lnsigner +from gui.lnd_deps import router_pb2 as lnr +from gui.lnd_deps import router_pb2_grpc as lnrouter +from gui.lnd_deps.lnd_connect import lnd_connect, async_lnd_connect +from os import environ +environ['DJANGO_SETTINGS_MODULE'] = 'lndg.settings' +django.setup() +from gui.models import TradeSales, Payments, PaymentHops, Forwards, Peers + +def is_hex(n): + return len(n) % 2 == 0 and all(c in '0123456789ABCDEFabcdef' for c in n) + +def hex_as_utf8(hex_str): + return bytes.fromhex(hex_str).decode('utf-8') + +def utf8_as_hex(utf8_str): + return bytes(utf8_str, 'utf-8').hex() + +def decode_basic_trade(records): + if not isinstance(records, list): + raise ValueError('ExpectedArrayOfRecordsToDecodeBasicTrade') + + description_record = next((record for record in records if record['type'] == '2'), None) + + if not description_record: + raise ValueError('ExpectedDescriptionRecordToDecodeBasicTrade') + + id_record = next((record for record in records if record['type'] == '1'), None) + + if not id_record: + raise ValueError('ExpectedIdRecordToDecodeBasicTradeDetails') + + return { + 'description': hex_as_utf8(description_record['value']), + 'id': id_record['value'] + } + +def encode_as_bigsize(number: int): + max_8_bit_number = 252 + max_16_bit_number = 65535 + max_32_bit_number = 4294967295 + + def tag_as_uint8(n): + return format(n, '02x') + + def tag_as_uint16(n): + return 'fd' + format(n, '04x') + + def tag_as_uint32(n): + return 'fe' + format(n, '08x') + + def tag_as_uint64(n): + return 'ff' + format(n, '016x') + + number = int(number) + + if number <= max_8_bit_number: + return tag_as_uint8(number) + elif number <= max_16_bit_number: + return tag_as_uint16(number) + elif number <= max_32_bit_number: + return tag_as_uint32(number) + else: + return tag_as_uint64(number) + +def decode_as_bigsize(encoded_string: str): + def read_uint8(encoded_string): + return int(encoded_string[:2], 16), encoded_string[2:] + + def read_uint16(encoded_string): + return int(encoded_string[2:6], 16), encoded_string[6:] + + def read_uint32(encoded_string): + return int(encoded_string[2:10], 16), encoded_string[10:] + + def read_uint64(encoded_string): + return int(encoded_string[2:18], 16), encoded_string[18:] + + if encoded_string.startswith('ff'): + value, remaining = read_uint64(encoded_string) + elif encoded_string.startswith('fe'): + value, remaining = read_uint32(encoded_string) + elif encoded_string.startswith('fd'): + value, remaining = read_uint16(encoded_string) + else: + value, remaining = read_uint8(encoded_string) + + return value + +def decode_big_size(encoded): + 
max_8bit_number = 0xfc + max_16bit_number = 0xffff + max_32bit_number = 0xffffffff + uint8_length = 1 + uint16 = 0xfd + uint16_length = 3 + uint32 = 0xfe + uint32_length = 5 + uint64_length = 9 + + if not (bool(encoded) and is_hex(encoded)): + raise ValueError('ExpectedHexEncodedBigSizeValueToDecode') + + bytes_data = bytes.fromhex(encoded) + + size = bytes_data[0] + + if size <= max_8bit_number: + return {'decoded': str(size), 'length': uint8_length} + + byte_length = len(bytes_data) + + if size == uint16: + if byte_length < uint16_length: + raise ValueError('ExpectedMoreBytesToDecodeUint16BigSize') + + uint16_number = int.from_bytes(bytes_data[1:3], 'big') + + if uint16_number <= max_8bit_number: + raise ValueError('ExpectedLargerNumberToDecodeUint16BigSize') + + return {'decoded': str(uint16_number), 'length': uint16_length} + elif size == uint32: + if byte_length < uint32_length: + raise ValueError('ExpectedMoreBytesToDecodeUint32BigSize') + + uint32_number = int.from_bytes(bytes_data[1:5], 'big') + + if uint32_number <= max_16bit_number: + raise ValueError('ExpectedLargerNumberToDecodeUint32BigSize') + + return {'decoded': str(uint32_number), 'length': uint32_length} + else: + if byte_length < uint64_length: + raise ValueError('ExpectedMoreBytesToDecodeUint64BigSize') + + uint64_number = int.from_bytes(bytes_data[1:9], 'big') + + if uint64_number <= max_32bit_number: + raise ValueError('ExpectedLargerNumberToDecodeUint64BigSize') + + return {'decoded': str(uint64_number), 'length': uint64_length} + +def decode_tlv_record(data): + def hex_len(byte_length): + return 0 if not byte_length else byte_length * 2 + + def read(from_pos, hex_str, to_pos=None): + if to_pos is None: + return hex_str[from_pos:] + else: + return hex_str[from_pos:to_pos + from_pos] + encoded = data['encoded'] + offset = data.get('offset', 0) + + start = hex_len(offset) + + type_record = decode_big_size(read(start, encoded)) + + size_start = start + hex_len(type_record['length']) + + bytes_record = decode_big_size(read(size_start, encoded)) + + meta_bytes = type_record['length'] + bytes_record['length'] + + if not int(bytes_record['decoded']): + return {'length': meta_bytes, 'type': type_record['decoded'], 'value': ''} + + value_start = start + hex_len(meta_bytes) + total_bytes = meta_bytes + int(bytes_record['decoded']) + + if start + hex_len(total_bytes) > len(encoded): + raise ValueError('ExpectedAdditionalValueBytesInTlvRecord') + + return { + 'length': total_bytes, + 'type': type_record['decoded'], + 'value': read(value_start, encoded, hex_len(int(bytes_record['decoded']))) + } + +def decode_tlv_stream(encoded): + + if not is_hex(encoded): + raise ValueError('ExpectedHexEncodedTlvStreamToDecode') + + if not encoded: + return [] + + total_bytes = len(encoded) // 2 + stream = {'offset': 0, 'records': []} + + while stream['offset'] < total_bytes: + stream['record'] = decode_tlv_record({'encoded': encoded, 'offset': stream['offset']}) + + stream['offset'] += stream['record']['length'] + stream['records'].append(stream['record']) + + return stream['records'] + +def parse_response_code(data): + encoded = data['encoded'] + + if not encoded: + raise ValueError('ExpectedResponseCodeValueToParseResponseCode') + + records = decode_tlv_stream(encoded) + code_record = next((record for record in records if record['type'] == '0'), None) + + if not code_record: + raise ValueError('ExpectedCodeRecordToParseResponseCode') + + code = int(decode_big_size(code_record['value'])['decoded']) + if code > 2 ** 53 - 1: + raise
ValueError('UnexpectedlyLargeResponseCodeInResponse') + + if code < 100: + raise ValueError('UnexpectedlySmallResponseCodeInResponse') + + if not code > 400: + return {} + + message_record = next((record for record in records if record['type'] == '1'), None) or {'value': ''} + + return {'failure': [code, hex_as_utf8(message_record['value'])]} + +def parse_peer_request_message(message): + records = decode_tlv_stream(message[len('626f73ff'):]) + version = next((record for record in records if record['type'] == '0'), None) + + if version is not None: + raise ValueError('UnexpectedVersionNumberOfRequestMessage') + + id_record = next((record for record in records if record['type'] == '1'), None) + + if id_record is None or len(id_record['value']) != 64: + raise ValueError('ExpectedRequestIdInRequestMessage') + + records_record = next((record for record in records if record['type'] == '5'), None) or {'value':''} + + response_code_record = next((record for record in records if record['type'] == '2'), None) + type_record = next((record for record in records if record['type'] == '3'), None) + + if not type_record and not response_code_record: + raise ValueError('ExpectedEitherRequestParametersOrResponseCode') + + if response_code_record is not None: + response_code = parse_response_code({'encoded': response_code_record['value']}) + failure = response_code['failure'] if 'failure' in response_code else None + + return { + 'response': { + 'failure': failure, + 'id': id_record['value'], + 'records': [{'type': record['type'], 'value': record['value']} for record in decode_tlv_stream(records_record['value'])], + }, + } + else: + return { + 'request': { + 'id': id_record['value'], + 'records': [{'type': record['type'], 'value': record['value']} for record in decode_tlv_stream(records_record['value'])], + 'type': decode_big_size(type_record['value'])['decoded'], + }, + } + +def decode_anchored_trade_data(encoded): + anchor_prefix = 'anchor-trade-secret:' + + if not encoded.startswith(anchor_prefix): + return {} + + encoded_data = encoded[len(anchor_prefix):] + + try: + decoded_data = decode_tlv_stream(base64.b64decode(encoded_data).hex()) + except Exception as e: + return {} + + records = decoded_data + channel_record = next((record for record in records if record['type'] == '3'), None) + description_record = next((record for record in records if record['type'] == '1'), None) + secret_record = next((record for record in records if record['type'] == '0'), None) + price_record = next((record for record in records if record['type'] == '2'), None) + + if channel_record: + try: + channel_value = int(decode_big_size(channel_record['value'])['decoded']) + except ValueError: + return {} + + return { + 'channel': channel_value, + 'price': hex_as_utf8(price_record['value']) if price_record else None, + } + + if description_record and secret_record: + return { + 'description': hex_as_utf8(description_record['value']),
+ 'price': hex_as_utf8(price_record['value']) if price_record else None, + 'secret': hex_as_utf8(secret_record['value']), + } + return None + +def encode_tlv_record(data): + def byte_length_of(hex_string): + return len(hex_string) // 2 + + def encode(type, length, val): + return f"{type}{length}{val}" + + type_number = data["type"] + value_hex = data["value"] + + data_length = encode_as_bigsize(byte_length_of(value_hex)) + encoded_tlv_record = encode(encode_as_bigsize(type_number), data_length, value_hex) + + return encoded_tlv_record + +def encode_peer_request(data): + id = data['id'] + records = data.get('records') + request_type = data['type'] + if not id: + raise ValueError('ExpectedRequestIdHexStringToEncodePeerRequest') + + if records is not None and not isinstance(records, list): + raise ValueError('ExpectedRecordsArrayToEncodePeerRequest') + + if not request_type: + raise ValueError('ExpectedRequestTypeToEncodePeerRequest') + + peer_response = [{'type': '1', 'value': id},{'type': '3', 'value': encode_as_bigsize(request_type)},{'type': '5', 'value': records}] + + peer_response = [ + {'type': item['type'], 'value': item['value']} if not isinstance(item['value'], list) else {'type': item['type'], 'value': ''.join([encode_tlv_record(record) for record in item['value']])} + for item in peer_response + ] + + peer_response = [{'type': item['type'], 'value': str(item['value'])} for item in peer_response] + + return '626f73ff' + ''.join([encode_tlv_record(record) for record in peer_response]) + +def encode_response_code(data): + code_for_success = 200 + failure = data.get('failure') + + if not failure: + return ''.join([encode_tlv_record(record) for record in [{'type': '0', 'value': encode_as_bigsize(code_for_success)}]]) + + if not isinstance(failure, list): + raise ValueError('ExpectedFailureArrayToEncodeResponseCode') + + code, message = failure + + if not code: + raise ValueError('ExpectedErrorCodeToEncodeResponseCode') + + records = [{'value': encode_as_bigsize(code), 'type': '0'}, {'value': utf8_as_hex(message), 'type': '1'}] + + return ''.join([encode_tlv_record(record) for record in records]) + +def encode_peer_response(data): + failure = data.get('failure') + id = data['id'] + records = data.get('records') + + code = encode_response_code({'failure': failure}) + encoded = ''.join([encode_tlv_record(record) for record in records]) if records is not None else None + + peer_response = [{'type': '1', 'value': id}, {'type': '2', 'value': code}, {'type': '5', 'value': encoded}] + + return '626f73ff' + ''.join([encode_tlv_record(record) for record in peer_response]) + +def get_legacy_trades(stub): + trades = [] + for invoice in stub.ListInvoices(ln.ListInvoiceRequest(pending_only=True)).invoices: + if invoice.is_keysend == False: + trade = decode_anchored_trade_data(invoice.memo) + if trade: + trade['price'] = invoice.value + trade['id'] = invoice.r_hash.hex() + trade['expiry'] = invoice.expiry + trade['creation_date'] = invoice.creation_date + trades.append(trade) + return trades + +def get_trades(trade_id=None): + trades = TradeSales.objects.filter(Q(expiry__isnull=True) | Q(expiry__gt=datetime.now())).filter(Q(sale_limit__isnull=True) | Q(sale_count__lt=F('sale_limit'))) + if trade_id: + trades = trades.filter(id=trade_id) + return trades + +def decodePrefix(prefix): + bech32CurrencyCodes={"bc": "bitcoin","bcrt": "regtest","ltc": "litecoin","tb": "testnet","tbs": "signet","sb": "simnet"} + matches = re.compile(r'^ln(\S+?)(\d*)([a-zA-Z]?)$').match(prefix) + + if not matches or not 
matches.groups(): + raise ValueError('InvalidPaymentRequestPrefix') + + _, _, type = matches.groups() + + prefixElements = re.compile(r'^ln(\S+)$').match(prefix) if not type else matches + + currency, amount, units = prefixElements.groups() + + network = bech32CurrencyCodes.get(currency) + + if not network: + raise ValueError('UnknownCurrencyCodeInPaymentRequest') + + return amount, network, units + +def parseHumanReadableValue(data): + amountMultiplierPattern = r'^[^munp0-9]$' + divisibilityMarkerLen = 1 + divisibilityPattern = r'^[munp]$' + + amount = data.get('amount') + units = data.get('units') + + hrp = f"{amount}{units}" + + if re.match(divisibilityPattern, hrp[-divisibilityMarkerLen:]): + return { + 'divisor': hrp[-divisibilityMarkerLen:], + 'value': hrp[:-(divisibilityMarkerLen)], + } + + if re.match(amountMultiplierPattern, hrp[-divisibilityMarkerLen:]): + raise ValueError('InvalidAmountMultiplier') + + return {'value': hrp} + +def hrpAsMtokens(amount, units): + divisors ={"m": "1000","n": "1000000000","p": "1000000000000","u": "1000000"} + + if not amount: + return None + + result = parseHumanReadableValue({'amount': amount, 'units': units}) + divisor = result.get('divisor') + value = result['value'] + + if not bool(re.match(r'^\d+$', value)): + raise ValueError('ExpectedValidNumericAmountToParseHrpAsMtokens') + + val = Decimal(value) + + if not divisor: + return str(int(val * Decimal(1e11))) + + div = Decimal(divisors.get(divisor, 1)) + + return str(int(val * Decimal(1e11) / div)) + +def mtokensAsHrp(mtokens): + amount = int(mtokens) + hrp = None + + multipliers = { + 'n': 100, + 'u': 100000, + 'm': 100000000, + '': 100000000000, + } + + for letter, value in multipliers.items(): + value = int(value) + if amount % value == 0: + if letter == 'u': + hrp = f"{amount // value}u" + elif letter == 'm': + hrp = f"{amount // value}m" + elif letter == 'n': + hrp = f"{amount // value}n" + elif letter == '': + hrp = f"{amount // value}" + + if not hrp: + return str(amount * 10) + 'p' + return hrp + +def decodeBech32Words(words): + inBits = 5 + outBits = 8 + + bits = 0 + maxV = (1 << outBits) - 1 + result = [] + value = 0 + + for word in words: + value = (value << inBits) | word + bits += inBits + while bits >= outBits: + bits -= outBits + result.append((value >> bits) & maxV) + + if bits: + result.append((value << (outBits - bits)) & maxV) + + return bytes(result) + +def byteEncodeRequest(request): + if not request: + raise ValueError('ExpectedPaymentRequestToByteEncode') + + if request[:2].lower() != 'ln': + raise ValueError('ExpectedLnPrefixToByteEncodePaymentRequest') + + (prefix, words) = bech32_decode(request) + + (amount, network, units) = decodePrefix(prefix) + + mtokens = hrpAsMtokens(amount, units) + + encoded = decodeBech32Words(words).hex() + + return {'encoded': encoded, 'network': network, 'mtokens': mtokens, 'words': len(words)} + +def byteDecodeRequest(encoded, mtokens, network, words): + if not is_hex(encoded): + raise ValueError('ExpectedHexEncodedPaymentRequestDataToDecodeRequest') + if not network: + raise ValueError('ExpectedNetworkToDecodeByteEncodedRequest') + if not words: + raise ValueError('ExpectedWordsCountToDecodeByteEncodedRequest') + + if network == 'bitcoin': + prefix = 'bc' + elif network == 'testnet': + prefix = 'tb' + elif network == 'regtest': + prefix = 'bcrt' + elif network == 'signet': + prefix = 'tbs' + else: + raise ValueError('ExpectedKnownNetworkToDecodeByteEncodedRequest') + + prefix = 'ln' + prefix + mtokensAsHrp(mtokens) + five_bit =
convertbits(bytes.fromhex(encoded), 8, 5)[:words] + return bech32_encode(prefix, five_bit) + +def encode_request_as_records(request): + if not request: + raise ValueError('ExpectedRequestToEncodeAsRequestRecords') + + records = [] + + result = byteEncodeRequest(request) + encoded = result['encoded'] + mtokens = result['mtokens'] + words = result['words'] + + records.append({'type': '1', 'value': encoded}) + records.append({'type': '0', 'value': encode_as_bigsize(words)}) + + if mtokens: + records.append({'type': '2', 'value': encode_as_bigsize(mtokens)}) + + return ''.join([encode_tlv_record(record) for record in records]), result['network'] + +def decode_records_as_request(encoded, network): + if not encoded: + raise ValueError('ExpectedEncodedPaymentRequestRecordsToDecode') + + if not network: + raise ValueError('ExpectedNetworkNameToDeriveRequestFromRequestRecords') + + records = decode_tlv_stream(encoded) + word_count = next((record for record in records if record['type'] == '0'), None) + if not word_count: + raise ValueError('ExpectedWordCountRecordInPaymentTlvRecord') + try: + words = int(decode_as_bigsize(word_count['value'])) + except: + raise ValueError('ExpectedPaymentRequestWordCountInRequestRecords') + + details = next((record for record in records if record['type'] == '1'), None) + if not details: + raise ValueError('ExpectedEncodedPaymentDetailsInPaymentTlvRecord') + + amount = next((record for record in records if record['type'] == '2'), None) + if not amount: + raise ValueError('ExpectedPaymentRequestTokensInPaymentRecords') + mtokens = decode_as_bigsize(amount['value']) + + try: + request = byteDecodeRequest(details['value'], mtokens, network, words) + except: + raise ValueError('ExpectedValidPaymentRequestDetailsToDecodeRecords') + + return request + +def encode_final_trade(auth, payload, request): + if not request: + raise ValueError('ExpectedPaymentRequestToDeriveNetworkRecord') + + encoded_request, network = encode_request_as_records(request) + trade_records = [{'type': '2', 'value': encoded_request}] + + if settings.LND_NETWORK != 'mainnet': + network_value = '02' if network == 'regtest' else '01' + trade_records.append({'type': '1','value': network_value}) + + encryption_records = ''.join([encode_tlv_record(record) for record in [{'type': '0', 'value': payload}, {'type': '1', 'value': auth}]]) + details_records = ''.join([encode_tlv_record(record) for record in [{'type': '0', 'value': encryption_records}]]) + trade_records.append({'type': '3', 'value': details_records}) + + return '626f73ff' + ''.join([encode_tlv_record(record) for record in trade_records]) + +def decode_final_trade(network, request, details): + details = decode_tlv_stream(details['value']) + + encrypted = next((record for record in details if record['type'] == '0'), None) + if not encrypted: + raise ValueError('ExpectedEncryptedRecordToDecodeTrade') + encrypted_records = decode_tlv_stream(encrypted['value']) + + encrypted_data = next((record for record in encrypted_records if record['type'] == '0'), None) + if not encrypted_data: + raise ValueError('ExpectedEncryptedDataRecordToDecodeTrade') + + auth = next((record for record in encrypted_records if record['type'] == '1'), None) + if not auth: + raise ValueError('ExpectedAuthDataRecordToDecodeTrade') + + return decode_records_as_request(request['value'], network), auth['value'], encrypted_data['value'] + +def getSecret(stub, sale_type): + if sale_type == 1: # routing data + try: + filter_30day = datetime.now() - timedelta(days=30) +
incoming_nodes = Forwards.objects.filter(forward_date__gte=filter_30day).values('chan_id_in').annotate(ppm=Round((Sum('fee')/Sum('amt_in_msat'))*1000000000, output_field=IntegerField()), score=Round((Round(Count('id')/1, output_field=IntegerField())+Round(Sum('amt_in_msat')/100000, output_field=IntegerField()))/10, output_field=IntegerField())).exclude(score=0).order_by('-score', '-ppm')[:5] + outgoing_nodes = Forwards.objects.filter(forward_date__gte=filter_30day).values('chan_id_out').annotate(ppm=Round((Sum('fee')/Sum('amt_out_msat'))*1000000000, output_field=IntegerField()), score=Round((Round(Count('id')/1, output_field=IntegerField())+Round(Sum('amt_out_msat')/100000, output_field=IntegerField()))/10, output_field=IntegerField())).exclude(score=0).order_by('-score', '-ppm')[:5] + secret = json.dumps({"incoming_nodes":list(incoming_nodes.values('chan_id_in', 'score', 'ppm')), "outgoing_nodes":list(outgoing_nodes.values('chan_id_out', 'score', 'ppm'))}) + except Exception as e: + print(f"{datetime.now().strftime('%c')} : [P2P] : Error getting secret: {str(e)}") + secret = None + finally: + return secret + elif sale_type == 2: # payment data + try: + self_pubkey = stub.GetInfo(ln.GetInfoRequest()).identity_pubkey + filter_30day = datetime.now() - timedelta(days=30) + # exclude_list = AvoidNodes.objects.values_list('pubkey') + payments_30day = Payments.objects.filter(creation_date__gte=filter_30day, status=2).values_list('payment_hash') + payment_nodes = PaymentHops.objects.filter(payment_hash__in=payments_30day).exclude(node_pubkey=self_pubkey).values('node_pubkey').annotate(ppm=Round((Sum('fee')/Sum('amt'))*1000000, output_field=IntegerField()), score=Round((Round(Count('id')/1, output_field=IntegerField())+Round(Sum('amt')/100000, output_field=IntegerField()))/10, output_field=IntegerField())).exclude(score=0).order_by('-score', 'ppm')[:10] + secret = json.dumps({"payment_nodes": list(payment_nodes.values('node_pubkey', 'score', 'ppm'))}) + except Exception as e: + print(f"{datetime.now().strftime('%c')} : [P2P] : Error getting secret: {str(e)}") + secret = None + finally: + return secret + else: + return None + +def serve_trades(stub): + print(f"{datetime.now().strftime('%c')} : [P2P] : Serving trades...") + for trade in get_trades(): + print(f"{datetime.now().strftime('%c')} : [P2P] : Serving trade: {trade.id}") + for response in stub.SubscribeCustomMessages(ln.SubscribeCustomMessagesRequest()): + if response.type == 32768: + from_peer = response.peer + msg_type = response.type + message = response.data.hex() + if msg_type == 32768 and message.lower().startswith('626f73ff'): + msg_response = parse_peer_request_message(message) + if 'request' in msg_response: + request = msg_response['request'] + if 'type' in request: + req_type = request['type'] + if req_type == '8050005': # request a seller to finalize a trade or give all open trades + print(f"{datetime.now().strftime('%c')} : [P2P] : SELLER ACTION", '|', 'ID:', request['id'], '|', 'Records:', request['records']) + select_trade = next((record for record in request['records'] if record['type'] == '0'), None) + request_trade = next((record for record in request['records'] if record['type'] == '1'), None) + if request_trade: + trades = get_trades() + for trade in trades: + trade_data = encode_peer_request({'id':secrets.token_bytes(32).hex(), 'type':'8050006', 'records':[{'type': '1', 'value': trade.id}, {'type': '2', 'value': utf8_as_hex(trade.description)}, {'type': '0', 'value': request_trade['value']}]}) +
stub.SendCustomMessage(ln.SendCustomMessageRequest(peer=from_peer, type=32768, data=bytes.fromhex(trade_data))) + ack_data = encode_peer_response({'failure':None, 'id':request['id'], 'records':[]}) + for trade in trades: + stub.SendCustomMessage(ln.SendCustomMessageRequest(peer=from_peer, type=32768, data=bytes.fromhex(ack_data))) + if select_trade: + selected_trade = get_trades(select_trade['value']) + if selected_trade: + trade_details = selected_trade[0] + if not trade_details.secret: + secret = getSecret(stub, trade_details.sale_type) + else: + secret = trade_details.secret + if not secret: + print(f"{datetime.now().strftime('%c')} : [P2P] : Failed to get secret for:", trade_details.id) + continue + signerstub = lnsigner.SignerStub(lnd_connect()) + shared_key = signerstub.DeriveSharedKey(lns.SharedKeyRequest(ephemeral_pubkey=from_peer)).shared_key + preimage = secrets.token_bytes(32) + shared_secret = bytes(x ^ y for x, y in zip(shared_key, preimage)) + cipher = Cipher(algorithms.AES(shared_secret), modes.GCM(bytes(16)), backend=default_backend()) + encryptor = cipher.encryptor() + ciphertext = (encryptor.update(secret.encode('utf-8')) + encryptor.finalize()).hex() + auth_tag = encryptor.tag.hex() + time_to_expiry = int((trade_details.expiry-datetime.now()).total_seconds()) if trade_details.expiry else None + default_expiry = 30*60 + inv_expiry = default_expiry if time_to_expiry is None or time_to_expiry > default_expiry else time_to_expiry + if inv_expiry > 0: + final_invoice = stub.AddInvoice(ln.Invoice(memo=trade_details.description, value=trade_details.price, expiry=inv_expiry, r_preimage=preimage)) + trade_data = encode_peer_response({'failure': None, 'id':request['id'], 'records': [{'type':'1', 'value':encode_final_trade(auth_tag, ciphertext, final_invoice.payment_request)}]}) + trade_details.sale_count = F('sale_count') + 1 + trade_details.save() + stub.SendCustomMessage(ln.SendCustomMessageRequest(peer=from_peer, type=32768, data=bytes.fromhex(trade_data))) + else: + print(f"{datetime.now().strftime('%c')} : [P2P] : Expected request type in message:", request['id']) + if 'response' in msg_response: + request = msg_response['response'] + if 'failure' in request and request['failure'] != None: + # failure message returned + print(f"{datetime.now().strftime('%c')} : [P2P] : Failure:", request['failure']) + else: + if len(request['records']) == 0: + # message acknowledgements + print(f"{datetime.now().strftime('%c')} : [P2P] : ACK", '|', 'ID:', request['id']) + +async def get_open_trades(astub, results): + try: + async for response in astub.SubscribeCustomMessages(ln.SubscribeCustomMessagesRequest()): + if response.type == 32768: + # from_peer = response.peer + msg_type = response.type + message = response.data.hex() + if msg_type == 32768 and message.lower().startswith('626f73ff'): + msg_response = parse_peer_request_message(message) + if 'request' in msg_response: + request = msg_response['request'] + if 'type' in request: + req_type = request['type'] + if req_type == '8050006': # request a buyer to select the trade provided + # select a trade from the records + trade = decode_basic_trade(request['records']) + print('BUYER ACTION', '|', 'ID:', request['id'], '|', 'Trade:', trade) + results.append({'id': request['id'], 'trade': trade}) + else: + raise ValueError('ExpectedRequestTypeInRequestMessage') + if 'response' in msg_response: + request = msg_response['response'] + if 'failure' in request and request['failure'] != None: + # failure message returned + print('Failure:', request['failure']) + return +
else: + if len(request['records']) == 0: + # message acknowledgements + print('ACK', '|', 'ID:', request['id']) + else: + # buyer to pay invoice and get their secret from the preimage + print('BUYER FINALIZE', '|', 'ID:', request['id'], '|', 'Records:', request['records']) + return request['records'] + except asyncio.CancelledError: + pass + except Exception as e: + print('Error running task:', str(e)) + +def encode_nodes_data(data, network_value): + records = [{'type': '0', 'value': '01'}] + if network_value: + records.append({'type': '1', 'value': network_value}) + + node_records = [] + for idx, node in enumerate(data): + node_record = [{ 'type': '2', 'value': node['id'] }] + encoded_node_record = ''.join([encode_tlv_record(record) for record in node_record]) + node_records.append({'type': str(idx), 'value': encoded_node_record}) + records.append({'type': '4', 'value': ''.join([encode_tlv_record(record) for record in node_records])}) + nodes_encoded = '626f73ff' + ''.join([encode_tlv_record(record) for record in records]) + return nodes_encoded + +def decode_node_record(encoded): + channel_hex_length = 16 + if not encoded: + raise ValueError('ExpectedEncodedNodeRecordToGetNodePointer') + + records = decode_tlv_stream(encoded['value']) + high_key_record = next((n for n in records if n['type'] == '1'), None) + + if high_key_record and len(high_key_record['value']) != channel_hex_length: + raise ValueError('ExpectedChannelIdInHighKeyRecord') + if high_key_record: + return {'high_channel': high_key_record['value']} + + low_key_record = next((n for n in records if n['type'] == '0'), None) + if low_key_record and len(low_key_record['value']) != channel_hex_length: + raise ValueError('ExpectedChannelIdInLowKeyRecord') + if low_key_record: + return {'low_channel': low_key_record['value']} + + id_record = next((n for n in records if n['type'] == '2'), None) + if not id_record: + raise ValueError('ExpectedNodeIdRecordToMapNodeRecordToNodePointer') + + if not (bool(id_record['value']) and re.match(r'^0[2-3][0-9A-F]{64}$', id_record['value'], re.I)): + raise ValueError('ExpectedNodeIdPublicKeyToMapNodeRecordToNodePointer') + + return {'node': {'id': id_record['value']}} + +def decode_open_trade(network, records): + if not network: + raise ValueError('ExpectedNetworkNameToDecodeOpenTrade') + if not isinstance(records, list): + raise ValueError('ExpectedArrayOfRecordsToDecodeOpenTrade') + + nodes_record = next((n for n in records if n['type'] == '4'), None) + id_record = next((n for n in records if n['type'] == '5'), None) + if not nodes_record: + raise ValueError('ExpectedNodesRecordToDecodeOpenTradeDetails') + + try: + decode_tlv_stream(nodes_record['value']) + except: + raise ValueError('ExpectedValidNodesTlvStreamToDecodeOpenTradeDetails') + + node_records = decode_tlv_stream(nodes_record['value']) + if not node_records: + raise ValueError('ExpectedNodeRecordsForOpenTrade') + + return network, id_record['value'] if id_record else None, [decode_node_record(value) for value in node_records] + +def decode_trade_data(encoded): + if not encoded.lower().startswith('626f73ff'): + raise ValueError('UnexpectedFormatOfTradeToDecode') + try: + decoded_trade = decode_tlv_stream(encoded[8:]) + except: + raise ValueError('ExpectedValidTlvStreamForTradeData') + records = decoded_trade + + network_value = next((record for record in records if record['type'] == '1'), None) + if network_value: + if network_value['value'] == '01': + if settings.LND_NETWORK == 'testnet': + network = 'testnet' + else: + raise
ValueError('TradeRequestForAnotherNetwork') + elif network_value['value'] == '02': + if settings.LND_NETWORK == 'regtest': + network = 'regtest' + else: + raise ValueError('TradeRequestForAnotherNetwork') + else: + raise ValueError('UnknownNetworkNameToDeriveNetworkRecordFor') + else: + if settings.LND_NETWORK == 'mainnet': + network = 'bitcoin' + else: + raise ValueError('TradeRequestForAnotherNetwork') + + request = next((record for record in records if record['type'] == '2'), None) + details = next((record for record in records if record['type'] == '3'), None) + swap = next((record for record in records if record['type'] == '6'), None) + + if request and details: + return {'secret': decode_final_trade(network, request, details)} + elif request: + pass # just a payment + elif swap: + pass # swap offer + else: + return {'connect': decode_open_trade(network, records)} + +def encode_trade(description, price, secret): + anchorPrefix = 'anchor-trade-secret:' + elements = [ + {'value': secret, 'type': '0'}, + {'value': description, 'type': '1'}, + {'value': price, 'type': '2'}, + ] + + records = [ + {'type': record['type'], 'value': bytes(record['value'], 'utf-8').hex()} + for record in elements if record + ] + encoded_data = ''.join([encode_tlv_record(record) for record in records]) + encoded = anchorPrefix + base64.b64encode(bytes.fromhex(encoded_data)).decode() + + return encoded + +def create_trade_details(stub): + nodes = [{'id':stub.GetInfo(ln.GetInfoRequest()).identity_pubkey}] + lnd_network = settings.LND_NETWORK + if lnd_network == 'mainnet': + network_value = None + elif lnd_network == 'testnet': + network_value = '01' + elif lnd_network == 'regtest': + network_value = '02' + else: + raise ValueError('UnsupportedNetworkForTrades') + return encode_nodes_data(nodes, network_value) + +def create_trade_anchor(stub, description, price, secret, expiry): + try: + encoded_trade = encode_trade(description, price, secret) + stub.AddInvoice(ln.Invoice(value=int(price), expiry=int(expiry)*60*60*24, memo=encoded_trade)) + except Exception as e: + print('Error creating trade:', str(e)) + else: + print('Trade Anchor:', encoded_trade) + +async def request_trades(to_peer): + results = [] + start_time = time() + astub = lnrpc.LightningStub(async_lnd_connect()) + task = asyncio.create_task(get_open_trades(astub, results)) + asyncio.gather(task) + trade_data = encode_peer_request({'id':secrets.token_bytes(32).hex(), 'type':'8050005', 'records':[{'type': '1', 'value': secrets.token_bytes(32).hex()}]}) + astub.SendCustomMessage(ln.SendCustomMessageRequest(peer=bytes.fromhex(to_peer), type=32768, data=bytes.fromhex(trade_data))) + while len(results) == 0: + if (time() - start_time) < 30: + await asyncio.sleep(1) + else: + print('Timeout waiting for trade records from peer.') + task.cancel() + return None + await asyncio.sleep(1) + if len(results) == 1: + ack_data = encode_peer_response({'failure':None, 'id': results[0]['id'], 'records':[]}) + astub.SendCustomMessage(ln.SendCustomMessageRequest(peer=bytes.fromhex(to_peer), type=32768, data=bytes.fromhex(ack_data))) + choice = 1 + else: + print('Select a trade to buy:') + for idx, trade in enumerate(results, start=1): + ack_data = encode_peer_response({'failure':None, 'id': trade['id'], 'records':[]}) + astub.SendCustomMessage(ln.SendCustomMessageRequest(peer=bytes.fromhex(to_peer), type=32768, data=bytes.fromhex(ack_data))) + print(f"{idx}.
+async def request_trades(to_peer):
+    results = []
+    start_time = time()
+    astub = lnrpc.LightningStub(async_lnd_connect())
+    # create_task already schedules the listener, no extra gather is needed
+    task = asyncio.create_task(get_open_trades(astub, results))
+    trade_data = encode_peer_request({'id': secrets.token_bytes(32).hex(), 'type': '8050005', 'records': [{'type': '1', 'value': secrets.token_bytes(32).hex()}]})
+    astub.SendCustomMessage(ln.SendCustomMessageRequest(peer=bytes.fromhex(to_peer), type=32768, data=bytes.fromhex(trade_data)))
+    while len(results) == 0:
+        if (time() - start_time) < 30:
+            await asyncio.sleep(1)
+        else:
+            print('Timeout waiting for trade records from peer.')
+            task.cancel()
+            return None
+    # brief grace period so any remaining trade messages can arrive
+    await asyncio.sleep(1)
+    if len(results) == 1:
+        ack_data = encode_peer_response({'failure': None, 'id': results[0]['id'], 'records': []})
+        astub.SendCustomMessage(ln.SendCustomMessageRequest(peer=bytes.fromhex(to_peer), type=32768, data=bytes.fromhex(ack_data)))
+        choice = 1
+    else:
+        print('Select a trade to buy:')
+        for idx, trade in enumerate(results, start=1):
+            ack_data = encode_peer_response({'failure': None, 'id': trade['id'], 'records': []})
+            astub.SendCustomMessage(ln.SendCustomMessageRequest(peer=bytes.fromhex(to_peer), type=32768, data=bytes.fromhex(ack_data)))
+            print(f"{idx}. {trade['trade']['description']}")
+        while True:
+            try:
+                choice = int(input("Enter the number of your choice: "))
+                if 1 <= choice <= len(results):
+                    break
+                else:
+                    print("Invalid input. Please enter a valid option.")
+            except ValueError:
+                print("Invalid input. Please enter a number for your selection.")
+    selected_trade = results[choice - 1]
+    trade_data = encode_peer_request({'id': secrets.token_bytes(32).hex(), 'type': '8050005', 'records': [{'type': '0', 'value': selected_trade['trade']['id']}]})
+    astub.SendCustomMessage(ln.SendCustomMessageRequest(peer=bytes.fromhex(to_peer), type=32768, data=bytes.fromhex(trade_data)))
+    return await task
+
+def decrypt_secret(stub, decoded_trade):
+    invoice, auth, payload = decoded_trade['secret']
+    decoded_invoice = stub.DecodePayReq(ln.PayReqString(pay_req=invoice))
+    print('Invoice for secret decoded.')
+    print('Destination:', decoded_invoice.destination)
+    print('Amount:', decoded_invoice.num_satoshis)
+    print('Description:', decoded_invoice.description)
+    ask_pay = input('Pay the invoice and decrypt the secret? [y/N]: ')
+    if ask_pay.lower() == 'y':
+        preimage = None
+        routerstub = lnrouter.RouterStub(lnd_connect())
+        for response in routerstub.SendPaymentV2(lnr.SendPaymentRequest(payment_request=invoice, timeout_seconds=60)):
+            if response.status == 2:
+                print('Payment paid!')
+                preimage = bytes.fromhex(response.payment_preimage)
+            if response.status > 2:
+                print('Payment failed. Please try again.')
+                return
+        # guard against the payment stream ending without a success status
+        if preimage is None:
+            print('Payment did not complete. Please try again.')
+            return
+        signerstub = lnsigner.SignerStub(lnd_connect())
+        shared_key = signerstub.DeriveSharedKey(lns.SharedKeyRequest(ephemeral_pubkey=bytes.fromhex(decoded_invoice.destination))).shared_key
+        # AES key = ECDH shared key XOR payment preimage
+        shared_secret = bytes(x ^ y for x, y in zip(shared_key, preimage))
+        cipher = Cipher(algorithms.AES(shared_secret), modes.GCM(bytes(16), bytes.fromhex(auth)), backend=default_backend())
+        decryptor = cipher.decryptor()
+        decrypted = decryptor.update(bytes.fromhex(payload)) + decryptor.finalize()
+        print('Successfully decrypted the secret:', decrypted.decode('utf-8'))
+
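+# How the secret above is recovered, per the code in decrypt_secret(): the
+# AES-256-GCM key is the byte-wise XOR of the 32-byte ECDH shared key
+# (DeriveSharedKey against the seller's pubkey from the decoded invoice) and
+# the 32-byte payment preimage, so the payload only becomes decryptable once
+# the invoice is actually paid. A zeroed 16-byte IV is used and the GCM auth
+# tag travels in the trade's 'auth' field.
+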
+def main():
+    options = ["Buy A Trade", "Setup A Sale", "Serve Trades"]
+    print("Select an option:")
+    for idx, option in enumerate(options, start=1):
+        print(f"{idx}. {option}")
+    while True:
+        try:
+            choice = int(input("Enter the number of your choice: "))
+            if 1 <= choice <= len(options):
+                break
+            else:
+                print("Invalid input. Please enter a valid option.")
+        except ValueError:
+            print("Invalid input. Please enter a number for your selection.")
+    selected_option = options[choice - 1]
+    stub = lnrpc.LightningStub(lnd_connect())
+    if selected_option == 'Setup A Sale':
+        description = input('Enter a description for the trade: ')
+        price = input('Enter the price to charge in sats: ')
+        expiry = input('Enter desired days until trade expiry: ')
+        secret = input('Enter the secret you want to sell: ')
+        create_trade_anchor(stub, description, price, secret, expiry)
+        ask_serve = input('Start serving trades? y/N: ')
+        if ask_serve.lower() == 'y':
+            print('Generic Trade Link:', create_trade_details(stub))
+            serve_trades(stub)
+    if selected_option == 'Serve Trades':
+        print('Generic Trade Link:', create_trade_details(stub))
+        serve_trades(stub)
+    if selected_option == 'Buy A Trade':
+        trade = input('Enter an encoded trade: ')
+        decoded_trade = decode_trade_data(trade)
+        # decode_trade_data() can return None for plain payments or swap offers
+        if decoded_trade and 'secret' in decoded_trade:
+            decrypt_secret(stub, decoded_trade)
+        if decoded_trade and 'connect' in decoded_trade:
+            network, trade_id, connection = decoded_trade['connect']
+            if 'node' in connection[0]:
+                try:
+                    to_peer = connection[0]['node']['id']
+                    if not Peers.objects.filter(pubkey=to_peer, connected=True).exists():
+                        host = stub.GetNodeInfo(ln.NodeInfoRequest(pub_key=to_peer, include_channels=False)).node.addresses[0].addr
+                        stub.ConnectPeer(ln.ConnectPeerRequest(addr=ln.LightningAddress(pubkey=to_peer, host=host), timeout=60))
+                except Exception:
+                    raise ValueError('PeerConnectionError')
+            else:
+                raise ValueError('NoPeerFoundInConnectionData')
+            trade = asyncio.run(request_trades(to_peer))
+            if trade:
+                trade_data = next((record for record in trade if record['type'] == '1'), None)
+                if trade_data:
+                    decoded_trade = decode_trade_data(trade_data['value'])
+                    decrypt_secret(stub, decoded_trade)
+
+if __name__ == '__main__':
+    main()
\ No newline at end of file