
Run the default gnarly reducer for a few days and report on usage statistics #13

shrugs opened this issue Jun 1, 2018 · 0 comments

shrugs commented Jun 1, 2018

The current version of gnarly-bin (commit 999b9e3) is configured to index CryptoKitties NFT transfers, Transfer events, and block metadata.

Build yourself a docker container, push it under your own username on Docker Hub, and then run that container from the beginning of CryptoKitties history. We want to see how it performs for you!
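A sketch of the build-and-push flow, assuming you've cloned this repo and the Dockerfile sits at its root (YOUR_USERNAME is a placeholder for your Docker Hub username, and the image name is arbitrary):

# from the repo root: build, log in, and push under your own username
sudo docker build -t YOUR_USERNAME/gnarly-test:latest .
sudo docker login
sudo docker push YOUR_USERNAME/gnarly-test:latest

Then pull and run that image on your server with the command under Extra Usage Details below.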

Requirements

  • you must run your own Ethereum node. Infura is too unreliable with gnarly at the moment (perhaps due to rate limiting!). Our tests run against a fully-synced Parity archive node; theoretically a fast/warp-synced node will also work, just more slowly, since it requests historical information on demand from the network (see the node sketch after this list)
  • a VPS or server instance. gnarly should run in a relatively production-like environment. There are no minimum requirements, but make sure you don't run out of disk space for the blockchain you're downloading (1TB+ for a Parity archive node). Run gnarly on that same machine or a co-located one to cut down on round-trip time.
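If you're standing up a node from scratch, a Parity invocation along these lines is a reasonable starting point (a sketch from memory, not a tested command; double-check these flags against your Parity version):

# archive-mode node with tracing, serving JSON-RPC locally on the default port 8545
parity --pruning archive --tracing on \
  --jsonrpc-interface 127.0.0.1 \
  --jsonrpc-apis web3,eth,net,parity

Dropping --pruning archive gets you a fast/warp-synced node instead, which syncs much sooner but serves historical state on demand, as noted above.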

Extra Usage Details

To run the gnarly docker container, you can use a command like:

sudo docker run --name gnarly --net=host -d \
  -e "DEBUG=*,-sequelize*,-gnarly-core:store:*,-gnarly-core:api,-gnarly-core:blockstream:fast-forward,-gnarly-core:ourbit:notifyPatches" \
  -e "GNARLY_RESET=true" \
  -e "LATEST_BLOCK_HASH=0x62f9bc4786f6f8101dc9510aba5744a4ca5ed94d0e47d9b625d5f09afd7e842a" \
  -e "NODE_ENDPOINT=http://127.0.0.1:8545" \
  -e "CONNECTION_STRING=postgres:////" \
  shrugs/gnarly-test:latest

# and then save your logs with something like
sudo docker logs gnarly -f &> gnarly.log &

(Remember to configure the postgres connection string correctly. Also, if gnarly happens to crash (please report that bug ❤️), remove the GNARLY_RESET=true env var before restarting: that var tells gnarly to nuke your data store to nothingness, so avoid it if there's anything in your postgres database that you want to keep!)
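CONNECTION_STRING appears to be a standard postgres:// URI. The user, password, and database name below are placeholders of mine, not project defaults; substitute your own:

# postgres://USER:PASSWORD@HOST:PORT/DATABASE, e.g.
-e "CONNECTION_STRING=postgres://gnarly:changeme@127.0.0.1:5432/gnarly"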

That hash is the block hash of the block just before CryptoKitties was deployed, so we'll be indexing every CryptoKitty since the beginning.
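Once the container is up, you can sanity-check that it's alive and indexing with something like:

sudo docker ps --filter name=gnarly
tail -f gnarly.log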

Report on:

  • what does your network topology look like?
    • what instance size(s) are you using? what resources (mem, cpu) are available?
    • what specs does your database have?
  • any specific failure cases or bug reports you noted
  • the results of the following SQL queries, ideally over time, whenever you remember to record them (see the recording sketch after the queries)
-- wall-clock duration of the indexing run and average blocks per second
SELECT EXTRACT(EPOCH FROM (f."createdAt" - s."createdAt")) as duration, bc.block_count, bc."block_count" / EXTRACT(EPOCH FROM (f."createdAt" - s."createdAt")) as blocks_per_second FROM (
    select "createdAt" from "transactions" order by "createdAt" ASC limit 1  -- earliest row
) as s JOIN (
    select "createdAt" from "transactions" order by "createdAt" DESC limit 1 -- latest row
) as f ON 1=1 JOIN (
    select count(*) as block_count from "transactions"
) as bc ON 1=1;

SELECT ckt."tokenId" as latest_token_id, b."timestamp" as block_time, b."number" as block_number
FROM (select max("tokenId"::integer)::text as "tokenId" from "cryptoKitties_tokens") as lt
JOIN "cryptoKitties_tokens" ckt ON lt."tokenId" = ckt."tokenId"
JOIN "patches" p ON p."id" = ckt."patchId"
JOIN "transactions" t ON t."id" = p."transactionId"
JOIN "blocks" b ON b.hash = t."blockHash"
WHERE ckt."tokenId" = lt."tokenId";

Output should be like

duration,block_count,blocks_per_second
297313.879,109786,0.369259586431887

latest_token_id,block_time,block_number
250419,2017-12-11 20:16:46+00,4715906

(which shows this instance has been running for ~3.4 days (297313.879 seconds), has processed ~110k blocks at an average of ~0.37 blocks per second, and has indexed kitties up to token ID 250419 at block 4715906)

Notes

If you have any questions, mention me in this thread and I'll be around to help!
