Releases: smartcontractkit/chainlink
v1.2.0
Added
- Added support for the Nethermind Ethereum client.
- Added support for batch sending telemetry to the ingress server to improve performance.
- Added v2 P2P networking support (alpha)
New ENV vars:
- ADVISORY_LOCK_CHECK_INTERVAL (default: 1s) - when advisory locking mode is enabled, this controls how often Chainlink checks to make sure it still holds the advisory lock. It is recommended to leave this at the default.
- ADVISORY_LOCK_ID (default: 1027321974924625846) - when advisory locking mode is enabled, the application advisory lock ID can be changed using this env var. All instances of Chainlink that might run on a particular database must share the same advisory lock ID. It is recommended to leave this at the default.
- LOG_FILE_DIR (default: chainlink root directory) - if LOG_TO_DISK is enabled, this env var allows you to override the output directory for logging.
- SHUTDOWN_GRACE_PERIOD (default: 5s) - when the node is shutting down gracefully and exceeds this grace period, it terminates immediately (trying to close the DB connection) to avoid being SIGKILLed.
- SOLANA_ENABLED (default: false) - set to true to enable Solana support.
- TERRA_ENABLED (default: false) - set to true to enable Terra support.
- BLOCK_HISTORY_ESTIMATOR_EIP1559_FEE_CAP_BUFFER_BLOCKS - if EIP1559 mode is enabled, this optional env var controls the buffer blocks to add to the current base fee when sending a transaction. By default, the gas bumping threshold + 1 block is used. It is not recommended to change this unless you know what you are doing.
- TELEMETRY_INGRESS_BUFFER_SIZE (default: 100) - the number of telemetry messages to buffer before dropping new ones.
- TELEMETRY_INGRESS_MAX_BATCH_SIZE (default: 50) - the maximum number of messages to batch into one telemetry request.
- TELEMETRY_INGRESS_SEND_INTERVAL (default: 500ms) - the cadence on which batched telemetry is sent to the ingress server.
- TELEMETRY_INGRESS_USE_BATCH_SEND (default: true) - toggles sending telemetry using the batch client to the ingress server.
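For operators tuning the new batching behaviour, the four telemetry variables are typically set together. A hypothetical environment fragment using the documented defaults (values are illustrative, not recommendations):

```shell
# Hypothetical environment fragment for telemetry batch sending.
# Values shown are the documented defaults; adjust as needed.
export TELEMETRY_INGRESS_USE_BATCH_SEND=true   # use the batch client
export TELEMETRY_INGRESS_MAX_BATCH_SIZE=50     # messages per request
export TELEMETRY_INGRESS_SEND_INTERVAL=500ms   # send cadence
export TELEMETRY_INGRESS_BUFFER_SIZE=100       # buffer size before dropping new messages
```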
Bootstrap job
Added a new bootstrap job type. This job removes the need for every job to implement its own bootstrapping logic.
OCR2 jobs with isBootstrapPeer=true are automatically migrated to the new format.
The spec parameters are similar to a basic OCR2 job; an example:
type = "bootstrap"
name = "bootstrap"
relay = "evm"
schemaVersion = 1
contractID = "0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B"
[relayConfig]
chainID = 4
Removed
- The deleteuser CLI command.
Changed
- EVM_DISABLED has been deprecated and replaced by EVM_ENABLED for consistency with other feature flags.
- ETH_DISABLED has been deprecated and replaced by EVM_RPC_ENABLED for consistency, and because this was confusingly named. In most cases you want to set EVM_ENABLED=false and not EVM_RPC_ENABLED=false.
- Log colorization is now disabled by default because it causes issues when piped to text files. To re-enable log colorization, set LOG_COLOR=true.
Polygon/matic defaults changed
Due to increasingly hostile network conditions on Polygon we have had to increase a number of default limits. This is to work around numerous and very deep re-orgs, high mempool pressure and a failure by the network to propagate transactions properly. These new limits are likely to increase load on both your Chainlink node and database, so please be sure to monitor CPU and memory usage on both and make sure they are adequately specced to handle the additional load.
v1.1.1
Added
- BLOCK_HISTORY_ESTIMATOR_EIP1559_FEE_CAP_BUFFER_BLOCKS - if EIP1559 mode is enabled, this optional env var controls the buffer blocks to add to the current base fee when sending a transaction. By default, the gas bumping threshold + 1 block is used. It is not recommended to change this unless you know what you are doing.
- EVM_GAS_FEE_CAP_DEFAULT - if EIP1559 mode is enabled and the FixedPrice gas estimator is used, this env var controls the fixed initial fee cap.
Fixed
Fixed issues with EIP-1559 related to gas bumping. Because go-ethereum's implementation introduces additional restrictions on top of the EIP-1559 spec, we must bump the FeeCap by at least 10% each time in order for the gas bump to be accepted.
The new EIP-1559 implementation works as follows:
If you are using FixedPriceEstimator:
- With gas bumping disabled, it will submit all transactions with feecap=ETH_MAX_GAS_PRICE_WEI and tipcap=EVM_GAS_TIP_CAP_DEFAULT.
- With gas bumping enabled, it will submit all transactions initially with feecap=EVM_GAS_FEE_CAP_DEFAULT and tipcap=EVM_GAS_TIP_CAP_DEFAULT.
If you are using BlockHistoryEstimator (default for most chains):
- With gas bumping disabled, it will submit all transactions with feecap=ETH_MAX_GAS_PRICE_WEI and tipcap=<calculated using past blocks>.
- With gas bumping enabled (default for most chains), it will submit all transactions initially with feecap=current block base fee * (1.125 ^ N), where N is configurable by setting BLOCK_HISTORY_ESTIMATOR_EIP1559_FEE_CAP_BUFFER_BLOCKS but defaults to gas bump threshold + 1, and tipcap=<calculated using past blocks>.
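As a concrete illustration of the initial feecap formula (base fee * 1.125^N), here is a sketch using integer wei arithmetic to avoid floating point. The base fee and N are made-up values for the example, not defaults for any chain:

```shell
#!/bin/sh
# Sketch: feecap = current base fee * (1.125 ^ N), applied iteratively.
# All inputs are illustrative assumptions, not real defaults.
base_fee=100000000000   # assume the current base fee is 100 gwei
n=4                     # assume the buffer-blocks setting resolves to 4
feecap=$base_fee
i=0
while [ "$i" -lt "$n" ]; do
  feecap=$(( feecap * 1125 / 1000 ))  # multiply by 1.125 per buffer block
  i=$(( i + 1 ))
done
echo "$feecap"  # ~160 gwei in wei
```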
Bumping works as follows:
- Increase tipcap by max(tipcap * (1 + ETH_GAS_BUMP_PERCENT), tipcap + ETH_GAS_BUMP_WEI)
- Increase feecap by max(feecap * (1 + ETH_GAS_BUMP_PERCENT), feecap + ETH_GAS_BUMP_WEI)
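To make the bump rule concrete, the following sketch applies it to a tipcap, with made-up values for the current tipcap, ETH_GAS_BUMP_PERCENT and ETH_GAS_BUMP_WEI (these are not chain defaults):

```shell
#!/bin/sh
# Sketch of the tipcap bump: max(percentage bump, fixed-wei bump).
# All values in integer wei; the inputs are illustrative assumptions.
tipcap=3000000000        # assume the current tipcap is 3 gwei
bump_percent=20          # assume ETH_GAS_BUMP_PERCENT=20
bump_wei=5000000000      # assume ETH_GAS_BUMP_WEI=5 gwei

by_percent=$(( tipcap * (100 + bump_percent) / 100 ))  # 3.6 gwei
by_fixed=$(( tipcap + bump_wei ))                      # 8 gwei
if [ "$by_percent" -gt "$by_fixed" ]; then
  new_tipcap=$by_percent
else
  new_tipcap=$by_fixed
fi
echo "$new_tipcap"  # the fixed bump wins here: 8000000000 wei
```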
v1.1.0
[1.1.0] - 2022-01-25
Added
- Added support for Sentry error reporting. Set SENTRY_DSN at compile- or run-time to enable reporting.
- Added Prometheus counters: log_warn_count, log_error_count, log_critical_count, log_panic_count and log_fatal_count, representing the corresponding number of warning/error/critical/panic/fatal messages in the log.
- The new Prometheus metric tx_manager_tx_attempt_count is a Gauge representing the total number of transaction attempts awaiting confirmation for this node.
- The new Prometheus metric version displays the node software version (tag) as well as the corresponding commit hash.
- CLI command keys eth list is updated to display key-specific max gas prices.
- CLI command keys eth create now supports an optional maxGasPriceGWei parameter.
- CLI command keys eth update is added to update key-specific parameters like maxGasPriceGWei.
- Added partial support for the Moonriver chain.
Two new log levels have been added:
- [crit]: Critical level logs are more severe than [error] and require quick action from the node operator.
- [trace]: Trace level logs contain extra [debug] information for development, and must be compiled in via -tags trace.
[Beta] Multichain support added
As a beta feature, Chainlink now supports connecting to multiple different EVM chains simultaneously.
This means that one node can run jobs on Goerli, Kovan, BSC and Mainnet (for example). Note that you can still have as many eth keys as you like, but each eth key is pegged to one chain only.
Extensive efforts have been made to make migration for existing nops as seamless as possible. Generally speaking, you should not have to make any changes when upgrading your existing node to this version. All your jobs will continue to run as before.
The overall summary of changes is such:
Chains/Ethereum Nodes
EVM chains are now represented as a first class object within the chainlink node. You can create/delete/list them using the CLI or API.
At least one primary node is required in order for a chain to connect. You may additionally specify zero or more send-only nodes for a chain. It is recommended to use the CLI/API or GUI to add nodes to a chain.
Creation
chainlink chains evm create -id 42 # creates an evm chain with chain ID 42 (see: https://chainlist.org/)
chainlink nodes create -chain-id 42 -name 'my-primary-kovan-full-node' -type primary -ws-url ws://node.example/ws -http-url http://node.example/rpc # http-url is optional but recommended for primaries
chainlink nodes create -chain-id 42 -name 'my-send-only-backup-kovan-node' -type sendonly -http-url http://some-public-node.example/rpc
Listing
chainlink chains evm list
chainlink nodes list
Deletion
chainlink nodes delete 'my-send-only-backup-kovan-node'
chainlink chains evm delete 42
Legacy eth ENV vars
The old way of specifying chains using environment variables is still supported but discouraged. It works as follows:
If you specify ETH_URL, then the values of ETH_URL, ETH_CHAIN_ID, ETH_HTTP_URL and ETH_SECONDARY_URLS will be used to create/update chains and nodes representing these values in the database. If an existing chain/node is found it will be overwritten. This behavior is used mainly to ease the process of upgrading; on subsequent runs (once your old settings have been written to the database) it is recommended to unset these ENV vars and use the API commands exclusively to administer chains and nodes.
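For reference, a legacy-style configuration that would be auto-migrated into one chain with a primary and a send-only node might look like this (the URLs and chain ID are hypothetical):

```shell
# Hypothetical legacy configuration; on first run these values are written
# to the database as a chain plus its nodes, after which they can be unset.
export ETH_CHAIN_ID=42
export ETH_URL='ws://node.example/ws'                  # primary websocket
export ETH_HTTP_URL='http://node.example/rpc'          # primary http
export ETH_SECONDARY_URLS='http://backup.example/rpc'  # send-only node
```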
Jobs/tasks
By default, all jobs/tasks will continue to use the default chain (specified by ETH_CHAIN_ID). However, the following jobs now allow an additional evmChainID key in their TOML:
- VRF
- DirectRequest
- Keeper
- OCR
- Fluxmonitor
You can pin individual jobs to a particular chain by specifying the evmChainID explicitly. Here is an example job to demonstrate:
type = "keeper"
evmChainID = 3
schemaVersion = 1
name = "example keeper spec"
contractAddress = "0x9E40733cC9df84636505f4e6Db28DCa0dC5D1bba"
externalJobID = "0EEC7E1D-D0D2-476C-A1A8-72DFB6633F49"
fromAddress = "0xa8037A20989AFcBC51798de9762b351D63ff462e"
The above keeper job will always run on chain ID 3 (Ropsten) regardless of the ETH_CHAIN_ID setting. If no chain matching this ID has been added to the chainlink node, the job cannot be created (you must create the chain first).
In addition, you can also specify evmChainID on certain pipeline tasks. This allows for cross-chain requests, for example:
type = "directrequest"
schemaVersion = 1
evmChainID = 42
name = "example cross chain spec"
contractAddress = "0x613a38AC1659769640aaE063C651F48E0250454C"
externalJobID = "0EEC7E1D-D0D2-476C-A1A8-72DFB6633F90"
observationSource = """
decode_log [type=ethabidecodelog ... ]
...
submit [type=ethtx to="0x613a38AC1659769640aaE063C651F48E0250454C" data="$(encode_tx)" minConfirmations="2" evmChainID="3"]
decode_log-> ... ->submit;
"""
In the example above (which excludes irrelevant pipeline steps for brevity) a log can be read from the chain with ID 42 (Kovan) and a transaction emitted on chain with ID 3 (Ropsten).
Tasks that support the evmChainID parameter are as follows:
- ethcall
- estimategaslimit
- ethtx
Defaults
If the job- or task-specific evmChainID is not given, the job/task will simply use the default as specified by the ETH_CHAIN_ID env variable.
Generally speaking, the default config values for each chain are good enough. But in some cases it is necessary to be able to override the defaults on a per-chain basis.
This used to be done via environment variables, e.g. MINIMUM_CONTRACT_PAYMENT_LINK_JUELS.
These still work, but if set they will override that value for all chains. This may not always be what you want. Consider a node that runs both Matic and Mainnet. You may want to set a higher value for MINIMUM_CONTRACT_PAYMENT on Mainnet, due to the more expensive gas costs. However, setting MINIMUM_CONTRACT_PAYMENT_LINK_JUELS using env variables will set that value for all chains, including Matic.
To help you work around this, Chainlink now supports setting per-chain configuration options.
Examples
To set initial configuration when creating a chain, pass in the full json string as an optional parameter at the end:
chainlink evm chains create -id 42 '{"BlockHistoryEstimatorBlockDelay": "100"}'
To set configuration on an existing chain, specify key-value pairs like so:
chainlink evm chains configure -id 42 BlockHistoryEstimatorBlockDelay=100 GasEstimatorMode=FixedPrice
The full list of chain-specific configuration options can be found by looking at the ChainCfg struct in core/chains/evm/types/types.go.
Async support in external adapters
External Adapters making async callbacks can now error job runs. This required a slight change to the format; the correct way to call back from an asynchronous EA is to use the following JSON:
SUCCESS CASE:
{
"value": < any valid json object >
}
ERROR CASE:
{
"error": "some error string"
}
This only applies to EAs using the X-Chainlink-Pending header to signal that the result will be POSTed back to the Chainlink node sometime 'later'. Regular synchronous calls to EAs work just as they always have.
(NOTE: Official documentation for EAs needs to be updated)
New optional VRF v2 field: requestedConfsDelay
Added a new optional field for VRF v2 jobs called requestedConfsDelay, which configures a number of blocks to wait in addition to the request-specified requestConfirmations before servicing the randomness request; i.e., the Chainlink node will wait max(nodeMinConfs, requestConfirmations + requestedConfsDelay) blocks before servicing the request.
It can be used in the following way:
type = "vrf"
externalJobID = "123e4567-e89b-12d3-a456-426655440001"
schemaVersion = 1
name = "vrf-v2-secondary"
coordinatorAddress = "0xABA5eDc1a551E55b1A570c0e1f1055e5BE11eca7"
requestedConfsDelay = 10
# ... rest of job spec ...
Use of this field requires a database migration.
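A worked example of the waiting rule above, with made-up values: assume the node's own minimum is 3 confirmations, the request asked for 5, and requestedConfsDelay is 10 as in the spec above.

```shell
#!/bin/sh
# Sketch: blocks waited = max(nodeMinConfs, requestConfirmations + requestedConfsDelay).
# The first two inputs are illustrative assumptions.
node_min_confs=3            # assumed node minimum
request_confirmations=5     # assumed request-specified confirmations
requested_confs_delay=10    # from the example job spec
wait_blocks=$(( request_confirmations + requested_confs_delay ))
if [ "$node_min_confs" -gt "$wait_blocks" ]; then
  wait_blocks=$node_min_confs
fi
echo "$wait_blocks"  # blocks to wait before servicing the request
```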
New locking mode: 'lease'
Chainlink now supports a new environment variable DATABASE_LOCKING_MODE. It can be set to one of the following values:
- dual (the default - uses both locking types for backwards and forwards compatibility)
- advisorylock (advisory lock only)
- lease (lease lock only)
- none (no locking at all - useful for advanced deployment environments when you can be sure that only one instance of Chainlink will ever be running)
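For example, a hypothetical deployment fragment selecting the lease lock only:

```shell
# Hypothetical deployment fragment; pick exactly one locking mode.
export DATABASE_LOCKING_MODE=lease   # lease lock only
# export DATABASE_LOCKING_MODE=none  # only if orchestration guarantees a single instance
```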
The database lock ensures that only one instance of Chainlink can run on the database at a time. Running multiple instances of Chainlink on a single database at the same time would likely lead to strange errors, possibly even data integrity failures, and should not be allowed.
Ideally, node operators would be using a container orchestration system (e.g. Kubernetes) that ensures that only one instance of Chainlink ever runs on a particular postgres database.
However, we are aware that many node operators do not have the technical capacity to do this. So a common use case is to run multiple Chainlink instances in failover mode (as recommended by our official documentation, although this will be changing in future). The first instance will take some kind of lock on the database, and subsequent instances will wait, trying to take the lock, in case the first instance disappears or dies.
Traditionally Chainlink has used an advisory lock to manage this. However, advisory locks come with several prob...
v1.0.1
v1.0.0
Added
- chainlink node db status will now display a table of applied and pending migrations.
- Added support for OKEx/ExChain.
Changed
The legacy job pipeline (JSON specs) is no longer supported
This version will refuse to migrate the database if job specs are still present. You must manually delete or migrate all V1 job specs before upgrading.
For more information on migrating, see the docs.
This release will DROP legacy job tables so please take a backup before upgrading.
New env vars
- LAYER_2_TYPE - for layer 2 chains only. Configure the type of chain, either Arbitrum or Optimism.
Misc
- Head sampling can now be optionally disabled by setting ETH_HEAD_TRACKER_SAMPLING_INTERVAL = "0s" - this will result in every new head being delivered to running jobs, regardless of the head frequency from the chain.
- When creating new FluxMonitor jobs, the validation logic now checks that only one of the drumbeat ticker or idle timer is enabled.
- Added a new Prometheus metric: uptime_seconds, which measures the number of seconds the node has been running. It can be helpful in detecting potential crashes.
Fixed
Fixed a regression whereby the BlockHistoryEstimator would use a bumped value based on the old gas price even if the new current price was larger than the bumped value.
v0.10.15
v0.10.14
Added
- FMv2 specs now contain a DrumbeatRandomDelay parameter that can be used to introduce variation between rounds of submissions from different oracles, if the drumbeat ticker is enabled.
- OCR Hibernation
Requesters/MinContractPaymentLinkJuels
V2 direct request specs now support two additional keys:
- "requesters", which allows you to whitelist requesters
- "minContractPaymentLinkJuels", which allows you to specify a job-specific minimum contract payment.
For example:
type = "directrequest"
schemaVersion = 1
requesters = ["0xaaaa1F8ee20f5565510B84f9353F1E333E753B7a", "0xbbbb70F0e81C6F3430dfdC9fa02fB22BdD818C4e"] # optional
minContractPaymentLinkJuels = "100000000000000" # optional
name = "example eth request event spec with requesters"
contractAddress = "..."
externalJobID = "..."
observationSource = """
...
"""
v0.10.13
v0.10.12
v0.10.11
A new configuration variable, BLOCK_BACKFILL_SKIP, can be optionally set to "true" in order to strongly limit the depth of the log backfill.
This is useful if the node has been offline for a longer time and, after startup, should not be concerned with older events from the chain.
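For example, a node restarting after prolonged downtime might set (hypothetical fragment):

```shell
# Hypothetical: skip the deep log backfill after a long offline period.
export BLOCK_BACKFILL_SKIP=true
```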
- Fixes the logging configuration form not displaying the current values
- Updates the design of the configuration cards to be easier on the eyes
- View Coordinator Service Authentication keys in the Operator UI. This is hidden
behind a feature flag until usage is enabled.
Changed
The legacy job pipeline (JSON specs) has been officially deprecated and support for these jobs will be dropped in an upcoming release.
Any node operators still running jobs with JSON specs should migrate their jobs to TOML format instead.
The format for V2 Webhook job specs has changed. They now allow specifying 0 or more external initiators. Example below:
type = "webhook"
schemaVersion = 1
externalInitiators = [
{ name = "foo-ei", spec = '{"foo": 42}' },
{ name = "bar-ei", spec = '{"bar": 42}' }
]
observationSource = """
ds [type=http method=GET url="https://chain.link/ETH-USD"];
ds_parse [type=jsonparse path="data,price"];
ds_multiply [type=multiply times=100];
ds -> ds_parse -> ds_multiply;
"""
These external initiators will be notified with the given spec after the job is created, and also at deletion time.
Only the External Initiators listed in the TOML spec may trigger a run for that job. Logged-in users can always trigger a run for any job.
Migrating Jobs
- OCR: All OCR jobs are already using the v2 pipeline by default - no need to do anything here.
- Flux Monitor v1: We have created a tool to help you automigrate flux monitor specs in JSON format to the new TOML format. You can migrate a job like this:
chainlink jobs migrate <job id>
This can be automated by using the API like so:
POST http://yournode.example/v2/migrate/<job id>
- VRF v1: Automigration is not supported for VRF jobs. They must be manually converted into v2 format.
- Ethlog/Runlog/Cron/web: All other job types must also be manually converted into v2 format.
Technical details
Why are we doing this?
To give some background, the legacy job pipeline has been around since before Chainlink went to mainnet and is getting quite long in the tooth. The code is brittle and difficult to understand and maintain. For a while now we have been developing a v2 job pipeline in parallel which uses the TOML format. The new job pipeline is simpler, more performant and more powerful. Every job that can be represented in the legacy pipeline should be able to be represented in the v2 pipeline - if it can't be, that's a bug, so please let us know ASAP.
The v2 pipeline has now been extensively tested in production and proved itself reliable. So, we made the decision to drop V1 support entirely in favour of focusing developer effort on new features like native multichain support, EIP1559-compatible fees, further gas saving measures and support for more blockchains. By dropping support for the old pipeline, we can deliver these features faster and better support our community.