By default, Lithops runs on localhost if no configuration is provided. To run workloads on the cloud, you must configure both a compute and a storage backend; failing to configure them properly will prevent Lithops from submitting workloads. Lithops configuration can be provided either in a configuration file or at runtime via a Python dictionary.
To configure Lithops through a configuration file you have multiple options:

- Create a new file called `config` in the `~/.lithops` folder (i.e. `~/.lithops/config`).
- Create a new file called `.lithops_config` in the root directory of your project, from where you will execute your Lithops scripts.
- Create a new file called `config` in the `/etc/lithops/` folder (i.e. `/etc/lithops/config`). This is useful for sharing the config file on multi-user machines.
- Create the config file in any other location and point the `LITHOPS_CONFIG_FILE` environment variable to it: `LITHOPS_CONFIG_FILE=<CONFIG_FILE_LOCATION>`
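As a sketch, a configuration file mirroring the dictionary example further down this page might look like the following. The IBM Code Engine / IBM Cloud Object Storage values are placeholders, not working credentials:

```yaml
# Example ~/.lithops/config (illustrative placeholder values)
lithops:
    backend: code_engine
    storage: ibm_cos

ibm:
    region: REGION
    iam_api_key: IAM_API_KEY
    resource_group_id: RESOURCE_GROUP_ID

ibm_cos:
    storage_bucket: STORAGE_BUCKET
```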
An alternative is to provide the configuration as a Python dictionary. This option allows you to pass all the configuration details as part of the Lithops invocation at runtime. The full list of configuration sections and keys is shown in the table below.
Choose your compute and storage backends from the following categories:

Compute backends:
- Serverless (FaaS) backends
- Serverless (CaaS) backends
- Standalone backends

Storage backends:
- Object storage
- In-memory storage
Test if Lithops is working properly:

import lithops

def hello_world(name):
    return f'Hello {name}!'

if __name__ == '__main__':
    fexec = lithops.FunctionExecutor()
    fexec.call_async(hello_world, 'World')
    print(fexec.get_result())
Example of providing configuration keys for IBM Code Engine and IBM Cloud Object Storage:

import lithops

config = {
    'lithops': {
        'backend': 'code_engine',
        'storage': 'ibm_cos'
    },
    'ibm': {
        'region': 'REGION',
        'iam_api_key': 'IAM_API_KEY',
        'resource_group_id': 'RESOURCE_GROUP_ID'
    },
    'ibm_cos': {
        'storage_bucket': 'STORAGE_BUCKET'
    }
}

def hello_world(name):
    return f'Hello {name}!'

if __name__ == '__main__':
    fexec = lithops.FunctionExecutor(config=config)
    fexec.call_async(hello_world, 'World')
    print(fexec.get_result())
| Group | Key | Default | Mandatory | Additional info |
|---|---|---|---|---|
| lithops | backend | aws_lambda | no | Compute backend implementation. localhost is the default if no config or config file is provided |
| lithops | storage | aws_s3 | no | Storage backend implementation. localhost is the default if no config or config file is provided |
| lithops | data_cleaner | True | no | If set to True, the cleaner will automatically delete all the temporary data that was written into storage_bucket/lithops.jobs |
| lithops | monitoring | storage | no | Monitoring system implementation. One of: storage or rabbitmq |
| lithops | monitoring_interval | 2 | no | Monitoring check interval in seconds when storage monitoring is used |
| lithops | data_limit | 4 | no | Max (iter)data size (in MB). Set to False for unlimited size |
| lithops | execution_timeout | 1800 | no | Functions will be automatically killed if they exceed this execution time (in seconds). Alternatively, it can be set per call in call_async(), map() or map_reduce() using the timeout parameter |
| lithops | include_modules | [] | no | Explicitly pickle these dependencies. With the default empty list, all required dependencies are pickled. If explicitly set to None, no dependencies are pickled |
| lithops | exclude_modules | [] | no | Explicitly keep these modules out of the pickled dependencies. Not taken into account if you set include_modules |
| lithops | log_level | INFO | no | Logging level. One of: WARNING, INFO, DEBUG, ERROR, CRITICAL. Set to None to disable logging |
| lithops | log_format | "%(asctime)s [%(levelname)s] %(name)s -- %(message)s" | no | Logging format string |
| lithops | log_stream | ext://sys.stderr | no | Logging stream, e.g. ext://sys.stderr, ext://sys.stdout |
| lithops | log_filename | | no | Path to a file. log_filename takes precedence over log_stream |
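As a sketch, several of the keys from the table above could be combined into a single configuration dictionary. The particular values chosen here (DEBUG logging, a 30-minute timeout) are illustrative examples, not recommendations:

```python
# Illustrative configuration dictionary built from keys in the table above.
config = {
    'lithops': {
        'backend': 'localhost',     # compute backend ('localhost' when no config is given)
        'storage': 'localhost',     # storage backend ('localhost' when no config is given)
        'data_cleaner': True,       # auto-delete temporary data in storage_bucket/lithops.jobs
        'monitoring': 'storage',    # monitoring system: 'storage' or 'rabbitmq'
        'monitoring_interval': 2,   # seconds between storage monitoring checks
        'execution_timeout': 1800,  # kill functions that run longer than 30 minutes
        'log_level': 'DEBUG',       # WARNING, INFO, DEBUG, ERROR or CRITICAL
    }
}

# The dictionary would then be passed at runtime, e.g.:
# fexec = lithops.FunctionExecutor(config=config)
```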