Add support for Python 3.12 #1277

Merged · 2 commits · Mar 9, 2024
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -3,7 +3,7 @@
## [v3.1.3.dev1]

### Added
-
- [Lithops] Added support for Python 3.12

### Changed
- [Standalone] Use redis in the master VM to store all the relevant data about jobs and workers
51 changes: 43 additions & 8 deletions docs/source/worker_granularity.rst
@@ -21,14 +21,6 @@ This allows you to leverage the resource flexibility provided by CaaS without at
Understanding these distinctions between FaaS and CaaS platforms is crucial for optimizing the performance and efficient
resource utilization of your Lithops-based applications.

In addition to supporting FaaS and CaaS platforms, Lithops also extends its compatibility to Virtual Machine (VM) backends,
such as EC2. Similar to CaaS environments, VMs offer a high degree of resource customization. When utilizing VMs with Lithops,
you gain the ability to tailor your VM instance with the appropriate resources, including CPU cores. In scenarios where
parallelism is crucial, it may be more efficient to configure a VM with a higher core count, such as 8 CPUs, rather than
attempting to manage and coordinate eight separate VM instances with single cores each. This approach simplifies resource
management and optimizes the performance of your Lithops-based applications running on VM backends. As with CaaS,
understanding the flexibility VMs provide is essential for effectively utilizing your compute resources.

How to customize worker granularity?
------------------------------------

@@ -82,3 +74,46 @@ To customize the ``chunksize`` parameter, you have to edit your ``map()`` or ``map_async()`` calls
fexec = lithops.FunctionExecutor(worker_processes=4)
fexec.map(my_map_function, range(200), chunksize=8)
print(fexec.get_result())


Worker granularity in the standalone mode using VMs
---------------------------------------------------

In addition to supporting FaaS and CaaS platforms, Lithops also extends its compatibility to Virtual Machine (VM) backends,
such as EC2. Similar to CaaS environments, VMs offer a high degree of resource customization. When utilizing VMs with Lithops,
you gain the ability to tailor your VM instance with the appropriate resources, including CPU cores. In scenarios where
parallelism is crucial, it may be more efficient to configure a VM with a higher core count, such as 16 CPUs, rather than
attempting to manage and coordinate sixteen separate VM instances with a single core each. This approach simplifies resource
management and optimizes the performance of your Lithops-based applications running on VM backends. As with CaaS,
understanding the flexibility VMs provide is essential for effectively utilizing your compute resources.

Unlike FaaS and CaaS platforms, when deploying Lithops on Virtual Machine backends, such as EC2, a master-worker architecture
is adopted. In this paradigm, the master node holds a work queue containing tasks for a specific job, and workers pick up and
process tasks one by one. In this sense, the ``chunksize`` parameter, which determines the number of functions allocated
to each worker for parallel processing, is not applicable in this context. Consequently, the worker granularity is inherently
determined by the number of worker processes in the VM setup. Adjusting the number of VM instances or the configuration of
each VM, such as the CPU core count, becomes crucial for optimizing performance and resource utilization in this master-worker
approach.
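The master-worker flow described above can be sketched in a few lines of plain Python. This is an illustrative model only, not Lithops internals: a shared queue stands in for the master's work queue, and two threads stand in for the worker processes of a 2-CPU VM, each pulling tasks one at a time until the queue is drained.

```python
import queue
import threading

# Illustrative sketch (not the actual Lithops implementation): the master
# holds a work queue for a job, and each worker pulls tasks one by one,
# so parallelism is bounded by the number of workers, not by a chunksize.
task_queue = queue.Queue()
for task_id in range(8):          # 8 tasks for this job
    task_queue.put(task_id)

results = []
results_lock = threading.Lock()

def worker_loop():
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            return                 # queue drained: this worker is done
        with results_lock:
            results.append(task * 2)   # stand-in for running the user function
        task_queue.task_done()

# Two "worker processes", mimicking a 2-CPU VM
workers = [threading.Thread(target=worker_loop) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(sorted(results))
```

Note how each task is consumed exactly once regardless of which worker picks it up, which is why adding workers (or CPU cores) directly increases throughput in this model.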

In this scenario, specifying either the ``worker_instance_type`` or ``worker_processes`` config parameter is enough to achieve
the desired parallelism inside worker VMs. By default, Lithops determines the total number of worker processes based on the
number of CPUs in the specified instance type. For example, an AWS EC2 instance of type ``t2.medium``, with 2 CPUs, would set
``worker_processes`` to 2. Additionally, users have the flexibility to manually adjust parallelism by setting a different
value for ``worker_processes``. Depending on the use case, it may be convenient to set more ``worker_processes`` than CPUs,
or fewer ``worker_processes`` than CPUs. For example, we can use a ``t2.medium`` instance type, which has 2 CPUs, but
set ``worker_processes`` to 4:

.. code:: python

import lithops


def my_map_function(id, x):
print(f"I'm activation number {id}")
return x + 7


if __name__ == "__main__":
fexec = lithops.FunctionExecutor(worker_instance_type='t2.medium', worker_processes=4)
fexec.map(my_map_function, range(50))
print(fexec.get_result())
3 changes: 2 additions & 1 deletion lithops/serverless/backends/aws_lambda/config.py
@@ -37,7 +37,8 @@
'3.8': 'python3.8',
'3.9': 'python3.9',
'3.10': 'python3.10',
'3.11': 'python3.11'
'3.11': 'python3.11',
'3.12': 'python3.12'
}

USER_RUNTIME_PREFIX = 'lithops.user_runtimes'
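The dictionaries this PR extends map a Python ``major.minor`` version string to a backend runtime name. A lookup along these lines shows why a missing ``'3.12'`` key would make the backend reject a 3.12 interpreter (the dict literal mirrors the one in the diff; the helper name is ours for illustration, not a Lithops API):

```python
import sys

# Mirrors the version-to-runtime mapping shown in the aws_lambda diff above
AVAILABLE_PY_RUNTIMES = {
    '3.8': 'python3.8',
    '3.9': 'python3.9',
    '3.10': 'python3.10',
    '3.11': 'python3.11',
    '3.12': 'python3.12',
}

def default_runtime_name(version_info=sys.version_info):
    """Resolve the runtime name for a given interpreter version tuple."""
    key = f'{version_info[0]}.{version_info[1]}'
    if key not in AVAILABLE_PY_RUNTIMES:
        raise ValueError(f'Unsupported Python version: {key}')
    return AVAILABLE_PY_RUNTIMES[key]

print(default_runtime_name((3, 12, 1)))  # → python3.12
```

Before this PR, a 3.12 interpreter would fall into the unsupported branch; after it, the lookup resolves to the new runtime name.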
3 changes: 2 additions & 1 deletion lithops/serverless/backends/gcp_functions/config.py
@@ -37,7 +37,8 @@
'3.8': 'python38',
'3.9': 'python39',
'3.10': 'python310',
'3.11': 'python311'
'3.11': 'python311',
'3.12': 'python312'
}

USER_RUNTIMES_PREFIX = 'lithops.user_runtimes'
3 changes: 2 additions & 1 deletion lithops/serverless/backends/ibm_cf/config.py
@@ -23,7 +23,8 @@
'3.8': 'docker.io/lithopscloud/ibmcf-python-v38',
'3.9': 'docker.io/lithopscloud/ibmcf-python-v39',
'3.10': 'docker.io/lithopscloud/ibmcf-python-v310',
'3.11': 'docker.io/lithopscloud/ibmcf-python-v311'
'3.11': 'docker.io/lithopscloud/ibmcf-python-v311',
'3.12': 'docker.io/lithopscloud/ibmcf-python-v312'
}

DEFAULT_CONFIG_KEYS = {
3 changes: 2 additions & 1 deletion lithops/serverless/backends/openwhisk/config.py
@@ -22,7 +22,8 @@
'3.8': 'docker.io/lithopscloud/openwhisk-python-v38',
'3.9': 'docker.io/lithopscloud/openwhisk-python-v39',
'3.10': 'docker.io/lithopscloud/openwhisk-python-v310',
'3.11': 'docker.io/lithopscloud/openwhisk-python-v311'
'3.11': 'docker.io/lithopscloud/openwhisk-python-v311',
'3.12': 'docker.io/lithopscloud/openwhisk-python-v312'
}

DEFAULT_CONFIG_KEYS = {
1 change: 0 additions & 1 deletion lithops/standalone/master.py
@@ -168,7 +168,6 @@ def get_workers():

for worker in workers:
worker_data = redis_client.hgetall(worker)
logger.debug(worker_data)
if worker_data['instance_type'] == worker_instance_type \
and worker_data['runtime'] == runtime_name \
and int(worker_data['worker_processes']) == int(worker_processes):
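The hunk above drops a debug log but keeps the matching logic: a worker is reused only if its instance type, runtime, and worker-process count all match the job's requirements. A self-contained sketch of that predicate, with plain dicts standing in for the hashes ``redis_client.hgetall()`` would return (the function name and sample data are ours for illustration):

```python
# Plain dicts stand in for the per-worker redis hashes used by master.py
workers = {
    'worker-1': {'instance_type': 't2.medium', 'runtime': 'python3.12',
                 'worker_processes': '2'},
    'worker-2': {'instance_type': 't2.large', 'runtime': 'python3.12',
                 'worker_processes': '4'},
}

def matching_workers(workers, worker_instance_type, runtime_name, worker_processes):
    """Return the workers whose stored metadata matches the requested setup."""
    matches = []
    for name, worker_data in workers.items():
        # redis stores field values as strings, hence the int() coercion
        if worker_data['instance_type'] == worker_instance_type \
                and worker_data['runtime'] == runtime_name \
                and int(worker_data['worker_processes']) == int(worker_processes):
            matches.append(name)
    return matches

print(matching_workers(workers, 't2.medium', 'python3.12', 2))  # → ['worker-1']
```

The ``int()`` coercion matters because redis returns hash values as strings, so a direct string comparison of ``worker_processes`` against an integer would always fail.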
1 change: 1 addition & 0 deletions runtime/openwhisk/Dockerfile.slim
@@ -6,6 +6,7 @@
#FROM python:3.9-slim-buster
#FROM python:3.10-slim-buster
FROM python:3.11-slim-buster
#FROM python:3.12-slim-bookworm

ENV FLASK_PROXY_PORT 8080

1 change: 1 addition & 0 deletions setup.py
@@ -97,6 +97,7 @@
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: 3.12',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Scientific/Engineering',
'Topic :: System :: Distributed Computing',