
[Feature] Generic embedding #12

Merged: 3 commits, Jun 28, 2024
83 changes: 83 additions & 0 deletions qadence2_platforms/embedding.py
@@ -0,0 +1,83 @@
from __future__ import annotations

from importlib import import_module
from logging import getLogger
from typing import Callable

from numpy.typing import ArrayLike

logger = getLogger(__name__)

ARRAYLIKE_FN_MAP = {
    "torch": ("torch", "tensor"),
    "jax": ("jax.numpy", "array"),
    "numpy": ("numpy", "array"),
}


def ConcretizedCallable(
    call_name: str,
    abstract_args: list[str | float | int],
    instruction_mapping: dict[str, Callable] = dict(),
    engine_name: str = "torch",
) -> Callable[[dict, dict], ArrayLike]:
    """Convert a generic abstract function call and a list of symbolic or constant
    parameters into a concretized Callable in a particular engine, which can then
    be evaluated using a vparams dict and an inputs dict.

    Arguments:
        call_name: The name of the function.
        abstract_args: A list of strings (in the case of parameters) and numeric
            constants denoting the arguments for `call_name`.
        instruction_mapping: A dict mapping from an abstract call_name to its name in an engine.
        engine_name: The engine to use to create the callable.

    Example:
    ```
    In [11]: call = ConcretizedCallable('sin', ['x'], engine_name='numpy')
    In [12]: call({'x': 0.5})
    Out[12]: 0.479425538604203

    In [13]: call = ConcretizedCallable('sin', ['x'], engine_name='torch')
    In [14]: call({'x': torch.rand(1)})
    Out[14]: tensor([0.5531])

    In [15]: call = ConcretizedCallable('sin', ['x'], engine_name='jax')
    In [16]: call({'x': 0.5})
    Out[16]: Array(0.47942555, dtype=float32, weak_type=True)
    ```
    """
    engine_call = None
    engine = None
    try:
        # Avoid shadowing `engine_name` so later error messages report the right name.
        module_name, fn_name = ARRAYLIKE_FN_MAP[engine_name]
        engine = import_module(module_name)
        arraylike_fn = getattr(engine, fn_name)
    except (ModuleNotFoundError, ImportError) as e:
        logger.error(f"Unable to import {engine_name} due to {e}.")

    try:
        engine_call = getattr(engine, call_name)
    except AttributeError:
        # `getattr` raises AttributeError (not ImportError) when the engine
        # module lacks `call_name`; fall through to the instruction_mapping.
        pass
    if engine_call is None:
        try:
            engine_call = instruction_mapping[call_name]
        except KeyError as e:
            logger.error(
                f"Requested function {call_name} can not be imported from {engine_name} and is "
                f"not in instruction_mapping {instruction_mapping} due to {e}."
            )

    def evaluate(params: dict = dict(), inputs: dict = dict()) -> ArrayLike:
Contributor:
What is the story about params and inputs? Could a single dictionary, or a keyword
list like def evaluate(**params_and_inputs) -> ArrayLike, create an issue?

Contributor Author:
Yeah, I agree. Essentially for torch it doesn't matter; it's just the legacy way of
how we handled trainable and non-trainable params before.

        arraylike_args = []
        for symbol_or_numeric in abstract_args:
            if isinstance(symbol_or_numeric, (float, int)):
                arraylike_args.append(arraylike_fn(symbol_or_numeric))
            elif isinstance(symbol_or_numeric, str):
                arraylike_args.append({**params, **inputs}[symbol_or_numeric])
        return engine_call(*arraylike_args)  # type: ignore[misc]

    return evaluate
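For intuition, the lookup-then-closure pattern the diff implements can be sketched standalone. This is a simplified illustration, not the PR's code: `make_callable` is a hypothetical name, and the stdlib `math` module stands in for an engine so the sketch runs without torch/jax/numpy installed.

```python
from importlib import import_module
from typing import Callable


def make_callable(call_name: str, abstract_args: list, engine_name: str = "math") -> Callable:
    # Resolve the engine module and look up the requested function on it,
    # mirroring the import_module/getattr steps in the diff above.
    engine = import_module(engine_name)
    fn = getattr(engine, call_name)

    def evaluate(values: dict) -> float:
        # Strings are symbolic parameters resolved from `values`;
        # numeric constants are passed through unchanged.
        args = [values[a] if isinstance(a, str) else a for a in abstract_args]
        return fn(*args)

    return evaluate


call = make_callable("sin", ["x"])
print(call({"x": 0.5}))  # math.sin(0.5) = 0.479425538604203
```

The same closure is returned once at construction time and then evaluated repeatedly with different parameter dicts, which is what makes the abstract-to-concrete split cheap at call time.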