Commit
working demo
watakandai committed Jul 25, 2024
1 parent d516c29 commit a46c9ef
Showing 17 changed files with 192 additions and 341 deletions.
44 changes: 23 additions & 21 deletions Dockerfile
@@ -1,29 +1,31 @@
-ARG PYTHON_VERSION=3.11
+ARG PYTHON_VERSION=3.8
 FROM python:${PYTHON_VERSION}


 RUN apt-get update && \
     apt-get install -y software-properties-common &&\
     apt update && \
     apt install -y graphviz
     # add-apt-repository universe && \

 # https://python-poetry.org/docs#ci-recommendations
-ENV POETRY_VERSION=1.7.0
-# ENV POETRY_HOME=/opt/poetry
-ENV POETRY_VENV=/opt/poetry-venv
-
-# Tell Poetry where to place its cache and virtual environment
-ENV POETRY_CACHE_DIR=/opt/.cache
-
-# Creating a virtual environment just for poetry and install it with pip
-RUN python3 -m venv $POETRY_VENV \
-    && $POETRY_VENV/bin/pip install -U pip setuptools \
-    && $POETRY_VENV/bin/pip install poetry==${POETRY_VERSION}
-
-# Add Poetry to PATH
-ENV PATH="${PATH}:${POETRY_VENV}/bin"
-
-ENV POETRY_VIRTUALENVS_IN_PROJECT=true
+ENV POETRY_VERSION=1.7.0 \
+    # Poetry home directory
+    POETRY_HOME='/usr/local' \
+    # Add Poetry's bin folder to the PATH
+    PATH="/usr/local/bin:$PATH" \
+    # Avoids any interactions with the terminal
+    POETRY_NO_INTERACTION=1 \
+    # This avoids poetry from creating a virtualenv
+    # Instead, it directly installs the dependencies in the system's python environment
+    POETRY_VIRTUALENVS_CREATE=false

+# System deps:
+RUN curl -sSL https://install.python-poetry.org | python3 -

 # Copy the project files
 WORKDIR /home/specless
 COPY pyproject.toml poetry.lock /home/specless/

 # Project initialization and conditionally install cvxopt if on x86 architecture
 RUN poetry install --no-interaction
-# RUN poetry install --no-interaction && \
-#     if [ "$(uname -m)" = "x86_64" ]; then poetry add cvxopt; fi

 CMD ["bash"]
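For reference, an image built from this Dockerfile can be used with the standard Docker workflow. This is a sketch; the `specless-demo` tag and the script path invoked inside the container are illustrative assumptions, not part of the commit:

```shell
# Build the image from the repository root (tag name is an assumption)
docker build -t specless-demo .

# Start an interactive shell in the container (the image's default CMD)
docker run --rm -it specless-demo

# Because POETRY_VIRTUALENVS_CREATE=false installs dependencies into the
# system Python, scripts can be run directly inside the container, e.g.:
#   python examples/demo/learning.py
```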
16 changes: 5 additions & 11 deletions README.md
@@ -75,15 +75,10 @@ You can use the `specless` package in two ways: as a library, and as a CLI tool.
 ... ["e1", "e4", "e2", "e3", "e5"], # trace 2
 ... ["e1", "e2", "e4", "e3", "e5"], # trace 3
 ... ]
->>> dataset = sl.ArrayDataset(demonstrations, columns=["symbol"])
-
-# # or load from a file
-# >>> csv_filename = "examples/readme/example.csv"
-# >>> dataset = sl.BaseDataset(pd.read_csv(csv_filename))

 # Run the inference
 >>> inference = sl.POInferenceAlgorithm()
->>> specification = inference.infer(dataset) # returns a Specification
+>>> specification = inference.infer(demonstrations) # returns a Specification

 # prints the specification
 >>> print(specification) # doctest: +ELLIPSIS
@@ -118,10 +113,6 @@ The environment is based on the OpenAI Gym library (or more specifically, [Petti
 ... num=10,
 ... timeout=1000,
 ... )
-
-# Convert them to a Dataset Class
->>> demonstrations = sl.ArrayDataset(demonstrations, columns=["timestamp", "label"])
-
 ```

- Once the specification is obtained, synthesize a strategy:
@@ -178,6 +169,9 @@ synthesize -d <path/to/demo> OR -s <LTLf formula> AND -e <Gym env> AND -p <path/
 ```


+## Docker + VSCode
+Use Dev Container.
+

 ## Development

@@ -191,7 +185,7 @@ If you want to contribute, set up your development environment as follows:

 To run all tests: `tox`

-To run only the code tests: `tox -e py39` or `tox -e py310`
+To run only the code tests: `tox -e py38` (or py39, py310, py311)

 To run doctests, `tox -e doctest`

10 changes: 4 additions & 6 deletions examples/AircraftTurnaround/main.ipynb
@@ -4,7 +4,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Goal: Define task events & associated locations & costs"
+"# Goal: Define task events & associated locations & costs\n"
 ]
 },
 {
@@ -71,7 +71,7 @@
 "Paper: https://www.tandfonline.com/doi/full/10.1080/21680566.2017.1325784\n",
 "\n",
 "Table 2 (Renamed to `ground_services_by_operations.csv`): https://www.tandfonline.com/action/downloadTable?id=T0002&doi=10.1080%2F21680566.2017.1325784&downloadType=CSV\n",
-"Table 3 (Renamed to `duration.csv`): https://www.tandfonline.com/action/downloadTable?id=T0003&doi=10.1080%2F21680566.2017.1325784&downloadType=CSV"
+"Table 3 (Renamed to `duration.csv`): https://www.tandfonline.com/action/downloadTable?id=T0003&doi=10.1080%2F21680566.2017.1325784&downloadType=CSV\n"
 ]
 },
 {
@@ -157,9 +157,7 @@
 ],
 "source": [
 "inference = sl.TPOInferenceAlgorithm()\n",
-"columns: list = [\"timestamp\", \"symbol\"]\n",
-"timedtrace_dataset = sl.ArrayDataset(demonstrations, columns)\n",
-"specification: sl.Specification = inference.infer(timedtrace_dataset)\n",
+"specification: sl.Specification = inference.infer(demonstrations)\n",
 "\n",
 "filepath = os.path.join(LOG_DIR, \"tpo.png\")\n",
 "sl.draw_graph(specification, filepath)\n",
@@ -238,7 +236,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Associate the name and the observation label"
+"### Associate the name and the observation label\n"
 ]
 },
 {
45 changes: 45 additions & 0 deletions examples/demo/learning.py
@@ -0,0 +1,45 @@
import specless as sl  # or load from specless.inference import TPOInference


def main():
    ### Partial Order Inference

    # Manually prepare a list of demonstrations
    demonstrations = [
        ["e1", "e2", "e3", "e4", "e5"],  # trace 1
        ["e1", "e4", "e2", "e3", "e5"],  # trace 2
        ["e1", "e2", "e4", "e3", "e5"],  # trace 3
    ]

    # Run the inference
    inference = sl.POInferenceAlgorithm()
    specification = inference.infer(demonstrations)  # returns a Specification

    # Prints the specification
    print(specification)

    # Draws the specification to a file
    sl.draw_graph(specification, filepath="spec")

    ### Timed Partial Order Inference

    # Manually prepare a list of timed demonstrations
    demonstrations = [
        [[1, "a"], [2, "b"], [3, "c"]],
        [[4, "d"], [5, "e"], [6, "f"]],
    ]
    columns: list = ["timestamp", "symbol"]

    timedtrace_dataset = sl.ArrayDataset(demonstrations, columns)

    # Run the timed partial order inference
    inference = sl.TPOInferenceAlgorithm()
    specification: sl.Specification = inference.infer(timedtrace_dataset)


if __name__ == "__main__":
    main()
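The specless API aside, the partial-order idea this demo relies on can be sketched in plain Python: keep exactly those precedence pairs that hold in every demonstration. This is a conceptual sketch, not the specless implementation; `infer_partial_order` is a name invented here, and it assumes all traces contain the same set of events:

```python
from itertools import product


def infer_partial_order(traces):
    """Keep exactly the precedence pairs (a, b) with a before b in every trace."""
    events = set(traces[0])
    # Start from all ordered pairs of distinct events...
    pairs = {(a, b) for a, b in product(events, events) if a != b}
    # ...and drop any pair that some trace violates.
    for trace in traces:
        position = {e: i for i, e in enumerate(trace)}
        pairs = {(a, b) for (a, b) in pairs if position[a] < position[b]}
    return pairs


demonstrations = [
    ["e1", "e2", "e3", "e4", "e5"],  # trace 1
    ["e1", "e4", "e2", "e3", "e5"],  # trace 2
    ["e1", "e2", "e4", "e3", "e5"],  # trace 3
]

order = infer_partial_order(demonstrations)
# e1 precedes every other event; e2 precedes e3 and e5; e3 and e4 each
# precede e5. e2/e4 and e3/e4 stay unordered, which is exactly where the
# three traces disagree.
```

The surviving pairs form a strict partial order over the events, which is what a partial-order specification encodes.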
