Enhancement of coding assistants through the integration of feedback from LLM querying. LLMcoder fetches and retrieves API information and documentation based on error reports obtained from four different analyzers. These aim to prevent type errors, incorrect API signature usage, and LLM-induced hallucinations. The analyzers were implemented after fine-tuning GPT-3.5, aligned with an evaluated scale of difficulty levels.
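At its core, LLMcoder runs a completion-analysis-feedback loop. The following is a minimal, self-contained sketch of that idea; all names (`generate_completion`, `mypy_like_analyzer`, the report dict) are hypothetical stand-ins, not the project's internal API:

```python
from typing import Callable

# Hypothetical stand-ins; the real internals of LLMcoder differ.
def generate_completion(code: str, feedback: list | None = None) -> str:
    """Toy 'LLM': closes the open call. The real system queries GPT-3.5."""
    return code + '"Hello")'

def mypy_like_analyzer(code: str, completion: str) -> dict:
    """Toy analyzer: always passes. Real analyzers return error reports."""
    return {"pass": True, "message": ""}

def feedback_loop(code: str, analyzers: list[Callable], max_iter: int = 3) -> str:
    completion = generate_completion(code)
    for _ in range(max_iter):
        reports = [analyze(code, completion) for analyze in analyzers]
        if all(r["pass"] for r in reports):
            break  # every analyzer approves: accept the completion
        # Otherwise, feed the error reports back to the LLM and retry
        completion = generate_completion(code, feedback=reports)
    return completion

print(feedback_loop("print(", [mypy_like_analyzer]))
```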
Requirements:
- Python >= 3.10
- pip >= 21.3
Installation:
```sh
git clone https://github.com/pvs-hd-tea/23ws-LLMcoder
cd 23ws-LLMcoder
conda create -n llmcoder python=3.11 [ipykernel]
conda activate llmcoder
pip install -e .
```
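A quick way to check that the install succeeded (assuming the package is importable as `llmcoder`, as in the usage example below):
```sh
python -c "import llmcoder"
```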
CLI:
```sh
llmcoder [command] [options]
```
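The subcommands used throughout this README, each explained in its own section below:
```sh
llmcoder export -n name/of/dataset  # compile a dataset
llmcoder evaluate                   # evaluate LLMcoder on all configs
llmcoder metrics                    # compute metrics for all configs
```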
Python API:
from llmcoder import LLMcoder
llmcoder = LLMcoder(
analyzers=[
"mypy_analyzer_v1", # Detect and fix type errors
"signature_analyzer_v1", # Augment type errors with signatures
"gpt_score_analyzer_v1"], # Score and find best completion
feedback_variant="coworker", # Make context available to all analyzers
max_iter=3, # Maximum number of feedback iterations
n_procs=4 # Complete and analyze in parallel
backtracking=True, # Enable backtracking in the tree of completions
verbose=True # Print progress
)
# Define an incomplete code snippet
code = "print("
# Complete the code
result = llmcoder.complete(code, n=4)
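To inspect the output (a hedged example: the return type of `complete` is not documented here, so adjust this if it returns something richer than a string):
```python
print(result)
```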
To compile a dataset from input-output pairs into a `conversations.jsonl` file, run
```sh
llmcoder export -n name/of/dataset
```
on a dataset stored in `/data/name/of/dataset`.
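The exact schema of `conversations.jsonl` is defined by the project; since the conversations are used to fine-tune GPT-3.5, a plausible (unverified) line would follow OpenAI's chat fine-tuning format, e.g.:
```json
{"messages": [{"role": "user", "content": "print("}, {"role": "assistant", "content": "print(\"Hello, world!\")"}]}
```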
Before evaluating on a dataset, install its requirements:
```sh
pip install -r data/name/of/dataset/requirements.txt
```
To evaluate LLMcoder on all configs in `/configs`, run
```sh
llmcoder evaluate
```
To evaluate LLMcoder on a specific config, run
```sh
llmcoder evaluate -c my_config.yaml
```
where `my_config.yaml` is a configuration file from `/configs`.
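The config schema is not reproduced in this README; as a purely hypothetical sketch, one might expect it to mirror the `LLMcoder` constructor arguments shown above:
```yaml
# Hypothetical config sketch; consult /configs for the real schema
analyzers: [mypy_analyzer_v1, signature_analyzer_v1, gpt_score_analyzer_v1]
feedback_variant: coworker
max_iter: 3
n_procs: 4
backtracking: true
```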
The following files will be created for each config and run:

- `/data/name/of/dataset/eval/<config_name>/<run_id>/results.json` with
  - `messages`: the message history
  - `analyzer_results`: analyzer results for each step
  - `log`: a log of the run
  - `time`: the time it took to complete the run
- `/data/name/of/dataset/eval/<config_name>/<run_id>/readable_logs/<example_id>.txt` with
  - a human-readable log of the run for each example in the dataset
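With that structure, a results file can be inspected programmatically; a minimal sketch (the concrete config name and run id below are placeholders):
```python
import json

# Hypothetical concrete path; substitute your config name and run id
with open("data/name/of/dataset/eval/my_config/0/results.json") as f:
    results = json.load(f)

# Expect fields like messages, analyzer_results, log, time per the listing above
print(results.keys())
```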
After running the evaluation, compute the metrics for all configs in `/configs` with
```sh
llmcoder metrics
```
To compute the metrics for a specific config, run
```sh
llmcoder metrics -c my_config.yaml
```
where `my_config.yaml` is a configuration file from `/configs`.
This will create the following files for each config and run:

- `/data/name/of/dataset/eval/<config_name>/<run_id>/metrics.csv`
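Since the metrics are plain CSV, they can be aggregated with standard tooling; a sketch using pandas (the column names are not specified here, so this only summarizes whatever the file contains):
```python
import pandas as pd

# Hypothetical concrete path; substitute your config name and run id
metrics = pd.read_csv("data/name/of/dataset/eval/my_config/0/metrics.csv")
print(metrics.describe())  # summary statistics over the metrics in the file
```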
To set up the development environment, run the following commands:
```sh
pip install -e .[dev]
pre-commit install
```
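To run all hooks against the full repository once (a standard pre-commit command, independent of this project):
```sh
pre-commit run --all-files
```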
For further information, see CONTRIBUTING.md.
Citation:
```bibtex
@software{llmcoder-hd-24,
  author    = {Ana Carsi and Kushal Gaywala and Paul Saegert},
  title     = {LLMcoder: Feedback-Based Code Assistant},
  month     = mar,
  year      = 2024,
  publisher = {GitHub},
  version   = {0.4.0},
  url       = {https://github.com/pvs-hd-tea/23ws-LLMcoder}
}
```