diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index c22a4400a..3bc3a4e92 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: ['3.8', '3.9', '3.10', '3.11']
+ python-version: ['3.9', '3.10', '3.11']
steps:
- uses: actions/checkout@v4
diff --git a/README.md b/README.md
index 942ba1f65..db6f5e7cb 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,16 @@ You specify what kind of an app you want to build. Then, GPT Pilot asks clarifyi
* [🔌 Requirements](#-requirements)
* [🚦How to start using gpt-pilot?](#how-to-start-using-gpt-pilot)
-* [🧑💻️ Other arguments](#%EF%B8%8F-other-arguments)
+* [🐳 How to start gpt-pilot in docker?](#how-to-start-gpt-pilot-in-docker)
+* [🧑💻️ CLI arguments](#%EF%B8%8F-cli-arguments)
+ * [`app_id` and `workspace`](#app_id-and-workspace)
+ * [`user_id`, `email` and `password`](#user_id-email-and-password)
+ * [`app_type` and `name`](#app_type-and-name)
+ * [`step`](#step)
+ * [`skip_until_dev_step`](#skip_until_dev_step)
+ * [`advanced`](#advanced)
+ * [`delete_unrelated_steps`](#delete_unrelated_steps)
+ * [`update_files_before_start`](#update_files_before_start)
* [🔎 Examples](#-examples)
* [Real-time chat app](#-real-time-chat-app)
* [Markdown editor](#-markdown-editor)
@@ -49,8 +58,7 @@ https://github.com/Pythagora-io/gpt-pilot/assets/10895136/0495631b-511e-451b-93d
# 🔌 Requirements
-
-- **Python**
+- **Python >= 3.9**
- **PostgreSQL** (optional, projects default is SQLite)
- DB is needed for multiple reasons like continuing app development if you had to stop at any point or app crashed, going back to specific step so you can change some later steps in development, easier debugging, for future we will add functionality to update project (change some things in existing project or add new features to the project and so on)...
@@ -76,6 +84,7 @@ All generated code will be stored in the folder `workspace` inside the folder na
**IMPORTANT: To run GPT Pilot, you need to have PostgreSQL set up on your machine**
+
# 🐳 How to start gpt-pilot in docker?
1. `git clone https://github.com/Pythagora-io/gpt-pilot.git` (clone the repo)
2. Update the `docker-compose.yml` environment variables
@@ -87,28 +96,96 @@ All generated code will be stored in the folder `workspace` inside the folder na
This will start two containers, one being a new image built by the `Dockerfile` and a postgres database. The new image also has [ttyd](https://github.com/tsl0922/ttyd) installed so you can easily interact with gpt-pilot.
-# 🧑💻️ Other arguments
-- continue working on an existing app
+
+# 🧑💻️ CLI arguments
+
+## `app_id` and `workspace`
+Continue working on an existing app using **`app_id`**
```bash
python main.py app_id=
```
-- continue working on an existing app from a specific step
+_or_ **`workspace`** path:
+
+```bash
+python main.py workspace=
+```
+
+Each user can have their own workspace path for each App. (See [`user_id`](#user_id-email-and-password))
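+
+For example, the two can be combined (the path below is purely illustrative):
+
+```bash
+python main.py user_id=me_at_work workspace=~/gpt-pilot-workspace/my-chat-app
+```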
+
+
+## `user_id`, `email` and `password`
+These values will be saved to the User table in the DB.
+
+```bash
+python main.py user_id=me_at_work
+```
+
+If not specified, `user_id` defaults to the OS username, but can be provided explicitly if your OS username differs from your GitHub or work username. This value is used to load the `App` config when the `workspace` arg is provided.
+
+If not specified, `email` will be parsed from `~/.gitconfig` if the file exists.
+
+See also [What's the purpose of arguments.password / User.password?](https://github.com/Pythagora-io/gpt-pilot/discussions/55)
+
+---
+
+## `app_type` and `name`
+If not provided, the ProductOwner will ask for these values.
+
+`app_type` is used as a hint to the LLM as to what kind of architecture, language options and conventions would apply. If not provided, `prompts.prompts.ask_for_app_type()` will ask for it.
+
+See `const.common.ALL_TYPES`: 'Web App', 'Script', 'Mobile App', 'Chrome Extension'
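+
+For example (the name is purely illustrative; quote values containing spaces):
+
+```bash
+python main.py app_type="Web App" name="My Chat App"
+```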
+
+---
+
+## `step`
+Continue working on an existing app from a specific **`step`** (eg: `user_tasks`)
```bash
python main.py app_id= step=
```
-- continue working on an existing app from a specific development step
+
+## `skip_until_dev_step`
+- Continue working on an existing app from a specific **development step**
```bash
python main.py app_id= skip_until_dev_step=
```
This is basically the same as `step` but during the actual development process. If you want to play around with gpt-pilot, this is likely the flag you will often use.
-- erase all development steps previously done and continue working on an existing app from start of development
+
+- Erase all development steps previously done and continue working on an existing app from start of development
+
```bash
python main.py app_id= skip_until_dev_step=0
```
+---
+
+## `advanced`
+The Architect by default favours certain technologies, including:
+
+- Node.js
+- MongoDB
+- PeeWee ORM
+- Jest & pytest
+- Bootstrap
+- Vanilla JavaScript
+- Socket.io
+
+If you have your own preferences, you can have a deeper conversation with the Architect.
+
+```bash
+python main.py advanced=True
+```
+
+
+## `delete_unrelated_steps`
+
+
+## `update_files_before_start`
+
+
+
# 🔎 Examples
Here are a couple of example apps GPT Pilot created by itself:
@@ -155,8 +232,10 @@ Here are the steps GPT Pilot takes to create an app:
4. **Architect agent** writes up technologies that will be used for the app
5. **DevOps agent** checks if all technologies are installed on the machine and installs them if they are not
6. **Tech Lead agent** writes up development tasks that Developer will need to implement. This is an important part because, for each step, Tech Lead needs to specify how the user (real world developer) can review if the task is done (eg. open localhost:3000 and do something)
-7. **Developer agent** takes each task and writes up what needs to be done to implement it. The description is in human readable form.
-8. Finally, **Code Monkey agent** takes the Developer's description and the currently implement file and implements the changes into it. We realized this works much better than giving it to Developer right away to implement changes.
+7. **Developer agent** takes each task and writes up what needs to be done to implement it. The description is in human-readable form.
+8. Finally, **Code Monkey agent** takes the Developer's description and the existing file and implements the changes into it. We realized this works much better than giving it to Developer right away to implement changes.
+
+For more details on the roles of agents employed by GPT Pilot refer to [AGENTS.md](https://github.com/Pythagora-io/gpt-pilot/blob/main/pilot/helpers/agents/AGENTS.md)
![GPT Pilot Coding Workflow](https://github.com/Pythagora-io/gpt-pilot/assets/10895136/53ea246c-cefe-401c-8ba0-8e4dd49c987b)
diff --git a/pilot/helpers/agents/AGENTS.md b/pilot/helpers/agents/AGENTS.md
new file mode 100644
index 000000000..df403256b
--- /dev/null
+++ b/pilot/helpers/agents/AGENTS.md
@@ -0,0 +1,64 @@
+Roles are defined in `const.common.ROLES`.
+Each agent's role is described to the LLM by a prompt in `pilot/prompts/system_messages/{role}.prompt`
+
+## Product Owner
+`project_description`, `user_stories`, `user_tasks`
+
+- Talk to client, ask detailed questions about what client wants
+- Give specifications to dev team
+
+
+## Architect
+`architecture`
+
+- Scripts: Node.js
+- Backend: Node.js, MongoDB (Mongoose) or a relational database (PeeWee ORM)
+- Testing: Node.js -> Jest, Python -> pytest, E2E -> Cypress **(TODO - BDD?)**
+- Frontend: Bootstrap, vanilla JavaScript **(TODO - TypeScript, Material/Styled, React/Vue/other?)**
+- Other: cronjob, Socket.io
+
+TODO:
+- README.md
+- .gitignore
+- .editorconfig
+- LICENSE
+- CI/CD
+- IaC, Dockerfile
+
+
+## Tech Lead
+`development_planning`
+
+- Break down the project into smaller tasks for devs.
+- Specify each task as clearly as possible:
+ - Description
+ - "Programmatic goal" which determines if the task can be marked as done.
+ eg: "the server needs to start on port 3000 and respond with status code 200
+ to an API request to the URL `http://localhost:3000/ping`"
+ - "User-review goal"
+ eg: "run `npm run start` and open `http://localhost:3000/ping`, see "Hello World" on the screen"
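+
+A sketch of one step in a task plan, as consumed by `Developer.execute_task()` (shape taken from its docstring; the values are illustrative):
+
+```json
+{
+  "type": "command",
+  "command": { "command": "npm run start", "timeout": 1000 }
+}
+```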
+
+
+## Dev Ops
+`environment_setup`
+
+**TODO: no prompt**
+
+`debug` functions: `run_command`, `implement_code_changes`
+
+
+## Developer (full_stack_developer)
+`create_scripts`, `coding` **(TODO: No entry in `STEPS` for `create_scripts`)**
+
+- Implement tasks assigned by tech lead
+- Modular code, TDD
+- Tasks provided as "programmatic goals" **(TODO: consider BDD)**
+
+
+
+## Code Monkey
+**TODO: not listed in `ROLES`**
+
+`development/implement_changes` functions: `save_files`
+
+- Implement tasks assigned by tech lead
+- Modular code, TDD
diff --git a/pilot/helpers/agents/Architect.py b/pilot/helpers/agents/Architect.py
index e6b1e0727..26e5d9fdd 100644
--- a/pilot/helpers/agents/Architect.py
+++ b/pilot/helpers/agents/Architect.py
@@ -39,6 +39,7 @@ def get_architecture(self):
# 'user_tasks': self.project.user_tasks,
'app_type': self.project.args['app_type']}, ARCHITECTURE)
+ # TODO: Project.args should be a defined class so that all of the possible args are more obvious
if self.project.args.get('advanced', False):
architecture = get_additional_info_from_user(self.project, architecture, 'architect')
diff --git a/pilot/helpers/agents/CodeMonkey.py b/pilot/helpers/agents/CodeMonkey.py
index cbb854ef0..94ead90cf 100644
--- a/pilot/helpers/agents/CodeMonkey.py
+++ b/pilot/helpers/agents/CodeMonkey.py
@@ -10,9 +10,12 @@ def __init__(self, project, developer):
self.developer = developer
def implement_code_changes(self, convo, code_changes_description, step_index=0):
- if convo == None:
+ if convo is None:
convo = AgentConvo(self)
+ # "... step {i} - {step.description}.
+ # To do this, you will need to see the local files
+ # Ask for files relative to project root."
files_needed = convo.send_message('development/task/request_files_for_code_changes.prompt', {
"step_description": code_changes_description,
"directory_tree": self.project.get_directory_tree(True),
@@ -20,7 +23,6 @@ def implement_code_changes(self, convo, code_changes_description, step_index=0):
"finished_steps": ', '.join(f"#{j}" for j in range(step_index))
}, GET_FILES)
-
changes = convo.send_message('development/implement_changes.prompt', {
"step_description": code_changes_description,
"step_index": step_index,
diff --git a/pilot/helpers/agents/Developer.py b/pilot/helpers/agents/Developer.py
index 33bd3da48..d01e501be 100644
--- a/pilot/helpers/agents/Developer.py
+++ b/pilot/helpers/agents/Developer.py
@@ -2,7 +2,6 @@
import uuid
from termcolor import colored
from utils.questionary import styled_text
-from helpers.files import update_file
from utils.utils import step_already_finished
from helpers.agents.CodeMonkey import CodeMonkey
from logger.logger import logger
@@ -13,7 +12,6 @@
from const.function_calls import FILTER_OS_TECHNOLOGIES, DEVELOPMENT_PLAN, EXECUTE_COMMANDS, GET_TEST_TYPE, DEV_TASKS_BREAKDOWN, IMPLEMENT_TASK
from database.database import save_progress, get_progress_steps, save_file_description
from utils.utils import get_os_info
-from helpers.cli import execute_command
ENVIRONMENT_SETUP_STEP = 'environment_setup'
@@ -40,15 +41,21 @@ def start_coding(self):
def implement_task(self):
convo_dev_task = AgentConvo(self)
+ # TODO: why "This should be a simple version of the app so you don't need to aim to provide a production ready code"?
+ # TODO: why `no_microservices`? Is that even applicable?
task_description = convo_dev_task.send_message('development/task/breakdown.prompt', {
"name": self.project.args['name'],
"app_type": self.project.args['app_type'],
"app_summary": self.project.project_description,
"clarification": [],
+ # TODO: why all stories at once?
"user_stories": self.project.user_stories,
# "user_tasks": self.project.user_tasks,
+ # TODO: "I'm currently in an empty folder" may not always be true?
"technologies": self.project.architecture,
+ # TODO: `array_of_objects_to_string` does not seem to be used by the prompt template?
"array_of_objects_to_string": array_of_objects_to_string,
+ # TODO: prompt lists `files` if `current_task_index` != 0
"directory_tree": self.project.get_directory_tree(True),
})
@@ -56,7 +63,22 @@ def implement_task(self):
convo_dev_task.remove_last_x_messages(2)
self.execute_task(convo_dev_task, task_steps, continue_development=True)
- def execute_task(self, convo, task_steps, test_command=None, reset_convo=True, test_after_code_changes=True, continue_development=False):
+ def execute_task(self, convo: AgentConvo, task_steps, test_command=None, reset_convo=True, test_after_code_changes=True, continue_development=False):
+ """
+ :param convo:
+ :param task_steps: [{
+ type: 'command|code_change|human_intervention',
+ command: { command: '', timeout: 1000ms }
+ code_change: { name: 'file name', path: '/path/to/file', content: "console.info('Hello');" },
+ (or code_change_description: str)
+ human_intervention_description: 'description of step in debugging'
+ }, ...]
+ :param test_command: command used to test the app (default: None)
+ :param reset_convo: reset the conversation branch before running (default: True)
+ :param test_after_code_changes: run tests after each code change (default: True)
+ :param continue_development: keep iterating after the steps complete (default: False)
+ :return:
+ """
function_uuid = str(uuid.uuid4())
convo.save_branch(function_uuid)
@@ -75,6 +97,7 @@ def execute_task(self, convo, task_steps, test_command=None, reset_convo=True, t
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
elif step['type'] == 'code_change' and 'code_change_description' in step:
+ # DEV_TASKS_BREAKDOWN
# TODO this should be refactored so it always uses the same function call
print(f'Implementing code changes for `{step["code_change_description"]}`')
code_monkey = CodeMonkey(self.project, self)
@@ -83,6 +106,7 @@ def execute_task(self, convo, task_steps, test_command=None, reset_convo=True, t
self.test_code_changes(code_monkey, updated_convo)
elif step['type'] == 'code_change':
+ # IMPLEMENT_TASK
# TODO fix this - the problem is in GPT response that sometimes doesn't return the correct JSON structure
if 'code_change' not in step:
data = step
@@ -158,7 +182,6 @@ def continue_development(self, iteration_convo):
def set_up_environment(self):
self.project.current_step = ENVIRONMENT_SETUP_STEP
- self.convo_os_specific_tech = AgentConvo(self)
# If this app_id already did this step, just get all data from DB and don't ask user again
step = get_progress_steps(self.project.args['app_id'], ENVIRONMENT_SETUP_STEP)
@@ -178,7 +201,9 @@ def set_up_environment(self):
logger.info(f"Setting up the environment...")
os_info = get_os_info()
- os_specific_technologies = self.convo_os_specific_tech.send_message('development/env_setup/specs.prompt',
+
+ convo_os_specific_tech = AgentConvo(self)
+ os_specific_technologies = convo_os_specific_tech.send_message('development/env_setup/specs.prompt',
{
"name": self.project.args['name'],
"app_type": self.project.args['app_type'],
@@ -188,7 +213,7 @@ def set_up_environment(self):
for technology in os_specific_technologies:
# TODO move the functions definitions to function_calls.py
- cli_response, llm_response = self.convo_os_specific_tech.send_message('development/env_setup/install_next_technology.prompt',
+ cli_response, llm_response = convo_os_specific_tech.send_message('development/env_setup/install_next_technology.prompt',
{ 'technology': technology}, {
'definitions': [{
'name': 'execute_command',
@@ -215,11 +240,11 @@ def set_up_environment(self):
})
if llm_response != 'DONE':
- installation_commands = self.convo_os_specific_tech.send_message('development/env_setup/unsuccessful_installation.prompt',
+ installation_commands = convo_os_specific_tech.send_message('development/env_setup/unsuccessful_installation.prompt',
{ 'technology': technology }, EXECUTE_COMMANDS)
if installation_commands is not None:
for cmd in installation_commands:
- run_command_until_success(cmd['command'], cmd['timeout'], self.convo_os_specific_tech)
+ run_command_until_success(cmd['command'], cmd['timeout'], convo_os_specific_tech)
logger.info('The entire tech stack needed is installed and ready to be used.')
diff --git a/pilot/helpers/agents/ProductOwner.py b/pilot/helpers/agents/ProductOwner.py
index f9213ebdd..98650e5a5 100644
--- a/pilot/helpers/agents/ProductOwner.py
+++ b/pilot/helpers/agents/ProductOwner.py
@@ -19,7 +19,11 @@ class ProductOwner(Agent):
def __init__(self, project):
super().__init__('product_owner', project)
- def get_project_description(self):
+ def get_project_description(self) -> None:
+ """
+ Prompt user for app_type, name, description and ask clarifying questions.
+ Use the LLM to generate a summary of the project.
+ """
self.project.app = save_app(self.project.args)
self.project.current_step = PROJECT_DESCRIPTION_STEP
@@ -42,8 +46,10 @@ def get_project_description(self):
self.project.app = save_app(self.project.args)
+ # "Describe your app in as much detail as possible"
main_prompt = ask_for_main_app_definition(self.project)
+ # Ask clarifying questions
high_level_messages = get_additional_info_from_openai(
self.project,
generate_messages_from_description(main_prompt, self.project.args['app_type'], self.project.args['name']))
@@ -67,7 +73,13 @@ def get_project_description(self):
return
# PROJECT DESCRIPTION END
- def get_user_stories(self):
+
+ def get_user_stories(self) -> list[str]:
+ """
+ Sends several requests to the LLM to generate user stories, given the project description and clarifications.
+ Asks the user if they have anything to add for each proposed story.
+ :return: a list of brief story descriptions.
+ """
self.project.current_step = USER_STORIES_STEP
self.convo_user_stories = AgentConvo(self)
@@ -93,7 +105,7 @@ def get_user_stories(self):
logger.info(f"Final user stories: {self.project.user_stories}")
- save_progress(self.project.args['app_id'], self.project.current_step, {
+ save_progress(self.project.args['app_id'], USER_STORIES_STEP, {
"messages": self.convo_user_stories.messages,
"user_stories": self.project.user_stories,
"app_data": generate_app_data(self.project.args)
diff --git a/pilot/helpers/cli.py b/pilot/helpers/cli.py
index d75832003..f7e3745cf 100644
--- a/pilot/helpers/cli.py
+++ b/pilot/helpers/cli.py
@@ -100,12 +100,13 @@ def execute_command(project, command, timeout=None, force=False):
if not force:
print(colored(f'\n--------- EXECUTE COMMAND ----------', 'yellow', attrs=['bold']))
- print(colored(f'Can i execute the command: `') + colored(command, 'yellow', attrs=['bold']) + colored(f'` with {timeout}ms timeout?'))
+ print(colored(f'Can I execute the command: `') + colored(command, 'yellow', attrs=['bold']) + colored(f'` with {timeout}ms timeout?'))
answer = styled_text(
project,
'If yes, just press ENTER'
)
+ # TODO: handle "no"
# TODO when a shell built-in commands (like cd or source) is executed, the output is not captured properly - this will need to be changed at some point
@@ -234,6 +235,7 @@ def build_directory_tree(path, prefix="", ignore=None, is_last=False, files=None
return output
+
def execute_command_and_check_cli_response(command, timeout, convo):
"""
Execute a command and check its CLI response.
diff --git a/pilot/prompts/prompts.py b/pilot/prompts/prompts.py
index 6780f0917..7742a5e2d 100644
--- a/pilot/prompts/prompts.py
+++ b/pilot/prompts/prompts.py
@@ -12,7 +12,7 @@
def ask_for_app_type():
- return 'Web App'
+ return 'App'
answer = styled_select(
"What type of app do you want to build?",
choices=common.APP_TYPES
@@ -40,7 +40,7 @@ def ask_for_app_type():
def ask_for_main_app_definition(project):
description = styled_text(
project,
- "Describe your app in as many details as possible."
+ "Describe your app in as much detail as possible."
)
if description is None:
@@ -68,9 +68,22 @@ def ask_user(project, question, require_some_input=True):
def get_additional_info_from_openai(project, messages):
+ """
+ Runs the conversation between Product Owner and LLM.
+ Provides the user's initial description, LLM asks the user clarifying questions and user responds.
+ Limited by `MAX_QUESTIONS`, exits when LLM responds "EVERYTHING_CLEAR".
+
+ :param project: Project
+ :param messages: [
+ { "role": "system", "content": "You are a Product Owner..." },
+ { "role": "user", "content": "I want you to create the app {name} that can be described: ```{description}```..." }
+ ]
+ :return: The updated `messages` list with the entire conversation between user and LLM.
+ """
is_complete = False
while not is_complete:
# Obtain clarifications using the OpenAI API
+ # { 'text': new_code }
response = create_gpt_chat_completion(messages, 'additional_info')
if response is not None:
@@ -93,12 +106,21 @@ def get_additional_info_from_openai(project, messages):
# TODO refactor this to comply with AgentConvo class
-def get_additional_info_from_user(project, messages, role):
+def get_additional_info_from_user(project, messages, role):
+ """
+ If `advanced` CLI arg, Architect offers user a chance to change the architecture.
+ Prompts: "Please check this message and say what needs to be changed. If everything is ok just press ENTER"...
+ Then asks the LLM to update the messages based on the user's feedback.
+
+ :param project: Project
+ :param messages: array
+ :param role: 'product_owner', 'architect', 'dev_ops', 'tech_lead', 'full_stack_developer', 'code_monkey'
+ :return: a list of updated messages - see https://github.com/Pythagora-io/gpt-pilot/issues/78
+ """
# TODO process with agent convo
updated_messages = []
for message in messages:
-
while True:
if isinstance(message, dict) and 'text' in message:
message = message['text']
@@ -109,22 +131,41 @@ def get_additional_info_from_user(project, messages, role):
if answer.lower() == '':
break
response = create_gpt_chat_completion(
- generate_messages_from_custom_conversation(role, [get_prompt('utils/update.prompt'), message, answer], 'user'), 'additional_info')
+ generate_messages_from_custom_conversation(role, [get_prompt('utils/update.prompt'), message, answer], 'user'),
+ 'additional_info')
message = response
updated_messages.append(message)
logger.info('Getting additional info from user done')
-
return updated_messages
def generate_messages_from_description(description, app_type, name):
+ """
+ Called by ProductOwner.get_project_description().
+ :param description: "I want to build a cool app that will make me rich"
+ :param app_type: 'Web App', 'Script', 'Mobile App', 'Chrome Extension' etc
+ :param name: Project name
+ :return: [
+ { "role": "system", "content": "You are a Product Owner..." },
+ { "role": "user", "content": "I want you to create the app {name} that can be described: ```{description}```..." }
+ ]
+ """
+ # "I want you to create the app {name} that can be described: ```{description}```
+ # Get additional answers
+ # Break down stories
+ # Break down user tasks
+ # Start with Get additional answers
+ # {prompts/components/no_microservices}
+ # {prompts/components/single_question}
+ # "
prompt = get_prompt('high_level_questions/specs.prompt', {
'name': name,
'prompt': description,
'app_type': app_type,
+ # TODO: MAX_QUESTIONS should be configurable by ENV or CLI arg
'MAX_QUESTIONS': MAX_QUESTIONS
})
@@ -135,6 +176,20 @@ def generate_messages_from_description(description, app_type, name):
def generate_messages_from_custom_conversation(role, messages, start_role='user'):
+ """
+ :param role: 'product_owner', 'architect', 'dev_ops', 'tech_lead', 'full_stack_developer', 'code_monkey'
+ :param messages: [
+ "I will show you some of your message to which I want you to make some updates. Please just modify your last message per my instructions.",
+ {LLM's previous message},
+ {user's request for change}
+ ]
+ :param start_role: 'user'
+ :return: [
+ { "role": "system", "content": "You are a ..., You do ..." },
+ { "role": start_role, "content": messages[i + even] },
+ { "role": "assistant" (or "user" for other start_role), "content": messages[i + odd] },
+ ... ]
+ """
# messages is list of strings
result = [get_sys_message(role)]
diff --git a/pilot/prompts/system_messages/architect.prompt b/pilot/prompts/system_messages/architect.prompt
index 3bf1d46d5..a6a4f2a60 100644
--- a/pilot/prompts/system_messages/architect.prompt
+++ b/pilot/prompts/system_messages/architect.prompt
@@ -1,10 +1,10 @@
You are an experienced software architect. Your expertise is in creating an architecture for an MVP (minimum viable products) for {{ app_type }}s that can be developed as fast as possible by using as many ready-made technologies as possible. The technologies that you prefer using when other technologies are not explicitly specified are:
-**Scripts**: you prefer using Node.js for writing scripts that are meant to be ran just with the CLI.
+**Scripts**: You prefer using Node.js for writing scripts that are meant to be run just from the CLI.
-**Backend**: you prefer using Node.js with Mongo database if not explicitely specified otherwise. When you're using Mongo, you always use Mongoose and when you're using Postgresql, you always use PeeWee as an ORM.
+**Backend**: You prefer using Node.js with Mongo database if not explicitly specified otherwise. When you're using Mongo, you always use Mongoose and when you're using a relational database, you always use PeeWee as an ORM.
**Testing**: To create unit and integration tests, you prefer using Jest for Node.js projects and pytest for Python projects. To create end-to-end tests, you prefer using Cypress.
-**Frontend**: you prefer using Bootstrap for creating HTML and CSS while you use plain (vanilla) Javascript.
+**Frontend**: You prefer using Bootstrap for creating HTML and CSS while you use plain (vanilla) JavaScript.
**Other**: From other technologies, if they are needed for the project, you prefer using cronjob (for making automated tasks), Socket.io for web sockets
\ No newline at end of file
diff --git a/pilot/prompts/utils/update.prompt b/pilot/prompts/utils/update.prompt
index 645606a9f..4e2261290 100644
--- a/pilot/prompts/utils/update.prompt
+++ b/pilot/prompts/utils/update.prompt
@@ -1 +1 @@
-I will show you some of your message to which I want make some updates. Please just modify your last message per my instructions.
\ No newline at end of file
+I will show you some of your message to which I want you to make some updates. Please just modify your last message per my instructions.
\ No newline at end of file
diff --git a/pilot/utils/llm_connection.py b/pilot/utils/llm_connection.py
index 6fe20a41d..3a290978f 100644
--- a/pilot/utils/llm_connection.py
+++ b/pilot/utils/llm_connection.py
@@ -130,6 +130,7 @@ def create_gpt_chat_completion(messages: List[dict], req_type, min_tokens=MIN_TO
# Advise the LLM of the JSON response schema we are expecting
gpt_data['functions'] = function_calls['definitions']
if len(function_calls['definitions']) > 1:
+ # DEV_STEPS
gpt_data['function_call'] = 'auto'
else:
gpt_data['function_call'] = {'name': function_calls['definitions'][0]['name']}
@@ -318,7 +319,7 @@ def return_result(result_data, lines_printed):
return return_result({'text': new_code}, lines_printed)
-def postprocessing(gpt_response, req_type):
+def postprocessing(gpt_response: str, req_type) -> str:
return gpt_response
diff --git a/pilot/utils/utils.py b/pilot/utils/utils.py
index 85b1f3e8f..b0cb804ea 100644
--- a/pilot/utils/utils.py
+++ b/pilot/utils/utils.py
@@ -61,6 +61,10 @@ def get_prompt_components():
def get_sys_message(role):
+ """
+ :param role: 'product_owner', 'architect', 'dev_ops', 'tech_lead', 'full_stack_developer', 'code_monkey'
+ :return: { "role": "system", "content": "You are a {role}... You do..." }
+ """
# Create a FileSystemLoader
file_loader = FileSystemLoader('prompts/system_messages')