Extract relevant context from a codebase using a type-directed approach.
**Recommended.** Install the package from npm:

```
npm i @jpoly1219/context-extractor
```

**Not recommended.** If the above step does not work, follow the manual setup below, and please leave a GitHub issue.
Install the following dependencies:

```
npm install -g typescript-language-server typescript tsc
```
Clone the ts-lsp-client repo:

```
git clone https://github.com/jpoly1219/ts-lsp-client
```

... and run these commands:

```
cd ts-lsp-client
npm install
npm run build
```
Clone this repo:

```
git clone https://github.com/jpoly1219/context-extractor.git
cd context-extractor
npm install
```
For OCaml support, you need to first go through the standard OCaml setup.
Once that is done, you should be able to create a local switch in this repo.
```
opam switch create ./
eval $(opam env)
```
After you activate the local switch, install the following dependencies:
```
opam install dune ocaml-lsp-server ounit2
```
We provide five OCaml examples, located in the `targets/ocaml` directory. `cd` into each of them and run the following:

```
dune build
```

Ignore the wildcard build errors; the command is meant to set up the modules and imports.
Almost there! Create a `credentials.json` file by following the steps in the credentials.json section below.
Finally, build and run:

```
npm run build
node dist/runner.js
```
`context-extractor` takes several steps to extract relevant types and headers:
- Determine the type of the hole.
- Extract relevant types.
- Extract relevant headers.
- Optionally complete the hole with an LLM.
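The steps above can be sketched as a small pipeline. Note that the function names and return values below are hypothetical stand-ins for illustration, not the library's actual internals:

```typescript
// Hypothetical sketch of the extraction steps above; these helpers are
// illustrative stubs, not context-extractor's real implementation.
type HoleType = string;

// Step 1: determine the type of the hole in the sketch file.
function determineHoleType(sketchSource: string): HoleType {
  // Stubbed: the real extractor queries a language server for this.
  return "(user: User) => Booking[]";
}

// Step 2: collect type declarations relevant to the hole's type.
function extractRelevantTypes(holeType: HoleType): Map<string, string[]> {
  return new Map([["types.ts", ["type Booking = { time: number };"]]]);
}

// Step 3: collect headers (declarations) usable to fill the hole.
function extractRelevantHeaders(
  relevantTypes: Map<string, string[]>
): Map<string, string[]> {
  return new Map([["headers.ts", ["declare const bookings: Booking[];"]]]);
}

const holeType = determineHoleType("const f = _()");
const types = extractRelevantTypes(holeType);
const headers = extractRelevantHeaders(types);
console.log(holeType, types.size, headers.size);
```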
This library exposes the method `extractContext`, which has the following definition:

```typescript
const extractContext = async (
  language: Language,
  sketchPath: string,
  repoPath: string,
  credentialsPath: string,
  getCompletion: boolean
): Promise<{ context: Context | null, completion: string | null }>
```
```typescript
enum Language {
  TypeScript,
  OCaml
}

interface Context {
  hole: string,
  relevantTypes: Map<string, string[]>,
  relevantHeaders: Map<string, string[]>
}
```
- `sketchPath` is the full path to your sketch file with the typed hole construct (`_()` for TypeScript, `_` for OCaml).
- `repoPath` is the full path to your repository root.
- `credentialsPath` is the full path to your `credentials.json`.
- `getCompletion` is a flag to set if you want the LLM to complete the typed hole. This completion is saved in the `completion` field of the return result.

`null` values will only be set if something goes wrong internally. When `getCompletion` is set to `false`, the `completion` field's value will be an empty string.
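Putting the pieces together, a call might look like the following. The call itself is commented out because it needs the installed package and real paths; the result-handling part below is runnable against a hand-built value with the `Context` shape documented above:

```typescript
// Sketch of using extractContext; the paths below are placeholders.
// import { extractContext, Language } from "@jpoly1219/context-extractor";
//
// const { context, completion } = await extractContext(
//   Language.TypeScript,
//   "/path/to/repo/sketch.ts",   // file containing the _() hole
//   "/path/to/repo",
//   "/path/to/credentials.json",
//   false                        // skip LLM completion
// );

// Handling the result, assuming the Context shape documented above:
interface Context {
  hole: string;
  relevantTypes: Map<string, string[]>;
  relevantHeaders: Map<string, string[]>;
}

const context: Context = {
  hole: "(user: User) => Booking[]",
  relevantTypes: new Map([["types.ts", ["type Booking = { time: number };"]]]),
  relevantHeaders: new Map([["headers.ts", ["declare const bookings: Booking[];"]]]),
};

// Flatten the per-file maps into prompt-ready snippets.
const snippets: string[] = [];
for (const [file, entries] of context.relevantTypes) {
  for (const entry of entries) snippets.push(`// from ${file}\n${entry}`);
}
console.log(snippets.length);
```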
The extractor calls OpenAI for code completion. For this you need a `credentials.json` file that holds your OpenAI parameters. The JSON has the following format:
```json
{
  "apiBase": "<your-api-base-here>",
  "deployment": "<your-deployment-here>",
  "gptModel": "<your-gpt-model-here>",
  "apiVersion": "<your-api-version-here>",
  "apiKey": "<your-api-key-here>",
  "temperature": 0.6
}
```
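Before running, it can help to sanity-check the file for missing fields. Here is a minimal hypothetical checker (not part of the library) against the keys listed above:

```typescript
// Hypothetical sanity check for credentials.json; not part of
// context-extractor. Values below are placeholders.
const requiredKeys = [
  "apiBase", "deployment", "gptModel", "apiVersion", "apiKey", "temperature",
] as const;

function missingCredentialKeys(creds: Record<string, unknown>): string[] {
  return requiredKeys.filter((k) => !(k in creds));
}

const creds = {
  apiBase: "https://example.openai.azure.com",
  deployment: "my-deployment",
  gptModel: "gpt-4",
  apiVersion: "2024-02-01",
  apiKey: "<your-api-key-here>",
  temperature: 0.6,
};
console.log(missingCredentialKeys(creds));
```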
Internally, this is how the fields above are used when creating a new OpenAI client:

```typescript
const openai = new OpenAI({
  apiKey,
  baseURL: `${apiBase}/openai/deployments/${deployment}`,
  defaultQuery: { "api-version": apiVersion },
  defaultHeaders: { "api-key": apiKey }
})
```
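To see how a given credentials.json translates into client options, the construction can be previewed with plain objects. This is pure string and object building with placeholder values; no network call and no OpenAI dependency:

```typescript
// Preview the client options assembled from credentials.json fields.
// Values are placeholders; substitute your own credentials.
const creds = {
  apiBase: "https://example.openai.azure.com",
  deployment: "my-deployment",
  apiVersion: "2024-02-01",
  apiKey: "<your-api-key-here>",
};

const clientOptions = {
  apiKey: creds.apiKey,
  baseURL: `${creds.apiBase}/openai/deployments/${creds.deployment}`,
  defaultQuery: { "api-version": creds.apiVersion },
  defaultHeaders: { "api-key": creds.apiKey },
};
console.log(clientOptions.baseURL);
```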
We have a Visual Studio Code extension that provides a frontend to this project. The extension is not publicly available -- contact me at [email protected] to request a .vsix package.