705 refactor handlechattogpt to remove mutation #743

Merged
merged 33 commits on Jan 23, 2024
Changes from 23 commits
Commits
33 commits
d6939b4
refactor chatGptCallFunction
heatherlogan-scottlogic Jan 4, 2024
a75cdef
pass history/email vars through openAI functions
heatherlogan-scottlogic Jan 5, 2024
5c06e23
remove chatResponse from tool call functions
heatherlogan-scottlogic Jan 5, 2024
2270d13
start refactor on chatGptSendMessage
heatherlogan-scottlogic Jan 5, 2024
904d74a
begin to remove mutations from controller functions
heatherlogan-scottlogic Jan 5, 2024
b9bd1bb
merge dev
heatherlogan-scottlogic Jan 9, 2024
237b657
dont return defences from chatGptSendMessage
heatherlogan-scottlogic Jan 9, 2024
06cc5de
rebase
heatherlogan-scottlogic Jan 9, 2024
b704fcb
update tests
heatherlogan-scottlogic Jan 9, 2024
d450106
rebase frontend changes & tidy up
heatherlogan-scottlogic Jan 9, 2024
ac428ed
fix alerted defences showing
heatherlogan-scottlogic Jan 9, 2024
644e1e2
fix user message added to history when transformed
heatherlogan-scottlogic Jan 10, 2024
1f289dc
set chatHistory to gptreply.chatHistory in getFinalReplyAfterAllToolC…
heatherlogan-scottlogic Jan 10, 2024
bc6bcef
Address some PR comments
heatherlogan-scottlogic Jan 11, 2024
df7ae2c
remove blocked and defence report from handleChatError as it is not used
heatherlogan-scottlogic Jan 11, 2024
e6e8150
save chat history to session on error
heatherlogan-scottlogic Jan 11, 2024
c9d1ccd
address PR comments
heatherlogan-scottlogic Jan 12, 2024
8438fe1
remove defenceReport from ChatResponse returned by openai
heatherlogan-scottlogic Jan 12, 2024
c84b24e
merge dev
pmarsh-scottlogic Jan 15, 2024
5968b84
removes defenceReport from LevelHandlerResponse interface
pmarsh-scottlogic Jan 15, 2024
45e2a41
Merge branch 'dev' into 705-refactor-handlechattogpt-to-remove-mutation
pmarsh-scottlogic Jan 17, 2024
b2f1a42
708 move logic for detecting output defence bot filtering (#740)
gsproston-scottlogic Jan 18, 2024
32fcc94
adds imports to test files to fix linting
pmarsh-scottlogic Jan 18, 2024
32c2c56
improve comment
pmarsh-scottlogic Jan 19, 2024
9e5ad27
update name from high or low level chat to chat with or without defen…
pmarsh-scottlogic Jan 19, 2024
1c0a15d
removes stale comments
pmarsh-scottlogic Jan 19, 2024
350ae26
moves sentEmails out of LevelHandlerResponse and uses the property in…
pmarsh-scottlogic Jan 19, 2024
1cdff7b
changed an if to an else if
pmarsh-scottlogic Jan 19, 2024
737dabb
unspread that spread
pmarsh-scottlogic Jan 19, 2024
9723a0a
return combined report without declaring const first
pmarsh-scottlogic Jan 19, 2024
404a86c
makes chatResponse decleration more concise with buttery spreads
pmarsh-scottlogic Jan 19, 2024
7142e47
updates comment
pmarsh-scottlogic Jan 19, 2024
ae17a08
remove linter rule about uninitialised let statements. changed the le…
pmarsh-scottlogic Jan 23, 2024
350 changes: 218 additions & 132 deletions backend/src/controller/chatController.ts

Large diffs are not rendered by default.

18 changes: 7 additions & 11 deletions backend/src/controller/handleError.ts
@@ -7,26 +7,22 @@ function sendErrorResponse(
statusCode: number,
errorMessage: string
) {
res.status(statusCode);
res.send(errorMessage);
res.status(statusCode).send(errorMessage);
}

function handleChatError(
res: Response,
chatResponse: ChatHttpResponse,
blocked: boolean,
errorMsg: string,
statusCode = 500
Member

Currently, this function is only ever called with blocked=true. Do you foresee cases where blocked would be false?

Contributor Author

Hmm, true. I think at one point we did distinguish between blocked messages and error messages for styling, but I don't think we need this defence information here now.

Member

@chriswilty Jan 19, 2024

@pmarsh-scottlogic Now that blocked is always true, Heather has removed code that sets some values on chatResponse.defenceReport. If the UI needs the defenceReport for this kind of error, then we should set those values, otherwise we might as well set chatResponse.defenceReport to null to save on payload size.

Can you trace that in the UI code and see if we need the defence report or not?

Contributor

I suppose you mean isError is always true when we've sent the response via handleChatError.

Anyway, luckily there's not much tracing to do: all you need to do is look at processChatResponse in ChatBox.tsx. Yes, when isError is true, the defenceReport is irrelevant. But the defenceReport is relevant if isError is false. And in the frontend, we don't know ahead of time whether sending a message will cause an error or not, so we still need both isError and defenceReport in ChatHttpResponse.
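
For context, the kind of branching processChatResponse does there looks roughly like this (a sketch only, not the actual ChatBox.tsx code; the show* helpers are hypothetical):

```ts
// Illustrative sketch only, not the real ChatBox.tsx implementation.
function processChatResponse(response: ChatHttpResponse) {
	if (response.isError) {
		// Error case: the defenceReport is irrelevant, just surface the error reply.
		showErrorMessage(response.reply);
	} else if (response.defenceReport.isBlocked) {
		// Blocked case: show the block reason instead of the bot reply.
		showBlockedMessage(response.defenceReport.blockedReason ?? '');
	} else {
		showBotReply(response.reply);
	}
}
```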

Contributor

@pmarsh-scottlogic Jan 22, 2024

I'm just checking the removed code now. Ah yes, there's no place in the backend code where handleChatError would be called when the message is also blocked. This is a good semantic improvement!

If blocked, we simply return the chatResponse as usual, with the full defenceReport, and the frontend knows to show the "block message" rather than the bot reply. Same case if blocked and there's an error (empty reply from bot, or presence of OpenAIErrorMessage): it will return the chatResponse without an error code, which is good. As far as the user is concerned, if the message was blocked by a defence then the bot never provided a reply (at least not one that they were shown).
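
Put as code, the resulting controller flow is roughly the following (a sketch with illustrative names and an abbreviated handleChatError call; see chatController.ts for the real thing):

```ts
// Sketch of the blocked-vs-error handling described above, not the literal controller code.
if (chatResponse.defenceReport.isBlocked) {
	// Blocked by a defence: respond normally (200) with the full defenceReport,
	// so the frontend shows the block message instead of a bot reply.
	res.send(chatResponse);
} else if (openAIErrorMessage) {
	// A genuine error is the only path into handleChatError,
	// which sets isError and sends an error status code.
	handleChatError(res, chatResponse, openAIErrorMessage);
} else {
	res.send(chatResponse);
}
```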

Member

Does this mean we could send null / undefined for defenceReport when we have a chat error? If so, then defenceReport in ChatHttpResponse would need to be optional, which is maybe not great for type-safety. In that case, maybe we could split the response into two types: ChatHttpResponse and ChatHttpErrorResponse.

That might be one to think about for future improvements though!
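
If that future improvement is ever picked up, one possible shape for the split, sketched from the fields discussed in this thread (the real ChatHttpResponse has more fields than shown here):

```ts
// Hypothetical future split, not part of this PR.
interface ChatHttpResponse {
	reply: string;
	defenceReport: ChatDefenceReport; // always present on a non-error response
	isError: false;
	sentEmails: EmailInfo[];
	// ...plus the other existing fields
}

interface ChatHttpErrorResponse {
	reply: string; // the error message
	isError: true;
	// no defenceReport, keeping error payloads small
}
```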

Contributor

The defenceReport in the response is necessarily not null/undefined! It just might be "empty", that is, it might have a null blockedReason or an empty triggeredDefences list. That means the response is bigger than it needs to be when defences are not active, but it's fine for now.
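
For illustration, an "empty" report for a message that triggers nothing would look roughly like this (only the fields mentioned above are shown):

```ts
// Roughly what an "empty" defence report looks like when no defences fire.
const emptyDefenceReport = {
	isBlocked: false,
	blockedReason: null,
	triggeredDefences: [],
};
```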

) {
console.error(errorMsg);
chatResponse.reply = errorMsg;
chatResponse.defenceReport.isBlocked = blocked;
chatResponse.isError = true;
if (blocked) {
chatResponse.defenceReport.blockedReason = errorMsg;
}
res.status(statusCode);
res.send(chatResponse);
const updatedChatResponse = {
...chatResponse,
reply: errorMsg,
isError: true,
};
res.status(statusCode).send(updatedChatResponse);
}

export { sendErrorResponse, handleChatError };
94 changes: 62 additions & 32 deletions backend/src/defence.ts
@@ -256,41 +256,32 @@ function transformMessage(
message: string,
defences: Defence[]
): TransformedChatMessage | null {
if (isDefenceActive(DEFENCE_ID.XML_TAGGING, defences)) {
const transformedMessage = transformXmlTagging(message, defences);
console.debug(
`Defences applied. Transformed message: ${combineTransformedMessage(
transformedMessage
)}`
);
return transformedMessage;
} else if (isDefenceActive(DEFENCE_ID.RANDOM_SEQUENCE_ENCLOSURE, defences)) {
const transformedMessage = transformRandomSequenceEnclosure(
message,
defences
);
console.debug(
`Defences applied. Transformed message: ${combineTransformedMessage(
transformedMessage
)}`
);
return transformedMessage;
} else if (isDefenceActive(DEFENCE_ID.INSTRUCTION, defences)) {
const transformedMessage = transformInstructionDefence(message, defences);
console.debug(
`Defences applied. Transformed message: ${combineTransformedMessage(
transformedMessage
)}`
);
return transformedMessage;
} else {
const transformedMessage = isDefenceActive(DEFENCE_ID.XML_TAGGING, defences)
? transformXmlTagging(message, defences)
: isDefenceActive(DEFENCE_ID.RANDOM_SEQUENCE_ENCLOSURE, defences)
? transformRandomSequenceEnclosure(message, defences)
: isDefenceActive(DEFENCE_ID.INSTRUCTION, defences)
? transformInstructionDefence(message, defences)
: null;

if (!transformedMessage) {
console.debug('No defences applied. Message unchanged.');
return null;
}

console.debug(
`Defences applied. Transformed message: ${combineTransformedMessage(
transformedMessage
)}`
);
return transformedMessage;
}

// detects triggered defences in original message and blocks the message if necessary
async function detectTriggeredDefences(message: string, defences: Defence[]) {
async function detectTriggeredInputDefences(
message: string,
defences: Defence[]
) {
const singleDefenceReports = [
detectCharacterLimit(message, defences),
detectFilterUserInput(message, defences),
@@ -301,6 +292,12 @@ async function detectTriggeredDefences(message: string, defences: Defence[]) {
return combineDefenceReports(singleDefenceReports);
}

// detects triggered defences in bot output and blocks the message if necessary
function detectTriggeredOutputDefences(message: string, defences: Defence[]) {
const singleDefenceReports = [detectFilterBotOutput(message, defences)];
return combineDefenceReports(singleDefenceReports);
}

function combineDefenceReports(
defenceReports: SingleDefenceReport[]
): ChatDefenceReport {
@@ -389,6 +386,40 @@ function detectFilterUserInput(
};
}

function detectFilterBotOutput(
message: string,
defences: Defence[]
): SingleDefenceReport {
const detectedPhrases = detectFilterList(
message,
getFilterList(defences, DEFENCE_ID.FILTER_BOT_OUTPUT)
);

const filterWordsDetected = detectedPhrases.length > 0;
const defenceActive = isDefenceActive(DEFENCE_ID.FILTER_BOT_OUTPUT, defences);

if (filterWordsDetected) {
console.debug(
`FILTER_BOT_OUTPUT defence triggered. Detected phrases from blocklist: ${detectedPhrases.join(
', '
)}`
);
}

return {
defence: DEFENCE_ID.FILTER_BOT_OUTPUT,
blockedReason:
filterWordsDetected && defenceActive
? 'My original response was blocked as it contained a restricted word/phrase. Ask me something else. '
: null,
status: !filterWordsDetected
? 'ok'
: defenceActive
? 'triggered'
: 'alerted',
};
}

function detectXmlTagging(
message: string,
defences: Defence[]
@@ -444,12 +475,11 @@ export {
configureDefence,
deactivateDefence,
resetDefenceConfig,
detectTriggeredDefences,
detectTriggeredInputDefences,
detectTriggeredOutputDefences,
getQAPromptFromConfig,
getSystemRole,
isDefenceActive,
transformMessage,
getFilterList,
detectFilterList,
combineTransformedMessage,
};
35 changes: 32 additions & 3 deletions backend/src/models/chat.ts
@@ -1,4 +1,7 @@
import { ChatCompletionMessageParam } from 'openai/resources/chat/completions';
import {
ChatCompletionMessage,
ChatCompletionMessageParam,
} from 'openai/resources/chat/completions';

import { DEFENCE_ID } from './defence';
import { EmailInfo } from './email';
@@ -60,6 +63,18 @@ interface SingleDefenceReport {
status: 'alerted' | 'triggered' | 'ok';
}

interface FunctionCallResponse {
completion: ChatCompletionMessageParam;
wonLevel: boolean;
sentEmails: EmailInfo[];
}

interface ToolCallResponse {
functionCallReply?: FunctionCallResponse;
chatResponse?: ChatResponse;
chatHistory: ChatHistoryMessage[];
}

interface ChatAnswer {
reply: string;
questionAnswered: boolean;
@@ -72,11 +87,16 @@ interface ChatMalicious {

interface ChatResponse {
completion: ChatCompletionMessageParam | null;
defenceReport: ChatDefenceReport;
wonLevel: boolean;
openAIErrorMessage: string | null;
}

interface ChatGptReply {
chatHistory: ChatHistoryMessage[];
completion: ChatCompletionMessage | null;
openAIErrorMessage: string | null;
}

interface TransformedChatMessage {
preMessage: string;
message: string;
@@ -94,10 +114,15 @@ interface ChatHttpResponse {
sentEmails: EmailInfo[];
}

interface LevelHandlerResponse {
chatResponse: ChatHttpResponse;
chatHistory: ChatHistoryMessage[];
sentEmails: EmailInfo[];
}

interface ChatHistoryMessage {
completion: ChatCompletionMessageParam | null;
chatMessageType: CHAT_MESSAGE_TYPE;
numTokens?: number | null;
infoMessage?: string | null;
}

@@ -115,11 +140,15 @@ const defaultChatModel: ChatModel = {
export type {
ChatAnswer,
ChatDefenceReport,
ChatGptReply,
ChatMalicious,
ChatResponse,
LevelHandlerResponse,
ChatHttpResponse,
ChatHistoryMessage,
TransformedChatMessage,
FunctionCallResponse,
ToolCallResponse,
};
export {
CHAT_MODELS,