Commit
Gemini version supports streaming output, compatible with Vision
fatwang2 committed Feb 24, 2024
1 parent 7de1ab6 commit 077e645
Showing 3 changed files with 281 additions and 215 deletions.
3 changes: 2 additions & 1 deletion README-EN.md
@@ -7,6 +7,7 @@
<a href="https://www.buymeacoffee.com/fatwang2" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>

# Version Updates
- V0.1.7, 20240224, Gemini version supports streaming output and is compatible with the vision model
- V0.1.6, 20240221, Supports Gemini model, can be temporarily configured through Cloudflare worker method
- V0.1.5, 20240205, Supports news search, making it more convenient to quickly browse news
- V0.1.4, 20240120, Supports one-click deployment with Zeabur, very convenient, highly recommended!
@@ -17,7 +18,7 @@
For more historical updates, please see [Version History](https://github.com/fatwang2/search2ai/releases)

# Product Introduction
- search2ai lets your LLM API go online for search, news, and web-page summarization. OpenAI and Gemini are already supported. The model decides from your input whether a web search is needed, rather than searching on every request. No plugins and no key changes are required: just replace the custom base URL in your usual OpenAI/Gemini third-party client. Self-deployment is also supported, and it does not affect other OpenAI features such as drawing and voice. Gemini's drawing feature is still being adapted.
- search2ai lets your LLM API go online for search, news, and web-page summarization. OpenAI and Gemini are already supported. The model decides from your input whether a web search is needed, rather than searching on every request. No plugins and no key changes are required: just replace the custom base URL in your usual OpenAI/Gemini third-party client. Self-deployment is also supported, and it does not affect other features such as drawing and voice.
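Since a client only swaps its base URL, the request it sends is an ordinary Gemini-style payload. A minimal sketch of what such a request looks like (the worker address, `buildRequest` helper, and endpoint shape are illustrative assumptions, not part of the project):

```javascript
// Sketch: a client keeps its normal Gemini request body and only points the
// base URL at the search2ai worker. Names here are illustrative assumptions.
function buildRequest(baseUrl, model, apiKey, userText) {
  return {
    url: `${baseUrl}/v1beta/models/${model}:generateContent`,
    headers: { 'x-goog-api-key': apiKey, 'content-type': 'application/json' },
    body: {
      contents: [{ role: 'user', parts: [{ text: userText }] }]
    }
  };
}

const req = buildRequest('https://your-worker.example.workers.dev', 'gemini-pro', 'KEY', 'hello');
console.log(req.url);
// → https://your-worker.example.workers.dev/v1beta/models/gemini-pro:generateContent
```

The point is that nothing else in the client changes: the same body, headers, and key are sent, just to a different host.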

<table>
<tr>
4 changes: 2 additions & 2 deletions README.md
@@ -8,6 +8,7 @@
<a href="https://www.buymeacoffee.com/fatwang2" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>

# Version Updates
- V0.1.7, 20240224, the Gemini version supports streaming output and is compatible with the vision model
- V0.1.6, 20240221, supports the Gemini model; for now it can be configured via a Cloudflare Worker
- V0.1.5, 20240205, supports news search, making it easier to browse news quickly
- V0.1.4, 20240120, supports one-click deployment with Zeabur, very convenient, highly recommended!
@@ -18,7 +19,7 @@
For more historical updates, see [Version History](https://github.com/fatwang2/search2ai/releases)

# Product Introduction
- search2ai lets your LLM API go online for search, news, and web-page summarization. OpenAI and Gemini are already supported. The model decides from your input whether a web search is needed, rather than searching on every request. No plugins and no key changes are required: just replace the custom base URL in your usual OpenAI/Gemini third-party client. Self-deployment is also supported, and it does not affect other OpenAI features such as drawing and voice. Gemini's drawing feature is still being adapted.
- search2ai lets your LLM API go online for search, news, and web-page summarization. OpenAI and Gemini are already supported. The model decides from your input whether a web search is needed, rather than searching on every request. No plugins and no key changes are required: just replace the custom base URL in your usual OpenAI/Gemini third-party client. Self-deployment is also supported, and it does not affect other features such as drawing and voice.

<table>
<tr>
@@ -126,7 +127,6 @@ http://localhost:3014/v1/chat/completions
To keep up with updates, you can also fork this project first and deploy it on Vercel yourself

# Roadmap
- Gemini streaming output support and compatibility with non-chat scenarios
- Support for Azure OpenAI
- Fix the streaming output issue in the Vercel project
- Improve streaming output speed
489 changes: 277 additions & 212 deletions gemini.js
@@ -1,6 +1,7 @@
addEventListener('fetch', event => {
event.respondWith(handleRequest(event.request))
})

const corsHeaders = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, OPTIONS', // allowed HTTP methods
@@ -36,10 +37,6 @@ addEventListener('fetch', event => {
}

async function parse_function_response(message) {
if (!message[0] || !message[0]["functionCall"]) {
console.log('Invalid message:', message);
return { function_name: 'ERROR', function_response: 'Invalid message' };
}
const function_call = message[0]["functionCall"];
const function_name = function_call["name"];

@@ -76,213 +73,281 @@ addEventListener('fetch', event => {
return {function_name, function_response};
}

async function fetchWithRetry(url, options, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
const response = await fetch(url, options);
if (response.ok) {
return response;
}
} catch (error) {
console.error(`Attempt ${i + 1} failed. Retrying...`);
}
}
throw new Error(`Failed to fetch after ${maxRetries} attempts`);
}

async function run_conversation(api_key, message) {
const date = new Date();
const timeZone = 'Asia/Shanghai';
const formatter = new Intl.DateTimeFormat('en-US', { dateStyle: 'full', timeZone });
const currentDate = formatter.format(date);
if (!message) {
console.log('Invalid message:', message);
return { error: 'Invalid message' };
}
const customMessage = [
{
"role":"user",
"parts":[
{
"text": `Today is ${currentDate}.You are a friendly intelligent assistant with the ability to search online, hopefully you will go online when the user asks for something that requires internet access, otherwise just answer, try to be as simple and clear as possible when answering the user's question, and you can use emoji to make your conversations more interesting!`
}
]
},
{
"role": "model",
"parts":[
{
"text": "okay"
}
]
},
];
message = [...customMessage, ...message];
console.log('Running conversation with message:', message);
const originalMessage = [...message];
let functionResponseJson;

const definitions = [
{
"name": "search",
"description": "search on the Interent when the users want something new to know",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The query to search"
}
}
}
}
];

const data = {
"contents": message,
"tools": [{
"functionDeclarations": definitions
}]
};

const response = await fetch("https://gemini.sum4all.site/v1beta/models/gemini-pro:generateContent?key="+api_key, {
method: 'POST',
body: JSON.stringify(data)
});

if (!response.ok) {
console.log('Received error response from run_conversation');
// Changed: return a Response object representing the error
return new Response(JSON.stringify({ error: "Error fetching from Google Language API" }), {
headers: { ...corsHeaders,'content-type': 'application/json' },
status: 500 // Internal Server Error
});
}
console.log('Received successful response from run_conversation');

let responseJson = await response.json();

if (!responseJson["candidates"][0]["content"]) {
console.log("ERROR: No content in response");
console.log(responseJson);
return;
}

message = responseJson["candidates"][0]["content"]["parts"];

if (message[0]["functionCall"]) {
const {function_name, function_response} = await parse_function_response(message);

const functionResponseData = {
"contents": [

...originalMessage
,
{
"role": "model",
"parts": [
...message,]
},{
"role": "function",
"parts": [{
"functionResponse": {
"name": function_name,
"response": {
"name": function_name,
"content": function_response
}
}
}]
}],
"tools": [{
"functionDeclarations": definitions
}]
};
console.log('functionResponseData:', functionResponseData);
const functionResponse = await fetch("https://gemini.sum4all.site/v1beta/models/gemini-pro:generateContent?key="+api_key, {
method: 'POST',
body: JSON.stringify(functionResponseData)
});

if (!functionResponse.ok) {
console.log('Received error response from run_conversation');
return;
}

functionResponseJson = await functionResponse.json();

if (!functionResponseJson["candidates"][0]["content"]) {
console.log("ERROR: No content in response");
console.log(functionResponseJson);
return new Response(JSON.stringify({ error: "No content received from Google Language API"}), {
headers: { ...corsHeaders,'content-type': 'application/json' },
status: 400 // Bad Request
});
}
} else {
functionResponseJson = responseJson;
}
// Wrap the result in a Response object and return it
return new Response(JSON.stringify(functionResponseJson), {
headers: { ...corsHeaders,'content-type': 'application/json' },
status: 200,
});
}

// Main HTTP request handler
async function handleRequest(request) {
console.log('[handleRequest] Request received', { method: request.method, url: request.url });

// Create a new response with the CORS headers set
if (request.method === 'OPTIONS') {
console.log('[handleRequest] Preparing CORS preflight response.');
const response = new Response(null, {
status: 204, // OPTIONS requests typically return 204 No Content
headers: corsHeaders
});

// Log the response headers
console.log('[handleRequest] CORS preflight response headers:', JSON.stringify([...response.headers]));
return response;
}

// Parse the path portion of the request URL
const url = new URL(request.url);
const path = url.pathname;

if (path.includes('/v1/models/gemini-pro')) {
console.log('[handleRequest] Handling gemini-pro request.');

// Extract the API key and request body
const api_key = request.headers.get('x-goog-api-key');
let message;
try {
const requestBody = await request.text(); // read the raw request body
console.log('[handleRequest] Request body:', requestBody);
message = JSON.parse(requestBody).contents; // parse the JSON contents
} catch (error) {
console.error('[handleRequest] Error parsing request body:', error.message);
return new Response(JSON.stringify({ error: 'Bad JSON in request' }), {
headers: { ...corsHeaders,'content-type': 'application/json' },
status: 400,
});
}
async function run_conversation(api_key, message, isStream, isSSE) {
const date = new Date();
const timeZone = 'Asia/Shanghai';
const formatter = new Intl.DateTimeFormat('en-US', { dateStyle: 'full', timeZone });
const currentDate = formatter.format(date);
if (!message) {
console.log('Invalid message:', message);
return errorResponse('Invalid message', 400);
}
const customMessage = [
{
"role":"user",
"parts":[
{
"text": `Today is ${currentDate}.You are a friendly intelligent assistant with the ability to search online, hopefully you will go online when the user asks for something that requires internet access, otherwise just answer, try to be as simple and clear as possible when answering the user's question, and you can use emoji to make your conversations more interesting!`
}
]
},
{
"role": "model",
"parts":[
{
"text": "okay"
}
]
},
];
message = [...customMessage, ...message];
console.log('Running conversation with message:', message);
const originalMessage = [...message];

const definitions = [
{
"name": "search",
"description": "search on the Interent when the users want something new to know",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The query to search"
}
}
}
}
];
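Given the `search` declaration above, the model replies with a `functionCall` part that carries the declared argument names. A sketch of how such a part is unpacked, mirroring what `parse_function_response` does (the helper name is ours):

```javascript
// Sketch: unpack a Gemini functionCall part into a name and its arguments,
// mirroring the checks parse_function_response performs on message[0].
function unpackFunctionCall(parts) {
  if (!parts[0] || !parts[0].functionCall) {
    return { function_name: 'ERROR', args: null };
  }
  const call = parts[0].functionCall;
  return { function_name: call.name, args: call.args };
}

// A functionCall part shaped like the "search" declaration above.
const parts = [{ functionCall: { name: 'search', args: { query: 'latest AI news' } } }];
console.log(unpackFunctionCall(parts));
// → { function_name: 'search', args: { query: 'latest AI news' } }
```

Parts with plain `text` instead of `functionCall` take the error branch, which is why the worker checks `message[0]["functionCall"]` before dispatching.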

const data = {
"contents": message,
"tools": [{
"functionDeclarations": definitions
}]
};
const api_url = isStream ?
"https://gemini.sum4all.site/v1beta/models/gemini-pro:streamGenerateContent?key="+api_key :
"https://gemini.sum4all.site/v1beta/models/gemini-pro:generateContent?key="+api_key;

const response = await fetchWithRetry(api_url,{
method: 'POST',
body: JSON.stringify(data)
});
// Log the response status
console.log('Response status:', response.status);
console.log('Response ok:', response.ok);
console.log('Response status text:', response.statusText);
if (!response.ok) {
console.log('Received error response from run_conversation');
// Changed: return a Response object representing the error
return new Response(JSON.stringify({ error: "Error fetching from Google Language API" }), {
headers: { ...corsHeaders,'content-type': 'application/json' },
status: 500 // Internal Server Error
});
}
console.log('Received successful response from run_conversation');
let responseJson = await response.json();
console.log('Response body:', responseJson);

let responseContent;
if (isStream) {
// In the streaming case, candidates arrive as an array of chunks
if (!responseJson?.[0]?.["candidates"] || responseJson[0]["candidates"].length === 0) {
console.log("ERROR: No candidates in response");
return new Response(JSON.stringify({ error: "No candidates in response" }), {
headers: { ...corsHeaders,'content-type': 'application/json' },
status: 500 // Internal Server Error
});
}
responseContent = responseJson[0]["candidates"][0]["content"];
message = responseContent["parts"];
if (!message[0] || !message[0]["functionCall"]) {
console.log("No functionCall in message, returning initial content");
const encoder = new TextEncoder();
const stream = new ReadableStream({
async start(controller) {
for (let i = 0; i < responseJson.length; i++) {
if (isSSE) {
// SSE format
controller.enqueue(encoder.encode(`data: ${JSON.stringify(responseJson[i])}\n\n`));
} else {
// JSON format
controller.enqueue(encoder.encode(i === 0 ? '[' : ','));
controller.enqueue(encoder.encode(JSON.stringify(responseJson[i])));
}
await new Promise(resolve => setTimeout(resolve, 100));
}
if (!isSSE) {
controller.enqueue(encoder.encode(']'));
}
controller.close();
}
});
return new Response(stream, {
headers: { ...corsHeaders,'content-type': isSSE ? 'text/event-stream' : 'application/json' },
status: response.status
});
}
} else {
// In the non-streaming case, candidates is a single object
if (!responseJson["candidates"]) {
console.log("ERROR: No candidates in response");
return new Response(JSON.stringify({ error: "No candidates in response" }), {
headers: { ...corsHeaders,'content-type': 'application/json' },
status: 500 // Internal Server Error
});
}
responseContent = responseJson["candidates"][0]["content"];
message = responseContent["parts"];
if (!message[0] || !message[0]["functionCall"]) {
console.log("No functionCall in message, returning initial content");
return new Response(JSON.stringify(responseJson), {
headers: { ...corsHeaders,'content-type': 'application/json' },
status: response.status
});
}
}

if (message[0]["functionCall"]) {
const {function_name, function_response} = await parse_function_response(message);

const functionResponseData = {
"contents": [

...originalMessage
,
{
"role": "model",
"parts": [
...message,]
},{
"role": "function",
"parts": [{
"functionResponse": {
"name": function_name,
"response": {
"name": function_name,
"content": function_response
}
}
}]
}],
"tools": [{
"functionDeclarations": definitions
}]
};
console.log('functionResponseData:', functionResponseData);
const functionResponse = await fetchWithRetry(isSSE ? `${api_url}&alt=sse` : api_url, {
method: 'POST',
body: JSON.stringify(functionResponseData)
});

if (!functionResponse.ok) {
console.log('Received error response from run_conversation');
return new Response(JSON.stringify({ error: "Error fetching from Gemini API" }), {
headers: { ...corsHeaders, 'content-type': 'application/json' },
status: functionResponse.status
});
}

// Forward Gemini's streaming response directly
return new Response(functionResponse.body, {
status: functionResponse.status,
headers: {
...corsHeaders,
'Content-Type': isSSE ? 'text/event-stream' : 'application/json'
}
});
}
}
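The streaming branch above frames chunks in one of two ways, chosen by the client's `alt=sse` query parameter: SSE events (`data: ...` followed by a blank line) or a single JSON array built with `[`, `,`, and `]` separators. A self-contained sketch of the two framings (the `frameChunks` helper is ours):

```javascript
// Sketch of the two stream framings used in run_conversation above:
// SSE events versus one JSON array, selected by the isSSE flag.
function frameChunks(chunks, isSSE) {
  if (isSSE) {
    // SSE: each chunk becomes one "data: <json>\n\n" event.
    return chunks.map(c => `data: ${JSON.stringify(c)}\n\n`).join('');
  }
  // JSON framing: '[' before the first chunk, ',' between chunks, ']' at the end.
  return '[' + chunks.map(c => JSON.stringify(c)).join(',') + ']';
}

const chunks = [{ text: 'Hel' }, { text: 'lo' }];
console.log(frameChunks(chunks, true));
// data: {"text":"Hel"}
//
// data: {"text":"lo"}
//
console.log(frameChunks(chunks, false)); // [{"text":"Hel"},{"text":"lo"}]
```

The worker also sets `content-type` to match: `text/event-stream` for SSE, `application/json` for the array form.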

try {
// Call run_conversation and get its response
const response = await run_conversation(api_key, message);

// Check the response type and handle it
if (response instanceof Response) {
console.log('[handleRequest] run_conversation provided a response object.');
return response;
} else {
console.error('[handleRequest] run_conversation returned an unexpected response type.');
throw new Error('Invalid response type from run_conversation');
}
} catch (error) {
// Catch the error and return an error response
console.error('[handleRequest] Error during request handling:', error.message);
return new Response(JSON.stringify({ error: error.message }), {
headers: { ...corsHeaders,'content-type': 'application/json' },
status: 500,
});
}
} else {
// Handle requests that do not match the expected path
console.log('[handleRequest] Request not found for path:', path);
return new Response(JSON.stringify({ error: 'Not found' }), {
headers: { ...corsHeaders,
'content-type': 'application/json' },
status: 404,
});
}
}

// Main HTTP request handler
async function handleRequest(request) {
// Handle CORS preflight requests
if (request.method === 'OPTIONS') {
return handleCorsPreflight();
}

// Parse the request path
const url = new URL(request.url);
const path = url.pathname;
// If the path contains '/models/gemini-pro-vision', forward the request directly
if (path.includes('/models/gemini-pro-vision')) {
// Create a new request object that copies everything from the original request
const index = path.indexOf('/models');
const newRequest = new Request('https://gemini.sum4all.site/v1beta' + path.substring(index), {
method: request.method,
headers: request.headers,
body: request.body,
redirect: request.redirect
});

// Send the new request with the fetch API
const response = await fetch(newRequest);

// Return the response as-is
return response;
}
if (!path.includes('/models/gemini-pro')) {
return jsonResponse({ error: 'Not found' }, 404);
}
// Path matched; extract the API key
let api_key = request.headers.get('x-goog-api-key');

// Check if 'Authorization' header exists and starts with 'Bearer '
let authHeader = request.headers.get('Authorization');
if (authHeader && authHeader.startsWith('Bearer ')) {
// Extract the api key from the 'Authorization' header
api_key = authHeader.slice(7);
}

try {
// Parse the request body
const requestBody = await request.json(); // parse the JSON body with request.json()
const isStream = path.includes('streamGenerateContent');
const isSSE = url.searchParams.get('alt') === 'sse';

// Call run_conversation and return its response directly
return await run_conversation(api_key, requestBody.contents, isStream, isSSE);
} catch (error) {
// Body parsing failed or run_conversation threw
console.error('[handleRequest] Error:', error.message);
return errorResponse(error.message, 500);
}
}
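The key resolution in `handleRequest` above gives a Bearer token in `Authorization` precedence over the `x-goog-api-key` header. A sketch of that order, factored into a helper of our own naming (a `Map` stands in for the case-insensitive `Headers` object here):

```javascript
// Sketch of the key-resolution order used by handleRequest above: a Bearer
// token in Authorization overrides the x-goog-api-key header if both exist.
function resolveApiKey(headers) {
  let apiKey = headers.get('x-goog-api-key');
  const auth = headers.get('Authorization');
  if (auth && auth.startsWith('Bearer ')) {
    apiKey = auth.slice(7); // strip the "Bearer " prefix
  }
  return apiKey;
}

// Usage with a Headers-like map (real Headers lookups are case-insensitive):
console.log(resolveApiKey(new Map([['Authorization', 'Bearer abc123']]))); // → abc123
console.log(resolveApiKey(new Map([['x-goog-api-key', 'k1']]))); // → k1
```

Accepting both headers lets OpenAI-style clients (which send `Authorization: Bearer ...`) and Gemini-style clients (which send `x-goog-api-key`) use the same worker.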
function handleCorsPreflight() {
// Handle the CORS preflight request
return new Response(null, {
status: 204,
headers: corsHeaders,
});
}

function jsonResponse(body, status = 200) {
// Helper to build a JSON response
return new Response(JSON.stringify(body), {
headers: { ...corsHeaders, 'Content-Type': 'application/json' },
status,
});
}
function errorResponse(message, statusCode = 400) {
return new Response(JSON.stringify({ error: message }), {
status: statusCode,
headers: { ...corsHeaders, 'Content-Type': 'application/json' }, // include CORS headers so browser clients can read the error
});
}
