feat(webapp): virtual background #29
base: main
Conversation
@huanghuang358 @Rocket1184 Review
I've tried the virtual background stream locally and it works like a charm. Nice work! I'm here to make some suggestions to further improve the maintainability of the codebase:
imageSegmenter = await ImageSegmenter.createFromOptions(vision, {
  baseOptions: {
    modelAssetPath: "https://storage.googleapis.com/mediapipe-models/image_segmenter/selfie_multiclass_256x256/float32/latest/selfie_multiclass_256x256.tflite",
Maybe we should download the model and put it under /static instead of using Google's URL.
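For example, roughly like this (a sketch only; the exact filename and path under /static are placeholders, not the project's actual layout):

imageSegmenter = await ImageSegmenter.createFromOptions(vision, {
  baseOptions: {
    // placeholder path: assumes the .tflite file has been downloaded into /static
    modelAssetPath: "/static/selfie_multiclass_256x256.tflite",
  },
  // ...other options unchanged
})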
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
)
It's definitely a bad idea to use a jsdelivr URL in production. Vite has the ability to import files as URLs: https://vite.dev/guide/assets#explicit-url-imports
So we may just import those files from node_modules.
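Something like the following sketch, assuming the wasm files from node_modules/@mediapipe/tasks-vision/wasm are copied into the served /static directory at build time (the copy step and the paths are assumptions, not what the repo does today):

// Serve the MediaPipe wasm files ourselves instead of hitting jsdelivr.
// Assumes node_modules/@mediapipe/tasks-vision/wasm is copied to
// /static/mediapipe/wasm during the build (e.g. via a static-copy plugin).
const vision = await FilesetResolver.forVisionTasks("/static/mediapipe/wasm")

// Vite's explicit URL import can likewise bundle the model file locally:
// import modelAssetPath from "./assets/selfie_multiclass_256x256.tflite?url"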
async function enableSegmentation(deviceId: string) {
  try {
    if (!imageSegmenter) {
      await createImageSegmenter()
    }
    // Start image segmentation
    const stream = await navigator.mediaDevices.getUserMedia({ audio: false, video: { width: 480, height: 360, deviceId: deviceId } })
    video.srcObject = stream
    video.onloadeddata = async () => {
      video.play()
      webcamRunning = true
      await predictWebcam()
      streamForVirtualBackground = canvas.captureStream()
    }
  } catch (error) {
    console.error("Failed to start the camera:", error);
  }
}
async functions can also return Promises, so there's no need to store streamForVirtualBackground globally and check it in an interval afterwards.
Suggested change:

async function enableSegmentation(deviceId: string): Promise<MediaStream | undefined> {
  try {
    if (!imageSegmenter) {
      await createImageSegmenter()
    }
    // Start image segmentation
    const stream = await navigator.mediaDevices.getUserMedia({ audio: false, video: { width: 480, height: 360, deviceId: deviceId } })
    return new Promise(resolve => {
      video.srcObject = stream
      video.onloadeddata = async () => {
        video.play()
        webcamRunning = true
        await predictWebcam()
        resolve(canvas.captureStream())
      }
    })
  } catch (error) {
    console.error("Failed to start the camera:", error);
  }
}
async function asyncGetStreamForVirtualBackground(deviceId: string): Promise<MediaStream> {
  await enableSegmentation(deviceId)
  while (streamForVirtualBackground === null) {
    await new Promise(resolve => setTimeout(resolve, 100)) // check every 100 ms
  }
  return streamForVirtualBackground
}
Suggested change:

async function asyncGetStreamForVirtualBackground(deviceId: string): Promise<MediaStream> {
  return enableSegmentation(deviceId)
}
let imageSegmenter: ImageSegmenter
let webcamRunning: boolean = false
let streamForVirtualBackground: MediaStream | null = null

const videoWidth = 480
const videoHeight = 360

// Create the background image element
const backgroundImage = new Image()
backgroundImage.src = './background.jpg'

// Initialize the video element
const video = document.createElement('video')
const canvas = document.createElement('canvas')
const canvasCtx = canvas.getContext('2d')!
There are too many global variables. Please consider wrapping all of that state and those actions into a class, maybe something like:
export class VirtualBackgroundStream {
  private deviceId: string
  private webcamRunning: boolean
  private imageSegmenter: ImageSegmenter
  // other members ...

  constructor(deviceId: string) {
    this.deviceId = deviceId
  }

  public startStream(): Promise<MediaStream> {
    // enableSegmentation ...
  }

  public destroyStream(): void {
    // disableSegmentation ...
  }
}
to make the lifecycle management clearer and future extensions easier.
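The call site could then be reduced to something like this (hypothetical usage of the sketch above; the names are not from the current code):

// Create one stream instance per camera device
const virtualBackground = new VirtualBackgroundStream(deviceId)
const stream = await virtualBackground.startStream()
// ...use `stream` wherever streamForVirtualBackground is used today...

// and when the user switches the feature off:
virtualBackground.destroyStream()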
Add a new feature according to #9.