Footprint / laser point detection for videos #70

Open
mzur opened this issue Feb 27, 2019 · 5 comments

mzur (Member) commented Feb 27, 2019

Jessica Nephin (DFO Canada) asked if laser point detection would become available for videos. I don't see a way for automatic detection but manual detection and footprint calculation should be possible (unless there is an oblique camera angle).

@mzur mzur changed the title Footprint / laser point detection Footprint / laser point detection for videos Jul 10, 2020
@mzur mzur transferred this issue from biigle/videos Jul 10, 2020
@mzur mzur added the discuss label Jul 10, 2020
@mzur mzur transferred this issue from biigle/core Feb 13, 2024
mzur (Member, Author) commented Dec 10, 2024

@ToukL here is the current plan for video laser point detection that we talked about.

Building blocks

  1. The actual automatic detection (Develop a better laser point detection algorithm #44), which will return (x,y) positions for the laser points for different video frames. This will probably be based on discrete sampling (e.g. 500 ms intervals of the video), so the laser point positions will behave like point video annotations with "points" and "frames". Users can also annotate laser points manually; these manual positions take precedence over the automatically detected positions. (See the detection sketch after this list.)

  2. The "measure box" which is a visual indicator of the region where measurements based on the laser points are somewhat accurate. This is similar to the export area of the reports module but differs in key aspects. The measure box is defined for each video (instead of the whole volume). Also, the measure box can move and change size over the duration of the video, like a rectangle video annotation. The measure box is automatically determined based on the automatic/manually chosen laser point positions and a "y axis padding" in pixels.

  3. Report post-processing. This is the part where we are not yet sure if it should be implemented in BIIGLE. To get an estimated size for each annotated object, the post-processing determines, for each annotation, the annotation coordinates ("points") that are closest to the laser point positions (maybe only considering the y axis for this?). The estimated annotation size is then computed from the annotation points at this frame and the pixel area estimated at this frame. As a first step, this post-processing can be implemented as a script that reads the CSV report and outputs a modified version (see the post-processing sketch after this list).
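
To make building block 1 a bit more concrete, here is a minimal sketch of the discrete-sampling part in Python, assuming OpenCV and red laser points. The function names, colour thresholds and sampling loop are only placeholders; the actual detection algorithm is what #44 is about.

```python
# Sketch of building block 1 (hypothetical names and thresholds; the real
# algorithm is tracked in #44). Assumes red laser points and OpenCV.
import cv2


def detect_laser_points(frame_bgr, min_area=2.0):
    """Return (x, y) centroids of bright red blobs in one frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 200), (10, 255, 255)) \
        | cv2.inRange(hsv, (170, 120, 200), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        m = cv2.moments(c)
        if m['m00'] >= min_area:
            points.append((m['m10'] / m['m00'], m['m01'] / m['m00']))
    return points


def sample_video(path, interval_ms=500):
    """Yield (time in ms, detected laser points) at discrete intervals."""
    cap = cv2.VideoCapture(path)
    t = 0.0
    while cap.set(cv2.CAP_PROP_POS_MSEC, t) and cap.grab():
        ok, frame = cap.retrieve()
        if not ok:
            break
        yield t, detect_laser_points(frame)
        t += interval_ms
    cap.release()
```

The output maps each sampled frame time to a list of (x, y) positions, which is exactly what point video annotations with "points" and "frames" can store; manually annotated laser points for a frame would simply replace the detected ones.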
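
Building block 2 could then derive the measure box for each sampled frame from these positions plus the configured y-axis padding. A sketch under the assumption that the box simply spans the laser point extent, padded on the y axis (one possible interpretation, not a settled design):

```python
# Sketch of building block 2: a per-frame measure box derived from the laser
# point positions plus the configurable y-axis padding (in pixels).
def measure_box(laser_points, y_padding):
    """Axis-aligned box [x1, y1, x2, y2] for one sampled frame.

    laser_points: list of (x, y) positions (detected or manually annotated).
    """
    xs = [x for x, _ in laser_points]
    ys = [y for _, y in laser_points]
    # One possible interpretation: span the full laser point extent on the
    # x axis and pad on the y axis.
    return [min(xs), min(ys) - y_padding, max(xs), max(ys) + y_padding]
```

Computed per sampled frame, the box behaves like a rectangle video annotation with keyframes, so it can move and change size over the duration of the video.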
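
For building block 3, a first version of the standalone script could look roughly like the sketch below. The CSV column names, the flat [x1, y1, x2, y2, ...] layout of the annotation points per keyframe, and the bounding-box size estimate are assumptions for illustration; the real report schema may differ.

```python
# Sketch of building block 3 as a standalone script that reads the video CSV
# report and writes a modified version. Column names, the flat
# [x1, y1, x2, y2, ...] point layout and the size estimate are assumptions.
import csv
import json
import math


def pixels_per_meter(laser_points, laser_distance_m):
    """Scale at one sampled frame from the first two laser points."""
    (x1, y1), (x2, y2) = laser_points[:2]
    return math.hypot(x2 - x1, y2 - y1) / laser_distance_m


def closest_keyframe(frames, points, lasers_by_frame):
    """Index of the annotation keyframe whose points lie closest (on the
    y axis) to the laser points of the nearest sampled frame."""
    def y_dist(i):
        laser_frame = min(lasers_by_frame, key=lambda f: abs(f - frames[i]))
        lasers = lasers_by_frame[laser_frame]
        laser_y = sum(y for _, y in lasers) / len(lasers)
        return min(abs(y - laser_y) for y in points[i][1::2])
    return min(range(len(frames)), key=y_dist)


def estimate_sizes(report_csv, out_csv, lasers_by_frame, laser_distance_m):
    """Append an estimated size in meters to every annotation row.

    lasers_by_frame: {frame_time: [(x, y), ...]} from the detection step
    (same time unit as the report's "frames" column).
    """
    with open(report_csv, newline='') as src, open(out_csv, 'w', newline='') as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ['size_m'])
        writer.writeheader()
        for row in reader:
            frames = json.loads(row['frames'])  # assumed column: keyframe times
            points = json.loads(row['points'])  # assumed: one flat list per keyframe
            i = closest_keyframe(frames, points, lasers_by_frame)
            laser_frame = min(lasers_by_frame, key=lambda f: abs(f - frames[i]))
            scale = pixels_per_meter(lasers_by_frame[laser_frame], laser_distance_m)
            xs, ys = points[i][0::2], points[i][1::2]
            # Rough size estimate: diagonal of the annotation's bounding box.
            size_px = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
            row['size_m'] = round(size_px / scale, 3)
            writer.writerow(row)
```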

UX

The video laser point detection can work similarly to the image detection. In a new tab of the video volume, users can specify a laser point label and a laser distance. Then they can start the detection process. If they want to use manually annotated laser points, they can create point annotations with the laser label before they start the detection process. The "y axis padding" of the "measure box" could also be configured here.

When the process is finished, users see non-editable circle annotations at the detected locations of the laser points throughout the video. Also, they see the automatically determined "measure box". Both the laser point circle annotations and the measure box visibility can be toggled in the annotation tool settings.

This is about it. The measure box is only a visual aid for annotation and not (yet) used for anything else.

Future

In the future the report post-processing may be directly implemented in the biigle/reports module.

Also the computed pixel area information from the laser point detection could be used to implement the ruler tool and measure tooltip for the video annotation tool.

As a somewhat separate issue, the video CSV report can include interpolated geolocations of the annotations. This would make use of the already uploadable but unused video metadata (related to biigle/core#462).
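
As a rough illustration of that interpolation (the metadata field layout is hypothetical):

```python
# Sketch of interpolating an annotation geolocation from time-stamped video
# metadata (field layout is hypothetical; see biigle/core#462).
from bisect import bisect_left


def interpolate_position(track, t):
    """Linearly interpolate (lat, lng) at video time t (seconds).

    track: list of (time_s, lat, lng) tuples, sorted by time.
    """
    times = [p[0] for p in track]
    i = bisect_left(times, t)
    if i == 0:
        return track[0][1], track[0][2]
    if i == len(track):
        return track[-1][1], track[-1][2]
    (t0, lat0, lng0), (t1, lat1, lng1) = track[i - 1], track[i]
    w = (t - t0) / (t1 - t0)
    return lat0 + w * (lat1 - lat0), lng0 + w * (lng1 - lng0)
```

Linear interpolation between neighboring track points should be good enough for closely spaced metadata; clamping at both ends avoids extrapolating beyond the recorded track.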

@mzur mzur moved this to Medium Priority in BIIGLE Roadmap Dec 10, 2024
@mzur mzur removed the discuss label Dec 10, 2024
ToukL commented Dec 12, 2024

> UX
>
> The video laser point detection can work similarly to the image detection. In a new tab of the video volume, users can specify a laser point label and a laser distance. Then they can start the detection process. If they want to use manually annotated laser points, they can create point annotations with the laser label before they start the detection process. The "y axis padding" of the "measure box" could also be configured here.

Thanks @mzur for the very clear plan and instructions. I've started building the UX bricks and I'm wondering: should we add a "video information" page corresponding to the route /videos/{id}, as for images, that would contain metadata info and the computed area once the laser point detection is done?
For the moment the route /videos/{id} redirects to '/videos/{id}/annotations' #1
Is it OK to modify existing routes or should I create a new one?

mzur (Member, Author) commented Dec 12, 2024

So at some point I want to move the image info view as a tab into the image annotation tool (biigle/core#620). What do you think, is it best kept as a separate view or as a tab in the annotation tool? Whatever you decide, feel free to implement this for videos. We have to check whether the plain /videos/:id route is used somewhere. As long as nothing breaks because of it, it could be used as a new route for a video info view.

ToukL commented Dec 12, 2024

Well, the annotation tool tab option seems like a good idea for the future, as it's more intuitive to get that info directly where the user does their annotations instead of going back to the volume view every time. The thing is, I guess it takes longer to implement than a copy of the image info view. So for now I will begin with the separate view, and maybe if I have time later I can help with the new tab?

mzur (Member, Author) commented Dec 13, 2024

Sounds good 👍
