Agent Annotator

Agent Annotator is a tool to annotate (label) the location of known-geometry objects in video frames for use in machine learning. It has two parts.

annotator-backend contains a Rust Rocket web server that ingests videos, splits them into frames, allows administrators to create projects and invite users, serves the annotator interface to users, and stores and serves annotated video frames.

annotator-interface contains a React app that communicates with the backend to load frames from a single video and allows users to precisely place object templates on each frame. It is optimized for ease of use: objects default to their location in the previous frame, and there is robust keyboard support.

The release version of annotator-interface is stored within annotator-backend, so all that's required for deployment is annotator-backend.

Agent templates are currently hard-coded into annotator-interface. Modify annotator-interface/src/agents.json to add a new agent.

Deploying

  1. Install Rust and Cargo.
  2. Set up a Postgres database. This can be a managed service or a manual installation.
  3. Clone this repo to the server; cd into annotator-backend.
  4. Copy Rocket.example.toml to Rocket.toml and fill in a secret_key, the data_path to your folder of video files, and the Postgres connection details.
  5. Run cargo run to build and launch the server. Database tables will be set up on first launch.
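
Condensed into a shell session, the steps above look roughly like this (the rustup command and paths are examples; adjust for your environment):

    # 1. Install Rust and Cargo, e.g. via rustup
    curl https://sh.rustup.rs -sSf | sh

    # 2. Set up Postgres (managed or manual) and note its connection details

    # 3. Clone the repo and enter the backend
    git clone https://github.com/beiju/agent-annotator.git
    cd agent-annotator/annotator-backend

    # 4. Create Rocket.toml from the template, then fill in secret_key,
    #    data_path, and the Postgres connection details
    cp Rocket.example.toml Rocket.toml

    # 5. Build and launch; database tables are set up on first launch
    cargo run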

Updating annotator-interface

After making changes to annotator-interface, run npm run build within the annotator-interface folder to create a new build. Copy the result to annotator-backend/public/annotator.
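
A minimal sketch of that workflow, assuming the build output lands in annotator-interface/build (some React toolchains write to dist instead):

    cd annotator-interface
    npm install
    npm run build
    # Copy the fresh build over the bundled copy served by the backend
    cp -r build/* ../annotator-backend/public/annotator/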

Video folder structure

The annotator expects files in the format generated by the JHU IMERSE MagnetoSuture System. There is a folder for each recording session; this is the granularity at which experiments are assigned to projects. Within each session folder there is a folder for each trial run. The layout is as follows:

data_path/
  session_folder/
    trial_folder/
      camera.avi-0000.avi
      data.csv
    ...
  ...

camera.avi-0000.avi should be the video to be annotated (the strange file name is a quirk of the recording software). data.csv should be a CSV file with a row for each frame of video and at least the columns video_frame_number, indicating the latest frame of video for that row (there can be multiple rows with the same number), and video_camera_timestamp, indicating the timestamp at which the indicated frame was captured. Neither number needs to start at zero.
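
For illustration only, a hypothetical data.csv excerpt (the values and timestamp format are made up; the real format comes from the capture software):

    video_frame_number,video_camera_timestamp
    120,1652387400.033
    120,1652387400.050
    121,1652387400.066
    122,1652387400.083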
