Video has become ubiquitous, thanks mostly to smartphones, but that doesn't mean you actually want to watch all of it.
Carnegie Mellon University computer scientists say they have invented a video highlighting technique called LiveLight that can automatically pick out action in videos shot by smartphones, GoPro cameras or Google Glass users.
LiveLight continuously evaluates the action in a video, looking for visual novelty and ignoring repetitive or eventless sequences, to create a summary, in effect a miniature video trailer, that lets a viewer get the gist of what happened. Although not yet comparable to a professionally edited video, it can help people quickly review a long video of an event, a security camera feed, or video from a police cruiser's windshield camera, according to Carnegie researchers.
"A particularly cool application is using LiveLight to automatically digest videos from, say, GoPro or Google Glass, and quickly upload thumbnail trailers to social media. The summarization process thus avoids generating costly Internet data charges and tedious manual editing on long videos. This application, along with the surveillance camera auto-summarization, is now being developed for the retail market by PanOptus Inc., a startup founded by the inventors of LiveLight," the researchers stated.
The LiveLight video summary occurs in "quasi-real-time," with just a single pass through the video. It's not instantaneous, but it doesn't take long -- LiveLight might take 1-2 hours to process one hour of raw video and can do so on a conventional laptop. With a more powerful backend computing facility, production time can be shortened to mere minutes, according to the researchers.
Calling it the "ultimate unmanned tool for unlocking video data," the Carnegie researchers said LiveLight's algorithm processes the video and compiles a dictionary of its content. The algorithm then uses the learned dictionary to decide, very efficiently, whether a newly seen segment is similar to previously observed events, such as routine traffic on a highway. Segments identified as trivial recurrences or eventless are excluded from the summary. Novel sequences not represented in the learned dictionary, such as an erratic car or a traffic accident, would be included in the summary, the researchers stated.
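The dictionary-based filtering described above can be sketched in a few lines. This is not the LiveLight implementation; it is a minimal illustration of the idea, assuming each video segment has already been reduced to a feature vector, using least-squares reconstruction error against a growing dictionary as the novelty signal (the threshold value is an arbitrary choice for the sketch):

```python
import numpy as np

def summarize(segments, threshold=0.5):
    """Pick out novel segments in a single pass, dictionary-learning style.

    segments: iterable of 1-D feature vectors (e.g. pooled visual descriptors,
              one per video segment).
    Returns the indices of segments judged novel enough for the summary.
    """
    dictionary = []   # "dictionary" of previously observed content
    summary = []
    for i, x in enumerate(segments):
        x = np.asarray(x, dtype=float)
        if dictionary:
            # How well does the existing dictionary reconstruct this segment?
            D = np.stack(dictionary, axis=1)              # features x atoms
            coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
            residual = np.linalg.norm(x - D @ coeffs) / (np.linalg.norm(x) + 1e-12)
        else:
            residual = 1.0                                # first segment is always novel
        if residual > threshold:                          # poorly explained -> novel event
            summary.append(i)
            dictionary.append(x)                          # grow the dictionary
    return summary
```

Repeated content (routine traffic) reconstructs almost perfectly and is dropped, while a segment unlike anything in the dictionary (an erratic car) produces a large residual and makes the cut.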
Though LiveLight can produce these summaries automatically, people can also be included in the loop for compiling the summary. In that case, LiveLight provides a ranked list of novel sequences for a human editor to consider for the final video. In addition to selecting sequences, the editor might choose to restore some of the footage deemed worthless to provide context or visual transitions before and after the sequences of interest.
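The ranked list handed to a human editor could be produced by scoring every segment, rather than thresholding. A hedged sketch, again assuming pre-computed per-segment feature vectors and using the same reconstruction-error idea as a stand-in novelty score:

```python
import numpy as np

def rank_novelty(segments):
    """Score each segment by how poorly the running dictionary explains it,
    then return segment indices sorted most-novel-first for editor review."""
    dictionary, scores = [], []
    for x in segments:
        x = np.asarray(x, dtype=float)
        if dictionary:
            D = np.stack(dictionary, axis=1)
            c, *_ = np.linalg.lstsq(D, x, rcond=None)
            score = np.linalg.norm(x - D @ c) / (np.linalg.norm(x) + 1e-12)
        else:
            score = 1.0                      # nothing seen yet: maximally novel
        scores.append(score)
        dictionary.append(x)                 # every segment joins the dictionary
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```

Because nothing is discarded, the editor sees every segment, just ordered so that repetitive footage sinks to the bottom, where it can still be pulled back in for context or transitions.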
The ability to detect unusual behaviors within long stretches of tedious video could also be a boon to security firms that monitor and review surveillance camera video, the researchers said.