Week of Oct 12 Highlights
We are happy to whip up a sweet Friday dessert for you. This week we look into video codecs and ABR, wait with anticipation for Adobe’s new prototyping tool, find some open-source face recognition, and once again start editing our movies on mobile and finish them on the desktop.
Increasing amounts of streaming video, higher expectations on quality, and continued bandwidth constraints put pressure on streaming services to improve the delivery of video streams. Until the next generation of codecs is widely available, improvements in ABR (adaptive bitrate streaming) are one way to go. If you are deep into codec selection and need to understand the quality and performance differences between VP9 and HEVC/H.264, you should read this blog post with a comprehensive comparison.
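To make the ABR idea concrete, here is a minimal sketch of a throughput-based rendition selector, the simplest heuristic an ABR player can use. The bitrate ladder and safety margin below are illustrative values, not taken from any particular player or the post linked above:

```python
# Minimal sketch of throughput-based ABR rendition selection.
# The ladder and the 0.8 safety margin are made-up illustrative values.

LADDER_KBPS = [400, 800, 1600, 3200, 6400]  # available renditions, low to high

def pick_rendition(measured_throughput_kbps, safety_margin=0.8):
    """Pick the highest rendition whose bitrate fits within a
    fraction of the measured network throughput."""
    budget = measured_throughput_kbps * safety_margin
    chosen = LADDER_KBPS[0]  # always fall back to the lowest rendition
    for bitrate in LADDER_KBPS:
        if bitrate <= budget:
            chosen = bitrate
    return chosen

print(pick_rendition(5000))  # 3200: highest rendition under 5000 * 0.8 kbps
print(pick_rendition(300))   # 400: below the ladder, fall back to the lowest
```

Real players layer buffer occupancy, startup behavior, and smoothing on top of this, which is where much of the room for improvement lies.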
Watching video is definitely done mostly on mobile — 90% of Twitter video views, for example, happen on mobile — and editing is moving to mobile as well. Last week I tweeted about Adobe releasing Premiere Clip 2.0, and this week Apple released the new mobile iMovie.
Adobe is going all in on mobile, in photography and in video. Video storytelling is moving to now and to mobile. http://t.co/tOUmYK7r3D
— Vidispine (@vidispine) 8 October 2015
Well, it’s only two data points, but both of them allow me to shoot my video on mobile, start the editing there, and then finish off on my desktop. We will see more and more seamless editing between mobile and desktop, and your workflow and your systems need to be ready to handle this.
As video sharing grows, so does misuse of copyrighted material, and Twitter has started suspending accounts for sharing without permission. This problem will grow, and we need measures that balance the ability to share against the rights to your material.
Automatic metadata harvesting will increase in importance over the coming years as the amount of material explodes. Faces — well, people — are important metadata, and really good face recognition mostly exists in proprietary form, built by companies with huge amounts of training material. If that's not for you, a few researchers from Carnegie Mellon University made an open-source implementation of Google's FaceNet that does face recognition in real time. Video and article at The Next Web.
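The core of the FaceNet approach is simple to state: a trained network maps each face image to an embedding vector, and two faces belong to the same person if the Euclidean distance between their embeddings falls below a threshold. As a toy sketch of that comparison step only (the short 4-dimensional vectors and the threshold here are illustrative; FaceNet itself produces 128-dimensional embeddings, and the network that computes them is not shown):

```python
import math

# Sketch of FaceNet-style matching: compare embedding vectors by
# Euclidean distance. The toy 4-d vectors and 1.1 threshold are
# illustrative stand-ins for real 128-d network outputs.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(emb_a, emb_b, threshold=1.1):
    """Two faces match if their embeddings are close enough."""
    return euclidean(emb_a, emb_b) < threshold

alice_1 = [0.1, 0.9, 0.2, 0.4]    # toy embedding: photo 1 of "Alice"
alice_2 = [0.15, 0.85, 0.25, 0.4] # toy embedding: photo 2 of "Alice"
bob     = [0.9, 0.1, 0.8, 0.3]    # toy embedding: a photo of "Bob"

print(same_person(alice_1, alice_2))  # True: embeddings are close
print(same_person(alice_1, bob))      # False: embeddings are far apart
```

This is also what makes the technique attractive for metadata harvesting: once faces are reduced to vectors, matching against a library of known people is just a nearest-neighbor search.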
Finally we are looking forward (well, some of us) to Adobe Project Comet, with data-driven user experience and interaction design. Some really good stuff there with the ability to drop a bunch of assets in your design, and have the tool recognize how they should be laid out. Watch the video. A logical next step would be a connection back to your digital asset management system, and suddenly you can easily find where your assets have been used. Yeah.
That’s all for now. What were the highlights of your week? Talk to us on Twitter or get more of this by signing up for our newsletter.