Following his participation in the 5G Media Action Group’s panel discussion on Media Production and 5G, we sat down to discuss the changes that Ross Video’s Business Development Manager, Tom Crocker, predicts will impact the production industry.
In this post, Tom reflects on the evolution of connectivity, examines the changes that 5G is specifically bringing to broadcasters, and reviews the implications for technology providers like Ross Video.
Reflecting on our journey
People often forget how incredibly quickly we’ve evolved and how connected we’ve become, and I think it’s worth reminding people of it from time to time.
I remember back in the 90s when our first modem arrived (I say arrived – I think we went to a shop and bought it with cash. People did in those days). It was a charming little black box with an actual go-faster-stripe on it. It ran at 14,400 bps and on a good day would propel us along the information superhighway at a blistering 1.4 KB per second. Although the creaky copper wiring of suburban Melbourne’s aging telephone infrastructure kept our modem from realizing its full potential, I still got a Hotmail address. My GeoCities page featured a banner saying, “Under Construction” with a little icon declaring that this page was “Made in Notepad.” It had frames and a garish tessellated background I’d found on AltaVista. Golden years.
The upgrade to a 33.6k unit a year or so later was revelatory. I was able to download individual MP3s in under 2 hours, provided my connection wasn’t interrupted by one of mum’s friends calling for a chat.
Culminating with 5G and the implications for broadcast
Since those early days, we’ve seen ISDN, ADSL, cable and fibre adopted and in parallel 2G, EDGE, 3G, 4G and now 5G. Like most of the world, I’ve long since stopped downloading any of my media – far easier to just stream it. And it’s reached the point where I’m annoyed if I can’t stream (at least) HD to my phone on a moving train.
So, given the long march of progress, what is so significant about this particular upgrade?
For those in the business of making media, 5G crosses a few thresholds that previous incremental increases in wired and wireless mobile connectivity haven’t.
Three big differences
1 :: Bandwidth vs. Bitrate
While H.265 might be a dog to work with on a multi-track timeline, it creates beautiful pictures at very low bitrates. While rasters and frame rates have increased, the bandwidth required for carrying them from place to place has not increased in proportion, thanks to cleverer codecs and the increase in processing power available everywhere to en/de-code them.
The big leap in available bandwidth when moving from 4G to 5G means that the bandwidth available now comfortably exceeds the bitrates we use for most media acquisition tasks. The implication is that we can now choose between streaming, or chunk-uploading growing files, directly to where that media is needed without workarounds like proxies: we could even arrive in a place where the media recorded on the camera is simply a backup and ingest becomes totally transparent.
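To make the “growing file” idea concrete, here is a minimal sketch of chunked upload from a file that a camera is still writing: read fixed-size chunks from the current end of the file, push each one, and only finish once recording has stopped. The `send_chunk` and `still_recording` callables are placeholders for a real transport and a real recorder status check, not any actual Ross API.

```python
import time

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB per chunk; tune to the link's sustained throughput

def upload_growing_file(path, send_chunk, still_recording):
    """Stream a file to its destination while the camera may still be writing it.

    `send_chunk(offset, data)` pushes one chunk over whatever transport is in use,
    and `still_recording()` reports whether the recorder may still append --
    both are illustrative placeholders.
    """
    offset = 0
    with open(path, "rb") as f:
        while True:
            f.seek(offset)
            data = f.read(CHUNK_SIZE)
            if data:
                send_chunk(offset, data)
                offset += len(data)
            elif still_recording():
                time.sleep(0.5)  # file hasn't grown yet; poll again shortly
            else:
                return offset  # recording finished and every byte is uploaded
```

The key design point is that upload and recording overlap: the gap between hitting “stop” and the last chunk landing is roughly one chunk’s transfer time, not the whole clip’s.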
2 :: Acquisition by Anyone, Anywhere
Moore’s law has also been beavering away on the devices in people’s pockets – notably the cameras. Now that mobile phone cameras have gone beyond the point of being “good enough” for a lot of tasks, people are using phone cameras and creating good results.
We can certainly have a long discussion about the advantages and disadvantages of shooting media for professional output on phones instead of dedicated cameras, but that’s not going to stop people from using them for content acquisition. Mobile journalism has had its own annual conference for several years now and several documentaries and even the occasional feature film have been shot on phones.
While phones are the most significant, the armada of other cameras is also connected these days – everything from traditional 2/3” ENG cameras to GoPros. Not only are almost all devices capable of uploading what they’ve shot, but there are a lot more of them out there.
3 :: Interacting with Everything
When the coronavirus pandemic forced the adoption of working from home, content creators and networks adapted quickly and many executive mindsets changed overnight. This weakened the long-held expectation that production needs reliable wired connections – an expectation that may erode further as people return to hotels and airports to be where the action is.
When those remote content creators start using 5G to have useful and painless interactions with media at a distance, we’re going to see an explosion of possibilities.
The implications for Ross and the middle of the workflow
At Ross we don’t do ENG cameras or phones, so you might be wondering why I’m writing this at all. We do believe these changes are going to have a massive impact on the other parts of our portfolio and the corresponding workflow:
- NRCS | Inception
- Automation | OverDrive
- MAM/PAM | Primestream & Streamline
- Ingest and playout | Media IO/Tria
- Graphics | XPression & Piero
- … and more.
That means we need to consider how the pressures on these systems are likely to change and what we can do to help, optimize and improve these areas so that people using them continue to find it easy to justify our place in their workflows.
Perhaps the most obvious outcome of all this is that there will be far more media coming in, faster, from more devices, in more codecs.
Once upon a simpler time, a news crew would be dispatched to a location, return with some fanfare, and insert a Digibeta tape into a rectangular hole. The clips would then appear in the system and the “workflow” could begin. It was all in the same codec, it only went to one linear destination and then the tape was put on a shelf.
The requirement to generate proxies in the field for urgent submission has disappeared. Most clips can be uploaded faster than they can be shot, in the original resolution, and it no longer makes sense to ingest locally to a laptop, make selections and then upload. It’s far better, and now possible, to just upload everything all the time and make decisions afterwards.
Here are some scenarios that we’re expecting:
- Same Event, Different Crews. For a particular event, several crews are capturing content. Many with phones, some with ENG cameras and one person with a drone they bought off eBay.
- Crowd Submissions. The general public are sharing videos on social media. Most of these are unusable bilge but some of them are genuine news you need to use, from stories you couldn’t get a crew to.
- Extended Remote Journalism. Your journalists who are working on investigative pieces have been in the field for weeks and are constantly submitting content.
All of these have connected devices and clips are just… arriving, as they’re shot. This is in some ways ideal: resources are available to whoever needs them almost as soon as they exist. Time from acquisition to broadcast can be massively reduced. Since your MAM is now in the cloud, staff on site, equipment on site and the amount you spend on infrastructure have all been massively reduced.
So far, so good.
Now what do we do with it?
Suddenly the problem has shifted from getting the content, to dealing with the content you’ve got. Here’s a quick rundown of the issues that we anticipate broadcasters are going to encounter:
- How do you make sure the right clips end up in the right place? Given the disparate and unpredictable sources, times and locations?
- How do you ensure quality is maintained? Both technically and editorially?
- How do you manage the fact that everything from H.265 8K at 120p to some 2:3 drone footage in a codec nobody’s ever heard of (at 27.43 fps) turns up?
- How do you rationalize the metadata, given that every metadata standard is different?
- How do you link everything across every system? How does an operator using the NRCS know that a resource exists? The MAM team? The graphics team?
- How do you manage all these issues when the team is spread around the world?
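The metadata question above is worth a concrete illustration. Every device family describes the same clip with different field names, so a first step is mapping each source’s fields into one common record. This is a deliberately tiny sketch, not NewsML-G2 or any shipping Ross schema; all field names and mappings here are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClipRecord:
    """A minimal common schema for incoming clips (illustrative only)."""
    source: str
    codec: str
    frame_rate: Optional[float]
    shot_at: Optional[str]
    location: Optional[str] = None

# Hypothetical per-device field names mapped onto the common record.
MAPPINGS = {
    "phone": {"codec": "videoCodec", "frame_rate": "fps",
              "shot_at": "creationDate", "location": "gpsTag"},
    "eng":   {"codec": "EssenceCoding", "frame_rate": "EditRate",
              "shot_at": "CreationDate"},
    "drone": {"codec": "vcodec", "frame_rate": "framerate", "shot_at": "ts"},
}

def rationalize(source: str, raw: dict) -> ClipRecord:
    """Translate one device's raw metadata dict into the common record."""
    mapping = MAPPINGS[source]
    fields = {target: raw.get(src) for target, src in mapping.items()}
    return ClipRecord(source=source, **fields)
```

In a real system the mapping table would be driven by a shared schema rather than hard-coded, but the shape of the problem – many dialects in, one vocabulary out – is the same.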
These are just some of the problems that technology vendors like Ross and broadcasters worldwide are looking to solve.
The really exciting part is where we start looking at the other opportunities in these areas:
- Borders between systems blurring. For decades we’ve been looking at tools like NRCSs and MAMs as box-shaped islands connected by arrows in our workflow diagrams. In this new world there will need to be far blurrier borders between systems as their functions overlap, enabled by tools like MOS, APIs and common schemas like NewsML-G2.
- Hyper-converged workflows. Workflows used to have several beginnings. For the journalists it started in the NRCS. For camera operators it started when they hit the record button. For the in-house team it started when a clip arrived. Ideally, we’ll eliminate the arrows completely to create hyper-converged workflows where what can be automated, is automated.
In the sunlit uplands of the future that all happens without any human interaction between the camera operator hitting “stop,” wherever they might be, and people beginning to work on the footage. In a streaming or chunked-upload scenario the time gap between those two steps could be measured in fractions of seconds.
After all, if the MAM knows what the cameras are doing, the clips would be labelled on the way in. The MAM would know where to put them and who to show them to, they’d be in the right format and anyone who needed them would be notified and given the right resolution for their needs.
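The “labelled on the way in” idea can be sketched as an arrival handler: when a clip lands, look up the assignment the newsroom already knows about for that camera, file the clip under the right story, and notify the subscribed teams. Every name here (the assignment table, the camera IDs, the notification list) is hypothetical – in practice the notification would be a MOS message, webhook or similar.

```python
# Illustrative NRCS assignment data: which camera is on which story,
# and which teams care about that story. All values are invented.
assignments = {"cam-07": {"story": "flood-coverage", "teams": ["news-desk", "graphics"]}}
subscribers = {"news-desk": [], "graphics": []}  # team -> clip IDs they were told about

def on_clip_arrival(camera_id: str, clip_id: str) -> dict:
    """Label, file and fan out one arriving clip based on what the NRCS knows."""
    job = assignments.get(camera_id, {"story": "unassigned", "teams": []})
    clip = {
        "id": clip_id,
        "story": job["story"],
        "path": f"/media/{job['story']}/{clip_id}",
    }
    for team in job["teams"]:
        subscribers[team].append(clip["id"])  # stand-in for a real notification
    return clip
```

Clips from cameras the NRCS knows nothing about still land somewhere sensible (an “unassigned” bucket) rather than being lost – which matters once crowd submissions and eBay drones are in the mix.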
In summary: bring on the future!
In a world with enough bandwidth and coverage to keep everything (including the media assets) connected all the time there is no longer any need for these functions to be viewed separately. They should all just be different views on the same resources, and everyone should be able to interact with the assets they need at the same time, in different ways.
This will allow humans to focus on human tasks: the creation of content engaging to other humans.
A concluding caveat from Ross
We thought it was important to include a quick note to recognize that just because 5G exists doesn’t mean it’s always available. This is a forward-looking post, projecting into a future where 5G networks are more readily, if not universally, available.
And at time of writing the mobile vendors haven’t seen fit to provide Tom enough coverage to stretch to the back half of his kitchen. The best wired connection available in his corner of middle-class north-east Hampshire, under an hour from London in the United Kingdom, is a fibre-to-the-cabinet affair which is massively over-contended, so he only gets 40 Mbps down on a good day.
Tom bets that he’ll get 5G long before he gets fibre to his house.
Tom has over 15 years of broadcast vendor experience working in technical and strategic roles at industry leaders including Avid, Sony, Glookast and now Ross as Business Development Manager for News Workflow. He has spent the majority of that time in technical roles developing innovative workflows for customers around the world, particularly focused on the optimization of metadata flow and large-scale media management within news environments. For the last 7 years he has been specialized in remote wireless contribution and how it can impact the wider production environment.
Tom is Norwegian/British and has travelled all his life, living in Sudan, Australia and Norway; he is now settled near Reading in the UK with his wife and two cats. He enjoys reading, playing guitar and going to the pub. He doesn’t enjoy running, but he does it anyway.