Developing HEVC-based codecs with good analysis tools

High Efficiency Video Coding (HEVC) is an emerging standard for encoding and decoding video streams. Video encoded with HEVC can be stored and delivered more efficiently and economically than with predecessors such as H.264 or MPEG-2: HEVC delivers an average bit rate reduction of 50% over H.264, or higher quality at the same bit rate. Demand for high-quality video across a multitude of consumer-driven applications has driven the standard to prominence in recent years.

This white paper from Interra details the technical issues around developing HEVC-based video codecs, including complexity, compression, quality and buffer analysis.

Agile field-to-audience news infrastructure

In this e-book from Sony the company delves into the perpetual upheaval that is TV news. In the following pages, the company analyses the difficulty in pinning down today’s news audience, lays out five reasons for considering its journalist-friendly Media Backbone Hive and takes a look at the latest news studio renovations at Chinese state broadcaster CCTV.

Webinar: Demystifying live and on-demand OTT workflows

The world of OTT video delivery is increasingly competitive. Akamai Technologies’ Peter Chave delivers a webinar on improving your workflows and quality of experience. Topics include defining and measuring quality, live and VOD workflows, avoiding transcoding errors and streamlining advertising.

Take 15 minutes out of your day and watch it now!

Webinar: The Broadcast Operations Control Center (BOCC) – increasing visibility and understanding

Akamai created its Broadcast Operations Control Center to support OTT video providers in the new online ecosystem. Located at the company’s US headquarters, the facility was designed to help ensure the reliability of OTT services through a combination of highly trained technical staff and a host of monitoring, analytics, reporting, quality and availability measurement tools.

Learn more about the Akamai BOCC and how it can help your OTT business in this Akamai webinar.

Quality control and monitoring in OTT workflows

OTT and video streaming are here to stay, and broadcasters need to adapt to new viewing habits by embracing OTT workflows. Ensuring the correct delivery of technically sound content is critical, and the right set of quality control tools is a must if you are to stay ahead in the OTT race. This white paper offers some solutions for maintaining high-quality OTT output.

A guide to loudness measurement and control

A burning issue for the broadcasting industry in recent years has been inconsistent loudness. The notion that people pay more attention to louder sounds led engineers to produce ever-louder mixes, and some broadcasters deliberately increased the loudness of commercials to catch the audience’s attention, forcing government regulators to step in to curb the “Loudness Arms Race”.

This white paper describes different loudness measurement techniques and proposes an effective audio normalisation approach for file-based workflows.
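The normalisation approach such papers describe typically reduces to measuring each file’s integrated loudness and applying a static gain towards a target such as EBU R128’s -23 LUFS. The sketch below is illustrative only (the function names are hypothetical, and the loudness measurement itself, e.g. per ITU-R BS.1770, is assumed to be done elsewhere):

```python
# Minimal sketch of file-based loudness normalisation (illustrative only).
# Assumes an integrated loudness measurement for each file is already
# available; EBU R128's target of -23 LUFS is used.

TARGET_LUFS = -23.0

def normalisation_gain_db(measured_lufs, target_lufs=TARGET_LUFS):
    """Gain (in dB) needed to bring a file's integrated loudness to target."""
    return target_lufs - measured_lufs

def gain_db_to_linear(gain_db):
    """Convert a dB gain to the linear factor applied to each audio sample."""
    return 10.0 ** (gain_db / 20.0)

# A mix measured at -16 LUFS is 7 LU too loud, so it must be attenuated:
gain = normalisation_gain_db(-16.0)   # -7.0 dB
factor = gain_db_to_linear(gain)      # ~0.447, applied to every sample
```

Because the gain is a single static value per file, this approach suits file-based workflows where the whole asset can be measured before delivery.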

Five reasons SVOD services fail

Having trouble getting traction with your video on demand service? In this white paper the renowned Ooyala research team breaks down the top five reasons SVOD services fail – and what you can do to fix them.

IP Live: time to embrace the challenge

For some broadcasters and media producers, the idea of moving to an IP-based live production environment is a scary prospect. This special TVBE supplement by Sony explores the practical benefits and promises of live IP and how to embrace the IP challenge.

Webinar: The promise of cloud

The cloud is where the film and TV industry will be spending the foreseeable future. Production and post workflows and storage, delivery and analytics will all be influenced by some form of cloud technology. This webinar by Akamai Technologies will get your team ready for a cloud-centric infrastructure.

Webinar: Enhancing video quality and performance to drive viewer engagement

For OTT publishers, the name of the game is making sure that whenever someone tunes in, an amazing experience awaits.

This webinar from Akamai explores client-side HTTP/UDP and multicast technology, strategies for delivering HD and 4K streams, and scaling to reach a wider audience. Join Akamai’s Scott Brown, VP of product management, as he addresses video quality and its impact on viewer engagement.

ATSC 3.0 and its impact on video quality assessment

ATSC 3.0 is establishing itself as the next generation standard for digital television. The standards committee clearly realises that the viewing experience is no longer confined to a ‘static’ model of people watching TV in their homes. Rather, viewers are enjoying content wherever they may be.

This white paper examines the complex QA issues around UHD, HDR, wide colour gamuts and other image parameters in an ATSC 3.0 world.

System reliability for broadcast-over-IP applications

From headend to backend, passing through contribution and distribution networks, IP is now almost everywhere in broadcast infrastructure. This is due to the flexibility of IP-based systems, their relatively low cost and their high performance: they are increasingly prevalent across digital video technologies and installations.
However, transmitting data over IP networks is not an easy task, and IP packets still face many challenges on the way to their destination. In this white paper, you will learn about the sources of errors in IP infrastructure and gain insights into preventing them.
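One concrete example of such errors is packet loss. As a minimal, illustrative sketch (not taken from the white paper), loss in an RTP-style stream can be detected by watching for gaps in the 16-bit sequence number:

```python
# Illustrative sketch (not from the white paper): detecting lost packets in
# an RTP-style stream via gaps in the 16-bit sequence number.

SEQ_MODULO = 1 << 16  # RTP sequence numbers wrap around at 65536

def count_lost(sequence_numbers):
    """Count packets missing between consecutively received sequence numbers."""
    lost = 0
    for prev, cur in zip(sequence_numbers, sequence_numbers[1:]):
        gap = (cur - prev) % SEQ_MODULO  # modulo handles the 65535 -> 0 wrap
        if gap > 1:
            lost += gap - 1
    return lost

print(count_lost([10, 11, 14]))          # -> 2 (packets 12 and 13 lost)
print(count_lost([65534, 65535, 0, 1]))  # -> 0 (clean wraparound, no loss)
```

Real monitoring systems also account for reordered and duplicated packets, which this sketch deliberately ignores.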

Webinar: Navigating the OTT 2.0 landscape

Streaming video is no longer a novel alternative to linear broadcast: OTT is the new normal. This webinar by Akamai addresses the next generation of video streaming, dubbed OTT 2.0, and will get you up to speed on OTT and its possibilities for the future.

Measuring video quality and its impact on your bottom line

Gone are the days when viewers sat patiently through 15 seconds of buffering. Akamai’s research shows that viewers will abandon a stream after as little as two seconds of delay. This Akamai case study outlines the challenges of delivering quality video online and its importance to the bottom line.

Global traffic management for cloud application architectures

Cloud computing is rewriting the rules of the IT game. Technologies, architectures, deployment strategies, cost models, even user expectations — everything is being affected.  The ephemeral nature of the workloads and a more distributed cloud infrastructure create a host of new questions and opportunities.

Based on research by FactPoint, this white paper from Cedexis offers insights into managing traffic and workloads in the new cloud paradigm.

Securing the integrity of video analytics data

Securing video analytics data is a high priority for operators and customers, but the intricacy of data security is a sensitive topic. Service providers are faced with risks of maintaining data privacy and consumer trust. A security breach not only reduces customer trust in their video service provider, it reduces the trust of vendors, partners, and advertisers in an operator’s ability to provide valuable analytics information and insight.
Informed by interviews with top industry decision-makers, this white paper reveals:

  • that operators understand data security is not necessarily their core business
  • why operators must pay constant attention to data security, not just periodic updates
  • that consumers take their privacy seriously, and so do operators

Download it now.

Case study: Streaming sport video

Swedish VoD services company Viaplay needed a dynamic solution that could leverage the best from every available CDN for streaming major sporting events. Read how Cedexis helped optimise Viaplay’s CDN architecture.

How to build a private CDN with off-the-shelf components

As Netflix and others have discovered, the cost-effective path to high-performance video service is with a network of private hosting solutions and clouds that act like a private CDN. Until recently, this solution would have been too complicated. But new software packages, open source solutions and newer services make it possible to assemble a private CDN solution with off-the-shelf products.

Webinar: Progress in managing the IP transition

IP4Live is a strategic approach to the IP revolution, which features interoperability between vendors. IP’s march to ubiquity is well on its way, and is being adopted by all parts of the broadcast workflow, opening up new challenges as well as a wider range of possibilities for live broadcasters. How can traditional and emerging IP systems co-exist? How best to make the transition? The webinar offers practical, tangible examples of how to take steps into working in IP-based broadcast environments.

Avoiding traffic jams with RUM for latency-based load balancing

You want to get content to your audience fast, no matter where in the world they are. Ensuring a speedy, consistent user experience becomes increasingly challenging as global traffic grows. Several global traffic management approaches have tried to tackle this problem, but only one of them, real user monitoring (RUM), succeeds.
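As a minimal illustration of the RUM idea (the function and data here are hypothetical, not any vendor’s API), a traffic manager can steer each request to the CDN whose recent real-user latency samples look best:

```python
# Hypothetical sketch of latency-based traffic steering with RUM data:
# pick the CDN whose real-user latency samples have the lowest median.
from statistics import median

def pick_cdn(rum_samples):
    """Return the CDN whose measured latencies (ms) have the lowest median."""
    return min(rum_samples, key=lambda cdn: median(rum_samples[cdn]))

samples = {
    "cdn-a": [120, 95, 110, 400],   # one slow outlier; the median stays low
    "cdn-b": [140, 150, 135, 145],
}
print(pick_cdn(samples))  # -> cdn-a
```

Using the median rather than the mean keeps a single slow measurement from flipping the routing decision, which matters when samples come from noisy real-world clients.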

The death of dedicated AV distribution

4K-multimedia-over-IP is a new and emerging class of AV distribution service with a potentially disruptive impact on the distributed AV solutions sold today. Its capabilities go far beyond the distribution of video and audio: solutions are available that offer various combinations of additional capabilities, including Ethernet, USB, keyboard and mouse, control signals and even power. This white paper explores the possibilities of 4K-multimedia-over-IP and reveals new opportunities in AV content distribution and interactivity.

Streamlining your storage infrastructure for Corporate Video

Managing corporate video more effectively starts with the right storage infrastructure. As video becomes a more important part of corporate assets, how can companies develop a storage infrastructure to manage these critical resources more easily and economically?

Trying to apply traditional storage and backup procedures to unstructured data is creating major headaches for organizations with significant amounts of image and video data in their IT mix. And as this data increases in volume, systems easily reach the breaking point. Catalogues grow too large. Backups can’t be completed on time. Workflow becomes complex and backup apps can’t communicate with media asset managers. All this leaves IT departments struggling to manage and protect these key assets.

As a company with roots in large-scale video and broadcast applications, Quantum understands how to help companies apply the right technology to the problem of managing fast-growing graphic files. For unstructured data, Quantum’s StorNext® eliminates data volume problems by giving users a multi-tier digital asset solution that simplifies workflow, integrates with leading asset managers, and protects data automatically, without relying on backup, by moving data between storage tiers while keeping it all available for fast access and re-use.

Total cost of ownership: The key metric for multi-DRM strategy

As Multi-DRM Becomes the Norm, Buy is a Smarter Strategy than Build

Frost & Sullivan takes an in-depth look at the typical process and value judgement that companies go through as they plan their anti-piracy infrastructure for OTT services.

The paper outlines five factors to consider when evaluating the total cost of ownership of a DRM solution, including the most misunderstood aspects of the build vs. buy decision.

Download your copy of the white paper for best-practice guidance on deploying secure content services that provide consistent, compelling user experiences across all devices and consumption scenarios.

Future-proofing media workflows through software-defined storage

The explosion in demand for video and for increasing resolution (HD to UHD and beyond) is putting a severe strain on video workflows. Critical to these workflows is the underlying storage that enables the video creation-to-consumption value chain. This white paper, commissioned by Avid, describes a new breed of software-defined storage platforms that are agile, simple and reliable.

Migrating to a file-based workflow with TMD

Successful Malaysian broadcaster Astro needed to upgrade, consolidate and streamline its technology platforms and migrate to file-based workflows. Find out how TMD provided a solution with its Mediaflex-UMS platform in this detailed case study.

Webinar: Opening up live broadcast production with IP

IP is here to stay and is being adopted by all parts of the broadcast workflow, opening up new challenges and a wider palette of possibilities for live broadcasters to produce and deliver better stories faster. How can traditional and emerging IP systems co-exist? How best to make the transition?

This in-depth webinar by EVS, hosted by TVBEurope, delves into the company’s pioneering efforts in developing all-IP broadcast workflows.

Click Download below to watch!

Webinar: The future of AV distribution is IP – and it’s now!

The AV/IT merger has been talked about for long enough and now it truly has arrived. New technology exists that enables AV distribution to be managed over a standard IP network, challenging the conventional proprietary switch solutions. This will drive the growing demand for AV over IP solutions and integrators, consultants and end users must have a better understanding of the technology and the benefits.

To watch the webinar, click the link below.

The future of revenue security for UHD video

Learn about the three pillars of security that meet the requirements for premium UHD services. These pillars reinforce each other and, when combined, provide a strong security platform for UHD content, whether live or on-demand and however it is delivered.


Welcome to the MCN generation: A guide for the future of video

The names PewDiePie, Stampylonghead, Jenna Marbles and FunToyz-Collector may not mean much to everybody. But with an aggregate of over 23 billion views on YouTube, these individuals are the personification of the rise of the Multi-Channel Network (MCN). How can the potential of this huge global audience be harnessed?

Focus Forward: 2016 Technology Trends Report

These are exciting and challenging times. All segments of the media and entertainment ecosystem are facing multiple technology transitions, each with wide-ranging implications. How will media companies navigate these technology evolutions? This comprehensive report by Imagine Communications, based on the responses of more than 700 professionals from all sectors of the industry, provides a realistic assessment of the state of the media and entertainment industry technology landscape.

Dogan TV: The Transition from SDI to IP

Turkish broadcaster Dogan TV completed its migration from SDI to an IP infrastructure at the end of 2015. Read the full case study on how Cinegy made them the first national broadcaster in the world whose workflows are fully IP operated.

Video recording in the cloud: Use cases and implementation

Recording live TV has been around since the 1970s. That recording is now being revolutionised by the ability of IP networks to store content and stream it on-demand. With “cloud PVR” applications, content is captured in the heart of the network, instead of being recorded on a local drive, and streamed as video-on-demand. This white paper describes use cases for cloud PVR in IP, cable TV and OTT environments and gives tips for better implementations.

A High Frame Rate (HFR) primer

An in-depth look at the topic of high frame rate (HFR) and its many applications in digital cinema and broadcasting. How much more storage is required for HFR content? What is the optimal frame rate for cinema and for TV? What are the benefits for audiences?
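As a back-of-envelope answer to the storage question, uncompressed video bandwidth scales linearly with frame rate. The figures below are illustrative, not taken from the primer:

```python
# Back-of-envelope arithmetic: raw video bandwidth scales linearly with
# frame rate, so 120fps needs 5x the storage of 24fps (before compression).

def uncompressed_gbps(width, height, bits_per_pixel, fps):
    """Raw video bandwidth in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

# 4K (3840x2160) at 10 bits per component (30 bits per pixel):
rate_24 = uncompressed_gbps(3840, 2160, 30, 24)    # ~5.97 Gbps
rate_120 = uncompressed_gbps(3840, 2160, 30, 120)  # ~29.9 Gbps
```

In practice, compression narrows the gap somewhat, since consecutive frames at high rates are more similar and encode more efficiently, but storage budgets still grow roughly with the frame rate.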

Best Practices for a 4K Media Workflow

The ‘4K Evolution’ is unfolding faster than anyone could have predicted. As wave after wave of cameras, televisions, monitors and even smartphones adopt 4K, the revolution of very high definition content, fueled by new monetization capabilities, is truly upon us.

Content producers recognize that developing ‘very high definition’ content now involves not only several variations of 4K frame sizes but, increasingly, even larger frame sizes driven by higher-resolution cameras, as well as the need to preserve source content at the highest resolution possible.

An RF hole-in-one for NEP’s Presidents Cup coverage

For last year’s Presidents Cup Golf Tournament, Australian systems integrator NEP had to switch multiple antennae to multiple receivers for nine roving wireless cameras. Find out how RF experts ETL Systems untangled the coverage with its Enigma Series router.

The fight is on: Winning the pay-OTT battle

At CES 2016 Netflix announced its launch into 130 new countries, adding to its existing services available in 55 countries. Although Netflix remains the largest international pay-OTT service in terms of subscribers, local battles between regional pay-TV and OTT operators and newcomers are brewing.

This paper offers guidance on how operators can adapt their management and processes throughout the whole subscriber lifecycle to deal with the specific challenges of pay-OTT. It offers expert analysis of findings from an independent Research Now survey, commissioned by Paywizard, which asked consumers how they are responding to the different TV offerings available today.

Media Manifest Delivery Core

Motion Pictures Laboratories (MovieLabs) and the Entertainment Merchants Association (EMA) released the Media Manifest Delivery Core specification last month, a simplified schema for delivering the media assets of online motion picture and television content.

Called the Media Manifest Delivery Core, the new specification derives from the more-encompassing Media Manifest specification and will be used to identify assets such as video and audio files, trailers, subtitles, closed captions, and images delivered to an online distributor for a video title. The Media Manifest Delivery Core organises the packaging of the multitude of files needed for online video and the various versions required to enable global retailers to deliver and manage the user experience.

This white paper covers the entire Media Manifest Delivery Core specification.

The Ultimate Guide to Tapeless Archives

Disk archives and content libraries have finally become a practical proposition, delivering faster access to media assets for many more users and at the same time, reducing maintenance costs and downtime. This white paper outlines the benefits of tapeless archives, including reduced software and hardware costs, reduced power consumption and operational costs, and easy expansion.

Calibrating HDR: Using SpectraCal’s C6 HDR High Luminance Colorimeter

High Dynamic Range (HDR) and Ultra HD TV have been big news items in the entertainment industry, and they’re here to stay. A new set of video monitor and TV standards requires a new way of calibration and measurement. Learn how SpectraCal’s C6 HDR Colorimeter addresses the extended measurement requirements of the next generation of UHD TVs.

The MaximalSound algorithm

A complete overview of the MaximalSound algorithm, a fully automated pre-mastering process. This white paper takes us through each step of the processing order: analyze, harmonic enhancement, crossover, de-expander, limiter, video links and format conversion.

The future of television hinges on the democratisation of distribution as well as content

According to the Video Advertising Bureau, in Q2 2015, 90% of all viewing time was still spent watching television. Even millennials, generally believed to be the audience least likely to consume linear television, spent more than 80% of their video time in front of the TV.

All age groups spend more than 75% of the time they allocate to consuming video in front of television programmes. For all the forecasts of the demise of TV, the medium remains incredibly popular – the question is why?

An adaptive, digital embracing technology

TV has a history of innovation. The 1980s introduced video recording; not long after, cable and satellite vastly extended consumer choice. As technology became more intelligent, so did TV, with the introduction of hard disk drive recording, digital channels, on-demand programming and, most recently, catch-up television services delivered over the internet.

Most of these developments have been successful in providing increased flexibility. Consumers of television have never had more choice to watch what they want, when they want and where they want.

The research conducted by the Video Advertising Bureau also demonstrated that consumption of television on smartphones and tablets is increasing whilst PC viewing is falling. As technology evolves, so do people’s habits (moving from a desktop experience to a smartphone experience, for example), and their hunger for a real television experience grows, as does their level of expectation – regardless of where or on which device they are consuming TV.

Democratisation of broadcast

As well as expanding choice for consumers, digitisation further opens up television to brands that previously would not have been able to make a business from it. A number of large sporting brands now have TV channels and there has been a growth in channels portraying news dedicated to different countries, religions or lifestyles (such as Al Jazeera and Russia Today).  To date these have been based on traditional TV platforms but this will change as the reliability of IPTV increases and the cost continues to reduce. This opens up the opportunity to monetise content that was previously uneconomic and enables new organisations to create and distribute TV, further fuelling choice and driving the next wave of television choices.

The challenge for new entrants

Whilst it may be cheaper for new entrants to deliver TV over the internet it certainly is not less complicated. There are a number of technical challenges that need to be overcome to deliver effective television services.

Business models are far from “tried and tested” especially in an advertising market where the impact of innovations such as programmatic trading and dynamic insertion are yet to be fully seen.  Complications include creating the right combination of advertising and subscription fees; delivering a reliable service over networks already creaking with data demand and ensuring that content is accessible and discoverable to users.

It is hardly surprising that non-traditional broadcasters have steered clear of television to date. But this is increasingly not an option. In a quad play world, service providers that cannot deliver all relevant services will not enjoy the cross promotion opportunities and customer loyalty that comes with a successful television service.

According to VOD Professional at VUIX 2015, the most important aspect for a user (more important than cost and content availability) is that a TV service works faultlessly and that sophisticated features are available wherever and whenever they watch. Customers have set their TV expectations high – anything less than a seamless experience simply will not do.  This echoes the lessons from the Video Advertising Bureau’s research: people want to consume TV on different devices, but they still want it to be recognisable as television.  Simply having a video presence on the web is not television.

In reality, there are four key pillars that make modern television compelling: live TV, catch-up TV, video on demand and the capability to record programmes for viewing later. To make content compelling on new devices such as smartphones and tablets, all four need to be present and delivered in one seamless experience. A consumer should be able to start watching a programme on an iPad on the commute home, pause, and pick up again at exactly the same moment on the TV in the living room.
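The cross-device pause-and-resume behaviour described above boils down to the service keeping one playback position per user and programme. This is a hypothetical sketch, not a description of any platform’s actual implementation:

```python
# Hypothetical sketch of cross-device resume: the service stores one
# playback position per (user, programme), and any device picks up there.

class ResumeStore:
    def __init__(self):
        self._positions = {}  # (user_id, programme_id) -> seconds watched

    def save(self, user_id, programme_id, position_seconds):
        """Called when a device pauses or stops playback."""
        self._positions[(user_id, programme_id)] = position_seconds

    def resume_point(self, user_id, programme_id):
        """Where the next device should start; 0.0 for an unseen programme."""
        return self._positions.get((user_id, programme_id), 0.0)

store = ResumeStore()
store.save("alice", "ep-101", 1325.5)         # paused on the iPad
print(store.resume_point("alice", "ep-101"))  # -> 1325.5 on the living-room TV
```

The design point is that the position lives on the service side rather than on any one device, which is what makes the experience seamless across screens.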

At PerceptionTV, our objective was to deliver this requirement rapidly and cost effectively, enabling operators, broadcasters and content owners to launch new TV services, within months rather than years. The Perception Platform, now operational for more than nine years, delivers that stability and cutting edge functionality, leading this revolution in technology and change within IPTV.


Despite very real challenges, television continues to deliver a service that drives customer loyalty like no other in the communications portfolio, driving innovation that has delivered more choice to users and kept even the most connected consumers loyal to TV. If TV is to continue to evolve, it will need to deliver a user experience that is familiar, seamless and integrated across a wide range of devices. Delivering this is challenging for broadcasters, and service providers without experience in TV delivery will need third-party expertise to ensure they deliver the experience that consumers expect.

Disaster recovery and business continuity

A lack of planning, implementation or indeed the absence of a disaster recovery strategy can be costly for any business of any size. The solution to the problem is not only about the hardware and software required. It is largely to do with people, processes, skills and providing systems that are automated and integrated. Hoping for the best is simply putting the whole business’s survival down to luck, or lack of it.

An Argument for Open IP Standards in the Media Industry

The broadcast and media industry’s transition from Serial Digital Interface (SDI) to Internet Protocol (IP) as the primary means of moving signals between and through facilities is upon us. With it comes the promise of increased agility and system scalability that can help broadcasters develop new business models and remain competitive.

While there’s no longer a question as to whether or not a transition is necessary, opinions are quite varied as to the pace and level of priority a broadcaster should be placing on the transition. A key impediment in moving the industry forward, however, is the fact that multiple competing approaches to the transition are being introduced to the market, further complicating an already daunting decision.

File Delivery – DPP & AS-11

The migration to tapeless environments, the multi-vendor, multi-format and multi-platform scenario, and the adoption of HD as the standard format have created the need for guidelines so that broadcasters can tackle the issues created by different codecs, input formats and delivery standards.

Using ToolsOnAir just:play for 4K broadcast to ASTRA

Based at the EnStyle Studios in Buggingen, Germany, the broadcaster has always been at the forefront of broadcast innovation. Since 1 March 2012 it has provided its viewers with 24 hours of unique programming daily. Now with 4K transmission, which began on 4 September 2015, it remains at the forefront of European DRTV.

ToolsOnAir’s just: play running on a MacPro 8-core has been a key part of the workflow since inception.

Optimizing production workflows in Adobe Creative Cloud

Today’s editors demand more performance from their editing and production workflows, hoping to convert the saved time into further creative work.

mxfSPEEDRAIL integrates seamlessly with Adobe Premiere Pro, enabling editors to access specific ingest tasks directly inside Adobe Premiere Pro’s interface.

It’s a streaming world after all

On 25th October, Yahoo and the NFL made history when they live streamed the first ever regular-season NFL game to be aired online around the world (not counting pirated streams). Yahoo reportedly spent upwards of $20 million on the rights and enjoyed exclusive ad rights for the game, too. The game was watched by a reported 15 million viewers globally and was streamed to a wide range of devices, including smartphones, tablets and connected TVs, and was available to viewers in 185 countries in any form they wanted.

The move by the NFL highlights how the trend of meeting audiences on their premises is hitting home to broadcasters, brands and organisations at large. Few have multimillion dollar budgets to spend on licensing, production or distribution, but with live broadcasting online come new opportunities. There are solutions to cater to any budget – from the most basic and low-cost productions and cost-efficient streaming alternatives, all the way to the most sophisticated productions and delivery. What most alternatives have in common is that they are able to reach audiences, especially a younger generation, where they happen to be at that moment.

As streaming live and on-demand video becomes an increasingly important way of reaching audiences, more and better tools to understand the audience are developed. The opportunities to track, measure and analyse user behaviour, patterns and then customise the online viewer experience are more sophisticated and extensive than linear TV has ever been. Being able to customise content, delivery methods, formats and commercial messaging doesn’t only mean that the viewer gets a better experience, but also means that the sender is able to collect valuable data to constantly improve their efforts and, maybe more importantly, monetise the audience in a brand new way compared to when all viewers are treated in the same way.

It’s happening in Europe, too. The BBC recently announced that Sir Elton John’s summer performance from Cornwall’s Eden Project will be made available on its iPlayer streaming service. It’s becoming the norm – BBC Music is responsible for a number of exclusive iPlayer commissions in recent times, such as Music Box with Guy Garvey, the live music series All Shook Up, and Amy Winehouse In Her Own Words. And the names and the events are getting bigger all the time.

Many broadcasters already offer a live stream through their apps, and it’s only a matter of time until we’re no longer able to recall a world without live streamed content, always available at our fingertips. The reason we formed Bambuser was so that anyone could become a “broadcaster”: even with very limited resources, anyone can stream live video to people all over the world. You can start with a budget of zero dollars and then grow with your needs.

The wide range of services available means that whether you’re a local non-profit organization that needs to communicate with your audience through video, or a global brand – like Red Bull streaming its Neopop event – there’s a solution for you.

Live streaming introduces us to a flood of video content and a never-ending stream of events craving our attention. Curation and deciding what’s worth paying attention to will be a great challenge. The opportunities with instant access to live content from every corner of the globe are endless and new interactive video formats allow the audience to participate in the content creation process and engage better with content.

by Jonas Vig, founder, Bambuser

Why some media companies earn much more from their content rights than others – now available on demand

All media companies spend upwards of 40% of their total outlay on content rights, yet a select few earn a disproportionate return on their investment. It’s more than just great leadership: these companies have aggressive rights exploitation practices in place. In this webinar, we give an insider’s look at the art, science and best practices that “move the needle” for every facet of the industry, from programming to distribution and sales to consumer products licensing.

Click here to access this webinar.

Viewability metrics complexity is hampering vCPM model

As marketers increasingly demand proof that their online adverts are actually being viewed, Dominic Finney, MD at FaR Partners (a Theorem Digital company) believes the time has come for the media industry to work out an industry-wide de facto standard that can verify the viewability of adverts.

Online ad spending may be on the rise, but marketers are increasingly concerned that there is no standard by which they can verify the viewability of adverts, so that they know they are getting what they actually pay for.

Digital media ad revenue is forecast to surpass TV ad spending for the first time in the US next year, according to research firm Magna Global, with a revenue of $66 billion. Digital media spending continues to grow as advertisers look to Web advertising, social media and video to target consumers. But the industry can’t rest on its laurels and it is essential to sort out the issue of viewability to retain trust in the industry going forward.

Viewability is now a crucial issue
Concerns over viewability rose earlier this year when it was revealed by German ad verification company Meetrics that only 49% of online ads met the Internet Advertising Bureau (IAB) standard – that 50% of an ad must be in view for at least one second.
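The IAB rule cited above (50% of an ad's pixels in view for at least one second) can be stated precisely in a few lines of code. The following is a minimal illustrative sketch, not any vendor's actual measurement implementation, assuming the page supplies (timestamp, visible-fraction) samples:

```python
# Hypothetical sketch of the IAB display viewability rule: at least
# min_fraction of the ad's pixels must stay in view for a continuous
# span of at least min_duration seconds.

def is_viewable(samples, min_fraction=0.5, min_duration=1.0):
    """samples: list of (timestamp_seconds, visible_fraction) pairs,
    in time order. Returns True if the threshold held long enough."""
    span_start = None
    for t, fraction in samples:
        if fraction >= min_fraction:
            if span_start is None:
                span_start = t          # a qualifying span begins
            if t - span_start >= min_duration:
                return True             # held for a full second
        else:
            span_start = None           # dipped below 50%: reset

    return False

# Half-visible for 1.2 continuous seconds: viewable under the rule.
print(is_viewable([(0.0, 0.6), (0.5, 0.55), (1.2, 0.5)]))   # True
# Scrolled out of view after 0.5 seconds: not viewable.
print(is_viewable([(0.0, 0.9), (0.5, 0.1), (2.0, 0.0)]))    # False
```

The vendor-to-vendor inconsistency discussed below arises precisely because real trackers differ in how they sample visibility and where they set these thresholds.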

On top of this you have the issue of third-party vendor accreditation to track viewability. The Media Rating Council now has 15 accredited vendors, but has yet to accredit any third-party technology to track mobile ad viewability. Viewability stats vary from vendor to vendor, making it impossible for marketers to keep abreast of how their adverts are performing. Imagine if we had a host of companies coming up with TV ratings in the UK, instead of just The Broadcasters' Audience Research Board (BARB). We'd never have a definitive answer on the most viewed programme. That gives you an idea of how complex the viewability issue is.

The media industry has until now agreed to disagree on the viewability issue, making only half-hearted attempts to sort it out. But the big tech players are taking the issue into their own hands. Recently, heavyweight Google entered the fray, saying that around 56% of adverts never get a chance to be viewed. Some of this is down to ad blocking exacerbating the issue, but other adverts were simply scrolled out of view or sitting lost in a background tab.

Google has since said that it will make adverts 100% viewable in the next few months, bringing all campaigns purchased on a CPM basis into view across the Google Display Network (GDN). Google is looking to ensure that it directs media spend to where it is having the most impact.

The IAB has now said that publishers should aim to guarantee a 70% advert viewability figure. The American Association of Advertising Agencies, however, has said that the goal should be 100% viewability – and soon.

The big players are all too aware that advertisers are tired of paying for ads that no one sees. Facebook said that later this year it would begin selling 100% viewability adverts in the site's News Feed. At the same time, Twitter has started offering autoplay videos across its service, and said it will only charge when a video advert is 100% in view for at least 3 seconds.

A lack of transparency in viewability has become a big problem for advertisers. Respondents to a study we carried out for advertising technology company InSkin identified inconsistent measurement across vendor solutions as the main cause of divergent viewability scores. Of those FaR surveyed, viewability was rated as central (8.3 out of 10 in terms of importance) to future digital marketing strategies.

The viewability puzzle

The general consensus is that the IAB's viewability standard provides only a basic foundation and does not work across all ad formats and platforms, which means we are nowhere near creating the standards and consistent measurement tools needed to enable a solid vCPM model.

It is time for the industry to get together and agree a standard for viewability that gives advertisers confidence in the measurement metrics. Only then will the digital advertising industry be able to move forward and deliver measurement tools and reports that work across a host of devices, platforms and ad formats.

How Aframe Works – A Technical Overview

Aframe is a cloud video production and asset management platform used by businesses to streamline their management and storage of video and accompanying data while enabling collaboration between disparate teams from any location.

Whether sharing video in the building, or across the globe, content creators rely on Aframe to provide a central repository that all team members and stakeholders can access. Media is tagged, shared, reviewed and approved, all within one simple cross platform interface.

Smart data and its role in the next generation of Content Discovery

Ok, so I’ve been on a journey and it’s an extremely interesting one! Let me explain why.

So, I am a ‘newbie’ to the world of TV, but the speed and pace of the industry has swiftly swept me along.

My journey to enlightenment started with content discovery. Audiences need the right tools and environment to search and discover, wherever they are and through whatever connected device they are using. But fundamental to all of this is the (meta)data which sits in the background to support it.

Metadata is the fuel which powers personalised search and recommendations. Imagine a world without detailed synopses, film or programme information, or images and trailers – our viewing experience would be limited. Navigation and finding information would be tedious, constantly interrupted by the need to refer to other sources. Thankfully, today we have the data and supporting technology that provides us with a more knowledgeable interface.

But content discovery is transforming, and the change is rapid. We increasingly watch content through VoD services and, as an audience, we expect to be able to watch our favourite films or series at any time. The next observable change is the move to more image-enriched content: trailers and pixel-perfect images give audiences an easier way to engage with and recognise what is being presented. And finally, ‘provide me with the best possible experience’. We are all used to seeing recommendations, but now we expect far more from our search capabilities. The desire for the same ‘online search’ experience has prompted the industry to invest more in semantics, which understands the intent and contextual meaning of terms (location, variation of words, concept matching and natural language queries) to deliver more relevant search.

And so what is next? Voice recognition software is being talked about as the next generation of search. Yes, you will be able to talk to your TV or tablet and tell it what you want to watch! Sounds fab, right? But it will be interesting to see how many people actually exploit this feature. Voice recognition and biometric software are built into most smartphones today; however, usage appears low. With more and more people watching content on the go, will we have a population of people saying the name of their favourite TV programme or film out loud? It'll be interesting to see how this evolves.

But whilst the technological revolution continues, we continue to demand access to more content and are often left disappointed. The Ericsson ConsumerLab report found that 50% of consumers who watch linear TV reported that they can't find anything to watch on a daily basis. So what is going wrong? Are there too many options, or are we just not being presented with what we want? I would suggest the latter is the more likely. This indicates that the industry now needs to work harder to attract and retain viewers and personalise their experience.

So what can help to support this challenge? Yes you’ve got it, it’s the ‘D’ word again – data.

The platforms which support personalised search require enriched data sets coupled with intelligent tagging.  For example, if you were searching for a scientific documentary on Einstein, you wouldn’t expect to see a fictional cartoon character also named ‘Einstein’ being presented as your programme of choice.
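The Einstein example above can be made concrete with a toy sketch (illustrative only, not any vendor's implementation; the catalogue entries and tag names are invented for the example). A plain title match returns both programmes, while a tag-aware query filters out the cartoon:

```python
# Toy catalogue with intelligent tagging: each item carries genre tags.
catalogue = [
    {"title": "Einstein: A Life in Science", "tags": {"documentary", "science"}},
    {"title": "Einstein the Talking Dog",    "tags": {"cartoon", "fiction"}},
]

def search(term, required_tags=frozenset()):
    """Match on title text, then keep only items that carry every
    required tag. With no tags, this degrades to plain title search."""
    term = term.lower()
    return [item["title"] for item in catalogue
            if term in item["title"].lower()
            and required_tags <= item["tags"]]

print(search("einstein"))                              # both titles match
print(search("einstein", {"documentary", "science"}))  # documentary only
```

The point is that the enrichment (the tags) does the disambiguation that the title text alone cannot.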

At this year’s IBC, we showed the power behind enriched data sets. If you were lucky enough to grab a slot at our innovation showcase, you will have seen us demonstrate how the use of subtitles can provide more granular search capabilities; i.e. by location, product, scene, mood and character information.

So finding that key scene (when the big reveal happens) will no longer be a massive effort requiring 10 minutes of navigating! Imagine a sleek, intuitive and a more relevant experience.

The world of TV is evolving!


Jennifer Walker, Product Marketing Manager, Content Discovery

Image Performance Enhancements in the EOS C300 Mark II

It is clear that both digital cinematic technology and creative aspirations have been moving into higher realms. The need to advance beyond the constraints of 8-bit depth and harness the far greater flexibilities of 10 and 12-bit in support of creative postproduction processes cannot be ignored. A choice between RGB 4:4:4 or YCrCb 4:2:2 video components is deemed highly desirable to cover a wider range of origination possibilities. Higher frame rates are also sought.

Extended Recording Capabilities in the EOS C300 Mark II

The original EOS C300 represented Canon’s definitive entry into the digital cinematography marketplace. While largely television-centric and comparatively modest in its recording capabilities, it quickly established a worldwide reputation for excellent image quality, ergonomics that favored handheld shooting, unusually high sensitivity, and very low power consumption. These attributes made it a favorite among global documentary shooters. MPEG-2 remains a central recording format for much of the broadcast television world.

Ergonomic and Operational Enhancements to EOS C300 Mark II

The C300 gained a reputation for high performance imagery – a testament to the excellent performance of the 4K image sensor developed by Canon. That camera was, however, largely television-centric in that it exclusively originated 1080-line HD from that single 4K image sensor and recorded this on-board as 8-bit YCrCb 4:2:2 via an MPEG-2 50 Mbps codec at frame rates up to 30fps.

How Video Can Support Remote Communities Around The World

It has been more than five years since the devastating 7.0 magnitude earthquake decimated the island nation of Haiti. While the country still strives to rebuild and regain its socioeconomic footing, one company has made a commitment to aiding more than 450 children who live in an orphanage five hours’ drive from Port-au-Prince.

RTBF chooses Wallix for access control

Today’s broadcasting organisations rely on robust and effective information technology systems more than ever. To meet this demand, RTBF is continually improving and evolving its networking infrastructure to ensure it can meet the needs of an increasingly digital audience. RTBF needed a way to give privileged users access to the right systems and to easily revoke access that is no longer needed.

Ericsson Mobility Report: On The Pulse Of The Networked Society

There has been a significant increase in video traffic shares on smartphones and tablets. TV and video content is increasingly being accessed via smartphones. Smartphone subscriptions are expected to almost double by 2021 and grow more than 200 percent from 2015–2021 in the Middle East and Africa.

Programmatic Advertising

Programmatic advertising is one of the most discussed topics in television advertising today. Based on the number of articles, summits, blog posts and YouTube videos, it seems as though everyone in the advertising and media industries, worldwide, has adopted programmatic technologies.

So what is it? Programmatic advertising generally refers to a number of technologies that automate the planning, selling, buying and optimization of advertising inventory using audience data. “Programmatic” is another word for automated, and programmatic buying and selling refers to any ad buy processed via a computerized interaction, as opposed to manual or partially automated processes such as fax or email.
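The computerized interaction behind a programmatic buy is typically an auction; a common mechanism in RTB is the second-price auction, where the highest bidder wins but pays just above the second-highest bid. The sketch below is a hedged illustration of that general idea, not any exchange's actual logic (the floor price, tick size and buyer names are invented):

```python
# Minimal second-price auction as commonly used in RTB exchanges.
# bids: dict of buyer -> CPM bid in dollars.

def run_auction(bids, floor_price=0.5, tick=0.01):
    """Return (winner, clearing_price), or None if no bid meets the
    floor. The winner pays the second-highest bid plus one tick."""
    eligible = {buyer: p for buyer, p in bids.items() if p >= floor_price}
    if not eligible:
        return None
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else floor_price
    return winner, round(runner_up + tick, 2)

print(run_auction({"dsp_a": 2.50, "dsp_b": 1.75, "dsp_c": 0.40}))
# ('dsp_a', 1.76): dsp_a wins but pays the second price plus a tick
```

In a real exchange this decision runs in tens of milliseconds for every single ad impression, which is what makes the process "programmatic" rather than manual.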

Secure my site – media security concerns, beliefs and attitudes

The cloud migration of the infrastructure for the creation and delivery of media is one of the most important technology changes the industry has ever seen. Savings in capital and operational costs combined with the ease of delivering to all consumer screens make the architectural shift a compelling proposition. Yet with the pace of cyber-attacks increasing, web site security remains of paramount concern to media executives. Download this report to find out the latest data on how your peers are preparing in the current environment and increasing their investment in security.

Using data to drive profit: top 10 keys for using data analytics in the media and entertainment industry

Netflix was able to circumvent the traditional (and very expensive) “pilot” TV test process because it had in-depth knowledge about its viewers based on advanced analytics. Similarly, AMC Networks used advanced analytics to gain a richer picture of who its viewers are and what they want, to better understand how to keep their attention in an increasingly crowded entertainment marketplace. These examples illustrate just two of the ways advanced analytics can provide extremely valuable insight into today’s media viewers.

Broadcasters: download this must-read whitepaper to learn ten best practices for successfully implementing data analytics to understand audiences and attract new viewers, increase viewer loyalty and drive profits  in the media and entertainment (M&E) industry.

The next step in ABR video streaming

Over-the-top (OTT) streaming video is the fastest growing form of video consumption, overtaking traditional linear TV and disk-based playback. As carriers and programmers embrace OTT delivery, many find that quality of experience is a key differentiator. However, consistently delivering a high quality viewing experience via bandwidth-constrained networks continues to pose significant challenges. Long start-up time, buffering stalls, low quality video, and playback artifacts can all degrade quality of experience and shorten viewer engagement, directly impacting monetization.

Adaptive bitrate (ABR) streaming is the current industry standard for Internet video and for OTT services. ABR addresses buffering and stalling, enabling continuous playback as the available bandwidth varies. However, while ABR addresses the varying bandwidth problem, it does not deliver consistent video quality.
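A common way ABR players adapt is pure throughput-based rung selection, which the following sketch illustrates (this is a generic textbook heuristic, not MediaMelon's QBR algorithm; the ladder and safety margin are invented example values):

```python
# Generic throughput-based ABR: pick the highest encoding-ladder rung
# that fits within a safety margin of the measured network throughput.

LADDER_KBPS = [400, 800, 1500, 3000, 6000]  # example bitrate ladder

def select_rung(measured_throughput_kbps, ladder=LADDER_KBPS, safety=0.8):
    """Choose the highest rung not exceeding safety * throughput,
    falling back to the lowest rung when bandwidth is very poor."""
    budget = measured_throughput_kbps * safety
    candidates = [rung for rung in ladder if rung <= budget]
    return max(candidates) if candidates else min(ladder)

print(select_rung(4000))  # 3000: budget 3200 kbps, top rung that fits
print(select_rung(300))   # 400: below the ladder, take the lowest rung
```

Note that the decision here is driven entirely by bandwidth, not by content: a static talking head and a complex action scene at the same throughput get the same bitrate, which is exactly the quality inconsistency described above.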

This paper provides an overview of ABR streaming and outlines the challenges service providers face today. It then introduces QBR™, a new ABR enhancement technology from MediaMelon, and shows the benefits it provides versus alternative approaches.

QBR is easily integrated into existing workflows without changing the video content and yields dramatic benefits. QBR enables service providers to enjoy the quality benefits of variable bitrate (VBR) encoding, ensuring consistent quality playback, while retaining the buffering/stalling reduction of traditional ABR streaming.

The Interactive Classroom: when to select a whiteboard, projection screen or both in one

The need to clearly present online content in the classroom is driving educators to find practical solutions that cost-effectively handle both new digital and traditional whiteboard implementations. How can a whiteboard double as a video display surface without losing the best attributes of each use? Is it possible to annotate and interact with clearly projected video on a whiteboard surface that can stand up to prolonged marker use and erasure? Download this whitepaper to see how educators can have it all and within budget.

NUGEN Audio’s Loudness Toolkit 2 for integrated and effortless Loudness compliance

The world of loudness is ever-evolving, with updates to standards and new recommended practices being adopted around the world at a seemingly unstoppable pace. As a result, users are demanding more from their loudness tools – and they’re also seeing opportunities beyond simple compliance. At Nugen Audio, we’re investing considerable time and energy into researching and developing solutions to the problems that lie on the audio professional’s critical path.

Switching power in new Telerecord truck

Telerecord has traditionally built its own outside broadcast units. The company’s engineers work with local coachbuilders to get the vehicle and its core facilities – like heating and air conditioning – right, before they design and install all the technical equipment. Drawing on its experience with other vehicles in the fleet, Telerecord set out to build its largest unit in a very tight timescale.

Cloud storage technologies show promise for media workflows

Smart media vendors are readying for an increase in cloud usage according to new research.  And why not? Benefits include stronger performance, improved security and ROI in integrated cloud storage solutions. Learn more about this growing opportunity.

Changing the Channel on the Broadcast Business Model

Viewer Experience: The move to OTT TV requires a shift in the traditional Broadcast Business Model

By Keith Zubchevich, Chief Strategy Officer at Conviva

The biggest disruption the TV industry has ever faced is the arrival of on demand, Over the Top (OTT) TV. Viewing habits have fundamentally changed, driving content providers to focus their efforts on delivering a high-quality viewing experience. Video optimisation companies have access to data and analytics only available because of the OTT model, and they are able to analyse, in real time, how people are responding to content. If the old pay TV model was akin to a manufacturer (the content producer) and retailer (the delivery platform), the OTT content delivery model is more similar to a multifaceted affiliate model. As streaming services such as HBO Now create a user experience via a website or mobile app, content producers and distributors are no longer dependent upon the traditional pay TV providers: it is now the consumer who decides when, where and how they want to view content.

However, meeting consumer expectations has become a much more complex proposition for all parties. The challenge lies in monitoring, analysing and optimising the performance of these diverse delivery channels so that each consumer experiences a consistent quality of service, regardless of delivery method or device, because we know that disappointed consumers swiftly and ruthlessly switch from one provider to the next in pursuit of the ultimate experience.

This presents the TV industry with a new set of challenges, as broadcast syndicators have to adapt to viewers’ needs, whilst remaining profitable. The more complex, and less consistent and controlled, delivery environment of OTT is precisely why video optimisation companies collect and interpret data and use it to fine-tune consumer experiences. Companies need data analytics and the technology behind them to identify and resolve issues that threaten the quality of the viewing experience in order to ensure their audience grows, and, importantly, develops brand loyalty. The plethora of information available to video optimisation companies allows them to identify why and where the delivery is mediocre, while simultaneously maintaining an optimal quality for viewer engagement, all in real time: saving an audience before it abandons, rather than understanding why it abandoned, is the new name of the game.

Optimisation is at the centre of the broadcast syndication business model: everyone in the market has a stake. This unique space is currently occupied by companies who integrate video optimisation and data analytics – they are the foundation upon which broadcast syndicators’ current and future business models are constructed. These companies ensure that no viewing experience disappoints, because they are able to identify why anything might go wrong and pre-emptively fix it. Video optimisation companies ensure that whatever is going on in the delivery network, the viewing experience meets and exceeds viewer expectations. They are not only a crucial piece of the modern TV business model, but they directly ensure broadcast syndicators’ profitability.

Making Analytics and Optimisation Profitable

Nowadays, video optimisation companies play a crucial role in helping content owners such as HBO, ESPN, and Vevo to manage the demands of the user. They maintain Quality of Experience so that OTT services delight viewers, who either pay for subscription services or watch the ads in the video content. Video optimisation companies direct content producers and broadcast syndicators towards profitable business models, and take advantage of the data available for OTT that is opaque in an MSO-driven environment. Content is still king, but success lies just as much in its delivery: if a show doesn’t stream smoothly, or the picture quality doesn’t scale to the viewer’s device, viewers will abandon the service and find a better experience elsewhere. The market has changed, and the vast number of viewing platforms and shows indicates viewers’ loyalty is always to the best experience. It is a simple concept: if it doesn’t work, no one will watch it.

4 Strategies to Successfully Sell to Corporate IT

AV products continue to evolve to better integrate with existing corporate networks and other systems, challenging systems integrators to demonstrate knowledge and understanding when interacting with IT professionals. For AV professionals to appear credible, they need to build fluency in the language of IT to ensure more effective and efficient interactions with current and prospective customers.

TV-synced advertising 101: Bridging the TV and online worlds

TV advertising has always been an incredibly powerful medium to reach large audiences, especially during prime time. However, the rise in channel count and the advent of devices used to access the Internet has transformed the way we consume TV. It has moved from a linear experience that brought the household together to a more personalized experience delivered to connected screens separate from the TV. This raises questions as to the effectiveness of traditional broadcast advertising in a multiscreen world, in which brands and agencies still need to ensure that their commercials reach the right audience on the right screen at the right time while still reaching large audience numbers.


TV has undergone tremendous changes since the turn of the century. Blending TV and IP represented the first step in enabling connectivity to pervade consumers’ everyday lives, and connected devices such as smartphones and tablets have closed the loop by allowing users to watch video content on any platform. Viewers are also harnessing connectivity to combine the social experience with TV consumption. While watching primetime TV and commercials, 71% of viewers visit a social media platform during the commercial and 64% visit a social media platform during the TV show (Facebook, “From One Screen to Five: The New Way We Watch TV”, conducted by Millward Brown, 2014). In a world where content sources and devices keep increasing, it becomes difficult to predict the viewability of a TV advert as viewers switch between TV and online.

While the rise of these devices seems to have created new challenges for brands and agencies in terms of reach with consumers becoming harder to pin down, the web has also opened the door to better targeting and more granular consumer data. Social media has further increased the amount of data that consumers are willing to share, enabling brands to use these channels to better understand their target audiences. Advertisers have creatively harnessed the power of technology and, for a few years now, have been looking into the potential of the second screen to better reach the right consumer on the right screen without having to forego TV and its reach.

In 2013, an answer to this dilemma emerged with TV-synced advertising, which utilizes real-time content recognition technology to decipher the content being played on TV, and combines this with built-in targeting and Real-Time Bidding (RTB) capabilities. This new solution leverages the reach of TV and the granular metrics and click-to-purchase potential of the online and social worlds to deliver advertisements on the second screen in real time.


TV Synced Ads powered by Teletrax can identify TV programs and commercials within seconds of broadcast and trigger digital campaigns for synchronization and/or competitive response. These ads can be delivered through social, video, display, and search channels across phone, desktop, and tablet devices for truly integrated marketing. This enables advertisers to reach the right people, with the right content, at the right place and time. Additionally, through 4C’s Social Ads Product, marketers can leverage targeting based on predictive data science, capitalize on pre-optimized, custom audience segments and manage complex campaigns with simplicity through bulk editing, smart groups and auto-optimization tools.


The solution is based on “fingerprinting”, a technology that enables an aired TV commercial or piece of content to be matched against a global TV channel database, followed by an automated call to action. Using a content database, the network detects that a particular ad or piece of content is on air, allowing the synchronisation technology to automatically target users watching TV while browsing the web. Inventory is made available in a dedicated marketplace so that brands and agencies can buy premium slots in real time and ensure that their online advertising is aligned with a broadcast campaign. This allows media buyers to acquire digital impressions at the best price while ensuring that their ads will be viewed on both screens.
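The core matching idea can be sketched in a few lines. This is a deliberately simplified toy (production fingerprinting systems such as Teletrax's use robust perceptual signatures, not Python's `hash`; the channel, asset and signature names here are invented): signatures from reference assets are indexed, and live signatures are looked up to identify what is on air.

```python
# Toy fingerprint matcher: reference assets are indexed by hashed
# signature; live broadcast signatures are looked up against the index.

reference_db = {}  # signature hash -> (channel, asset_id)

def register(channel, asset_id, fingerprints):
    """Index an asset's fingerprints into the reference database."""
    for fp in fingerprints:
        reference_db[hash(fp)] = (channel, asset_id)

def identify(live_fingerprints, min_hits=2):
    """Return the (channel, asset_id) with enough matching signatures,
    or None; requiring several hits guards against chance collisions."""
    hits = {}
    for fp in live_fingerprints:
        match = reference_db.get(hash(fp))
        if match:
            hits[match] = hits.get(match, 0) + 1
    best = max(hits, key=hits.get, default=None)
    return best if best is not None and hits[best] >= min_hits else None

register("ITV", "soda-ad-30s", ["sigA", "sigB", "sigC"])
print(identify(["sigB", "sigC", "noise"]))  # ('ITV', 'soda-ad-30s')
```

Once `identify` fires, the synchronised digital campaign described above can be triggered within seconds of the broadcast.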


Previous campaigns have demonstrated that consumers are very receptive to TV-synced advertising, with unprecedented lifts in Click-Through Rates (CTR) on social media. TV-synced advertisements combining Facebook and TV have seen a 60% uplift in consumer clicks compared to Facebook-only adverts, and up to 250% on Twitter.


These techniques allow advertisers to target consumers that can purchase goods and services when they see an advertisement via a connected device. TV-synced advertising provides an immediate solution to bridge the gap between the broadcast and online worlds by ensuring that brands and agencies can create multi-platform ad campaigns. By utilizing TV-synced advertising solutions, brands and agencies can create data-driven cross-screen advertising campaigns that adapt to their target audience.


Data provides advertisers with consumer understanding, ensuring that the right message reaches the right screen, at the right time, regardless of viewing habits and the device in use. Media buyers can streamline their purchases while ensuring that they only acquire the best impressions, increasing return on investment for brands. In the near future, we expect creative teams to design advertising campaigns that leverage each screen and social feed to offer promotions or expand the narrative out of home in order to offer new immersive advertising experiences that are relevant to each audience member’s activity and location in real-time.


By Andy Nobbs, SVP Global TV Analytics Sales & EMEA Activation, 4C

Subscriber management: the driving force behind pay-TV profitability

The pay-TV industry is finding new ways to serve the 1.5 billion digital TV homes and grab a share of a market estimated to be worth $400 billion by 2020. Yet, with intense competition and higher expectations from subscribers, many operators are struggling with how to monetise the opportunity.

New video wall delivers a stellar performance

With the installation of an LED video wall in a new drama theatre space, the first of its kind in an educational establishment in the UK, LED technology has added a new dimension to performing arts at St Paul’s Catholic School in Coventry.

The media industry transition to IP: making the technology case

Is your media facility cloud-ready?

Broadcasters and media companies have discovered that moving to an IT-based, IP-delivered production infrastructure affords them an economical way to deal with diverse content distribution platforms (TV, web, and mobile) and serves the expanding range of consumer devices. Such IP upgrades, however, involve the very core of their money-making infrastructures, so they need to be undertaken strategically.

Download “The Media Industry Transition to IP: Making the Technology Case“ to help answer the question of whether or not your facility is “cloud-ready” and learn how some of the industry’s leading producers and broadcasters are maintaining quality and reliability in the transition.

Smart home, smart play for retailers

Smart Home Transforms Installation Play into DIY Opportunity for Electronics Retailers

“Consumers see electronics stores and home improvement stores equally when they think about this category, and that means opportunity,” says Ben Arnold, retail market analyst for The NPD Group. “That means there is a lot of opportunity for electronics retailers.” Kalen Daniel agrees. View this ebook to learn how handling DIY correctly provides rewards in new business opportunities from a new customer base.

A buyer’s guide to specifying large LED screens

Full colour LED screens are becoming increasingly popular and costs have plummeted in the past few years. But there’s a wide range available – how does an innocent buyer sort the wheat from the chaff? The aim of this short paper is to highlight the key factors that buyers and specifiers should be aware of when comparing displays of similar specification (screen size and resolution).

IP-based solutions ready to revolutionise big venue broadcasting


John Halksworth, Senior Product Manager, Adder Technology


When it comes to broadcasting from large venues, IT, and in particular IP-based solutions, are becoming more and more ingrained in the end-to-end broadcasting process. The convergence of IT and broadcast has led to organisations using a more integrated approach in their operations, taking advantage of the benefits of using this technology and the associated boost in connectivity.

This change has largely been made possible through the use of a standard IP network as a transport method around the broadcast workflow. As broadcasters further integrate IP into their infrastructures, entire convoys of OB trucks and legions of staff being required to cover major events may well become a thing of the past. IP represents efficiency, reliability and cost-effectiveness to broadcasters.


KVM makes location broadcasting easier, cheaper and more efficient

The main area where IP is making its mark is in KVM (keyboard, video and mouse). IP-based high-performance KVM brings added functionality, scalability and cost savings, as it essentially turns a single screen into a portal for several computers – none of which need to be in the same physical location as the screen and input device. This opens up a world of possibilities for broadcasters at large venues, as operators can log into any machine and perform a number of functions from their specific location.


Fewer computers mean less space is required, less heat and noise is produced, and less air conditioning is required. Fewer staff members can perform the same amount of work by switching between machines using the same keyboard, monitor and mouse. Through the use of extension technology, USB and video signals can be delivered to the users, and therefore multiple machines can be controlled by one person or several people in different locations. Two operators can also view the same content on different screens. While only one user can actively work on the content and have control, the other can view it in real-time.


If a particular computer or individual system node fails, other areas are not affected, and individual components – particularly the switching component – can be replaced quickly and easily. IP-based high-performance KVM technology uses high-specification, off-the-shelf devices that are easy to obtain and inexpensive to keep in stock.


Also, if all machines are linked via a high-performance KVM system, the failure of one piece of equipment is easily handled. An operator can simply move into another studio and access the same computer from a new workstation.


Major Australian arts venue reaps benefits of IP-based KVM

By way of an example of this system in action, one of Australia’s most famous landmarks and world-class performing art centres has had a huge amount of success by implementing an IP-based high-performance KVM solution for its AV suites. The digital KVM matrix was installed to enhance the performance of its recording facility, and to improve its operational flexibility.

The AV suites are located within the main shell of the expansive venue and are connected to every on-site editing booth via an advanced fibre-based digital network, providing unrivalled audio and visual quality.


The venue had tried a range of solutions, but required fanless remote units compatible with Mac video outputs and offering pixel-perfect video. Working closely with the venue’s technicians, the systems integrator proposed the AdderLink Infinity solution, including the AdderLink Infinity Manager, which utilises IP-based high-performance KVM technology.


The AdderLink Infinity satisfied every requirement, providing complete operational flexibility. Plus, because it operates over IP, the necessary infrastructure was already in place throughout the entire facility. Engineers at the performing arts centre have reported that the system ‘felt solid’, and they found the manager both intuitive and easy to use. Another benefit was the remote OSD function, whereby video can be pushed to remote monitors.


As IP continues to be adopted throughout the broadcast workflow as a standard transport layer, organisations will enjoy a multitude of advantages, including enhanced cost-effectiveness, scalability, interoperability and functionality. IP-based KVM, particularly IP-based high-performance KVM, is an excellent example of how broadcasters can reap the benefits of this technology, and it provides the ideal platform to expand the use of IP further across organisations in this sector.


How today’s worship trends influence projection screen selection

As contemporary worship traditions continue to evolve along with growing congregations, video projection is playing an ever more important role in delivering the message. Given the variety of purposes projection screens serve in the worship setting, ministries need to consider several important environmental and technical factors when selecting this important communication tool.

Download this whitepaper to discover how projection screens in the worship setting improve broadcast efficiency, as well as the critical factors to consider for optimal output.

The LEOPARD Project

Meyer Sound Labs developed the LEO family of loudspeakers to be as linear as possible: that is, to reduce distortion to the theoretical limit. LEOPARD is a small line array element in the LEO family that wasn’t targeted at a specific market. Instead, we gave it to our engineers as a challenge, to see how well it could be designed.

How acquisition can encourage flexible storytelling

More than ever before, broadcasters need to make sure the programming they’re outputting is engaging for their viewers. This is in part because of the many channels and large amount of content now available to audiences on any platform, at any time. To keep audiences engaged and therefore tuned in, a broadcaster’s output needs to stand out visually and creatively.

12G-SDI Physical Layer Analysis using the Ultra 4K Tool Box

This white paper covers the use of the Advanced Physical Analysis features of the Omnitek Ultra 4K Tool Box to perform physical layer analysis of 12G-SDI signals, which differs from the techniques the industry has already adopted for measuring SD-SDI, HD-SDI and 3G-SDI signals. Here we explain the differences between traditional physical layer analysis and the approaches that now need to be adopted for 6G-SDI and 12G-SDI to ensure results that are repeatable and comparable to those typically only available on very high-end oscilloscopes such as the Teledyne LeCroy SDA 820Zi-A.

Traditional formats blend with technology at MIPCOM


By Jamie Searle, Director of Content Partnerships and Creator Services at Rightster

MIPCOM 2015, the annual trade show for entertainment content, took place this month in Cannes, bringing together the biggest names in the television industry. With the likes of Facebook, ITV and Hulu in attendance, the question of how technology and traditional linear formats sit together was a big focus of the conference. Facebook, in particular, was addressed in detail, with announcements on new engagement tools and API development further cementing its place as the primary second-screen platform, especially with voting functionality to add to the conversation around linear TV events. In her keynote, Nicola Mendelsohn (VP EMEA, Facebook) described how 75 per cent of Facebook’s video views are on mobile, and how the platform is delivering 4 billion views a day.

TV x Twitter

It’s not just Facebook either; Twitter too is stepping up the charge to compete on ownership of TV conversation via the TV x Twitter product, which uses hashtags, live tweets and promoted tweets alongside TV advertising and programmes. Twitter also announced some interesting new developments in its Amplify offering. These make it easier for advertisers to run ads against defined premium content categories, and to showcase video via promoted tweets. It is now much easier for video content owners to monetise via direct video uploads to desktop, which run pre-rolls from advertisers who want to reach audiences in a particular category organically.

From a multi-platform perspective, brands, content owners and creators should be aware of these changes and think about how they can advertise and create relevant content that plays to the strengths of each platform. These developments also create opportunities for enhanced data to boost campaign efficiency, allowing studios and producers who are developing digital-first brands to gain further value from their paid media spend on these platforms as they build their owned-and-operated audiences.

The SVOD upsell

Fullscreen’s Rooster Teeth is a great example of a ‘digital-first’ producer with a large subscriber base (19 million in total) targeting a younger male audience. It is digital-first original programming that is really making waves when it comes to how traditional TV industries adapt to digital audiences. With traditional TV usage amongst millennial viewers falling (by 10.6 per cent between September 2014 and January 2015, according to a Nielsen survey earlier this year), a shift to subscription video on demand (SVOD) is important as a revenue stream for Fullscreen and other TV businesses. Fullscreen’s play here is to upsell SVOD services, alongside compelling originals, to properties such as Rooster Teeth that come with a large pre-baked audience.

Video and talent integration

Brands themselves are becoming an increasingly important destination for content sales as they look to integrate user-generated and viral content into creative campaigns. Rightster’s launch of VideoSpring, a product that helps brands and agencies source and license video content for their own advertising campaigns, generated a lot of interest from clip library owners at MIPCOM, for example.

As TV companies embrace technology to address declining younger audiences, there was a notable increase in the number of distributors and production companies featuring or starring YouTube influencers in their programmes and films. This included BBCWW’s ‘Joe & Caspar Hit The Road’ with Joe Sugg (who has 5.4 million subscribers on YouTube) and Caspar Lee (5 million subscribers), initially on DVD and VOD. Novel has also announced ‘Cinemaniacs’ for CBBC, featuring YouTuber Oli White (1.5 million subscribers). TV companies should work creatively with digital talent to come up with new ideas, which present opportunities both for the talent and for the companies aiming to reach younger audiences. Expect to see TV companies working with Facebookers, Viners and Snapchat stars in 2016.

The real opportunity here is for TV businesses to use these touchpoints with influencers more holistically, in order to build owned-and-operated digital communities around their own brands. If MIPCOM 2015 taught us one thing, it was that TV organisations are recognising that they need to embrace technology in order to reach millennial audiences.



Strengthening OTT and linear streaming services with SDV

Most linear television operations today include legacy video processing equipment dedicated to specific tasks such as encoding, splicing and multiplexing. As exciting new OTT and VOD services offered by traditional DTH providers gain popularity, video providers are finding that legacy hardware-based video processing equipment cannot keep pace.


Clear-Com serves up Wimbledon 2015



The Championships, Wimbledon, took place at The All England Lawn Tennis and Croquet Club, London, from 29 June to 12 July 2015.

Long-standing audio supplier RG Jones Sound Engineering first brought in Clear-Com in 2014 to test how the equipment would integrate with the overall system. Following its success, the company was asked to return in a more complete capacity.

The 2015 tournament was the 129th edition of The Championships, the 48th in the Open Era and the third Grand Slam tournament of the year. Widely regarded as the premier global tennis event, Wimbledon attracts more than 1.2 billion viewers.

For the 15th consecutive year, IBM was responsible for collecting and analyzing game match data in real time, providing The Championships’ information to user groups such as the press, media, broadcasters and public. Statistics collectors sit at the courtside to capture data from each match, such as the number of serves, service direction and return stroke; each stat collector has to be in contact with the help desk as well as being able to hear microphone audio feeds from around the court, including those at the Chair Umpire position. In order to ensure clear and reliable communication, this year some of these positions were equipped with Clear-Com RS-702 intercom beltpacks, installed by the audio team from RG Jones Sound Engineering Ltd.

At the heart of this communication set-up was Clear-Com’s Eclipse HX-Delta digital matrix. The Delta was connected to Clear-Com’s RS-702 2-channel analog partyline beltpacks over the IP network via LQ-4W2 and LQ-2W2 devices, which provide 4-wire and 2-wire IP interfacing respectively. The two LQ interface devices enabled bi-directional audio transmission between intercom beltpack users.

Clear-Com’s FreeSpeak II 1.9GHz wireless intercom system was also used during play to facilitate communication between the audio team on Centre Court and the audio mixer in a remote mix position, who adjusted the sound levels of the Chair Umpire’s mic and cued presentations. When the umpire called a shot, a sound engineer inside Centre Court judged the audibility of the call and then talked to the mixer on his comms panel via FreeSpeak II to adjust the sound levels. This process ensured the spectators could hear each call loud and clear over the PA system without delay, and without it being so loud as to pose a problem for the broadcast engineers.

The FreeSpeak II wireless beltpacks were seamlessly connected to the Eclipse HX-Delta matrix via an E-Que-HX card that slots directly into the matrix. Two FreeSpeak II Transceiver Antennas were deployed within the famous Centre Court to provide connectivity to all wireless beltpacks in and around the coverage area. This level of flexibility gave RG Jones engineer ‘Brew’ (James Breward), the comms system designer and programmer based in the audio control room, full control and on-the-fly programming of all wireless beltpacks.

Brew said, “RG has tried many wireless comms systems over the last five years at Wimbledon, and the FreeSpeak II has proven to be the best sounding, most feature-rich and functional of all the digital systems on the market. We achieved great coverage of the bowl, the radio rack position and the majority of the covered walkway from our control room to Centre Court with no more than two antennas, both of which were installed without the need for an extensive coverage survey. Clear-Com will be back for more next year.”


Putting the profits into pay-TV

Jonathan Guthrie, CEO and co-founder, Paywizard

With the number of national broadcasters declining, an influx of low-cost streaming and VOD services hitting the market, and the rise of mobile video changing consumer behaviour, broadcasters and multi-service operators are trying to find new ways to win, retain and grow their subscriber revenues. But today’s subscribers expect to be able to watch the TV they want, whenever and wherever they like, and they now actively respond to poor service, high cost or a lack of interesting content.
Benefits of proactive subscriber management include:

• Predictable revenue through greater visibility into subscriber behaviours
• Predictable operational factors
• Analysis of traffic patterns and transaction volumes
• Customer data and insight
• Regular and more effective engagement between brand and subscribers
• Better use of social engagement and marketing opportunities


So with increasing competition and high subscriber expectations, operators around the world are all asking the same question: What can we do to increase profitability?

Although the saying “content is king” still holds true, content is no longer as proprietary as it once was. High-value content such as sports and flagship show formats is still a key draw for viewers, but with a future where the same content will be available from multiple operators in each country, the pay-TV industry needs to become smarter in the other core areas that can help drive profitability.

This means that content quality, monthly subscription price and delivery methods are key factors in creating a compelling subscription TV package. But these elements can be enhanced further through a better understanding of existing and potential subscribers.

For example, Sky, which has an average pay-TV ARPU of approximately £400, spends roughly £390 on acquiring a new customer on a minimum 12-month contract (this includes the costs associated with set-top boxes, marketing, advertising, service activation, customer call centres and other costs such as discounted introductory offers). And with over 10 million subscribers in the UK and a churn rate of approximately 10%, Sky needs to sign up around a million subscribers a year just to keep growing. Yet year after year the company continues to win customers, and currently boasts its highest customer growth and lowest churn rate in 11 years.
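The scale of that acquisition treadmill follows directly from the figures above. A quick sketch using the article’s approximate numbers (one assumption on my part: the ~£400 ARPU is treated as annual revenue per subscriber):

```python
subscribers = 10_000_000      # Sky's UK subscriber base (approx.)
churn_rate = 0.10             # annual churn
annual_arpu = 400             # £ per subscriber per year (assumed annual)
acquisition_cost = 390        # £ to win one new customer

# Subscribers lost each year, all of whom must be replaced before any growth
replacements = subscribers * churn_rate
replacement_spend = replacements * acquisition_cost

print(f"subscribers to replace each year: {replacements:,.0f}")
print(f"annual spend just to stand still: £{replacement_spend / 1e6:,.0f}m")
print(f"first-year margin per new customer: £{annual_arpu - acquisition_cost}")
```

At £390 to acquire and roughly £400 of annual revenue, a new customer barely pays back in year one, which is why retention matters so much more than acquisition alone.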

So how does Sky do it? Alongside its highly regarded content acquisition strategy, the broadcaster has been a vocal advocate for the use of proactive subscriber management.

When deployed effectively, proactive subscriber management can improve key performance indicators that result in strong growth, lower churn and, most importantly, profitability. For example, if an operator has 100,000 subscribers paying $10 per month, with 25,000 new subscribers joining each year and an annual churn rate of 20%, its annual revenues will equate to $12.15 million in year 1, increasing to just $12.95 million in year 5 – a total of $62.96 million over the five-year period. But if the operator were to improve acquisition by just 5% and reduce churn by the same 5% through effective subscriber marketing, its revenues would equate to $12.56 million in year 1 and grow to $15.84 million by year 5 – a total of $71.65 million, an increase of almost $9 million through the use of proactive subscriber management.
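The arithmetic above can be reproduced with a simple cohort model. This is a sketch under assumptions of my own (new subscribers arrive evenly through the year, so they contribute half a year of revenue on average, and churn applies to the start-of-year base plus half the year’s additions, with churners also leaving mid-year); the article does not state the exact model used, but these assumptions do reproduce its headline figures:

```python
def five_year_revenue(start_subs, monthly_fee, gross_adds, churn_rate, years=5):
    """Annual cohort model: new subscribers arrive evenly through the
    year (contributing half a year of revenue on average) and churn is
    applied to the start-of-year base plus half the year's additions,
    with churners also leaving mid-year on average."""
    subs = start_subs
    yearly = []
    for _ in range(years):
        at_risk = subs + gross_adds / 2        # average exposed base
        churned = churn_rate * at_risk
        avg_subs = at_risk - churned / 2       # mid-year average base
        yearly.append(avg_subs * monthly_fee * 12)
        subs = subs + gross_adds - churned     # end-of-year base
    return yearly, sum(yearly)

base, base_total = five_year_revenue(100_000, 10, 25_000, 0.20)
impr, impr_total = five_year_revenue(100_000, 10, 26_250, 0.15)  # +5% adds, -5pt churn

print(f"baseline: ${base[0]/1e6:.2f}M year 1, ${base[-1]/1e6:.2f}M year 5, ${base_total/1e6:.2f}M total")
print(f"improved: ${impr[0]/1e6:.2f}M year 1, ${impr[-1]/1e6:.2f}M year 5, ${impr_total/1e6:.2f}M total")
```

Under these assumptions the baseline prints $12.15M/$12.95M/$62.96M and the improved case $12.56M/$15.84M/$71.65M, matching the article’s figures when “reduce churn by 5%” is read as five percentage points (20% down to 15%).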

Not only do small improvements to acquisition and churn rates make a significant difference to the bottom line and profitability, but the difference compounds year on year. Overall, in this example, the impact of actively managing your subscriber base – increasing acquisition by 5% and reducing churn by 5% – equates to an extra $8.69 million in revenue over five years.

Through the use of proactive subscriber management, operators can help to drive profitability in three key ways:

• Add more net new subscribers
Using a subscriber management system effectively can help drive prospect acquisition campaigns, including analysis-based recommendations for targeted advertising. Another effective acquisition strategy is tactical campaigning such as “recommend a friend” or sign-up incentive schemes that can be driven across multiple contact points via subscriber management tools.

• Sell more products and services
Intelligent subscriber management can provide the context and analysis needed to build ARPU-increasing activities – for example, targeted loyalty campaigns and upsell/cross-sell opportunities, which are particularly useful for MSOs. Another effective tactic is special promotional sales, targeted on the basis of a deeper understanding of the aggregate subscriber base and individual preferences.

• Keep customers longer
Proactive subscriber management has a strong role to play in reducing churn by identifying and targeting specific groups of customers when contract terms are due or cancellation is pending. Systems can even support the retention teams needed for outreach to cancelled subscribers. Proactive measures such as usage analysis and bundle tailoring help generate loyalty and can reduce churn.

As more pay-TV services come to market, broadcasters and operators need to attract subscribers – and keep them. For the past 17 years, Paywizard has been driving pay-TV revenues for its clients with the use of proactive subscriber management. Paywizard understands the TV market, sees the challenges operators face and helps to drive revenue opportunities in the new multiscreen world. Paywizard not only helps to get subscribers on board, it helps improve brand loyalty, and through effective marketing, can drive revenues and profitability, ensuring operators are successful in an increasingly complex TV landscape.

Essential A/V guide to high-performance cabling

Long. Flexible. Durable. Versatile. Corning’s active optical cabling solutions have A/V professionals thinking about glass in a whole new way.

From the chaos of capturing content in the field to the quiet, rarified air of recording studios and production suites, Corning’s USB 3.0 and Thunderbolt optical cables demonstrate rugged dependability and elegant simplicity. Just plug one in and watch your workflow improve.

Engineering Excellence: The right video wall choice made easy

The competition in video wall display is fierce but the choice is clear and large

The LCD-based video wall has emerged as the clear choice for digital signage, public display, and any corporate or educational setting where a large, shallow, high-resolution display area is needed. But with so many options, how can you choose the right supplier of video wall technology? By choosing video walls with the leading new-generation technology. Download this white paper to learn which is the best, most cost-effective solution to future-proof your investment.

ENVY puts iZotope RX audio repair at heart of audio post operations

In the heart of London, ENVY Post Production is one of the leading post houses in the UK. With 160 operational areas in a tight cluster of five locations, ENVY provides complete audio and picture post production on programmes including factual, drama, comedy, documentaries and light entertainment, as well as collaborating on design, branding and commercials.

Preventing churn 101: video intelligence

The industry has really turned a corner as the predominant revenue model continues to pivot from ad revenue to subscription revenue. In my mind, the tipping point came when many content providers realised that they absolutely had to have a digital OTT presence.

Interestingly, this is putting more pressure on the broadcast side of things, because there is an increasing opportunity for churn as existing barriers to switching providers have lowered.

The unifying power of software-defined video

In association with Elemental Technologies, TVBEurope explores the world of software-defined video and how traditional hardware can no longer cope with the changing market dynamics. TVBEurope speaks to Elemental’s John Nemeth, vice president of sales, EMEA, and Keith Wymbs, chief marketing officer, to gauge the lay of the land.

Owning your customer: making subscriptions more than just a payment

Today, subscribers expect an easy-to-use, personalised TV service that allows them to watch their favourite movies, TV shows and sports events on any device, at just about any time. But with an increasing number of pay-TV services coming to market, it is more important than ever for operators to retain ownership of their customers.

If a customer signs up for a pay-TV service through an app from an app store, not only do operators lose 30% of their revenue through this stream, but they also lose ownership of their customers. They have forfeited the valuable customer data and lost the opportunity to understand them, engage with them and create a compelling personalised experience – a crucial component in today’s competitive market. Offering a TV service that goes beyond subscriptions to strengthen the operator/customer relationship will ultimately help businesses to grow, and in turn, be profitable.

So how do you do this? The first way to retain ownership of customers is to make signing up via a website, rather than an app, as simple and as slick as possible, in order to reduce customer drop-off. If the process is testing and time-consuming, customers will abandon it and simply look for an alternative service provider with an easier way to sign up. Signing up is the first interaction a subscriber has with the TV service, so getting this part right is absolutely critical!

Likewise, subscribers must be able to pay however they like – be it by a weekly, monthly or yearly subscription, or for every single piece of content they want to watch. Operators must also make sure customers can pay using the payment method they prefer – whether that’s credit or debit card, through e-Wallets like PayPal, or even via cash vouchers. With more content available than ever before both locally and internationally, customers expect to purchase and subscribe to the services they want, no matter which international or local payment option they prefer.

Another important factor is enabling customers to subscribe and watch services on the devices they like, which means managing delivery to multiple devices efficiently and allowing customers to pick and choose which screens they subscribe to services on.

But perhaps the most important of all is delivering a service that understands how subscribers engage with content. Utilising valuable data to analyse behaviour is critical – for instance, if a customer frequently watches sport, they are not going to be enticed by offers for children’s movies. Giving subscribers a personalised service will increase customer satisfaction and loyalty, ultimately leading to increased revenue and profitability.  But to do this successfully it is essential to analyse customer viewing habits and use marketing tools to make personalised recommendations based on these, with enticing offers and discounts to maximise upsell and cross sell opportunities.

This factor is also important for proactive churn reduction. Access to subscriber data can help identify subscribers who are about to leave – and stop them. Targeting a customer off-season with an offer of a month’s worth of Premier League football matches might just entice them to renew their football package, saving the operator the acquisition cost of replacing them had they decided to leave.

So in order to truly own your customer it is important for operators to have a good subscriber management platform that allows sign up across devices, captures and analyses data and offers a range of different payment methods. While some operators are happy to pay 30% of their revenues to use an app store, using a subscriber management system gives companies independence and allows them to be in full control of their relationship with customers, helping to cut down costs, drive revenues and in turn, profitability.

At Paywizard we are helping pay-TV providers around the world ensure that they retain ownership of their customers. We understand that subscribing to a pay-TV service should be about more than just a bill; it should be backed by intelligent subscriber management technology, efficient payment processing and multi-channel customer service operations. Our Agile platform combines with our expert services to increase subscriber acquisition, retention, upsell and cross-sell. With Paywizard, subscriber management is about delivering an engaging end-to-end service that makes the subscriber happy – and willing to pay for content.

Outdoor Digital Menu Boards – The Move From Static to Digital

Digital menu boards provide many benefits to QSRs, particularly when it comes to day-parting. Most QSRs serve breakfast, lunch and dinner. With static signage, all of the day’s meal options either had to be squeezed onto one page of the menu board, or the front of the signage would display morning meal options while the back displayed afternoon and evening options – in which case the signage had to be changed manually every day. Digital menu boards also help significantly with upselling: through a digital menu board, QSRs can add motion to high-ticket items.

Big Screen Brightness and Clarity Getting Up Close and Personal

From the towering, brilliant monoliths in Times Square to attention-commanding casino glitz to digital signage that curves its way into the nooks around airport baggage carousels, the opportunities for communicating through digital video are staggering.

A wild ride down Fury Road with Mad Max

As the title implies, Mad Max: Fury Road is a mad and furious, high-action post-apocalyptic film set in a desert wasteland. The plot calls for a warlord’s harem to race across the sandy landscape in a desperate, high-speed bid for freedom from his ruthless henchman. Both escapees and their pursuers form an “armada” of armored vehicles. Hidden within that armada were members of the audio crew, trying to capture every bit of dialogue and sound effects, all while in motion. Faced with such a monumental challenge, veteran production sound mixer Ben Osmo and vehicle FX recordist Oliver Machin turned to Sound Devices’ 7-Series of digital audio recorders.

“I used four 788T-SSDs plus four CL-8s, and did mix down to each recorder, plus a two-track mix down to a 744T for dailies,” said Osmo. “I also had a 788T rigged in my sound cart and kept that in a larger truck for a couple of months, next to video split.” In addition to that equipment, Oliver Machin brought a sixth 788T in a bag to record extra vehicle FX when necessary.

Osmo said, “The use of multiple 788Ts became necessary when the challenge was to record multiple tracks under extreme conditions. The 788Ts were very versatile. As well as ISO tracks and mix downs, we were able to set up mix minuses with AUX sends into a monitor mixer. We had available 42 channels of radio mics. This was because of the repeater systems and different RF blocks in play, so we could pre-rig vehicles ahead of time, and in my van, I would then cross over to the correct receiver blocks once they were in action.”

There was also a separate action-unit sound team, using a simpler system, still pursuing the action and – because the vehicle sounds were so loud – providing a usable guide track of dialogue for future automated dialogue replacement (ADR).

Microphones were hidden in the cabin and on the principal cast, in the engine bays, near exhausts, on top of the ‘War Rig’ (the main characters’ getaway vehicle), and on a vast number of supporting cast members in other vehicles. Capturing all of that audio would be a major task on a normal sound stage, but the portability requirements of Fury Road would not allow for a typical film-studio audio setup, so the crew had to get creative.

The crew used a 4×4 vehicle belonging to Osmo, which they dubbed the Osmotron. “Instead of having sound carts traditionally… that wasn’t going to cut it on a road movie travelling at 80 or 90 miles an hour across the desert. Nobody was going to keep up, so we built into his vehicle huge racks of radio mic receivers.”

“It was lucky that I had all SSD 788Ts,” Osmo said. “So, even though most of the filming was off road, they performed exceptionally well under extreme vibration.” Separately, a 744T was suspended in a pouch so it could absorb the shocks of the Namibian desert during the six-month long production schedule. “They never skipped a beat, especially when travelling and recording on very bumpy and dusty terrain.”

Adding to the complexity, the cast members were essentially in a rusty box, so RF reception had to be rethought, making repeaters sometimes necessary. The crew set up three multiplex systems (which Osmo designed with the assistance of RF experts) with RF combiners and high-powered transmitter boosters to achieve a range of 1 to 3km, not only for recording purposes but also to aid communication behind the scenes.

“As we travelled long distances, the walkie talkie repeater towers were often out of range,” Osmo said, “so I was asked to provide my comms in the Lectrosonics radio mics and IFB systems to director George Miller and first AD and co-producer, PJ Voeten, as they also often were great distances apart—at least 500m to 1.5km—and they were able to have hands free communication. Cinematographer John Seale and two of his operators were on this system, and the first AC camera people, as well.”

Comms were also used to feed audio to IFB receivers for cast members, including Hugh Keays-Byrne, who played Immortan Joe. As sound mixer, Osmo also had to feed a music mix to musicians wearing ear wigs, to help them keep time to the beat while riding atop the ‘Doof Wagon’ vehicle and playing instruments such as drums and a flaming guitar.

When the action call came, only the camera tracking vehicles, SFX, and the lonely sound van were in pursuit. Mark Wasiutak, key boom operator, travelled on the hero vehicles when cameras were on board. He was able to troubleshoot with assistance from the rest of the sound crew whenever the armada was stopped for checks.

The 788T-SSD is equipped with eight full-featured microphone inputs and 12 tracks of recording. In a compact, lightweight, stainless steel and aluminum chassis, the 788T-SSD accommodates individual controls and connectors for each of its eight inputs, as well as numerous additional I/O and data connections. Mounted to the 788T-SSD, the optional CL-8 accessory is a powerful mixing control surface, providing rotary faders for each of the recorder’s eight inputs, plus input routing and setting control.

The 788T-SSD comes with a factory-supplied, high-performance solid-state drive, which provides several important benefits: vast internal storage capacity (continuous recording of over 60 hours of 24-bit, eight-track audio at 48 kHz), better immunity to shock and temperature extremes, and zero acoustical output. The SSD also speeds up data transfers thanks to its higher transfer rate compared with a spinning hard drive.
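The quoted record time can be sanity-checked from the raw data rate. A rough calculation (not from the source, and ignoring file-format and filesystem overhead) shows that 60 hours of eight-track 24-bit/48 kHz audio needs roughly 250 GB:

```python
# Back-of-the-envelope storage estimate for 8-track, 24-bit, 48 kHz audio.
# Ignores WAV/BWF headers and filesystem overhead.
TRACKS = 8
SAMPLE_RATE = 48_000        # samples per second, per track
BYTES_PER_SAMPLE = 3        # 24-bit = 3 bytes

bytes_per_second = TRACKS * SAMPLE_RATE * BYTES_PER_SAMPLE   # 1,152,000 B/s
hours = 60
total_gb = bytes_per_second * 3600 * hours / 1e9

print(f"{bytes_per_second / 1e6:.3f} MB/s -> {total_gb:.0f} GB for {hours} hours")
# about 249 GB, consistent with the "over 60 hours" claim for a large internal SSD
```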

Another benefit of the 788T is its reliable timecode jamming capability, making it right at home in complex multi-camera sync-sound productions like Mad Max: Fury Road. Although the 788T has an on-board, high-accuracy Ambient timecode generator, on location for Fury Road the crew used Ambient master clocks with GPS antennas set to Greenwich Mean Time. All cameras were supplied with Lockit boxes, and Denecke slates were fitted with Ambient Lockits.

Osmo said, “My 788T recorders and the 744T were jammed from the same Ambient master clock. All the recorders synched up beautifully and never missed a beat.”

Fury Road was recorded at various locations in Namibia; Cape Town, South Africa; and Sydney, Australia.

By Ben Osmo, production sound mixer

Prospects for premium OTT in the USA

A snapshot of industry perspectives on the evolution of the market

Premium OTT services – subscription film and TV services delivered over the open internet to connected devices – are proliferating in the USA. How do industry participants believe the market and competitive environment will develop through to 2018? What are the prospects for growth and what factors will drive or hold back the market? Which categories of OTT provider are most likely to succeed?

Balancing consumer demands with security – how operators should address new technology

Consumers have come to expect a level of service from operators that matches their experience in other industries – new content services must be delivered as soon as possible to avoid frustration. As well as time-to-market, a second key consideration is protecting the investment that operators put into new products and services. While it is important to roll out new features rapidly, it is equally important to make sure that revenue from offering new services is not absorbed by illegal activity, such as the theft of new 4K content.

Operators realise that this is a balance that needs to be struck, but many are hamstrung by legacy architectures and multiple technology partners which can significantly delay time to market. It can take time for existing suppliers to update products to take account of new technologies, delaying roll out.

To avoid ‘technological lock-in’ as they wait for suppliers to bring new technology online, operators need to take direct control of all security choices that underpin their whole ecosystem, ensuring the provisioning of appropriate security assets for each service and smooth interaction with the workflows of their chosen suppliers and the different certification authorities.

A case in point can be seen with the adoption of UHD 4K across the industry, which requires operators to negotiate a specific set of security requirements, often with a variety of providers and vendors. The exacting criteria require operators and their suppliers to work closely together to ensure that content is adequately protected from piracy while flexibility is still preserved to allow operators to design simple-to-use and attractive business models for their markets. With this in mind, there are three complex security considerations for operators:

Compliance to MovieLabs specifications

With 4K content and the UltraHD consumer experience gaining popularity in the industry today, Hollywood studios have come together to issue a set of guidelines – the MovieLabs specifications – with enhanced content protection requirements for UHD movies. Operators who wish to screen UHD movies, especially for early release windows, must work with solution providers that comply with the stringent specifications, which include but go beyond forensic watermarking. The specifications cover three sections – DRM System Specifications, Platform Specifications and End-to-End System Specifications.

Defence against piracy

The piracy landscape has been changing over the past few years, as improved broadband access has allowed content redistribution over the internet to flourish. Pirate services have also become more flexible and sophisticated, offering a wide range of high quality content on different devices, with attractive business models that compete with operators.

One of the main challenges in securing UHD 4K content is preventing its easy and quick redistribution over the internet. 4K content is naturally a big draw for pirates. If a breach occurs during the early release window for a UHD 4K movie, the financial impact could be far greater than anything seen before. Operators must have a comprehensive anti-piracy strategy and service partner in order to secure UHD 4K content and protect their investment.

Flexibility to set security rules

To get the most value out of UHD 4K content, operators must be able to support a variety of business models for their markets. This means they need a security solution that can set flexible rules on a per event basis and over the lifecycle of the content. For example, movies and sports events would have different security requirements, and the solution will need to support different levels of security on both older and new UHD 4K TVs that support different HDCP versions. They may also want to offer UHD 4K content at a lower resolution on analogue TVs, or on a home network to different devices.
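As an illustration of what ‘flexible rules on a per-event basis’ might look like, the sketch below maps content type and resolution to required protections, downgrading the output when a device’s HDCP version falls short. The policy table, function name and version strings are hypothetical examples, not Irdeto’s actual product or API:

```python
# Hypothetical per-asset security policy lookup (illustrative only; a real
# DRM policy engine is far more involved and vendor-specific).
POLICIES = {
    # (content_type, resolution) -> required protections and fallback
    ("movie", "uhd"): {"min_hdcp": "2.2", "watermark": True,  "fallback_res": "hd"},
    ("movie", "hd"):  {"min_hdcp": "1.4", "watermark": False, "fallback_res": "hd"},
    ("sport", "uhd"): {"min_hdcp": "2.2", "watermark": True,  "fallback_res": "sd"},
}

def allowed_resolution(content_type, resolution, device_hdcp):
    """Downgrade output resolution when the device cannot meet the policy.

    String comparison is adequate here only because HDCP versions are
    single-digit 1.x / 2.x labels.
    """
    policy = POLICIES[(content_type, resolution)]
    if device_hdcp >= policy["min_hdcp"]:
        return resolution
    return policy["fallback_res"]

print(allowed_resolution("movie", "uhd", "2.2"))  # full UHD on a compliant TV
print(allowed_resolution("movie", "uhd", "1.4"))  # downgraded to HD on an older TV
```

The same table-driven approach extends naturally to per-event rules, such as tightening requirements during a sports event’s live window.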

The UHD 4K example above gives a clear overview of the challenges operators face when dealing with new technologies. Multiple security partners and products can complicate the roll out of new technology beyond an acceptable timeframe for consumers, and raise the prospect of customer churn. Operators must demand a model that allows them to manage partnerships on a vendor-neutral basis, putting them back in control of all the security and business decisions relating to their content distribution platforms. In addition, they must avoid technology lock-in, so they can make use of innovative product features and functionalities as they become available.

This is especially relevant in today’s evolving multiscreen environment, a fast-moving part of the industry where operators need to ensure that current and future business plans are not impeded by limited technology options.

By Peter Oggel, VP Product Management, Irdeto

Managing increasing data demands and resource contention challenges in Media & Entertainment workflows

Today’s media production workflows are extremely demanding. High resolution files and large teams of collaborators push storage systems for greater amounts of bandwidth and larger, more efficient storage capacities. The EditShare XStream EFS Shared Storage System is a scale-out media storage solution that has been specifically designed to address the challenges of today’s media production workflows.

This new system is based on the EditShare File System, a parallel file system that uniquely addresses the needs of a modern media production organisation by overcoming resource contention issues that normally impact the performance of shared storage systems.


Technology: Friend or foe of the pirates?

Ubiquitous access to broadband and mobile internet combined with the increased power and screen quality of consumer devices has transformed how viewers consume content. Services such as catch-up, Over-the-Top (OTT), Video on Demand (VoD) and multiscreen offer consumers a large variety of options to enjoy content anywhere, anytime and on any platform. However, while technology is increasingly being used to enable operators and broadcasters to improve the quality of experience and service, it can act as a double-edged sword, helping pirates to develop new forms of illegal content sharing. Pirated Blu-Rays and DVDs, card-hacking and key sharing of pay-TV systems constitute the traditional weapon arsenal of pirates; but the advent of simplified technology for online video streaming means that the illegal redistribution of content over the Internet is now the greatest piracy threat facing the content industry. For example, emerging technologies like live streaming apps Periscope and Meerkat have recently hit the headlines as concerns grow that consumers and pirates are abusing the services to illegally re-stream high quality content, such as the Mayweather – Pacquiao ‘Fight of the century’ and Game of Thrones season premiere.

According to Variety, Game of Thrones managed to beat its own piracy record after the episode ‘Kill The Boy’ was illegally downloaded more than 2.2 million times in just 12 hours after first being aired on pay-TV. Lack of content accessibility is often described as a key piracy driver, and given the quick adoption of streaming applications as well as the low technical barrier to use these products, we can expect that consumers will take an ever more active part in enabling content to become accessible across the globe, often without knowing that they are infringing the law. This consumer desire to share their passion with their worldwide peers was particularly prevalent in the case of the Mayweather – Pacquiao boxing match, and premium sports content is increasingly a key target for content theft.

For an industry that relies on monetising live action, the idea that professional pirates can deliver HD sport in real time at a fraction of the legal cost, if not for free, is potentially disastrous. Videonet recently revealed that illegal viewing of major sports events is doubling every six months, posing a serious threat to the broadcast industry, which invests in premium sports content rights to stop subscribers from cutting the cord or switching their subscriptions to OTT services. Unless they remain in control of sports content, these companies are at risk of losing valuable customers and revenue, as well as their competitive edge.

Traditional content encryption technologies such as Conditional Access and Digital Rights Management remain essential tools for operators looking to ensure that high value content is delivered only to their legitimate subscribers. However, these technologies are not designed to prevent illegal redistribution of content once an authorised consumer has legitimately played it. Internet monitoring combined with takedown notices can help reduce the number of illegal streams for each piece of content illegally shared, as long as the hosting service itself complies with the Digital Millennium Copyright Act (DMCA). Ultimately, these measures are proving insufficient in the face of tech-savvy pirates, which is why the industry is increasingly deploying forensic watermarking in addition to other content protection methods – whether on the server side for OTT or directly in a set-top box for broadcast – to ensure that pirated content is uniquely traceable back to the source of the leak.

Forensic watermarking is the means by which a unique and imperceptible identifying code is inserted into a media asset, whether a movie, video or any other type of content. By adding a unique identifier disseminated throughout a piece of media, that content, along with its owner, becomes identifiable. A digital watermark is used to enforce contractual compliance between a content owner and the intended recipient; it provides proof of misuse and a link back to the source of the leak.
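The embed-and-extract idea can be illustrated with a deliberately naive sketch: hiding a subscriber ID in the least significant bits of sample values. Real forensic watermarks such as NexGuard use robust, imperceptible techniques designed to survive transcoding and re-capture – nothing like this toy version, which only shows how a recovered ID links a leaked copy to its source:

```python
# Toy illustration of forensic watermarking: hide a per-subscriber ID in the
# least significant bits of the first few media samples. Purely didactic;
# trivially removable, unlike a production watermark.
def embed(samples, subscriber_id, id_bits=32):
    bits = [(subscriber_id >> i) & 1 for i in range(id_bits)]
    # Overwrite the LSB of the first id_bits samples with the ID bits.
    return [(s & ~1) | b for s, b in zip(samples, bits)] + samples[id_bits:]

def extract(samples, id_bits=32):
    # Reassemble the ID from the LSBs of the first id_bits samples.
    sub_id = 0
    for i in range(id_bits):
        sub_id |= (samples[i] & 1) << i
    return sub_id

samples = list(range(100, 164))            # stand-in for media sample values
marked = embed(samples, subscriber_id=0xC0FFEE)
assert extract(marked) == 0xC0FFEE         # leaked copy traces back to the subscriber
```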

Another advantage of forensic watermarking is its capacity to play a part in educating consumers. Illicit services increasingly look similar to legitimate services; many of them carry advertising and can even request viewers to pay for content. This means that a number of users may believe the service to be legal, and need to be informed that they are infringing the law. Using forensic watermarking to identify the source of an illegal stream, operators and broadcasters can turn the pirate stream into a tool for consumer education. By inserting a visual overlay into the pirated session, operators can provide information on legal alternatives to the illegal content as soon as it is stopped, as well as information about content copyright.

As the content industry increasingly turns to forensic watermarking to ensure that premium video is fully secure, technology providers need to ensure the robustness of their solutions and adapt to ever-changing use cases. With the addition of real-time watermark detection for live sports and Content Delivery Network-agnostic watermarking for OTT streaming, forensic watermarking solutions such as NexGuard provide confidence and peace of mind for content owners, broadcasters and operators who want to stop illegal content redistribution without inconveniencing consumers accessing content legally.

By Harrie Tholen, SVP, Sales and Marketing, NexGuard

How to make video ad viewability work for you

Viewability has suddenly become the most sought-after currency in the video industry. Its rapid rise is a direct result of last year’s industry headlines around the poor quality of ad views and increasing ad fraud. The topic’s pervasiveness, particularly when referring to video advertising, should be seen as a reflection of growing pains within the industry. Growing pains are to be expected, yet at this moment buyers and sellers are caught in a technology gap, somewhere between a perceived industry ideal – where ads are paid for only if they are viewed – and a state where trading is based on impressions.

The discussion around viewability would greatly benefit from a solid definition of what viewability is, and an explanation of how we can measure it. While there are several MRC-accredited measurement vendors in existence, each has slightly different criteria for video viewability, and a standard form of measurement has not yet been recognised. However, the industry is making strides toward a uniform definition; brands, publishers and trade groups are working together on a definition of what should count as a ‘billable ad impression’ online.
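As a concrete sketch, the widely cited MRC threshold for video – at least 50% of the player’s pixels in view for at least two consecutive seconds – can be expressed as a simple check over per-second visibility samples. The function and the 1 Hz sampling model here are illustrative, not any vendor’s accredited methodology:

```python
# Sketch of a viewable-impression check using the commonly cited MRC video
# threshold: >=50% of the player's pixels in view for >=2 consecutive seconds.
# Assumes visibility is sampled once per second; real vendors differ.
def is_viewable(visibility_samples, pixel_threshold=0.5, seconds_required=2):
    """visibility_samples: fraction of player pixels in view, one value per second."""
    run = 0
    for fraction in visibility_samples:
        run = run + 1 if fraction >= pixel_threshold else 0
        if run >= seconds_required:
            return True
    return False

print(is_viewable([0.1, 0.6, 0.7, 0.2]))   # True: two consecutive seconds at >=50%
print(is_viewable([0.6, 0.3, 0.8, 0.4]))   # False: never two seconds in a row
```

The gap between vendors comes down to exactly these parameters – how visibility is sampled, what counts toward the pixel threshold, and whether the seconds must be consecutive.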

The sticking point between the old and the new can also be observed in the journey from CPM-based selling to engagement-based selling. It’s a situation causing angst for publishers; agencies and advertisers are increasingly looking at campaign goals based on viewable ads rather than reported impressions. It doesn’t take an Einstein to understand that a ‘viewability-only’ focused model will require you to deliver a lot more ads to meet your goals, but on the same budget.

The good news is that by putting the right practices in place, viewability can be made to work for you. You can start laying these foundations by following this five-step checklist:

1- Know your potential

It’s critical to have a viewability measurement solution integrated with your ad server. Even if it isn’t bulletproof, you need to have a solid understanding of which assets will be likely to meet buyer expectations, and which may fall short. In addition to pure viewability measurement, video analytics can also tell you which assets are driving the highest engagement across all device types. This will help you understand how every aspect from page placement to player size is affecting engagement or causing abandonment. Without this readily available video intelligence, you’re already shooting in the dark.

2- Take action

Now that you have better intelligence on viewability performance across all of your assets, the most important part of staying in control is optimising your inventory. Create packages that bring together your highest-performing assets, and offer those at a higher CPM. Use the data you’re collecting to understand how to improve the assets that are underperforming. Analytics can uncover the factors that are driving your highest engagement, so you can replicate them across your entire catalogue. Let the data be your guide.

3- Be proactive

Too often, buyers and sellers speak different languages when it comes to expectations around viewability. The simplest advice, and something that is often overlooked, is this: have a conversation with your buyer about how you’ll measure campaign outcomes. Share what you’ve learned about your inventory and why your offering demands a premium. Back up your case with hard data and insights derived from your measurements. Don’t be afraid to engage – the industry is moving fast enough that it’s never a bad idea to stop, talk and ensure you’re in agreement about what success means.

4- You’re in the same boat, now get on the same page

It’s important to understand that buyers and sellers can equally be mystified by the ongoing gyrations of industry standards and tools. So it’s even more important to align yourself with your buyer’s interests – and clearly they aren’t interested in ads that aren’t viewable. Particularly, agree on the tools you’ll use to measure viewability. Be willing to invest in the tools your buyer is using, and be sure you’re taking the time to look at things from their perspective as well as your own.

5- Test and learn before you fly

As we’ve already discussed, there are many tools to choose from, and results can vary dramatically. Run a trial and compare reports with your buyer. Understand the potential discrepancies between your data and theirs, and resolve them in advance. Begin the flight knowing there will be no surprises as you bring the campaign home.

At the end of the day, we’re all in this together, and these approaches can instil buyers’ trust in their trading partners and confidence in their campaign goals. When buyers and sellers begin to standardise on tools and processes, tech vendors will have greater incentive to align behind the viewability measurements that rise to the top. Combined, all of this makes for a rather organic remedy to the industry’s growing pains and will help the market as it matures.

By Maria Flores, VP of Programmatic at Ooyala


A content everywhere game plan

In 2015, media enterprises face a herculean challenge of unification, collaboration, production, and distribution of personalised content on connected devices across diverse geographies. The content production and broadcasting infrastructure of traditional media enterprises is often built on multiple third party solutions. Systems typically lack agility, scalability, and efficient workflows for end-to-end multi-platform asset monetisation.


Ovum’s survey addressed the pivotal challenge of how new age media enterprises not only leverage MAM to centralise their digital assets into a single repository, but also how they maximise their content value via a collaborative production and distribution ecosystem.


This whitepaper addresses the most commonly cited questions pertaining to core value and ROI of MAM technology to media enterprises in the emerging content everywhere era.


How to select a GPS vehicle tracking system for your business

A GPS-based vehicle tracking system can be a highly effective tool to help lower costs and manage your fleet more efficiently.  But with so many different systems to choose from – each with a different array of features, functions, and benefits – how can you determine which will best fit your business?

The next step in audio networking

Somehow it doesn’t seem quite appropriate to assess a year in networking technology in terms of its ‘eventfulness’. Nonetheless, it is hard to resist describing 2014 as a highly eventful year for audio networking – possibly even the one in which we witnessed a tipping point away from traditional, point-to-point connectivity.


Of course, many long-serving technologies – notably MADI – remain in heavy usage. But with Audinate’s Dante continuing to record new licensees at a formidable rate (in excess of 180 at time of writing), ALC NetworX’s Ravenna starting to achieve market traction, and the AES67 networking standard providing further lustre to Layer 3-based networking, the impression of a proper breakthrough is no mirage.


What’s more, the upward trajectory looks set to continue in 2015. A forthcoming control standard may complement AES67, while the AVnu Alliance – flag-waver for the Audio/Video Bridging (AVB) movement – will benefit from a certified audio endpoint reference platform, a newly created Industrial market segment, and a brace of additional members (namely, Belden, General Electric and National Instruments).


But what about the situation at the ground level? Virtually everyone involved with networking talks about upping the educational effort to reach more potential end-users – and few would deny the desirability of that. However, the early adopters among the integrator community have already passed this point and are actively thinking about how to deploy specific networking technologies in their fixed install projects.


Finding a new way to approach the networking issue isn’t easy, but Installation decided that it might be insightful to address three specific scenarios and invite leading manufacturers to identify the initial considerations to be taken into account when plotting a networked audio system. These, then, are the ‘headline’ priorities that should be heeded; sadly, there isn’t scope here to go in-depth into multiple design options.

The first scenario revolves around a mid-size conference facility requiring a comprehensive audio networking solution that is fully integrated and easy to use. Picture one large (1,000-seat) capacity room and a smaller (500-seat) space, along with respective control rooms and back office areas. There should also be opportunity to expand the configuration relatively painlessly in the future.


A similar degree of flexibility is called for in our second scenario, which involves a multipurpose performance venue. Holding a capacity crowd of 1,500, the venue requires the ease of (re)configuration necessary to host everything from stand-up comedy sets to full band performances. The networking scheme should cover all console positions as well as control and rack room areas.


The final scenario centres upon what is apparently a significant growth area for networking vendors: a college of higher education. Coverage of long distances is a particular priority for this environment, with the audio networking design needing to accommodate an auditorium (approximate capacity 800-1,000), two radio studios and a couple of classrooms. With the site expected to undergo a development programme in the mid-term future, it should also be possible to achieve easy expansion of the system.


1) Conference facility

Andreas Hildebrand, senior product manager at ALC NetworX, highlights some of the key factors that would have to be taken into account when designing a networked system in a conference facility. “The first thing I would look at is whether my conference system is something that can live in a local network infrastructure,” he says. “If it is bound to the borders of a venue, you could potentially run a Layer 2 environment, which gives the option of using a technology like AVB. But if it needs to run across network boundaries, into several local area network segments, then you need to look at a Layer 3 solution. Performance-wise you wouldn’t experience much difference between a Layer 2 and a Layer 3-based solution.”


But, he continues, “current and future product availability would be another key issue here”. While Ravenna itself is a Layer 3-based technology successfully introduced in certain application areas (mostly professional broadcasting and high-end recording), Hildebrand admits that there isn’t much in the way of available, Ravenna-supporting product suitable for conference applications at this time. “But it is certainly a market we are aware of and looking at,” he confirms.


Maintaining low latency would be another preference for conference applications, says Audinate CEO Lee Ellison, who highlights the ability to put together a Dante-based conference facility system using a wide variety of vendors’ equipment. “In terms of the conference market, [there is Dante deployment] for products from Symetrix, BSS, Biamp, QSC, Shure, Audio-Technica and others. There is also a wide selection of I/O boxes and suchlike to make it easier for the installer to connect, use and change the system,” says Ellison.


The general Harman philosophy, explains Harman International senior manager for systems design Adam Holladay, is that “we don’t want to force the customer down a particular route. The emphasis, therefore, is on offering as many solutions as possible in order to meet the needs of a particular project – [not least] because in an installed sound system, certainly a larger one, we find that the IT network is often determined well in advance of the audio networking protocol.”


With that caveat in place, Holladay says that he would probably recommend a Dante-based deployment of Harman equipment for conference applications. “In a conference venue, the Ethernet or network infrastructure will probably have been determined by an IT division beforehand. This would basically rule out using AVB as the AVB solution we offer is only going to function on AVB-compatible switches, and at this time there are not many of those,” he says.


Suggesting a possible workflow, Holladay says that Dante could be used to network between Soundcraft consoles and BSS Soundweb signal processing. “For ease of use, I would then suggest our BLU Link protocol to daisy-chain between BSS and Crown amplifiers; in essence to turn the rack room into a large matrix taking the audio off the network. This means that you can use the network for system-wide distribution, but then for processing audio from the processor box to the amplifier box, there is no need for audio from the network because they are right next to each other in the rack,” says Holladay.


2) Performance venue

Again pinpointing some of the main requirements for a venue of this kind, Hildebrand says: “It would be good if the selected networking solution could offer some interoperability schemes. For example, this would mean it is possible to extract some of the individual streams for an OB van in the event that a performance is to be broadcast.”

This need for interoperability would probably lead the consultant and venue operator in the direction of a Layer 3-based solution. “While you could use some sort of bridging or gateway technology from the mixing desk to produce an output that is suitable for the OB set-up, the more natural approach would be to use a Layer 3 approach right away.”

The specific advantages of implementing Ravenna in this case, suggests Hildebrand, would be “a very high flexibility in setting up the streaming formats and adapting to the latency requirements”.


UK-based audio interface specialist Focusrite is a long-term licensee of Dante technology. Invited to consider the roadmap for an installation of this kind, Focusrite product manager Will Hoult says that as a manufacturer of high-quality mic preamps “we would be looking to add the number of boxes required to satisfy the channel count, then depending on what audio workstation is used we would be able to connect to it. So for example, if it’s a Pro Tools HD system we can bridge directly in to it with a 32-channel RedNet 5 interface.”


Any such venue will inevitably include a mixing console as part of the network, “and we provide bridging interfaces that allow people to use pretty much any console, whether it has a network connection or not. It’s worth noting that one of the drivers behind developing the AES and MADI bridges that we now offer is to be able to connect equipment that is not endowed with a network port, to a Dante network.”


While network design is inevitably impacted by the maximum Ethernet cable length of 100m, Hoult points to the availability of fibre modules that allow the user to cover much greater distances. “For example, you can get a 40km fibre module capable of achieving a single mode fibre connection up to 40km long, which allows you to [bring the network] to a variety of different areas,” he says.


3) College of higher education

In a large, potentially cross-campus deployment as might frequently be found in an HE facility installation, a Layer 3-based solution may again be preferable. “You would probably go Layer 3 as you would need to route audio, video and data across network boundaries,” says Hildebrand. “You rarely have a facility like that sitting on a single, big local network segment, so you would need routing capabilities, and that means you need Layer 3.”


He continues: “If wide area connections are also part of the setup, the networking solution needs to be capable of offering high flexibility in the choice of operating parameters on individual routes to different destinations in order to deliver satisfying performance with the lowest possible latency, matching the individual jitter characteristics of the various WAN routes.”


Holladay confirms that, once again, “the chances in a college of higher education are that the IT specification is not going to be under the audio designer’s control. Since it is highly unlikely that the IT department would have chosen a [Layer 2-based] AVB-compatible switch as there are relatively few of them, that means a [Layer 3 design] would be the preferred option.”


In a college of higher education, remarks Hoult, the ability to deliver audio quickly and efficiently where it is required is an obvious priority. Once again, he suggests, a Dante-based deployment can come into its own in this environment. “Often you would be looking to move the audio equipment around the facility on something like a 12U rack, and in that regard the ease of use of Dante makes that a real possibility,” he says. “It’s based on the flexible location of devices and their identity, so it remembers which device audio was being received from previously. It might now be in a completely different location, but audio would still be received properly, and that makes a mobile rack-based approach – something that would be ideal for an HE college – a realistic possibility.”


Time of transition

Anecdotal evidence aside, it is quite difficult to ascertain precisely how widely the newer technologies are being used in real-world applications. But the experience of interface, conversion and routing technology products developer DirectOut does underline the current transitional state.


The company is currently completing work on its first Ravenna-based product – “we are finalising that now and expect to be able to announce more details shortly” – but DirectOut CTO Stephan Flock confirms that MADI conversion technology remains the bedrock of its current offer.


“It’s a slightly odd situation to be talking about the benefits of MADI at the same time we are also pursuing the road of audio over IP,” admits Flock. “But with MADI, you have defined point-to-point connectivity and very low latency. There is also the fact that it is a standard with a weight of history behind it, and it is very open with regards to selecting equipment and putting together a system design. There is a sense of reassurance that you are going to have a compatible way of working, and that can still be a challenge with networked solutions.” And that, in a nutshell, is why MADI will doubtless remain an integral part of the landscape for many years to come.


But as the above responses indicate, Dante, in particular, is now making dramatic inroads into all manner of install applications. Dependable, Layer 3-based networking is bringing unheard-of flexibility to the built environment – so expect to see it applied far beyond the three scenarios outlined in this feature.


By David Davies, SVG Europe managing editor and freelance pro-AV writer

Bringing the Barbican into the new age of networking

Located in the heart of London, the Barbican is the very definition of the modern multi-performance venue. Although perhaps best known as the home of the London Symphony Orchestra, the Barbican regularly stages performances from across the musical spectrum, as well as playing host to a wide variety of cinema screenings, presentations and workshops.

Backed by owner and financier the City of London Corporation, the Barbican has kept pace with the changing audio times, with recent developments including the installation of additional Meyer Sound equipment on two foyer stages during 2012.

Now, in the latest phase of work, the Barbican’s concert hall has been provided with multiple new DiGiCo mixing consoles and a bespoke audio networking infrastructure masterminded by Euan MacKenzie and Chris Austin from Autograph Sales & Installations. As Austin explains, the new configuration had to be capable of satisfying both current and likely future requirements.

“It had to be immediately familiar and acceptable to both visiting engineers and the in-house team, and able to provide enough capacity not only for their immediate needs but also to offer scope for future expansion,” he says. “It also had to be able to handle in excess of 100 input channels, to have dual-engine backup facilities and to allow the technical team to easily source compatible additional control surfaces when necessary.”

Tom Shipman from the Barbican’s audio team – one of the venue’s regular in-house mix engineers – was a keen advocate of a DiGiCo-based solution, having recently had the chance to play around with an SD9. “I got to grips with it very quickly. It was amazingly easy to use and felt very intuitive,” says Shipman.

After careful consideration, the audio team opted for an SD7 at FOH (equipped with Waves to allow visiting engineers to bring in SoundGrid servers) and an SD9 for the control room. The SD7 was supplied with an SD-Rack, which provides 56 inputs and 24 outputs, as the main stage rack, plus an SD-Mini rack with 24 inputs and 8 outputs, which can be used as a remote connection box or integrated into the main system as required. A second SD-Mini Rack was installed in the control booth to accept inputs from wireless microphones and provide outputs to the main house sound system.

DiGiCo provided guidance throughout the specification process. “The DiGiCo team invested time to fully understand the Barbican’s requirement today and offered them a solution that can expand as more demands are placed upon them,” says DiGiCo MD James Gordon. “It’s no secret that adding the Barbican to the DiGiCo family is something we are very proud to have achieved. Their international recognition and reputation reinforces DiGiCo as the ultimate range of live sound consoles, and we look forward to working with them in the coming years to further strengthen our relationship.”

Flexible networking

But the specification of the DiGiCo systems was only one component of the project. With a similar emphasis on flexibility, Autograph was asked to design and implement a new audio network that would ensure adaptable system control and connectivity. “This was achieved by installing a discrete networked system using about 10km of cable and including almost 600 fibre terminations, at the same time as adding HD video capability and extending the existing Cat6 network,” explains Austin.

The DiGiCo desks and racks have been equipped with Optocore fibre connectivity on Neutrik opticalCONs, complementing the infrastructure installed by Autograph. “This links all the racks they are using for the show with the SD7 at FOH, or the SD9 in the booth, depending on what’s going on. Often they rent in another SD7 and link this in too for monitors,” says Austin.

The upshot is a highly flexible configuration that allows the desks to share every input and for them to be put “almost anywhere” within the concert hall and backstage areas, including the TV gallery, the BBC’s facilities backstage and the OB trucks outside. “The DiGiCo racks also provide MADI splits which are used to provide broadcast and/or recording feeds to the BBC, who are regular visitors,” says Austin. Indeed, the BBC contributed to the expense of installing the new tie-lines.

With a view to possible future requirements, the Autograph team also installed separate multi- and single-mode fibre for video and data use, supplementing the existing multi-mode cabling. Each location now sports a coax HD-SDI connection as well as Cat6.

Guiding PSNEurope around the Barbican shortly before a Heritage Orchestra-assisted performance by cult glam-electropop duo Sparks, Shipman pays tribute to the efficiency of the Autograph team. “It was really a very good cooperation with Autograph,” he says. “I would also highlight the quality of the work; for example, the cable terminations are superb.

“One of the BBC engineers who mixed the broadcasts from the recent London Jazz Festival highlighted how incredibly clean the sound was and actually suggested he would have to put more room noise in to make it sound more like a live mix. The whole project has gone so well, and we are very happy to have an infrastructure that sets us up nicely for the long term.”

Throwing the net wider than Soho

The broadcast and post production markets can be big city-centric, which is understandable – if not completely forgivable – because the main players in these businesses have their headquarters in places like New York, Paris and London. Not only has England’s capital been accused of being especially inward looking, but specific areas there seem to think they are places unto themselves.

Soho is the most obvious example and today it is still a major focus for UK television and film production. But TV is a global market and digital technology has transcended distance and time zones to connect facilities at opposite sides and ends of the world. While its name might imply parochialism, connectivity and data management specialist Sohonet has played a part in connecting post production houses on many levels: internally for studios within the same premises; locally between neighbouring buildings; and, increasingly, linking facilities in other parts of the UK or different countries.

Sohonet was founded in 1995 when several London post houses came together to create a private network with the aim of improving productivity through the use of simpler, faster production chains. As more companies signed up for its services Sohonet began to outgrow its roots, to the point in 2003 when it became a private company after a management buyout.

The audio post production sector was an early adopter of ISDN (Integrated Services Digital Network) in the early 1990s as a replacement for expensive telecom circuits. As facilities looked for new technologies to replace ISDN Sohonet installed the first VDSL (Very high-speed Digital Subscriber Line) network in the UK, at Pinewood Studios, in 2001. Since then it has offered a variety of WAN and LAN technologies; it is now promoting its privately managed high quality IP network as more post houses adopt Audio over Internet Protocol (AoIP).

Sohonet chief technology officer Ben Roeder says this offers uncongested connectivity to the company’s clients, with the network now linking leading facilities in Europe, North America, Australia and New Zealand. “This involves a lot of live sessions,” he comments, “but we’re also moving files around at high speed for audio description, subtitling and quality control, as well as multitrack mixes and commercials.”

With this as its foundation Sohonet has been building on its service offering. Last year it formed a partnership with software developer Signiant to add the Media Shuttle program to Sohonet Hub, giving customers further exchange tools in the form of Filerunner. On the storage side of its business Sohonet is now looking at Cloud-based Object technology, which Roeder sees as “the way forward”.

Sohonet routinely works with other technology providers, supplying the platform on which systems like Source-Connect, Digigram’s AoIP products and the T-VIPS (now merged with Nevion) JPEG 2000 system can be used. Roeder observes that in many respects audio takes precedence over video because of the large number of elements involved: “There are the six channels in a 5.1 Dolby mix and lots of channels in JPEG 2000 and SDI streams. In terms of all these formats we’re quite agnostic.”

Roeder adds that digital and file-based operations have given more scope and led to the proliferation of audio tracks that now need to be transferred between facilities. “Tapeless systems are allowing people to include audio description and other services, which couldn’t be done easily on tape,” he says. “It was getting to the point where two SDI XDCAM machines would have to be involved, one of them exclusively for the audio. But now people can have as many channels as they want.”

This in turn brings up the question of network capacity but developing technologies are appearing that offer companies like Sohonet the flexibility their clients need. “We’re testing 4Gb a second connections and there’s also the potential of Ethernet systems that can be built up in multiples of 25GHz channels,” explains Roeder. “All this will give more bandwidth and more services.”

Networking of this kind, Roeder says, suits any scale of application, from connecting studios and enabling machine rooms to be located in other buildings to providing a link for sessions between facilities on different continents. “Ultimately it’s about adding bandwidth that removes distances,” he concludes. “In terms of networking technologies this is only the beginning.”

By Erica Basnicki, freelance music technology and pro-audio journalist

Symetrix is the solution for all-new Hafjell

In 1994, the Hafjell ski resort, near Lillehammer, fell under the global media spotlight when Norway hosted the Winter Olympic Games. Two decades on and Hafjell is preparing to stage events as part of the 2016 Youth Olympic Games and, more imminently, the FIS Alpine Junior World Championships – a hectic programme that has understandably prompted a review of the national slalom slope’s technological capabilities.

Financed by the Norwegian government, the project leaders enlisted Drobak-based audio distributor and systems designer Norsk Lydteknikk to devise the revamped sound set-up. Company principal Bjørn Fjeld confirms that forthcoming sports events and the need to accommodate future expansion were at the top of the priority list as he set about designing the new system.

“The new equipment had to be of the best possible quality to deliver both speech and music, and capable of being adapted to future requirements,” says Fjeld. “In line with these expectations, the Symetrix SymNet Radius 12×8 DSP was selected because of its ‘studio sound quality’ and powerful support for Dante media networking.”

The Symetrix device takes its place amongst a notably high-end spec that also includes Community R.5-66 and R2-474 loudspeakers (located in a total of six zones), Ecler DPA 2000 and DPA 1400 amplifiers and microphones from Clock Audio. Although Fjeld and his team undertook all design work and constructed the racks, the actual on-site installation was carried out by Lillehammer-based firm Østbye og Sletmoen.

Fjeld highlights the “robustness and easy-to-use nature” of the SymNet Radius 12×8 DSP, adding that the combination of Community loudspeakers and the Symetrix unit “also gives us the opportunity to utilise Community’s digital FIR filters (1024 taps) in the Radius’s speaker-management module to help deliver optimal performance of the whole system.” It should also be noted that deployment occupies significantly less rack-space than the previous system.

It’s an exciting time for Norsk Lydteknikk, which added Symetrix to its burgeoning distribution portfolio a little under 12 months ago. “It’s early days, of course, but we are already seeing some great sales for the SymNet Radius line and the Jupiter app-based DSPs,” reports Fjeld. “We are looking forward to communicating the advantages of selecting Symetrix equipment to an even broader base of users in 2015.”

Joined-up thinking

Somehow it doesn’t seem quite appropriate to assess the networking technology year in terms of its ‘eventfulness’. Nonetheless, it is hard to resist describing 2014 as a highly eventful year for audio networking – possibly even the one in which we witnessed some form of tipping point away from traditional, point-to-point connectivity.


Of course, many long-serving technologies – notably MADI – remain in heavy usage. But with Audinate’s Dante continuing to record new licensees at a formidable rate (in excess of 180 at time of writing), ALC NetworX’s Ravenna starting to achieve market traction, and the AES67 networking standard providing further lustre to Layer 3-based networking, the impression of a proper breakthrough is no mirage.


What’s more, the upward trajectory looks set to continue in 2015. A forthcoming control standard may complement AES67, while the AVnu Alliance – flag-waver for the Audio/Video Bridging (AVB) movement – will benefit from a certified audio endpoint reference platform, a newly created Industrial market segment, and a trio of additional members (namely, Belden, General Electric and National Instruments).


But what about the situation at the ground level? Virtually everyone involved with networking talks about upping the educational effort to reach more potential end-users – and few would deny the desirability of that. However, the early adopters among the integrator community have already passed this point and are actively thinking about how to deploy specific networking technologies in their fixed install projects.


Finding a new way to approach the networking issue isn’t easy, but Installation decided that it might be insightful to address three specific scenarios and invite leading manufacturers to identify the initial considerations to be taken into account when plotting a networked audio system. These, then, are the ‘headline’ priorities that should be heeded; sadly, there isn’t scope here to go in-depth into multiple design options.

The first scenario revolves around a mid-size conference facility requiring a comprehensive audio networking solution that is fully integrated and easy to use. Picture one large (1,000-seat) room and a smaller (500-seat) space, along with respective control rooms and back office areas. There should also be opportunity to expand the configuration relatively painlessly in the future.


A similar degree of flexibility is called for in our second scenario, which involves a multipurpose performance venue. Holding a capacity crowd of 1,500, the venue requires the ease of (re)configuration necessary to host everything from stand-up comedy sets to full band performances. The networking scheme should cover all console positions as well as control and rack room areas.


The final scenario centres upon what is apparently a significant growth area for networking vendors: a college of higher education. Coverage of long distances is a particular priority for this environment, with the audio networking design needing to accommodate an auditorium (approximate capacity 800-1,000), two radio studios and a couple of classrooms. With the site expected to undergo a development programme in the mid-term future, it should also be possible to achieve easy expansion of the system.


1) Conference facility

Andreas Hildebrand, senior product manager at ALC NetworX, highlights some of the key factors that would have to be taken into account when designing a networked system in a conference facility. “The first thing I would look at is whether my conference system is something that can live in a local network infrastructure,” he says. “If it is bound to the borders of a venue, you could potentially run a Layer 2 environment, which gives the option of using a technology like AVB. But if it needs to run across network boundaries, into several local area network segments, then you need to look at a Layer 3 solution. Performance-wise you wouldn’t experience much difference between a Layer 2 and a Layer 3-based solution.”


But, he continues, “current and future product availability would be another key issue here”. While Ravenna itself is a Layer 3-based technology successfully introduced in certain application areas (mostly professional broadcasting and high-end recording), Hildebrand admits that there isn’t much in the way of available, Ravenna-supporting product suitable for conference applications at this time. “But it is certainly a market we are aware of and looking at,” he confirms.


Maintaining low latency would be another preference for conference applications, says Audinate CEO Lee Ellison, who highlights the ability to put together a Dante-based conference facility system using a wide variety of vendors’ equipment. “In terms of the conference market, [there is Dante deployment] for products from Symetrix, BSS, Biamp, QSC, Shure, Audio-Technica and others. There is also a wide selection of I/O boxes and suchlike to make it easier for the installer to connect, use and change the system,” says Ellison.


The general Harman philosophy, explains Harman International senior manager for systems design Adam Holladay, is that “we don’t want to force the customer down a particular route. The emphasis, therefore, is on offering as many solutions as possible in order to meet the needs of a particular project – [not least] because in an installed sound system, certainly a larger one, we find that the IT network is often determined well in advance of the audio networking protocol.”


With that caveat in place, Holladay says that he would probably recommend a Dante-based deployment of Harman equipment for conference applications. “In a conference venue, the Ethernet or network infrastructure will probably have been determined by an IT division beforehand. This would basically rule out using AVB as the AVB solution we offer is only going to function on AVB-compatible switches, and at this time there are not many of those,” he says.


Suggesting a possible workflow, Holladay says that Dante could be used to network between Soundcraft consoles and BSS Soundweb signal processing. “For ease of use, I would then suggest our BLU Link protocol to daisy-chain between BSS and Crown amplifiers; in essence to turn the rack room into a large matrix taking the audio off the network. This means that you can use the network for system-wide distribution, but then for processing audio from the processor box to the amplifier box, there is no need for audio from the network because they are right next to each other in the rack,” says Holladay.


2) Performance venue

Again pinpointing some of the main requirements for a venue of this kind, Hildebrand says: “It would be good if the selected networking solution could offer some interoperability schemes. For example, this would mean it is possible to extract some of the individual streams for an OB van in the event that a performance is to be broadcast.”

This need for interoperability would probably lead the consultant and venue operator in the direction of a Layer 3-based solution. “While you could use some sort of bridging or gateway technology from the mixing desk to produce an output that is suitable for the OB set-up, the more natural approach would be to use a Layer 3 approach right away.”

The specific advantages of implementing Ravenna in this case, suggests Hildebrand, would be “a very high flexibility in setting up the streaming formats and adapting to the latency requirements”.


UK-based audio interface specialist Focusrite is a long-term licensee of Dante technology. Invited to consider the roadmap for an installation of this kind, Focusrite product manager Will Hoult says that as a manufacturer of high-quality mic preamps “we would be looking to add the number of boxes required to satisfy the channel count, then depending on what audio workstation is used we would be able to connect to it. So for example, if it’s a Pro Tools HD system we can bridge directly in to it with a 32-channel RedNet 5 interface.”


Any such venue will inevitably include a mixing console as part of the network, “and we provide bridging interfaces that allow people to use pretty much any console, whether it has a network connection or not. It’s worth noting that one of the drivers behind developing the AES and MADI bridges that we now offer is to be able to connect equipment that is not endowed with a network port, to a Dante network.”


While network design is inevitably impacted by the maximum Ethernet cable length of 100m, Hoult points to the availability of fibre modules that allow the user to cover much greater distances. “For example, you can get a 40km fibre module capable of achieving a single mode fibre connection up to 40km long, which allows you to [bring the network] to a variety of different areas,” he says.


3) College of higher education

In a large, potentially cross-campus deployment as might frequently be found in an HE facility installation, a Layer 3-based solution may again be preferable. “You would probably go Layer 3 as you would need to route audio, video and data across network boundaries,” says Hildebrand. “You rarely have a facility like that sitting on a single, big local network segment, so you would need routing capabilities, and that means you need Layer 3.”


He continues: “If wide area connections are also part of the setup, the networking solution needs to be capable of offering high flexibility in the choice of operating parameters on individual routes to different destinations, in order to deliver satisfying performance with the lowest possible latency, matching the individual jitter characteristics of the various WAN routes.”


Holladay confirms that, once again, “the chances in a college of higher education are that the IT specification is not going to be under the audio designer’s control. Since it is highly unlikely that the IT department would have chosen a [Layer 2-based] AVB-compatible switch as there are relatively few of them, that means a [Layer 3 design] would be the preferred option.”


In a college of higher education, remarks Hoult, the ability to deliver audio quickly and efficiently where it is required is an obvious priority. Once again, he suggests, a Dante-based deployment can come into its own in this environment. “Often you would be looking to move the audio equipment around the facility on something like a 12U rack, and in that regard the ease of use of Dante makes that a real possibility,” he says. “It’s based on the flexible location of devices and their identity, so it remembers which device audio was being received from previously. It might now be in a completely different location, but audio would still be received properly, and that makes a mobile rack-based approach – something that would be ideal for an HE college – a realistic possibility.”


Time of transition

Anecdotal evidence aside, it is quite difficult to ascertain precisely how widely the newer technologies are being used in real-world applications. But the experience of interface, conversion and routing technology products developer DirectOut does underline the current transitional state.


The company is currently completing work on its first Ravenna-based product – “we are finalising that now and expect to be able to announce more details shortly” – but DirectOut CTO Stephan Flock confirms that MADI conversion technology remains the bedrock of its current offer.


“It’s a slightly odd situation to be talking about the benefits of MADI at the same time we are also pursuing the road of audio over IP,” admits Flock. “But with MADI, you have defined point-to-point connectivity and very low latency. There is also the fact that it is a standard with a weight of history behind it, and it is very open with regards to selecting equipment and putting together a system design. There is a sense of reassurance that you are going to have a compatible way of working, and that can still be a challenge with networked solutions.” And that, in a nutshell, is why MADI will doubtless remain an integral part of the landscape for many years to come.


But as the above responses indicate, Dante, in particular, is now making dramatic inroads into all manner of install applications. Dependable, Layer 3-based networking is bringing unheard-of flexibility to the built environment – so expect to see it applied widely to many more than the three scenarios outlined in this feature.


By David Davies, SVG Europe managing editor and freelance pro-AV writer

Best practices in bringing new mobile products to market in 2015

Time-to-market pressures are forcing mobile device manufacturers to reconsider their current processes and strategies. Here’s some advice to help OEMs better plan and get out in front of the critical issues that determine which companies win and lose in the mobile space.

How to support unified communications in an audiovisual environment

This white paper looks at the growing business need of blending the capabilities of conventional AV rooms with the simplified, on demand communication and content creation experience provided by UC platforms and cloud-based applications.

It explores the subsequent challenges and looks at a simple solution that already exists in most meeting rooms – the PC to bridge the gap between the two technologies.

In the end, it shows how blending capabilities creates a lower-cost, higher-adoption UC collaboration experience within a traditional AV group collaboration space.


Comparison of SMD and DIP LEDs for use in large format LED screens

Advances in LED technology have meant increasing availability and use of SMD (Surface Mount Device) LEDs in a range of applications from lighting to big screens. These are now widely available alongside the classic DIP (Dual Inline Package) LEDs.

This white paper discusses the advantages and disadvantages of each specifically when used in large format full colour LED displays.

Service provider finds perfect signage recipe

PilotTV saved over 30 percent on installation and 70 percent on maintenance with the Intelligent Pluggable System Specification (IPSS). Find out how they did it, and how you can achieve these savings yourself.

Selling enterprises on video production: An integrator’s guide

For systems integrators, selling in-house video production capability to enterprise clients is the next big opportunity. AV systems are now available that integrate multiple video production capabilities typically found in broadcast studios. Now even the smallest enterprise clients can create professional grade, in-house videos quickly and cost effectively.

The ability to produce high-quality video in-house benefits enterprise clients in many tangible ways – whether they use it to communicate messages to employees, promote new products and services to customers, or train new employees consistently across many locations.

One benefit is the attention-grabbing power of compelling video, compared to just oral communications or written text – up to 80 per cent more effective, in fact.

Click here to view the ebook including videos

The current state of 4K and Ultra HD

Pixel Power takes a look at the case for higher resolution production – 4K and even 8K. It will also consider other initiatives for better picture quality, including higher frame rates and dynamic range/colour gamut.

It will consider the practicality of higher resolutions and other routes to improved picture quality, and the nature of the infrastructure needed to support possible future changes.

This white paper will explore:

– The case for higher resolution production – 4K or even 8K

– Alternative initiatives for better picture quality

– The practicality of higher resolutions

– The nature of the infrastructure needed to support possible future changes

Ensuring quality in IP video delivery systems

The growth of video delivered over IP networks is showing no signs of slowing down. The industry shift towards IP, combined with consumer demand for the highest quality video streamed to any device, at any time, anywhere, is forcing service providers and broadcasters to adapt to stay competitive in this evolving video landscape.

HTTP-based adaptive streaming was developed to enable high quality video delivery over the internet, and proved to be an efficient method for delivering content to smartphones, tablets and connected devices. Today it is increasingly replacing traditional IPTV solutions for delivering even the prime video services for the living room.

To ensure the best possible user experience, video service providers must address bandwidth, latency and packet loss in order to avoid quality issues such as buffering, slow responsiveness, low resolution and glitches. HTTP adaptive streaming is based on two main technologies that impact the quality management: the use of TCP as the transport protocol and the provisioning of content at multiple quality levels.

TCP is a bi-directional protocol, allowing clients to adapt to changing network conditions by requesting a suitable quality level. TCP also offers inherent retransmission, which lets it deal efficiently with packet loss and prevent noticeable glitches. However, maintaining a certain bandwidth capacity in the presence of packet loss requires over-provisioned networks. Latency is another issue impacting the available bandwidth: while not a major problem for user interactivity, increased latency lowers the achievable TCP throughput, which may prevent high-bitrate streaming (UHD, HD) over long-haul networks.
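The client-driven adaptation described above can be sketched in a few lines. This is a hypothetical, throughput-based selector for illustration only – not any particular player's algorithm – and the safety factor is an assumed tuning parameter:

```python
# Minimal sketch of client-side adaptive bitrate selection in
# HTTP adaptive streaming. Real players (HLS/DASH) layer buffer-level
# heuristics on top of simple throughput estimates like this.

def select_quality(available_bitrates_kbps, measured_throughput_kbps,
                   safety_factor=0.8):
    """Pick the highest rendition that fits within a safety margin
    of the recently measured download throughput."""
    budget = measured_throughput_kbps * safety_factor
    # Renditions the network can sustain; lowest kept as fallback
    feasible = [r for r in sorted(available_bitrates_kbps) if r <= budget]
    return feasible[-1] if feasible else min(available_bitrates_kbps)

ladder = [400, 1200, 2500, 5000, 8000]  # kbps renditions on offer
print(select_quality(ladder, 4000))  # 2500: budget is 4000*0.8 = 3200
print(select_quality(ladder, 300))   # 400: falls back to lowest rendition
```

Because the choice is made per fragment, the client can step up or down the bitrate ladder every few seconds as conditions change, which is exactly why the server side only sees a stream of small, independent requests.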

To ensure optimal video delivery, service providers need to consider the following techniques to address these challenges.

Resource management: Properly tracking and allocating bandwidth is required in order to deliver high, continuous-quality content to a large number of viewers without over-provisioning the network. This is challenging for adaptive streaming sessions, which consist of hundreds or thousands of small fragments rather than a single continuous stream. One solution is to use ‘virtual sessions’. Deploying an agile, software-defined network management solution makes it possible to allocate resources, provide load balancing and run monitoring functions that scale on demand.
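As a rough sketch of the ‘virtual session’ idea, the individual fragment requests can be grouped back into per-viewer sessions so bandwidth can be tracked as if each viewer held one continuous stream. The class, field names and timeout below are assumptions for illustration, not any vendor's implementation:

```python
# Hypothetical 'virtual session' tracker: HTTP fragment requests are
# keyed by (client, stream) and a session is considered closed after
# a period of inactivity.

from collections import defaultdict

SESSION_TIMEOUT = 30.0  # assumed: seconds of inactivity before a session closes

class VirtualSessionTracker:
    def __init__(self):
        self.sessions = {}            # (client_ip, stream_id) -> last-seen time
        self.bytes = defaultdict(int) # delivered bytes per virtual session

    def record_request(self, client_ip, stream_id, nbytes, now):
        key = (client_ip, stream_id)
        self.sessions[key] = now
        self.bytes[key] += nbytes

    def active_sessions(self, now):
        return [k for k, t in self.sessions.items()
                if now - t <= SESSION_TIMEOUT]

tracker = VirtualSessionTracker()
tracker.record_request("10.0.0.1", "live1", 500_000, now=0.0)
tracker.record_request("10.0.0.1", "live1", 500_000, now=4.0)
tracker.record_request("10.0.0.2", "live1", 500_000, now=5.0)
print(len(tracker.active_sessions(now=10.0)))  # 2: both viewers active
print(len(tracker.active_sessions(now=35.0)))  # 1: first viewer timed out
```

Once chunk traffic is rolled up this way, per-session bandwidth allocation and load balancing become tractable without over-provisioning for worst-case chunk bursts.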

Proximity to client: Originating streams as close as possible to the viewers makes it possible for operators to ensure high-quality video delivery. A distributed hierarchical network stores the most popular content closest to the end user and the least valuable content deeper in the network. In addition to improving quality, caching can be a cost saver by reducing upstream bandwidth requirements.
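The popularity-based caching described here can be illustrated with a simple least-recently-used (LRU) edge cache that falls back to the origin ("deeper in the network") on a miss. This is a minimal sketch under that assumption, not a production cache design:

```python
# Hypothetical edge cache: the most recently requested segments stay
# local; a miss costs upstream bandwidth to fetch from the origin.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity, origin_fetch):
        self.capacity = capacity
        self.origin_fetch = origin_fetch  # callable: segment_id -> bytes
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, segment_id):
        if segment_id in self.store:
            self.hits += 1
            self.store.move_to_end(segment_id)  # mark as recently used
            return self.store[segment_id]
        self.misses += 1
        data = self.origin_fetch(segment_id)    # upstream bandwidth cost
        self.store[segment_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)      # evict least recently used
        return data

cache = EdgeCache(capacity=2, origin_fetch=lambda sid: b"seg:" + sid.encode())
cache.get("a"); cache.get("b"); cache.get("a")  # 2 misses, then 1 hit
cache.get("c")  # cache full: evicts "b", the least recently used
cache.get("b")  # miss again, fetched from origin
print(cache.hits, cache.misses)  # 1 4
```

The hit ratio directly translates into the upstream bandwidth saving the article mentions: every hit is a fetch that never left the edge.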

Multicast of HTTP live streams: Popular live content often causes peaks in network traffic. To overcome this, distributed caching servers can fan out streams closer to the end user; however, this may still be inefficient in larger networks. There are several ways multicast can be used to deliver live streams, at least to a nearby cache: file fragments can be delivered over multicast, or video can be delivered as several synchronised transport streams. The first option requires less intelligent caches but consumes more network bandwidth; the second requires intelligent caches that can segment, encrypt and re-package content, but only one format needs to be delivered through the network. Pushing live streams over multicast to edge caches is also an efficient way to minimise the end-to-end latency for live delivery, by avoiding intermediate cache traversals. For some live events, like sports, short latency is crucial.

Measure and analyse: Since quality decisions are made by the clients, the only way to verify the quality experienced by end users and to evaluate the impact of any optimisations is to measure and analyse the traffic that was actually delivered. This can be challenging if the delivery of every single chunk needs to be monitored. A video-aware analytics tool that abstracts the data at a meaningful level is key.
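Such abstraction might look like the following sketch, which rolls hypothetical per-chunk delivery records up into the session-level figures an operator actually cares about (the field names are assumptions, not any particular tool's schema):

```python
def summarise_session(chunks):
    """Aggregate per-chunk delivery logs into session-level quality figures.

    chunks: list of dicts with 'bytes' delivered, 'duration_s' of media
    in the chunk, and 'download_s' wall-clock time taken to fetch it.
    """
    total_bytes = sum(c["bytes"] for c in chunks)
    media_time = sum(c["duration_s"] for c in chunks)
    wall_time = sum(c["download_s"] for c in chunks)
    return {
        "avg_bitrate_kbps": round(total_bytes * 8 / media_time / 1000, 1),
        # downloads slower than real time signal a rebuffering risk
        "at_risk": wall_time > media_time,
    }
```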

HTTP-based adaptive streaming enables service providers to deliver premium content to a growing number of devices and over different network types. Yet, the issue of quality is still top of mind. By considering the techniques outlined above, service providers can address these issues and keep pace with industry demand.

By Göran Appelquist, chief technology officer, Edgeware




Creating scalable workflows for file-based video processing and delivery

Launching a new VOD or catch-up TV service can be a daunting task, especially when factoring in the potential growth of content availability and consumer demand for media. This white paper explains how video providers can evolve from basic to advanced file-based workflows using software-defined video solutions, without a complete overhaul of infrastructure and with minimal service interruption.

Leveraging video wall technology for high-impact results

The latest display technology’s rich interactive features and smaller footprint – combined with equipment and software costs that are now much more affordable – are all driving the rapid growth in the installation of high-impact video walls in venues from airports to corporate offices to schools and universities. This new white paper explains the two keys to making the business case for video walls for your individual project: understanding Total Cost of Ownership (TCO) dynamics and scalability (built-in technical scaling).

How to buy a state-of-the-art post production tool

Video editing has moved from the realm of the few to a wide range of new users. These changes not only empower creativity, but enable faster and better delivery of the end product, with fewer stages in between. Here’s how to shop for a state-of-the-art post production solution.

Making the best investment for audio and video editing

In an industry where time is money, efficiency in post production translates into savings. If this efficiency can be expanded to include each step in the chain, from acquisition to final delivery, then the return on investment is significant.

How safe is your STB?

Hackers are no longer teenagers wanting to gain notoriety. Over the years, we’ve witnessed cybercrime change. In 2008, the third generation saw the motive move from recognition to financial gain. The fourth generation of hackers could be described as professionals by 2010. And now, it has developed into an active underground economy. Tools of the trade are for sale; botnets can be rented by the hour. There are even social networks and escrow services! You could classify the fifth generation as ‘Hacking as a Service’.

Advanced persistent threats target specific companies for a specific purpose, with devastating effect. Last year saw American retailer Target’s Q4 profits plummet by 46 per cent as a result of an attack. And The Home Depot confirmed that hackers exposed 56 million credit and debit cards during its months-long security breach.

My STB is protected – isn’t it? 

Internet attacks on STBs are a viable option. Indeed, in 2012 Adam Gowdiak first presented his findings to the HITB security conference in Amsterdam. He’d discovered major security holes in STBs and DVB chipsets. By demonstrating a malware attack and satellite TV signal theft, he was also able to obtain sensitive information from the STB. This included users’ credentials, viewing history and billing details. You can imagine how much more could have been accessed had the STB been used as a gateway connected to other devices.

If this had been a professional attack, the service outage alone would have cost the pay-TV operator millions. And the effect on the brand could be more devastating; even resulting in a loss of trust.

An agent on the inside

What is needed in today’s world is the capability to fight real-time attacks. Having an agent in the software allows you to do just that. The agent monitors everything that is happening and controls what processes are doing. It feeds back anomalies and enforces policies; allowing you to update policies over time. Even if a malware app gains Root access, the agent can either terminate it or restrict access. For example, the app is prevented from accessing certain registers or writing to particular screens. Pay-TV operators can have the confidence that their STBs are protected dynamically from the inside.

Relevant today and tomorrow 

To minimise the risk for pay-TV operators, it’s important to have a media platform security solution which works with existing STBs, as well as hybrid STBs and other gateways. Such a solution can extend the life of an STB. Having robust security across all connected devices is paramount. And with the Internet of Things gaining momentum, the stakes will only increase.

By Andrew Wajs, CTO, Irdeto


Prompt connections

The fundamental technology of prompting – a reflective screen in front of the camera lens to put the script in front of the presenter – has hardly changed since its invention. But that is not to say that teleprompting companies are not continuing to innovate.

Recent innovations have been around the script itself taking advantage of new forms of connectivity, cloud services and social media.

It is a common requirement, for example, for a journalist or presenter on location to collaborate on a script with colleagues back at base, or to share a final version of the script from the location with the producers. This has typically meant emailing updated scripts back and forth.

The risk with this is that a delay in the email could mean that the presenter uses a script other than the final draft. Equally, subtitle accuracy has depended on the presenter publishing the final version from the prompter and emailing it back.

The latest version of WinPlus, the Autoscript software, now includes direct access to Dropbox, Google Docs, Microsoft OneDrive and Box. This means that the latest version of the script can be shared in the cloud without any additional actions or operations. The feature is particularly useful when large files need to be accessed, and ensures that users are always able to download or save scripts in the field, without going via email or a web browser.

Social media is also growing in importance to broadcasters, particularly with audiences choosing to watch live programming while interacting on a second screen. Many producers and presenters are now choosing to fire out tweets around and during programmes as a means of creating further audience engagement and driving more viewers to the event.

The one piece of technology that knows precisely where the programme has reached is the prompter. It tracks the script line by line, so it can be used to automatically trigger actions at a precise point. This has long been used, often through the MOS protocol, to cue graphics or video clips.

The next logical step is to create tweets in advance which are linked to specific points in the script. It is logical, because the content of the tweet is likely to be linked to the content of the script. An obvious example would be when a winner is announced at an awards ceremony.

In the latest version of WinPlus, when the prompter reaches the right moment in the script, a cue is generated to send the tweet. Social media channels are updated at exactly the right moment, even in a live programme, completely automatically.
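Conceptually, the mechanism amounts to a cue table that binds prepared tweets to script positions and fires each cue once as the prompter passes it. The sketch below is an illustrative model only, not the actual WinPlus implementation:

```python
class TweetCueRunner:
    """Fire pre-authored tweets as the prompter passes cue points."""

    def __init__(self, cues):
        self.cues = dict(cues)  # script line number -> tweet text
        self.fired = []         # tweets already sent, in order

    def on_prompter_position(self, line):
        # fire every cue at or before the current script line, once only
        for cue_line in sorted(l for l in self.cues if l <= line):
            self.fired.append(self.cues.pop(cue_line))
        return self.fired
```

In practice the 'send' step would call out to a social media API, much as MOS-triggered cues call out to graphics or video playout.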

Enhancements like this are added to our prompting software because this is what broadcasters are asking for. As use cases for the cloud and connectivity arise, we will continue to develop the software to implement them simply and logically.

By Robin Brown, product manager, Autoscript

Dynamic digital signage

The most advanced of the latest-generation digital displays is the new dual-sided LCD display, which uses new engineering breakthroughs both to place the digital displays seamlessly into the environment and to maximise the viewing angles and viewing space for your content. With images on both sides of the display, dual-sided signage gets the attention of the viewer “coming and going,” essentially doubling the amount of exposure for your messaging, branding or other content.

Understanding UHDTV and 4K formats and how to use them on the LiveCore series

After the relative failure of stereoscopic 3D to take hold in the consumer market, the new feature the industry is betting on is 4K. Surveys have shown that the market is more interested in high-resolution images than in 3D content.

As there are many formats of “4K”, it is important to understand the differences to make the best choice for the hardware in your system.

This white paper aims to demystify the topic and explain what the impacts on the hardware are. Here Analog Way analyses why resolution is not enough to deal with 4K and focuses on LiveCore use cases.


The future of freelancing – Amazonification of services

Getting freelancers on board is a great idea. You save time and money, and get instant access to a highly skilled, hyper-specialised workforce. No wonder businesses from diverse industries are getting in on the action.

Online freelancing may have emerged recently but it’s already drawing in professionals from many different fields. Problem is, existing freelance platforms suffer from a lack of innovation and user-centric design. If freelancing is to really take off, there must be meaningful change. Both service providers and businesses must feel completely comfortable in an online work environment, and the whole process must be as streamlined as possible. The good news is, the industry is on a tipping-point and all signals point to one unmistakable sign: the marketplace is ready for disruption.

There are two main types of freelance platforms we see today. Each has its benefits and drawbacks, but taken together, they haven’t really been able to tap into the massive potential user base. Online platforms only serve a small percentage of users – that’s because they’re not really designed for ease and simplicity, and as a result, overlook a majority of business clients made up of SMBs, startups and entrepreneurs. Let’s take a look at the existing models.

First, there’s the RFP model. You have to write a carefully-worded project description that’s descriptive enough to attract just the right candidates. Too generic and you get loads of applicants, too specific and you eat up valuable time crafting the job post. In most cases what happens is the client puts up a more-or-less generic post. This post gets a ton of applications – many with inaccurate estimates and deliverables. And the more subpar applications you receive, the higher the chances of hiring someone not suited for the job. What complicates matters even more is that often the bids you get are not comparable with each other, as some applicants will quote their own prices, which are likely to be very different. To find just the right applicant from the pool, you may have to spend considerable time taking interviews and negotiating. It’s tedious stuff.

Then there are the gig platforms. This model solves many of these problems but comes with its own set of drawbacks. Say you’re looking for a logo design service. You’re bombarded with offers from thousands of freelancers, many of them not even relevant to what you’re after. And again, there’s no way really to compare across the gigs as each freelancer offers different terms, such as number of initial concepts, number of revisions, output format and the like. It’s quite easy to get overwhelmed.

A critical problem with both models is the lack of standardisation of services, and if the industry is going to evolve, it’s the first thing to address. Freelance platforms must find a way to standardise services, so users can quickly compare offers from different providers. Think of Amazon. Each product is packaged separately and assigned a unique SKU. When you search for a phone, you see a list of sellers offering the phone, along with fixed prices. Imagine if a freelance platform could package services with well-defined descriptions and fixed prices, along with a list of top-rated providers. You wouldn’t need to write out job posts, shortlist candidates, or take interviews. If Amazon worked on the RFP model you’d need to write out the specifications and technical details (and wait for sellers to send bids) each time you wanted to buy a phone! As a client, it’s not your job to write out the specifications, it’s what you should expect from the marketplace.

The prepackaged concept easily addresses the problem of standardisation and it’s a great time-saver. True, buying a service for a business isn’t the same as buying a phone for personal use but research indicates that 80 per cent of projects on traditional freelance platforms could be ‘packaged’ into readymade services. Say you need a logo for your new startup. All you really need to communicate to the freelancer is the number of concepts, number of revisions and output formats. If a freelance platform allowed you to simply specify your needs with one-click commands, you wouldn’t have to write and create a job post. You’d simply specify your requirements and be able to see a list of sellers who offer the service.
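The idea can be sketched as a simple data model: each service is a SKU with fixed, well-defined options, so offers from different providers become directly comparable. The names below are hypothetical, purely for illustration:

```python
# a standardised, prepackaged service definition (hypothetical fields)
LOGO_SKU = {
    "sku": "LOGO-BASIC",
    "options": {"concepts": 3, "revisions": 2, "formats": ["png", "svg"]},
}

def comparable_offers(offers, sku):
    """Filter offers to a single SKU and rank by price, then rating."""
    matching = [o for o in offers if o["sku"] == sku]
    return sorted(matching, key=lambda o: (o["price"], -o["rating"]))
```

Because every offer against the same SKU carries identical deliverables, price and rating are the only dimensions left to compare – exactly the Amazon-style experience described above.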

Standardisation is key to discovery. When you’re looking for a logo design service, you want the process to be as smooth and streamlined as possible. Pre-packaged services make that possible – you simply choose your service, configure it, see a list of sellers and take your pick. The problem with most freelancing sites is that, in an effort to make projects fully customisable, they make clients do a lot of things that are clearly redundant. Customisation options may afford flexibility, but they’re not really necessary about 80 per cent of the time. For a client, a standardised service with just basic configuration options is often enough. You simply plug in your preferences and you’re good to go.

There’s a difference between buying an iPhone and a prepacked service, however. When you buy a phone, you get the same exact product whether you get it from Best Buy or Walmart. Not so for a packaged service. In an Amazon-like freelance platform, all freelancers may be offering the same logo-design package, but their deliverables will differ. You need a way to gauge the quality of their service before you buy it. That’s where a portfolio comes in handy. An easy-to-view portfolio lets you check out their best work and get a feel for the aesthetic sensibilities and design competencies of a specific provider. Along with rating and reviews from past jobs, the portfolio should give you a complete picture of a logo-designer’s level of skill. Coupled with standardised services, a quick portfolio viewing feature is just the kind of innovation that would attract new players.

It’s evident the prepackaged model can usher in a whole new freelancing experience for both buyers and sellers. Another great benefit of standardising is that it eliminates friction and disagreements during the course of a project. All too often, a poorly-written job post, misinterpretation of the scope of work involved, or other factors lead to disputes. With a prepackaged service, there’s no room for ambiguity or misinterpretation – each side knows exactly what to expect and what to deliver. The SKU is like a contract that binds both sides.

All in all, e-commerce is definitely a viable model for freelancing. A freelance marketplace that introduces the concept of prepackaged services and offers those services through a catalogue of SKUs would be able to streamline and optimise how freelancing is done online. It seems standardisation is the innovation that the freelance space has been waiting for, and it’s just the kind of disruption that will reshape the industry.

By Kleanthis Georgaris, co-founder of DigiServed

Myth and reality of auto-correction in file-based workflows

File-based workflows are ubiquitous in the broadcast world today. However, the adoption of file-based workflows comes with its own set of challenges. The first one, of course, is – does my file have the right media, in the right format and without artifacts?

Fortunately, the leading auto QC tools have kept pace with technology advances to provide us with this peace of mind. However, there are still many unsolved video artifact issues that auto QC tools grapple with. Firstly, there are several baseband issues that are not even detected automatically – let alone auto-corrected. Secondly, after corrections are applied through a manual or automated process, if the transcode and re-wrap processes are not managed properly, auto correction will introduce fresh issues – the corrected content may even be worse than what you started with, resulting in an unproductive loop.

How then can you depend upon an auto QC tool to do auto-correction?

This paper attempts to clear the misconception and also sheds light on the extent to which auto-QC tools and other tools in the workflow can auto-correct issues in media content.

The value of history

Archiving in the digital age

Across the globe, the preservation of archives has risen in prominence due to two important factors. The first is that archives and archivists come under the most scrutiny at moments in history when retrospection is at its most important.

The other reason is the continual degradation of archive media. In an irony of our age, as technology has allowed more information to be stored at greater density, in some respects the durability of the medium has declined.

This white paper outlines the first step in the restoration process and takes a look inside a national archive.

A conversation with your TV: closer than you think?

Much like science fiction has long portrayed humans travelling to space as the “final frontier,” it has also depicted voice recognition and interaction as the ultimate human-machine interface. While speech-driven interfaces have been used for decades, the reality has been anything but the natural, virtually human speech capabilities envisioned. Practical uses have, until recently, been limited to supporting basic structured queries and stock responses.

However, with the wider adoption of smartphones and tablets, and the broader advancements of interactive technology, we’ve seen a significant shift in how we interact with our devices. With the introduction of virtual assistants such as Apple’s Siri, speech interfaces go beyond basic menu navigation and data retrieval and have started to catch the interest of consumers.

Although there’s evidence of serious attempts to break through to the futuristic ideals of speech-driven interfaces, most tools still rely on structured menus for information retrieval or on spoken keywords, which simply replace their keyed input counterparts. These are largely unintuitive and certainly don’t support our natural language patterns. When it comes to true conversational interfaces, we’re really only scratching the surface of what’s possible.

What are conversational interfaces?

Conversational interfaces are user interfaces that simulate natural communication qualities on devices and applications, allowing users to interact with them in casual language modes – similar to the way humans converse with one another.

Consumers increasingly desire the ability to speak naturally with devices and have them effectively understand and execute their requests. One of the essential enabling technologies for these new experiences is graph-based search and discovery. This graph – the ‘knowledge graph’ – is a semantic database of named entities, where the relationships between these entities are dynamically mapped for predictive and intelligent results for search and discovery.

Imagine what this level of interaction can achieve when applied to varied uses, such as trying to book travel, for example – juggling dates, flight schedules, and ticket prices – or deciding what to watch on TV between hundreds of live TV channels, thousands of VoD titles, and potentially millions of OTT options.

“What’s the film where Tom Hanks works for FedEx?”

The TV viewing experience is a prime example of where a knowledge graph-based semantic approach is of great benefit to consumers. As the landscape becomes increasingly complex with the sheer volume of content available, traditional lexical metadata and structured menu-driven search and navigation are beginning to prove increasingly cumbersome. Indeed, a recent Rovi survey found that 84 per cent of subscribers indicate they have turned off the TV without finding something to watch. Over half do so more than 20 per cent of the time.

A knowledge graph assists in this discovery by representing content options in the way people think about programmes rather than forcing traditional keyword or structured menu-based attributes on users.
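As a toy illustration, the entities can be stored as subject-predicate-object triples and the question above answered by intersecting relationships. The triples and query below are purely illustrative, not Rovi's implementation:

```python
# a miniature knowledge graph of named entities and typed relationships
TRIPLES = [
    ("Tom Hanks", "acts_in", "Cast Away"),
    ("Cast Away", "features_company", "FedEx"),
    ("Tom Hanks", "acts_in", "Forrest Gump"),
]

def films_matching(actor, company):
    """Films the actor appears in that also feature the company."""
    films = {o for s, p, o in TRIPLES if s == actor and p == "acts_in"}
    with_company = {s for s, p, o in TRIPLES
                    if p == "features_company" and o == company}
    return films & with_company
```

Asking for the film where Tom Hanks works for FedEx then reduces to intersecting two relationship sets, which yields Cast Away.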

Personal and contextual relevance, like we see in the world of mobile and web services, can also be intelligently mapped for television with similar effect.

Semantic technologies become even more interesting with conversational interfaces that enable semantic interpretation of natural language queries and can discern when a user is drilling down into a context or has switched topics, such as moving from movies to sports. Not only does this mimic our everyday conversation styles, it is also how users typically browse for programming – often not knowing exactly what they want to watch, or meandering through options.

Talking to the TV – fact or fiction?

Conversational interfaces are the next logical phase of development for the emerging era of smart-connected devices. Technology and market forces are driving towards conversational interfaces at a rapid pace.

Simply adding speech enablement to existing solutions isn’t enough. To become fully functional and effective for users, voice technologies must be backed by sophisticated search capabilities, such as knowledge graphs and deep metadata. When these technologies are built effectively, consumers can expect to reap the rewards of fast, accurate and intuitive voice content search.

Amazon’s quirky commercial with actor Gary Busey for its Fire TV highlighted the device’s voice capabilities via a remote with a built-in microphone. Samsung also introduced a similar remote control, and Google launched one to accompany its Android TV. Expect remotes with built-in microphones to become mainstream in the next couple of years and to be available as part of pay-TV offerings.

Talking to inanimate objects used to be a sign of madness; not so in the future. From TVs and refrigerators to cars and alarm clocks, speech will undoubtedly be the new norm in advanced interaction.

By Charles Dawes, senior director, international marketing, Rovi


A buyer’s guide to Vehicle Activated Signs (VAS)

Vehicle Activated Signs (VAS) are proving to be an effective traffic calming method to improve road and pedestrian safety on highways and site roads. They’re popular with drivers as they provide an advisory message, as opposed to speed cameras, which are considered ‘anti-motorist’, or traffic calming speed bumps, which are not generally used on roads with speed limits over 20mph.

The signs are effective in reducing speed, particularly of fast drivers who contribute disproportionately to the accident risk.

Usually installed alone, they can also be used to complement other speed-reduction measures such as traffic calming or speed cameras.

VAS are non-enforceable yet are respected by drivers and therefore play an important role in road safety. Their role is to reinforce the statutory signage.

Compared to speed cameras, VAS are a fraction of the cost, require zero maintenance and are therefore popular with councils and facility managers.

Gearing up to keep pace with content everywhere

Media companies are increasingly facing pressure to quickly upgrade their networks to meet the demands of a content-everywhere society; however, upgrading legacy networks can be costly and delivering content in new ways to multiple devices in varying formats is complex.

In April 2015, Intelsat introduced a new service, IntelsatOne Prism, a fully automated, converged IP-based service that leverages a media company’s legacy system while enabling it to distribute multiple content transmissions, including linear video, file transfer, VoIP, internet access and data exchange, via one platform. Shortly after its introduction, IntelsatOne Prism was put to the test at the Amgen Tour of California, a cycling stage race featuring 144 cyclists from 18 elite professional teams from around the world. As competitors from around the globe – including Olympic medalists, Tour de France contenders and World Champion cyclists – spent a challenging eight days racing through varied terrain and inclement weather, the movement of an entire television compound over a hundred miles each day presented a challenge unlike any other type of sporting event in the world.

In order to meet the production, information distribution and overall communication needs at the Tour, PSSI Global Services provided World Feed production services and all of the transmission services for the multiple television and internet feeds across the globe. Each day, PSSI successfully transmitted multiple paths over an Intelsat satellite in the United States, which fed NBC and NBCSN, as well as a world feed turned around to territories covering all four hemispheres. This served as the source feed for the Tour Tracker online app and a daily news feed which went out to broadcasters in dozens of countries.

One of the most critical elements of production and transmission for this multi-day show is communication inside the compound and with the outside world. In previous races, PSSI had provided a satellite-based communication system which brought phone and internet services to remote compounds throughout California. Utilising IntelsatOne Prism this year enabled a dramatic increase in the power, flexibility and success rate of all connectivity – not only for the multiple audio-video productions, but also for other key race activities.

An IP-phone system enabled by IntelsatOne Prism was used for production and transmission needs in all of the finish-line cities and was invaluable at sites where even cell phones wouldn’t function, such as the compound at Mt. Baldy, nestled at 6,500 feet in the San Gabriel Mountains north of Los Angeles, California. Without IntelsatOne Prism, mission-critical communication including transmission access, network coordination, race data collection and webcast initiation would not have been possible. NBC show content and graphics were transferred from the East Coast, and operational information such as show schedules and race maps was also sent and received using the IntelsatOne Prism web connectivity.

Outside of the compound, IntelsatOne Prism supported multiple ancillary elements of the race and race support. IntelsatOne Prism delivered internet connectivity for use in the VIP area and provided the Tour Tracker on-line experience to big screen televisions across the race festival, VIP sections and race finish areas. Internet connectivity was also critical to the function of vendor merchandising services and sales in areas where they wouldn’t have been able to use their automated systems without IntelsatOne Prism.

Cycling fans worldwide kept track of the race as it unfolded thanks to the seamless integration of IntelsatOne Prism into the PSSI Global Services presence. The system was built into a satellite van, and PSSI also used a portable unit which was easily and quickly deployed and struck each day. With demand for internet and phone services starting before dawn and ending at nightfall, speed, ease of use and high functionality were a must. IntelsatOne Prism was able to provide this day in and day out to help bring this exciting race to fans worldwide. This is what Intelsat means when we say #SatelliteEverywhere.

By Peter Ostapiuk, head of Media Services, Intelsat

Securing your assets without confining your business: Achieving sustainable business models in a digital world

Today’s content consumption patterns are not linear, but unfortunately most processes that bring content to consumers still are. With the quest for business modernisation underway, Frost & Sullivan reached out to CXOs of some of the largest global media companies to uncover their most urgent needs. This research yielded the following top four prioritised areas of investment:


1. Managing content across its lifecycle: Media Asset Management with robust metadata schemas and tight integration with business and creative workflows

2. Transforming content for multi-platform distribution: Nonlinear editing, encoding and transcoding

3. Protecting revenue streams: Enterprise data security, conditional access, DRM, digital forensics, and region-aware access control

4. Monetising TVE: Deep analytics, personalisation and targeted user experiences


In the first of this series of white papers aimed at arming today’s CXO for tomorrow’s media world, Frost & Sullivan discussed best practices in managing content from creation to consumption within the context of real-world challenges, highlighting pitfalls to avoid and best practices to embrace.

In this second installment of the series, the focus is on security.


Keeping our cool in the SDV market

World-leading IT research and advisory firm Gartner recently issued reports that highlight Elemental as the leading supplier in the software-defined video (SDV) space, which is expected to reach a total addressable market of $10 billion (USD) by 2018.

Based on a 24 April 2015 report authored by Gartner research director Akshay Sharma, ‘Emerging Technology Analysis: Cloud-Based Solutions Change Video Delivery for CSPs and MSOs Globally’, it is clear that the benefits of software-based solutions, which have pervaded the IT industry, are poised to significantly impact the video industry. Over 600 content providers that have deployed Elemental video processing and delivery software agree.

A Gartner report published earlier in April, ‘Cool Vendors in Communications Service Provider Infrastructure 2015’, recognises Elemental as a “cool vendor.” The criteria for companies qualifying as “cool vendors” include development and deployment of innovative or high-potential technologies and solutions. Issued on 16 April 2015, this report was also written by Akshay Sharma, along with colleague Sylvain Fabre, and places Elemental among the top five vendors with significant impact on the industry.

The Gartner CSP forecast corroborates that the era of traditional fixed-function hardware for video processing is at an end and SDV has arrived. Meanwhile, being named a “cool vendor” by Gartner is a validation of the value an SDV approach has across the broadcast, pay-TV and enterprise video industries. The conclusions of the two reports combined make it clear that Elemental is in the right place.

While the software-defined video category is still new to most, leading video providers are adopting software solutions at a rapid pace. Infrastructure-agnostic solutions from Elemental are unique and we see ground-breaking implementations with market leaders around the world, including Comcast, HBO, MSNBC, Sky and Telstra.

According to Gartner, that makes us cool. And Gartner calling us cool makes us very happy indeed.

By Keith Wymbs, chief marketing officer, Elemental Technologies

Why OTT streaming really is the new broadcasting

Subscription TV is about to reach a very significant milestone: more than 1 billion users worldwide are in sight by 2020. Traditional delivery mechanisms for this growing population have mainly relied on broadcast technologies, including satellite, cable and digital terrestrial, but IPTV has also become mainstream, and the expectation now is that TV is truly an interactive, two-way experience blending broadcast and on-demand services. There is also a growing expectation that the next billion viewers will be reached through much more affordable OTT connectivity, with the open internet providing connections not only to TVs in the living room but also to the rapidly growing number of mobiles, tablets and other connected devices on which video is watched. This applies especially to the Millennial generation in established markets and to consumers in developing markets, where traditional TV deployment models are being bypassed. The ability to address TV through an effective OTT process therefore represents a significant and business-critical inflection point.

The Millennial generation experience

The Millennial generation also has very different consumption, viewing and commercial expectations of TV, which in turn are giving rise to new business models. The next billion are less likely to pay high subscription fees for bundles of content in which many channels are rarely watched, as they have largely grown up on free (possibly pirated) content. Legitimately supplied ‘free’ content is in reality likely to be advertiser or sponsor funded, with the connected viewing device providing the back channel for better-targeted advertising. In this way Millennials are also much more visible than previous generations of passive TV viewers. Much more granular addressability, data mining and targeting are all part of the advertiser’s toolbox today, but interrupt-driven advertising is not the only way to engage, and scheduled ad breaks may become a thing of the past. Millennials also use social media concurrently with their TV viewing, so the opportunity exists to engage on the companion social media feed that accompanies the show on view, and this feed or service platform may not be owned or controlled in any direct way by the programme or channel owners.

The rise of the destination brand and the growing use of content search and recommendation engines

Paying for subscription bundles is giving way to event- or series-based content consumption. Rights owners are now in a stronger position to offer direct OTT access to individual programmes that may in the past have been part of an aggregated TV bundle, but this really only works at either end of the scale. Premium content that is highly visible and sought after, whether a sports event or a popular series, has the opportunity to become a destination brand outside an aggregated bundle, as does niche, specialist or community content at the other end of the scale. But does this mean the end of aggregated subscription TV? Not yet, because branded service bundles are also shorthand for convenience for the lean-back TV generation. However, the more we interact and seek out content, the more we are likely to depend on personalised content profiles and customised EPGs, with room for incremental recommendations.

Can the existing OTT infrastructure support the next wave of viewers?

Delivering a truly interactive video experience over the internet has always been a significant technical challenge, particularly when an OTT service must emulate a broadcast-like experience; the many iterations of web-centric codecs and dedicated IPTV systems attest to this. The internet was built around distributed-path delivery for basic text messages, a core architecture that works in direct contrast to the requirements of video, with its higher demands for continuity of connectivity and significant bandwidth. To overcome these demands, digital video delivery has passed through several iterations: from CBR, through VBR, and now several flavours of ABR. All were built on the assumption that control over the content and its delivery lay in the hands of the network provider and its technology suppliers. A core assumption was that the client device at the end of the line was only ever going to be a ‘dumb client’ or ‘passive receiver’ (e.g. telephones and TVs). That approach is no longer appropriate or necessary. With the exponential growth of broadband-connected devices, including smartphones, powerful tablets and myriad laptops, the client device is now more than smart enough to be part of the next generation of OTT video delivery and service management.

There is also a shift in emphasis from purely pursuing optimal QoS to acknowledging that media, and video in particular, should be experienced as a continuum: attention to QoE helps maintain the user’s experience of the narrative while the system copes with QoS fluctuations. If a picture is worth a thousand words, then the sight of buffering video highlights the limitations that even first-generation ABR-based systems have yet to overcome.

Quiptel – Streaming with Intelligence

Quiptel has taken an innovative, intelligent end-to-end system approach that makes it a leader in the OTT space. Working with existing infrastructure and incremental technology, the smarts are no longer confined to the headend: the paths to the devices are also highly optimised and managed dynamically, thanks to the patented, interactive way the smart headend converses with the smart client.

Quiptel’s advanced streaming and management mechanisms include:

- Intelligent Dynamic Routing with Concurrent Multi-path Delivery

- Intelligent Data Flow Control with Adaptive Transfer Rate (ATR) or HLS

- Intelligent Client Device Management using Dynamic multi-link capabilities

Intelligent Dynamic Routing with Concurrent Multi-path Delivery

The approach adopted by Quiptel combines the advantages of a traditional RTSP connection with the reliability of HTTP transport to offer the best of both worlds. Essentially, this involves encapsulating RTSP and RTP in HTTP requests and responses. The concept of tunnelling such protocols in HTTP over TCP is not new, but the Quiptel approach provides improved flow control, which can deliver significant improvements in the media experience over current adaptive schemes based on simple HTTP requests.
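As an illustration only, encapsulating an RTSP command in an HTTP request can be sketched as below; the `/tunnel` path, content type and framing are generic assumptions for the technique, not Quiptel's patented wire format.

```python
# Hypothetical sketch: wrap an RTSP command in an HTTP POST so it can
# traverse HTTP-friendly infrastructure. Framing details are illustrative.

def wrap_rtsp_in_http(rtsp_command, host="media.example.com"):
    body = rtsp_command.encode("utf-8")
    headers = (
        f"POST /tunnel HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/x-rtsp-tunnelled\r\n"
        f"Content-Length: {len(body)}\r\n\r\n"
    )
    return headers.encode("ascii") + body

request = wrap_rtsp_in_http(
    "PLAY rtsp://media.example.com/stream RTSP/1.0\r\nCSeq: 2\r\n\r\n"
)
```

The server side would unwrap the body and hand the RTSP message to its session logic, which is what lets one infrastructure serve both tunnelled and native RTSP clients.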

Intelligent Dynamic Routing – allows the player to determine which available servers to draw data from, based on geographical position and network conditions. By gathering this data ahead of time, the player can implement a fallback strategy in case of failure, increasing reliability of service and improving scalability.

Concurrent Multi-path Delivery – provides dynamic load balancing across a provider’s network by establishing two TCP links to each of the top three servers capable of providing the data. As the player probes the network and load characteristics, the priority level of each of these servers changes dynamically, ensuring smooth load balancing across the network.
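A minimal sketch of this selection logic is shown below; the host names, scoring weights and measured metrics are assumptions for illustration, not Quiptel's actual ranking algorithm.

```python
# Hypothetical multi-path selection: rank candidate servers by measured
# round-trip time and load, keep the top three, and assign two logical
# TCP links to each. Weights and data are illustrative.

def score(server):
    # Lower RTT and lower load give a higher priority score.
    return 1.0 / (server["rtt_ms"] * (1.0 + server["load"]))

def pick_paths(servers, top_n=3, links_per_server=2):
    ranked = sorted(servers, key=score, reverse=True)
    return [(s["host"], link) for s in ranked[:top_n]
            for link in range(links_per_server)]

servers = [
    {"host": "edge-a.example.com", "rtt_ms": 20, "load": 0.4},
    {"host": "edge-b.example.com", "rtt_ms": 35, "load": 0.1},
    {"host": "edge-c.example.com", "rtt_ms": 80, "load": 0.9},
    {"host": "edge-d.example.com", "rtt_ms": 15, "load": 0.8},
]
paths = pick_paths(servers)
# Six (host, link) pairs: two links to each of the three best servers.
```

Re-running the scoring as probes return fresh RTT and load figures is what allows the link set to shift smoothly as network conditions change.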

This hybrid approach also allows the same server infrastructure to support clients on managed network connections using RTSP. This means that a single system can deliver to clients on either managed or unmanaged networks.

Intelligent Data Flow Control with Adaptive Transfer Rate (ATR) or HLS

Data Flow Control with Adaptive Transfer Rate or HLS – uses an algorithm in the player that selects the most appropriate bit rate and flow control depending on the condition of the buffer and the network. The aim is to keep the buffer between 20% and 80% full, at the highest bit rate the available network can sustain. The algorithm strives to push the bit rate to the highest level while the Quiptel mechanisms maintain communication with the server and deliver media over standard HTTP. The underlying TCP connection is also configured to optimise delivery, while remaining fully compliant with Internet Protocol standards.
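The 20–80% buffer rule described above can be sketched as a simple step controller; the bit rate ladder and the exact step-up/step-down policy here are assumptions, not Quiptel's published values.

```python
# Illustrative buffer-driven bit rate selection: hold the playback
# buffer between 20% and 80% full, stepping one rung at a time.
# The ladder values (kbps) are assumed for the example.

BITRATE_LADDER = [400, 800, 1500, 3000, 6000]

def next_bitrate(current_kbps, buffer_fill):
    """buffer_fill is the fraction of the playback buffer that is full."""
    i = BITRATE_LADDER.index(current_kbps)
    if buffer_fill < 0.20 and i > 0:
        return BITRATE_LADDER[i - 1]   # buffer draining: step down
    if buffer_fill > 0.80 and i < len(BITRATE_LADDER) - 1:
        return BITRATE_LADDER[i + 1]   # comfortable headroom: step up
    return current_kbps                # inside the 20-80% band: hold
```

For example, a player at 1500 kbps with a buffer only 10% full would drop to 800 kbps, while one with a 90% full buffer would try 3000 kbps.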

By bypassing the buffering of data when sending RTSP commands, Quiptel reduces transmission delays and increases responsiveness. This buffering is generally used to avoid transmitting many small data packets, but by dynamically disabling it for command data, latency can be significantly reduced. The Quiptel system can therefore respond more rapidly to changes in network conditions, allowing intelligent flow control to manage the delivery of data dynamically rather than relying on the client to download an entire chunk of media before it can react.
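The small-packet buffering being bypassed here corresponds to TCP's standard Nagle algorithm, which any client can disable per connection. The sketch below shows the generic socket option; it is an illustration of the technique, not Quiptel's proprietary mechanism.

```python
import socket

# Sketch: open a TCP connection for short command messages and disable
# Nagle's algorithm (TCP_NODELAY) so each small write is sent at once
# instead of being coalesced with later data.

def open_command_socket(host, port):
    sock = socket.create_connection((host, port))
    # With TCP_NODELAY set, short RTSP-style commands leave immediately,
    # trading a little bandwidth efficiency for lower latency.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```

Keeping Nagle enabled on the bulk media connection while disabling it on the command path is a common way to get both throughput and responsiveness.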

Intelligent Client Device Management using Dynamic multi-link capabilities

Dynamic multi-link capabilities for intelligent client device management are built directly into the Quiptel player. In Speed Up mode, the player tries to source as much data as possible to fill the buffer and begin playback. To achieve this, it establishes as many TCP links as required to download multiple segments in parallel. The segments are then recombined for continuous playback, ensuring a great QoE.
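The parallel-fetch-and-reassemble behaviour can be sketched as follows; `fetch_segment` is a hypothetical stand-in for a per-segment HTTP request, and the link count is an assumption.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "Speed Up" sketch: fetch several media segments over
# parallel connections, then join them in index order so playback is
# continuous even if downloads complete out of order.

def fetch_segment(index):
    # Placeholder: a real player would issue an HTTP GET per segment.
    return bytes([index]) * 4

def speed_up_fill(segment_indices, max_links=4):
    with ThreadPoolExecutor(max_workers=max_links) as pool:
        # ThreadPoolExecutor.map yields results in submission order,
        # which preserves the segment sequence for the buffer.
        chunks = pool.map(fetch_segment, segment_indices)
    return b"".join(chunks)

filled = speed_up_fill(range(3))
```

Because `map` returns results in the order the indices were submitted, the buffer is always a correctly ordered byte stream regardless of which link finishes first.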

Now the smart client device is an active part of the experience, with each connection ensuring optimal delivery of the stream. The Quiptel approach allows the device to display the feed at its maximum capability, which in turn allows the system to manage optimised routing. This means a device never gets more data than it can handle, and a ‘greedy client’ cannot demand more data at the expense of other clients on the network. This is the ideal approach in a world where devices are either part of a managed IPTV platform or a BYOD device on an OTT TV service.

The Quiptel approach pays off

The flexibility that Quiptel offers is that it can be applied to both existing IPTV and new OTT deployments – providing a technically elegant and cost-effective approach that meets the needs of both platform operator and viewer.

The Quiptel solution also uses the latest cloud-based architecture to provide a truly scalable and flexible platform. In the past, the barrier to entry to becoming a broadcaster was the significant upfront investment required in dedicated, expensive distribution systems. With broadband infrastructure becoming more pervasive, faster and cheaper, and server and storage power following Moore’s Law, set-up costs can now be scaled as the channel line-up and viewer population increase. The Quiptel platform is designed to enable new entrants to engage and build audiences in ways that were not practical or cost-effective with a broadcast approach. Even specialist channels in danger of being lost in the lower echelons of broadcast platforms can now benefit by going online and creating an engaging interactive offer from launch.

Quiptel’s technology makes use of under-utilised paths across the network to avoid congestion and gain better throughput. This represents a truly intelligent approach to constructing a comprehensive software solution that is both technically advanced and fully scalable to meet the needs of operators of all sizes – to deliver OTT now.

Getting to the next billion is not dependent on waiting for new codecs or deploying expensive network upgrades – Quiptel’s intelligent approach means you can start winning new viewers today.

Quiptel’s OTT streaming solution really is the new broadcasting.

By Geof Todd, VP sales, Quiptel



Procam supports MAMA Youth Project

UK broadcast hire company Procam has donated £100,000 of HD equipment to the MAMA Youth Project to give disadvantaged young people hands-on experience of a range of broadcast technology and further their route to employment in the industry.

Procam’s investment has provided the charity with HD cameras, lighting and sound equipment. This means that the organisation’s TV show, created by the young people themselves, will be captured in high definition for the first time.

The MAMA Youth Project (MYP) gives young people aged 16-25 from minority and disadvantaged backgrounds the opportunity to undergo hands-on training and gain real-world experience in the broadcast industry. As part of the scheme, Procam took on two trainees for a 13-week period, after which both were taken on as full-time warehouse technicians.

Trainees Jack Lucas (left) and Callum Tunmore give their own accounts of how they found the experience.

By Jack Lucas, audio trainee

I began my time at Procam not really knowing what to expect, but with big hopes and expectations, and I wasn’t disappointed. Although I had gained some experience as a sound recordist during my time at the MAMA Youth Project, after spending just a couple of weeks at Procam I realised there was still plenty for me to learn. The range and variety of equipment that they own is quite staggering and can seem a bit daunting when you first start, but all of the staff members have been really friendly and willing to pass on their knowledge and give advice.

Due to my interest in audio, I was able to spend a lot of time working in the sound department with Steve Peck and Louis Boniface who would always answer any questions I had about the equipment. It was here that I learnt the most about how Procam operates and how serious they are about providing the best service possible to all their clients. This would include meticulously testing every single piece of kit, making sure everything is clean and presented neatly. It is this attitude which I think helps make Procam as successful and well respected as they are.

I didn’t just spend 13 weeks in the kit room, I also got to go out with the projects team to Under The Bridge at Chelsea’s football stadium to help with the Made In Chelsea end of season show. This was a great learning curve as although I’ve had experience working on shoots, they were on a much smaller scale. I got to help with both the rig and shoot day working under Procam’s sound supervisor Nick Way, who really looked after me and made me feel part of the team.

Having met some of the Procam staff whilst at MYP, I remember asking them what it was like to work there and they all had good things to say. They made it clear that it isn’t an easy ride and that you have to work hard and be driven, but if you truly want to build a career as a camera op/sound recordist, then there’s no better place to do it.

My advice to those considering a technical job in the industry: if it’s something you are passionate about and enjoy, then go for it. A hire company is your opportunity to arm yourself with all the know-how needed to make it. I didn’t study for TV at university, and whilst my degree (Music & Audio Technology) helped with my understanding of audio, it didn’t give me the hands-on experience and knowledge that I’ve already gained from my short time working at Procam.

By Callum Tunmore, camera trainee

From the age of 16, I had been involved in a few small productions, and worked with a production company in my home town of Norwich. I was then in the sixth form, and made the decision to try and jump straight into the industry as an alternative to university. I was taken on by the MAMA Youth Project, and worked as a camera operator for a series on a Sky 1 commission. From this I gained a broadcast TV credit, valuable industry and kit experience, and above all a 13-week paid placement with Procam.

During my time filming the show, I was operating ENG broadcast cameras, full lighting setups and all the other components we needed to create a professional production. All of the kit I was using was from Procam, so it meant I had a head start in understanding some of their equipment. During the 13-week placement I had with Procam after the show I undertook a massive range of tasks, with no day being the same. I was able to get hands-on experience with Procam’s extensive range of equipment and I felt I was learning an incredible amount on a daily basis. I was also sent out as a driver on occasion, which helped me familiarise myself with a range of procedures and clients that Procam deal with.

After the placement, I was taken on at Procam on a permanent contract as a warehouse technician. The main benefit of the role is that I’m constantly using and testing every part of the kit, so I’m able to learn how to use it, as well as troubleshooting problems which are invaluable on set. Since being employed full time, I’ve been out on numerous shoots, ranging from a night time shoot in the London Stock Exchange to a high end commercial with a model. I have been working as both a camera trainee and a technical assistant on these jobs, but the work goes far beyond the job title. I’ve been training with the fantastic DOP Saul Gittens, and through working with the team that Procam employ I’ve gained a lot of expertise, as well as the friendships that have subsequently blossomed. I’ve been using a huge range of kit from the C300 to the F55, and been able to get hands-on experience with the industry’s best kit, which is incredible considering I was in college just last year.

Despite being Procam’s youngest employee at 18, I found their work is based entirely on professionalism and the sheer amount of hard work everyone puts in, so I wouldn’t hesitate to recommend a career with them instead of university.

It’s well known that graduates have to start at the bottom like everybody else, and the experience and credits I’ve got simply can’t be taught or learnt at an educational level. With a set career path to become a DOP, I believe I am in the right place to achieve this. I feel fully in my element at Procam, and overall it’s an amazing place to be, with great people who have helped me achieve so much in the six months I’ve been in London, and who will undoubtedly continue to help me achieve my goal of becoming a lighting cameraman.

By Jack Lucas and Callum Tunmore, Procam

Eye on 2020

Historically, conversations around broadband speeds and piracy have focussed on utilitarian aspects, with rapid broadband allowing free and easy access to illegal content. While this is true, by 2020 operators will need to consider how they can use rapid broadband to provide a luxury content model to customers, one which emphasises the premium nature of purchased content over the pirate alternative.

One of the most important areas to be discussed between now and 2020 is the issue of Net Neutrality. For cable operators fighting increased competition from OTT operators, focus will shift to broadband provision. Operators control the infrastructure, so the decision on Net Neutrality will be a huge deal for them either way. With a ruling against Net Neutrality likely, operators will shift their gaze to broadband provision and implement increased speed and capacity plans, as well as enforce tolls on OTT services to bolster their position in the market.


IP-based KVM: The next stage in broadcasting evolution

The broadcasting sector has experienced a number of changes over the past few decades: from analogue to digital, from SD to HD, not forgetting short-lived 3D television and the now-popular 4K/UHD. While there will always be a focus on the consumer, and on getting viewers what they want, when they want it, on whichever device they want it on, behind the scenes broadcasters are also experiencing developments that could change the way they operate and deliver on customer demands.

The studio control room of the past, for example, featured a host of specialised equipment, often proprietary, designed to process, manage and transmit content. However, as technology has evolved and organisations are driven to cut costs while becoming more efficient and productive, there have been significant changes in infrastructure. One of the major shifts facilitating this has been the use of a standard IP network to transport signals around facilities.

An IP-based broadcast environment

IP-based KVM (keyboard, video and mouse) technology is one of the most effective demonstrations in broadcasting of the benefits of this transport method. IP is being widely used in all areas of broadcast, from the outside broadcast truck and studio control room to post production suites. Specifically, IP-based high performance KVM removes the limitations of traditional AV equipment and brings real-time, accurate video operation to these areas.

Broadcasters are using switching and extension technology – not new by any means – to make operations leaner, more cost-effective and more flexible by basing it on IP.

IP-based KVM brings enhanced levels of reliability, scalability and versatility to the control room, and also has the power to deliver HD video quality at 60 frames per second. It is ideal because it is cost-effective, reliable and resilient and already forms the backbone of a gallery’s infrastructure. While technology on the network may be sourced from different vendors, the investment has already been made. The result is that broadcasters can leverage the IP investment without significant additional cost.

Switching and extension in the control room

IP-based KVM allows operators to switch control between many different systems and workflows from one desk, and enables multiple users to share the same resources without loss of quality or performance. It also improves the ergonomics of the working environment by freeing up space and eliminating excess heat and noise, as the computing equipment is generally located elsewhere, such as in a secure server room.

From a switching perspective, the control room is a pressurised environment, particularly during live events, and the ability for users to switch between different resources while sitting at one workstation, using one mouse, screen and keyboard, is a boon for productivity. The KVM extender, meanwhile, runs efficiently in the background without the user being aware of it, providing real-time extension, high resolution graphics, full USB compatibility and instant switching speeds.

This can help make more effective use of a control room’s staff complement, and may even reduce the number of staff needed at any given time.

The use of IP-based KVM in OB trucks yields the same benefits. Set up as a microcosm of the gallery control room, OB trucks are used to broadcast live events and require instant video and USB switching, reliability, device support and efficient ergonomics. Space is limited and a finite number of staff perform a multitude of tasks, such as monitoring video feeds, previewing shots, ensuring shot quality, guaranteeing playback capabilities and transmitting the feed back to the studio or the main truck controlling the broadcast.

Using an IP-based KVM solution in the environment ensures that multiple machines can be controlled by just one keyboard and mouse, with video signals switched and extended at the same high quality.

Post production and KVM

Post production environments must be comfortable and quiet, with talent and editors able to access the variety of hardware and software they need from a single workstation. KVM allows the computers to be removed from the editing suite, freeing up space and eliminating additional heat and noise, so extending these resources without loss of quality or functionality is crucial. In addition, IP-based KVM guarantees pixel-perfect content and frame rates, a critical factor for post production houses.

Going forward, the use of IP may even be the solution broadcasters are seeking to efficiently deliver 4K/UHD content to the viewer. In the immediate future, however, the application and use of IP-based KVM is already delivering flexibility, scalability and improved reliability to installations across the broadcast industry. As more organisations move away from the reliance on proprietary hardware and adopt more software-orientated business models, IP-based KVM will play a key role.

By John Halksworth, senior product manager, Adder Technology


BYOD and the new demands for group collaboration in Education and Corporate

Emerging audio-visual and ICT technologies are rapidly changing our workplaces, schools and higher education establishments. Wireless technology makes it possible to share content from various participants’ personal tablets or smartphones and combine it on a centrally projected screen for discussion.

Increasing use of tablets is driving the trend towards what is known as Bring Your Own Device (BYOD) practices in business and education. The growing use of smartphones and tablets in education and business has been assisted by ever faster and easier internet access, and by cloud storage such as Dropbox or Google Drive, which allows large presentation files, video files and the like to be stored easily outside a company’s IT network.

The next step is now to integrate the BYOD practices and faster internet infrastructure with a more streamlined way to collaborate and engage when people walk into the classroom or meeting with their personal devices.

This AV white paper examines the benefits of BYOD and looks at the possible applications.

The new game-changing interactive digital displays

Next-generation interactive digital displays are offering new features, improved total cost of ownership (TCO), and seamless integration into the existing technology platforms in your meeting room or classroom. This new white paper will help integrators and end users evaluate the new interactive displays and decide if they are right for their organisation.

Thunderbirds are go! How Adobe Premiere Pro is helping bring the British classic back to the small screen

By Niels Stevens, business development manager, Video, Adobe UK

Growing up, I was a huge fan of Thunderbirds. Combining marionette puppetry and scale-model special effects, the show is still hailed as one of the best examples of supermarionation ever seen on screen.

ITV Studios and Pukeko Pictures are bringing the classic back to the small screen with a brand new series. Thunderbirds Are Go will use CGI to create animated characters with live-action miniature models developed by Weta Workshop, to give the show a retro feel and remind fans of the show’s puppet roots.

So how did the teams involved go about creating this reimagined classic?

Starting with pre-visualisation, Maya was used to render the scenes and assets, while Adobe Premiere Pro CC was used to cut all of the rendered scenes together with the pre-recorded dialogue. Once pre-visualisation was complete, the teams moved on to post-visualisation. Using the XML output capability in Premiere Pro, they translated their work back into Maya to generate detailed assets, rigs and camera placements for each shot. Once this was done, everything was brought back into Premiere Pro before the final comp, which was done by a company called Milk.

Premiere Pro CC helped the teams speed up their workflow, as it accepts all sorts of content regardless of format, whether the content is online or offline, and doesn’t require any third-party apps to ingest the media. With so many companies working on the episodes across different countries and time zones, Premiere Pro also enabled everything to be managed in one programme, keeping things streamlined and efficient.

The first Thunderbirds Are Go episode premiered in the UK with an hour-long episode special on ITV on 4 April 2015.

Welcome to NewBay Connect

Welcome to the new and improved NewBay Connect website.

NewBay Connect is the global resource portal for media technology content. We’ve extended the editorial reach of the site beyond whitepapers to bring you brand new content including insightful opinion pieces, stimulating blogs and engaging videos from industry figures and companies across the TV, pro AV and pro audio markets.

We encourage you to take a look around the site and welcome any comments/feedback. We would also love for you to get in touch with ideas for new content and requests to contribute to the site.

To keep up with the latest content on the site, make sure you follow us on Twitter and Google+, join our LinkedIn group discussions and sign up to our two weekly newsletters.

I hope you enjoy the refreshed, new-look site.

Melanie Dayasena-Lowe


NewBay Connect

If we don’t collaborate, we can’t innovate

By Kevin Usher, director of product and segment marketing, broadcast and media, Avid

In March we announced Adobe Premiere Pro CC is now fully supported on our shared storage systems, Avid ISIS | 5500 and Avid ISIS | 7500, as a result of unprecedented collaboration between the two companies.

I’ve been asked why two companies that effectively market competing professional editing applications would want to collaborate. After all, by doing this, aren’t we just encouraging the market to use Premiere Pro CC over Media Composer?

The answer is no. The long-running commoditisation of professional editing products – most editorial software is now available for as little as £39 per month on subscription – has given editors access to a wide range of professional tools.

It’s about giving customers the ability to choose between, and interchange, editing tools, safe in the knowledge that their performance won’t be hampered by inefficient connectivity to the rest of the workflow.

With post production and broadcast infrastructures built on solutions from many manufacturers, industry collaboration is essential to making the overall workflow more efficient. This leaves our customers to concentrate on doing what they do best – creating compelling content.

The flexibility to integrate third-party products easily into an Avid workflow is the main premise of the Avid MediaCentral Platform and delivers on the promise of Avid Everywhere. With the Avid Connectivity Toolkit, third-party vendors can seamlessly integrate their products and services into the workflow of content creators and distributors across all solutions on the platform. The openness of the MediaCentral platform enables partnerships with companies like Adobe to happen and gives post production houses and broadcasters vital productivity efficiencies.

So what does this latest collaboration really mean for the industry? Adobe Premiere Pro CC has worked on ISIS storage for several years, but its performance wasn’t optimised for ISIS, resulting in a lower bandwidth rating per ISIS engine compared to Media Composer.

Working together with Adobe to deliver enhancements to Premiere Pro CC, we’ve more than doubled its performance on ISIS. For example, a system that could support 10 streams of playback (so five editors each with two streams) now supports 20 streams, allowing ten editors with two streams each – or more streams per editor. This enables more complex edits and the creation of more compelling content, no matter what editing system it is cut on.

As a result of this latest collaboration, video professionals can now experience the most flexible and efficient workflows regardless of their choice of editing application.


The future of online video

By Alon Maor, CEO, Qwilt

A look at the challenges facing online video delivery from the network point-of-view

The 50-year-old, highly asymmetrical model of broadcast television is being shaken to its core by consumers, technology and new service provider business models. Rivalling some of the greatest technology disruptions in history, the online video phenomenon has catapulted the market into an unprecedented, and very exciting, transformation. Undoubtedly, it was the advent of long-form HD video from sources like Netflix, Amazon and Hulu that ushered in a new generation of viewers and transformed consumer viewing behaviour. The novel early days of watching clever three-minute YouTube clips have been eclipsed, and the notion that you once had to wait for a TV show to come on air at its scheduled broadcast time already seems laughable.

High quality video streaming is not only making its way into our living rooms, via the popular uptake of smart TVs, but increasingly onto our mobile devices too. Today’s viewers expect to be able to watch their favourite TV show or film anytime, anywhere, and on any device of their choosing. Unfortunately, existing IP networks, already straining under the pressure of serving this volume of streamed content, are being pushed to their limits. This consumption trend is confirmed in research that estimates half of a mobile viewer’s time is already spent watching videos that are longer than 30 minutes. Interestingly, many cable operators are beginning to recognise – and, in some cases, admit publicly – that as time goes on, their broadband internet service offering, not their cable TV service, will be their strategic product line.

Faced with the inevitability of online video streaming going mainstream, it is critical that network operators move now to prepare their networks for the future of online video, or else both viewer satisfaction and, crucially, subscriber dollars will be at stake. It’s important to remember here that at every step in the video streaming ecosystem, from the content provider’s origin server, to commercial CDN, to transit and peering exchange points, to the network operator and finally to the home, a video stream must navigate a diverse group of commercial and business interests and gateways.

The challenges, therefore, are manifold; for one, any surge in traffic load during peak video consumption hours – like around the time of the season finale of House of Cards, for example – can clog up the network and trigger an annoying ‘buffering, please standby’ notification. Aside from the viewer irritation of having to wait for content to be delivered, traffic pressure inevitably impacts video picture quality and Quality of Service (QoS) for other services – including other latency-sensitive applications. Unfortunately, in a highly competitive industry, slower downloads, delays in response, lower bit rates and subpar viewing experiences simply result in consumer churn – and, on occasion, a public flogging via social media.

In the past, conventional wisdom would guide an operator to address the problem with brute force – buying more routers, switches and links to increase capacity. However, just throwing money at the problem is proving to be an approach as unsustainable as it is costly. A long-term solution for online video is not just about building bigger networks, or becoming trapped in closed, proprietary systems; it’s about building an intelligent, open network. There is broad-based agreement with the architectural principle that caching video content close to the consumer is an essential part of the overall strategy to deal with the online video problem. However, what’s missing from this conversation is the choice in caching architecture that must be made along the way. This choice, and the implications that flow from it, are critically important and worth a deeper discussion.

The notion that there is a choice in caching architecture is often obscured today by the ongoing deployment of content-specific caches. These closed and proprietary caches handle popular video traffic but solve the problem for only one content provider at a time. For example, the Google Fiber team describe their support for various closed cache systems in a recent blog but do not address the option to deploy an open cache architecture which would benefit all content providers and consumers. Ultimately, when addressing a future in which the content provider landscape is both diverse and dynamic, ‘open’ simply beats ‘closed’. Moreover, the imperative to build an open architecture for streaming video appears to be an essential part of the recent Net Neutrality ruling from the US Federal Communications Commission. This ruling affirms the ‘Open and Free Internet’ and clearly requires the US Internet Service Providers, for both mobile and fixed broadband networks, to treat all content without preference. As this ruling is fully implemented, any content-provider-specific cache deployed inside the last mile network will likely be called into question as preferential treatment that is prohibited by the new FCC rules. Further, we expect the EU and many other regulatory bodies around the world to adopt much of the FCC’s net neutrality doctrine.
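Whatever the architecture, the mechanics of edge caching are the same: popular video segments are stored close to the viewer and evicted when space runs out. As a rough, hypothetical sketch – the class name, capacity and eviction policy below are illustrative assumptions, not any operator’s implementation – a minimal least-recently-used (LRU) segment cache might look like this:

```python
from collections import OrderedDict

class SegmentCache:
    """Minimal LRU cache for video segments, keyed by URL.

    Illustrative only: a real open cache would add TTLs, byte-range
    requests, cache-control handling and origin-agnostic policies.
    """
    def __init__(self, capacity=100):
        self.capacity = capacity
        self._store = OrderedDict()   # insertion order doubles as recency order

    def get(self, url):
        if url not in self._store:
            return None               # cache miss: caller fetches from origin
        self._store.move_to_end(url)  # mark segment as recently used
        return self._store[url]

    def put(self, url, segment_bytes):
        self._store[url] = segment_bytes
        self._store.move_to_end(url)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

An open cache applies one such policy to every content provider’s segments; a closed cache is, in effect, the same mechanism dedicated to a single provider’s URLs.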

Support for the open caching movement is growing fast, as evidenced by the announcement last fall of the Streaming Video Alliance, whose 17 founding members have made clear the need for an open architecture to allow online video to flourish. To be sure, open caching of content will eventually become part of the core network infrastructure, just as routers and switches are core today. Indeed, we already have the open internet, TCP/IP, routing and switching as shining examples of how open technology as infrastructure can benefit operators, industry and, most importantly, consumers.

Given that video is swiftly becoming a standard fixture in the consumer web experience, and with live streaming of sporting events tipped to be the next tidal wave to hit mobile networks, any compromise in end-user Quality of Experience (QoE) is clearly unacceptable. As internet-connected devices and streaming services rise in popularity together in a virtuous cycle, each reinforcing the value of the other, OTT industry growth will continue to accelerate. Faced with these facts, it is now up to network operators to make a strategic choice in how they prepare their networks for the future of online video.


How to profit from accessories

What every retailer needs to know

Retail experts share their strategies to help retailers maximise accessory profit and improve customer satisfaction in this new white paper.

What common core standards mean for college and career readiness

When educators use relevant, self-directed, learning-by-doing instructional practices, we see student engagement increase. Far too often we focus on the ‘Concept’ but lose the ‘ConTEXT’ in our instructional models and then wonder why students are bored and uninterested. When students understand WHY they need to understand a concept and can put that concept to use in a real-world project or problem, we see learning and engagement improve at a much more rapid rate.

There is definitely a place for lecture in our classrooms, but if we really want better outcomes our classrooms cannot only be “the place students come to watch adults work.” Technology can be a powerful enabler of engaging, relevant, student-centred, and personalised learning. We just have to rethink the status quo and its outdated instructional models.

Video wall technology

Are you ready to embrace the latest LCD display technology for your space?

Download this free white paper for an overview of the major video wall display technologies, including Rear Projection, Direct-View LED (light-emitting diode) and LCD (liquid crystal display), to help identify the right video wall solution for your particular application.

The data analytics revolution

It is widely understood that big data, enabled by IP and cloud technology, has the potential to transform business practices across a broad range of sectors, including media. However, the use of viewer behaviour analytics (the collection and analysis of a variety of metrics about the way users interact with audiovisual media) by companies varies greatly. Many organisations are at the beginning of their journey to explore the wealth of data on offer and the ways it could transform their business practices and contribute to their revenues.

Media organisations believe that viewer behaviour analytics will be crucial to their business in the future, with a significant minority citing this as the single most important factor in determining the future of their business. Many organisations are already capturing data, with a smaller group analysing it in detail. Lack of resources remains a barrier to realising the potential of data.

The cloud-based user interface bringing new flexibility to UI/UX deployment

Pay-TV operators are embracing a cloud-based user interface (UI) model that provides several advantages over the traditional practice of hosting the UI in set-top boxes and other delivery devices.

A cloud-based UI allows the operator to quickly respond to changes in viewing habits and deliver to new viewing platforms, such as tablets and mobiles, while continuing to support legacy devices.

Effective cloud UI offerings also enable rapid software-based updates, while online deployments lower costs by reducing the need for expensive field upgrades and truck rolls.

Orange and Viaccess-Orca: A voyage into the TV world

The world of television is experiencing a rapid transformation. Television viewing is no longer a lean-back, static experience.

Viewers want interactive, personalised content on all of their devices, including TVs, smartphones and tablets.

In this brave new media landscape, what are the operators’ challenges? And how can they manage the growing complexities involved with delivering enriched and immersive multiscreen content while maintaining a high quality of service and experience for end users?

As a leading global provider of content protection, delivery, and discovery solutions, Viaccess-Orca is helping to shape the new television experience with Voyage, a unified TV Everywhere solution that enables content service providers to deliver secure, personalised, high-quality multiscreen services.

Understanding Ultra High Definition television

Ultra high definition television (UHDTV) combines 4K resolution, high dynamic range (HDR), high frame rate (HFR) and wide color gamut (WCG). At the same time, it poses some of the biggest questions about the future of the broadcast industry. Should we use 4K resolution, or do we need to wait for 8K resolution? Is some form of enhanced HD with HDR and WCG more likely to be commercially viable than 4K? What will have consumer appeal? Ultimately, will UHDTV be successful?

The Digital Video Broadcasting Project (DVB) has taken the technical standards of the Society of Motion Picture & Television Engineers (SMPTE) (ST 2036-1) and a number of International Telecommunication Union Radiocommunication Sector (ITU-R) Recommendations and produced a tiered practical commercial approach to enhancing television services beyond HD, known as UHD-1 Phase 1, UHD-1 Phase 2, and UHD-2.

To keep things simple, this white paper will use the term UHD-1 when referring to the standards and 4K TV when referring to the actual services and televisions.

Digital Rights Management: Understanding key technologies and drivers impacting premium content distributors

This white paper aims to demystify much of the technology and partisan vendor noise to provide an overview of the market conditions. Senior executives have important strategic decisions to make and the options are varied. Without a de facto standard, and with multiple service delivery options, the selection and implementation of DRM will have a lasting impact on the long-term success of any content service.

DRM means different things depending on where you sit within the delivery cycle and the motivation behind its deployment. In its most fundamental definition, DRM protects content to ensure that it is accessible only to legitimate subscribers. The technology also helps to defeat potential piracy and is the adjunct to techniques such as watermarking that help to identify sources of pirated content, and Conditional Access to manage device access to content.

Preference is just the beginning: video recommendations in the age of mobility

Consumers today face a huge number of TV and video viewing choices, expanding at a seemingly exponential rate. Even if a viewer knows what they are looking for, it can be a daunting task to find it. But how do viewers discover new programs to watch in today’s TV/video world? And how do content providers cut through the clutter to surface programs to the right potential viewers?

Recipe for disruption

With even more new OTT competitors cropping up, cable companies need to stop testing their TV Everywhere solutions and start doing. Here’s how to serve up TV Everywhere and do some disruption of your own.

HBO Now, Apple TV, Sony Vue, and Verizon have all garnered headlines in recent weeks for their breakthrough Direct-to-Consumer moves, providing viable alternatives to traditional pay-TV services.

Whether it be TV Everywhere or Over-The-Top, it’s clear these strategies are shaping the future of how audiences watch video content.

Offering your content across devices and through the net is one of the most daunting undertakings ever attempted by content creators and pay-TV operators, yet it doesn’t have to be so difficult. By following a clear, rational recipe assembled by Digiflare experts in this white paper, pay-TV operators and TV networks can turn the net from a threat into an opportunity and a boost for new revenues.

After you have read through this white paper and you are interested in learning how Digiflare can demo your next TV Everywhere or OTT application, please submit a request at

Consumer trends and their impact on broadband networks

Three Critical Consumer Bandwidth Trends, Their Impact, and How Service Providers Can Cope

This white paper explores three specific consumer trends, and their impacts on broadband networks. It also describes the technological solutions in the works to help service providers deterministically anticipate and navigate the changing video consumption scene.

TREND 1: Consumers want high quality, untethered experiences in the home
TREND 2: Massive increases in video traffic are triggering significant network capacity expansion
TREND 3: Users want to record and store more content

Innovate your workplace

Mezzanine integrates traditional video conferencing and screen sharing in meeting rooms and collaboration spaces in order to link locations, teams, and content through a shared visual workspace. Mezzanine provides exactly what traditional conference and telepresence rooms lack: an effective means to simultaneously engage multiple users and their devices and data.

This is Infopresence – the integration of locations, users, devices, and streams of information for concurrent collaboration. Every participant in a Mezzanine session – whether local or remote – can display and modify information fluidly from any device, making meetings more interactive and responsive.

The Business Case for Shifting Live-to-VOD Media Processing to the Edge with Just-in-Time Transcoding

It is imperative that operators build out live-to-VOD media processing capabilities at the network edge in order to cope with the explosive growth in time-shifted and place-shifted consumption of live, linear content. Operators must plan new or expanded deployments of OTT content delivery infrastructure to minimize churn, maximize subscriber engagement, and meet projected demand volumes for time-shifted content.

This white paper argues that the strategic choice for operators who want to remain competitive and minimize the TCO of live-to-VOD media processing is to deploy ultra-high-density transcoders at the network edge, such as those from Variant.

Controlling and monitoring studio displays

Based in Atlanta, The Weather Channel decided to build a new studio for some of their flagship shows. The studio design was based on multiple studio displays in different sizes, aspect ratios and shapes. The Weather Channel looked for a solution that would drive the content on those displays and integrate flawlessly with their existing infrastructure. Orad’s TD Control studio display control was selected, and provides The Weather Channel with control of all their studio displays. A single Weather Channel operator can now easily manage the content distribution to all of the studio’s displays, regardless of aspect ratio, orientation, resolution and shape, from a single user interface.

As TD Control integrates with all commonly used video mixers, routers, video servers, newsrooms and more, it assimilates easily into The Weather Channel’s existing workflow and even enhances it.

TD Control provides The Weather Channel with a holistic system that can effortlessly handle their entire production environment, exceeding their expectations.

How cloud technologies change linear broadcast playout

This white paper aims to demystify key aspects of cloud computing as they apply to linear playout and related linear television technologies. With rapid changes in video content monetisation and distribution business models, it is inevitable that the broadcast industry will have to adopt cloud technologies and resources for playout and the rest of their workflow.

Using cloud playout to simplify your disaster recovery plans

Disasters, whether natural or man-made, pose an increasing threat to business viability for broadcasters and media companies. Just one hour of off-air time can be costly. On top of financial losses – mostly related to advertising and other contractual agreements, required by company mandate or law, that go unfulfilled – broadcasters can suffer damage to their reputation that results in audience declines.

Many broadcasters are required to have effective Disaster Recovery Plans (DRPs) in place, which typically include the most critical components of their Business Continuity (BC) mandates. The standard approach of maintaining a mirrored playout system at the same location, or a remote location, can be complex and costly. Cloud Disaster Recovery (CDR) from Imagine Communications is an exciting new capability that provides private or public cloud-based disaster recovery for broadcast and video playout operations, and because CDR is cloud-based, it is always available and accessible – in addition to being geographically independent from the disaster location.

CDR ensures a virtually seamless, rapid transition from primary playout to a virtualised, remote playout that ensures business operations are not interrupted, and it is the most practical choice for business continuity in the event of disaster.

Video Dropouts and the Challenges they Pose to Video Quality Assessment

File-based media QC workflows increasingly span a variety of native and transformed content. The added complexities of media transformations such as transcoding, file delivery and editing lead to greater challenges for content video quality (VQ) monitoring. Many VQ issues are due to the loss or alteration of coded or uncoded video information, resulting in distortion of the spatial and/or temporal characteristics of the video. These distortions in turn manifest themselves as video artefacts, hereafter termed video dropouts. While the end VQ can be measured and verified using manual checking processes, this type of monitoring can be tedious, inconsistent, subjective and difficult to scale in a media farm.

Automatic detection of video dropouts is the subject of intense ongoing research. It requires complex algorithmic techniques which are at the heart of an “effective QC tool”. This background paper discusses various kinds of video dropouts, the source of these errors, and the challenges encountered in detection of these errors.
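To make the problem concrete: the two simplest dropout classes, black frames and frozen frames, can be flagged from basic luma statistics. The function and thresholds below are a hypothetical sketch for illustration only, not the paper’s (or Baton’s) patented algorithms:

```python
import numpy as np

def classify_frame(prev, curr, black_thresh=16, freeze_thresh=0.5):
    """Classify frame `curr` against its predecessor `prev`.

    Both arguments are 2-D arrays of 8-bit luma samples. Returns
    'black', 'frozen' or 'ok'. Thresholds are illustrative; a real QC
    tool would adapt them to codec, bit depth and content.
    """
    if curr.mean() < black_thresh:           # near-zero average luma: black frame
        return "black"
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    if diff.mean() < freeze_thresh:          # negligible frame-to-frame change: frozen
        return "frozen"
    return "ok"
```

Subtler dropouts – blockiness, combing, partial corruption – require far more sophisticated spatial and temporal analysis than this per-frame averaging.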

Interra Systems’ Baton is the leading file-based QC tool on the market today. Baton supports the detection of a large variety of video dropouts. The detection algorithms in Baton deploy appropriately selected and patented advanced image processing and computer vision techniques.

The three pillars of audio networking

In the world of professional, installed and commercial audio, interoperability is what allows designers and users to freely choose the brands and devices they wish to use, and to easily connect, configure and manage them in practical settings. When interoperability is compromised, systems are cobbled together with fragile and complex workarounds, leading to increased costs and errors.

This white paper examines what constitutes and drives useful interoperability in audio networks, and examines the state of the audio industry from this perspective.

Broadcast control and monitoring system

In live broadcast environments – whether in studios or OB vans – with their daily changing production requirements, time is an essential factor. Broadcasters around the world rely on L-S-B’s VSM, a world-leading IP-based broadcast control and monitoring system.

With the new features explained here – short preparation times before a production, easy and quickly changeable workflows, and the ability to save and recall complete setups – engineers on site can work with peace of mind.

In this white paper you will learn how to streamline your workflows in order to:

• Speed up preparation

• Quicken setup times

• Use simplified functionality for day to day workflows

• Increase efficiency

• Use decentralised workflows

• Stay ahead of your competitors

On Controls Installation Spotlight: Monarc Tree Systems

VanKirk Electric is a leading electrical contractor with a national presence specializing in MDU apartment complexes throughout the U.S. Examining their market and opportunities for growth, the company decided to add low-voltage connected home technologies to the tens of thousands of units that they already service.

In order to properly address this growing market segment, VanKirk formed a division called Monarc Tree Systems (MTS), offering builders and developers of MDUs “a catalog of products ranging from lighting to security that they could choose from to create seamless individual automation experiences for their tenants on a project-wide scale”.

To be successful in the MDU market, contractors must have easily repeatable systems that electrical contractors (typically less familiar with low-voltage technologies such as home automation) can install in volume with no complications. For this reason, Monarc Tree went with On Controls, one of the fastest growing brands in the dealer-installed home automation and control marketplace. Learn how Monarc Tree applied a proprietary user interface to the On Controls framework to offer renters at the Millennium Del Ray in Los Angeles, California a customized and simple home automation solution for lighting, temperature, and A/V.

Today’s Reality for Moving Large Content Files

Why every media firm can (and should) use enterprise-class file transfer software.

Cloud computing is revolutionising software for media and entertainment, making technology that was once only available with enterprise budgets and multiple data centres accessible to small and mid-sized companies. This recent evolution has had a direct impact on the way companies send and share content around the world. With modern SaaS-enabled large file transfer solutions, every media company can afford to participate in the global media marketplace.

In this white paper, you’ll learn about:

  • Three major trends driving the need for large and fast file transfer software
  • The ad hoc content delivery practices that impede many companies
  • The benefits of SaaS and how it scales to every sized business
  • Signiant’s Media Shuttle, the only SaaS solution on the market for large, fast file transfers

The value of quality – beyond standards and datasheets

Axis products are designed, from the first draft, for reliability. The best components and materials are chosen for every purpose. The products are tested for their ability to withstand mechanical wear-and-tear, water and humidity, vandalism, extreme temperatures, vibration, and so on. They are certified against external standards, and every single unit is tested thoroughly during production.

All these efforts are made because quality is something that does not necessarily show up in a product’s datasheet. With Axis’ quality assurance, reliability and quality are carried beyond standards and datasheets.

How to get more people using new technologies in your organisation

One of the biggest challenges faced by the modern IT department is ensuring the adoption of new technologies and solutions across the company; from finance to HR, facilities to marketing, every employee needs to be equipped and motivated to make the most of IT investments.

This white paper examines six key areas that the IT department should consider when deploying a new technology or looking to drive adoption of an existing one.

Arming the cloud service provider to compete

The competitive cloud marketplace is becoming fiercer as adoption of the public cloud increases. Hosters, service providers and telcos are all battling against each other for cloud business, creating a thunderstorm of services and solutions – but are they really all that different? There are differences in the scope of the provider, eg a hoster vs a telco. However, across an industry where there is a significant risk of commoditisation and, potentially, poor delivery against customer requirements, rolling out a generic cloud is no longer an option.

This white paper looks briefly at a popular example of where extensibility, customisation and an ecosystem played an important role in one company’s success. It also outlines some of the requirements needed to gain market share and stay competitive, and shows how a cloud orchestration solution can help. A list of available integration solutions is included to demonstrate how easily plugin and integration technology is handled within a cloud orchestration solution.

How to become a next generation cloud provider

In 1993, there were 14 million internet users. Today, there are nearly three billion, according to Internet Live Stats. The global smartphone market will reach 1.75 billion users in 2014, according to e-Marketer. And there are 10 billion internet-connected devices today, predicted to swell to 50 billion by 2020.

This white paper aims to demonstrate clearly that the big cloud opportunity is here and that it lies in hosting next generation applications (or apps). After reading it, service providers will be better placed to position themselves for the next generation app market; understand how to become a next generation cloud provider by looking at the market opportunity; understand why the app development and DevOps communities are the ones to target; and identify what solutions are necessary to capture this demand and grow the business quickly.

Specification and use of in-line filters to reduce interference in broadcast bands from mobile base stations (SB2122)

With the “digital dividend” spectrum reorganisation in Europe, LTE and digital broadcast television bands have become very close neighbours. DTT channel 60 is now separated by only 1 MHz from the lowest downlink LTE band. The LTE transmissions also overlap the frequencies used by cable networks.
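That 1 MHz figure can be checked from the European 8 MHz UHF raster (channel 21 starts at 470 MHz) and the LTE 800 downlink (3GPP Band 20), whose lower edge sits at 791 MHz. The helper below is an illustrative sketch of that arithmetic, not part of the DVB document:

```python
def uhf_channel_edges(n):
    """Edges (MHz) of European UHF DTT channel n on the 8 MHz raster.

    Channel 21 occupies 470-478 MHz; each subsequent channel sits 8 MHz higher.
    """
    lower = 470 + 8 * (n - 21)
    return lower, lower + 8

LTE800_DOWNLINK_START = 791  # MHz, lower edge of the 3GPP Band 20 downlink

low, high = uhf_channel_edges(60)     # channel 60: 782-790 MHz
guard = LTE800_DOWNLINK_START - high  # only 1 MHz of separation
```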

This white paper examines some aspects of interference from LTE to television receivers and proposes various filter masks which might be used as a guide by filter manufacturers to create external filters that could be fitted to mitigate several interference issues affecting fixed DTT reception. In particular, this white paper covers the broadcast television standards DVB-T and the second-generation DVB-T2, which are widely deployed in Europe and throughout the world.

Dutch Parliament enters the future

When the Dutch Parliament looked to install a new broadcast system in their parliamentary buildings in The Hague, there was no opportunity to improve an existing infrastructure – there simply wasn’t any. They had been using a third party for simple audio and video recording of parliamentary meetings and wanted to bring meeting room video and audio access, control and monitoring in house. The aim was to create a broadcast infrastructure providing completely autonomous monitoring and management that could be easily altered and updated as requirements evolved.

The desire to open up parliamentary meetings to the public was the catalyst for the new system, helping to bring transparency to parliamentary activities whenever possible. For closed internal meetings, they required a system that would allow secure remote participation in meetings for those with authorisation.

The evolving user and emerging landscape

Television is changing. New commercial and consumer technologies are changing the way television is distributed and consumed. For many years ‘television’ and ‘broadcast’ were synonymous.

TV can now be delivered and consumed in a variety of ways; as a live linear schedule over the air or over IP, as recorded content on a personal video recorder (PVR), or as an on-demand programme.

It can be delivered as part of an all-encompassing TV proposition, with linear, PVR and on-demand integrated by a single provider, or it can be accessed using web technologies from new providers who can cherry-pick movies and TV programmes.

TV had previously broken out of the living room to colonise every room of the house. Now, as consumers increasingly buy new screen-based mobile devices, television has travelled out of home and has successfully made the transition from wired to wireless.

It is not just full programming that is part of the revolution; viewers increasingly expect access to short-form video, and other complementary web and social media content around a programme brand.

What is connected broadcasting?

Any cursory scan of the trade press will show that we are in a period of persistent technological innovation within the TV industry. At Arqiva, we’re focussed on this, and as we deliver to more devices and platforms we are innovating to stay ahead in the evolution of TV and video distribution. As TV converges with the web, new services and device innovations appear to launch weekly. Rapid take-up of this new technology is steadily changing the nature of broadcast TV – both its delivery and its consumption.

Global Supply Chains

Globalization has benefitted many organizations via the creation of new markets; it also has presented serious new challenges for supply chain executives who often struggle to achieve desired customer service, quality, cash, cost, responsiveness, and innovation standards.

This white paper captures best practices for supply chain leaders seeking to design and manage their global supply chains.

The digital divide

The Digital Divide is a real and increasing challenge across many parts of the world.

Indeed even within developed nations, access mechanisms and speeds vary significantly creating “haves” and “have-nots”. This is hugely amplified in developing nations.

Access to the power of the internet – the “Internet of things”, as it is becoming in emerging markets – is not distributed evenly. This disparity is a problem because vast swathes of populations are being left behind, whether in terms of being able to access important information or commercial or entertainment services in an inclusive way.


How to choose the right display with LED: Get The Facts

With such a huge number of LED manufacturers and no industry uniformity when it comes to specifications and production standards, it is difficult to know what factors are best to judge a display on.

SiliconCore has produced this guide to answer commonly asked questions covering topics such as pixel pitch and how it relates to resolution, what defines image quality and what causes a display to emit heat. It is intended to help users establish a framework so they can compare different brands against solution requirements.


The death of analogue and the rise of audio networking

RH Consulting was commissioned by Audinate to conduct a fully impartial appraisal of the audio networking market. The resulting white paper explores developments in audio networking technology, business models and the rate of market adoption.

Accelerating the next wave of revenue growth, service excellence, and business efficiency

Becoming a digital business is imperative for all organizations that wish to deliver the next wave of revenue growth, service excellence, and business efficiency. Today’s enterprises need to connect “experiences” to outcomes, encompassing the entire customer engagement lifecycle. Line-of-business (LOB) and IT leaders have come to agree on the key business priorities: to grow revenue, acquire and retain customers, and improve customer satisfaction—all while reducing costs and minimizing risk. It’s widely understood that the digital experience (DX) has become the cornerstone of all brand experiences. Organizations must get this right; the stakes for getting it wrong are extremely high.

This white paper explains the value of this approach both for business leaders and IT professionals. As you read further, consider the following questions:

  • What is your digital business strategy and how will it drive the next wave of business growth?
  • How are you delivering multichannel customer experiences? Are they seamless, consistent, and secure?
  • How are you delivering targeted content, offers, and experiences? Are they relevant? Are they automated?
  • How easy is it for customers, partners, suppliers, and employees to exchange information and interact with your brand?
  • How responsive are you in getting new products and services delivered through your channels of engagement?

The benefits of off-the-shelf hardware and virtualization for OTT video delivery

Over the last few years, the television world has gone through a radical transformation. Gone are the days when consumers would leave work, come home, and sit down to enjoy a full night of entertainment on their living room TVs. Today’s consumers are now watching a mixture of live, VOD, catch-up TV, and other advanced services, such as OTT, on an ever-increasing number of devices, including smartphones and tablets.

As consumer demand for video content anytime, anywhere, on any device continues to grow at an increased pace, pay-TV operators are seeking content delivery network (CDN) solutions that are more efficient and flexible to speed up the time to market for new services, and decrease capital and operating expenses.

This white paper examines the benefits of a software-based approach to content delivery allowing the use of off-the-shelf hardware. In addition, the paper looks at network functions virtualization (NFV) and software-defined networks (SDN), which are key trends for OTT video delivery.

Road map to web accessibility in higher education 2015

Web accessibility is one of the most critical issues facing higher education. Although new web technologies and online media have been a boon for distance and online teaching, students and staff with disabilities have become increasingly disadvantaged. The access gap is exacerbated by the skyrocketing growth of the disabled population due to medical and technological advancements.

While the need for equal access in education is at an all-time high, there are no easy solutions and questions abound. How can universities align departments to make accessibility a priority? Where should the budget come from?

What is the best approach for allocating resources and responsibilities?

This white paper delves into these questions and provides guidance for making online university content accessible to as many stakeholders as possible. Through in-depth research and advice from university administrators, accessibility coordinators, faculty, and disabled students, 3Play Media has compiled the best practices for creating an accessible web infrastructure.

Technology Trends to Watch in 2015

Ideas are the juice that powers our economy, with innovation happening fast on multiple technology fronts. Rapid developments are in play in areas as diverse as 3D printing, Ultra HD, sensors, health care, automotive electronics, agriculture, transportation, biotech and genetic mapping.

This white paper identifies the top technology trends to watch in 2015. To find out what they are, download the white paper.

Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2014–2019

The Cisco Visual Networking Index (VNI) Global Mobile Data Traffic Forecast Update is part of the comprehensive Cisco VNI Forecast, an ongoing initiative to track and forecast the impact of visual networking applications on global networks.

This white paper presents some of Cisco’s major global mobile data traffic projections and growth trends. It also looks back at the mobile network in 2014 and how it fared during the year.

Putting the Everywhere in TV Everywhere

The typical approach to TV Everywhere is for the operator to provide an app that runs on, say, a tablet which then enables the viewer to watch already-subscribed content wherever that tablet can get network access.

This white paper examines the hardware and software characteristics of HDMI sticks. It also notes that HDMI sticks require a few extra accouterments not found with USB drives, and surveys the capabilities such sticks offer.

To read more on HDMI sticks, download the white paper.

How and Why Video and Audio Files Go Wrong

Ever Wonder How and Why Video and Audio Files go Wrong?

A new white paper from Vidcheck, the leader in software-based applications to automatically check and correct video and audio files, explores why media file errors occur and how broadcasters and post production teams can simplify the QC process.

The challenge is that digital displays and the many different file formats available today use different color spaces. The allowable levels of video luma and chroma in broadcast have to be carefully controlled so that, when converted and displayed on the screen, they reproduce the original picture and colors. Audio loudness is one of the most common causes of complaints to broadcasters; so much so that it is increasingly subject to legislation in many territories, including North America, the UK, the rest of Europe and Australia. Audio files can be affected by ambient sound levels, multi-channel loudness weighting variances and other factors.
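As a rough illustration of the level check described above (a generic sketch, not Vidcheck’s implementation), 8-bit broadcast video conventionally keeps luma between 16 and 235 and chroma between 16 and 240; an automated QC pass can simply count samples that fall outside those ranges:

```python
import numpy as np

# Broadcast-legal ranges for 8-bit "video level" signals (BT.601/BT.709 convention):
# luma (Y) must sit in [16, 235], chroma (Cb/Cr) in [16, 240].
LUMA_MIN, LUMA_MAX = 16, 235
CHROMA_MIN, CHROMA_MAX = 16, 240

def check_levels(y_plane, cb_plane, cr_plane):
    """Return the fraction of out-of-range samples per plane.
    A QC tool would flag (or correct) any non-zero result."""
    def out_of_range(plane, lo, hi):
        plane = np.asarray(plane)
        return float(np.mean((plane < lo) | (plane > hi)))
    return {
        "luma": out_of_range(y_plane, LUMA_MIN, LUMA_MAX),
        "cb": out_of_range(cb_plane, CHROMA_MIN, CHROMA_MAX),
        "cr": out_of_range(cr_plane, CHROMA_MIN, CHROMA_MAX),
    }
```

A real checker would work frame by frame on decoded planes; this only shows the core range test.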

Previously, every new piece of audio or video content had to be manually checked before transmission.  With Vidcheck, this testing can now be automated, dramatically reducing the level of effort required for quality control, and significantly reducing the cost to identify and rectify any problems.

Download the whitepaper today to see how your organization can simplify post-production and broadcast workflows.

To schedule a demonstration of the Vidcheck Windows-based solution, call 1-800-493-1552 or email

Vidcheck U.S. Reseller Partners:

Edge Solutions


The use of DVB-S2X for DTH applications, DSNG & professional services, broadband interactive services and VL-SNR

In October 2012, the DVB Commercial Module (CM) requested improvements to the DVB-S2 system to enhance performance in its core markets (Direct to Home, contribution, VSAT and DSNG) and to extend the range of applications covered by the standard to emerging markets such as mobile (air, sea and rail) as well as professional applications. To allow a rapid market launch, it proposed that the extensions be an evolution of the DVB-S2 standard rather than a fundamental change to the architecture.

This white paper describes the advantages of the DVB-S2 extensions and aims to provide guidance to broadcasters and operators considering the adoption of this system.

Forscene gives Pegasus Works a real-time, collaborative edit-and-review workflow

Pegasus Works is a corporate communications and live events company based in Twickenham, UK. The company specialises in designing, planning, producing, and managing customized live and hybrid (live/digital) events for 50 to 5,000 people across the spectrum of internal communications and marketing.  The company’s success depends on being able to deliver events that dazzle its clients, and do it quickly, efficiently, securely, and within tight budgets. For that, it relies on leading-edge technology and a wealth of expertise to increase efficiency, optimize its staff and equipment, and improve review and approval cycles.

This white paper identifies the challenge Pegasus Works faced and shows how, by putting the right parameters in place, the company was able to overcome it.

An Integrated Approach to TV and VOD Recommendations

A key part of content discovery, video recommendations used to be simple. Indeed, TV purists will correctly tell you that broadcast channels were the first “recommendation engines”.

A broadcast channel is based on an editorial choice of programmes, delivered in a sequence which takes into account time of day and basic family viewing rhythms, and clustered around a brand with a meaningful theme. With search too difficult to deliver on early set-top boxes, the way that TV viewers have found and made choices about content has always been a combination of browsing via simple channel hopping or scanning an EPG.

As TV systems have become larger and more complex, the need to help viewers wade through the confusion has grown. TV systems, which can now include thousands of VOD assets and hundreds of recorded shows on top of hundreds of broadcast channels, have to work hard to help the consumer make choices as viewers drown in choice.

This white paper examines the different approaches to recommendation systems for TV and VOD.
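To make one family of approaches concrete, here is a minimal sketch (hypothetical data, not any vendor’s system) of item-item collaborative filtering, which recommends titles similar to those a viewer has already watched:

```python
import numpy as np

def item_similarity(ratings):
    """Cosine similarity between the item columns of a user x item ratings matrix."""
    norms = np.linalg.norm(ratings, axis=0)
    norms[norms == 0] = 1.0              # avoid divide-by-zero for unrated items
    unit = ratings / norms
    return unit.T @ unit

def recommend(ratings, user, k=2):
    """Score unseen items by similarity to items the user has already rated."""
    sim = item_similarity(ratings)
    scores = sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf  # don't re-recommend watched titles
    return np.argsort(scores)[::-1][:k]
```

Real TV/VOD recommenders blend many more signals (metadata, time of day, household profiles); this shows only the collaborative core.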

Hybrid Routing and Advanced Hybrid Processing Considerations in Real World Applications

Routing, in a broadcast environment, has always involved the routing of video and its associated audio. Some years ago, this meant almost exclusively SD, with an associated discrete stereo audio signal. Routing was relatively simple – a separate router for each signal type, two levels to control, and on occasion a need to synchronize some signals.

It’s clear that embedding audio tracks into the video signal simplifies the transport of the associated signals, but for production applications the two signal types still need to be separated.

This white paper discusses the requirements of an advanced hybrid routing system. It also discusses signals versus formats in relation to video, including coax versus fiber transport.


Detection, Measurement, and Alarming on Lip Sync Errors in Distribution of Television Signals

One of the most prevalent issues in television distribution networks today is the loss of synchronization between the video and audio portions of a television signal. Commonly referred to as “lip sync” error, incorrect audio/video synchronization is today one of the most pervasive impairments to the quality of television broadcast signals.

In this white paper, Miranda Technologies explores the mechanisms and benefits of its newly developed solutions for detecting, accurately measuring and alarming on lip sync errors in HD and SD broadcast signals.
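The paper does not disclose Miranda’s method, but the general principle of measuring an audio/video offset can be sketched as cross-correlating a per-frame video activity signal against the audio envelope; the lag of the correlation peak estimates the error (a synthetic illustration, not the product’s algorithm):

```python
import numpy as np

def estimate_av_offset(video_activity, audio_envelope, fps=25):
    """Estimate audio/video offset by cross-correlating a per-frame video
    activity signal with a per-frame audio envelope.
    Returns (offset in frames, offset in milliseconds);
    a positive result means audio lags video."""
    v = video_activity - np.mean(video_activity)
    a = audio_envelope - np.mean(audio_envelope)
    corr = np.correlate(a, v, mode="full")
    lag = np.argmax(corr) - (len(v) - 1)
    return lag, lag * 1000.0 / fps

# Synthetic demo: audio delayed by 3 frames relative to video
rng = np.random.default_rng(0)
video = rng.random(200)
audio = np.roll(video, 3)            # same events, arriving 3 frames late
frames, ms = estimate_av_offset(video, audio)
```

In practice the hard part is deriving comparable activity signals from real video and audio; the correlation step itself is this simple.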

How advances in visual display technology are benefiting a wide range of industries

The global market for digital signage ― which includes the displays, media players, software and installation/maintenance costs ― is exploding.

The popularity of digital signage is growing in tandem with technology improvements, which are resulting in displays that are more functional and flexible and provide crisper images. And these improvements are coming as the cost of the devices drops.

This white paper looks at the advances in digital signage, as well as innovative ways in which different industries are deploying it to stand out from competitors, drive sales and satisfy customers.


High-Speed Bridge to Cloud Storage

The heart of the internet is a pulsing movement of data circulating among billions of devices worldwide — between computer systems, people, and into and out of cloud infrastructure including cloud storage. Much of that data has a rather boring and neglected life, created only to be forgotten moments later.

According to IDC’s recent report on the ‘digital universe’, by 2020, there will be as many digital bits in existence on the internet and in storage devices as there are known stars in the universe. And, as we move forward into the future, we will need more storage for the valuable bits we want to keep, as well as much improved digital mechanisms to move them around.

HEVC & Broadcast Content

High Efficiency Video Coding (HEVC) is a new video compression format that effectively doubles the data compression rate compared to H.264/MPEG-4 AVC at the same level of video quality.

HEVC has been found to be a very efficient codec for all content types, including interlaced content; if you think otherwise, this white paper will help you understand where the misconception that HEVC is only for progressive content originated. Today, ATEME products offer HEVC with interlaced coding performance, and the company is poised to meet the ever-increasing demand as the codec is adopted along the entire content delivery chain.