Tuesday, June 30, 2009

Shared Playout Facilities - Time to Reconsider?

Maybe it's not a question - maybe it's a statement.

The model of having a separate, dedicated 'master' control room for every broadcast television station may be becoming impractical, and potentially obsolete. Go to any television market in the U.S. and you'll find the 'factory' portions of the industry replicated: complicated infrastructures dedicated to collecting, arranging and preparing for transmission essentially the same types of content, on a station-by-station, market-by-market basis.

This activity, deeply anchored in similar hardware complements (switchers, routers, graphics, video servers and monitoring equipment), requires a dedicated operations staff that essentially performs the same function at each location, day in and day out. With the majority of these systems considered mission critical, the stations depend upon this operational model for their revenue streams. That dependence in turn forces heavy investment in redundancy and/or disaster recovery - all of which drives the cost of delivering the content upward, from both OpEx and CapEx perspectives.

So why continue this model? It keeps hardware and software vendors in business, for one - and that's a good thing. But the angst of continually staffing, repairing, upgrading or improving the systems to 'keep up with the Joneses' is a never-ending challenge, one that is counterbalanced by the pressure to reduce costs, improve services and simplify operations.

If one took off the blinders and dropped the guard just a little, one would see that the assets and competitive advantages of broadcaster #1 vs. #2, etc., could still be protected regardless of where or how the factory side of the program assembly and delivery business is run. Not unlike the re-emergence of hub-casting, central-casting or centralized command-and-control, putting the play-to-air mechanisms together in a shared location with a common set of tools (fully redundant and protected) is not beyond the technical capabilities of any set of hardware or software available today.

Broadcasters have located transmission equipment (transmitters, antennas, towers and support infrastructures) in a common setting for decades. Why? The cost of building separate towers and buildings (not to mention the impact on the environment) makes no economic sense. So what is the difference between the equipment that delivers the signal and the equipment that assembles the signal's content?

If a group of stations in a market shared a common facility with common components such as video and network routers, video and graphics servers, redundant critical components, and the like, the cost to each individual station of operating and maintaining that equipment (including upgrades and performance improvements) would go to zero. That's a number bean-counters understand! The risks become no greater than those at the TV transmitter site. In fact, the operational risks actually diminish if the centralized playout center is properly built, staffed and operated.

So why not share the services and reduce the headaches?

The answers are SIMPLE!

First, one must remove the fear of losing a station's local identity - but isn't that a myth? I'd bet 99% of the viewing public has no idea how or where the content is aggregated. They might think they know from where the signal is transmitted (but only if they're in that diminishing percentage of over-the-air, or OTA, viewers). The station's identity is established by the programs and/or on-air talent that those viewers see and become loyal to. Certainly that has no relation to assembling bit streams for television transmission purposes.

Second, the station must accept that it will not and does not lose its presence. If they do news, they still do news - from the same location they've done it from all along. If they do commercial production, they can still do commercial production. If they have a web site, it stays the same. In other words, nothing has to change except where the master control staff works and where the bit buckets reside. They can maintain the storefront of the local station, but relocate a portion of the services to another place. Not unlike adding a news bureau.

Third, drop the fear that another station will learn what the other is doing (aka 'security'). Oh come on now ... "learn what the other channel is doing?" Save that one for the news department! It's ridiculous in today's digitally connected world to think you can't isolate data, signals and even operators from one another - and keep the integrity of the product sound and valued. Do you think any user cares (or knows) that their Internet traffic passes through the NOC of nearly every network service provider every day? Protecting the operational workstations for a common set of broadcasters is as simple as setting up a firewall, and fabricating an environment that reliably segregates one station's pristine content and functions from another's is not rocket science. And if one wants the added protection of completely isolated systems, one simply makes that request and it goes into the cost model for the services.
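As a minimal sketch of the deny-by-default segregation being described - with invented subnet addresses, purely for illustration and not a prescription for any real facility (a real plant would enforce this in VLANs and firewall hardware, not application code) - the policy amounts to nothing more than this:

    # Hypothetical sketch: each station's workstations may reach the shared
    # playout core; everything else, including station-to-station traffic,
    # is denied by default. All addresses below are invented examples.
    from ipaddress import ip_address, ip_network

    SHARED_CORE = ip_network("10.10.0.0/24")   # common playout servers (assumed)
    STATION_A   = ip_network("10.10.1.0/24")   # station A workstations (assumed)
    STATION_B   = ip_network("10.10.2.0/24")   # station B workstations (assumed)

    def allowed(src: str, dst: str) -> bool:
        """Permit station-to-core traffic only; deny everything else."""
        s, d = ip_address(src), ip_address(dst)
        for station in (STATION_A, STATION_B):
            if s in station and d in SHARED_CORE:
                return True
        return False

    assert allowed("10.10.1.5", "10.10.0.20")       # A -> shared core: OK
    assert not allowed("10.10.1.5", "10.10.2.9")    # A -> B: blocked

A rule set this small is exactly why the 'security' objection rings hollow: there is no path between stations unless someone deliberately creates one.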

And what does it take to get this ball rolling? It takes guts, and broadcasters in the same market with the sensibility and innovation to break the mold and reduce their costs for good! It also takes a third-party entity to step up to the plate and capitalize the venture in exchange for a commitment from those broadcasters with the stomach to try the endeavor.

Any takers?

Saturday, June 27, 2009

Understanding Issues in Disk Drive Latency

"Seeking Hard Disk Drive Latency" - The latest installment in my column at TV Technology (http://www.tvtechnology.com/article/80556) discussed the issues with disk drive latency, how latency is calculated and briefly highlights the history of the HDD.

by Karl Paulsen, 05.05.2009 -- The year 1956 marked the beginning of an era that would dynamically and dramatically alter the landscape of technology forever. It was in San Jose, Calif., that the first hard disk drive (HDD), the IBM 350, was invented by a group of IBM scientists under the direction of Reynold Johnson. The 350 HDD accompanied the IBM 305 RAMAC (Random Access Method of Accounting and Control) computer and had a total capacity of five million 7-bit characters. A single head assembly with two heads accessed all fifty 24-inch-diameter platters, and the average access time was just under one second.
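For readers who want the arithmetic behind the latency figures the column discusses, here is a back-of-the-envelope sketch. Average rotational latency is half of one full revolution, i.e. 0.5 x (60 / RPM) seconds; the spindle speeds and seek times below are illustrative assumptions, not figures from the article:

    # Back-of-the-envelope HDD access-time estimate.
    # Average rotational latency = half of one full revolution:
    #   0.5 * (60 / rpm) seconds

    def avg_rotational_latency_ms(rpm: int) -> float:
        """Average rotational latency in milliseconds for a given spindle speed."""
        return 0.5 * (60.0 / rpm) * 1000.0

    def avg_access_time_ms(rpm: int, avg_seek_ms: float) -> float:
        """Average access time = average seek + average rotational latency
        (command overhead and data transfer time ignored for simplicity)."""
        return avg_seek_ms + avg_rotational_latency_ms(rpm)

    # Illustrative spindle speed / seek time pairings (assumed, not measured):
    for rpm, seek_ms in [(5400, 12.0), (7200, 9.0), (15000, 4.0)]:
        print(f"{rpm:>6} rpm: rotation {avg_rotational_latency_ms(rpm):5.2f} ms, "
              f"access ~ {avg_access_time_ms(rpm, seek_ms):5.2f} ms")

By the same reckoning, rotation accounted for only a small part of the IBM 350's nearly one-second figure; moving its single head assembly among those fifty platters dominated the access time.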

Be sure to look up other articles in the Infrastructure Section at http://www.tvtechnology.com/ under the features column 'Media Server Technologies'.