April 23, 2014, 7:30 p.m.
Sam Rosen, Practice Director
The U.S. Federal Communications Commission (FCC) is mandated to manage radio spectrum and is charged with protecting the public interest in spectrum allocation and use (including deciding where to allocate spectrum, managing interference, etc.). This mandate is generally interpreted narrowly in terms of spectrum allocation policy, and the FCC has been reluctant to engage in additional rule-making about content availability, notably leaving retransmission consent to business relationships between content owners and distributors. Further, the FCC has relatively narrow jurisdiction over the Internet (generally extending to net neutrality); in fact, one could argue the Internet is underregulated and that regulation is fragmented between the Federal Trade Commission (FTC) and the FCC.
Landmark retransmission case law is framed in terms of fair use (notably, the Sony Betamax case) as well as rulings on private performances (notably, the Cablevision case on remote-storage DVRs). Aereo, a company which offers free-to-air TV and DVR service to multiscreen devices, is attempting to leverage those rulings to build an over-the-top, free-to-air distribution network offered as a subscription service. The legal cases appear to be going well outside the Ninth Circuit (nine western states). Distributors, especially satellite distributors, which have at times used antennas on their set-top boxes to work around satellite spectrum issues, are taking note and weighing the cost/benefit of content licensing from the major networks. Some smaller telcos are looking at offering services built around FTA channels alongside smaller subscription offerings.

How Long Is the Free-to-air Rights Window?
In Europe, the larger role of public service broadcasters has meant that content is becoming available via catch-up services for 8 or 14 days. In the United States, only Hulu (to a PC) has given consumers comparable access, with free-to-air DVRs suffering from a lack of customer interest, challenges around access to metadata, and so on. The value of advertising, including in important demographics such as the growing Hispanic market, drives free-to-air platforms, while content owners look for revenue models including subscription/retransmission fees for multiscreen and advanced services (DVR, free VOD, catch-up, multiscreen) in addition to advertising. However, as advertisers sign on to multiscreen and Nielsen C8 (8-day ad credit) relationships, broadcasters and affiliates may think more along European lines of content availability. Leveraging Hulu as an online (PC) extension of free-to-air viewing, perhaps even allowing some content onto multiple screens without a Hulu Plus subscription, could be considered.

But Consumer-friendly Broadcast Decisions Will Backfire
The FCC and courts, however, are in a tight spot. They could attempt to be the voice of the consumer (the public interest) by ensuring that the rights attached to content broadcast over free-to-air spectrum (a valuable public asset) are meaningful to consumers. Specifically, consumers are looking for the right to watch content at a reasonable time of their choosing, rather than only at the appointed broadcast time, and on the array of devices that are rapidly replacing TVs. However, increasing access to this content through regulatory means would drive valuable content away from free-to-air platforms. The current regulatory direction, pushing responsibility to private business relationships, is likely to be maintained. One example: the FCC is said to be considering ending sports blackout rules (which likely stepped over the public/private responsibility line in any case).
I recently corresponded with my Senator, John McCain, about his proposed “Television Consumer Freedom Act” legislation. One section of this bill "responds to statements by broadcast executives that they may ‘downgrade’ the content on their over-the-air signals, or pull them altogether, so that the programming received by multichannel video programming distributors (MVPDs) customers is preferable to that available over-the-air." Senator McCain states, "A broadcaster will lose its spectrum allocation, and that spectrum will be auctioned by the FCC, if the broadcaster does not provide the same content over the air as it provides through MVPDs." This would simply clarify the rule that a specific broadcast channel must provide similar content across its free-to-air and MVPD-distributed versions. The major networks have both free-to-air and cable-only channels on which to distribute content, so the bill would not fundamentally alter the conclusions above.
The chain of consequences from any regulatory or court decision extending consumer rights around content would, unfortunately, likely backfire: it would decrease the value of content available for free-to-air programming and reduce the viability of local programming.
Meanwhile, distributors who seek to build offerings on free-to-air content (possibly including in-home or network DVR) may be able to address the value end of the market, but would lack the rights that enable cohesive and responsive experiences, including free VOD, multiscreen, and catch-up.
April 23, 2014, 5:17 a.m.
John Devlin, Practice Director
News broke last week that security researchers from SR Labs had been able to hack the fingerprint sensor on Samsung's Galaxy S5, allowing them to conduct PayPal transactions from the device. Whilst this may be alarming to some, it should not be seen as a major security flaw, for several reasons.
Firstly, as with any biometric solution, fingerprint sensors are implemented to increase security, and very few are completely foolproof, particularly in a standalone implementation. The Galaxy S5's implementation still increases security, and whilst a work-around has been found, not many people have the know-how and capability to use a camera-phone image of a latent print to create a wood-glue mould that the scanner will read. This immediately reduces the level of risk by narrowing the field of potential hackers, especially when you consider that the hack had to be conducted under laboratory conditions. And, as with any system, if someone really wants in then there is little you can do to stop them - you simply make it as difficult as possible.
Secondly, there are still measures to shut down and lock the device or certain functionality. This can be done via the PayPal app or through the mobile connection and service provider. Both of these limit the timeframe within which a device can be exploited.
Thirdly, the level of information accessible is limited. Furthermore, with secure elements and trusted execution environments featuring in more devices, the fingerprint sensor is simply the first layer of hardware-based security in mobile devices. Software and back-end analytics can further aid this by detecting and acting upon any abnormal or suspicious usage.
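To illustrate what that back-end analytics layer can look like in its simplest form, here is a minimal sketch that flags a payment deviating sharply from a user's established spending pattern. The z-score test and threshold are my own illustrative assumptions, not how PayPal or Samsung actually implement fraud detection:

```python
import statistics

def is_suspicious(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    user's established spending pattern (simple z-score test)."""
    if len(history) < 5:
        return False  # too little data to judge
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

# A user whose typical purchases are a few dollars...
history = [4.50, 6.20, 5.00, 5.75, 4.95, 6.10]

print(is_suspicious(history, 5.50))    # False: in line with past usage
print(is_suspicious(history, 950.0))   # True: far outside the pattern
```

In practice a provider would combine many such signals (location, device state, merchant type), but the principle is the same: a stolen fingerprint alone does not defeat the layers behind it.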
This isn't the first such case; Apple's iPhone 5S was subject to a similar hack after its release last year. When both devices were announced, my own view was that this was an interesting development, but not primarily one relating to security. Fingerprint sensors do add security, but the bigger benefit is that they enable a more convenient and quicker way for users to access applications and authenticate themselves to authorise transactions. Samsung took this further and worked with PayPal to incorporate the capability into its app. It may not solve all problems, but for mass-market consumer applications it appeals to end-users and, when all is said and done, it still increases security whilst making the service better to use. And we can't ask for much more than that.
March 26, 2014, 10:53 a.m.
Aapo Markkanen, Principal Analyst
Facebook’s acquisition of Oculus VR is big news on many fronts. My fellow analysts are deconstructing it as we speak, whereas in this post I want to share a few words on what the purchase tells us about the future of innovation. Its symbolic significance is quite striking, and I’m not only talking about how it brings two generations of innovators from different decades together under one roof in a new decade. For someone who experienced entertainment being reshaped by 1990s PC games and social life being redefined by 2000s social networks, the fact that John Carmack and Mark Zuckerberg will now be working together is significant on its own merits. But it’s not as significant as that other thing I’m referring to.
That other thing is crowdfunding, and I can’t stress enough how big a deal it is in this context. Oculus is the first billion-dollar company to take off from a crowdfunding platform (Kickstarter), but you can be dead sure that it won’t be the last. The way crowdfunding is democratizing access to finance will send some very fundamental shockwaves across the entire technology market, and that is particularly the case with anything that relies on hardware. Oculus Rift is a perfect example: its bet on virtual reality is so experimental that it would most probably have had a hard time getting enough seed funding from VCs to turn its concept into a product, at least on favourable terms. The VC money kicked in once the developer version of Rift was available, but its first crucial productization steps were taken thanks to the Kickstarter backers. It’s easier to impress with a proof of concept than a mere concept.
The thing is that in hardware the leap of faith from concept to product tends to be much more dramatic than in software. In today’s platform economy a start-up can largely create software out of toil, tears and sweat (let’s not include blood in there), and thus have something reasonably viable and impressive ready when it goes knocking on VCs’ doors. That is not the case with hardware, which requires purchasing physical goods before anything concrete can happen. If the product is “connected”, “smart”, etc., then those goods can be prohibitively expensive for people who operate without external funding. That’s where a crowdfunding campaign can really make a difference. Importantly, besides providing access to funding it can also be an invaluable market research exercise, as the start-up will get some sense of how many prospective customers might actually be willing to put their money where their mouth is. It’s all about derisking the productization process.
All this will also make crowdfunding a major enabler for IoT, which involves a plethora of invented and reinvented physical products. I’m currently conducting a study about the IoT developer landscape, and that enabling effect is indeed one of my main premises. In addition to Kickstarter and Indiegogo, the two leading crowdfunding platforms, makers and IoT start-ups can nowadays also turn to alternatives that specialize in hardware products, such as Crowdrooster and Dragon Innovation. These players curate their campaigns quite closely and, besides the funding platform, provide incubation support (e.g. workspace, marketing, design) to the start-ups they take on.
What this means for the big picture is that hardware is about to get a shot of innovation in the arm. More and more of the new game-changing products will in future come from younger, scrappier, and more experimental companies, instead of the large corporations that have traditionally been the only ones able to stomach the productization risk. It’s a fascinating change, really, with many implications for the market.
March 25, 2014, 1:56 p.m.
Jason McNicol, Senior Analyst
The enterprise mobility market and IoT could see a very fruitful collaboration in the years to come. Enterprise mobility has been popularized through key terms such as the ‘consumerization of IT’, BYOD, and MDM, leading to BYO‘X’ and M‘X’M variations. Workspace solutions (i.e., containers, app wrappers, virtualized containers) are the current buzzwords, but it still feels like enterprise mobility has only begun to scratch the surface of its true capabilities and efficiencies. IoT, the Internet of Things, is at a similar point in its progression as it seeks to build on its CES 2014 and MWC 2014 exposure, where new technologies for connecting this new technological world were on display.
Enterprise mobility and IoT have caught my eye because of the possibilities that marrying the two brings. Enterprise mobility has focused on securing content and providing access to that secure content via mobile devices. IoT looks to build upon existing capabilities as communication between connected devices leads to big data analytics and real-world interaction. The trick becomes marrying these two components to facilitate the new workflows that employees are creating as they mobilize.
Employees creating new workflows with real-time information from IoT could unlock a wealth of efficiencies and new capabilities, but this still needs security. In marrying enterprise mobility with IoT, mobile security becomes a critical piece of future success. These new workflows will prove beneficial only if enterprises safeguard against the new security threats that emerge with IoT. That is why it is best to consider security enablement now and integrate it into future solutions to permit continued development; having to retrofit security later will only slow enterprise mobility's progression.
March 18, 2014, 2:18 p.m.
Aapo Markkanen, Principal Analyst
Deciphering my notes from this year’s Mobile World Congress, I have lately become increasingly keen on lensless cameras. My early sightings of the concept have come mostly from Alcatel-Lucent’s Bell Labs, which publicized some fruits of its research on the topic around mid-2013. At MWC, it was one of the key technologies for Rambus, which in general tends to be one of the show’s best sources of insight on the more forward-looking, pre-emergent tech areas – such as, for example, IoT security, which my colleagues at our Cybersecurity practice have explored recently.
The point about lensless cameras – or lensless image sensing – is that images aren’t captured directly, but computed afterwards. In Rambus’s case, a critical link is a diffraction grating implemented as a spiral-shaped optical pattern. This pattern gives the captured light the structure needed to process it into an individual image. The main advantages of the lensless approach are the extremely miniature form factors it allows and a potentially dramatic decrease in costs. Lenses take up a relatively large amount of space and are famously expensive to manufacture, so shifting the work to computation could shake up the market on both fronts.
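The computational step can be thought of as a linear inverse problem: the grating deterministically mixes scene light onto the sensor, and software inverts that mixing to recover an image. A toy NumPy sketch of the idea, where a fixed random matrix stands in for the optical mixing and a regularized least-squares solve stands in for the reconstruction (both are illustrative assumptions on my part, not Rambus's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                      # pixels in a one-dimensional toy "scene"
scene = np.zeros(n)
scene[20:28] = 1.0          # a bright object against a dark background

# The grating scrambles scene light onto the sensor in a known,
# deterministic way; a fixed random matrix stands in for that optics.
A = rng.normal(size=(n, n))
reading = A @ scene         # raw sensor output: looks nothing like an image

# Reconstruction: since the mixing A is known, a (Tikhonov-regularized)
# least-squares solve recovers the scene from the scrambled reading.
lam = 1e-6
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ reading)

print(np.allclose(x_hat, scene, atol=1e-3))   # True: scene recovered
```

The real system works on 2-D light fields and far more sophisticated reconstruction, but the trade it makes is the same: optics get simpler and cheaper, computation does the rest.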
Something to bear in mind is that lensless sensing isn’t really meant to replace lenses for actual photography, as it seems unlikely that entirely computed images will ever match lensed ones in terms of resolution. The concept’s real potential, rather, is in detection – of physical changes and motion. And that’s where it also touches strongly on the IoT. Tiny, cheaply produced cameras could allow countless types of physical objects to “see” their dynamic environments, as well as to notify and act on changes in them. Security and monitoring are the most obvious use cases that come to mind, but there are many others, ranging from gesture recognition in wearable UIs to contextual awareness in connected vehicles.
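Detection on such low-resolution computed frames can be correspondingly lightweight; simple frame differencing against a background estimate is often enough to notice that "something changed". A minimal sketch, with made-up 4x4 frames standing in for lensless sensor output and thresholds chosen purely for illustration:

```python
import numpy as np

def motion_detected(background, frame, pixel_thresh=0.2, count_thresh=2):
    """Report motion when enough pixels change noticeably
    versus the background estimate."""
    changed = np.abs(frame - background) > pixel_thresh
    return int(changed.sum()) >= count_thresh

background = np.zeros((4, 4))      # static, empty scene
still = background + 0.05          # sensor noise only, below threshold
moving = background.copy()
moving[1:3, 1:3] = 1.0             # an object enters the field of view

print(motion_detected(background, still))    # False
print(motion_detected(background, moving))   # True
```

An object embedding this kind of check needs almost no compute or memory, which is exactly what makes "seeing" viable for countless cheap connected devices.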
Importantly, lenslessly created images also have a far smaller data footprint than traditionally captured ones, which is an advantage both for transferring them wirelessly and for analyzing them further. This aspect ties closely into one major theme that we at ABI Research are paying much attention to in our Internet of Everything research: distributed intelligence in IoT architectures. This research theme concerns where analytics and automated actions will ultimately take place: in the cloud, in endpoint devices, or in the “foggy” edge (routers, switches) between the two. (This is something we have covered, e.g., in an earlier blog as well as in our free MWC whitepaper.) Most likely, it will largely depend on the parameters of each use case (“why, where and when is the data needed”), which in turn will result in rather diverse architectures. Lensless sensing could address pain points in all parts of the IoT continuum, by making both low-power, low-latency endpoint analytics and remote cloud analytics more efficient and advanced.
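One common shape that distributed intelligence takes is filtering at the endpoint and escalating only the interesting events upstream. A hedged sketch of that split, where the threshold, the event format, and the summary fields are all invented for illustration rather than taken from any real architecture:

```python
def edge_filter(readings, threshold=0.5):
    """Endpoint-side analytics: keep only readings worth the wireless
    hop to the cloud, and reduce the rest to a tiny local summary."""
    events = [(i, r) for i, r in enumerate(readings) if r > threshold]
    summary = {"samples": len(readings),
               "mean": sum(readings) / len(readings)}
    return events, summary   # events go upstream; summary stays small

readings = [0.1, 0.2, 0.9, 0.1, 0.7, 0.2]
events, summary = edge_filter(readings)
print(events)               # [(2, 0.9), (4, 0.7)] -- only these cross the network
print(summary["samples"])   # 6
```

The same logic could equally run in a "foggy" router aggregating many endpoints; where it lands depends, as noted, on why, where, and when the data is needed.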