
Andy Looks at a Few Winners, Losers and Trends

Category: Andy Marken's blog Published on Sunday, 26 July 2015

By Andy Marken
Marken Communications

EDITOR'S NOTE: As we all know, our man Andy has opinions on almost everything. With that in mind, here are a few thoughts he's shared with us on topics ranging from IoT to Human Resources.


Who will rule IoT? Top 5 tech titans fighting for the IoT crown

These are five of the biggest companies that want to connect the world.

Forecasts range from 20 billion to 50 billion connected devices by 2020. Despite the disparity, the numbers suggest we are on the cusp of a smart revolution.

But who will lead the revolution and become the outright victor of the IoT? CBR runs down the 5 biggest companies vying for the top spot.


1. Intel

Since 1968, the company's chips have powered computers, smartphones and now wearables. The firm is now working on next-generation devices to improve the efficiency and battery life of connected products.

The company has launched the IoT Platform to help developers understand how components of a solution work together and where security and analytics capabilities happen.

Over the last few years, Intel has also developed connected solutions for the smart car, utilities, retail and healthcare industries. In the automotive space, the company has partnered with Hyundai, BMW, Infiniti and Kia to deploy its technology that allows infotainment systems to be more responsive.

The chipmaker also designed an IoT ecosystem to smarten up buildings and ease their management, as well as solutions for Industrial Automation. Collaborating with Wind River and McAfee, Intel has produced solutions to be run on Linux and Windows and help factories improve efficiency with real time analytics.

2. Microsoft

In the wearables space, Microsoft has brought its Band to market. The smart fitness tracker aims to rival Samsung's and Apple's smartwatches. The company has also designed a pair of augmented reality glasses, HoloLens, which recently secured a spot on the International Space Station and will be used by NASA to train astronauts.

The firm believes digitization will transform the way companies operate, from real-time analytics to using robots to improve workplace safety.

In April 2014, the company unveiled the Azure Intelligent Systems Service, designed to connect, manage and capture machine-generated data from sensors and devices. In March this year, Microsoft CEO Satya Nadella announced the Azure IoT Suite.

Looking to the future, Microsoft announced in June a university degree program based on the IoT, which the firm expects 3,000 students to complete by 2025.

3. Cisco

Cisco coined the term "Internet of Everything" (IoE). The company is one of the biggest IoE evangelists of the moment and has aligned its business strategies to meet the demands of a digital, connected world.

Cisco is targeting the data centre space to increase the efficiency and ability to manage the explosion of data fostered by the IoE. The company designed a Unified Computing System (UCS), which reduces provisioning times by 86%.

The tech titan is also investing in the cloud space to host all the data extracted from connected devices and machines.

Looking to the next stage of data management, the company has invested in edge computing, also known as fog computing. Cisco is exploring ways edge computing can be used in real-time smart car communication, among other applications. Edge computing, in this instance, would eliminate the need for data to travel between the data centre and the vehicles.

Phil Smith, Cisco UK&I CEO, is a big IoE advocate and has been tasked with championing the company's ambitions in this space. For him, IoE will deliver the digital transformation UK businesses so desperately need and enable the workforce to deliver new and better services.

4. Google

Until its $3.2 billion acquisition of Nest in February 2014, Google was seen as a slow adopter of IoT technologies, despite being one of the biggest tech companies in the world.

Since then, the company has launched a new IoT body - Thread Group - targeting standards for communication between smart home devices.

In June this year, Google announced a new OS for the IoT, which allows developers and manufacturers to build connected devices. The Brillo OS includes Weave communications protocol, which offers developers a common language for locating devices on a network.

Google is a big player in the wearables space, with a special emphasis on augmented reality. The company developed Google Glass and is now looking to develop smart contact lenses.

Smart cars, including driverless vehicles, are currently being tested. The company has developed driverless cars that include neither pedals nor a steering wheel, and it expects these to become commercially available by 2020.

5. Samsung

Since September 2013, the company has been releasing various smartwatches, and is now working on a round device for the wrist.

In December 2014, the company launched the Samsung Gear VR, a pair of virtual reality glasses. Built in partnership with Oculus, the gadget works with Samsung's Galaxy Note 4 smartphone.

In 2014, it spent $200 million to acquire SmartThings, a company working to build an open platform for smart homes.

In January, the company's CEO BK Yoon announced a $100 million fund to help developers and jump-start an open system for the connected world. In May, it introduced the ARTIK platform to allow faster, simpler development of new enterprise, industrial and consumer applications for the IoT.

Samsung has warned the tech industry that the IoT will not achieve its full potential, and might even fail, unless electronics firms collaborate more.


Apple Inc. Is Unlikely to Ditch Intel for AMD in the MacBook

It just doesn't make sense.

Writing for Seeking Alpha, Mark Hibben suggests that Apple (NASDAQ:AAPL) might enlist the semi-custom design services of struggling chipmaker Advanced Micro Devices (NASDAQ:AMD) to build a custom-tailored processor for Apple's MacBooks. In fact, he pegs those chances as "good."

I, on the other hand, think that this is highly unlikely.

If Apple wants a semi-custom MacBook processor, Intel could do the job, too

One key argument Hibben makes is that Intel's (NASDAQ:INTC) PC processors are designed in a "one-size-fits-all" fashion to serve the requirements of a wide range of PC customers. By commissioning a semi-custom chip from AMD, Hibben argues, Apple could get exactly what it wants and needs without any of the additional "bloat" present to support other PC makers' designs.

The first (but certainly not the biggest) issue here is that Intel, too, offers semi-custom design services for big customers. Intel's semi-custom pitch has been more targeted toward the data center (where major customers are willing to pay to get exactly what they want), but there is no fundamental reason that Intel could not build "semi-custom" processors in the PC segment for the right customer.

There is little need for semi-custom MacBook processors

Although Hibben argues that Apple might be better served with custom-tailored parts for the MacBook, I disagree. Take a look at the following diagram that Hibben pointed to in his article:

Notice that the big green rectangle contains two black rectangles? The longer one is the main processor complex. This contains the CPU cores, graphics/media engine, and the rest of the performance-critical components.

The smaller black rectangle is the PCH, or "platform controller hub." This is an auxiliary input/output chip that, among other things, allows the processor to talk to other components. Hibben argues that Apple "isn't using a lot of that stuff," and for the new thin-and-light MacBook this is true, but all of this functionality is stuffed into a cheap-to-make auxiliary silicon die and can be disabled if need be.

The idea that Apple should ditch Intel and commission a totally new, semi-custom chip from AMD because it doesn't need to use the USB and Serial ATA functionality present in Intel's Platform Controller Hub seems ludicrous.

Would Apple really trust the Mac to AMD?

Another issue here is that AMD's notebook processors haven't been competitive with Intel's for a while, all things considered. It is not a coincidence that AMD has continued to bleed share to Intel in the notebook market for a long time.

In order for AMD to have a chance at winning the MacBook contracts (either via off-the-shelf parts or a semi-custom arrangement, as Hibben suggests), it will need to be able to deliver leadership PC processor products (MacBooks are premium products and Apple is likely to want to use the best chips possible). Given Intel's vastly greater ability to invest in its architectures and designs, as well as its manufacturing leadership, it's hard to see this happening.

It's all very unlikely

I would peg the chances of Apple swapping Intel out for AMD in the MacBook as extremely low. Intel's PC processors are, in my view, the "gold standard" in the industry and Intel's track record suggests that the company knows how to consistently deliver winning PC products. AMD, on the other hand, has a lot to prove and I doubt that Apple wants to be its guinea pig.


Apple sees growing Mac shipments in 2015

Apple has revealed it shipped 4.8 million Mac products in its fiscal third-quarter 2015 (ended June 27, 2015), up 5% sequentially and 9% from the same period a year ago. Sources from Apple's upstream supply chain noted that their orders for Mac products for the second half are stable and the overall volume is expected to surpass that of the first half.

The sources also noted that Apple's share of the PC market is continuously rising as its MacBook Air products have attracted demand from Wintel users, especially after Windows 8's release. This is because Apple has been adjusting Mac pricing to make the products friendlier to consumers.

Currently, Apple's iMac, MacBook Air and MacBook Pro are manufactured by Quanta Computer, while the new 12-inch MacBook is made by Pegatron Technology, the sources added.

Apple has also revealed that iPhone shipments reached 47.53 million units in the fiscal third-quarter 2015, down from 61.17 million units in the previous quarter, but up from 35.20 million units during the same period a year ago. Meanwhile, Apple shipped 10.93 million iPads, down from 12.62 million units a quarter ago and 13.28 million units a year ago.

Samsung Gets Silicon Valley Support in Patent Fight With Apple

Facebook, Google, HP, Dell, eBay and others enter a "friend of the court" brief to support Samsung in its ongoing patent war with Apple.

Samsung has some unlikely allies in its ongoing patent fight with Apple: a coalition of Silicon Valley giants has filed a "friend of the court" brief supporting Samsung's position in the legal battle.

The friend of the court brief, which argues that if Apple wins the remaining legal issues in the case, it "opens the entire industry up to mass patent infringement lawsuits," was filed by companies as diverse as Google, eBay, HP, Dell and Facebook, according to a July 20 story by InsideSources.

The brief asks the U.S. Court of Appeals for the Federal Circuit, which is hearing the case, to review an earlier decision ordering Samsung to turn over profits from a handful of Apple patent infringements, the story reported. The Silicon Valley coalition argues that if the court upholds the previous ruling, it could stifle innovation and limit consumer choice.

"If allowed to stand, that decision will lead to absurd results and have a devastating impact on companies, including [the briefing draftees], who spend billions of dollars annually on research and development for complex technologies and their components," the group wrote in its brief, according to the story.

Patent cases have become out of control, the companies wrote in their brief, because they often hinge on only a few design elements in products that are built from thousands of parts, the story reported. "Under the [court] panel's reasoning, the manufacturer of a smart television containing a component that infringed any single design patent could be required to pay in damages its total profit on the entire television, no matter how insignificant the design of the infringing feature was to the manufacturer’s profit or to consumer demand," the group argued in its brief.
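To make the brief's concern concrete, here is a back-of-the-envelope sketch; all figures are hypothetical and not drawn from the case, but they show how far apart the two damages theories can land:

```python
# Hypothetical illustration of the damages gap the brief describes.
total_profit_per_tv = 120.0        # manufacturer's profit per TV, in dollars (assumed)
units_sold = 1_000_000             # units sold (assumed)
feature_share_of_demand = 0.001    # infringing design drives 0.1% of demand (assumed)

# Under the panel's reasoning: damages equal total profit on the entire device.
total_profit_damages = total_profit_per_tv * units_sold

# Under a proportional approach: damages scale with the feature's contribution.
proportional_damages = total_profit_damages * feature_share_of_demand

print(f"Total-profit damages:  ${total_profit_damages:,.0f}")   # $120,000,000
print(f"Proportional damages:  ${proportional_damages:,.0f}")   # $120,000
```

Even with these made-up numbers, the total-profit rule produces an award a thousand times larger than one tied to the infringing feature's actual role in demand.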

That can also apply to software design, as in the Samsung case with Apple, the brief continued.

The latest Apple Samsung patent fight has been going on for about four years. In its original patent-infringement lawsuit against Samsung, Apple argued that Samsung's smartphones mimicked Apple's design, with a rectangular body, rounded edges and other features. In August 2012, a California jury found that Samsung infringed on Apple's patents in the design of its tablets and smartphones, including the Samsung Galaxy Tab and Galaxy 10.1 tablets and such smartphone models as the Captivate, the Galaxy S line, the Fascinate and the Epic 4G. The jury ordered Samsung to pay Apple more than $1.05 billion in damages, which was later reduced to about $930 million, according to earlier eWEEK reports.

Then in May 2015, an appeals court threw out another $382 million of the award related to aesthetic design elements like rounded corners on smartphones and the shape of apps. That means Samsung is still liable to pay Apple more than $500 million in connection with the original jury verdict, an amount representing the total profit from Samsung's infringing Galaxy products, awarded to Apple to make up for profit lost to sales of those devices, according to the InsideSources report. Samsung then asked the court to review that decision in June.

Apple has argued that the friend of the court brief should be dismissed, the story continued, because Google's Android operating system and applications are included on Samsung devices. "Google has a strong interest in this particular case, is not an impartial 'friend of the court,' and should not be permitted to expand Samsung's word limit under the guise of an amicus brief," Apple argued to the court.

For both Apple and Samsung, the stakes in the ongoing case are huge because of its implications for competition in the multibillion-dollar mobile device market.

Interestingly, Samsung reached a 10-year patent deal with Google in January 2014 to share patent licenses with each other for existing and future innovations, according to an earlier eWEEK report. Google and Samsung have been close partners in combining Google's Android mobile operating system and Samsung's mobile device hardware for several years.


Qualcomm Slices Earnings Forecast Again, Will Cut 15 Percent of Workers

Qualcomm said Wednesday that it has reached an agreement with activist shareholder Jana Partners that will see the company cut $1.4 billion in costs, reshape its board and agree to consider structural changes. The move comes as Qualcomm reported revenue and earnings significantly down from a year ago and said its current quarter earnings will fall below expectations amid weakness in its chip business.

In a presentation posted ahead of its conference call, Qualcomm said that it will lay off around 15 percent of its workforce as part of the cuts. It also plans to cut $300 million in stock-related compensation. (Qualcomm had 31,000 full- and part-time workers as of its last annual report in September.)

As part of the deal with Jana, Qualcomm is adding two outside directors - Palo Alto Networks CEO Mark McLaughlin and former Fox executive Tony Vinciquerra - and will soon add a third new director, while two current board members are retiring and two others plan to step down at the end of their current terms.

On the earnings front, Qualcomm posted adjusted earnings of 99 cents per share on revenue of $5.8 billion - both down significantly from a year ago. Analysts had been expecting earnings of 95 cents per share and revenue of $5.85 billion, according to Yahoo Finance.

However, Qualcomm cut its earnings forecast for the year for the third time this year. Part of that stems from new restructuring charges, which Qualcomm expects to total between $350 million and $450 million, of which $100 million to $200 million is included in its outlook for the current quarter.

In addition to the restructuring costs, Qualcomm said it is cutting its outlook for the chip side of its operations due to lower demand for high-end phones with its chips as well as weak sales in China of some of the phones that do have its processors.

“We are making fundamental changes to position Qualcomm for improved execution, financial and operating performance,” CEO Steve Mollenkopf said in a statement. “We are right-sizing our cost structure and focusing our investments around the highest return opportunities while reaffirming our intent to return significant capital to stockholders and refreshing our Board of Directors.”

The company had said in April that cost cuts could be coming.

“In addition to our ongoing expense management initiatives, we have initiated a comprehensive review of our cost structure to identify opportunities to improve operating margins while at the same time extending our technology and product leadership positions,” Mollenkopf said back in April.

However, Qualcomm has been resisting anything that could lead the company to split its chipmaking and technology licensing businesses.

Qualcomm did say it plans to reduce its technology investments outside of its core chip and licensing business, limiting investment in other areas to a few projects such as small cells, data centers and certain vertical markets in the Internet-of-things arena.

The moves come as Qualcomm is facing stepped-up competition from Chinese chipmakers at the low end of the business as well as a battle with Samsung at the high end of the market. Samsung this year decided to use its homegrown Exynos chip for the Galaxy S6 rather than Qualcomm’s Snapdragon processor, as it had in the past.

Re/code reported in April that Qualcomm is looking to regain that business next year by building its next-generation high-end chip, the Snapdragon 820, in Samsung’s factories.

Qualcomm slashes jobs and costs, says may split itself up

Chipmaker Qualcomm Inc (QCOM.O) said it may break itself up as it delivered its third profit warning this year and announced plans to slash jobs and spending in the face of rising competition.

The company said it would reduce costs by about $1.4 billion, cut about 4,500 full-time staff, or 15 percent of its workforce, and boost capital returns to shareholders.

Qualcomm shares fell 1.8 percent to $63.05 in after-market trading on Wednesday. The stock has lost a fifth of its value in a year.

The move comes after hedge fund Jana Partners called for Qualcomm to spin off its chip business from its highly profitable patent-licensing income, among other changes the activist asked for.

"We decided we were going to take a fresh look at the corporate structure of the company," Qualcomm president Derek Aberle said in an interview, adding that the chipmaker has reviewed its options twice already in the past decade.

"The environment is constantly changing so the analysis done earlier may not be valid anymore, so it's in that context that we're taking a look at it again now," Aberle said.

The company said it expected to complete its strategic review by the end of the year and also agreed to add three new board members in cooperation with the activist.

For the review, Qualcomm is being advised by investment banks Goldman Sachs Group Inc (GS.N) and Evercore Partners Inc (EVR.N), according to people familiar with the matter.

Qualcomm's Aberle added that M&A in the semiconductor industry is at a "heightened level."

Semiconductor dealmaking has reached $79.7 billion so far this year, the highest level since 2000.


Qualcomm makes software and chips used in smartphones, tablets and gaming devices and is known for its Snapdragon processor used in high-end smartphones made by Samsung Electronics Co Ltd (005930.KS), HTC Corp (2498.TW) and ZTE Corp (000063.SZ).

It faces intense competition from Taiwan's MediaTek Inc (2454.TW) and a handful of small Chinese companies that specialize in making chips for low-priced phones.

This year, Samsung said it would use its own processor for the new Galaxy S6 smartphone instead of Snapdragon.

Qualcomm agreed in February to pay a fine of $975 million to the Chinese government's National Development and Reform Commission for anti-competitive practices.

The company cut both its full-year revenue forecast and the outlook for its semiconductor business.

Revenue fell 14.3 percent to $5.83 billion in the third quarter - the first quarterly fall in five years - and missed the average analyst estimate of $5.85 billion, according to Thomson Reuters I/B/E/S.


Staking Your Claim in the Healthcare Gold Rush

Revolutionary changes in the delivery, financing, and management of healthcare present a choice: Do you want to be a gold miner or a bartender?

The U.S. healthcare sector, which represents one-sixth of the nation’s US$17 trillion economy, is experiencing a number of simultaneous upheavals. Indeed, it’s difficult to think of another industry of this size that is facing as much disruption and change in the way its services are delivered and financed, and even in how it is regarded, in such a short time. The causes are a unique combination of technology and innovation, and of regulation and reform.

As it is changing nearly every other business, the revolution in mobile communications and information technology is changing the way healthcare is delivered, consumed, and managed. Healthcare information is quickly transitioning from its traditional repository in static, handwritten charts that reside in a doctor’s office. It is moving to devices that sit in the palm of the hand while reaching back into the cloud, where patients can access and add to their records at any time. Continuous monitoring through wearable technologies and smartphone apps is creating a wholly novel, totally accessible, 24/7 digitized picture of people’s health. The volume of health and medical app downloads is projected to reach 142 million in 2016, according to Juniper Research. And by 2018, IDC Health Insights predicts, 65 percent of consumer transactions involving healthcare will make use of a mobile device.

At the same time that healthcare data is on the move, healthcare’s locus of delivery is shifting out of the doctor’s office and the hospital and into everyday life, through retail clinics, home-based diagnostics, and telemedicine. Thanks to enhanced connectivity, companies’ capacity to touch patients at every moment of the healthcare journey has never been greater, for both established providers and new entrants. When diagnosis and treatment can move to where the consumer lives and works, and patients’ health can be tracked anywhere in real time, it opens a new frontier in managing health.

In a third, related shift, consumers are becoming more influential and empowered. Thanks to several factors - the proliferation of high-deductible plans, the trend of employees paying larger shares of premiums, and the increased number of people purchasing insurance on the federal and state exchanges - consumers are funding a larger portion of their own healthcare. These factors, combined with greater transparency, are pushing healthcare to become more of a consumer good. Historically, patients were caught between providers and payors; providers had incentives to facilitate more care, whereas payors dictated what would be paid for and how much would be paid. Patients had to rely on referrals from primary-care physicians to see specialists. Consumer antagonism reflected these realities: patients had few care alternatives, little to no capacity to shop for value, and continually encountered gatekeeping of services. Today, because the system can gather data in real time and in any setting - and can make it transparent to the patient before anyone else - patients may be able to call the shots as to how (and with whom) they manage their health.

Finally, these changes are all taking place against the backdrop of a profound shift in the way that medical care is financed, paid for, and regulated. The comprehensive reforms of the Affordable Care Act of 2010, the expansion of Medicaid, and the continuing growth in the Medicare-eligible population mean that the federal government is taking on a bigger role as a payor, a setter of rules, and a shaper of markets. That is placing direct pressure on incumbents to change their business models. The emergence of public and private exchanges has led to more direct-to-consumer channels and the goal of creating “customer for life” relationships. Retail alternatives for urgent care have forced traditional providers to improve their customer service and convenience in order to keep patients within their integrated delivery systems. The federal government, flexing its muscle as both payor and regulator, is demanding that reimbursement be determined by quality, outcomes, and evidence-based value - not just the volume of care. And that is driving incumbents from a position of managing utilization to one of managing population health.

This combination of technology, business innovation, and reform has led to immense opportunity amid severe challenges. This competition for resources at a new frontier can be analogized to a modern-day gold rush. New markets are creating new profit pools. To an unprecedented degree, healthcare spending is up for grabs - up to $1.5 trillion in spending and $150 billion in profits, according to our models.


The shifts will also inspire critical thinking about the business of healthcare. Confronted with the changes, incumbents will have to reconsider their competitive positions. And upstarts and those in adjacent industries will be compelled to assess where - and even whether - they can fit in.

Players that thrive in this boomtown will do so by decreasing medical spending in a consumer-oriented manner, and by capitalizing on newly informed consumer choices by improving outcomes. As companies approach these issues, we think it is useful for them to analogize themselves to one of two professions that thrive in actual gold rush environments: gold miners and bartenders.

Gold Miners and Bartenders

In today’s healthcare industry, gold miners and bartenders represent two business models that operate in parallel. They have different time horizons for success.

Gold miners. These vertically integrated players take ownership of healthcare. They profit by mining value out of a resource - for example, by managing the health of a specific population, such as patients with diabetes, heart disease, or cancer. The gold miner strategy is closely aligned with population health management, which draws on a deep understanding of chronic care to promote a 360-degree, long-term management approach. Dealing primarily with people who are sick, these large institutions - insurers, hospitals, and physician groups - profit by improving outcomes and sharing in the savings. After all, their strategy is based on the fact that the top 30 percent of utilizers of medical services account for 75 to 80 percent of medical spending. This medical management model surrounds the patient in a system of ongoing data monitoring, analytics, and outreach by care providers.

In a successful gold miner strategy, care expands into the patient’s world, shifting to timelier, more convenient, and less costly settings. Population health management extends the traditional command-and-control view of clinical decision making. Patients are empowered by the transparency of daily data, but care coordination is still physician-driven.

Gold miners prosper by harnessing technology to develop new processes that let them conduct established business more efficiently and effectively. In healthcare, one of the key imperatives is to bolster the system - and outcomes - by containing medical spending on a given pool of patients. Population health management strategies, which are delivered via primary care and other care-coordination activities, are likely to become more widely adopted. Research already shows that the chronically ill, who often make health decisions on a daily basis, experience improved outcomes when connected to their provider via remote monitoring and other channels of communication. By using such tactics with its diabetic population, Geisinger Health System, a large, integrated provider in rural Pennsylvania, achieved an 18 percent reduction in admission rates, a 31 percent reduction in readmission rates, and a total cumulative savings of 7 percent.
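As a rough illustration of how such rate reductions compound into savings, consider the following back-of-the-envelope sketch; the population size, rates, and cost figures are hypothetical assumptions, not Geisinger's actual data:

```python
# Back-of-the-envelope model: how admission-rate reductions like Geisinger's
# translate into savings. All baseline figures below are assumptions.
patients = 10_000                  # diabetic population size (assumed)
baseline_admission_rate = 0.20     # annual admission rate (assumed)
baseline_readmission_rate = 0.15   # share of admissions readmitted (assumed)
cost_per_admission = 12_000        # dollars per (re)admission (assumed)

def annual_admission_cost(adm_rate, readm_rate):
    admissions = patients * adm_rate
    readmissions = admissions * readm_rate
    return (admissions + readmissions) * cost_per_admission

before = annual_admission_cost(baseline_admission_rate, baseline_readmission_rate)

# Apply the reported reductions: 18% fewer admissions, 31% fewer readmissions.
after = annual_admission_cost(baseline_admission_rate * (1 - 0.18),
                              baseline_readmission_rate * (1 - 0.31))

savings_pct = (before - after) / before * 100
print(f"Admissions-related spending falls by {savings_pct:.1f}%")
```

Note that this figure applies only to admissions-related spending; Geisinger's reported 7 percent is cumulative savings across total medical spending, of which admissions are one component.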

Bartenders. These players take a fundamentally different approach from that of gold miners. Bartenders profit by providing a service, by offering advice and information, and by managing customer experiences and relationships. They serve healthcare consumers by offering customized and convenient options to address routine or everyday needs. Bartenders often have a narrow focus, providing specific services to rapidly growing niches. They prosper by selling goods and services with a margin. Unlike gold miners, whose primary innovation often lies in developing new processes, bartenders innovate with new products, and with marketing and design. Although a bartender company may also serve the chronically ill, its intent is not to preserve the existing patient–doctor relationship but to run parallel to it or to nibble away at it.

Their consumer contact can be both physical and virtual, on a scale from big-box urgent-care clinics to apps that track blood-sugar levels. Like a gold miner, a bartender may use mobile health applications to capture an individual’s vital signs and lifestyle data. But its role is more advisor and service provider than director and manager. Consumers are responsive; they recognize the value of quick answers that provide better awareness of their health status and risks, enabling them to direct their own care and manage healthcare finances, and helping them adhere to treatment plans and goals. Margins shift into either consumer savings or retail revenues.

Bartenders often hail from nontraditional sectors such as retail, software, electronics, and apparel. Examples include the drugstore chain Walgreens, which now operates a network of immediate-care clinics; Theranos, a Silicon Valley startup that offers cheap, pinprick blood testing; and developers of fitness trackers such as Fitbit. The home health and wearables market is expected to reach nearly $160 billion in sales in North America by 2023, according to the Centers for Medicare & Medicaid Services, when margins of 15 percent will yield $35 billion in profit.

Coexistence and Competition

Just as was the case in gold rush towns, gold miners and bartenders generally complement each other. Rather than offer a stark either–or choice, these typologies help provide a framework for understanding how to design business models to compete in the evolving market.

Let’s take an example. Imagine a patient is troubled by intermittent heart palpitations, which do not have the courtesy to show up at an annual visit. But there are devices, such as the AliveCor EKG attachment, that turn the patient’s smartphone into an EKG machine.

In a gold miner scenario, during the primary-care office visit or after an electronic health record review, the patient is enrolled in a preventive cardiology program. A clinician prescribes a smartphone EKG app like AliveCor, and the results are transparent to both patient and care team, which helps motivate compliance. But the decisions involving data interpretation, diagnosis, and action plan design are made centrally by a clinical team. The team closely monitors the patient’s daily progress and adherence to the treatment plan, which reduces the potential for a critical-care episode. Follow-up might include app-generated texts or outreach from a nurse or social worker, perhaps through video calls, depending on the patient’s preferences and needs. Delivery of care shifts out of healthcare settings, upending the traditional revenue-generating sequence of appointments and tests. The healthcare manager profits by avoiding the incidence of an expensive surgery or visit to the emergency room.

In a bartender scenario, by contrast, this EKG app is marketed directly to the consumer. He or she makes daily recordings and uses the app’s many easy-to-understand options for how to interpret the data: it could be sent to a doctor, or to a software vendor’s experts, or to a computer for analysis. The app also logs exercise, sleep, diet, and medications. As data accumulates, an algorithm might find a correlation between the taking of certain prescription drugs and heart palpitations, or it could flag potential symptoms of congestive heart failure; the app suggests next steps. In this scenario, the consumer has the prime decision-making role. Wait and see? Return to the physician or find another expert? Track other suggested symptoms? Going forward, the app offers a range of interventions, including text alerts reminding patients of diet and medication schedules or notifying a specified contact of emergent conditions. Every consumer choice represents a potential revenue stream that is up for grabs between incumbents and new players.

Blurred Lines

It may seem that gold miners have little to fear from the bartenders’ capture of the mostly healthy consumer. But technological advances will keep pushing the boundaries of consumer options and the generation of health intelligence outside traditional settings and relationships. Walgreens’s immediate-care clinics have partnered with Theranos to greatly expand the diagnostics available in those locations. Device and analytics firms such as WellDoc and BlueStar use mobile self-management programs to monitor blood sugar and offer coaching to diabetic patients, resulting in significantly fewer hospital visits and improved blood-glucose levels. A more informed consumer experience will prompt healthier choices and smarter testing. As a result, nearly $200 billion may be saved by shifting care from hospital-based inpatient and outpatient settings (gold miners) to retail clinics, including those at big-box stores and pharmacies (bartenders).

Indeed, patients who subscribe to bartender offerings will profoundly challenge providers to demonstrate their value in new ways: Patients who own and can interpret their own data will enter every healthcare encounter armed with meaningful, personalized expertise (not just a few pages printed from the Internet). They may even choose to crowdsource their diagnosis in a forum such as CrowdMed. When expertise becomes tailored to the individual and broadly accessible, providers must add value to the patient encounter through relationship building and a more profound understanding of their patients’ needs. This task is complicated by electronic health records. Why? Although they have improved the overall standard of care, they also decrease the need for conversational interaction, which tends to drain the patient–physician encounter of some of its depth and richness.

Which Strategy?

In the eyes of the consumer, the population health management (gold miner) approach and the consumer-focused (bartender) approach may initially look very similar. But they offer distinctly different views of where decision making resides. And they will compete for customers along those lines. Both industry incumbents and newcomers are likely to test-drive elements of each approach (see exhibit). IDC Health Insights estimates that 70 percent of healthcare organizations will offer some combination of wearables and virtual healthcare by 2018. Providers without risk-sharing arrangements as well as primary-care population health managers may offer a robust assortment of retail and virtual services in order to “own” the patient across various life stages. Large employers, especially those that self-insure, may experiment with gold miner activities to lower costs or address health issues specific to their employees.

Generally speaking, new entrants such as consumer product and software companies will probably have an advantage in bartender strategies, because they possess both expertise in product development and freedom from established industry relationships, allowing them to design a consumer-focused healthcare experience.

Large integrated healthcare systems will have a natural affinity for the gold miner strategy. Payor and provider partnerships are best positioned to move to risk-sharing profit pools and develop a delivery of care model that encompasses the home, particularly given the growth of accountable care organizations (ACOs) and Medicare’s outcomes-based reimbursement efforts.

Unlike previous risk-sharing HMO models, the new gold miner models will give primary-care physicians the tools they need to orchestrate comprehensive care for better medical outcomes and cost management. Perhaps more strategically significant, adopting elements of a gold miner strategy will allow providers to keep pace with the federal government’s increasing emphasis on value, as expressed by continued Medicare/Medicaid rate cuts, public audits of rate increases, and medical loss ratio regulation for payors.

The Path to Success

To prosper, both gold miners and bartenders will have to make significant investments in mobile health and analytics. Data capture and interpretation need to be specific to the customer and able to identify relevant trends across multiple symptoms, both pre- and post-diagnosis. Consumers already have access to general healthcare information via the Internet. What they need is reliable and secure tracking of their own health statistics and a way to translate that data into meaning and action.

For gold miners, success is contingent upon developing a medical model that optimizes care and the ability to track changes in medical spending and value. This includes making investments in business capabilities, such as:

• The physical capacity to coordinate care across home, primary-care, and specialty care settings; e.g., training social workers to conduct the follow-up required for high-risk patients.

• Virtual capabilities that can provide consumers with the right tools to motivate their adherence to treatment plans. Gold miners may be better served by incorporating brand name software/hardware than by building their own tools.

Successful bartenders will cut through the wealth of options by designing products and services that compete with the best of consumer experiences. Ideally, components of their offering will include:

• Evidence-based analytics that combine with the patient’s preference for risk, and that are then translated into easy-to-understand, personalized care directives.

• A bridge from providing data and intelligence in the early stages of the patient’s journey to connecting patients seamlessly with specialty care as their needs intensify.

Place Your Bets

The gold rush, which has the potential to fundamentally shift profit pools in the industry by changing where intelligence is gathered and expertise is delivered, is already under way. Sitting out isn’t an option for anyone. Given the potential for industry destabilization, every player’s existing profit pools are at risk. If you’re a hospital or specialist, collaborating with primary-care physicians or payors may be essential to long-term survival, as gold miner strategies reduce the use of intensive care settings. Payors need to facilitate population health platforms or face the risk of being disintermediated by providers that take on risk through ACOs. Providers cannot rest on their incumbency, even if their incentives don’t favor population management. New entrants (particularly in medical analytics) and certain payors will carve out manageable chronic care populations, leaving the provider with only the most expensive and dire cases.

The gold rush promises fortunes, but it is a fluid environment. Already, we're detecting the presence of a third set of players. So-called railroad pioneers build the infrastructure that binds bartenders and gold miners together, construct the platforms on which the new solutions stand, and develop services and technologies that eliminate bottlenecks and help systems run more smoothly. Examples include Citigroup’s Money2 for Health, a payment processing system, and Epic Systems, which builds health data management systems. Tech startups, consumer firms, and innovative health systems all have the chance to strike rich veins in this new frontier, where healthcare is more continuous and less episodic; where it is more tailored and less one-size-fits-all; and where analysis and decision making are shared among consumers, clinicians, and artificial intelligence. The future will be more virtual and yet intensely personal and close to home.

But a hundred staked claims mean nothing if one of them doesn’t hit it big. Success will require, more than ever, clarity about who you are as a company. Decide how you want to approach the market. Then, set about staking out your investments in a focused way using your preferred business model, forging new collaborations, and reshaping the value chain of healthcare. Rewards will abound for those that are brave enough to strike out for new territory.


5. Toxic Behaviors that Poison Teams

Toxic behaviors connected to communication:

Assume silence is agreement.

Overstate teammates’ opinions and question their motives.

Sweep difficult topics under the carpet.

Speak for others. Begin sentences with “you” – you always and you never.

Polish terminology until the message is lost, obscure, and acceptable to everyone on the planet.

Toxic behaviors connected to lack of humility and disrespect:

Tolerate drifters.

Allow power-members to drone on and on.

Share your feelings without regard for others.

Make decisions in private meetings, before team meetings begin.

Fight for everything you want.

Don’t adapt, as a matter of principle.

Start over when late-comers arrive.

Interrupt each other.

Use sarcasm to put people in their place.

Refuse to admit you’re wrong and pretend you know more than you know.

Toxic behaviors connected to diversity and innovation:

Don’t mix genders.

Marginalize new members who don’t know that you’ve always done it that way.

Invite the same people to the table, year after year.

Explain why new ideas won’t work as soon as they are introduced.

Toxic behaviors connected to planning and execution:

Get lost in the weeds.

Don’t identify project-champions.

Don’t talk about purpose and goals.

Assume things won’t work and remind everyone when they didn’t.

Solve every problem and address every imaginable contingency before you try something.

Toxic behaviors connected to meeting agendas:

Don’t state the purpose for the meeting.

Write long agendas.

Deal with a few “quick” items before you address important topics. Don’t leave enough time for the big stuff.

Discuss, but don’t decide.

Four top tips for making teams work:

Identify the reason for the team’s existence.

Connect everything you do to the reason for the team’s existence.

Assign champions and establish deadlines for every project or initiative.

Monitor energy. When you feel energy going up or down, ask, “What just happened?”

Enemy within

Rogue employees can wreak more damage on a company than competitors

EMPLOYEES are often said to be a company’s biggest resource. It is equally true that they are its biggest liability. Scarcely a week goes by without a company falling victim to employees-turned-enemies-or-embarrassments. On July 20th Ashley Madison, a website for married people looking to have an affair, announced that it had been hacked. Noel Biderman, the company’s chief executive, says that he thinks the attack was “an inside job”. On July 6th HSBC fired a group of employees when it emerged that they had filmed themselves engaged in an “ISIS-style mock beheading” of an Asian colleague dressed in an orange jumpsuit.

The most familiar type of enemy within is the fraudster. The Economist Intelligence Unit, a sister organisation of The Economist, conducts a regular poll of senior executives on the subject of fraud committed by insiders. In 2013 the poll discovered that about 70% of companies had suffered from at least one instance of fraud, up from 61% in the previous survey. Fraud is often petty: a survey of British employees for YouGov in 2010 found that a quarter of staff eligible for expenses admitted to inflating claims. But fraud can also be more audacious and more harmful: think of former employees setting up rivals using stolen technology and purloined client lists.

Even more dangerous than the fraudster is the vandal. Thieves at least have a rational motive. Vandals are driven by a desire for revenge that can know no limits. David Robertson of K2 Intelligence, a company that specialises in corporate investigation, recounts the story of a British manufacturing company that was undergoing restructuring. A member of the information-technology department discovered that his name was on the list of people whose services would no longer be required. He built a “backdoor” into the company’s IT system from his home computer and set about wreaking damage: deleting files, publishing the chief executive’s e-mails and distributing pornographic pictures.

Some enemies-within start out as star employees. A striking number of the worst corporate scandals in recent years have been the work of high-flyers who bend and then break the rules in order to please their bosses. Barings, a collapsed British investment bank, showered Nick Leeson with rewards before it discovered that he had produced his outsized results because he took outsized (and unauthorised) risks.

Other enemies-within are the very opposite of high-flyers. The HSBC execution squad are only the latest example of low-level employees who have either wittingly or unwittingly used the power of the internet to blacken their employer’s reputation. In April 2009 two employees of Domino’s, a fast-food chain, posted videos of themselves “abusing takeaway food”. And in July 2012 a Burger King employee posted photos of himself online which showed him standing in a tub of lettuce in filthy shoes along with the caption “This is the lettuce you eat at Burger King”.

One of the most effective ways for outsiders to damage a company is to strike up a relationship with an insider. This can sometimes be fairly crude: bribing a cleaner to replace a keyboard with a carefully-modified lookalike or swapping a USB stick for a virus-laden doppelganger. But it is often more sophisticated. Many of the biggest corporate disasters in recent years are likely to have involved collaborators. Security experts suspect that the hackers who stole the personal information of about 40m customers from Target, an American retail chain, in 2013 may have had help from insiders (the store refuses to comment).

What can companies do to reduce the threat from these wolves in sheep’s clothing? A lot depends on which particular sorts of wolves you are dealing with: traps that work for vandals may not work for fraudsters, for example. And even the best-managed companies are fighting an uphill battle. Information is getting harder to control. A single USB stick can contain more data than 500m typewritten pages. A mobile phone can be hijacked and turned into a listening device. People regularly log in with their electronic devices in crowded places where they can be watched, filmed or hacked.

Fifth column, three principles

Yet three precepts are always worth bearing in mind. The first is that firms need to focus on the people who have the greatest capacity to do harm: those who control the money and information. The more complicated companies become, the harder it is to identify where power really lies. But one thing is clear. The more dependent on information firms get, the more IT specialists can compromise the whole business. The least companies can do is to keep a careful watch on the IT department and, if you’re going to sack somebody from that team, do so immediately.

The second is that the human touch is still invaluable. Companies can certainly strengthen their hand by installing software that can identify anomalous behaviour or monitor e-mail, or by employing forensic accountants to double-check the accounts. But rogue employees are usually a step ahead of their employers: they will simply shift to text messaging if they think that their e-mails are being watched. Companies can probably do more by listening to company gossip. Corporate-security firms get some of their best results by using “spies” to hang around in the smoking room or go out for drinks after work.

The best way to fight the enemy within is to treat your employees with respect. And this third principle is where many firms fail. They may embrace the rhetoric that nothing matters more than their people, but too many workers feel that nothing matters less. According to a recent survey by Accenture, a consultancy, 31% of employees didn’t like their boss, 32% were actively looking for a new job, and 43% felt that they received no recognition for their work. The biggest problem with trying to do more with less is that you can end up turning your sheep into wolves, and your biggest resources into your biggest liabilities.

Networking. Do this instead

Networking has its advantages, but be prepared to put in the effort.

Today’s job market is a smorgasbord of networking opportunities, because it has to be. Networking, as both a practice and a concept, is a non-negotiable skill, one we’re all tasked with learning as our careers progress. It’s an investment in your future, and an important one to make: a referral by someone you know is much more likely to result in an interview, and a potential job, than simply applying through traditional channels.

But between social media, in-person meetings and good, old-fashioned email blitzes, there are plenty of right (and wrong) ways to network. How can you tell the difference between a waste of time and an advantageous opportunity? And once you’ve figured that out, what’s the right way to proceed? Well, there’s no networking rule book. The most important thing is that your way of networking is unique to you. But, to ease worries a bit, here are a few tips I’ve picked up along the way:

It doesn’t happen overnight

It’s tough to hear, but it’s true: networking is a lifelong practice, not a one-shot deal. It takes continual maintenance and perseverance to get it right, so establish realistic expectations: it doesn’t really ever end. But it’s not a death sentence. In fact, thinking of networking in never-ending terms shouldn’t be daunting; it should be exciting. It’s like friendship: connections come and go, but there will always be new ones (and new opportunities) around the corner.

However, to reap the benefits you do need to be prepared to do the work. Put in time each month to track progress against your networking goals. Plan a meet-up every couple of months, or send an email every other week to a contact you haven’t seen lately. However you choose to approach the basics of networking, remember to keep doing it.

A little hard work goes a long way

It’s not a secret, but it’s essential to remember: being mature, personable and respectful is what motivates people to network with you. Finding commonalities and points of shared interest is key to turning a connection into an opportunity, so make sure you leverage those. But I think we often forget the other side of networking: delivering results.

While culture fit and personality dynamics play more of a role in the job hunt than ever before, let’s not forget what makes a great employee even more stellar: their skills. Don’t expect someone to refer you on the strength of a mediocre profile or a lack of success in previous positions. Give your connection the evidence they need to back you up, and they’ll feel much more comfortable doing so.

Networking is a two-way street

While we often think of networking as what others can do for us, we can’t forget the other side: what can we do for our connections? Whether former or current, your colleagues and contacts will be far less inclined to help you out if you don’t return the favor. Networking is not a one-way street; rather, it’s a cycle, and when you’re at the top you should help others get there too.

The value of networking comes from building relationships, not just contacts. The vanity metrics of 500-plus LinkedIn connections or friends on Facebook don’t really mean anything at the end of the day. The key to successful networking is investing in your relationships. Spread the word about job openings at a friend’s company or reach out to connections that could be good fits at your own. If you’re proactive and not reactive, doors will open for you in response.

Improving Your Company's Communication

Many professionals think they are pretty good at managing their internal communications. Maybe you have a system of color-coded, prioritized folders in your email inbox, of which you’re particularly proud. Or you’re that senior leader who makes a point to get coffee with your direct reports individually once a month, so you’re confident that everyone on your team is in the loop.

If you think either of these methods of communication is enough to make employees, and ultimately customers, happy, your good intentions are severely misguided. In fact, outdated communication strategies could be the very thing holding you, and your company, back. Solutions like these don’t address how we think and work today. There’s a reason you don’t use a fax machine to send files anymore . . . there’s simply a better way.

But with a conscious effort to evolve, you can shift your mindset and become a better, more efficient communicator. Here are the first three steps:

Step 1: Detox from Your Addiction to Email

With today’s complex, fast-moving business environment and the growing number of virtual organizations, it’s not always realistic to gather around the same table and collaborate face-to-face. But that doesn’t mean important team conversations need to take place over email; in fact, that’s one of the worst places for them.

Email is great, but it’s no longer an optimal tool. It was designed to replace memos and one-way, one-time communication, not rapid, deep, ongoing productive conversations. How many times have we all misplaced or failed to read an important team communication because it was buried in a deluge of less important one-line, bantering exchanges or spam? Business today moves far faster and is more complex than in days past, and a communication nervous system for instant, rapid-fire discussions is becoming paramount.

Relying on email as your only communication tool will leave you in the dust every time. Use email when appropriate, but embrace emerging technology that offers much faster, better communication to deal with the growing complexity of business. The lesson here is that even if you are in the habit of doing things a certain way, recognize there could be, and most likely is, a much better way, so get ready to embrace it.

Step 2: Change Your Internal Structure

Most companies will give a spiel about being transparent and valuing all members of the team, but when you really look at your organization, can you say this is absolutely true? Many businesses start out with great intentions, but as they grow, it becomes more difficult to see these ideals realized. Instead of thinking departmentally and hierarchically, it’s time to think in terms of teams. Team collaboration instead of hierarchical communication is the most efficient and effective way to share information and get results.

For example, General Stanley McChrystal argues eloquently in his book Team of Teams for the need to decentralize team communication and decision-making. Layered upon this notion is the importance of truly transparent communication. McChrystal effectively illustrates the power of giving small groups freedom to experiment while being transparent in communicating progress and goals up and down the larger organization.

I’ve employed this mentality personally and experienced great success. Instead of micromanaging employees or getting hung up on titles, I’ve emphasized the fact that everyone should know everything they might need to know. Granted, there are delicate conversations that need to be private, but most should be open and transparent so everyone can subscribe to the "signal" that is important to them while filtering out the "noise." Anyone that’s tried to unsubscribe from reply-all hell in email knows how much of a time and energy suck their inbox has become.

How do you change your internal structure? Start by making your leadership accessible to everyone, encouraging open communication across departments, and reorganizing your business to function as a "team of teams." Teams should be fluid and rapidly adaptable. Employees that have great insight through team communication can be empowered to make wise and rapid decisions on their own and avoid the frustration of being kept in the dark. This approach also eliminates a business epidemic that must be stopped immediately: keeping information from management.

Step 3: Make Communication Flow Far and Wide

Communication needs fluidity across an organization so everyone has insight into the goals and progress of the business. Employees that can seek and quickly acquire new information are able to efficiently connect the dots and avoid huge mistakes by simply having access to important information about what other people and teams are doing. Information is power, and teams that have easy, quick access to as much of it as possible make decisions to beat their competitors to the punch. The only way to do this is by radically embracing technology to facilitate team communication that is not dependent on email.

It’s simple: If your business is to succeed, your goal has to be to radically improve the communication within the walls of your company. Start by freeing yourself from your familiar and habitual relationship with email. Open your mind to new ideas. Then, reinvent your organization as a "team of teams," whose people are empowered to act within defined limits because they know and understand what is going on. Seek out ways to incite dialogue with every member of your business. Very quickly you will see meaningful and fluid communication go from a far-off fantasy to the new backbone of a faster and more effective organization.

Overcoming energy drainers

Low energy levels, caused by internal and external factors, can have a negative impact on multiple levels and can leave us demotivated and frustrated. Although most of the time we can control our energy supply, we often continue habits that leave us drained rather than focusing on energy-boosting activities. Some of the most common energy drainers include:

1. Multi-tasking

Contrary to popular belief, multi-tasking is not an effective way to get things done. A study found that people suffer from something like writer’s block each time they switch from one activity to another, requiring them to take time to “reset” their minds. The more complex the task being switched to or from, the higher the time cost involved in switching. Even very brief distractions add up. To overcome this, stop doing activities that don’t generate a return on investment, estimate how long each activity will take and block dedicated periods for it in your calendar, and train yourself to focus on the task at hand during those time-blocked periods.

2. Lack of clear goals and conflicting priorities

A lot of work gets done without the benefit of clearly defined goals and objectives. But without clarity it is difficult to know whether the right work is getting done, and without a clear focus on goals and objectives, priorities easily conflict. To regain focus, list your goals and objectives as you understand them and highlight conflicts among them. Then make yourself reminders: post your business and personal goals and objectives in a place where you can see them, or choose representative artwork or other objects to place in your office space as a reminder.

3. Overcommitment

People overcommit for a variety of reasons: they don’t want to disappoint others by saying no; they feel they have no choice but to commit; they have an unrealistic idea of current commitments or of what is involved in the new commitment, to name a few. Being overcommitted can quickly lead to burnout and exhaustion. Saying no in an appropriate way does not communicate that you are unwilling; rather, it communicates that you are responsible and take your commitments seriously. Avoid the automatic yes when asked to make another commitment. State that you need to check your other commitments and time frames before you can give an answer. Before committing to anything, be sure you have a realistic and detailed idea of what the commitment entails. Don’t say yes when you mean no.

4. Distractions

We are constantly bombarded by distractions and interruptions in the workplace. Think of these events as forcing the mind into a multi-tasking mode, with each event either preventing or breaking concentration. The result is time lost to constant task switching. To eliminate distractions, find a quiet place to work on projects that require concentration, set aside specific time periods for specific activities, discourage interruptions, and save e-mail and voicemail checking for the transition time between other tasks.

5. Lack of Organisation

“Everything in its place and a place for everything” is a good energy-boosting adage. For some people, organisation means files, drawers, cubbies, neat stacks or no stacks at all, and a complete lack of clutter. For others, organisation simply means knowing where to look and being able to find what they need right away – for them a neat desk is alien. The point of organisation is not to fit someone else’s definition of “organised,” but to have what you need in an easily accessible place. Recognise that disorganisation is an energy drain and organise yourself in a way that makes sense to you.

6. Lack of reflection time

Failing to reflect creates a vicious cycle that leads to even less time for reflection: without reflection time, it is difficult to know whether one is working on the right activities; it may even be difficult to have a clear idea of what one’s goals and objectives really are. A lack of time to reflect, refresh, and rest can also lead to stress and work overload. Use an existing activity such as regular workouts, walks, gardening, or another hobby as an opportunity for reflection, or find a coach or mentor. This doesn’t have to be someone you hire; it could be a manager, colleague, or friend outside work. Set aside specific time periodically to reflect on your work, yourself, your long-term goals and objectives, and so on.

7. Sense of meaninglessness

An important source of energy for many is the pursuit of meaningful goals and objectives. As we become busier and busier, however, it is easy for meaningful goals to be displaced by urgent things. The longer this goes on, the more stress one feels. To re-establish your goals, build fun activities into your schedule. Set long-term personal goals, but don’t become imprisoned by them. Put them in a prominent place – they will become implicit priority-setters. Create a standing, flexible weekly schedule organised by categories of activities: job, chores, exercise, family, unstructured relaxation, and so on.

8. Perfectionism

The drive for perfection can be very draining. Perfection is an indefinable and unobtainable goal that, while it can increase the quality of one’s output, also increases workload. Establish objective quality measures; ask others to help you define “good enough” and identify the point of diminishing returns – that point when you stop adding measurable value by continuing to work on something. Before you “make it better,” ask yourself whether a person whose opinion you respect would notice a meaningful qualitative difference if you invest more time and effort.

The biggest lie employers tell employees

In his new book The Alliance, Reid Hoffman argues that the relationship between employers and employees is built on "a dishonest conversation."

Hoffman would know. As co-founder and executive chairman of LinkedIn, he sits atop the largest, most data-rich hiring platform the world has ever seen. As a venture capitalist who made early investments in everything from Facebook to Airbnb, he's helped some of the era's most successful companies grow.

And now he wants both workers and employers to begin having honest conversations with one another – conversations that admit employment isn't for life, that loyalty only lasts so long as it coincides with self-interest, and that the relationship doesn't have to end when the worker leaves.

1) The biggest lie that employers tell employees

"The biggest lie is that the employment relationship is like family," Hoffman says.

He goes on to describe two versions of the lie. "One is where the employer is actually deluding themselves." Employers may want to believe their workplace really is like a family, and, in that moment, they may convince themselves it actually is like a family.

"You don't fire your kid because of bad grades"

The other version of the lie comes because the employer wants the employee to believe it. "They really want the employee to be loyal to the company," Hoffman continues. "That's when it gets deceptive."

But the employer-employee relationship isn't like a family. "You don't fire your kid because of bad grades," Hoffman says.

2) The biggest lie employees tell employers

But it's not just employers who lie. Prospective employees do, too.

"They know that employers want loyalty," Hoffman says. "They know they want to hear, 'Oh, I plan on working here for the rest of my career.' But most employees recognize that career progression probably requires eventually moving to another company. But that never comes up."

This is core to Hoffman's idea that both employers and employees should look at a particular job less as a lifetime contract and more as a "tour of duty" – a limited-time engagement meant to achieve specific ends on both sides. But until employers stop pretending employees are family and employees stop pretending their aim is a job they'll never leave, neither side can have that conversation.

3) The most unusual question LinkedIn asks prospective hires

LinkedIn is an organization dedicated to helping other companies hire talent. It has access to more hiring data than arguably any other corporation on earth. So I asked Hoffman: how do they hire? What do they ask that most companies don't?

"All of our managers and recruiters ask about how working here will be transformational to your career," Hoffman says. "For example, our SVP of engineering, Kevin Scott, will ask, 'What's the next job that you would like to have post-LinkedIn?' That's not because we don't want our stars to stay at LinkedIn for a long time. It's because we're so committed to the idea that we're going to be transformative in the prospective employee's career. So we need to know, what's the next job after this? What do you want it to be?"

But don't job candidates find that weird?

"No," Hoffman says. "It's framed as, 'We're planning on having a huge impact in your career if you're working here.' And they find that liberating. It brings some honesty to what is otherwise kind of a collective self-deception dance. And it also means that when they leave, we still care about them."

"I was at an Airbnb board meeting and I ran into two former LinkedIn employees who walked up to me and said, 'Hey, how's it going? I'm working here now. I'd love to tell you about some of the stuff that I'm learning.' They know the way we operate is not, 'Oh, you've left LinkedIn, so you're no longer part of our tribe.' We continue to be allies. We can continue to try to help each other. That lets them come up and start telling me things that could be really helpful to LinkedIn."

4) Employers put too much weight on interviews and too little weight on references

A key part of every hiring process I've ever been a part of – both as the applicant and as the employer – is the job interview. And I've never felt very good about it. Don't job interviews bias you toward gregariousness? Is there any real reason to believe shy employees perform worse than extroverted ones?

"If you told me, 'Pick one – you could either get references or an interview,' I would pick references every day of the week"

"I think you can learn some useful things from an interview," Hoffman says. "You just have to be clear about what it is you're actually trying to learn. I think you can learn about chemistry and fit. I think you can learn about a person's immediate response to a challenge. But if you told me, 'Pick one – you could either get references or an interview,' I would pick references every day of the week.

"I advise all the companies that I affiliate with to take reference checking very seriously. References actually tell you how people work, what their work ethic is. That is a critical piece of data that cannot be put aside or done casually. Frequently employers are so casual about references they either a) don't check them, or b) only check the ones the prospective candidate gives them. In fact, you want both those references and others."

5) The case for hiring your friends

Hoffman's former chief of staff, Ben Casnocha, wrote an interesting piece on leadership lessons he's learned from Hoffman. This one in particular surprised me: "If you’re choosing between working with someone who’s a trusted friend and a 7 out of 10 on competence, versus a stranger who’s a 9 out of 10 on competence, who should you pick? Answer: if the trusted friend is a fast learner, pick the trusted friend."

The normal management guidance is don't hire your friends. According to Nick Bilton's history of Twitter, when Evan Williams asked legendary CEO coach Bill Campbell what the worst mistake he could make is, Campbell replied: "Hire your fucking friends!" So I asked Hoffman why he believes in hiring friends.

"You need to handle it well," Hoffman replied. "If I get to the point where I'm hiring a friend, I say, 'Look. Here's how we keep the friendship and the work stuff different. Here's how I'm going to treat you a little differently as a friend. Here's how you're going to act a little differently as a friend.' I'm going to be clear about the fact that I'm not going to privilege them at all in the continuum to the job and promotions and bonuses. All of that will be done in a very fair way.

"On the other hand, I will actually, as a friend, go out of my way to invest even more energy than I normally do to make this work. I'm committing to put in a little bit more energy. In return, one thing is I want you, as a friend, to do the same. The benefit you get from this is both a) a higher level of trust, and b) you get to work with people that you actually really like to spend time with. Which usually facilitates a generally positive working relationship anyway."

6) How philosophy training makes you a better investor

Hoffman's background isn't typical. He didn't study computer science or get an MBA. He studied philosophy. And he thinks he's better off for it.

"One of the things that philosophy is very helpful on is how to think pretty precisely about arguments, and an investment thesis is fundamentally an argument. Part of philosophical training is making you really understand how good an argument is and how to think through the alternatives. Philosophy is really good at posing the question, 'If the universe were such that this data would be different or the universe was such that this framework would be wrong, what happens to the argument then?' Questioning those premises really helps you figure out why someone smart might actually hold a different point of view.

"We live in a probabilistic universe, and we tend to think in determinist ways. If A is data-driven and I think I have that data, how certain am I that I have that data? What could I discover that might actually tell me that that data is formulated wrongly? When you dig into it, most of your arguments are actually probabilistic. They're not certain, even when you have data. You're really trying to get a sense of whether you have a reasonable bet on the probability."

3 little words

What are the most important three words for any relationship between a manager and employee?

No, it’s not “I love you.” Now that would be inappropriate, although not everyone would agree with that opinion. Love their jobs, yes. Love their managers or employees? Eew!

No, the most important three little words are: “I trust you.”

Trust is the foundation that a positive manager-employee relationship is built on. The absence of trust leads to micromanagement, fear, risk-aversion, backstabbing, destructive rumors, a lack of innovation, mistakes, and a lack of engagement.

What does trust look like? It’s all in the eye of the beholder, but here’s a starter list from both the manager’s and employee’s perspective:

When an employee says “I trust you” to their manager, it means:

When I share good news and accomplishments with you, you will let your boss and others know.

You won’t claim credit for my accomplishments.

When I admit a weakness, you will work with me to improve myself, not hold it against me on my performance review.

I can come to you when I make a mistake. You’ll treat it as a learning opportunity, but also hold me accountable when needed.

You’ll look me in the eye and give me honest, fair, direct feedback when I need it. You won’t sugarcoat it. I’ll know where I stand with you and won’t be blindsided during my performance review.

You won’t ignore performance issues – my own, as well as my co-workers’. If I see a co-worker slacking off, I’ll assume you are dealing with it. If I have to bring it to your attention, I know you’ll look into it and deal with it fairly.

You won’t “shoot the messenger” if I bring a problem to your attention.

You’ll do what you say you’re going to do. I won’t have to remind you more than once.

You’ll look out for my best interests. Yes, I know you have a business to run and have to make tough decisions, but you will do whatever you can to make sure I’m treated fairly and with respect.

You’ll tell the truth and not hold back critical information.

I can discuss my career aspirations with you and you won’t hold it against me.

When a manager says “I trust you” to their employee, it means:

When I ask you to do something, I know you’ll do it. I won’t have to follow up, inspect, ask again, and so on.

You’ll tell me when you think I’m wrong or about to make a stupid mistake.

You won’t throw me under the bus in front of my boss, or behind my back.

If you have a problem with me, you’ll come to me first to discuss it.

When I ask you to do something and you say you can’t, I’ll know you have good reasons.

When we discuss your career aspirations, you’ll be open and honest with me so that I can support you. I shouldn’t be blindsided when you give me your notice.

You won’t cover up mistakes. If you screw up, you’ll admit it, take ownership, and focus on solving the problem.

You’ll give me a heads up regarding any urgent issues or problems so that I’m appropriately informed and not surprised when I hear about it from others.

If your workload slows down, you’ll let me know, or offer to help your teammates with theirs.

When I ask you how long something will take, you’ll give me a realistic and honest estimate. No padding.

When you compliment me, I’ll know it’s sincere. No sucking up.

What would you add to the list? What does “I trust you” mean to you?


It's OK to Lose your Best Employees

Like it or not, everyone is replaceable.

I have a slightly different perspective than most in terms of how to lead a company through change. My experience comes from leading a high–growth tech startup, which is arguably one of the most competitive environments to keep a team focused and on payroll. Startups today can be a wild ride for human capital, and the talent required at each company milestone can vary dramatically. From my vantage point, I see all too well how this works among engineers. It typically goes something like this:

You build a team of engineers to help get your tech startup to market. You hire seasoned rock stars at the top of their field. These employees may or may not play well with others, but they deliver on the company’s vision. This is how the Facebooks of the world got started.

Now you have customers. And customers are demanding. They expect results and attention. You’ve transitioned from a pre-revenue minimum viable product (MVP) building engine to one that has a customer base, real revenue and expectations for both monthly recurring revenue (MRR) and customer growth. The focus is now on stability, usability and delivering features that support further development. This is the best type of change a company could ask for, but what are the next steps you need to take into consideration? You have to be flexible. You have to pivot well in times of change. You can’t panic. And you have to trust and empower your engineers.

Be willing to let the ‘good one’ go

Like it or not, everyone is replaceable. If you’re running a tech startup, some of your most valuable players will operate like CEOs, meaning they will thrive most in times of ambiguity. However, in later stages of the company there is less room for individual ‘rock stars’ and a greater need for high-functioning, process-oriented teams. The best companies have a deep bench of engineers and other human capital to leverage at different stages of the company lifecycle. You need to trust that they will be just as capable.

It may be heartbreaking to watch a key player transition out of your company, but if they’re no longer happy, productive humans, it’s time to say goodbye. Hug them (I’m kidding, they’re engineers) and help them find a new job. There’s always a chance your paths will cross again in the future.

Celebrate successes (big and small)

I’ve never met an engineer who valued anything more than humans actually using their product. Keep your engineers focused on delivering value to the end users. Celebrate company wins, share anecdotes from happy users and remind employees why their work matters.

Be transparent about objectives

I’m guilty of taking business logic for granted more times than I would like to admit. Maybe it’s more important to crank out a product feature to secure a foothold in the competitive marketplace than work on tech debt in a given sprint. Be sure to share why you’ve made a decision, its expected results, and how your employees will contribute to the new objective. Your team will be much more likely to support the initiative if they feel informed.

Throw them to the wolves

Okay, not really. But your engineers are not your children, and you should not overprotect them. Engineers prefer working on complex new engineering feats to perfecting old ones. To counter that mentality, invite them to join customer calls so that they can hear the pain points the end user is experiencing. It will also give them the opportunity to hear positive feedback directly from the source. Your customers will (likely) also love engaging with the people who build your product and contribute to its evolution.

Hire proactively in anticipation of growth

In a high-growth company, employees are either going to grow with the company or decide it’s time for them to grow somewhere else. Be cognizant of the job requirements – cultural, technical or otherwise – in both the current and next phase of your business. Use that knowledge to plan accordingly. When your company is experiencing growth, knowing the requirements of the business and of your human resources is critical to success. Know when to let go. Know how to cultivate your team. And most importantly, know how to plan for what’s next.


ARM Revenue Misses Estimates as Smartphone Market Cools

ARM Holdings Plc, the chip designer whose technology powers almost all smartphones, reported sales that missed analysts’ estimates after device shipments by customers including Apple Inc. trailed predictions.

Second-quarter revenue rose 22 percent to 228.5 million pounds ($356 million), the Cambridge, England-based company said Wednesday. Analysts had predicted 234.9 million pounds on average, according to data compiled by Bloomberg. Measured in dollars, revenue rose 15 percent.

Apple and Samsung Electronics Co., both of which are ARM customers, reported device sales for the past quarter that fell short of expectations as more people already have smartphones and cheaper Chinese devices gain in popularity. Any indication of a slowdown in demand at Apple, which reported quarterly earnings late Tuesday, could also impact European suppliers including Dialog Semiconductor Plc and AMS AG.

ARM shares fell 3.8 percent to 1,000 pence at 8:08 a.m. in London, giving the company a market value of 14.1 billion pounds. Dialog dropped 5.1 percent and AMS lost 1.1 percent in Zurich. STMicroelectronics NV, Europe’s largest chipmaker, declined 3.2 percent in Paris and Infineon Technologies AG fell 3.8 percent in Frankfurt.

Global semiconductor revenue is expected to decelerate this year as device makers rein in new spending on memory, researcher IHS said in an April report.

Royalty Revenue

The second half of the year may show some improvement in revenue, ARM said. Data for the second quarter so far, which is the shipment period for ARM’s third-quarter sales, shows a “small sequential increase in industry revenues,” the company said. If economic uncertainty doesn’t further damage customer spending, full-year revenue will be “in line with current market expectations” of about $1.48 billion, ARM said.

Royalty revenue, the money ARM gets when products using its licenses are sold, rose 30 percent to $175.9 million in the second quarter from a year earlier. Sales from licenses of ARM technology rose 3 percent to $151 million.

The quarterly results are the first financial announcement since Chief Financial Officer Tim Score’s departure in June. Score, who retired after more than a decade at ARM, will be replaced by EasyJet Plc CFO Chris Kennedy. ARM said Kennedy will start Sept. 1.


HEVC Patent Pool Issues Its Bill

Manufacturers of 4K Ultra HDTVs, Ultra HD Blu-ray players, mobile devices, 4K Ultra HD software and other products using high efficiency video coding (HEVC) compression and decoding technology are about to be handed another bill.

HEVC Advance, a second patent pool that surfaced this past March to administer royalty collection for HEVC technology, also known as H.265, announced a new price sheet and payment schedule for manufacturers, who must now pony up for using the technology.

HEVC is an advanced video compression scheme used to make more efficient use of bandwidth in order to send digital bit streams (including those carrying data-rich content like 4K UHD and 8K video) over narrow transmission conduits, like over-the-top (OTT) streaming broadband services, as well as cable, telephone and satellite TV systems.


While not necessarily bringing a new level of confusion to HEVC users, HEVC Advance is creating a stir by dropping a new fee schedule on manufacturers and content producers just gearing up to expand availability of 4K Ultra HDTV streaming and Ultra HD Blu-ray content.

HEVC Advance popped onto the scene around this year’s National Association of Broadcasters (NAB) Show, announcing that patent royalties would soon come due, with initial members including GE, Technicolor, Dolby, Philips and Mitsubishi Electric.

Prior to its appearance, HEVC licensing had been handled by a pool of 27 patent holders administered by the MPEG LA. However, compression technology industry observers said there are a number of additional potential intellectual property holders who could still make claims for patent royalties, and some of those remain unaffiliated with either patent pool.

One of the stated goals of the HEVC Advance is to bring some of those other potential claimants into its patent pool and to help establish fair and balanced fee payments that should help speed along implementation of HEVC in more devices and content.

Why 2 Pools?

Industry observers told us there are no major issues between the IP holders in the two pools; rather, members of the MPEG LA pool have different motivating factors than members of HEVC Advance. For example, Samsung, which is both an IP holder and a major HEVC licensee, is motivated both to raise income and to keep fees balanced, to lessen the burden on the HEVC-enabled TVs, mobile phones and Ultra HD Blu-ray players it intends to sell at competitive prices.

Members of the HEVC Advance, for the most part, will make money only from royalties on their IP.

Peter Moller, CEO of HEVC Advance, told HD Guru that “there were many licensors and patent users who came to us and said they did not feel the MPEG LA offering struck the right balance between the rights of patent owners and those of patent users. They didn’t feel it met their needs, and it was clear that an alternate pool was necessary – and that it had to be official in order to attract the large number of companies that made it clear to us they were not planning to join the MPEG LA patent pool.”

Other Compression Solutions

Some have suggested that the additional patent royalty claims could push some content producers and hardware manufacturers toward alternative efficient compression technologies, including Google’s VP-9 system, which is offered royalty-free. Google developed the video compression scheme in part for 4K video streaming on its YouTube service. But industry observers told HD Guru the lack of royalties isn’t the motivating factor you might expect it to be: from a technical standpoint, VP-9 doesn’t perform as well as HEVC in some areas, and it has quality issues that some content producers find problematic.

On top of that, some potential licensees have been skittish about underlying intellectual property issues left over from Google’s VP-8 compression scheme (AVC claims in particular) that they fear could surface down the line in the form of IP holders making unexpected claims against VP-9, one observer told us. Some potential licensees declined to use Google’s earlier VP-8 format, even though it was also royalty-free, because there was no indemnification. Some fear that unlicensed underlying IP in VP-8 might also exist in VP-9, sources told us.

New Fees

Moller said the group has developed two fee scales that will be applied to different regions of the world. So-called Region 1 covers most developed countries, such as the U.S., the U.K. and members of the European Union. Region 2 represents mostly developing countries, such as India.

HEVC Advance also divided the royalty rate structure into two categories: one covers devices (4K UHD TVs and other hardware), and the other covers content, including content providers transacting with consumers via streaming, over-the-air broadcasts and so on.

The group also established royalty rates segmented by the profiles used. The base profile (Main 10), which has the most features, will be used in the large majority of units; the other profiles are priced separately. Also included is a category in the H.265 standard called “Optional”; licensees who select it must conform to certain requirements.

The fees for Region 1 Main Profile devices are: 4K UHD+ TVs ($1.50), mobile devices ($0.80) and others ($1.10); the corresponding Region 2 Main Profile rates are $0.75, $0.40 and $0.55.
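
For a rough sense of what these rates mean at scale, the per-unit figures above can be plugged into a simple calculator. This is a hypothetical sketch using only the Main Profile numbers quoted in this article; the actual HEVC Advance fee schedule has more categories and terms, and the function and key names here are illustrative assumptions.

```python
# Hypothetical royalty calculator using the Main Profile per-unit
# rates quoted in the article. Real licensing terms are more complex.
RATES = {
    # (region, device category) -> per-unit fee in US dollars
    ("region1", "uhd_tv"): 1.50,
    ("region1", "mobile"): 0.80,
    ("region1", "other"): 1.10,
    ("region2", "uhd_tv"): 0.75,
    ("region2", "mobile"): 0.40,
    ("region2", "other"): 0.55,
}

def royalty(region, category, units):
    """Total Main Profile royalty owed for a shipment of `units` devices."""
    return RATES[(region, category)] * units

# Example: shipping 1 million 4K UHD TVs into Region 1
total = royalty("region1", "uhd_tv", 1_000_000)  # $1.5 million
```

At these rates, the same million TVs shipped into Region 2 would owe half as much, which illustrates the two-tier regional structure Moller describes.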

“Even though it’s added some complexity to the structure, we added an optional category to make sure we had a complete structure without any holes,” Moller told HD Guru. “We tried to balance the administrative complexity with the value that H.265 brings to certain categories.”

On the surface the fees don’t seem steep, but they are being added to a long list of bill-of-materials costs device manufacturers must pay to bring devices to market – and this is just one of the fees required to use HEVC encoding or decoding.

Moller said he didn’t expect at this point that the licensing schedule will come as much of a shock to licensees.

“I think most companies recognized that there are many companies out there that may require a license on essential patents, whether they get it bilaterally with some companies or more effectively and efficiently through a patent pool like the one we are forming, or MPEG LA’s,” Moller said.

HEVC Advance will hold its first meeting of HEVC essential patent users and owners Sept. 2, 2015, in Tokyo.

HEVC is overcoming a lot of barriers, but there’s still a long way to go for the new 4K compression standard

High Efficiency Video Coding – H.265, or HEVC as it’s most commonly called – is the next-generation standard for compression of video transmissions, developed with 4K resolution squarely in mind. However, despite some serious inroads into the world of 4K ultra HD display technologies, HEVC still has a long way to go and its future still isn’t 100% clear.

Basically, while existing users of HEVC report that the technology is running relatively smoothly and improving as it becomes more widespread, deployment as a whole is taking a bit longer than expected.

Of course, this could partly be due to the simple fact that HEVC is designed more than anything for 4K resolutions and 4K itself is still not thoroughly deployed in the display and media player world. However, so far at least, all new 4K TVs that have emerged since the middle of 2014 have indeed included HEVC. It has essentially become a must-have feature. Likewise for media players. The upcoming 4K Blu-ray players and all existing models of similar devices also use HEVC across the board.

And this makes sense. The advantages of utilizing HEVC are obvious if 4K is your game. The technology is already well established as the industry standard for the resolution, and key OTT players like Netflix, Amazon Instant Video and just about everyone else who streams 4K video in any form indeed use HEVC.

However, according to Guillaume Arthuis, CEO and founder of BBright, “latency had been the biggest showstopper for live 4K content.” Arthuis claims his company has reduced that latency to less than 5 seconds for live 4K content by developing 12-bit HEVC encoding for file-based programming. He also states that form factors and prices have shrunk at the same time.

Furthermore, video quality is being elevated even as overall bit rates decrease. With compression reducing 4K video down to 18 or 20 Mbps, satellite and even IP transmission can now potentially carry two different signals across a single satellite transponder or a single internet connection, and can deliver high-quality 4K coverage of fast-action events like sportscasts, thanks to HEVC’s unique compression algorithm:

HEVC compression diverges from older methods partly due to its specific asymmetric block formations.

As Thomas Burnichon, file transcoding product marketing manager at compression equipment company ATEME, recently said, “We have improved compression efficiencies to reach the 50% reduction that the standard promises.” The 50% reduction he was referring to is over the levels already attained by the previous H.264 standard used for Full HD content. However, this 50% applies only to files (content already in hard or cloud storage), not to live broadcasts. For live video feeds in 4K, the reduction remains around 30% but is slowly increasing for the sake of more feasible transmission.
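
The bitrate arithmetic behind those percentages is simple to sketch. The function below is illustrative only: the 40 Mbps H.264 baseline and the function name are assumptions, while the roughly 50% (file-based) and 30% (live) reductions are the figures quoted above.

```python
def hevc_bitrate(h264_bitrate_mbps, live=False):
    """Estimate the HEVC bitrate for content with a known H.264 bitrate.

    Applies the reductions cited in the article: about 50% for
    file-based (stored) content and about 30% for live 4K feeds.
    """
    reduction = 0.30 if live else 0.50
    return h264_bitrate_mbps * (1 - reduction)

# Illustrative 40 Mbps H.264 4K stream:
file_rate = hevc_bitrate(40)             # 20 Mbps, consistent with the
                                         # 18-20 Mbps figure cited earlier
live_rate = hevc_bitrate(40, live=True)  # ~28 Mbps
```

The gap between the two results shows why live 4K remains the harder delivery problem: at today’s roughly 30% live reduction, a live feed still needs substantially more bandwidth than the same content delivered as a file.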

The other side of this entire coin is the matter of transmission technologies themselves. While enhanced compression is definitely the quicker and cheaper route to more widespread 4K video delivery, expanding bandwidth in internet and other types of OTT transmissions is also a major potential game changer. The only problem is that this second avenue of greater UHD traffic will cost a lot more to achieve and require greater infrastructure investment.

Nonetheless, the appeal of improved HD and ultra HD is clear, and the future role of HEVC – or a rival standard like Google’s VP9 – in bringing these technologies to a wider market is interesting. HEVC doesn’t have to be used only for ultra HD, though this seems like the most probable future of digital video. It can also deliver deeply augmented HD video, either with HDR built in or at much faster frame rates, which also have the capacity to create extraordinary picture quality, possibly even more noticeably superior to regular 4K content.

Ultra HD Forum gains momentum

The Ultra HD Forum, a global advocacy body with a mission to facilitate the adoption of UHD and related technologies, announced today that its membership has grown significantly as the industry ramps up for UHD commercial deployments. The Ultra HD Forum’s membership has tripled to 20 member companies from the original founding charter members, which included Dolby Laboratories, Ericsson, Harmonic, LG Electronics, NeuLion, and Sony.

The Forum also announced that as part of the IBC 2015 Conference, the Ultra HD Forum will conduct a MasterClass focused on the use of UHD technologies to deliver the next-generation consumer experience. The MasterClass will be held on Friday, September 11 in Rooms G102/G103 at the RAI from 4:00 pm to 5:30 pm and is open to all show attendees. Speakers, including operators, major technology companies and standards bodies, will provide an informative range of perspectives.

Early adopters of UHD TVs have created a demand in the market for UHD services. In order to move to broad deployment of live and non-linear UHD content, the industry must adopt standards to ensure interoperability across the complete UHD delivery ecosystem. To this end, the Ultra HD Forum is establishing guidelines for the implementation of a broad scope of new UHD technologies including Wide Color Gamut (WCG), High Dynamic Range (HDR), High Frame Rate (HFR) and Next Generation Audio (NGA).

“There is an increasing awareness in, and the demand for, the phased introduction of these technologies,” said David Price, Chairman of the Ultra HD Forum Communications Working Group and Forum Vice President. “The combined capabilities of our membership will help transform the consumer experience and the Forum will provide real impetus to facilitate Ultra HD deployment.”

“Ultra HD is now entering a phase where content, technology and consumer experience have to be aligned,” said Thierry Fautier, President of the Ultra HD Forum. “The Ultra HD Forum will be the driving force to make this happen. We have gathered members from around the world encompassing the entire ecosystem to show how Ultra HD can be delivered end-to-end.”

‘Ultra HD investment hitting new highs’

Ultra HD programming is expected to see strong take-up rates globally, and a failure to invest now will mean a failure to tap into its growing premium revenues, according to Alan Crisp, Analyst at satellite industry market research and consulting services firm NSR.

Writing in NSR’s ‘Bottom Line’ Blog, Crisp says that despite continuing concerns about OTT threatening the future growth of Linear TV (which are for the most part unwarranted), Ultra HD, with its premium nature, is being seen as a fresh way to grow DTH, Cable and IPTV businesses further worldwide.

“While 3DTV never really took off, Ultra HD investment has risen to new highs, with seemingly every few weeks another DTH platform announcing its Ultra HD intentions, trials, or commercial broadcast. Just last week Sky Deutschland secured more capacity in order to commence Ultra HD broadcasts, and the Polish public broadcaster, TVP, is trialling Ultra HD on terrestrial television in Warsaw. Don’t be surprised to see more Ultra HD announcements coming to a DTH platform near you,” he advises.

According to NSR’s recently released Linear TV via Satellite: DTH, OTT & IPTV, 8th Edition, the number of Ultra HD channels broadcasting will accelerate longer term, with growth of the new format in every region worldwide, developing and developed alike.

On DTH platforms, by 2024 NSR expects Ultra HD linear content to consume approximately 70 transponders globally from 315+ Ultra HD channels. This equates to an estimated additional $185 million in leasing revenues from Ultra HD content on DTH alone. This means in 2024, Ultra HD represents 1.2 per cent of capacity globally on Ku-band DTH – a niche market, but one which is highly sought after by the premium market.

For video distribution to Cable and IPTV headends, a similar trend emerges: 180 Ultra HD channels on Ku-band and 75 channels on C-band, leading to a combined total of 57 transponders, roughly 4 per cent of global distribution capacity on all bands attributed to Ultra HD, leading to an even larger $219 million in leasing revenues.
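NSR’s projections above carry some implicit per-transponder economics. As a quick sanity check, the sketch below derives channels-per-transponder and leasing revenue per transponder from the DTH figures quoted in the article (the derived outputs are illustrative estimates, not numbers from the report):

```python
# Back-of-the-envelope check on the NSR Ultra HD DTH projections for 2024.
# Inputs come from the figures quoted above; outputs are derived estimates.

dth_channels = 315        # projected Ultra HD channels on DTH platforms
dth_transponders = 70     # transponders consumed by those channels
dth_revenue_usd = 185e6   # estimated additional annual leasing revenue

channels_per_transponder = dth_channels / dth_transponders
revenue_per_transponder = dth_revenue_usd / dth_transponders

print(f"~{channels_per_transponder:.1f} UHD channels per transponder")
print(f"~${revenue_per_transponder / 1e6:.2f}M leasing revenue per transponder per year")
```

That works out to roughly 4.5 Ultra HD channels per transponder and about $2.64 million in leasing revenue per transponder per year.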

“NSR previously noted that SD programming is the largest driver for subscribers, revenues, and channels in developing (high growth) regions. Whilst NSR sees SD to be the largest growth opportunity in these regions, Ultra HD plays a key supplemental role at targeting those with increasingly higher levels of disposable income available. This is the view that DTH platforms in India took when they publicly announced their Ultra HD plans, with sports content now available in the format, likewise with Tricolor TV in Russia when they are expected to launch their Ultra HD channel in the next year,” reports Crisp.

“On the other hand in developed regions, where subscriber growth remains low, and in the United States in some instances declining, Ultra HD is poised to be a way to move customers from basic TV packages, primarily SD content, towards premium and ultra-premium services, thus increasing ARPU and revenue growth. KT SkyLife in Korea and Sky PerfecTV in Japan are already broadcasting 24 hours per day a variety of content on their linear streams. Both markets are already quite saturated with pay-TV, but they intend for Ultra HD to increase revenues from their existing subscriber bases,” he notes.

“Ultra HD isn’t limited to the realm of Linear TV – in fact far from it. The popularity of the format has already been demonstrated in North America and elsewhere with the success of the Ultra HD subscriber base on Netflix, where not only is Netflix able to charge higher monthly fees for Ultra HD access, but it has actually been successful in convincing customers to join this highest tier. Higher ARPUs from Ultra HD content have already been demonstrated,” he observes.

According to Crisp, this combined with the fact that Ultra HD TV sets are now lower in price than ever before, with Sharp now selling Ultra HD TV sets for under $600, means that Linear TV will follow in the footsteps of OTT services and start broadcasting content soon in the new format.

“Although these OTT services are cutting into viewing hours of traditional Linear TV content, it is NSR’s view that there remain very compelling reasons for consumers to continue subscribing and paying for Linear content – most notably movies, sports, and other live events. What’s notable is this is the exact type of content that is first being filmed in Ultra HD – movies and sports. Without an Ultra HD service in the medium-term, Linear TV may appear to be a lower quality service compared to OTT in regions that are offering Ultra HD content. Thus implementing Ultra HD for sports content adds yet another compelling reason for customers to sign-up and remain subscribers for pay TV services. With Netflix, YouTube and other OTT platforms already serving Ultra HD content and consumer awareness and familiarity of Ultra HD rising, consumers in developed regions will come to expect Ultra HD content on their pay TV services, sooner rather than later. No wonder satellite operators are upping their investment for these high quality products,” he says.

In conclusion, Crisp says that Ultra HD programming is expected to see strong take-up rates globally, with fastest growth expected in North America and the weakest in Sub-Saharan Africa. “Although SD content is driving growth in developing regions, providing Ultra HD content is important to capture the higher levels of spending that some in these markets can afford. While the satellite capacity requirements over the long-term are a niche market, it is an important niche that will drive Linear TV platforms towards higher ARPUs and revenues. The upward trend for Ultra HD is clearer than ever, and a failure to invest now will mean a failure to tap into its growing premium revenues,” he warns.


Netflix Replaces Live TV as Youngsters’ Viewing Choice

New findings from Hub Entertainment Research show that while cord cutting remains low, online TV is becoming the default for key TV segments and scenarios.

In a very short time, online TV sources have become more common than not: more than three-quarters of TV consumers watch online to some extent, and the average pay TV customer uses two or more online TV sources in addition to their MVPD subscription. So now that viewers have multiple options to choose from, which sources are emerging as the TV ‘default’ – the first source they turn on when they want to watch TV?

The latest wave of Hub’s Decoding the Default study reveals important shifts in consumers’ go-to source for TV content. Among those who watch at least some online TV content:

Live TV is still the single most common default source. 34 per cent say Live TV is the first thing they turn on when they want to watch – higher than any other platform.

However, that share is dropping significantly. In 2013, 50 per cent of viewers named live TV as their default – 16 points higher than this year.

Online sources now account for as much share of viewing as live TV and DVR combined. Across users of all TV platforms, viewers allocate 32 per cent of their total TV viewing to live TV (down from 41 per cent in 2013) and 15 per cent to shows on their DVR (down from 21 per cent in 2013). Online platforms now account for 46 per cent of all viewing time (up from 34 per cent in 2013).

Among young viewers, online sources have replaced live shows as the ‘home base’ for TV.

40 per cent of viewers age 16-24 use Netflix as their home base. Only 26 per cent default to live TV.

Millennials (age 18-34) are equally likely to default to live TV (33 per cent) and Netflix (31 per cent).

Online platforms have become the default in what some might consider the most valuable viewing scenarios. Among those who watch any online content:

Live TV is still the go-to source for channel-surfing scenarios.

“When I don’t have anything specific in mind, I just want to watch something”: 40 per cent of viewers say that live TV is their default source, vs. only 27 per cent who say Netflix.

“When I want a TV show on in the background while I do other things”: Half (50 per cent) of consumers say live TV is their default, and only 15 per cent say Netflix.

But Netflix is now the most common default source for engaged TV viewing.

“When I have a specific show in mind I want to watch”: 26 per cent say their default source in this scenario is Netflix, vs. just 15 per cent who say live TV.

This is a reversal from how consumers answered the same scenario in 2013 (Live TV 29 per cent, Netflix 18 per cent).

“When I want to focus on what I’m watching without any distractions”: More than a quarter (26 per cent) of all viewers say they default to Netflix in this situation, vs. only 20 per cent who say Live TV.

Again, just two years ago, highly focused viewing was Live TV’s territory: 26 per cent named it as their default, vs. just 19 per cent who defaulted to Netflix.

“A change in default sources is not the same as completely cutting a pay-TV provider,” said Jon Giegengack, principal at Hub and one of the authors of the study. “However we think it’s an important psychological threshold. People love choice – but when it comes to TV, there are more alternative sources than any one person could use. They crave a home base, and the position of ‘first source turned on’ will be an increasingly enviable one as the market evolves.”

“It’s important to note that along with an overall decline as consumers’ go-to viewing source, Live TV is losing ground in what one might argue are the more valuable viewing occasions,” added Peter Fondulas of Hub, “the shows where people are most engaged, versus the occasions when they’re just looking for something to have on in the background.”

OTT viewing higher in Gen X, kid households

A new study from market and consumer information source GfK sheds light on how streaming video viewing is upending TV business models, including which Internet-connected devices and services different viewers prefer.

The 2015 Ownership and Trend Report, from GfK’s The Home Technology Monitor, reveals that households with at least one member of Generation X (roughly ages 35 to 49), and those where children (ages 17 and under) are present, are much more likely to stream video and view other content using an Internet-connected device attached to a TV – OTT viewing.

More than half (54 per cent) of homes with kids view OTT content on a TV set, compared with a national average of 40 per cent of all TV households. Child households are also significantly more likely than those without kids to be using all four key devices to watch OTT on a set.

The research shows that using streaming video is now the third most common online activity, behind social networking and online shopping. This means that streaming is now reported to be more prevalent than listening to music online, instant messaging, and Internet gaming.

Among ethnic and racial groups, Hispanics (42 per cent) and whites (40 per cent) are at roughly the national average in their OTT use, while African Americans (29 per cent) are significantly below it. In terms of devices, Hispanics are much more likely than whites to use smart TVs and videogame systems for streaming OTT content to a TV set.

“The old stereotype of an OTT viewer hunched over a laptop or tablet is very much out of date,” says David Tice, Senior Vice President in GfK’s Media and Entertainment practice. “Rapid adoption of smart TVs and digital media players over the past three years has pushed OTT to the biggest screens in the home, with attendant expectations from consumers that OTT quality should be as good as regular TV service, and as easy to use as mobile OTT options.”


Cracking Down on Hackers Would be Bad for Innovation

Getting tough on hackers would have little impact on foreign hackers, who are reportedly behind many of the highest-profile hacks of government and business. (Jonathan Ernst/Reuters)

Every week seems to bring a new hacking story – the massive hacking attack on the U.S. government’s databases and the attacks on the U.S. health care system are just two of the bigger stories – so it’s perhaps no surprise that the knee-jerk reaction is to take the fight directly to the hackers. By making the penalties tougher, expanding the scope of federal anti-hacking statutes and making it easier to prosecute wrongdoers, we’ll convince hackers that it’s just not worth the risk, right?

The problem is that simply toughening the laws on hackers by extending their scope and reach, or extending the prison sentences of hackers, is not going to help catch the real hackers – the criminalized, anonymous hackers who operate in places such as China. Instead, such laws are more likely to ensnare the likes of hacktivist heroes such as Aaron Swartz.

Getting tough on hackers by extending the definition of what a hacker is would theoretically mean that people who even so much as retweet or click on a link with unauthorized information could be committing a felony. Moreover, the white hat hackers (the “good guys”) could be ensnared as well, since their work, at its core, is indistinguishable from that of the black hat hackers (the “bad guys”).

And that could have a chilling effect on innovation.

That’s because laws and regulations can’t keep up with the pace of technological change and end up either prosecuting the wrong people or prosecuting the right people, but on charges that far exceed the scope of the crime. Consider that the current anti-hacking federal statute, the Computer Fraud and Abuse Act (CFAA), was enacted back in 1986, well before most politicians had ever heard of the Internet.

As a result, you get odd rulings where it’s obvious the law hasn’t kept up with the technology: “In a case that began in 1993, the U.S. State Department ruled that Daniel Bernstein, then a graduate student at the University of California at Berkeley, would have to register as an international weapons dealer if he wanted to post an encryption program online.”

If tough hacking laws had been around 20 years ago, it might have stopped Google from launching its method of indexing web pages or Apple from launching many of its innovative consumer gadgets. As Rob Graham, chief executive of Errata Security, points out, “Had hacking laws been around in the 1980s, the founders of Apple might’ve still been in jail today, serving out long sentences for trafficking in illegal access devices.”

And there’s another reason why tougher laws on hacking would have a chilling effect on innovation: they would do nothing to push corporations to correct fatal security flaws before hackers find them. As we already know from experience, the last thing corporations want to do is add an extra cost layer to their products by taking action to correct security flaws – even when they know the potential implications of a major security breach. If they know that the law will make it easier to recoup damages from hackers, they could have even less incentive to find all possible security flaws.

In the case of Ashley Madison, the current hacking case du jour, the company didn’t even bother to encrypt the underlying data, which means that once a hacker got into the company, it was a simple task of scooping up names, addresses and credit card information. You could argue that the hackers who broke into Ashley Madison are criminals, but you could just as easily argue that the company itself was criminally negligent in allowing the security breach to happen in the first place.

If anything, the race to punish similar types of hackers would encourage corporations to deepen their intelligence and security sharing with each other and the government, and that means, you guessed it, even more security surveillance on the Internet. And the more that the tech sector becomes infected with a security surveillance mind-set, the worse it is for innovation.

To see how all this might play out, consider President Obama’s proposed crackdown on hacking, first announced during the 2015 State of the Union after the high-profile hacking case of Sony Pictures. The proposals, as the Electronic Frontier Foundation pointed out in January, are a “mishmash of old, outdated policy solutions.” The concern is that overzealous application of new laws could be used to prosecute hackers for anything as minor as violating the terms of service of a Web site.

In many ways, the U.S. crackdown on hackers is our new war on drugs. Just as the United States sought to win the “war on drugs” by adding aggressive charges and excessive punishment to round up all the drug dealers, it’s now trying to win the “war on hackers” by stiffening up the federal anti-hacking statutes to round up all the hackers. By toughening the laws on hacking, you might catch the Internet equivalent of all the low-level drug dealers and mules, but it won’t get to the core of the problem – the high-level, anonymous kingpins who live beyond our borders.

And just as massively criminalizing the war on drugs led to a spike in prison terms and a negative economic drag on society, we could see the same thing with tech culture. Any coder, hacker or technology activist would be at risk of running afoul of the government and its stepped-up campaign against hackers, much as Aaron Swartz ran afoul of the government.

Maybe tougher hacker laws will scare the youngest generation away from a life of crime by making clear that they could earn jail time and felony charges for clicking on a single unauthorized link or sharing a single password. But those same laws could scare them off a life of computers, and that would be the greatest shame, because it would shut down the innovation pipeline of the nation. As we’ve seen before with other cyber legislation, whenever the government thinks it’s doing what’s best for business, it’s not necessarily doing what’s best for innovation.


Brands Can Learn from Customer Conversations on Social Media

Consumer conversations surprise brands in a lot of ways. Customers often care more about some things than a brand realizes and less about what the brand thinks is important. This can affect social media, marketing and even product decisions.

In recent years, General Mills learned through customer conversations that families were playing with the Pillsbury dough, making shapes and designs for fun; not just cooking with it. Based on this insight, General Mills revitalized a tired brand by focusing on the family activity value the product enables.

Learning more about your customers from social and impacting your business is not new. Several years ago, Sun Microsystems discovered software developers were not talking about the category of tools they, IBM, and Microsoft were developing. Rather, they were talking about a set of tools nobody was advancing. With this insight, Sun changed its strategy, reallocated its budgets to the tools developers were interested in, and leaped ahead of its competition.

Customer conversations can also reveal that the message the brand intends to transmit via its marketing is not the message being heard. Either the marketing needs to be improved, the target audience changed, or a fundamental product change is needed. Conversations can also reveal customer support issues. In some cases, customers will solve each other’s problems through those conversations.

Social data from conversations outside your brand – say at the product category or target audience level – reveals insights about what consumers care about and why. This can inform your product development, business strategy, messaging and positioning, and the tactics you pursue to win customers.

Social data from customer conversations can provide brand, product, and consumer insights that you can apply to everyday marketing decisions. Social provides a rich, real-time, rejuvenated data source that brands are using to make smarter decisions.

Then there is the business benefit of higher customer satisfaction with lower costs by managing customer support issues in social media. Your customers are already there, trying to talk to you and trying to find resolution. Increasingly, the category leaders will be the companies who create positive customer experiences on both the pre and post sales side.

Phone support costs anywhere from $10 per inquiry (offshore and fully costed) to $23 per inquiry (US average for typical consumer products) to over $100 per inquiry (cable TV), with email and chat typically running $10/inquiry. Customer support via social media discussions (a discussion thread on Facebook or a community web site discussion board) can be as little as $2/inquiry if a support agent is involved, or $0 marginal cost when customers answer each other’s questions, which they like to do.
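Those per-inquiry figures make the cost argument easy to model. A minimal sketch using the costs quoted above; the monthly inquiry volume and channel mix below are hypothetical:

```python
# Illustrative comparison of monthly support costs using the per-inquiry
# figures cited above. The inquiry volumes and channel mixes are hypothetical.

COST_PER_INQUIRY = {
    "phone_us": 23.0,     # US average for typical consumer products
    "email_chat": 10.0,   # email and chat
    "social_agent": 2.0,  # social thread handled by a support agent
    "social_peer": 0.0,   # customers answering each other (marginal cost)
}

def monthly_cost(mix: dict) -> float:
    """mix maps a channel name to the number of inquiries it handles."""
    return sum(COST_PER_INQUIRY[channel] * n for channel, n in mix.items())

phone_heavy = {"phone_us": 10_000}
social_shift = {"phone_us": 4_000, "social_agent": 4_000, "social_peer": 2_000}

print(monthly_cost(phone_heavy))   # 230000.0
print(monthly_cost(social_shift))  # 100000.0
```

Shifting 60 per cent of a phone-heavy workload to social channels cuts the hypothetical monthly bill by more than half, which is the economic case the passage above is making.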

Social media empowers your customers to help each other, which is a rewarding experience for them and builds a knowledge base for the brand. Generally, your customers as a whole know more about the products than the brand does. In social, we say, “Nobody is as clever as everybody.”


Personalizing the Global Entertainment Experience

The barriers and borders around content consumption are dropping.

With users being increasingly able to share the viewing experience within social networks there is now a great demand for TV programming to be made available across borders much sooner. As the next generation of TV evolves we are seeing greater links between major social media, recommendation services and analytical engines; all of which combine to enable consumers to give and receive recommendations from their network wherever they are in the world.

OTT services like Netflix are already responding to this. Its original series House of Cards is a perfect example. Netflix famously chose to commission two seasons of House of Cards for $100 million without ever seeing a pilot. It based the decision on a meticulous analysis of the viewing habits of its 44 million subscribers worldwide by running the data on a number of factors: the actors, director David Fincher and the type of cinematography he creates, its subscribers’ reaction to the original (British) House of Cards released in 1990, and whether the online audience tends to enjoy political drama.

Additionally, locally produced content such as Nordic-noir thrillers and dramas like The Killing and Borgen have opened up new international audiences through word-of-mouth shared by fans across the world. Consumers are also changing their viewing habits in favour of “TV Everywhere.” They are no longer tied to viewing content on a broadcaster’s schedule, or limited by access to the TV.

They want content whenever and wherever they are. They want the ability to “binge” watch, and access their favourite content even while traveling overseas. In November 2013, Saffron Digital worked with ITV – the UK’s largest commercial television network – to launch ITV Essentials, an international subscription VOD service which gives ex-pats and holidaymakers the opportunity to watch a selection of ITV’s most popular programmes while abroad.

In an evolving and highly competitive digital market, ITV Essentials is a way for ITV to respond to the challenges of new connected technologies, changing patterns of user behaviour and new market entrants. The overall intention of ITV Essentials is to take a cost-effective approach to creating a new subscription-focused (non-advertising-based) revenue stream. Online video platforms like Saffron Digital’s MainStage are enabling studios, content owners and broadcasters like ITV to generate revenue from fans and audiences across the globe.

It enables them to quickly and cost-effectively launch or add premium channels that can deploy and evolve new business models and markets for their content. These end-to-end platforms also give providers the tools to support multiple languages, subtitles, currencies and territory rights, and to launch in new markets as and when the content rights are signed and audiences demand it.

Localizing the experience

However, content creators need to be careful about striking the right balance of keeping cultural preferences in mind while ensuring programming that can cross international and cultural boundaries.

Historically, companies have approached new markets on a country-by-country basis, localizing content per territory. The real gain comes from doing this once and keeping the content in a central place for use across many outlets and countries. Taking content, applying multiple languages, audio or subtitles, and then choosing the optimum combination from a central repository saves repeating the process many times. Service providers can additionally manage and restrict how content is accessed geographically, whilst cloud deployment ensures high availability, performance and the ability to scale effortlessly and cost-effectively with the peaks and troughs of global demand.
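The central-repository approach described above can be sketched as a small data model: one master asset carrying all of its available language tracks, with per-territory rights checked at request time. Everything here (title, territories, track sets) is a hypothetical example, not a real platform API:

```python
# Sketch of a central content repository: one master asset with multiple
# language tracks, assembled into a territory-specific package on request.
# Title, territories, and available tracks are hypothetical.

ASSET = {
    "title": "Example Drama S01E01",
    "audio": {"en", "ja", "de"},            # available audio tracks
    "subtitles": {"en", "ja", "de", "fr"},  # available subtitle tracks
}

# Geo-restriction flags: which territories hold rights to this asset.
RIGHTS = {"UK": True, "JP": True, "US": False}

def package(territory: str, audio: str, subs: str) -> dict:
    """Assemble one audio/subtitle combination for a licensed territory."""
    if not RIGHTS.get(territory, False):
        raise PermissionError(f"{ASSET['title']} not licensed in {territory}")
    if audio not in ASSET["audio"] or subs not in ASSET["subtitles"]:
        raise ValueError("requested tracks not available for this asset")
    return {"title": ASSET["title"], "audio": audio, "subtitles": subs}

print(package("JP", audio="ja", subs="en"))
```

The point of the design is that localization happens once, against the master asset, and every outlet then draws the combination it needs rather than re-localizing per country.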

It’s getting personal

Personal relevance is also proving to be an important part of a successful multi-screen strategy. One of the challenges of having hundreds of items of content available on demand at any time is finding content that is relevant for you at the time you wish to watch it. For example, people tend to watch different types of programming in the morning or during their commute than they do when at home in the evening. When consumers are not bound to a linear TV schedule the world of content and viewing options they have is a lot larger.

They need new mechanisms for content discovery that make it easier to find the content they want to watch. This makes personalization key in helping users discover content that is of interest to them. Service providers need to utilize search, recommendations and social features that track what consumers have watched and suggest similar content. One of the biggest advantages of OTT over traditional TV broadcasting is that you have access to so much more data and insight on which to base these discovery mechanisms, and therefore much more information with which to offer users a personalised experience.
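One common way to build the “watched X, suggest similar” mechanism described above is content-based filtering: score each unwatched title by how much its tags overlap with the tags of what the user has already watched. A minimal sketch, with invented titles and tags:

```python
# Minimal content-based recommendation sketch: rank candidate titles by
# Jaccard similarity of their tags against the user's watch history.
# All titles and tags here are hypothetical examples.

CATALOG = {
    "Borgen":         {"drama", "political", "nordic"},
    "The Killing":    {"drama", "crime", "nordic"},
    "House of Cards": {"drama", "political", "us"},
    "Cooking Show":   {"lifestyle", "cooking"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(watched: list, top_n: int = 2) -> list:
    # Pool the tags of everything the user has watched...
    profile = set().union(*(CATALOG[title] for title in watched))
    # ...then rank unwatched titles by tag overlap with that profile.
    candidates = [t for t in CATALOG if t not in watched]
    candidates.sort(key=lambda t: jaccard(profile, CATALOG[t]), reverse=True)
    return candidates[:top_n]

print(recommend(["Borgen"]))  # political/nordic dramas rank first
```

A real service would blend this kind of algorithmic score with human curation, which is exactly the filter-bubble mitigation the article goes on to discuss.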

We have been working with Japanese telco KDDI to personalise its Video Pass subscription service for the past two years, focusing our data tracking and analytics on time-sensitive content recommendations.

One example is delivering users personalized content during the unique peak travel time scenarios in Tokyo. We found episodic TV content to be a perfect short-form fit to consume during their commute, and so surfaced and promoted this type of TV content when we saw users choosing and downloading content prior to their daily travel times.

However, one of the risks of any in-market personalization or recommendation service is that users end up in a ‘filter bubble’.

They are only suggested content based on previous viewing habits, which are skewed by the availability and type of content they consume digitally via a single service. This can mean users are only recommended content they are probably already aware of, rather than led to discover new content which may be less obvious but which they will like.

When choosing an OTT platform provider it is important to ensure that it can blend both algorithmic and human-based curation in order to deliver a content mix that is both personally relevant but also highlights new and interesting content as it emerges.

Taking it a step further

We are also seeing the emergence of “Superfan” OTT channels that are connecting celebrities to their fan bases with direct to consumer TV channels.

Is this the ultimate personalization? Individual consumers choosing to subscribe to services that have exclusive access to the exact programming they demand? We recently launched the Paula Deen Network, a multi-platform service which features a mix of cooking and lifestyle shows, with exclusive value-added content and community engagement across smartphones, tablets and web.

Its appeal has been very evident. Superfan TV disintermediates the traditional TV networks and enables talent and content owners to generate significant revenues by engaging their fans directly.

Superfans typically represent up to 25 percent of a star’s fan base, and by going direct to consumer with a premium subscription service there is potential to generate a monthly fee from each of these fans, whilst providing them with content they care about. Is this the future? With the advent of direct-to-consumer OTT services powered by specific celebrities or brands, like the Paula Deen Network, does this herald a shift to new content packaging and consumption models?

Are consumers willing to pay a la carte for specific access to content they are passionate about, or will the cost advantages of the bundling provided by Pay-TV operators reassert themselves within the OTT space? Ultimately, key developments within OTT are likely to come from providing more unique, personalized content mixes for each individual user – no matter where they are in the world.


Amazon Surges on Q2 Results, Now Worth More Than Walmart

Amazon’s stock climbed as much as 17 percent in after-hours trading on Thursday as the company posted second-quarter financial results that easily exceeded analyst expectations.

Amazon earned 19 cents a share on $23.2 billion in revenue for the period, easily surpassing estimates of a loss of 14 cents per share on $22.4 billion in revenue.

With the after-market stock move, Amazon’s market value is now north of $250 billion, surpassing Walmart, the world’s largest brick-and-mortar retailer, for the first time.

Revenue in the company’s AWS unit, which sells cloud computing and data storage services, grew 81 percent year over year to $1.8 billion. Amazon first broke out AWS results in the first quarter of this year, when revenue had posted 49 percent year-over-year growth. The division’s operating profit margin was 21 percent for the quarter, up from 17 percent in the first quarter, which at the time was already a pleasant surprise to analysts.
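The growth and margin figures above imply a couple of numbers the article doesn’t state directly. A quick derivation, using the rounded figures quoted, so the outputs are approximate:

```python
# Deriving implied AWS figures from the rounded numbers reported above.
# Inputs are the article's figures; outputs are approximate derivations.

q2_2015_revenue = 1.8e9   # AWS Q2 2015 revenue (USD)
yoy_growth = 0.81         # 81% year-over-year growth
q2_margin = 0.21          # 21% operating profit margin

q2_2014_revenue = q2_2015_revenue / (1 + yoy_growth)
q2_operating_profit = q2_2015_revenue * q2_margin

print(f"Implied Q2 2014 AWS revenue: ~${q2_2014_revenue / 1e9:.2f}B")
print(f"Implied Q2 2015 AWS operating profit: ~${q2_operating_profit / 1e6:.0f}M")
```

That puts year-ago AWS revenue just under $1 billion and operating profit in the high $300-millions, roughly consistent with the $391 million in operating income reported elsewhere in these results.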

North American revenue grew 26 percent year over year in the second quarter, fueled by the popularity of Amazon Prime and increased product selection in categories such as fashion.

In a call with reporters, new CFO Brian Olsavsky said there is “certainly” a correlation between speeding up delivery, with same-day delivery services such as Prime Now, and increased spending.

“It gets us into the consideration set for more immediate purchases,” he said.

North American operating margin rose to 5 percent in the quarter, up from 3 percent in the same period last year. Olsavsky suggested that the improvement is partly due to Amazon’s opening of product sorting centers closer to city centers, which shortens delivery times and brings down costs.

Amazon Reports Unexpected Profit, and Stock Soars

We have reached Peak Amazon, or perhaps Prime Amazon.

The e-commerce company beloved by Wall Street for its fast-growing ways did something completely out of character in the second quarter: It made a profit.

It was only $92 million, practically a rounding error for Google or Apple. But it confirmed all the hopes and expectations of analysts and investors, who immediately pushed Amazon shares up 17 percent in after-hours trading Thursday to $566.

The surge added another $40 billion or so to Amazon’s market cap. That will almost assuredly propel it to be more valuable than Walmart for the first time when the stock market opens Friday, making this a deeply symbolic moment for e-commerce and the Internet. It is also a nice present for Amazon, which celebrated its 20th birthday last week.

“Holy cow, what a quarter,” said Jason Moser, an analyst with the Motley Fool website and an Amazon investor. “They blew that thing out of the water.”

Amazon’s second-quarter profit amounted to 19 cents a share. A year ago, the company lost 27 cents a share. Analysts had been predicting another loss of 13 cents.

Revenue was better than expected, too, up 20 percent to $23.2 billion. That was about $800 million more than forecast.

One big contribution to the improved profit and revenue was Amazon Web Services, the cloud computing division whose numbers were broken out for the first time in the first quarter. A.W.S. is the undisputed leader in the sector, outdistancing both Microsoft and Google, but analysts had been wondering if price cuts would hamper growth.

Apparently not. A.W.S. revenue rose 81 percent to $1.82 billion from a year ago, an even faster pace than last quarter. In the first quarter, A.W.S. revenue was up 49 percent.

Operating income for the cloud division rose to $391 million from $77 million. No wonder analysts on a conference call with Amazon financial executives offered their congratulations.

Even before the after-hours surge, Amazon shares were up 55 percent this year. Analysts have been rushing to upgrade their already enthusiastic ratings.


YouTube Revamps Its Android App

YouTube CEO Susan Wojcicki has spent the past few months telling everyone that the world’s biggest video site is all about mobile, mobile, mobile. So here’s one way to underscore it: A revamped mobile app and mobile site.

Android and iOS users can get the update now. There’s a slicker design and new features that are supposed to make it easier to edit and upload footage to the site, which already processes 400 hours of new content every minute.

Update: Earlier Thursday, YouTube announced that the update would initially only be available for Android users and that an iOS version would be ready “very soon.” That was fast: The iOS update is now out, too.

And, as sharp-eyed bloggers noticed earlier this week, the new app allows you to watch vertical video in full screen, since that’s the format so many people use when they shoot video.

Now picture Snapchat CEO Evan Spiegel, smiling.

Wojcicki introduced the new app onstage at Vidcon, the annual YouTube fan convention that also doubles as a video industry convention; this year's event, held just outside Disneyland in Anaheim, Calif., expects an astonishing 21,000 visitors. Wojcicki also used her stage time to soothe and encourage the people who make videos for YouTube, who are constantly wondering about the best way to interact with the site: Should they devote all of their resources to it? Most of their resources? Or should they look for a new home?

This time around, the big question for YouTube and its advertisers and video makers is about Facebook: now that the social network is providing real competition (the first real competition YouTube has faced), how will the site respond? But Wojcicki and her crew aren't interested in answering that one onstage.

Also unaddressed today: YouTube’s plan for its music subscription service, which has spent a very long time in beta mode, and an ad-free subscription service. YouTube executives continue to say that they expect to launch those this year; most people in the Web video world expect the two of them to be bundled together.


How To Help Call Center Employees Not Hate Their Jobs

It’s been called the "electronic sweatshop" and its employees "digital slaves." It’s the call center, and if you’ve ever been on the other end of a (scripted) call from a telesalesperson or a customer service associate, you’ve glimpsed how soul-sucking such rote work can be.

On their end: hours of sitting shoulder-to-shoulder in a cubicle farm, chatting with customers who range from clueless to chafed to irate, all for a paycheck that barely peeps over minimum wage (if they're fortunate enough not to have their pay based on commission). Pile on pushy supervisors and a lack of upward mobility, and it's no wonder turnover is high; at centers with staff exceeding 1,000 agents it can go as high as 70%.

It’s workplaces such as this that present the ultimate challenge for companies like Tenacity.

Tenacity is a spin-out from Alex Pentland’s Human Dynamics Group at MIT and a Techstars startup. It's a cloud-based platform that works to improve employee engagement in call centers and other high-turnover industries. It does this by blending "social physics" (think: analyzing big data to understand human behavior) with medical science and machine learning to engage employees.

Motivated employees are big business. Engage them and watch the profits roll in: new research found that the highest levels of growth (between 10% and 15%) occurred at companies whose staff were highly engaged. Ignore them, and it won't be long before absentee rates start to tick up and productivity plummets.

At call centers, management is spread thin and managers are hard-pressed to provide the kind of support required to engage employees through mentoring and team building, according to Tenacity CEO Ron Davis. But he says technology can make the task easier by creating new ways to facilitate collaboration, build resilience, and form habits that keep employees from burning out.

Beyond Gamification

To test-drive their engagement strategies, Tenacity did a pilot program at one call center that is part of a $30 billion telecom company. In three months, Tenacity reduced turnover by two-thirds, from 6.6% to 2.2%. Davis says they accomplished this by addressing two key factors: behavioral and social.

The behavioral piece includes modules such as an app for guided breathing exercises for the employees to use independently, and encouraging moderate physical activity. Davis says these are more effective at changing behavior than traditional incentives and social games.

We’ve covered how other companies use gamification strategies to boost engagement, most notably Bunchball, which also tackled the tricky business of call centers using a leaderboard as well as badges and reward points to encourage social interaction and collaboration. In brief: Gamification works because it plays on both intrinsic (I want to be in charge and make progress toward goals that will make a difference and get recognized by my peers) and extrinsic (completing a task in a prescribed way) motivators.

It’s not that gamification can’t improve retention, but Davis believes Tenacity takes it one step further because call centers are already de-facto gamified. "Everyone has five to 20 key performance indicators, and they are judged entirely based on these," Davis explains. "They are all public, and there are competitions, bonuses, promotions, and firings based on them." Davis contends that adding quests and badges doesn't do much in this environment "unless you are walking into a call center that is 20 years behind in terms of workforce optimization and performance tracking."

With the Tenacity pilot, Davis is already seeing evidence of behavioral change. "Interestingly, beside the big initial increase (from zero, of course) by the end of the first month, we then saw breathing exercises per capita double again by the end of the fifth month. Not only sustained behavior change, but increasing behavior change," he says.

On the social front, call center denizens are also at a disadvantage because they are often tethered to their headset. The Tenacity pilot offered individual and team-based challenges to combat the solitary work mode by strengthening existing relationships and creating new ones, says Davis. This not only builds resilience, but the presence of work BFFs has been shown to improve engagement and productivity.

Davis says that eventually, the data gathered from these interactions creates a social network map that can be "tuned" to address problems and even optimize the best time for individuals to take breaks. Though Davis can't reveal anything more about the analytics or the social tuning ("Both get a little bit more into our secret sauce," he says), the larger point is that most call centers already know how to push people harder, faster, further. "This tends to make the work feel rote, it stresses people out, and it's socially isolating," he says.

Tenacity's intervention is all about tapping into human motivation, but then using that motivation to increase self-care and care of others, and to build social capital and meaning in work, not to get people to try to get their average handle times down by five seconds, says Davis. "It turns out that by focusing on the people rather than the work product, we are addressing the neglected part," he adds. "The great thing for our business is that this also produces spectacular results in terms of work product and retention."


“Over-the-Top” TV Viewing Is Higher in Gen X, Kid Households

At a time when streaming video viewing is upending TV business models, a new GfK study sheds light on the phenomenon, including which Internet-connected devices and services different viewers prefer.

The 2015 Ownership and Trend Report, from GfK’s The Home Technology Monitor™, reveals that households with at least one member of Generation X (roughly ages 35 to 49), and those where children (ages 17 and under) are present, are much more likely to stream video and view other content using an Internet-connected device attached to a TV – also known as “over-the-top” (OTT) viewing.

More than half (54%) of homes with kids view OTT content on a TV set, compared with a national average of 40% of all TV households. Child households are also significantly more likely than those without kids to be using all four key devices to watch OTT on a set.

The research shows that streaming video is now the third most common online activity, behind social networking and online shopping. This means that streaming is now reported to be more prevalent than listening to music online, instant messaging, and Internet gaming.

Among ethnic and racial groups, Hispanics (42%) and whites (40%) are at roughly the national average in their OTT use, while African Americans (29%) are significantly below. In terms of devices, Hispanics are much more likely than whites to use smart TVs and videogame systems for streaming OTT content to a TV set.

“The old stereotype of an OTT viewer hunched over a laptop or tablet is very much out of date,” says David Tice, Senior Vice President in GfK’s Media and Entertainment practice. “Rapid adoption of smart TVs and digital media players over the past three years has pushed OTT to the biggest screens in the home, with attendant expectations from consumers that OTT quality should be as good as regular TV service, and as easy to use as mobile OTT options.”

EEMEA OTT TV & Video Market to Add $2.2 billion

OTT TV and video revenues in EEMEA [19 countries] will reach $2,635 million in 2020; up from only $52 million recorded in 2010 and the $616 million expected in 2015, according to Digital TV Research’s Eastern Europe, Middle East & Africa OTT TV & Video Forecasts report.

Of the $2.21 billion in revenues to be added between 2014 and 2020, Russia will contribute $795 million, with Turkey bringing in a further $219 million. Russia will remain the largest revenue earner by some distance.

Simon Murray, Principal Analyst at Digital TV Research, said: “OTT in Eastern Europe, Middle East & Africa will still be an immature sector by 2020, although this is an improvement on the very immature status by end-2014 and its nearly non-existent status in 2010.”

SVOD will become the region’s largest OTT revenue source in 2017. SVOD revenues will total $1,568 million by 2020 (60% of total OTT revenues) – up from only $3 million in 2010 (6% of total OTT revenues).

Digital TV Research forecasts 24.22 million SVOD homes by 2020, up from 69,000 in 2010 and an expected 3.13 million by end-2015. Russia will overtake Poland to become the largest SVOD country in 2015. Of the 22.71 million SVOD home additions between 2014 and 2020, Russia will supply 8.97 million, Turkey 2.37 million and Poland 2.07 million.

By 2020, 12.3% of the region’s TV households will subscribe to a SVOD package, up from only 0.8% by end-2014. Penetration rates will vary considerably: from 30.0% in Israel to 1.5% in Egypt.


Worldwide Smartphone Market Posts 11.6% YoY Growth in Q2 2015

According to the latest preliminary release from the International Data Corporation (IDC) Worldwide Quarterly Mobile Phone Tracker, vendors shipped a total of 337.2 million smartphones worldwide in the second quarter of 2015 (2Q15), up 11.6% from the 302.1 million units in 2Q14.

The 2Q15 shipment volume represents the second highest quarterly total on record. Following an above-average first quarter (1Q15), smartphone shipments still remained slightly above the previous quarter thanks to robust growth in many emerging markets. In the worldwide mobile phone market (inclusive of smartphones), vendors shipped 464.6 million units, down 0.4% from the 466.3 million units shipped in 2Q14.

"The overall growth of the smartphone market was not only driven by the success of premium flagship devices from Samsung, Apple, and others, but more importantly by the abundance of affordable handsets that continue to drive shipments in many key markets," said Anthony Scarsella, Research Manager with IDC's Mobile Phone team. Although premium handsets sold briskly in developed markets, it was emerging markets, supported by local vendors, driving the momentum that heavily contributed to the second highest quarter of shipments on record. "As feature phone shipments continue to decrease, vendors will continue to attack both emerging and developed markets with competitive smartphones that are both rich in features and low in price," added Scarsella.

"While much of the attention is being paid to Apple and Samsung in the top tier, the smartphone market in fact continues to diversify as more entrants hit this increasingly competitive market," said Melissa Chau, Senior Research Manager with IDC's Mobile Phone team. "While the Chinese players are clearly making gains this quarter, every quarter sees new brands joining the market. IDC now tracks over 200 different smartphone brands globally, many of them focused on entry level and mid-range models, and most with a regional or even single-country focus."

Smartphone Vendor Highlights:

Samsung remained the leader in the worldwide smartphone market but was the only company among the top five to see its shipment volume decline year over year. The new Galaxy S6 and S6 edge arrived with mixed results as a limited supply of the edge models did not keep pace with the demand for the new curved handset. Older Galaxy models, however, sold briskly thanks to deep discounts and promotions throughout the quarter. All eyes will now be on the early release of the pending Note 5 and rumored S6 edge plus to come this August.

Apple's second quarter proved to be its biggest fiscal third quarter ever with 47.5 million units shipped. The iPhone once again continued to dominate in China, where shipments remained buoyant after a strong first quarter. The larger-screened iPhones, along with the rapid expansion of 4G networks in China, continued to drive momentum for Apple in Asia/Pacific. As smartphone saturation continues to climb in maturing markets such as China, Apple will look to drive upgrades with refreshed "S" models in the following quarter.

Huawei captured the number 3 position thanks to strong European sales as well as domestic sales that led to a staggering 48.1% year-over-year growth. Huawei's mid-range and high-end models continue to prove successful with the flagship P8, Honor Series, and Mate 7 handsets delivering sustainable growth both in the consumer and commercial segment. Huawei will now look beyond Europe and Asia/Pacific as its latest P8 Lite handset launched in the U.S. (as an unlocked model) for only $250 earlier in the quarter.

Xiaomi continues to find success in its home country thanks to both premium and entry-level devices like the Mi Note and Redmi 2 handsets, which helped Xiaomi achieve a 29.7% year-over-year increase. With a significant presence in India and Southeast Asia, Xiaomi is now looking to bulk up its IP portfolio to expand its reach even further outside of Asia/Pacific, starting with Brazil.

Lenovo, the third and final Chinese OEM on the list, captured the final spot despite steep home turf competition from both Xiaomi and Huawei. Outside of China, Lenovo continued to witness success in many emerging markets such as India with entry-level and mid-range models like the A600 and A7000, sold via Internet retail channels. The Motorola brand within the Americas and Europe continues to thrive with the ultra-affordable second generation Moto E and entry-level to mid-range Moto G devices. The pending launches of a third generation Moto X and Moto G look to be on the horizon for the second half of 2015.


Under the Smart Hood: 5 IoT Tech for Cars

From smart lights illuminating the way, to facial recognition systems that keep drivers from falling asleep at the wheel, smart cars will reshape the auto industry.

The UK government this week put forward £20 million of funding for the country's smart car revolution.

CBR sums up five key technologies that every smart vehicle will have built-in in the future.

1. Smart lights

The use of smarter lighting systems in cars will help prevent accidents in darker driving conditions, according to manufacturers.

Ford is currently developing a new technology in this space, which it calls "Spot Lighting". The solution uses a front-mounted infrared camera that works together with GPS data to light the way through hard-to-see routes.

The lights learn the driver's regular trips, and adjust the lighting automatically as the car drives down the road.

Mercedes is another auto manufacturer with smart light solutions already in the consumer market. Drivers of the Mercedes CL550 with Premium Package 2 have a smart infrared camera built into the front of the car that transmits real-time images to a screen in the instrument cluster. Drivers are then able to see obstacles and increase light intensity if needed.

2. Mapping

Giving intelligence to cars and roads to communicate in real time has led Nokia to invest $100 million in the development of smart car technology.

In May 2014, the company said it had 6,000 employees working on mapping solutions for vehicles. HERE, a Nokia business currently on sale, and Mercedes have already built a 3D digital map of the route that the first Benz Patent-Motorwagen took 125 years ago from Mannheim to Pforzheim, Germany.

The solution offers the same information normal maps do, but it also gives the number and direction of lanes, traffic signs along the route and exact coordinates of traffic lights.

TomTom and Bosch have also come together to develop technology that will give drivers real-time mapping solutions. The venture is already rolling out automated vehicles in Germany that receive up to the minute information on traffic conditions, speed cameras and other road data.

3. Autonomous driving

The U.S. Department of Transportation's National Highway Traffic Safety Administration (NHTSA) has established five different levels of vehicle automation.

The documentation goes from Level 0, in which the driver is in complete and sole control of the primary vehicle controls, to Level 4, where the vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip.
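The NHTSA classification can be sketched as a simple lookup. The level numbers and one-line summaries below follow NHTSA's 2013 definitions; the code itself (names, helper function) is purely illustrative:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """NHTSA's five levels of vehicle automation (2013 policy)."""
    NO_AUTOMATION = 0        # driver in complete and sole control
    FUNCTION_SPECIFIC = 1    # one or more specific functions automated
    COMBINED_FUNCTION = 2    # at least two primary functions automated together
    LIMITED_SELF_DRIVING = 3 # driver cedes full control under certain conditions
    FULL_SELF_DRIVING = 4    # vehicle performs all safety-critical functions

def requires_human_fallback(level: AutomationLevel) -> bool:
    """Below Level 4, the driver must remain available to take over."""
    return level < AutomationLevel.FULL_SELF_DRIVING

print(requires_human_fallback(AutomationLevel.LIMITED_SELF_DRIVING))  # True
print(requires_human_fallback(AutomationLevel.FULL_SELF_DRIVING))     # False
```

Note that NHTSA's scheme predates, and differs slightly from, the six-level SAE scale used more widely today.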

Driverless cars are a big venture for Google, which is currently testing vehicles in California. The company expects to make the first Lexus driverless car range available by 2020.

Volvo is also rolling out its driverless car solutions in Sweden with the "Drive Me" project. The company aims to show people that automated cars are as safe as today's vehicles.

Last year, Swiss auto think tank Rinspeed unveiled the XchangE car. The solution takes advantage of the car driving on its own to transform the interior of the vehicle into an office on wheels.

In the model, front seats can be turned around to face the back seats and ease meetings. With its own wireless 4G connection, the car also boasts several touchscreens throughout.

4. V2V communication

Vehicle-to-vehicle (V2V) or vehicular communication systems are wireless systems built into vehicles that obtain real-time data from other cars and the environment surrounding them.

The communication technology for cooperative ITS and Car-2-Car Communication is derived from the standard IEEE 802.11, according to the Car 2 Car Communication Consortium.

Cadillac has been one of the early adopters of the "intelligent and connected" V2V movement and has announced the first smart fleet for 2017.

An all-new 2017 vehicle will have advanced driver assistance technology called "Super Cruise" and the 2017 Cadillac CTS will be enabled with V2V communication technology.

General Motors, the company behind the "Super Cruise" technology used by Cadillac, said that V2V communication technology could mitigate many traffic collisions and improve traffic congestion by sending and receiving basic safety information such as location, speed and direction of travel between vehicles that are approaching each other.
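The basic safety information GM describes boils down to a small, periodically broadcast payload. A minimal sketch of such a message, assuming JSON encoding and hypothetical field names (real V2V deployments use compact binary message sets, not JSON):

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    """Illustrative V2V payload carrying the fields GM describes:
    location, speed, and direction of travel. Field names are
    hypothetical, not taken from any published V2V specification."""
    vehicle_id: str
    latitude: float      # degrees
    longitude: float     # degrees
    speed_mps: float     # metres per second
    heading_deg: float   # 0-360, clockwise from north
    timestamp: float     # epoch seconds

    def encode(self) -> bytes:
        """Serialize for broadcast to nearby vehicles."""
        return json.dumps(asdict(self)).encode()

# A car travelling west at 50 km/h broadcasts its state:
msg = BasicSafetyMessage("veh-42", 52.52, 13.405, 13.9, 270.0, time.time())
payload = msg.encode()
```

Receiving vehicles decode these broadcasts from approaching cars and use the position/speed/heading data to predict potential collisions, which is the mitigation GM refers to.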

5. Facial recognition

UK Government statistics suggest that 20% of accidents on major roads are caused by lack of sleep, with 40% of these incidents involving commercial vehicles.

Recent technology developed by ARM will gear up cars with facial recognition cameras in the cockpit to ensure drivers keep their eyes on the road.

The system will use a camera installed in the rear-view mirror of the vehicle to read drivers' facial expressions.

It will constantly monitor users' faces and in the case of distraction, or if drivers fall asleep, the car will wake them up via an alarm, a shaking steering wheel or by making the seat vibrate.
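A common way such systems decide when to trigger the alarm is to watch for sustained eye closure rather than single blinks. The sketch below assumes the camera pipeline has already been reduced to a per-frame "eyes closed" boolean; the frame rate and threshold are illustrative, not taken from ARM's system:

```python
def drowsiness_alert(eye_closed_frames, fps=30, threshold_s=1.5):
    """Return True if the eyes stay closed continuously for at least
    threshold_s seconds, suggesting the driver is dozing off rather
    than blinking. eye_closed_frames is a per-frame boolean stream."""
    longest_run = current_run = 0
    for closed in eye_closed_frames:
        current_run = current_run + 1 if closed else 0
        longest_run = max(longest_run, current_run)
    return longest_run / fps >= threshold_s

# 60 consecutive closed frames at 30 fps = 2 s with eyes shut -> alert
print(drowsiness_alert([True] * 60))         # True
# Regular blinking (alternating frames) never crosses the threshold
print(drowsiness_alert([True, False] * 60))  # False
```

On a positive result, the vehicle would fire the escalating alerts described above: an alarm, a shaking steering wheel, or a vibrating seat.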


Utilize Video in Your Sales Strategy

Using video to generate leads, enhance your pitch and close the deal

Content is no longer exclusive to the marketer’s toolkit. There is a common need for good content to support both marketing and sales goals; content that spurs engagement, drives conversion and meets the needs of your customers.

Video is a powerful tool that can work across departments and across the sales cycle. For marketers, video is often a key component of the marketing mix. Its purpose is focused on positioning the brand, promoting products and services and generating leads for the sales team.

The sales opportunity lies in leveraging video content in the sales strategy, from introduction to follow up. In order to succeed, your sales team needs the right content to complement their calls, pitches and proposals. When marketing and sales work together to develop content and share resources that meet the needs of their customers, success is realized.

Here are some tips for effectively incorporating video in your sales strategy:

Arm the sales team with the right content.

The first step towards integrating video into your sales strategy is to identify the right type of content that engages customers at different stages in the sales cycle. To do this, focus on the needs of your customer: What are customers asking for? What sets us apart from competitors? What matters to our current customers? Use the responses to these questions to engineer the appropriate content to enhance your strategy.

For example, showing how a product works or the technology behind it may eliminate misconceptions, skepticism or conflicting beliefs. A testimonial, on the other hand, might be used to show its application and enhance confidence through social proof. Focus on short video clips that demonstrate key competitive advantages, prove performance and validate your claims.

All too often companies create stand-alone videos that are expected to do too many things and appeal to too many audiences. Don't fall victim to this approach. Coordinate with the marketing team to repurpose video assets and develop multiple versions of your content to meet specific needs and scenarios.

Always make sure you maintain a level of professionalism in the production value and quality of your video. Your content is a reflection of your brand, products and services. Don't show customers a poorly lit, shaky video shot with an iPhone. Make the investment in your business and create a quality video. Planning ahead for content needs can often lead to economies of scale when producing multiple videos. A good video production partner will counsel you on these potential efficiencies and help you tell a better story.

Utilize video at each stage of the sales cycle.

Once the appropriate content is developed, determining how and when to use it is critical. Video can be used in different ways throughout the sales process. For example, in advance of an initial sales meeting, a brand video might be appropriate to tell a story and convey value. This can show that not only is your product or service the right solution, but there is value add of working with your brand. For a less experienced sales person or someone less brand centric, this type of video may be an effective way to articulate your value proposition.

Demo videos can be powerful during pitch meetings to support selling points and leave a lasting impression on the customer. Sharing case study videos is a good way to follow up to reinforce key takeaways from your meetings.

To identify when to leverage each type of video, consider where the buyer is in the sales cycle. What will drive the conversation? What supports the dialogue? What closes the deal? Engage your audience to find out their needs, and tailor the content to them. Having a thorough understanding of your target audience helps determine what type of video will make the most impact, and when it should be delivered.

Consider tools for effective delivery.

Sales pitches can happen anywhere, and your sales force should have convenient tools to support them in a variety of settings. Effective delivery of content is key. Many sales people have moved to utilizing tablet devices. Consider a custom sales support app that allows easy access to video content and other useful selling tools. Further reinforce your selling propositions by embedding video links into proposals. This can further the reach of your video content when the proposal is reviewed by additional stakeholders who were not present in previous meetings.

Train your sales team to effectively use video.

Every selling situation requires a unique approach and each customer can be very different. A good salesperson understands this and prepares accordingly. Creating and deploying video content that seamlessly complements the selling process is critical. Help your staff determine how these tools are most helpful in supporting different conversations during the decision-making process. Arming your team with video and an accompanying script to help close the deal is a win for everyone.

For example, during the meeting, the salesperson shouldn't compete with the video. Consider creating your demo videos without sound so he or she is not talking over a narrator. Incorporate on-screen text and graphics to highlight the key points you make in your presentation. For the follow-up, send testimonial videos that provide further narrative.

Let the conversation drive how video is used. Salespeople should be able to work on the fly during meetings, quickly jumping to specific aspects of the video depending on what feedback or questions they’re getting.

Video is also a great tool in the absence of deep technical knowledge or a sales engineer. It still allows the salesperson to take the conversation to the next level and explain the complexity of the product or manufacturing process.

But it’s a two way street. Video content – and how it’s used – needs to also fit how your employees sell.

Solicit feedback and monitor consumption.

Marketers have long understood the value of analytics and feedback to measure and modify a content program. Sales teams also need to utilize these tools to manage their video library and implementation strategy.

Regularly collect feedback from your boots on the ground about how video is working. How did they use this tool? How did customers respond? When do they feel video helped them close a sale? What else do they wish they had? Use this qualitative feedback to optimize your efforts and share best practices to benefit the whole team.

Monitoring video consumption can also help you gauge the effectiveness and appetite for your program. Identify what videos your team is sharing and how often. And when sending follow up video content, track who’s viewing and sharing the content.

The key to using video successfully is not to abandon your current sales strategy, but to use video to complement it. A successful video strategy meets your customers' needs, complements each step of the sales cycle and is delivered effectively to win more business.


IoT Standards Wars

For the Internet of Things to reach its full potential, a single communications standard is needed. A range of devices and "things" need to be able to communicate with the cloud and, perhaps more importantly, with one another. Several early standards and frameworks have emerged in an attempt to define the wider framework that will enable IoT interconnection.

The problem is that consortia, foundations, and standards are multiplying. These consortia are, somewhat paradoxically, competing to be the most open and interoperable.

The Internet of Things requires an agreed-upon communications standard so that the information generated by devices can be shared and cross pollinated to create new and useful cross-functionality. Data isn’t as useful when it exists in silos.

A single standard will enable all these devices from different manufacturers to communicate with one another, clouds and proprietary data clouds, and do it securely and privately. The Open Interconnect Consortium and AllSeen Alliance both want there to be a single standard for IoT. The problem is they don’t necessarily agree with one another and each insists that it should be the standard.

OIC and AllSeen are two of the more recently active competing standards among several. Both recently announced impressive membership gains and market traction.

While they are both member organizations, OIC was born out of Intel, and AllSeen was born out of Qualcomm. Both insist that their way is the right way. So who will be VHS and who will be Betamax?

Recent membership gains for AllSeen include IBM and Pivotal, while OIC added IBM and National Instruments. Not only are there multiple consortia, the member companies also frequently join multiple consortia.

“We’re seeing companies join multiple consortia and placing multiple bets, evaluating from the inside,” said Gary Martz, a member of the OIC marketing team and Intel product manager.

Both Martz and AllSeen senior director Philip DesAutels believe that one standard will eventually emerge.

OIC, the younger of the two, has seen a hockey-stick growth curve with its membership and just released its 0.9.1 specification. The core 1.0 specification will be submitted for review at the end of August.

AllSeen’s framework is the open-source AllJoyn. “AllJoyn’s goal as a software project is to create and maintain and deliver production-ready code,” said DesAutels. “It’s a very mature framework that has been around for about five years and is in tens of millions of products.”

DesAutels added that AllJoyn will likely hit a billion devices in about a week; the open-source code is bundled into Windows 10.

Where AllSeen and OIC Disagree

AllSeen focuses on device-to-device communication and starts at the home, extending outward. OIC was born out of the business applicability of IoT, extending into the consumer space.

AllSeen is different, said DesAutels, because it’s completely open and will never require a specific vendor to implement, and it focuses on product-to-product communications and doesn’t require a cloud in the middle.

Martz said the right approach is a combination of open source and industry specifications. Intel's interest in IoT standards, he said, is simply creating markets.

“Open source will help speed solutions to market, while the industry specification means we can go to other standards bodies and make liaison agreements,” said Martz. “The consumer space can be quicker and looser about security, privacy, and authentication. The enterprise space needs a different approach.”

So who’s right?

IoT and Data Centers

Data center service providers care about IoT because more devices mean more data, more bandwidth and, ultimately, more backend infrastructure. However, both IoT standards organizations believe there are misconceptions out there.

Martz said that while large quantities of data are being collected, the impact on data centers is misrepresented. The storage footprint is very small for the kinds of data being generated. However, he added that data centers really benefit when it comes to manipulating and working with the data through analytics.

AllSeen’s Philip DesAutels believes the data creation angle is a misconception altogether.

“It’s about device-to-device and product communications,” said DesAutels. “We want things in your house to talk through APIs with one another and interact in safe ways. This orchestration requires things to talk locally, openly, through a robust protocol,” as opposed to constantly reporting back to the data center.

DesAutels gave an IoT-enabled lightbulb as an example. A consumer doesn't want to turn on that lightbulb by opening a smartphone app, having the phone talk to a cloud on the opposite coast, and waiting a few seconds for the light to come on remotely.

“That’s not a world that anybody wants,” he said. “That’s a reflection of where we were several years ago with IoT.”

Instead, IoT’s value is about what he dubs “accidental orchestration.” As more devices are IoT-enabled and standardized, they will talk to other devices for cool, useful functionality, much of which hasn’t even been dreamed of yet. A large part of the future of IoT cuts out the data center.

A small example of accidental orchestration, said DesAutels, is when he starts watching a movie on Hulu, communication occurs locally between devices to dim the lights in the room automatically – no need for a cloud in the middle.

Another example could be a link from a refrigerator to an automatic water-control valve that shuts off the water main and sends a text message if there's a potential flooding problem; flooding is the number-one home insurance claim.

“Accidental orchestration is those kinds of thoughtful use cases where pieces stitch together that matter,” said DesAutels.
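As a rough illustration of the idea (this is a toy sketch, not AllJoyn's actual API; every class and event name here is hypothetical), accidental orchestration boils down to devices publishing and subscribing to events on the local network, with no cloud round trip in between:

```python
# Toy local event bus: devices react to each other's announcements directly,
# the way a media player starting playback might dim the lights.

class LocalBus:
    """Minimal publish/subscribe hub standing in for a local IoT framework."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event, handler):
        self.subscribers.setdefault(event, []).append(handler)

    def publish(self, event, **details):
        # Deliver the event to every local subscriber; no cloud involved.
        for handler in self.subscribers.get(event, []):
            handler(**details)

class Lightbulb:
    def __init__(self, bus):
        self.brightness = 100
        bus.subscribe("playback.started", self.dim)

    def dim(self, **_):
        self.brightness = 20  # dim for movie night

bus = LocalBus()
bulb = Lightbulb(bus)
bus.publish("playback.started", title="movie")
print(bulb.brightness)  # 20
```

The point of the sketch is the topology, not the code: the bulb never needed a server on the other coast to learn that a movie had started.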

Accidental orchestration doesn’t cut the data center out of the IoT world altogether.

“Now, if I was the company making that bulb, I have some problems to deal with,” said DesAutels. “I have an ERP to track manufacturing, and I have to keep track of serial numbers, etc. If I sell you a couple of lightbulbs, you activate them, and I have to provision devices, warranty, service, and support; I have to look at devices over time measuring performance over lifetime, usage, in a way that’s associated with you that respects your privacy and has no gaping security holes.”

There will be a ton of activity generated on the backend to make IoT seamless from a user perspective. There will also be frequent communication with data clouds for uses like telemetry in cars or a thermostat getting weather information.

AllSeen isn’t solely device-to-device focused, said DesAutels. It has a gateway agent, a bridging technology that connects the local network to the world. The gateway agent also performs device management and is the gatekeeper for uses like telemetry.

There is also a device system bridge, contributed by Microsoft. “The world is filled with other networks,” said DesAutels. The device system bridge provides connectors for proprietary, specialized systems, such as Bacnet and EnOcean.

There's a bit of foundation fatigue out there; there are foundations for everything. It can be frustrating when several competing groups share ultimately the same aim: making sure the Internet of Things works and everything can communicate through overarching IoT standards.


Google Lures Businesses to Nearline with 100 PB of Free Cloud Storage

Google had its sights fixed firmly on Amazon Thursday as it launched its new, low-cost Nearline cloud storage service out of beta and into general availability.

Originally introduced to much fanfare in March, Cloud Storage Nearline now promises 99 percent uptime, on-demand I/O, lifecycle management and a broadly expanded partner ecosystem. Aiming to further sweeten the deal for companies currently using other providers, Google is now offering the service with 100 free petabytes of storage (equivalent to 100 million gigabytes) for new users for up to six months.

Google’s standard pricing is one cent per GB per month, so the credit is essentially worth $1 million for each month it lasts.
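The math behind that figure is straightforward; a quick sanity check using the decimal units cloud providers quote in:

```python
# Sanity check on the quoted numbers: 1 cent per GB per month,
# 100 PB = 100 million GB of free storage.
price_per_gb_month = 0.01          # USD, Nearline's standard rate
free_storage_gb = 100 * 1_000_000  # 100 PB in GB (decimal units)

monthly_value = price_per_gb_month * free_storage_gb
print(monthly_value)  # 1000000.0 -> $1 million per month
```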

Also free for a limited time is the service’s new on-demand I/O feature, which is designed to give organizations a way to increase I/O in situations where they need to retrieve data faster than Nearline’s provisioned read rate of 4 MB per second throughput per terabyte of data stored. For the first three months after launch, on-demand I/O will be offered at no additional charge.
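To get a feel for what that provisioned rate means in practice, here is an illustrative calculation (the function names are ours, not Google's; decimal units assumed):

```python
# Nearline provisions 4 MB/s of read throughput per TB stored.
MB_PER_TB = 1_000_000  # decimal units

def provisioned_read_mbps(stored_tb):
    """Aggregate provisioned read rate in MB/s for a given amount stored."""
    return 4 * stored_tb

def hours_to_read_all(stored_tb):
    """Time to read back everything stored, at the provisioned rate."""
    total_mb = stored_tb * MB_PER_TB
    return total_mb / provisioned_read_mbps(stored_tb) / 3600

print(provisioned_read_mbps(100))        # 400 MB/s for 100 TB stored
print(round(hours_to_read_all(100), 1))  # 69.4 hours for a full read-back
```

Note the design consequence: because throughput scales linearly with data stored, a full read-back always takes the same roughly 69 hours no matter how much you store, which is exactly why the on-demand I/O option exists for customers who need their data back faster.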

Taking aim directly at Amazon, Google has even created a total cost of ownership (TCO) calculator to estimate how much can be saved using Google’s cloud storage rather than Amazon Web Services.

For companies that decide to switch, Google's Cloud Storage Transfer Service (previously known as Online Cloud Import) will import large amounts of online data from HTTP/HTTPS locations such as Amazon S3.

“You now will be able to configure one-time data migrations, as well as schedule recurring data transfers,” explained Google product manager Avtandil Garakanidze in a Thursday blog post.

The Cloud Storage Transfer Service also allows users to perform lifecycle management, Garakanidze added, including automated archival to Cloud Storage Nearline and scheduled deletions.
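For flavor, the request body for a recurring S3-to-Nearline transfer job looks roughly like the structure below (shown as a Python dict; the field names follow the Storage Transfer Service v1 API, but treat the details as illustrative, and the bucket names and dates are made up):

```python
# Illustrative Storage Transfer Service job description: copy an S3 bucket
# into a Nearline bucket on a recurring schedule.
transfer_job = {
    "description": "Nightly S3 -> Nearline migration",
    "status": "ENABLED",
    "schedule": {
        "scheduleStartDate": {"year": 2015, "month": 8, "day": 1},
        # omitting scheduleEndDate makes the transfer recur indefinitely
    },
    "transferSpec": {
        "awsS3DataSource": {"bucketName": "example-s3-bucket"},
        "gcsDataSink": {"bucketName": "example-nearline-bucket"},
    },
}

print(transfer_job["description"])
```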

Finally, Google has added five new companies to its Nearline partner ecosystem: Actifio, Pixit Media, Unitrends, CloudBerry and Filepicker join Veritas/Symantec, NetApp, Iron Mountain and Geminare.

