Facebook F8: Disruptive cocktail of personal AI, brain interface and 36Gbps links


by Caroline Gabriel, AI Trends Contributing Editor, Rethink Wireless Watch

  • Facebook builds on last year’s 10-year plan to embed AI in every layer
  • Looks to leap ahead of Google in redefining mobile and web user experience
  • mmWave demos show it also wants to accelerate wireless network shake-up

Facebook’s annual F8 developer conference is becoming a very valuable barometer of the way mobile web markets are evolving. The past two years’ events were heavily focused on broadening and opening up its platforms, especially on mobile devices, to move well beyond social media and make it the hub for a user’s entire digital experience. They also saw the beginnings of Facebook’s attempt to disrupt the telecoms hardware platform by applying the open source approaches which have become the norm in software.

The efforts to expand the Facebook platform into every aspect of a user’s life, and even into the telco market, have included an increasingly large dose of artificial intelligence (AI), so it was no surprise, at this year’s F8, to see machine learning taking center stage – even to the extent of showing off a prototype brain interface to control devices and applications.

This built on the vision outlined at last year’s F8, when CEO Mark Zuckerberg laid out a 10-year roadmap to connect the whole world with Facebook’s services, messages and APIs, harnessing near-term tactics, like enterprise extensions, to a longer term plan to equip the Facebook/Messenger platform with AI, robots and virtual reality user interfaces.

Qualcomm joins chip partnerships behind Caffe2:

Now the social networking firm has to start delivering some of the specifics to flesh out that grand plan, and many of them rely on bringing AI to every device and user while reworking the user experience in a way that will put Facebook in pole position in the next generation, tactile Internet.

As with other key assets, Facebook is open sourcing its deep learning software, Caffe2, to attract innovation and accelerate adoption. This sees it partnering with several developers of processors and GPUs (graphics processing units), whose deep support for Caffe2 will be essential to its performance and economics. It deepened its relationship with Nvidia, but also discussed collaborations with Qualcomm and Intel, as well as cloud platform providers Amazon and Microsoft.

Qualcomm’s support will be important in the smartphone and other mobile device markets, and the firm’s Neural Processing Engine (NPE) will support Caffe2 from July. This will be another boost to Qualcomm’s bid to bring machine learning out of the supercomputer center and towards the edge of the network, and the device itself. Such a trend would require high performance device processors, such as the firm’s Snapdragon, which already supports a wide range of ML functions for use cases such as machine vision, connected cars and augmented reality.

MediaTek is also working to put neural networking and ML into local, low power devices, especially to power applications where the user may not have a reliable connection to the cloud. It offers a deep learning SDK (software development kit) in combination with its Helio X20 mobile system-on-chip. Although not mentioned in the F8 releases, MediaTek’s SDK is optimized to work with Caffe, as well as the Google-initiated deep learning framework, TensorFlow.

Open source tools to push machine learning to the device:

At F8, Facebook announced that it planned to turn smartphone cameras into AR platforms eventually, showing off two of its in-house camera designs, and a beta release of Facebook Spaces, a VR hangout lounge. This would be very much in line with Qualcomm’s own efforts. The chip giant has developed a ‘mobile brain chip’ called Zeroth, now part of the Snapdragon platform, and in October it unveiled a Snapdragon-based reference design for a camera enabled with machine vision, while Intel has acquired Movidius, a start-up and Google partner with similar capabilities.

Movidius CEO, Remi El-Ouazzane, said: “Deploying AI at the edge of the network is becoming a massive trend.”

Facebook certainly agrees with that assessment. Pre-prepared models from its Caffe2 ‘Model Zoo’ can be run with only a few lines of code, and so are suited to handsets, Raspberry Pis and other low power objects. Facebook said its collaborations with chip providers, and with Amazon and Microsoft, “will allow the machine learning community to rapidly experiment using more complex models and deploy the next generation of AI-enhanced apps and services to optimize Caffe2 for both cloud and mobile environments.”

For Facebook and Caffe2, augmented reality (AR) is the low hanging fruit, but this is just a step on the road to redefining the user interface to web services. This is a process which has featured at F8, and in the social giant’s R&D, for some years, as it bids to outmaneuver Google and others in shaping, and controlling, how people communicate with the web, how their experiences are personalized, and how the resulting data is analyzed and monetized.

Virtual reality and AR, as well as AI-driven technologies like computer vision, voice interfaces and gesture control, will all be essential to this attempt. Voice and gestures instead of keywords, intelligent chatbots to support users, massive AI engines to return ever more accurate and personalized responses to natural language questions – these are the keynotes of the new web and search experiences, and every major web player knows it needs to lead the way in order to retain its influence and the best chance of monetizing the new conversations.

So Facebook expects developers to use Caffe2 to create applications which can leverage its vast user base and drive the web experience forward via chatbots, conversational interfaces (as Microsoft calls them) and robotics. And the logical next step from voice, gesture and vision interfaces is a computer-to-brain interface.

The beginnings of a device-to-brain interface:

Regina Dugan, head of Facebook’s hardware R&D department, Building 8, said that the company is initially aiming for a system that can manage 100 words per minute (wpm) in real time, using a non-invasive optical interface. Current methods require surgery and are capable of around 80 wpm, but Facebook knows that even its most enthusiastic users aren’t going to be digging into their craniums to better browse Facebook. It’s not quite as sci-fi as Tesla CEO Elon Musk’s Neuralink venture, but it would be a pretty radical step forward for human-machine interfaces (HMI).

An artificial cochlea was also mentioned, as a way to convey language via haptic feedback in a sleeve, but this system effectively requires users to learn another language entirely. The approach might be useful for simpler tasks, such as navigation (left, right, stop, turn around), but it is unlikely that most users would learn the haptic language needed to convey the ‘meaning’ of more complex words. Direct translation between two languages is already a loaded and tense process, subject to semantic and linguistic bias, and adding Facebook’s haptic Esperanto to the mix would make an already difficult process even more complex. And that’s before we dive into the topic of Facebook as the middleman in that process too.

But the work on an optical interface to turn thoughts into text or actions is relevant and important for Facebook’s VR/AR user experience ambitions – which saw the company spend $2bn on VR headset maker Oculus, as a means of expanding its live video strategy.

A brain interface would enable hands-free control of the environment, allowing a user to interact with menus or manipulate virtual objects. For plain old video, the functions would be as simple as pausing playback, or tagging and messaging friends to share in the experience.

However, in order to promote such an interface, Facebook needs to convince device makers to support the technology. Apple is infamously closed to external design suggestions, and while a big Android name like Samsung could be converted, especially as Samsung is another big proponent of VR and AR via its GearVR family, rivalry with Google would stand in the way of broad Android platform support.

Consequently, the easier win would be to push the interface into the headsets that hold smartphones in front of a user’s eyes for AR and VR experiences, or perhaps into a standalone wearable for use without a phone suspended from a forehead. But Facebook would have to open up the technology for use with other systems and not just its own web and mobile platforms.

mmWave trials push universal connectivity forward:

To maximize the commercial opportunities of these new interfaces, Facebook needs everyone on the globe to be connected, preferably with its services as their front door to the web. Hence its rising interest in making telecoms networks cheap and simple to deploy, so that if the traditional operators will not connect the next billion, some other player will be able to. White label switches and boxes; high capacity unlicensed spectrum such as 60 GHz; and flexible, low cost networks are at the heart of the TIP project, a telecoms version of Facebook’s Open Compute Project, which has already contributed to revolutionizing the cost base for large data centers and clouds.

This year, Facebook announced three new records in wireless data transfer using the increasingly fashionable millimeter wave spectrum bands. One of its engineering teams has demonstrated a point-to-point data rate of 36Gbps over a distance of 13 kilometers with mmWave, and 80Gbps between the same points using Facebook’s optical cross-link technology. Both these technologies underpin efforts which the company is contributing to its open source Telecom Infra Project (TIP), which aims to shake up the economics of telecoms networks using commoditized and open source hardware, and software-defined networking (SDN).

The team also demonstrated 16 Gbps in each direction from the ground to a circling Cessna aircraft over 7km away.
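For context, the Shannon limit gives a feel for what such rates demand of a radio link. The sketch below (plain Python) assumes a hypothetical 2 GHz channel; Facebook has not published the exact bandwidth or antenna configuration used in the demo.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

def required_snr_db(bandwidth_hz: float, rate_bps: float) -> float:
    """SNR needed to reach rate_bps in bandwidth_hz, per Shannon."""
    spectral_eff = rate_bps / bandwidth_hz          # bits/s/Hz
    return 10 * math.log10(2 ** spectral_eff - 1)

# Hypothetical 2 GHz channel carrying the reported 36 Gbps link:
bw = 2e9
rate = 36e9
print(f"spectral efficiency: {rate / bw:.0f} b/s/Hz")        # 18 b/s/Hz
print(f"required SNR: {required_snr_db(bw, rate):.1f} dB")   # ~54.2 dB
```

Under this assumption the link would need roughly 18 b/s/Hz, or about 54 dB of SNR for a single stream, which is well beyond a typical single-stream mmWave link over 13 km; the headline rate presumably came from combining multiple spatial or polarization streams, or more bandwidth, rather than raw link budget.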

Yael Maguire, director of the Connectivity Program, wrote in a blog post that this would be applicable to a number of Facebook’s solutions, including as a terrestrial backhaul link for its open source small cell platforms, Terragraph or OpenCellular, to provide connectivity in remote or urban areas. The mmWave link could also connect a ground station with Facebook’s solar-powered drone, Aquila, which had its first test flight last year.

Terragraph plus smart routing in San Jose trial:

Facebook is testing the real world challenges of high frequency spectrum with a trial in San Jose, California. This uses Terragraph, a technology for cellular or WiFi small cells in the 60 GHz WiGig spectrum. Terragraph has an innovative routing protocol for improved collision detection and greater reliability, plus “SDN-like” cloud controllers to handle large numbers of cells efficiently and flexibly, targeting capacity where it is required at any one time. The system can support multi-Gbps links and is IPv6-only.

Maguire wrote: “To figure out where to place the nodes, we worked with our computer vision team to run tests of images in San Jose to understand both where we can potentially mount a millimeter wave radio, and where our lines-of-sight are.” The trial, in the downtown corridor, is using the Facebook routing software to move around obstacles very quickly before users notice a lapse in the connection, addressing one of the chief problems with high frequency bands.

“If a tree grows leaves in front of a node, if a temporary construction project starts, or if any number of possible blockers obstruct our line-of-sight, the signal goes away,” Maguire added. With the new software, he claimed: “We reduce the failover rate to something so small, it’s a blip—unnoticeable on human timescales.”
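That failover behavior, rerouting around a newly blocked link before the user notices, can be sketched with a toy mesh model in plain Python. This is an illustration only, not Terragraph’s actual protocol; the node names and topology are invented.

```python
from collections import deque

# Toy mesh of rooftop nodes; edges are line-of-sight mmWave links.
# (Illustrative only -- not Terragraph's real routing protocol.)
mesh = {
    "fiber_pop": ["a", "b"],
    "a": ["fiber_pop", "b", "c"],
    "b": ["fiber_pop", "a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c", "user"],
    "user": ["d"],
}

def shortest_path(graph, src, dst, blocked=frozenset()):
    """BFS shortest hop-count path, skipping blocked (unordered) links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph[node]:
            if nxt not in seen and frozenset((node, nxt)) not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no line-of-sight route left

print(shortest_path(mesh, "fiber_pop", "user"))
# -> ['fiber_pop', 'b', 'd', 'user']

# A tree grows in front of the b-d link; traffic reroutes via a-c-d:
print(shortest_path(mesh, "fiber_pop", "user",
                    blocked={frozenset(("b", "d"))}))
# -> ['fiber_pop', 'a', 'c', 'd', 'user']
```

A real controller also weighs link quality and load when it recomputes routes, but the core idea is the same: the mesh gives every node more than one line-of-sight path back to the fiber point of presence, so losing a link is a reroute, not an outage.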

Another innovation emerging from Facebook’s connectivity projects is ‘Tether-tenna’, which Maguire nicknamed ‘instainfrastructure’. It is a small helicopter tethered to fiber and power lines, designed to create an instant cell in a remote area or emergency situation. “If the fiber line is still good to a certain point, we can make a virtual tower by flying a Tether-tenna a few hundred feet from the ground,” he explained.

This joins a widening range of Facebook network hardware designs (which, of course, are designed to be open sourced and adopted by other vendors, not turn the firm itself into a hardware vendor). As well as Aquila and other drone, balloon and satellite projects, there are the OpenCellular small cell reference design, the Terragraph 60 GHz urban small cell and the Project Aries Massive MIMO macrocell for rural areas.

While these may appear to see Facebook muscling into the territory of Nokia and Ericsson, its exercises in radio networks are really designed to fill gaps in current systems and to show off what might be done, especially in terms of lowering the cost of buying and deploying wireless systems.

By open sourcing many of its designs; developing alternatives to fiber for remote area backhaul; pushing for free or flexible spectrum such as white spaces; and working with non-traditional operators or infrastructure partners, Facebook has the opportunity to rewrite the rules for how wireless networks are built and costed, driving down the capex and opex bills and lowering barriers to entry for innovative service providers which can make a business case for targeting the unserved billions (something which is hard to do with current architectures, spectrum and revenue expectations).

Jay Parikh, VP of engineering at Facebook, said the firm’s Connectivity Lab unit is interested in “radical new approaches to get the unconnected connected … Our rule in the Connectivity Lab is we’re looking for gains that will make things 10x faster or 10x cheaper or both.”

Learn more at Rethink Research.