
Embedded Neural Network Summit

February 1, 2017

In today’s world, enormous amounts of data demand robust processing power to make intelligent decisions, in applications ranging from image, pattern, and speech recognition to natural language processing and video analysis. The applications for convolutional neural networks are here and growing.

Cadence hosted the Embedded Neural Network Summit 2017, Deep Learning: The New Moore’s Law, showcasing experts who have a profound understanding of convolutional neural networks and have applied these complicated algorithms in various products and applications.

View speaker presentations


February 9, 2016

Cadence hosted the Embedded Neural Network Summit 2016, Extending Deep Learning into Mass-Market Silicon, with experts who have a deep understanding of convolutional neural networks and have applied these complicated algorithms in various products and applications.

The Era of Machines that See and Understand

Jeff Bier, BDTI, president and founder of the Embedded Vision Alliance

Watch video | Download

Abstract

Modern processors and image sensors are making it possible to incorporate computer vision capabilities into a wide range of systems, such as automotive safety systems and assistive devices for the visually impaired. At the same time, artificial neural networks are enabling machines to understand the world in new ways through visual inputs. In this presentation, we’ll provide an update on widely deployable visual intelligence and how it’s being used to create compelling products and capabilities—in embedded systems, wearables, mobile devices, and the cloud. We’ll highlight some of the most interesting products incorporating vision capabilities. And we’ll report on important developments in enabling technologies, including processors, sensors, and standards.


High-Performance Hardware for Deep Neural Networks

Bill Dally, NVIDIA, chief scientist and SVP of research, Stanford professor

Watch video | Download

Abstract

Deep neural networks (DNNs) are computationally intensive, requiring teraOps of performance for some embedded applications. GPUs have emerged as the standard solution for both training and deployment of DNNs, offering teraOps of performance in a 10W embedded platform. Training of DNNs can be accelerated using both model and data parallelism, at the cost of inter-processor communication. Both training and deployment can also be accelerated by using reduced precision for weights and activations, though there is a tradeoff between accuracy and precision in these networks. Finally, the ultimate in performance and efficiency for deployment is realized with special-purpose hardware for neural networks.
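To make the precision-versus-accuracy tradeoff concrete, here is a minimal sketch of symmetric 8-bit weight quantization in NumPy. The tensor shape, bit width, and rounding scheme are illustrative assumptions, not details from the talk.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetrically quantize a float32 weight tensor to int8.

    Returns the int8 values and the scale needed to recover approximate
    float weights (dequantized = q * scale).
    """
    max_abs = np.max(np.abs(weights))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# Hypothetical 3x3 convolution kernel: 64 input channels, 64 output channels
w = np.random.randn(64, 64, 3, 3).astype(np.float32)
q, scale = quantize_int8(w)

# The reconstruction error is the accuracy cost paid for 4x smaller weights
err = np.mean(np.abs(w - q.astype(np.float32) * scale))
print("mean absolute quantization error:", err)
```

In practice, accuracy is typically re-measured (and the network sometimes fine-tuned) after quantization, since the precision a network can tolerate varies by layer and by task.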

TensorFlow on Embedded Devices

Pete Warden, Google staff research engineer

Watch video | Download

Abstract

Google recently open-sourced its internal deep-learning framework, TensorFlow, and it’s already being used on mobile and embedded devices. This talk will discuss how to run it well on existing hardware and how we think vendors can optimize their future designs for deep learning.
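For context, a common deployment path in the TensorFlow 1.x era was to export a frozen GraphDef and run it with a lightweight session on the device. The sketch below assumes hypothetical file and tensor names ("frozen_model.pb", "input:0", "output:0"); it illustrates the general workflow rather than anything specific from the talk.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x-style API

# Load a frozen GraphDef exported after training (hypothetical file name)
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# Run a single inference on a dummy 224x224 RGB image
with tf.Session(graph=graph) as sess:
    x = graph.get_tensor_by_name("input:0")    # hypothetical input tensor name
    y = graph.get_tensor_by_name("output:0")   # hypothetical output tensor name
    scores = sess.run(y, feed_dict={x: np.zeros((1, 224, 224, 3), np.float32)})
    print(scores.shape)
```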

Get Real! Neural Network Technology for Embedded Systems

Chris Rowen, Cadence IP Group CTO

Watch video | Download

Abstract

Deep learning is emerging from the fog of hype into serious production use across myriad applications. Neural networks are already being deployed in initial real-time embedded applications in automotive, but are they poised for broad adoption in mobile, robotics, surveillance, consumer, and wearables? Three serious hurdles must fall for convolutional neural networks and other deep learning methods to proliferate widely: easy network optimization, training, and inference deployment; scalability to tens of teraMACs; and a two-order-of-magnitude improvement in energy efficiency. This talk describes progress towards all three goals and attempts a glimpse at the future of embedded neural network technology.
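To give a sense of the teraMAC scale mentioned above, here is a back-of-the-envelope count of the multiply-accumulate operations in a single convolution layer at video rate. The layer dimensions and frame rate are hypothetical, chosen only to show how quickly the numbers grow.

```python
def conv_macs(h_out, w_out, c_in, c_out, k):
    """Multiply-accumulates for one convolution layer with a k x k kernel."""
    return h_out * w_out * c_out * (k * k * c_in)

# Hypothetical layer: 112x112 output, 64 -> 128 channels, 3x3 kernel
per_frame = conv_macs(112, 112, 64, 128, 3)   # ~0.92 GMAC for this one layer
per_second = per_frame * 30                    # at 30 frames per second
print(f"{per_frame / 1e9:.2f} GMAC per frame, {per_second / 1e9:.1f} GMAC/s")

# A full network stacks tens of such layers, so sustained rates for real-time,
# high-resolution video quickly reach hundreds of GMAC/s and beyond.
```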

The Need for Heterogeneity: From ASICs to Data Centers (Deep Learning as a Case Study)

Sumit Sanyal, minds.ai founder and CEO

Watch video | Download

Abstract

Neural network training and inference are heralding a new class of algorithms with unique compute-to-I/O ratio requirements. At the same time, we are now firmly in the age of post-Dennard scaling at the silicon level and are pushing thermodynamic limits in our data centers. Using deep learning as a case study, we will examine how these limits can be transcended and how heterogeneous designs emerge at various levels in product design.
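As a rough illustration of the compute-to-I/O point, the sketch below estimates arithmetic intensity (MACs per byte moved) for a hypothetical convolution layer versus a fully connected layer, assuming 32-bit data and ignoring on-chip reuse; the dimensions are illustrative, not figures from the talk.

```python
BYTES = 4  # assume 32-bit activations and weights

def conv_intensity(h, w, c_in, c_out, k):
    """MACs per byte moved for a k x k convolution layer (reuse effects ignored)."""
    macs = h * w * c_out * (k * k * c_in)
    data = (h * w * c_in + h * w * c_out + k * k * c_in * c_out) * BYTES
    return macs / data

def fc_intensity(n_in, n_out):
    """MACs per byte moved for a fully connected layer; weight traffic dominates."""
    macs = n_in * n_out
    data = (n_in + n_out + n_in * n_out) * BYTES
    return macs / data

print("conv:", round(conv_intensity(56, 56, 128, 128, 3), 1), "MACs/byte")
print("fc  :", round(fc_intensity(4096, 4096), 2), "MACs/byte")

# Convolutions reuse each weight across many spatial positions, so their
# compute-to-I/O ratio is far higher than that of fully connected layers,
# which tend to be memory-bound; this gap is one driver of heterogeneous designs.
```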

Is Bigger CNN Better?

Samer Hijazi, Cadence engineering director, CTO office

Watch video | Download

Abstract

Since the strong showing of Alex Krizhevsky’s ImageNet CNN in 2012, the artificial intelligence community has rapidly improved CNN performance and its understanding of the best practices for getting better results. At the same time, the hardware resources needed to train and run these recognition tasks have grown exponentially in a matter of a few years. In this talk, we will shed light on some of the work Cadence has done to help the industry balance complexity against performance effectively.

Programmable Logic for Embedded Neural Networks

Michael Leventhal, Xilinx technical manager, data center acceleration

Watch video | Download

Abstract

FPGAs are, arguably, the most power-efficient hardware platform available today for embedded neural networks. This talk will present how neural networks map onto programmable logic processors and the architectural sources of high efficiency. System design examples and results in real-time automotive and security applications will be discussed.

Targeting CNNs for Embedded Platforms

Anshu Arya, MulticoreWare solution architect

Watch video | Download

Abstract

MulticoreWare designs convolutional neural network (CNN) applications that target embedded platforms and execute within strict compute, memory, and power constraints. We’ll discuss the challenges encountered in developing CNNs for embedded platforms and some of the techniques used to take advantage of modern embedded designs.

The Era of Cognitive Computing

Sumit Gupta, IBM

Watch video | Download

Abstract

Inexpensive, pervasive computers with the ability to communicate data have led to explosive growth in our ability to collect it. A large percentage of this data is unstructured and, in fact, “uncertain.” New cognitive methods enable us to extract insights from this data in ways we haven’t even imagined yet. This talk will discuss the art of the possible and how hardware infrastructure in the cloud and in devices will enable cognitive computing.