How to Build an Autonomous Anything

9:45–10:15

Autonomous technology will touch nearly every part of our lives, changing the products we build and the way we do business. It’s not just in self-driving cars, robots, and drones; it’s in predictive engine maintenance, automated trading, medical image interpretation, and other applications. Autonomy—the ability of a system to learn to operate independently—requires three elements:

  • Massive amounts of data and computing power
  • A diverse set of algorithms, from communications and controls to vision and deep learning
  • The flexibility to leverage both cloud and embedded devices to deploy the autonomous technology

Richard Rovner shows you how engineers and scientists are combining these elements, using MATLAB® and Simulink®, to build autonomous technology into their products and services today—to build their autonomous anything.

Richard Rovner, MathWorks


Unleashing the Power of FPGAs Through Model-Based Design

10:15–10:45

Model-Based Design has long been the de facto standard for algorithm developers exploring and implementing applications such as software-defined radios, embedded vision, motor control systems, and medical devices. Many of these applications demand high-performance computation and benefit significantly from the massively parallel architecture of FPGAs. But to leverage FPGAs, developers have had to bridge the gap between the algorithm-centric world of MATLAB and Simulink and the hardware-centric world of FPGAs, a gap that once required fairly arduous manual translation steps.

Almost 20 years ago, Xilinx pioneered the solution to this problem with System Generator for DSP, which enabled a Model-Based Design flow that could map directly to FPGAs. It has since been used successfully in thousands of designs. But a lot has changed over the last two decades. New applications such as ADAS, 5G, and machine learning have placed increasing performance demands on systems, driving the evolution of FPGAs into new device classes such as programmable SoCs and, most recently, adaptive compute acceleration platforms (ACAPs). Along the way, the model-based programming approach has also evolved, moving to higher levels of abstraction to manage the massive increase in system complexity.

This talk draws inspiration from the past 20 years of Model-Based Design to lay a foundation for the next 20 years of innovation. We describe how market trends, programmable devices, and model-based development have changed over the past decade and how they are likely to evolve in the years to come.

Nabeel Shirazi, Ph.D., Senior Director of System Level Design Tools, Xilinx Inc.


What’s New in MATLAB and Simulink

11:15–11:45

Learn about new capabilities in the latest releases of MATLAB® and Simulink® that will help your research, design, and development workflows become more efficient. MATLAB highlights include updates for writing and sharing code (Live Editor), developing and sharing MATLAB apps (App Designer), and managing and analyzing data. Simulink highlights include the Simulation Manager to run multiple simulations in parallel and new smart editing capabilities to build up models even faster. There are also new tools to make it easier to automatically upgrade your projects to the newest release.
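
As a hedged illustration of the Simulation Manager workflow mentioned above, the sketch below sweeps a parameter across parallel simulations; the model name 'myModel' and variable 'K' are hypothetical placeholders.

    % A minimal sketch: run a parameter sweep as parallel simulations and
    % monitor them in the Simulation Manager. 'myModel' and 'K' are
    % hypothetical stand-ins for your own model and tunable variable.
    K = 0.5:0.5:2.0;
    for i = numel(K):-1:1
        in(i) = Simulink.SimulationInput('myModel');
        in(i) = setVariable(in(i), 'K', K(i));
    end
    out = parsim(in, 'ShowSimulationManager', 'on');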

Adam Sifounakis, MathWorks

Predictive Maintenance: From Development to IoT Deployment

11:45–12:15

Interest in predictive maintenance is increasing as more and more companies see it as a key application for data analytics that run on the Internet of Things. This talk covers the development of these predictive maintenance algorithms, as well as their deployment on the two main nodes of the IoT—the edge and the cloud.
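
To make the algorithm-development side concrete, here is a minimal, hypothetical sketch: synthetic feature vectors stand in for condition indicators extracted from sensor data, and the trained classifier is saved in a form suitable for code generation at the edge.

    % Illustrative only: synthetic "healthy" vs. "faulty" features.
    rng(0);
    healthy = 0.1*randn(100,2) + [1 0];
    faulty  = 0.1*randn(100,2) + [2 1];
    X = [healthy; faulty];
    Y = [repmat({'healthy'},100,1); repmat({'faulty'},100,1)];
    mdl = fitcsvm(X, Y);                     % train a fault classifier
    saveLearnerForCoder(mdl, 'faultModel');  % prepare for edge deployment via codegen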

Mehernaz Savai, MathWorks


Creating Deep Learning-Based Speech Products in Record Time

1:30–2:00

In the past two years, we’ve seen the industry discover speech as a critical interface protocol between humans and machines, especially for cloud-based information queries driven by speech recognition. However, speech recognition is just the tip of the iceberg. A whole new set of functions—speech enhancement, speaker identification and authentication, and background noise classification—are becoming available. These create new and significant opportunities for every application that touches audio or video—opening new potential for improved intelligibility, personalization, and customer “stickiness.”

BabbleLabs Clear Cloud is an example of breakthrough deep learning technology applied to widely applicable speech APIs, and it gives us a sense of the future roadmap of speech-centric applications. The number of speech problems BabbleLabs is working on grows by the day, so the company has had to develop a flow that maximizes the speed of creating production-ready software IP. Using mature and comprehensive toolboxes from MathWorks, such as DSP System Toolbox™, Neural Network Toolbox™ (Deep Learning), and MATLAB Coder™, BabbleLabs can create state-of-the-art software IP products in record time. These products integrate advanced digital signal processing (DSP) and sophisticated deep learning architectures in a homogeneous flow from development to deployment.

Samer Hijazi, Ph.D., BabbleLabs


Techniques for Deploying AI for Near-Real-Time Engineering Decisions

2:00–3:00

With the increasing popularity of artificial intelligence (AI), new frontiers are emerging in application areas such as predictive maintenance and manufacturing decision science. There are many complexities associated with modeling plant assets, building predictive models for them, and deploying those models at scale for near-real-time decision support. This talk focuses on an end-to-end workflow that starts with a physical model of an engineering asset and walks through the process of developing and deploying a machine learning model for that asset. Deploying AI models as a scalable, reliable service is a key part of both cloud and on-premises solutions. The presentation highlights common problems and solution patterns encountered in building these kinds of systems.

Arvind Hosagrahara, MathWorks


Master Class: Solving Optimization Problems with MATLAB

3:30–5:00

In this presentation, you will learn about the different tools available for optimization in MATLAB®. See how you can use Optimization Toolbox™ and Global Optimization Toolbox to solve a wide variety of optimization problems. You will learn best practices for setting up and solving optimization problems, as well as how to speed up optimizations with parallel computing.

Topics include:

  • Solving linear, nonlinear, and mixed-integer optimization problems in MATLAB
  • Finding better solutions to multiple minima and non-smooth problems using global optimization
  • Using symbolic math for setting up problems and automatically calculating gradients
  • Using parallel computing to speed up optimization problems (see the sketch after this list)
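
As an illustration of the kind of setup covered, here is a small sketch of a constrained nonlinear problem; the 'UseParallel' option shows how parallel computing can speed up gradient estimation.

    % Minimize Rosenbrock's function inside the unit disk (illustrative).
    fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 1, []);  % inequality c(x) <= 0, no equalities
    opts = optimoptions('fmincon', 'UseParallel', true);
    x = fmincon(fun, [0 0], [], [], [], [], [], [], nonlcon, opts);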

Loren Shure, Ph.D., MathWorks

Verification and Validation: Automating Best Practices to Improve Design Quality

11:45–12:15

Years of engineering expertise and best practices form the basis for the industry standards used in developing high-integrity and mission-critical systems. The standards include proven guidelines that can improve the quality of any design. Learn how you can take advantage of best practices from standards such as ISO 26262, DO-178/DO-331, IEC 61508, MISRA®, and others to find errors earlier in your process and improve the quality of your Simulink® models.

Chuck Olosky, MathWorks


Developing Battery Management Systems Using Simulink

2:00–2:30

Battery management systems (BMS) ensure maximum performance, safe operation, and optimum lifespan of battery pack energy storage systems under diverse operating and environmental conditions. In this session, learn how Simulink, as a development platform, allows engineers from electrical, thermal, and software backgrounds to collaborate throughout the development process. From early design tradeoffs to hardware-in-the-loop (HIL) testing of BMS hardware, Simulink can help engineers ensure the BMS performs as intended under all desired operating conditions.
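
By way of background, the sketch below shows the simplest possible state-of-charge estimate, coulomb counting; it is a generic illustration with assumed values, not the model-based approach the session presents, which refines such estimates considerably.

    % Generic coulomb-counting sketch; all values are assumed.
    dt = 1;                          % sample time, s
    capacity = 2.5 * 3600;           % 2.5 Ah cell capacity, in A*s
    current = 0.5 * ones(600, 1);    % synthetic constant 0.5 A discharge
    soc = 0.8;                       % assumed initial state of charge
    for k = 1:numel(current)
        soc = soc - current(k)*dt/capacity;
    end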

Chirag Patel, MathWorks


Mechatronic Design for Aircraft Systems

2:00–2:30

Mechatronic systems include a wide range of components, including motors, op-amps, and shaft encoders. Simulating these components together with mechanical and control systems is critical to optimizing system performance. To ensure that testing is efficient, MathWorks offers a number of ways to balance the trade-off between model fidelity and simulation speed. The ability to generate C code from the model enables engineers to use Model-Based Design for the entire system (plant and controller).

Terry Denery, MathWorks


Autonomous Navigation Using Model-Based Design

2:30–3:00

Navigation technologies, such as localization, mapping, SLAM, and path planning, are the key building blocks for enabling autonomy in any robotic system. In this presentation, you will learn how Model-Based Design can help develop and test these algorithms. You will also learn how to use co-simulation with simulation engines such as Gazebo and automatic code generation to deploy to hardware. You will see a complete navigation stack built in MATLAB® and Simulink® and deployed via ROS on a robot powered by an NVIDIA Jetson TX2.
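
For a flavor of the algorithms involved, here is an illustrative path-planning sketch using a probabilistic roadmap (PRM) from Robotics System Toolbox; the map, obstacle, and waypoints are made up.

    % Plan a path on a synthetic occupancy map with a PRM (illustrative).
    map = robotics.BinaryOccupancyGrid(10, 10, 10);  % 10 m x 10 m, 10 cells/m
    setOccupancy(map, [5 5], 1);                     % place one obstacle
    planner = robotics.PRM(map, 100);                % roadmap with 100 nodes
    path = findpath(planner, [1 1], [9 9]);          % start and goal in meters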

Pulkit Kapur, MathWorks


Master Class: Deep Learning for Signals

3:30–5:00

Learn how MATLAB enables and simplifies the process of performing deep learning with signal data. We’ll start off by looking at key capabilities for labeling, pre-processing, and sorting large signal data sets. Then we'll examine the key types of networks used for deep learning and how they are applied to solve real-world signal problems.

We’ll walk through full workflow examples covering two different types of networks:

  • Voice command recognition of audio signals using convolutional neural networks (CNNs)
  • Classification of ECG signals using long short-term memory (LSTM) networks

We will wrap up by showing how these trained models can be deployed to run on a GPU or embedded processor.
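
For a sense of the second example, the sketch below defines the kind of LSTM network used for sequence classification; the layer sizes and the two-class labeling are illustrative assumptions.

    % Illustrative LSTM classifier for variable-length signals.
    numFeatures = 1;     % one sample per time step
    numClasses  = 2;     % e.g., normal vs. abnormal (assumed)
    layers = [
        sequenceInputLayer(numFeatures)
        lstmLayer(100, 'OutputMode', 'last')
        fullyConnectedLayer(numClasses)
        softmaxLayer
        classificationLayer];
    opts = trainingOptions('adam', 'MaxEpochs', 10);
    % net = trainNetwork(signals, labels, layers, opts);  % signals: cell array of sequences
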

Demystifying Deep Learning

11:45–12:15

Deep learning can achieve state-of-the-art accuracy for many tasks considered algorithmically unsolvable using traditional machine learning, including classifying objects in a scene or recognizing optimal paths in an environment. Gain practical knowledge of the domain of deep learning and discover new MATLAB® features that simplify these tasks and eliminate the low-level programming. From prototype to production, you’ll see demonstrations on building and training neural networks and hear a discussion on automatically converting a model to CUDA® to run natively on GPUs.
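
To hint at the CUDA conversion step, here is a hedged sketch using GPU Coder; 'myPredict' is a hypothetical entry-point function that wraps a trained network's predict call.

    % Generate a CUDA MEX function from an entry-point function
    % (myPredict.m is hypothetical; it loads a network and calls predict).
    cfg = coder.gpuConfig('mex');
    codegen -config cfg myPredict -args {ones(224,224,3,'single')}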

Pitambar Dayal, MathWorks


Automated Driving Development with MATLAB and Simulink

1:30–2:00

ADAS and autonomous driving technologies are redefining the automotive industry, changing all aspects of transportation, from daily commutes to long-haul trucking. Engineers across the industry use Model-Based Design with MATLAB® and Simulink® to develop their automated driving systems. This talk demonstrates how MATLAB and Simulink serve as an integrated development environment for the different domains required for automated driving, including perception, sensor fusion, and control design.

This talk covers:

  • Perception algorithm design using deep learning
  • Sensor fusion design and verification
  • Control algorithm design with Model Predictive Control Toolbox™ (see the sketch after this list)
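
As a minimal sketch of the control-design piece, the snippet below creates an MPC controller for a stand-in plant; the transfer function, sample time, and horizons are illustrative assumptions.

    % Illustrative MPC setup (Model Predictive Control Toolbox).
    plant = tf(1, [1 2 1]);          % hypothetical stand-in for vehicle dynamics
    Ts = 0.1;                        % controller sample time, s
    mpcobj = mpc(plant, Ts, 10, 2);  % prediction horizon 10, control horizon 2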

Avinash Nehemiah, MathWorks


5G: What’s Behind the Next Generation of Mobile Communications?

2:00–2:30

Learn how MATLAB® and Simulink® help you develop 5G wireless systems, including new digital, RF, and antenna array technologies that enable the ambitious performance goals of the new mobile communications standard.

This talk presents and demonstrates available tools and techniques for designing and testing 5G new radio physical layer algorithms, massive MIMO architectures and hybrid beamforming techniques for mmWave frequencies, and details on modeling and mitigating channel and RF impairments.
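
As a small taste of the array-processing side, the sketch below steers a uniform linear array toward a given angle at an assumed 28 GHz mmWave carrier, using Phased Array System Toolbox.

    % Steer an 8-element ULA toward 30 degrees azimuth (illustrative).
    fc = 28e9;                                   % assumed mmWave carrier, Hz
    c  = physconst('LightSpeed');
    array = phased.ULA('NumElements', 8, 'ElementSpacing', 0.5*c/fc);
    sv = phased.SteeringVector('SensorArray', array, 'PropagationSpeed', c);
    w  = sv(fc, [30; 0]);                        % weights for 30 deg az, 0 deg el
    pattern(array, fc, -90:90, 0, 'Weights', w); % plot the steered beam pattern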

Marc Barberis, MathWorks


IoT Sensor Solutions Using Radar Technology for Contactless Patient Monitoring

2:30–3:00

Frank Morese will present an overview of Olea Sensor Networks with a focus on IoT sensor solutions for connected care applications. He will show how the company uses MathWorks technologies to develop unique machine learning algorithms for optimizing sensor performance monitoring for tasks like checking the vital signs of humans and animals without any contact to the body.

Frank Morese, Olea Sensor Networks


Effects of Phase Noise and Signal-to-Noise Ratio in PAM4 Signaling

3:30–4:00

Early analysis and evaluation of clock phase noise and the channel’s signal-to-noise ratio in a high-speed serial (HSS) link can significantly improve the odds of designing the core correctly and achieving first-time-right silicon. This presentation evaluates the effect of measured phase noise and/or channel signal-to-noise ratio in four-level pulse-amplitude modulation (PAM4) signaling. The effect of closed-loop phase-locked loop (PLL) bandwidth is also included to further refine the analysis and the estimation of symbol error rate (SER) and total errors over the total number of symbols run.
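
For context, the sketch below estimates SER for PAM4 over a simple AWGN channel with Communications Toolbox; it is a hedged baseline only and does not model the phase noise or PLL effects the talk analyzes.

    % Baseline PAM4 symbol-error-rate estimate over AWGN (illustrative).
    M = 4; nSym = 1e5; snrdB = 20;               % assumed parameters
    data = randi([0 M-1], nSym, 1);
    tx = pammod(data, M);
    rx = awgn(tx, snrdB, 'measured');
    rxData = pamdemod(rx, M);
    [~, ser] = symerr(data, rxData);             % symbol error rate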

Don Pakbaz, GLOBALFOUNDRIES


Analog and Mixed-Signal Product Development Round-Trip Workflow

4:00–4:30

Leading-edge industry drivers such as IoT, driverless cars, drones, medical devices, augmented reality, and artificial intelligence put constant pressure on IC product development teams, which span multiple disciplines: digital design, analog and mixed-signal design, pre-silicon verification, post-silicon validation, IC-package-board co-design, board design, and manufacturing test. The pressure is to innovate, improve cycle times, improve quality, and reduce silicon excursions. Automation that supports structured workflows for a wide variety of design styles helps improve multiple metrics in the IC product development process.

This presentation proposes a flow that leverages products from MathWorks and Cadence: IC development teams start at a high level of abstraction, exploring their design space in Simulink®, and then implement the design in the Cadence® Virtuoso® environment. Once the IC design is implemented, a behavioral model of the design in PSpice® can be brought back into Simulink, where the system around the IC, such as an application board or a test board, can be explored and designed. This type of analysis helps teams better understand the behavior of the IC in the context of the system for which it was designed.

Kishore Karnane, Cadence


Machine Learning for Electronic Design Automation

4:30–5:00

Electronic design automation must evolve in response to increasingly ambitious goals for low power and high performance, which are accompanied by a decreasing design cycle time. There is an unmet need for models, methods, and tools that enable fast and accurate design and verification of microelectronic circuits and systems while protecting intellectual property. A behavioral approach to systems modeling will help achieve those objectives. Designers’ prior knowledge may be used to impose physical constraints on the models and to speed up learning.

This presentation will introduce the Center for Advanced Electronics through Machine Learning (CAEML). CAEML is an NSF-sponsored Industry/University Cooperative Research Center whose mission is to advance the state-of-the-art in electronic design automation (EDA) by using machine learning methods and algorithms. CAEML researchers are located at the University of Illinois at Urbana-Champaign, Georgia Tech, and North Carolina State University.

CAEML’s initial research efforts were primarily in the realms of behavioral modeling and system optimization, but have recently expanded to encompass data analytics and deep learning. Applications under investigation range from IP reuse to signal integrity analysis, and from FPGA compilation recipes to system ESD design. Many of the CAEML researchers use MATLAB tools in their work, including the toolboxes for system identification, global optimization, machine learning, and neural networks.

This talk will provide a brief overview of the center’s research portfolio and will include in-depth examples of the work being done to advance IP reuse, thermal optimization of 3D-IC, and system ESD design.

Elyse Rosenbaum, University of Illinois at Urbana-Champaign

MATLAB for Deep Learning: A Hands-On Workshop

11:45–12:15

Bring your laptop and explore the latest deep learning techniques firsthand at MATLAB EXPO. This hands-on session provides an opportunity to explore deep learning tasks for three different data types: images, text, and sound. You’ll access the latest research and explore practical deep learning techniques in MATLAB®. Whether you’re a beginner or an expert in deep learning, you’re guaranteed to learn something new.

In this session, you will:

  • Graphically build a deep neural network
  • Train deep learning networks on GPUs in the cloud
  • Explore a curated set of pretrained models available within MATLAB
  • Access models from ONNX™, TensorFlow®, Caffe, and PyTorch (see the sketch after this list)
  • Apply deep learning to image, signal, or text data
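
For the model-import item above, a hedged sketch, assuming the ONNX converter support package is installed; 'model.onnx' is a hypothetical file.

    % Import a pretrained ONNX model for use in MATLAB (illustrative).
    net = importONNXNetwork('model.onnx', 'OutputLayerType', 'classification');
    analyzeNetwork(net)   % inspect the imported layer graph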

This is a hands-on, self-paced session. Attendees need to bring their own laptops.

No deep learning experience is required.