Beyond the ‘I’ in AI

8:30 a.m.–9:05 a.m.

Insight. Implementation. Integration. AI, or artificial intelligence, is transforming the products we build and the way we do business. It also presents new challenges for those who need to build AI into their systems. Creating an “AI-driven” system requires more than developing intelligent algorithms. It also requires:

  • Insights from domain experts to generate the tests, models, and scenarios required to build confidence in the overall system
  • Implementation details including data preparation, compute-platform selection, modeling and simulation, and automatic code generation
  • Integration into the final engineered system

Join us as Mike Agostini demonstrates how engineers and scientists are using MATLAB® and Simulink® to successfully design and incorporate AI into the next generation of smart, connected systems.

Mike Agostini

Mike Agostini, MathWorks


Next-Generation Wi-Fi Networks for Time-Critical Applications

9:05–9:45 a.m.

Wireless time-sensitive networking (TSN) is an emerging research area that can enable new applications and services for many industrial automation systems relying on time-synchronized (and timely) communication among sensing, computing, and actuating devices. Demonstrating feasibility on hardware platforms is a required step before wireless technologies can be adopted in soft and hard real-time industrial applications. However, experimenting with time synchronization and other TSN features that control latency and reliability over the wireless medium requires access to lower-level MAC- and PHY-layer implementations. This presentation introduces a wireless platform for experimental work on the Wi-Fi physical layer. Next-generation Wi-Fi, being defined by the IEEE 802.11ax Task Group, introduces several features and capabilities that can significantly improve support for industrial automation applications.

We have recently demonstrated an 802.11ax baseband experimental implementation (with select features) on an Intel Arria 10 FPGA platform integrated with an off-the-shelf analog front end. This SDR platform enables the development of techniques to optimize latency in FPGA and application-specific implementations. For instance, several latency optimizations were developed using this platform, including parallelization techniques for binary convolutional codes, low-latency streaming Fourier transforms, and tightly pipelined transmit and receive processing chains.

Using the 802.11ax baseband design, this presentation demonstrates a workflow for wireless system design that uses MATLAB®, Simulink® modeling, Embedded Coder®, and HDL Coder™ as a unified tool set for rapid prototyping. We discuss partitioning between software and FPGA implementations based on each deliverable’s objectives and tradeoffs.
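
To make the modeling starting point concrete, the sketch below generates an 802.11ax (HE SU) reference waveform in MATLAB with WLAN Toolbox™. It is only a minimal illustration of the kind of golden-reference model that could feed such a workflow, not the presenters’ actual design; the configuration values are arbitrary.

    % Minimal 802.11ax (HE SU) reference waveform -- illustrative only
    cfg = wlanHESUConfig('ChannelBandwidth', 'CBW20', 'MCS', 5);   % 20 MHz, MCS 5
    psdu = randi([0 1], getPSDULength(cfg)*8, 1);                  % random payload bits
    txWaveform = wlanWaveformGenerator(psdu, cfg);                 % baseband IQ samples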


What’s New in MATLAB and Simulink

9:45–10:15 a.m.

Learn about new capabilities in the MATLAB® and Simulink® product families to support your research, design, and development workflows. This talk highlights features for deep learning, wireless communications, automated driving, and other application areas. You will also see new tools for defining software and system architectures and for modeling, simulating, and verifying designs.

Mehernaz Savai

Mehernaz Savai, MathWorks

Exploring Microsoft Machine Teaching Online Service for Building Autonomous Systems Using Simulink Models

11:00 a.m.–11:30 a.m.

Building the next generation of autonomous systems requires a new approach. Microsoft Machine Teaching Online Service uses techniques such as curriculum learning, deep reinforcement learning, and scalable data generation to support subject matter experts who are building intelligent control systems for a wide range of applications. Cyrill Glockner will explore these techniques and demonstrate how the system works by training a BRAIN using a Simulink®-based model.
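
The Microsoft service itself is not driven from MATLAB code, but a comparable deep-reinforcement-learning-against-Simulink setup can be sketched with MATLAB’s own Reinforcement Learning Toolbox™; the model name, block path, and signal dimensions below are assumptions for illustration only.

    % Hypothetical: connect an RL agent to a Simulink plant model (names and sizes assumed)
    obsInfo = rlNumericSpec([3 1]);                                    % e.g., three observed states
    actInfo = rlNumericSpec([1 1], 'LowerLimit', -1, 'UpperLimit', 1); % one bounded action
    env = rlSimulinkEnv('myControlModel', 'myControlModel/RL Agent', obsInfo, actInfo);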

Cyrill Glockner

Cyrill Glockner, Microsoft Corporation


Model-Based Hyper-Scalable Assessment of Automated Vehicle Functions

11:30 a.m.–12:00 p.m.

Samsung DRVLINE™ is a platform for building advanced driver-assistance systems (ADAS), designed with a philosophy of being open, modular, and scalable. The platform accommodates a wide range of automated driving requirements, from ADAS scaling up to full autonomy, and comprises both hardware and software. An important part of the DRVLINE solution is giving developers a modern, scalable way to adopt Model-Based Design and to access simulation and testing tools. To achieve that objective, a DRVLINE Toolbox for Simulink® was developed that integrates all the relevant elements of the toolchain in a single Docker container. The container can be installed on a developer’s machine or deployed to a server farm for parallel execution, seamlessly enabling two major use cases: developing vehicle functions with a Model-Based approach on a developer’s machine, and deploying the container with the function under test to a cluster to explore the parameter space and detect defects. In this way, and without writing additional code, we enable rapid exploration of new functional features as well as a comprehensive environment for deep functional verification.

Stefano Marzani

Stefano Marzani, Samsung


CAEML Research in Hardware Design and Optimization Using Machine Learning

12:00–12:30 p.m.

The Center for Advanced Electronics through Machine Learning (CAEML) was established in 2016. Much of its research is starting to bear fruit in real-world applications. We will highlight two Hewlett Packard Enterprise applications that use CAEML research results.

The first is a 56G PAM channel optimization and training speed-up using principal component analysis (PCA) and polynomial chaos expansion (PCE) surrogate models. A 56G PAM SerDes and a channel with varying loss are measured, and machine learning techniques are used to accelerate the channel optimization process and correctly model the SerDes without running any simulations.

The second is a proactive hardware failure prediction method using machine learning techniques developed by CAEML. The method is currently deployed in production to proactively remove drives from the field before they cause performance degradation or data loss.

The presentation covers:

  • A brief introduction to CAEML
  • Unique applications of machine learning for hardware design that differ from typical CNN or LSTM neural network applications
  • Demonstration of 56G PAM SerDes performance optimization using PCA and PCE surrogate models
  • Production application of proactive hardware failure prediction with causal inference to remove bad drives in the field
  • Future investigations of CAEML

CAEML researchers use MATLAB® and related toolboxes extensively throughout the application development process. For example, the standard MATLAB PCA functionality was used, while custom MATLAB code was developed for the polynomial chaos expansion surrogate models and the causal inference feature selection functions. The rich mathematical libraries allow rapid development of prototype special functions.
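
As an illustration of the PCA-plus-surrogate pattern described above (not CAEML’s actual code, which uses a custom polynomial chaos expansion), the sketch below reduces synthetic channel/SerDes parameter samples with MATLAB’s pca function and fits a generic regression surrogate in the reduced space.

    % Synthetic stand-in data: 200 samples of 8 design parameters and one performance metric
    X = randn(200, 8);
    y = X(:,1).^2 - 0.5*X(:,2).*X(:,3) + 0.1*randn(200, 1);

    [~, score, ~, ~, explained] = pca(X);            % principal component analysis
    k = find(cumsum(explained) >= 95, 1);            % keep components covering 95% of variance
    Z = score(:, 1:k);

    surrogate = fitrgp(Z, y);                        % generic surrogate model (stand-in for PCE)
    yhat = predict(surrogate, Z);                    % fast surrogate predictions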

Chris Cheng

Chris Cheng, Hewlett Packard Enterprise


Insights into MATLAB Memory Handling and Datatypes

1:30–2:00 p.m.

In this session you will gain an understanding of how different MATLAB data types are stored in memory and how you can program in MATLAB to use memory efficiently. You can benefit from applying these ideas to develop code that is memory efficient, maintainable, and robust.
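
As a minimal, generic illustration of two ideas likely to come up (this is not the session’s material): preallocate arrays instead of growing them, and choose a smaller numeric type when double precision is not needed.

    n = 1e6;
    x = zeros(n, 1, 'single');   % preallocate; single uses 4 bytes per element vs. 8 for double
    for k = 1:n
        x(k) = sqrt(k);          % fill in place -- no reallocation on each iteration
    end
    whos x                       % report the class and bytes actually used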

Loren Shure

Loren Shure, MathWorks


Deep Learning and Reinforcement Learning Workflows in AI

2:30–3:00 p.m.

AI, or artificial intelligence, is powering a massive shift in the roles that computers play in our personal and professional lives. Two new workflows, deep learning and reinforcement learning, are transforming industries and improving applications such as diagnosing medical conditions, driving autonomous vehicles, and controlling robots.

This talk dives into how MATLAB® supports deep learning and reinforcement learning workflows (a brief training sketch follows the list), including:

  • Automating preparation and labeling of training data
  • Interoperability with open source deep learning frameworks
  • Training deep neural networks on image, signal, and text data
  • Tuning hyperparameters to accelerate training time and increase network accuracy
  • Generating multi-target code for NVIDIA®, Intel®, and ARM®
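
As the brief training sketch promised above, the following hedged example trains a small convolutional network on the digit image dataset that ships with Deep Learning Toolbox™; the layer sizes and options are arbitrary.

    % Small CNN on the shipping digit dataset (illustrative settings only)
    dataPath = fullfile(matlabroot, 'toolbox', 'nnet', 'nndemos', 'nndatasets', 'DigitDataset');
    imds = imageDatastore(dataPath, 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
    layers = [
        imageInputLayer([28 28 1])
        convolution2dLayer(3, 8, 'Padding', 'same')
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer];
    options = trainingOptions('adam', 'MaxEpochs', 2, 'Verbose', false);
    net = trainNetwork(imds, layers, options);       % trained network, ready for code generation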

AI Techniques in MATLAB for Signal, Time-Series, and Text Data

3:30–4:00 p.m.

Using artificial intelligence (AI) techniques on signal and time-series data is growing in popularity across a variety of applications, including speech classification, radar target recognition, digital health, music information retrieval, voice biometrics, and emotion recognition.

In this talk, you will learn how engineers and scientists are using MATLAB® to apply deep learning to signal, time-series, and text data (a short feature-extraction sketch follows the list), including:

  • Ground-truth labeling of signals, both programmatically and with interactive apps
  • Ingestion of existing labeled datasets
  • Feature extraction, time-frequency transformations, and advanced preprocessing techniques (e.g., wavelet scattering)
  • Use of transfer learning with well-established network architectures
  • Collaborating with other deep learning frameworks by importing and exporting ONNX models
  • Accelerating computations using GPUs, locally or on the cloud
  • Deploying models on cloud platforms and embedded processors

The talk will make use of practical worked examples involving deep learning classifiers for radar target detections, EKG signals, and speech commands.
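
As the feature-extraction sketch promised above (using a synthetic chirp rather than the talk’s radar, EKG, or speech data), wavelet scattering features can be extracted in a few lines with Wavelet Toolbox™:

    fs = 1000;                                          % sample rate in Hz
    t  = (0:1/fs:2)';
    x  = chirp(t, 20, 2, 200) + 0.1*randn(size(t));     % synthetic test signal
    sf = waveletScattering('SignalLength', numel(x), 'SamplingFrequency', fs);
    features = featureMatrix(sf, x);                    % scattering features for a classifier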

Bryan Perfetti

Bryan Perfetti, MathWorks


Sensor Fusion and Tracking for Autonomous Systems

4:00–4:30 p.m.

Autonomous systems are a focus for academia, government agencies, and multiple industries. These systems range from road vehicles that meet the various NHTSA levels of autonomy to consumer quadcopters capable of autonomous flight and remote piloting, package-delivery drones, flying taxis, and robots for disaster relief and space exploration. In this talk, you will learn to design, simulate, and analyze systems that fuse data from multiple sensors to maintain position, orientation, and situational awareness. By fusing data from multiple sensors, you achieve a better result than is possible from the output of any individual sensor. Several autonomous system examples are explored to show you how to:

  • Define trajectories and create multiplatform scenarios
  • Simulate measurements from inertial and GPS sensors
  • Generate object detections with sensor models
  • Design multi-object trackers as well as fusion and localization algorithms
  • Evaluate system accuracy and performance on real and synthetic data
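
To make the sensor-simulation steps above concrete, here is a hedged sketch that generates inertial and GPS readings along a simple waypoint trajectory using Sensor Fusion and Tracking Toolbox™ objects; all parameter values are arbitrary.

    traj = waypointTrajectory([0 0 0; 100 0 0; 100 100 0], [0 10 20], 'SampleRate', 100);
    imu  = imuSensor('accel-gyro', 'SampleRate', 100);
    gps  = gpsSensor('SampleRate', 100);

    [pos, orient, vel, acc, angvel] = traj();            % one frame of ground-truth motion
    [accelRead, gyroRead] = imu(acc, angvel, orient);    % simulated IMU measurements
    [gpsPos, gpsVel] = gps(pos, vel);                    % simulated GPS measurements
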
Rick Gentile

Rick Gentile, MathWorks


Deploying Deep Neural Networks to Embedded GPUs and CPUs

4:30–5:00 p.m.

Designing and deploying deep learning and computer vision applications to embedded GPU and CPU platforms like NVIDIA® Jetson, AGX Xavier™, and DRIVE AGX is challenging because of resource constraints inherent in embedded devices. A MATLAB®-based workflow facilitates the design of these applications, and automatically generated C/C++ or CUDA® code can be deployed to achieve up to 2X faster inference than other deep learning frameworks.

This talk walks you through the workflow. It starts with algorithm design; the algorithm may employ deep learning networks augmented with traditional computer vision techniques and can be tested and verified within MATLAB. Live sensor data from peripheral devices on your Jetson/DRIVE platform can be brought into MATLAB running on your host machine for visualization and analysis. Deep learning networks are trained using GPUs and CPUs on the desktop, cluster, or cloud. Finally, GPU Coder™ and MATLAB Coder™ generate portable and optimized CUDA and/or C/C++ code from the MATLAB algorithm, which is then cross-compiled and deployed to Jetson or DRIVE, ARM®, and Intel® based platforms.
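
The final code-generation step might look like the hedged sketch below, where myPredict is a hypothetical MATLAB entry-point function wrapping a trained network; the cuDNN target and the input size are assumptions for illustration.

    cfg = coder.gpuConfig('lib');                               % generate a CUDA static library
    cfg.TargetLang = 'C++';
    cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn'); % use the cuDNN target library
    % myPredict.m is a hypothetical entry point taking a 224x224x3 single-precision image
    codegen -config cfg myPredict -args {ones(224,224,3,'single')} -report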


Digital Twins for Smart Manufacturing

5:00–5:30 p.m.

With the increasing popularity of AI, new frontiers are emerging in predictive maintenance and manufacturing decision science. However, there are many complexities associated with modeling plant assets, training predictive models for them, and deploying these models at scale, including:

  • Generating failure data, which can be difficult to obtain from physical assets; physical simulations can instead be used to create synthetic data covering a variety of failure conditions
  • Ingesting high-frequency data from many sensors, where time alignment makes it difficult to design a streaming architecture

This talk will focus on building a system to address these challenges using MATLAB®, Simulink®, Apache™ Kafka®, and Microsoft® Azure®. You will see a physical model of an engineering asset and learn how to develop a machine learning model for that asset. To deploy the model as a scalable and reliable cloud service, we will incorporate time-windowing and manage out-of-order data with Apache Kafka.
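
As a hedged sketch of the synthetic-failure-data idea (the model and parameter names below are hypothetical), a fault-severity parameter can be swept across parallel Simulink simulations:

    faultLevels = linspace(0, 1, 5);                     % 0 = healthy, 1 = fully degraded
    for k = numel(faultLevels):-1:1
        in(k) = Simulink.SimulationInput('pumpModel');   % hypothetical plant model
        in(k) = setVariable(in(k), 'bearingWear', faultLevels(k));  % hypothetical fault parameter
    end
    out = parsim(in);                                    % run the sweep in parallel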

Pallavi Kar

Pallavi Kar, MathWorks

On the New Generation of Bio-Inspired Robots

11:00–11:30 a.m.

Robots can contribute to our lives to their full potential only when they can learn efficiently, generalize across a variety of tasks, and operate in challenging environments. Here, Ali will discuss some of the solutions biology has found to address these challenges and elaborate on how we can apply them to our robots. In particular, he will talk about learning to perform functional movements without prior knowledge of the system and with only limited experience in the physical world. The effects of simple kinematic feedback on both performance and learning speed will also be discussed. Lastly, plots and videos from both simulation and physical-system implementations will be provided.


Design for AMI - A New Integrated Workflow for Modeling High-Speed PAM4 SerDes Systems

11:30 a.m.–12:00 p.m.

Today’s high-speed SerDes design requires upfront effort by architects to allow for the direct extraction of an IBIS-AMI model from the architectural model. We demonstrate a process of creating an IBIS-AMI model from detailed characterization data of the CTLE, DFE, and CDR. The multi-stage CTLE is defined by frequency domain curves and saturating voltage in/out tables; poles/zeros extracted from the curves by vector fitting are combined with a memoryless nonlinearity to model each CTLE stage. Advanced impulse response equalization adaptation schemes quickly find near-optimum settings and serve as a starting point for custom adaptation implementations.
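
The pole/zero extraction step can be sketched with RF Toolbox’s rationalfit function, which performs a vector-fitting-style rational approximation; the frequency response below is a synthetic stand-in, not measured CTLE data.

    freq = logspace(6, 10, 200).';                       % 1 MHz to 10 GHz
    H = (1 + 1i*freq/2e8) ./ (1 + 1i*freq/2e9);          % synthetic peaking response
    fit = rationalfit(freq, H);                          % extract a pole/zero (rational) model
    resp = freqresp(fit, freq);                          % evaluate the fitted model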

Jonggab Kil

Jonggab Kil, Intel


Verify 5G System Performance Using Xilinx RFSoC and Avnet RFSoC Development Kit

12:00–12:30 p.m.

In this presentation, we demonstrate Ethernet-based connectivity to MATLAB® and Simulink® that allows you to capture, measure, and characterize RF performance with the Avnet Zynq UltraScale+ RFSoC Development Kit. Over-the-air testing is demonstrated using direct RF sampling with a 2x2 small cell LTE Band 3 plug-in card.

During the presentation, we will show how to:

  • Connect to Xilinx RFSoC hardware from MATLAB and Simulink
  • Characterize performance of RFSoC data converters
  • Control an RF front-end for antenna-to-digital verification
  • Perform radio-in-the-loop data capture to example designs from 5G Toolbox™

Matt Brown

Matt Brown, Avnet


RF Design and Test Using MATLAB and NI Tools

1:30–2:00 p.m.

In this session, you will learn how to interface MathWorks wireless design software to NI RF validation test solutions. Using an RF power amplifier (PA) linearization example, we present technologies that streamline the workflow from RF modeling and simulation in MATLAB® and Simulink® to post-silicon device validation on NI PXIe RF instruments.
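
As a hedged, highly simplified stand-in for the PA modeling step in such a workflow, a memoryless polynomial can be fit to a saturating AM/AM characteristic with base MATLAB:

    vin  = linspace(0, 1, 100).';                 % normalized input amplitude
    vout = tanh(2.2*vin) + 0.01*randn(size(vin)); % synthetic saturating PA response
    p    = polyfit(vin, vout, 5);                 % 5th-order memoryless polynomial model
    vfit = polyval(p, vin);                       % model output for comparison or predistortion design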

Tim Reeves

Tim Reeves, MathWorks

Chen Chang

Chen Chang, National Instruments


Wired Communications Systems Modeling and Analysis

2:30–3:00 p.m.

In this presentation, we talk about building wired channel equalization models at a high level of abstraction. This talk introduces the SerDes Toolbox™ and demonstrates putting together blocks such as DFE, FFE, CTLE, AGC, and CDR as part of a channel equalization scheme. It also covers IBIS-AMI model generation for channel simulation.
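
In MATLAB, the equalization blocks mentioned above correspond to System objects in the toolbox’s serdes package; the minimal sketch below only constructs a few of them with default settings, and the composition and property values of a real design are omitted.

    ctle = serdes.CTLE;     % continuous-time linear equalizer
    ffe  = serdes.FFE;      % feed-forward equalizer
    dfe  = serdes.DFECDR;   % decision feedback equalizer with clock/data recovery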

Barry Katz

Barry Katz, MathWorks


Top-Down Modeling and Analysis of Analog Mixed-Signal Systems

3:30–4:00 p.m.

In this presentation, we will talk about a top-down approach to analog/mixed-signal architectural modeling using the Mixed-Signal Blockset™. This session will cover using the Mixed-Signal Blockset to model mixed-signal elements such as phase-locked loops (PLLs) and analog-to-digital converters (ADCs). It will also illustrate how to bring in impairments and validate the performance of the PLL and ADC using test benches and measurement blocks.

Rajesh Berigei

Rajesh Berigei, MathWorks


Understanding and Modeling the 5G NR Physical Layer

4:00–5:00 p.m.

In this presentation, we will provide an understanding of the key physical layer concepts in the 3GPP 5G New Radio (NR) standard. To accomplish the goals of high data rates, low latency, and massive connectivity, 5G introduces new features that add greater flexibility and complexity compared to the 4G LTE standard. This session covers the fundamentals of 5G waveforms, frame structure and numerology, physical channels and signals, synchronization, beam management, and link modeling. Theoretical material will be supplemented with MATLAB demonstrations.
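
As a small taste of the numerology discussion, 5G Toolbox™ can report the OFDM parameters implied by a given carrier bandwidth and subcarrier spacing; the values below are arbitrary examples.

    info = nrOFDMInfo(106, 30);   % 106 resource blocks at 30 kHz subcarrier spacing
    disp(info.SampleRate)         % waveform sample rate
    disp(info.SymbolsPerSlot)     % OFDM symbols per slot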

Marc Barberis

Marc Barberis, MathWorks

Leveraging MATLAB and Simulink in Building Battery SOH

11:00 a.m.–11:30 a.m.

This talk gives an overview of battery state-of-health (SOH) estimation and prognostics modeling with data generated from a vehicle model in the cloud. The vehicle model is a Simulink®-based electric vehicle model that includes Li-Ion cell chemistry-based battery models. When building battery state-of-health pipelines, it is difficult to capture real data from the vehicle across various driving conditions, so we leveraged a calibrated Li-Ion cell chemistry model to generate the required data for those conditions. We pushed this data to the cloud and had the data pipelines pick it up for all downstream processing. This enabled us to build the data pipelines and the analytics stack without extensive vehicle data. Now that we have started receiving real data, we are validating this analytics stack. This talk also discusses leveraging the Simulink code-generation feature to generate C code and its feasibility for real-time, in-vehicle SOH estimation.
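
For the code-generation step, the MATLAB-side command might be as simple as the hedged sketch below; the model name is hypothetical, and the model is assumed to be configured for an embedded code-generation target.

    % Hypothetical: generate C code from a Simulink SOH-estimation model
    slbuild('socEstimatorModel');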


Full Vehicle Simulation for Electrified Powertrain Selection

11:30–12:00 p.m.

Full vehicle simulation models are needed to assess attributes such as fuel economy and performance for each candidate powertrain. At times, this requires integrating models from different engineering teams into a single system-level simulation. Integrating these subsystems, including many controllers in model or code form, into a closed-loop testing environment can be challenging. In this session, you will learn how MathWorks automotive modeling tools and its simulation integration platform can be used for powertrain selection studies.

Kevin Oshiro

Kevin Oshiro, MathWorks


Design and Test of Automated Driving Algorithms

12:00 p.m.–12:30 p.m.

In this talk, you will learn how MathWorks helps you design and test automated driving algorithms (a brief scenario-construction sketch follows the list), including:

  • Perception: Design LiDAR, vision, radar, and sensor fusion algorithms with recorded and live data
  • Planning: Visualize street maps, design path planners, and generate C/C++ code
  • Controls: Design a model-predictive controller for traffic jam assist, test with synthetic scenes and sensors, and generate C/C++ code
  • Deep learning: Label data, train networks, and generate GPU code
  • Systems: Simulate perception and control algorithms, as well as integrate and test hand code
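
As the scenario-construction sketch promised above, a synthetic scene with an ego vehicle and a slower lead car can be set up with Automated Driving Toolbox™; the geometry and speeds are arbitrary.

    scenario = drivingScenario;
    road(scenario, [0 0; 200 0]);                    % straight 200 m road segment
    ego  = vehicle(scenario, 'ClassID', 1);
    lead = vehicle(scenario, 'ClassID', 1);
    trajectory(ego,  [0 0 0; 200 0 0], 25);          % ego vehicle at 25 m/s
    trajectory(lead, [40 0 0; 200 0 0], 15);         % slower lead vehicle ahead
    while advance(scenario)
        % perception, planning, and control algorithms under test would run here
    end
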
Shusen Zhang

Shusen Zhang, MathWorks


Women in Tech Ignite Lunch and Networking

12:30–1:30 p.m.

As part of the Women in Tech initiative, MathWorks will host a Women in Tech Ignite Lunch during this year’s MATLAB EXPO. Join the lunch to hear from leading technical experts and to discuss your experiences, and use this opportunity to meet and network with female industry peers. All are welcome.


Adopting Model-Based Design for FPGA, ASIC, and SoC

1:30–2:00 p.m.

The competing demands of functional innovation, aggressive schedules, and product quality have significantly strained traditional FPGA, ASIC, and SoC development workflows.

This talk shows how you can use Model-Based Design with MATLAB® and Simulink® for algorithm- and system-level design and verification (a brief command-line example follows the list), including how to:

  • Verify the functionality of algorithms in the system context
  • Refine algorithms with data types and architectures suitable for FPGA, ASIC, and SoC implementation
  • Prototype and debug models running live on hardware connected to MATLAB or Simulink
  • Generate and regenerate verified design and verification models for the hardware engineering team
  • Keep the workflow connected to speed verification closure and meet functional safety requirements
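
As the command-line example promised above, the generation step can be exercised on one of HDL Coder’s shipping example models; the model and subsystem names below are taken from that shipping example and should be treated as assumptions.

    load_system('sfir_fixed');                 % shipping HDL Coder example model
    makehdl('sfir_fixed/symmetric_fir');       % generate HDL for the DUT subsystem
    makehdltb('sfir_fixed/symmetric_fir');     % generate a matching HDL test bench
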
Robert Anderson

Robert Anderson, MathWorks


Making Software Safe and Secure with Team Collaboration

2:00–2:30 p.m.

Do you need evidence that your code will not cause safety hazards or security issues? Polyspace® products allow you to achieve the highest levels of software quality with reduced testing effort. Using formal methods-based static code analysis, Polyspace can prove that your code is free from certain critical run-time errors. The analysis can be done interactively by software developers during code development to quickly find coding defects and violations of safety and security standards like MISRA® and CERT C/C++. When used with continuous integration tools such as Jenkins™, Polyspace helps improve software quality, safety, and security across your projects. Results are published for web browser-based code review with tracing information to identify the root cause of defects. Polyspace supports modern team collaboration dashboards that show quality metrics for project and safety managers. Integration with defect tracking tools, such as Jira, helps manage issues across your development enterprise.

Jeff Chapple

Jeff Chapple, MathWorks


Planning Simulink Model Architecture and Modeling Patterns for ISO 26262 Compliance

2:30–3:00 p.m.

The ISO 26262 standard for functional safety provides guidance on the development of automotive electronic and electrical systems, including embedded software. A common challenge is determining the strategy, software architecture, design patterns, and toolchain up front in a project to achieve compliance with the standard and to avoid mid-project changes to these foundational areas. In this presentation, MathWorks engineers will address the following topics based on their experiences applying Simulink® to production programs that require ISO 26262 compliance:

  • Toolchain and reference workflow for ISO 26262 compliance
  • Key considerations for model architecture
  • Modeling constructs required to meet freedom from interference
  • Applying the above best practices while also meeting AUTOSAR requirements

David Hoadley

David Hoadley, MathWorks


Toolchain Definition and Integration for ISO 26262-Compliant Development

3:30–4:00 p.m.

MathWorks tools such as Simulink® and Stateflow® are used extensively for ISO 26262-compliant embedded software development, from ASIL A through ASIL D. However, the algorithmic needs of advanced driver assistance and autonomous driving applications are often expressed more naturally in MATLAB®. We will discuss the challenges and best practices for achieving ISO 26262 compliance in a mixed MATLAB and Simulink workflow. Examples will include applying verification and validation tools to software components authored primarily in MATLAB, and integrating Simulink with collaboration tools such as Git and Gerrit Code Review.

David Hoadley

David Hoadley, MathWorks


Developing Battery Management Systems Using Simulink

4:00–4:30 p.m.

Software algorithms play a critical role in battery management systems (BMS) to ensure maximum performance, safe operation, and optimal life of the battery pack under diverse operating and environmental conditions. Developing and testing these algorithms requires expertise in multiple domains, and achieving functional safety certification can be a confusing and lengthy process. In this talk, you will learn how to:

  • Design and test BMS algorithms such as state-of-charge estimation, cell balancing, contactor management, and current/power limit calculation
  • Generate production quality C/C++ code and target embedded processors
  • Measure design complexity and perform systematic unit testing
  • Prove that your design meets requirements, and automatically generate tests
  • Perform hardware-in-the-loop (HIL) testing using Speedgoat real-time hardware
  • Produce reports and artifacts, and certify to functional safety standards
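
Purely as an illustrative stand-in for the first item in the list above (not the talk’s algorithm), a basic Coulomb-counting state-of-charge estimate can be written in a few lines of MATLAB:

    function soc = coulombCountSOC(current, dt, capacityAh, socInit)
    % current    - pack current in A (positive = discharge), sampled every dt seconds
    % capacityAh - rated pack capacity in Ah; socInit - initial state of charge (0..1)
        drawnAh = cumsum(current) * dt / 3600;                % charge removed so far, in Ah
        soc = min(max(socInit - drawnAh / capacityAh, 0), 1); % clamp to the [0, 1] range
    end
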
Chirag Patel

Chirag Patel, MathWorks