Abstracts

What’s New in MATLAB and Simulink

8:30–9:00 a.m.

Learn about new capabilities in the MATLAB® and Simulink® product families to support your research, design, and development workflows. This talk highlights features for deep learning, wireless communications, automated driving, and other application areas. You will see new tools for defining software and system architectures, and modeling, simulating, and verifying designs.

Mehernaz Savai, MathWorks


Beyond the ‘I’ in AI

9:15–9:45 a.m.

Insight. Implementation. Integration. AI, or artificial intelligence, is transforming the products we build and the way we do business. It also presents new challenges for those who need to build AI into their systems. Creating an “AI-driven” system requires more than developing intelligent algorithms. It also requires:

  • Insights from domain experts to generate the tests, models, and scenarios required to build confidence in the overall system
  • Implementation details including data preparation, compute-platform selection, modeling and simulation, and automatic code generation
  • Integration into the final engineered system

Join us as Mike Agostini demonstrates how engineers and scientists are using MATLAB® and Simulink® to successfully design and incorporate AI into the next generation of smart, connected systems.

Mike Agostini, MathWorks

CAEML Research in Hardware Design and Optimization Using Machine Learning

12:00–12:30 p.m.

The Center for Advanced Electronics through Machine Learning (CAEML) was established in 2016. Much of its research is starting to bear fruit in real-world applications. We will highlight two Hewlett Packard Enterprise applications that use CAEML research results.

The first is a 56G PAM channel optimization and training speed-up using principal component analysis (PCA) and polynomial chaos expansion (PCE) surrogate models. A 56G PAM SerDes and channels with varying loss are measured, and machine learning techniques are used to accelerate the channel optimization process and correctly model the SerDes without running any simulations.

The second is a proactive hardware failure prediction method using machine learning techniques developed by CAEML. The method is currently deployed in production, proactively removing drives from the field before potential performance degradation and data loss occur.

The presentation covers:

  • A brief introduction of CAEML
  • Unique applications of machine learning for hardware design that differ from typical CNN or LSTM neural network applications
  • Demonstration of a 56G PAM SerDes performance optimization using PCA and PCE surrogate models
  • Production application using proactive hardware failure prediction with causal inference to remove bad drives in the field
  • Future investigations of CAEML

CAEML researchers use MATLAB® and related toolboxes extensively throughout the application development process. For example, the standard MATLAB PCA function was used, while custom MATLAB code was developed for the polynomial chaos expansion surrogate models and the causal inference feature selection functions. MATLAB’s rich mathematical libraries allow rapid development of these prototype special-purpose functions. A minimal sketch of the surrogate modeling idea follows.
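To make this concrete, here is a minimal sketch of a PCA-reduced polynomial surrogate using the standard MATLAB pca function and an ordinary least-squares quadratic fit; the data, dimensions, and basis are illustrative stand-ins for the PCE surrogates described above.

    % Minimal sketch: PCA-reduced polynomial surrogate model.
    % X (channel parameters) and y (e.g., eye height) are synthetic
    % placeholders, and the quadratic basis is a simple stand-in for PCE.
    X = randn(200, 10);                          % 200 samples, 10 parameters
    y = sum(X(:,1:3).^2, 2) + 0.1*randn(200,1);  % synthetic response

    [coeff, score] = pca(X);                     % standard MATLAB PCA
    Z = score(:, 1:3);                           % keep 3 principal components

    A = [ones(size(Z,1),1), Z, Z.^2];            % polynomial basis: 1, z, z.^2
    c = A \ y;                                   % least-squares surrogate fit

    xNew = randn(1, 10);                         % new channel configuration
    zNew = (xNew - mean(X)) * coeff(:, 1:3);     % project into PCA space
    yHat = [1, zNew, zNew.^2] * c                % fast surrogate prediction

Once fitted, such a surrogate can be evaluated thousands of times during channel optimization at negligible cost compared to circuit simulation.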

Chris Cheng, Hewlett Packard Enterprise


AI Techniques in MATLAB for Signal, Time-Series, and Text Data

1:30–2:00 p.m.

Developing predictive models for signal, time-series, and text data using artificial intelligence (AI) techniques is growing in popularity across a variety of applications and industries, including speech classification, radar target classification, physiological signal recognition, and sentiment analysis.

In this talk, you will learn how MATLAB® empowers engineers and scientists to apply deep learning beyond the well-established vision applications. You will see demonstrations of advanced signal and audio processing techniques, such as automated feature extraction using wavelet scattering (sketched after the list below) and expanded support for ground truth labeling. The talk also shows how MATLAB covers other key elements of the AI workflow:

  • Use of signal preprocessing techniques and apps to improve the accuracy of predictive models
  • Use of transfer learning and wavelet analysis for radar target and ECG classification
  • Interoperability with other deep learning frameworks through importers and ONNX™ converter for collaboration in the AI ecosystem
  • Scalability of computations with GPUs, multi-GPUs, or on the cloud
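As a taste of the feature extraction step, here is a minimal sketch assuming the waveletScattering object from Wavelet Toolbox™; the chirp is a synthetic stand-in for real ECG or radar signals.

    % Minimal sketch: automated feature extraction via wavelet scattering.
    fs = 1000;                                    % assumed sample rate
    t  = (0:2^13-1)/fs;
    x  = chirp(t, 20, t(end), 200)';              % synthetic test signal

    sf = waveletScattering('SignalLength', numel(x), ...
                           'SamplingFrequency', fs);
    features = featureMatrix(sf, x);              % scattering coefficients

    fv = mean(features, 2);                       % one feature vector per signal
    % fv can then feed a classifier (e.g., fitcecoc) or a small network.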

Bryan Perfetti, MathWorks


Sensor Fusion and Tracking for Autonomous Systems

2:00–2:30 p.m.

Development of increasingly autonomous systems is growing exponentially. These systems range from road vehicles that meet the various NHTSA levels of autonomy to consumer quadcopters capable of autonomous flight and remote piloting, package delivery drones, flying taxis, and robots for disaster relief and space exploration. Work on autonomous systems spans industry, academia, and government agencies.

In this talk, you will learn to design, simulate, and analyze systems that fuse data from multiple sensors to maintain position, orientation, and situational awareness. By fusing data from multiple sensors, you achieve a better result than is possible from the output of any individual sensor. A minimal measurement-simulation sketch follows the list below.

Several autonomous system examples are explored to show you how to:

  • Define trajectories and create multiplatform scenarios
  • Simulate measurements from inertial and GPS sensors
  • Generate object detections with radar, EO/IR, sonar, and RWR sensor models
  • Design multi-object trackers as well as fusion and localization algorithms
  • Evaluate system accuracy and performance on real and synthetic data
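Here is that sketch, assuming Sensor Fusion and Tracking Toolbox™ objects and illustrative waypoints:

    % Minimal sketch: simulate IMU and GPS readings along a trajectory.
    traj = waypointTrajectory([0 0 0; 100 0 0; 100 100 0], [0 10 20], ...
                              'SampleRate', 100);
    imu = imuSensor('accel-gyro', 'SampleRate', 100);
    gps = gpsSensor('SampleRate', 1);

    [pos, orient, vel, acc, angvel] = traj();          % one ground-truth sample
    [accelMeas, gyroMeas] = imu(acc, angvel, orient);  % noisy IMU readings
    [llaMeas, velMeas]    = gps(pos, vel);             % noisy GPS readings
    % These measurements can then feed a fusion filter such as insfilterMARG.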

Deep Learning and Reinforcement Learning Workflows in AI

2:30–3:00 p.m.

AI, or artificial intelligence, is powering a massive shift in the roles that computers play in our personal and professional lives. Two new workflows, deep learning and reinforcement learning, are transforming industries and improving applications such as diagnosing medical conditions, driving autonomous vehicles, and controlling robots.

This talk dives into how MATLAB® supports deep learning and reinforcement learning workflows (a brief transfer learning sketch follows the list), including:

  • Automating preparation and labeling of training data
  • Interoperability with open source deep learning frameworks
  • Training deep neural networks on image, signal, and text data
  • Tuning hyperparameters to accelerate training time and increase network accuracy
  • Generating multi-target code for NVIDIA®, Intel®, and ARM®
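For a flavor of the deep learning side, here is a hedged transfer learning sketch; the image folder, SqueezeNet layer names, and hyperparameter values are illustrative assumptions.

    % Minimal sketch: transfer learning with Deep Learning Toolbox.
    imds = imageDatastore('trainingImages', ...   % hypothetical folder
        'IncludeSubfolders', true, 'LabelSource', 'foldernames');
    numCls = numel(categories(imds.Labels));

    lgraph = layerGraph(squeezenet);              % pretrained network
    lgraph = replaceLayer(lgraph, 'conv10', ...   % retarget final conv layer
        convolution2dLayer(1, numCls, 'Name', 'conv10_new'));
    lgraph = replaceLayer(lgraph, 'ClassificationLayer_predictions', ...
        classificationLayer('Name', 'output'));

    opts = trainingOptions('sgdm', ...            % tunable hyperparameters
        'InitialLearnRate', 1e-3, 'MaxEpochs', 5, 'MiniBatchSize', 64);
    auimds = augmentedImageDatastore([227 227], imds);  % resize for network
    net = trainNetwork(auimds, lgraph, opts);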

Deploying Deep Neural Networks to Embedded GPUs and CPUs

4:30–5:00 p.m.

Designing and deploying deep learning and computer vision applications to embedded GPU and CPU platforms like NVIDIA® Jetson, AGX Xavier™, and DRIVE AGX is challenging because of the resource constraints inherent in embedded devices. A MATLAB®-based workflow facilitates the design of these applications, and automatically generated C/C++ or CUDA® code can be deployed to achieve up to 2X faster inference than other deep learning frameworks.

This talk walks you through the workflow. It starts with algorithm design; the algorithm may employ deep learning networks augmented with traditional computer vision techniques and can be tested and verified within MATLAB. You can bring live sensor data from peripheral devices on your Jetson/DRIVE platforms into MATLAB running on your host machine for visualization and analysis. Deep learning networks are trained using GPUs and CPUs on the desktop, cluster, or cloud. Finally, GPU Coder™ and MATLAB Coder™ generate portable, optimized CUDA and/or C/C++ code from the MATLAB algorithm, which is then cross-compiled and deployed to Jetson or DRIVE, ARM®, and Intel® based platforms.
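As a rough sketch of the final step, assuming GPU Coder and a hypothetical entry-point function myDetector.m that loads a trained network with coder.loadDeepLearningNetwork and calls predict:

    % Minimal sketch: generate CUDA code for a deep learning entry point.
    cfg = coder.gpuConfig('lib');                      % static library target
    cfg.TargetLang = 'C++';
    cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

    % myDetector.m and its input size are hypothetical placeholders
    codegen -config cfg myDetector -args {ones(227,227,3,'single')} -report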


Industrial IoT and Digital Twins

5:00–5:30 p.m.

Industrial IoT has given rise to connected devices that stream information and optimize operational behavior over the course of a device’s lifetime.

This presentation covers how to develop and deploy MATLAB® algorithms and Simulink® models as digital twin and IoT components on assets, edge devices, or the cloud for anomaly detection, control optimization, and other applications. It includes an introduction to how assets, edge devices, and OT/IT components are connected.

The talk features customer use cases from initial design through final operation, the underlying technology, and the results achieved.

Wireless System Design and Prototyping: A Case Study for Next-Generation Wi-Fi Networks for Time-Critical Applications

11:00–11:30 a.m.

Wireless time-sensitive networking (TSN) is an emerging research area that can enable new applications and services for many industrial automation systems that rely on time-synchronized (and timely) communication among sensing, computing, and actuating devices. Demonstrating feasibility on hardware platforms is a required step before wireless technologies can be adopted in soft and hard real-time industrial applications. However, to experiment with time synchronization and other TSN features that control latency and reliability over the wireless medium, it is essential to have access to lower-level MAC and PHY implementations. This presentation introduces a wireless platform for experimental work on the Wi-Fi physical layer. Next-generation Wi-Fi, being defined by the IEEE 802.11ax Task Group, introduces several features and capabilities that can significantly improve support for industrial automation applications.

We have recently demonstrated an 802.11ax baseband experimental implementation (with select features) on an Intel Arria 10 FPGA platform integrated with an off-the-shelf analog front end. This SDR platform enables the development of techniques to optimize latency in FPGA and application-specific implementations. For instance, several latency optimizations were developed using this platform, including parallelization techniques for binary convolutional codes, low-latency streaming Fourier transforms, and tightly pipelined transmit and receive processing chains.

The development of wireless TSN technologies is still in the initial exploratory research stage, but as research and standards evolve, new experimentation platforms, especially SDR-based, will be required to validate the research in practice. Today’s SDR hardware and software tools will need to be enhanced to enable new wireless capabilities as well as implementation optimizations that can address the strict TSN requirements.

Using the 802.11ax baseband design, this presentation demonstrates a workflow for wireless system design that uses MATLAB®, Simulink® modeling, Embedded Coder®, and HDL Coder™ as a unified tool set for rapid prototyping. We discuss software vs. FPGA implementation partitioning based on each deliverable’s objective and tradeoffs.
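As an illustration of the HDL generation step in such a workflow, the following sketch assumes HDL Coder commands and a hypothetical Simulink model txChain containing an OFDMModulator subsystem:

    % Minimal sketch: generate HDL from a Simulink subsystem.
    load_system('txChain');                     % hypothetical model
    hdlsetup('txChain');                        % apply HDL-friendly settings
    makehdl('txChain/OFDMModulator', ...        % generate RTL for the DUT
            'TargetLanguage', 'Verilog');
    makehdltb('txChain/OFDMModulator');         % generate an HDL testbench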

Mikhail Galeev, Intel Labs


Verify 5G System Performance Using Xilinx RFSoC and Avnet RFSoC Development Kit

12:00–12:30 p.m.

In this presentation, we demonstrate Ethernet-based connectivity to MATLAB® and Simulink®, which allows you to capture, measure, and characterize RF performance with the Avnet Zynq UltraScale+ RFSoC Development Kit. Over-the-air testing is demonstrated using direct RF sampling with a 2x2 Small Cell LTE Band 3 plug-in card.
During our presentation, we will demonstrate how to do the following (a hypothetical capture sketch appears after the list):

  • Connect to Xilinx RFSoC hardware from MATLAB and Simulink
  • Characterize performance of RFSoC data converters
  • Control an RF front-end for antenna-to-digital verification
  • Perform radio-in-the-loop data capture to example designs from 5G Toolbox™
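The actual support package API is not reproduced here; purely as a hypothetical sketch of Ethernet-based capture and analysis, assuming raw interleaved int16 I/Q samples streamed over TCP from a made-up board address:

    % Hypothetical sketch: pull I/Q samples into MATLAB over Ethernet.
    t   = tcpclient('192.168.1.10', 4000);        % made-up address and port
    raw = read(t, 2*4096, 'int16');               % interleaved I and Q
    iq  = double(raw(1:2:end)) + 1i*double(raw(2:2:end));

    fs = 1.92e9;                                  % assumed converter rate
    pspectrum(iq, fs, 'FrequencyLimits', [0 500e6])  % inspect the capture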

Matt Brown, Avnet


MATLAB for 5G Wireless: Baseband, RF, and Antennas

1:00–2:00 p.m.

In this presentation, you’ll learn how to apply the latest capabilities in MATLAB® to model, simulate, test, and prototype 5G technologies including baseband algorithms, massive MIMO antenna arrays, and digitally controlled RF front ends. MathWorks engineers will present in-depth examples to demonstrate how you can optimize and verify your 5G design through all phases of development.

You’ll see how to (a brief waveform-generation sketch follows the list):

  • Perform 5G NR PHY simulation, including uplink and downlink processing
  • Generate standards-compliant waveforms for design verification and over-the-air testing with a range of RF instruments
  • Develop smart RF technologies, including power amplifier linearization with digital predistortion (DPD)
  • Model massive MIMO antenna arrays and hybrid beamforming architectures
  • Visualize antenna sites, communication links, and signal coverage on maps
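As a small taste of waveform generation, here is a sketch assuming 5G Toolbox™ (R2019b or later); the grid contents are an illustrative QPSK fill rather than a standards-compliant PDSCH:

    % Minimal sketch: build one OFDM-modulated NR slot.
    carrier = nrCarrierConfig('NSizeGrid', 52, 'SubcarrierSpacing', 30);

    grid = nrResourceGrid(carrier);               % empty slot resource grid
    bits = randi([0 1], 2*numel(grid), 1);
    grid(:) = nrSymbolModulate(bits, 'QPSK');     % fill grid with QPSK

    [txWave, info] = nrOFDMModulate(carrier, grid);  % time-domain waveform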

Marc Barberis, MathWorks

Tim Reeves, MathWorks


Wired Communications Systems Modeling and Analysis

2:30–3:00 p.m.

In this presentation, we talk about building wired channel equalization models at a high level of abstraction. This talk introduces SerDes Toolbox™ and demonstrates assembling blocks such as decision feedback equalizers (DFE), feed-forward equalizers (FFE), continuous-time linear equalizers (CTLE), automatic gain control (AGC), and clock and data recovery (CDR) into a channel equalization scheme. The talk also covers IBIS-AMI model generation for channel simulation.
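To give a flavor of the block-based approach, this sketch assumes SerDes Toolbox System objects (serdes.FFE, serdes.CTLE, serdes.DFECDR) with largely default, illustrative settings:

    % Minimal sketch: cascade equalization stages on a sampled waveform.
    txFFE    = serdes.FFE('TapWeights', [0 0.7 -0.2 -0.1]);  % illustrative taps
    rxCTLE   = serdes.CTLE;                       % default peaking settings
    rxDFECDR = serdes.DFECDR;                     % DFE with clock recovery

    waveIn   = ones(2000, 1);                     % stand-in channel output
    waveOut  = rxDFECDR(rxCTLE(txFFE(waveIn)));   % equalized waveform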

Barry Katz, MathWorks


Master Class: Understanding the 5G NR Physical Layer

3:30–4:00 p.m.

In this presentation, we will provide an understanding of the key physical layer concepts in the 3GPP 5G New Radio (NR) standard. To accomplish the goals of high data rates, low latency, and massive connectivity, 5G introduces new features that add greater flexibility and complexity compared to the 4G LTE standard. This session covers the fundamentals of 5G waveforms, frame structure and numerology, physical channels and signals, synchronization, beam management, and link modeling. Theoretical material will be supplemented with MATLAB demonstrations.
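As a small worked example of NR numerology: subcarrier spacing scales as 15·2^μ kHz, and a 1 ms subframe holds 2^μ slots of 14 OFDM symbols each (normal cyclic prefix).

    % Worked example: 5G NR numerology (3GPP TS 38.211).
    mu  = (0:4)';
    scs = 15 * 2.^mu;                   % subcarrier spacing: 15...240 kHz
    slotsPerSubframe = 2.^mu;           % 1, 2, 4, 8, 16
    slotDuration_ms  = 1 ./ slotsPerSubframe;   % 1 down to 0.0625 ms
    table(mu, scs, slotsPerSubframe, slotDuration_ms)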

Marc Barberis, MathWorks


Top-Down Modeling and Analysis of Analog Mixed-Signal Systems

5:00–5:30 p.m.

In this presentation, we will talk about a top-down approach to analog/mixed-signal architectural modeling using Mixed-Signal Blockset™. This session will cover using Mixed-Signal Blockset to model mixed-signal elements such as phase-locked loops and analog-to-digital converters. It will also illustrate how to introduce impairments and validate the performance of the PLL and ADC using test benches and measurement blocks.

Rajesh Berigei, MathWorks

Leveraging MATLAB and Simulink in Building Battery State-of-Health Estimation Pipelines for Electric Vehicles

11:00–11:30 a.m.

This talk gives an overview of battery state-of-health (SOH) estimation and prognostics modeling with data generated from a vehicle model in the cloud. The vehicle model is a Simulink®-based electric vehicle model that includes Li-ion cell chemistry-based battery models. While building battery state-of-health pipelines, it is difficult to capture real data from the vehicle across the full range of driving conditions. We took the approach of leveraging a calibrated Li-ion cell chemistry model to generate the required data in various driving conditions. We pushed this data to the cloud, then had the data pipelines pick it up and perform all the downstream processing. This enabled us to build the data pipelines and the analytics stack without extensive vehicle data. Now that we have started receiving real data, we are validating this analytics stack. This talk also discusses leveraging the Simulink code generation feature to generate C code and its feasibility for real-time in-vehicle SOH estimation.
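For intuition only, here is a minimal capacity-ratio SOH estimate by coulomb counting; the rated capacity, current profile, and timing are synthetic placeholders, and the production pipeline described above is far richer.

    % Minimal sketch: SOH as measured capacity over rated capacity.
    Crated = 150;                       % rated pack capacity in Ah (assumed)
    t = 0:1:3*3600;                     % 3-hour full discharge, 1 s samples
    I = 45 * ones(size(t));             % synthetic discharge current in A

    Cmeas = trapz(t, I) / 3600;         % integrate current to Ah delivered
    SOH   = Cmeas / Crated;             % capacity ratio (0.9 here)
    fprintf('Estimated SOH: %.1f%%\n', 100*SOH)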

Matthew Daigle, NIO Inc.


Women in Tech Ignite Lunch and Networking

12:30–1:30 p.m.

As part of the Women in Tech initiative, MathWorks will host a Women in Tech Ignite Lunch during this year’s MATLAB EXPO. Join the lunch to hear from leading technical experts, discuss your experiences, and meet and network with female industry peers. All are welcome.


Design and Test of Automated Driving Algorithms

1:30–2:00 p.m.

In this talk, you will learn how MathWorks helps you design and test automated driving algorithms (a minimal scenario sketch follows the list), including:

  • Perception: Design LiDAR, vision, radar, and sensor fusion algorithms with recorded and live data
  • Planning: Visualize street maps, design path planners, and generate C/C++ code
  • Controls: Design a model-predictive controller for traffic jam assist, test with synthetic scenes and sensors, and generate C/C++ code
  • Deep learning: Label data, train networks, and generate GPU code
  • Systems: Simulate perception and control algorithms, as well as integrate and test hand code
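Here is that sketch, assuming Automated Driving Toolbox functions and illustrative geometry:

    % Minimal sketch: a two-vehicle scene for closed-loop testing.
    scenario = drivingScenario;
    road(scenario, [0 0 0; 200 0 0], 'Lanes', lanespec(2));

    ego = vehicle(scenario, 'ClassID', 1);
    trajectory(ego, [0 -1.8 0; 200 -1.8 0], 25);   % ego at 25 m/s

    lead = vehicle(scenario, 'ClassID', 1);
    trajectory(lead, [40 -1.8 0; 200 -1.8 0], 15); % slower lead vehicle

    plot(scenario)
    while advance(scenario); pause(0.01); end      % replay the scene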

Shusen Zhang, MathWorks


Full Vehicle Simulation for Electrified Powertrain Selection

2:00–2:30 p.m.

Full vehicle simulation models are needed to assess attributes such as fuel economy and performance for each candidate powertrain. At times, this requires integrating models from different engineering teams into a single system-level simulation. Integrating these subsystems, including many controllers as models or code, into a closed-loop testing environment can be challenging. In this session, you will learn how MathWorks automotive modeling tools and simulation integration platform can be used for powertrain selection studies.

Kevin Oshiro, MathWorks


Toolchain Definition and Integration for ISO 26262-Compliant Development

3:30–4:00 p.m.

Dave Hoadley, MathWorks


Planning Simulink Model Architecture and Modeling Patterns for ISO 26262 Compliance

4:00–4:30 p.m.

The ISO 26262 standard for functional safety provides guidance on the development of automotive electronic and electrical systems, including embedded software. A common challenge is determining the strategy, software architecture, design patterns, and toolchain up front in a project to achieve standard compliance and avoid mid-project changes to these foundational areas. In this presentation, MathWorks engineers will address the following topics based on their experience applying Simulink® to production programs that require ISO 26262 compliance:

  • Toolchain and reference workflow for ISO 26262 compliance
  • Key considerations for model architecture
  • Modeling constructs required to achieve freedom from interference
  • Applying the above best practices while also meeting AUTOSAR requirements

Dave Hoadley, MathWorks


Adopting Model-Based Design for FPGA, ASIC, and SoC Development

5:00–5:30 p.m.

The competing demands of functional innovation, aggressive schedules, and product quality have significantly strained traditional FPGA, ASIC, and SoC development workflows.

This talk shows how you can use Model-Based Design with MATLAB® and Simulink® for algorithm- and system-level design and verification (a brief fixed-point sketch follows the list), including how to:

  • Verify the functionality of algorithms in the system context
  • Refine algorithms with data types and architectures suitable for FPGA, ASIC, and SoC implementation
  • Prototype and debug models running live on hardware connected to MATLAB or Simulink
  • Generate and regenerate verified design and verification models for the hardware engineering team
  • Keep the workflow connected to speed verification closure and meet functional safety requirements
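As a small example of the data type refinement step, assuming Fixed-Point Designer’s fi object and an illustrative value:

    % Minimal sketch: quantify the effect of a fixed-point data type.
    xDouble = 0.7071;                   % double-precision design value
    xFixed  = fi(xDouble, 1, 16, 15);   % signed, 16-bit word, 15 fraction bits

    err = double(xFixed) - xDouble;     % quantization error to verify
    fprintf('Quantized: %.6f (error %.2e)\n', double(xFixed), err)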

Robert Anderson, MathWorks