Completed Research Projects

Modular Machine Learning for Behavioral Modeling of Microelectronic Circuits and Systems

Modern machine learning algorithms are inherently modular. This modularity, combined with the behavioral approach to system design and simulation, will be leveraged to develop mathematical tools for assessing the performance and minimal data requirements for learning a low-complexity representation of the system behavior, one component or subsystem at a time.

  • Project PIs: Maxim Raginsky, Andreas Cangellaris
  • Research Thrust: Theory and Machine Learning Efficiency
  • Research Timeline: Jan 1, 2017 – May 16, 2020

Intellectual Property Reuse through Machine Learning

The objective of this project was to demonstrate that machine learning can be applied to the problem of recasting an analog or full custom digital design from one technology node to another, assuming the same circuit topology.

  • Project PIs: Paul Franzon, Brian Floyd
  • Research Thrust: Design and System Optimization
  • Research Timeline: Jan 1, 2017 – Dec 31, 2018

Design Rule Checking with Deep Networks

In this seed project, we investigated the feasibility of training a deep convolutional network to perform Design Rule Checking (DRC). By replacing DRC with a recognition network, we hope to greatly speed it up. After demonstrating initial feasibility, we investigated tying DRC to interactive layout tools in subsequent years.

  • Project PIs: Paul Franzon, Rhett Davis
  • Research Thrust: Verification
  • Research Timeline: Jan 1, 2017 – Dec 31, 2017

Behavioral Model Development for High-Speed Links

Systematically develop a hierarchy of behavioral models of circuits that protect IP and match the accuracy of transistor-level models while requiring 25–50X less CPU time and memory.

  • Project PIs: Madhavan Swaminathan, Paul Franzon, Jose Schutt-Aine
  • Research Thrust: Modeling and Simulation; Design and System Optimization
  • Research Timeline: Jan 1, 2017 – Aug 15, 2020

Models to Enable System-level Electrostatic Discharge Analysis

ML is used to create ESD models of the system’s nonlinear components, as needed for safe operating area (SOA) analysis and soft-failure prediction. The models are targeted at circuit or mixed-mode (EM-circuit) simulators.

  • Project PIs: Elyse Rosenbaum, Maxim Raginsky
  • Research Thrust: Reliability and Security
  • Research Timeline: Jan 1, 2017 – May 31, 2019

Optimization of Power Delivery Networks for Maximizing Signal Integrity

Develop ML-based methods to optimize the system output response over a large set of design (control) parameters, and co-optimize the signal path and power delivery network in a multi-physics environment to maximize performance.

  • Project PIs: Madhavan Swaminathan, Chuanyi Ji
  • Research Thrust: Design and System Optimization
  • Research Timeline: Jan 1, 2017 – Dec 31, 2018

Machine Learning for Trusted Platform Design

Use ML techniques to assess whether an IoT system is under cyber attack via power or RF side-channels, and develop hardware countermeasures to identify and nullify such attacks.

  • Project PIs: Arijit Raychowdhury, Madhavan Swaminathan
  • Research Thrust: Reliability and Security
  • Research Timeline: Jan 1, 2018 – May 7, 2020

Machine Learning to Predict Successful FPGA Compilation Strategy

Produce FPGA compilation recipes that achieve a high success rate and fast compilation times.

  • Project PI: Sungkyu Lim
  • Research Thrust: Design and System Optimization; Verification
  • Research Timeline: Jan 1, 2018 – Dec 31, 2019

Causal Inference for Early Detection of Hardware Failure

Use time-series sensor data to detect wear-out of a hardware component, e.g., an HDD or SSD in a storage array. Longitudinal causal inference techniques will omit redundant covariates, i.e., features that might be correlated with the failure but do not help in the prediction task.

  • Project PIs: Negar Kiyavash, Maxim Raginsky, Elyse Rosenbaum
  • Research Thrust: Theory and Machine Learning Efficiency
  • Research Timeline: Jan 1, 2018 – May 31, 2019

Fast, Accurate PPA Model-Extraction

Earlier work in CAEML (“Applying Machine Learning to Back End IC Design”) will be used to generate models of performance, area and static power. This project focuses on the important missing piece—dynamic power prediction. Parameterized RTL source code and a test-bench with embedded architectural event counters must be provided for each circuit block. This work seeks to eliminate the complicated gate-level simulations presently needed to make accurate predictions of power, which typically occur very late in the design process. This project will develop a comprehensive data mining methodology to maximize the accuracy of PPA predictions while minimizing the data collection effort. A central challenge of this project is to show that high-level events are meaningful predictors of dynamic power.

  • Project PIs: Rhett Davis, Paul Franzon, Dror Baron, Eric Rotenberg
  • Research Thrust: Design and System Optimization
  • Research Timeline: Jan 1, 2019 – May 7, 2020
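
The project's central claim, that high-level architectural events can predict dynamic power, can be illustrated with a small regression sketch. This is a minimal demonstration on synthetic data, not the project's actual methodology; the event types, energy coefficients, and noise level below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training data: per-interval architectural event counts
# (e.g., cache accesses, ALU ops, branch mispredicts) and measured power.
n_samples, n_events = 200, 3
events = rng.integers(0, 1000, size=(n_samples, n_events)).astype(float)
true_energy_per_event = np.array([0.8, 0.3, 1.5])  # assumed, for the demo
power = events @ true_energy_per_event + rng.normal(scale=5.0, size=n_samples)

# Least-squares fit: if high-level events are meaningful predictors of
# dynamic power, the learned per-event coefficients should recover the
# underlying energy costs, without any gate-level simulation.
X = np.column_stack([events, np.ones(n_samples)])  # intercept = idle power
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((power - pred) ** 2) / np.sum((power - power.mean()) ** 2)
```

In practice the event counters come from the testbench instrumentation the abstract mentions, and a richer model than plain least squares would be fitted; the sketch only shows why event counts can stand in for gate-level activity.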

High-Dimensional Structural Inference for Non-Linear Deep Markov or State Space Time Series Models

In many applications, a time series of high-dimensional latent vector variables is observed indirectly from noisy measurements. The data are used to predict future failures, and the system can respond accordingly. This project will investigate deep Markov models (DMMs), in which an inference network approximates a posterior probability for the time-dynamics of latent variables by running a multi-layer perceptron (MLP) neural network. We will implement and develop a DMM system that can cope with various types of statistical structure among the features, and pay close attention to scaling the computation as the dimensionality increases.

  • Project PIs: Dror Baron, Rhett Davis, Paul Franzon
  • Research Thrust: Theory and Machine Learning Efficiency
  • Research Timeline: Jan 1, 2019 – May 7, 2020
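
The structure described above can be sketched in a few lines of numpy: an MLP transition model generates stochastic latent dynamics, an emission model produces noisy observations, and a separate MLP inference network maps each observation to an approximate posterior (mean and log-variance) over the latent state. All dimensions and parameters here are hypothetical, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """Two-layer perceptron with tanh hidden units."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Hypothetical dimensions: latent state z_t in R^4, observation x_t in R^8.
dz, dx, dh = 4, 8, 16
params = {k: rng.normal(scale=0.1, size=s) for k, s in {
    "W1t": (dz, dh), "b1t": (dh,), "W2t": (dh, dz), "b2t": (dz,),       # transition
    "W1e": (dz, dh), "b1e": (dh,), "W2e": (dh, dx), "b2e": (dx,),       # emission
    "W1i": (dx, dh), "b1i": (dh,), "W2i": (dh, 2 * dz), "b2i": (2 * dz,),  # inference
}.items()}

def generate(T):
    """Sample a trajectory from the generative model p(z_t|z_{t-1}) p(x_t|z_t)."""
    z, xs = np.zeros(dz), []
    for _ in range(T):
        z = mlp(z, params["W1t"], params["b1t"], params["W2t"], params["b2t"]) \
            + 0.05 * rng.normal(size=dz)          # stochastic transition
        xs.append(mlp(z, params["W1e"], params["b1e"], params["W2e"], params["b2e"])
                  + 0.05 * rng.normal(size=dx))   # noisy emission
    return np.array(xs)

def infer(xs):
    """Inference network: amortized mapping from observations to an
    approximate posterior mean and log-variance over the latent state."""
    out = mlp(xs, params["W1i"], params["b1i"], params["W2i"], params["b2i"])
    return out[:, :dz], out[:, dz:]

xs = generate(T=20)
mu, logvar = infer(xs)  # per-step approximate posterior over z_t
```

The scaling concern raised in the abstract shows up here directly: the inference MLP is evaluated once per time step, so its cost grows with both the latent dimensionality and any structure (e.g., correlations among features) the posterior must capture.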

Applying Machine Learning to Back End IC Design

Back-end design refers to the physical design of an ASIC, including place and route. The goal of this project is to use machine learning to set up the physical design tools on a design-by-design basis so that optimum results can be achieved with minimal human interaction.

  • Project PIs: Rhett Davis, Paul Franzon, Dror Baron
  • Research Thrust: Design and System Optimization; Verification
  • Research Timeline: Jan 1, 2018 – Dec 31, 2020

NL2PPA: Netlist-to-PPA Prediction Using Machine Learning

This project aims to build machine learning models and develop associated tools to predict PPA (power/performance/area) given an RTL description of a circuit, eliminating the need to undertake the lengthy physical design process. Using the predicted PPA results, designers can fix and/or improve RTL in turn. The inputs to the model include the target technology specs (e.g., technology node, supply voltage, target frequency), netlist info (e.g., number of IPs/gates/nets, connectivity), physical design options (e.g., footprint, placement density, P&R algorithms, clock/power network options), and other key features that will help improve the prediction accuracy.

  • Project PI: Sungkyu Lim
  • Research Thrust: Design and System Optimization
  • Research Timeline: Jan 1, 2019 – May 15, 2021

RNN Models for Computationally-Efficient Simulation of Circuit Aging Including Stochastic Effects

This project will develop a method for accurate and efficient simulation of circuit aging due to hot carrier injection (HCI) and bias temperature instability (BTI). For design-technology co-optimization (DTCO), the simulations must cover the range of use conditions, i.e., the “mission profile,” which includes the input vector, and both the deterministic and stochastic aspects of aging should be simulated. Each circuit block, i.e., library cell or IP block, will be represented by a limited-complexity black-box model, such as an RNN, that takes the circuit’s total operating time as one of its inputs. Ensuring the stability of each black-box model, both alone and when interconnected with other circuit models, is a significant research challenge.

  • Project PIs: Elyse Rosenbaum, Maxim Raginsky
  • Research Thrust: Modeling and Simulation; Reliability and Security
  • Research Timeline: Jan 1, 2019 – May 15, 2021
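
The black-box idea, an RNN whose inputs include the circuit's total operating time and whose recurrent dynamics are kept stable, can be sketched as follows. The weight scaling shown (spectral radius below 1, giving a contractive autonomous map) is one simple sufficient condition for stability, not the project's actual criterion, and all sizes and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 2 electrical inputs plus 1 "operating time" input.
n_in, n_h, n_out = 3, 8, 1

W_in = rng.normal(scale=0.3, size=(n_in, n_h))
W_h = rng.normal(scale=0.3, size=(n_h, n_h))
# Scale recurrent weights so their spectral radius is below 1: with a
# 1-Lipschitz tanh nonlinearity, the autonomous dynamics are contractive.
W_h *= 0.9 / max(abs(np.linalg.eigvals(W_h)))
W_out = rng.normal(scale=0.3, size=(n_h, n_out))

def aged_response(u, t_age):
    """Run the black-box RNN over input waveform u (T x 2), appending the
    total operating time t_age to every input sample."""
    h, ys = np.zeros(n_h), []
    for u_t in u:
        x = np.concatenate([u_t, [t_age]])
        h = np.tanh(x @ W_in + h @ W_h)
        ys.append(h @ W_out)
    return np.array(ys)

u = rng.normal(size=(50, 2))
fresh = aged_response(u, t_age=0.0)
aged = aged_response(u, t_age=1.0)  # same stimulus after aging stress
```

Because aging enters as an ordinary input, one trained model covers the whole mission profile; the open research question the abstract flags is whether stability is preserved when many such models are interconnected in a circuit simulator.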

High-Speed Bus Physical Design Analysis through Machine Learning

This project will create a dynamic ML ecosystem that characterizes the electrical performance of each net in a given PCB/package layout file with confidence bounds, leveraging pre-PD simulation to collect training data. Stochastic collocation will be used to account for manufacturing tolerances, and nets will be ranked in descending order of SI performance to identify bottlenecks in the system.

  • Project PIs: Xu Chen, Madhavan Swaminathan
  • Research Thrust: Design and System Optimization; Modeling and Simulation; Verification
  • Research Timeline: Jan 1, 2019 – May 15, 2021

Enabling Side-Channel Attacks on Post-Quantum Protocols through Machine Learning

The primary purpose of this project is to enable single-trace power side-channel attacks on post-quantum key-exchange protocols using machine learning and to quantify the strength of timing obfuscation defenses against those attacks. The central questions to be addressed are whether machine-learning classifiers provide stronger attacks than conventional ones in the context of post-quantum cryptosystems, and to what extent obfuscation methods can hide the vulnerability.

  • Project PI: Aydin Aysu
  • Research Thrust: Reliability and Security
  • Research Timeline: Jan 1, 2019 – May 15, 2021

Design Space Exploration Using DNN

Designing in advanced semiconductor manufacturing processes brings area, speed, power, and other benefits, but also new performance challenges that result from the physics of driving current through tiny wires. Oftentimes there are post-tape-out escapes at both the silicon and packaging levels due to inadequate analysis at an early design stage, sometimes caused by a lack of time or by inaccurate assumptions made by the designer. We address these challenges by focusing on early Design Space Exploration (DSE). We believe such a solution would be applicable at various levels of the system hierarchy.

  • Project PI: Madhavan Swaminathan
  • Research Thrust: Design and System Optimization; Reliability and Security
  • Research Timeline: Jan 1, 2019 – May 15, 2021

FPGA Hardware Accelerator for Real Time Security

Determine design approaches for building real-time detection systems for ransomware defense, with a focus on random forest ML. Investigate the training support needs as well. Identify higher-level ML approaches that support model updates without redesign.

  • Project PI: Paul Franzon
  • Project Timeline: Jan 1, 2020 – Dec 31, 2021

Machine Learning Method for Inverse Design and Optimization of High-Speed Links

Our plan is to develop a machine learning-driven method for solving inverse and optimization problems in hardware design. We will leverage forward surrogate models developed in earlier CAEML research, along with additional training and dimensionality-reduction methods, to synthesize candidate designs that meet specified performance objectives, speeding up the design process.

  • Project PIs: Xu Chen, Andreas Cangellaris
  • Project Timeline: Jan 1, 2021 – July 31, 2023
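
The inverse-design idea, searching a cheap forward surrogate for parameters that hit a performance target instead of iterating full simulations, can be sketched as follows. The surrogate, the two parameter names, and the target value are invented for illustration; real CAEML surrogates would be learned models over many more dimensions.

```python
import numpy as np

def forward_surrogate(params):
    """Hypothetical trained forward model: link eye height as a smooth
    function of two design parameters (driver strength, equalizer tap)."""
    g, tap = params
    return 0.5 - (g - 0.7) ** 2 - 2.0 * (tap - 0.2) ** 2

# Inverse problem: find design parameters achieving a target eye height.
# Because surrogate evaluations are cheap, even brute-force search over a
# grid is feasible; gradient-based or learned inverse maps scale further.
target = 0.45
best, best_err = None, np.inf
for g in np.linspace(0.0, 1.0, 101):
    for tap in np.linspace(0.0, 0.5, 101):
        err = abs(forward_surrogate((g, tap)) - target)
        if err < best_err:
            best, best_err = (g, tap), err
```

The candidate design found this way would then be verified with a single full simulation, which is where the speed-up over simulate-in-the-loop optimization comes from.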

High-Dimensional Optimization and Inverse Methods for Electronic Design

This project aims to use high-dimensional optimization approaches in electronic system design. Techniques that will be considered include surrogate modeling, Bayesian optimization, and inverse neural networks.

  • Project PIs: Paul Franzon, Brian Floyd, and Dror Baron
  • Project Timeline: Jan 1, 2021 – July 31, 2023
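
Bayesian optimization, one of the techniques named above, fits a Gaussian-process surrogate to the evaluations collected so far and picks the next design point via an acquisition function. A minimal 1-D numpy sketch with a lower-confidence-bound acquisition; the objective, kernel length scale, and bound are stand-ins, not real circuit quantities:

```python
import numpy as np

def objective(x):
    """Hypothetical stand-in for an expensive circuit evaluation (minimize)."""
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP surrogate: posterior mean/std at test points Xs given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    cov = rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

# BO loop: fit surrogate, pick the point minimizing the lower confidence
# bound (mean minus 2 std), evaluate the expensive objective, repeat.
X = np.array([-1.5, 0.0, 1.5])
y = objective(X)
grid = np.linspace(-2.0, 2.0, 401)
for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmin(mu - 2.0 * sd)]  # LCB acquisition
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_x = X[np.argmin(y)]
```

The acquisition term trades off exploiting the current minimum of the surrogate mean against exploring regions of high uncertainty, which is why BO needs far fewer evaluations than grid search as the design space grows.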

GAN-Generated Models for Signal Integrity, Thermal Modeling, and More

This project will use conditional GANs to model drivers and receivers and to support thermal modeling.

  • Project PIs: Paul Franzon, Chau-Wai Wong, Tianfu Wu, Rhett Davis, and Dror Baron
  • Project Timeline: Jan 1, 2021 – July 31, 2023

ML-Based Security Analysis of Homomorphic Encryption Side-Channels

This project explores implementation security issues of homomorphic encryption solutions. Specifically, the project aims to expose the first side-channel vulnerabilities of edge devices executing homomorphic encryption/decryption and to integrate the side-channel analysis into standard EDA flows. Machine learning techniques will enable automating the proposed work and achieving effective attacks that outperform conventional ones.

  • Project PIs: Aydin Aysu and Paul Franzon
  • Project Timeline: Jan 1, 2021 – July 31, 2023