In the last two decades, we have witnessed a dramatic increase in the scale of chip designs. Today, multi-core system-on-a-chip designs with billions of transistors and embedded processors power everything from cell phones to cars. Recent advances in science and engineering have also led to the proliferation of cyber-physical systems. These are systems whose correct operation hinges on the complex interaction and integration of computational processes, physical processes and human activities. Drones, telesurgery robots, self-driving cars, body-sensor networks, smart dust, and building automation systems are all examples of cyber-physical systems. As the quality of our lives now depends on the correct functioning of these devices and systems, it is paramount to ensure that they are safe, reliable and secure.

Current Projects:

A.I. Safety

Artificial Intelligence (A.I.) is on the verge of ubiquity. The rise of AlphaGo and Watson suggests that A.I. might soon replace humans as the decision maker in many applications, ranging from driving a car to making medical diagnoses. At the same time, A.I. systems are becoming increasingly complex, and it is daunting that we are poised to deploy these systems without a solid understanding of their complexities and implications: why did the machine make this decision? Why not something else? When does it fail? How can we trust the machine if we don’t fully understand it? The somewhat dismal status quo is that A.I. is often treated as a silver bullet or a magic black box in many applications. In this project, we will develop a combination of machine learning and computational proof techniques to (im)prove A.I. safety, explain A.I. decisions, and ultimately make A.I. systems more trustworthy.
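
One building block behind computational proof techniques for neural networks is interval bound propagation: pushing a box of possible inputs through the network to obtain sound bounds on its outputs. The sketch below is a minimal, self-contained illustration; the two-layer ReLU network and its weights are invented for this example and are not taken from any of the publications below.

```python
# Minimal interval bound propagation (IBP) through a tiny ReLU network.
# The 2-2-1 network weights below are illustrative, not from a real model.

def affine_bounds(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W x + b, coordinate-wise."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.5]

def output_bounds(lo, hi):
    lo, hi = affine_bounds(lo, hi, W1, b1)
    lo, hi = relu_bounds(lo, hi)
    return affine_bounds(lo, hi, W2, b2)

lo, hi = output_bounds([-0.1, -0.1], [0.1, 0.1])
# If hi[0] stays below a safety threshold, the property holds for the
# entire input box; otherwise the box is split and the analysis refined.
```

Real tools (including reachability analyses like those cited below) use much tighter abstractions than plain intervals, but the soundness argument has this same shape.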

Students: Weichao Zhou, Jiameng Fan, Panagiota Kiourti, Feisi Fu

Relevant publications:

  • TrojDRL: Evaluation of Backdoor Attacks on Deep Reinforcement Learning.
    Panagiota Kiourti, Kacper Wardega, Susmit Jha and Wenchao Li.
    ACM/EDAC/IEEE Design Automation Conference (DAC), July 2020.
  • Towards Verification-Aware Knowledge Distillation for Neural-Network Controlled Systems.
    Jiameng Fan, Chao Huang, Wenchao Li, Xin Chen and Qi Zhu.
    International Conference on Computer Aided Design (ICCAD), November 2019.
  • ReachNN: Reachability Analysis of Neural-Network Controlled Systems.
    Chao Huang, Jiameng Fan, Wenchao Li, Xin Chen and Qi Zhu.
    International Conference on Embedded Software (EMSOFT), October 2019.
  • Safety-Guided Deep Reinforcement Learning via Online Gaussian Process Estimation.
    Jiameng Fan and Wenchao Li.
    International Conference on Learning Representations (ICLR), Workshop on Safe Machine Learning: Specification, Robustness, and Assurance, May 2019.
  • Safety-Aware Apprenticeship Learning.
    Weichao Zhou and Wenchao Li.
    International Conference on Computer Aided Verification (CAV), July 2018.
  • Polynomial-Time Verification of PCTL Properties of MDPs with Convex Uncertainties.
    Alberto Puggelli, Wenchao Li, Alberto L. Sangiovanni-Vincentelli and Sanjit A. Seshia.
    International Conference on Computer Aided Verification (CAV), July 2013. [Longer Tech. Report Version]

Secure Multi-Robot Systems

Recent trends in industrial automation indicate an ever-increasing adoption of autonomous mobile robots in production systems (the Kiva system in Amazon’s warehouses is a prime and large-scale example). In parallel, there is an accelerating trend of large-scale hacking events directed against high-profile companies (Equifax, Target, Walmart, Yahoo, Adobe, etc.) that have resulted in serious security breaches. Until now, these breaches have been limited to data, but it is easy to imagine that similar incidents in the near future could take advantage of the increasing amount of automation to expand the attack’s reach to physical damage. It is therefore of critical importance to devise strategies that can preemptively address these threats. In this project, we propose a novel cyber-physical framework that uses planning and physical measurements to improve the security of networks of advanced robots against malicious cyber attacks that can compromise individual robots and sabotage the task of the group.
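
The physical-measurement side of this idea can be conveyed with a toy consistency check (the robot names, positions, and threshold below are all hypothetical): an observer compares a robot’s self-reported position against its own measurement and flags a potential masquerade when the discrepancy cannot be explained by sensor noise.

```python
import math

# Toy detector: flag a robot whose reported position is physically
# inconsistent with an observer's measurement. All values are illustrative.

NOISE_BOUND = 0.5  # assumed worst-case combined sensor error (meters)

def inconsistent(reported, measured, bound=NOISE_BOUND):
    """True if the report cannot be explained by measurement noise."""
    dx = reported[0] - measured[0]
    dy = reported[1] - measured[1]
    return math.hypot(dx, dy) > bound

reports = {"r1": (2.0, 3.0), "r2": (5.0, 1.0)}       # self-reported poses
observations = {"r1": (2.1, 3.2), "r2": (7.5, 1.0)}  # observer measurements

suspects = [r for r in reports if inconsistent(reports[r], observations[r])]
# Here only "r2" is flagged: its report deviates far beyond the noise bound.
```

The observation-planning work cited below addresses the harder question this sketch ignores: deciding *when and where* robots should observe one another so that such deviations cannot go undetected.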

Student: Kacper Wardega

Relevant publications:

  • Resilience of Multi-Robot Systems to Physical Masquerade Attacks.
    Kacper Wardega, Roberto Tron and Wenchao Li.
    IEEE Workshop on the Internet of Safe Things (SafeThings), May 2019.
  • Masquerade Attack Detection Through Observation Planning for Multi-Robot Systems.
    Kacper Wardega, Roberto Tron and Wenchao Li.
    International Conference on Autonomous Agents and Multiagent Systems (AAMAS), May 2019.

Extensibility-Driven Design of Cyber-Physical Systems

A longstanding problem in the design of CPSs is the inability to cope effectively with software and hardware evolution over the lifetime of a design or across multiple versions in the same product family. For example, the Boeing 777 still uses the discontinued AMD 29050 microprocessor running an MS-DOS file system for its Airplane Information Management System and Air Data Inertial Reference System. A fundamental reason for this is that small changes in resource usage can cause big and unexpected changes in timing, ultimately affecting the functionality of the design. Engineers need to ensure that any changes not only (1) meet the constraints of the embedded platform, such as schedulability and power constraints, but also (2) preserve the correctness of functional properties, many of which are affected by the platform changes. Systems that are designed without future changes in mind often incur significant redesign and re-verification costs, as well as reduced system availability and reliability. In this project, we treat extensibility as a first-class design objective, and aim to develop an extensibility-driven design (EDD) flow in which different models and metrics of extensibility are considered jointly with other objectives at design time, supported by new synthesis and verification tools that can reason about design updates efficiently. This is a collaborative project with Northwestern University, funded by NSF CPS.
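
As a concrete instance of constraint (1), schedulability after a change can be screened with classical tests. The sketch below applies the Liu–Layland utilization bound for rate-monotonic scheduling to a hypothetical task set, asking whether a software update that inflates one task’s execution time still fits on the platform; the task parameters are invented for illustration.

```python
# Liu-Layland sufficient schedulability test for rate-monotonic scheduling:
# n periodic tasks are schedulable if sum(C_i / T_i) <= n * (2**(1/n) - 1).
# The task sets below are hypothetical (wcet, period) pairs.

def rm_schedulable(tasks):
    """Sufficient test only: False means 'needs a finer analysis', not 'unschedulable'."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)

baseline = [(1, 4), (1, 5), (2, 10)]   # U = 0.65, bound for n=3 is ~0.780
updated  = [(1, 4), (2, 5), (2, 10)]   # U = 0.85, exceeds the bound

# The baseline passes; the update fails the sufficient test and would need
# an exact response-time analysis, or a redesign, before deployment.
```

An extensibility-driven flow would perform this kind of check (with exact, not just sufficient, analyses) automatically and incrementally for every proposed update.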

Student: Kacper Wardega

Relevant publications:

  • Opportunistic Intermittent Control with Safety Guarantees for Autonomous Systems.
    Chao Huang, Shichao Xu, Zhilu Wang, Shuyue Lan, Wenchao Li and Qi Zhu.
    ACM/IEEE Design Automation Conference (DAC), July 2020.
  • Formal Verification of Weakly-Hard Systems.
    Chao Huang, Wenchao Li and Qi Zhu.
    International Conference on Hybrid Systems: Computation and Control (HSCC), April 2019.
  • Exploring Weakly-hard Paradigm for Networked Systems.
    Chao Huang, Kacper Wardega, Wenchao Li and Qi Zhu.
    Workshop on Design Automation for CPS and IoT (DESTION), April 2019.
  • Extensibility-Driven Automotive In-Vehicle Architecture Design.
    Qi Zhu, Hengyi Liang, Licong Zhang, Debayan Roy, Wenchao Li and Samarjit Chakraborty.
    ACM/IEEE Design Automation Conference (DAC), June 2017.
  • Delay-Aware Design, Analysis and Verification of Intelligent Intersection Management.
    Bowen Zheng, Chung-Wei Lin, Hengyi Liang, Shinichi Shiraishi, Wenchao Li and Qi Zhu.
    IEEE International Conference on Smart Computing (SMARTCOMP), May 2017.
  • Design Automation of Cyber-Physical Systems: Challenges, Advances, and Opportunities.
    Sanjit A. Seshia, Shiyan Hu, Wenchao Li and Qi Zhu.
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2017.

Verification and Synthesis of Human-in-the-Loop Controllers

This project is motivated by the emergence of automated driver-assistance systems and smart factories where human operators and industrial robots work together for the manufacturing and delivery of goods. A common theme in these settings is the deployment of control systems in safety-critical settings that involve the interaction of autonomous controllers with human operators. The correctness of such systems depends not only on the autonomous controller, but also on the actions of the human operator. Our previous work has provided a formalism for human-in-the-loop controllers, and explored automata-theoretic techniques for synthesizing provably safe controllers from high-level temporal logic specifications. In this project, we will build on these results and investigate novel verification and synthesis techniques for richer models (e.g., hybrid and probabilistic) and more complex modes of interaction.
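
The flavor of the automata-theoretic approach can be conveyed with a minimal sketch (the model below is invented for illustration, not taken from our papers): represent the closed-loop system as a finite transition system over (autonomy mode, operator attention) pairs and verify by exhaustive search that no unsafe state is reachable.

```python
from collections import deque

# Toy safety check: exhaustively explore a finite model of a
# human-in-the-loop controller and verify unsafe states are unreachable.
# States pair an autonomy mode with operator attentiveness; the
# transition relation is hypothetical.

transitions = {
    ("auto", "attentive"):      [("auto", "attentive"), ("auto", "distracted")],
    ("auto", "distracted"):     [("handover", "distracted")],  # alert the human
    ("handover", "distracted"): [("manual", "attentive")],     # human takes over
    ("manual", "attentive"):    [("auto", "attentive")],
}
unsafe = {("manual", "distracted")}  # human driving while distracted

def safe(initial):
    """Breadth-first reachability: True iff no unsafe state is reachable."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state in unsafe:
            return False
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# In this toy model the alert/handover protocol keeps the unsafe
# state unreachable from ("auto", "attentive").
```

Synthesis reverses this direction: rather than checking a hand-written transition relation, it constructs one that satisfies the temporal logic specification by construction.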

Student: Jiameng Fan

Relevant publications:

  • Design Automation of Cyber-Physical Systems: Challenges, Advances, and Opportunities.
    Sanjit A. Seshia, Shiyan Hu, Wenchao Li and Qi Zhu.
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), September 2017.
  • Synthesis for Human-in-the-Loop Control Systems.
    Wenchao Li, Dorsa Sadigh, S. Shankar Sastry and Sanjit A. Seshia.
    International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), April 2014.

Assured Communication with Distributed, Low-Power Devices

Student: Kacper Wardega

Relevant publications:

  • Application-Aware Scheduling of Networked Applications over the Low-Power Wireless Bus.
    Kacper Wardega and Wenchao Li.
    Design, Automation and Test in Europe Conference (DATE), March 2020.

Past Projects:

Dynamic Adaptive Embedded Software (DyAdEm)

Adaptivity is a fundamental requirement of embedded systems. These systems often face changes in computing platforms, available resources, physical environments, operator skills, and functional and performance requirements. This project aims to develop the principles and tools for designing adaptive embedded software. Specifically, we propose an integrative framework that can (1) learn what and how much has changed (e.g., through runtime monitoring), (2) use language-based approaches (e.g., program transformation, program synthesis) to automatically adapt the system to these changes, and (3) regulate the adaptation through verification or an assurance case so that changing the system does not result in violations of safety requirements, which could cause catastrophic failures. We will explore both online and offline approaches to these three problems. This is a collaborative project with SRI International, UC Berkeley, and Honeywell Research Labs, funded under the DARPA BRASS program.
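
Step (1), learning at runtime what has changed, can be illustrated with a toy drift monitor (the monitored quantity, readings, and thresholds are all hypothetical): compare a sliding-window average of an observed quantity against its design-time assumption, and signal that adaptation is needed when they diverge.

```python
from collections import deque

# Toy runtime monitor for step (1): detect when an observed quantity
# (say, sensor latency in ms) drifts from its design-time assumption.
# Nominal value, tolerance, and readings are illustrative.

class DriftMonitor:
    def __init__(self, nominal, tolerance, window=5):
        self.nominal = nominal
        self.tolerance = tolerance
        self.samples = deque(maxlen=window)

    def observe(self, value):
        """Record a sample; return True once adaptation should trigger."""
        self.samples.append(value)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data yet
        mean = sum(self.samples) / len(self.samples)
        return abs(mean - self.nominal) > self.tolerance

monitor = DriftMonitor(nominal=10.0, tolerance=2.0)
readings = [10.1, 9.8, 10.2, 14.0, 15.5, 16.0, 15.8]
alerts = [monitor.observe(r) for r in readings]
# The trailing readings push the window mean past the tolerance,
# signaling that the platform no longer matches the design-time model
# and that steps (2) and (3) should be invoked.
```

In the full framework, such an alert would not patch the system directly; it would hand evidence of the change to the adaptation and assurance machinery of steps (2) and (3).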

Programmer’s Assistant Synthesizing Code via Abstraction & Logical Inference (PASCALI)

PASCALI aims to offer a comprehensive range of capabilities for extracting, indexing, and applying semantic information from large code bases to support both individual programmers and software development teams in the task of constructing reliable, robust, and efficient software. In particular, we focus on developing techniques that can effectively and efficiently identify similar program fragments across software projects in a large code base. This is a collaborative project with SRI International, University of Waterloo, MIT, and University of Washington, funded under the DARPA MUSE program.