Energy-Efficient AI Hardware

Our research focuses on overcoming the “Memory Wall” and “Power Wall” in modern deep learning deployment.

We investigate novel architectures including Compute-in-Memory (CIM), sparsity-aware processing, and hardware-software co-design. Our goal is to enable real-time intelligence on edge devices with milliwatt-level power consumption.

Research Highlight

Key Directions and Achievements

Sparsity-Aware Accelerators

Exploiting activation and weight sparsity to skip zero operands, reducing computation by 5-10x without accuracy loss.
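The core idea of zero-skipping can be sketched in a few lines. This is an illustrative software model of the dataflow, not the accelerator hardware itself; the function name and toy data are placeholders.

```python
def sparse_dot(activations, weights):
    """Dot product that only performs MACs on nonzero activations.

    Behavioral model of zero-skipping: a sparsity-aware accelerator
    gates off the multiply-accumulate whenever the operand is zero,
    so energy and cycles scale with the number of nonzeros.
    """
    total = 0
    macs = 0  # count of multiply-accumulates actually executed
    for a, w in zip(activations, weights):
        if a != 0:  # zero-skip: the key sparsity optimization
            total += a * w
            macs += 1
    return total, macs

# ReLU activations are often majority-zero, so most MACs are skipped.
acts = [0, 3, 0, 0, 2, 0, 0, 1]
wts  = [4, 1, 7, 2, 5, 9, 3, 6]
result, macs = sparse_dot(acts, wts)
# Only 3 of the 8 MACs are executed; the result is unchanged.
```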

Compute-in-Memory

Designing mixed-signal circuits to perform MAC operations directly within SRAM/RRAM arrays.
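A rough behavioral model of one in-memory MAC column, assuming weights stored as cell conductances, inputs applied as wordline voltages, and an ADC digitizing the bitline. The 4-bit ADC resolution and the [0, 1] input/weight ranges are illustrative assumptions, not our circuit's specifications.

```python
def cim_column_mac(inputs, weights, adc_bits=4):
    """Behavioral model of a mixed-signal compute-in-memory MAC column.

    Assumes inputs and weights are normalized to [0, 1]. Cell currents
    sum on the shared bitline (Kirchhoff's current law), so the
    multiply-accumulate happens in the analog domain; an ADC then
    quantizes the result back to digital.
    """
    # Analog accumulation: every cell contributes voltage * conductance.
    analog_sum = sum(v * g for v, g in zip(inputs, weights))
    full_scale = len(inputs)          # worst case: all inputs/weights at 1.0
    levels = (1 << adc_bits) - 1      # e.g. 15 codes for a 4-bit ADC
    # ADC: quantize the bitline value to adc_bits of resolution.
    code = round(analog_sum / full_scale * levels)
    code = max(0, min(levels, code))
    return code * full_scale / levels  # dequantized digital output
```

The model makes the key trade-off visible: the analog sum is computed "for free" inside the array, but ADC resolution bounds the output precision.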

Reconfigurable Architectures

FPGA-based dynamic accelerators that adapt to Transformer and CNN workloads in real time.