Rapid advances in semiconductor technology over the last several decades stem largely from the development of 3D transistors and 3D integrated circuits (ICs). These technologies substantially increase the transistor density, and with it the power density, in semiconductor chips, resulting in severe thermal problems that must be mitigated in the design of computing and communication chips. Thermal considerations have thus become a critical part of modern semiconductor chip design. However, thermal-aware chip design has been based on oversimplified compact models that are incapable of predicting crucial hot spots across a chip. Although direct numerical simulations (DNSs) are able to capture the hot spots, they are computationally expensive and prohibitive at the architecture level, even with today's high-performance computing capabilities.
This proposed work will investigate an efficient thermal simulation approach for thermal-aware architecture design exploration based on a multi-block reduced-order model (ROM) developed in a previous NSF project. Compared to DNS, this approach reduces the computational time required to capture hot spots in an IC by several orders of magnitude. The investigation will first develop an efficient thermal ROM for each selected functional unit (as a block) of a selected GPU. This requires collecting a large data set for each block using DNSs to account for parametric variations within the block; the data are needed to calculate the model parameters for the ROM block. Because this process is computationally very intensive, it will be performed on an Nvidia GPU to improve efficiency. The resulting ROM blocks for the functional units can then be assembled to model different GPU floorplans. The multi-block ROM approach will be implemented in a microarchitecture simulator to study thermal effects on GPU performance under different floorplan designs. Nvidia CUDA- or OpenCL-based codes will be developed to implement the model, enabling cost-effective thermal-aware design for GPU architectures.
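To make the block-level ROM construction concrete, the sketch below illustrates one common way such a model can be built: snapshots from a full-order thermal simulation (a toy 1D conduction model standing in for a DNS of one functional-unit block) are compressed via proper orthogonal decomposition (POD), and the conduction operator is Galerkin-projected onto the dominant thermal modes, shrinking the system from n states to r. All names, sizes, and the 1D toy model are illustrative assumptions; the proposal does not fix a particular ROM formulation.

```python
import numpy as np

# Toy 1D thermal model standing in for a DNS of one functional-unit block:
#   dT/dt = A T + p,  A = -G (unit conductances, unit heat capacity),
# discretized on n nodes. Sizes and values are illustrative only.
n, r, dt, steps = 200, 10, 1e-3, 400

G = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiagonal conductance
A = -G
p = np.zeros(n)
p[n // 2] = 50.0  # a hot spot: power injected at mid-block

def integrate(A, p, x0, dt, steps):
    """Backward-Euler stepping: (I - dt*A) x_{k+1} = x_k + dt*p."""
    M = np.linalg.inv(np.eye(A.shape[0]) - dt * A)  # factor once, reuse
    x, traj = x0, []
    for _ in range(steps):
        x = M @ (x + dt * p)
        traj.append(x.copy())
    return np.array(traj)

# "DNS" snapshots of the full model -> POD basis via SVD.
snaps = integrate(A, p, np.zeros(n), dt, steps)        # shape (steps, n)
U, s, _ = np.linalg.svd(snaps.T, full_matrices=False)
V = U[:, :r]                                           # r dominant thermal modes

# Galerkin-projected ROM: r x r system instead of n x n.
A_r, p_r = V.T @ A @ V, V.T @ p
rom_traj = integrate(A_r, p_r, np.zeros(r), dt, steps) @ V.T  # lift back

err = np.abs(rom_traj[-1] - snaps[-1]).max() / snaps[-1].max()
print(f"peak T (full) = {snaps[-1].max():.3f}, ROM rel. error = {err:.2e}")
```

In the multi-block setting, each functional unit would get its own basis V and reduced matrices from its own DNS data set, with interface conditions coupling neighboring blocks when a floorplan is assembled; the sketch above covers only a single block.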