HiNUMA: NUMA-aware Data Placement and Migration in Hybrid Memory Systems

EasyChair Preprint 1618, 9 pages • Date: October 9, 2019

Abstract

Non-uniform memory access (NUMA) architectures feature asymmetric memory access latencies across nodes. Hybrid memory systems composed of emerging non-volatile memory (NVM) and DRAM further diversify data access latencies due to the significant performance gap between NVM and DRAM. Traditional NUMA memory management policies are ineffective in hybrid memory systems and may even hurt application performance. In this paper, we present HiNUMA, a new NUMA abstraction for memory allocation and migration in hybrid memory systems. HiNUMA advocates NUMA topology-aware hybrid memory allocation policies for initial data placement. HiNUMA also introduces a new NUMA balancing mechanism called HANB for memory migration at runtime. HANB considers not only data hotness but also memory bandwidth utilization to reduce the cost of data access in hybrid memory systems. We evaluate the performance of HiNUMA with several typical workloads. Experimental results show that HiNUMA can effectively utilize hybrid memories and deliver much higher application performance than the default NUMA memory management policies and other state-of-the-art work.

Keyphrases: data migration, data placement, hybrid memory, NUMA
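The abstract describes HANB as a migration policy that weighs page hotness against memory bandwidth utilization. As a rough illustration only, and not the paper's actual HANB algorithm, the C sketch below picks a migration target node by skipping bandwidth-saturated nodes and preferring the lowest-latency remaining node, then relocates a page with Linux's move_pages(2). The per-node statistics, the 0.8 saturation threshold, and the helper names are hypothetical assumptions for this sketch.

/* Hypothetical sketch of hotness- and bandwidth-aware migration
 * (illustrative only; not the HANB implementation from the paper).
 * Build with: gcc migrate_sketch.c -o migrate_sketch -lnuma
 */
#include <numaif.h>     /* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Assumed per-node statistics supplied by an external profiler. */
struct node_stat {
    int    node;            /* NUMA node id                          */
    double bw_utilization;  /* fraction of peak bandwidth in use     */
    double access_latency;  /* measured average access latency (ns)  */
};

/* Prefer the lowest-latency node that still has bandwidth headroom.
 * The 0.8 saturation threshold is an illustrative assumption. */
static int pick_target_node(const struct node_stat *stats, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (stats[i].bw_utilization > 0.8)
            continue;  /* node is bandwidth-saturated, skip it */
        if (best < 0 || stats[i].access_latency < stats[best].access_latency)
            best = i;
    }
    return best < 0 ? -1 : stats[best].node;
}

int main(void)
{
    /* Made-up statistics: a busy DRAM node and an idle NVM node. */
    struct node_stat stats[] = {
        { .node = 0, .bw_utilization = 0.9, .access_latency =  90.0 },
        { .node = 2, .bw_utilization = 0.3, .access_latency = 300.0 },
    };
    int target = pick_target_node(stats, 2);
    if (target < 0)
        return 1;

    /* Allocate and touch one page of this process, then migrate it. */
    void *page = aligned_alloc(4096, 4096);
    memset(page, 0, 4096);
    void *pages[1]  = { page };
    int   nodes[1]  = { target };
    int   status[1] = { 0 };
    if (move_pages(0 /* current process */, 1, pages, nodes, status,
                   MPOL_MF_MOVE) != 0)
        perror("move_pages");
    else
        printf("page migrated to node %d (status %d)\n", target, status[0]);
    return 0;
}

In this toy example the busy DRAM node is skipped despite its lower latency, so the page lands on the idle NVM node; a real policy in the spirit of the abstract would also fold the page's measured hotness into the decision rather than migrating unconditionally.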