Awesome machine learning for compilers and program optimisation
Source: https://github.com/zwang4/awesome-machine-learning-in-compilers
A curated list of awesome research papers, datasets, and tools for applying machine learning techniques to compilers and program optimisation.
Papers
Survey
Machine Learning in Compiler Optimisation - Zheng Wang and Michael O'Boyle, Proceedings of the IEEE, 2018
A survey on compiler autotuning using machine learning - Amir H. Ashouri, William Killian, John Cavazos, Gianluca Palermo, and Cristina Silvano, ACM Computing Surveys (CSUR), 2018
A survey of machine learning for big code and naturalness - Miltiadis Allamanis, Earl T. Barr, Premkumar Devanbu, and Charles Sutton, ACM Computing Surveys (CSUR), 2018
A Taxonomy of ML for Systems Problems - Martin Maas, IEEE Micro, 2020
The Deep Learning Compiler: A Comprehensive Survey - Mingzhen Li, Yi Liu, Xiaoyan Liu, Qingxiao Sun, Xin You, Hailong Yang, Zhongzhi Luan, Lin Gan, Guangwen Yang, Depei Qian, IEEE Transactions on Parallel and Distributed Systems, 2021
Iterative Compilation and Compiler Option Tuning
SRTuner: Effective Compiler Optimization Customization by Exposing Synergistic Relations - Sunghyun Park, Salar Latifi, Yongjun Park, Armand Behroozi, Byungsoo Jeon, Scott Mahlke. CGO 2022.
Iterative Compilation Optimization Based on Metric Learning and Collaborative Filtering - Hongzhi Liu, Jie Luo, Ying Li, Zhonghai Wu. ACM TACO 2022.
Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020 - Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, Isabelle Guyon. arXiv 2021.
Bliss: auto-tuning complex applications using a pool of diverse lightweight learning models - RB Roy, T Patel, V Gadepally, D Tiwari. PLDI 2021.
Efficient Compiler Autotuning via Bayesian Optimization - Junjie Chen, Ningxin Xu, Peiqi Chen, Hongyu Zhang. ICSE 2021.
Customized Monte Carlo Tree Search for LLVM/Polly's Composable Loop Optimization Transformations - Jaehoon Koo, Prasanna Balaprakash, Michael Kruse, Xingfu Wu, Paul Hovland, Mary Hall. arXiv 2021.
Improved basic block reordering - Andy Newell and Sergey Pupyrev. IEEE Transactions on Computers, 2020.
Static Neural Compiler Optimization via Deep Reinforcement Learning - Rahim Mammadli, Ali Jannesari, Felix Wolf. LLVM HPC Workshop, 2020.
Autotuning Search Space for Loop Transformations - Michael Kruse, Hal Finkel, Xingfu Wu. LLVM HPC Workshop, 2020.
A Collaborative Filtering Approach for the Automatic Tuning of Compiler Optimisations - Stefano Cereda, Gianluca Palermo, Paolo Cremonesi, and Stefano Doni, LCTES 2020.
Autophase: Compiler phase-ordering for HLS with deep reinforcement learning - Ameer Haj-Ali, Qijing Huang, William Moses, John Xiang, Ion Stoica, Krste Asanovic, John Wawrzynek. MLSys 2020.
FuncyTuner: Auto-tuning Scientific Applications With Per-loop Compilation - Tao Wang, Nikhil Jain, David Beckingsale, David Böhme, Frank Mueller, Todd Gamblin. ICPP 2019.
Micomp: Mitigating the compiler phase-ordering problem using optimization sub-sequences and machine learning - Amir H. Ashouri, Andrea Bignoli, Gianluca Palermo, Cristina Silvano, Sameer Kulkarni, and John Cavazos. ACM Transactions on Architecture and Code Optimization (TACO) 2017.
Iterative Schedule Optimization for Parallelization in the Polyhedron Model - Stefan Ganser, Armin Grösslinger, Norbert Siegmund, Sven Apel, and Christian Lengauer. ACM Transactions on Architecture and Code Optimization (TACO), 2017.
Learning to superoptimize programs - Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H.S. Torr, Pushmeet Kohli. ICLR 2017.
Continuous learning of compiler heuristics - Michele Tartara and Stefano Crespi Reghizzi. ACM Transactions on Architecture and Code Optimization (TACO), 2013.
Mitigating the compiler optimization phase-ordering problem using machine learning - Sameer Kulkarni and John Cavazos. OOPSLA 2012.
An evaluation of different modeling techniques for iterative compilation - Eunjung Park, Sameer Kulkarni, and John Cavazos. CASES 2011.
Evaluating iterative optimization across 1000 datasets - Yang Chen, Yuanjie Huang, Lieven Eeckhout, Grigori Fursin, Liang Peng, Olivier Temam, and Chengyong Wu. PLDI 2010
Iterative optimization in the polyhedral model: Part II, multidimensional time - Louis-Noël Pouchet, Cédric Bastoul, Albert Cohen, and John Cavazos. PLDI 2008.
Cole: compiler optimization level exploration - Kenneth Hoste and Lieven Eeckhout. CGO 2008.
MILEPOST GCC: machine learning based research compiler - Grigori Fursin, Cupertino Miranda, Olivier Temam, Mircea Namolaru, Elad Yom-Tov, Ayal Zaks, Bilha Mendelson et al., 2008
Evaluating heuristic optimization phase order search algorithms - J. W. Davidson, Gary S. Tyson, D. B. Whalley, and P. A. Kulkarni. CGO 2007.
Rapidly selecting good compiler optimizations using performance counters - John Cavazos, Grigori Fursin, Felix Agakov, Edwin Bonilla, Michael FP O'Boyle, and Olivier Temam. CGO 2007.
Using machine learning to focus iterative optimization - Felix Agakov, Edwin Bonilla, John Cavazos, Björn Franke, Grigori Fursin, Michael FP O'Boyle, John Thomson, Marc Toussaint, and Christopher KI Williams. CGO 2006.
Method-specific dynamic compilation using logistic regression - John Cavazos and Michael FP O'Boyle. OOPSLA 2005.
Predicting unroll factors using supervised classification - Mark Stephenson and Saman Amarasinghe. CGO 2005.
Fast searches for effective optimization phase sequences - Prasad Kulkarni, Stephen Hines, Jason Hiser, David Whalley, Jack Davidson, and Douglas Jones. PLDI 2004.
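To make the core loop behind many of the iterative-compilation papers above concrete, here is a minimal toy sketch: repeatedly sample a configuration of optimisation flags, measure it, and keep the best. It is not taken from any single paper; the flag names are illustrative and measure() is a placeholder for actually compiling and timing a benchmark.

```python
import random

# Illustrative flag pool; a real search space would come from the compiler.
CANDIDATE_FLAGS = [
    "-funroll-loops", "-finline-functions",
    "-fomit-frame-pointer", "-floop-interchange",
]

def random_flag_configs(flags, n, seed=0):
    """Yield n random flag subsets (samples from the search space)."""
    rng = random.Random(seed)
    for _ in range(n):
        k = rng.randint(0, len(flags))
        yield tuple(sorted(rng.sample(flags, k)))

def measure(config):
    """Placeholder cost: in practice, compile with these flags and time."""
    return len(config)  # stand-in for a measured runtime

def tune(flags, budget=20, seed=0):
    """Keep the best configuration seen within a fixed evaluation budget."""
    best_cfg, best_cost = None, float("inf")
    for cfg in random_flag_configs(flags, budget, seed):
        cost = measure(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

if __name__ == "__main__":
    print(tune(CANDIDATE_FLAGS))
```

The papers in this section differ mainly in how they replace the random sampler with something smarter: Bayesian optimisation, collaborative filtering, Monte Carlo tree search, or a learned model that focuses the search.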
Instruction-level Optimisation
A Reinforcement Learning Environment for Polyhedral Optimizations - Alexander Brauckmann, Andrés Goens, Jeronimo Castrillon. PACT, 2021.
AI Powered Compiler Techniques for DL Code Optimization - Sanket Tavarageri, Gagandeep Goyal, Sasikanth Avancha, Bharat Kaul, Ramakrishna Upadrasta. arXiv 2021.
Deep Learning-based Hybrid Graph-Coloring Algorithm for Register Allocation - Dibyendu Das, Shahid Asghar Ahmad, Kumar Venkataramanan. LLVM HPC Workshop, 2020.
NeuroVectorizer: end-to-end vectorization with deep reinforcement learning - Ameer Haj-Ali, Nesreen K. Ahmed, Ted Willke, Yakun Sophia Shao, Krste Asanovic, and Ion Stoica. CGO 2020.
Unleashing the Power of Learning: An Enhanced Learning-Based Approach for Dynamic Binary Translation - Changheng Song, Wenwen Wang, Pen-Chung Yew, Antonia Zhai, Weihua Zhang. USENIX ATC 2019.
Compiler Auto-Vectorization with Imitation Learning - Charith Mendis, Cambridge Yang, Yewen Pu, Saman P. Amarasinghe, Michael Carbin. NeurIPS 2019.
Multi-objective Exploration for Practical Optimization Decisions in Binary Translation - Sunghyun Park, Youfeng Wu, Janghaeng Lee, Amir Aupov, and Scott Mahlke. ACM Transactions on Embedded Computing Systems (TECS), 2019.
Automatic construction of inlining heuristics using machine learning - Sameer Kulkarni, John Cavazos, Christian Wimmer, and Douglas Simon. CGO 2013.
Automatic tuning of inlining heuristics - John Cavazos and Michael O'Boyle. SC 2005.
Inducing heuristics to decide whether to schedule - John Cavazos and J. Eliot B. Moss. PLDI 2003.
Meta optimization: Improving compiler heuristics with machine learning - Mark Stephenson, Saman Amarasinghe, Martin Martin, and Una-May O'Reilly. PLDI 2003.
Learning to schedule straight-line code - J. Eliot B. Moss, Paul E. Utgoff, John Cavazos, Doina Precup, Darko Stefanovic, Carla E. Brodley, and David Scheeff. NeurIPS 1998.
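A recurring pattern in this section is replacing a hand-written per-instruction or per-loop heuristic with a learned predictor. As a toy illustration (in the spirit of supervised unroll-factor prediction, but not the method of any paper above), here is a 1-nearest-neighbour sketch with invented loop features and labels; real systems extract features from compiler IR and label them with measured best choices.

```python
def predict_unroll(train, features):
    """1-nearest-neighbour over (feature_vector, unroll_factor) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda ex: dist(ex[0], features))
    return label

# (trip_count, body_insns) -> best unroll factor; values invented for
# illustration only.
TRAIN = [((4, 10), 4), ((128, 3), 8), ((1000, 50), 1), ((64, 8), 8)]

print(predict_unroll(TRAIN, (100, 4)))  # nearest to (128, 3) -> 8
```

The interesting engineering questions, and where the papers differ, are which features to extract, how to collect labelled training data cheaply, and which model family generalises across programs.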
Auto-tuning and Design Space Exploration
One-shot tuner for deep learning compilers - Jaehun Ryu, Eunhyeok Park, Hyojin Sung. CC 2022.
Value Learning for Throughput Optimization of Deep Learning Workloads - Benoit Steiner, Chris Cummins, Horace He, Hugh Leather. MLSys 2021.
DynaTune: Dynamic Tensor Program Optimization in Deep Neural Network Compilation - Minjia Zhang, Menghao Li, Chi Wang, Mingqin Li. ICLR 2021.
Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning - Shauharda Khadka, Estelle Aflalo, Mattias Mardar, Avrech Ben-David, Santiago Miret, Shie Mannor, Tamir Hazan, Hanlin Tang, Somdeb Majumdar. ICLR 2021.
GPTune: Multitask Learning for Autotuning Exascale Applications - Yang Liu, Wissam M. Sid-Lakhdar, Osni Marques, Xinran Zhu, Chang Meng, James W. Demmel, Xiaoye S. Li. PPoPP 2021.
ApproxTuner: A Compiler and Runtime System for Adaptive Approximations - Hashim Sharif, Yifan Zhao, Maria Kotsifakou, Akash Kothari, Ben Schreiber, Elizabeth Wang, Yasmin Sarita, Nathan Zhao, Keyur Joshi, Vikram S. Adve, Sasa Misailovic, Sarita Adve. PPoPP 2021.
Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation - Byung Hoon Ahn, Prannoy Pilligundla, Amir Yazdanbakhsh, Hadi Esmaeilzadeh. ICLR 2020.
Ansor: Generating High-Performance Tensor Programs for Deep Learning - Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, Ion Stoica. OSDI 2020.
A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs - Ke Meng, Jiajia Li, Guangming Tan, Ninghui Sun. PPoPP 2019.
FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search - Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, Kurt Keutzer. CVPR 2019.
TVM: An automated end-to-end optimizing compiler for deep learning - Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan et al., OSDI 2018
BOAT: Building auto-tuners with structured Bayesian optimization - Valentin Dalibard, Michael Schaarschmidt, and Eiko Yoneki, WWW 2017.
Cobayn: Compiler autotuning framework using bayesian networks - Amir H. Ashouri, Giovanni Mariani, Gianluca Palermo, Eunjung Park, John Cavazos, and Cristina Silvano, ACM Transactions on Architecture and Code Optimization (TACO), 2016.
Autotuning algorithmic choice for input sensitivity - Yufei Ding, Jason Ansel, Kalyan Veeramachaneni, Xipeng Shen, Una-May O'Reilly, and Saman Amarasinghe. PLDI 2015
Fast: A fast stencil autotuning framework based on an optimal-solution space model - Yulong Luo, Guangming Tan, Zeyao Mo, and Ninghui Sun. ACM Transactions on Architecture and Code Optimization (TACO), 2015.
GPU performance and power tuning using regression trees - Wenhao Jia, Elba Garza, Kelly A. Shaw, and Margaret Martonosi. SC 2015.
Reinforcement learning-based inter-and intra-application thermal optimization for lifetime improvement of multicore systems - Anup K Das, Rishad Ahmed Shafik, Geoff V Merrett, Bashir M Al-Hashimi, Akash Kumar, Bharadwaj Veeravalli. DAC 2014
Opentuner: An extensible framework for program autotuning - Jason Ansel, Shoaib Kamil, Kalyan Veeramachaneni, Jonathan Ragan-Kelley, Jeffrey Bosboom, Una-May O'Reilly, and Saman Amarasinghe. PACT 2014
Taming parallel I/O complexity with auto-tuning - Babak Behzad, Huong Vu Thanh Luu, Joseph Huchette, Surendra Byna, Ruth Aydt, Quincey Koziol, and Marc Snir. SC 2013.
A multi-objective auto-tuning framework for parallel codes - Herbert Jordan, Peter Thoman, Juan J. Durillo, Simone Pellegrini, Philipp Gschwandtner, Thomas Fahringer, and Hans Moritsch. SC 2012.
Bandit-based optimization on graphs with application to library performance tuning - Frédéric De Mesmay, Arpad Rimmel, Yevgen Voronenko, and Markus Püschel. ICML 2009.
Combining models and guided empirical search to optimize for multiple levels of the memory hierarchy - Chun Chen, Jacqueline Chame, and Mary Hall. CGO 2005
Active harmony: towards automated performance tuning - Cristian Tapus, I-Hsin Chung, Jeffrey K. Hollingsworth. SC 2002.
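The frameworks in this section share a common skeleton: a discrete parameter space, a cost function obtained by running the program, and a search strategy that navigates between them. Here is a toy hill-climbing sketch of that skeleton; the parameter names and the synthetic cost function are invented for illustration and do not reproduce any framework listed above.

```python
import random

# Illustrative tuning space (e.g. tiling, unrolling, thread count).
SPACE = {"tile": [8, 16, 32, 64], "unroll": [1, 2, 4, 8], "threads": [1, 2, 4]}

def cost(cfg):
    # Synthetic objective standing in for compile-and-run timing; its
    # optimum is at tile=32, unroll=4, threads=4.
    return (abs(cfg["tile"] - 32) + abs(cfg["unroll"] - 4)
            + abs(cfg["threads"] - 4))

def hill_climb(space, cost_fn, steps=100, seed=0):
    """Greedy coordinate search: mutate one parameter, keep if not worse."""
    rng = random.Random(seed)
    cfg = {k: rng.choice(v) for k, v in space.items()}
    best = cost_fn(cfg)
    for _ in range(steps):
        key = rng.choice(list(space))
        cand = dict(cfg, **{key: rng.choice(space[key])})
        c = cost_fn(cand)
        if c <= best:
            cfg, best = cand, c
    return cfg, best

if __name__ == "__main__":
    print(hill_climb(SPACE, cost))
```

Systems such as OpenTuner generalise exactly this loop by running an ensemble of search techniques and allocating the measurement budget to whichever is currently performing best.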