# MLIR

## Overview

MLIR, or Multi-Level Intermediate Representation, is a representation format
and library of compiler utilities that sits between the model representation
and low-level compilers/executors that generate hardware-specific code.

MLIR is, at its heart, a flexible infrastructure for modern optimizing
compilers. This means it consists of a specification for intermediate
representations (IR) and a code toolkit to perform transformations on that
representation. (In compiler parlance, as you move from higher-level
representations to lower-level representations, these transformations can be
called “lowerings”.)
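
To make this concrete, here is a small sketch of what MLIR's textual IR can
look like. It assumes the upstream `func` and `arith` dialects; exact op names
and syntax vary between MLIR versions, so treat it as illustrative rather than
canonical.

```mlir
// A function that adds two 32-bit floats and returns the result.
func.func @add(%a: f32, %b: f32) -> f32 {
  %sum = arith.addf %a, %b : f32
  return %sum : f32
}
```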

MLIR is highly influenced by [LLVM](https://llvm.org/) and unabashedly reuses
many great ideas from it. It has a flexible type system, and allows
representing, analyzing and transforming graphs combining multiple levels of
abstraction in the same compilation unit. These abstractions include TensorFlow
operations, nested polyhedral loop regions, and even LLVM instructions and fixed
hardware operations and types.
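
As a rough illustration of mixing abstraction levels, the sketch below places a
high-level, framework-style operation (written in MLIR's quoted "generic" op
form, with `"tf.Add"` standing in for a TensorFlow-dialect op) next to a
function that is already at the LLVM-dialect level, all in one module. The
dialect and op names here are assumptions for illustration, not a fixed recipe.

```mlir
// Illustrative only: two abstraction levels coexisting in one compilation unit.
module {
  // High level: a tensor-typed, framework-style op in generic form.
  func.func @high_level(%x: tensor<4xf32>, %y: tensor<4xf32>) -> tensor<4xf32> {
    %0 = "tf.Add"(%x, %y) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
    return %0 : tensor<4xf32>
  }

  // Low level: a similar computation expressed with LLVM dialect ops.
  llvm.func @low_level(%a: f32, %b: f32) -> f32 {
    %sum = llvm.fadd %a, %b : f32
    llvm.return %sum : f32
  }
}
```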

We expect MLIR to be of interest to many groups, including:

*   Compiler researchers and implementers looking to optimize performance and
    memory consumption of machine learning models
*   Hardware makers looking for a way to connect their hardware to TensorFlow,
    such as TPUs, portable neural hardware in phones, and other custom ASICs
*   People writing language bindings that want to take advantage of optimizing
    compilers and hardware acceleration.

The TensorFlow ecosystem contains a number of compilers and optimizers that
operate at multiple levels of the software and hardware stack. We expect the
gradual adoption of MLIR to simplify every aspect of this stack.

<img alt="MLIR overview diagram" src="./images/mlir-infra.svg"/>