A flow is presented for synthesizing TensorFlow computation graphs into FPGA accelerators using the open-source high-level synthesis (HLS) tool LegUp. The TensorFlow computation graph is first lowered to an intermediate representation in TensorFlow's Accelerated Linear Algebra (XLA) compiler called the High Level Optimizer (HLO). This is then translated into LLVM intermediate representation (IR) using a modified version of XLA's CPU backend. These modifications enable users to leverage IP modules for computation-intensive operations. For a simple matrix-multiply instance, even a naively implemented IP module is shown to give a 1.7x speedup over baseline accelerators synthesized from the original CPU backend.