LegUp Computing closes a seed-funding round led by Intel Capital

TORONTO, February 22, 2018 — LegUp Computing, Inc. announced today that it closed a seed funding round led by Intel Capital. LegUp offers a cloud platform that enables software developers to program, deploy, scale, and manage FPGA devices for accelerating high-performance applications without requiring hardware expertise. The technology enables the next generation of low-latency, high-throughput computing on the vast amounts of real-time data processed in the cloud. LegUp Computing, Inc. was founded to commercialize the award-winning open-source LegUp high-level synthesis tool, the product of years of research in the Department of Electrical and Computer Engineering at the University of Toronto.

LegUp Computing Team

LegUp Computing team from left to right: Omar Ragheb, Zhi Li, Dr. Andrew Canis, Ruolong Lian, Dr. Jongsok Choi, and University of Toronto Professor Jason Anderson (Photo: Jessica MacInnis)

Faster Facial Analytics using an FPGA


The quality and price of image sensors have improved enormously over the past decade, and we are now seeing increased adoption of cameras in the automotive sector. One new application is a driver-facing camera that monitors the driver for signs of drowsiness; if the driver is about to fall asleep, the system can trigger an alarm. Implementing a system like this requires a camera and an embedded processor to analyze the video stream, detecting the driver’s face and performing facial landmark detection to locate their eyelids.

We recently worked with Eyeris, a company that specializes in facial analytics software. However, they had a problem: their algorithms ran too slowly on an embedded processor. They came to Efinix, a company that specializes in programmable hardware acceleration platforms, who contacted us to help convert the facial analysis software into hardware that can run on an FPGA.

The video below shows three versions of the facial analytics demo: first running on the embedded processor (~5 frames per second), then on an FPGA (~13 FPS), and finally on a smaller video canvas (~15 FPS). You can see that responsiveness improves tremendously when the FPGA accelerates the application. Eyeris showed this demo to some of their customers during CES this year:


Video Filtering on AWS F1 FPGA using LegUp


In this post we’re going to show you a video filtering demonstration on an Amazon cloud FPGA using LegUp. The specific filter we will showcase is Canny edge detection, as described in a previous post. The same approach could be used to implement any other image filter based on convolution (blur, sharpen, emboss, etc.).
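Concretely, a convolution filter in this style is just a small weighted sum over each pixel’s neighbourhood. Below is a minimal C sketch of a generic 3x3 convolution; the function name, tiny image size, and pass-through border handling are our own illustrative choices, not code from the demo:

```c
#include <stdint.h>

#define W 4
#define H 4

/* Apply a 3x3 convolution kernel to a grayscale image.
   Border pixels, which lack a full 3x3 window, are passed
   through unchanged (a common simplification). */
void conv3x3(const uint8_t in[H][W], uint8_t out[H][W],
             const int kernel[3][3], int divisor)
{
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            if (y == 0 || y == H - 1 || x == 0 || x == W - 1) {
                out[y][x] = in[y][x];
                continue;
            }
            int sum = 0;
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    sum += kernel[ky + 1][kx + 1] * in[y + ky][x + kx];
            sum /= divisor;
            /* clamp to the valid 8-bit pixel range */
            if (sum < 0)   sum = 0;
            if (sum > 255) sum = 255;
            out[y][x] = (uint8_t)sum;
        }
    }
}
```

Swapping the kernel and divisor gives blur (all ones, divisor 9), sharpen, emboss, and so on. An HLS tool can fully unroll the inner 3x3 loops so that all nine multiplies happen in parallel in hardware.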

We’re from Toronto and fans of the Blue Jays, so we’ll use a slow motion video of Josh Donaldson hitting a home run to demonstrate our filter:

Synthesizing Streaming Floating-Point Cores with LegUp HLS — Publication in DATE 2018

A paper describing the use of LegUp HLS to synthesize hardware cores for floating-point computations from C-language software will appear in the 2018 ACM/IEEE Design, Automation and Test in Europe (DATE) conference, to be held in Dresden, Germany, in March 2018.  The floating-point cores are fully IEEE 754 compliant, yet through software changes alone, they can be tailored to application needs, for example, by reducing precision or eliminating exception checking, saving area and raising performance in non-compliant variants.  The IEEE-compliant cores synthesized by LegUp HLS compare favourably to custom RTL cores from FloPoCo and Altera/Intel, despite the LegUp-generated cores being synthesized entirely from software.  An advance copy of the paper, jointly authored by the University of Toronto and Altera/Intel, is available: PDF.
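To illustrate the "software changes alone" idea: in HLS, the floating-point datatype of a kernel is just a C type, so changing precision is a one-line source edit rather than an RTL redesign. The sketch below uses our own illustrative typedef and macro (`fp_t` and `USE_SINGLE` are not LegUp directives):

```c
/* Precision is selected at compile time by a source-level switch.
   The default is double precision; defining USE_SINGLE produces a
   reduced-precision variant, which in hardware would map to a
   smaller, faster floating-point core. */
#ifdef USE_SINGLE
typedef float  fp_t;
#else
typedef double fp_t;
#endif

/* A simple floating-point kernel: dot product of two vectors. */
fp_t dot(const fp_t *a, const fp_t *b, int n)
{
    fp_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}
```

Compiling with `-DUSE_SINGLE` yields the reduced-precision variant of the same core; eliminating exception checking would similarly be expressed as a source-level change.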


Canny Edge Detector Using LegUp


In this post we will explain how to implement a Canny edge detector on an FPGA. We will describe the entire algorithm in C code using LegUp, allowing us to avoid the difficulty of implementing this design in Verilog or VHDL.

First, watch this quick video of the finished edge detector, running on an Altera DE1-SoC board, which costs $249. We also have this demo working on a Lattice HDR-60 board, which costs $425 and includes a built-in 720p camera.

The first thing you’ll notice is that the output is black and white, with each pixel indicating whether there is an edge at that particular region of the image. The brightness of the pixel represents how distinct the edge is at that location.

The example below shows Canny edge detection performed on the Lenna test image:


You may be asking: why do edge detection on an FPGA instead of a standard microprocessor? The main reason is that Canny edge detection requires significant computation. Typically this is not a problem when working with static images, but for a video application the processor must keep up with the incoming video frame rate, otherwise we would see a choppy output video. Instead, we want the video output to be updated in real time, meaning there is no perceptible delay between moving the camera and updating the screen. On an FPGA, we can exploit the parallelism inherent in the Canny edge detection algorithm and stream the incoming video pixels through a series of specialized hardware pipelines that perform the Canny stages in parallel.
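As an example of one such pipeline stage, here is a C sketch of the Sobel gradient step of Canny, using the common |Gx| + |Gy| magnitude approximation that avoids a hardware square root. The tiny frame size and zeroed border are our own simplifications, not the demo’s actual code:

```c
#include <stdint.h>
#include <stdlib.h>

#define W 5
#define H 5

/* Sobel gradient magnitude, one stage of a Canny-style pipeline.
   |Gx| + |Gy| approximates sqrt(Gx^2 + Gy^2) cheaply in hardware. */
void sobel_mag(const uint8_t in[H][W], uint8_t out[H][W])
{
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            if (y == 0 || y == H - 1 || x == 0 || x == W - 1) {
                out[y][x] = 0;          /* no full 3x3 window at the border */
                continue;
            }
            /* horizontal gradient */
            int gx = -in[y-1][x-1] + in[y-1][x+1]
                     - 2*in[y][x-1] + 2*in[y][x+1]
                     - in[y+1][x-1] + in[y+1][x+1];
            /* vertical gradient */
            int gy = -in[y-1][x-1] - 2*in[y-1][x] - in[y-1][x+1]
                     + in[y+1][x-1] + 2*in[y+1][x] + in[y+1][x+1];
            int mag = abs(gx) + abs(gy);
            out[y][x] = (uint8_t)(mag > 255 ? 255 : mag);
        }
    }
}
```

In the streaming hardware version, the 3x3 window would come from line buffers rather than random access into a full frame, and the gx and gy sums would be computed in parallel every clock cycle as pixels flow through.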

Amazon EC2 F1 Tutorial: Understanding the CL_DRAM_DMA example


The CL_DRAM_DMA example demonstrates many of the Shell/CL interfaces and much of their functionality.  This blog post will walk through the custom logic (CL) portion of the example.  You may have found that the example contains more than 6,000 lines of SystemVerilog code with very few comments.  To help you quickly understand it at a high level, we created block diagrams giving an overview of the CL’s hierarchy, interfaces, connectivity, and functionality.  We will also dive into the major modules and walk through their implementations.

Amazon EC2 F1 Tutorial: The Shell-to-CL AXI-Lite interface used in the Hello World Example

The “Hello World” example exercises the OCL Shell-to-CL AXI-Lite interface, the Virtual LED outputs, and the Virtual DIP switch inputs. This blog post will walk through the custom logic RTL (Verilog), explain the AXI-Lite slave logic, and highlight the PCIe APIs that can be used to access the registers behind the AXI-Lite interface.

Amazon EC2 F1 Tutorial: Step-by-step guide on running two examples on the Amazon FPGA Cloud


Amazon has recently announced the availability of their FPGA cloud, Amazon EC2 F1. We think this is very exciting news, as it is the first time that FPGAs in the cloud are being made available to the general public on a massive scale. This is the first step towards making FPGAs easier to use: with the EC2 F1, a user no longer has to buy an FPGA and install it on site. FPGAs are typically much more expensive and cumbersome to acquire than CPUs or GPUs, so making them available in the cloud, usable from anywhere in the world, makes them far more accessible.

For more general information about their FPGA instances, visit their website at https://aws.amazon.com/ec2/instance-types/f1/

When the F1 instances first became available, we were excited to try them out, but we found them pretty difficult to use at first. Documentation was lacking (and incorrect in some cases), and although Amazon provides a few examples with their AWS EC2 FPGA Hardware and Software Development Kit, the instructions for running the examples are scattered across different places and missing some steps. Understandably, the service was only released publicly in April 2017, and documentation may not have been their highest priority. To this end, we decided to write a unified guide with step-by-step instructions on running the two main examples provided by Amazon, cl_hello_world and cl_dram_dma, on the Amazon EC2 F1. The guide combines the instructions from the development kit with information we gathered by trying out the examples ourselves. At the time of writing, we could not find such a step-by-step guide, and we ran into issues here and there, so we think this guide will let you try out the F1 instances without getting stuck on some setup issue. So let’s dive right in!

Using LegUp HLS to Synthesize a Deep CNN Inference Accelerator

A paper describing the synthesis of a deep convolutional neural network (CNN) inference accelerator from C software with LegUp HLS will appear at the 2017 IEEE International System-on-Chip Conference (SOCC), in Munich, Germany, in September 2017. The work showcases the use of LegUp’s unique Pthreads flow to convert parallel software threads into spatial hardware parallelism. The accelerator incorporates a novel approach for zero-weight skipping, leveraging the ability to prune CNN weights with little inference accuracy loss.

J. H. Kim, B. Grady, R. Lian, J. Brothers, J.H. Anderson, “FPGA-Based CNN Inference Accelerator Synthesized from Multi-Threaded C Software,” IEEE SOCC, 2017 (PDF).
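To give a flavour of the Pthreads flow, here is a minimal multi-threaded C sketch in the style it accepts, where each software thread becomes a parallel hardware instance. The function names and work partitioning are our own illustration, not code from the paper:

```c
#include <pthread.h>

#define NTHREADS 4
#define N 16

static int a[N], b[N];
static int partial[NTHREADS];

/* Each worker accumulates a strided slice of a dot product.
   Under a Pthreads HLS flow, each thread would be compiled into
   its own hardware instance, all running concurrently. */
static void *worker(void *arg)
{
    int t = *(int *)arg;
    int sum = 0;
    for (int i = t; i < N; i += NTHREADS)
        sum += a[i] * b[i];
    partial[t] = sum;
    return NULL;
}

int dot_parallel(void)
{
    pthread_t th[NTHREADS];
    int ids[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        ids[t] = t;
        pthread_create(&th[t], NULL, worker, &ids[t]);
    }
    int total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(th[t], NULL);   /* wait, then combine partial sums */
        total += partial[t];
    }
    return total;
}
```

In software the four workers time-share the CPU; in hardware they would be four accelerator instances operating in parallel, which is the spatial parallelism the paper exploits.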

LegUp 5.1 is released!

We are pleased to announce that LegUp 5.1 has been released!

This release is the culmination of more than 25 person-years of research and development. Prior to this release, LegUp had four major releases for academic research. During these years, LegUp has been used by thousands of researchers around the world, making it the de facto standard in high-level synthesis (HLS) research. In 2014, LegUp won the Community Award at the International Conference on Field Programmable Logic (FPL) for contributions to HLS research. LegUp has also been shown to produce state-of-the-art hardware.
We have brought forward all of the best features from our previous releases and made the tool even better by adding new features and improving the quality of the generated hardware.

New Features

Here are just some of the highlights of what we have added for this release.

  • LegUp IDE which provides a complete development environment with a debugger and a profiler.
  • Support for Xilinx, Lattice, Microsemi, and Achronix FPGAs (LegUp previously only supported Altera FPGAs).
  • Windows OS support (LegUp previously only supported Linux OS).
  • Improved pipelining.
  • Improved memory architecture.
  • Improved user messages.


LegUp 5.1 comes with a 30-day free trial period so that you can freely try out the tool. Please note that during the trial period, LegUp may be used for evaluation purposes only, and the generated hardware cannot be used in a product. To purchase a full license, please contact us at sales@legupcomputing.com.