Category: News


LegUp joins Xilinx Partner Program and XDF next week

By Andrew Canis

LegUp Computing is proud to announce that we have joined the Xilinx Partner Program and are now listed on Xilinx’s partner website. We have also listed our Memcached solution running on an AWS F1 (FPGA) instance. Please read our solution brief for more details.

Andrew Canis and Ruolong Lian will be at the Xilinx Developer Forum in Frankfurt, Germany, this coming Monday, December 10th, to give a talk (5:30–6:00 pm) on LegUp’s Memcached accelerator. See you there!

Memcached Solution Brief (pages 1 and 2)

LegUp 6.2 Release!

By Zhi Li

We are excited to announce that LegUp 6.2 has been released! You can download LegUp here.
We have fixed the bugs reported since the LegUp 6.1 release in August; thank you for your feedback! We have also continued to add new features in this release of LegUp.

Release Notes

New features and bug fixes for this release:

  • Support for Intel Arria 10 hard floating point operations.
  • SW/HW Co-simulation now works with floating point operations (see the floating-point sketch after this list).
  • Preliminary support for an AXI slave interface used to control a LegUp accelerator.
  • Support for integrating a user-defined Verilog module into a LegUp design.
  • Improved support for C++ classes and structs.
  • Improved memory partitioning.
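
To make the floating-point items above concrete, here is a minimal, self-contained sketch of the kind of C++ these features target: a single-precision dot product whose multiplies and adds could map to hard floating-point DSP blocks on Arria 10, together with a plain software main() of the sort that SW/HW co-simulation reuses as the test bench for the generated RTL. The kernel, sizes, and test values are illustrative examples, not code from the LegUp distribution.

```cpp
#include <cmath>
#include <cstdio>

#define N 64

// Illustrative HLS kernel: a single-precision dot product. The floating-point
// multiply-adds are the operations that can map to hard floating-point DSP
// blocks on Intel Arria 10 (or to soft floating-point cores on other devices).
float dot_product(const float a[N], const float b[N]) {
    float sum = 0.0f;
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];
    return sum;
}

// Plain software test bench. With SW/HW co-simulation, the same test bench
// drives the generated RTL and checks its output against the expected value.
int main() {
    float a[N], b[N];
    for (int i = 0; i < N; i++) {
        a[i] = 0.5f * i;
        b[i] = 2.0f;
    }

    float result = dot_product(a, b);
    float expected = (float)(N * (N - 1)) / 2.0f;  // sum of 0..N-1 = 2016

    printf("result = %f, expected = %f\n", result, expected);
    return std::fabs(result - expected) < 1e-3f ? 0 : 1;
}
```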

Pricing

LegUp 6.2 comes with a 30-day free trial period so that you can try out the tool. Please note that during the trial period, you may only use LegUp for evaluation purposes, and the generated hardware cannot be used in a commercial product. To purchase a full license, please contact us at sales@legupcomputing.com.

Download

You can download LegUp here.

LegUp 6.1 is released!

By Ruolong Lian

We are excited to announce that LegUp 6.1 has been released! You can download LegUp here.
Since last year’s release of LegUp 5.1, we have seen many users create interesting projects with LegUp and have received a lot of valuable feedback. Based on that feedback, we have added a number of new features to further improve the HLS design process, and we have enhanced the tool’s reliability through bug fixes.
Prior to the commercial releases, LegUp had four major releases for academic research and has been used by thousands of researchers around the world, making it the de facto standard in high-level synthesis (HLS) research. In 2014, LegUp won the Community Award at the International Conference on Field Programmable Logic and Applications (FPL) for its contributions to HLS research. LegUp has also been shown to produce state-of-the-art hardware.
This release carries forward the best features from our previous releases and makes LegUp even better, adding new features and improving the quality of the generated hardware.

New Features

Here are just some of the highlights of what we have added for this release:

  • SW/HW Co-simulation: uses your C-based software test bench to automatically verify the LegUp-generated RTL.
  • A C++ FIFO template class to allow more flexible definition of FIFO data types (see the sketch after this list).
  • New FPGA device support for Intel Arria 10, Microsemi PolarFire, and Xilinx Virtex UltraScale+, in addition to the existing device support for Intel, Xilinx, Lattice, Microsemi, and Achronix FPGAs.
  • Improved control-flow optimization.
  • Improved memory partitioning.
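
To illustrate the flexibility a FIFO template class provides, below is a minimal software-only sketch. The Fifo class here is a stand-in written for this example; LegUp ships its own FIFO template with its own header and method names, so refer to the LegUp documentation for the actual API. The point is simply that the element type can be any user-defined struct, so a stream can carry structured data such as pixels rather than only scalar words.

```cpp
#include <cstdio>
#include <queue>

// Stand-in FIFO template for illustration only (not LegUp's class): a
// templated FIFO lets the element type be any user-defined type.
template <typename T>
class Fifo {
    std::queue<T> q;
public:
    void write(const T &v) { q.push(v); }
    T read() { T v = q.front(); q.pop(); return v; }
    bool empty() const { return q.empty(); }
};

// User-defined payload carried through the FIFO.
struct Pixel {
    unsigned char r, g, b;
};

// Producer: pushes a ramp of gray pixels into the FIFO.
void producer(Fifo<Pixel> &fifo) {
    for (int i = 0; i < 8; i++) {
        Pixel p = { (unsigned char)i, (unsigned char)i, (unsigned char)i };
        fifo.write(p);
    }
}

// Consumer: drains the FIFO and sums the red channel.
int consumer(Fifo<Pixel> &fifo) {
    int sum = 0;
    while (!fifo.empty())
        sum += fifo.read().r;
    return sum;
}

int main() {
    Fifo<Pixel> fifo;
    producer(fifo);
    printf("sum of red channel = %d\n", consumer(fifo));  // expect 0+1+...+7 = 28
    return 0;
}
```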

Pricing

LegUp 6.1 comes with a 30-day free trial period so that you can try out the tool. Please note that during the trial period, you may only use LegUp for evaluation purposes, and the generated hardware cannot be used in a commercial product. To purchase a full license, please contact us at sales@legupcomputing.com.

Download

You can download LegUp here.

World’s Fastest Cloud-Hosted Memcached: 11M+ Ops/sec at 0.3 msec Latency with a Single AWS F1

By Jongsok Choi

We are pleased to present the world’s fastest cloud-hosted Memcached on AWS using EC2 F1 (FPGA) instances. With a single F1 instance, LegUp’s Memcached server prototype achieves over 11M ops/sec, a 9X improvement over ElastiCache, at <300 μs latency. It offers 10X better throughput/$ and up to 9X lower latency compared to ElastiCache. Please refer to our 1-page handout for more details.

If you would like a demo of the Memcached server, please contact us at info@legupcomputing.com.

 

LegUp Memcached Handout

 

LegUp Computing closes a seed-funding round led by Intel Capital

By Andrew Canis

TORONTO, February 22, 2018 — LegUp Computing, Inc. announced today that it has closed a seed funding round led by Intel Capital. LegUp offers a cloud platform that enables software developers to program, deploy, scale, and manage FPGA devices for accelerating high-performance applications without requiring hardware expertise. The technology enables the next generation of low-latency, high-throughput computing on the vast amount of real-time data processed in the cloud. LegUp Computing, Inc. was spun out of years of research in the Department of Electrical and Computer Engineering at the University of Toronto to commercialize the award-winning open-source LegUp high-level synthesis tool.

LegUp Computing Team

LegUp Computing team from left to right: Omar Ragheb, Zhi Li, Dr. Andrew Canis, Ruolong Lian, Dr. Jongsok Choi, and University of Toronto Professor Jason Anderson (Photo: Jessica MacInnis)


Synthesizing Streaming Floating-Point Cores with LegUp HLS — Publication in DATE 2018

By Jason Anderson

A paper describing the use of LegUp HLS to synthesize hardware cores for floating-point computations from C-language software will appear at the 2018 ACM/IEEE Design, Automation and Test in Europe (DATE) conference, to be held in Dresden, Germany, in March 2018. The floating-point cores are fully IEEE 754 compliant, yet through software changes alone they can be tailored to application needs, for example by reducing precision or eliminating exception checking, saving area and raising performance in non-compliant variants. The IEEE-compliant cores synthesized by LegUp HLS compare favourably with custom RTL cores from FloPoCo and Altera/Intel, despite being synthesized entirely from software. An advance copy of the paper, jointly authored by the University of Toronto and Altera/Intel, is available: PDF.

 

Using LegUp HLS to Synthesize a Deep CNN Inference Accelerator

By Jason Anderson

A paper describing the synthesis of a deep convolutional neural network (CNN) inference accelerator from C software with LegUp HLS will appear at the 2017 IEEE International System-on-Chip Conference (SOCC) in Munich, Germany, in September 2017. The work showcases LegUp’s unique Pthreads flow, which converts parallel software threads into spatial hardware parallelism. The accelerator incorporates a novel approach to zero-weight skipping, leveraging the ability to prune CNN weights with little loss of inference accuracy.

J. H. Kim, B. Grady, R. Lian, J. Brothers, and J. H. Anderson, “FPGA-Based CNN Inference Accelerator Synthesized from Multi-Threaded C Software,” IEEE SOCC, 2017 (PDF).
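
As a rough illustration of the programming style behind the Pthreads flow described above, here is a minimal, self-contained example: four POSIX threads each accumulate a slice of an array, and under LegUp’s Pthreads flow each software thread would be compiled into its own hardware instance, turning thread-level parallelism into spatial parallelism. The kernel, array size, and thread count are hypothetical and are not taken from the paper.

```cpp
#include <pthread.h>
#include <cstdio>

#define N_THREADS 4
#define N 1024

// Argument/result record for one thread's slice of the array.
struct SliceArg {
    const int *data;
    int start;
    int len;
    long long sum;
};

// Worker: accumulate one slice. Under a Pthreads-to-hardware flow, each
// thread becomes its own hardware instance, so the slices run in parallel.
void *accumulate(void *p) {
    SliceArg *a = static_cast<SliceArg *>(p);
    long long s = 0;
    for (int i = a->start; i < a->start + a->len; i++)
        s += a->data[i];
    a->sum = s;
    return NULL;
}

int main() {
    static int data[N];
    for (int i = 0; i < N; i++)
        data[i] = i;

    pthread_t threads[N_THREADS];
    SliceArg args[N_THREADS];
    const int chunk = N / N_THREADS;

    // Fork: one thread per slice.
    for (int t = 0; t < N_THREADS; t++) {
        args[t].data = data;
        args[t].start = t * chunk;
        args[t].len = chunk;
        args[t].sum = 0;
        pthread_create(&threads[t], NULL, accumulate, &args[t]);
    }

    // Join and combine the partial sums.
    long long total = 0;
    for (int t = 0; t < N_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += args[t].sum;
    }

    printf("total = %lld\n", total);
    return total == (long long)N * (N - 1) / 2 ? 0 : 1;  // 1024*1023/2 = 523776
}
```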