Thanks to everyone who came by and visited our booth at MaRS for the University of Toronto startup showcase! We really enjoyed talking about how LegUp Computing’s technology makes FPGAs much easier to program. I was amazed at how many startups are being spun out of the University of Toronto. We appreciate the support from the UTEST incubator and the University of Toronto as we grow the company.
This week, Zhi and Andrew will be at Supercomputing 2018 (SC18) to showcase our FPGA-accelerated memcached database and our FPGA high-level synthesis software tool. You can visit us in the Startup Pavilion at booth 3773 to learn more about how to program FPGAs from C/C++ using LegUp HLS and how FPGAs can accelerate your workloads!
We have fixed the bugs reported against the LegUp 6.1 release from August; thank you for your feedback! We have also continued to add new features in this release of LegUp.
New features and bug fixes for this release:
- Support for Intel Arria 10 hard floating point operations.
- SW/HW Co-simulation now works with floating point operations.
- Preliminary support for an AXI slave interface used to control a LegUp accelerator.
- Support for integrating a user-defined Verilog module into a LegUp design.
- Improved support for C++ classes and structs.
- Improved memory partitioning.
LegUp 6.2 comes with a 30-day free trial period so that you can try out the tool. Please note that during the trial period, you may only use LegUp for evaluation purposes, and the generated hardware cannot be used in a commercial product. To purchase a full license, please contact us at firstname.lastname@example.org.
You can download LegUp here.
Since the previous release of LegUp 5.1 last year, we have seen many users creating interesting projects using LegUp. We have also received lots of valuable feedback. Based on your feedback, we have added a number of new features to further improve the HLS design process with LegUp. We have also enhanced our tool’s reliability through bug fixes.
Prior to its commercial releases, LegUp had four major releases for academic research and has been used by thousands of researchers around the world, making it the de facto standard in high-level synthesis (HLS) research. In 2014, LegUp won the Community Award at the International Conference on Field Programmable Logic (FPL) for contributions to HLS research. LegUp has also been shown to produce state-of-the-art hardware.
We have brought forward all of the best features from our previous releases and made LegUp even better by adding new features and improving the quality of the generated hardware.
Here are just some of the highlights of what we have added for this release:
- SW/HW Co-simulation: uses your C-based software test bench to automatically verify the LegUp-generated RTL.
- A C++ FIFO template class to allow more flexible definition of FIFO data types.
- New FPGA device support for Intel Arria 10, Microsemi PolarFire, and Xilinx Virtex UltraScale+, in addition to the existing device support for Intel, Xilinx, Lattice, Microsemi, and Achronix FPGAs.
- Improved control-flow optimization.
- Improved memory partitioning.
LegUp 6.1 comes with a 30-day free trial period so that you can try out the tool. Please note that during the trial period, you may only use LegUp for evaluation purposes, and the generated hardware cannot be used in a commercial product. To purchase a full license, please contact us at email@example.com.
You can download LegUp here.
We are pleased to present the world’s fastest cloud-hosted Memcached on AWS using EC2 F1 (FPGA) instances. With a single F1 instance, LegUp’s Memcached server prototype achieves over 11M ops/sec, a 9X improvement over ElastiCache, at <300 μs latency. It offers 10X better throughput/$ and up to 9X lower latency compared to ElastiCache. Please refer to our 1-page handout for more details.
If you would like a demo of the Memcached server, please contact us at firstname.lastname@example.org.
TORONTO, February 22, 2018 — LegUp Computing, Inc. announced today that it closed a seed funding round led by Intel Capital. LegUp offers a cloud platform that enables software developers to program, deploy, scale, and manage FPGA devices for accelerating high performance applications without requiring hardware expertise. The technology enables the next generation of low-latency and high-throughput computing on the vast amount of real-time data processed in the cloud. LegUp Computing, Inc. was spun out of years of research in the Dept. of Electrical and Computer Engineering at the University of Toronto to commercialize the award-winning open-source LegUp high-level synthesis tool.
LegUp Computing team from left to right: Omar Ragheb, Zhi Li, Dr. Andrew Canis, Ruolong Lian, Dr. Jongsok Choi, and University of Toronto Professor Jason Anderson (Photo: Jessica MacInnis)
A paper describing the use of LegUp HLS to synthesize hardware cores for floating-point computations from C-language software will appear at the 2018 ACM/IEEE Design, Automation and Test in Europe (DATE) conference, to be held in Dresden, Germany, in March 2018. The floating-point cores are fully IEEE 754 compliant, yet through software changes alone they can be tailored to application needs, for example by reducing precision or eliminating exception checking, saving area and raising performance in non-compliant variants. The IEEE-compliant cores synthesized by LegUp HLS compare favourably to custom RTL cores from FloPoCo and Altera/Intel, despite the LegUp-generated cores being synthesized entirely from software. An advance copy of the paper, jointly authored by the University of Toronto and Altera/Intel, is available: PDF.
A paper describing the synthesis of a deep convolutional neural network (CNN) inference accelerator from C software with LegUp HLS will appear at the 2017 IEEE International System-on-Chip Conference (SOCC), in Munich, Germany, in September 2017. The work showcases the use of LegUp’s unique Pthreads flow to convert parallel software threads into spatial hardware parallelism. The accelerator incorporates a novel approach for zero-weight skipping, leveraging the ability to prune CNN weights with little inference accuracy loss.
J. H. Kim, B. Grady, R. Lian, J. Brothers, J.H. Anderson, “FPGA-Based CNN Inference Accelerator Synthesized from Multi-Threaded C Software,” IEEE SOCC, 2017 (PDF).