SolidRun and Gyrfalcon Develop First Edge-Optimized AI Inference Server That Bests GPU Performance at a Fraction of the Cost and Power

Featuring Lightspeeur® 2803S Neural Accelerators, SolidRun's Janux GS31 Inference Server Supports Extremely Low-Latency Decoding and Video AI Inference on up to 128 Channels of 1080p Video

MILPITAS, Calif., and TEL AVIV, Israel, Feb. 26, 2020 — (PRNewswire) — SolidRun, a leading developer and manufacturer of high-performance edge computing solutions, and ASIC solutions provider Gyrfalcon Technology Inc. today introduced a co-developed Arm®-based AI inference server optimized for the edge (SRGT-AI31). Highly scalable and modular, Janux GS31 supports today's leading neural network frameworks and can be configured with up to 128 Gyrfalcon Lightspeeur® 2803S AI acceleration chips for unrivaled inference performance on today's most complex video AI models.


Tailor-made to meet the future challenges of mass deployment of artificial intelligence applications at the edge, including energy consumption, cost effectiveness and server real estate, this powerful server foundation allows for accelerated, cost-effective scaling of AI inference. Supporting ultra-low-latency decoding and video analytics on up to 128 channels of 1080p/60Hz video, Janux GS31 is well suited for monitoring smart cities and infrastructure, intelligent enterprise and industrial video surveillance, tagging photos and videos for text-based search, and more.

Featuring best-in-class application performance and energy efficiency, made possible by Gyrfalcon's Lightspeeur® 2803S Neural Accelerator chips, which deliver up to 24 TOPS per watt, SolidRun's edge AI inference server outperforms SoC- and GPU-based systems by orders of magnitude while using a fraction of the energy required by systems of equivalent computational power. Lower energy consumption helps the Janux GS31 server deliver long-term cost savings, and it also requires a smaller upfront investment than competing inference servers.
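As a rough sanity check on how the headline figures relate, the back-of-envelope sketch below combines only the numbers quoted in this release and in the specification table further down; the derived per-chip and power values are estimates, not vendor specifications.

    # Back-of-envelope arithmetic from figures quoted in this release.
    # Derived values are estimates, not SolidRun/Gyrfalcon specifications.
    total_tops = 2150         # aggregate throughput of a fully populated server (spec table)
    num_chips = 128           # maximum Lightspeeur 2803S chips per server
    tops_per_watt = 24        # efficiency figure quoted for the 2803S
    max_system_power_w = 900  # maximum server power draw (spec table)

    tops_per_chip = total_tops / num_chips             # ≈ 16.8 TOPS per chip
    accel_power_w = total_tops / tops_per_watt         # ≈ 90 W for the accelerators alone
    accel_share = accel_power_w / max_system_power_w   # ≈ 10% of the server's maximum draw

    print(f"{tops_per_chip:.1f} TOPS/chip, ~{accel_power_w:.0f} W accelerator power "
          f"({accel_share:.0%} of the 900 W maximum)")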

"Powerful, new AI models are being brought to market every minute, and demand for AI inference solutions to deploy these AI models is growing massively," said Dr. Atai Ziv, CEO at SolidRun. "While GPU-based inference servers have seen significant traction for cloud-based applications, there is a growing need for edge-optimized solutions that offer powerful AI inference with less latency than cloud-based solutions. Working with Gyrfalcon and utilizing their industry-proven ASICs has allowed us to create a powerful, cost-effective solution for deploying AI at the Edge that offers seamless scalability."

"SolidRun's Janux GS31 inference server is a perfect implementation of GTI's AI accelerator technology and the Lightspeeur 2803S," said Bin Lei, senior vice president of sales and marketing. "The design and implementation of this server supports extremely high-performance inference with low energy use for high capacity live HD streaming video encoding and decoding to address demand in surveillance, broadcasting and a wide range of service provider market segments."

Jim McGregor, Founder and Principal Analyst, Tirias Research commented, "AI is rapidly moving to the edge of the network to address the performance and security needs of many applications. As a result, new networks will drive increasing demand for processing performance and efficiency. The SolidRun platform, leveraging the GTI AI acceleration technology, will provide a powerful and efficient way to build a new intelligent network bridging the gap between devices and the cloud."

The Janux GS31 server, designed in partnership between SolidRun and Gyrfalcon, is optimized for AI inference and delivers a powerful computing foundation in an easily scalable 1U rackmount form factor.

Specifications include:

AI Inference

    AI Accelerators: Gyrfalcon Lightspeeur® 2803S (up to 128 chips)
    TOPS: 2,150 tera operations per second @ 300MHz
    AI Networks: VGG, ResNet, MobileNet
    Decoders: Up to 32 i.MX8M with 2D/3D engine and HW video decoders supporting 1080p60
    No. of Channels: Up to 128 channels of 1080p @ 60Hz

Processor & Memory

    Processor: CEx7 LX2160A 16-core Arm® Cortex®-A72 (up to 2GHz)
    Socket: 1
    Memory Type: SO-DIMM DDR4, up to 64GB
    Memory Slots: 2

Network & Connectivity

    Network: 2 x 10GbE SFP+, 1 x 1GbE RJ45
    USB: 2 x USB Type A, 1 x USB Type B

Power & Mechanical

    Power Consumption: Max 900W (single phase)
    Power Input: IEC 60320 connector, 100V~240V
    Form Factor: 1U, 445mm x 43.5mm x 500mm (W x H x D)

Software

    OS Support: Linux
    Supported Frameworks: TensorFlow, Caffe, PyTorch

Environmental

    Operating Temperature: 10°C to 35°C
    Storage & Transport Temperature: -40°C to 60°C
    Operating Humidity: 20% to 80% relative humidity, non-condensing
    Storage & Transport Humidity: 20% to 93% relative humidity, non-condensing
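For context, below is a minimal, hypothetical sketch of the kind of per-frame workload the server targets: MobileNet-class image classification on decoded video frames using one of the listed frameworks (PyTorch). It uses stock torchvision weights and a random tensor as a stand-in for a decoded, resized 1080p frame, and it does not use Gyrfalcon's SDK or any SolidRun-specific API, neither of which is documented in this release.

    # Hypothetical illustration only: standard PyTorch/torchvision inference,
    # not Gyrfalcon's SDK or SolidRun's deployment tooling.
    import torch
    from torchvision import models, transforms

    # Load a stock MobileNet (one of the network families listed above) for inference.
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    model.eval()

    # Stand-in for a decoded, resized video frame (1 x 3 x 224 x 224, ImageNet-style input).
    preprocess = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                      std=[0.229, 0.224, 0.225])
    frame = preprocess(torch.rand(1, 3, 224, 224))

    # One inference pass per frame; a real pipeline would loop over decoded channels.
    with torch.no_grad():
        scores = model(frame)
    top5 = scores.topk(5).indices.squeeze().tolist()
    print("Top-5 class indices:", top5)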

