MemVerge Makes Big Memory Apps Sizzle

Releases Memory Machine™ v1.2 and programs for 3rd Gen Intel Xeon Scalable Processors and Intel Optane Persistent Memory 200 Series

MILPITAS, Calif., April 6, 2021 — (PRNewswire) — MemVerge™, the pioneer of Big Memory software, today announced the release of Memory Machine software version 1.2. The software delivers Big Memory performance and capacity by leveraging up to 40 cores in 3rd Gen Intel Xeon Scalable processors (code-named Ice Lake) and up to 6TB of memory per socket with Intel Optane persistent memory 200 series. The company also announced its membership in the CXL™ Consortium and the opening of five Big Memory Labs at Arrow, Intel, MemVerge, Penguin Computing, and WWT, now equipped and available for Big Memory demonstrations, proof-of-concept testing, and software integration.

"Memory Machine v1.2 is designed to allow application vendors and end-users to take full advantage of Intel's latest Xeon Scalable processor and Optane memory technology," said Charles Fan, CEO of MemVerge. "We started by providing access to new levels of performance and capacity without requiring changes to applications."

Big Memory is Bigger, Faster, and More Available with Memory Machine v1.2
Pioneered by MemVerge, Big Memory software uniquely makes 100% use of available memory capacity while providing new operational capabilities to memory-centric workloads such as virtualized cloud infrastructure, in-memory databases, genomics, and animation/VFX. Memory Machine v1.2 adds the following capabilities to make Big Memory bigger, faster, and more available to an ever-broader set of applications:

  • Support for 3rd Gen Intel Xeon Scalable Processors – from 8 to 40 cores
  • Support for Intel Optane Persistent Memory 200 Series – 32% more bandwidth and up to 6TB per socket vs. previous generation
  • Centralized Memory Management – configuration, monitoring, management, and alerts for DRAM and PMEM across the data center
  • Redis & Hazelcast Cluster HA – coordinated in-memory snapshots among the members of an in-memory database cluster enable instant recovery of the entire cluster (a conceptual sketch follows this list).
  • Support for Microsoft SQL Server on Linux – double OLTP performance at the same memory cost by leveraging 100% of DRAM and PMEM capacity and proprietary memory tiering technology.
  • Support for KVM hypervisors – Memory Machine v1.2 adds advanced memory management for QEMU-KVM, enabling utilization of 100% of DRAM+PMEM capacity, dynamic tuning of the DRAM:PMEM ratio per VM, and reduced performance degradation from noisy neighbors.
  • Acceleration of Genomic Analytics – Memory Machine v1.1 was shown to accelerate overall single-cell analytics pipelines by 60% by eliminating storage I/O, and read and write testing with v1.2 is expected to improve on those results.
  • Animation and VFX HA and Efficiency – autosave and in-memory snapshots give animation and VFX applications "Time Machine" capabilities that let artists share workspaces instantly and recover from crashes in seconds.
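To illustrate the kind of coordination the Redis & Hazelcast Cluster HA feature refers to, the sketch below shows a conventional analogue using only the standard redis-py client: writes are briefly paused across every member of a hypothetical Redis cluster, a snapshot is triggered on each node, and the script waits for all snapshots to complete so that they capture a mutually consistent point in time. The host names are placeholders, and the disk-based BGSAVE used here is not MemVerge's ZeroIO mechanism; it simply shows why cluster-wide coordination is what makes whole-cluster recovery possible.

    # Conceptual sketch only: coordinating a point-in-time snapshot across the
    # members of a Redis cluster with the standard redis-py client. Host names
    # are placeholders, and BGSAVE (a disk-based RDB dump) stands in for the
    # snapshot step; it is not MemVerge's ZeroIO in-memory snapshot.
    import time
    import redis

    MEMBERS = [("redis-node-1", 6379), ("redis-node-2", 6379), ("redis-node-3", 6379)]

    def coordinated_snapshot(members, pause_ms=2000):
        """Pause writes on every member, then snapshot each one, so that all
        members capture a mutually consistent point in time."""
        clients = [redis.Redis(host=h, port=p) for h, p in members]

        # 1. Quiesce writes cluster-wide (CLIENT PAUSE ... WRITE, Redis 6.2+)
        #    so the per-node snapshots line up with one another.
        for c in clients:
            c.execute_command("CLIENT", "PAUSE", str(pause_ms), "WRITE")

        # 2. Note each member's last completed save, then request a new one.
        before = [c.lastsave() for c in clients]
        for c in clients:
            c.bgsave()

        # 3. Wait until every member reports a save newer than the one noted
        #    above; the write pause expires on its own after pause_ms.
        for c, prev in zip(clients, before):
            while c.lastsave() <= prev:
                time.sleep(0.1)

    if __name__ == "__main__":
        coordinated_snapshot(MEMBERS)

In Memory Machine, the capture step operates on the in-memory state itself rather than a disk dump, which is what enables instant recovery of the entire cluster.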

According to Mark Wright, Technology Manager for Chapeau Studios, "Initially, we opened a poly-dense scene in Maya and it took two-and-a-half minutes. Then, we opened a scene from a snapshot we'd taken with Memory Machine and it took eight seconds. In addition to opening exponentially faster, another benefit of the Memory Machine snapshot is that it gets an artist right to the spot in the application where they were when they created a snapshot. There's no need to repopulate the entire application."

Apps Sizzle on Ice Lake in Testing by StorageReview.com
The independent lab StorageReview.com assembled a server configured with Intel Optane persistent memory 200 series, 3rd Gen Intel Xeon Scalable processors, and Memory Machine software from MemVerge. The lab ran bulk insert and read tests with kdb+, as well as Redis quick-recovery and Redis clone tests using ZeroIO Snapshot.

In summary, the configuration with Optane persistent memory 200 series, 3rd Gen Xeon Scalable processors, and Memory Machine software demonstrated 2x read performance and 3x write performance. For more details, read the full review.

According to Kevin O'Brien, Lab Director at StorageReview.com, "The real benefits of PMEM show up when you can leverage it at the byte level with the appropriate software. In many cases, application developers like SAP tune their applications to be able to leverage PMEM. While that works for some applications, there's another option: leverage a software-defined solution that's built from the ground up to help businesses leverage all of the performance and persistence benefits PMEM 200 offers. To test this latest generation of PMEM, that's exactly what we did."
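O'Brien's point about byte-level use of PMEM refers to direct access (DAX), in which persistent memory is mapped into an application's address space and read or written with ordinary loads and stores rather than block I/O. The sketch below, which assumes a persistent-memory file system mounted with the dax option at the placeholder path /mnt/pmem, shows that mechanism using nothing but the Python standard library.

    # Minimal sketch of byte-level (DAX) access to persistent memory.
    # Assumptions: a persistent-memory file system is mounted with -o dax at
    # /mnt/pmem; the path and file name are placeholders for illustration.
    import mmap
    import os

    PMEM_FILE = "/mnt/pmem/example.dat"   # hypothetical file on a DAX mount
    SIZE = 4096                           # map a single page for the example

    # Create and size the backing file, then map it into the address space.
    fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, SIZE)
    buf = mmap.mmap(fd, SIZE)

    # With DAX, these loads and stores reach the persistent media directly,
    # bypassing the page cache and the block I/O stack.
    buf[0:13] = b"hello, pmem!\n"
    value = bytes(buf[0:13])

    # Flush the updated range so the data is durable on the media.
    buf.flush(0, SIZE)
    buf.close()
    os.close(fd)
    print(value.decode().strip())

That is the distinction O'Brien draws: either the application is tuned for PMEM directly, or a software-defined layer such as Memory Machine exposes the combined DRAM and PMEM pool to unmodified applications.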

Big Memory Labs Accelerate New Technology Evaluation and Integration
Big Memory makes use of Intel Optane persistent memory and MemVerge Memory Machine software. For IT organizations and vendors that need quick access to a Big Memory environment, Big Memory Labs are now available for demonstrations, proof-of-concept testing, and software integration. Visit the Big Memory Lab page at memverge.com for information about each lab's capabilities and to schedule time at Arrow, Intel, MemVerge, Penguin Computing, StorageReview.com, and WWT.

The Big Memory Opportunity
DRAM was invented in 1969. More than 50 years later, it remains expensive, scarce, and volatile, and its pace of evolution cannot keep up with the demands of modern applications that must process large quantities of data and deliver results in real time. Fortunately, the invention of Intel 3D XPoint technology breathed new life into the aging segment and marked the advent of the age of Big Memory Computing.

By 2024, almost a quarter of all data created will be real-time data, and two-thirds of Global 2000 corporations will have deployed at least one real-time application that is considered mission-critical, according to IDC.

These next-generation applications (NGAs) frequently employ big data analytics and AI/ML. The real-time workloads are found in many industries on-premises and in the cloud. Examples include in-memory databases and fraud analytics in financial services, customer profiling in social media, recommendation engines in retail, 3D animation in media & entertainment, genomics in health sciences, and security forensics, to name just a few.

The result is explosive adoption of Big Memory Computing designed for big and fast data. IDC estimates the market opportunity for persistent memory will grow from $65 million to $2.6 billion by 2023. Coughlin & Associates estimates that revenue for persistent memory will reach $25 billion by 2030, equal to revenue for DRAM.

About MemVerge
The advent of persistent memory is sparking a new era of Big Memory Computing where applications of any size can forgo traditional storage in favor of abundant, persistent and highly available pools of memory. Memory Machine™ software from MemVerge makes this possible by virtualizing DRAM and persistent memory to form a platform for enterprise-class in-memory data services. To learn more about MemVerge, visit www.memverge.com.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. 

Media Contact:
Steve Sturgeon
MemVerge
Steve.sturgeon@memverge.com
858.472.5669

View original content to download multimedia: http://www.prnewswire.com/news-releases/memverge-makes-big-memory-apps-sizzle-301263208.html

SOURCE MemVerge



