SAN JOSE, Calif. — (BUSINESS WIRE) — December 20, 2016 — The market has evolved from technologists looking to learn and understand new big data technologies to customers who want to learn about new projects, new companies and, most importantly, how organizations are actually benefiting from the technology. According to John Schroeder, executive chairman and founder of MapR Technologies, Inc., the acceleration in big data deployments has shifted the focus to the value of the data.
John has crystallized his view of market trends into these six major predictions for 2017:
1 - Artificial Intelligence is Back in Vogue
In the 1960s, Ray Solomonoff laid the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction. In 1980, the First National Conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford and marked the application of those theories in software. AI is now back in mainstream discussions and is the umbrella buzzword for machine intelligence, machine learning, neural networks, and cognitive computing. Why is AI a rejuvenated trend? The three V's come to mind: Velocity, Variety and Volume. Platforms that can process the three V's with modern and traditional processing models, and that scale horizontally, provide 10-20X the cost efficiency of traditional platforms. Google has documented how simple algorithms executed frequently against large datasets yield better results than other approaches using smaller sets. We'll see the highest value from applying AI to high-volume, repetitive tasks, where consistent machine execution is more effective than human intuitive oversight, which comes at the expense of human error and cost.
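As a rough illustration of the "simple algorithms against large datasets" point, consider the Python sketch below. The models, dataset sizes, and synthetic data are our own assumptions for demonstration, not Google's published experiments:

    # Illustrative only: a simple model trained on a large dataset vs. a
    # more complex model trained on a small slice of the same data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=200_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Simple algorithm, all available training data.
    simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # More elaborate model, but trained on only a small sample of the data.
    complex_small = DecisionTreeClassifier().fit(X_train[:1500], y_train[:1500])

    print("simple + big data:  ", accuracy_score(y_test, simple.predict(X_test)))
    print("complex + small data:", accuracy_score(y_test, complex_small.predict(X_test)))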
2 - Big Data for Governance or Competitive Advantage
In 2017, the governance vs. data value tug of war will be front and center. Enterprises have a wealth of information about their customers and partners. Leading organizations will manage their data between regulated and non-regulated use cases. Regulated use cases require data governance, data quality, and lineage so a regulatory body can report on and track data through every transformation back to its originating source. This governance is mandatory and necessary for regulated use cases, but limiting for non-regulatory use cases like customer 360 or offer serving, where higher cardinality, real-time access, and a mix of structured and unstructured data yield more effective results.
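To make the lineage requirement concrete, here is a hypothetical Python sketch, with names and structure invented for illustration rather than drawn from any MapR API, of records that carry their transformation history so any value can be traced back to its originating source:

    # Hypothetical sketch: a record that accumulates lineage metadata as it
    # is transformed, so a regulator can trace any value back to its source.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class GovernedRecord:
        value: dict
        source: str                      # originating system of record
        lineage: list = field(default_factory=list)

        def transform(self, name, fn):
            """Apply a transformation and log it in the lineage trail."""
            self.value = fn(self.value)
            self.lineage.append({
                "step": name,
                "at": datetime.now(timezone.utc).isoformat(),
            })
            return self

    rec = GovernedRecord(value={"ssn": "123-45-6789", "balance": "1024.50"},
                         source="core-banking-db")
    rec.transform("mask_pii", lambda v: {**v, "ssn": "***-**-6789"})
    rec.transform("cast_types", lambda v: {**v, "balance": float(v["balance"])})
    print(rec.source, rec.lineage)   # full trail back to the originating source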
3 - Companies Focus on Business-Driven Applications to Keep Data Lakes from Becoming Swamps
In 2017, organizations will shift from the “build it and they will come” data lake approach to a business-driven data approach. Today’s world requires analytics and operational capabilities to address customers, process claims and interface with devices in real time at an individual level. For example, any ecommerce site must provide individualized recommendations and price checks in real time. Healthcare organizations must process valid claims and block fraudulent claims by combining analytics with operational systems. Media companies are now personalizing content served through set-top boxes. Auto manufacturers and ride-sharing companies are interoperating at scale with cars and their drivers. Delivering these use cases requires an agile platform that can provide both analytical and operational processing, increasing value from additional use cases that span from back-office analytics to front-office operations. In 2017, organizations will push aggressively beyond an “asking questions” approach and architect to drive initial and long-term business value.
4 - Data Agility Separates Winners and Losers
Software development has become agile, with DevOps providing continuous delivery. In 2017, processing and analytic models will evolve to provide a similar level of agility as organizations realize that data agility, the ability to understand data in context and take business action, is the source of competitive advantage, not simply having a large data lake. The emergence of agile processing models will enable the same instance of data to support batch analytics, interactive analytics, global messaging, database, and file-based models. More agile analytic models are also enabled when a single instance of data can support a broader set of tools. The end result is an agile development and application platform that supports the broadest range of processing and analytic models.
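As a toy illustration of one data instance serving multiple processing models (a deliberate simplification of the idea, not the MapR platform itself), the Python sketch below lets a single append-only event log feed both a batch scan and a streaming tail:

    # Toy illustration: one append-only event log serves both a batch reader
    # (full scan) and a streaming reader (tail of new arrivals). File name
    # and event shape are assumptions for the example.
    import json

    LOG = "events.log"
    open(LOG, "w").close()               # start with an empty log for the demo

    def append_event(event):
        with open(LOG, "a") as f:
            f.write(json.dumps(event) + "\n")

    def batch_read():
        """Batch analytics: scan the entire log."""
        with open(LOG) as f:
            return [json.loads(line) for line in f]

    def stream_read(offset):
        """Streaming consumer: read only events appended after `offset`."""
        with open(LOG) as f:
            f.seek(offset)
            events = [json.loads(line) for line in f]
            return events, f.tell()

    append_event({"user": "a", "action": "click"})
    offset = 0
    _, offset = stream_read(offset)      # streaming consumer catches up
    append_event({"user": "b", "action": "buy"})
    print(batch_read())                  # batch view: all events
    print(stream_read(offset)[0])        # stream view: only the new event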
5 - Blockchain Transforms Select Financial Service Applications
In 2017, select transformational use cases will emerge in financial services, with broad implications for the way data is stored and transactions are processed. Blockchain provides a global distributed ledger that changes the way data is stored and transactions are processed. The blockchain runs on computers distributed worldwide, where the chains can be viewed by anyone. Transactions are stored in blocks; each block refers to the preceding block and is timestamped, storing the data in a form that cannot be altered. Because everyone can view the entire blockchain, tampering by hackers is readily detected, making the ledger effectively immutable. Blockchain provides obvious efficiency for consumers. For example, customers won't have to wait for that SWIFT transaction or worry about the impact of a central datacenter leak. For enterprises, blockchain presents a cost savings and an opportunity for competitive advantage.
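A minimal Python sketch of the hash-chained structure described above; real blockchains add consensus (proof-of-work or similar) and peer-to-peer distribution, which are omitted here:

    # Minimal sketch of a hash-chained ledger: each block stores the hash of
    # its predecessor, so altering any past block breaks every later link.
    import hashlib, json, time

    def make_block(transactions, prev_hash):
        block = {
            "timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": prev_hash,
        }
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    def verify(chain):
        """Recompute each hash and check the links to the preceding block."""
        for i, block in enumerate(chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if block["hash"] != expected:
                return False
            if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                return False
        return True

    chain = [make_block(["genesis"], prev_hash="0")]
    chain.append(make_block(["alice->bob: 10"], prev_hash=chain[-1]["hash"]))
    print(verify(chain))                     # True
    chain[0]["transactions"] = ["tampered"]  # any alteration is detectable
    print(verify(chain))                     # False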
6 - Machine Learning Maximizes Microservices Impact
This year we will see activity increase for the integration of machine learning and microservices. Previously, microservices deployments have been focused on lightweight services, and those that do incorporate machine learning have typically been limited to “fast data” integrations applied to narrow bands of streaming data. In 2017, we’ll see development shift to stateful applications that leverage big data, and to machine learning approaches that use large amounts of historical data to better understand the context of newly arriving streaming data.
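A hedged Python sketch of that pattern, in which the event fields, baselines, and anomaly threshold are all illustrative assumptions: a stateful scorer distills historical data into per-user state, then uses that state to put newly arriving stream events in context.

    # Illustrative sketch: a stateful scoring service keeps per-user baselines
    # learned from historical data, then flags arriving stream events that
    # deviate from that context. Thresholds and fields are assumptions.
    from statistics import mean, stdev

    class StatefulScorer:
        def __init__(self, history):
            # Historical data distilled into per-user state.
            self.baselines = {}
            for user, amounts in history.items():
                self.baselines[user] = (mean(amounts), stdev(amounts))

        def score(self, event):
            """Score a newly arriving event against the user's history."""
            mu, sigma = self.baselines.get(event["user"], (0.0, 1.0))
            z = abs(event["amount"] - mu) / (sigma or 1.0)
            return {"event": event, "anomalous": z > 3.0}

    history = {"alice": [20.0, 25.0, 22.0, 24.0, 21.0]}
    scorer = StatefulScorer(history)
    for event in [{"user": "alice", "amount": 23.0},
                  {"user": "alice", "amount": 400.0}]:
        print(scorer.score(event))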
“Our predictions are strongly influenced by leading customers who have gained significant business value by integrating analytics into operational use cases,” said Schroeder. “Our customers’ use of the MapR converged data platform provides agility to DevOps, where they can use a broad range of processing models from Hadoop to Spark, SQL, NoSQL, files and message streaming, whatever is required for their current and future use cases in private, public and hybrid cloud deployments.”
Tweet this: .@MapR Executive Chairman Identifies 6 Big Data Predictions for 2017 http://bit.ly/2gTiD5j
About MapR Technologies
MapR enables organizations to create disruptive advantage and long-term value from their data with the industry’s only Converged Data Platform, which delivers distributed processing, real-time analytics, and enterprise-grade requirements across cloud and on-premise environments, while leveraging the significant ongoing development in open source technologies including Spark and Hadoop. Organizations with the most demanding production needs, including sub-second response for fraud prevention, secure and highly available data-driven insights for better healthcare, petabyte analysis for threat detection, and integrated operational and analytic processing for improved customer experiences, run on MapR. A majority of customers achieves payback in fewer than 12 months and realizes greater than 5X ROI. MapR ensures customer success through world-class professional services and with free on-demand training that 50,000 developers, data analysts and administrators have used to close the big data skills gap. Amazon, Cisco, Google, HPE, Microsoft, SAP, and Teradata are part of the worldwide MapR partner ecosystem. Investors include Future Fund, Google Capital, Lightspeed Venture Partners, Mayfield Fund, NEA, Qualcomm Ventures and Redpoint Ventures. Connect with MapR on LinkedIn and Twitter.