Humans, Machines, and the Power of Know Now

Taylorism to Consumerism: Decoding the Enterprise Digital Genome

As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.

Albert Einstein.

The Context

The business landscape has changed drastically over the past decade. Increased competition and digital transformation have changed the way businesses must operate. With customers having more options to choose from, businesses must be able to adapt rapidly to their customers' changing demands if they wish to survive, let alone succeed. Customers now expect many of their digital services to be delivered in real time, forcing businesses to replace outdated and mismanaged business processes with adaptive, customer-facing ones. This is where businesses face their biggest challenge: they have already invested heavily in business process management systems optimized for a line of production, and those systems are too rigid to deal with the constantly evolving demands of customers in this new business landscape.

While Taylorism, a production-efficiency methodology that breaks every action, job, or task into small, simple segments that can be easily analyzed and taught within command-and-control structures, has benefited many businesses, command-and-control structures are now obsolete. Enterprises need to decode their business genome to better understand the pathological state of their enterprise processes and create their unique differentiation. Business genomes provide valuable insights into critical success factors such as total customer experience, velocity, and operational efficiency, enabling enterprises to continuously learn from and adapt their customer, operations, and product/service processes and systems to deliver instant and constantly improving customer value.

Connected People, Processes, and Things

We are now in an age where everything is connected and intelligent machines are making their way to the forefront of many industries. These changes give businesses that incorporate adaptive processes a huge competitive advantage. Processes must now not only be efficient but also help businesses sense, learn, predict, and act rapidly at business-critical points of interaction across the enterprise. Such intelligent processes and applications will enable businesses to optimize every step of the customer experience, from marketing and sales to order fulfillment and customer support, and to identify new markets for new products and services. While business process modeling tools (such as BPMN) helped organizations improve workflow, newer technologies such as social media, mobile, analytics, and cloud are driving a new era of business process innovation. Advances in machine learning, real-time analytics, contextual intelligence, mobility, and cloud are now enabling cognitive applications as a strategic tool for transforming organizations into customer-focused, adaptive organizations. These tools and technologies enable real-time, collaborative decision-making by creating networks of subject matter experts (knowledge workers) and providing them with the insights, information, next best actions, and recommendations they need, creating an optimal operating environment.

 

The Enterprise Digital Genome

Collaboratively discovering and dynamically adapting to real-world, even real-time, situations is the key for 21st-century organizations to deal with unpredictability. There will be rapid advancements in technology, as well as in how humans interact with and respond to these technological changes. Soon, human and machine conversations will be an integral part of our daily lives.

Enterprises have implemented Business Process Management (BPM) and Business Intelligence (BI) tools across the enterprise and their supply chains. However, given the complexity of these tools and the depth of professional services required to deliver their true value, customer experience optimization has been slowed by cogs in the wheels of adaptive processes. In addition, organizations have been investing billions of dollars to measure and mitigate risk. This siloed approach is creating more complexity than ever before. To reduce and even eliminate these cogs, risks, and complexities, enterprises must now view performance and risk intelligence as an integrated strategy for simultaneously optimizing customer experience and business innovation. It is not a choice between performance management and risk management; it is the combination of the two, glued together by powerful augmented intelligence (the symbiosis of human and machine intelligence), that enables the integration and automation of processes and analytics, improves information and collaboration, and incorporates best practices through a network of subject matter experts and organizational knowledge. We call this AI-powered, integrated performance and risk intelligence system and its applications EDGE (Enterprise Digital Genome™).

Imagine fully integrated performance and risk intelligence layered into the value stream of an enterprise. The customer support team is able to focus on customer needs, with easy access to the entire company’s repertoire of knowledge, similar cases, information, and expertise. To truly accommodate customers, companies must vest real power and authority in the people and systems that interact directly with customers, at the edge of the organization and beyond. EDGE applications augment business processes to deliver a true data-driven process infrastructure, ushering enterprises into the age of intelligent machines and intelligent processes. EDGE applications empower knowledge workers to collaborate, derive new insights, and fine-tune business processes by placing customers right in the center where they belong, driving innovation and organizational efficiencies across the global enterprise.

EDGE apps also help organizations focus on improving and optimizing the line of interaction, where their people and systems come into direct contact with customers. It is a whole new way of doing business, one that enables organizations to become a single living, breathing entity through collaboration and adaptive, data-driven, biological-like operating systems.

Conclusion

The Enterprise Digital Genome (EDGE), in my opinion, is the future blueprint for doing business. Business leaders should adopt systemic EDGE thinking as a way to radically rethink, disrupt, and sharpen their business processes so they can better anticipate an increasingly unpredictable future and better prepare for the opportunities that emerge from it.

Microservices: Real-time Latency Diagnostics

Contributors: Sumit Arrawatia and Surendra Reddy

Recently, there has been a lot of buzz around Microservices. We have been working with a financial services company with a large microservices deployment to develop a real-time operational insights application, aimed at maximizing visibility into customer experience KPIs.  Fast response to customer needs directly impacts the company’s revenue.  This post describes the part of the application focused on optimizing service latency and a reference architecture, based on open source technologies, that scales to 1.2B events per day.

The concept of Microservices is not new, but the term has recently exploded in popularity, fueled by the success of containerization, programmable infrastructure, and new breeds of storage systems (NoSQL, NewSQL, etc.). Microservices allow you to compose large applications from small, independent, collaborating services. The main advantage of this approach is that these services can be deployed, scaled, and changed independently.  This allows the system to evolve smoothly over time to accommodate new functionality and new technologies (data exchange protocols, storage systems, languages, frameworks, etc.).

The advantages of Microservices come at a price: integration, deployment, and monitoring are more complex.  A single user interaction with the system (a page request or API call) fans out to tens of microservices (sometimes more than 1,000) that work in concert to service the request.  Performance metrics, user-activity events, and application-internal logs get spread over tens of thousands of machines across multiple availability zones and data centers.  Traditional approaches to detecting and diagnosing system issues break down at this scale because:

  1. Information is dispersed.  Operational data is often spread over many machines in many formats.  Stitching together pieces of information from all these sources creates a lot of friction when trying to diagnose system issues.
  2. Information is analyzed at the wrong granularity.  Monitoring or alerting at the machine or microservice level yields too many metrics and alerts for human operators to comprehend.  It can be hard to quickly assess the scope of an issue.  Because of cascading interactions between sub-systems, it can be very difficult to separate cause from effect.
  3. Metrics lack sufficient context.  When metrics are recorded, they are often missing some context required for downstream analysis and action.  For example, a microservice may log how long each request takes but may not record other information such as its deployment context (datacenter, availability zone, database bindings, and other shared resources it’s using), user context (web vs mobile, active user vs new customer), or business context (such as transaction value).  Historical context is not available either.  All these pieces of information need to be combined later to answer questions like “which components/systems are affected?”,  “what product functionality is affected?”,  “which customers are affected?”, and “has this happened before?”  (A sketch of such an enriched event follows this list.)
  4. The deployment model is informal.  Until an organization reaches the level of operational maturity where infrastructure is managed like code, it likely does not have an explicit model of its system.  Instead, it relies on community knowledge of how all the pieces of the infrastructure are wired together.  This can severely delay diagnosis of system issues because experts must be brought in to examine different sub-systems.  Having a formal deployment model can make it much easier to dive into unfamiliar parts of the system (like having a GPS navigation device when entering a new city).
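To make the missing-context problem in point 3 concrete, here is a minimal sketch of what an enriched event might look like once deployment, user, and business context are attached. All field names and values are illustrative assumptions, not the actual schema used in the system described here.

```python
# Hypothetical illustration: a raw latency measurement joined with the
# deployment, user, and business context discussed in point 3 above.
raw_event = {
    "service": "payment-auth",
    "trace_id": "a1b2c3",
    "latency_ms": 412,
    "timestamp": "2015-06-01T12:00:03Z",
}

context = {
    "deployment": {"datacenter": "dc-west", "zone": "az-2", "db_shard": "shard-07"},
    "user": {"channel": "mobile", "segment": "active"},
    "business": {"transaction_value_usd": 129.99},
}

# The enriched record is what downstream aggregation, alerting, and
# drill-down queries would operate on.
enriched_event = {**raw_event, **context}
print(enriched_event)
```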

These factors limit the situational awareness of operations teams and their ability to react quickly to issues or address them proactively.  A new architecture is needed to collect streaming data from multiple sources, enrich it with contextual information, present high-level visualizations, and support interactive drill-down to diagnose an issue.  All of this must be done in near real time so that new events are visible to operators within a few seconds.

The real-time pipeline and application requirements were:

  1. Low latency – Events must be collected, processed, indexed, and queryable within a few seconds.
  2. High throughput – The system must handle several billion events per day with a wide gap between peak and average rates (a rough sizing sketch follows this list).
  3. High availability – Since it is critical to operations, the system must remain available in the face of faults.
  4. Integrate multiple data sources – Latency metrics come from log files while system deployment data comes from a REST API.
  5. Streaming enrichment – Add context to performance metrics to enable meaningful summary views and diagnostics.
  6. Flexible, interactive queries and visualization – Support fast, OLAP-style queries displayed with interactive visualizations on real-time and historical data.
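To put requirement 2 in perspective, here is a quick back-of-the-envelope calculation, assuming the roughly 1.2B events/day figure from the introduction and an assumed (not measured) 5x peak-to-average ratio:

```python
# Rough sizing check for the throughput requirement.
events_per_day = 1.2e9
avg_rate = events_per_day / 86_400   # ~14,000 events/sec on average
peak_rate = avg_rate * 5             # assumed peak-to-average ratio for illustration
print(f"average: {avg_rate:,.0f} events/sec, assumed peak: {peak_rate:,.0f} events/sec")
```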

Reference Architecture

In this architecture, Apache Kafka provides integration for multiple stream data sources.  To enrich event streams (transform, join with dimensional data, score, classify), we leverage Apache Samza’s fault-tolerant local state and add accessible JavaScript and Python APIs on top. Druid enables interactive, real-time OLAP queries to support the 10,000-foot view, while Elasticsearch handles fast drill-down to individual records, tied together by a trace ID.

[Figure: Real-time pipeline architecture]
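To illustrate the enrichment step, here is a minimal Python sketch of its logic. The production pipeline runs this inside Apache Samza with fault-tolerant local state; the kafka-python client, topic names, and fields below are purely illustrative assumptions, not the actual job.

```python
# Minimal sketch: join raw latency events with deployment context before indexing.
import json
from kafka import KafkaConsumer, KafkaProducer

# Dimensional data (the deployment model) would normally come from the REST API
# mentioned above and be kept in the stream processor's local state store.
deployment_by_service = {"payment-auth": {"datacenter": "dc-west", "zone": "az-2"}}

consumer = KafkaConsumer("latency-events", bootstrap_servers="localhost:9092")
producer = KafkaProducer(bootstrap_servers="localhost:9092")

for msg in consumer:
    event = json.loads(msg.value)
    # Attach deployment context so Druid/Elasticsearch can group and filter on it.
    event["deployment"] = deployment_by_service.get(event["service"], {})
    producer.send("enriched-latency-events", json.dumps(event).encode("utf-8"))
```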

We’re building visualizations to allow the operations team to quickly detect and diagnose latency anomalies at multiple logical levels of the infrastructure (e.g., datacenter, zone, network segment, database shard).  These top-level views quickly give a sense of the scope of an issue and where to look for more detail.  Since there are 1,000+ services that can be grouped in many ways, the treemap visualization below highlights which groups need attention (big and red).  The size of each box denotes its importance to system stability (using total time performing service as an indicator).  Its color is based on how anomalous its latency is compared to normal.

[Figure: Latency treemap visualization]
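For concreteness, here is a rough sketch of how each box’s size and color could be derived from latency data. The z-score against a historical baseline is an assumed anomaly measure for illustration, not necessarily the one used in the production system.

```python
# Sketch: size a treemap box by total time spent in the service; color it by
# how far current latency deviates from its historical baseline.
import statistics

def treemap_box(current_latencies_ms, historical_latencies_ms):
    size = sum(current_latencies_ms)  # total time performing the service
    baseline_mean = statistics.mean(historical_latencies_ms)
    baseline_stdev = statistics.pstdev(historical_latencies_ms) or 1.0
    anomaly = (statistics.mean(current_latencies_ms) - baseline_mean) / baseline_stdev
    return {"size": size, "anomaly_score": anomaly}  # score drives the red/green shading

print(treemap_box([120, 450, 600], [100, 110, 95, 105, 120]))
```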

From there, you can drill down to individual services and see trends over time. This helps to quickly isolate the issue and take timely action to resolve it.

[Figure: Drill-down view of individual service latency trends]

[Figure: Process map]

Result

The system has already been adopted and is heavily used by the operations team, and it has greatly improved their response time.  Previously it took on the order of 30 minutes to detect, diagnose, and respond to latency issues.  With the new system in place, response time is closer to 5 minutes.  What’s more, the new system allows them to detect issues much earlier than before and take preventative action.  Recently the tool highlighted sub-optimal performance in a replicated database.  The team was able to identify a misconfiguration and fix it before customers were affected.

What’s Next?

Of course, there’s more we can do to improve this.  We want to integrate more data sources, improve the UIs, and automatically surface to the user the most likely cause. We’re also applying this pattern of stream-based enrichment plus real-time query to other contexts such as customer service interactions.

On Founding the (Big Data) Foundry

(Contributed by Michael Hay, VP/CTO, Hitachi Data Systems)

[Photo: Steve Hoover, PARC]

As a start-up, an established research organization, an established networking devices plus IT infrastructure vendor, and an established IT infrastructure and devices vendor, we observed a variety of Bring Your Own X events that must transpire before launching a next-generation solution.  At PARC, our ability to set up, experiment with, and tear down computing infrastructure is a key driver in delivering breakthrough ideas. For example, in the PARC and SAP real-time analytics exploration project, we spent six months on infrastructure setup before we could start doing data science. In addition, very complex computing landscapes were needed to iterate machine-learning algorithms on very large volumes of data. Our lesson: having PARC researchers, who are highly specialized in solving complex problems, spend their precious time on infrastructure plumbing and data wrangling was and is wasteful.  On top of this, data security and privacy issues can’t be ignored. If there had been a way to rapidly compose, use, and tear down different computing landscapes on demand, we could have conducted far more data science experiments, resulting in more and higher-quality outcomes.

Another example is Boeing’s case, related to Edge on AWS, where they realized that Amazon doesn’t have loads of architects, data plumbers, and data scientists on hand to answer your every question. Instead, Boeing had to Bring Their Own Teams to build Edge. More generally, there are still largely unanswered questions about data privacy, sovereignty, and safety, which not only persist but, due to Black Swan events, are gaining steam. Having reviewed the publicly available literature and mashed it up with our own experiences, we realized there had to be a better way.  And as a group, Quantiply, PARC, Cisco Systems, and Hitachi Data Systems set out to find one.

[Photo: Lew Tucker, Cisco Systems]

Therefore, these observed and experienced industry-wide challenges provided the catalyst to create the Big Data Foundry. As a group, we seek not only to attack these challenges, but also to explore and resolve new problem spaces. What we’re launching provides a place where it is feasible to quickly set up your software stacks, combine talents from the members (i.e., PARC, Quantiply, HDS, and Cisco), conclude your project, and then pick the deployment model that best meets your requirements. In essence, we’ve all heard the challenges from the market around Big Data/advanced analytics and the emerging space of the Internet of Things, typified by questions like: “How can I buy a full and complete stack of software and hardware where this stuff just runs?” Well, this is the kind of work we’re tackling, and our journey is just beginning. In fact, we’ve already unleashed PARC researchers as well as select startups onto the Big Data Foundry. What we’ve witnessed are both processing-time reductions and provisioning-speed improvements, and the key is that not only can you work with the founding members, but you can buy the stack as well.

As we look ahead to the Internet of Things (IoT) revolution, with an estimated 25 billion connected devices expected in the next five years, we believe the Foundry has a role to play. Specifically, we think there are profound challenges associated with new application and solution assemblies, spanning from people to technology. Development costs, social engineering issues, access to new technologies, risks and concerns related to data governance, and scarcity of resources all represent obstacles on the path to realizing true value from IoT. It is our intention to begin tackling many of these challenges in a pragmatic and executable fashion, showing clear value along the way.

In the spirit of pragmatism, we’d like to end this post with a photo of the team. Interestingly, on this day we were concluding discussions about business models, value-oriented use cases, how to connect with sales teams, and how to bind our work to generally available solutions. Given the pedigree of the people and organizations engaged, it might be a bit surprising that we debated these points. But because we want to make a difference, we quickly pivoted toward these areas!

[Photo: the Big Data Foundry team]

Thriving on Adaptability: Best Practices for Knowledge Workers


Adaptive Case Management is ultimately about allowing knowledge workers to work the way they want to work and providing them with the tools and information they need to do so effectively.

As Surendra Reddy points out in his foreword:

“Imagine a fully integrated ACM system layered into the value stream of an enterprise. The customer support team is able to focus on customer needs, with easy access to the entire company’s repertoire of knowledge, similar cases, information, and expertise, as if it were a service. To truly accommodate customers, companies must vest real power and authority in the people and systems that interact directly with customers, at the edge of the organization and beyond. ACM augments business processes to deliver true data-driven process infrastructure entering enterprises into the age of intelligent machines and intelligent processes. ACM empowers the knowledge worker to collaborate, derive new insights, and fine tune the way of doing business by placing customers right in the center where they belong, to drive innovation and organizational efficiencies across the global enterprise.

“ACM also helps organizations focus on improving or optimizing the line of interaction where our people and systems come into direct contact with customers. It’s a whole different thing; a new way of doing business that enables organizations to literally become one living-breathing entity via collaboration and adaptive data-driven biological-like operating systems. ACM is not just another acronym or business fad. ACM is the process, strategy, framework, and set of tools that enables this evolution and maturity.

“ACM, in my opinion, is the future blueprint for the way of doing business.”

Thriving on Adaptability describes the work of managers, decision makers, executives, doctors, lawyers, campaign managers, emergency responders, strategists, and many others who have to think for a living. These are people who figure out what needs to be done, at the same time that they do it.

In award-winning case studies covering industries as diverse as law enforcement, transportation, insurance, banking, state services, and healthcare, you will find instructive examples of how to transform your own organization.

This important book follows the ground-breaking best-sellers Empowering Knowledge Workers, Taming the Unpredictable, How Knowledge Workers Get Things Done, and Mastering the Unpredictable, and provides important papers by thought-leaders in this field, together with practical examples, detailed ACM case studies, and product reviews.

Decoding the Data Science Pipelines

[Figure: Data science analytics process]

Applications and challenges of OpenStack

The OpenStack and Enterprise Forum, held January 29 at the Computer History Museum in Mountain View, California, was moderated by Lydia Leong, Research Vice President with Gartner.

Panel 2 of “OpenStack: Breaking into the enterprise” brought together Raj Dutt, Senior Vice President of Technology with OpenStack; JC Martin, the Chief Architect of Global Cloud Services at eBay Inc.; Rodney Peck, the Architect for Storage at PayPal; and Surendra Reddy, CTO with PARC, a Xerox Company.

After a chat with Raj Dutt debating the advantages within the OpenStack Enterprise (read more about it here), Leong let Martin and Peck detail their own experience with OpenStack.

“In 2008 we started developing our own private cloud, and the main driver was agility. At the time there was no project like OpenStack so we had to do our own, and when OpenStack started to mature we looked at our options and we started adoption of open source cloud to replace our in-house, custom cloud management system, because it gave us more advantages compared to proprietary in-house solutions.”

Applications

According to Martin, OpenStack is used in three main types of projects:

1)  developer projects
2)  Q.A. / analytics
3)  production

The teams at eBay Inc. and PayPal are now merged together, with eBay doing the OpenStack deployment.

Bonus points: architecture, language and size

 

“What made you decide to use OpenStack?” asked Leong. “What else have you considered using?”

“Two, three years ago we tried to find partners to develop our cloud, we talked to many software vendors and we looked at cloud.com before they were acquired by Citrix,” Martin started. “The main reason we selected OpenStack was the architecture which was very distributed. The language is also a plus, because we can find developers more easily than Java. People in the community are more attracted by languages like Python. Also, the size of the community was an important factor. For us, the main decision point was ‘who was going to win at the end’, and that was indicated by the size of the community and the ecosystem, so we started to build around OpenStack,” explained Martin.

Following Martin’s intro, Peck clarified that he was brought in “specifically as an OpenStack developer, because they had already chosen OpenStack.”

Size & closeness to the community matter

Leong next asked, “what was the initial experience of deploying OpenStack: challenges, what went well, and so on…”

“At first everything is nice and friendly, and then you get to a certain size and you get a lot of issues,” began Peck. “We had to find solutions for these issues and then try to work with the community to find out if our solutions were the best, or if they were going to be supported in the future. In that sense, we tried staying close to the community, leveraging that and not going with the vendors’ solution completely.”

This allowed them to skip vendor distributions altogether.

Challenges

“What were some of the challenges you encountered?” inquired Leong.

“As it was mentioned earlier, a lot of the vendors say they inter-operate, but really they don’t. When things go big, they tend to burst at the seams,” joked Peck.

“You are running a solution at scale, how big is your solution today?” asked Leong.

“Multiple thousand hypervisors, more or less distributed across multiple data centers,” replied Martin.

Confusion

“How much operation does it take to keep this OpenStack cloud running happily?”

“If you are just talking about the operation of the OpenStack components… One of my colleagues did a presentation at the last Summit clarifying that OpenStack is not cloud, OpenStack is just the automation system, the framework that allows you to operate the cloud. There’s many other things that go around OpenStack, like monitoring, all your processes, capacity management, fulfillment, etc. For us this is something that we used to do before, because we were running our cloud. We have a system for on-boarding full racks of compute servers, automatically, and converting that into OpenStack compute.

“For us this was not a gap, it was a quick add-capacity,” clarified Martin. “But for others without those capabilities it would require more effort, so the answer is ‘it depends on your existing capabilities.’ If you’re starting from scratch, it can be a little overwhelming,” warned Martin.

Engaging the community

“What was the most effective way to engage the (OpenStack) community?” asked Leong.

“We attend various summits, we try to foster new projects and to participate in existing projects, following-up and contributing back to the community,” said Martin.

Source: http://siliconangle.com/blog/2014/02/03/180419/

OpenStack: a candid conversation

On Wednesday of this week, the OpenStack foundation invited tech professionals from start-up organizations and legacy Enterprise institutions to share their experiences with the open source cloud platform. The goal of the conference was to impart to those Enterprises yet to adopt a cloud strategy the ease with which they could deploy an OpenStack architecture in their own organization.

In the third keynote panel session, moderated by Lydia Leong of Gartner, Surendra Reddy, CTO of PARC, spoke about the role of OpenStack in his own organization. PARC, which follows an open innovation practice, is, it should be noted, an independent, wholly owned subsidiary of Xerox dating back to 2002. PARC has been on the leading edge of such endeavors as the Ethernet, laser printing, and ubiquitous computing.

Reddy began by discussing his role and the role and responsibility of PARC. Primarily, PARC focuses on Big Data applications, drawing in submitted sensor data and running that data through a series of parallel algorithms. One of the reasons PARC settled on OpenStack over other solutions like AWS, Joyent, and VMware is because, as Reddy points out, “Our researchers are not programmers. They are not DevOps guys. I don’t have any engineers to do this automation layer.” He continued, “We looked for something where, all in, we could start working without any programming effort.”

Watch the #OEForum Opening Keynote Panel Session: Part 3

Another reason Reddy cited for PARC’s deployment of OpenStack was the agility his small support team required to conduct their research. “We need lightweight containers with which we can execute without taking a lot of deployment time. I only have six people in our support team. That’s where OpenStack really helps a lot to give the tools to the researchers to focus on their primary area of research.”

PARC, on the OpenStack architecture, was able to build the entire framework that allows them, with the push of a single button, to deploy what they termed a ‘disposable Hadoop cluster.’ “I can launch a 10 terabyte Hadoop cluster…in less than two to three minutes today and run our experiments and then take it down when we are done. That’s not possible with Amazon or any other cloud service.”

While OpenStack is on track to release its Icehouse iteration in just a few months, Reddy conceded his organization is still two to three releases behind, having used the Folsom release since early last year.

OpenStack ecosystem maturity: updates should be seamless

 

Leong was interested to know if Reddy and PARC were finding the pace of OpenStack releases to be a challenge to their organization. Reddy pulled no punches in explaining why he has decided to hold PARC back to the Folsom release. “It’s a big pain in the neck to upgrade them,” he says. “When we upgraded from Essex to Folsom, we had to re-do everything. Those transitions are really painful.” He claimed his allegiance to Folsom centered around the fact that the release still handles upwards of 90 percent of his workload. “It does my work so why should I go with other versions?”

This, then, led Leong to ask the obvious question, “What would be compelling enough to cause you to want to go through the upgrade cycle pain?” Reddy’s expectations for future releases echoed, I’m sure, a lot of the thoughts and concerns of Enterprise professionals in the audience.

“Zero loss upgrade. Just roll it in the new version,” he stated. “It should work seamlessly without re-doing anything. Our OpenStack base deployment is a very complex one.” Short of offering a zero downtime, zero maintenance and risk-free upgrade, Reddy claims, “I can live with the existing version. It does my job.”

Aside from the headache associated with his last upgrade, Reddy states he has been very happy with OpenStack within his organization. “We are looking at our next big step.” PARC’s goal is to work with hardware vendors in the creation of an open innovation lab aimed at democratizing data centric algorithmic research. “Our next step is to take this and spin it out to the higher level so even our researchers in other labs will have access.” The second tack to this strategy is to bring their customer base into the mix as well. “We want customers to connect their sensors and devices into this network and use PARC by utilizing our researchers to analyze their data for them. That’s our next goal for OpenStack.”

Reddy’s panel appearance was as honest an assessment of the strengths and limitations the OpenStack solution provides as could be expected from a professional in the field. Much of his insight should be considered required viewing for anyone who has yet to pull the trigger on deploying OpenStack within their own organization.

Source: http://siliconangle.com/blog/2014/01/31/openstack-a-candid-conversation-oeforum/
photo credit: http://www.flickr.com/photos/mamchenkov/363542398/

PARC and SAP Co-innovation: Adding Graph Analytics to SAP HANA

Surendra Reddy, PARC; Cirrus Shakeri, SAP; Heinz Ulrich Roggenkemper, SAP; Hartmut Vogler, SAP; Jens Doerpmund, SAP

Graph analytics is a crucial element in extracting insights from Big Data because it helps discover hidden relationships and connect the dots. A graph, meaning the network of nodes and relationships, treats the linkage between objects as equally important as the objects themselves. You can think of social networks or supply chains as obvious examples, but graphs include any network of objects such as customers, products, purchase orders, customer support calls, product inventory, etc.

PARC has invented a set of machine learning and reasoning algorithms for analyzing large graphs in real time. As you can imagine, the high dimensionality and rich tapestry of relationships in these datasets demand highly scalable algorithms. After four months of exploration with Hadoop + Hive, native Map/Reduce, R/MR, and Mahout under different execution environments (multi-core, multi-threaded, and parallel computation), we found the optimal solution by integrating PARC’s reasoning and insight discovery with SAP HANA. Automated reasoning requires multiple iterations of algorithmic runs, which need to go back and forth between graph analytics and HANA’s analytics.

PARC researchers have been exploring graph analytics, egocentric collaborative filtering, automated reasoning, graph-based clustering, Bayesian Networks (BN), Probabilistic Graph Models (PGM), scalable machine learning, and contextual intelligence to stay at the forefront of Big Data research. PARC’s main goal is to reduce or even eliminate the need for complex ETL processes and to invent automated machine learning that enables business people to explore the data directly and discover insights with reduced need for data science expertise.

SAP HANA is a fast, massively parallel, ACID-compliant database platform for both analytical and transactional data processing. Both transactions and analytics are supported within the in-memory columnar engine, and all data processing and calculations take place in memory. HANA provides business and predictive libraries (e.g., for planning, text processing, and spatial analytics), which can be called from within a rich stored-procedure language. What is unique about HANA is that it enables customers to perform complex analytical processing directly on top of the OLTP data structures, thus eliminating redundant data transfer and storage. Via HANA Live, customers have access to a large number of non-materialized and easily consumable business views for real-time reporting and application development.

HANA’s real-time response combined with PARC’s fast graph reasoning algorithms helped us generate qualitatively superior output, including clusters with higher modularity and rapid discovery of hidden patterns and insights. But what is really exciting for us is the qualitatively innovative solutions that we are building on top of this co-innovation. There is a match between PARC’s graph analytics and SAP HANA’s analytics that is unique in terms of turning the speed of computation into new ways of solving problems. For example, we can simulate the spread of diseases, optimize when and where vaccinations should be done, analyze viral marketing, detect next-best actions, optimize supply chains with up-to-the-second transactions, and detect fraud on real-time input data. Without HANA, PARC’s algorithms would require the development of an equally fast lower-level data processing platform.
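PARC’s HiperGraph algorithms are not public, so as a generic stand-in, the sketch below illustrates the modularity-based graph clustering idea mentioned above using NetworkX on a toy graph; it is not the PARC/SAP implementation.

```python
# Minimal sketch of modularity-based community detection on a toy graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()  # toy stand-in for an enterprise entity graph
communities = greedy_modularity_communities(G)

# Higher modularity indicates a stronger cluster structure in the graph.
print(f"found {len(communities)} clusters, "
      f"modularity = {modularity(G, communities):.3f}")
```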

With the combination of HANA and PARC’s graph analytics (HiperGraph), we can finally deliver on the promise of a closed feedback loop in the enterprise, where transactions are analyzed in real time to provide the error signal for real-time decision making and corrective actions. With HANA + HiperGraph graph analytics, the intelligence that is implicit in large volumes of structured and unstructured data from many varieties of sources inside or outside the enterprise can be delivered to users in the form of smart business applications. While HANA provides the unified computing platform for data processing, its combination with graph analytics adds the capability of ‘connecting the dots’ (literally, via nodes and edges) and thus generates intelligence from the data that is greater than the sum of its parts. For example, a business application can be built that acts as an intelligent assistant to enterprise employees by connecting their daily work to similar projects or colleagues that they would otherwise not know about.

Ultimately, combining PARC’s graph analytics with SAP HANA’s analytics results in a superior customer experience via personalization and real-time interaction with information. For example, the combination of HANA and graph analytics can provide real-time, interactive purchase recommendations for retail customers, where their feedback results in recomputing the recommendations on the fly. Today’s Big Data analytics is based on a labor-intensive approach that relies on scarce data scientists to analyze data and extract insights from it. With the combination of HANA and PARC’s graph analytics, Big Data analytics can be put in the hands of every employee in the enterprise, enabling them to make data-driven decisions.
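As a toy illustration of the kind of graph-based, feedback-driven recommendation described above (not the PARC/SAP implementation), the following sketch scores unseen products by co-purchase overlap and can be recomputed on the fly as new purchases arrive; the data and scoring rule are illustrative assumptions.

```python
# Toy co-purchase recommendation over a customer-product graph.
from collections import Counter

purchases = {
    "alice": {"laptop", "mouse"},
    "bob": {"laptop", "keyboard"},
    "carol": {"mouse", "monitor"},
}

def recommend(customer, purchases):
    mine = purchases[customer]
    scores = Counter()
    for other, items in purchases.items():
        if other == customer:
            continue
        overlap = len(mine & items)      # shared purchases act as edge weight
        for item in items - mine:
            scores[item] += overlap      # score products the customer hasn't bought
    return [item for item, _ in scores.most_common()]

print(recommend("alice", purchases))     # e.g. ['keyboard', 'monitor']

# New feedback (a purchase) simply updates the graph; recommendations are
# recomputed on the fly.
purchases["alice"].add("keyboard")
print(recommend("alice", purchases))
```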

For more details about the PARC-SAP co-innovation in the domain of Big Data please see the following white paper at http://www.parc.com/publication/3475/parc-and-sap-co-innovation.html.  Right now SAP and PARC are entering a new phase of our partnership in order to bring this co-innovation to the market.  In the coming months, we will provide more details on the technology and product roadmap.  Stay tuned!  Visit http://www.parc.com/services/focus-area/bigdata/ or follow us at @SAPInMemory and @PARCinc for updates.

ABOUT THE AUTHORS

Surendra Reddy is the Chief Technology Officer (CTO), Cloud and Big Data Futures, leading the High Performance Analytics Research and Innovation at PARC. He provides the leadership for driving the Big Data platform innovations, IP commercialization strategy, and establishing strategic alliances for GTM at PARC. Surendra Reddy is also leading the PARC Graph Analytics research on SAP HANA in collaboration with SAP.

Cirrus Shakeri is a Senior Director of the HANA Platform Strategic Projects at SAP focusing on bringing the value of Big Data to everyone in the enterprise via semantic search, recommendation systems, and intelligent business assistants. Cirrus’ mission at SAP is to help advance HANA into an artificial intelligence platform that turns everyone in the enterprise into a superhero with special powers!

Heinz Ulrich Roggenkemper serves as an Executive Vice President for Development of SAP Labs.

Hartmut Vogler is a Development Architect in the HANA Platform Strategic Projects team at SAP. Having been with SAP since 1999, he has worked in different research and innovation teams on a wide variety of topics and is now focused on turning HANA into the new intelligent application platform for SAP. Hartmut holds more than 20 US and international patents.

Jens Doerpmund is a Senior Director and member of the “Business Suite on HANA” team. After spending more than 15 years on topics related to BI and Data Warehousing, he is currently focusing on graph analytics and machine learning techniques to provide real-time business insights from applications running on HANA.

Harnessing the combined power of SAP HANA and Graph Analytics for real-time insights

Authors:  Surendra Reddy (PARC), Cirrus Shakeri (SAP),  Heinz Ulrich Roggenkemper (SAP), Hartmut Vogler (SAP),  Jens Doerpmund (SAP)

Read the full white paper here

Researchers at PARC give us a glimpse of the future

Apr 29, 2013 (San Jose Mercury News – McClatchy-Tribune Information Services via COMTEX) — It was a high-tech speed-dating session, Silicon Valley-style: I would sit in the storied memorabilia-laden Room 2306 in the bowels of PARC, the former Xerox research and development center in Palo Alto that gave us the “ball” mouse, the Ethernet and the graphical user-interface that inspired the Apple Macintosh. And seven of PARC’s resident geniuses would drop by and in 15-minute bursts blow my mind with the technical wizardry each was working on to someday transform our lives.

First up was Lawrence Lee, senior director of strategy, to give me the view from 30,000 feet. Spun off by Xerox in 2002 as its own profit center, PARC leverages industrial-strength brainpower from universities and research centers around the world, helping client companies and the U.S. government by doing what Lee calls the “early high-risk R&D work” before developing products that can be marketed and put into use.

“We look at the markets and explore how industries like health care or finance can use these new technologies to disrupt the models in place,” says Lee. “Even though a lot of what we do is business-to-business, personal technology is always important in our work, and the end user is always on our minds.”

The Digital Nurse Assistant

One project now in the development stage is the “Digital Nurse Assistant,” a combination of tablet devices, monitors and sensors that Lee says will fundamentally change the nurse-patient relationship. By studying nurses’ workplace patterns and routines, such as ordering tests and locating medicines, researchers say the “assistant” will eliminate wasted time.

“By doing behavioral studies in hospitals, we found that nurses only spend about a third of their time with patients,” says Lee. “There’s a lot of time spent coordinating care and just waiting, so we’re trying to change that with technology.” These tools will reduce redundancies in workflow, help track medications and supplies to cut down on waiting times, and basically put all of a patient’s personal information at the care provider’s fingertips. All coming soon to a hospital near you.

The human-robot interaction

Next came Mike Kuniavsky, a “user experience designer” who helps organizations design new technologies and trains them to be more innovative. One key focus is “ubiquitous computing,” a phenomenon also known as the “Internet of Things” that essentially describes an environment where computers and sensors are seamlessly interwoven.

“The idea,” he said, “is that as computers become cheaper, they’re fragmenting into shards that will embed themselves into every part of our lives, from appliances to furniture to buildings. A car now has something like 30 different computers with multiple sensors. We’re looking at how to take all the information coming in from these sensors and monitors and then connect them all in ways that will improve lives.” An example of ubiquitous computing is wearable technology, such as the still-beta-stage Google Glass. Here the computer is essentially embedded into eyeglasses that a user would control with voice and gestures, such as starting the computer by jerking his head upward a bit and then giving the computer voice commands for tasks like searching the Internet without having to grab a smartphone.

One project Kuniavsky’s team is looking at is how humans and robots interact. “As we have more and more robots in our homes, we’ll have to work closely with them and understand our relationship to them. If you tell it to do something, but it doesn’t do it, how does the robot tell you why it didn’t do it?

“We’re looking at how to make humans and robots part of a team, as opposed to thinking of them as personal servants. And they won’t be like the robots you saw as a kid on TV, because trying to make a robot replicate a human being is impossible,” Kuniavsky said.

Batteries of the future

The ideas were coming fast and furious. Rob McHenry, an energy technology program manager, gave me a peek into the future of energy storage, explaining how PARC scientists are changing the way batteries are manufactured and monitored. He said that by using a technique known as “co-extrusion printing,” researchers have figured out ways to use metallic paste to improve battery life by as much as 30 percent. And that, of course, could soon have a tangible impact on all of us, improving the lives of our smartphones and electric vehicles. He says that two years from now this new technology will be used in factories to crank out more powerful, efficient and longer-lasting batteries.

PARC scientists also are developing a low-cost fiber-optic sensing system that for the first time will be able to monitor what’s actually happening inside of a battery in real-time. They say the technology will help detect internal faults and improve safety, all while ensuring that batteries operate as efficiently as possible.

Virtual product development

Tolga Kurtoglu, program director for digital design and manufacturing, told me how PARC software is being used for “virtual product development,” enabling consumers to essentially design and build furniture and electronic devices themselves, collaborating online with others and turning the traditional manufacturing model on its end.

PARC envisions an entirely new ecosystem in manufacturing, including crowdsourced design, social network funding, and more. They expect these trends will lead to a new paradigm in manufacturing that could threaten today’s vertically integrated, large-scale manufacturing industry, much like the PC revolution threatened the mainframe computer industry.

By the time I was done with my speed-dating sessions with Janos Veres, program manager for printed electronics, and Surendra Reddy, PARC’s go-to guy for cloud futures and big-data analysis, my head was ready to explode. I could still see a world, a bit hazy but not too far off in the future, where my washing machine would know my routine, objects across large swaths of my life would be interconnected, my car would know where I was going before I got behind the wheel, and the line between my smartphone and my watch and my clothes and even my body would continue to blur.

Contact Patrick May at 408-920-5689 or follow him at Twitter.com/patmaymerc

Projects underway at PARC

The Digital Nurse Assistant: PARC ethnographers (observers of human society and culture) studied exactly what nurses do each day to better understand how to help them, and ultimately provide their patients a better hospital experience.

Human-robot interaction: While the center’s work in “ubiquitous computing” and “context-aware services” may seem a bit baffling, the focus is on using “robots” (think high-tech appliances more than the Jetsons’ maid Rosie, aka Rosey) that can anticipate a user’s situation, say, inside the home, proactively serve their needs, and personalize recommendations while learning more about the user over time.

Batteries of the future: Scientists at PARC are working on technology that will allow users to get much more life and power out of batteries than they can today, using a low-cost fiber-optic sensing system that monitors what’s actually happening inside the battery in real time to detect faults and improve safety.

Virtual product development: Researchers are working on so-called “design automation,” which strives to enable individual designers to crowdsource their projects (think new toasters or cool furniture) in real-time online and work directly with manufacturers, large and small, all over the world.

Source: PARC. (c) 2013 the San Jose Mercury News (San Jose, Calif.). Visit the San Jose Mercury News at www.mercurynews.com. Distributed by MCT Information Services.